\section{Introduction}
The ultrapower construction for II$_1$ factors,
originally introduced in \cite{Wr54, Sa62}, first came to prominence following McDuff's work \cite{MD69c}. It has since played a fundamental role in the study of von Neumann algebras. In particular, the analysis of ultrapowers of II$_1$ factors was a crucial ingredient in Connes' celebrated classification of injective factors \cite{Co75}. In the same paper, ultrapowers were used by Connes to formulate his famous (still unsolved) {\it embedding problem}. More recently, ultrapower techniques have been instrumental in the advances made in the classification of II$_1$ factors by Popa's deformation/rigidity theory (see e.g. \cite{Po04}). For more history on ultrapowers and ultraproducts of von Neumann algebras, see \cite{AH13}.
While ultrapowers of II$_1$ factors have been extremely useful in various applications, the following intrinsic problem remained open: how many ultrapowers (with respect to a fixed ultrafilter) of separable II$_1$ factors exist, up to isomorphism?
Recently, a closely related problem has been considered in the emerging field of continuous model theory of operator algebras \cite{FHS09,FHS10,FHS11}: how many elementary equivalence classes of II$_1$ factors exist?
This problem has received a lot of attention (see e.g.\ \cite{FHS11, GS14} and the survey \cite{Fa14}). The connection between the two problems stems from the continuous version of the Keisler-Shelah theorem \cite{Ke61, Sh71}. This asserts that two II$_1$ factors $M$ and $N$ are elementarily equivalent if and only if they have isomorphic ultrapowers, $M^{\mathcal{U}}\cong N^{\mathcal{V}}$, with respect to ultrafilters $\mathcal{U}$ and $\mathcal{V}$ on arbitrarily large sets.
At present, only three different elementary equivalence classes of II$_1$ factors appear in the literature. More precisely, it was noticed in \cite{FGL06, FHS11} that, for separable II$_1$ factors, property Gamma and the property of being McDuff are elementary properties (i.e. they are remembered by ultrapowers).
Thus, the hyperfinite II$_1$ factor $R$, the free group factor $L(\mathbb F_2)$, and any non-McDuff separable II$_1$ factor that has property Gamma (see \cite{DL69}), are not elementarily equivalent.
By contrast, the existence of uncountably many non-isomorphic separable II$_1$ factors has been known for a long time \cite{MD69b, Sa69}.
This situation is partially explained by the fact that elementary equivalence is a much coarser equivalence relation on II$_1$ factors than isomorphism. This is well illustrated by a result in \cite{FHS11} which states that any II$_1$ factor is elementarily equivalent to uncountably many pairwise non-isomorphic II$_1$ factors.
In this paper we solve the above problems, by proving the existence of a continuum of separable II$_1$ factors whose ultrapowers, with respect to any ultrafilters, are non-isomorphic.
\subsection{Construction and statement of the result}
\label{sectionconstruction}
Our examples of II$_1$ factors with non-isomorphic ultrapowers come from McDuff's work \cite{MD69a,MD69b}. The construction relies on two functors $T_0$ and $T_1$, from the category of countable groups to itself, defined as follows.
Consider a countable group $\Gamma$. Let $\Gamma_i$, $i\geq 1,$ be isomorphic copies of $\Gamma$, and $\Lambda_i$, $i\geq 1,$ be isomorphic copies of $\mathbb Z$. We define $\widetilde\Gamma = \bigoplus_{i\geq 1}\Gamma_i$ and denote by $S_{\infty}$ the group of finite permutations of $\{1,2,...\}$. We consider the semidirect product $\widetilde\Gamma\rtimes S_{\infty}$ associated to the action of $S_{\infty}$ on $\widetilde\Gamma$ which permutes the copies of $\Gamma$. Following \cite{MD69b},
\begin{itemize}
\item we define $T_0(\Gamma)$ as the group generated by $\widetilde\Gamma$ and $\Lambda_i, i\geq 1,$ with the only relations that $\Gamma_i$ and $\Lambda_j$ commute for every $i\geq j\geq 1$;
\item we define $T_1(\Gamma)$ as the group generated by $\widetilde\Gamma\rtimes S_{\infty}$ and $\Lambda_i, i\geq 1,$ with the only relations that $\Gamma_i$ and $\Lambda_j$ commute for every $i\geq j\geq 1$.
\end{itemize}
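In other words (we record this only as an informal reformulation of the definition above), $T_0(\Gamma)$ is the quotient of the free product of $\widetilde\Gamma$ with the groups $\Lambda_j$ by the normal subgroup generated by the commutation relations:
\[T_0(\Gamma)\,=\,\big(\widetilde\Gamma\ast\Lambda_1\ast\Lambda_2\ast\cdots\big)\,\big/\,\big\langle\!\big\langle\,[g,\lambda]\;:\;g\in\Gamma_i,\ \lambda\in\Lambda_j,\ i\geq j\geq 1\,\big\rangle\!\big\rangle,\]
and $T_1(\Gamma)$ is obtained in the same way, with $\widetilde\Gamma\rtimes S_{\infty}$ in place of $\widetilde\Gamma$.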
The definition of $T_0$ is due to Dixmier and Lance in \cite[\S 21]{DL69}, who were inspired by a construction in \cite{MvN43}.
The identification $\Gamma = \Gamma_1$ gives an embedding of $\Gamma$ inside $T_\alpha(\Gamma)$, for $\alpha \in\{ 0,1\}$. Moreover, every inclusion $\Sigma \subset\Sigma'$ of countable groups canonically induces an inclusion $T_{\alpha}(\Sigma)\subset T_{\alpha}(\Sigma')$. Hence, any sequence $\pmb\alpha = (\alpha_n)_{n \geq 1}$ of $0$'s and $1$'s, gives rise to a sequence of inclusions
\[\Gamma \subset T_{\alpha_1}(\Gamma) \subset T_{\alpha_1} \circ T_{\alpha_2}(\Gamma) \subset T_{\alpha_1} \circ T_{\alpha_2} \circ T_{\alpha_3}(\Gamma) \subset \cdots\]
\begin{definition}
\label{defintro}
Given a sequence $\pmb\alpha$ of $0$'s and $1$'s we define
\begin{itemize}
\item $T_{\pmb\alpha}(\Gamma) := \Gamma$, if $\pmb\alpha$ is the empty sequence;
\item $T_{\pmb\alpha}(\Gamma) := T_{\alpha_1} \circ T_{\alpha_2} \circ \cdots \circ T_{\alpha_n}(\Gamma)$, if $\pmb\alpha = (\alpha_1,\alpha_2,\dots,\alpha_n)$ is a finite sequence;
\item $T_{\pmb\alpha}(\Gamma)$ is the inductive limit of the increasing sequence of groups $(T_{(\alpha_1,\dots,\alpha_n)}(\Gamma))_{n \geq 1}$, if $\pmb\alpha = (\alpha_n)_{n \geq 1}$ is an infinite sequence.
\end{itemize}
We denote by $M_{\pmb\alpha}(\Gamma) := L(T_{\pmb\alpha}(\Gamma))$ the associated von Neumann algebra.
\end{definition}
The countable family of non-isomorphic II$_1$ factors constructed in \cite{MD69a} is just $M_{\pmb\alpha_n}(\mathbb F_2)$, $n \geq 1$, where $\pmb\alpha_n$ denotes the finite $0$-valued sequence of length $n$ and $\mathbb F_2$ is the free group on two generators.
The uncountable family $M_{\pmb\alpha}(\Gamma)$, indexed over infinite sequences $\pmb\alpha$, is precisely the family of non-isomorphic II$_1$ factors constructed in \cite{MD69b}.
\begin{theorem}\label{main}
Consider a countable group $\Gamma$ and two different sequences $\pmb\alpha \neq \pmb\beta$ of $0$'s and $1$'s.
Assume that $\Gamma=\mathbb F_2$, or that the sequences $\pmb\alpha$ and $\pmb\beta$ are infinite.
Then $M_{\pmb\alpha}(\Gamma)^\mathcal{U}$ is not isomorphic to $M_{\pmb\beta}(\Gamma)^\mathcal{V}$, for any ultrafilters $\mathcal{U}$ and $\mathcal{V}$.
\end{theorem}
We will actually deduce Theorem \ref{main} from two results.
\begin{itemize}
\item Firstly, we prove that any ultrapower of $M_{\pmb\alpha}(\mathbb{F}_2)$ remembers the length of $\pmb\alpha$. This is achieved by introducing a new invariant for von Neumann algebras, called {\it McDuff depth}, or {\it depth} for short, which quantifies property Gamma. We will show that if $\pmb\alpha$ is a sequence of $0$'s and $1$'s, then the depth of any ultrapower of $M_{\pmb\alpha}(\Gamma)$ is at least the length of $\pmb\alpha$, with equality for $\Gamma = \mathbb{F}_2$. This first half can be found in Section \ref{sectiondepth}; see Theorem \ref{countable}.
\item Secondly, we show that two different infinite sequences give rise to non-isomorphic ultrapowers. For this we generalize McDuff's {\it property V} \cite{MD69b}. Using our notion of depth, we show that any ultrapower of $M_{\pmb\alpha}(\Gamma)$ has property $V$ at depth $k$ if and only if $\alpha_k = 1$. Hence the sequence $\pmb\alpha$ is an invariant of $M_{\pmb\alpha}(\Gamma)$ and of its ultrapowers. This part is done in Section \ref{sectionpropV}; see Theorem \ref{uncountable}.
\end{itemize}
As a consequence of Theorem \ref{main} we deduce the existence of a continuum of separable non-nuclear $\mathcal{Z}$-stable C$^*$-algebras with non-isomorphic ultrapowers. We thank Ilijas Farah for pointing this out to us. For any sequence $\pmb\alpha$ of $0$'s and $1$'s and any group $\Gamma$, define $A_{\pmb\alpha}(\Gamma) = C^*_r(T_{\pmb\alpha}(\Gamma)) \otimes \mathcal{Z}$, where $\mathcal{Z}$ is the Jiang-Su algebra.
\begin{corollary}\label{maincor}
In the setting of Theorem \ref{main}, assume moreover that $\pmb\alpha$ and $\pmb\beta$ are non-empty.
Then the C$^*$-algebraic ultrapowers $A_{\pmb\alpha}(\Gamma)^\mathcal{U}$ and $A_{\pmb\beta}(\Gamma)^\mathcal{V}$ are not isomorphic, for any ultrafilters $\mathcal{U}$ and $\mathcal{V}$.
\end{corollary}
Note that the proof of Corollary \ref{maincor} also implies that the reduced group C$^*$-algebras $C^*_r(T_{\pmb\alpha}(\Gamma))$ and $C^*_r(T_{\pmb\beta}(\Gamma))$ do not have isomorphic ultrapowers.
Throughout the article, we will use the above notations. In addition, we will often consider the following subgroups of $T_0(\Gamma)$ or $T_1(\Gamma)$:
\[\widetilde\Gamma_n=\bigoplus_{i\geq n}\Gamma_i,\;\; \text{ and } \;\;\widetilde\Gamma_{n,n'}=\bigoplus_{n'>i\geq n}\Gamma_i,\;\;\text{ for every } n'>n\geq 1.\]
\subsection*{Acknowledgements} I.C. is very grateful to Isaac Goldbring for kindly showing him that the results in \cite{ZM69} can be used to produce a fourth elementary equivalence class of II$_1$ factors. A.I. would like to thank Sorin Popa and Thomas Sinclair for many stimulating discussions. We are also grateful to Ilijas Farah for various comments, and especially for pointing out Corollary \ref{maincor} to us.
\section{Preliminaries}
\subsection{Terminology}
Throughout this article we work with {\it tracial von Neumann algebras} $(M,\tau)$, i.e. von Neumann algebras $M$ endowed with a faithful normal trace $\tau:M\rightarrow\mathbb C$. We say that $M$ is {\it separable} if it is separable with respect to the norm $\|x\|_2=\tau(x^*x)^{1/2}$.
We denote by $\mathscr U(M)$ the group of {\it unitaries} of $M$.
If $n\geq 1$, then we denote by $M^{{\bar\otimes} n}$ the tensor product von Neumann algebra ${\bar\otimes}_{i=1}^nM$.
If $A,B\subset M$, then we denote $A'\cap B=\{x\in B\,|\,xy=yx,\;\text{for all}\;y\in A\}$. A tracial von Neumann algebra $M$ is a {\it II$_1$ factor} if it is infinite dimensional and has trivial center.
If $\Gamma$ is a countable group, then we denote by $(u_g)_{g\in\Gamma}\subset\mathscr U(\ell^2(\Gamma))$ its left regular representation given by $u_g(\delta_h)=\delta_{gh}$, where $(\delta_h)_{h\in\Gamma}$ is the usual orthonormal basis of $\ell^2(\Gamma)$.
The weak (operator) closure of the linear span of $(u_g)_{g\in\Gamma}$ is a tracial von Neumann algebra, which we denote by $L(\Gamma)$. The so-called {\it group von Neumann algebra} $L(\Gamma)$ is a II$_1$ factor precisely when every non-trivial conjugacy class of $\Gamma$ is infinite, i.e.\ when $\Gamma$ is {\it icc}.
\subsection{Ultrafilters and ultraproducts}
In this subsection we collect together several elementary facts regarding ultraproducts of von Neumann algebras.
An {\it ultrafilter} $\mathcal U$ on a set $S$ is a collection of subsets of $S$ which is closed under finite intersections, does not contain the empty set, and contains either $S'$ or $S\setminus S'$, for every subset $S'\subset S$. An ultrafilter $\mathcal U$ is called {\it free} if it contains the complements of all finite subsets of $S$.
If $f\in\ell^{\infty}(S)$, then its limit along $\mathcal U$, denoted by $\lim_{\mathcal U}f(s)$, is the unique $\ell\in\mathbb C$ such that $\{s\in S\,|\, |f(s)-\ell|<\varepsilon\}\in\mathcal U$, for every $\varepsilon>0$. The map $\ell^{\infty}(S)\ni f\mapsto\lim_{\mathcal U}f(s)\in\mathbb C$ is a $*$-homomorphism, which allows one to identify $\mathcal U$ with a point in the Stone-\v{C}ech compactification $\beta S$ of $S$. Via this identification, an ultrafilter $\mathcal U$ is free if and only if it belongs to $\beta S\setminus S$.
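For instance (a standard illustration, not needed in the sequel), take $S=\mathbb N$ and a free ultrafilter $\mathcal U$ on $\mathbb N$. If $f(n)\rightarrow\ell$ in the usual sense, then $\lim_{\mathcal U}f(n)=\ell$, since $\{n\,|\,|f(n)-\ell|<\varepsilon\}$ is cofinite and hence belongs to $\mathcal U$. On the other hand, for $f(n)=(-1)^n$ we have $\lim_{\mathcal U}f(n)=1$ if the set of even integers belongs to $\mathcal U$, and $\lim_{\mathcal U}f(n)=-1$ otherwise. Thus the limit along an ultrafilter always exists, but in general depends on $\mathcal U$.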
Given an ultrafilter $\mathcal{U}$ on a set $S$ and a family of tracial von Neumann algebras $(M_s,\tau_s), s\in S$, we define the {\it ultraproduct} algebra $\prod_{\mathcal U}M_s$ as the quotient $\mathcal A/\mathcal I$, where $\mathcal A$ is the C$^*$-algebra $\mathcal A=\{(x_s)_s\in\prod {M_s}\,|\,\sup_s\|x_s\|<\infty\}$ and $\mathcal I$ is the closed ideal of $(x_s)_s\in\mathcal A$ such that $\lim_{\mathcal U}\|x_s\|_2=0$. It turns out that $\prod_{\mathcal U}M_s$ is a tracial von Neumann algebra, with the canonical trace given by $\tau((x_s)_s)=\lim_{\mathcal U}\tau_s(x_s)$. When $(M_s)_s$ is the constant family $(M)_s$, we write $M^\mathcal{U}$ and call it the {\it ultrapower} von Neumann algebra. In this case, the map $\pi:M\rightarrow M^{\mathcal U}$ given by $\pi(x)=(x_s)_s$, where $x_s=x$, for all $s\in S$, is an injective $*$-homomorphism.
Next, we recall the known fact that depending on the ultrafilter $\mathcal{U}$, $M^{\mathcal{U}}$ is either non-separable or isomorphic to $M$ (see Lemma \ref{complete}).
\begin{definition}
An ultrafilter $\mathcal U$ on a set $S$ is called {\it countably cofinal} (or {\it countably incomplete}) if there is a sequence $\{A_n\}$ in $\mathcal U$ such that $\cap_{n}A_n=\emptyset$. Otherwise, $\mathcal U$ is called {\it countably complete}.
\end{definition}
Any free ultrafilter on a countable set is countably cofinal, while any principal (i.e.\ non-free) ultrafilter is countably complete.
The hypothesis that there exists a countably complete free ultrafilter is a very strong axiom which is not provable from ZFC (e.g.\ \cite[Section 5]{Ke10}).
However, such set-theoretic issues will not be important here.
\begin{lemma}\label{ccofinal} Let $\mathcal U$ be a countably cofinal ultrafilter on a set $S$.
For every $s\in S$, let $M_s$ be a tracial von Neumann algebra and $\{M_{s,n}\}_{n\geq 1}$ be an increasing sequence of von Neumann subalgebras whose union is weakly dense in $M_s$. Let $Q\subset \prod_{\mathcal U}M_s$ be a separable subalgebra.
Then for every $s\in S$ we can find an integer $n_s\geq 1$ such that $Q\subset\prod_{\mathcal U}M_{s,n_s}$.
\end{lemma}
{\it Proof.} Since $\mathcal U$ is countably cofinal, we can find a sequence $\{A_n\}_{n\geq 2}$ in $\mathcal U$ such that $\cap_{n}A_n=\emptyset$. After replacing each $A_n$ by $A_2\cap A_3\cap\cdots\cap A_n$, which still belongs to $\mathcal U$, we may assume that $A_2\supset A_3\supset\cdots$. Let $A_1=S\setminus A_2$.
For $s\in S$, let $f(s)$ be the largest integer $n\geq 1$ such that $s\in A_n$. Since the sets $A_n$, $n\geq 2$, decrease and have empty intersection, while $A_1\cup A_2=S$, the function $f:S\rightarrow\mathbb N$ is well-defined. Moreover, $\{s\in S\,|\,f(s)\geq n\}\supset A_n\in\mathcal U$, for every $n\geq 2$, hence $\lim_{\mathcal U}f(s)=+\infty$.
Let $Q\subset\prod_{\mathcal U}M_s$ be a separable subalgebra. Let $\{x_k\}_{k\geq 1}$ be a $\|\cdot\|_2$-dense sequence in $Q$.
For $k\geq 1$, represent $x_k=(x_{k,s})_s$, where $x_{k,s}\in M_s$, for all $s\in S$. Let $s\in S$. Since $\cup_{n\geq 1}M_{s,n}$ is weakly dense in $M_s$ and $M_{s,n}\subset M_{s,n+1}$, for all $n\geq 1$, we can find $n_s\geq 1$ such that $$\|x_{k,s}-E_{M_{s,n_s}}(x_{k,s})\|_2\leq\frac{1}{f(s)},\;\;\text{for all $1\leq k\leq f(s)$}.$$
Since $\lim_{\mathcal U}f(s)=+\infty$, it follows that for every $k\geq 1$ we have $\lim_{\mathcal U}\|x_{k,s}-E_{M_{s,n_s}}(x_{k,s})\|_2=0$. This implies that $x_k\in\prod_{\mathcal U}M_{s,n_s}$, for every $k\geq 1$, and hence $Q\subset\prod_{\mathcal U}M_{s,n_s}$. \hfill$\blacksquare$
The first assertion of the next lemma is well-known \cite{Fe56}, while the second assertion follows from the proof of \cite[Proposition 6.1(2)]{GH01}. Nevertheless, we include a proof for completeness.
\begin{lemma}\label{complete}
Let $\mathcal U$ be an ultrafilter on a set $S$. Let $(M,\tau)$ be a tracial von Neumann algebra.
\begin{enumerate}
\item If $\mathcal U$ is countably cofinal and $M$ has a diffuse direct summand, then $M^{\mathcal U}$ is non-separable.
\item If $\mathcal U$ is countably complete and $M$ is separable, then $\pi:M\rightarrow M^{\mathcal U}$ given by $\pi(x)=(x_s)_s$, where $x_s=x$, for all $s\in S$, is a $*$-isomorphism.
\end{enumerate}
\end{lemma}
{\it Proof.} (1) Let $p\in M$ be a central projection such that $pM$ is diffuse. Since $(pMp)^{\mathcal U}$ is a subalgebra of $M^{\mathcal U}$, and a von Neumann subalgebra of a separable tracial von Neumann algebra is separable, we may assume that $M$ is diffuse. Then $M$ contains a copy of $A:=L^{\infty}([0,1])$, hence $M^{\mathcal U}$ contains a copy of $A^{\mathcal U}$.
We may therefore reduce to the case when $M=A$.
Let $\{A_n\}_{n\geq 1}$ be an increasing sequence of finite dimensional subalgebras of $A$ such that $\cup_n A_n$ is weakly dense in $A$. Assuming that $A^{\mathcal U}$ is separable, Lemma \ref{ccofinal} implies that $A^{\mathcal U}\subset\prod_{\mathcal U}A_{n_s}$, for some integers $n_s\geq 1$, $s\in S$.
But since $A$ is diffuse and $A_{n_s}$ is finite dimensional, we can find $u_s\in\mathscr U(A)$ such that $E_{A_{n_s}}(u_s)=0$, for every $s\in S$. Then $u=(u_s)_s\in\mathscr U(A^{\mathcal U})$ would be orthogonal to $\prod_{\mathcal U}A_{n_s}$, which is a contradiction.
\noindent (2) Since $\mathcal U$ is countably complete, $\cap_n A_n\not=\emptyset$, for any sequence $\{A_n\}$ in $\mathcal U$. Then the collection $\mathcal U'$ of all sets of the form $\cap_n A_n$, where $\{A_n\}$ is a sequence in $\mathcal U$, is a filter on $S$. Since $\mathcal U\subset\mathcal U'$ and $\mathcal U$ is an ultrafilter, we get that $\mathcal U'=\mathcal U$ and hence $\cap_n A_n\in\mathcal U$, for any sequence $\{A_n\}$ in $\mathcal U$.
Let $f_m\in\ell^{\infty}(S)$, for $m\geq 1$, and put $\ell_m=\lim_{\mathcal U}f_m(s)$. Then the previous paragraph implies that \begin{equation}\label{inter}\{s\in S \,|\, f_m(s)=\ell_m,\;\text{ for all }m\geq 1\}=\bigcap_{m,n\geq 1}\Big\{s\in S\,\Big|\, |f_m(s)-\ell_m|<\frac{1}{n}\Big\}\in\mathcal U.\end{equation}
Assuming that $M$ is separable, let us show that $\pi:M\rightarrow M^{\mathcal U}$ is onto. Indeed, let $x=(x_s)_s\in M^{\mathcal U}$. Let $\{z_m\}$ be a $\|\cdot\|_2$-dense sequence in $M$. For every $m$, define $f_m\in\ell^{\infty}(S)$ by $f_m(s)=\tau(x_sz_m)$. By \eqref{inter}, there exists a set $A\in\mathcal U$ such that $f_m(s)=f_m(s')$, for all $s,s'\in A$ and every $m\geq 1$. Since $\{z_m\}$ is $\|\cdot\|_2$-dense in $M$, this forces $x_s=x_{s'}$, for every $s,s'\in A$. Choosing $s_0\in A$, it clearly follows that $\pi(x_{s_0})=x$. This shows that $\pi$ is onto. Since $\pi$ is also injective, we conclude that $\pi$ is a $*$-isomorphism.
\hfill$\blacksquare$
Let us also record a simple consequence of Lemma \ref{ccofinal}, more specific to our problem.
\begin{corollary}\label{combo2}
Let $\mathcal U$ be a countably cofinal ultrafilter on a set $S$. Let $\Gamma$ be a countable group, $\alpha\in\{0,1\}$, and consider the notation from Section \ref{sectionconstruction}. Denote by $M=L(T_{\alpha}(\Gamma))$ and $P_n=L(\widetilde\Gamma_n)$, for $n\geq 1$. For every $s\in S$, let $t_s\geq 1$ be an integer and let $Q_s$ be a tracial von Neumann algebra.
If $A\subset\prod_{\mathcal{U}}(M^{{\bar\otimes} t_s}{\bar\otimes} Q_s)$ is a separable subalgebra, then there are integers $n_s\geq 1$, $s\in S$, satisfying \[\prod_{\mathcal{U}}P_{n_s}^{{\bar\otimes} t_s}\subset A'\cap\prod_{\mathcal{U}} M^{{\bar\otimes} t_s}.\]
\end{corollary}
{\it Proof.} From the definition of $T_0$ and $T_1$, we see that the increasing union $\cup_{n\geq 1}(P_n'\cap M)$ is weakly dense in $M$. Thus, the increasing union $\cup_{n\geq 1}\big[\big((P_n^{{\bar\otimes} t_s})'\cap M^{{\bar\otimes} t_s}\big){\bar\otimes} Q_s\big]$ is weakly dense in $M^{{\bar\otimes} t_s}{\bar\otimes} Q_s$, for all $s\in S$.
Using Lemma \ref{ccofinal}, for each $s\in S$, there exists an integer $n_s\geq 1$ such that $A\subset\prod_{\mathcal U}\big[\big((P^{{\bar\otimes} t_s}_{n_s})'\cap M^{{\bar\otimes} t_s}\big){\bar\otimes} Q_s\big].$
This clearly implies the conclusion.
\hfill$\blacksquare$
\subsection{Residual inclusions} A subalgebra $P$ of a tracial von Neumann algebra $M$ is called residual if it ``absorbs'' central sequences: any central sequence of $M$ asymptotically lies in $P$. In this subsection, we define and use a quantitative notion of residual subalgebras.
\begin{definition} Let $(M,\tau)$ be a tracial von Neumann algebra, $k\geq 1$ an integer, and $C>0$.
A von Neumann subalgebra $P\subset M$ is said to be $(k,C)$-{\it residual} if there exist unitary elements $u_1, u_2, ..., u_k\in M$ such that for all $\xi\in M$ we have $$
\|\xi-E_{P}(\xi)\|_2^2\leq C \sum^k_{i=1}\|[\xi, u_i]\|_2^2.$$
\end{definition}
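Let us note an immediate consequence of the definition (compare Lemma \ref{bigger} below): if $P\subset M$ is $(k,C)$-residual, witnessed by unitaries $u_1,\dots,u_k\in M$, then every element of $M$ commuting with $u_1,\dots,u_k$ lies in $P$. Indeed, if $[\xi,u_i]=0$ for all $1\leq i\leq k$, then $\|\xi-E_P(\xi)\|_2^2\leq C\sum_{i=1}^k\|[\xi,u_i]\|_2^2=0$, hence $\xi=E_P(\xi)\in P$. In particular, $\{u_1,\dots,u_k\}'\cap M\subset P$, and the center of $M$ is contained in $P$.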
\begin{lemma} \label{bigger}
Let $k\geq 1$ and $C>0$. Let $\mathcal U$ be an ultrafilter on a set $S$. For any $s\in S$, let $M_s$ and $Q_s$ be tracial von Neumann algebras, and $P_s\subset M_s$ be a $(k,C)$-residual von Neumann subalgebra.
Then there exists a separable von Neumann subalgebra $A\subset \prod_{\mathcal U} M_s$ such that $$A'\cap\prod_{\mathcal U} (M_s{\bar\otimes} Q_s)\subset \prod_{\mathcal U}(P_s{\bar\otimes} Q_s).$$
\end{lemma}
{\it Proof.} Let $s\in S$. Let $u^{s}_1,... ,u^{s}_k\in\mathscr U(M_s)$ be such that
$\|\xi-E_{P_{s}}(\xi)\|_2^2\leq C \sum^k_{i=1}\|[\xi,u_i^s]\|_2^2$, for all $\xi\in M_s$. From this it follows that $\|\xi-E_{P_s{\bar\otimes} Q_s}(\xi)\|_2^2\leq C\sum_{i=1}^k\|[\xi,u^s_i\otimes 1]\|_2^2$, for all $\xi\in M_s{\bar\otimes} Q_s$.
Denote by $u_i=(u^{s}_i)_s\in \mathscr U(\prod_{\mathcal U} M_s)$, for $1\leq i\leq k$. Then the last inequality implies that $$\|\xi-E_{\prod_{\mathcal U}(P_{s}{\bar\otimes} Q_s)}(\xi)\|_2^2\leq C \sum^k_{i=1}\| [\xi,u_i]\|_2^2,\;\;\text{for all $\xi\in\displaystyle{\prod_{\mathcal U}}(M_s{\bar\otimes} Q_s)$}.$$
Finally, we notice that the von Neumann algebra $A$ generated by $u_1,...,u_k$ satisfies the conclusion. \hfill$\blacksquare$
\begin{definition}\cite{MD69a}\label{sres} Let $\Gamma$ be a countable group. A subgroup $\Lambda<\Gamma$ is called \emph{strongly residual} if there exist elements $a,b\in \Gamma$ and a subset $F\subset\Gamma\setminus\Lambda$ satisfying the following properties:
\begin{enumerate}
\item [(i)] $a\Lambda a^{-1}=\Lambda$,
\item [(ii)] $aFa^{-1}\cup F=\Gamma\setminus\Lambda$, and
\item [(iii)] $\{ b^{k}F b^{-k}\}_{k\in \mathbb Z}$ is a family of disjoint subsets of $\Gamma\setminus \Lambda$.
\end{enumerate}
\end{definition}
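As a concrete illustration (this example is not used later, and it also follows from the amalgamated free product criterion appearing in the proof of Lemma \ref{SRSex}), the trivial subgroup is strongly residual in $\mathbb F_2=\langle a\rangle\ast\langle b\rangle$. Take $F$ to be the set of reduced words whose first letter is a non-zero power of $a$. Condition (i) is trivially satisfied. Condition (ii) holds since any reduced word starting with a non-zero power of $b$ is of the form $afa^{-1}$ with $f\in F$. Condition (iii) holds since the reduced form of any element of $b^kFb^{-k}$ begins with exactly $b^k$ followed by a non-zero power of $a$, so these sets are disjoint for distinct $k\in\mathbb Z$.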
\begin{lemma}\label{resalg}
Let $\Lambda<\Gamma$ be a strongly residual subgroup.
Then $L(\Lambda)\subset L(\Gamma)$ is a $(2,100)$-residual subalgebra.
\end{lemma}
{\it Proof.}
Let $a,b\in\Gamma$ and $F\subset\Gamma\setminus\Lambda$ be as in Definition \ref{sres}.
Let $\xi\in L(\Gamma)$. View $\xi\in\ell^2(\Gamma)$ and for any subset $A\subset\Gamma$, define $\nu(A)=\sum_{h\in A}|\xi(h)|^2$. Then $\nu$ is a finite measure on $\Gamma$ and the Cauchy-Schwarz inequality implies that for every $g\in\Gamma$ and $A\subset\Gamma$ we have
\begin{align*}
|\nu(gAg^{-1})-\nu(A)| & = \Big|\sum_{h\in A}|\xi(ghg^{-1})|^2-|\xi(h)|^2\Big| \\
& \leq \sum_{h\in A} \big(|\xi(ghg^{-1})| + |\xi(h)|\big)\,\big||\xi(ghg^{-1})| - |\xi(h)|\big|\\
& \leq \big(\nu(gAg^{-1})^{1/2}+\nu(A)^{1/2}\big)\Big(\sum_{h \in A} |\xi(ghg^{-1}) - \xi(h)|^2\Big)^{1/2}.
\end{align*}
Hence
\begin{equation}\label{ecu1}|\nu(gAg^{-1})-\nu(A)| \leq (\nu(gAg^{-1})^{1/2}+\nu(A)^{1/2})\|[u_g,\xi]\|_2.\end{equation}
On the other hand, by using conditions (ii) and (iii) from Definition \ref{sres} we get that
\begin{equation}\label{ecu2}\nu(F)+\nu(aFa^{-1})\geq\nu(\Gamma\setminus\Lambda)\geq\nu(F)+\nu(bFb^{-1})+\nu(b^{-1}Fb).\end{equation}
Combining \eqref{ecu1} with \eqref{ecu2}, and using $F,aFa^{-1},bFb^{-1},b^{-1}Fb\subset\Gamma\setminus\Lambda$ we deduce that \begin{align*}\nu(\Gamma\setminus\Lambda)&\leq 3(\nu(F)+\nu(aFa^{-1}))-2(\nu(F)+\nu(bFb^{-1})+\nu(b^{-1}Fb))
\\ & \leq 3|\nu(aFa^{-1})-\nu(F)|+2|\nu(bFb^{-1})-\nu(F)|+2|\nu(b^{-1}Fb)-\nu(F)|\\&\leq 6\;\nu(\Gamma\setminus\Lambda)^{1/2}\|[u_a,\xi]\|_2+8\;\nu(\Gamma\setminus\Lambda)^{1/2}\|[u_b,\xi]\|_2\\&\leq 10\;\nu(\Gamma\setminus\Lambda)^{1/2}(\|[u_a,\xi]\|_2^2+\|[u_b,\xi]\|_2^2)^{1/2}.\end{align*}
Since $\nu(\Gamma\setminus\Lambda)=\|\xi-E_{L(\Lambda)}(\xi)\|_2^2$, dividing both sides by $\nu(\Gamma\setminus\Lambda)^{1/2}$ and squaring gives the conclusion, with constant $10^2=100$.
\hfill$\blacksquare$
\begin{lemma}\label{SRSex}
Let $\Gamma$ be a countable group, and consider the notations from Section \ref{sectionconstruction}.
Then $\widetilde\Gamma_n$ is strongly residual in $T_{\alpha}(\Gamma)$, for every $n\geq 1$ and $\alpha\in\{0,1\}$.
\end{lemma}
This statement is proven in \cite[\S 21]{DL69} in the case $\alpha=0$ and $n=1$, and is used (without proof) in full generality in \cite{MD69a,MD69b}. For completeness, we provide a proof.
{\it Proof.} Let $n\geq 1$. When $\alpha=0$, we define $\Sigma_n<T_0(\Gamma)$ to be the subgroup generated by $\widetilde\Gamma,\Lambda_1,\Lambda_2,...,\Lambda_n$, and $\Delta_n<T_0(\Gamma)$ to be the subgroup generated by $\widetilde\Gamma_n,\Lambda_{n+1},\Lambda_{n+2},...$.
Similarly, when $\alpha=1$, we define $\Sigma_n<T_1(\Gamma)$ to be the subgroup generated by $\widetilde\Gamma\rtimes S_{\infty},\Lambda_1,\Lambda_2,...,\Lambda_n$, and $\Delta_n<T_1(\Gamma)$ to be the subgroup generated by $\widetilde\Gamma_n,\Lambda_{n+1},\Lambda_{n+2},...$.
In both cases one can check that $\Sigma_n$ and $\Delta_n$ generate $T_{\alpha}(\Gamma)$ so that $T_{\alpha}(\Gamma)=\Sigma_n \ast_{\widetilde\Gamma_n}\Delta_n$. Moreover, if $a$ and $b$ are the generators of $\Lambda_1$ and $\Lambda_{n+1}$, then $a$ commutes with $\widetilde\Gamma_n$, $a\in\Sigma_n\setminus\widetilde\Gamma_n$, and $b^k\in\Delta_n\setminus\widetilde\Gamma_n$, for all $k\geq 1$.
The conclusion is now a consequence of the following fact. Let $G = H_1 \ast_{H_0} H_2$ be an amalgamated free product group such that there exist $a\in H_1\setminus H_0$, $b\in H_2\setminus H_0$ satisfying $aH_0a^{-1}= H_0$ and $b^k\notin H_0$, for all $k\geq 1$. Then $H_0$ is a strongly residual subgroup of $G$. Indeed, let $F \subset G \setminus H_0$ be the set of reduced words of the form $g_1g_2...g_k$, for some $k\geq 1$ and $g_1 \in H_1\setminus H_0$, $g_2\in H_2\setminus H_0$, $g_3 \in H_1 \setminus H_0,...$. It is easy to see that $a,b,F$ verify conditions (i)-(iii) listed in Definition \ref{sres}.
\hfill$\blacksquare$
\begin{lemma}\emph{\cite{MD69a}}\label{srprod}
Let $\Lambda_i<\Gamma_i$ be strongly residual subgroups, for every $1\leq i\leq n$.
Then $\oplus^n_{i=1} \Lambda_i< \oplus^n_{i=1} \Gamma_i$ is a strongly residual subgroup.
\end{lemma}
For completeness, we recall the proof from \cite{MD69a}.
{\it Proof.} Let $a_i,b_i\in\Gamma_i$ and $F_i\subset\Gamma_i\setminus\Lambda_i$ be as in Definition \ref{sres}. Denote by $\Lambda=\oplus^n_{i=1} \Lambda_i$ and $\Gamma=\oplus^n_{i=1} \Gamma_i$. Let $a=(a_1,...,a_n)$, $b=(b_1,...,b_n)\in\Gamma$, and consider the following set
\[F := \{(g_1,...,g_n) \in \Gamma \, | \, \exists i \text{ such that } g_1\in\Lambda_1, \dots, g_{i-1}\in\Lambda_{i-1} \text{ and } g_i \in F_i \}.\]
Then $a,b,F$ satisfy conditions (i)-(iii) from Definition \ref{sres} for $\Lambda<\Gamma$. \hfill$\blacksquare$
The following key corollary of the above results will be frequently used in the sequel.
\begin{corollary}\label{combo}
Let $\mathcal U$ be an ultrafilter on a set $S$. Let $\Gamma$ be a countable group, $\alpha\in\{0,1\}$, and consider the notation from Section \ref{sectionconstruction}. Denote by $M=L(T_{\alpha}(\Gamma))$ and $P_n=L(\widetilde\Gamma_n)$, for every $n\geq 1$. For every $s\in S$, let $n_s,t_s\geq 1$ be integers and let $Q_s$ be a tracial von Neumann algebra.
Then there exists a separable subalgebra $A\subset\prod_{\mathcal{U}}M^{{\bar\otimes} t_s}$ such that $$A'\cap\prod_{\mathcal{U}}(M^{{\bar\otimes} t_s}{\bar\otimes} Q_s)\subset\prod_{\mathcal{U}}(P_{n_s}^{{\bar\otimes} t_s}{\bar\otimes} Q_s).$$
\end{corollary}
{\it Proof.} Let $s\in S$.
By combining Lemmas \ref{SRSex} and \ref{srprod}, we get that $\oplus_{i=1}^{t_s}\widetilde\Gamma_{n_s}<\oplus_{i=1}^{t_s}T_{\alpha}(\Gamma)$ is a strongly residual subgroup. Then Lemma \ref{resalg} implies that $P_{n_s}^{{\bar\otimes} t_s}\subset M^{{\bar\otimes} t_s}$ is a $(2,100)$-residual subalgebra. The conclusion now follows from Lemma \ref{bigger}.
\hfill$\blacksquare$
\section{McDuff depth of a von Neumann algebra}
\label{sectiondepth}
\subsection{Properties at depth $k$}
\begin{definition}
Let $\mathcal{M}$ be a (typically non-separable) von Neumann algebra, and $I$ be a directed set. On the set of subalgebras of $\mathcal{M}$, consider the partial order given by inclusion. A decreasing net $(A_i)_{i \in I}$ of subalgebras of $\mathcal M$ is called a {\it residual net} if:
\begin{itemize}
\item For any separable subalgebra $Q \subset \mathcal{M}$, there exists $i \in I$ such that $A_i \subset Q' \cap \mathcal{M}$, and
\item For any $i \in I$, there exists a separable subalgebra $Q \subset \mathcal{M}$ such that $Q' \cap \mathcal{M} \subset A_i$.
\end{itemize}
A residual net is called {\it trivial} if there exists $i \in I$ such that $A_i = \mathbb{C}$.
\end{definition}
\begin{example}\label{standard}
Given a von Neumann algebra $\mathcal{M}$, consider the set $I$ of separable subalgebras of $\mathcal{M}$, ordered by inclusion. Then the net $(Q' \cap \mathcal{M})_{Q \in I}$ is clearly a residual net, which we call the {\it standard residual net}.
\end{example}
The following result provides our main example of a residual net.
\begin{lemma}\label{MDex}
Let $\Gamma$ be a countable group and $\alpha \in \{0,1\}$. For $n'>n\geq 1$, let $M = L(T_\alpha(\Gamma))$, $P_n = L(\widetilde\Gamma_n)$, $P_{n,n'}=L(\widetilde\Gamma_{n,n'})$, where $\widetilde\Gamma_{n,n'}<\widetilde\Gamma_n<\widetilde\Gamma<T_{\alpha}(\Gamma)$ are defined as in Section \ref{sectionconstruction}.
Let $\mathcal U$ be a countably cofinal ultrafilter on a set $S$. For every $s\in S$, let $t_s\geq 1$ be an integer. Define $\mathcal{M}=\prod_{\mathcal U}M^{{\bar\otimes} t_s}$.
Endow $I = \mathbb{N}^{S}$ with the following partial order:
$(n_s)_s < (m_s)_s \, \text{ iff } \, n_s < m_s, \, \text{ for all } \, s\in S.$
For $i =(n_s)_{s\in S}\in I$, define $A_i=\prod_\mathcal{U} {P_{n_s}}^{{\bar\otimes} t_s}$.
Then $(A_i)_{i \in I}$ is a residual net of $\mathcal{M}$. Moreover, if $\Gamma$ is icc and $i=(n_s)_s<j=(m_s)_s$, then $A_j' \cap A_i=\prod_\mathcal{U} {P_{n_s,m_s}}^{{\bar\otimes} t_s}$.
\end{lemma}
{\it Proof.}
The fact that $(A_i)_{i\in I}$ is a residual net of $\mathcal{M}$ follows easily from Lemmas \ref{combo2} and \ref{combo}.
Now, if $\Gamma$ is icc, then $L(\Gamma)$ is a II$_1$ factor. It follows that for $n'>n\geq 1$ we have $P_{n'}'\cap P_n=P_{n,n'},$ which clearly implies the moreover part.
\hfill$\blacksquare$
Motivated by the moreover assertion of Lemma \ref{MDex}, we introduce the following definition.
\begin{definition}
Let $\mathcal{P}$ be a property for von Neumann algebras.
Let $\mathcal{M}$ be a von Neumann algebra with a residual net $(A_i)_{i\in I}$. We say that $\mathcal{M}$ has {\it property $\mathcal{P}$ at depth $1$} if for all $i_1 \in I$, there exists $i_2 > i_1$ such that for all $i_3 > i_2$ there exists $i_4 > i_3$, such that the inclusion $A_{i_3}' \cap A_{i_2} \subset A_{i_4}' \cap A_{i_1}$ contains an intermediate von Neumann subalgebra with property $\mathcal{P}$.
\end{definition}
\begin{remark}\label{trivial}
A von Neumann algebra $\mathcal{M}$ is trivial at depth $1$ if and only if it admits a separable subalgebra $Q\subset\mathcal{M}$ such that $Q'\cap\mathcal{M}=\mathbb C1$.
\end{remark}
\begin{lemma}\label{indepnet}
Having property $\mathcal{P}$ at depth $1$ is independent of the choice of a residual net.
\end{lemma}
{\it Proof.}
Consider a von Neumann algebra $\mathcal{M}$ with two residual nets $(A_i)_{i \in I}$ and $(B_j)_{j \in J}$. From the definition of residual nets we see that for every $i \in I$ there exists $j \in J$ such that $B_j \subset A_i$. Symmetrically, for every $j\in J$ there exists $i\in I$ such that $A_i\subset B_j$.
Assume that $\mathcal{M}$ has property $\mathcal{P}$ at depth $1$ with respect to $(A_i)_{i\in I}$.
Fix $j_1 \in J$. Then there exists $i_1 \in I$ such that $A_{i_1} \subset B_{j_1}$. Take $i_2 > i_1$ as in the definition of property $\mathcal{P}$ at depth $1$.
Then there exists $j_2 > j_1$ such that $B_{j_2} \subset A_{i_2}$.
Take an arbitrary $j_3 > j_2$, and pick $i_3>i_2$ such that $A_{i_3} \subset B_{j_3}$. Next, we find $i_4 > i_3$ as in the definition of property $\mathcal{P}$ at depth $1$.
Then there exists $j_4 > j_3$ such that $B_{j_4} \subset A_{i_4}$.
Altogether, we have the following inclusions
\[B_{j_4} \subset A_{i_4} \subset A_{i_3} \subset B_{j_3} \subset B_{j_2} \subset A_{i_2} \subset A_{i_1} \subset B_{j_1}.\]
From this we get that
\[B_{j_3}' \cap B_{j_2} \subset A_{i_3}' \cap A_{i_2} \subset A_{i_4}' \cap A_{i_1} \subset B_{j_4}' \cap B_{j_1}.\]
By our choice of $i_1,i_2,i_3,i_4 \in I$, there is an intermediate subalgebra with property $\mathcal{P}$ inside the inclusion
$A_{i_3}' \cap A_{i_2} \subset A_{i_4}' \cap A_{i_1}$. Thus, $\mathcal{M}$ has property $\mathcal{P}$ at depth $1$ with respect to $(B_j)_{j\in J}$.
\hfill$\blacksquare$
\begin{definition}\label{atdepthk}
Let $\mathcal{P}$ be a property of von Neumann algebras. We define inductively on $k \geq 0$ what it means for a von Neumann algebra $\mathcal{M}$ to have {\it property $\mathcal{P}$ at depth $k$}. We denote this property by $\mathcal{P}^{(k)}$.
\begin{itemize}
\item If $k = 0$, then we say that $\mathcal{M}$ has property $\mathcal{P}^{(0)}$ if it has property $\mathcal{P}$.
\item If $k \geq 0$, then we say that $\mathcal{M}$ has $\mathcal{P}^{(k+1)}$ if it has property $\mathcal{P}^{(k)}$ at depth $1$.
\end{itemize}
\end{definition}
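Unfolding the induction, property $\mathcal{P}^{(k)}$ is obtained from $\mathcal{P}$ by $k$ successive applications of the depth-$1$ construction:
\[\mathcal{P}^{(k)} = \big(\mathcal{P}^{(k-1)}\big)\ \text{at depth } 1 = \big(\cdots\big(\mathcal{P}\ \text{at depth } 1\big)\cdots\big)\ \text{at depth } 1 \quad (k \text{ times}).\]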
\begin{definition}
A von Neumann algebra $\mathcal{M}$ is said to have {\it finite McDuff depth} if there exists $k$ such that $\mathcal{M}$ is trivial at depth $k$. The {\it McDuff depth} of $\mathcal{M}$ is defined as the smallest $k \geq 0$ such that $\mathcal{M}$ is trivial at depth $k+1$.
If $\mathcal{M}$ does not have finite McDuff depth, then we define its McDuff depth to be infinite.
\end{definition}
\begin{examples} Let $M$ be a separable II$_1$ factor and $\mathcal{U}$ be a countably cofinal ultrafilter.
\begin{itemize}
\item By Remark \ref{trivial}, $M^{\mathcal{U}}$ has depth $0$ if and only if $M$ does not have property Gamma.
\item If $M$ has property Gamma but is not McDuff, then $M' \cap M^\mathcal{U}$ is abelian and nontrivial. This easily implies that $M^\mathcal{U}$ has infinite depth.
\item If $M$ is the hyperfinite II$_1$ factor, then $M^\mathcal{U}$ has depth $1$.
\end{itemize}
\end{examples}
\subsection{Computing the depth} The aim of this subsection is to prove the following result.
\begin{theorem}\label{countable}
Let $\mathcal{U}$ be a countably cofinal ultrafilter, and $\Gamma$ be a non-trivial countable group. Let $\pmb\alpha$ be a (finite or infinite) sequence of $0$'s and $1$'s. Let $M_{\pmb\alpha}(\Gamma)$ be as defined in Section \ref{sectionconstruction}.
Then the depth of $M_{\pmb\alpha}(\Gamma)^\mathcal{U}$ is at least the length of $\pmb\alpha$. Moreover, if $\Gamma =\mathbb F_2$, then we have equality.
\end{theorem}
Let us fix a countably cofinal ultrafilter $\mathcal{U}$ on a set $S$. Towards proving Theorem \ref{countable}, we first provide an upper bound on depth when $\Gamma = \mathbb F_2$.
\begin{lemma}\label{count1}
For all $k \geq 0$, $\pmb\alpha \in \{0,1\}^k$, and all integers $t_s \geq 1, s\in S$, the von Neumann algebra $\prod_{\mathcal{U}} M_{\pmb\alpha}(\mathbb F_2)^{{\bar\otimes} t_s}$ is trivial at depth $k+1$.
\end{lemma}
{\it Proof.}
We proceed by induction on the length $k$ of the sequence $\pmb\alpha$. First assume $k = 0$. Then $\pmb\alpha$ is the empty sequence and $M_{\pmb\alpha}(\mathbb F_2) = L(\mathbb F_2)$. Since the trivial subgroup is strongly residual in $\mathbb F_2$ (see e.g.\ the proof of Lemma \ref{SRSex}), by combining Lemmas \ref{srprod}, \ref{resalg}, and \ref{bigger}, we deduce the existence of a separable subalgebra $Q\subset\prod_{\mathcal{U}} M_{\pmb\alpha}(\mathbb F_2)^{{\bar\otimes} t_s}$ with trivial relative commutant. In other words, $\prod_{\mathcal{U}} M_{\pmb\alpha}(\mathbb F_2)^{{\bar\otimes} t_s}$ is trivial at depth $1$ (see Remark \ref{trivial}).
Assume the conclusion holds for $k \geq 0$. Take integers $t_s\geq 1, s\in S$ and a sequence $\pmb\alpha \in \{0,1\}^{k+1}$ of length $k+1$. Put $\mathcal{M} = \prod_{\mathcal{U}} M_{\pmb\alpha}(\mathbb F_2)^{{\bar\otimes} t_s}$.
We want to show that, at depth 1, $\mathcal{M}$ has the property of being trivial at depth $k+1$.
By Lemma \ref{indepnet}, in order to check this property at depth 1 we can use any residual net for $\mathcal{M}$.
From Definition \ref{defintro} we see that $T_{\pmb\alpha}(\mathbb F_2)=T_{\alpha_1}(T_{\pmb\beta}(\mathbb F_2))$ and hence $M_{\pmb\alpha}(\mathbb F_2) = L(T_{\alpha_1}(T_{\pmb\beta}(\mathbb F_2)))$, where $\pmb\beta = (\alpha_2,\dots,\alpha_{k+1}) \in \{0,1\}^k$.
Applying Lemma \ref{MDex} (to $\Gamma = T_{\pmb\beta}(\mathbb F_2)$ and $\alpha = \alpha_1$) we obtain that
$A_i = \prod_\mathcal{U} {P_{n_s}}^{{\bar\otimes} t_s}$, where $i = (n_s)_s\in I$, is a residual net for $\mathcal{M}$.
Now, for any indices $i_4 > i_3 > i_2 > i_1$, the inclusion $A_{i_3}' \cap A_{i_2} \subset A_{i_4}' \cap A_{i_1}$ has $A_{i_4}' \cap A_{i_1}$ as an intermediate algebra. Moreover, since $T_{\pmb\beta}(\mathbb F_2)$ is icc, by Lemma \ref{MDex} we get that $A_{i_4}'\cap A_{i_1}=\prod_\mathcal{U} {P_{n_{1,s},n_{4,s}}}^{{\bar\otimes} t_s}$, where $i_1=(n_{1,s})_s$ and $i_4=(n_{4,s})_s$. Since $P_{m,m'}\cong M_{\pmb\beta}(\mathbb F_2)^{{\bar\otimes} (m'-m)}$, for any $m'>m\geq 1$, we conclude that $A_{i_4}'\cap A_{i_1}$ is of the form $\prod_\mathcal{U} {M_{\pmb\beta}(\mathbb F_2)}^{{\bar\otimes} v_s}$, for some integers $v_s\geq 1, s\in S$. By our induction assumption, $A_{i_4}'\cap A_{i_1}$ is trivial at depth $k+1$. This shows that $\mathcal{M}$ is trivial at depth $k+2$, as desired.
\hfill$\blacksquare$
Obtaining a lower bound on depth requires additional work. We start by defining some more general residual nets.
\begin{definition}
Let $\mathcal{N} \subset \mathcal{M}$ be an inclusion of von Neumann algebras and let $I$ be a directed set. Let $(A_i)_{i \in I}$ and $(B_i)_{i \in I}$ be two decreasing nets of subalgebras of $\mathcal{M}$ such that $A_i \subset \mathcal{N}$, $B_i \subset \mathcal{M}$, and $A_i \subset B_i$, for all $i \in I$. The net $(A_i \subset B_i)_{i\in I}$ is called a {\it residual pair} for the inclusion $\mathcal{N}\subset \mathcal{M}$ if the following two properties are satisfied:
\begin{itemize}
\item For any $i \in I$, there exists a separable subalgebra $Q \subset \mathcal{N}$ such that $Q' \cap \mathcal{M} \subset B_i$, and
\item For any separable subalgebra $Q \subset \mathcal{M}$, there exists $i \in I$ such that $A_i \subset Q' \cap \mathcal{N}$.
\end{itemize}
\end{definition}
If $\mathcal{N} = \mathcal{M}$ and $(A_i)_{i\in I}$ is a residual net of $\mathcal{M}$, then $(A_i \subset A_i)_{i\in I}$ is a residual pair for $\mathcal{N}\subset\mathcal{M}$.
As in Lemma \ref{MDex}, we have the following key example.
\begin{lemma}\label{MDex2}
Let $\Gamma$ be a countable group and $\alpha \in \{0,1\}$. For $n'>n\geq 1$, let $M = L(T_\alpha(\Gamma))$, $P_n = L(\widetilde\Gamma_n)$, $P_{n,n'}=L(\widetilde\Gamma_{n,n'})$, where $\widetilde\Gamma_{n,n'} < \widetilde\Gamma_n < \widetilde\Gamma < T_{\alpha}(\Gamma)$ are defined as in Section \ref{sectionconstruction}.
Recall that $\mathcal U$ is a countably cofinal ultrafilter on the set $S$. Let $Q_s, s\in S$, be tracial von Neumann algebras. Define $\mathcal{N}=\prod_{\mathcal U}M$ and $\mathcal{M}=\prod_{\mathcal U}(M{\bar\otimes} Q_s)$.
Consider $I = \mathbb{N}^{S}$ ordered as in Lemma \ref{MDex}.
For $i=(n_s)_{s\in S}\in I$, define $A_i=\prod_\mathcal{U} {P_{i_s}} \subset \mathcal{N}$ and $B_i=\prod_\mathcal{U} (P_{i_s}{\bar\otimes} Q_s) \subset \mathcal{M}$.
Then the net $(A_i\subset B_i)_{i \in I}$ is a residual pair for the inclusion $\mathcal{N}\subset\mathcal{M}$.
Moreover, if $\Gamma$ is icc and $i=(n_s)_s < j=(m_s)_s$, then we have $A_j' \cap B_i=\prod_\mathcal{U} (P_{n_s,m_s}{\bar\otimes} Q_s)$ and $B_j'\cap A_i=\prod_\mathcal{U} P_{n_s,m_s}$.
\end{lemma}
The following result is a simple variation of Lemma \ref{indepnet}.
\begin{lemma}\label{inclusion}
Let $\mathcal{P}$ be a property of von Neumann algebras.
Let $\mathcal{A} \subset \mathcal{M} \subset \mathcal{B}$ be von Neumann algebras. Assume that $\mathcal{M}$ has property $\mathcal{P}$ at depth $1$.
Then for any residual pair $(A_i \subset B_i)_{i \in I}$ for the inclusion $\mathcal{A} \subset \mathcal{B}$, we have:
For all $i_1 \in I$, there exists $i_2 > i_1$ such that for all $i_3 > i_2$ there exists $i_4 > i_3$ such that the inclusion $B_{i_3}' \cap A_{i_2} \subset A_{i_4}' \cap B_{i_1}$ contains an intermediate von Neumann algebra with property $\mathcal{P}$.
\end{lemma}
{\it Proof.}
Fix $i_1 \in I$. Then there exists a separable subalgebra $Q_1 \subset \mathcal{A}$ such that $Q_1' \cap \mathcal{B} \subset B_{i_1}$.
Since $\mathcal M$ has property $\mathcal{P}$ at depth $1$, Lemma \ref{indepnet} implies that $\mathcal{M}$ has property $\mathcal{P}$ at depth $1$ with respect to the standard residual net from Example \ref{standard}. Thus, there exists a separable subalgebra $\mathcal{M}\supset Q_2\supset Q_1$ such that for any separable subalgebra $\mathcal{M}\supset Q_3\supset Q_2$ we can find a separable subalgebra $\mathcal{M}\supset Q_4\supset Q_3$ such that the inclusion $(Q_3'\cap \mathcal{M})' \cap (Q_2' \cap \mathcal{M}) \subset (Q_4' \cap \mathcal{M})' \cap (Q_1' \cap \mathcal{M})$ contains an intermediate subalgebra with property $\mathcal{P}$.
Next, let $i_2 > i_1$ such that $A_{i_2} \subset Q_2' \cap \mathcal{A}$.
Pick any index $i_3\in I$ with $i_3 > i_2$. Then one can find a separable subalgebra $Q \subset \mathcal{A}$ such that $Q' \cap \mathcal{B} \subset B_{i_3}$.
Let $Q_3$ be the von Neumann algebra generated by $Q$ and $Q_2$. Then $\mathcal{M}\supset Q_3\supset Q_2$ and $Q_3'\cap\mathcal{B}\subset B_{i_3}$.
Let $Q_4\supset Q_3$ be as given by the previous paragraph. Since $Q_4\subset\mathcal M\subset\mathcal B$, there exists $i_4 > i_3$ such that $A_{i_4} \subset Q_4' \cap \mathcal{A}$.
Altogether, we have the following inclusions
\[A_{i_4} \subset Q_4' \cap \mathcal{M} \subset Q_3' \cap \mathcal{M} \subset B_{i_3}\]
\[A_{i_2} \subset Q_2' \cap \mathcal{M} \subset Q_1' \cap \mathcal{M} \subset B_{i_1}.\]
These lead to
\[B_{i_3}' \cap A_{i_2} \subset (Q_3'\cap \mathcal{M})' \cap (Q_2' \cap \mathcal{M}) \subset (Q_4' \cap \mathcal{M})' \cap (Q_1' \cap \mathcal{M}) \subset A_{i_4}' \cap B_{i_1}.\]
It follows that the inclusion $B_{i_3}' \cap A_{i_2} \subset A_{i_4}' \cap B_{i_1}$ contains an intermediate von Neumann algebra with property $\mathcal{P}$, as claimed.
\hfill$\blacksquare$
We are now ready to prove the second half of Theorem \ref{countable}: the lower bound on the depth.
\begin{lemma}\label{count2}
Fix a non-trivial countable group $\Gamma$.
Then for any $k\geq 0$ and $\pmb\alpha \in \{0,1\}^k$ and any family of tracial von Neumann algebras $Q_s$, $s\in S$, any intermediate von Neumann subalgebra $M_{\pmb\alpha}(\Gamma)^{\mathcal{U}}=\prod_\mathcal{U} M_{\pmb\alpha}(\Gamma) \subset \mathcal{M} \subset \prod_\mathcal{U} (M_{\pmb\alpha}(\Gamma) {\bar\otimes} Q_s)$ is not trivial at depth $k$.
\end{lemma}
{\it Proof.}
We proceed by induction on $k$. First assume $k = 0$. Then $\pmb\alpha$ is the empty sequence and $M_{\pmb\alpha}(\Gamma) = L(\Gamma) \neq \C1$, hence the ultrapower ${M_{\pmb\alpha}(\Gamma)}^\mathcal{U}$ is not trivial, so $\mathcal{M}$ is not trivial either.
Assume the result holds for some $k \geq 0$. Take a sequence $\pmb\alpha$ of length $k + 1$ and suppose by contradiction there exist tracial von Neumann algebras $Q_s, s\in S$ and an intermediate von Neumann subalgebra $\mathcal{M}$ satisfying
\[M_{\pmb\alpha}(\Gamma)^\mathcal{U} \subset \mathcal{M} \subset \prod_\mathcal{U} (M_{\pmb\alpha}(\Gamma) \bar\otimes Q_s),\]
that is trivial at depth $k+1$.
Therefore, at depth $1$, $\mathcal{M}$ has the property of being trivial at depth $k$. From Definition \ref{defintro}, we see that $M_{\pmb\alpha}(\Gamma) =L(T_{\alpha_1}(T_{\pmb\beta}(\Gamma)))$, where $\pmb\beta = (\alpha_2,\dots,\alpha_{k+1}) \in \{0,1\}^k$. Let $(A_i \subset B_i)_{i \in I}$ be the residual pair for the inclusion $M_{\pmb\alpha}(\Gamma)^\mathcal{U} \subset \prod_\mathcal{U} (M_{\pmb\alpha}(\Gamma) \bar\otimes Q_s)$ obtained by applying Lemma \ref{MDex2} to $T_{\pmb\beta}(\Gamma)$ and $\alpha_1$ instead of $\Gamma$ and $\alpha$.
Lemma \ref{inclusion} then implies that for all $i_1 \in I$, there exists $i_2 > i_1$ such that for all $i_3 > i_2$ there exists $i_4 > i_3$ such that the inclusion ${B_{i_3}}' \cap A_{i_2} \subset {A_{i_4}}' \cap B_{i_1}$ contains an intermediate subalgebra which is trivial at depth $k$.
Since $T_{\pmb\beta}(\Gamma)$ is icc, Lemma \ref{MDex2} implies that for all indices $i_4 > i_3 > i_2 > i_1$ we have
\[{B_{i_3}}' \cap A_{i_2} = \prod_\mathcal{U} P_{n_{2,s},n_{3,s}} \, \text{ and } \, {A_{i_4}}' \cap B_{i_1} = \prod_\mathcal{U} (P_{n_{1,s},n_{4,s}} \bar\otimes Q_s).\]
Choose $i_3=(n_{3,s})$ such that $n_{3,s} = n_{2,s}+1$ for all $s\in S$. Since $P_{m,m'} \cong {M_{\pmb\beta}(\Gamma)}^{\bar{\otimes} (m'-m)}$, for any $m'>m\geq 1$, we see that the inclusion ${B_{i_3}}' \cap A_{i_2} \subset {A_{i_4}}' \cap B_{i_1}$ is of the form
\[\prod_\mathcal{U} P_{n_{2,s},n_{2,s}+1}= \prod_\mathcal{U} M_{\pmb\beta}(\Gamma) \subset \prod_\mathcal{U} (P_{n_{1,s},n_{4,s}} {\bar{\otimes}} Q_s) = \prod_\mathcal{U} (M_{\pmb\beta}(\Gamma) {\bar{\otimes}} \widetilde Q_s),\]
for some tracial von Neumann algebras $\widetilde Q_s$. Since by the induction assumption there is no intermediate subalgebra in this inclusion which is trivial at depth $k$, we get a contradiction.
\hfill$\blacksquare$
\section{Distinguishing uncountably many ultrapowers}
\label{sectionpropV}
\subsection{Property $\widetilde V$ and proof of the main results}
In order to show that the II$_1$ factors $M_{\pmb{\alpha}}(\Gamma)$ are non-isomorphic, McDuff introduced \cite{MD69b} a certain property for separable II$_1$ factors, called property $V$ (cf.\ with the earlier notions of asymptotically abelian II$_1$ factors \cite{Sa68, DL69, ZM69}). In this section, inspired by property $V$, we define the following new property for non-separable von Neumann algebras:
\begin{definition}\label{propV}
A non-separable von Neumann algebra $\mathcal{M}$ has {\it property $\widetilde V$} if there exists a separable subalgebra $A \subset \mathcal{M}$ such that for any separable subalgebra $B \subset A' \cap \mathcal{M}$ and any separable subalgebra $C \subset \mathcal{M}$, there exists a unitary $u \in \mathcal{M}$ such that $uBu^* \subset C' \cap \mathcal{M}$.
\end{definition}
One can check that if a separable II$_1$ factor $M$ has property $V$, then $M^{\omega}$ has property $\widetilde V$, for any free ultrafilter $\omega$ on $\mathbb N$.
Let $\Gamma$ be a countable group. Then $L(T_1(\Gamma))$ has property $V$ by \cite[Lemma 1]{MD69b}, hence $L(T_1(\Gamma))^{\omega}$ has property $\widetilde V$. On the other hand, if $\Gamma$ is non-amenable, we will show that $L(T_0(\Gamma))^\omega$ does not have property $\widetilde V$. More generally, we have the following theorem.
As in the previous section, we fix a countably cofinal ultrafilter $\mathcal{U}$ on a set $S$.
\begin{theorem}\label{uncountable}
Let $\Gamma$ be a non-amenable countable group. Let $k \geq 1$ and $\pmb{\alpha} \in \{0,1\}^k$. Let $M_{\pmb{\alpha}}(\Gamma)$ be as defined in Section \ref{sectionconstruction}.
Then $M_{\pmb{\alpha}}(\Gamma)^\mathcal{U}$ has property $\widetilde V$ at depth $k - 1$ (see Definition \ref{atdepthk}) if and only if $\alpha_{k} = 1$.
Moreover, if instead $\pmb\alpha$ is a sequence of $0$'s and $1$'s with length greater than $k$, then the same conclusion holds for arbitrary $\Gamma$ (possibly amenable).
\end{theorem}
Note that the moreover part of Theorem \ref{uncountable} follows from the first part of the statement. Indeed, if $k\geq 1$ and $\pmb\alpha$ has length greater than $k$, then Definition \ref{defintro} implies that the factor $M_{\pmb\alpha}(\Gamma)$ is of the form $M_{\pmb\beta}(\Lambda)$, for some non-amenable group $\Lambda$ and the truncated sequence $\pmb\beta = (\alpha_n)_{n=1}^k \in \{0,1\}^k$.
Let us explain how this theorem implies our main results.
{\bf Proof of Theorem \ref{main}.} Assume that $M_{\pmb\alpha}(\Gamma)^{\mathcal{U}}\simeq M_{\pmb\beta}(\Gamma)^{\mathcal{V}}$, for two ultrafilters $\mathcal{U}$ and $\mathcal{V}$. If $\mathcal{U}$ and $\mathcal{V}$ are countably cofinal, then combining Theorem \ref{countable} and Theorem \ref{uncountable} leads to a contradiction. If one of the ultrafilters, say $\mathcal{U}$, is not countably cofinal, then Lemma \ref{complete} readily implies that
$M_{\pmb\alpha}(\Gamma)\simeq M_{\pmb\beta}(\Gamma)$, and \cite{MD69b} gives a contradiction. Alternatively, we may choose a free ultrafilter $\omega$ on $\mathbb N$ and derive a contradiction from $M_{\pmb\alpha}(\Gamma)^{\omega}\simeq M_{\pmb\beta}(\Gamma)^{\omega}$, as above.
\hfill$\blacksquare$
{\bf Proof of Corollary \ref{maincor}.}
Denote by $\tau_1$ the canonical trace on $C^*_r(T_{\pmb\alpha}(\Gamma))$, by $\tau_2$ the unique trace on $\mathcal{Z}$, and let $\tau = \tau_1 \otimes \tau_2$. Note that the von Neumann algebra generated by $A_{\pmb\alpha}(\Gamma)$ acting via the GNS representation with respect to $\tau$ is $M_{\pmb\alpha}(\Gamma) \bar{\otimes} R \simeq M_{\pmb\alpha}(\Gamma)$. Indeed, $M_{\pmb\alpha}(\Gamma)$ is McDuff, as soon as $\pmb\alpha$ is non-empty.
It is an easy exercise to show that for any ultrafilter $\mathcal{U}$, the von Neumann algebra generated by $A_{\pmb\alpha}(\Gamma)^\mathcal{U}$ acting via the GNS representation associated with $\tau^\mathcal{U}$ is precisely $M_{\pmb\alpha}(\Gamma)^\mathcal{U}$.
Hence, in order to deduce Corollary \ref{maincor} from Theorem \ref{main}, we only need to check that $\tau^\mathcal{U}$ is the only trace on $A_{\pmb\alpha}(\Gamma)^\mathcal{U}$.
Note that for any group $\Gamma$ and $\alpha \in \{0,1\}$, $T_\alpha(\Gamma)$ is equal to the increasing union of the free product groups $G_n:=\Sigma_n \ast \Lambda_{n+1}$, where $\Sigma_n$ is the subgroup generated by $\Lambda_1 \ast \dots \ast \Lambda_n$ and $\Gamma_1 \oplus \dots \oplus \Gamma_n$ if $\alpha=0$ (respectively, $(\Gamma_1 \oplus \dots \oplus \Gamma_n) \rtimes S_n$ if $\alpha = 1$). In particular, $T_{\pmb\alpha}(\Gamma)$ is an increasing union of Powers groups (see e.g.\ \cite{dlHS86}):
\begin{equation}\label{powers}
T_{\pmb\alpha}(\Gamma) = \bigcup_n G_n.
\end{equation}
Since $\mathcal{Z}$ has a unique trace, \cite[Corollary 7]{dlHS86} implies that $A_{\pmb\alpha}(\Gamma)$ has a unique trace. Moreover, if $\Gamma$ is exact, so is $A_{\pmb\alpha}(\Gamma)$ and \cite[Theorem 8]{Oz13} shows that $A_{\pmb\alpha}(\Gamma)^\mathcal{U}$ has the unique trace property. Let us treat the general case, when $\Gamma$ is not necessarily exact.
By \eqref{powers}, it is sufficient to show that for all families of integers $(k_s)_s$, the algebra $\prod_{\mathcal{U}} (C^*_r(G_{k_s}) \otimes \mathcal{Z})$ has a unique trace. Since $\mathcal{Z}^\mathcal{U}$ has a unique trace and all the $G_{k_s}$'s are Powers groups, this is an easy consequence of \cite[Lemma 5]{dlHS86}.\hfill$\blacksquare$
The rest of this section is devoted to the proof of Theorem \ref{uncountable}. As explained above, we only need to prove the first statement. We will proceed by induction on $k$, and treat the base case and the inductive step in two separate subsections.
\subsection{The base case}
The case $k = 0$ of Theorem \ref{uncountable} is dealt with by the following two lemmas.
\begin{lemma}\label{V1}
Let $\mathcal{U}$ be a countably cofinal ultrafilter on a set $S$. Let $\Gamma$ be a countable group and denote by $M = L(T_1(\Gamma))$. For every $s\in S$, let $t_s\geq 1$ be an integer. Let $\mathcal{M}=\prod_{\mathcal{U}}M^{{\bar\otimes} t_s}$.
Then $\mathcal{M}$ has property $\widetilde V$.
\end{lemma}
{\it Proof.} Recall from Section \ref{sectionconstruction} that $T_1(\Gamma)$ is generated by $\widetilde\Gamma\rtimes S_{\infty}$ and $\Lambda_j, j\geq 1$, where $\Gamma_i, i\geq 1$, and $\Lambda_j, j\geq 1,$ are isomorphic copies of $\Gamma$ and $\mathbb Z$, respectively, and $S_{\infty}$ acts on $\widetilde\Gamma=\oplus_{i\geq1}\Gamma_i$ by permutations, with the only relations that $\Gamma_i$ and $\Lambda_j$ commute whenever $i\geq j$. Put $P=L(\widetilde\Gamma)$.
Then Corollary \ref{combo} provides a separable subalgebra $A\subset\mathcal{M}$ such that $A'\cap\mathcal{M}\subset\prod_{\mathcal{U}}P^{{\bar\otimes} t_s}$.
For $n' > n\geq 1$, recall that $\widetilde\Gamma_{n,n'}=\bigoplus_{n'>i\geq n}\Gamma_i$. For $n\geq 1$, let $H_n<T_1(\Gamma)$ be the subgroup generated by $\widetilde\Gamma_{1,n+1}\rtimes S_{n}$ and $\Lambda_1,...,\Lambda_n$, where we view $S_{n}$ as the group of all permutations of $\{1,2,...\}$ leaving each $k>n$ fixed. Denote by $R_n=L(\widetilde\Gamma_{1,n+1})$ and $M_n=L(H_n)$.
Let $B\subset\prod_{\mathcal{U}}P^{{\bar\otimes} t_s}$ and $C\subset\mathcal{M}$ be separable subalgebras. Since $\cup_{m\geq 1}R_m$ is weakly dense in $P$ and $\cup_{n\geq 1}M_n$ is weakly dense in $M$, Lemma \ref{ccofinal} provides integers
$m_s,n_s\geq 1$, for $s\in S$, such that \begin{equation}\label{BC}B\subset\prod_{\mathcal{U}}R_{m_s}^{{\bar\otimes} t_s}\;\;\text{and}\;\; C\subset\prod_{\mathcal{U}}M_{n_s}^{{\bar\otimes} t_s}.\end{equation}
Finally, for every $s\in S$, let $\sigma_s\in S_{\infty}$ be a permutation such that $\sigma_s(k)>n_s$, for any $1\leq k\leq m_s$.
Then $\sigma_s\widetilde\Gamma_{1,m_s+1}\sigma_s^{-1}\subset\oplus_{i>n_s}\Gamma_i$, and hence $\sigma_s\widetilde\Gamma_{1,m_s+1}\sigma_s^{-1}$ commutes with $H_{n_s}$. Thus, the unitary element $u_s=u_{\sigma_s}\in \mathscr U(M)$ satisfies $u_sR_{m_s}u_s^*\subset M_{n_s}'\cap M$. Therefore, letting $u=(u_s^{\otimes t_s})_s\in\mathscr U(\mathcal{M})$, the equation \eqref{BC} implies that $uBu^*\subset C'\cap\mathcal{M}$, as desired.
\hfill$\blacksquare$
\begin{lemma}\label{V2}
Let $\mathcal{U}$ be an ultrafilter on a set $S$. Let $\Gamma$ be a countable non-amenable group and denote by $M = L(T_0(\Gamma))$.
For every $s\in S$, let $Q_s$ be a tracial von Neumann algebra.
Then any intermediate subalgebra $M^\mathcal{U} \subset \mathcal{M} \subset \prod_\mathcal{U} (M {\bar\otimes} Q_s)$ does not have property $\widetilde V$.
\end{lemma}
Lemma \ref{V2} strengthens \cite[Lemma 3]{MD69b}. In order to prove it, we will need two additional results.
Recall from Section \ref{sectionconstruction} that $T_0(\Gamma)$ is generated by $\widetilde\Gamma=\oplus_{i\geq 1}\Gamma_i$ and $\widetilde\Lambda=*_{j\geq 1}\Lambda_j$, where $\Gamma_i,\Lambda_j$ are isomorphic copies of $\Gamma,\mathbb Z$, respectively, with the only relations being that $\Gamma_i$ and $\Lambda_j$ commute whenever $i\geq j\geq 1$.
For $n\geq 1$, we denote by $\pi_n:\Gamma\rightarrow\widetilde\Gamma$ the canonical embedding with $\pi_n(\Gamma)=\Gamma_n$.
We also let $\widetilde\Gamma_n=\oplus_{i\geq n}\Gamma_i$.
\begin{lemma}\label{normal}
Let $g\in T_0(\Gamma), g'\in\widetilde\Gamma_{n+1}$, and $g''\in\Gamma_{n}$, for some $n\geq 1$. Assume that $g'gg''=g$.
Then $g'=g''=e$.
\end{lemma}
{\it Proof.}
Fix $n$, $g,g',g''$ as in the statement of the lemma satisfying $g'gg'' = g$.
Note that $T_0(\Gamma)$ splits as an amalgamated free product $T_0(\Gamma) = \Sigma \ast_{\widetilde \Gamma_{n+1}} \Delta$, where
\begin{itemize}
\item $\Sigma < T_0(\Gamma)$ is the subgroup generated by $\widetilde \Gamma$, $\Lambda_1,..., \Lambda_n$;
\item $\Delta < T_0(\Gamma)$ is the subgroup generated by $\widetilde \Gamma_{n+1}$ and $\Lambda_{n+1}, \Lambda_{n+2}, ...$.
\end{itemize}
Then $g'' \in \Sigma$ is conjugate inside $T_0(\Gamma)$ to $g' \in \widetilde\Gamma_{n+1}$. By the General fact below, $g''$ is actually conjugate inside $\Sigma$ to an element of $\widetilde\Gamma_{n+1}$. Now note that $\widetilde\Gamma_{n+1}$ is normal inside $\Sigma$ (it is even in product position). This forces $g''$ to belong to $\Gamma_n \cap \widetilde\Gamma_{n+1}$. Hence $g'' = e$, and further $g' = e$.
{\bf General fact.} Consider an amalgamated free product of groups $A = A_1 \ast_{A_0} A_2$. Assume that two elements $a_1 \in A_1$ and $a_2 \in A_2$ are conjugate inside $A$. Then $a_1$ is conjugate inside $A_1$ to an element in $A_0$.
Indeed, assume that $a_2 = ha_1h^{-1}$ for some $h \in A$. If $h \in A_1$, then $a_2 \in A_1 \cap A_2 = A_0$ and we are done. Otherwise, write $h$ as a product $h = h_0b$ for some $b \in A_1$ and some reduced word $h_0 \in A$ with rightmost letter in $A_2 \setminus A_0$. Then $ba_1b^{-1}$ lies inside $A_0$, for if not $h_0(ba_1b^{-1})h_0^{-1}$ would be a reduced word containing the letter $ba_1b^{-1} \in A_1 \setminus A_0$, so it could not lie inside $A_2$.
\hfill$\blacksquare$
\begin{lemma}\label{nonamen}
There exist $g_1,...,g_m\in\Gamma$ and $C>0$ such that the following holds:
For any $n\geq 1$, any unitaries $v_1,...,v_m\in\mathscr U(L(\widetilde\Gamma_{n+1}))$, and any $\xi\in M$ we have that
$$\|\xi\|_2\leq C\sum_{k=1}^m\|u_{\pi_n(g_k)}\xi-\xi v_{k}\|_2.$$
\end{lemma}
{\it Proof.} Let $\lambda_n:\Gamma_n\rightarrow\mathcal U(\ell^2(T_0(\Gamma)/\widetilde\Gamma_{n+1}))$ be the quasi-regular representation of $\Gamma_n$.
Lemma \ref{normal} implies that $\Gamma_n$ acts freely on $T_0(\Gamma)/\widetilde\Gamma_{n+1}$. Thus, $\lambda_n$ is a multiple of the left regular representation of $\Gamma_n$, hence $\lambda_n\circ\pi_n$ is a multiple of the left regular representation of $\Gamma$.
Since $\Gamma$ is non-amenable, there exist $g_1,...,g_m\in\Gamma$ and $C>0$ such that for every $\xi\in\ell^2(T_0(\Gamma)/\widetilde\Gamma_{n+1})$ and $n\geq 1$ we have \begin{equation}\label{nonamen1}\|\xi\|_2\leq C\sum_{k=1}^m\|\lambda_n(\pi_n(g_k))\xi-\xi\|_2.\end{equation}
We identify $L^2(M)\equiv\ell^2(T_0(\Gamma))$ as usual, via the unitary given by $u_g\mapsto\delta_g$, for any $g\in T_0(\Gamma)$.
Fix $n\geq 1$.
For $S\subset T_0(\Gamma)$, we denote by $P_S$ the orthogonal projection from $\ell^2(T_0(\Gamma))$ onto the $\|\cdot\|_2$-closed linear span of $\{\delta_g\,|\,g\in S\}$.
We define $T:\ell^2(T_0(\Gamma))\rightarrow \ell^2(T_0(\Gamma)/\widetilde\Gamma_{n+1})$ by letting $$T(\xi)(g\widetilde\Gamma_{n+1})=\|P_{g\widetilde\Gamma_{n+1}}(\xi)\|_2,\;\;\text{for every $\xi\in\ell^2(T_0(\Gamma))$ and $g\in T_0(\Gamma)$}.$$
Then for every $\xi,\eta\in\ell^2(T_0(\Gamma))$, $g\in\Gamma_n$, and $v \in \mathscr U(L(\widetilde\Gamma_{n+1}))$ we have that \begin{equation}\label{nonamen2}
\|T(\xi)-T(\eta)\|_2\leq \|\xi-\eta\|_2,\;\;\;\;\;T(u_g\xi)=\lambda_n(g)(T(\xi)),\;\;\;\;\text{and}\;\;\;\;T(\xi v)=T(\xi).
\end{equation}
For the last identity, just notice that since the $\|\cdot\|_2$-closed linear span of $\{\delta_h\,|\,h\in g\widetilde\Gamma_{n+1}\}$ is a right $L(\widetilde\Gamma_{n+1})$-module, we have that $P_{g\widetilde\Gamma_{n+1}}(\xi v)=P_{g\widetilde\Gamma_{n+1}}(\xi)v$, for every $g\in T_0(\Gamma)$ and $v \in \mathscr U(L(\widetilde\Gamma_{n+1}))$.
Finally, let $\xi\in\ell^2(T_0(\Gamma))$ and $v_1,...,v_m\in\mathscr U(L(\widetilde\Gamma_{n+1}))$. By applying \eqref{nonamen1} to $T(\xi)$ and using \eqref{nonamen2} we deduce
\[\|\xi\|_2=\|T(\xi)\|_2 \leq C\sum_{k=1}^m\|T(u_{\pi_n(g_k)}\xi)-T(\xi v_k)\|_2 \leq C\sum_{k=1}^m\|u_{\pi_n(g_k)}\xi-\xi v_k\|_2.\]
This finishes the proof.
\hfill$\blacksquare$
\subsection*{Proof of Lemma \ref{V2}} Assume by contradiction that there exists an intermediate subalgebra $M^{\mathcal{U}}\subset\mathcal{M}\subset\prod_{\mathcal{U}}(M{\bar\otimes} Q_s)$ with property $\widetilde V$. Let $A\subset\mathcal{M}$ be as in Definition \ref{propV}. For $n\geq 1$, put $P_n=L(\widetilde\Gamma_n)$.
Applying Corollary \ref{combo2}, we can find integers $i_s\geq 1$, for every $s\in S$, such that
\[\prod_{\mathcal{U}}P_{i_s}\subset A'\cap\mathcal{M}.\]
Let $\rho:\Gamma\rightarrow\mathcal U(\prod_{\mathcal{U}}P_{i_s})$ be the homomorphism given by $\rho(g)=(u_{\pi_{i_s}(g)})_s$, for $g\in\Gamma$.
Consider $B\subset A' \cap \mathcal{M}$ the (separable) subalgebra generated by $\rho(\Gamma)$. Also, use Corollary \ref{combo} to get a separable subalgebra $C\subset M^{\mathcal{U}}$ such that
\[C'\cap\mathcal{M}\subset\prod_{\mathcal{U}}(P_{i_s+1}{\bar\otimes} Q_s).\]
Since $\mathcal{M}$ has property $\widetilde V$, there exists a unitary $v \in \mathcal{M} \subset \prod_{\mathcal{U}}(M{\bar\otimes} Q_s)$ such that
\[v^*Bv\subset C' \cap \mathcal{M} \subset \prod_{\mathcal{U}}(P_{i_s+1}{\bar\otimes} Q_s).\]
Represent $v=(v_s)_s$, where $v_s\in M{\bar\otimes} Q_s$ is a unitary, for any $s\in S$. Let $g_1,...,g_m\in\Gamma$ be given by Lemma \ref{nonamen}. For $1\leq k\leq m$, denote by $u_k=v^*\rho(g_k)v$. Since $u_k\in\prod_{\mathcal{U}}(P_{i_s+1}{\bar\otimes} Q_s)$, we can represent $u_k=(u_{k,s})_s$, where $u_{k,s}\in P_{i_s+1}{\bar\otimes} Q_s$ is a unitary.
Since $\rho(g_k)v=vu_k$, we get
$$\lim\limits_{\mathcal{U}}\|u_{\pi_{i_s}(g_k)}v_s-v_su_{k,s}\|_2=0,\;\;\text{for every $1\leq k\leq m$}.$$
Since $\|v_s\|_2=1$, for every $s\in S$, this clearly contradicts the conclusion of Lemma \ref{nonamen}. \hfill$\blacksquare$
\subsection{The inductive step}
Theorem \ref{uncountable} clearly follows from the next two lemmas.
\begin{lemma}
Let $\Gamma$ be a countable group. Use the notations of Section \ref{sectionconstruction}.
For any $k \geq 1$ and $\pmb{\alpha} \in \{0,1\}^k$ such that $\alpha_k = 1$, and any integers $t_s\geq 1, s\in S$, we have that $\prod_\mathcal{U} {M_{\pmb\alpha}(\Gamma)}^{{\bar{\otimes}} t_s}$ has property $\widetilde V$ at depth $k - 1$.
\end{lemma}
{\it Proof.}
We proceed by induction on $k$. If $k = 1$, then $M_{\pmb\alpha}(\Gamma) =L(T_1(\Gamma))$ and the conclusion follows from Lemma \ref{V1}.
Assume the conclusion holds for some $k \geq 1$. Let $\pmb\alpha \in \{0,1\}^{k+1}$ be such that $\alpha_{k+1} = 1$. Then the sequence $\pmb\beta = (\alpha_{n+1})_{n = 1}^k$ has length $k$ and $\beta_{k} = \alpha_{k+1}=1$. Moreover, we have that $M_{\pmb\alpha}(\Gamma) =L(T_{\alpha_1} (T_{\pmb\beta}(\Gamma)))$. Let $t_s\geq 1, s\in S$, be integers and denote by $\mathcal{M} = \prod_{\mathcal{U}} {M_{\pmb\alpha}(\Gamma)}^{{\bar{\otimes}} t_s}$.
By applying Lemma \ref{MDex} (to $T_{\pmb\beta}(\Gamma)$ and $\alpha_1$ instead of $\Gamma$ and $\alpha$) we get that
$A_i = \prod_\mathcal{U} {P_{i_s}}^{{\bar{\otimes}} t_s}$, where $i = (i_s)_s\in I$, is a residual net for $\mathcal{M}$.
Since $\beta_{k} = 1$ and $P_{m,m'}\cong {M_{\pmb\beta}(\Gamma)}^{{\bar{\otimes}} (m'-m)}$, for any $m'>m\geq 1$, using the inductive hypothesis and repeating the end of the proof of Lemma \ref{count1}, it follows that, at depth $1$, $\mathcal{M}$ has property $\widetilde V$ at depth $k - 1$.
This shows that $\mathcal{M}$ has property $\widetilde V$ at depth $k$, and finishes the proof.
\hfill$\blacksquare$
\begin{lemma}
Let $\Gamma$ be a non-amenable countable group and keep the notations from Section \ref{sectionconstruction}.
For any $k \geq 1$ and $\pmb{\alpha} \in \{0,1\}^k$ such that $\alpha_k = 0$, and any family of tracial von Neumann algebras $Q_s, s\in S$, no intermediate von Neumann subalgebra ${M_{\pmb\alpha}(\Gamma)}^\mathcal{U} = \prod_\mathcal{U} M_{\pmb\alpha}(\Gamma) \subseteq \mathcal{M} \subseteq \prod_\mathcal{U} (M_{\pmb\alpha}(\Gamma) {\bar\otimes} Q_s)$ has property $\widetilde V$ at depth $k - 1$.
\end{lemma}
{\it Proof.}
We proceed by induction on $k$. If $k = 1$, then $M_{\pmb\alpha}(\Gamma) = L(T_0(\Gamma))$ and since $\Gamma$ is non-amenable, the conclusion follows from Lemma \ref{V2}.
Now assume the conclusion holds for some $k \geq 1$. Let $\pmb\alpha \in \{0,1\}^{k+1}$ be such that $\alpha_{k+1} = 0$. Then the sequence $\pmb\beta = (\alpha_{n+1})_{n = 1}^k$ has length $k$ and $\beta_{k} = \alpha_{k+1} = 0$. Moreover, $M_{\pmb\alpha}(\Gamma) =L(T_{\alpha_1} (T_{\pmb\beta}(\Gamma)))$.
Suppose by contradiction that there exist tracial von Neumann algebras $Q_s, s\in S$, and an algebra $\mathcal{M}$ satisfying
\[{M_{\pmb{\alpha}}(\Gamma)}^\mathcal{U} \subset \mathcal{M} \subset \prod_\mathcal{U} (M_{\pmb{\alpha}}(\Gamma) {\bar\otimes} Q_s),\]
that has property $\widetilde V$ at depth $k$.
Then, at depth $1$, $\mathcal{M}$ has property $\widetilde V$ at depth $k-1$. Let $(A_i \subset B_i)_{i \in I}$ be the residual pair for the inclusion $M_{\pmb\alpha}(\Gamma)^\mathcal{U} \subset \prod_\mathcal{U} (M_{\pmb\alpha}(\Gamma) {\bar\otimes} Q_s)$ described in Lemma \ref{MDex2}, with $T_{\pmb\beta}(\Gamma)$ and $\alpha_1$ instead of $\Gamma$ and $\alpha$. Since $\beta_{k} = 0$ and $P_{m,m'}\cong M_{\pmb\beta}(\Gamma)^{{\bar\otimes} (m'-m)}$, for any $m'>m\geq 1$, by using the inductive hypothesis and repeating the end of the proof of Lemma \ref{count2} we get a contradiction.
\hfill$\blacksquare$
\section{Further Work}
\label{sec:conclusion}
There are two avenues that may be followed. One extends the encoding
to the full language of mixed sessions, by taking into consideration
the axioms in the reduction relation that match $\Keyword{lin}$ choices against
$\Keyword{un}$ choices. The other pursues semantic
preservation~\cite{DBLP:journals/iandc/KouzapasPY19} by establishing a
full abstraction result, requiring the development of typed
equivalences for the two languages.
\section{A Minimal Encoding}
\label{sec:correspondences}
This section covers typing and operational correspondences; we follow
the criteria of Kouzapas et al.~\cite{DBLP:journals/iandc/KouzapasPY19} for
typed encodings, and aim at a minimal encoding.
Let $\mathcal{C}$ range over classical processes, and
$\mathcal{M}_0$
range over the fragment of mixed choice processes where $\Keyword{lin}$
processes only reduce against $\Keyword{lin}$ processes, and $\Keyword{un}$ processes
only reduce against $\Keyword{un}$ processes,
i.e., the reduction rules for $\mathcal{M}_0$ are those for
mixed processes, except for \reductionrulename{LinUn} and \reductionrulename{UnLin}
(Figure~\ref{fig:mixed-sessions}).
The function
$\rmap{\cdot}: {\mathcal{M}_0}\longrightarrow \mathcal{C}$ in
Figure~\ref{fig-translation-mixed2classical} denotes a translation from
mixed choice processes in $\mathcal{M}_0$ to classical processes in
$\mathcal{C}$. We overload the notation and denote by $\rmap{\cdot}$
the encoding of both types (Figure~\ref{fig:embedding}) and processes
(Figure~\ref{fig-translation-mixed2classical}).
We start by addressing typing criteria.
The \emph{type preservation} criterion requires that
$\rmap{\operatorname{op}(T_1,\dots,T_n)} =
\operatorname{op}(\rmap{T_1},\dots,\rmap{T_n})$. Our encoding, in
Figure~\ref{fig:embedding}, can be called weakly type preserving in
the sense that we preserve the direction of type operations, but not the
exact type operator. For example, a $\Keyword{un}\oplus$ type is
translated into a $\Keyword{un}!$ type (and a $\Keyword{un}\&$ type into a
$\Keyword{un}?$ type). Both $\oplus$ and $!$ can be seen as output types (and $\&$
and $?$ as input types), so direction is preserved.
We now move to \emph{type soundness}, but first we need to be able to
type the $\operatorname{NDChoice}$ operator.
\begin{lemma}
\label{lem:ndchoice}
The following is an admissible typing rule for typing $\operatorname{NDChoice}$.
\begin{equation*}
\frac{
\isProc{P_i}
\quad
i\in I
}{
\isProc{\operatorname{NDChoice}\{P_i\}_{i\in I}}
}
\end{equation*}
\end{lemma}
\begin{proof}
The typing derivation of the expansion of $\operatorname{NDChoice}$ leaves open the
derivations for $\isProc{P_i}$.
\end{proof}
The type soundness theorem for our translation is item~\ref{item:p}
below; the remaining items help in building the main result.
\begin{theorem}[Type Soundness]\
\begin{enumerate}
\item\label{item:t} If $\UN T$, then $\UN\rmap T$.
\item\label{item:g} If $\UN \Gamma$, then $\UN\rmap\Gamma$.
\item\label{item:s} If $\isSubt ST$, then $\isSubt[\rmap\Theta]{\rmap S}{\rmap T}$.
\item\label{item:v} If $\isValue vT$, then $\isValue[\rmap\Gamma] v {\rmap T}$.
\item\label{item:p} If $\isProc P$, then $\isProc[\rmap\Gamma]{\rmap{\isProc P}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\ref{item:t}: By case analysis on $T$ and the fact that types are
contractive.
%
\ref{item:g}: By induction on $\Gamma$ using case \ref{item:t}.
%
\ref{item:s}: By coinduction on the hypothesis.
%
\ref{item:v}: By rule induction on the hypothesis using items \ref{item:g}
and \ref{item:s}.
%
\ref{item:p}: By coinduction on the hypothesis, using items \ref{item:g}
and \ref{item:v}, and lemma~\ref{lem:ndchoice}.
\end{proof}
The \emph{syntax preservation} criterion consists of ensuring that
parallel composition is translated into parallel composition and that
name restriction is translated into name restriction, which is
certainly the case with our translation. It further requires the
translation to be name invariant. Our encoding maps each
channel end to itself and hence is trivially name invariant.
We conclude that our translation is syntax preserving.
We now address the criteria related to the operational semantics.
We denote by $\msred$ the reflexive and transitive closure of the
reduction relations, $\osred$, in both the source and target
languages.
Sometimes we use subscript ${\mathcal{M}_0}$ to denote the reduction
of mixed choice processes and the subscript $\mathcal{C}$ for
the reduction of classical processes, even though it should be clear
from context.
The behavioral
equivalence $\asymp$ for classical sessions we are interested in
extends structural congruence $\equiv$ with the following rule
\begin{equation*}
\NR{ab}\prod_{i\in I} \SELECT{a}{l_i}{\mathbf 0} \;\asymp\; \mathbf 0.
\end{equation*}
The new rule
allows collecting processes that are left by the encoding of
non-deterministic choice. We call it \emph{extended structural
congruence}.
The following lemma characterizes the reductions of $\operatorname{NDChoice}$
processes: they reduce to one of the processes that are to be chosen
and leave an inert term $G$.
\begin{lemma}
\label{lem:ndchoice_red}
$\operatorname{NDChoice}\{P_i\}_{i\in I} \osred P_k \mid G \asymp P_k$, for any $k\in I$.
\end{lemma}
\begin{proof}
$\operatorname{NDChoice}\{P_i\}_{i\in I} \osred P_k \mid G$, where
$G=\NR{st} \prod_{i\in I}^{i\neq k} \SELECT{s}{l_i}{\mathbf 0}$ and
$G \asymp \mathbf 0$.
\end{proof}
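The race behaviour captured by this lemma can be simulated concretely. The following Python sketch (the representation is ours, purely for illustration) models $\operatorname{NDChoice}$ as a draw among the competing selections: exactly one branch wins, and the losing selections are returned as the inert garbage $G$.

```python
import random

def nd_choice(branches, rng=random):
    """Model NDChoice{P_i}_{i in I}: the selections on channel end s
    race for the single branch on t; one wins, the rest stay inert."""
    labels = list(branches)                         # the index set I
    winner = rng.choice(labels)                     # chosen branch k
    garbage = [l for l in labels if l != winner]    # inert selects, G
    return branches[winner], garbage

chosen, leftover = nd_choice({"l1": "P1", "l2": "P2", "l3": "P3"})
assert chosen in ("P1", "P2", "P3")
assert len(leftover) == 2     # G consists of |I| - 1 inert selections
```

Any of the $|I|$ branches may win, and exactly $|I|-1$ inert selections remain, matching the shape $P_k \mid G$ of the lemma.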
We now turn our attention to \emph{barbs} and \emph{barb
preservation}. We say that a typed classical session process $P$
\emph{has a barb in} $x$, notation $\isProc{\barb{x}}$, if
$\isProc{P}$ and
\begin{itemize}
\item either $P\equiv \NR{x_ny_n}\ldots\NR{x_1y_1} (x\OUT{v} Q \mid R)$
where $x\not \in \{x_i, y_i\}_{i=1}^n$
\item or $P\equiv \NR{x_ny_n}\ldots\NR{x_1y_1} (\SELECT{x}{l} Q \mid R)$
where $x\not \in \{x_i, y_i\}_{i=1}^n$.
\end{itemize}
On the other hand, we say that a typed mixed session process $P$
\emph{has a barb in} $x$, notation $\isProc{\barb{x}}$, if
$\isProc{P}$ and
$P\equiv \NR{x_ny_n}\ldots\NR{x_1y_1} (\CHOOSE qx{i\in I}{M_i} \mid
R)$ where $x\not \in \{x_i, y_i\}_{i=1}^n$ and
$\isValue{x}{\CHOICE q\oplus {i\in I} {U_i}}$. Notice that only types
can discover barbs in processes since internal choice is
indistinguishable from external choice at the process level in
$\mathcal{M}_0$.
The processes with \emph{weak barbs} are those which reduce to a
barbed process: we say that a process $P$ \emph{has a weak barb in}
$x$, notation $\isProc{\wbarb{x}}$, if $P\transred P'$ and
$\isProc[\Gamma']{\barb[P']{x}}$.
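Operationally, a weak barb is a reachability property over the reduction graph. As an illustration (the step function and barb predicate below are placeholders, not the actual semantics of either calculus), it can be checked by a bounded breadth-first search:

```python
def has_weak_barb(p, step, barb, bound=100):
    """Bounded breadth-first search: does p reduce, in any number of
    steps (exploring at most `bound` states), to a state with a barb?"""
    seen, frontier = {p}, [p]
    while frontier and len(seen) <= bound:
        q = frontier.pop(0)
        if barb(q):
            return True
        for r in step(q):          # step returns all one-step reducts
            if r not in seen:
                seen.add(r)
                frontier.append(r)
    return False

# Toy reduction graph: 0 -> 1 -> 2, and only state 2 has a barb.
step = lambda n: [n + 1] if n < 2 else []
assert has_weak_barb(0, step, lambda n: n == 2)
assert not has_weak_barb(0, step, lambda n: n == 5)
```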
The following theorem fulfills the \emph{barb preservation criterion}: if a
mixed process has a barb, its translation has a weak barb on the same
channel.
\begin{theorem}[Barb Preservation] The translation $\rmap{\cdot}: \mixedProcesses\longrightarrow \classicalProcesses$
preserves barbs, that is, if $\isProc{\barb{x}}$, then
$\isProc[\rmap{\Gamma}]{} \wbarb[\rmap{\isProc[\Gamma]{P}}]{x}$.
\end{theorem}
\begin{proof}
By case analysis of the translations of processes with barbs. In the case
that $x$ is linear, rearranging the choice in $P$ into fragments, we
obtain that
$P\equiv \NR{x_ny_n}\ldots\NR{x_1y_1} (\LIN x\sum_{i\in I} (F_i^? +
F_i^!) \mid R)$ and so its translation is
\begin{align*}\rmap{\isProc P}\equiv \NR{x_ny_n}\ldots\NR{x_1y_1}
(\operatorname{NDChoice}\{ \enspace &
\SELECT{x}{l_i^!}\operatorname{NDChoice}{\SEND{x}{v_{ij}}{\rmap{\isProc[\Gamma_3,
x\colon T_i]{P_{ij}}}}}_{j\in J},
\\
& \SELECT{x}{l_i^?}\operatorname{NDChoice}{\RECEIVE{x}{y_{ik}}{\rmap{\isProc[\Gamma_2\cdot\Gamma_3,x\colon
T'_i, y_{ik}\colon S'_i]{P'_{ik}}}}}_{k\in K}
\}_{i\in I} \mid \\
&\rmap{\isProc[\Gamma'] R}).
\end{align*}
This process makes internal reduction steps in the resolution of the
outermost $\operatorname{NDChoice}$, non-deterministically choosing one of the
possible fragments, via Lemma~\ref{lem:ndchoice_red}. However,
independently of which branch is chosen, they are all of the form
$\SELECT{x}{\ell}C$, which has a barb in $x$. That is:
$\rmap{\isProc P} \transred \NR{x_ny_n}\ldots\NR{x_1y_1}
(\SELECT{x}{\ell}C \mid \rmap{\isProc[\Gamma'] R} \mid G)$, which has a
barb in $x$. The $G$ term is the inert remainder of the $\operatorname{NDChoice}$
reduction.
In the unrestricted case, we have
$P\equiv \NR{x_ny_n}\ldots\NR{x_1y_1} (\UN x\sum_{i\in I} (F_i^? +
F_i^!) \mid R)$. The translation is
\begin{align*}\rmap{\isProc P}\equiv \NR{x_ny_n}\ldots & \NR{x_1y_1}
(\NR{uv}
( \SENDn{u}{()} \mid \UN\RECEIVE{v}{\_} \operatorname{NDChoice}\{ & \\
& \NR{ab} \SEND{x}{a} \SELECT{b}{l_i^!}\operatorname{NDChoice}{\{\SEND{b}{v_{ij}}
(\SENDn{u}{()} \mid {\rmap{\isProc[\Gamma_1
]{P_{ij}}})}\}}_{j\in J}, & \\
& \NR{ab} \SEND{x}{a} \SELECT{b}{l_i^?}
\operatorname{NDChoice}{\{\RECEIVE{b}{y_{ik}}
(\SENDn{u}{()} \mid
{\rmap{\isProc[\Gamma_1, y_{ik}\colon S_i']{P'_{ik}}})}\}}_{k\in K}
\}_{i\in I} ) \mid \rmap{\isProc[\Gamma_2] R}) .
\end{align*}
The process starts by reducing via \reductionrulename{UnCom} on the $u,v$ channels to the process
\begin{align*}\rmap{\isProc P}\transred \NR{x_ny_n}\ldots & \NR{x_1y_1}
\NR{uv}
( \operatorname{NDChoice}\{ &
& \NR{ab} \SEND{x}{a} \SELECT{b}{l_i^!}\operatorname{NDChoice}{\{\SEND{b}{v_{ij}}
(\SENDn{u}{()} \mid {\rmap{\isProc[\Gamma_1
]{P_{ij}}})}\}}_{j\in J}, & \\
& \NR{ab} \SEND{x}{a} \SELECT{b}{l_i^?}
\operatorname{NDChoice}{\{\RECEIVE{b}{y_{ik}}
(\SENDn{u}{()} \mid
{\rmap{\isProc[\Gamma_1, y_{ik}\colon S_i']{P'_{ik}}})}\}}_{k\in K}
\}_{i\in I} \mid \rmap{\isProc[\Gamma_2] R} \mid U)
\end{align*}
where $U$ is the persistent part of the unrestricted process.
This process, in turn, reduces via the $\operatorname{NDChoice}$ (Lemma~\ref{lem:ndchoice_red}) to one
of the possible branches which are all of the form $\NR{ab}x\OUT{a}C$,
\begin{align*}\rmap{\isProc P}\transred \NR{x_ny_n}\ldots & \NR{x_1y_1}
\NR{uv} (
\NR{ab} (\SEND{x}{a} C ) \mid \rmap{\isProc[\Gamma'] R} \mid U \mid G).
\end{align*}
Since $P$ has a barb in $x$, $x\not \in \{x_i, y_i\}_{i=1}^n$ and so this process also
has a barb in $x$, concluding that $\rmap{\isProc P}$ has indeed a weak barb in $x$.
\end{proof}
Finally, we look at operational completeness.
Operational completeness relates the behavior of mixed sessions
to that of their classical-session images: any reduction step in mixed
sessions can be mimicked by a sequence of reduction steps in
classical sessions, modulo extended structural congruence. The extra
(``ghost'') reductions result from the new channels and communications
inserted by the translation, namely those due to $\operatorname{NDChoice}$ and to the
encoding of ``loops'' for $\Keyword{un}$ mixed choices.
\begin{theorem}[Reduction Completeness]
\label{thm:reduction_completeness}
The translation $\rmap{\cdot}: \mixedProcesses\longrightarrow \classicalProcesses$
is \emph{operationally complete}, that is, if
$P \osred_{\mathcal{M}_0} P'$, then
$\rmap {\isProc P}
\msred_\mathcal{C}\asymp_\mathcal{C}
\rmap{\isProc{P'}}$.
\end{theorem}
\begin{proof}
By rule induction on the derivation of
$P \osred_{\mathcal{M}_0} P'$. We detail two cases.
Case \reductionrulename{Par}. We can show that
%
if $Q_1 \msred_\mathcal{C} Q_1'$, then
$Q_1\mid Q_2 \msred_\mathcal{C} Q_1' \mid Q_2$,
by induction on the length of the reduction. Then we have
$\rmap {\isProc[\Gamma] {P_1 \mid P_2} } = \rmap {\isProc[\Gamma_1]
{ P_1 } } \mid \rmap {\isProc[\Gamma_2] { P_2 } }$ with
$\Gamma = \Gamma_1\cdot \Gamma_2$. By induction we have
$\rmap {\isProc[\Gamma_1]{P_1}} \msred_\mathcal{C}
Q\asymp_\mathcal{C} \rmap{\isProc[\Gamma_1]{P_1'}}$. Using
the above result and the fact that $\asymp_\mathcal{C}$ is a
congruence, we get
$\rmap {\isProc[\Gamma_1] { P_1 } } \mid \rmap {\isProc[\Gamma_2] {
P_2 } } \msred_\mathcal{C} Q \mid \rmap
{\isProc[\Gamma_2] { P_2 }} \asymp_\mathcal{C}
\rmap{\isProc[\Gamma_1]{P_1'}} \mid \rmap {\isProc[\Gamma_2] {P_2}}
= \rmap {\isProc{P'_1 \mid P_2}}$.
%
The cases for \reductionrulename{Res} and \reductionrulename{Struct} are similar.
\begin{sloppypar}
Case \reductionrulename{LinLin}. Let
$\Gamma, x\colon R,y\colon S = \Gamma' \cdot \Gamma'' \cdot
\Gamma'''$ and
$\isValue[\Gamma']{x}{\CHOICE{\Keyword{lin}}{\&}{}{\BRANCH{l}{!}{T_0}{R_0},\ldots}}$
and
$\isValue[\Gamma'']{y}{\CHOICE{\Keyword{lin}}{\oplus}{}{\BRANCH{l}{?}{U_0}{S_0},
\ldots}}$, with $\areEquiv{T_0}{U_0}$ and
$\areDual{R_0}{S_0}$.
%
Let $\Gamma' = \Gamma_1' \cdot \Gamma_2' \cdot \Gamma_3'$ and
$\Gamma'' =\Gamma_1'' \cdot \Gamma_2'' \cdot \Gamma_3''$.
We have:
\end{sloppypar}
\begin{align*}
&\rmap{\isProc[\Gamma]{\NR{xy}(\Keyword{lin} x(\BRANCHP {l}!v{P} + M) \mid \Keyword{lin} y(\BRANCHP {l}?z{Q} + N) \mid O)}}
\\
=& \NR{xy}
\TBRANCH{x}{l^?}{\operatorname{NDChoice}\{ \SEND{x}{v}\rmap{\isProc[\Gamma_3',x:R_0]P}, \ldots\}, \ldots}{} \mid\\
&\qquad \enspace\operatorname{NDChoice}\{\SELECT{y}{l^?}\operatorname{NDChoice}\{\RECEIVE{y}{z}\rmap{\isProc[(\Gamma_2''\cdot \Gamma_3'', y:S_0, z:U_0)]Q},
\ldots\}, \ldots\} \mid
\rmap{\isProc[\Gamma''']{O}})\\
\osred \asymp & \NR{xy}
\TBRANCH{x}{l^?}{\operatorname{NDChoice}\{ \SEND{x}{v}\rmap{\isProc[\Gamma_3',x:R_0]P}, \ldots\}, \ldots}{} \mid\\
&\qquad \enspace\SELECT{y}{l^?}\operatorname{NDChoice}\{\RECEIVE{y}{z}\rmap{\isProc[(\Gamma_2''\cdot \Gamma_3'', y:S_0, z:U_0)]Q},
\ldots\} \mid
\rmap{\isProc[\Gamma''']{O}})\\
\osred & \NR{xy}
{\operatorname{NDChoice}\{ \SEND{x}{v}\rmap{\isProc[\Gamma_3',x:R_0]P}, \ldots\}} \mid\\
&\qquad \enspace\operatorname{NDChoice}\{\RECEIVE{y}{z}\rmap{\isProc[(\Gamma_2''\cdot \Gamma_3'', y:S_0, z:U_0)]Q},
\ldots\} \mid
\rmap{\isProc[\Gamma''']{O}})
\\
\osred \osred\asymp & \NR{xy}(
{\SEND{x}{v}\rmap{\isProc[\Gamma_3',x:R_0]P}} \mid
\RECEIVE{y}{z}\rmap{\isProc[(\Gamma_2''\cdot \Gamma_3'', y:S_0, z:U_0)]Q}
\mid
\rmap{\isProc[\Gamma''']{O}})
\end{align*}
\begin{align*}
\osred & \NR{xy}(
{\rmap{\isProc[\Gamma_3',x:R_0]P}} \mid
\rmap{\isProc[\Gamma_2''\cdot \Gamma_3'', y:S_0, z:U_0]Q }\subs vz
\mid
\rmap{\isProc[\Gamma''']{O}})
\\
= & \NR{xy}(
{\rmap{\isProc[\Gamma_3',x:R_0]P}} \mid
\rmap{\isProc[\Gamma_2'\cdot \Gamma_2''\cdot \Gamma_3'', y:S_0]Q\subs vz }
\mid
\rmap{\isProc[\Gamma''']{O}})
\\
= & \rmap{ \isProc[\Gamma_2'\cdot \Gamma_3'\cdot\Gamma_2''\cdot \Gamma_3''\cdot \Gamma''']
{\NR{xy} (P \mid Q\subs vz \mid O)} }
\\
= & \rmap{ \isProc {\NR{xy} (P \mid Q\subs vz \mid O)} }
\end{align*}
%
Notice that $\Gamma_1' = \Delta_1,x\colon R$
where $\Delta_1$ is $\Keyword{un}$, hence $\Delta_1$ is in $\Gamma_2'$ and in
$\Gamma'_3$. The same reasoning applies to $\Gamma_1'' $. Since
context $\Gamma_2'$ is used to type $v$, the substitution
lemma~\cite{DBLP:journals/iandc/Vasconcelos12} reintroduces it in
the context for $Q\subs vz$.
The case for \reductionrulename{UnUn} is similar, albeit more verbose. The cases for
\reductionrulename{IfT} and \reductionrulename{IfF} are direct.
\end{proof}
We can show that the translation does \emph{not} enjoy reduction
soundness. Consider the classical process $Q$ to be the encoding of
process $P$ of the form $\Keyword{un} y(\BRANCHP m ? z \mathbf 0)$, described in
the right part of Figure~\ref{fig:example3_translation}. Soundness
requires that if $Q \osred_\mathcal{C} Q'$, then
$P \msred_{\mathcal{M}_0} P'$ and
$Q \msred_\mathcal{C} \asymp_\mathcal{C}
\rmap{\isProc{P'}}$. Clearly, $Q$ has an initial reduction step (on
channel $u_2v_2$), which cannot be mimicked by $P$. But this
reduction is a transition internal to process~$Q$, a $\tau$
transition. Equipped with a suitable notion of labelled transition
systems on both languages that include $\tau$ transitions, and by
using a weak bisimulation that ignores such transitions, we expect
soundness to hold.
\section{The Syntax, Operational Semantics, and Type System of Mixed
and Classical Sessions}
\paragraph{Mixed Sessions}
\input{fig-mixed-sessions}
\input{fig-mixed-sessions2}
\input{fig-mixed-sessions3}
The syntax of process and the operational semantics are in
Figure~\ref{fig:mixed-sessions}.
The syntax of types, and the notions of subtyping and type duality are
in Figure~\ref{fig:mixed-sessions2}.
The \Keyword{un} and \Keyword{lin} predicates, the context split and update operations,
and the typing rules are in Figure~\ref{fig:mixed-sessions3}.
\paragraph{Classical Sessions}
\input{fig-traditional-sessions}
The syntax, operational semantics, and type system are in
Figure~\ref{fig:classical-sessions}.
\section{Mixed Sessions as Classical Sessions}
\label{sec:embedding}
This section shows that a subset of the language of mixed sessions can
be embedded in that of classical sessions.
We restrict our attention to choices that reduce against choices with
the same qualifier, that is, we do not consider the case where an
ephemeral ($\Keyword{lin}$) process reduces against a persistent ($\Keyword{un}$)
one. For this reason, we assume that a process and its type always
have the same $\Keyword{lin}/\Keyword{un}$ qualifier.
One of the novelties in mixed sessions is the possible presence of
duplicated label-polarity pairs in choices. This introduces a form of
non-determinism that can be easily captured in classical sessions.
The $\operatorname{NDChoice}$ classical session process creates a race
condition on a new channel with endpoints $s,t$ featuring multiple
selections on the $s$ endpoint, for only one branch on the $t$
endpoint. This guarantees that exactly one of the branches is
non-deterministically selected. The remaining selections must eventually
be garbage collected.
We assume that $\prod_{1\le i\le n} Q_i$ denotes the process
$Q_1\mid\dots\mid Q_n$ for $n>0$, and that $\Pi$ binds tighter than
the parallel composition operator.
\begin{equation*}
\operatorname{NDChoice}\{P_i\}_{i\in I} = \NR{st} \left(\prod _{i\in I}
\SELECT{s}{l_i}{\mathbf 0} \mid
\TBRANCH{t}{l_i}{P_i}{i\in I}\right)
\end{equation*}
The type $S$ of channel end $s$ is of the form
$\CHOICE \Keyword{un} \oplus {i\in I} {l_i\colon S}$, an equation that can be
solved by type $\REC a {\CHOICE \Keyword{un} \oplus {i\in I} {l_i\colon a}}$,
and which SePi abbreviates to $*\oplus \{l_i\}_{i\in I}$. The
qualifier must be $\Keyword{un}$ because $s$ occurs in multiple threads in
$\operatorname{NDChoice}$; recursion arises because of the typing rules for
processes reading or writing in unrestricted channels.
Equipped with $\operatorname{NDChoice}$ we describe the translation of mixed
sessions to classical sessions via variants of the examples in
Section~\ref{sec:intro}. All examples fully type check and run in
SePi~\cite{DBLP:conf/sefm/FrancoV13}.
To handle duplicated label-polarity pairs in choices, we organize
choice processes by label-polarity fragments. Each such fragment
represents a part of a choice operation where all possible outcomes
have the same label and polarity. When a reduction occurs, one of the
branches is taken, non-deterministically, using the $\operatorname{NDChoice}$
operator. After a non-deterministic choice of the branch, and
depending on the polarity of the fragment, the process continues by
either writing on or reading from the original channel.
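The grouping into label-polarity fragments can be sketched in Python (the tuple representation of branches is our own, for illustration only):

```python
from collections import defaultdict

def fragments(branches):
    """Group the branches of a mixed choice by (label, polarity).
    Each fragment collects the branches sharing both label and
    polarity; duplicated label-polarity pairs land in one fragment."""
    frags = defaultdict(list)
    for label, polarity, payload, cont in branches:
        frags[(label, polarity)].append((payload, cont))
    return dict(frags)

# The choice  lin x (m!3.P + m!5.Q + n?z.R)  has two fragments:
choice = [("m", "!", 3, "P"), ("m", "!", 5, "Q"), ("n", "?", "z", "R")]
f = fragments(choice)
assert f[("m", "!")] == [(3, "P"), (5, "Q")]   # duplicated pair, one fragment
assert f[("n", "?")] == [("z", "R")]
```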
The translation of choice processes is guided by their types. For each
choice we need to know its qualifier ($\Keyword{lin}, \Keyword{un}$) and its view
($\oplus, \&$), and this information is present in types alone.
\input{fig-translation-example1}
Figure~\ref{fig:example1_translation} shows the translation of the
mixed process
\lstinline*(new xy)(lin x (m!3.0 + n?w.0) | lin y (m?z.0))*, where
\lstinline|x| is of type \lstinline|lin&{m!int.end, n?bool.end}|.
The corresponding type in classical sessions is
\lstinline|lin&{m:!int.end, n:?bool.end}|, which should not come as a
surprise.
Because channel end \lstinline|x| is of an external choice type
(\lstinline|&|), the choice on \lstinline|x| is encoded as a
\lstinline|case| process. The other end of the channel, \lstinline|y|,
is typed as an internal choice (\lstinline|++|) and is hence
translated as a \lstinline|select| process.
Occurrences of the $\operatorname{NDChoice}$ process appear in a degenerate form,
always applied to a single branch. We have four of them: three for
each of the branches in \lstinline|case| processes
(\lstinline|s_1t_1|, \lstinline|s_2t_2|, and \lstinline|s_4t_4|) and
one for the external choice in the mixed session process
(\lstinline|s_3t_3|).
In general, an external choice is translated into a classical
branching (\lstinline|case|) over the unique labels of the fragments
of the process, but where the polarity of each label is inverted. The
internal choice, in turn, is translated as a (possibly
non-deterministic) collection of classical \lstinline|select|
processes, but keeps the label polarity.
This preserves the behavior of the original process: in mixed choices,
a reduction occurs when a branch $\BRANCHP l!vP$ matches another
branch $\BRANCHP l?zQ$ with the same label but with dual polarity
($l^!$ against $l^?$), while in a classical session the labels alone
must match ($l$ against $l$). Needless to say, we could have followed
the strategy of dualizing internal choices rather than external.
If we label reduction steps with the names of the channel ends on
which they occur, we can see that, in this case a $\labred{xy}$
reduction step in mixed sessions is mimicked by a long series of
classical reductions, namely
$ \labred{s_3t_3} \labred{xy} \labred{s_1t_1} \labred{s_4t_4}$
$ \labred{xy} $ or
$ \labred{s_3t_3} \labred{xy} \labred{s_4t_4} \labred{s_1t_1}
\labred{xy} $.
Notice the three reductions to resolve non-determinism
(on $s_it_i$) and the two reductions on $xy$ to encode branching
followed by message passing, an atomic operation in mixed sessions.
\input{fig-translation-example2}
Figure~\ref{fig:example2_translation} shows an example of a mixed
choice process with a duplicated label-polarity pair,
\lstinline|m!|. If we assign to \lstinline|x| type
\lstinline|lin&{m!int}|, then we know that the choice on \lstinline|x|
is encoded as \lstinline|case| and that on \lstinline|y| as
\lstinline|select|.
In this case, the $\operatorname{NDChoice}$ operator is applied in a non-degenerate
manner, to decide whether to send the value 3 or 5 on channel end
\lstinline|x|, by means of channel \lstinline|s_1t_1|.
Again we can see that the one step reduction on channel \lstinline|xy|
in the original mixed session process gives rise to a sequence of five
reduction steps in classical sessions, namely
$
\labred{s_2t_2}
\labred{xy}
\labred{s_1t_1}
\labred{s_3t_3}
\labred{xy}
$
or
$
\labred{s_2t_2}
\labred{xy}
\labred{s_3t_3}
\labred{s_1t_1}
\labred{xy}
$.
In this case, however, the computation is non-deterministic: the last
reduction step may carry integer \lstinline|3| or \lstinline|5|.
\input{fig-translation-example3}
Figure~\ref{fig:example3_translation} shows the encoding of mixed
choices on unrestricted channels. The mixed choice process is that of
Figure~\ref{fig:example2_translation}, except that the two ephemeral
choices (\lstinline|lin|) have been replaced by their persistent
counterparts (\lstinline|un|).
The novelty, in this case, is the loops that have been created around the
\lstinline|case| and the \lstinline|select| process. Loops in
classical sessions can be implemented with a replicated input: a
process of the form \lstinline|v*?x.P| is a persistent process that,
when invoked with a value \lstinline|w|, becomes the parallel
composition \lstinline@P[w/x] | v*?x.P@. The general form of the loops
we are interested in are \lstinline@(new uv : *!())(u!() | v*?x.P)@,
where \emph{continue calls} in process \lstinline|P| are of the form
\lstinline|u!()|. The contents of the messages that control the loop
are not of interest and so we use the unit type \lstinline|()|, so
that \lstinline|u| is of type \lstinline|*!()|.
We can easily see the calls \lstinline|u_1!()| and \lstinline|u_2!()|
in the last lines in Figure~\ref{fig:example3_translation}, reinstating
the unrestricted choice process.
In this case, one step reduction in mixed sessions corresponds to
a long sequence of transitions in their encodings.
\input{fig-type-translation}
We now present translations for types and processes in general. The
translation of mixed choice session types into classical session types
is in Figure~\ref{fig:embedding}.
In general, the (atomic) branch-communicate nature of mixed session
types, $\{l_i^\star{S_i}\}$, is broken into its two
parts, $\{l_i\colon\star S_i\}$: branch first, communicate after.
In mixed sessions, choice types are labelled by label-polarity
pairs ($l^!$ or $l^?$); in classical sessions, choices are labelled by
labels alone. Because we want the encoding of a label $l^!$ to match
the encoding of $l^?$, we must dualize one of them. We arbitrarily
chose to dualize the labels in the $\&$ type.
The typing rules for classical unrestricted processes of type
$S = \CHOICE \UN\sharp {i\in I} {\BRANCH {l_i}\star {S_i}{T_i}}$
require $T_i$ to be equivalent ($\approx$) to $S$ itself. We take
advantage of this restriction when translating $\Keyword{un}$ types.
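The two ingredients just described, splitting branch-communicate into branch-then-communicate and dualizing the labels of $\&$ types, can be sketched as a recursive function over a toy type representation (the encoding of types as nested tuples, and the use of `+' for $\oplus$, are ours, not part of either calculus):

```python
def translate(t):
    """Translate a mixed session type into a classical one (sketch).
    Mixed choice: (qual, view, {(label, polarity): (payload, cont)}).
    Classical:    (qual, view, {label+polarity: (polarity, payload, cont)}),
    i.e. branch first, communicate after.  Labels of '&' types get
    their polarity flipped, so that an l! fragment on the internal side
    meets the l! branch of the external side under the same label."""
    if t == "end":
        return "end"
    qual, view, branches = t
    flip = {"!": "?", "?": "!"}
    out = {}
    for (label, pol), (payload, cont) in branches.items():
        pol2 = flip[pol] if view == "&" else pol   # dualize & labels only
        out[label + pol2] = (pol, payload, translate(cont))
    return (qual, view, out)

t = ("lin", "&", {("m", "!"): ("int", "end"), ("n", "?"): ("bool", "end")})
assert translate(t) == ("lin", "&",
    {"m?": ("!", "int", "end"), "n!": ("?", "bool", "end")})
```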
\input{fig-translation-mixed2classical_new}
The translation of mixed choice processes is in
Figure~\ref{fig-translation-mixed2classical}. Since the translation is
guided by the type of the process to be translated, we also provide
the typing context to the translation function, hence the notation
$\rmap{\isProc P}$.
Because label-polarity pairs may be duplicated in choice processes, we
organize such processes in label-polarity fragments,
so that a process of the form
$\CHOOSE qx{i\in I}{\BRANCHP{l_i}{\star_i}{v_i}{P_i}}$ (where
$q::=\Keyword{lin}\mid\Keyword{un}$ and $\star::=!\mid?$) can be written as
$\CHOOSE qx{i\in I}{(F_i^? + F_i^!)}$. Each label-polarity fragment
($l^!_i$ or $l^?_i$) groups together branches with the same label and
the same polarity. Such fragments may be empty for external choices,
for not all label-polarity pairs in an external choice type need to be
covered in the corresponding process (internal choice processes do not
need to cover all choices offered by the external counterpart).
The essence of the translation is discussed in the three examples
above.
We distinguish four cases for choices, according to qualifiers ($\Keyword{lin}$
or $\Keyword{un}$) and views ($\oplus$ or $\&$) in types. In all of them an
$\operatorname{NDChoice}$ process takes care of duplicated label-polarity pairs in
branches.
Internal choice processes feature an extra occurrence of $\operatorname{NDChoice}$
to non-deterministically select between output and input \emph{on the
same label}.
Notice that an external choice must still accept both output and input
on the same label, so it is not equipped with an extra $\operatorname{NDChoice}$.
Finally, unrestricted mixed choices require the encoding of a loop,
accomplished by creating a new channel for the effect ($uv$),
installing a replicated input $\UN\RECEIVE{v}{\_}P$ at one end of the
channel, and invoking the input once to ``start'' the loop and again
at the end of the interaction on channel end $x$. The calls are all
accomplished with processes of the form $\SENDn{u}{()}$. The contents
of the messages are of no interest and so we use the unit value $()$.
Following the encoding for types, the encoding for external choice
processes exchanges the polarities of choice labels: a label $l_i^!$
in mixed sessions is translated into $l_i^?$, and vice-versa, in the
cases for $\Keyword{lin}\&$ and $\Keyword{un}\&$ choices. This allows reduction to
happen in classical sessions, where we require an exact match between
the label of the \lstinline|select| process and that of the
\lstinline|case| process.
\section{Classical Sessions, Mixed Sessions}
\label{sec:intro}
Mixed sessions were introduced by Vasconcelos et al.~\cite{ESOP2020}
as an extension of classical session
types~\cite{DBLP:journals/acta/GayH05,DBLP:conf/esop/HondaVK98,DBLP:journals/iandc/Vasconcelos12}.
They form an interesting point in the design space of session-typed
systems: an extremely concise process calculus (four constructors only)
that allows the natural expression of algorithms that are quite
cumbersome to write in classical sessions.
The original paper on mixed sessions~\cite{ESOP2020} shows that there
is an encoding of classical sessions into mixed sessions. This
abstract shows that the converse is also true for a fragment of mixed
sessions.
A translation of mixed sessions into classical sessions would allow us to
leverage the tools available for the latter: one could program in
mixed sessions, translate the source code into classical sessions,
check the validity of the source code against the type system for the
target language, and run the original program under an interpreter for
classical sessions (SePi~\cite{DBLP:conf/sefm/FrancoV13}, for
example). A mixed-to-classical encoding would further allow a better
understanding of the relative expressiveness of the two languages.
Processes in classical binary
sessions~\cite{DBLP:journals/acta/GayH05,DBLP:conf/concur/Honda93,DBLP:conf/esop/HondaVK98,DBLP:conf/parle/TakeuchiHK94}
(here we follow the formulation
in~\cite{DBLP:journals/iandc/Vasconcelos12}) communicate by exchanging
messages on bidirectional channels. We introduce classical sessions by
means of a few examples. Each channel is denoted by its two ends and
introduced in a process \lstinline|P| as
\lstinline|(new xy)P|. Writing a value \lstinline|v| on channel end
\lstinline|x| and continuing as \lstinline|P| is written as
\lstinline|x!v.P|. Reading a value from a channel end \lstinline|y|,
binding it to variable \lstinline|z| and continuing as \lstinline|Q|
is written as \lstinline|y?z.Q|. When the two processes get together
under a new binder that ties together the two ends of the channel,
such as in
\begin{lstlisting}
(new xy) x!v.P | y?z.Q
\end{lstlisting}
value \lstinline|v| is communicated from the \lstinline|x| channel end
to the \lstinline|y| end. The result is process
\lstinline@(new xy) P | Q[v/z]@, where notation \lstinline|Q[v/z]|
denotes the result of substituting \lstinline|v| for \lstinline|z| in
\lstinline|Q|.
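As an illustration of this reduction rule, here is a minimal Python sketch (the process representation is entirely ours): a send meets a receive on the two ends of a channel, and substitution of the value for the bound variable is modelled by function application.

```python
def communicate(send, recv):
    """One reduction step on a channel's two ends:
       (x!v.P) | (y?z.Q)  -->  P | Q[v/z].
    P is any value; Q is represented as a Python function of the
    bound variable z, so Q[v/z] is simply Q(v)."""
    tag_s, v, P = send               # ("send", payload, continuation)
    tag_r, Q = recv                  # ("recv", continuation as function of z)
    assert tag_s == "send" and tag_r == "recv"
    return P, Q(v)

# (new xy) x!3.P | y?z.(print z)  steps to  P | print 3
P_cont, Q_cont = communicate(("send", 3, "P"), ("recv", lambda z: f"print {z}"))
assert (P_cont, Q_cont) == ("P", "print 3")
```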
Processes may also communicate by offering and selecting options in
choices. The different choices are denoted by labels, \lstinline|ell|
and \lstinline|m| for example. To select choice \lstinline|ell| on
channel end \lstinline|x| and continue as \lstinline|P| we write
\lstinline|x select ell.P|. To offer a collection of options at
channel end \lstinline|y| and continue with appropriate continuations
\lstinline|Q| and \lstinline|R|, we write
\lstinline|case y of {ell -> Q, m -> R}|. When \lstinline|select| and
\lstinline|case| processes are put together under a \lstinline|new|
that binds together the two ends of a channel, such as in
\begin{lstlisting}
(new xy) x select ell.P | case y of {ell -> Q, m -> R}
\end{lstlisting}
branch \lstinline|Q| is selected in the \lstinline|case| process. The
result is the process \lstinline@(new xy) P | Q@. Selecting a choice
is called an internal choice, offering a collection of choices is
called an external choice.
We thus see that classical sessions comprise four atomic interaction
primitives. Furthermore, choices are directional in the sense that one
side offers a collection of possibilities, the other selects one of them.
To account for unbounded behavior, classical sessions rely on
replication: an input process that yields a new copy of itself after
reduction, written \lstinline|y*?z.Q|. A process of the form
\begin{lstlisting}
(new xy) x!v.P | y*?z.Q
\end{lstlisting}
reduces to \lstinline@(new xy) P | Q[v/z] | y*?z.Q@. If we use the
\lstinline|lin| prefix to denote an ephemeral process and the
\lstinline|un| prefix to denote a persistent process, an alternative
syntax for the above process is \lstinline@(new xy) lin x!v.P | un y?z.Q@.
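The persistent variant can be sketched in the same illustrative style (again, the representation is ours): reducing against a replicated input produces the instantiated body together with a fresh copy of the input itself.

```python
def communicate_replicated(send, repl_recv):
    """(x!v.P) | (y*?z.Q)  -->  P | Q[v/z] | y*?z.Q :
    the replicated (un) input survives the reduction, so the result
    contains the instantiated body *and* the original receiver."""
    _, v, P = send                        # ("send", payload, continuation)
    _, Q = repl_recv                      # ("un-recv", body as function of z)
    return [P, Q(v), repl_recv]           # receiver reappears untouched

server = ("un-recv", lambda z: ("handled", z))
result = communicate_replicated(("send", 42, "client-rest"), server)
assert result == ["client-rest", ("handled", 42), server]
```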
Mixed sessions blur the distinction between internal and external
choice. Under a unified language construct---mixed choice---processes
may non-deterministically select one choice from a multiset of output
choices, or branch on one choice, again, from a multiset of possible
input choices.
Together with an output choice, a value is (atomically) sent; together
with an input choice, a value is (again, atomically) received,
following various proposals in the literature~\cite{DBLP:conf/concur/DemangeonH11,DBLP:journals/iandc/Sangiorgi98,DBLP:conf/ecoop/Vasconcelos94}.
The net effect is that the four common operations on session
types---output, input, selection, and branching---are effectively
collapsed into one: mixed choice.
Mixed choices can be labelled as ephemeral (linear, consumed by
reduction) or persistent (unrestricted, surviving reduction),
following conventional versus replicated inputs in some versions of
the pi-calculus~\cite{DBLP:journals/mscs/Milner92}.
Hence, in order to obtain a core calculus, all we have to add is name
restriction, parallel composition, and inaction (the terminated
process), all standard in the pi-calculus.
We introduce mixed sessions by means of a few examples.
Processes communicate by offering/selecting choices with the same label and opposite
polarities.
\begin{lstlisting}
(new xy) lin x (m!3.P + n?z.Q) | lin y (m?w.R + n!5.S + p!7.T)
\end{lstlisting}
The above processes communicate over the channel with ends named
\lstinline|x| and \lstinline|y| and
reduce in one step along label \lstinline|m| to
\lstinline@(new xy) P | R[3/w]@ or along label \lstinline|n| to
\lstinline@(new xy) Q[5/z] | S@.
Non-determinism in mixed sessions can be further achieved by allowing
duplicated labels in choices. An example in which a 3 or a 5 is
non-deterministically sent over the channel is
\begin{lstlisting}
(new xy) lin x (m!3.P + m!5.Q) | lin y (m?z.R)
\end{lstlisting}
This process reduces in one step to either \lstinline*(new xy) P | R[3/z]* or
\lstinline*(new xy) Q | R[5/z]*.
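To make the reduction possibilities of mixed choices concrete, the following Python sketch (again ours, not the formal semantics) enumerates the ways two mixed choices at opposite channel ends can synchronize: a pair of branches matches when the labels coincide and the polarities are opposite, and the value travels from the output side to the input side.

```python
from itertools import product

# A branch is (label, polarity, payload, continuation); polarity is '!' or '?'.
def synchronizations(left, right):
    """Enumerate all one-step reductions between two mixed choices."""
    redexes = []
    for (l1, p1, v1, c1), (l2, p2, v2, c2) in product(left, right):
        if l1 == l2 and {p1, p2} == {'!', '?'}:
            sent = v1 if p1 == '!' else v2  # value travels from ! to ?
            redexes.append((l1, sent, c1, c2))
    return redexes

# lin x (m!3.P + n?z.Q) | lin y (m?w.R + n!5.S + p!7.T)
steps = synchronizations(
    [('m', '!', 3, 'P'), ('n', '?', None, 'Q')],
    [('m', '?', None, 'R'), ('n', '!', 5, 'S'), ('p', '!', 7, 'T')])
```

For the first example above, `steps` is `[('m', 3, 'P', 'R'), ('n', 5, 'Q', 'S')]`, matching the two possible reductions; with the duplicated label the same function returns two redexes carrying 3 and 5.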
Unrestricted behavior in choices is achieved by the \lstinline|un|
qualifier in the choice syntax.
\begin{lstlisting}
(new xy) un x (m!3.P + m!5.P) | un y (m?z.Q)
\end{lstlisting}
This process reduces to itself in parallel with the continuation of
whichever choice was taken:
\par
\begin{minipage}{.4\textwidth}
\centering
\begin{lstlisting}
(new xy)
un x (m!3.P + m!5.P) |
un y (m?z.Q) |
P | Q[3/z]
\end{lstlisting}
\end{minipage}%
\begin{minipage}{0.1\textwidth}
or
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\begin{lstlisting}
(new xy)
un x (m!3.P + m!5.P) |
un y (m?z.Q) |
P | Q[5/z]
\end{lstlisting}
\end{minipage}
The complete set of definitions for the syntax, operational semantics,
and type system of mixed sessions is given in the appendix,
Figures~\ref{fig:mixed-sessions} to~\ref{fig:mixed-sessions3}. For
technical details and main results, we direct the reader to
reference~\cite{ESOP2020}.
The complete set of definitions for the syntax, operational semantics,
and type system of classical sessions is given in the appendix,
Figure~\ref{fig:classical-sessions}. For further details, we refer the
reader to
references~\cite{DBLP:journals/iandc/Vasconcelos12,ESOP2020}.
\section{Introduction}
Maximum likelihood estimation is a fundamental problem in statistics. The maximum likelihood degree is the number of critical points of the likelihood function on a projective variety, i.e., the number of candidate solutions to the maximum likelihood estimation problem. When the variety is smooth, Huh \cite{Hu} showed that the maximum likelihood degree is indeed a topological invariant. If the variety is a general complete intersection, the maximum likelihood degree is computed in \cite{CHKS} (see also \cite{HS}).
In a recent preprint \cite{AAGL}, Agostini, Alberelli, Grande and Lella studied the maximum likelihood degree of Fermat hypersurfaces. They obtained formulas for the maximum likelihood degree of a few special families of Fermat hypersurfaces. However, their approach is through a case-by-case study.
In this note, we propose to compute the maximum likelihood degree of Fermat hypersurfaces in a more systematic way via a topological method. In general, the formula given in \cite{CHKS} does not work for all the Fermat hypersurfaces, because the intersection of hypersurfaces
$$\{x_0^d+x_1^d+\cdots+x_n^d=0\}\cap \{x_0+x_1+\cdots+x_n=0\}\subset \pp^n$$
may not be transverse. We will compute the error terms introduced by the non-transverse intersections. The main ingredient is Milnor's result on the topology of isolated hypersurface singularities. This topological approach is closely related to the approaches of \cite{BW} and \cite{RW}. In fact, for an isolated hypersurface singularity, the Euler obstruction is, up to a sign, equal to the Milnor number plus one. So we essentially apply the ideas of \cite{BW} and \cite{RW} to these particular examples.
First, let us recall the definition of the maximum likelihood degree. Let $\pp^n$ be the $n$-dimensional complex projective space with homogeneous coordinates $(x_0, x_1, \ldots, x_n)$. Denote the coordinate hyperplane $\{x_i=0\}\subset \pp^n$ by $H_i$, and the hyperplane $\{x_0+x_1+\cdots+x_n=0\}$ by $H_+$. Let the index set $\Lambda=\{0, 1, \ldots, n, +\}$, and let $\sH=\bigcup_{\lambda\in \Lambda}H_\lambda$. Let $X\subset\pp^n$ be a complex projective variety. Denote the smooth locus of $X$ by $X_{\textrm{reg}}$. The \textbf{maximum likelihood degree} of $X$ is defined to be the number of critical points of the likelihood function
$$l_u=\frac{x_0^{u_0}x_1^{u_1}\cdots x_n^{u_n}}{(x_0+x_1+\cdots+x_n)^{u_0+u_1+\cdots+u_n}}$$
on $X_{\textrm{reg}}\setminus \sH$ for generic $(u_i)_{0\leq i\leq n}\in \zz^{n+1}$.
\begin{theorem}\label{thm}
Denote the Fermat hypersurface $\{x_0^d+x_1^d+\cdots+x_n^d=0\}\subset \pp^n$ by $F_{n, d}$, and denote its maximum likelihood degree by $\MLdeg(F_{n, d})$. Then,
\begin{equation}\label{main}
\MLdeg(F_{n, d})=d+d^2+\cdots+d^n-\sum_{0\leq j\leq n-1}{n+1\choose j}\beta_{n-j, d-1}
\end{equation}
where $\beta_{\mu, \nu}$ is the number of complex solutions of the system of equations
\begin{gather*}
z_1^{\nu}=z_2^{\nu}=\ldots=z_{\mu}^{\nu}=1\\
z_1+\ldots+z_{\mu}+1=0.
\end{gather*}
\end{theorem}
When $\mu$ or $\nu$ is small, $\beta_{\mu,\nu}$ can be easily calculated. For example,
\begin{equation}
\beta_{\mu, 1}=0.
\end{equation}
\begin{equation}
\beta_{1, \nu}=
\begin{cases}
0 & \text{if $\nu$ is odd},\\
1 & \text{if $\nu$ is even}.
\end{cases}
\end{equation}
\begin{equation}
\beta_{2, \nu}=
\begin{cases}
2 & \text{if $\nu$ is divisible by 3},\\
0 & \text{otherwise}.
\end{cases}
\end{equation}
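These small values, and $\beta_{\mu, \nu}$ for any small parameters, can be verified by a brute-force count over $\nu$-th roots of unity. The Python sketch below (ours, using floating-point roots with a small tolerance) is not part of the mathematical development, but provides a quick numerical check.

```python
import cmath
from itertools import product

def beta(mu, nu, tol=1e-9):
    """Count tuples (z_1, ..., z_mu) of nu-th roots of unity
    with z_1 + ... + z_mu + 1 = 0, checked numerically."""
    roots = [cmath.exp(2j * cmath.pi * k / nu) for k in range(nu)]
    return sum(1 for zs in product(roots, repeat=mu)
               if abs(sum(zs) + 1) < tol)
```

For instance, `beta(1, 2) == 1`, `beta(2, 3) == 2` and `beta(2, 4) == 0`, in agreement with the displayed formulas.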
With these calculations, we recover all the closed formulas in \cite{AAGL}.
\begin{cor}
\begin{equation}
\MLdeg(F_{n, 2})=2^{n+1}-2
\end{equation}
\begin{equation}
\MLdeg(F_{2, d})=
\begin{cases}
d^2+d & \text{if $d\equiv 0, 2\mod 6$},\\
d^2+d-3 & \text{if $d\equiv 3, 5\mod 6$},\\
d^2+d-2 & \text{if $d\equiv 4\mod 6$},\\
d^2+d-5 & \text{if $d\equiv 1\mod 6$}.
\end{cases}
\end{equation}
\end{cor}
When $\nu$ is a power of a prime number, we have formulas to compute $\beta_{\mu, \nu}$. Equivalently, when $d-1$ is a power of a prime number, we have closed formulas for $\MLdeg(F_{n, d})$. In fact, by a straightforward computation one can deduce the following corollary from Theorem \ref{thm} and Proposition \ref{com}.
\begin{cor}
Suppose $d-1=p^r$, where $p$ is a prime number and $r$ is a positive integer. Then
$$
\MLdeg(F_{n, d})=d+d^2+\cdots+d^n-\frac{1}{d-1}\sum \frac{(n+1)!}{\left(n+1-p(s_1+\cdots+s_k)\right)!\cdot \left((s_1)!\cdots(s_k)!\right)^p}
$$
where $k=\frac{d-1}{p}$ and the sum is over all nonnegative integers $s_1, \ldots, s_k$ with $1\leq s_1+\cdots+s_k\leq \frac{n+1}{p}$.
\end{cor}
To find a general formula for $\beta_{\mu, \nu}$ would be a very hard question in number theory and combinatorics. In fact, determining when $\beta_{\mu, \nu}\neq 0$ had long been an open question; it was solved by Lam and Leung \cite{LL} in 2000.
Since the Fermat hypersurface $F_{n, d}$ is smooth, by \cite{Hu} $\MLdeg(F_{n, d})$ is equal to the signed Euler characteristic $\chi(F_{n, d}\setminus \sH)$. In Section \ref{two}, we will compute $\chi(F_{n, d}\setminus \sH)$, and we will postpone the technical calculation of the Milnor numbers to Section \ref{three}. In the last section, we will briefly discuss what we know about the constants $\beta_{n, d}$.
\subsection*{Acknowledgement}
We thank Jiu-Kang Yu and Zhengpeng Wu for helpful discussions about the constants $\beta_{\mu, \nu}$.
\section{Computing the Euler characteristics}\label{two}
By the following theorem of Huh \cite{Hu}, we reduce the problem of computing $\MLdeg(F_{n, d})$ to computing $\chi(F_{n, d}\setminus \sH)$. Recall that in $\pp^n$, $\sH=\bigcup_{\lambda\in \Lambda}H_\lambda$ is the union of all coordinate hyperplanes and the hyperplane $H_{+}=\{x_0+x_1+\cdots+x_n=0\}$.
\begin{theorem}[Huh, \cite{Hu}]\label{Huh}
If $X\subset \pp^n$ is a subvariety such that $X\setminus \sH$ is smooth, then
$$
\MLdeg(X)=(-1)^{\dim(X)}\chi(X\setminus \sH).
$$
\end{theorem}
Since the Euler characteristic is additive for algebraic varieties, by the inclusion-exclusion principle,
\begin{equation}\label{inc}
\chi(X\setminus\sH)=\sum_{0\leq i\leq n}\;\sum_{\substack{\Lambda'\subset \Lambda\\ \left\vert \Lambda'\right\vert=i}}(-1)^i\chi(X\cap H_{\Lambda'})
\end{equation}
where $H_{\Lambda'}=\bigcap_{\lambda\in \Lambda'}H_\lambda$.
The Fermat hypersurface $F_{n, d}=\{x_0^d+x_1^d+\cdots+x_n^d=0\}$ is invariant under any permutation of the coordinates. Therefore, (\ref{inc}) can be written as
\begin{equation}\label{total}
\chi(F_{n, d}\setminus \sH)=\sum_{0\leq i\leq n}(-1)^i\left({n+1 \choose i}\chi(F_{n, d}\cap V^i)+{n+1 \choose i-1}\chi(F_{n, d}\cap W^i)\right)
\end{equation}
where $V^i=\bigcap_{0\leq j\leq i-1} H_j$ and $W^i=H_+\cap \bigcap_{0\leq j\leq i-2}H_j$ ($W^0=\emptyset$ and $W^1=H_+$).
$F_{n, d}\cap V^i$ is a smooth hypersurface in $\pp^{n-i}$ of degree $d$. Euler characteristics of such hypersurfaces only depend on $n-i$ and $d$, and they are calculated in \cite{D} Chapter 5, (3.7). However, it turns out that we do not need to compute each of these Euler characteristics. For now, we simply denote the Euler characteristic of a smooth degree $d$ hypersurface in $\pp^m$ by $e_{m, d}$. In particular,
\begin{equation}\label{smooth}
\chi(F_{n,d}\cap V^i)=e_{n-i, d}.
\end{equation}
$F_{n, d}\cap W^i$ is a possibly singular hypersurface in $W^i$ for $1\leq i\leq n$. In fact, $F_{n, d}\cap W^i$ is isomorphic to the intersection of the Fermat hypersurface $F_{n-i+1, d}\subset \pp^{n-i+1}$ and the hyperplane $\{x_0+x_1+\cdots+x_{n-i+1}=0\}$. Using the Lagrange multiplier method, one can easily see that all the singular points of $F_{n, d}\cap W^i$ are isolated and that there are exactly $\beta_{n-i+1, d-1}$ of them. The Euler characteristics of such hypersurfaces can be computed using Milnor numbers.
\begin{theorem}{\cite[Chapter 5 (4.4)]{D}}
For any singular point $P$ of $F_{n, d}\cap W^i$ we can define the Milnor number $\mu(F_{n, d}\cap W^i, P)$ by considering $F_{n, d}\cap W^i$ as a hypersurface of $W^i$. Then,
\begin{equation}\label{sing}
\chi(F_{n, d}\cap W^i)=e_{n-i, d}+ (-1)^{n-i}\sum_{P}\mu(F_{n, d}\cap W^i, P)
\end{equation}
where the sum is over all the singular points $P$ of $F_{n, d}\cap W^i$.
\end{theorem}
\begin{prop}\label{num}
For any singular point $P$ of $F_{n, d}\cap W^i$,
\begin{equation}\label{milnor}
\mu(F_{n, d}\cap W^i, P)=1.
\end{equation}
\end{prop}
We will postpone the proof of the proposition to next section. The next corollary follows immediately from (\ref{sing}) and (\ref{milnor}).
\begin{cor}
\begin{equation}\label{sss}
\chi(F_{n, d}\cap W^i)=e_{n-i, d}+(-1)^{n-i}\beta_{n-i+1, d-1}.
\end{equation}
\end{cor}
Now, combining (\ref{total}), (\ref{smooth}) and (\ref{sss}), we have
\begin{equation}\label{total2}
\chi(F_{n, d}\setminus \sH)=\sum_{0\leq i\leq n}(-1)^i\left({n+1 \choose i}e_{n-i, d}+{n+1 \choose i-1}\left(e_{n-i, d}+(-1)^{n-i}\beta_{n-i+1, d-1}\right)\right)
\end{equation}
Since
$
{n+1\choose i}+{n+1\choose i-1}={n+2\choose i},
$
(\ref{total2}) is equivalent to
\begin{equation}\label{close}
\chi(F_{n, d}\setminus \sH)=\sum_{0\leq i\leq n}(-1)^i{n+2 \choose i}e_{n-i, d}+\sum_{1\leq i\leq n}(-1)^{n}{n+1 \choose i-1}\beta_{n-i+1, d-1}.
\end{equation}
Suppose $X$ is a general hypersurface of degree $d$ in $\pp^n$. Then (\ref{inc}) implies that
\begin{equation}\label{irene}
\chi(X\setminus \sH)=\sum_{0\leq i\leq n}(-1)^{i}{n+2\choose i}e_{n-i, d}
\end{equation}
The maximum likelihood degree of a general hypersurface is well understood.
\begin{prop}{\cite[1.11]{HS}}
The maximum likelihood degree of a general degree $d$ hypersurface in $\pp^n$ is equal to $d+d^2+\cdots+d^n$.
\end{prop}
Combining the proposition, (\ref{irene}) and Theorem \ref{Huh}, we have
\begin{equation}
\begin{split}
\sum_{0\leq i\leq n}(-1)^{i}{n+2\choose i}e_{n-i, d}&=\chi(X\setminus \sH)\\
&=(-1)^{n-1}\MLdeg(X)\\
&=(-1)^{n-1}(d+d^2+\cdots+d^n)
\end{split}
\end{equation}
Therefore, (\ref{close}) is equivalent to
\begin{equation}
\chi(F_{n, d}\setminus \sH)=(-1)^{n-1}(d+d^2+\cdots+d^n)+\sum_{1\leq i\leq n}(-1)^n{n+1\choose i-1}\beta_{n-i+1, d-1}
\end{equation}
Again, by Theorem \ref{Huh}, we have
\begin{equation}
\MLdeg(F_{n, d})=d+d^2+\cdots+d^n-\sum_{1\leq i\leq n}{n+1\choose i-1}\beta_{n-i+1, d-1}
\end{equation}
which is the statement of Theorem \ref{thm}.
\section{The Milnor numbers}\label{three}
We prove Proposition \ref{num} in this section.
For the geometric meaning of the Milnor number, we refer to \cite[Chapter 3]{D}. Here we compute the Milnor numbers using Jacobian ideals. Denote the ring of germs of holomorphic functions at $0\in \cc^l$ by $\sO$. Let $f\in \sO$ be a nonzero germ of a holomorphic function such that the germ of the hypersurface $f^{-1}(0)$ has an isolated singularity at the origin $0\in \cc^l$. The Jacobian ideal of $f$, denoted by $J_f$, is defined by
$$J_f=(\frac{\partial f}{\partial z_1}, \cdots, \frac{\partial f}{\partial z_l})\subset \sO$$
where $z_1, \ldots, z_l$ are the coordinates of $\cc^l$.
\begin{theorem}{\cite[Chapter 3, (2.7)]{D}}\label{1st}
The Milnor number of $f^{-1}(0)$ at the origin, denoted by $\mu(f^{-1}(0), 0)$, is given by the formula
\begin{equation}
\mu(f^{-1}(0), 0)=\dim_\cc \sO/J_f.
\end{equation}
\end{theorem}
Recall that $W^i=\{x_0=x_1=\cdots=x_{i-2}=x_0+x_1+\cdots+x_n=0\}\subset \pp^n$. Denote $y_j=x_{i-1+j}$, $0\leq j\leq n-i+1$. Then the intersection $F_{n, d}\cap W^i$ is isomorphic to the intersection
$$\{y_0^d+y_1^d+\cdots+y_{n-i+1}^d=0\}\cap \{y_0+y_1+\cdots+y_{n-i+1}=0\}$$
in $\pp^{n-i+1}$. Without loss of generality, we can work on the affine chart $y_0\neq 0$, and rewrite the intersection in affine coordinates
$$\{1+\bar{y}_1^d+\cdots+\bar{y}_{n-i+1}^d=0\}\cap \{1+\bar{y}_1+\cdots+\bar{y}_{n-i+1}=0\}.$$
Here we use $\bar{y}_j$ to denote the corresponding affine coordinate of $y_j$, that is, $\bar{y}_j=y_j/y_0$. Suppose $(\xi_1, \ldots, \xi_{n-i+1})$ is a singular point of the above intersection. Then by the Lagrange multiplier method,
\begin{equation}
\xi_1^{d-1}=\xi_2^{d-1}=\cdots=\xi_{n-i+1}^{d-1}=1.
\end{equation}
We can eliminate $\bar{y}_{n-i+1}$ by $\bar{y}_{n-i+1}=-1-\bar{y}_1-\cdots-\bar{y}_{n-i}$. On this affine chart, $F_{n, d}\cap W^i$ is isomorphic to the hypersurface $\{f=0\}$ in $\cc^{n-i}$, where
\begin{equation}
f=1+\bar{y}_1^d+\cdots+\bar{y}_{n-i}^d+(-1-\bar{y}_1-\cdots-\bar{y}_{n-i})^d.
\end{equation}
Let $z_j=\bar{y}_j-\xi_j$. Then
\begin{equation}
f=1+(z_1+\xi_1)^d+\cdots+(z_{n-i}+\xi_{n-i})^d+(\xi_{n-i+1}-z_1-\cdots-z_{n-i})^d.
\end{equation}
\begin{prop}\label{2nd}
In the local ring $\sO$, the Jacobian ideal $J_f=(\frac{\partial f}{\partial z_1}, \ldots, \frac{\partial f}{\partial z_{n-i}})$ is equal to the maximal ideal $(z_1, z_2, \ldots, z_{n-i})$.
\end{prop}
\begin{proof}
Notice that $\xi_j^{d-1}=1$ for all $1\leq j\leq n-i+1$, so the constant terms of the partial derivatives cancel. Therefore,
\begin{equation*}
\begin{split}
\frac{\partial f}{\partial z_j}&=d(z_j+\xi_j)^{d-1}-d(\xi_{n-i+1}-z_1-\cdots-z_{n-i})^{d-1}\\
&=d(d-1)\left(\xi_j^{d-2}z_j+\xi_{n-i+1}^{d-2}(z_1+\cdots+z_{n-i})\right)+\text{higher degree terms}.
\end{split}
\end{equation*}
By Nakayama's lemma, we only need to show that the vectors $w_j=\xi_j^{d-2}z_j+\xi_{n-i+1}^{d-2}(z_1+\cdots+z_{n-i})$, $1\leq j\leq n-i$, span the whole vector space $\cc z_1\oplus \cc z_2\oplus\cdots \oplus \cc z_{n-i}$. Since $\xi_j^{d-1}=1$, we have $\xi_j^{d-2}=\xi_j^{-1}$, and hence $\xi_j w_j=z_j+\xi_j\xi_{n-i+1}^{-1}(z_1+\cdots+z_{n-i})$. Adding all the vectors $\xi_j w_j$ together, and using $\xi_1+\cdots+\xi_{n-i+1}=-1$, we get
$$\sum_{1\leq j\leq n-i}\xi_j w_j=\left(1+\frac{\xi_1+\cdots+\xi_{n-i}}{\xi_{n-i+1}}\right)(z_1+\cdots+z_{n-i})=-\frac{1}{\xi_{n-i+1}}(z_1+\cdots+z_{n-i}),$$
so $z_1+z_2+\cdots+z_{n-i}$ is contained in their span. Thus
$$z_j=\xi_j w_j-\frac{\xi_j}{\xi_{n-i+1}}(z_1+z_2+\cdots+z_{n-i})$$
is in the span as well.
\end{proof}
Now, Proposition \ref{num} follows from Theorem \ref{1st} and Proposition \ref{2nd}.
\section{The constants $\beta_{\mu,\nu}$}\label{four}
Instead of working with the constants $\beta_{\mu, \nu}$, we define $\alpha_{\mu, \nu}$ to be the number of complex solutions to the system of equations
\begin{equation}\label{system}
\begin{cases}
z_1^\nu=z_2^\nu=\cdots=z_\mu^\nu=1\\
z_1+z_2+\cdots+z_\mu=0.
\end{cases}
\end{equation}
Then clearly $\beta_{\mu, \nu}=\frac{1}{\nu}\cdot\alpha_{\mu+1, \nu}$. The advantage of working with $\alpha_{\mu, \nu}$ is that their defining equations have better symmetry.
We would like to answer the following question.
\begin{question}
Give a formula for $\alpha_{\mu, \nu}$ in terms of $\mu$ and the prime factorization of $\nu$.
\end{question}
This is definitely a very hard question. The work of Lam and Leung gives a necessary and sufficient condition for $\alpha_{\mu, \nu}\neq 0$.
\begin{theorem}{\cite{LL}}
Suppose $\nu=p_1^{a_1}\cdots p_l^{a_l}$ is the prime factorization. Then $\alpha_{\mu, \nu}\neq 0$ if and only if $\mu\in \zz_{\geq 0}\cdot p_1+\cdots+\zz_{\geq 0}\cdot p_l$.
\end{theorem}
When $\nu=p^r$ has only one prime factor, we can give a formula for $\alpha_{\mu, \nu}$. In this case, suppose $(z_1, \ldots, z_\mu)$ is a solution to (\ref{system}). Then the collection $\{z_1, \ldots, z_\mu\}$ can be divided into groups of $p$ elements such that each group is a rotation of $1, e^{2\pi i/p}, \ldots, e^{2(p-1)\pi i/p}$. Therefore, if $p$ does not divide $\mu$, then $\alpha_{\mu, \nu}=0$. If $p$ divides $\mu$, then
\begin{equation}\label{ab}
\alpha_{\mu, \nu}=\sum \frac{\mu!}{\left((s_1)!(s_2)!\cdots(s_{k})!\right)^p}
\end{equation}
where $k=\nu/p$, and the sum is over all $s_1, \ldots, s_k\in \zz_{\geq 0}$ such that $s_1+\cdots+s_k=\mu/p$. Since $\beta_{\mu, \nu}=\frac{1}{\nu}\cdot\alpha_{\mu+1, \nu}$, we can translate (\ref{ab}) into a statement about $\beta_{\mu, \nu}$.
\begin{prop}\label{com}
Suppose $\nu=p^r$, where $p$ is a prime number and $r$ is a positive integer. Then $\beta_{\mu, \nu}=0$ when $p$ does not divide $\mu+1$, and when $p$ divides $\mu+1$
\begin{equation}
\beta_{\mu, \nu}=\frac{1}{\nu}\sum \frac{(\mu+1)!}{\left((s_1)!(s_2)!\cdots(s_{k})!\right)^p}
\end{equation}
where $k=\nu/p$, and the sum is over all $s_1, \ldots, s_k\in \zz_{\geq 0}$ such that $s_1+\cdots+s_k=\frac{\mu+1}{p}$.
\end{prop}
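As a sanity check on Proposition \ref{com}, the closed formula can be compared with a direct count over roots of unity for small parameters. The Python sketch below is ours; the direct count uses floating-point arithmetic with a small tolerance, while the formula is evaluated in exact rational arithmetic.

```python
import cmath
from fractions import Fraction
from itertools import product
from math import factorial

def beta_direct(mu, nu, tol=1e-9):
    """Count solutions of z_1^nu = ... = z_mu^nu = 1, z_1 + ... + z_mu + 1 = 0."""
    roots = [cmath.exp(2j * cmath.pi * k / nu) for k in range(nu)]
    return sum(1 for zs in product(roots, repeat=mu)
               if abs(sum(zs) + 1) < tol)

def beta_prime_power(mu, p, r):
    """Evaluate the formula of the proposition for beta_{mu, p^r}, p prime."""
    nu = p ** r
    if (mu + 1) % p:
        return 0
    k, m = nu // p, (mu + 1) // p
    total = Fraction(0)
    for s in product(range(m + 1), repeat=k):
        if sum(s) == m:  # compositions s_1 + ... + s_k = (mu + 1) / p
            denom = 1
            for si in s:
                denom *= factorial(si)
            total += Fraction(factorial(mu + 1), denom ** p)
    return int(total / nu)
```

Running it, `beta_prime_power(3, 2, 2)` and `beta_direct(3, 4)` both return 9, so $\beta_{3,4}=9$.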
Suppose $\nu=p^rq^s$ has two distinct prime factors, and suppose $(z_1, \ldots, z_\mu)$ is a solution to (\ref{system}). Then by \cite[Corollary 3.4]{LL}, the collection $\{z_1, \ldots, z_\mu\}$ can be divided into groups of $p$ or $q$ elements such that each group is a rotation of $1, e^{2\pi i/p}, \ldots, e^{2(p-1)\pi i/p}$ or a rotation of $1, e^{2\pi i/q}, \ldots, e^{2(q-1)\pi i/q}$, respectively. However, this decomposition is not unique, and this is the main difficulty in finding a formula for $\alpha_{\mu, \nu}$ in this case. This is already a problem beyond our capability.
When $\nu$ has at least three distinct prime factors, the statement of \cite[Corollary 3.4]{LL} no longer holds. Therefore, the question becomes much harder and deeper.
\section{Introduction}
The compute resources required to train state-of-the-art (SOTA) machine learning, deep learning, and artificial intelligence (ML/DL/AI) models are increasing at a `super-Moore' rate, doubling every 3.5 months \cite{Openai}, massively increasing the amount of energy required to generate these models \cite{Strubell}. The proposed solutions have centered around improved co-design of architecture and algorithms, as seen in CMOS-based TPUs, FPGAs, ASICs and spike-based hardware. The use of transistors for efficient analog computing has regained some popularity but is not mainstream. There is also growing interest in exploring the use of emerging devices like memristors, photonics and atomic switch networks to build a new generation of AI hardware. While these novel devices show great promise in energy efficiency, high density and non-linearity, they have often been hindered by stochastic device behavior, manufacturing variability and challenges of large-scale implementation relative to traditional CMOS. Successful realization of neuromorphic systems with these emerging devices is key to building more efficient hardware to meet the growing demands for compute.
The goal of this paper is to identify the fundamental problems in the current framework that hinder the successful integration of these novel devices for AI hardware. If we are able to successfully address these problems, we would then be able to engineer a novel paradigm of complex systems with the potential to realize faster, more robust and more efficient information processing \cite{Teuscher}. We will start by analyzing the exponential success we have achieved over the last six decades under a description $\leftrightarrow$ design framework in Sec. II. In Sec. III, we will use the same framework to explain why the time is ideal to completely reboot some of our fundamental ideas in both description and design in order to make progress. Ideas of complexity, complexity engineering and self-organization will be introduced in Secs. IV and V, and will pave the way towards discussing a complexity engineering approach to neuromorphic design in Sec. VI. We will discuss the changes necessary to the descriptive framework with respect to the reservoir computing framework and provide a path forward using non-equilibrium thermodynamics in Sec. VII. We will summarize the paper and conclude in Sec. VIII.
\section{Description $\leftrightarrow$ Design}
The title of this section represents one of the central ideas from this paper. In our field, the \textbf{description} of computation both influences and is influenced by elements of \textbf{design}. The modern computing technology stack is complex, with multiple interdependent components. We will break it down into four fundamental parts that will be our focus (Fig. 1) -
\begin{itemize}
\item[(a)] \textbf{Task} for which the system is being built for.
\item[(b)] \textbf{Theoretical framework} used to describe how computational states are represented and the algorithm used to achieve the task.
\item[(c)] \textbf{System architecture} describes how the different parts of the systems are connected.
\item[(d)] \textbf{Physical computing devices} corresponds to how computation is physically realized i.e. the hardware of the system.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{icons_fig3.png}
\end{center}
\caption{Description $\leftrightarrow$ Design - The computing stack divided into four fundamental interconnected components: Description consisting of the Task and the Theoretical Framework, and the Design consisting of System Architecture and Physical Devices.}
\end{figure}
\par\noindent The task and theoretical framework components correspond to \emph{Description} - How is the task described computationally, what is the algorithm, how are inputs and outputs represented? What is considered as achieving the task in a computational manner? The latter two - architecture and devices - correspond to \emph{Design} - How are the different blocks necessary to achieve the computation arranged efficiently and what are the physical devices that can realize the specific inputs and outputs? These two categories and four components constantly influence each other, which we will explore further in the next section.
The components during the era of digital computing are -
\begin{itemize}
\item[(a)] Tasks - Performing large mathematical operations.
\item[(b)] Theoretical framework - Boolean algebra, finite state automata and Turing machines.
\item[(c)] System architecture - General purpose computing has been built on a variant of the von Neumann architecture.
\item[(d)] Physical devices - CMOS devices in binary digital mode.
\end{itemize}
\par\noindent The relative stability of these factors represents a \emph{perfect storm} that drove the digital computing revolution. Let us explore the description-design relationship further with respect to these components.
At the heart of modern-day computing is Turing's seminal work in 1936, in which he established a very general model for computation using the idea of Turing machines, showing that \emph{`any function computable by an algorithm can be computed on a Turing machine'} by manipulating binary symbols like `0' and `1' on the machine tape \cite{Turing1}. Modern computers are not replicas of Turing machines but are based on the idea of manipulating symbols based on efficient algorithms in order to achieve computations. Claude Shannon's work proving the equivalence between the behavior of networks of electrical switches and Boolean logic functions is another fundamental building block of digital computing \cite{Shannon1}. The former established the theoretical framework and the latter indicated the type of physical systems that can implement the framework - which together pushed the search for switching devices required to instantiate the binary symbols.
The primary task of building computers to perform large mathematical calculations was influenced by early digital computers built around the Second World War for performing the calculations needed in artillery firing tables, cryptanalysis, etc., which utilized electromechanical switches. The ENIAC machine, completed in 1945, utilized vacuum tubes and is historically important in the development of the stored-program architecture (also known as the von Neumann architecture) \cite{Eniac}. It was the first general purpose, Turing-complete electronic digital computer; the stored-program architecture allowed the system to be reprogrammed by storing the data and program in an external memory. Before the stored-program architecture, we had fixed-program systems in which the program was hardwired in the system for a particular task and could not be reprogrammed - similar to our design of modern-day ASICs, albeit a lot less efficient and flexible. Modern-day computer architecture is a lot more advanced and complicated, but is built on top of the original von Neumann architecture.
Transistor technologies (BJT, MOS, CMOS, etc.), given their smaller size, faster speeds, lower power consumption, better SNR and ability to be combined with integrated circuit (IC) technology, became the devices of choice to realize 0's and 1's, and quickly replaced bulkier vacuum tubes in the 1950s. A decade later, in 1965, Gordon Moore made his famous observation about the number of transistors on an integrated chip doubling about every two years, i.e. Moore's law \cite{Moore}. With the powerful Turing machine theoretical framework, a von Neumann stored-program architecture, Shannon's work on digital switches, the exponential increase in transistor density to realize it, the decrease in cost per compute and the growing interest in the scientific study of computers, efficient algorithms, etc., the digital technological revolution was well underway. As the decades passed by, more and more problems across different fields of science, engineering, medicine, economics, etc. were made tractable by casting them as computational problems, and computers became ubiquitous in our everyday lives. Given this exponential progress, it is reasonable to question why the time is right for another revolution of ideas.
\section{Viva la Revolution!!}
The unintended consequence of the incredible success of computing has been a \emph{streetlight effect}, i.e. continuing to do what we have already been very successful at. We live in a period where the availability of cheap and powerful compute encourages us to cast all problems in a manner that can be solved by our existing computers, and then look to optimize both the system hardware and software to improve the implementation efficiency. This has also served to discourage a number of ideas that might replace conventional systems.
CMOS devices are considered near irreplaceable in the computing stack, with billions of dollars invested in their continued development and in the construction of SOTA fabrication facilities. Moore's law has been both the tip of the spear for our progress and a shield for CMOS transistor devices (against possible novel replacements), while components (a)-(c) have remained relatively unchanged. Over the many decades, there have been a number of research programs focused on identifying devices like spintronics, carbon nanotubes, graphene, quantum dots, molecular cellular automata, etc. (sometimes referred to as unconventional computing \cite{Stepney}). While some of these have been able to match and even surpass CMOS devices in terms of device speed and power dissipation, critics of these novel approaches often point towards their inability to match device robustness, signal-to-noise ratios, scalability and integration with IC design processes. The ability of these devices to construct robust logic gates at scale, which is central to the current computational paradigm, is also seen as a major roadblock to their adoption. However, with Moore's law slowing down (and Dennard scaling having completely stopped) as we approach the physical limits of device scaling, now is the time to invest heavily in the research and development of these new emerging devices at levels comparable to CMOS technology \cite{Apte}. This should help us both extend our current progress and identify suitable devices for new tasks of interest.
The architecture of the system has been the more flexible component compared to physical devices. FPGAs, ASICs and systems on a chip (SoC) for parallel processing, scientific computing, high performance computing, graphics processing units, etc. are perfect examples of modifying (c) the system architecture according to (a) the specific task of interest while keeping the fundamental (b) theoretical computing framework (though they use specialized algorithms) and (d) CMOS devices unchanged. Increasingly, the focus of the field has shifted away from general purpose computing and towards \emph{AI tasks} - a set of tasks that are associated with intelligence and cognitive abilities. With this shift has come the increasing demand for compute in the field of ML and AI to realize these tasks. The backpropagation algorithm, central to ML, was popularized in 1986 by Rumelhart, Hinton and Williams \cite{Rumelhart}, but such algorithms were not feasible at scale until the availability of GPUs with increased parallelism to perform the large number of computations required \cite{Alexnet}. The lesson here being - the value of an algorithm is dependent on the availability of existing hardware to execute it feasibly. Thus the design of computational algorithmic descriptions (of learning and intelligence) is undoubtedly influenced by the type of operations that are feasible on existing hardware, i.e. existing \textit{design driving description}.
The hardware solutions to provide the necessary support for ML have mainly focused on architectural improvements, which have been influenced by the machine learning algorithms themselves that needed to be executed. Learning is generally described as weight changes, using gradient descent techniques on a suitable loss function $E$, given by the equation below
\begin{equation}\label{SGD}
w_{t+1}=w_t-\eta \frac{dE}{dw}
\end{equation}
\par\noindent Learning is achieved during the training phase by performing the operation in Eq.~(\ref{SGD}) on billions of parameters using large amounts of training data. This requires the hardware to perform an extremely large number of matrix multiplication and addition operations. The shift towards more parallel architectures, crossbar structures for more efficient matrix operations, reduced precision arithmetic and improved data movement to combat the memory bottleneck represents significant changes to the system architecture, influenced by descriptions of what learning entails, i.e. a case of \textit{description driving design}. These have been adopted by both industry giants (like Intel, NVidia, AMD, ARM, Google) and startups (Cerebras, Mythic, Graphcore, SambaNova) alike to improve the efficiency of the hardware implementing these compute-intensive algorithms. Of course, more radical descriptions of learning will drive the search for novel hardware (for example, Shor's algorithm \cite{Shor} for prime factorization was a major driving factor for quantum computing).
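As a minimal illustration of the update rule in Eq.~(\ref{SGD}), the following Python snippet (a toy one-dimensional example of ours, not any particular hardware implementation) drives a single weight to the minimizer of a quadratic loss:

```python
def sgd_step(w, grad, eta=0.1):
    """One gradient-descent update: w <- w - eta * dE/dw."""
    return w - eta * grad

# Minimize the toy loss E(w) = (w - 3)**2, whose gradient is 2 * (w - 3).
w = 0.0
for _ in range(100):
    w = sgd_step(w, 2 * (w - 3))
```

After 100 steps, `w` is within $10^{-6}$ of the minimizer 3; in a real network the same scalar rule is applied to billions of parameters, which is why hardware throughput on multiply-accumulate operations dominates training cost.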
While we have focused on the use of transistors as switches for digital computation, they can also function as analog computational elements when used in appropriate device modes (interestingly, this use of transistors in an analog manner, exploiting richer device physics, is reminiscent of ideas employed in unconventional computing). An important hardware paradigm that has re-emerged is \emph{neuromorphic computing}. The term was coined in 1990 by Carver Mead \cite{Mead}, who defined ``neuromorphic systems'' as the use of very large scale integration (VLSI) techniques with analog components that mimic computation in biological neural systems (and digital blocks for communication). However, the use of this term has evolved to become much broader, meaning different things to different researchers. Systems are often defined to be \emph{neuromorphic} at various levels of the computing stack - algorithm, architecture and device - and the term includes a wide range of implementations based on both spike-based biologically-inspired algorithms as well as deep-learning based artificial neural networks. A detailed survey of neuromorphic systems in \cite{Schuman} illustrates this very point. Many of these systems have shown tremendous improvements in terms of energy efficiency, but much work is needed to improve these algorithms so that they compete with SOTA deep learning techniques. It might serve the field to clearly define where this neuromorphic boundary lies in order for the term to be meaningful in a useful sense. In any case, hybrid digital-analog systems built on Mead's original definition can be seen as a natural co-design extension of the fully digital CMOS systems discussed above. In addition to the architectural changes to the system, the transistor devices have been used in an unconventional but natural analog manner to mimic neuronal and synaptic behavior to achieve the tasks in the AI suite.
The task, architecture and physical device components have changed to learning tasks, crossbar/parallel architectures and analog computation to efficiently implement the learning algorithms, while the theoretical computing framework i.e. describing learning as the computation of weight changes using Hebbian or gradient descent based techniques, remains consistent across the various systems.
Of particular interest in this paper is the design of neuromorphic hardware using novel emerging devices like memristors \cite{Li}, photonics \cite{Lima}, spintronics \cite{Grollier}, atomic switch networks, etc. While we will mainly refer to memristors in this paper (given their increasing popularity for memory, in-memory compute and artificial neural networks), the underlying ideas can be extended to other novel devices as well. Both on-chip and off-chip learning have been achieved in these systems using mainly gradient descent-based algorithms (while some systems have utilized more biologically inspired local Hebbian mechanisms as well). These devices, given their small sizes (and thus large density), energy efficiency, speed, non-linearity, etc., have shown great promise, but device variability, sneak path currents, self-discharge \cite{Asapu} and latency due to the necessary control circuitry in dense crossbar structures have hindered their progress with respect to scalability and stackability \cite{Adam}. As incremental advances continue to be made to improve the realization of existing algorithms, as well as to tune algorithms to account for device variations \cite{Querlioz}, it is necessary to ask whether the problem is not one algorithm versus another but rather the underlying computational description and engineering methodologies themselves. We must be willing to ask if we need to fully rethink descriptions and design in a manner that maximizes the potential of these novel devices. Changing the description framework alongside the task, architecture and devices will represent a change in all four components concurrently for the first time in over six decades - a big reason why the time might be ripe to make fundamental changes. We will explore this in further detail over the next few sections.
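The appeal of memristive crossbars, and the variability problem noted above, can both be seen in an idealized sketch. The numbers below (conductance ranges, voltages, the 10\% variability figure) are illustrative assumptions, not measurements from the cited devices.

```python
import numpy as np

# Idealized memristor crossbar: stored conductances G (siemens) implement a
# matrix-vector product in one analog step via Ohm's law and Kirchhoff's
# current law: column currents I_j = sum_i V_i * G_ij, i.e. I = V @ G.
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # 4x3 crossbar conductances
V = np.array([0.2, 0.1, 0.05, 0.3])        # row input voltages

I_ideal = V @ G                            # the "free" matrix-vector product

# Device-to-device variability (a key challenge noted above), modeled here
# as an assumed 10% multiplicative spread on each conductance:
G_real = G * rng.normal(1.0, 0.10, size=G.shape)
I_real = V @ G_real
relative_error = np.abs(I_real - I_ideal) / np.abs(I_ideal)
```

The point of the sketch: the multiply-accumulate comes from physics rather than clocked logic, but any algorithm that demands exact weights inherits the conductance spread directly, which motivates descriptions of learning that tolerate it.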
\section{Complex Systems, Complexity Science \& Engineering}
The goal of building neuromorphic hardware is to identify properties of the human brain that are useful for intelligence and emulate it using a different hardware substrate. The human brain is a \emph{complex system}. It is necessary to clearly understand what this term \emph{complex} entails as we look to engineer systems that mimic it. Systems like the human brain, social networks, ant colonies, the internet are a few examples of complex systems. Complexity is roughly defined as being situated between order and disorder. Complex systems are usually identified using some properties that are common to systems that we identify as complex \cite{Mitchell} -
\begin{enumerate}
\item[(a)] Large number of simple components.
\item[(b)] Non-linear interaction among parts.
\item[(c)] No central control.
\item[(d)] Emergent behaviors like hierarchical organizations, robustness, phase transitions, complex dynamics, information processing capabilities, evolution and learning.
\end{enumerate}
\par\noindent Here emergence corresponds to properties that are seen in the system as a whole but not in its components, arising due to the interactions between them (colloquially referred to as the `whole being greater than the sum of the parts'). The author in \cite{Mitchell} also distinguishes between disorganized and organized complexity. The first involves billions and billions of variables and assumes very little interaction between them. Our focus, however, will be on the latter, which involves a moderate number of strongly interacting variables and exhibits the properties listed above. The burgeoning science of complexity seeks to identify a unified theory across multiple disciplines.
The main research direction in complexity is to understand its emergence in natural and engineered systems by identifying and studying the properties of networks. Critical to this task is defining measurable quantities suitable for characterization and analysis \cite{Lloyd1}. An important aspect of this in engineered systems is to address complexity as a problem that needs to be tackled and to augment the system to cope accordingly \cite{Frei1}. Another option, as proposed by the authors in \cite{Buchli}, is to engineer systems that look to take advantage of the complexity rather than suppressing or managing it. This is sometimes referred to as \textit{emergent} \cite{Ulieru} or \textit{complexity engineering} \cite{Wolfram}. It is important to differentiate here between complex and complicated systems. Complicated systems have a large number of components, are predictable, do not show emergent properties and are ultimately reducible. Given the difference in properties between complicated and complex systems, the engineering of complex systems will look very different from traditional classical engineering and require a significant shift in our design thinking. Let us explore this difference further.
Classical engineering is what is usually taught in universities everywhere and corresponds to applying methods and techniques to solve problems using a reductionist approach, characterized by intuitive analysis, detailed understanding, determinism and predictability \cite{Gershenson}. It requires the systems being designed to be well-defined, and engineers make the reasonable hypothesis that the parts of a system interact in some well-known and well-defined ways without further influencing each other. An example of this is the \emph{divide and conquer} strategy \cite{Heylighen} - the problem is cut into its simplest components, each is analyzed separately, and detailed descriptions are generated. The parts are then connected together as required (ideally the components are modular in nature) and the entire problem is solved.
Complexity engineering is currently less formalized and is akin to a search of the design space to produce a robust complex system to be situated in a dynamic environment. These systems, by definition, are not reducible into their various components. They have \emph{emergent functionality}, which means that the required function is not instantiated by a single component or restricted to a part of the system, but is instead distributed across the entire system, arising at the macroscale due to the interaction of many microscale components (here macro and microscale are relative). It is thus necessary to engineer the right type of interactions between the different components so that the overall system dynamics produces the function of interest. Unlike classical engineering, where the dynamics and function of every component are fully understood and specified, we will have to relax this constraint in the design of complex systems. We replace it with an approximate understanding of the overall behavior of the system, and the ability to control and predict some aspects of the system output, even though it might be difficult (and computationally expensive) to understand how the system produced the output. It is important to understand that it is not a question of which of the two - classical vs complexity engineering - is better, but rather a question of the situations for which one might be more suited than the other.
Classical and complexity engineering are also going to assign different roles to the engineer. Rather than specifying the performance of different components and controlling them, the engineer must now act more as a \emph{facilitator} who guides and enables the system's self-organizing process to produce results of value, as discussed in \cite{Ulieru}. A loss of complete control and predictability over the systems we design might seem alien to engineers, but it is something we must be willing to explore moving forward. However, we are in the early stages of this discipline of complexity engineering. Moving forward, we need to expand on the theoretical base from complexity science, develop a framework to describe and translate concepts such as emergence, evolvability and learning from natural systems for use in the engineering of technological systems, and establish a solid methodology to obtain `design' protocols to engineer complex systems with the properties we desire.
\section{Self-Organization - Specification Trade-off}
In addition to being a complex system, the brain (like all biological systems) is a self-organized system. Self-organized systems share some properties with complex systems, such as being dynamical, robust, adaptive and autonomous (with no external or centralized control) \cite{Wolf}. For the purposes of this paper, we will adopt a more rigorous definition of self-organization \cite{SelfOrg}: the \textit{`process that produces dissipative non-equilibrium order at macroscopic levels, because of collective, nonlinear interactions between multiple microscopic components. This order is induced by interplay between intrinsic and extrinsic factors, and decays upon removal of the energy source.'} It is not to be confused with self-assembly, which is non-dissipative order at equilibrium, persisting without the need for energy. Though we use a more thermodynamics-based definition, there are others based on measures that relate self-organization to the increase in statistical complexity of the system \cite{Shalizi}. It is also important in the context of ML/AI to distinguish between self-organization as defined above and self-organizing or Kohonen maps \cite{Kohonen}, which are unsupervised learning algorithms for weights in a neural network. A very useful way to think about classical vs complexity engineering is the \textit{self-organization - specification trade-off} introduced in \cite{Buchli}. The author states - \emph{``On one hand we need certain functionality in the systems, i.e. we have to be able to specify what the system should do (or a part of it). On the other hand, if we specify `every' detail of the system, if we design it by decomposing it, `linearizing' the problem, then no self-organization will happen and no emergent phenomena can be harnessed. Thus, there is a trade-off between self-organization on one hand and specification or controllability on the other: if you increase the control over your system you will suppress self-organization capabilities; if you do not suppress the self-organization processes by avoiding constraining and controlling many variables, it is difficult to specify what the system should do.''}
We can discuss this trade-off in a more concrete manner in terms of the number of variables $N$ used to describe a system at some suitable level of abstraction \cite{Buchli}. Let $N_C$ be the number of constrained variables, i.e. the variables or parameters that the engineer can control in order to extract the required functionality from the system. This makes the rest of the $N$ variables the unconstrained variables $N_U$, which are not under the control of the engineer and are influenced by the system dynamics alone (they evolve as allowed by physical law). By definition we have $N_C+N_U=N$. The two limits are a fully engineered system with $N_C = N$, exhibiting no self-organization and no emergent phenomena, on one end, and a fully self-organized system with $N_U = N$ on the other. In complexity engineering, the goal is to produce systems with $N_U \gg N_C$ - to not exert control over most variables except for a small number of constrained variables that guide the system evolution in directions that will produce efficient solutions. This does not imply that we can achieve self-organization by taking an existing system and removing its controls to make $N_U \gg N_C$. Instead, we have to take into account the different components and the interactions between them so that the self-organization of the system over time, under the constraints of $N_C$, produces the results we want. By increasing the number of degrees of freedom $N_U$ that self-organize, we are looking to exploit the intrinsic physics of the system to a greater extent to achieve computing goals. The work presented in this paper can be seen as a natural extension of Mead's original idea to further exploit CMOS device physics to build analog elements of neuromorphic systems, as well as of recent work looking to leverage a greater amount of physics in algorithms and nanoscale materials for neuromorphic computing \cite{Markovic}.
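A toy numerical illustration (our own construction, not from \cite{Buchli}) makes the $N_C+N_U=N$ split concrete: clamp only $N_C$ variables of a locally coupled system and let the remaining $N_U$ relax under their own dynamics. The ring topology, diffusive update and clamp values below are arbitrary illustrative choices.

```python
import numpy as np

# Toy illustration of the N_C + N_U = N split: a ring of N diffusively
# coupled units. We clamp only N_C = 2 of them (constrained variables) and
# let the remaining N_U = 18 evolve freely; the two clamps steer the whole
# pattern without specifying every unit.
N = 20
x = np.zeros(N)
clamped = {0: 1.0, N // 2: -1.0}          # indices and values, N_C = 2

for _ in range(2000):
    left, right = np.roll(x, 1), np.roll(x, -1)
    x = x + 0.1 * (0.5 * (left + right) - x)   # local diffusive dynamics
    for i, v in clamped.items():               # external control on N_C only
        x[i] = v

# The free units settle into a smooth profile shaped by just 2 constraints
# (a linear interpolation between the clamped values around the ring).
```

The emergent profile is not programmed anywhere; it is the fixed point of the unconstrained dynamics under two boundary constraints, which is the spirit of designing with $N_U \gg N_C$.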
\section{Complexity Engineering Approach to Neuromorphic Design}
The use of complexity engineering approaches to engineer emergence in computing has been limited to unconventional \cite{Polack} and intrinsic \cite{Feldman} computing, and to the extent of the author's knowledge there is no literature on its use in neuromorphic hardware. In this section, we will study the advantages of designing neuromorphic hardware using a complexity engineering approach. We start by analyzing neuromorphic design under the traditional engineering approach, which starts with a specific learning task and an algorithm to solve it efficiently. Most learning algorithms - local Hebbian type or global backpropagation based rules - are of the form given in Eq.(\ref{SGD}) and operate at the level of weights. This level of description, which we will refer to as \textit{fine-grained} or \textit{microscale}, influences the design of the circuits that implement the algorithm, as mentioned earlier. Irrespective of whether transistor or memristor-based synaptic circuits sit at the crossbar junctions, we need to build read, write and control circuitry at this microscale level in order to be able to change the weights (constrained variables) - this corresponds to having a system with $N_C \gg N_U$. As we scale up, the additional circuitry required to overcome sneak path currents becomes increasingly harder and more expensive to achieve. Furthermore, the variability in memristor devices and behavior is also problematic if we require very specific weight changes. For these reasons, traditional approaches to neuromorphic design using memristors face many sizeable challenges. As we continue to make improvements under traditional approaches, we must ask whether it is feasible to engineer such complex systems with the required emergent properties using novel devices in this paradigm \cite{Johnson}.
We explore the complexity engineering approach by first analyzing a set of conditions a system would need to satisfy in order to be engineered through self-organization \cite{Frei1} - (a) autonomous and interacting units, (b) minimal external control, (c) positive and negative feedback, (d) fluctuations/variations and (e) a flat dynamic internal architecture. We now map these conditions onto the required neuromorphic hardware system built through self-organization. Such a system will be made of a large number of non-linear neurons interacting through their weights. Currently we build external control into the circuit at a fine-grained (microscale) individual neuron and weight level (i.e. $N_C \gg N_U$) to correctly realize the learning algorithm. In order to allow for self-organization, we will have to use a macroscale description of learning (explored in further detail in the next section) that reduces the number of constrained variables and allows the system to evolve freely under its native dynamics ($N_U \gg N_C$), exploiting the rich intrinsic behavior of these new devices. Note that by reducing $N_C$, we also make the system evolution more autonomous, satisfying (b) above. It is also necessary to ensure that, as we control the small number of variables $N_C$, we prevent the system from entering no-go regions of its state-space that are of no value to the users. The use of recurrent architectures and external error signals to influence system evolution can provide the necessary feedback to the system. Variations in device manufacture and behavior are now preferred, as we seek a diversity-enabled sweet spot in self-organized networks \cite{Doyle}. Fluctuations in the microscale (weight) components are tolerable as long as the overall system can realize the necessary functionality, and they can make the system more robust. \emph{Noise} corresponds to unexpected variations in the constrained variables $N_C$.
On the other hand, variations in the unconstrained variables are acceptable, and `noise' is now a resource we can exploit (as done in simulated annealing \cite{annealing}). A flat internal architecture is achieved by using a number of neurons of similar behavior and system connectivity that evolves dynamically based on the input stimuli presented. For these reasons, neuromorphic hardware systems based on emerging non-linear devices (like memristors) are very promising candidates for being engineered through self-organization. These systems can further exploit the existing bottom-up fabrication techniques that are already used to build systems with these devices. It is important to understand the exact conditions in which self-organization and complexity engineering approaches will trump traditional ideas. We are not proposing that all neuromorphic systems be built this way. If we are using digital CMOS devices, in which we have engineered away most of the device physics, then traditional techniques will continue to be much better suited.
\section{Macroscale Descriptions of Learning}
A macroscale description of learning corresponds to the change in the theoretical framework whose need we discussed in section III. Realizing the tasks in the AI suite more effectively using emerging devices like memristors and brain-inspired architectures requires a change in the description from microscale to macroscale (design driving description), which in turn will also require a change from traditional to complexity engineering methodologies (description driving design).
In computational neuroscience, Marr's levels of analysis \cite{Marr} were inspired by how information systems have traditionally been built and understood, and there has been a continuous overlap of ideas between the two fields over the decades. The change in the description of learning can be seen as a technological extension of recent ideas being discussed in computational and systems neuroscience \cite{Blake}. In this paper, the authors describe the classical framework in systems neuroscience as observing and developing a theory of how individual neurons compute and then assembling a circuit-level picture of how the neurons combine their operations. They note how this approach has worked well for simple computations but not for more complicated functions that require a very large number of neurons. They propose to replace it with a deep-learning based framework that shifts the focus from computations realized by individual neurons to a description comprising - (a) objective functions that describe the goals of the system, (b) learning rules that specify the dynamics of neurons and weights, and (c) architectures that constrain how the different units are connected together. They further state - \textit{``This optimization framework has an added benefit: as with ANNs, the architectures, learning rules and objective functions of the brain are likely relatively simple and compact, at least in comparison to the list of computations performed by individual neurons.''} This idea of understanding artificial neural networks at a macroscale or coarse-grained level - in terms of objective functions, learning rules and the architecture of the network overall - as opposed to studying the system at the microscale or fine-grained level of individual neurons (where it is often hard to intelligibly describe a system comprising billions of parameters), has been explored in \cite{Kording}.
We are simply suggesting that as we move towards adopting these macroscale descriptions of neural networks, we must look towards complexity engineering methodologies to design and build systems based on these newer descriptions.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{icons_fig2.png}
\end{center}
\caption{Reservoir computing with an input layer feeding input signals $u(t)$ to the static reservoir. The reservoir generates a higher order non-linear transformation of the input signals in its states $x(t)$. The output layer is trained to generate the outputs $y(t)$ using the reservoir states \cite{Du}.}
\end{figure}
We will now introduce a simple example system, reservoir computing (Fig.2), to better clarify the ideas discussed and identify the types of problems that need to be addressed. The reservoir computing paradigm is an umbrella term that includes techniques like liquid state machines and echo state networks \cite{Luko}. Such systems have been physically implemented with non-linear dynamic elements \cite{Tanaka}, \cite{Du}, and provide a much simpler way to train recurrent neural networks (RNNs). A reservoir computing system consists of an input layer, an RNN-based reservoir and a single output layer. The weights in the reservoir remain fixed during training, and the network generates a non-linear transformation of the inputs in its states. The output signal is generated at the output layer as a combination of the reservoir signals. Only the weights in the single output layer are trained, using gradient descent with a teacher signal as the target. In order for the system to function properly and approximate the target signal, the weights in the reservoir (generated prior to training using evolutionary algorithms) are chosen such that the connection matrix $\mathcal{W}$ of the entire network satisfies the \textit{echo state} or \textit{fading memory property} \cite{Tanaka}. The property is characterized in different ways - a simple and popular one is having the spectral radius of $\mathcal{W}$ less than 1 (a sufficient condition that depends upon input statistics was proposed in \cite{Manjunath}). This is an example of a macroscale condition for learning and inference on the entire reservoir network (that allows for multiple weight-level solutions), as opposed to a microscale condition on the individual weights. We can view the static weights as unconstrained variables, while the macroscale condition on $\mathcal{W}$'s spectral radius or Schur stability is a constraint.
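The macroscale nature of the spectral-radius condition is easy to see in a minimal echo state network sketch. The reservoir size, input signal, prediction task and ridge-regression readout below are illustrative assumptions; only the structure (random fixed reservoir rescaled below spectral radius 1, trained linear readout) follows the paradigm described above.

```python
import numpy as np

# Minimal echo state network: a fixed random reservoir rescaled so its
# spectral radius is below 1 (the macroscale condition), with only the
# linear readout trained, here by ridge regression.
rng = np.random.default_rng(2)
n_res, T = 100, 1000
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale: spectral radius -> 0.9
W_in = rng.normal(size=n_res)

u = np.sin(0.1 * np.arange(T + 1))          # toy input signal
target = u[1:]                              # illustrative task: predict next input

X = np.zeros((T, n_res))                    # collected reservoir states x(t)
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Ridge-regression readout on the collected states (only these are trained)
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ target)
pred = X @ W_out
```

Note that the single rescaling line is the only "design" act applied to the $10^4$ reservoir weights: one macroscale constraint on $\mathcal{W}$ replaces $10^4$ microscale specifications, exactly the $N_U \gg N_C$ regime discussed earlier.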
The extension of the static reservoir is an \textit{adaptive reservoir}, in which we have a new macroscale reservoir condition to achieve the echo state property. This condition accommodates changes to the microscale weights of the network over time, depending upon the input statistics, in order to adapt to and learn new inputs. Unlike traditional RNNs, we do not train the reservoir using an algorithm at the microscale weight-level and instead let the weights evolve (self-organize) as unconstrained variables. Thus successful identification and implementation of this macroscale condition will result in an RNN whose weights evolve without external control at the microscale level, while producing the required network functionality.
We propose a (non-exhaustive) list of properties any macroscale description of learning would need to satisfy in order to be useful in a complexity engineering approach to design. These include the ability to:
\begin{itemize}
\item[(a)] Address system evolution and self-organization.
\item[(b)] Be implementation independent like computation.
\item[(c)] Quantify information processing and computation. Mapping to existing work in ML is a bonus.
\item[(d)] Address questions of accuracy and efficiency.
\item[(e)] Be studied experimentally and in simulation.
\item[(f)] Be tied to physical law and generate no-go results.
\end{itemize}
\par\noindent The author proposes the use of thermodynamics as a possible macroscale descriptive framework for learning. The field of thermodynamics was invented in the 19th century to address questions of efficiency in engines and has evolved to address the same in information engines \cite{Um}; like computational descriptions, it is universally applicable to all systems. The physical costs of information processing have been widely studied in the field of the thermodynamics of information \cite{Parrondo}. There is also a rich history of thermodynamics-based ideas in machine learning. Early energy-based models like Hopfield networks and Boltzmann machines \cite{Rojas}, and free-energy based Helmholtz machines \cite{Dayan}, are based on equilibrium thermodynamics - evolving the network towards a state of maximum entropy/minimum free-energy at equilibrium. However, self-organized complex systems of interest (like the human brain) are open systems that continuously exchange matter and energy with an external dynamic environment, show a wider range of behavior and are more accurately described by non-equilibrium thermodynamics.
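The equilibrium, energy-based lineage mentioned above can be sketched concretely with a tiny Hopfield network: Hebbian weights define an energy landscape, and asynchronous updates never increase the energy, so the state relaxes to a stored pattern. The patterns, network size and corruption level below are illustrative choices.

```python
import numpy as np

# Tiny Hopfield network: Hebbian weights, energy E = -1/2 s^T W s.
# Asynchronous sign updates monotonically lower E, relaxing the state
# toward a stored pattern (a minimum of the energy landscape).
rng = np.random.default_rng(3)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])   # two orthogonal patterns
n = patterns.shape[1]
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)                     # no self-coupling

def energy(s):
    return -0.5 * s @ W @ s

s = patterns[0].astype(float).copy()
s[:2] *= -1                                # corrupt two bits as a cue
E_start = energy(s)
for _ in range(5):                         # a few asynchronous sweeps
    for i in rng.permutation(n):
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
E_end = energy(s)

# s relaxes back to the first stored pattern, with E_end <= E_start.
```

This is exactly the "evolving towards minimum energy at equilibrium" picture; the non-equilibrium extensions discussed next generalize it to driven, dissipative dynamics.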
In the last few decades, the field of non-equilibrium thermodynamics has undergone a revolution, with tremendous improvements in the theoretical tools available to characterize systems far from equilibrium \cite{Jarzynski}, \cite{Crooks}, \cite{England}. While equilibrium thermodynamics focuses on the distributions of states at static equilibrium (in the infinite time limit), non-equilibrium fluctuation theorems characterize the work and heat distributions associated with dynamical trajectories of states as the system is driven by external inputs over finite time. There is a growing body of work on understanding the relationship between non-equilibrium thermodynamics and learning in physical systems \cite{Still}, \cite{Ganesh}, \cite{Ganguli1}, \cite{Ganguli2}. In \cite{Still2}, the author discusses the relationship between minimizing heat dissipation and information-bottleneck based predictive inference in Markov-chain models. There is also very interesting work generalizing the generative model of the Boltzmann machine, in which learning is characterized via distributions of the work done by varying the energy landscape of a non-equilibrium system \cite{Salazar}. The authors in \cite{Salazar} cast the contrastive divergence algorithm for training restricted Boltzmann machines as a physical process that approximately minimizes the difference between entropy variation and average heat, and discuss the relationship between annealed importance sampling \cite{AIS} and the thermodynamic Jarzynski equality \cite{Jarzynski}. In \cite{Boyd}, the authors establish an equivalence between thermodynamics and machine learning by showing that agents that maximize work production also maximize their environmental model's log-likelihood. Such relationships between computational descriptions of learning processes and non-equilibrium thermodynamic descriptions of the same as a physical process are what we are looking to achieve.
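The Jarzynski equality $\langle e^{-W/k_BT}\rangle = e^{-\Delta F/k_BT}$ cited above can be checked numerically on a standard exactly solvable toy system (our choice for illustration, not taken from the cited works): an instantaneous quench of a harmonic potential $V=kx^2/2$ from stiffness $k_0$ to $k_1$, for which $\Delta F = \tfrac{1}{2}\ln(k_1/k_0)$ in units with $k_BT=1$.

```python
import numpy as np

# Toy Monte Carlo check of the Jarzynski equality <exp(-W)> = exp(-dF)
# (units with kT = 1) for an instantaneous stiffness quench k0 -> k1 of a
# harmonic potential V = k x^2 / 2, starting from equilibrium at k0.
rng = np.random.default_rng(4)
k0, k1 = 1.0, 4.0
x = rng.normal(0.0, np.sqrt(1.0 / k0), size=1_000_000)  # equilibrium samples
work = 0.5 * (k1 - k0) * x**2                           # work of the quench

dF_estimate = -np.log(np.mean(np.exp(-work)))           # Jarzynski estimator
dF_exact = 0.5 * np.log(k1 / k0)                        # analytic free energy

# dF_estimate ~= dF_exact, while the naive average <work> overestimates it,
# the gap being the dissipated work of this far-from-equilibrium protocol.
```

The same exponential-average structure underlies annealed importance sampling, which is what makes the correspondence with \cite{AIS} noted above more than an analogy.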
These descriptions can serve as the basis for developing non-equilibrium control protocols, using ideas from thermodynamic control \cite{Deffner}, \cite{Rotskoff}, and for designing physical systems to realize these protocols. Such systems will realize the corresponding thermodynamic condition and, by the aforementioned equivalence, the computational learning as well. Engineering hardware based on these thermodynamic descriptions might seem alien when compared to our existing top-down standard protocols for design using computational descriptions. However, designing physical systems based on thermodynamic considerations is very common in the fields of molecular and bio-engineering, and in unconventional computing. A good example of this is chemical Boltzmann machines \cite{Poole}, in which the authors construct a mapping from Boltzmann machines to chemical reactions; such reactions are usually designed based on kinetic and thermodynamic factors \cite{Prausnitz}.
\begin{figure}
\begin{center}
\label{ASNs}
\includegraphics[scale=0.42]{icons_fig5.png}
\end{center}
\caption{(a) An optical image of an SiO$_2$ coated wafer with 120 Pt electrodes (top) \& an SEM image of a self-assembled Ag$^+$ network after submersion in AgNO$_3$ (bottom). (b) The silver nanowire network (top) takes the form of a tiny square of mesh at the center of the device (bottom) \cite{Steig}.}
\end{figure}
A list of some of the considerations when trying to engineer a self-organized system is provided in \cite{Fiakowlski} - (a) identifying suitable interactions, (b) choosing competing interactions and potentials, (c) choosing the proper scale and (d) synthesis - moving from small systems with minimal components to larger systems. A detailed review of the principles behind directed self-organization, by manipulating the energy and entropy landscape of the system, is available in \cite{Furst}. This change in our design approach to computing hardware is going to need a restructuring of our philosophies and significant interdisciplinary work. We do not have to start from scratch though, as we can build upon work in nanoarchitectonics \cite{Ariga}, computational matter and in-materio evolution \cite{Konkoli}. We also have a good idea of the properties that we want in the final engineered self-organized systems - a network with a large number of heterogeneous components, sparse connections with synaptic behavior, scale-free/small-world topology, criticality, etc. Examples of self-organized networks that satisfy such properties and have been shown to be capable of learning include \cite{Steig} (Fig.3), \cite{Alvarez}, \cite{Manning}, \cite{Bose} and \cite{Milano}. Fabrication of these networks is not based on computational descriptions, and they provide the ideal base on which to experiment and build a framework to understand novel macroscale descriptions of learning and the relationship between choices in the design process and the functional capabilities of the self-organized system.
\section{Discussion \& Conclusion}
In this paper, we studied the connections between the Description and Design components of the computing stack. This framework allowed us to understand the exponential success that we have achieved over the years, and also to identify the issues facing us moving forward in the design of neuromorphic hardware. If our goal is to engineer energy efficient neuromorphic hardware that can mimic the abilities of a complex self-organized brain using emerging stochastic devices, we must be willing to replace the traditional reductionist engineering approach with complexity engineering methodologies. This will require a significant shift in how we understand computation and describe learning in physical systems, in our design philosophy on what it means for a complex system to be `designed,' and in the role of the engineer in these systems. The author identifies two challenges that need to be addressed to achieve progress - (a) identifying new macroscale descriptions of learning that would leverage the intrinsic physics of the system to a greater degree, with non-equilibrium thermodynamics suggested as a possible path forward, and (b) taking inspiration from other fields to develop new design protocols using the above descriptions that would have better synergy with these emerging fabrics.
The author recognizes the tremendous challenges and work that lie ahead of us, but views these as a unique set of opportunities that are not often available to the research community. Complexity engineering is a relatively new field that requires a lot of research to be formalized, expanded and brought into the mainstream of hardware design. One can look back at history and point out that traditional hardware design faced the exact same challenges at the start of the digital computing paradigm in the 1940s. However, eight decades later, with the appropriate investments in research, we have made great strides in understanding traditional design and have built a large number of tools and infrastructure to make it feasible and profitable. The author hopes that, given the massive benefits that we could reap from efficient self-organizing hardware for AI applications, the community will increase its focus on these new ideas. Success would allow us to meet the growing compute demand in the short term and usher in a technological revolution in the long term.
\bibliographystyle{ACM-Reference-Format}
\begin{itemize}
\itemsep=0pt
\parsep=0pt
\item The role of women within society has increased in importance and the way we approach and refer to them is crucial. Misogyny is a form of discrimination towards women and has been spreading exponentially through the Web. Memes, typically composed of pictorial and textual components, are a popular communication tool on social media platforms and are often used to convey misogynistic messages. The automatic detection of misogynistic content is becoming fundamental to counteract online discrimination, cybersexism and violence.
As misogyny is conveyed by both textual and visual media, a dataset of multimodal content, such as a dataset of memes, is mandatory to build efficient machine learning techniques that promptly intercept these offensive messages online.
\item The presented dataset will be particularly useful for: i) researchers from the Natural Language Processing and Computer Vision communities who deal with social media data analysis and are interested in developing machine learning models that combine textual and visual cues; ii) companies that develop artificial intelligence strategies to control social media activities; and iii) social science researchers who study gender discrimination, especially on the Web.
\item This dataset presents a data structure that can be adopted for the collection of further data on the topic to develop more robust machine learning techniques. In fact the automatic detection of misogynistic content is particularly challenging especially considering that: (i) misogynistic and non misogynistic memes can share the same visual content but a different text, and (ii) misogyny can be expressed by text, image or by their combination.
This dataset can also be adopted to increase the quality of the labelling task, as it can be used as a gold standard to check the reliability of annotators on crowdsourcing platforms.
\item This dataset will pave the way not only to the development of machine learning techniques able to promptly intercept offensive content against women on the Web, but will also stimulate other researchers to devote their attention to this phenomenon, increasing the awareness of and sensitivity to misogyny as well as to other forms of discrimination.
\item This dataset provides the labels from both domain experts and subjects recruited within the population. The comparison between the two distributions is also significant, not only with respect to misogyny by itself, but also with respect to the perception of aggressiveness and irony.
\item To the best of our knowledge, this is the first dataset of memes addressing the misogyny phenomenon. A related dataset is the one proposed by Facebook AI for the Hateful Meme (HM) Challenge \cite{kiela2021hateful}. The main difference between the HM dataset and the proposed one is that our memes have been collected from social media platforms, describing a real scenario, while the others could refer to synthetic memes automatically generated as benign confounders of similar hateful memes. Moreover, while the HM dataset is generally devoted to hateful and non-hateful memes, without focusing on any specific type or target of hate, our dataset is centered on women as the target.
\end{itemize}
\section*{Data Description}
In recent years, misogyny has found in the Web a new and powerful means of diffusion, together with the new phenomenon of cybersexism, where women are often victims of offensive messages and, in the most serious cases, of abuse and threats.
Online platform providers have introduced policies to prevent offensive content. However, due to the speed of dissemination of messages in social media, systems able to automatically filter offensive content are urgently needed \cite{anzovino,gasparini}.
While new opportunities for females have been opened on the Web, the systematic inequality and discrimination found offline are replicated in online spaces
in the form of offensive content against them \cite{frenda,plaza}. \\
Memes are an especially popular communication means for social media users, being able to efficaciously convey funny and/or ironic jokes \cite{memeDef}. A meme is defined as an image composed of pictorial information on which a text is superimposed a posteriori by a human \cite{memeDef}. Given this characterization, memes have been progressively used to convey hate \cite{farrell} and, in this specific dataset, we are interested in memes circulating on the Web and targeting women with sexist and aggressive messages \cite{paciello2021online, franks2011unwilling}.\\
In order to develop efficient machine learning techniques able to automatically detect multi-modal misogynistic messages online, we here present a dataset composed of:
\begin{itemize}
\item \textbf{800 memes}, saved as jpeg images, resized to have the greatest dimension equal to 640 pixels. These memes are saved with a progressive unique ID.
\item A \textbf{table} saved as a .csv file, where all the data collected are reported, according to the following structure:
\begin{itemize}
\item \textit{memeID}: unique identifier associated to the meme;
\item \textit{text}: transcription of the text reported in the meme;
\item \textit{misogynisticDE}: Boolean attribute related to the presence of misogynistic content as reported by the Domain Experts (DE);
\item \textit{aggressiveDE}: Boolean attribute; in case of a misogynist meme it represents the presence of aggressiveness, as reported by the DE;
\item \textit{ironicDE}: Boolean attribute; in case of a misogynist meme it represents the presence of irony, as reported by the DE;
\item \textit{misogynisticCS}: Boolean attribute related to the presence of misogynistic content, as reported by the annotators of the CrowdSourcing platform (CS);
\item \textit{aggressiveCS}: Boolean attribute; in case of misogynist meme it represents the presence of aggressiveness, as reported by the CS;
\item \textit{ironicCS}: Boolean attribute; in case of misogynist meme it represents the presence of irony, as reported by the CS.
\item \textit{confidence\_M\_CS}: agreement on the misogynist attribute among the CS;
\item \textit{confidence\_A\_CS}: agreement on the aggressiveness attribute among the CS;
\item \textit{confidence\_I\_CS}: agreement on the misogynist irony among the CS;
\end{itemize}
\end{itemize}
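As a minimal sketch, the .csv table described above can be parsed with the Python standard library. The column names follow the structure listed here, but the sample rows, their values and the idea of reading from an in-memory string (rather than the actual dataset file) are purely illustrative assumptions.

```python
import csv
import io

# Illustrative sample mirroring the described .csv structure; the row values
# below are invented for demonstration, not taken from the dataset.
sample = """memeID,text,misogynisticDE,aggressiveDE,ironicDE,misogynisticCS,aggressiveCS,ironicCS,confidence_M_CS,confidence_A_CS,confidence_I_CS
1,example text A,1,0,1,1,0,1,1.0,0.66,1.0
2,example text B,0,0,0,0,0,0,1.0,1.0,1.0
"""

BOOL_FIELDS = ["misogynisticDE", "aggressiveDE", "ironicDE",
               "misogynisticCS", "aggressiveCS", "ironicCS"]
CONF_FIELDS = ["confidence_M_CS", "confidence_A_CS", "confidence_I_CS"]

def load_annotations(fp):
    """Parse the annotation table, casting Boolean and confidence fields."""
    rows = []
    for row in csv.DictReader(fp):
        for f in BOOL_FIELDS:
            row[f] = row[f] == "1"
        for f in CONF_FIELDS:
            row[f] = float(row[f])
        rows.append(row)
    return rows

rows = load_annotations(io.StringIO(sample))
n_misogynistic_de = sum(r["misogynisticDE"] for r in rows)
print(len(rows), n_misogynistic_de)  # 2 1
```

In practice, `io.StringIO(sample)` would be replaced by an open file handle on the distributed .csv file.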
\noindent We underline that the labels of the 3 experts have an agreement of 100\% for all the 800 memes, as this was the criterion used to select them among all the downloaded memes.
\paragraph{Data Distribution}
The distributions of the three labels (misogyny, aggressiveness and irony) given by the domain experts are reported in Figure \ref{fig:pieED}. As the dataset of 800 memes was selected starting from the misogynistic DE labels, the first pie chart on the left confirms that the dataset is equally distributed between the two classes, with 400 memes each.
The second and third pie charts report, for the memes labelled as misogynistic, the percentage of them considered aggressive and ironic, respectively.
In Figure \ref{fig:pieCS}, the corresponding label distributions given by the CS annotators are reported.
The first pie chart on the left shows that, in this case, the memes labelled as misogynistic are fewer than in the DE evaluation.
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{Images/pieED.png}
\caption{Distribution of the three labels given by the domain experts (DE)}
\label{fig:pieED}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{Images/pieCS.png}
\caption{Distribution of the three labels given by the crowdsourcing annotators (CS)}
\label{fig:pieCS}
\end{figure}
Given the labels provided by the DE and the CS annotators, 59 memes have been annotated differently with respect to the misogynistic label. Among them, 76.28\% were considered misogynistic by the experts but not by the annotators, while only 23.72\% were considered misogynistic only by the annotators.
The complexity of the annotation process is also reflected by the agreements of the CS labellers, which have been reported in the corresponding columns in the Annotation Data sheet.
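The disagreement breakdown reported above can be sketched as follows; the label lists here are synthetic stand-ins, while in practice the per-meme Boolean labels would come from the `misogynisticDE` and `misogynisticCS` columns of the dataset.

```python
# Synthetic DE/CS labels for illustration only (one Boolean per meme).
de_labels = [True, True, True, False, False, True]   # domain experts
cs_labels = [True, False, False, False, True, True]  # crowdsourcing

# Memes on which the two groups disagree about the misogynistic label.
disagreements = [(de, cs) for de, cs in zip(de_labels, cs_labels) if de != cs]
n = len(disagreements)

# Share of disagreements where only the experts saw misogyny, and vice versa.
de_only = sum(1 for de, cs in disagreements if de and not cs) / n
cs_only = sum(1 for de, cs in disagreements if cs and not de) / n
print(n, round(de_only * 100, 2), round(cs_only * 100, 2))  # 3 66.67 33.33
```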
\section*{Experimental Design, Materials and Methods}
\subsection*{Meme collection process}
\noindent The most popular social media platforms, i.e. Facebook, Twitter, Instagram and Reddit, have been considered in the data collection phase. Memes that convey potentially misogynistic content have been collected in the October--November 2018 period through the following operations \cite{ACIIW}:
\begin{itemize}
\item Searching for threads or conversations dedicated and written by anti-women/feminists supporters, such as the Men Going Their Own Way (MGTOW) website and the related thread on Reddit;
\item Exploring discussions on sexism in political or social events;
\item Browsing hashtags such as \#girl, \#girlfriend, \#women.
\end{itemize}
Subsequently, the dataset has been enlarged by collecting memes from websites dedicated to meme creation and/or collection, as follows:
\begin{itemize}
\item Browsing hashtags such as \#girl, \#girlfriend, \#women, \#feminist;
\item Consulting collections on all the variations of famous memes involving female characters.
\end{itemize}
\noindent In parallel, memes with non-misogynistic content have been manually downloaded from the same web sources and by adopting the same keywords, in order to obtain a non-trivial collection of memes.
\subsection*{Expert labelling and dataset definition}
\noindent Three domain experts have evaluated all the collected memes, labelling them as misogynist or non-misogynist.
The final dataset is composed of 800 memes, selected among those with an agreement of 100\%, in order to have a perfect balanced dataset with respect to the two classes.
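The selection criterion described here, keeping only memes with unanimous expert agreement and then balancing the two classes, can be sketched as below. The meme IDs and votes are invented for illustration; the actual selection was performed on the full collection of downloaded memes.

```python
from collections import defaultdict

# Hypothetical per-meme votes from the three domain experts.
expert_labels = {
    "m1": [True, True, True],
    "m2": [True, False, True],   # no unanimous agreement -> discarded
    "m3": [False, False, False],
    "m4": [True, True, True],
    "m5": [False, False, False],
}

# Keep only memes on which all three experts agree (100% agreement).
unanimous = {mid: votes[0] for mid, votes in expert_labels.items()
             if len(set(votes)) == 1}

# Group by class, then take the same number of memes from each class.
by_class = defaultdict(list)
for mid, label in sorted(unanimous.items()):
    by_class[label].append(mid)

k = min(len(by_class[True]), len(by_class[False]))
balanced = by_class[True][:k] + by_class[False][:k]
print(sorted(balanced))  # ['m1', 'm3', 'm4', 'm5']
```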
\noindent The experts have also annotated the misogynistic memes with respect to aggressiveness and irony. This phase provided the three corresponding boolean labels reported in the data sheet with DE suffix.
\subsection*{Dataset Annotation through the crowdsourcing platform}
\noindent The 800 memes selected were labelled adopting a crowdsourcing platform (Figure Eight in 2018, now called Appen, https://appen.com/).\\
A controlled labelling experiment was chosen to provide judgments from trusted and reliable participants with equally distributed age (between 20 and 50 years old) and gender. All the data were anonymized.
\noindent The annotation task was designed as follows:
\begin{itemize}
\item the order of the memes in the experiment is randomized to avoid bias;
\item the maximum number of judgments that any contributor can provide is limited to 40 memes, to limit fatigue and consequent unreliable annotations;
\item any annotator can leave whenever he/she wants;
\item for each annotator, the task expires after an hour and a half, regardless of the number of evaluated memes, in order to limit external stimuli;
\item each annotation page shows only one meme at a time, in order not to influence or bias the participant by showing other memes simultaneously;
\item each meme is evaluated by three different subjects.
\end{itemize}
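One way to satisfy the constraints above (three judgments per meme, at most 40 memes per contributor, randomized assignment) is sketched below. This is only an illustration of a feasible assignment scheme, not the actual scheduler used by the crowdsourcing platform, and the number of annotators is an assumption.

```python
import random

def assign(meme_ids, n_annotators, per_meme=3, max_per_annotator=40, seed=0):
    """Randomly assign each meme to `per_meme` annotators, respecting the cap."""
    rng = random.Random(seed)
    load = {a: 0 for a in range(n_annotators)}
    assignment = {}
    for meme in meme_ids:
        # Only annotators below the per-contributor cap are eligible.
        eligible = [a for a, l in load.items() if l < max_per_annotator]
        chosen = rng.sample(eligible, per_meme)
        for a in chosen:
            load[a] += 1
        assignment[meme] = chosen
    return assignment

# 800 memes x 3 judgments = 2400 assignments, well within 120 x 40 capacity.
plan = assign(list(range(800)), n_annotators=120)
loads = {}
for chosen in plan.values():
    for a in chosen:
        loads[a] = loads.get(a, 0) + 1
print(len(plan), max(loads.values()) <= 40)  # 800 True
```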
\noindent For each meme, the question
\textbf{In your opinion, is this meme misogynistic?} is proposed to the participant, as depicted in
Figure \ref{fig:sexist}.
\begin{figure}
\centering
\includegraphics[width=0.4\columnwidth]{Images/sexistmeme.jpg}
\caption{The first question in the crowdsourcing annotation task}
\label{fig:sexist}
\end{figure}
\noindent Then, only in the case of a meme evaluated as misogynistic, the following two questions were proposed:
\begin{enumerate}
\item In your opinion, is this meme ironic?
\item In your opinion, is this meme aggressive?
\end{enumerate}
\noindent Definitions and guidelines were not provided to the CS annotators, since we wanted to collect their perception of the misogyny, irony and aggressiveness present in the meme without influencing their decisions.
This crowdsourcing annotation provided the three corresponding boolean labels reported in the data sheet with CS suffix, together with the corresponding confidence level (in terms of percentage of agreement).
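A per-meme agreement value among three annotators can be sketched as the share of votes matching the majority label, as below. This mirrors the reported percentage-of-agreement columns, though the exact formula used by the platform (e.g. trust-weighted confidence) may differ.

```python
def agreement(votes):
    """Fraction of votes that match the majority label."""
    majority = max(set(votes), key=votes.count)
    return votes.count(majority) / len(votes)

print(agreement([True, True, False]))  # 0.6666666666666666
print(agreement([True, True, True]))   # 1.0
```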
\subsection*{Text transcription}
\noindent Finally, the text superimposed on the images has been manually transcribed for each of the 800 memes.
\section*{Ethics Statement}
\noindent An informed consent form was given to all the involved subjects, indicating the presence of possibly explicit content, explaining how to perform the tasks, and informing them of the possibility to stop the labelling activity whenever they wanted.
No personal data have been acquired to identify the labellers; the dataset is therefore GDPR compliant.
\section*{Acknowledgments}
\noindent We want to give our thanks especially to Silvia Corchs for her valuable scientific support in facing the problem of misogyny detection, and to Monica Mantovani and Gaia Campisi, for their supporting work during the collection and validation of the dataset.
\section*{Declaration of Competing Interest}
\noindent
The authors declare that they have no known competing
financial interests or personal relationships which have, or could be
perceived to have, influenced the work reported in this article.
\bibliographystyle{model1-num-names}
\section*{Data in Brief Article Template}
\noindent \href{https://www.journals.elsevier.com/data-in-brief/about-data-in-brief/data-in-brief-faq}%
{\textit{Data in Brief}}
is an open access journal that publishes data articles.
Please note:
\begin{itemize}
\item A data article is different to a research article, so it is
important to \textbf{use the template} below to prepare your manuscript for Data
in Brief.
\item A data article should \textbf{simply describe data} without providing
conclusions or interpretive insights.
\item Before you start writing your data article you should read the
guidance on
\href{https://www.journals.elsevier.com/data-in-brief/policies-and-guidelines/what-data-are-suitable-for-data-in-brief}%
{What Data are Suitable for Data in Brief}.
\item It is mandatory that Data in Brief authors share their research data:
\begin{itemize}
\item If you have \textbf{raw data} (also referred to as primary, source or
unprocessed data) relating to any charts, graphs or figures in the
manuscript, these data must be publicly available, either with the data
article (e.g. as a supplementary file) or hosted on a trusted data
repository.
\item If you are describing \textbf{secondary data} you are required to provide
a list of the primary data sources used \underline{and} to make the full secondary
dataset publicly available, either with the data article (e.g. as a
supplementary file) or hosted on a trusted data repository.
\item Although we allow supplementary files, it is preferred that
authors deposit their data in a trusted data repository ($>$70\% of
Data in Brief authors now do this). See our
\href{https://www.elsevier.com/authors/author-resources/research-data/data-base-linking#repositories}%
{list of supported data repositories}.
\item For data that, for ethical reasons, require access controls a
mechanism must be provided so that our Editors and reviewers may access
these data without revealing their identities to authors (more
information is provided in the template \hyperlink{target1}{below}).
\end{itemize}
\end{itemize}
Have you any questions? See a list of frequently asked questions
\href{https://www.journals.elsevier.com/data-in-brief/about-data-in-brief/data-in-brief-faq}{here},
or email our Managing Editors:
\href{mailto:[email protected]}{[email protected]}. This
step-by-step
\href{https://www.journals.elsevier.com/data-in-brief/about-data-in-brief/how-to-submit-your-research-data-article-data-in-brief}%
{video} guide will also tell you how to complete the
template correctly to maximise your chances of acceptance.
\vskip6pt
\noindent Authors can submit to Data in Brief in two ways:
\begin{enumerate}
\item[\bf(1)] \textbf{If you are submitting your data article directly to Data in
Brief, you can now skip the next section and complete the}
\hyperlink{target2}{\textbf{Data Article template}}.
\item[\bf(2)] \textbf{If you are submitting your data article to Data in Brief via
another Elsevier journal as a co-submission (i.e. with a Research
Article), please read the} \hyperlink{target3}{\textbf{Co-submission Instructions}}
\textbf{on the next page
before completing the} \hyperlink{target2}{\textbf{Data Article template}}.
\end{enumerate}
\hypertarget{target3}{}
\section*{Co-submission Instructions}
A co-submission to~\textit{Data in Brief}~is done at the same time that you
submit (or resubmit, after revision) a research article to another
Elsevier journal. For co-submissions you therefore submit your
\textit{Data in Brief} data article manuscript via the other journal's
submission system and \underline{not} directly to
\textit{Data in Brief} itself.
\vskip6pt\noindent
The other Elsevier journal's Guide for Authors will state if a
co-submission is offered by that journal, and any revision letter/email
you receive from a participating journal will contain an offer to
submit a data article to \textit{Data in Brief}.
\vskip6pt\noindent
\textbf{To complete a co-submission you will need to zip your
~\textit{Data in Brief}~
manuscript file and all other files relevant to the
~\textit{Data in Brief}~
submission (including any supplementary data files) into a single .zip
file, and upload this as a "Data in Brief"-labelled item in the other
journal's submission system when you submit manuscript to that journal.
The .zip file will then be automatically transferred to
~\textit{Data in Brief}~
when your research article is accepted for publication in the other
journal, and when published your original research article and data
article will link to each other on ScienceDirect.}
\vskip6pt\noindent
\textbf{As~\textit{Data in Brief}~is open access, a moderate article publication charge
(APC) fee is payable on publication. For more information about the
APC, please see
\href{https://www.elsevier.com/journals/data-in-brief/2352-3409/open-access-journal}%
{here}.}
\vskip6pt\noindent
\textit{Data in Brief} \underline{requires} that authors share their research data. This can
be done by submitting it with the data article (e.g. as a supplementary
file) or by hosting on a trusted data repository (the latter is
preferred). Failure to do this will delay publication of your
co-submission.
\vskip6pt\noindent
\textbf{If you have any questions, please contact:
\href{mailto:[email protected]}{[email protected]}}
\vskip6pt\noindent
Please note, authors should not republish the same data presented in
their original research article in a \textit{Data in Brief} co-submission, as
this could constitute duplicate publication; however, \textit{Data in Brief}
welcomes the publication of any data article that fulfils one or more
of the following criteria:
\checkmark A description of the supplementary data that would
previously have been hosted as supplementary electronic files alongside
your original research article.*
\checkmark A description of the full dataset or additional information
that will aid reuse of the data.
\checkmark A detailed description of the raw data relating to the
charts, graphs or figures in your companion research article, if making
these data available will substantially enhance reproducibility and/or
reanalysis of the data.
\checkmark Any negative datasets or data from intermediate experiments
related to your research.
\textsf{X} Review articles or supplemental files from a review article
are not considered original data and are typically unsuitable for Data
in Brief.
\vskip12pt\noindent
* If describing supplementary data that you previously planned to
publish as supplem
entary electronic files hosted alongside the original
research article, it is requested that you either\break
deposit these in a
repository (preferred) or submit these to \textit{Data in Brief} alongside the
data article. \textbf{They should not be published as supplementary files with
your research article in the other journal}.
\clearpage
\hypertarget{target2}{}
\section*{Data Article template}
\noindent
Please fill in the template below. All sections are mandatory unless
otherwise indicated. Please read all instructions in [square brackets]
carefully and ensure that you delete all instruction text (including
the questions) from the template before submitting your article.
\vskip6pt\noindent
Reminder: A data article simply describes data and should not provide
conclusions or interpretive insights, so \textbf{avoid} using words such as
`study', `results' and `conclusions'.
\vskip6pt\noindent
We would welcome feedback on this template and how it might be
improved. To provide anonymous feedback via a very short survey, please
click \href{https://forms.office.com/Pages/ResponsePage.aspx?id=P-50kiWUCUGif5-xXBBnXTeXkbO343VFrbpYVBvxdZtUM05UVjIwM0U4WlRKUldCOTNMRUQwOVRHTy4u}%
{here}.
\vskip6pt\noindent
{\small\textbf{\textit{Please delete this line and everything above it before submitting your
article, in addition to anything in [square brackets] below, including
in the Specifications Table}}\vskip6pt\hrule\vskip12pt}
{\fontsize{7.5pt}{9pt}\selectfont
\noindent\textbf{Specifications Table}
Every section of this table is mandatory.
Please enter information in the right-hand column and remove all the instructions
\begin{longtable}{|p{33mm}|p{94mm}|}
\hline
\endhead
\hline
\endfoot
Subject & [Please select one CATEGORY for your manuscript from the list
available at:\break
\href{https://www.elsevier.com/__data/assets/excel_doc/0012/736977/%
DIB-categories.xlsx}{DIB categories}.]\\
\hline
Specific subject area & [Briefly describe the narrower subject area. Max 150 characters]\\
\hline
Type of data & [List the type(s) of data this article describes.
Simply delete from this list as appropriate:]
Table\newline
Image\newline
Chart\newline
Graph\newline
Figure\newline
[Any other type not listed- please specify]\\
How data were acquired & [State how the data were acquired: E.g. Microscope,
SEM, NMR, mass spectrometry, survey* etc.\newline
Instruments: E.g. hardware, software, program\newline
Make and model and of the instruments used:\newline
{\fontsize{7pt}{8pt}\selectfont
*\,if you conducted a survey you must submit a copy of the
survey(s) used (either provide these as supplementary material
file or provide a URL link to the survey
in this section of the table).
If the survey is not written in English,
please provide an English-language translation.}]\\
\hline
Data format & [List your data format(s). Note, unless you are describing secondary data,
all raw data must be provided (either with this data article or linked to a repository).
Simply delete from this list as appropriate:]\newline
Raw\newline
Analyzed\newline
Filtered\newline
[Any other format not listed- please specify]\\
\hline
Parameters for
data\newline
collection & [Provide a brief description of which conditions were considered
for data collection. Max 400 characters]\\
\hline
Description of
data\newline
collection & [Provide a brief description of how these data were collected.
Max 600 characters]\\
\hline
Data source location & [Fill in the information available, and delete from this list as appropriate:\newline
Institution:\newline
City/Town/Region:\newline
Country:\newline
Latitude and longitude (and GPS coordinates, if possible) for collected samples/data:\newline
If you are describing secondary data, you are required to provide a list of
the primary data sources used in the section.\newline
Primary data sources: ]\\
\hline
\hypertarget{target1}
{Data accessibility} & [State here if the data are either hosted `With the article' or on a public repository.
In the interests of openly sharing data we recommend hosting your data in a
trusted repository ($>$70\% of Data in Brief authors now use a data repository).
See our \href{https://www.elsevier.com/authors/author-resources/research-data/data-base-linking#repositories}{list of supported data repositories}.
We suggest \href{https://data.mendeley.com/}{Mendeley Data} if you do not have a trusted repository.\newline
Please delete or complete as appropriate, either:]\newline
With the article\newline
[Or, if in a public repository:]\newline
Repository name: [Name repository]\newline
Data identification number: [provide number]\newline
Direct URL to data: [e.g. https://www.data.edu.com - please note,\newline
this URL should be working at the time of submission]\newline
[\textbf{In addition, for data with access controls only:} For data that,
for ethical reasons (i.e. human patient data),
require access controls please describe
how readers can request access these data and provide a link to any
Data Use Agreement (DUA) or upload a copy as a supplementary file.]\newline
Instructions for accessing these data:\newline
[\textit{Important: if your data have access controls a mechanism must also be
provided so that our Editors and reviewers may access these data
without revealing their identities to authors, please include
these instructions with your submission. Please contact the Managing
Editors ([email protected]) if you have any questions.}]\\
\hline
Related
research\newline
article & [If your data article is related to a research article - \textbf{especially
if it is a co-submission} - please cite your associated research
article here. Authors should only list \textbf{one article}.\newline
Authors' names\newline
Title\newline
Journal\newline
DOI: \textbf{OR} for co-submission manuscripts `In Press'\newline
\textbf{For example, for a direct submission:}\newline
J. van der Geer, J.A.J. Hanraads, R.A. Lupton, The art of writing a scientific article,
J. Sci. Commun. 163 (2010) 51-59. https://doi.org/10.1016/j.Sc.2010.00372\newline
\textbf{Or, for a co-submission (when your related research article has not yet published):}\newline
J. van der Geer, J.A.J. Hanraads, R.A. Lupton, The art of writing a
scientific article, J. Sci. Commun. In Press.\newline
\textbf{Or, if your data article is not directly related to a research article,
please delete this last row of the table.}]
\end{longtable}
}
\section*{Value of the Data}
[Provide 3-6 bullet points explaining why these data are of value to the scientific community.
Bullet points 1-3 must specifically answer the questions next to the bullet point,
but do not include the question itself in your answer. You may
provide up to three additional bullet points to outline the value of these data.
Please keep points brief, with ideally no more than 400 characters for each point.]
\begin{itemize}
\itemsep=0pt
\parsep=0pt
\item Your first bullet point must explain why these data are useful or important?
\item Your second bullet point must explain who can benefit from these data?
\item Your third point bullet must explain how these data might be used/reused for
further insights and/or development of experiments.
\item In the next three points you may like to explain how these data could
potentially make an impact on society and highlight any other additional value of these data.
\item ....
\end{itemize}
\section*{Data Description}
\noindent [Individually describe each data file (i.e. figure 1, figure 2, table
1, dataset, raw data, supplementary data, etc.) that are included in
this article. Please make sure you refer to every data file and provide
a clear description for each - do not simply list them. No insight,
interpretation, background or conclusions should be included in this
section. Please include legends with any tables, figures or graphs.
\noindent\textbf{Tip:} do not forget to describe any supplementary data files.]
\section*{Experimental Design, Materials and Methods}
\noindent [Offer a complete description of the experimental design and methods
used to acquire these data. Please provide any programs or code files
used for filtering and analyzing these data. It is very important that
this section is as comprehensive as possible. If you are submitting via
another Elsevier journal (a co-submission) you are encouraged to
provide more detail than in your accompanying research article. There
is no character limit for this section; however, no insight,
interpretation, or background should be included in this section.
\noindent\textbf{Tip:} do not describe your data (figures, tables, etc.) in this section,
do this in the Data Description section above.]
\section*{Ethics Statement}
\noindent [Please refer to the journal's
\href{https://www.elsevier.com/journals/data-in-brief/2352-3409/guide-for-authors}{Guide for Authors}
for more information on
the ethical requirements for publication in Data in Brief. In addition
to these requirements:
\noindent\textbf{If the work involved the use of human subjects:}
please include a statement here confirming that informed consent was
obtained for experimentation with human subjects;
\noindent\textbf{If the work involved animal experiments:} please
include a statement confirming that all experiments comply with
the \href{https://www.nc3rs.org.uk/arrive-guidelines}{ARRIVE\ guidelines} and were be carried out in accordance with the
U.K. Animals (Scientific Procedures) Act, 1986 and associated
guidelines, \href{https://ec.europa.eu/environment/chemicals/lab_animals/legislation_en.htm}{EU Directive 2010/63/EU for animal experiments}, or the
National Institutes of Health guide for the care and use of Laboratory
animals (NIH Publications No. 8023, revised 1978)]
\section*{Acknowledgments}
Acknowledgments should be inserted at the end of the paper, before the
references, not as a footnote to the title. Use the unnumbered
Acknowledgements Head style for the Acknowledgments heading.
\section*{Declaration of Competing Interest}
\noindent [All authors are required to report the following information:
\begin{enumerate}
\item[(1)] All third-party financial support for the work this article;
\item[(2)] All financial relationships with any entity that could be
viewed as relevant to data described in this manuscript;
\item[(3)] All sources of revenue with relevance to this work where
payments have been made to authors, or their institutions on their
behalf, within the 36 months prior to submission;
\item[(4)] Any other interactions with the sponsor, outside of the
submitted work;
\item[(5)] Any relevant patents or copyrights (planned, pending or
issued);
\item[(6)] Any other relationships or affiliations that may be
perceived by readers to have influenced, or give the appearance of
potentially influencing, what has been written in this article.
\end{enumerate}
As a general guideline, it is usually better to
disclose a relationship than not. This information will be acknowledged
at publication in the manuscript. If there are no known competing
financial interests or personal relationships that could have appeared
to influence the work reported in this paper, please include this
statement.]
\vskip12pt\noindent
The authors declare that they have no known competing
financial interests or personal relationships which have, or could be
perceived to have, influenced the work reported in this article.
\vskip12pt\noindent
[If there are financial interests/personal relationships which may be
considered as potential competing interests, please declare them here.]
\subsection*{Note}
\label{sec1}
Any instructions relevant to the \verb+elsarticle.cls+ are applicable
here as well. See the online instructions available at:
\makeatletter
\if@twocolumn
\begin{verbatim}
http://support.river-valley.com/wiki/
index.php?title=Elsarticle.cls
\end{verbatim}
\else
\begin{verbatim}
http://support.river-valley.com/wiki/index.php?title=Elsarticle.cls
\end{verbatim}
\fi
\section*{References}
\noindent [References are limited (approx. 15) and excessive self-citation is not
allowed. \textbf{If your data article is co-submitted via another Elsevier
journal, please cite your associated research article here.}
\noindent\textbf{Reference style:}
Text: Indicate references by number(s) in square brackets in line with
the text. The actual authors can be referred to, but the reference
number(s) must always be given.
\noindent Example: '..... as demonstrated [3,6]. Barnaby and Jones [8] obtained a different result ....'
\noindent [Use \verb+\cite+ command to cite a reference list item in text.
\noindent These are examples for reference citations \cite{1}.
\cite{2}.
\cite{4}.]
\subsection*{Reference list using Bib\TeX database file}
\noindent [If Bib\TeX database file is used for reference data please use
\begin{verbatim}
\bibliographystyle{model1-num-names}
\section{Introduction}
In classical mechanics \cite{1} the velocity $\mathbf{v}\left(
t\right)$ of a material point is defined as
\begin{equation}
\label{1}\mathbf{v}\left(t\right) =\frac{d\mathbf{x}\left(
t\right)}{dt},
\end{equation}
where $\mathbf{x}\left(t\right)$ is the trajectory function of the
moving point and $t$ is the time coordinate in the chosen inertial reference
frame. In special relativity \cite{2} the notion of the three-dimensional
velocity $\mathbf{v}\left(t\right)$ is generalized to the notion
of the four-velocity defined as
\begin{equation}
\label{2}u^\mu\left(\tau\right)=\frac{dx^\mu\left(\tau\right)}{d\tau},
\end{equation}
where the space--time position of the material point is given by four
functions $x^\mu \left( \tau \right)$ ($\mu =0,1,2,3$; $x^0 =ct$) parameterized
by the so-called proper time
\begin{equation}
\label{3}d\tau =dt\sqrt{1-\frac{\mathbf{v}^2\left(t\right)}{c^2}}.
\end{equation}
Due to its dependence on velocity of the moving material point, the notion
of the proper time $\tau $ is different for each material point and for
non-uniform motions the proper time is not a uniformly changing function of
the coordinate time $t$. Moreover, for non-uniform motions the proper time
coincides with the coordinate time in continuously changing inertial
reference frames (the momentarily rest frames). Only for uniformly moving
material points the proper time coincides with the coordinate time in one
reference frame (the rest frame of the moving material point). In addition,
for many particle systems the trajectories of particles are parametrized by
different proper times and it is almost impossible to describe the
interaction between particles without the notion of propagating fields.
Therefore relativistic mechanics cannot be developed as fully as
nonrelativistic mechanics.
Fortunately, there exists another way of passing from Galilean--Newtonian
mechanics to the relativistic one \cite{3} which is not based on Eq. (\ref{2}).
Indeed, it is easy to see that rewriting Eq. (\ref{1}) in the form
\begin{equation}
\label{4}d\mathbf{x}\left(t\right)-\mathbf{v}\left(
t\right)dt=0
\end{equation}
we can immediately generalize it to a relativistic (as a matter of fact,
generally) covariant form
\begin{equation}
\label{5}V_\nu ^\mu \left( x\right) dx^\nu =0,
\end{equation}
where a new mixed tensor field $V_\nu ^\mu \left( x\right) $ is introduced.
We shall name this tensor as {\it the velocity tensor}.
It is clear that for nontrivial velocity tensors ($V_\nu ^\mu \left(
x\right) \neq \delta _\nu ^\mu $) Eq. (\ref{5}) defines some
submanifolds of the considered space--time. We shall require of the velocity
tensors that these submanifolds always be one dimensional, which means
that Eq. (\ref{5}) must determine curves interpreted as
trajectories of the moving material points.
Form (\ref{5}) has the obvious advantage over (\ref{1}) and (\ref{2})
that it does not use any evolution parameter and therefore may be applied
to systems with arbitrary number of material points by generalizing (\ref{5})
to the set of relations
\begin{equation}
\label{7}V_{a,\nu }^\mu \left( x_a\right) dx_a^\nu =0,
\end{equation}
where the index $a$ labels different material points.
At each space--time event the velocity tensors (different for different
material points) fix the infinitesimal directions in which any material
point located at that event may move. In addition, {\it forms (\ref{5})
and (\ref{7}) are invariant under arbitrary changes of space--time
coordinates.} Therefore, they may be used to formulate a generally covariant
scheme for classical mechanics.
The aim of the present paper is to describe some interesting properties of
the velocity tensors. We shall also provide the explicit construction of the
general form of such tensors.
It is clear that velocity tensors are related to the kinematical part of
mechanics. We shall also touch upon the dynamical aspect of mechanics.
\section{General Properties of the Velocity Tensors}
Equation (\ref{5}), in $n$-dimensional space--time, is an eigenvalue equation for
the $n\times n$ matrix $V$ (defined by the velocity tensor) for
the eigenvalue $0$, while the infinitesimal displacements $dx^\mu $ in any
motion are the eigenvectors of the velocity tensors belonging to this
eigenvalue.
Writing the characteristic equation for the general eigenvalue problem
\begin{equation}
\label{8}V_\nu ^\mu \left( x\right) dx^\nu =\lambda dx^\mu
\end{equation}
we get the equation for the possible eigenvalues $\lambda $
\begin{equation}
\label{9}\sum_{j=0}^n\left( -\lambda \right) ^{n-j}Tr_jV\left( x\right) =0,
\end{equation}
where $Tr_jV\left(x\right)$ denotes the sum of the diagonal minors of order $j$
of the matrix $V\left(x\right).$ Obviously, $Tr_1V\left(x\right)$
coincides with the ordinary trace of $V\left(x\right)$ and $Tr_nV\left(
x\right)$ is the determinant of $V\left(x\right)$. For shortness, we also
use the convention
\begin{equation}
\label{10}Tr_0V\left( x\right) =1
\end{equation}
for any matrix $V\left( x\right) $.
For physical reasons we must require that the characteristic equation
admit only the eigenvalue $0$ and that the corresponding eigenspace be
one dimensional, so that any velocity tensor has a unique
eigenvector, which fixes the infinitesimal displacements in any motion.
The characteristic equation (\ref{9}) must be
therefore of the form
\begin{equation}
\label{11}\lambda ^n=0,
\end{equation}
from which we get the following conditions for any velocity tensor:
\begin{equation}
\label{13}Tr_jV(x)=0
\end{equation}
for all $j>0.$
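Conditions (\ref{13}) state that $V$ is nilpotent, and they are easy to check numerically. The following sketch (Python with \texttt{numpy} assumed; purely illustrative) uses the fact that the coefficients returned by \texttt{numpy.poly} are, up to alternating signs, exactly the $Tr_jV$:

```python
import numpy as np

def traces_Tr_j(A):
    """Return [Tr_0, Tr_1, ..., Tr_n] for a square matrix A.
    numpy.poly(A) gives the coefficients of det(lambda*I - A) as
    [1, -Tr_1, +Tr_2, -Tr_3, ...], so we strip the alternating signs."""
    c = np.poly(A)
    return [((-1) ** j) * c[j] for j in range(A.shape[0] + 1)]

# A strictly upper-triangular matrix is nilpotent: all Tr_j with j > 0 vanish
V = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
Tr = traces_Tr_j(V)
assert all(abs(t) < 1e-12 for t in Tr[1:])
print("all Tr_j vanish for j > 0: the characteristic equation is lambda^3 = 0")
```

For a generic matrix the same routine returns $Tr_1V=\mathrm{tr}\,V$ and $Tr_nV=\det V$, in agreement with the text.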
Conditions (\ref{13}) are generally covariant requirements because all
the $Tr_jV$, being the coefficients in characteristic equation (\ref{9}),
are invariant under arbitrary similarity matrix transformations and it is
well known that for mixed tensors, treated as matrices, the general
coordinate transformations locally become the similarity transformations
\begin{equation}
\label{14}V\left( x\right) \rightarrow V^{\prime }\left( x^{\prime }\right)
=S\left( x\right) V\left( x\right) S^{-1}\left( x\right) ,
\end{equation}
where the matrix elements of $S\left( x\right) $ are given by
\begin{equation}
\label{15}S_\nu ^\mu \left( x\right) =\frac{\partial x^{\prime\mu}\left(
x\right) }{\partial x^\nu }
\end{equation}
for arbitrary changes of space--time coordinates $x^\mu \rightarrow x^{\prime\mu }\left( x\right) $.
Conditions (\ref{13}) impose $n$ restrictions for the $n^2$ matrix elements
of the velocity tensors. Further restrictions come from the requirement
that, in each reference frame, from (\ref{5}) it should follow that
\begin{equation}
\label{16}dx^k=v^k\left( t\right)dt,
\end{equation}
where $k=1,...,(n-1)$ and $v^k\left( t\right) $ are the components of the
standard velocity. This gives us additional $n-1$ restrictions for the
matrix elements of the velocity tensor. Finally, we shall require that in
$n$-dimensional spacetimes the motions restricted to $\left( n-k\right) $-dimensional
subspaces should be described exactly as they are in $\left( n-k\right) $-dimensional
spacetimes. This means that, restricting the motions to subspaces,
the form of the velocity tensor should reduce to the already established
forms of the velocity tensors in the corresponding lower dimensional subspaces. We shall
refer to this requirement as the reduction principle. It is easy to count
that such a requirement gives additional $2^{n-1}-2$ conditions for the
matrix elements of any velocity tensor. Altogether we are left with
$n^2-n-\left( n-1\right) -\left( 2^{n-1}-2\right) =\left( n-1\right)
^2-\left( 2^{n-1}-2\right) $ free parameters of any velocity tensor. These
free parameters should represent components of some $\left( n-1\right)$-dimensional
vector which will guarantee the covariance of the velocity
tensor under space rotations because this is the only simple geometrical
interpretation of the remaining constants in the velocity tensors. In this
way, we arrive at the equation
\begin{equation}
\label{16a}\left( n-1\right) ^2-\left( 2^{n-1}-2\right) =n-1.
\end{equation}
It is surprising that this equation has solutions only for $n=2,3$ and $4$.
This means that our construction can be performed only in two-, three- and
four-dimensional spacetimes.
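This counting can be confirmed by brute force. A minimal check (plain Python; for $n\geq 5$ the exponential term dominates and the left-hand side becomes negative, so no further solutions exist):

```python
# Check (n-1)^2 - (2^(n-1) - 2) = n - 1 over a wide range of dimensions
solutions = [n for n in range(2, 60)
             if (n - 1) ** 2 - (2 ** (n - 1) - 2) == n - 1]
print(solutions)  # -> [2, 3, 4]
```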
\section{General Construction of the Velocity Tensors}
We shall now present a simple method for constructing all possible
velocity tensors.
Let us consider spacetimes for which the passage between inertial reference
frames is described by the linear change of coordinates
\begin{equation}
\label{17}x^\mu \rightarrow x^{\prime\mu }=L_\nu ^\mu \left(
\mathbf{u}\right) x^\nu ,
\end{equation}
where $\mu ,\nu =0,1,2,3$ and $\mathbf{u}$ denote the relative
velocity of the two inertial reference frames. From (\ref{17}) and the
tensor character of the velocity tensor we get the transformation law for it
(written in the matrix form)
\begin{equation}
\label{18}V\rightarrow V^{\prime }=L\left( \mathbf{u}\right)
VL^{-1}\left( \mathbf{u}\right) =L\left( \mathbf{u}\right)
VL\left( -\mathbf{u}\right) .
\end{equation}
It is clear that we should look for velocity tensors which are functions of the
ordinary velocity of motion. Our basic assumption consists in the
requirement that the functional form of the velocity tensor is the same in
each reference frame. This means that
\begin{equation}
\label{19}V^{\prime }\left( \mathbf{v}^{\prime }\right) =V\left(
\mathbf{v}^{\prime }\right)
\end{equation}
because only under this condition can conditions (\ref{16}) be fulfilled
in each reference frame. In this way, transformation law (\ref{18}) becomes
a system of functional equations for the matrix elements of the matrix
$V$ of the following form:
\begin{equation}
\label{20}V\left( \mathbf{v}^{\prime }\right) =L\left(
\mathbf{u}\right) V\left( \mathbf{v}\right) L\left( -
\mathbf{u}\right) ,
\end{equation}
where
\begin{equation}
\label{21}v^{\prime k}=\frac{L_0^k\left( \mathbf{u}\right) +
\sum_j L_j^k\left( \mathbf{u}\right)
v^j}{L_0^0\left( \mathbf{u}\right) + \sum_j
L_j^0\left( \mathbf{u}\right) v^j}.
\end{equation}
Taking into account that the particle at rest in the unprimed reference
frame moves with the velocity $-\mathbf{u}$ in the primed frame we
can rewrite these functional equations in the explicit form:
\begin{equation}
\label{22}V\left( \frac{-u^kL_0^0\left( \mathbf{u}\right) +\sum_j
L_j^k\left( \mathbf{u}\right) v^j}{
L_0^0\left( \mathbf{u}\right) +\sum_j
L_j^0\left( \mathbf{u}\right) v^j}\right) =L\left( \mathbf{u}
\right) V\left( \mathbf{v}\right) L\left( -\mathbf{u}\right).
\end{equation}
The solutions of these equations are obtained by the standard method. We
first put $v^k=0$, then change the signs of $u^k$ and finally rename $
\mathbf{u}$ into $\mathbf{v}$. As a result we get
\begin{equation}
\label{23}V\left( \mathbf{v}\right) =L\left( -\mathbf{v}\right)
VL\left( \mathbf{v}\right) ,
\end{equation}
where on the right-hand side the matrix $V$ has constant matrix elements
equal to the elements of $V\left( 0\right) .$ The constant matrix elements
of $V$ should be determined by the additional requirements the velocity
tensors have to satisfy.
For all dimensions the first column of the velocity tensor $V$ consists of
null elements. This follows from the fact that for particles at rest the
eigenvector in (\ref{5}) is of the form
\begin{equation}
\label{24}\left(
\begin{array}{c}
dt \\
0 \\
0 \\
0
\end{array}
\right) .
\end{equation}
Such an eigenvector satisfies Eq. (\ref{5}) only if $V_0^\mu =0.$
\section{Examples}
\subsection{Two-Dimensional Space--Time}
For $n=2$ from conditions (\ref{13}) it follows that
\begin{equation}
\label{25}V=\left(
\begin{array}{cc}
0 & V_1^0 \\
0 & 0
\end{array}
\right) ,
\end{equation}
where $V_1^0$ is an arbitrary nonzero number. Since Eq. (\ref{5}) is
homogeneous, this constant can be taken as $1.$
For Galilean space--time
\begin{equation}
\label{26}L\left( u\right) =\left(
\begin{array}{cc}
1 & 0 \\
-u & 1
\end{array}
\right)
\end{equation}
and from (\ref{23}) we get
\begin{equation}
\label{27}V\left( v\right) =\left(
\begin{array}{cc}
-v & 1 \\
-v^2 & v
\end{array}
\right) .
\end{equation}
For Lorentz space--time
\begin{equation}
\label{28}L\left( u\right) =\frac 1{\sqrt{1-\frac{u^2}{c^2}}}\left(
\begin{array}{cc}
1 & -\frac u{c^2} \\
-u & 1
\end{array}
\right)
\end{equation}
and from (\ref{23}) we get
\begin{equation}
\label{29}V\left( v\right) =\frac 1{1-\frac{v^2}{c^2}}\left(
\begin{array}{cc}
-v & 1 \\
-v^2 & v
\end{array}
\right) .
\end{equation}
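Both two-dimensional results are easily verified numerically. The sketch below (Python with \texttt{numpy} assumed; units with $c=1$) builds $V(v)$ from Eq. (\ref{23}) with the rest-frame matrix (\ref{25}), compares it with Eqs. (\ref{27}) and (\ref{29}), and checks that it annihilates the displacement $(dt,dx)=(1,v)$ and satisfies conditions (\ref{13}):

```python
import numpy as np

def L_gal(u):
    """Galilean boost, Eq. (26)."""
    return np.array([[1.0, 0.0], [-u, 1.0]])

def L_lor(u, c=1.0):
    """Lorentz boost, Eq. (28), in units with c = 1 by default."""
    g = 1.0 / np.sqrt(1.0 - u**2 / c**2)
    return g * np.array([[1.0, -u / c**2], [-u, 1.0]])

V0 = np.array([[0.0, 1.0], [0.0, 0.0]])   # rest-frame velocity tensor, Eq. (25)
v = 0.3

for L, closed_form in [
    (L_gal, np.array([[-v, 1.0], [-v**2, v]])),                 # Eq. (27)
    (L_lor, np.array([[-v, 1.0], [-v**2, v]]) / (1.0 - v**2)),  # Eq. (29)
]:
    V = L(-v) @ V0 @ L(v)                                       # Eq. (23)
    assert np.allclose(V, closed_form)
    assert np.allclose(V @ np.array([1.0, v]), 0.0)   # Eq. (5): (dt, dx) = (1, v)
    assert abs(np.trace(V)) < 1e-12 and abs(np.linalg.det(V)) < 1e-12  # Eq. (13)
print("Eqs. (27) and (29) reproduced from Eq. (23)")
```

The same three checks hold for any $v$ in the Galilean case and any $|v|<1$ in the Lorentz case.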
\subsection{Higher-Dimensional Spacetimes}
From the reduction principle and from the form of the velocity tensor in the
two-dimensional space--time we immediately get that in all higher dimensional
spacetimes the only non-zero components of the constant matrix $V$ are the
$V_k^0$. Therefore the final form of the velocity tensors is
\begin{equation}
\label{30}V_\nu ^\mu \left( \mathbf{v}\left( t\right) \right)
=L_0^\mu \left( -\mathbf{v}\left( t\right) \right)
\sum_k V_k^0L_\nu ^k\left( \mathbf{v}\left(
t\right) \right) ,
\end{equation}
where $\left( V_1^0,V_2^0,\ldots ,V_{n-1}^0\right) $ are components of an $\left(
n-1\right) $-dimensional vector under rotations in the subspace $\left(
x^1,x^2,\ldots ,x^{n-1}\right) .$ Using this form of $V$ and the explicit forms of
the Galilean and Lorentz transformations we easily can get the velocity
tensor both for the Galilean and Lorentz spacetimes of any dimension.
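The construction (\ref{30}) can be exercised numerically. The sketch below (Python with \texttt{numpy} assumed; the Galilean case is shown for brevity, and the Lorentz case proceeds identically with the corresponding boosts) checks that each independent choice of the constants $V_k^0$ yields a tensor annihilating the displacement $dx^\mu =(dt,\mathbf{v}\,dt)$:

```python
import numpy as np

def L_gal(u):
    """4D Galilean boost: t' = t, x'^k = x^k - u^k t."""
    L = np.eye(4)
    L[1:, 0] = -np.asarray(u)
    return L

def V_tensor(v, V0k):
    """Eq. (30): V(v)^mu_nu = L(-v)^mu_0 * sum_k V0k[k] * L(v)^k_nu."""
    v = np.asarray(v, float)
    col = L_gal(-v)[:, 0]                            # L^mu_0(-v)
    row = np.asarray(V0k, float) @ L_gal(v)[1:, :]   # sum_k V_k^0 L^k_nu(v)
    return np.outer(col, row)

v = np.array([0.1, -0.2, 0.3])
dx = np.concatenate([[1.0], v])      # displacement (dt, dx) with dt = 1
for k in range(3):                   # three independent choices of V_k^0
    e = np.zeros(3); e[k] = 1.0
    V = V_tensor(v, e)
    assert np.allclose(V @ dx, 0.0)  # Eq. (5) holds
    assert abs(np.trace(V)) < 1e-12  # trace condition of Eq. (13)
print("Eq. (30) annihilates dx = v dt in the 4D Galilean case")
```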
\section{Dynamics}
Since the kinematical part of our classical mechanics is generally covariant,
it is necessary to determine a form of the dynamical equations which is also
generally covariant. For this purpose we recall that the only
generally covariant differential relation which may be reduced to the famous
Newton relation
\begin{equation}
\label{31}\frac{d\mathbf{p}\left( t\right) }{dt}=\mathbf{F}\left( t\right)
\end{equation}
is of the form
\begin{equation}
\label{32}\nabla _\mu \pi ^{\mu \nu }\left( x\right) =F^\nu \left( x\right) ,
\end{equation}
where $\pi ^{\mu \nu }\left( x\right) $ is some tensorial density and $F^\nu
\left( x\right) $ is a vector density, while $\nabla _\mu $ denotes the
corresponding covariant derivative.
Assuming that $\pi ^{\mu \nu }\left( x\right) ,$ like the velocity tensors,
is a function of the ordinary velocity, we can easily construct the explicit
form of this quantity. This leads, exactly as for the velocity tensors, to
the following form of the dynamical tensor:
\begin{equation}
\label{33}\pi ^{\mu \nu }\left( \mathbf{v}\left( t\right) \right)
=L_\alpha ^\mu \left( -\mathbf{v}\left( t\right) \right) \pi
^{\alpha \beta }L_\beta ^\nu \left( -\mathbf{v}\left( t\right)
\right) ,
\end{equation}
where all $\pi ^{\alpha \beta }$ are constants. Since, in contrast
to the velocity tensor, the dynamical tensor $\pi ^{\mu \nu }\left(
\mathbf{v}\right) $ need not satisfy any additional conditions, we
are left with $n^2$ arbitrary constants which describe the inertial
properties of the considered particles. We may, however, reduce the number
of arbitrary constants by requiring the symmetry of $\pi ^{\mu \nu }\left(
\mathbf{v}\right) $; then only one parameter, the mass of the
particle, describes its inertial properties. In this case $\pi^{\mu \nu}$
is simply the energy-momentum tensor of the material point. Since
$\pi ^{\mu \nu }\left(\mathbf{v}\left( t\right) \right) $ depends
only on the time coordinate, it is clear that Eq. (\ref{32}) reduces to
(\ref{31}).
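In the two-dimensional Lorentz case the construction above can be made fully explicit. The following sketch (Python with \texttt{numpy} assumed; units with $c=1$ and rest-frame constants chosen as $\pi ^{\alpha \beta }=\mathrm{diag}(m,0)$) shows that the boosted symmetric dynamical tensor is the familiar energy--momentum tensor of a free material point:

```python
import numpy as np

def L_lor(u, c=1.0):
    """2D Lorentz boost in units with c = 1 by default."""
    g = 1.0 / np.sqrt(1.0 - u**2 / c**2)
    return g * np.array([[1.0, -u / c**2], [-u, 1.0]])

m, v = 2.0, 0.6
pi_rest = np.diag([m, 0.0])             # constant rest-frame pi^{alpha beta}
pi = L_lor(-v) @ pi_rest @ L_lor(-v).T  # boosted dynamical tensor
gamma2 = 1.0 / (1.0 - v**2)
# pi^{mu nu} = m gamma^2 [[1, v], [v, v^2]]; in particular pi^{01}/pi^{00} = v
assert np.allclose(pi, m * gamma2 * np.array([[1.0, v], [v, v**2]]))
print(f"pi^01/pi^00 = {pi[0, 1] / pi[0, 0]:.3f}")  # -> pi^01/pi^00 = 0.600
```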
\section{Conclusions}
We have introduced a new mechanical object called the velocity tensor and
explicitly constructed the velocity tensors in two-, three- and
four-dimensional spacetimes. We hope that the notion of the velocity tensor
will shed more light on the possible dynamics in general relativity. It also
may be useful for relativistic many-body systems.
\section{Introduction}
The majority of known transiting gas giants are hot Jupiters; they orbit their host star with periods of less than ten days. The extended atmospheres of hot Jupiters are ideal for atmospheric studies and understanding their composition (e.g. \citealt{madhusudhan_2014a,sing_2016,wyttenbach_2017,showman_2020,baxter_2021}), however the original properties of these systems are not often preserved (e.g. \citealt{albrecht_2012}).
The orbital parameters of an exoplanet are the result of its origin and formation but in the case of close-in planets such as hot Jupiters, the proximity of the host star leads to additional mechanisms such as tidal interactions (e.g. \citealt{valsecchi_2015}) which disturb the initial orbital parameters. Longer period transiting planets, such as warm Jupiters, are less affected by their host star and their orbital elements do retain a record of their formation and migrational history.
Warm Jupiters are exoplanets orbiting their host star with periods typically defined between 10 and 200 days. Similarly to hot Jupiters, if warm Jupiters are not formed in-situ, they require migration mechanisms which are able to bring them from several au to a fraction of an au from their host star \citep{dawson_2018}. Part of the warm Jupiter population is located in the period valley, a region of the parameter space between 10 and 100 days where gas giants are less frequent (\citealt{udry_2003,wittenmyer_2010}). The occurrence rates of gas giants can also suggest which type of formation and evolution different gas giants undergo. One of the distinctive features of the warm Jupiter population is its wide range of eccentricities. The formation and evolution processes leading to the diversity of orbital arrangements seen in systems hosting warm Jupiters remain to be fully understood.
Several explanations have been put forward to explain the wide eccentricity distribution of warm Jupiters.
The eccentricity distribution of warm Jupiters is composed of two groups, a low eccentricity component and a higher eccentricity one (e.g. \citealt{petrovich_2016}). Disk migration (e.g. \citealt{goldreich_1980, baruteau_2014}) is able to explain the lower eccentricity component and can reproduce the period distribution of gas giants under certain disk properties \citep{coleman_2016}. However, disk migration and ensuing planet-planet scattering do not create enough warm Jupiters with high eccentricities \citep{petrovich_2014}. In high-eccentricity migration scenarios (e.g. \citealt{rasio_1996,fabrycky_2007}), warm Jupiters are precursors of hot Jupiters which we are observing in the midst of inward migration. High-eccentricity migration produces a satisfying number of warm Jupiters at high eccentricities but not enough low eccentricity warm Jupiters. Besides, high-eccentricity migration under-produces warm Jupiters \citep{wu_2011}, while disk migration produces a number of warm Jupiters in agreement with the estimate of the period valley. Thus it is thought that a combination of these mechanisms could explain the observed population.
The composition of gas giants depends on where they are formed in the protoplanetary disk, on which timescales, but also on the composition of the disk. Measuring precise masses and radii of giant planets enables estimations of their bulk metallicity, which in turn constrains the formation and evolution models. Solar system giants are metal-enriched compared to the solar metallicity (e.g. \citealt{wong_2004}) and core-accretion models are able to match their heavy metal enhancement (e.g. \citealt{alibert_2005}). For exoplanets, \citet{thorngren_2016b} showed that the planet metal enrichment ($Z_{planet}/Z_{star}$) is anti-correlated with the planetary mass while the mass of heavy elements is positively correlated with planetary mass. Comparing the results of different planetary synthesis models with the bulk metallicity and atmospheric composition of exoplanets leads to constraints on the processes driving core formation and envelope enrichment (e.g. \citealt{mordasini_2014,mordasini_2016a}). However, the sample of warm Jupiters with precise mass and radius measurements is scarce. And the correlation between planet metal-enrichment and planetary mass still needs further investigation, for example in terms of detailed host star abundances \citep{teske_2019} and in terms of its dependence on orbital properties of the planetary systems \citep{dalba_2022}.
Detecting transiting warm Jupiters is challenging, especially for ground-based surveys, as partial or full transit events visible from a given site are rare. However, the space-based photometric mission Transiting Exoplanet Survey Satellite (TESS; \citealt{ricker_2015a}) is opening a new window to detect this type of exoplanet orbiting bright and well characterized stars. TESS observes each field almost continuously for 27 days, but a significant fraction of the sky is covered for longer periods of time when fields overlap. As such, TESS monitors some parts of the sky for up to one year in regions called continuous viewing zones. For fields observed continuously for 27 days, we expect most warm Jupiters to show only a single transit. Simulations by \cite{cooke_2018} and \cite{villanueva_2019a} predicted that between 500 and 1000 single transit events would be found in the first two years of the TESS data. Following these results, several pipelines have been developed to search for these events (e.g. \citealt{gill_2020,montalto_2020a}). These dedicated searches provide valuable candidates in addition to the ones announced by the TESS Objects of Interest (TOIs; \citealt{guerrero_2021}).
Single transit candidates have several observational disadvantages, namely their orbital period is largely unconstrained. An estimate of the orbital period can be determined based on stellar and transit parameters \citep{osborn_2016}. With the re-observation of previous TESS sectors through the extended mission, new single transit candidates are detected and several of the known candidates show a second transit, constraining the possible planetary periods to a discrete set of values. These candidates require spectroscopic vetting as we expect a false positive rate of about 50\% \citep{santerne_2016}. Hence a by-product of the search for warm Jupiters is the detection and characterization of long-period low-mass eclipsing binaries (e.g. \citealt{lendl_2020,gill_2022}). Single and duo transit candidates are challenging systems to follow up and characterize but they can reveal warm planets which are highly valuable systems (e.g. \citealt{osborn_2022,schanche_2022}). Long-period transiting giants are the missing link between hot Jupiters and the solar system giants and they offer precious information for understanding the physics of their atmosphere, their formation, migration and evolution history.
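For a circular orbit and a central ($b=0$) transit of a small planet, such an estimate follows from Kepler's third law as $P\approx (G\pi ^2/3)\,\rho _\star \,T_{\rm dur}^3$, with $\rho _\star$ the mean stellar density and $T_{\rm dur}$ the transit duration. A back-of-the-envelope sketch with purely illustrative numbers (not the parameters of the systems presented here):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
rho_star = 1408.0        # mean density of a Sun-like star, kg m^-3 (illustrative)
T_dur = 6.0 * 3600.0     # hypothetical observed transit duration: 6 h

# Kepler's third law combined with T_dur ~ P R*/(pi a) for b = 0, e = 0, Rp << R*
P = (G * math.pi**2 / 3.0) * rho_star * T_dur**3
print(f"P ~ {P / 86400.0:.0f} days")  # -> P ~ 36 days
```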
This paper describes the discovery and characterization of two new warm Jupiters. The observations are detailed in Section~\ref{observations}, the derivation of stellar parameters and the methods used to analyze photometric and radial velocity data are explained in Section~\ref{methods}. The results are presented in Section~\ref{results} and discussed in Section~\ref{discussion}. We summarize our findings in Section~\ref{conclusion}.
\section{Observations}
\label{observations}
The discovery photometry was collected with the space-based mission TESS (Section~\ref{sec:tess}) and follow-up observations were carried out from the ground with the photometric facility NGTS (Section~\ref{sec:ngts}), and the high-resolution spectrographs CORALIE, FEROS, CHIRON, HARPS, and TRES (Section~\ref{sec:coralie}, ~\ref{sec:feros}, ~\ref{sec:chiron}, ~\ref{sec:harps}, and \ref{sec:tres}). The presence of nearby stars was checked with speckle imaging (Section~\ref{sec:nessi}).
\subsection{TESS photometry}
\label{sec:tess}
{\rm TOI-5153}\ and {\rm NGTS-20}\ were observed by the TESS satellite during its primary and extended mission. Both targets showed a single transit event, in sectors 6 and 4 respectively. As several teams set out to search for and vet this type of event, these stars were selected as single transit candidates by the TSTPC (TESS Single Transit Planetary Candidate) group, and later announced as CTOIs (Community TESS Objects of Interest) by J. Steuer and the WINE team.
{\rm TOI-5153}\ was observed at a 30 min cadence in sector 6 (2018-12-11 to 2019-01-07) and at a 2 min cadence in sector 33 (2020-12-17 to 2021-01-13).
{\rm NGTS-20}\ was observed at a 30 min cadence in sector 4 from 2018-10-18 to 2018-11-15 and at a 2 min cadence in sector 31 from 2020-10-21 to 2020-11-19.
Both stars were observed at a 2 min cadence in the extended mission, thanks to the approved Guest Investigator Program G03188 led by S. Villanueva.
However, only {\rm TOI-5153}\ showed a second single transit in sector 33 and no event was detected in sector 31 for {\rm NGTS-20}.
Both primary transits passed the vetting stage where we checked for asteroid crossings and centroid shifts (indicative of a background eclipsing binary).
Light curves of the primary mission were extracted with the Quick Look Pipeline (QLP, \citealt{huang_2020,huang_2020a}).
The light curves from the extended mission were obtained through the data reduction done at the Science Processing Operation Center (SPOC, \citealt{jenkins_2016}). We use the Simple Aperture Photometry (SAP) fluxes and their corresponding errors for our analysis.
The light curves are presented in Figures~\ref{fig:phase-folded_lc_1240} \& \ref{fig:phase-folded_lc_2575}.
We generated target pixel files with \texttt{tpfplotter} \citep{aller_2020} and checked for the presence of contaminant sources, down to a magnitude difference of 6, within the aperture used to extract the light curves.
One star (TIC\,124029687) falls into the aperture around {\rm TOI-5153}. We estimate the dilution by comparing the fluxes measured in the Gaia passband RP as this filter matches the TESS passband. {\rm TOI-5153}\ has a mean RP flux of 273878 $\pm$ 52 electrons per second ($\rm e^{-1}s^{-1}$) and TIC\,124029687 has a flux of 2208\,$\pm$\,8\,$\rm e^{-1}s^{-1}$. The dilution is equal to 0.8$\pm$0.003\% (Equation 6 from \citealt{espinoza_2019a}). We included a dilution factor for the light curve modeling and we chose a Normal prior informed by the dilution estimated with Gaia photometry.
The aperture of {\rm NGTS-20}\ is not contaminated by neighboring stars, hence we chose to fix the dilution factor to 1 (no dilution) for the modeling of the light curve.
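For reference, the contamination quoted above follows directly from the Gaia RP fluxes. A minimal sketch, writing the dilution factor in the $1/(1+\sum _iF_i/F_T)$ convention of \citet{espinoza_2019a}:

```python
# Gaia RP fluxes quoted above, in e-/s
F_target = 273878.0      # TOI-5153
F_contam = 2208.0        # TIC 124029687

contamination = F_contam / F_target     # flux fraction from the neighbouring star
dilution = 1.0 / (1.0 + contamination)  # dilution factor (Espinoza et al. 2019, Eq. 6)
print(f"contamination: {100 * contamination:.1f}%  dilution factor: {dilution:.4f}")
# -> contamination: 0.8%  dilution factor: 0.9920
```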
\begin{figure*}
\includegraphics[width=0.95\hsize]{./figures/phase_plot_TIC1240_AA_dd7gp.pdf}
\caption{Top: Photometric observations of {\rm TOI-5153}\ from TESS sector 6 at 30 min cadence (orange dots, left panel) and sector 33 at 10 min cadence (green dots, middle panel) with full median models (orange and green lines), and Gaussian process models (black line). The right panel shows the detrended and phase-folded data from both sectors (orange and green dots) with the phase-folded transit model in black. Bottom: Each panel shows the residuals in parts per million between the full model and the respective light curve.}
\label{fig:phase-folded_lc_1240}
\end{figure*}
\begin{figure*}
\includegraphics[width=\hsize]{./figures/phase_plot_TIC2575_AA_d70gp_hour_tight.pdf}
\caption{Top: Photometric observations of {\rm NGTS-20}\ from TESS sector 4 at 30 min cadence (left panel) and NGTS binned at 2 min cadence (middle and right panels). In each panel the data are shown as colored dots (orange, green, and red), the full model is represented with a line of the same color, and the Gaussian process model as a grey line. Bottom: Each panel shows the residuals in parts per million between the full model and the respective light curve.}
\label{fig:phase-folded_lc_2575}
\end{figure*}
\subsection{NGTS photometry}
\label{sec:ngts}
{\rm NGTS-20}\ was monitored from the ground by the Next Generation Transit Survey (NGTS). NGTS is an automated array of twelve 20\,cm telescopes installed at ESO's Paranal Observatory, Chile \citep{wheatley_2018}. NGTS uses a custom filter which spans 520 to 890\,nm. Starting on the night of September 29th, 2020, the target was observed in the blind survey mode (every possible night) with one 20\,cm telescope. The data was acquired with an exposure time of 10 seconds and cadence of 13 seconds. Data reduction is performed with standard aperture photometry and an automatic transit search is done using template matching \citep{gill_2020a}. One full transit was observed on the night of December 8th, 2020. After this second transit event, the possible periods for this candidate correspond to a set of period aliases. The target was only observed on nights when a transit of a period alias was expected. A third transit was observed on the night of October 28th, 2021. Six NGTS cameras were used with the same exposure time and cadence as during the blind search survey mode.
The NGTS light curves are presented in Figure~\ref{fig:phase-folded_lc_2575}.
\begin{table}
\caption{Radial velocities of {\rm TOI-5153}.}
\label{table:table_rvs_1240}
\begin{tabular}{l l l l}
\hline
\hline
\noalign{\smallskip}
Time & RV & RV error & Instrument\\
BJD & [$\rm km\,s^{-1}$] & [$\rm km\,s^{-1}$] & \\
\hline
\noalign{\smallskip}
2459192.69733 & -35.24751 & 0.06266 & CORALIE\\
2459201.70551 & -35.56908 & 0.06674 & CORALIE\\
2459214.55618 & -35.30501 & 0.08927 & CORALIE\\
...&&&\\
2459504.78439 & -35.4622 & 0.0156 & HARPS\\
2459505.85753 & -35.5273 & 0.0213 & HARPS\\
2459506.76824 & -35.5145 & 0.0162 & FEROS\\
\hline
\end{tabular}
\tablefoot{Full table is available at CDS.}
\end{table}
\begin{table}
\caption{Radial velocities of {\rm NGTS-20}.}
\label{table:table_rvs_2575}
\begin{tabular}{l l l l}
\hline
\hline
\noalign{\smallskip}
Time & RV & RV error & Instrument\\
BJD & [$\rm km\,s^{-1}$] & [$\rm km\,s^{-1}$] & \\
\hline
\noalign{\smallskip}
2458738.86129 & 0.0778 & 0.0282 & CHIRON\\
2458748.83030 & 0.1598 & 0.0273 & CHIRON\\
2458804.74919 & 12.67750 & 0.05916 & CORALIE\\
...&&&\\
2459504.73569 & 12.60444 & 0.03076 & CORALIE\\
2459514.61823 & 12.66770 & 0.02817 & CORALIE\\
2459528.58470 & 12.43836 & 0.03839 & CORALIE\\
\hline
\end{tabular}
\tablefoot{Full table is available at CDS.}
\end{table}
\subsection{CORALIE spectroscopy}
\label{sec:coralie}
Spectroscopic vetting and radial velocity follow-up was carried out with the CORALIE spectrograph \citep{queloz_2001a}. CORALIE is a fiber-fed spectrograph installed at the Nasmyth focus of the Swiss 1.2m Euler telescope (La Silla, Chile). CORALIE has a spectral resolution of 60\,000 and observes with a 3 pixel sampling per resolution element. Fiber injection is done with two fibers: a first fiber is used to observe the target and a second fiber can collect light from a Fabry-Pérot etalon or the sky to allow for simultaneous wavelength calibration or background subtraction.
{\rm TOI-5153}\ and {\rm NGTS-20}\ are part of an on-going CORALIE survey which aims to confirm TESS single transit candidates and characterize the properties of the systems. Spectroscopic vetting is first done by taking two spectra of the target about one week apart. These two observations are used to rule out eclipsing binary scenarios. Then each target is monitored with an average of one point per week, and the sampling is adapted to maximize the phase coverage of the orbit once a periodic signal is detected.
Stellar radial velocities are measured with the cross-correlation technique: the stellar spectrum is cross-correlated with a mask matching the stellar type of the host star to obtain a cross-correlation function (CCF, e.g. \citealt{pepe_2002a}). In addition to the radial velocity and its associated error, the full width at half maximum (FWHM), the contrast, and the bisector inverse slope (BIS) are among the parameters derived from the CCF. These parameters have been shown to be reliable tracers of the radial velocity noise induced by stellar activity, and in some cases they can be used to detrend the data (e.g. \citealt{melo_2007}).
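As an illustration of the principle (this is a toy sketch, not the CORALIE pipeline), the following Python snippet measures a Doppler shift by scanning a single-line mask across a synthetic spectrum; all numbers here are invented for the example.

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def gaussian_line(wave, center, depth=0.6, width=0.1):
    """Toy stellar spectrum: flat continuum with one Gaussian absorption line."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / width) ** 2)

wave = np.linspace(5490.0, 5510.0, 20001)  # wavelength grid, Angstrom
rv_true = 12.6                             # injected radial velocity, km/s
line0 = 5500.0                             # rest wavelength of the toy line
spectrum = gaussian_line(wave, line0 * (1.0 + rv_true / C_KMS))

# "Cross-correlate" with a one-line binary mask: sample the spectrum at the
# mask position shifted to each trial velocity on a grid.
velocities = np.arange(-40.0, 40.0, 0.25)  # km/s
ccf = np.array([np.interp(line0 * (1.0 + v / C_KMS), wave, spectrum)
                for v in velocities])

# The CCF minimum gives the measured velocity (real pipelines then fit a
# Gaussian to the CCF core to refine it below the grid step).
rv_measured = velocities[np.argmin(ccf)]   # → 12.5, within one grid step
```

A real mask contains thousands of lines, which is what averages down photon noise to the tens of m/s precision quoted below.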
We collected 25 radial velocity measurements of {\rm TOI-5153}\ (from 2020-12-09 to 2021-04-28) and 39 measurements of {\rm NGTS-20}\ (from 2019-11-17 to 2021-11-10), with exposure times varying between 900 and 1800\,s. The spectra of {\rm TOI-5153}\ have an average signal-to-noise ratio of 14 and those of {\rm NGTS-20}\ an average signal-to-noise ratio of 23.
The observations for both targets are detailed in Tables~\ref{table:table_rvs_1240} \& \ref{table:table_rvs_2575} and plotted in Figures~\ref{fig:rv_periodo_1240} \& \ref{fig:rv_periodo_2575}. CORALIE spectra were also used to derive stellar parameters and the analysis is detailed in Section~\ref{stellar-analysis}.
\begin{figure}
\includegraphics[width=\hsize]{./figures/rv_plot_TIC1240_feb22.pdf}
\caption{Top: Time series of the radial velocities from CORALIE (blue dots), FEROS (orange crosses), HARPS (red triangles), and CHIRON (purple stars) for {\rm TOI-5153}. Bottom: Generalized Lomb-Scargle periodogram of the radial velocities. The highest peak corresponds to a period of about 20.1 days.}
\label{fig:rv_periodo_1240}
\end{figure}
\begin{figure}
\includegraphics[width=\hsize]{./figures/rv_plot_TIC2575_feb22.pdf}
\caption{Top: Time series of the radial velocities from CORALIE (blue dots), FEROS (orange crosses), and CHIRON (purple triangles) for {\rm NGTS-20}. Bottom: Generalized Lomb-Scargle periodogram of the radial velocities. The highest peak corresponds to a period of about 53.5 days.}
\label{fig:rv_periodo_2575}
\end{figure}
\subsection{FEROS spectroscopy}
\label{sec:feros}
FEROS is a high-resolution spectrograph installed at the 2.2m telescope in La Silla, Chile. FEROS has a spectral resolution of 48\,000 with 3 pixel sampling. Two fibers are available to observe the target and simultaneously record the spectrum of Th-Ar lamp to allow a precise wavelength calibration \citep{kaufer_1999}.
A total of nine FEROS spectra were collected for {\rm TOI-5153}\ between 2021-02-20 and 2021-10-09 under the program number 0106.A-9014(A) (PI: Sarkis). The exposure times vary between 900 and 1200 seconds depending on the weather conditions.
For {\rm NGTS-20}, 13 spectra were recorded under the program number 0104.A-9007(A) (PI: Sarkis) between 2020-02-25 and 2021-01-09. Exposure times vary between 900s and 1350s depending on weather conditions and lead to a signal-to-noise ratio ranging from 70 to 123.
The data reduction was performed with the CERES pipeline \citep{brahm_2017}, and the radial velocities were extracted using the cross-correlation technique. Radial velocity measurements are detailed in Tables~\ref{table:table_rvs_1240} \& \ref{table:table_rvs_2575} and plotted in Figures~\ref{fig:rv_periodo_1240} \& \ref{fig:rv_periodo_2575}.
\subsection{CHIRON spectroscopy}
\label{sec:chiron}
{\rm TOI-5153}\ was observed with the CHIRON spectrograph \citep{tokovinin_2013} on the 1.5\,m SMARTS telescope
located at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. CHIRON is a fiber-fed spectrograph with a spectral resolution of 80\,000
when used in the image slicer mode. Five spectra were taken from 2021-03-13 to 2021-04-03.
These data were acquired through a monitoring program led by Sam Quinn and reduced with a
least-squares deconvolution method \citep{donati_1997}, leading to an average radial velocity precision of \SI{67}{\meter\per\second}.
{\rm NGTS-20}\ was monitored with the CHIRON spectrograph and observations took place between 2019-09-12 and 2021-02-18.
A total of 13 radial velocity measurements were obtained with an exposure time of 1800s and a nominal signal-to-noise ratio of 30.
The data were obtained through two different observing programs. The data reduction was done following the procedures described in \citet{wang_2019a} and \citet{jones_2019}, with the wavelength calibration based on Th-Ar lamp exposures taken before and after each science observation. The radial velocities were derived with the cross-correlation technique,
reaching an average radial velocity precision of about \SI{27}{\meter\per\second}. Radial velocity measurements are detailed in Tables~\ref{table:table_rvs_1240} \& \ref{table:table_rvs_2575} and plotted in Figures~\ref{fig:rv_periodo_1240} \& \ref{fig:rv_periodo_2575}.
\subsection{HARPS spectroscopy}
\label{sec:harps}
{\rm TOI-5153}\ was also observed with the high-resolution spectrograph HARPS (\citealt{mayor_2003a}; $\rm R\sim115\,000$)
installed on the 3.6m telescope in La Silla, Chile. A total of three observations were
obtained under the program number 106.21ER.001 (PI: Brahm), between 2021-03-03 and 2021-03-23.
Observations were obtained with the high-accuracy mode and an exposure time set to 1200 seconds.
The data reduction was performed with the standard data reduction pipeline.
The radial velocities were extracted with the cross-correlation technique
using a G2 mask. We obtained an average signal-to-noise ratio of 25 at 550\,nm.
The observations are detailed in Table~\ref{table:table_rvs_1240} and plotted in Figure~\ref{fig:rv_periodo_1240}.
\subsection{TRES spectroscopy}
\label{sec:tres}
Two reconnaissance spectra of {\rm TOI-5153}\ were obtained on February 10 and 19, 2022 using the Tillinghast Reflector Echelle Spectrograph (TRES; \citealt{furesz_2008}) located at the Fred Lawrence Whipple Observatory (FLWO) atop Mount Hopkins, Arizona, USA. TRES is a fiber-fed echelle spectrograph with a wavelength range of 390-910\,nm and a resolving power of 44\,000. The spectra were extracted as described in \citet{buchhave_2010}. We used the TRES spectra to derive stellar parameters for {\rm TOI-5153}\ using the Stellar Parameter Classification (SPC) tool \citep{buchhave_2012,buchhave_2014}. SPC cross-correlates an observed spectrum against a grid of synthetic spectra based on Kurucz atmospheric models \citep{kurucz_1992}. The stellar effective temperature is evaluated at $\rm 6190 \pm 53$\,K, the surface gravity at $\rm 4.30 \pm 0.10\,cm\,s^{-2}$, and the stellar metallicity at $\rm 0.20 \pm 0.08$. These values are consistent within 1\,$\sigma$ with the adopted stellar parameters obtained from CORALIE spectra and described in Section~\ref{stellar-analysis}.
\begin{table*}
\caption{Stellar properties and stellar parameters derived with the spectral synthesis method.}
\label{table:stellar-params}
\centering
\begin{tabular}{l c c c}
\hline
\hline
\noalign{\smallskip}
& {\rm TOI-5153}\ & {\rm NGTS-20}\ & \\
\hline
\noalign{\smallskip}
Other Names & & & \\
\noalign{\smallskip}
2MASS & J06060966-1957118 & J03051020-2156011 & 2MASS\\
Gaia & 2942084865853011712 & 5078704372599743104 & Gaia\\
TIC & TIC 124029677 & TIC 257527578 & TESS\\
TOI & TOI-5153 & TOI-5152 & TESS\\
NGTS & - & NGTS-20 & NGTS\\
\hline
\noalign{\smallskip}
Astrometric Properties & & & \\
\noalign{\smallskip}
R.A. & 06:06:09.68 & 03:05:10.23 & TIC\\
Dec & -19:57:12.4 & -21:56:01.1 & TIC\\
$\mu$R.A.($\rm mas\,yr^{-1}$) & 8.745±0.019 & 20.077±0.017 & Gaia EDR3\\
$\mu$Dec.($\rm mas\,yr^{-1}$) & -34.563±0.021 & -0.873±0.017 & Gaia EDR3\\
Parallax (mas) & 2.563±0.023 & 2.731±0.018 & Gaia EDR3\\
Distance (pc) & 390.1±3.5 & 366.2±2.4 & Gaia EDR3\\
\hline
\noalign{\smallskip}
Photometric Properties & & & \\
\noalign{\smallskip}
V (mag) & 11.93±0.15 & 11.23±0.09 & Tycho\\
B (mag) & 12.46±0.19 & 11.76±0.09 & Tycho\\
G (mag) & 11.5779±0.0004 & 11.0313±0.0004 & Gaia EDR3\\
T (mag) & 11.215±0.007 & 10.6509±0.0078 & TESS\\
J (mag) & 10.681±0.024 & 10.143±0.024 & 2MASS\\
H (mag) & 10.48±0.023 & 9.879±0.025 & 2MASS\\
Ks (mag) & 10.408±0.019 & 9.830±0.019 & 2MASS\\
W1 (mag) & 10.399±0.022 & 9.800±0.023 & WISE\\
W2 (mag) & 10.412±0.02 & 9.834±0.019 & WISE\\
W3 (mag) & 10.418±0.066 & 9.775±0.036 & WISE\\
W4 (mag) & 8.595±0.361 & 9.075±0.219 & WISE\\
$\rm A_V$ & 0.13±0.04 & 0.01±0.01 & Sec. 3.1\\
\hline
\noalign{\smallskip}
Bulk Properties & & & \\
\noalign{\smallskip}
$\rm T_{eff}$ (K) & 6300±80 & 5980±80 & Sec. 3.1\\
Spectral type & F8\,V & G1\,IV & Sec. 3.1 \\
log g ($\rm cm\,s^{-2}$) & 4.30±0.15 & 3.8±0.2 & Sec. 3.1\\
$\rm [Fe/H]$ (dex) & 0.12±0.08 & 0.15±0.08 & Sec. 3.1 \\
$v\sin i$ ($\rm km\,s^{-1}$) & 10.1±1.0 & 8.0±0.8 & Sec. 3.1\\
$\rm log\,R'_{HK}$ & -4.95±0.07 & -5.01±0.05 & Sec. 3.1\\
Age (Gyr) & 5.4±1.0 & 1.4--6.8 & Sec. 3.1\\
Radius ($R_\odot$) & 1.40$\pm$0.04 & 1.78$\pm$0.05 & Sec. 3.1\\
Mass ($M_\odot$) & 1.24±0.07 & 1.47$\pm$0.09 & Sec. 3.1\\
\hline
\end{tabular}
\tablebib{2MASS \citep{skrutskie_2006}; Gaia EDR3 \citep{gaiacollaboration_2021}; Tycho \citep{hog_2000}; WISE \citep{wright_2010}.}
\end{table*}
\subsection{Speckle imaging}
\label{sec:nessi}
High-resolution speckle images were taken for both stars with the NN-Explore Exoplanet and Stellar Speckle Imager (NESSI: \citealt{scott_2018}). NESSI is installed on the 3.5\,m WIYN telescope located at the Kitt Peak National Observatory. {\rm TOI-5153}\ was observed on the 17th of November, 2019 and {\rm NGTS-20}\ on the 18th of November, 2019. Both stars had images taken in the blue and red channels with two narrow band filters centered at 562\,nm and 832\,nm.
The reconstructed speckle images are produced following the procedures described in \cite{howell_2011}. The 5\,$\sigma$ background sensitivity limits are measured in the reconstructed images and are shown in Figure~\ref{fig:nessi_1240}. The NESSI data show no indication that either of the targets has close stellar companions.
\begin{figure}
\includegraphics[width=\hsize]{./figures/nessi_TIC1240.pdf}
\includegraphics[width=\hsize]{./figures/nessi_TIC2575.pdf}
\caption{5-$\sigma$ background sensitivity curves derived from speckle images taken with NESSI for {\rm TOI-5153}\ (top panel) and {\rm NGTS-20}\ (bottom panel) showing no bright companion ($\Delta mag < 4$) from 0.2 to 1.2 arcsec.}
\label{fig:nessi_1240}
\end{figure}
\begin{figure}[!ht]
\centering\includegraphics[width=6.2cm,angle=90,trim=130 50 50 100,clip]{./figures/tic_124029677_sed.pdf}
\centering\includegraphics[width=3.05cm,angle=90,trim=320 50 80 100,clip]{./figures/tic_124029677_resids.pdf}
\centering\includegraphics[width=6.2cm,angle=90,trim=130 50 50 100,clip]{./figures/tic_257527578_sed.pdf}
\centering\includegraphics[width=3.05cm,angle=90,trim=320 50 80 100,clip]{./figures/tic_257527578_resids.pdf}
\caption{Spectral energy distributions (SEDs) and associated residuals for {\rm TOI-5153}\ (top panel) and {\rm NGTS-20}\ (bottom panel).
Red symbols represent the observed photometric measurements, where the horizontal bars represent the effective width of the passband.
Blue symbols are the model fluxes from the best-fit Kurucz atmosphere model (black).}
\label{fig:SED_fits}
\end{figure}
\section{Methods}
\label{methods}
\subsection{Stellar parameter determination}
\label{stellar-analysis}
We combined the CORALIE spectra of {\rm NGTS-20}\ and the HARPS spectra of {\rm TOI-5153}\ in order to derive the parameters for both planet host stars.
The stellar parameters were obtained using the spectral synthesis technique implemented in the iSpec package \citep{blanco-cuaresma_2014}. This package generates a synthetic stellar spectrum using the SPECTRUM radiative transfer code,
the model atmospheres from MARCS \citep{gustafsson_2008}, and the atomic line list from \citet{asplund_2009}.
iSpec minimizes the difference between the observed spectrum and synthetic spectra (computed simultaneously)
and varies only one free parameter at a time.
We defined a set of wavelength regions for the fitting. The first region includes the $H_{\alpha}$, Na, and Mg lines and is used to measure the stellar effective temperature and the surface gravity. The second region includes FeI and FeII lines, which are used to derive the stellar metallicity and the projected stellar rotational velocity ($v\sin i$).
We derive the effective temperature, the surface gravity, the metallicity and the $v\sin i$ of {\rm TOI-5153}\ and {\rm NGTS-20}. We note that both stars are metal-rich with metallicities of $\rm 0.12 \pm 0.08$ and $\rm 0.15 \pm 0.08$ for {\rm TOI-5153}\ and {\rm NGTS-20}, respectively. The results are presented in Table~\ref{table:stellar-params}.
We performed an analysis of the broadband spectral energy distribution (SED) together with the {\it Gaia\/} EDR3 parallax \citep{gaiacollaboration_2021} in order to determine an empirical measurement of the stellar radius, following the procedures described in \citet{stassun_2016,stassun_2017,stassun_2018}. We pulled the $B_T V_T$ magnitudes from {\it Tycho-2} \citep{hog_2000}, the $BVgri$ magnitudes from APASS \citep{henden_2014}, the $JHK_S$ magnitudes from {\it 2MASS} \citep{skrutskie_2006}, the W1--W4 magnitudes from {\it WISE} \citep{wright_2010}, and the $G$, $G_{\rm BP}$, $G_{\rm RP}$ magnitudes from {\it Gaia}. For {\rm TOI-5153}, we also used the available {\it GALEX} NUV flux \citep{bianchi_2017}. Together, the available photometry spans the full stellar SED over the wavelength range 0.35--22~$\mu$m, and extends down to 0.2~$\mu$m for {\rm TOI-5153}\ (see Figure~\ref{fig:SED_fits}). We performed a fit using Kurucz stellar atmosphere models, with the priors on effective temperature ($T_{\rm eff}$), surface gravity ($\log g$), and metallicity ([Fe/H]) from the spectroscopically determined values. The remaining free parameter is the extinction ($A_V$), which we restricted to the maximum line-of-sight value from the dust maps of \citet{schlegel_1998}.
Integrating the (dereddened) model SED gives the bolometric flux at Earth: $F_{\rm bol} = (5.86 \pm 0.21) \times 10^{-10}\,\rm erg\,s^{-1}\,cm^{-2}$ for {\rm TOI-5153}\
and $F_{\rm bol} = (8.72 \pm 0.10) \times 10^{-10}\,\rm erg\,s^{-1}\,cm^{-2}$ for {\rm NGTS-20}.
Taking $F_{\rm bol}$ and $T_{\rm eff}$ together with the {\it Gaia\/} EDR3 parallax
with no systematic adjustment \citep[see][]{stassun_2021} gives the stellar radii, $1.401 \pm 0.045\,R_\odot$ and $1.781 \pm 0.050\,R_\odot$, respectively.
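This radius determination is the Stefan-Boltzmann law applied at the Gaia distance, $F_{\rm bol} = \sigma T_{\rm eff}^4 (R_\star/d)^2$. A minimal sketch in cgs units, using the flux, temperature, and distance values quoted in this work:

```python
import math

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4
PC_CM = 3.0857e18      # centimetres per parsec
R_SUN = 6.957e10       # solar radius, cm

def radius_from_sed(f_bol, t_eff, distance_pc):
    """Stellar radius (in R_sun) from F_bol = sigma * T_eff**4 * (R/d)**2."""
    d_cm = distance_pc * PC_CM
    return d_cm * math.sqrt(f_bol / (SIGMA_SB * t_eff ** 4)) / R_SUN

# TOI-5153: F_bol = 5.86e-10 erg s^-1 cm^-2, Teff = 6300 K, d = 390.1 pc
r_toi = radius_from_sed(5.86e-10, 6300.0, 390.1)    # ≈ 1.40 R_sun
# NGTS-20: F_bol = 8.72e-10 erg s^-1 cm^-2, Teff = 5980 K, d = 366.2 pc
r_ngts = radius_from_sed(8.72e-10, 5980.0, 366.2)   # ≈ 1.78 R_sun
```

Both values reproduce the radii quoted above to well within the stated uncertainties.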
When comparing with the stellar radius estimated from the empirical relation of \citet{torres_2010a}, we find that the results are consistent for {\rm TOI-5153}\ but incompatible for {\rm NGTS-20}.
The spectroscopic $\log g$ of {\rm NGTS-20}\ may be underestimated if
part of the spectral line broadening is attributed to rapid rotation instead of gravity broadening.
A $\log g$ value of 4.05 instead of 3.8 yields compatible stellar radii.
We can also estimate the stellar mass from the empirical relations of \citet{torres_2010a} as well as directly via $R_\star$ and $\log g$.
From \citet{torres_2010a}, the stellar mass is equal to 1.24 $\pm$ 0.07\,$M_\odot$ for {\rm TOI-5153}\ and 1.47 $\pm$ 0.09 $M_{\odot}$ for {\rm NGTS-20}.
The uncertainty on the stellar masses from the Torres relation is computed by taking into account the uncertainty
on the stellar parameters as well as the uncertainty on the coefficients of the relation.
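The direct estimate via $R_\star$ and $\log g$ is simply $M = g R_\star^2 / G$. A sketch in cgs units (using the spectroscopic values above; this is an illustration, not the adopted mass determination):

```python
import math

G_CGS = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
R_SUN = 6.957e10    # solar radius, cm
M_SUN = 1.989e33    # solar mass, g

def mass_from_logg(logg, r_star_rsun):
    """Stellar mass (in M_sun) from M = g R^2 / G, with g = 10**logg in cgs."""
    r_cm = r_star_rsun * R_SUN
    return 10.0 ** logg * r_cm ** 2 / G_CGS / M_SUN

# TOI-5153: log g = 4.30 +/- 0.15 dex, R = 1.401 R_sun
m_toi = mass_from_logg(4.30, 1.401)   # ≈ 1.4 M_sun
```

Note that the $\pm 0.15$ dex uncertainty on $\log g$ propagates to roughly 40\% in mass, so this route is far less precise than the Torres relation and the two estimates agree within errors.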
Finally, we can estimate the age of the star from the spectroscopic $R'_{\rm HK}$ and from the stellar rotation period determined
from the spectroscopic $v\sin i$ together with $R_\star$, via the empirical relations of \citet{mamajek_2008}.
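Since the inclination of the rotation axis is unknown, $v\sin i$ and $R_\star$ only give an upper limit on the rotation period, $P_{\rm rot} \le 2\pi R_\star / (v\sin i)$, which the \citet{mamajek_2008} gyrochronology relations then convert to an age (not reproduced here). A sketch of the period step with the values derived above:

```python
import math

R_SUN_KM = 6.957e5   # solar radius, km
DAY_S = 86400.0      # seconds per day

def max_rotation_period(r_star_rsun, vsini_kms):
    """Upper limit on the rotation period (days): P = 2 pi R / (v sin i),
    reached for an equator-on star (sin i = 1)."""
    return 2.0 * math.pi * r_star_rsun * R_SUN_KM / vsini_kms / DAY_S

p_rot_toi = max_rotation_period(1.401, 10.1)   # TOI-5153: ≈ 7.0 days
p_rot_ngts = max_rotation_period(1.781, 8.0)   # NGTS-20: ≈ 11.3 days
```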
The stellar age estimated from the empirical gyrochronology relation \citep{mamajek_2008} is equal to 5.4 $\pm$ 1.0 Gyr in the case of {\rm TOI-5153}, in agreement with the age derived from the spectroscopic $R'_{\rm HK}$, 5.7 $\pm$ 1.6 Gyr. However, the age estimation is less certain for {\rm NGTS-20}, as the chromospheric activity index points towards a lower stellar rotational velocity than the one derived from the spectroscopic analysis. As a result, the stellar age of {\rm NGTS-20}\ ranges from 1.4 to 6.8 Gyr. The results are presented in Table~\ref{table:stellar-params}.
\subsection{Radial velocity analysis}
We first ran an analysis of the radial velocity datasets to search for evidence of periodic signals.
We used the \texttt{kima} software package \citep{faria_2018a} to model both radial velocity time series.
\texttt{kima} uses Bayesian inference to model a radial velocity series as a sum of Keplerians,
where different instruments can be combined thanks to free radial velocity offsets between them.
\texttt{kima} also allows the number of planets to be a free parameter,
for which we chose a uniform prior between 0 and 1.
The parameters governing the planetary orbit, and the priors used in the fit, are summarized in Table~\ref{table:prior-kima}.
For both systems, there is clear evidence for one periodic signal, as the ratio
of the number of posterior samples for the one-planet model over the zero-planet model exceeds 150.
The orbital period is also clearly defined in both cases: $20.315 ^{+0.029} _{-0.029}$ days for {\rm TOI-5153}\ and $54.32 ^{+0.20} _{-0.16}$ days for {\rm NGTS-20}.
The radial velocity time series and their associated generalized Lomb-Scargle periodograms are presented
in Figure~\ref{fig:rv_periodo_1240} for {\rm TOI-5153}\ and in Figure~\ref{fig:rv_periodo_2575} for {\rm NGTS-20}.
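For readers who want to reproduce the periodogram step, a minimal classic Lomb-Scargle implementation (Scargle's original formulation rather than the generalized form used for the figures) applied to a synthetic 20.33-day signal with TOI-5153-like amplitude and noise:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle power spectrum for mean-subtracted data."""
    y = y - y.mean()
    power = np.empty(freqs.size)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 300.0, 60))            # irregular epochs, days
rv = 0.212 * np.sin(2.0 * np.pi * t / 20.33) \
     + rng.normal(0.0, 0.03, 60)                    # km/s, synthetic

freqs = np.linspace(1.0 / 100.0, 1.0 / 2.0, 5000)   # cycles per day
best_period = 1.0 / freqs[np.argmax(lomb_scargle(t, rv, freqs))]
```

With this signal-to-noise the periodogram peak recovers the injected period to within the frequency grid resolution.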
\subsection{Joint analysis}
We perform the joint analysis of the photometric and radial velocity data with the software package \texttt{juliet} \citep{espinoza_2019a}.
\texttt{Juliet} uses Bayesian inference to model a set number of planetary signals.
For the light curve modeling, \texttt{juliet} uses \texttt{batman} \citep{kreidberg_2015} to model the planetary transit,
and the stellar activity as well as instrumental systematics can be taken into account with Gaussian processes \citep{gibson_2014} or simpler parametric functions.
For the radial velocity modeling, \texttt{juliet} uses \texttt{radvel} \citep{fulton_2018a} and stellar activity signal can also be modeled with Gaussian processes.
We chose to use the nested sampling method \texttt{dynesty} \citep{speagle_2019} implemented in \texttt{juliet}.
Several instruments can be taken into account with radial velocity offsets between them.
For {\rm TOI-5153}, only two planetary transits have been observed with TESS.
In this duo-transit case, the orbital period solution is a discontinuous space in which
several period aliases can explain the two observed transits.
Setting a broad uniform prior on the orbital period leads the algorithm to explore only parts of
the parameter space, usually around one or two of the period aliases. To overcome this difficulty,
we can combine several \texttt{dynesty} runs and thus obtain a more complete picture of the posterior distribution.
We note that the period alias with the highest likelihood corresponds to the orbital period also found with the analysis of the radial velocity data.
Hence we chose to set a normal prior with a width of 0.1 days on the orbital period
for the joint fit. In order to confirm the correct period alias, we ran the joint modeling with six sets of priors:
we tested the three period aliases closest to the orbital period found with the RV fit (20.1 days), and for each period alias prior
we either fixed the eccentricity to 0 or let it be a free parameter.
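The alias set itself follows directly from the two transit epochs: any period $P_n = \Delta T / n$, with $n$ a positive integer, places transits at both observed times. A sketch (the ${\sim}731.9$-day epoch separation used here is illustrative, chosen so that the aliases around 20 days match the three values tested):

```python
def period_aliases(dt_days, p_min, p_max):
    """All period aliases P = dt / n (integer n) within [p_min, p_max],
    for two transits separated by dt_days."""
    n = max(1, int(dt_days // p_max))
    aliases = []
    while dt_days / n >= p_min:
        p = dt_days / n
        if p <= p_max:
            aliases.append(round(p, 2))
        n += 1
    return aliases

# Illustrative separation between the two TESS transit epochs
aliases = period_aliases(731.88, 19.5, 21.5)   # → [20.91, 20.33, 19.78]
```

In practice the allowed aliases are further pruned by the nights on which photometric monitoring rules out a transit, as done for {\rm NGTS-20}\ with NGTS.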
For {\rm NGTS-20}, the TESS light curves displayed one transit and we obtained two additional transits with NGTS; the orbital period
is thus well constrained and does not present period aliases. For both stars,
the parameters and their prior distributions used for the joint fit are listed in Table~\ref{table:prior-juliet}.
Both targets were analyzed with the same choice of model parameters. For the planet, we have the orbital period, mid-transit time,
impact parameter and planet-to-star radius ratio.
The eccentricity and argument of periastron are used directly as model parameters
and the eccentricity is governed by a Beta prior as detailed in \cite{kipping_2014}.
We chose to use the stellar density as a parameter
instead of the scaled semi-major axis ($\rm a/R_{\star}$). The normal prior on stellar density is informed by the stellar analysis
which allowed us to derive precise masses and radii and their associated errors.
TESS and NGTS have slightly different bandpasses which we modeled by setting two sets of limb-darkening parameters.
We chose a quadratic limb darkening parameterized as (q1, q2) in order to efficiently sample the parameter space \citep{kipping_2013}.
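The $(q_1, q_2)$ sampling parameters map back to the physical quadratic coefficients via $u_1 = 2\sqrt{q_1}\,q_2$ and $u_2 = \sqrt{q_1}(1 - 2q_2)$. A sketch using the TESS posterior medians for {\rm TOI-5153}:

```python
import math

def kipping_to_quadratic(q1, q2):
    """Kipping (2013) sampling parameters -> quadratic limb-darkening
    coefficients (u1, u2). Sampling q1, q2 uniformly in [0, 1] covers
    exactly the physically allowed (u1, u2) space."""
    sq1 = math.sqrt(q1)
    return 2.0 * sq1 * q2, sq1 * (1.0 - 2.0 * q2)

u1, u2 = kipping_to_quadratic(0.293, 0.288)   # ≈ (0.31, 0.23)
```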
Additional jitters and offsets are taken into account in the modeling.
As the TESS sectors are two years apart and NGTS observed the two transits of {\rm NGTS-20}\ with different number of cameras,
we have a set of four jitters and offsets for {\rm NGTS-20}. For {\rm TOI-5153}, we have a set of two jitters and offsets as only two TESS light curves are available.
For the radial velocity model, the semi-amplitude is one of the parameters, and
each spectrograph has its own offset and jitter parameters. For both targets, we used the nested sampling algorithm implemented in \texttt{juliet}, \texttt{dynesty}: each fit was done with 1000 live points and ran until the estimated uncertainty on the log-evidence was smaller than 0.1.
Photometric variability and radial velocity jitter can be accounted for either through linear models against any parameters or through Gaussian processes.
The TESS light curves show small levels of stellar variability both in the SAP and PDCSAP fluxes.
The first NGTS light curve of {\rm NGTS-20}\ shows a drop in flux after the end of the first transit, as visible in Figure~\ref{fig:phase-folded_lc_2575}.
We tried to decorrelate this feature against relevant parameters extracted from the NGTS observations (e.g. time, airmass, peak flux)
but were unable to find a combination of linear models that would model it.
We therefore chose to model the correlated noise in all datasets with a Gaussian process using an approximate Matern kernel implemented via \texttt{celerite} \citep{foreman-mackey_2017}.
\begin{figure}
\includegraphics[width=\hsize]{./figures/phase_plot_rv_TIC1240_AA_dd7gp_tight.pdf}
\caption{Radial velocities from CORALIE (blue dots), HARPS (red triangles), FEROS (orange crosses), and CHIRON (purple stars) for {\rm TOI-5153}. Median Keplerian model is plotted as a grey line along with its corresponding 1 $\sigma$ uncertainty (grey shaded area).}
\label{fig:rv_timeserie_1240}
\end{figure}
\begin{figure}
\includegraphics[width=\hsize]{./figures/phase_plot_rv_TIC2575_d70gp_tight.pdf}
\caption{Radial velocities from CORALIE (blue points), FEROS (orange crosses), and CHIRON (purple triangles) for {\rm NGTS-20}. Median Keplerian model is plotted as a grey line along with its corresponding 1 $\sigma$ uncertainty (grey shaded area).}
\label{fig:rv_timeserie_2575}
\end{figure}
\section{Results}
\label{results}
We present the final parameters derived from the joint analysis of {\rm TOI-5153}\,b in Section~\ref{results_ticA} and {\rm NGTS-20}\,b in Section~\ref{results_ticB}. We compute a first estimate of the heavy element content for both planets in Section~\ref{results_metals}.
\subsection{{\rm TOI-5153}}
\label{results_ticA}
For {\rm TOI-5153}\ we find that the best solution from the joint fit corresponds to the orbital period of 20.33\,days.
The log-evidence values for the six joint fits are shown in Table~\ref{table:comparison-evidences}.
The highest evidence model favors a non-circular orbit with an eccentricity of $0.091^{+0.024}_{-0.026}$.
The planet has a radius of $\rm 1.06 ^{+0.04} _{-0.04}\,R_{Jup}$ for a mass of $\rm 3.26 ^{+0.18} _{-0.17}\,M_{Jup}$.
The semi-major axis is equal to $\rm 0.158 ^{+0.006} _{-0.006}\,au$.
Figure~\ref{fig:rv_timeserie_1240} presents the phase folded radial velocities along with the median radial velocity model. The radial velocity semi-amplitude is equal to $\rm 212 ^{+8} _{-8}\,m\,s^{-1}$. The residuals on the radial velocity are about 129, 76, 33, and 15\,$\rm m\,s^{-1}$ for CHIRON, CORALIE, FEROS, and HARPS, respectively. After about one year of radial velocity monitoring of the system, we do not see any hint of a long term drift.
Each TESS sector light curve is modeled with a Gaussian process with amplitudes of 270\,ppm and 560\,ppm and time-scales of 0.9\,days and 1.4\,days for sectors 6 and 33, respectively. The 30\,min binned residuals after model subtraction are about 620\,ppm for sector 6 and 490\,ppm for sector 33.
Phase folded light curves and their corresponding models are shown in Figure~\ref{fig:phase-folded_lc_1240}.
The posterior distributions of the planetary parameters, the stellar density, and the radial velocity semi-amplitude
are presented in Figure~\ref{fig:corner_1240}.
The final parameters of the system can be found in Table~\ref{table:system-parameters}.
\begin{table}
\caption{Model comparison based on the log-evidence values (ln Z) for {\rm TOI-5153}.}
\label{table:comparison-evidences}
\centering
\begin{tabular}{l l l}
\hline
\hline
\noalign{\smallskip}
& Free eccentricity & Fixed eccentricity \\
\noalign{\smallskip}
\noalign{\smallskip}
Period aliases & ln Z & ln Z\\
\noalign{\smallskip}
19.78 d & 89803.1 $\pm$ 0.5 & 89798.8 $\pm$ 0.5 \\
20.33 d & 89815.5 $\pm$ 0.5 & 89808.7 $\pm$ 0.5 \\
20.91 d & 89790.2 $\pm$ 0.5 & 89743.0 $\pm$ 0.7 \\
\hline
\end{tabular}
\tablefoot{Six fits were performed using a Normal prior on three period aliases around 20 days with the option
to set the orbital eccentricity as a free or fixed parameter.}
\end{table}
\subsection{{\rm NGTS-20}}
\label{results_ticB}
{\rm NGTS-20}\ hosts a massive warm Jupiter on a longer-period orbit of about 54.19\,days. The planet has a mass of $\rm 2.98 ^{+0.16} _{-0.15}\,M_{Jup}$ and a radius of $\rm 1.07 ^{+0.04} _{-0.04}\,R_{Jup}$. The orbital eccentricity is equal to $0.432 ^{+0.023} _{-0.023}$ and the semi-major axis to $\rm 0.313 ^{+0.013} _{-0.013}\,au$.
The radial velocity semi-amplitude is equal to $\rm 138 ^{+5} _{-5}\,m\,s^{-1}$ and the residuals are about 30, 40, and 12\,$\rm m\,s^{-1}$ for CORALIE, CHIRON, and FEROS, respectively. The phase folded radial velocities are presented in Figure~\ref{fig:rv_timeserie_2575}. The radial velocity observations of {\rm NGTS-20}\ cover a baseline of two years and there is no hint of a long term drift.
The three transits are shown in Figure~\ref{fig:phase-folded_lc_2575}. The 30 min binned residuals for the TESS light curve are about 390\,ppm. The two NGTS light curves show residuals, binned to 30 min, close to 525\,ppm for the one camera observation and 130\,ppm for the observation with six cameras.
Figure~\ref{fig:corner_2575} displays the posterior distributions of the planetary parameters along with the stellar density and radial velocity semi-amplitude.
The final parameters of the system are listed in Table~\ref{table:system-parameters}.
\subsection{Heavy element content}
\label{results_metals}
We estimate the heavy element content of both warm Jupiters by comparing their planetary mass and radius
with interior structure models.
We used the planetary evolution model \texttt{completo21} \citep{mordasini_2012} for the core and the envelope modeling
and coupled it with a semi-grey atmospheric model \citep{guillot_2010}.
We select the SCvH equations of state (EOS) of hydrogen and helium (H and He) with a He mass fraction of Y=0.27 \citep{saumon_1995}.
Following \citet{thorngren_2016b}, we model the planet with a planetary core of $\rm 10\,M_{\oplus}$.
The core is composed of iron and silicates, with an iron mass fraction of 33\%.
The remaining heavy elements are homogeneously mixed in the H/He envelope.
The heavy elements are modeled as water with the AQUA2020 EOS of water \citep{haldemann_2020}.
We run the evolution tracks for both planets from 10\,Myr to 8\,Gyr,
varying the water mass fraction from 0 to 0.25, as shown in Figure~\ref{fig:metal_content}.
We compute the error on the water mass fraction using a Monte Carlo approach,
taking into account the uncertainties on the planetary radius and the stellar age.
We choose an average value of $\rm 4.1 ^{+2.7} _{-2.7}\,Gyr$ for the age of {\rm NGTS-20}.
We find that the radii of the planets are well explained with a water mass fraction of
$\rm 0.12 ^{+0.06} _{-0.06}$ for {\rm TOI-5153}\,b and $\rm 0.10 ^{+0.06} _{-0.06}$ for {\rm NGTS-20}\,b,
corresponding to heavy element masses of 133\,$\rm M_{\oplus}$ and 104\,$\rm M_{\oplus}$, respectively.
We vary the planetary mass within the $\rm 1\sigma$ uncertainty and find no significant changes of the water mass fraction.
We assume that the stellar metallicity scales with the iron abundance ([Fe/H]) derived in Section~\ref{stellar-analysis} as follows:
$\rm Z_{\star} = 0.0142 \times 10^{[Fe/H]}$ \citep{asplund_2009,miller_2011}.
The heavy element enrichment ($\rm Z_{p} / Z_{\star} $) is estimated at $6.9 ^{+3.4} _{-3.4}$ for {\rm TOI-5153}\,b and $5.5 ^{+3.1} _{-3.1}$ for {\rm NGTS-20}\,b.
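These enrichment values follow from simple bookkeeping; a sketch reproducing them (taking $\rm M_{Jup} = 317.8\,M_{\oplus}$):

```python
M_JUP_IN_MEARTH = 317.8   # Jupiter mass in Earth masses

def heavy_element_enrichment(m_heavy_mearth, m_planet_mjup, fe_h):
    """Zp/Zstar, with Zstar = 0.0142 * 10**[Fe/H] (Asplund et al. 2009)
    and Zp the total heavy-element mass fraction of the planet."""
    z_planet = m_heavy_mearth / (m_planet_mjup * M_JUP_IN_MEARTH)
    z_star = 0.0142 * 10.0 ** fe_h
    return z_planet / z_star

ratio_toi = heavy_element_enrichment(133.0, 3.26, 0.12)    # ≈ 6.9
ratio_ngts = heavy_element_enrichment(104.0, 2.98, 0.15)   # ≈ 5.5
```

The quoted uncertainties on the ratios are dominated by the water-mass-fraction errors, not by this conversion.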
We can also calculate the heavy element enrichment using the relations of \citet{thorngren_2016b}:
we find that $\rm Z_{p} / Z_{\star}$ equals $5.7 ^{+1.0} _{-1.0}$ for {\rm TOI-5153}\,b and $5.9 ^{+1.0} _{-1.0}$ for {\rm NGTS-20}\,b.
For both planets, our values are in agreement with the estimations derived from \citet{thorngren_2016b}.
\begin{figure}
\includegraphics[width=\hsize]{./figures/radius_v_time_NGTS20_TOI5153.pdf}
\caption{Evolution curves of the planetary radius as a function of time, color-coded by the water mass fraction in the envelope
for {\rm TOI-5153}\,b (left panel) and {\rm NGTS-20}\,b (right panel).}
\label{fig:metal_content}
\end{figure}
\begin{table*}
\caption{Derived parameters for {\rm TOI-5153}\ and {\rm NGTS-20}\ systems.}
\label{table:system-parameters}
\centering
\begin{tabular}{l c c}
\hline
\hline
\noalign{\smallskip}
Parameters & {\rm TOI-5153}\,b & {\rm NGTS-20}\,b \\
\hline
\noalign{\smallskip}
Fitted parameters & & \\
\noalign{\smallskip}
Orbital period (days) & $20.33003^{+0.00007}_{-0.00007}$ & $54.18915 ^{+0.00015} _{-0.00015}$ \\
Time of transit $\rm T_0$ (days) & $2458486.1239^{+0.0019}_{-0.0020}$ & $2458432.9798 ^{+0.0025} _{-0.0025}$\\
Radius ratio $\rm R_p/R_{\star}$ & $0.0777^{+0.0012}_{-0.0013}$ & $0.0618 ^{+0.0012} _{-0.0012}$ \\
Impact parameter & $0.725^{+0.024}_{-0.027}$ & $0.846 ^{+0.014} _{-0.015}$ \\
Stellar density ($\rm kg\,m^{-3}$) & $649^{+60}_{-60}$ & $348 ^{+30} _{-31}$ \\
TESS limb darkening q1 & $0.293^{+0.035}_{-0.034}$ & $0.326 ^{+0.024} _{-0.022}$ \\
TESS limb darkening q2 & $0.288^{+0.017}_{-0.017}$ & $0.298 ^{+0.009} _{-0.009}$ \\
NGTS limb darkening q1 & - & $0.380 ^{+0.018} _{-0.019}$ \\
NGTS limb darkening q2 & - & $0.330 ^{+0.008} _{-0.009}$ \\
Eccentricity & $0.091^{+0.024}_{-0.026}$ & $0.432 ^{+0.023} _{-0.023}$ \\
Argument of periastron (deg) & $144^{+24}_{-23}$ & $66.1 ^{+3.2} _{-3.2}$ \\
Radial velocity semi-amplitude ($\rm m\,s^{-1}$) & $212 ^{+8} _{-8}$ & $137 ^{+5} _{-5}$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Derived parameters & &\\
\noalign{\smallskip}
Planetary radius ($\rm R_{J}$) & $1.06 ^{+0.04} _{-0.04}$ & $1.07 ^{+0.04} _{-0.04}$ \\
Planetary mass ($\rm M_{J}$) & $3.26 ^{+0.18} _{-0.17}$ & $2.98 ^{+0.16} _{-0.15}$ \\
Inclination (degrees) & $88.27 ^{+0.14} _{-0.14}$ & $88.4 ^{+0.6} _{-0.6}$ \\
Transit duration (hours) & $4.87 ^{+0.08} _{-0.07}$ & $4.55 ^{+0.09} _{-0.08}$ \\
Semi-major axis (au) & $0.158 ^{+0.006} _{-0.006}$ & $0.313 ^{+0.013} _{-0.013}$ \\
Pericenter distance (au) & $0.143 ^{+0.007} _{-0.006}$ & $0.178 ^{+0.011} _{-0.010}$ \\
Apocenter distance (au) & $0.172 ^{+0.007} _{-0.007}$ & $0.448 ^{+0.019} _{-0.019}$ \\
Equilibrium temperature (K) & $906 ^{+13} _{-13}$ & $688 ^{+14} _{-13}$ \\
Equilibrium temperature at periastron (K) & $949 ^{+19} _{-19}$ & $913 ^{+18} _{-18}$ \\
Equilibrium temperature at apoastron (K) & $867 ^{+18} _{-18}$ & $575 ^{+11} _{-11}$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Instrumental parameters & &\\
\noalign{\smallskip}
TESS offset & $-0.00005^{+0.00014}_{-0.00012}$ & $-0.00009 ^{+0.00009} _{-0.00009}$ \\
TESS jitter (ppm) & $5^{+26}_{-4}$ & $2.5 ^{+13.4} _{-2.1}$ \\
TESS 2 offset & $-0.00005^{+0.00028}_{-0.00027}$ & - \\
TESS 2 jitter (ppm) & $3.6^{+24.3}_{-3.2}$ & - \\
NGTS offset & - & $0.0002 ^{+0.0017} _{-0.0024}$ \\
NGTS jitter (ppm) & - & $4475 ^{+78} _{-77}$ \\
NGTS 2 offset & - & $-0.0038 ^{+0.0012} _{-0.0006}$\\
NGTS 2 jitter (ppm) & - & $6462 ^{+42} _{-43}$\\
TESS dilution factor & $0.991998 ^{+0.000027} _{-0.000026}$ & -\\
\noalign{\smallskip}
GP amplitude TESS (relative flux) & $0.00027 ^{+0.00017} _{-0.00007}$ & $0.00027 ^{+0.00008} _{-0.00005}$\\
GP time-scale TESS (days) & $0.9 ^{+0.7} _{-0.4}$ & $0.61 ^{+0.21} _{-0.14}$ \\
GP amplitude TESS 2 (relative flux) & $0.00056 ^{+0.00030} _{-0.00013}$ & - \\
GP time-scale TESS 2 (days) & $1.4 ^{+0.8} _{-0.4}$ & - \\
GP amplitude NGTS (relative flux) & - & $0.002 ^{+0.006} _{-0.002}$ \\
GP time-scale NGTS (days) & - & $4.2 ^{+3.2} _{-2.6}$ \\
GP amplitude NGTS 2 (relative flux) & - & $0.0007 ^{+0.0025} _{-0.0006}$ \\
GP time-scale NGTS 2 (days) & - & $4.9 ^{+2.9} _{-2.7}$ \\
\noalign{\smallskip}
CORALIE offset ($\rm km\,s^{-1}$) & $-35.396^{+0.018}_{-0.018}$ & $12.553^{+0.005}_{-0.005}$ \\
CHIRON offset ($\rm km\,s^{-1}$) & $-36.685 ^{+0.024} _{-0.024}$ & $0.047^{+0.009}_{-0.010}$ \\
FEROS offset ($\rm km\,s^{-1}$) & $-35.297^{+0.009}_{-0.009}$ & $12.585^{+0.004}_{-0.004}$ \\
HARPS offset ($\rm km\,s^{-1}$) & $-35.308^{+0.011}_{-0.012}$ & - \\
CORALIE jitter ($\rm m\,s^{-1}$) & $2^{+9}_{-1}$ & $4^{+4}_{-2}$ \\
CHIRON jitter ($\rm m\,s^{-1}$) & $1.1 ^{+3.6} _{-0.9}$ & $26^{+12}_{-10}$ \\
FEROS jitter ($\rm m\,s^{-1}$) & $2^{+5}_{-2}$ & $5^{+5}_{-3}$ \\
HARPS jitter ($\rm m\,s^{-1}$) & $0.9^{+3.2}_{-0.7}$ & - \\
\end{tabular}
\end{table*}
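As an aside, the fitted stellar densities can be cross-checked against the orbital geometry: for a transiting planet, $\rho_\star \simeq 3\pi (a/R_\star)^3 / (G P^2)$, neglecting the planetary mass. A minimal sketch inverting this relation for the two fits (the implied $a/R_\star$ values are derived here for illustration, not quoted from the paper):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
DAY = 86400.0

def a_over_rstar(rho_star, period_days):
    """Scaled semi-major axis implied by the fitted stellar density,
    via rho_star = 3*pi*(a/R_star)**3 / (G*P**2) (planet mass neglected)."""
    P = period_days * DAY
    return (rho_star * G * P**2 / (3.0 * math.pi)) ** (1.0 / 3.0)

aR_toi = a_over_rstar(649.0, 20.33003)    # TOI-5153 b: a/R* ~ 24
aR_ngts = a_over_rstar(348.0, 54.18915)   # NGTS-20 b:  a/R* ~ 38
print(f"a/R* = {aR_toi:.1f} (TOI-5153 b), {aR_ngts:.1f} (NGTS-20 b)")
```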
\section{Discussion}
\label{discussion}
The majority of known gas giants with measured masses and radii are hot Jupiters.
They have orbital periods of a few days and are subject to strong interactions with their host star,
such as tidal interactions (e.g. \citealt{valsecchi_2015})
and radius inflation mechanisms (e.g. \citealt{sestovic_2018,sarkis_2021,tilbrook_2021}). There is a positive correlation between the planetary radius and the amount of stellar incident flux \citep{enoch_2012a}, and only hot Jupiters receiving a level of stellar irradiation lower than $2\times10^{8}\,\rm erg\,s^{-1}\,cm^{-2}$ have been shown to have a radius independent of the stellar irradiation \citep{demory_2011}.
Cooler gas giants, such as
{\rm TOI-5153}\,b and {\rm NGTS-20}\,b, with equilibrium temperatures of about 900\,K and 700\,K,
should not be affected by radius inflation mechanisms. Thus we may derive precise bulk metallicities based on evolution models (e.g. \citealt{thorngren_2019a}).
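As a sanity check on the tabulated values: with the equilibrium temperature scaling as the inverse square root of the star-planet separation, the periastron and apoastron temperatures in Table~\ref{table:system-parameters} follow from the eccentricities alone. A minimal sketch (assuming the tabulated $T_{\rm eq}$ refers to the semi-major-axis separation):

```python
def teq_at(teq_a, ecc, where):
    """Equilibrium temperature at periastron or apoastron, assuming
    T proportional to (separation)**-0.5 and that teq_a refers to
    the semi-major-axis separation."""
    d_over_a = (1.0 - ecc) if where == "peri" else (1.0 + ecc)
    return teq_a * d_over_a ** -0.5

# TOI-5153 b: Teq = 906 K, e = 0.091 ; NGTS-20 b: Teq = 688 K, e = 0.432
print(round(teq_at(906, 0.091, "peri")), round(teq_at(906, 0.091, "apo")))
print(round(teq_at(688, 0.432, "peri")), round(teq_at(688, 0.432, "apo")))
```

The results agree with the tabulated periastron and apoastron temperatures to within about 1\,K.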
We provide a first estimate of the heavy element content of {\rm TOI-5153}\,b and {\rm NGTS-20}\,b. We show that their metal-enrichment is in agreement with the mass-metallicity relation and is consistent with planets at long periods with comparable masses (e.g. \citealt{dalba_2022}). Both planets are excellent probes to help us better understand this relation.
We compare the properties of both planets with the population of known transiting exoplanets. We queried the DACE PlanetS exoplanet catalog database\footnote{\href{https://dace.unige.ch/exoplanets/?}{dace.unige.ch/exoplanets}} on February 10th, 2022 and selected exoplanets with mass and radius uncertainties smaller than 25\% and 8\% respectively. The radius uncertainty is scaled to 1/3 of the mass uncertainty to have the same impact on the planetary density.
Figure~\ref{fig:db_plot} shows that {\rm TOI-5153}\,b and {\rm NGTS-20}\,b populate a region of the parameter space where fewer systems have been reported with precise mass and radius. In addition, {\rm TOI-5153}\,b and {\rm NGTS-20}\,b orbit relatively bright stars (Vmag\,=\,11.9 and 11.2) in comparison to the known systems with orbital periods above 20\,days. While the Kepler mission was successful at detecting long-period transiting planets, most of these discoveries were made around faint stars (e.g. \citealt{wang_2015,kawahara_2019}). The follow-up of TESS single transit candidates allows one to probe the same population of planets around brighter stars (e.g. \citealt{eisner_2020,dalba_2022}), which are better suited for follow-up observations.
Figure~\ref{fig:db_plot_2} presents the masses, periods, and eccentricities of warm transiting planets.
Despite the small number of such planets, we notice that higher-mass planets ($M_{P} > 3-4\,M_{J}$) show higher eccentricities.
This trend is reported by \citet{ribas_2007} for a larger sample of planets detected in radial velocities,
where the authors show that the eccentricity distribution of higher mass planets is similar to that of binary stars;
hinting that these planets may have formed by pre-stellar cloud fragmentation.
{\rm TOI-5153}\,b and {\rm NGTS-20}\,b have similar masses and significantly eccentric orbits with eccentricities of 0.091$\pm$0.026 and 0.43$\pm$0.02, respectively.
\cite{bitsch_2013} showed that the eccentricity of planets with $M_{P} < 5\,M_{J}$ can be damped by the disk.
However, \cite{debras_2021} present disk cavity migration as a possible explanation for eccentricities up to 0.4 for warm Jupiter-mass planets.
Another visible feature is that the highest eccentricities do not occur at the closest orbital distances. A possible reason for this high-eccentricity cutoff could be tidal circularization by the host star \citep{adams_2006,dawson_2018}. Planetary orbits beyond a threshold in eccentricity and orbital distance circularize before we observe them \citep{schlecker_2020}. Eccentric warm Jupiters like {\rm NGTS-20}\,b can thus serve as a valuable test bed to study tidal interactions between planets and their host stars.
The eccentricities of both targets, and especially {\rm NGTS-20}\,b, make them interesting candidates for high-eccentricity migration. Eccentric warm Jupiters could be exoplanets caught in the midst of inward migration. The migration would bring them to close-in orbits, and the orbits of the new hot Jupiters would circularize due to stellar tidal forces. However, other scenarios have been put forward: \cite{schlecker_2020} present the discovery of a warm Jupiter on a highly eccentric 15-day orbit ($\rm e\sim0.58$); the tidal evolution analysis of this system shows that its current architecture likely resulted from an interaction with an undetected companion rather than ongoing high-eccentricity migration.
High-eccentricity migration is a scenario which can be tested by measuring the spin-orbit angle of the system. Both targets are suitable for Rossiter-McLaughlin observations. The predicted Rossiter-McLaughlin effect measured with the classical method is about 40\,$\rm m\,s^{-1}$ for {\rm TOI-5153}\,b and 16\,$\rm m\,s^{-1}$ for {\rm NGTS-20}\,b (Eq.~40 of \citealt{winn_2010}). The Rossiter-McLaughlin effect is large enough that the spin-orbit angle of the system can be measured with current high-resolution spectrographs.
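For reference, these amplitudes follow from the standard approximation $\Delta V_{\rm RM} \approx (R_p/R_\star)^2 \sqrt{1-b^2}\, v\sin i$. A sketch with the fitted radius ratios and impact parameters, where the $v\sin i$ values are assumptions for illustration (they are not restated in this section):

```python
import math

def rm_amplitude(k, b, vsini_ms):
    """Approximate Rossiter-McLaughlin amplitude in m/s:
    transit depth k**2 times sqrt(1 - b**2) times the projected
    stellar rotation speed (standard order-of-magnitude estimate)."""
    return k**2 * math.sqrt(1.0 - b**2) * vsini_ms

# Radius ratios and impact parameters from the joint fit; the v sin i
# values below are illustrative assumptions, not quoted from this section.
amp_toi = rm_amplitude(0.0777, 0.725, 9.6e3)   # ~40 m/s for TOI-5153 b
amp_ngts = rm_amplitude(0.0618, 0.846, 7.9e3)  # ~16 m/s for NGTS-20 b
```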
\begin{figure}
\includegraphics[width=\hsize]{./figures/radius_period_Vmag_PlanetS_AA_3.pdf}
\caption{Radius - period diagram for the population of transiting giant planets ($\rm M_{P} > 0.2\,M_{J}$) with mass and radius uncertainties smaller than 25\% and 8\%, respectively.
The size of the points is proportional to the planetary mass and the V band magnitude is color-coded.
{\rm TOI-5153}\,b is circled in black and {\rm NGTS-20}\,b in blue.}
\label{fig:db_plot}
\end{figure}
\begin{figure}
\includegraphics[width=\hsize]{./figures/mass_period_ecc_PlanetS_AA_5.pdf}
\caption{Mass - period diagram for the population of transiting giant planets ($\rm M_{P} > 0.2\,M_{J}$ and $\rm P > 10\,days$) with mass and radius uncertainties smaller than 25\% and 8\% respectively, color-coded as a function of orbital eccentricity. {\rm TOI-5153}\,b is circled in black and {\rm NGTS-20}\,b in blue.}
\label{fig:db_plot_2}
\end{figure}
\section{Conclusions}
\label{conclusion}
We report the discovery of two transiting massive warm Jupiters around the bright and metal-rich stars {\rm TOI-5153}\ and {\rm NGTS-20}.
{\rm TOI-5153}\ hosts a planet on a 20.33\,day period with a planetary mass of 3.26$\pm$0.18\,$M_{J}$ and
planetary radius of 1.06$\pm$0.04\,$R_{J}$. The orbit of the planet has an eccentricity of 0.091$\pm$0.026.
{\rm NGTS-20}\ hosts a longer period planet with an orbital period of 54.19\,days.
The planet has a radius of 1.07$\pm$0.04\,$R_{J}$, a planetary mass of 2.98$\pm$0.16\,$M_{J}$,
and presents an eccentric orbit with an eccentricity of 0.43$\pm$0.02.
We show that both planets are metal-enriched and their heavy element content
is consistent with the mass-metallicity relation of gas giants.
We used TESS photometry to identify single transit candidates which were then
followed up with ground-based photometric and spectroscopic instruments
in order to confirm the planetary nature of the transiting objects.
Both warm Jupiters orbit bright stars and are ideal targets
for additional observations in order to measure the spin-orbit alignment of these systems.
These exoplanets show that our selection of targets with single and duo transits, combined with
subsequent radial velocity follow-up, is successful, and
we expect more discoveries of long-period transiting planets over the coming years.
\begin{acknowledgements}
This work has been carried out within the framework of the National Centre of
Competence in Research PlanetS supported by the Swiss National Science
Foundation under grants 51NF40\_182901 and 51NF40\_205606.
The authors acknowledge the financial support of the SNSF.
ML acknowledges support of the Swiss National Science Foundation under grant
number PCEFP2194576.
The NGTS facility is operated by the consortium institutes with support from
the UK Science and Technology Facilities Council (STFC) under projects ST/M001962/1 and ST/S002642/1.
The contributions at the University of Warwick by PJW, RGW, DRA,
and SG have been supported by STFC through consolidated grants ST/L000733/1 and ST/P000495/1.
RB acknowledges support from FONDECYT Project 11200751 and from ANID –
Millennium Science Initiative.
DD acknowledges support from the TESS Guest Investigator Program grants
80NSSC21K0108 and 80NSSC22K0185, and NASA Exoplanet Research Program grant
18-2XRP18\_2-0136.
Some of the observations in this paper made use of the NN-EXPLORE Exoplanet
and Stellar Speckle Imager (NESSI). NESSI was funded by the NASA Exoplanet
Exploration Program and the NASA Ames Research Center. NESSI was built at the
Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and
Emmett Quigley.
GZ thanks the support of the ARC DECRA program DE210101893.
AJ, FR, and PT acknowledge support from ANID -- Millennium Science Initiative --
ICN12\_009 and from FONDECYT project 1210718.
The results reported herein benefited from collaborations and/or information
exchange within the program “Alien Earths” (supported by the National
Aeronautics and Space Administration under agreement No. 80NSSC21K0593) for
NASA’s Nexus for Exoplanet System Science (NExSS) research coordination
network sponsored by NASA’s Science Mission Directorate.
JSJ gratefully acknowledges support by FONDECYT grant 1201371 and from the
ANID BASAL projects ACE210002 and FB210003.
MNG acknowledges support from the European Space Agency (ESA) as an ESA
Research Fellow.
The work performed by HPO has been carried out within the framework of the
NCCR PlanetS supported by the Swiss National Science Foundation.
EG gratefully acknowledges support from the David and Claudia Harding
Foundation in the form of a Winton Exoplanet Fellowship.
T.T. acknowledges support by the DFG Research Unit FOR 2544
"Blue Planets around Red Stars" project No. KU 3625/2-1.
T.T. further acknowledges support by the BNSF program "VIHREN-2021" project No.
JIV acknowledges support of CONICYT-PFCHA/Doctorado Nacional-21191829.
The authors acknowledge the use of public TESS data from pipelines at
the TESS Science Office and at the TESS Science Processing Operations Centre.
This paper includes data collected with the TESS mission obtained from
the MAST data archive at the Space Telescope Science Institute (STScI).
Funding for the TESS mission is provided by the NASA Explorer program.
STScI is operated by the Association of Universities for Research in Astronomy, Inc.,
under NASA contract NAS5-26555.
This work made use of \texttt{tpfplotter} by J. Lillo-Box
(publicly available in www.github.com/jlillo/tpfplotter),
which also made use of the python packages \texttt{astropy},
\texttt{lightkurve}, \texttt{matplotlib} and \texttt{numpy}.
This publication makes use of The Data \& Analysis Center for Exoplanets (DACE),
which is a facility based at the University of Geneva (CH) dedicated to
extrasolar planets data visualisation, exchange and analysis.
DACE is a platform of the Swiss National Centre of Competence in Research (NCCR) PlanetS,
federating the Swiss expertise in Exoplanet research.
The DACE platform is available at https://dace.unige.ch.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
This paper consists of three successively more concrete parts, logically forming a progression. The first, purely model theoretic, constructs a canonical geometry associated
with any first-order theory. This `core space' was introduced in \cite{patterns}; here we recast it in a `local' setting that allows the core, and the attendant automorphism group, to be locally compact rather than compact. Beyond that, the relation between
the core and the automorphism groups of models of the theory is now brought out explicitly, in terms of a
quasi-homomorphism; the Lascar-Shelah neighbor relation plays the role of the error set.
The second part uses this construction to give a structure theorem for approximate subgroups. The `Lie model theorem'
proved in \cite{nqf} for near-subgroups, and used in \cite{bgt}, is extended in modified form to
general approximate subgroups; a conjecture of Massicot-Wagner predicting that this should be possible is thus confirmed. The third part specializes to {\em approximate lattices} in locally compact groups. In the case of semisimple groups,
a full classification can be achieved, solving in particular a problem of Bj\"orklund and Hartnick \cite{BjH}.
The rest of the introduction follows this outline in more detail.

Type spaces - the Stone spaces dual to the Boolean algebra of formulas - occur for instance in the study of countable categoricity.
These spaces contain all information about $T$, but only as a convenient reformulation of the syntax; they do not bring out well the geometry of definable sets; in particular, they are inert and admit no automorphisms. In Morley's proof of his theorem on uncountable categoricity, type spaces {\em over models} first played a leading role.
Morley rank is immediately visible from their topology; and they carry a natural action of $Aut(M)$. This does not, however, give a canonical space or group associated with $T$, since a choice of $M$ is required. For some purposes, this is alleviated by uniqueness theorems for saturated models, but then the group
becomes large and unwieldy, falling outside any known structured family of groups.
Shelah \cite{shelah} recognized the algebraic imaginaries as a critical geometry associated with a first-order theory $T$.
The automorphism group is a profinite quotient of $Aut(M)$ (for $M$ a sufficiently homogeneous model of $T$), and a
perfect Galois duality holds between them. For $T=\mathrm{ACF}$ one recovers Galois's duality in full. This was for any theory; for
stable $T$, Shelah showed how amalgamation of substructures is controlled by his group.
Kim and Pillay \cite{kim-pillay} defined the {\em bounded hyperimaginaries} as a generalization
of the algebraic imaginaries; their automorphism group is a compact topological group. Once again this is valid for any first-order theory; for simple theories, it controls amalgamation. This equally beautiful theory later became absorbed in continuous logic, and indeed in \cite{byu} is not even terminologically separated from Shelah's algebraic imaginaries.
It was Lascar, more than a decade earlier, who suggested that a richer Galois group is needed in unstable theories. He defined two candidates: the compact one used by Kim-Pillay, and
a bigger common quotient of the groups $Aut(M)$, called the general Lascar group. The latter was much studied recently, but
was never found to be the automorphism group of any meaningful geometry (see e.g. \cite{kms} for a related negative result.)
Newelski, Krupinski, Pillay, Rzepecki, Simon and others, working group-theoretically, showed suggestively that the general Lascar group was in turn a quotient of a compact group; see the introduction to \cite{patterns} for more detailed references.
In \cite{patterns},
a common subspace $\mathcal{J}$ of all type spaces over models was identified, generalizing the Shelah space and Kim-Pillay space for stable and simple theories, and having significant structure beyond the topology. While $\mathcal{J}$ is canonical and closely related to definability in $T$, the automorphism groups $Aut(M)$ of models of $T$ do not naturally act on it. We show here, however, that they do admit a useful {\em quasi-action}. A {\em quasi-homomorphism} $\phi: G \to H:K$
between two groups $G,H$, relative to an `error set' $K \subset H$, is a map $\phi$ satisfying $\phi(1)=1$ and $\phi(xy) \in \phi(x)\phi(y) K$, for all $x,y \in G$. This is of interest when $K$ is small in some sense.
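For concreteness, a toy instance of the definition (illustrative only, not from the paper): take $G = H$ to be the additive group of integers, $\phi(n)$ the nearest integer to $n\alpha$ for an irrational $\alpha$, and $K = \{-1,0,1\}$; the defect $\phi(m+n) - \phi(m) - \phi(n)$ always lies in $K$, so $\phi$ is a quasi-homomorphism with finite error set.

```python
import math, random

ALPHA = math.sqrt(2)  # any irrational slope works here

def phi(n):
    """phi(n) = nearest integer to n*alpha: a quasi-homomorphism of the
    integers into themselves, with error set K = {-1, 0, 1}."""
    return round(n * ALPHA)

random.seed(0)
for _ in range(10_000):
    m, n = random.randint(-10**6, 10**6), random.randint(-10**6, 10**6)
    defect = phi(m + n) - phi(m) - phi(n)
    assert defect in (-1, 0, 1)   # phi(m+n) lands in phi(m) + phi(n) + K
```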
We find a quasi-homomorphism $\varphi: Aut(M) \to Aut(\mathcal{J}):$\cjRL{S}. The error set \cjRL{S}\ derives from a fundamental `neighbor' relation going back to \cite{lascar} (on $Aut(M)$
for highly saturated $M$) and \cite{shelah-simple} (for indiscernible sequences), and defined here on the core $\mathcal{J}$.
Given a quasi-homomorphism $\phi: G \to H:K$, one can of course factor out the group $\langle K \rangle$ generated by $K$
so as to obtain an ordinary homomorphism $G \to H/\langle K \rangle$. In the case of the quasi-homomorphism $Aut(M) \to Aut(\mathcal{J}):$\cjRL{S}\ described above, this would recover the general Lascar group.
The stance taken here is that factoring out by $\langle $\cjRL{S}$\, \rangle$ is the wrong thing to do. An alternative way to forget
unwanted information while discarding much less of value is offered by category theory: in place of changing the objects, add (iso)morphisms. This is set out in Appendix \ref{categories}. In the paper itself we will use the categorical approach only in spirit.
We will however analyze mathematically the relevant morphisms.
Examples of quasi-homomorphisms include projective representations of a group $G$; here $H$ is a unitary group, the group
of automorphisms of a Hilbert space, and $K$ is the one-dimensional center of $H$.
Quasi-homomorphisms with unitary targets were also studied notably in \cite{turing} (a 1938 paper first cited, according to MathSciNet, in 2011), and in \cite{kazhdan}, with $K$ a set of elements of small operator norm in both cases. When $G$ is finite (Turing) or amenable (Kazhdan), they were shown to be close to ordinary homomorphisms.
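A small executable illustration of the projective-representation example (standard material, not specific to this paper): mapping the Klein four-group to the Pauli matrices gives a quasi-homomorphism into $U(2)$ whose error set is the finite central set of phases $\{\pm 1, \pm i\}$.

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# phi : (Z/2)^2 -> U(2), a projective representation by Pauli matrices.
phi = {(0, 0): I2, (1, 0): X, (0, 1): Z, (1, 1): X @ Z}
PHASES = [1, -1, 1j, -1j]  # the central error set K

def defect_phase(g, h):
    """Return the phase c in K with phi(g) phi(h) = c * phi(g+h)."""
    gh = ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
    prod_m = phi[g] @ phi[h]
    for c in PHASES:
        if np.allclose(prod_m, c * phi[gh]):
            return c
    raise ValueError("not a projective representation")

for g, h in product(phi, repeat=2):
    assert defect_phase(g, h) in PHASES  # quasi-homomorphism with error set K
```

For instance $ZX = -XZ$, so the defect phase of that pair is $-1$: the map is multiplicative only up to the central error set.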
The quasi-morphisms of interest to us will have locally compact target groups, and normal,
compact error sets $K$.
The case $H = \Rr$ has been very well studied; a quasi-homomorphism into $\Rr$, with compact error set, is called a {\em quasimorphism}; it is significant in several branches of geometry. (Readers unfamiliar with the notion may enjoy reading
the very short introduction \cite{kotschick} at this point).
Returning to first-order theories, we can get a clear idea of the smallness of \cjRL{S}\ if we allow a slight generalization,
to local logic. This is introduced in \S 2. In many-sorted logic, quantification is allowed only on a given sort, or finitely many sorts. If one thinks of a sort as compact (say via the type space), then a many-sorted model is only locally compact.
Local logic is a slight variation, likewise restricting quantification, but refraining from destroying possible symmetries by naming specific bounded domains as sorts. In the simplest case,
one has a metric $m$ (a discrete metric is good enough for our purposes); and quantification is allowed over balls of bounded $m$-radius.
We will assume a `doubling' property - a $2$-ball for the metric $m$ is covered by finitely many $1$-balls.
The construction of $\mathcal{J}$ generalizes; $Aut(\mathcal{J})$ is now locally compact rather than compact; it has a natural locally compact Hausdorff quotient, which is what we will actually work with\footnote{Here we are in the same situation as Lascar, who did not know if the quotient is actually necessary.}. The relation \cjRL{S}\ on $\mathcal{J}$ remains local:
$x $\cjRL{S}$\, y$ implies that their distance is at most $1$. As a result, the set of automorphisms $\mbox{\cjRL{S}}^{Aut(\mathcal{J})}$ is a normal, compact subset of $Aut(\mathcal{J})$. In fact,
compactness alone does not fully capture the tightness of \cjRL{S}. But along with conjugation invariance - arising from the
canonical nature of \cjRL{S}\ - it is sufficient for most of the group-theoretic applications.
At this point, the reader is encouraged to consult \thmref{summary1}, summarizing the main properties of the locally compact core $\mathcal{J}$, and \thmref{qh-basic},
including the basic properties of the quasi-homomorphism $\varphi$. The material is closely parallel to the non-local case of \cite{patterns}. The exposition in sections 2 and 3 is a compromise, not repeating every lemma of \cite{patterns}, but attempting to be self-contained with respect to the main line and in particular the results that will be critical in the later sections.
In \secref{definablegroups}, we transpose to the case of definable groups. In the local setting, we look at groups that are direct limits of definable sets; in the model theory literature they are known as piecewise-definable, locally definable, or strict ind-definable.
Thus $G=\cup X_n$, where each $X_n$ is a definable set, and multiplication restricts to a definable map $X_n \times X_n \to X_{n+1}$.
The doubling condition amounts here to the assumption that $X_{n+1}$ is covered by finitely many translates of $X_n$.
This makes each $X_n$ into an {\em approximate subgroup} of $G$, as defined in \cite{tao}.
At least at first approach, approximate subgroups are best understood up to commensurability, i.e. without distinguishing $X$ and $X'$, if each can be covered by finitely many translates of the other.
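A minimal concrete instance (standard, not from the paper): a symmetric interval of integers $X = \{-N,\dots,N\}$ satisfies $X \cdot X \subseteq T \cdot X$ for a set $T$ of three translates, so it is an approximate subgroup; and any two such intervals are commensurable.

```python
def interval(n):
    """The symmetric integer interval X = {-n, ..., n}."""
    return set(range(-n, n + 1))

def covered_by_translates(target, x, translates):
    """Is `target` contained in the union of t + x over t in translates?"""
    return all(any(s - t in x for t in translates) for s in target)

N = 50
X = interval(N)
XX = {a + b for a in X for b in X}          # X*X = {-2N, ..., 2N}
# X is an approximate subgroup: X*X lies in 3 translates of X ...
assert covered_by_translates(XX, X, translates=[-N, 0, N])
# ... and intervals of different sizes are commensurable: interval(3N)
# is covered by finitely many translates of interval(N), and conversely.
assert covered_by_translates(interval(3 * N), X,
                             translates=range(-2 * N, 2 * N + 1, N))
```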
We will call an approximate subgroup {\em {laminar}} if it contains a sequence $(X_n)$ as above
indexed by {\em negative} integers. A
commensurability class $\omega$ is {\em {laminar}} if it contains a laminar approximate subgroup.
Model-theoretically, $X$ is laminar iff it contains a subgroup cut out by an intersection of generic definable subsets.
Quotienting such a $\bigwedge$-definable subgroup out from a saturated version of $G$ leads to a homomorphism $\varphi: G \to \mathsf H$
into a locally compact group $\mathsf H$; and $\omega$ can be recovered from $\varphi$ and $\mathsf H$, since the elements of $\omega$ are intertwined
with pullbacks of compact open neighborhoods of $1$ in $\mathsf H$.
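A laminar toy case that can be checked directly (illustrative only; the notation is ours): in the additive group of dyadic rationals, the balls $X_n = \{x : |x| \le 2^n\}$ form a chain indexed by {\em all} integers, positive and negative, with each $X_{n+1}$ covered by three translates of $X_n$; the inclusion into the locally compact group $\Rr$ plays the role of $\varphi$, each $X_n$ being the pullback of a compact neighborhood of $0$.

```python
from fractions import Fraction

def ball(n, depth=6):
    """The dyadic ball X_n = {x : |x| <= 2**n}, sampled at denominator
    2**depth (n + depth must stay nonnegative)."""
    step = Fraction(1, 2 ** depth)
    k_max = 2 ** (n + depth)        # number of steps of size `step` in 2**n
    return [k * step for k in range(-k_max, k_max + 1)]

# Each X_{n+1} is covered by the three translates X_n - 2**n, X_n, X_n + 2**n,
# for every integer n -- including negative n: the chain continues downward.
for n in range(-3, 3):
    r = Fraction(2) ** n
    for x in ball(n + 1):
        assert any(abs(x - t) <= r for t in (-r, Fraction(0), r))
```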
We obtain something like a Bourgain system (\cite{green-sanders}) down to an
arbitrarily {fine} scale. In the non-laminar case, to extend the metaphor, eddies of a certain size prevent further descent.
Assuming the existence of an appropriate invariant measure on $X^3$ (or just an invariant ideal, with similar properties to the measure zero ideal), it was proved
in \cite{nqf}, \cite{sanders}, \cite{massicot-wagner} (in various settings) that the commensurability
class of $X$ is {laminar}; in fact the sequence $X^n, n \geq 4$ can be continued downwards through the negative integers.
Here we prove, with no amenability assumptions, that {\em a map $\varphi: G \to \mathsf H:K$ into a locally compact group still exists, with the same property of intertwining {\em compact} with {\em definable}.} Only now $\varphi$ is a quasi-homomorphism: a compact, normal error set $K$ is allowed,
and open neighborhoods of $K$ (rather than of $1$) are considered on the locally compact side. See \thmref{grmain}.
Model-theoretically, the characteristic definability property of $\varphi$ in the {laminar} case was {\em definable separation}:
if $C,C'$ are disjoint compact subsets of $\mathsf H$, then $\varphi^{-1}(C),\varphi ^{-1}(C')$ are separated by a definable set.
If we give $\mathsf H$ a metric, $C,C'$ are separated by some distance $\epsilon>0$.
In the general setting, we still have the separation property but now assuming $CK,C'K$ are disjoint; they must be seen to be distinct even if our resolution is limited to the radius of $K$.
In the {laminar} case $\epsilon>0$ is arbitrary, but in general a certain minimum separation is required.
Massicot and Wagner wrote, regarding approximate subgroups: ``We conjecture that even without the definable amenability assumption a suitable Lie model exists'' \cite{massicot-wagner}. The term `Lie model' was coined earlier in \cite{bgt}, where it was the starting point for the classification of pseudo-finite approximate groups.
The meaning of `suitable' was not further clarified, but
I view \thmref{ag2} as a confirmation of their intuition. It asserts that {\em any commensurability class of approximate subgroups
arises by pullback of compact open sets under a quasi-homomorphism into a Lie group $L$; moreover, the error set is contained in
a bounded subset of a closed, normal abelian subgroup $A \trianglelefteq L$, $A \cong \Rr^N$, such that $L$-conjugation respects a Euclidean structure on $A$.} (We refer to such actions as {\em rigid}.)
It was not immediately apparent
whether \thmref{ag2} has real content or merely replaces one kind of approximateness (for subgroups) by
another (for homomorphisms); the problems attacked in part 3, described below, were intended as a testing ground. The latter turned out to be a much tighter condition, thanks to the combined compactness and normality of the error set and the finite dimensionality of the target. To bring this out directly we give two additional formulations within \S 5,
a group-theoretic and a model-theoretic one. The group-theoretic version, \thmref{ag2gt}, points out that the quasi-homomorphism
$\varphi: G \to \mathsf H:K$ of \thmref{ag2} can be viewed as an (ordinary) homomorphism from an `almost' central extension $\tilde{G}$ of $G$ by $\Rr^n$, into the Lie group $\mathsf H$. Here `almost' means that the image in $End(\Rr^n)$ of conjugation by elements of $\tilde{G}$,
while not trivial, is contained in a compact subgroup.
This resembles the situation with projective representations of Lie groups, seen as representations
of a central extension. The cocycle coding the group extension will be a bounded one, forming one connection with the far-reaching results of bounded cohomology (e.g. \cite{gromov}, \cite{brooks}, \cite{burger-monod},\cite{monod-icm}.)
The second formulation is \thmref{ag2mt}. The model-theoretic avatar of an approximate subgroup in the {laminar} case is an $\bigwedge$-definable group `of bounded index.' Here we show that in general, an approximate subgroup is associated with
an $\bigwedge$-definable set, obtained as the `approximate kernel' of finitely many quasimorphisms on an $\bigwedge$-definable group of bounded index, i.e. the pullback of a bounded set of $\Rr^n$. This connects again, not in quite the same way, to bounded cohomology.
For instance,
since groups of bounded exponent admit neither dense homomorphisms into connected Lie groups nor unbounded quasimorphisms,
any approximate subgroup of a bounded exponent group is commensurable to an actual subgroup; the conclusion is the same
as in \cite{nqf}, where this was used as an illustration of the theory, but the hypothesis includes no amenability assumption. The same theorem shows that the bumpiness alluded to above is felt only at the scale of the error set; further down, continuity can be regained.
The "approximate kernel" of a nontrivial quasimorphism is always an approximate subgroup (\propref{converse}), but never {laminar} (\propref{notsame})
The proof of \thmref{ag2} combines the conclusion of \thmref{grmain} with an analysis of precompact conjugacy classes in locally compact groups, applied to the error set $K$. {\em Such classes are contained in a normal, abelian-by-compact part of the group}; this is the import of \thmref{lc1cc} (see there for the precise statement). \thmref{lc1cc} can be viewed
as a footnote to a chapter of locally compact group theory (FC groups) completed
in \cite{tits} and \cite{GM}, following a line by Baer and Neumann, and going back as usual to Schur.
There, a similar
conclusion was reached for groups {\em all} of whose conjugacy classes are precompact. As I could not find the
statement we need on a single conjugacy class, it is proved in Appendix \ref{precompactconj}.
\secref{amenable} is a brief interlude, forming a bridge to discrete approximate subgroups.
We clarify the relation between the definable amenability of the approximate subgroup and amenability of the ambient group itself. Amenability of the ambient group involves a finitely additive, translation invariant measure on all subsets; this does not immediately give a similar measure on the approximate group, as it may have measure $0$. Nevertheless, another measure may be constructed that shows it is amenable too. Thus the known
{laminar}ity of approximate subgroups in the definably amenable setting (e.g. \cite{nqf}, \cite{massicot-wagner}, \cite{HKP}) applies
to all discrete approximate subgroups of amenable groups, whether the ambient group is discrete or locally compact. The result of this section was obtained earlier by Machado; see the final paragraph of the introduction.
The third part of the paper, consisting of \S 7, \S 8 and Appendix A, is concerned with the geometry of numbers; but this will not be obvious at the outset.
Our working environment in \S 7 is an easily defined class of locally compact groups, that we call {\em abstractly semisimple}. It has two defining requirements: no abelian normal subgroups, no discrete conjugacy classes other than the trivial ones.
These serve as ambient groups; within them, we consider discrete approximate subgroups. (Perhaps it is worth recalling here, though I am aware of no direct connection, the fundamental role of discreteness in the theory of pseudo-finite approximate subgroups of \cite{bgt}.)
We call a commensurability class of discrete approximate subgroups {\em strictly dense} if the group generated by any element of the class is dense; this is at an opposite extreme from discrete subgroups; under certain assumptions, a decomposition exists into the two kinds. Building on the results of \S 5, we begin by showing
that, for abstractly semisimple groups, {\em a strictly dense approximate subgroup is commensurably contained in a laminar one.}
See \thmref{discrete1}.
Recall that a {\em lattice} is a discrete subgroup of finite co-volume; it is called {\em uniform} if it is co-compact. In the case of the Euclidean space $\Rr^3$, a lattice is the accepted mathematical model of a crystal, and their
histories are inseparable. Yves Meyer in \cite{meyer} defined, in the setting of $\Rr^n$, {\em approximate lattices};
this later became the mathematical home for quasicrystals.
Meyer proved
that they are {laminar} as approximate subgroups, with Lie model $\Rr^n$; he did so in the equivalent, geometrically suggestive language of {\em model sets}.
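A standard example, recalled here only for orientation (it plays no role in the argument): let $\varphi = (1+\sqrt 5)/2$ and let $\varphi' = (1-\sqrt 5)/2$ be its Galois conjugate. Then
\[ \Lambda = \{a + b\varphi : a,b \in \Zz,\ |a + b\varphi'| \leq 1\} \]
is an approximate subgroup of $(\Rr,+)$, and a model set: it is the image, under projection to the first coordinate, of the points of the lattice $\{(a+b\varphi,\, a+b\varphi') : a,b \in \Zz\} \leq \Rr^2$ lying in the strip $\Rr \times [-1,1]$.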
On the other hand, lattices in non-commutative locally compact groups such as $SL_n(\Rr)$ play a major role in many parts of geometry, ergodic theory and number theory; especially relevant to us are the arithmeticity
theorems of \cite{margulis}.
Bj\"orklund and Hartnick \cite{BjH} were first to investigate approximate versions. They defined uniform approximate lattices, and offered two ``tentative" definitions in the non-uniform case. We will suggest another definition, defining
an {\em approximate lattice in the sense of finite covolume} to be a discrete approximate subgroup $\Lambda$ of $G$, such that $\Lambda C =G$ for some Borel set $C$ of finite measure. As in \cite{BjH}, $\Lambda$ is said to be {\em uniform} if $C$ can be taken compact.
It seems to work;
the basic properties of approximate lattices, with this elementary definition, are developed in Appendix \ref{approxlattices}.
After writing this, I ran into
Problem 6.4 of \cite{bats}, asking for the ``right'' definition of an approximate lattice; Appendix \secref{approxlattices}, then, is our possibly naive proposal.
In this paper, approximate lattice will always mean, by way of abbreviation, `approximate lattice in the sense of finite covolume'.
Among the approximate lattices, Bj\"orklund and Hartnick consider the {\em Meyer sets}. The definition (which we naturally extend from the uniform case) involves not only $G$ but also an additional locally compact group
$\mathsf H$, that we will call the complementary group, and a lattice $\Gamma$ in $G \times \mathsf H$. A Meyer set $M$ is defined by the positive primitive formula of {\em local} logic: $(\exists h \in C)((x,h) \in \Gamma)$, where $C$ is a compact open set of $\mathsf H$.
This description is equivalent to laminarity of $M$, with $\mathsf H$ corresponding to the target of the homomorphism into
a locally compact group.
I am grateful to Emmanuel Breuillard for introducing me to this beautiful theory, and explaining this equivalence between the language of laminarity, or Lie models, and of Meyer model sets.
Bj\"orklund and Hartnick posed the problem of proving laminarity when possible (\cite{BjH}, Problem 1, formulated there for
uniform approximate lattices.) Simon Machado, in \cite{machado}, \cite{machado2}, did so in the nilpotent and solvable cases. In \secref{discreteapprox}, complementing this work, we solve Problem 1 for semisimple groups $G$.
To do so, we will use a local logic where `local' implies `discrete'. This contrasts with the natural treatment of
say $SL_n(\Rr)$ in local logic, where ``local'' means ``compact'' (see e.g. \cite{HPP}, section 7.)
Nevertheless the local compactness of $Aut(\mathcal{J})$ (as mediated by Theorems 4.11,5.10)
allows us to recover compactness, effectively
finding the complementary group and presenting the
approximate lattice as a Meyer set. See \corref{semisimple1}.
The duality between compactness and discreteness seen here is a key feature of this world.
At this point, Margulis arithmeticity becomes available. See \ref{arithmodelset} for the definition of an {\em arithmetic approximate lattice}. It is just like the definition of an arithmetic lattice in \cite{margulis}, except that
(as if in line with Weil's recommendation)
all places are treated as equals; no special provision is made for archimedean primes. If we switch attention
for a moment to rings,
$\Zz$ is the simplest lattice (in $\Rr$), while the approximate ring $\{a/p^n \in \Zz[1/p]: |a| \leq p^n \}$ is the simplest
arithmetic approximate lattice (in $\Qq_p$). In each case, the (approximate) lattice $R$ is the set of rationals whose norm in every completion, other than the ambient locally compact group, is at most $1$. And in both cases, appropriately understood,
$G(R)$ gives rise to arithmetic lattices in $G(\Rr)$ or $G(\Qq_p)$ for semisimple groups $G$.
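In the ring example, the defining properties can be checked directly (a routine verification, recorded only for orientation). Write $\Lambda = \{a/p^n \in \Zz[1/p]: |a| \leq p^n \} = \{x \in \Zz[1/p]: |x| \leq 1\}$. Then
\[ \Lambda + \Lambda \subseteq \{x \in \Zz[1/p] : |x| \leq 2\} \subseteq \{-1,0,1\} + \Lambda, \]
so $\Lambda$ is an approximate subgroup of $(\Qq_p,+)$. It is discrete in $\Qq_p$: two of its elements lying in a common coset of $p^k \Zz_p$ differ by an element of the finite set $p^k\Zz \cap [-2,2]$. And $\Lambda + \Zz_p = \Qq_p$, since every $z \in \Qq_p$ is the sum of its $p$-adic fractional part, which lies in $\Zz[1/p] \cap [0,1) \subseteq \Lambda$, and an element of $\Zz_p$. As $\Zz_p$ is compact, $\Lambda$ is even a uniform approximate lattice.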
\thmref{arith1} shows that {\em any approximate lattice in $G$ is commensurable with a product of lattices and arithmetic approximate lattices of direct factors of $G$. } Note that the arithmeticity of the strictly approximate lattices is valid even for $n=2$; higher rank assumptions are not required.
To illustrate the concrete character of this answer, let us describe, up to commensurability, the approximate lattices $\Lambda$ in $SL_n(\Rr)$.
They depend on a choice of a number field $K \leq \Rr$ and an algebraic group $G \leq GL_N$ defined over $K$,
and isomorphic to $SL_n$ over $\Rr$ (the case $G=SL_n$, $n=N$ is interesting enough). Let $\Lambda_K$ be
the set of matrices in $G(K) \leq GL_N(K)$ of the form $I+ M$, where $I$ is the identity matrix and $M$ is a matrix whose nonzero entries are Pisot numbers $\alpha$ with $\Qq(\alpha)= K$. Then $\Lambda_K$ is an approximate lattice in $SL_n(\Rr)$.
It follows from \thmref{arith1} that {\em any approximate lattice in $SL_n(\Rr)$ is commensurable to a conjugate of one of these, or to a lattice.}
In positive characteristic,
strictly approximate lattices seem unnecessary, and in fact do not exist (\corref{char-p}).
The class of abstractly semisimple groups was introduced in \S 7 as simply a convenient
home for groups like $SL_n(K)$, with $K$ a local field, and their more demanding cousins; but it eventually becomes natural to take interest in the class itself. \thmref{arith2}
shows that all strictly approximate lattices in abstractly semisimple groups are of arithmetic origin. In particular,
by \remref{arith2b}, {\em an arbitrary abstractly semisimple group containing a strictly approximate lattice is virtually isogenous to an adelic product of semisimple groups over local fields}, where
`virtually isogenous' allows homomorphisms with co-compact image and compact kernel.
The proof of this recognition theorem
uses Margulis arithmeticity again, and a method of pivoting over the complementary locally compact group.
Finally, we take a peek beyond soluble or semisimple groups. We construct a simple example of a non-laminar approximate lattice. Returning to bounded cohomology, we also give a criterion for an approximate lattice $\Lambda \leq G$, where $G$ is semisimple, ensuring laminarity of any approximate lattice $\Lambda'$ lying above $\Lambda$ in an algebraic group $G'$ with semisimple part $G$.
Leitfaden: The appendices are, within the present paper, self-contained; as is \secref{amenable}.
Sections 2, 3 and 4 form a logical sequence. The main results of the later sections require, from this development, only parts (1) and (2) of \thmref{grmain}.
The reader interested mostly in approximate lattices can read Appendix \ref{approxsg}, the first few pages of \secref{approxsg}, including \thmref{ag2}, and turn to \secref{discreteapprox}.
Questions of uneven scope and difficulty are scattered through the text, mostly at the end of the sections.
Thanks to Arturo Rodriguez-Fanlo for his reading and comments.
Immediately after writing this paper I became aware of Machado's \cite{machado3} and
\cite{machado4}. In the earlier-written \cite{machado3}, Machado developed a theory that includes \S 6
and parallels much of \S A. He generally works with strong or cocompact approximate lattices; but for many if not all results of \S A, his methods are similar and should extend to finite-covolume lattices with no difficulty. In \cite{machado4}, he obtains
arithmeticity and rigidity results for strong approximate lattices in higher rank Lie groups, via what appears to be a
beautiful and completely different approach using Zimmer cocycle rigidity. See the introduction to \cite{machado4}. The two approaches still need to be compared in detail; see in particular the comments added to Questions \ref{othergroups-q} and \ref{bjhquestion}.
\section{Local logic} \label{local}
We begin by reviewing the theory of existentially closed and saturated models. This theory belongs to the model theory of the 1960's, and was already recalled in detail in \cite{patterns}. But we need a generalization here, permitting a localization of the quantifiers without disturbing a possible symmetry. The extension is slight, in that upon adding a constant symbol $c$ in
${\mathbb{g}}$, we return to the previous positive-primitive setting, with balls of various radii around $c$ viewed as sorts.
As usual more care is needed to subtract a constant than to add one.
Throughout this paper, unless explicitly specified otherwise,
the word `definable' will mean: by a formula of the given language, without parameters.
We assume given a relational language (with equality) ${\L}$, with a distinguished sort ${\mathbb{g}}$, and a distinguished family $\{\mu\}$ of binary relations on ${\mathbb{g}}$, called {\em locality relations}. For simplicity, we will think of ${\mathbb{g}}$
as the only sort, and
assume we have only one locality relation $\mu_1$.
For any ${\L}$-structure $A$, we define a metric $\mu$ on $A$ according to the Cayley graph of $\mu_1$:
\[\mu(x,y) \leq 1 \iff x=y \vee \mu_1(x,y) \vee \mu_1(y,x) \]
\[ \mu(x,y) \leq n+1 \iff (\exists z)( \mu(x,z)\leq 1 \wedge \mu(z,y) \leq n ) \]
We write $\mu_n(x,y)$ for $\mu(x,y) \leq n$ (replacing $\mu_1$ by its symmetrization),
and also $\mu_n(a):=\{y: \mu_n(a,y)\}$.
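For orientation, a trivial example: take $A$ with ${\mathbb{g}}(A) = \Zz$ and $\mu_1(x,y)$ interpreted as $|x-y| \leq 1$. Then the Cayley-graph metric is the usual one, $\mu(x,y) \leq n \iff |x-y| \leq n$, and $\mu_n(a)$ is the integer interval $[a-n,a+n]$.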
By a pp formula, we mean one built from atomic formulas using conjunction and {\em local existential quantification}: if $\phi(x,y,\cdots)$
is a pp formula with a free variable $x$ and another variable or constant $y$ (both of sort $\mathbb{g}$),
then $(\exists x)(\mu_1(x,y) \wedge \phi)$ is also pp. Thus iterated quantification ranges over a ball of bounded radius. There are no pp sentences, though they do appear as soon as a constant is named.
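For instance, if $R$ is a binary relation symbol of ${\L}$, then
\[ (\exists z)\big(\mu_1(z,x) \wedge (\exists w)(\mu_1(w,z) \wedge R(w,y))\big) \]
is a pp formula; any witnesses $z,w$ lie within distance $1$, respectively $2$, of the free variable $x$, illustrating the bounded range of iterated local quantification.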
A {\em primitive-universal theory} $\CT$ includes negations of sentences of the form $(\exists x)(\bigwedge_{i=1}^m \psi_i)$,
where $x$ is a tuple of variables and each $\psi_i$ is an atomic formula; equivalently, of sentences $(\exists x)\psi(x)$ with $\psi$ pp. Thus a typical sentence
can be written $(\forall x) \neg \psi(x)$.
A {\em primitive-existential theory} $\CT_{\exists}$ includes sentences of the form $(\exists x)(\bigwedge_{i=1}^m \psi_i)$.
Given a primitive-universal theory $\CT$, let $\CT_{\exists} $ consist of all existential sentences $(\exists x) \psi(x)$ where $\neg (\exists x) \psi(x) \notin \CT$.
We assume {\em irreducibility} in a strong form \footnote{We could allow some dependence
on $\psi,\psi'$, but the formulation with an absolute bound $k$ will suffice for our purposes. In fact, in the main application we will have LJEP$_2$; in the transitive case, when ${\mathbb{g}}$ has a single 1-type, we have LJEP$_1$.}: \\ \noindent
(LJEP$_k$) If $(\exists x) \psi(x), (\exists y) \psi'(y) $ are in $\CT_{\exists}$, where $\psi,\psi'$ are pp and $x,y$ are variables
(of sort $\mathbb{g}$), then
$(\exists x)(\exists y)(\mu_k(x,y) \wedge \psi(x) \wedge \psi'(y) ) \in \CT_{\exists}$.
We say LJEP holds if LJEP$_k$ holds for some $k$. In this case, $\CT ^{\pm} := \CT \cup \CT_{\exists}$ is consistent.
A {\em weak model} of $\CT$ is a structure $A$ where the sentences of $\CT$ hold. It is a {\em model} of $\CT$, in the sense of local logic, if it is a weak model where any two elements of ${\mathbb{g}}(A)$ are at finite $\mu$-distance. We will sometimes
use the term {\em local model} for emphasis and clarity.
Due to (LJEP), the class of models of $\CT$ has the joint embedding property. If $A,B \models \CT$, pick $a \in {\mathbb{g}}(A), b \in {\mathbb{g}}(B)$;
then the union of the diagrams of $A$ and of $B$, along with the sentence $\mu_k(a,b)$, is consistent. Any weak model
of this union,
restricted to the universe spanned by $A,B$, will be a model of $\CT$.
Local logic ultrapowers of a model $M$ of $\CT$ are formed by first taking the ordinary ultrapower, then restricting
to the elements at finite distance from some element of $M$. We can similarly define ultraproducts of models $M$ of $\CT_c$, i.e. models $M$ of $\CT$ with a distinguished element $a=c^M \in {\mathbb{g}}(M)$.
A model $A$ of $\CT$ is {\em existentially closed} (abbreviated e.c.) if ${\mathbb{g}}(A) \neq \emptyset$, and for every homomorphism $f: A \to B$, where $B \models \CT$, and
any ${\L}_A$-{pp} sentence $\phi$, if $B_A \models \phi$ then $A_A \models \phi$. Here ${\L}_A$ is ${\L}$
expanded by constants for the elements $a \in A$; they are interpreted as $a$ in $A_A$ and as $f(a)$
in $B_A$. In particular, this implies that $f$ is an embedding.
The usual direct limit construction shows that any model $A$ of $\CT$ admits a homomorphism $f: A \to B$ into an existentially closed model $B$ of $\CT$, with $|B| \leq |{\L}_A| + \aleph_0$. Any existentially closed model of $\CT$ is a model of $\CT^{\pm}$. Write $M \modec \CT$ if $M$ is an e.c. model of $\CT$. Also, for any local sentence $\psi$, write $\CT \modec \psi$ if
$M \models \psi$ whenever $M \modec \CT$.
The class of e.c. models admits amalgamation: if $f_i: A \to B_i$, we may embed each $B_i$ in an ultrapower $A^*$ of $A$, then compose with a homomorphism to an e.c. model.
For pp formulas $\phi(x),\psi(x)$ we write $\phi \perp \psi$ if $\CT \models (\forall x)\neg (\phi \wedge \psi)$.
If $M \modec \CT$, for any pp formula $\phi(x)$ and any $a$ from $M$, we have $M \models \phi(a)$
or $M \models \psi(a)$ for some pp formula $\psi \perp \phi$. This is proved just as in \cite{patterns} 2.3(1),
taking into account that pp formulas now use local quantifiers only.
Write $\phi \subsetec \psi$ if $\theta \perp \phi$ whenever
$\theta \perp \psi$; in e.c. models we have $\phi \subsetec \psi$ iff $\phi(M) \subseteq \psi(M)$.
\begin{prop}\label{prop2.1} Assume $\CT$ is a local primitive-universal theory enjoying (LJEP) and, for some infinite cardinal $\theta$, \\ \noindent
(ecB): Any e.c. model of $\CT$ has cardinality $\leq \theta$. \\
Then $\CT$ has a unique
e.c. model $\mathcal{U}$ that is {\em universal}, i.e. any $M \models \CT$ admits a homomorphism $M \to \mathcal{U}$.
Any homomorphism on $\mathcal{U}$ is an embedding, and
any endomorphism $f: \mathcal{U} \to \mathcal{U}$ is an isomorphism. If $\mathcal{U} \leq B \models \CT $ then
there exists a homomorphism $r: B \to \mathcal{U}$ with $r|\mathcal{U} = Id_\mathcal{U}$. Moreover, $\mathcal{U}$ is homogeneous for {pp} types.
\end{prop}
\begin{proof} By the usual construction of saturated models, using amalgamation repeatedly and taking direct limits, there exists $\mathcal{U} \modec \CT$ such that
for any e.c. $A \leq \mathcal{U}$ and any embedding $f: A \to B \models \CT$ with $|B| \leq \theta$, there exists a homomorphism $g: B \to \mathcal{U}$ with $g \circ f = Id_A$. A priori $\mathcal{U}$ may be large, but
by (ecB) we have $|\mathcal{U}| \leq \theta$.
If $M \models \CT$, then by (LJEP) and the joint embedding property that we saw follows from it, there is some $B \models \CT$ admitting a homomorphism $h: M \to B$ and a homomorphism
$f: \mathcal{U} \to B$. Taking $A=\mathcal{U}$ above, we find $g: B \to \mathcal{U}$. Then $gh: M \to \mathcal{U}$ shows that $\mathcal{U}$ is universal.
Any homomorphism from any $A \modec \CT$ into a model of $\CT$ is an embedding, by the definition of e.c.
Let $f: {\mathcal{U}} \to {\mathcal{U}}$ be an endomorphism,
with image $U'$. Then $f ^{-1}: U' \to {\mathcal{U}}$ is an embedding that extends (by the $|{\mathcal{U}}|^+$-saturation of ${\mathcal{U}}$)
to an embedding $g: {\mathcal{U}} \to {\mathcal{U}}$. Since $g$ is injective, while $g|U'$ is surjective, we must have ${\mathcal{U}}=U'$. Thus $f$ is an
automorphism of $\mathcal{U}$.
As ${\mathcal{U}}$ is universal, if $\mathcal{U} \leq B \models \CT$ there exists a homomorphism $f: B \to {\mathcal{U}}$;
on ${\mathcal{U}}$ it induces an isomorphism $g$; so $r=g ^{-1} \circ f: B \to {\mathcal{U}}$ is as required.
Any universal $U \modec \CT$ is isomorphic to $\mathcal{U}$: since $U$ is universal,
there exists a homomorphism $f: U \to \mathcal{U}$, which must be an embedding; so we may assume $U \leq \mathcal{U}$.
Then there exists a retraction $r: \mathcal{U} \to U$. But endomorphisms of $\mathcal{U}$ are isomorphisms, so $\mathcal{U} \cong U$.
Homogeneity for pp types: let $C,C' \subset \mathcal{U}$ and let $g: C \to C'$ be a bijection preserving pp formulas.
We may assume $C \neq \emptyset$, say $c \in C$.
Using locality of $\mathcal{U}$, any existential sentence $\theta$ true in $\mathcal{U}_C$ is implied by a stronger pp sentence, specifying
also a distance of the witnesses from $c$. Hence $\theta$, transposed via $g$, is also true in $\mathcal{U}_{C'}$.
Thus $g$ extends to an embedding $G$ of $\mathcal{U}$ in some elementary extension $\mathcal{U}^*$ of $\mathcal{U}$ (a weak model initially).
Restricting to elements at finite distance from $g(c)$ will give a local model $V$ including all of $g(\mathcal{U})$.
We also have the usual inclusion of $\mathcal{U}$ in $\mathcal{U}^*$, and again it is contained in $V$.
Let $r: V \to \mathcal{U}$ be a retraction. Then $r \circ G: \mathcal{U} \to \mathcal{U}$ is an automorphism extending $g$.
\end{proof}
\ssec{The {pp} topology on ${\mathcal{U}}$ and $\Aut({\mathcal{U}})$.}
\label{pptop}
Let us topologize the sorts of ${\mathcal{U}}$ (specifically, ${\mathbb{g}}^m$ for $m \in \Nn$).
We give each sort of $\mathcal{U}$ the coarsest topology such that every pp-definable set, with parameters, is closed.
Thus as a pre-basis for ${\mathbb{g}}^m$ we take the complements of sets of the form $\{x: R(x,c) \}$, with $x=(x_1,\ldots,x_m)$, $c=(c_1,\ldots,c_k) \in {\mathcal{U}}$, and $R$ a {pp} relation.
As we included equality in the language, singletons in ${\mathbb{g}}^n$ are pp-definable with parameters and hence closed, so
each sort of $\mathcal{U}$ is T1. (This remains true without equality in the language, see \cite{patterns}.)
We will now make a further assumption:
(LC) There exists a pp formula $\mu_{>n}(x,y)$ such that $\mu_{>n} \perp \mu_n$, and for some
$N >n$, ${\mathbb{g}} \subsetec \mu_{>n} \cup \mu_{N} $.
\begin{lem}\label{LC2} Let $a \in {\mathbb{g}}(\mathcal{U})$, $n \in \Nn$, and let $B=\mu_n(a) \subset {\mathbb{g}}$ be a $\mu$-ball. Then $B^m \subset {\mathbb{g}}^m$ is compact.
Assuming (LC) holds, ${\mathbb{g}}$ is locally compact; \footnote{We use the term to mean that every point has a compact neighborhood; it does not presuppose that ${\mathbb{g}}$ is Hausdorff.} and every compact subset of $\mathbb{g}(\mathcal{U})$ is contained
in some ball $\mu_n(a)$.
\end{lem}
\begin{proof} Consider a family $F_i$ of basic closed subsets of $B^m$ with the finite intersection property.
$F_i$ is defined by $R_i(x,c_i)$ with $R_i$ a finite disjunction of {pp} formulas. In an elementary extension $U'$ of ${\mathcal{U}}$,
one can find $d=(d_1,\ldots,d_m)$ with $R_i(d,c_i)$ holding for all $i$, as well as $\mu_n(a,d_i)$ for each $i \leq m$.
We can initially allow $U'$ to be a non-local extension, then discard all elements at infinite distance from $a$; this will not affect the truth value of local formulas for the remaining elements, and the $d_i$ remain in the domain.
By \propref{prop2.1} there exists $r: U' \to {\mathcal{U}}$,
$r|{\mathcal{U}}=Id_{\mathcal{U}}$. Then $R_i(r(d),c_i)$ holds for each $i$, so $r(d) \in \cap_i F_i$ and
$\cap_i F_i \neq \emptyset$.
If (LC) holds, then $a$ lies in the interior of $\mu_N(a)$, since it is contained in the complement of $\mu_{>n}(a)$ which in turn is contained in $\mu_N(a)$. Hence $a$ has a compact neighborhood, and so ${\mathbb{g}}$ is locally compact. If $Y$ is a compact
subset of $\mathbb{g}(\mathcal{U})$, as it is covered by neighborhoods $\mu_N(a')$, it must be covered by finitely many such neighborhoods,
$\mu_N(a_1),\cdots, \mu_N(a_m)$;
and as the `centers' $a_i$ are at finite distances from each other, it follows that $Y$ is contained in a large enough ball.
\end{proof}
Let $G= \Aut({\mathcal{U}})$. Give $G$ the topology whose pre-basic open sets have the form
\[\{g: \neg R (ga_1,\ldots,ga_n,b_1,\ldots,b_m) \}\]
where $a_1,\ldots,a_n,b_1,\ldots,b_m \in \mathcal{U}$ and $R$ is pp. The action $G \times \mathcal{U} \to \mathcal{U}$ is then continuous in the first variable; for fixed $g$ it is also continuous in the second variable, since $g$ is an automorphism.
It is also easy to see that multiplication $G \times G \to G$ is continuous in each variable, and that inversion $g \mapsto g ^{-1}$
is continuous.
As $\mathcal{U}$ is T1, it follows from left-continuity of the action that $G$ is T1.
Let $G(a,n) = \{g \in G: \mu_n(g(a),a)\}$.
\begin{lem}\label{LC3} { $G(a,n)$ is compact. Assuming (LC), $G$ is a locally compact space.}\end{lem}
\begin{proof} Fix $a$ and $n$. Let $u$ be an ultrafilter on a set $I$, and let $g_i \in G(a,n)$, $i \in I$;
we need to find a limit point $g$ of $(g_i)_i$ along $u$. Let $\mathcal{U}^*$ be the ultrapower of $\mathcal{U}$ along $u$,
and let $g_*: \mathcal{U} \to \mathcal{U}^*$ be the ultraproduct of the maps $g_i: \mathcal{U} \to \mathcal{U}^*$. Since $\mu_n(g_i(a),a)$ holds for each $i$,
we have $\mu_n(g_*(a),a)$ and hence $\mu_{n+k}(g_*(c),a)$ whenever $\mu_k(c,a)$. Thus $g_*$ maps $\mathcal{U}$ into
the local ultrapower. Let $j$ be the diagonal embedding
$\mathcal{U} \to \mathcal{U}^*$, ultrapower of $Id: \mathcal{U} \to \mathcal{U}$.
As $\mathcal{U}^* \models \CT^{\pm}$,
\propref{prop2.1} provides a homomorphism $r: \mathcal{U}^* \to \mathcal{U}$ with $r \circ j = Id_{\mathcal{U}}$. Let $g=r \circ g_* $.
Then $g \in End(\mathcal{U})=\Aut(\mathcal{U})$. If $R(g_ia_1,\ldots,g_ia_n,b_1,\ldots,b_m)$ holds for $u$-almost all $i \in I$,
then $\mathcal{U}^* \models R(g_*a_1,\ldots,g_*a_n,jb_1,\ldots,jb_m)$ so $\mathcal{U} \models R(ga_1,\ldots,ga_n,b_1,\ldots,b_m)$.
Hence $g$ is indeed a limit point of $(g_i)_i$ along $u$.
Assuming (LC), $G$ is locally compact: since translations are continuous, it suffices to show
that the identity element $1 \in G$ has a compact neighborhood. Indeed as in \lemref{LC2}, the compact set $G(a,N)$ contains
an open set including $1$, namely the complement of $\{g: \mu_{>n}(a,g(a))\}$.
\end{proof}
Let $X$ be a $G$-invariant subset of ${\mathbb{g}}$. Define the infinitesimal elements of $G$, in its action on $X$, to be
\[ {\fg_X}= \{g \in G: \hbox{for every nonempty open } U \subset X, \ \ gU \cap U \neq \emptyset \} \]
Fix $X$ to be either ${\mathbb{g}}(\mathcal{U})$ or a complete 0-definable pp type $P \subset {\mathbb{g}}(\mathcal{U})$; and
let $\fg=\fg_X, \CG = G/\fg, \pi: G \to \CG$ the quotient map.
By Lemma C.1 of \cite{patterns}, $\fg$ is a closed normal subgroup of $G$, and $\CG = G/\fg$ is Hausdorff. Note that $G/\fg$
remains locally compact, with continuous inversion and right and left multiplication.
It follows from Ellis' joint continuity theorem \cite{ellis}
that $G/\fg$ is a locally compact Hausdorff topological group, acting continuously
on $\mathcal{U}/\fg$. (A direct proof in our setting is given in \cite{patterns} 3.27.)
We have shown the main part of:
\begin{prop} \label{2.5} Assume (ecB) and (LC). Let $\mathcal{U}$ be the universal e.c. model, $G=\Aut(\mathcal{U})$, $\fg=\fg_{\mathcal{U}}$, $\CG=G/\fg$.
Then $\CG$ is a locally compact topological group. With $\pi: G \to \CG$ the quotient map, we have:
\begin{itemize}
\item
If $Y \subset \CG$ is precompact, then $\pi ^{-1}(Y)$ is bounded.
\item
If $W \subset G$ is bounded, then $\pi W$ is precompact. \end{itemize}
\end{prop}
Here $W \subset G$ is said to be bounded if for some/any $a \in \mathbb{g}(\mathcal{U})$, for some $N \in \Nn$,
$Wa \subset \mu_N(a)$; see \defref{groupdefs} \ref{bounded}.
\begin{proof}
We saw that $\CG$ is a locally compact topological group, and that $\fg$ is a closed normal subgroup of $G$.
Fix $a \in \mathbb{g}(\mathcal{U})$.
By \lemref{LC2}, $a$ has a compact neighborhood $W=\mu_n(a)$. Then by definition of $\fg$,
if $g \in \fg$ then $gW \cap W \neq \emptyset $; so $\mu_n(a,gb)$ holds for some $b$ with $\mu_n(a,b)$;
hence $\mu_n(ga,gb)$, and $\mu_{2n}(a,ga)$. This shows that $\fg \subset G(a,2n)$, which is compact by \lemref{LC3}.
Thus $\fg$ is compact.
It follows easily\footnote{[Let $\pi: G \to \CG$ be the quotient map. Let $Y \subset G$ be closed with $\pi(Y)$ compact. Then $Y$ is compact:
Let $y_i \in Y, i \in I$, and $u$ an ultrafilter on $I$. Then $\lim_{i \to u} \pi(y_i) = \pi(b) $ for some $b$.
Since $b\fg$ is compact, it has a compact neighborhood $U= U \fg$; and we have $y_i \in U$ for almost all $i$;
so $y_i$ approaches a limit along $u$.]} that if $Y \subset \CG$ is compact, then so is the pullback to $G$. Thus
for the first item
it suffices to show that if $Y \subset G$ is compact, then $Y \subset G(a,N)$ for some $N$. Indeed as $Y$ is compact,
$Y a$ is compact. By \lemref{LC2},
$Y a \subset \mu_N(a)$ for some $N$;
hence by definition $Y \subset G(a,N)$.
If $W \subset G$ is bounded, $W \subset G(a,N)$ for some $N$; we saw that $G(a,N)$ is compact; hence so is $\pi(G(a,N))$.
So $\pi(W) \subset \pi(G(a,N))$ is precompact.
\end{proof}
\begin{prop} \label{autlc}
Assume (ecB) and (LC); also that ${\L}$ is countable. Let $P$ be a complete pp type in ${\mathbb{g}}(\mathcal{U})$,
$\CG(P)=G/\fg_{P}$. Then $\CG(P) $
is a second countable, locally compact topological group. If $P'$ is another complete pp type in ${\mathbb{g}}(\mathcal{U})$,
then $\CG(P'),\CG(P)$ differ only by a compact isogeny.
\end{prop}
\begin{proof} We have the natural homomorphism $G \to \CG(P)$, by restriction and then going to the quotient. It is clearly continuous, when
$\CG(P)$ is endowed with the topology described above, with respect to the action of $G$ on $P$.
The kernel is compact, indeed with $n$ as in the proof of \propref{2.5}, for any $a \in P$, the kernel is contained
in $G(a,n)$. It follows that $\CG(P)$ is locally compact, and (regardless of $P$) differs
by a compact isogeny from $\CG$. When ${\L}$ is countable, the proof of \cite{patterns} 3.24 shows that $\CG(P)$ is second countable (using $\sigma$-compactness
via $G = \cup_{n \in \Nn} G(a,n)$ in place of compactness; note that Lemma C.4 remains valid when $\ft_1$ is
$\sigma$-compact, and $X \times Y$ is $\ft_2$-compact for $\ft_1$-compact $X,Y$, as is the case here).
\end{proof}
\begin{question}
Is there an example with $Aut(P)/\fg_P \neq 0$?
\end{question}
\begin{lem} \label{LClem1} Assume $\CT^{\pm}$ has a model $A$, with $a \in \mathbb{g}(A)$, such that: \begin{enumerate}
\item
$\mathbb{g}(A)$ is endowed with a Hausdorff topology $\tau$ such that each $\mu_n(a)$ is compact, and
every pp-relation on $\mathbb{g}(A)^n$ is closed in the product topology $\tau^n$.
\item For some $k \in \Nn$, $Aut(A) \mu_k(a) = \mathbb{g}(A)$. Explicitly, for any $c \in \mathbb{g}(A)$ there exists $\sigma \in Aut(A)$
with $\mu(a,\sigma(c)) \leq k$.
\end{enumerate}
Then $A$ is universal, and
$\CT$ is e.c.-bounded (ecB), with LJEP.
\end{lem}
\begin{proof} Let ${C} \models \CT$. We have to prove that $Hom({C},A) \neq \emptyset$. Pick $d \in \mathbb{g}({C})$, and
let $F$ be the space of functions $f: {C} \to A$ such that $\mu(a,f(x)) \leq k+\mu(d,x)$ for all $x \in C$. The topology on $F$ is the finite-open topology relative to $\tau$; i.e. a typical basic open set has the form $\{f: f(d_i) \in U_i, i=1,\ldots,m\}$ where $d_1,\ldots,d_m \in \mathbb{g}({C})$ and $U_1,\ldots,U_m \in \tau$. Then $F$ can be seen as the product over all $c \in C$ of the ball around $a$ of radius $k+\mu(c,d)$. Each of these balls is compact by assumption, so $F$ is compact.
Now within $F$, we want to find a homomorphism $h:{C} \to A$. The conditions for being a homomorphism are: for any pp formula $R(x_1,\ldots,x_n)$
and any $(d_1,\ldots,d_n) \in R(C)$, $h$ should belong to $F(R;d_1,\ldots,d_n) :=\{f \in F: R(f(d_1),\ldots,f(d_n)) \}$.
We may take $d_1,\ldots,d_n $ distinct, and we may always adjoin $d$ to the list, so we may assume $d_1=d$.
By assumption, each $F(R;d_1,\ldots,d_n) $ is closed in $F$, and the intersection of two such sets is contained in a third (using the pp formula $R(x,y)=R_1(x) \wedge R_2(y)$.) Hence by compactness of $F$ it suffices to show that each $F(R;d_1,\ldots,d_n) $ is nonempty.
Since $A \models \CT^{\pm}$, there exist $b_1,\ldots,b_n \in A$ with
$R(b_1,\ldots,b_n)$. Let $\sigma$ be an automorphism of $A$ with $\mu_k(a,\sigma(b_1))$; define $f(d_i):= \sigma(b_i)$, and
let $f(c)=a$ for $c \in C \smallsetminus \{d_1,\ldots,d_n\}$.
Then $f \in F(R,d_1,\ldots,d_n)$. This finishes the proof of existence of $h: C \to A$.
Since every ${C} \models \CT$ admits a homomorphism into $A$, if ${C} \modec \CT$ then ${C}$ embeds into $A$,
so $|{C}| \leq |A|$; this proves e.c.-boundedness of $\CT$. LJEP$_k$ is clear from universality of $A$ and
the assumption $Aut(A) \mu_k(a) = \mathbb{g}(A)$.
\end{proof}
\newpage
\section{The core space}
Let $T$ be a complete first-order theory, in a language $L$ with a distinguished sort ${D}$ and locality relation on ${D}$,
that we take to be generated by a single reflexive, symmetric binary relation ${m_1}$ on ${D}$.
Let ${d_m}(x,y)$ be the Cayley-graph distance formed from ${m_1}$, as in the discussion of local logic. A model of $T$ will be said to be {\em ${m}$-local}
if any two elements of ${D}$ are at finite ${d_m}$-distance.
We treat $T$ in local logic.
This means, first of all, that we
restrict attention to ${m}$-local models $M$ of $T$, and in particular to the part of the type space consisting of types realized in elementary extensions that are themselves ${m}$-local. If a weak model $N'$ is an elementary extension of $M$
in the usual sense, and $N$ is obtained by simply removing any points of ${D}(N')$ at infinite distance from points of ${D}(M)$,
then by fiat, $N$ is a (local) elementary extension of $M$ in the local logic sense.
We will use only local formulas, i.e. formulas built up using local quantifiers only; with the one exception that we assume $T$ is complete
with respect to sentences $(\forall x) \phi$ and $(\exists x)\phi$, where $\phi$ is local.
\footnote{We can assume, or arrange by Morleyization, that $T$ admits local quantifier-elimination, i.e. every local formula
is equivalent to a quantifier-free one. In this case $T$ is determined by its universal part; the models of $T$ are e.c. models of $T_{\forall}$. In \cite{patterns} it was useful to allow more general universal theories, restricting attention to e.c. models; this could be done here too.}
Note that the `local' restriction on types $p(x) \in S_{D}(M)$ amounts simply to saying that some formula $m_n(x,b)$ be represented in $p$. Following convention in enriched logics, we denote the usual quantifier-free type
space by $S^{weak} (M)$; and the subspace of local types by $S(M)$.
For simplicity we
will concentrate on the space of local $1$-types in ${D}$, $S_{D}$.
$S_{{D}}(M)$ is an open subset of $S_{D}^{weak}(M)$
with the usual topology; hence it is locally compact.
\ssec{The pattern language} Let us recall the construction in \cite{patterns} of the pattern language ${\L}$, and
the ${\L}$-structure on the type spaces
$S_{x}(M)$, $M \models T$. Here we
concentrate on a single sort ${D}$ of $T$, and will be interested only in one sort of ${\L}$, to be denoted $\mathbb{g}$, corresponding to local $1$-types on ${D}$. Let $x,x_i$ be variables
ranging over ${D}$.
Let $y$ be an additional tuple of variables (the {\em parameter variables}.)
Let $t=(\phi_1,\ldots,\phi_n;\alpha)$ consist of
an $n$-tuple of formulas $\phi_i(x_i,y)$, together with a formula $\alpha(y)$.
To each such $t$ we associate a relation symbol $\R_t$ of ${{\L}}$, taking variables $(\xi_1,\ldots,\xi_n)$ ranging over $\mathbb{g}$.
For any $M \models T$,
we define an ${{\L}}$-structure with universe $S=S_D(M)$.
The interpretation of $\R_t$ is:
\[\R_{t} ^S =\{(p_1,\ldots,p_n) \in S: (\forall a \in \alpha(M)) \bigvee_{i \leq n} (\phi_i(x,a) \in p_i) \} \]
Thus when $n=1$, $\R_{\phi;\alpha}$ indicates a partial definition scheme for $\phi$; hence the notation $\R$.
(In \cite{patterns}, $Om_{\neg \phi; \alpha}$ was used instead, where $Om$ stands for {\em omitting} $\neg \phi$.
These are equivalent in the presence of negation in $L$, but even then $\R$ is more natural.)
For typographical reasons, we will sometimes write $\R[t]$ for $\R_t$.
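As a toy illustration (supplied here, not taken from the text): in a finite structure every quantifier-free $1$-type is realized, so types may be identified with elements and $\R_t$ can be computed directly from the defining condition. The sketch below does this for a $5$-cycle, with $\phi_1=\phi_2$ the adjacency relation and $\alpha$ trivial; all names are ad hoc.

```python
# Toy computation of R_t = R[phi1, phi2; alpha] on a finite "model":
# M is the 5-element cycle Z/5, phi_i(x, a) holds when x is within cyclic
# distance 1 of a, and alpha(a) is always true.  Identifying 1-types with
# elements,
#   R_t = { (b1, b2) : for all a with alpha(a), phi1(b1, a) or phi2(b2, a) }.
from itertools import product

M = range(5)

def phi(x, a):          # adjacency on the 5-cycle (closed 1-ball)
    return min((x - a) % 5, (a - x) % 5) <= 1

def alpha(a):           # parameter domain: all of M
    return True

R_t = {(b1, b2)
       for b1, b2 in product(M, M)
       if all(phi(b1, a) or phi(b2, a) for a in M if alpha(a))}

# (b1, b2) lands in R_t exactly when the two closed 1-balls cover the
# whole cycle, i.e. when b1 and b2 are at cyclic distance 2.
print(sorted(R_t))
```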
\ssec{Locality structure on $\mathbb{g}$}
\begin{defn} A model $M$ of $T$ is {\em orbit-bounded} (in the sort ${D}$) if for some $k \in \Nn$,
for all $a,b \in {{D}}(M)$, there exists $\alpha \in Aut(M)$ such that $a,\alpha(b)$ are at distance $\leq k$. \end{defn}
We will make two assumptions about $T$ as a local theory.
(OB) $T$ has an orbit-bounded model $M$.
(DP) For each $r \in \Nn$, for some $m=m(r) \in \Nn$, every $r+1$-ball is a union of at most $m$ $r$-balls.
\medskip
The second assumption, (DP), is the {\em doubling} property for the metric $d_m$. Assuming
any two points are connected by a finite chain of points at distance $1$ (i.e. the locality is generated by a single relation),
(DP) follows from the same statement for $r=1$.
If it holds for one ball in an orbit-bounded model, then
the uniform version will be valid too.
(DP) is needed in order to define the locality structure for the pattern language, by a pp formula.
It follows from doubling that every $n$-ball is a union of finitely many $1$-balls; hence
a type $p$ of ${D}$ over a model $M$ is local iff it represents the formula $m(x,y)$, i.e. $m(x,c) \in p$ for some $c \in {D}(M)$.
(In other words we will not have $ \R[{\neg m(x,y);D(y) }]$.)
The terminology orbit-bounded is short for `the space of orbits of $Aut(M)$ on $M$ is bounded, under the induced metric'.
In our applications, (OB) will hold in every local model of $T$. In general, if $T$ has one orbit-bounded model, then
every $\aleph_0$-homogeneous model will be orbit-bounded.
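A minimal example of (DP), supplied here for orientation (it is not taken from the text): in $(\Zz,S)$ with $m_1(x,y)$ meaning $|x-y| \leq 1$, every $(r+1)$-ball splits as three $r$-balls,
\[ \{x : |x-a| \leq r+1\} \;=\; \{x : |x-(a-1)| \leq r\} \cup \{x : |x-a| \leq r\} \cup \{x : |x-(a+1)| \leq r\}, \]
so one may take $m(r)=3$ for every $r$.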
\begin{defn} \label{mu} We define the locality generating relation $\mu_1(\xi_1,\xi_2)$ for ${\L}$ to be
\[\R[{\neg m_1(x_1,y_1), \neg m_1(x_2,y_2); \neg m_5(y_1,y_2)}]\]
Towards (LC), we also define $\mu_{>1}(\xi_1,\xi_2)$ by the formula:
\[ \R[{\neg m_1(x_1,y_1) , \neg m_1(x_2,y_2) ; m_{5}(y_1,y_2) } ] \]
\end{defn}
(We used $5$ above, where $3$ would do, in order to ensure below that close Lascar neighbors have $\mu$-distance at most $1$;
it is not an essential point.)
For the space of types over a model $M$, if $p_1,p_2 \in S(M)$ are local types, and $S(M) \models \mu_1(p_1,p_2)$,
by (DP) there exist $b_1,b_2 \in D(M)$
with $m_1(x_i,b_i) \in p_i$; now $\mu_1(p_1,p_2)$ guarantees that $m_5(b_1,b_2)$ holds, so that any two realizations
of $p_1,p_2$ will be at $m$-distance at most $7$.
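Spelling out the arithmetic behind this bound (a step made explicit here): for realizations $a_i \models p_i$, the triangle inequality for the $m$-distance gives
\[ m(a_1,a_2) \;\leq\; m(a_1,b_1) + m(b_1,b_2) + m(b_2,a_2) \;\leq\; 1 + 5 + 1 \;=\; 7. \]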
Conversely if realizations $b_i$ of $p_i$ exist with $m_1(b_1,b_2)$,
then $\mu_1(p_1,p_2)$ holds (even with $3$ in place of $5$): we have to show that if $M \models \neg m_5(c_1,c_2)$
then $\neg m_1(x_1,c_1) \in p_1$ or $\neg m_1(x_2,c_2) \in p_2$. Otherwise, $m_1(x_i,c_i) \in p_i$ so $m_1(b_i,c_i)$.
Then (initially in an elementary extension of $M$) the chain $c_1,b_1,b_2,c_2$ shows that $m_3(c_1,c_2)$ holds,
a contradiction.
\begin{lem}\label{OB2} Assume $M$ is orbit-bounded; then
$S_{D}(M)$ is orbit-bounded too, even just with respect to the action of $Aut(M)$. \end{lem}
\begin{proof} Let $p, q \in S_{{D}}(M)$.
Since $p(x)$ is local, for some $n \in \Nn$, $p(x)$ contains a formula defining an $m$-ball of radius $n$. By the doubling property (DP) it follows that $p(x)$ contains a formula defining an $m$-ball of radius $1$; i.e.
$m_1(x,c) \in p$ for some $c \in M$. Similarly $m_1(x,d) \in q$ for some $d \in M$. Let $\sigma \in Aut(M)$ be such that
$m(d,\sigma(c)) \leq k$. Then $\sigma$ induces an automorphism of $S(M)$, and $m_1(x,\sigma(c)) \in \sigma(p)$.
Let $d=d_1,\cdots,d_k=\sigma(c)$ be a chain of elements of $D(M)$, with $m_1(d_i,d_{i+1})$. (Say $k$ is even.)
Also write $d_i$ for the algebraic type $x=d_i$. We have $\mu_1(q,d_2), \mu_1(d_2,d_4), \cdots, \mu_1(d_{k-2},d_k),
\mu_1(d_k,\sigma(p))$. Thus $\mu_{k/2+1} (\sigma(p),q)$ holds in $S(M)$. \end{proof}
\ssec{The pattern theory of $T$}
We define ${\CT}$ to be the set of local, primitive universal ${\L}$-sentences true in $S(M)$ for some $M \models T$.
Defined this way, it is not clear that ${\CT}$ is consistent; but this, along with LJEP, follows from \lemref{orbitbounded2}.
\begin{lem} \label{orbitbounded2} Let $M$ be an orbit-bounded model of $T$. Then $S(M) \models {\CT}$. \end{lem}
\begin{proof}
Let $N \models T$. If $S(M) \models (\exists x_1)\cdots(\exists x_n) \Theta$, where $\Theta$ is a finite conjunction of basic relations of ${\L}$, we have to show that $S(N) \models (\exists x_1)\cdots(\exists x_n) \Theta$. For simplicity, let us assume
$\Theta$ is a basic relation $\R[{\phi_1,\ldots,\phi_n;\alpha}]$.
Let $p_1,\ldots,p_n \in S(M)$ be such that $S(M) \models \Theta(p_1,\ldots,p_n)$.
Since $M$ and $S(M)$ are local, each $p_i$ is at some finite $m$-distance $d$
from some $a \in M$, i.e. $m_d(a,x_i) \in p_i$. Thus the following partial type over $M$ is consistent:
\[ \{ \bigvee_{i=1}^n \phi_i(x_i,c): c \in \alpha(M) \} \cup \{ m_d(a,x_i): i =1,\ldots,n \} \]
Hence, for any $\sigma \in Aut(M)$, the same partial type is consistent if we replace $a$ by $\sigma(a)$. By the orbit boundedness
of $M$, it follows that for {\em any} $a' \in D(M)$, the type below is consistent too:
%
\[ \{ \bigvee_{i=1}^n \phi_i(x_i,c): c \in \alpha(M) \} \cup \{ m_{d+k}(a',x_i): i =1,\ldots,n \} \]
%
Thus for any $m$, we have in $M$:
%
\[ (\forall u \in {D})(\forall c_1,\ldots,c_m)( \bigwedge_j\alpha(c_j) \to (\exists x_1,\ldots,x_n) \bigwedge_i m_{d+k}(u,x_i) \wedge
\bigwedge_j ( \bigvee_{i=1}^n \phi_i(x_i,c_j)) ) \] %
This applies equally to $N$ (using the completeness of $T$), and so retracing our steps, we find that for any $b' \in N$ there exist
types $q_1,\ldots,q_n$ with $m_{d+k}(b',x_i) \in q_i$ and $S(N) \models \R_t(q_1,\ldots,q_n)$. The $q_i$ are local types,
and show that $S(N) \models (\exists x_1)\cdots(\exists x_n) \Theta$, as promised.
\end{proof}
Regarding (LC), clearly $\mu_1,\mu_{>1}$ are incompatible in any local type space.
On the other hand, if $p_1,p_2 \in S(M)$, then either $\mu_{>1}$ holds or $m_1(x_i,b_i) \in p_i$ for some
$b_1,b_2 \in D(M)$ with $m_5(b_1,b_2)$. Then as in the proof of \lemref{OB2} we have $\mu_4(p_1,p_2)$.
Thus $S(M) \models (\forall \xi_1)(\forall \xi_2)(\mu_{>1}(\xi_1,\xi_2) \vee \mu_{4}(\xi_1,\xi_2))$, so (say by \lemref{embedinmodel})
${\CT} \modec (\forall \xi_1)(\forall \xi_2)(\mu_{>1}(\xi_1,\xi_2) \vee \mu_{4}(\xi_1,\xi_2))$.
\begin{lem}\label{embedinmodel} Let $M \models T$ be orbit-bounded, and let $A \models {\CT}$. Then there exists a homomorphism $f: A \to S(M)$. In particular any
e.c. model of ${\CT}$ is isomorphic to a substructure of $S(M)$.
${\CT}$ has the properties (ecB) and (LC).
\end{lem}
\begin{proof} We have to check that
the usual (locally compact) type space topology of $S_D(M)$ satisfies the hypotheses of \lemref{LClem1}. Hypothesis (2) there was already
proved in \lemref{OB2}.
The interpretation of $R=\R[\phi_1,\ldots,\phi_n; \alpha]$ is closed, since if $p=(p_1,\ldots,p_n) \notin R$,
then $\neg \phi_i(x_i,c) \in p_i$ for some $c \in \alpha(M)$; the set of all tuples $q=(q_1,\ldots,q_n)$ with $\neg \phi_i(x_i,c) \in q_i$
is an open neighborhood of $p$ in the product topology, disjoint from $R$.
As for compactness of $\mu_n(p)$, where $p \in S_D(M)$: if $q \in \mu_n(p)$, there exists a chain
$p=p_0,\cdots, p_n=q$ with $\mu_1(p_i,p_{i+1})$. Pick $a_i \in D(M)$ with $m_1(x,a_i) \in p_i$. Then
$m_5(a_i,a_{i+1})$, so $m_{1+5n}(x,a_0) \in q$. The types $q$ with $m_{1+5n}(x,a_0) \in q$ form
a compact set $W$ in the usual topology on the weak type space, which is entirely contained in the space of local types;
hence $W$ is compact. Now $\mu_n(p)$ is a subset of $W$ defined by a pp relation, hence is closed in $W$ and thus
it is compact too.
\end{proof}
\propref{prop2.1} and \propref{2.5} are thus valid for ${\CT}$. We call $\mathcal{U}$ the core space of $T$. To summarize:
Let $T$ be a local theory in the sort $D$, complete (with respect to local formulas),
and satisfying (DP) and (OB).
We have defined the {\em pattern language ${\L}$},
a local primitive universal theory in the sort $\mathbb{g} $, valid in $S_D(M)$
when $M \models T$.
\begin{defn}
A {\em core} for (type spaces of models of) $T$ is an ${\L}$-structure $J$ such that:
\begin{itemize}
\item For any orbit-bounded $M \models T$, there exists an ${\L}$- embedding $j: J \to S(M)$.
\item For any such $j$, there exists a homomorphism $r: S(M) \to J$
such that $r \circ j = Id_J$.
\end{itemize}
\end{defn}
\begin{thm} \label{summary1} Let $T$ be a complete local theory in the sort $D$.
Assume $T$ has an orbit-bounded model, where every $2$-ball is a union of finitely many $1$-balls. Then:
\begin{enumerate}
\item A core exists and is unique up to isomorphism. We denote it $\mathcal{J}=\co(T)$.
\item Reflection: let $\Theta$ be a
universal sentence $(\forall p_1,\ldots,p_k) R$, where $R$ is an arbitrary Boolean combination of local pp formulas in the pattern
language $\mathcal{L}$ . If $S(M) \models \Theta$ then $\mathcal{J} \models \Theta$ .
\item Quantifier elimination:
$\mathcal{L}$-atomic types in $\mathcal{J}$ are orbits of $Aut(\mathcal{J})$. Every pp formula is equivalent
(in any $S(M)$ as well as in $\mathcal{J}$)
to a certain conjunction of atomic formulas.
\item $Aut(\mathcal{J})$ has a natural locally compact topology: \[ W(\R_t;a,b) := \{g: \R_t (ga_1,\ldots,ga_n,b_1,\ldots,b_m) \}\]
is a basic closed set. The canonical Hausdorff quotient $\CG(T) : = Aut(\mathcal{J})/\fg
$ is a locally compact topological group.
\end{enumerate}
\end{thm}
We have just proved (1) and (4). The proofs of (2,3) are the same as in \cite{patterns} in the non-local case (see 3.12.)
\ssec{The local Lascar-Shelah relation} \label{LSh}
We continue to assume $T$ has an orbit-bounded model $M$, with bound $k$, and that doubling (DP) holds; and we concentrate
on a single sort $D$.
A {\em definable family of finite local partitions} is a pair of formulas $(B,E)$, with $B$ a non-empty 0-definable subset of $D^n$ for some $n \geq 1$, and
$E(x,y; z_1,\ldots,z_n) \subset D^2 \times B$, such that for some $f \in \Nn$, with $f$ strictly above the radius $k$
of $M/Aut(M)$ we have:
\begin{enumerate}
\item $E$ implies $m_{f}(x,z_1)$ and $m_{f}(y,z_1)$.
\item
$B$ implies $m_f(z_1,z_i)$ for each $i \leq n$.
\item For any $b=(b_1,\ldots,b_n) \in B$, $E_b(x,y):= E(x,y;b)$ defines an equivalence relation on $m_{f}(b_1)$, with at most $f$ classes.
\end{enumerate}
Two elements $a,a' \in D$ are {\em Lascar-Shelah neighbors}, denoted $a {$\cjRL{S}$\, } a'$, if for each $n \in \Nn$, every non-empty 0-definable set $B \subset D^n$, and every definable family $E \subset D^2 \times B$ of finite local partitions,
\[M \models (\exists z_1,\cdots,z_n) E(a,a',z_1,\ldots,z_n) .\]
\begin{lem} \label{shrem} Let $M \models T$, $a,a' \in D(M)$. Then $a $\cjRL{S}$\, a'$ iff for some elementary extension $M'$ of $M$,
and elementary submodel $N \prec M'$, $N \cong M$, we have $tp(a/N)=tp(a'/N)$. (Here by convention $M'$ is local, and $tp$ refers
to the quantifier-free type.)
Also, $a{$\cjRL{S}$\, } a'$ implies $m(a,a') \leq 2$.
\end{lem}
\begin{proof} Assume $tp(a/N)=tp(a'/N)$. We have $D(N) \neq \emptyset$; pick $d \in D(N)$. Since $M'$ is local, $M' \models m_l(a,d), m_l(a',d)$ for some $l$. By (DP),
$m_l(d)$ is a finite union of balls $m_1(c_i)$, $c_1,\ldots,c_r \in D(N)$. So $m_1(a,c)$ holds for some $c=c_j \in D(N)$.
Since $tp(a/N)=tp(a'/N)$, we have $m_1(a',c)$ too.
Now let $(B,E)$ be a definable family of finite local partitions.
As $B \neq \emptyset$, and using (OB),
there exists $b=(b_1,\ldots,b_n) \in B(N)$ with $m(c,b_1) \leq k$. Thus $m_{k+1} (a,b_1) $ and $m_{k+1}(a',b_1)$
hold. Since $E_b$ defines a finite partition of $m_{f}(b_1)$ and $f \geq k+1$, and using that $N$ is a model and
$tp(a/N)=tp(a'/N)$, we have $E(a,a',b)$. This proves that $a $\cjRL{S}$\, a'$.
Along the way we saw that $m(a,a') \leq 2$.
Conversely, assume $a $\cjRL{S}$\, a'$. Let $y_c: c \in M$ be variables, and let $\Delta$ be the diagram of $M$,
i.e. the set of $L$-local formulas in the variables $y_c$ that become true in $M$ under the assignment $y_c \mapsto c$.
Fix one of the variables $y_c$, and call it $y_1$.
Let $\Delta'$ be the set of all local formulas $\phi(a,y) \iff \phi(a',y)$, where the $y$ are some of the $y_c$, including $y_1$, and $\phi$ is a formula of $L$.
Since $a $\cjRL{S}$\, a'$, it is easy to see that $T \cup \Delta \cup \Delta' \cup \{m_{k+1}(a,y_1) ,m_{k+1}(a',y_1)\}$ is consistent. Hence this set of formulas is realized
in some weak model, and restricting to elements at finite distance from $a$ and $a'$, it is realized in some model.
This leads to $N \cong M$ with $N,M \prec M'$ for some $M' \succ M$.
\end{proof}
\ssec{The neighbor relation in the pattern language} \label{neighbor2}
We call two (local) types $p_1,p_2$ over a model $N$ {\em Lascar neighbors} if
there exist $a_i \models p_i$ such that $a_1,a_2$ are Lascar neighbors, i.e. if $p_1(x_1) \cup p_2(x_2) \cup {$\cjRL{S}$\, }(x_1,x_2)$
is consistent. Call them {\em close neighbors} if for every definable family of finite local partitions
$(E_b)_{b \in B} $, for some $b \in B(N)$ and $d' \in D(N)$, $x_i E_b d' \in p_i$. This implies that $p_1 \cup p_2 \models {$\cjRL{S}$\, }(x_1,x_2)$, and
in particular $p_1,p_2$ are neighbors.
We will denote the Lascar neighbor relation on $S(N)$ (and later on $\mathcal{J}$ and $Aut(\mathcal{J})$) by the same symbol ${$\cjRL{S}$\, }$. When the need arises to clarify, we will use a superscript, such as $$\cjRL{S}$\, ^{\mathcal{J}}$.
\begin{lem} \label{LSdef}
\begin{enumerate}
\item (${\L}$-$\bigwedge$-definability of ${$\cjRL{S}$\, }$.) There exist atomic formulas $\psi_m$ of ${\L}$ such that for any $N \models T$, and any $p,q \in S(N)$, $p{$\cjRL{S}$\, } q $ iff for each $m \in \Nn$, $S(N) \models \psi_m(p,q)$.
\item Let $\rho: S(N) \to S(N)$ be a retraction, i.e. a homomorphism with $\rho^2=\rho$. Let $p \in S(N)$.
Then $p,\rho(p)$ are close Lascar neighbors.
\end{enumerate}
\end{lem}
\begin{proof}
(1) If $p {$\cjRL{S}$\, } q$ then for every definable family $(B,E)$ of finite local partitions such that
\[T \models \neg (\exists x,x',u,y) (\phi(x,u) \wedge \phi'(x',u) \wedge \alpha(u) \wedge B(y) \wedge E(x,x',y)) \]
we have \[S(N) \models \R [\neg \phi, \neg \phi'; \alpha](p,q)\]
Conversely, if $(p,q) \notin $\cjRL{S}$\, $, then $p(x) \cup q(x')$ must be inconsistent with the following set of formulas:
\[ \{(\exists y) (B(y) \wedge E(x,x',y) ): (B,E) \} \]
where $(B,E)$ ranges over all definable families of finite local partitions. By compactness, for some tuple $c$ from $N$, $\phi(x,c) \in p, \phi'(x',c) \in q$, $\alpha \in tp_{N}(c)$, and for some finite set $(B_j,E_j)$ of definable families of finite local partitions,
\[T \models \neg (\exists x,x',u,y_1,\ldots,y_r) ( \phi(x,u) \wedge \phi'(x',u) \wedge \alpha(u) \wedge \bigwedge_j B_j(y_j) \wedge E_j (x,x',y_j) )\]
Let $y=(y_1,\ldots,y_r)$, $B(y) = \bigwedge_j B_j(y_j)$, $E(x,x',y) = \bigwedge_j E_j(x,x',y_j)$.
Then
\[T \models \neg (\exists x,x',u,y) ( \phi(x,u) \wedge \phi'(x',u) \wedge \alpha(u) \wedge B(y) \wedge E (x,x',y) )\]
yet $c$ shows that $S(N)$ does not omit $\phi,\phi', \alpha$. Thus
$\R [\neg \phi, \neg\phi'; \alpha](p,q)$ holds for each $\phi,\phi',\alpha$ as above iff $p{$\cjRL{S}$\, }q$.
(2) Let $q=\rho(p)$. Let $(E_b)_{b \in B} $ be a definable family of local finite partitions.
Consider the atomic formula $\psi(\xi_1,\xi_2) = \R[\neg x_1E_y u, \neg x_2 E_y u; B(y) \wedge D(u) ]$. Since a realization of $q$
must be $E_b$-equivalent to some $d' \in N$ (as there are only finitely many classes, all represented in $N$),
$S(N) \models \neg \psi(q,q)$. Now if we had $S(N) \models \psi(p,q)$, applying the
homomorphism $\rho$ would give $\psi(q,q)$, which we have just ruled out; so $S(N) \models \neg \psi(p,q)$.
This means that for some $b \in B(N)$ and $d' \in D(N)$, $x_1 E_b d' \in p$ and $x_2 E_b d' \in q$. As this holds for all definable families of finite partitions, $p,\rho(p)$ are close neighbors.
\end{proof}
\ssec{Automorphisms of models and automorphisms of the core} \label{LSh2}
In this section, $M,N$ will denote orbit-bounded (local) models of $T$.
We say that a subset $W$ of $Aut(M)$ is {\em bounded} if for some $a \in D(M)$, $Wa :=\{w(a): w \in W \}$ is
contained in a $d_m$-ball of finite radius. Equivalently, for {\em all} $a \in D(M)$, $Wa$ has finite radius. (\defref{groupdefs} \ref{bounded}.)
By \lemref{LSdef} (1), $$\cjRL{S}$\, $ is defined by a conjunction $\Psi$ of atomic sentences in any $S(N)$, $N$ an orbit-bounded model of $T$.
We define $$\cjRL{S}$\, $ by the same formula on $\mathcal{J}$. Since $\mathcal{J}$ embeds into $S(N)$, this does not depend on the choice of $\Psi$.
On $G=Aut(\mathcal{J})$, we let
\[$\cjRL{S}$\, = $\cjRL{S}$\, ^{Aut(\mathcal{J})} = \{g \in Aut(\mathcal{J}): (\forall a \in \mathcal{J})(a,g(a)) \in $\cjRL{S}$\, ^{\mathcal{J}} \} \]
Let $$\cjRL{S}$\, ^{\CG} $ be the image of $ $\cjRL{S}$\, ^{Aut(\mathcal{J})}$ in $\CG$.
$$\cjRL{S}$\, =$\cjRL{S}$\, ^{\CG}$ is a normal\footnote{I.e. conjugation-invariant. See \defref{groupdefs} for group-theoretic terminology.},
symmetric, compact subset of $\CG$. This follows from the same assertion in $Aut(\mathcal{J})$. In $Aut(\mathcal{J})$ we saw that
the $2$-ball is compact and contains $$\cjRL{S}$\, $ as a closed subset.
\begin{thm} \label{qh-basic} Assume (DP). Let $M$ be an orbit-bounded model of $T$, $\iota: \mathcal{J} \to S(M)$ an embedding,
and $\rho: S(M) \to \mathcal{J}$ a retraction.
Then there exists a canonical quasi-homomorphism ${\varphi}:Aut(M) \to (\CG(T): {$\cjRL{S}$\, })$. We have:
\begin{enumerate}
\item When $C \subset \CG$ is precompact, ${\varphi} ^{-1}(C)$ is bounded in $Aut(M)$.
\item When $W \subset Aut(M)$ is bounded, ${\varphi}(W)$ is precompact in $\CG$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $J=\iota(\mathcal{J})$, and identify $J$ with $\mathcal{J}$; so that $\iota$ is the inclusion
map $J \to S(M)$, and $\rho|J = Id_J$.
The definitions of ${$\cjRL{S}$\, }$ on $J$ and on $S(M)$ are compatible, via $\iota$ and $\rho$.
For $g \in Aut(M)$, define
\[ {\phi}(g) = \rho \circ g \circ \iota \in Aut(\mathcal{J}) \]
And let ${\varphi} = \pi \circ \phi$, where $\pi: Aut(\mathcal{J}) \to \CG$ is the quotient map. Since $\pi$
is a continuous and proper homomorphism, it suffices to prove the quasi-homomorphism property and (1,2) for $\phi$.
We have ${\phi}(Id_M)=Id_J$ since $\rho \circ \iota = Id_J$. We have to show, for $g,h \in Aut(M)$,
that ${\phi}(gh) \in {\phi}(g) {\phi}(h) {$\cjRL{S}$\, }$. For any $q \in S(M)$, we have $ \rho(q) {$\cjRL{S}$\, } q$ in $S(M)$ by \lemref{LSdef}(2).
Thus
\[ \rho h(q) {$\cjRL{S}$\, } h(q) \]
As $ \rho g: S(M) \to \mathcal{J}$ is an ${\L}$-homomorphism,
$\rho g \rho h(q) {$\cjRL{S}$\, } \rho g h(q)$; in particular, for $p \in \mathcal{J}$, $\rho g \rho h \iota (p) {$\cjRL{S}$\, } \rho g h \iota(p)$;
so
\[ \rho g \rho h \iota {$\cjRL{S}$\, }^{Aut(\mathcal{J})} \rho g h \iota \]
i.e. (using $\iota \rho = \rho$) ${\phi}(g){\phi}(h) {$\cjRL{S}$\, } {\phi}(gh)$. Since $End(J) = Aut(J)$, we can write ${\phi}(gh)={\phi}(g){\phi}(h) k$
for some $k \in Aut(J)$, and we have ${\phi}(g){\phi}(h) k {$\cjRL{S}$\, } {\phi}(g){\phi}(h)$. Applying the ${\L}$-endomorphism
$({\phi}(g){\phi}(h)) ^{-1}$ we see that $k \in {$\cjRL{S}$\, }^{Aut(J)}$.
This shows that ${\phi}:Aut(M) \to Aut(\mathcal{J}):{$\cjRL{S}$\, }$ is a quasi-homomorphism.
Composing with the Hausdorffization map $\pi:Aut(\mathcal{J}) \to \CG(T)$, we see that $\varphi: Aut(M) \to (\CG(T): {$\cjRL{S}$\, })$
is also a quasi-homomorphism.
(1) In view of \propref{2.5}, it suffices to prove now that if $C \subset G=Aut(\mathcal{J})$ is bounded with respect to the action on $(\mathcal{J},\mu)$, then $W:=\phi ^{-1}(C)$ is bounded with respect to the action on $D(M)$. Identifying $\mathcal{J}$
with a subset $J$ of $S(M)$, the retraction $\rho$ shows every point of $S(M)$ is at distance at most $1$
from a point of $J$; moreover
since $\rho$ changes distances by at most $1$, it is clear that $W$ is bounded with respect
to the action on $S(M)$. On the other hand, letting $i(a)$ denote the algebraic type $x=a$,
we have an $Aut(M)$-invariant embedding $i:M \to S(M)$, and again we saw
using (DP) that
any element of $S(M)$ is at bounded distance from some element of $i(M)$. So if $a \in M$, then
$W i(a)$ has finite radius with respect to $\mu$. But immediately following
\defref{mu}, we saw that for $c,d \in i(M)$, the $m$-distance $m(c,d)$ is at most $7 \mu(i(c),i(d))$. Thus $W a$
has finite radius with respect to $m$, and so $W$ is bounded with respect to the action on $D(M)$.
(2) Similarly, if $W \subset Aut(M)$ is bounded with respect to the action on $(M,m)$, then it is bounded with respect to the induced action on types (to check boundedness we may choose a ``center'' $a$ lying in $M$.) It follows as in (1) that the set $\rho \circ W$
has bounded action on $J$; so the image in $\CG$ is precompact by \propref{2.5}.
\end{proof}
\begin{rem} \label{error-improve} If we introduce new sorts $\mathbb{g}_n$ corresponding to $D^n$, and define $$\cjRL{S}$\, _!^{Aut(\mathcal{J})}$ to be the set of automorphisms $\gamma$ of $\mathcal{J}$ (on all these sorts) with
$ \gamma(a) $\cjRL{S}$\, a$ for any $a \in \mathbb{g}_n$, the proof still shows that ${\phi}:Aut(M) \to (Aut(\mathcal{J}): {$\cjRL{S}$\, _!})$ is a quasi-homomorphism.
As we will have no immediate use for this apparent improvement, we will stick with $$\cjRL{S}$\, $. \end{rem}
\begin{rem} \label{same} In the various constructions of \thmref{qh-basic},
we used the {\em fact} that the notion of locality is generated by a definable relation $m$, and we used the notion of finite distance
in defining the notion of model and of type space. But $m$ itself was not further used. Thus replacing $m$ by another relation
$m'$, where $m_1'(x,y)$ implies $m_k(x,y)$ for some $k$, and vice versa, will change nothing: the class of models,
the definition of the local type space, of the language $\L$, of $\mathcal{J}$, the notion of an embedding and a retraction between $\mathcal{J}$
and $S(M)$, and the maps $\phi$ and $\varphi$, all remain the same.
\end{rem}
\ssec{Choices} \label{choices}
The homomorphism of \thmref{qh-basic} depends not only on $M$, but also on $\iota$ and $\rho$.
Letting $\Sigma=S(M)$,
the relevant spaces are $Hom(\mathcal{J},\Sigma)$ (for choosing $\iota$), and for $\iota \in Hom(\mathcal{J},\Sigma)$, $J=\iota(\mathcal{J})$, the space of retractions $Hom_J(\Sigma,J) $. Their study is of considerable interest; in part, I think they are relevant to
Gromov's questions about symmetry breaking in Ramsey theory. Here we will only mention a
degree of canonicity that is nevertheless present, despite the choices. This will not be relevant to the main applications
to approximate subgroups and lattices.
As discussed in \cite{patterns}, the choice of $\iota$ amounts to expanding $M$ to a model of a certain universal theory $T_{\td}$
with JEP. $T_{\td}$ is no longer a local theory; it has a relation for each element of $\mathcal{J}$, and the action of $\Aut(\mathcal{J})$
passes from the models to the language itself.
\begin{rem} \label{choices2} $\mathcal{J}$ is itself independent of the choices, and the
quasi-homomorphism $Aut(M) \to \mathcal{J}$ (for a saturated $M$) can be made as unique as the saturated model itself:
in an appropriate model of set theory, we fix a cardinal $\kappa \geq |L_{\td}|$ with $2^\kappa=\kappa^+$, and let $\CM_{\td}$ be
an existentially closed model of $T_{\td}$, realizing every qf type over an existentially closed substructure of cardinality
$\leq \kappa$. It was shown in \cite{patterns} (see 3.17) that $T_{{\td}}$ is an irreducible universal theory, i.e. the joint embedding property holds; this continues to apply sortwise; so $\CM_{\td}$ is unique up to isomorphism;
and the reduct $\CM$ to $L$ is a $\kappa$-saturated model of $T$.
As for $\rho$, the arbitrariness is mitigated by \propref{qh2} (2); in the spirit
of Appendix \ref{categories}, it implies that $\phi$
gives a well-defined homomorphism $Aut(M) \to Aut_{\mathsf{ApprSp}} (\mathcal{J}_h,$\cjRL{S}$\, _2)$.
\end{rem}
Let us formulate a saturation hypothesis on a model $M$ of $T_{\td}$:
(SH) \begin{enumerate}
\item $M$ embeds all models of $T_{\td}$ of size $\leq |\mathcal{J}|$;
\item $M$ is homogeneous for $L$-elementary embeddings $M_1 \to M$ with $M_1 \models T$ with $|M_1| \leq |\mathcal{J}|$. (I.e. for fixed $M_1$, all such embeddings are $Aut(M)$-conjugate.) (In particular, $M$ is orbit-bounded.)
\end{enumerate}
Note (SH) holds if either $M$ is saturated as a model of $T_{\td}$, or else if it
is e.c. and saturated for qf types of $L_{\td}$.
\begin{prop} \label{qh2} In \thmref{qh-basic}, we further have: \begin{enumerate}
\item Take $M$ to satisfy (SH).
Then
$\rho$ can be chosen so that the induced quasi-homomorphism
$\phi:Aut(M) \to Aut(\mathcal{J})$ (and $\varphi:Aut(M) \to \CG(T) $ ) are surjective.
\item If $f,f': Aut(M) \to Aut(\mathcal{J})$ are obtained from two choices of retractions $\rho$,$\rho'$, then $f $\cjRL{S}$\, _2 f'$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) The $L_{\td}$-structure on $M$ can be viewed as a homomorphism $\iota: \mathcal{J} \to S(M)$.
Let $M_0$ be a small elementary submodel of $M$
as an $L_{\td}$-structure, and let $r_0: S(M) \to S(M_0)$ be
the restriction map. Then $\iota_0:=r_0 \circ \iota: \mathcal{J} \to S(M_0)$
admits, by the retraction property of $\mathcal{J}$, a one-sided inverse
$\rho_0: S(M_0) \to \mathcal{J}$, with $\rho_0 \iota_0=Id_{\mathcal{J}}$.
Let $\rho: S(M) \to \mathcal{J}$ be the composition $\rho_0 r_0 $.
Let $f(\alpha) = \rho \circ \alpha \circ \iota$.
\claim{} $f$ is surjective.
To see this, identify $\mathcal{J}$ with $J=\iota(\mathcal{J})$, so that $\iota$ becomes the inclusion map of $J$ in $S(M)$,
and $\iota_0 = r_0|J$.
Let $\beta \in Aut(J)$. We can now view $\beta$ also as a homomorphism $J \to S(M)$.
According to \cite{patterns} A.11(3), there exists $\sigma \in Aut(M)$ with $r_0 \circ \sigma |J =r_0 \circ \beta$.
So $f(\sigma) = \rho \sigma \iota
= \rho_0 r_0 \sigma \iota = \rho_0 r_0 \beta \iota =\rho_0 \iota_0 \beta \iota = \beta$. Since $\beta \in Aut(J)$
was arbitrary, $f$ is onto.
(2) Let $\alpha \in Aut(M)$.
We have $ \rho \circ \alpha {$\cjRL{S}$\, } \alpha$
and $\rho' \circ \alpha {$\cjRL{S}$\, } \alpha$, so
$\rho \circ \alpha $\cjRL{S}$\, _2 \rho' \circ \alpha$ and
$ \rho \circ \alpha \circ \iota $\cjRL{S}$\, _2 \rho' \circ \alpha \circ \iota$.
\end{proof}
\ssec{The pullback of $$\cjRL{S}$\, $.}
Define the binary relation $$\cjRL{S}$\, _k$ to be the $k$-fold composition of ${$\cjRL{S}$\, }$ with itself; it is defined by a pp formula (the existential quantifiers are local since
$$\cjRL{S}$\, (x,y)$ implies $m_2(x,y)$, \lemref{shrem}.) It is easy to check that the same expression
defines
the $k$-fold iteration of $$\cjRL{S}$\, $ on $\mathcal{J}$ (or see \cite{patterns} 3.12.)
For any $\mathcal{A},\mathcal{B} \modec {\CT}$, we define a relation $$\cjRL{S}$\, _k$ on
$Hom(\mathcal{A},\mathcal{B})$ by
\[h_1 $\cjRL{S}$\, _k h_2 \iff (\forall a \in \mathcal{A} ) (h_1(a),h_2(a)) \in $\cjRL{S}$\, _k^{\mathcal{B}} \]
For an element of $Aut(\mathcal{A})$, we write $$\cjRL{S}$\, _k(h)$ for $h $\cjRL{S}$\, _k 1$, where $1=Id_{\mathcal{A}}$.
For $h_1,h_2 \in Aut(\mathcal{A})$, we have $h_1 $\cjRL{S}$\, _k h_2$ iff $(h_1(a),h_2(a)) \in $\cjRL{S}$\, _k^{\mathcal{A}}$
for all $a$; by definability of $$\cjRL{S}$\, _k$, this is iff $(a,h_1 ^{-1} h_2(a)) \in $\cjRL{S}$\, _k^{\mathcal{A}}$ for all $a$ iff $$\cjRL{S}$\, _k(h_1 ^{-1} h_2)$.
It follows that $$\cjRL{S}$\, _k$ is a normal, symmetric subset of $Aut(\mathcal{A})$.
The above notation applies to $Aut(\mathcal{J})$ and to $Aut(S(M)) = Aut(M)$ (via its action on $S(M)$.)
Finally, we define $$\cjRL{S}$\, _k^{\CG(T)} $ to be the image of $$\cjRL{S}$\, _k^{Aut(J)}$ under the quotient homomorphism
$Aut(J) \to \CG(T)$.
Note that in the groups $Aut(J)$ or $\CG(T)$ we have $$\cjRL{S}$\, _k $\cjRL{S}$\, _{k'} \subset $\cjRL{S}$\, _{k+k'}$, but (presumably) equality need not hold in general.
\begin{prop}\label{qh3} In \thmref{qh-basic}, one can add:
\begin{enumerate}
\item $\varphi($\cjRL{S}$\, _n^{Aut(M)}) \subset $\cjRL{S}$\, _n^{\CG} $
\item If $\phi(g_1)\phi(g_2)^{-1} \in $\cjRL{S}$\, _n^{Aut(\mathcal{J})}$, then $g_1 g_2 ^{-1} \in $\cjRL{S}$\, _{n+4}$.
\item $\phi^{-1} $\cjRL{S}$\, _n^{Aut(\mathcal{J})} \subset $\cjRL{S}$\, _{n+3}^{Aut(M)}$
\item $ \varphi^{-1} $\cjRL{S}$\, _n^{\CG(T)} \subset $\cjRL{S}$\, _{n+5}^{Aut(M)}$.
\item If $\varphi(g_1)\varphi(g_2)^{-1} \in $\cjRL{S}$\, _n^{\CG}$, then $g_1 g_2 ^{-1} \in $\cjRL{S}$\, _{n+6}$.
\end{enumerate}
\end{prop}
\begin{proof}
Identify $\mathcal{J}$ with $J \leq S(M)$, and let $\rho: S(M) \to J$ be a retraction.
(1) If $g \in $\cjRL{S}$\, ^{S(M)}$, so that $g(p) $\cjRL{S}$\, p$ for all $p \in S(M)$, then for $p \in J$ we have
${\phi}(g)(p) = \rho g(p) $\cjRL{S}$\, \rho(p) = p$. Thus ${\phi}($\cjRL{S}$\, ) \subset $\cjRL{S}$\, $. Similarly, $\phi($\cjRL{S}$\, _n) \subset $\cjRL{S}$\, _n$.
(2) Let $g_1,g_2 \in Aut(M)$ and assume $\phi(g_1) $\cjRL{S}$\, _n \phi(g_2)$ in $Aut({\mathcal{J})}$. Let $p \in S(M)$.
By \lemref{LSdef}(2), $p $\cjRL{S}$\, \rho(p)$. Thus (using the fact that $g_1,g_2,\rho$ are homomorphisms),
\[ g_1(p) $\cjRL{S}$\, \rho g_1(p) $\cjRL{S}$\, \rho g_1 (\rho(p)) $\cjRL{S}$\, _n \rho g_2 (\rho(p)) $\cjRL{S}$\, \rho g_2(p) $\cjRL{S}$\, g_2(p) \]
So $g_1(p) $\cjRL{S}$\, _{n+4} g_2(p)$, and since $p$ was arbitrary, $g_1 g_2 ^{-1} \in $\cjRL{S}$\, _{n+4} $.
(3) The penultimate step in the displayed formula in (2) can be skipped when $g_2=Id$, since $\rho \rho p = \rho p$.
(4)
Now suppose $\varphi(g) \in $\cjRL{S}$\, _n^{\CG}$; i.e. $\phi(g) = \pi(h)$ for some $h \in $\cjRL{S}$\, _n^{Aut(J)}$.
So ${\phi}(g) h ^{-1} \in \ker \pi \subset $\cjRL{S}$\, _2$ (by \cite{patterns} 4.3(2).)
Hence ${\phi}(g) \in $\cjRL{S}$\, _{n+2}$, and by (3), $g \in $\cjRL{S}$\, _{(n+2)+3}$.
(5) Similarly, if $\varphi(g_1)\varphi(g_2)^{-1} \in $\cjRL{S}$\, _n^{\CG}$, then by (2) we have $g_1 g_2 ^{-1} \in $\cjRL{S}$\, _{(n+2)+4}$.
\end{proof}
The example below is adapted from the basic component of the example of \cite{clpz} of a non-compact Lascar group; local logic permits a simpler presentation.
It is also in essence the same as Example 3 of a quasimorphism in \cite{kotschick}, where one can find an illuminating discussion of its provenance in geometry.
\begin{example} \label{milnor}
Let $T=Th_{loc;\mu_1}(\Rr,<,S)$. Here $S(x)=x+1$,
\[\mu_1(x,y) \equiv x \leq y \leq S(x) \vee y \leq x \leq S(y),\]
and the subscript loc designates that we take the theory in local logic, i.e. using quantifiers with bounded range only.
The space $\Sigma$ of (local) $1$-types over $M=(\Rr,<,S)$ can be identified with $\{-,0,+\} \times \Rr$;
here $(0,\alpha)$ denotes the algebraic type $x=\alpha$; $(-,\alpha)$ denotes the type just below $\alpha$,
and dually $(+,\alpha)$. The two non-local 0-definable types, at $\infty$ and $-\infty$, are of course not part of the space $\Sigma$.
Let $J = \{-,0,+\} \times \Zz$, a ${\L}$-substructure of $\Sigma$. Pick a cut $\gamma \in (0,1]$
and define
a ${\L}$-retraction $r:\Sigma \to J$ as follows: for any $n \in \Zz$, $\delta \in [0,1)$ and $* \in \{-,0,+\}$,
let $r(*, n+\delta)=(*,n)$ if $\delta \leq \gamma$ and $r(*, n+\delta)=(*,n+1)$ if
$\gamma < \delta $. It is easy to see that no further collapse is possible, and $J$ is the universal e.c. model of $\CT$.
We have $Aut(J)=\Zz$.
$Aut(M)$ is the much bigger group of order preserving maps $\Rr \to \Rr$ commuting with $S$.
The quasimorphism on $Aut(M)$, associated with the above retraction, is the translation number. Restricted to the group of
translations $\Rr$ as a subgroup of $Aut(M)$, we obtain a choice of `nearest integer' map, which is of course
a quasi-homomorphism $\Rr \to (\Zz;\{-1,0,1\})$.
In connection with the discussion in \secref{choices}, we note also that in this example, we have
$Hom(\mathcal{J},\Sigma)=\Rr$ and $Hom_J(\Sigma,J) $ is the space
of cuts in $(0,1)$.
\end{example}
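To make the quasimorphism bound explicit, here is a quick check (our verification, not part of the example as stated) that the `nearest integer' map determined by a cut $\gamma \in (0,1]$ has defect in $\{-1,0,1\}$. Write $r_\gamma(n+\delta)=n$ for $\delta \in [0,\gamma]$ and $r_\gamma(n+\delta)=n+1$ for $\delta \in (\gamma,1)$, where $n \in \Zz$. Then $r_\gamma(x)=x+e(x)$ with $e(x) \in [-\gamma,\, 1-\gamma)$, so

```latex
\[
r_\gamma(x+y) - r_\gamma(x) - r_\gamma(y) \;=\; e(x+y) - e(x) - e(y) \;\in\; (\gamma - 2,\; 1+\gamma).
\]
```

The left-hand side is an integer lying in an open interval of length $3$, hence belongs to $\{-1,0,1\}$; so $r_\gamma$ is indeed a quasi-homomorphism $\Rr \to (\Zz;\{-1,0,1\})$.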
\begin{question} Is it possible to develop an analogous theory uniform in the doubling constant? Assume every $2$-ball
is uniformly a union of a $Y$-parameterized family of $1$-balls, where the definable set $Y$ is in some sense
`small' compared to $D$; at a minimum, there should exist types over $M$ that properly increase $M$ without increasing
$Y$, so that finite distance still implies distance $1$ from some point. According to a theorem of Shelah \cite{shelah-2}, such types exist in pseudo-finite theories provided $|{D}|$ is not polynomially bounded by $|Y|$. Thus in theory, this could
give a possible route to various questions of polynomial bounds. Developing a theory on this basis will not be easy, but even much stronger assumptions of relative smallness would be of interest. The theory of locally compact groups would need to be generalized, however, in any case, with the compact/noncompact dichotomy replaced by a model-theoretic analogue.\end{question}
\newpage
\section{Definable groups} \label{definablegroups}
\begin{thm} \label{grmain} Let $G$ be a group, generated by an approximate subgroup $\Lambda$. Then
there exists a second countable
locally compact topological group $\mathsf H$,
a compact normal subset $$\cjRL{S}$\, \subset \mathsf H$
and a quasi-homomorphism
\[f: G \to \mathsf H:$\cjRL{S}$\, \]
such that:
\begin{enumerate}
\item For $C \subset \mathsf H$ compact, $f ^{-1}(C)$ is contained in some $\Lambda^i$.
\item For each $i$ there exists a compact $C \subset \mathsf H$ with $\Lambda^i \subset f ^{-1}(C)$.
\item Specifically, $f ^{-1}($\cjRL{S}$\, ) \subset \Lambda^{12}$.
\item Let $X,X'$ be compact subsets of $\mathsf H$, with $$\cjRL{S}$\, ^2X \cap $\cjRL{S}$\, ^2 X' = \emptyset$. Then there exist disjoint definable subsets $D,D'$ of $\Lambda^k$ for some $k$ (with parameters in $\Lambda$) such that $f ^{-1}(X) \subset D$, and $f ^{-1}(X') \subset D'$.
\end{enumerate}
\end{thm}
The purely group-theoretic properties (1,2) will be the main engine of the group-theoretic applications, including \thmref{ag2} and \thmref{discrete1}; the second-countability will not be needed.
(4) should be compared to the familiar form of continuity or continuous-logic definability of a map into a compact space,
asserting that the preimages of two disjoint compact sets can be separated by a definable set. Here this is asserted for
two compacts at some discrete distance from each other.
We will begin with a proof of (1,2);
the proofs of (3,4) are postponed till we discuss definability.
\begin{proof} We define a local structure $M$. The universe $D(M)$ of $M$, as a set, is a copy of $G$.
The relations on $D$ include the binary relation $ y x ^{-1} \in \Lambda$, denoted $m_1$,
and the quaternary relation $yx ^{-1} = z w ^{-1} \in \Lambda$. The locality generator is $m_1$. There are no further
relations in the structure $M$; thus $G$ acts on $D$ from the right,
$(g,d) \mapsto d g ^{-1}$, by automorphisms of $M$. This gives a homomorphism $\delta: G \to Aut(M)$.
Since $G$ is transitive on $D$, it is clear that $M$ is orbit-bounded. The balls can be identified
as $m_1(a)=\Lambda a$, and $m_2(a) = \Lambda^2 a$. As $\Lambda$ is an approximate subgroup,
$\Lambda^2 \subset \cup_{i=1}^k \Lambda c_i$, so $m_2(a) = \cup_{i=1}^k m_1(c_ia)$; this gives the doubling property (DP).
Thus \thmref{summary1} applies: $Th(M)$ admits a core $\mathcal{J}$, and $Aut(\mathcal{J})$ has a natural locally compact topology;
the canonical Hausdorff quotient $\CG(T) : = Aut(\mathcal{J})/\fg_D$ is a locally compact topological group; we also
defined a canonical compact, normal, symmetric subset $$\cjRL{S}$\, \subset \CG(T)$. There exists an embedding
$\iota: \mathcal{J} \to S(M)$, and a retraction $\rho: S(M) \to \mathcal{J}$.
According to
\thmref{qh-basic},
there exists a quasi-homomorphism ${\varphi}:Aut(M) \to (\CG(T): {$\cjRL{S}$\, })$ such that
compact subsets of $\CG$ pull back to bounded subsets of $Aut(M)$, and bounded subsets of $Aut(M)$ push forward to
precompact subsets of $\CG$.
Let $f = \varphi \circ \delta$. If $C \subset \CG$ is compact, then $\delta(f ^{-1}(C))$ is bounded in $Aut(M)$, being a subset of $\varphi ^{-1}(C)$.
Picking $d_0 \in D$ to be the identity element of $G$, we have $\delta(f ^{-1}(C)) (d_0) = d_0 f ^{-1}(C) = f ^{-1}(C)$; so
$f^{-1} (C)$ is
bounded in $D(M)$. This means that $f ^{-1}(C) \subset \Lambda^i$ for some $i$. Conversely, $\delta(\Lambda^i)$ is by definition bounded, so $f(\Lambda^i) = \varphi(\delta(\Lambda^i)) $ is precompact, hence $C= cl(f(\Lambda^i))$ is compact and $\Lambda^i \subset f ^{-1}(C)$.
To obtain second countability, we modify the end of the construction slightly. Recall $\varphi$ was the composition
$\pi \circ \phi$, with $\pi: Aut(\mathcal{J}) \to \CG$ the quotient map, and $\phi: Aut(M) \to Aut(\mathcal{J})$ as constructed in \ref{qh-basic}.
Let $P$ be the complete atomic type of $\rho(d_0)$.
Let $\fg=\fg_{P}$ be the infinitesimal subgroup with respect to the action on $P$. By \propref{autlc}, $\fg_P$ is a closed, compact subgroup of $Aut(\mathcal{J})$, and $\CG(P) $
is a second countable, locally compact topological group. Let $\pi_P: Aut(\mathcal{J}) \to \CG(P)$ be the projection.
We let $\varphi_P = \pi_P \circ \phi$, and again $f=\varphi_P \circ \delta$. We still have the above properties, and now
according to \propref{autlc}, $\CG(P) $
is also second countable.
\end{proof}
We will also mention a slightly more general version. The proof is identical once we have a generalization of the local logic of
\secref{local}, where we do not assume that the locality relation is generated by a single relation; we will not repeat details but will describe the changes needed in the construction.
\begin{prop}\label{grmain2}
Let $G$ be a group, $\omega$ a family of
commensurable approximate subsets of $G$. Assume:
if $\Lambda,\Lambda' \in \omega$, then $\Lambda \cup \Lambda' \in \omega$, and $\Lambda^2 \in \omega$; and $\cup \omega = G$.
Then there exists a locally compact topological group $\mathsf H$,
a compact normal subset $$\cjRL{S}$\, \subset \mathsf H$
and a quasi-homomorphism
\[{f}: G \to \mathsf H:$\cjRL{S}$\, \]
such that:
\begin{enumerate}
\item For $C \subset \mathsf H$ compact, and $\Lambda \in \omega$, $f ^{-1}(C) $ is covered by a finite union of right translates of $\Lambda$.
\item For $\Lambda \in \omega$ there exists a compact $C \subset \mathsf H$ with $\Lambda \subset f ^{-1}(C)$.
\item If $\Lambda_0 \in \omega$ and $X \subset \Lambda \in \omega$, we have $f ^{-1} f(X) \subset X \Lambda_0^{12} $.
If $f(g_1) \in f(g_2) $\cjRL{S}$\, _k $ then $\delta(g_1g_2 ^{-1}) \in $\cjRL{S}$\, _{12+2k}$.
\item Let $X,Y$ be compact subsets of $\mathsf H$, with $$\cjRL{S}$\, ^2X \cap $\cjRL{S}$\, ^2 Y = \emptyset$. Then there exists a definable subset $D$ of some $\Lambda \in \omega$ (with parameters in $\Lambda$) such that $f ^{-1}(X) \subset D$, and $f ^{-1}(Y) \subset \Lambda \setminus D$.
\item If $\omega$ is countable, we may take $\mathsf H$ to be second countable.
\end{enumerate}
\end{prop}
\begin{proof}
Define a structure $M$ as in \thmref{grmain}, with universe $D=G$; for {\em each} $\Lambda \in \omega$, we posit
a binary relation $m_\Lambda: \ y x ^{-1} \in \Lambda$; and a quaternary relation $yx ^{-1} = z w ^{-1} \in \Lambda$. A subset of $D$ is viewed
as local if it is contained in a ball $m_\Lambda(a)=\{b: m_\Lambda(a,b)\}$; or equivalently, in the union of finitely many such balls.
As before, only existential quantifiers of the form $(\exists x)m_\Lambda(x,y) \wedge \cdots$ are allowed in the formation of pp formulas,
but now all $\Lambda \in \omega$ are allowed.
If we pick one $\Lambda_0 \in \omega$,
it remains true that any ball $m_\Lambda(a)$ is covered by finitely many balls $m_{\Lambda_0} (b_i)$. A weak model is local if
for any $a,b \in D(M)$, $m_\Lambda(a,b)$ holds for some $\Lambda \in \omega$. A type $p(x)$ is local if $m_\Lambda(x,y)$ is represented
in $p$ for some - or equivalently, by DP, for all - $\Lambda \in \omega$. The construction of $\mathcal{J},Aut(\mathcal{J}), \fg,f$ and proof of their properties is identical.
\end{proof}
\begin{rem} \label{4.3} Let $\omega'$ be any family of commensurable approximate subgroups of $G$,
such that if $\Lambda \in \omega'$ and $g \in G$ then $g ^{-1} \Lambda g$ is commensurable
with $\Lambda$. Then $\omega'$ may be completed to a class $\omega$ satisfying the hypotheses of \propref{grmain2},
whose union is $G$.
\end{rem}
\begin{proof}
If $\Lambda \in \omega$, then any two-sided translate $a \Lambda b$ is contained in a finite union of right translates of $\Lambda$:
we have
$a \Lambda a ^{-1} \subset \cup_i \Lambda c_i$
for some $c_1,\ldots,c_l$; and $a \Lambda b = (a \Lambda a ^{-1}) ab \subset \cup_i \Lambda c_iab$. In particular, each
$c_iab\Lambda$ is contained in a finite union $\cup_j \Lambda d_{ij}$, so $a \Lambda b \Lambda c \subset \cup _{i,j} \Lambda^2 d_{ij}c$,
and it follows that $a \Lambda b \Lambda c$ is contained in a finite union of right translates of $\Lambda$. From here it is
clear that closing $\omega$ under the operations $\Lambda,\Lambda' \mapsto \Lambda \cup \Lambda'$, $\Lambda \mapsto \Lambda^2 $,
and $\Lambda \mapsto (a \Lambda \cup \{1\} \cup \Lambda a)$
will not damage the hypotheses. \end{proof}
\propref{grmain2} applies, in particular, to the top commensurability class of a group $G$, namely the generic definable subsets.
Here the results of \cite{patterns} apply directly, and there is no need for locality.
This is the case where $\mathsf H$ is compact. Nevertheless, by \propref{grmain2} (3) (or for that matter \thmref{grmain} (3)), we obtain a nontrivial statement; see \ref{slowly2} below.
\ssec{Definability} We will now place our approximate subgroups in a definable setting more explicitly. A complete first-order theory $T$
is given with predicates $\Lambda_i$ ($i \in I$, where $I$ is a partially ordered index set). If $i\leq j$, a definable injective map
$\Lambda_i \to \Lambda_j$ is given, forming a direct limit system; the direct limit will be denoted $G$. We assume that for each $i,j \in I$ there is some $k \in I$ for which a definable map $\Lambda_i \times \Lambda_j \to \Lambda_k$ is given, inducing a group multiplication on $G$.
In this situation $G$ is called a piecewise definable group of the theory $T$; or a strict Ind-object in Grothendieck's category of Ind-definable sets.
We identify $\Lambda_i$ with a subset of $G$, and assume further that each $\Lambda_i$ is an approximate subgroup, and any two $\Lambda_i$ are commensurable in $G$.
We say that a subset of $G$ is
{\em locally $\bigwedge$-definable} if the intersection with each $\Lambda_i$ is $\bigwedge$-definable.
Given a theory $T$ with a definable commensurable family of approximate subgroups, as above,
we introduce a new local sort $D$, a torsor for $G$. For $M \models T$, $D(M)$ is a copy of $G(M)$. The new relations introduced
are binary relations $m_i(x,y): \ \ \equiv \ \ ( y x ^{-1} \in \Lambda_i)$ on $D$, and the relation $yx^{-1} = u \in \Lambda_i$ on $D^2 \times \Lambda_i$.
We define a locality structure on $D$, using the $m_i$. Restricting attention to
the induced (local) structure on $D$, we form $\mathcal{J}$ (in the corresponding sort $\mathbb{g}$.) Denote by $L^+$
the language obtained from $L$ by adding the local sort $D$ and these relations; $M^+$ the result
of adding the torsor $D$ to the structure as above; and $T^+:= Th(M^+)$.
Next recall the universal theory $T^+_{\td}$ of \cite{patterns} (see 3.17). A model $N^+_{\td}$ is nothing more than a model
$N^+$ of $T^+$, along with a homomorphism $\iota: \mathcal{J} \to S_D(N^+)$. The language $L^+_{\td}$ is obtained from $L^+$ by adding
a new relation $(d_px)\phi$
for each $p \in \mathcal{J}$ and each $\phi(x;y_1,\ldots,y_k) \in L^+$, to be interpreted as a subset of $D^k$, namely as
$\{(b_1,\ldots,b_k) \in D^k: \phi(x,b_1,\ldots,b_k) \in \iota(p)\}$. Fix some $p_0 \in \mathcal{J}$; the choice will not matter, up
to bi-interpretation. We have in $L^+_{\td}$ a
unary relation $m_k^{p_0}$ such that $m_k^{p_0}(b)$ holds iff $m_k(x,b) \in p_0$. We will only consider models of
$T^+_{\td}$ that omit the partial type: $\neg m_k^{p_0}(x,b), k=1,2,\cdots$. This amounts to treating the sort $D$ of
$T^+_{\td}$ (a single local sort in $T^+$) as the union of infinitely many `sorts' defined by $m_k^{p_0}$ (with inclusion maps among them.) Any model of
$T^+_{\td}$ (in this sense), restricted to an $L$-structure, is a (local) model of $T^+$.
\footnote{Since some model of $(T^+_{\td})^{\pm}$
is a model of $T$, if $T^+_{\td} \models \neg (\exists x,y,z)(\alpha(x,y) \wedge \gamma(y,z))$ with $\alpha \in L^+$,
and $T^+ \models (\forall y)(\exists x)(\alpha(x,y))$, then
$T^+_{\td} \models \neg (\exists y,z)( \gamma(y,z))$. Thus $T^+_{\td} \modec T^+$. }
In \thmref{grmain} and \propref{grmain2}, we used the action of $G$ on $D$ by right translation. This action depends on a choice
of an element $d_0$ of $D$, used to identify $G$ with $D$ via $g \mapsto gd_0$. The homomorphism $\delta:G \to Aut(M^+)$
is thus also dependent on this choice, and may be better denoted $\delta_{d_0}$. However when $X$ is a normal subset of $Aut(M^+)$, $\delta ^{-1}(X)$ is well-defined and does not depend on the choice of $d_0$. In particular,
$\delta ^{-1}($\cjRL{S}$\, _k)$
is well-defined. (Where $$\cjRL{S}$\, _k \subset Aut(M^+)$ is defined above \lemref{qh3}.) Let us compare $\delta ^{-1}($\cjRL{S}$\, _k)$
to $$\cjRL{S}$\, _k$, as computed directly in $T$ (i.e. in $G$ and not in $D$.)
Consider
\[P:= \{ab ^{-1}: a,b \in G(M), a $\cjRL{S}$\, b\} \]
Here $$\cjRL{S}$\, $ is computed in $\aleph_0$-saturated models of $T$.
While there is no multiplication on $D$, we can define $ab ^{-1} \in G$ for $a,b \in D$, so that $(ab ^{-1}) b=a$. Then
$P=\{a b ^{-1}: a,b \in D(M), a $\cjRL{S}$\, b\} $. (If $a,b \in D$ and $tp(a/N)=tp(b/N)$, then picking $d \in D(N)$ we have $ab^{-1} = (a d^{-1}) (b d ^{-1}) ^{-1}$
and $a d ^{-1} $\cjRL{S}$\, ^M b d ^{-1}$.) From this it follows easily that $\delta ^{-1} ($\cjRL{S}$\, _k^{Aut(M^+)}) \subset P^k$.
On the other hand, by locality and the doubling property, if $a,b \in D$ and $a $\cjRL{S}$\, ^D b$ then $ab^{-1} \in \Lambda^2$. (Since $tp(a/N)=tp(b/N)$
implies that $a,b$ lie in a common $1$-ball.)
By \propref{qh3} (2), we obtain:
\begin{prop}\label{qh3b} In \ref{grmain} and \ref{grmain2}, one can add:
If $f(g_1) f(g_2) ^{-1} \in $\cjRL{S}$\, _k^{\CG}$ then $\delta(g_1g_2 ^{-1}) \in $\cjRL{S}$\, _{k+6}$ so
$g_1 g_2 ^{-1} \in P^{k+6} \subset \Lambda^{2(k+6)}$. Likewise, if $f(g) \in $\cjRL{S}$\, _k^{\CG}$ then $g \in \Lambda^{2(k+5)}$.
\end{prop}
In particular, this proves \thmref{grmain} (3).
\begin{rem} All conjugates of $P$ are contained in $P^3$. Hence $P \subset Q \subset P^3$ for some normal
$\bigwedge$-definable set $Q$. \end{rem}
\begin{proof} Let $c \in {P}$, and $g \in G$. We have $c=c_1 ^{-1} c_2$ with $tp(c_1/N)=tp(c_2/N)$.
Take $N$ to be $\aleph_1$-saturated, and $N_0$ a countable elementary submodel; then $tp(g/N_0) = tp(m/N_0)$
for some $ m \in N$; so $l := m ^{-1} g \in {P}$.
Now $tp(m ^{-1} c_1 m/N ) = tp(m ^{-1} c_2 m/N)$, so
$m ^{-1} c m \in {P}$. Thus $g ^{-1} c g = l ^{-1} ( m ^{-1} c m) l \in P^3$. Now let $Q$ be the intersection of all conjugates of $P^3$.\end{proof}
\begin{prop} \label{grmain3} \label{4.8}
Let $G$ be a piecewise definable group in a theory $T$, limit of definable approximate subgroups.
Let $M^+_{\td} \models T^+_{\td}$, let $M^+$ be the reduct to $L^+$, and $\iota: \mathcal{J} \to S(M^+)$ the embedding corresponding
to $M^+_{\td}$. Further let $\rho: S(M^+) \to \mathcal{J}$ be a retraction, and let $d_0$ be an element of $D(M)$.
Let $\delta: G \to Aut(S(M)^+)$ be the map induced by right multiplication
if $(D,d_0) $ is identified with $(G,1)$. Finally let
$\phi: Aut(S(M)^+) \to Aut(\mathcal{J})$ be defined by $\gamma \mapsto \rho \circ \gamma \circ \iota$, let $\varphi$ be the composition of $\phi$
with $Aut(\mathcal{J}) \to \CG$, and let $f= \varphi \circ \delta: G(M) \to \CG$.
Then:
\begin{enumerate}
\item Let $W$ be a compact subset of $\CG$. Then there exists a $\bigwedge$-definable
$W^* \subset G$ in $L_{\td}^+(d_0)$ with
$f ^{-1}(W) \subset W^* \subset f ^{-1}(W $\cjRL{S}$\, )$. $W^*$ is defined by universal formulas of $L^+_{\td}(d_0)$.
\item
Let $W_1,W_2$ be compact subsets of $\CG$ with $$\cjRL{S}$\, W_1 \cap $\cjRL{S}$\, W_2 = \emptyset$. Then
there exist definable sets $V_1,V_2 \subset G$ in $L_{\td}^+(d_0)$ with $f ^{-1} (W_i) \subset V_i$ and
$V_1 \cap V_2= \emptyset$.
\end{enumerate}
\end{prop}
\begin{proof}
{\em During the present proof, the word {\em definable} and its attendants will refer to $L^+_{\td}(d_0)$.}
We show first that we may assume some saturation of $M^+_{\td}$; this will be needed for (2).
Let $((M^+_{\td})^*,d_0)$ be an elementary extension of $(M^+_{\td},d_0)$. Define $\rho^* = \rho \circ r$,
where $r: S(M^+) \to S(M)$ is the restriction map. We obtain $\iota^*,\delta^*,\phi^*, \varphi^*$; and it is easy to verify that
the various maps commute, and we have: $\phi \circ \delta = \phi^* \circ \delta^* |M$, and $f = f^* |M$. Thus
by passing to an elementary extension
we may assume $M^+_{\td}$ is $\aleph_0$-saturated, and $M^+$ is $\aleph_0$-homogeneous.
We begin with the proof of (1).
Identify $\mathcal{J}$ with $\iota(\mathcal{J})$. Let ${\mathsf{f}}$ denote the composition
\[ G(M) \to_{\delta} Aut(M^+) \to Aut(S_D(M^+)) \to_{\phi} Aut(\mathcal{J}) \]
so that $f = \pi \circ {\mathsf{f}}$, with $\pi: Aut(\mathcal{J}) \to \CG$.
We will write $\hat{g}$ for $\delta(g)$ or for the image of $g$ in $Aut(S_D(M^+)) $ or $Aut(\mathcal{J})$.
The homomorphism $Aut(\mathcal{J}) \to \CG$ is continuous and proper; so it suffices to show that for each closed subset $V$ of $Aut(\mathcal{J})$, there exists a $\bigwedge$-definable set $V^*$ with
\[ {\mathsf{f}} ^{-1}(V) \subset V^* \subset {\mathsf{f}} ^{-1}(V $\cjRL{S}$\, ) \]
Assuming the displayed statement holds for some closed sets $V_i$, it is also clearly true for $\cap_i V_i$, with
$(\cap_i V_i)^* = \cap_i V_i^*$. Hence it suffices to take $V$ to be a basic closed set in $Aut(\mathcal{J})$, i.e.
\[ V = \{\gamma \in Aut(\mathcal{J}): R(a,\gamma(a)) \} \]
where $R$ is a basic relation of $\L$ (ensuring that $\gamma$ lies in a bounded subset of $G$, when $a$ is fixed),
$a$ is a tuple of elements of $\mathcal{J}$, and $\gamma((a_j: j \in J)) = (\gamma(a_j): j \in J)$. While $R$ will make use of only finitely many of the $a_j$ and $\gamma(a_j)$, it will be convenient to allow $a$ to be an infinite tuple, indeed a tuple enumerating all of $\mathcal{J}$. Let $p$ be the atomic type of $a$ in $\mathcal{J}$. For two tuples $(a_j: j \in J), (b_j: j \in J)$, write $$\cjRL{S}$\, ^1(a,b)$ if $$\cjRL{S}$\, (a_j,b_j)$ holds for each $j$.
Define a subset $V^*$ of $G$ thus:
\[ V^* = \{ g \in G: S(M^+) \models (\exists \xi) (p(\xi) \wedge (\xi $\cjRL{S}$\, \hat{g}(a)) \wedge R(a,\xi) )\} \]
Let $g \in G(M)$, $\gamma={\mathsf{f}}(g)$ the image of $g$ in $Aut(\mathcal{J})$. We have $\gamma= (\rho \circ \hat{g}) | \mathcal{J}$.
If $\gamma \in V$, then letting $\xi = \gamma(a)$,
we see (using the definition of $V$ and \lemref{LSdef}(2)) that $g \in V^*$. Thus ${\mathsf{f}} ^{-1}(V) \subset V^* $.
In the opposite direction if $g \in V^*$, then
$\mathcal{J} \models (\exists \xi) (p(\xi) \wedge (\xi $\cjRL{S}$\, \rho\hat{g}(a)) \wedge R(a,\xi) ) $.
Let $b$ be a witness for $\xi$. Since $p(b)$ holds, $a \mapsto b$ defines an $\L$-homomorphism $\mathcal{J} \to \mathcal{J}$;
by \propref{prop2.1}, it is an automorphism of $\mathcal{J}$, call it $\gamma'$. Since $\gamma ' (a) = b$ and $b $\cjRL{S}$\, \gamma (a)$, we have $\gamma ' \gamma ^{-1}(b) $\cjRL{S}$\, b $ so $\gamma ' \gamma ^{-1} \in $\cjRL{S}$\, ^{Aut(\mathcal{J})}$.
But $R(a,b)$ shows that $\gamma' \in V$. So $\gamma \in V $\cjRL{S}$\, $.
It remains only to see that $V^*$ is locally $\bigwedge$-definable. Using local compactness of $S(M)$ we see that $V^*$ is the intersection of all approximations to it, obtained by replacing $$\cjRL{S}$\, $ by a definable approximation inside the matrix of
definition of $V^*$; note that these approximations can be chosen so as to still ensure that the set of $g$ they define is bounded.
Hence
$V^*$ is a conjunction of
pp relations concerning $(a, \hat{g}(a))$. By \thmref{summary1} (3), $V^*$ can also be written as an (infinite) conjunction of
atomic formulas $\R_t$; so it suffices to prove that each such formula is locally definable.
To simplify notation, consider the atomic formula $\R_t(p,\hat{g}(q))$,
$t=(\phi_1(x_1,u),\phi_2(x_2,u))$, with $\phi_1,\phi_2$ formulas of $T^+$. Here $p,q \in \mathcal{J}$ are fixed and $g$ is the variable. We have
$\R_t(p,\hat{g}(q))$ iff there is no $c$ with $\neg \phi_1(x,c) \in p$ and $\neg \phi_2(x,c) \in \hat{g}(q)$.
Now $\phi_2(x,c) \in \hat{g}(q)$ iff $\phi_2(x,\hat{g ^{-1}}(c)) \in q$.
Thus
\[\R_t(p,\delta(g) (q)) \iff \neg (\exists c)
(d_{p} x)\neg \phi_1(x,c) \wedge (d_q x)\neg \phi_2(x,\hat{g ^{-1}}(c)) \]
The right hand side is definable in $L^+_{\td}(d_0)$.
(2) Let $W_1^*,W_2^*$ be as in (1). Then $W_i^*$ is $\bigwedge$-definable (in particular contained in a definable
approximation to $G$), and
$W_1^* \cap W_2^* \subset f^{-1}(W_1 $\cjRL{S}$\, ) \cap f ^{-1}(W_2 $\cjRL{S}$\, ) = \emptyset$. If $W_1^*,W_2^*$ were not separated by
definable sets $V_i \subset G$ (in $L^+_{\td}(d_0)$), by compactness and since every type over $\emptyset$ is
realized, we would have $W_1^* \cap W_2^* \neq \emptyset$, a contradiction.
\end{proof}
\begin{rem} \label{beth}
The Beth-Craig-Robinson definability theory is valid for $\bigwedge$-definable relations. Here is the version we need:
Let $M$ be an $\aleph_0$-homogeneous $L$-structure. Let $M'$ be an expansion to an $L'$-structure, in which every type is realized. Let $X,Y$ be $\bigwedge$-definable subsets in $M'$. Assume $X \cap \sigma Y = \emptyset$ for every automorphism
$\sigma$ of $M$. Then $X,Y$ are separated by a 0-definable set of $M$. [Proof: Let $SX = \{tp(a; M'): a \in X\}$, and similarly $SY$.
Then $SX,SY$ are compact sets in the space $S'$ of $L'$-types (over $\emptyset$.) Let $\pi: S' \to S$ be the projection to the space of $L$-types. Then $\pi SX, \pi SY$ must also be compact sets, and they are disjoint since if $tp_L(a)=tp_L(b)$ then there exists
an $L$-automorphism $\sigma$ with $\sigma(a)=b$. Hence there exists a 0-definable set $D$ of $L$ separating $\pi SX, \pi S Y$ and thus also $X,Y$.]
It follows in particular that if $X$ is $Aut(M)$-invariant, then $X$ is $\bigwedge$-definable in $L$.
\end{rem}
Definability in the following corollary (and beyond) refers again to $L$, not to $L_{\td}$.
\begin{cor} \label{G00} In \propref{grmain3}, and hence also in \ref{grmain} and \ref{grmain2}, one can add that $f ^{-1}( cl(\langle $\cjRL{S}$\, \rangle))$ is locally $\bigwedge$-definable in $L$. In fact it is an intersection of 0-definable sets commensurable with $\Lambda$,
and the smallest subgroup with this property.
\end{cor}
\begin{proof} We may assume $M$ is $\beth_2^+$-saturated in $L_{\td}$, and $\aleph_0$-homogeneous in $L$.
We have $f= \varphi \circ \delta: G(M) \to \CG$; so that while $f$ is defined only on $G(M)$, it extends
to a quasi-homomorphism $\varphi: Aut(M) \to \CG$. Let $\sigma \in Aut(M)$, $g \in G(M)$, and let $Y$ be a normal subset of $\CG$.
We have
$\varphi(\sigma ^{-1} \delta(g) \sigma ) \in \varphi(\sigma) ^{-1} \varphi(\delta(g)) \varphi(\sigma) $\cjRL{S}$\, ^3 \subset \varphi(\delta(g)) $\cjRL{S}$\, ^3$;
So $\sigma (f ^{-1}(Y)) \subset f ^{-1}(Y $\cjRL{S}$\, ^3)$. In case $Y = Y $\cjRL{S}$\, $, this shows that $f ^{-1}(Y)$ is $Aut(M)$-invariant.
If in addition $Y$ is closed, by \propref{grmain3}, $f ^{-1}(Y)=Y^*$ is also locally $\bigwedge$-definable
in $L_{\td} $; i.e. $\Lambda\cap f ^{-1}(Y)$ is $\bigwedge$ - definable in $L_{\td}$, for any $\Lambda \in \omega$. By \remref{beth},
(using the $Aut(M)$-invariance), $f ^{-1}(Y)$ is then locally $\bigwedge$-definable in $L$.
In particular, this applies to $f ^{-1}(cl(\langle $\cjRL{S}$\, \rangle))$.
Since $\Lambda / f ^{-1}(cl(\langle $\cjRL{S}$\, \rangle))$ has cardinality $\leq 2^{\aleph_0}$, it is clear that any definable subset of $\Lambda$ containing $f ^{-1}(cl(\langle $\cjRL{S}$\, \rangle))$ is commensurable with $\Lambda$.
In the other direction, let $H$ be a locally $\bigwedge$-definable subgroup of $G$ of bounded index. Then $H$ induces
a locally $\bigwedge$-definable equivalence relation $E$ on the $G$-torsor $D$, namely $ y x ^{-1} \in H$; and thus an
equivalence relation (also denoted $E$) on the type space $S_D(M)$. It is easy to see that $E$ is an intersection of
$\L$-definable relations, so that $E$ is closed in the pp topology too; and we have $\rho(p) E \rho(q)$ iff $p E q$. The elements of $Aut(J)$ preserving each $E$-class thus form a closed subgroup of $Aut(J)$. Identifying
$\mathcal{J}$ with the image $J$ of $\rho$, we see that the image of $H$ in $Aut(\mathcal{J})$ contains a closed subgroup that in turn
contains $$\cjRL{S}$\, ^{Aut(\mathcal{J})}$; so the image contains $cl(\langle $\cjRL{S}$\, \rangle)$. This clearly remains true modulo $\fg$ too.
\end{proof}
See \cite{arturo} for a systematic treatment of piecewise hyper-definable groups. $G^{00}_0$ is defined there to
be the smallest locally $\bigwedge$-definable subgroup of $G$ of bounded index. In this notation, we have
$f ^{-1}(cl(\langle $\cjRL{S}$\, \rangle))= G^{00}_0$.
\ssec{ Proof of \thmref{grmain} (4)}
Unlike \ref{G00}, the conclusion here allows for parameters. Let $M_0$ be a countable (or rather, with
$|M_0| \leq |\Omega|$) elementary submodel
of the given structure. Let $M$ be a highly saturated model of $T^+_{\td}$, with $\aleph_0$-homogeneous reduct
to $L(M_0)$. We have the map $\varphi: Aut(M) \to \CG$. By \propref{qh3} (1), $\varphi($\cjRL{S}$\, ^{Aut(M)}) \subset $\cjRL{S}$\, ^{\CG}$.
Now $Aut(M/M_0) \subset $\cjRL{S}$\, ^{Aut(M)}$, by definition of $$\cjRL{S}$\, $.
Thus if $\sigma \in Aut(M/M_0)$ and $a \in \varphi ^{-1}(X) $ then $\sigma ^{-1} a \sigma \in \varphi ^{-1}(X $\cjRL{S}$\, ^2)$. Let
$X^*,Y^*$ be as in \propref{grmain3}. Then $X^* \subset \varphi ^{-1}(X $\cjRL{S}$\, )$ so $\sigma ^{-1} (X^*) \sigma \subset \varphi ^{-1}(X $\cjRL{S}$\, ^3)$.
Since $$\cjRL{S}$\, Y \cap $\cjRL{S}$\, ^3 X = \emptyset$,
we have $\sigma ^{-1} (X^*) \sigma \cap Y^* = \emptyset$. Thus by Beth, as formulated above, $X^*,Y^*$ are separated by $L(M_0)$-definable sets.
It follows that the pullback of closed, $\langle $\cjRL{S}$\, \rangle$-invariant sets is $\bigwedge$-definable with parameters. Thus:
\begin{cor} \label{G00b} The induced homomorphism $G/f ^{-1}( cl(\langle $\cjRL{S}$\, \rangle)) \to \mathsf H/ cl(\langle $\cjRL{S}$\, \rangle)$ is
continuous, i.e. the inverse image of any compact subset is $\bigwedge$-definable with parameters.
\end{cor}
\begin{cor}\label{slowly2} Let $G$ be a definable group in some theory $T$, and let
$\mathcal{J}$ be the core space of $T^+$ (on the torsor sort $D$).
If the image of $G$ in $Aut(\mathcal{J})$ is trivial (in particular, if $Aut(\mathcal{J}_D)$ is trivial),
then for any generic, symmetric definable $X \subset G$, we have $X^{12} =G$. \end{cor}
\begin{question} How far can the 12 in \thmref{grmain} be improved? \end{question}
\begin{question} Describe the image of $f$ in \thmref{grmain}. \end{question}
\newpage
\section{Approximate subgroups} \label{approxsg}
Let $G$ be a group.
For $\Lambda \subset G$, define $\Lambda ^{-1} = \{a ^{-1}: a \in \Lambda\}$, $\Lambda^2 = \{ab: a,b \in \Lambda\}$.
$\widetilde{\Lambda} = \cup_n \Lambda^n $ will denote the group generated by $\Lambda$. $\Lambda$ is
{\em symmetric} if $1 \in \Lambda = \Lambda ^{-1}$.
If $Y \subset G$ is covered by finitely many right translates
of $X$, $Y \subset \cup_{j=1}^n X b_j$, we will say that $X$ {\em commensurably covers} $Y$.
Two subsets $X,Y \subset G$ are {\em commensurable}, $X \sim Y$, if each is covered by finitely many translates
of the other: $X \subset \cup_{i=1}^m Y a_i$, $Y \subset \cup_{j=1}^n X b_j$.
The following definition is due to Tao:
\begin{defn}[\cite{tao}] Let $G$ be a group. A symmetric subset $\Lambda \subset G$ is an {\em approximate subgroup}
if $\Lambda \sim \Lambda^2$.
\end{defn}
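As a minimal illustrative instance (a standard example, ours rather than quoted from \cite{tao}): intervals of integers. Let $N \geq 1$ and, writing the group $(\Zz,+)$ additively so that $\Lambda^2 = \Lambda + \Lambda$,

```latex
\[
\Lambda := [-N,N] \cap \Zz, \qquad
\Lambda^2 = [-2N,2N] \cap \Zz = (\Lambda - N) \,\cup\, \Lambda \,\cup\, (\Lambda + N).
\]
```

Thus $\Lambda^2$ is covered by three translates of $\Lambda$, while trivially $\Lambda \subset \Lambda^2$; so $\Lambda \sim \Lambda^2$, and $\Lambda$ is an approximate subgroup of $\Zz$ without being a subgroup.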
The {\em commensurator} $\comm(\Lambda)$ of an approximate subgroup $\Lambda \subset G$ is the group of elements $g \in G$ with $g\Lambda g ^{-1}$ commensurable to $\Lambda$. The {\em commensurability class $\omega$ of $\Lambda$} is the
set of approximate subgroups commensurable to $\Lambda$. The
commensurator of $\Lambda$ depends only on the commensurability class, and can be denoted $\comm(\omega)$ or $\langle \omega \rangle_{max}$;
it is the pullback of the subgroup of $Aut(G)$ fixing $\omega$, under the natural map $G \to Inn(G) \leq Aut(G)$.
\begin{lem}\label{commlem} Let $\Lambda $ be an approximate subgroup of $G$, $\omega$ the commensurability class. Then
$\langle \omega \rangle_{max} = \cup \omega$; $\comm(\Lambda)$ is the union of all approximate subgroups commensurable to $\Lambda$.
\end{lem}
\begin{proof} Note first that $\Lambda \subset \comm(\Lambda)$: if $g \in \Lambda $ then $g ^{-1} \Lambda g \subset \Lambda^3$, so
$g ^{-1} \Lambda g$ is commensurably covered by $\Lambda$, and conjugating back by $g$ we obtain the converse.
If $\Lambda' \sim \Lambda$ then $\comm(\Lambda')=\comm(\Lambda)$, so $\Lambda' \subset \comm(\Lambda)$. Conversely,
if $g \in \comm(\Lambda)$, then $g\Lambda \cup \Lambda \cup \Lambda g^{-1}$ is an approximate subgroup in $\omega$;
see the proof of \remref{4.3}.
\end{proof}
Let $\mathsf H$ be a locally compact topological group. Let $U$ be a compact neighborhood of $1$. Then $U$ is an approximate subgroup.
The compact, symmetric neighborhoods of $1$ form a unique commensurability class of approximate subgroups; we call it the {\em fundamental commensurability class} of $\mathsf H$. If $f: G \to \mathsf H$ is a homomorphism,
the commensurability class in $G$ of the pullback of any compact symmetric neighborhood of the identity in $\mathsf H$ is
denoted $\chi(f)$, and again called the fundamental commensurability class of $f$.
If $H$ is a subgroup of $G$, any commensurability class $\omega'$ of approximate subgroups of $H$ extends to a commensurability class $\omega$ of approximate subgroups of $G$. We say in this case that $\omega$ {\em belongs to } $H$. (This terminology does not preclude belonging to still smaller subgroups; in fact if $\omega$ belongs to $H$, then it belongs to any finite index subgroup of $H$. In many cases of interest, there does exist a unique smallest commensurability class of subgroups to which $\omega$ belongs; see \propref{sensible}.)
\begin{lem}\label{commlem2}
Let $G \leq H$. If $X,Y \subset H$ are commensurable as subsets of $G$, they are commensurable in $H$. \end{lem}
\begin{proof} Say $X \subset \cup Ya_i$; then
either $Ya_i$ can be deleted, or $a_i \in Y ^{-1} X \subset H$. Thus $X$ is also contained in a union of $H$-translates of $Y$; and dually. \end{proof}
\ssec{Laminar commensurability classes}
A commensurability class $\omega=\chi(f)$ arising from a homomorphism $f$ on a subgroup of $G$ into a locally compact group
$\mathsf H$ will be said to be {\em {laminar}}. It is shown in \cite{nqf} that in this case, $\mathsf H$ can be taken to be a connected Lie group
with no compact normal subgroups; this Lie group is then uniquely determined by the class $\omega$.
This uses Yamabe's theorem, which guarantees that any locally compact group has an open subgroup
compactly isogenous to a Lie group. This subgroup however is neither canonical nor normal, and making use of it exacts a technical toll later on, before one regains a measure of normality.
In the opposite direction, it is sometimes desirable to extend the domain of $f$ as much as possible. It follows
from \lemref{commlem} that the domain of any $f$ with $\chi(f)=\omega$ is contained in the commensurator group $\comm(\omega)$;
and indeed it is possible to find $f$ with this domain.
\begin{lem}\label{commsmooth} If $\omega$ is a {laminar} commensurability class of approximate subgroups, there exists
a homomorphism $F$ with domain $\comm(\omega)$ into a locally compact group, with $\chi(F)=\omega$. \end{lem}
\begin{proof} This can be seen via \propref{grmain2}. More directly, view $\comm(\omega)$ as an ind-definable group,
where the inductive system is the family of all approximate subgroups in $\omega$; by \lemref{commlem},
their union is $\comm(\omega)$. Let $G^*$ be a saturated elementary extension, $\lambda^*=\lambda(G^*)$,
and
$\omega^* = \cup_{\lambda \in \omega} \lambda^*$. Let $I$ be the set of $\lambda_1 \in \omega$ such that there exist
$ \lambda_2, \lambda_3, \ldots \in \omega$ with $\lambda_{k+1}^2 \subset \lambda_k$, $\lambda_k ^{-1} = \lambda_k$. If $\lambda,\lambda' \in \omega$,
then we saw that $\lambda^2 \cap (\lambda')^2 \in \omega$. Hence if $\lambda_1,\lambda_1' \in I$ and $\lambda_k,\lambda'_k$
are as in the definition of $I$, since $\lambda_{k+1}^2 \cap (\lambda'_{k+1})^2 \in \omega$, we have
$\lambda_k \cap \lambda'_k \in \omega$ for each $k$, and so $\lambda_1 \cap \lambda_1' \in I$. It is clear that
$I$ is conjugation-invariant. Let $N$ be the
$\bigwedge$-definable subgroup of $\omega^*$, formed by the intersection of all $\lambda^*$, $\lambda \in I$. Then $N$ is a normal subgroup of $\omega^*$, $N \subset \lambda_1^*$ (for any choice of $\lambda_1 \in I$), and $\omega^*/N$ is locally
compact in the logic topology. Let $F$ be the natural map $\omega^* \to \omega^*/N$, restricted to $\omega$.
\end{proof}
For example, the approximate subgroup $\{a/2^n: n \in \Nn, a \in \Zz, -2^n \leq a \leq 2^n \}$ of $\Qq$ has commensurability class $\chi(f)$, where $f: \Zz[1/2] \to \Rr$ is the inclusion; and also $\chi(g)$, with $g: \Qq \to \Rr \times \Pi_{p \neq 2} \Qq_p$ the natural inclusion.
We note immediately a uniqueness statement for the Lie group explanation of a given commensurability class of approximate subgroups, with a given subgroup as domain.
If $N$ is a compact normal subgroup of $\mathsf H$, $\pi: \mathsf H \to \mathsf H/N$ the quotient map, then the composed map $\pi \circ f:G \to \mathsf H/N$
determines the same commensurability class of approximate subgroups as $f$.
This is however
the only ambiguity, i.e. $f: G \to \mathsf H$ is unique up to compact isogeny:
\begin{prop}\label{uniquelc} Let $h_i: G \to H_i$ be homomorphisms with dense image into locally compact groups, and assume
$\chi(h_1)=\chi(h_2)$. Then there exist compact normal subgroups $N_i$ of $H_i$, $\pi_i: H_i \to H_i/N_i$, and an isomorphism
$\phi:H_1/N_1 \to H_2/N_2$ , with $\pi_2 h_2 = \phi \circ \pi_1 h_1$. \end{prop}
\begin{proof} Let $F_0 \leq H_1 \times H_2$ be the image of $G$ under the map $(h_1,h_2): G \to H_1 \times H_2$.
Let $F$ be the closure of $F_0$. Then $F$ is a closed subgroup of $H_1 \times H_2$.
Let $\pi_i: F \to H_i$ be the projection, let $N_1 = \{x \in H_1: (x,1) \in F \}$, $N_2=\{y \in H_2: (1,y) \in F \}$.
\claim{1} $N_i$ is compact.
\begin{proof} Let $U_i$ be a precompact open neighborhood of $1$ in $H_i$. Then $h_2 ^{-1}(U_2)$ and $h_1 ^{-1}(U_1)$ both represent the class
$\chi(h_2)=\chi(h_1)$, and hence are commensurable; so $h_2 ^{-1}(U_2)$ is covered by finitely many translates of $h_1 ^{-1}(U_1)$; hence
$h_1 h_2 ^{-1}(U_2)$ is covered by finitely many translates of $U_1$, so it is precompact. Let $U_1'$ be a precompact open
neighborhood of $1$ in $H_1$ containing $h_1 h_2 ^{-1}(U_2)$. So $N_1 \subset \pi_1(F \cap (H_1 \times U_2)) \subset
\pi_1 cl(F_0 \cap (H_1 \times U_2)) \subset cl(h_1 h_2 ^{-1}(U_2)) $ and so $N_1$ is precompact. Being closed it is compact, and likewise $N_2$. \end{proof}
\claim{2} $F$ projects onto $H_1$ and onto $H_2$.
\begin{proof} Let $U$ be an open subset of $H_2$, with compact closure $\bU$. Since $N_1$ is compact, $\pi_2$ is proper,
so $\pi_2 ^{-1}(\bU)$ is compact. Thus $\pi_2(\pi_2 ^{-1}(\bU))$ is compact. By the density assumption,
$\pi_2(\pi_2 ^{-1}(U))$ is dense in $U$. Thus $\pi_2(\pi_2 ^{-1}(\bU))$ is closed, and contains a dense subset of $U$,
hence it contains $U$. It follows that $F$ projects onto $H_2$, and similarly onto $H_1$.
\end{proof}
Now $N_i \trianglelefteq H_i$, and $F$ induces a group isomorphism $\phi: H_1/N_1 \to H_2/N_2$.
By the same argument as in Claim 1, $\phi (X)$ is compact iff $X$ is compact.
Let $V_i$ be a compact neighborhood of $1$ in $H_i$, and $W_1 = V_1 \cup \phi ^{-1}(V_2)$. Then $W_1$ is compact, as is $W_2 = \phi(W_1) = \phi(V_1) \cup V_2$.
Thus $\phi$ restricted to $W_1$ is a homeomorphism near $1$, which suffices.
\end{proof}
Given a commensurability class $\theta$ of approximate subgroups, if it arises from a homomorphism $f: G \to \mathsf H$, there is always a {\em universal} one with class $\theta$, mapping with compact kernel onto any other.
Often, there is also a minimal one, namely $f: G \to \mathsf H$ such that $\mathsf H$ has no compact normal subgroups.
Both are unique up to a unique isomorphism under $G$.
\ssec{Intersections} \label{icc}
Let $\omega$ be a
commensurability class of approximate subgroups of a group $G$, and let $H$ be a subgroup of $G$.
If $X ,Y \in \omega$, then $Y \cap H$ is commensurably covered by $XX^{-1} \cap H$. To see this, say $Y \subset \cup_{i=1}^m Xc_i$. Then $Y \cap H \subset \cup_{i=1}^m (Xc_i \cap H)$. Now if $Xc_i \cap H \neq \emptyset$,
let $h_i \in Xc_i \cap H$. Then $c_i \in X^{-1} h_i $ so $Xc_i \subset X X ^{-1} h_i$. Hence the $H$-commensurability class of
$XX^{-1} \cap H$ does not depend on the choice of $X \in \omega$. In particular,
as $(XX^{-1} \cap H)^2 \subset (XX^{-1})^2 \cap H $ and $ (XX^{-1})^2 \in \omega$, $XX^{-1} \cap H$ is an approximate subgroup of $H$. We denote this class by $\omega \cap H$.
If $H$ is normal,
we also have the class $\omega /H$, represented by $X/H$ for any $X \in \omega$.
Then $\omega$ belongs to $H$ iff $\omega \cap H \sim \omega$. When $H$ is normal, this is equivalent to: $\omega/H \sim \{1\}/H$.
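By way of illustration, here is a routine computation with the dyadic example above (not needed in the sequel):

```latex
Let $G = \Qq$, $X = \{a/2^n : n \in \Nn,\ a \in \Zz,\ -2^n \leq a \leq 2^n\}$, and
let $\omega$ be its commensurability class. As $X = X ^{-1}$, written additively we have
$X X ^{-1} = X + X \subset [-2,2] \cap \Zz[1/2]$.
For $H = \Zz$: $(X+X) \cap \Zz = \{-2,-1,0,1,2\}$ is finite, so $\omega \cap \Zz$ is
the trivial class, and $\omega$ does not belong to $\Zz$.
For $H = \Zz[1/2]$: $X \subset \Zz[1/2]$, so $(X+X) \cap \Zz[1/2] = X + X \sim X$;
thus $\omega \cap \Zz[1/2] \sim \omega$, and $\omega$ belongs to $\Zz[1/2]$, in
accordance with $\omega = \chi(f)$ for the inclusion $f : \Zz[1/2] \to \Rr$.
```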
One can similarly define an intersection of commensurability classes: if $X,Y$ are approximate subgroups of $G$,
then so is $XX ^{-1} \cap Y Y ^{-1}$, and the commensurability class of $XX ^{-1} \cap Y Y ^{-1}$ depends only on the classes of $X$ and of $Y$. In case $X,Y$ are commensurable, $X X ^{-1} \cap Y Y ^{-1}$ is commensurable to both. This can be seen directly using a Ruzsa-style argument, starting from a maximal subset $\{a_1,\ldots,a_n\}$ of $X$ such that the sets $a_iY$ are disjoint. A more conceptual derivation will soon be available: we will see below (\thmref{grmain}) that any commensurability class of approximate subgroups can be written as $\chi(f)$, for some approximate homomorphism into a locally compact group; then it is easy to check that $\chi(f \times g) = \chi(f) \cap \chi(g)$, and $\chi(f \times f) = \chi(f)$.
\ssec{The commensurability class of pullbacks of compact neighborhoods}
\label{classofmorphism}
More generally, if $f: G \to \mathsf H:K$ is a quasi-homomorphism, with $\mathsf H$ locally compact and
$K=K ^{-1}$ compact, we consider pullbacks $Y=f ^{-1}(W )$ of compact neighborhoods $W$ of $K$ in $\mathsf H$.
We will see below that
$Y Y ^{-1}$ is an approximate subgroup, whose
commensurability class is uniquely determined. We continue to use the notation $\chi(f)$ for this class. We will write $X \sim \chi(f)$ to mean that the commensurability class of $X$ equals $\chi(f)$.
Uniqueness again holds but now in the category of compact correspondences.
\begin{prop}\label{converse} Let $f: \Lambda \to \mathsf H:K$ be a quasi-homomorphism into a locally compact topological group $\mathsf H$, $1 \in K=K ^{-1} \subset \mathsf H$ compact.
Let $W$ be a compact subset of $\mathsf H$ with nonempty interior.
Let $X : = f ^{-1}(W{K})$.
Then $XX ^{-1}$ is an approximate subgroup, contained in $f ^{-1}(W W ^{-1} K^4)$ , and
right-commensurable with $X$. It is also commensurable with $f ^{-1} (Z)$ for any
precompact $Z \subset \mathsf H$ containing $W W ^{-1} K^4$ .
\end{prop}
\begin{proof} \begin{itemize}
\item $f(y) ^{-1} \in f(y ^{-1}) K $, and
$f(x y ^{-1} ) \in f(x) f(y) ^{-1} K^{-1} K$.
[Proof: We have $1 = f(1) = f(y ^{-1} y) \in f(y ^{-1}) f(y) K$.
So $f(y) \in f(y ^{-1})^{-1} K ^{-1}$ or $f(y ^{-1} ) \in f(y) ^{-1} K ^{-1}$; and
$f(y) ^{-1} \in f(y ^{-1}) K.$
Thus $f(x y ^{-1} ) \in f(x) f(y ^{-1}) K \subset f(x) f(y) ^{-1} K^{-1} K$.
]
\item For $U \subset \mathsf H$, $f ^{-1} ( Ua) \subset f^{-1}(UU^{-1} {K}) c$ for some $c$. [Proof:
If $f ^{-1} (Ua) = \emptyset$, there is nothing to prove. Otherwise, let $c$ be such that $f(c) \in Ua$.
If $f(x) \in Ua$, then $f(x c ^{-1}) \in (Ua) (Ua) ^{-1} {K} = UU ^{-1} {K}$; so $xc^{-1} \in f^{-1}(UU ^{-1} {K})$ and
$x \in f^{-1}(UU ^{-1} {K}) c$. ]
\item
Let $Z \subset \mathsf H$ be compact and let $U$ be a neighborhood of $1$ in $\mathsf H$. Then
finitely many right cosets of $f ^{-1} (U {K})$ cover $f ^{-1} (Z)$.
[Proof: Let $V$ be a neighborhood of $1$, with
$VV^{-1} {K} \subset U {K}$. As $Z$ is compact,
it is covered by finitely many cosets $Va$;
so it suffices to show that $f ^{-1} (Va)$ is covered
by a right coset of $f ^{-1} (V V ^{-1} {K})$; this was the previous clause. ]
\item
Let $Y = X X ^{-1}$.
We have seen that for any compact $Z \subset \mathsf H$,
$f ^{-1}(Z) $ is covered by finitely many right translates of $X$.
Now $Y$ is symmetric ($Y=Y ^{-1}$)
and contains $1$.
As $f(Y)f(Y)^{-1} K$ is precompact, and $YY$ is contained
in $f ^{-1} (f(Y)f(Y)^{-1} K)$, it is covered by finitely many cosets of $X$, and a fortiori also of $Y$ (making $Y$ an approximate subgroup).
\end{itemize}
\end{proof}
\begin{rem} \begin{enumerate}
\item \propref{converse} did not assume normality of $K$.
\item
It is worth noting that although we began with an approximate homomorphism into a locally compact group in the general sense of a {\em compact} error set $K$, the associated commensurability class in $\Lambda$ consists of approximate subgroups in
Tao's original sense of a {\em finite} set of translates.
\end{enumerate}\end{rem}
We have already proved the principal structure theorem for approximate subgroups, \thmref{grmain}; we will now refine it
(at the cost of losing some canonicity: $\widetilde{\Lam}$ is not a canonical choice, though once it is chosen the rest can be made canonical).
See \defref{groupdefs} (6) for the definition of rigidity, and \secref{classofmorphism} for $\chi$.
\begin{thm} \label{ag2} Let $\omega$ be a commensurability class of approximate subgroups of a group $G$. Then there exists a subgroup $\widetilde{\Lam} \leq G$, a connected Lie group $\mathsf{L}$, a rigid abelian subgroup $A \trianglelefteq \mathsf{L}$, a normal compact subset $K$ of $A$,
and a quasi-homomorphism
$f: \widetilde{\Lam} \to \mathsf{L}:K$
with $\omega= \chi(f)$.
\end{thm}
Some supplements: \begin{enumerate}[label=(\roman*)]
\item One can demand that $A \cong \Rr^N$
for some $N \in \Nn$, and that $\langle f(\widetilde{\Lam}) \rangle A$ be dense in $\mathsf{L}$.
\item One can add that $\mathsf{L}$ has no compact normal subgroups.
\item In place of connectedness of $\mathsf{L}$, and as an alternative to (i), one can ask that $\langle f(\widetilde{\Lam}) \rangle $ is dense in $\mathsf{L}$,
and $A \cong \Rr^n \times \Zz^k$ for some $n,k$.
\item The induced homomorphism $\widetilde{\Lam} \to \mathsf{L}/A$ is definable in the sense of continuous logic.
\end{enumerate}
\medskip
Following the proof, we will give two ways to decompose the quasi-homomorphism $f$ into better studied kinds.
In \thmref{ag2mt}, we will describe the situation in terms of a $\bigwedge$-definable group $H$ and $n$ definable quasimorphisms
on $H$, i.e. a definable quasi-homomorphism on $H$ into $\Rr^n$. In \propref{ag2gt}, we will instead decompose
$f$ in terms of a rigid group extension $\widehat{\Lambda}$ of $\widetilde{\Lam}$, and a group homomorphism $\hat{f}: \widehat{\Lambda} \to \mathsf{L}$.
\ssec{Refining the target.} \label{mprov}
The proof of \thmref{ag2} will start with a quasi-homomorphism ${f}: \G \to \mathsf H:K $ as in \thmref{grmain},
and gradually seek to simplify it, without changing the commensurability class $\chi({f})$ associated with ${f}$.
Let us review in advance a few of the allowed modifications, along with an explanation of why the pullback class $\chi(f)$ does not change. Some of the steps parallel \cite{FK}, where Fujiwara and Kapovich proved that any quasi-homomorphism with discrete target can be perturbed to one with central error. Here the best we can aim for is a rigid error set.
\begin{enumerate}
\item Replace $\mathsf H$ by an open subgroup $\mathsf H_1$ containing $K$; let ${\G}_1= {f} ^{-1}(\mathsf H_1)$, ${f}_1 = {f}|{\G}_1$; $K_1 = K \cap \mathsf H_1$.
Then ${\G}_1$ is a subgroup (if $x,y \in {\G}_1$ then ${f}(xy) \in {f}(x){f}(y) K \subset \mathsf H_1$, so $xy \in {\G}_1$).
Since a compact open neighborhood of the identity in $\mathsf H_1$ is also one in $\mathsf H$, we have $\chi({f}|{\G}_1) = \chi({f})$.
\item
Factor out a compact normal subgroup $N$ of $\mathsf H$: let ${f'}: \G \to \mathsf H/N: K/N$ be the composition of ${f}$
with the quotient map $\pi: \mathsf H \to \mathsf H/N$. We have $({f'}) ^{-1} (YN/N) = {f} ^{-1} (Y N) $ for any
compact neighborhood $Y$ of $K$ in $\mathsf H$, so $\chi({f})=\chi({f'})$.
\item Let $\mathsf H_2$ be a closed subgroup of $\mathsf H$, with $\mathsf H_2 K^m = \mathsf H$. Let $K_2 = \mathsf H_2 \cap K^{3m+1}$.
For any $x \in {\G}$ pick $c(x) \in K^m$ with $f(x)c(x) \in \mathsf H_2$, $c(1)=1$, and define $g:{\G} \to \mathsf H_2$ by $g(x)=f(x)c(x)$.
For any $x,y \in {\G}$
we have, for some $k=k(x,y) \in K$:
\[g(xy) = f(xy)c(xy) = f(x)f(y) k c(xy) = g(x) c(x) ^{-1} g(y) c(y) ^{-1} k c(xy) = \]
\[=g(x) g(y) c(x)^{-g(y)} c(y) ^{-1} k c(xy) \in g(x)g(y) K^{3m+1} \] so $g(xy) \in g(x)g(y) K_2$.
For a compact neighborhood $U$ of $1$ in $\mathsf H$, $UK^m \cap \mathsf H_2$ is a compact neighborhood of $1$ in $\mathsf H_2$,
and $f ^{-1}(U) \subset g ^{-1}(UK^m \cap \mathsf H_2) \subset f ^{-1} (U K^{2m})$. Thus $\chi(f)=\chi(g)$.
\item Let $H_0$ be an open subgroup of $\mathsf H$ of finite index. Let $\mathsf H_3 = H_0 \langle K \rangle$. Then $\mathsf H_3 = H_0 K^m$
for some $m$. Let ${\G}_1 = f ^{-1}(\mathsf H_3)$.
Then ${\G}_1$ is a finite index subgroup of ${\G}$, and by (1) $\chi(f) = \chi(f|{\G}_1)$.
By the previous step (3), we can find $g: {\G}_1 \to H_0$ with $\chi(f) = \chi(g)$.
\end{enumerate}
\begin{rem} To a model theorist, the apparent arbitrariness in the choice of $g$ in step (3) is disturbing.
It will only be used via (4), so in a finite index setting, where only finitely many parameters need be chosen. Note also
in general
that if $\langle K \rangle$ is centerless, the correction function $c$, and hence $g$, are uniquely defined in the construction.
The general case could be made definable using a relational presentation of the notion of a quasi-homomorphism,
namely the category $\mathsf{CoGr}$ of Appendix \ref{categories},
in effect allowing $f(x)$ to be a compact subset rather than an element;
and taking $c(x) = f(x)^{-1} \mathsf H_2 \cap K^m$.
\end{rem}
\begin{proof}[Proof of \thmref{ag2}]
Let $\Lambda_0$ be an approximate subgroup in $\omega$, and $\widetilde{\Lam} = \langle \Lambda_0 \rangle$.
By \thmref{grmain},
there exists $f: \widetilde{\Lam} \to \mathsf H:K$ with $\mathsf H$ locally compact,
$K$ normal, symmetric, compact, with $\Lambda_0 \sim \chi(f)$.
By \thmref{lc1cc}, there exist closed normal subgroups $N \leq A \leq H_1 \leq \mathsf H$ with $\mathsf H/H_1$ finite,
$N$ compact, with $A/N$ basic abelian and rigid in $H_1/N$, such that $\langle K \rangle \cap H_1 \leq A$.
By \ref{mprov} (4), we may pass from $\mathsf H$ to the finite index subgroup $H_1$ (replacing $K$ by $K^m \cap H_1$ for appropriate $m$); so we may assume $K \subset A$, $N \leq A \leq \mathsf H$. Since $N$ is compact, by \ref{mprov} (2) we may factor it out.
Now $A$ is basic abelian and rigid in $\mathsf H$, and $K \subset A$.
We have $A=DV$ with $D \cong \Zz^l$ central in $\mathsf H$, and $V$ a rigid vector group. Let $D_\Rr = D {\otimes} \Rr$, so that
$(D,D_\Rr) \cong (\Zz^l,\Rr^l)$. Let $\bar{\mathsf H} = \mathsf H \times_D D_\Rr := (\mathsf H \times D_\Rr) / D$, with $D$ embedded into
$\mathsf H \times D_\Rr$ by $d \mapsto (d,-d)$. This diagonal image of $D$ is a closed, discrete subgroup of $\mathsf H \times D_\Rr$
and so the quotient is still locally compact. Now $\mathsf H$ embeds into $\bar{\mathsf H} $ as a closed subgroup. Let $\bar{A}=A \times_D D_\Rr$.
If $U$ is a compact neighborhood of $1$ in $\bar{\mathsf H} $, then $U \cap \mathsf H$ is a compact neighborhood of $1$ in $\mathsf H$.
Hence we may replace $\mathsf H,A,K$ by $\bar{\mathsf H} ,\bar{A},K$; we have the same situation, and in addition $\bar{A} \cong \Rr^N$ for some $N$. We are of course not claiming any density at this point.
By Yamabe's theorem \cite{yamabe}, $\mathsf H$ has an open subgroup $H'$ and a compact subgroup $N'$, normal in $H'$,
with $H'/N'$ a
Lie group.
As $A \cong \Rr^N$ is connected and $H'$ is open, $A \subset H'$. By
\ref{mprov} (1), we may pass from $\mathsf H$ to $H'$; so we may assume $\mathsf H=H'$.
Factoring out the compact normal subgroup $N'$ using \ref{mprov} (2)
again, we may assume $\mathsf H$ is a Lie group.
Replacing $\mathsf H$ by the closed subgroup
$\mathsf{L}=cl(f(\widetilde{\Lam})A)$, we may assume $f(\widetilde{\Lam}) A $ is dense in $\mathsf{L}$. Now $\mathsf{L}^0$ is an open subgroup of $\mathsf{L}$;
$\mathsf{L}^0 \cap f(\widetilde{\Lam})A$ is thus still dense in $\mathsf{L}^0$. Since $A$ is connected, $A \subset \mathsf{L}^0$; by \ref{mprov} (1), we may replace $\mathsf{L}$ by $\mathsf{L}^0$, so that we may assume $\mathsf{L}$ is connected. Finally, by \ref{mprov}(2), we may, without changing $\chi(f)$, factor out the maximal normal compact subgroup of $\mathsf{L}$, to obtain that $\mathsf{L}$ has no nontrivial normal compact subgroups.
Now the continuity (iv) comes from \corref{G00b}; the various modifications made
to the quasi-homomorphism of \thmref{grmain}, to wit restricting the domain to a subgroup, factoring out a compact normal subgroup of the image, and modifying by a map into $A$, will not affect this.
\end{proof}
\begin{rem} \label{ag2r}
If desired, say in order to connect with cohomological results phrased for unitary modules,
we can make the error set contained in a complex (unitary) rather than real rigid space.
Take first the semi-direct product of $\mathsf{L}$ with the complexification $A_{\Cc}$ (where $\mathsf{L}$ acts on $A$ by conjugation, and
on $A_{\Cc}$ by the derived action), and then factor out the image of the homomorphism $A \to \mathsf{L} \times A_{\Cc}$,
$x \mapsto (-x,x)$, which becomes a normal subgroup. The three variants $\mathsf{L},\mathsf{L}',\mathsf{L}''$ do not detract from the canonicity of the construction, but give three canonical forms that may be convenient for different purposes.
\end{rem}
\ssec{Model-theoretic formulation}
Let us recall some definitions. As usual, we interpret $\bigwedge$ and $\bigvee$-definable
sets in a sufficiently saturated model of the theory.
\begin{enumerate}
\item Let $\widetilde{\Lam} = \cup H_n$ be a $\bigvee$-definable group.
A set $X \subset \widetilde{\Lam}$ is {\em locally $\bigwedge$-definable} if for each $n$, $H_n \cap X$ is $\bigwedge$-definable.
\item A subgroup $\Gamma$ of $\widetilde{\Lam}$ is {\em locally $\bigwedge$-definable} if $H_n \cap \Gamma$ is $\bigwedge$-definable, for all $n$.
\item A $\bigwedge$-definable approximate subgroup $W$
is said to {\em determine a commensurability class $\omega$} if $W= \cap_n W_n$, with $W_1 \supset W_2 \supset \cdots $
all in $\omega$.
\item Let $X$ be a $\bigwedge$-definable set. A function $\phi: X \to \Rr$ is said to be {\em continuous} (or: {\em definable in the sense of continuous logic}) if it is bounded, and for any two disjoint compact sets $C,C' \subset \Rr$ there exists a definable $D$
with $\phi ^{-1}(C) \subset D$ and $D \cap \phi ^{-1} C' = \emptyset$.
\item Let $X$ be a locally $\bigwedge$-definable subset of $\widetilde{\Lam}$. Then $\phi: X \to \Rr$ is {\em continuous} if
$\phi | (X \cap H_n)$ is continuous for each $n$.
\end{enumerate}
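To illustrate clause (4), here is a standard example (a sketch; we take for granted basic facts about real closed fields):

```latex
Let $M$ be a saturated real closed field, and let $X = [-1,1](M)$, a definable set.
The standard part map $\mathrm{st} : X \to \Rr$ is continuous in the above sense: it is
bounded, and given disjoint compact $C, C' \subset \Rr$, cover $C$ by finitely many
open intervals $I_1,\ldots,I_k$ with rational endpoints whose closures miss $C'$;
then $D = \bigcup_j I_j(M)$ is definable, $\mathrm{st} ^{-1}(C) \subset D$ (as $x$ and
$\mathrm{st}(x)$ differ by an infinitesimal), and $D \cap \mathrm{st} ^{-1}(C') = \emptyset$
(as $\mathrm{st}(x) \in \bigcup_j \mathrm{cl}(I_j)$ for $x \in D$).
```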
\begin{thm} \label{ag2mt} Let $G$ be a group, $\omega$ a commensurability class of approximate subgroups of a group $G$.
Then there exists a $\bigvee$ -definable
group $\widetilde{\Lam}$,
a locally $\bigwedge$-definable subgroup $\Gamma \leq \widetilde{\Lam}$ of bounded index, and finitely many continuous
quasimorphisms $\phi_1,\ldots,\phi_n : \Gamma \to \Rr$, such that the
approximate kernel \[ \Gamma_\phi : = \cap_{i=1}^n \phi_i ^{-1} [-1,1] \]
is a $\bigwedge$-definable approximate subgroup, and
$\omega$ is the commensurability class determined by $\Gamma_\phi$.
\end{thm}
\begin{proof}
By \thmref{ag2}, there exists a quasi-homomorphism $f: \widetilde{\Lam} \to \mathsf{L}:K$, where $K$ is a compact subset of a rigid abelian subgroup $A \trianglelefteq \mathsf{L}$, $A \cong \Rr^n$, $\omega=\chi(f)$.
Let $\bar{\mathsf{L}} = \mathsf{L} / A$, let $\bar{f}: \widetilde{\Lam} \to \bar{\mathsf{L}}$ be the composed homomorphism $\widetilde{\Lam} \to \mathsf{L} \to \bar{\mathsf{L}}$;
and let $\Gamma$ be the kernel.
By \corref{G00b}, since $A$ contains $cl(\langle $\cjRL{S}$\, \rangle)$, the induced homomorphism $\widetilde{\Lam}/ \Gamma \to \mathsf{L} / A$ is continuous, and $\Gamma$ is a
locally $\bigwedge$-definable group.
Let $\psi = f| \Gamma$.
Identifying $A$ with $\Rr^n$ and projecting
to coordinates, we can view $\psi$ as an $n$-tuple of quasimorphisms $\psi_1,\ldots,\psi_n$.
We have:
\begin{enumerate}
\item $\psi ^{-1} (-m,m)^n$ is an approximate subgroup, for large enough $m$ (by \propref{converse}).
\item If $C \subset A$ is compact, then $\psi ^{-1}(C) \subset f ^{-1}(C)$
so this set is contained in a definable subset of $\widetilde{\Lam}$. (\thmref{grmain} (4)).
\item For some $m$, the pullbacks under $\psi$ of two compact subsets of $\Rr^n$ whose Euclidean distance is at least $m$ are separated by a definable set.
\item $\psi ^{-1} (-m,m)^n$ has bounded index in $\widetilde{\Lam}$ (even if we work in a large saturated $M$: if $(a_i: i < (2^{\aleph_0})^+)$
are elements of $\widetilde{\Lam}$, then some $a_i,a_j$ with $i<j$ must have the same image in $\mathsf{L}$, hence $a_i ^{-1} a_j \in \Gamma$,
and $\psi(a_i ^{-1} a_j) \in $\cjRL{S}$\, \subset (-m,m)^n$.)
\end{enumerate}
Let $\phi_i$ be the homogeneous
quasimorphism at bounded distance from $\psi_i$; and let $\phi=(\phi_1,\ldots,\phi_n)$. The four properties of $\psi$ above are all
inherited by $\phi$.
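Recall the standard homogenization procedure (included for completeness; $D$ denotes a defect bound for $\psi_i$):

```latex
If $|\psi_i(gh) - \psi_i(g) - \psi_i(h)| \leq D$ for all $g,h \in \Gamma$, then
\[ \phi_i(x) \;=\; \lim_{k \to \infty} \frac{\psi_i(x^{2^k})}{2^k} \]
exists (the terms form a Cauchy sequence, with consecutive differences at most
$D/2^{k+1}$), and $\phi_i$ is a quasimorphism with $|\phi_i - \psi_i| \leq D$ and
$\phi_i(x^n) = n\,\phi_i(x)$ for all $n \in \Zz$; it is the unique homogeneous
quasimorphism at bounded distance from $\psi_i$.
```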
Replacing $\phi$ by $\frac{1}{m} \phi$ we may assume $\phi ^{-1} ((-1,1)^n)$ is an approximate subgroup, and the inverse images of two compacts that are $1$-separated (i.e. $[-1,1]^n C \cap C' = \emptyset$) are separated by a definable set.
Along with the homogeneity of $\phi$, this implies that $\phi$ is continuous: if $C ,C'$ are disjoint compact subsets of $\Rr^n$,
they are $\epsilon$-separated for some $\epsilon >0$; let $N$ be an integer with $N \epsilon >1$. Note
$\phi ^{-1}(NC), \phi ^{-1}(NC')$ are contained in a definable subset of $\widetilde{\Lam}$. As $NC, NC'$ are $1$-separated,
there exists a definable set $D$ with $\phi ^{-1}(NC) \subset D , D \cap \phi ^{-1}(NC')=\emptyset$. Let
$D^*=\{g \in \Gamma: g^N \in D\}$. Then $\phi ^{-1}(C) \subset D^*$, $D^* \cap \phi ^{-1} (C') =\emptyset$. Thus the definable sets
separate pullbacks of an arbitrary pair of disjoint compacts, proving the continuity of $\phi$.
Since $\phi ^{-1} [-1,1]^n$ has bounded index in $\widetilde{\Lam}$, any definable set containing $\phi ^{-1} [-1,1]^n$ must be commensurable with the definable approximations to $\widetilde{\Lam}$.
\end{proof}
\begin{rems}
\begin{enumerate}
\item In case $n=0$, $\Gamma$ is $\bigwedge \omega$-definable. Existence of such a $\Gamma$ is assured in this case by Theorem 3.5 of
\cite{nqf}.
\item We may choose $\widetilde{\Lam},\Gamma$ so that $\widetilde{\Lam}/\Gamma$ is a connected
Lie group with no nontrivial compact normal subgroup.
\end{enumerate}
\end{rems}
%
\ssec{Group-theoretic formulation} \label{ag2gt}
We restate \thmref{ag2} in more standard group-theoretic terms, somewhat in the spirit
of the Bargmann-Wigner theorem
on projective representations of Lie groups.
Let $G$ be a group, let $E_n$ be Euclidean $n$-space, $O_n$ the group of isometries of $E_n$ fixing $0$. Assume
a homomorphism $\rho:G \to O_n$ is given; $\rho$ makes $E_n$ into a $G$-module $V$; we call $V$ a {\em rigid} $G$-module
to indicate that the image of $\rho$ preserves the Euclidean norm.
Let $C^n(G,\rho)=C^n(G,V)$ be the $\Rr$-space of functions $G^n \to V$; view this as a complex with the usual differential,
$ \left(d^{n+1}\varphi \right)(g_{1},\ldots ,g_{n+1})=g_{1}\varphi (g_{2},\dots ,g_{n+1})+\sum _{i=1}^{n}(-1)^{i}\varphi \left(g_{1},\ldots ,g_{i-1},g_{i}g_{i+1},\ldots ,g_{n+1}\right)+(-1)^{n+1}\varphi (g_{1},\ldots ,g_{n}) $.
The kernel of $d^{n+1}$ is denoted
$Z^n$, the group of $n$-cocycles. In particular any $\alpha \in Z^2(G,V)$ determines canonically
a group $G \underset{\alpha}{\times} V$ with universe $G \times V$, and multiplication: $(g,e) (h,d) = (gh, \rho(h) e +d+\alpha(g,h))$. It fits into an exact sequence
\[ 0 \to V \to G \underset{\alpha}{\times} V \to G \to 1 \]
In case $\rho$ is trivial, this is a central extension of $G$ by $\Rr^n$; in general we may call it a rigid extension of $G$.
Let $C_{bdd}$ be the subcomplex of bounded cochains. See \cite{monod-icm}.
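To orient the reader, here is a standard special case (with $\rho$ trivial and $n=1$): every real-valued quasimorphism gives rise to a bounded $2$-cocycle.

```latex
Let $\psi : G \to \Rr$ be a quasimorphism with defect bound $D$, i.e.
$|\psi(gh) - \psi(g) - \psi(h)| \leq D$ for all $g,h$. Define
\[ \alpha(g,h) \;=\; \psi(g) + \psi(h) - \psi(gh) \;=\; (d^2 \psi)(g,h). \]
Then $d^3 \alpha = 0$ automatically, and $\|\alpha\|_\infty \leq D$, so
$\alpha \in Z^2_{bdd}(G,\Rr)$. Its class in ordinary $H^2(G,\Rr)$ vanishes (it is a
coboundary), while its class in bounded cohomology vanishes precisely when $\psi$ is
at bounded distance from a homomorphism.
```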
\def\G{\tilde{G}}
\begin{thm} \label{ag2gr} Let $\omega$ be a commensurability class of approximate subgroups of a group $G$. Then there exists a subgroup $\widetilde{\Lam} \leq G$, a connected Lie group $\mathsf{L}$ without compact normal subgroups,
a rigid $\widetilde{\Lam}$-module $V$,
a cocycle
$\alpha \in Z^2_{bdd}(\widetilde{\Lam}, V)$, and a homomorphism $\phi: \widetilde{\Lam} \underset{\alpha}{\times} V \to \mathsf{L}$, continuous on $1 \times V$, with $\omega$
the class of $\{ g \in \widetilde{\Lam}: \phi(g,0) \in U \}$, for $U$ a compact neighborhood of $1$ in $\mathsf{L}$.
\end{thm}
\begin{proof} By \thmref{ag2} there exists a subgroup $\widetilde{\Lam} \leq G$, a connected Lie group $\mathsf{L}$,
a normal compact subset $K$ of a rigid abelian subgroup $A \leq \mathsf{L}$, $A \cong \Rr^N$,
and a quasi-homomorphism
\[f: \widetilde{\Lam} \to \mathsf{L}:K\]
with $\omega = \chi(f)$. We will identify $A$ with $\Rr^N$, and write multiplication additively within $A$.
Rigidity of $A$ means that there exists a homomorphism $\rho': \mathsf{L} \to O_N$ with $x ^{-1} a x = \rho'(x)a$,
for all $x \in \mathsf{L}, a \in A$.
For $g \in \widetilde{\Lam}$, let $\rho(g) = \rho'(f(g))$. As $K \subset A$ and $A$ is abelian, $\rho'(K) =1$, and hence $\rho: \widetilde{\Lam} \to O_N$ is a homomorphism.
Let $V$ be $A$ with this $\widetilde{\Lam}$-module structure.
Define $\alpha: \widetilde{\Lam}^2 \to A=\Rr^N$ by $\alpha(x,y) = f(xy)-f(x)-f(y)$. It is easy to check that $d^3 \alpha =0$ in $C^3(\widetilde{\Lam},\rho)$.
Since $\alpha(x,y) \in K$, it is bounded, i.e. $\alpha \in Z^2_{bdd}(\widetilde{\Lam},\rho)$.
Define $\phi: \widetilde{\Lam} \underset{\alpha}{\times} V \to \mathsf{L}$ by $\phi(g,a) = f(g)a$. It is easy to check that $\phi$ is a homomorphism, and that
$\chi(f)$ is the class of $\{ g \in \widetilde{\Lam}: \phi(g,0) \in U \}$, for $U$ a compact neighborhood of $1$ in $\mathsf{L}$.
\end{proof}
Let $\beta$ be the bounded cohomology class represented
by $\alpha$. If $\beta=0$, then $\omega$ is laminar: we have, above, $\alpha= d^2 (b)$ for some bounded $b: \widetilde{\Lam} \to A$. Define $f'(x) = f(x) b(x) ^{-1}$.
Again it is easy to check that $f': \widetilde{\Lam} \to \mathsf{L}$ is a homomorphism. As $b(\widetilde{\Lam})$ is a bounded subset of $A$, and hence
a precompact subset of $\mathsf{L}$, we have $\chi(f)=\chi(f')$. This shows that $\omega$ is laminar.
\begin{cor} \label{cohcor} Let $G$ be a group, $\omega$ a commensurability class of approximate subgroups of $G$ that
does not belong to a proper subgroup of $G$. Then $\omega$ canonically determines a homomorphism
$\bar{f}: G \to \bar{L}$ into a Lie group $\bar{L}$, a rigid $G$-module $M$, and a class $\beta \in H^2_b(G,M)$.
We have $\omega= \chi(\bar{f})$ if $\beta=0$. In particular if $H^2_b(G,M) = 0$ then all such approximate subgroups are laminar.
\end{cor}
A similar statement can be made for approximate subgroups that do not belong to an infinite index subgroup of $G$,
in terms of the limit of $H^\cdot(F,\cdot)$ over finite index subgroups $F$ of
$G$.
Burger and Monod have proved vanishing theorems for lattices in semisimple groups $L$. This will not immediately help us in
understanding approximate lattices $\Lambda$ of $L$, since the criterion of \corref{cohcor} requires vanishing of the bounded cohomology not of $L$ itself, but of the subgroups $G$ generated by
approximate subgroups.
Prima facie, these groups $G$ seem perfectly likely to be free and hence have very large bounded cohomology. We will analyze
$\Lambda$ by other methods. But after the fact, we will know that $G$ is in fact isomorphic to a lattice in a semisimple group
(not in $L$ itself but in a product of $L$ with a `complementary' Lie group); making the results of \cite{burger-monod} and \cite{monod} relevant.
It will be too late for $G$ itself, but along with Gromov's mapping theorem, we will be able
to apply the criterion to soluble extensions of $G$.
See \propref{othergroups}.
The next Proposition, though of independent motivation, will establish a certain converse to \corref{cohcor}; a non-zero class of
$H^2_b(G,\Rr)$, for example, gives rise to a non-laminar approximate subgroup, perhaps not of $G$ but in any case of a
central extension of $G$. (Namely, if $0 \neq \beta \in H^2_b(G,\Rr)$, the image of $\beta$ in $H^2(G,\Rr)$ determines a central extension $\bar{G}$ of $G$ by $\Rr$; the image of $\beta$ in $H^2_b(\bar{G}, \Rr)$ remains nontrivial by the same mapping theorem of Gromov; but the image in $H^2(\bar{G},\Rr)$ is trivial. Thus $\beta = d \alpha$ for some quasimorphism $\alpha: \bar{G} \to \Rr$,
with nonzero bounded cohomology class; by \propref{notsame}, the approximate kernel of $\alpha$ is not laminar.)
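A classical instance of this construction (a sketch; the facts invoked are standard results on the bounded Euler class):

```latex
Let $G = \pi_1(\Sigma)$ for a closed orientable surface $\Sigma$ of genus at least $2$,
and let $\beta \in H^2_b(G,\Rr)$ be the bounded Euler class of the action of $G$ on the
circle coming from a Fuchsian uniformization; $\beta \neq 0$, since its image in
$H^2(G,\Rr)$ is already nonzero (Milnor--Wood). The associated central extension
$\bar{G}$ of $G$ by $\Rr$ is the pushout along $\Zz \hookrightarrow \Rr$ of $\pi_1$
of the unit tangent bundle of $\Sigma$. On $\bar{G}$, the image of $\beta$ in
$H^2(\bar{G},\Rr)$ vanishes, and a quasimorphism $\alpha$ trivializing it is given by
Poincar\'e's translation number of the lifted circle action; its approximate kernel
$\alpha ^{-1} [-1,1]$ is then a non-laminar approximate subgroup of $\bar{G}$.
```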
\ssec{Approximate kernels of quasimorphisms are {non-laminar}.} \label{5.18}
The proposition below implies that an approximate kernel $E$ of a nontrivial quasimorphism is associated to no $\bigwedge$-definable group,
in any expansion of the language. In other words, there is no chain of approximate subgroups
$D_n$ commensurable to $E$, with $D_{n+1}^2 \subset D_n$.
In particular, the
two classes of approximate subgroups, arising from homomorphisms into a Lie group or from (nontrivial) quasimorphisms into $\Rr$, have trivial intersection.
\begin{prop} \label{notsame}
Suppose $\omega$ is a commensurability class of approximate subgroups, equal to $\chi(g)$ for some
homogeneous quasimorphism $g: G \to \Rr^n$, and also to $\chi(f)$ for some homomorphism into a Lie group.
Then $\omega$ belongs to a subgroup on which $g$ is a homomorphism.
\end{prop}
\begin{proof}
Let $E=g ^{-1} ([-1,1]^n)$. We can assume $G=\langle E \rangle$, and take $f(G)$ to be dense in $L$. As in \cite{nqf} we may take $L$ to be a connected Lie group with no compact normal subgroups. Being homogeneous, $g$ is conjugation-invariant, so $E$ is normal, and thus $f(E)$ is normalized by $f(G)$ and so $cl(f(E))$ is normal in $L$. But
$f(E)$ is precompact, so $cl(f(E))$ is compact. We can now apply \thmref{lc1cc} with $H=L$, $K=cl(f(E))$. We must have
$H_1=H=L$ and $N=1$. Thus there exists a rigid $A \leq L$ with $\langle cl(f(E)) \rangle \subset A$. But $\langle cl(f(E)) \rangle$ contains $f(G)$, which
is dense in $L$; so $L=A$ is abelian, and $f (\langle [G,G] \rangle) = 1$. Since $\chi(f)$ and $E$ are commensurable,
$\langle [G,G] \rangle$ is contained in finitely many translates of $E$, so $g( \langle [G,G] \rangle) $ is bounded. By homogeneity,
$g(x^n)=ng(x)$ for $x \in [G,G]$, so $\Zz g(x)$ is bounded and hence $g(x)=0$, i.e. $g$ vanishes on $ \langle [G,G] \rangle $.
Now let $x,y \in G$; then $x^ny^n (xy)^{-n} \in \langle [G,G] \rangle $, so $g(x^ny^n (xy)^{-n}) =0$ and hence $g(x^n)+g(y^n)+g((xy)^{-n})$
is bounded independently of $n$. But $g(x^n)+g(y^n)+g((xy)^{-n}) = n(g(x)+g(y)-g(xy))$ so letting $n \to \infty$
we have $g(x)+g(y)=g(xy)$, i.e. $g$ is a homomorphism.
\end{proof}
\begin{example} All known nontrivial definable approximate subgroups appear to live in theories with the strict order property.
This is related to longstanding questions regarding simple theories, and to new ones concerning the existence property in theories of Shelah's class NSOP1. It is interesting in this connection to examine a first example of an approximate subgroup arising from a quasimorphism. We take Brooks' quasimorphism on the free group, see \cite{kotschick}, Example 1. Let $F$ be the free group on two letters $x,y$, and $w$ a nontrivial cyclically reduced word; for definiteness take $w=xy$. Let $A \subset F$ be the set of words
containing as many nonoverlapping copies of $w$ as of $w ^{-1}$; $A$ is an approximate subgroup. The group $x^{\Zz}=\langle x \rangle$ is contained in $A$.
Moreover $x^n y \in A$ for $n \leq 0$ but not for $n>0$. Hence the semigroup $x^{-\Nn} = A \cap C_F(x) \cap A y ^{-1}$ is definable
with parameters in $F$, and even definable in $A$ given with partial multiplication and conjugation by $x,y$. Hence $(F,A,\cdot)$
has the order property.
\end{example}
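As a sanity check, the membership computations in this example can be mechanized. The sketch below is our illustration, not part of the text, and the helper names are ours: it encodes $x,y,x ^{-1}, y ^{-1}$ as the characters \texttt{x,y,X,Y} and counts maximal families of nonoverlapping occurrences by a greedy left-to-right scan, which is optimal for a single fixed pattern.

```python
# Count the maximum number of nonoverlapping occurrences of a fixed subword
# in a reduced word over {x, y, X, Y}, where X = x^{-1} and Y = y^{-1}.
# For a single pattern, a greedy left-to-right scan attains the maximum.
def count_nonoverlapping(word, sub):
    count, i = 0, 0
    while True:
        j = word.find(sub, i)
        if j == -1:
            return count
        count += 1
        i = j + len(sub)   # skip past this occurrence: copies may not overlap

def in_A(word, w="xy", w_inv="YX"):
    """Membership in A: as many nonoverlapping copies of w as of w^{-1}."""
    return count_nonoverlapping(word, w) == count_nonoverlapping(word, w_inv)

# x^n lies in A for every n (no copies of xy or of YX at all) ...
assert in_A("x" * 5) and in_A("X" * 5)
# ... while x^n y lies in A for n <= 0 but not for n > 0:
assert in_A("XXXy")        # x^{-3} y: zero copies of each pattern
assert not in_A("xxxy")    # x^{3} y: one copy of xy, none of YX
```

Running the assertions confirms the membership claims used to define the semigroup above.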
\ssec{} An approximate homomorphism with central error defined on a centerless group (in a strong sense)
induces an actual homomorphism, whose pullbacks of compact sets are not much larger. We give one formulation
of this in terms of a topology on $G$; see
\secref{complementarity} for the definition of discrete-on-compact.
\begin{prop}\label{centralcase} Let $G,\mathsf H$ be topological groups, $\widetilde{\Lam}$ a subgroup of $G$.
Let $K$ be a compact subset of the center $Z$ of $\mathsf H$. Assume:
\begin{itemize}
\item For some finite set $F \subset \widetilde{\Lam}$, the centralizer $C_G(F)$ is finite.
\end{itemize}
Let $f: \widetilde{\Lam} \to \mathsf H:K$ be a discrete-on-compact quasi-homomorphism. Then the composition $ \widetilde{\Lam} \to \mathsf H \to \mathsf H/Z$ is a
discrete-on-compact homomorphism. \end{prop}
\begin{proof}
Let $\pi: \mathsf H \to \bar{\mathsf H}:=\mathsf H /Z$ be the quotient map, and $\ff = \pi \circ f: \widetilde{\Lam} \to \bar{\mathsf H}$.
Let $T_h: \mathsf H \to \mathsf H$ be the map: $x \mapsto x ^{-1} h x$. Then $T_h$ factors through a map $t_h:\bar{\mathsf H} \to \mathsf H$. By definition of the quotient topology,
$t_h: \bar{\mathsf H} \to \mathsf H$ is continuous. Hence it maps (pre)compact sets to (pre)compact sets.
Let $W$ be a subset of $\widetilde{\Lam}$ with $\ff(W)$ precompact. It follows for any $g \in \widetilde{\Lam}$ that
$t_{f(g)} (\ff (W)) $
is precompact. Now $t_{f(g)}(\ff W) = T_{f(g)} ( f(W)) $ and
$ f(g^W) \subset t_{f(g)} ( f(W)) K^4 $ so $f(g^W)$ is also precompact,
and hence $g^W$ is discrete in $G$.
Suppose $w_i \in W$
and $w_i \to a \in G$. Then for each $g \in F$, $w_i ^{-1} g w_i \to a ^{-1} g a$. Since $g^W $ is discrete in $G$,
it follows that $w_i ^{-1} g w_i = a ^{-1} g a$ for large $i$. As this holds for all $g \in F$, it follows that all $w_i$ (for large enough $i$) lie in the same coset of $ C_G(F)$. But $C_G(F)$ is finite. Thus $W$ has no accumulation points in $G$, so it is discrete in $G$.
\end{proof}
\begin{question} Can the `no chain' statement alluded to in the first paragraph of \secref{5.18} be made more effective? For example, it follows from this statement
that there exists an approximate subgroup $D$ commensurable to $E$, such that for no approximate subgroup $D'$
do we have $(D')^8 \subset D^4$. How can such a $D$ be found or recognised?
\end{question}
\begin{question} Let $G= \cup_{ n \in \Nn} G_n$ be a piecewise definable group in some structure, with the $G_n$ commensurable. Let $\phi: G \to \Rr^d:[-1,1]^d$ be a continuous, nontrivial quasimorphism; so that the sets $\phi ^{-1}([-n,n]^d)$
are intertwined with the $G_n$. It is fairly clear, in view of \propref{notsame} and the existence of stabilizers of generic types in simple theories, and the independence theorem, that $T=Th(G)$ cannot be simple. Likewise if $\phi$ is defined on $G^{00}$, it must be trivial; so it appears that approximate subgroups in simple theories must be {laminar}. Does this extend to theories without Shelah's property SOP1, i.e. to NSOP1 theories (\cite{shelah-towards}, \cite{kr})?
\fmeo{Not simple:
The pullback of $ \phi ^{-1} [-100,100]$ is commensurable to $\phi ^{-1} [-3,3]$ so the latter is generic; but then a generic type ought to generate a group in 4 steps, or a descending sequence; we've already argued against the latter, giving a homomorphism to a locally compact
group; but in 4 steps implies that $ \phi ^{-1} [-100,100] \subset \phi ^{-1} [-12,12]$, etc.}
\end{question}
\newpage
\section{Amenable approximate subgroups} \label{amenable}
Call an approximate subgroup ${\Lambda}$ of $G$ {\em amenable} if there exists a left translation invariant, finitely additive measure $\nu$ on the set of all
subsets of ${\Lambda}^3$, with $0<\nu({\Lambda})$ and $\nu({\Lambda}^3)<\infty$.\footnote{Translation invariance means that $\nu(Y)=\nu(aY)$ whenever
$Y ,aY \subset {\Lambda}^3$.}
It is easy to see that if ${\Lambda}$ is an amenable
approximate subgroup of $G$, then for any $n$ there exists a left translation invariant, finitely additive measure $\nu_n$ on the algebra of all
subsets of ${\Lambda}^n$, with $0<\nu_n({\Lambda})$ and $\nu_n({\Lambda}^n)<\infty$; moreover these measures are compatible, each being uniquely determined by its restriction to subsets of ${\Lambda}$.
Ultraproducts of amenable groups need not be amenable, but they are definably amenable in an appropriate sense.
This was used in \cite{nqf} to show that ultraproducts of finite $k$-approximate groups are controlled by homomorphisms
to Lie groups; the proof applies to ultraproducts of amenable $k$-approximate groups identically, so these
are {laminar} (see \corref{amenable-lie}).
\begin{lem} \label{amenable1} Let $G$ be a discrete or second countable locally compact group, and assume $G$ is amenable.
Let ${\Lambda}$ be a discrete approximate subgroup of $G$.
Then ${\Lambda}$ is amenable.
\end{lem}
\begin{proof} Let $W$ be a maximal subset of $G$ satisfying $w' \notin {\Lambda}^6 w$ for distinct $w',w \in W$. In the locally compact
case, $W$ can be taken to be Borel by \lemref{borel}; note also in this case that ${\Lambda}$ is countable, being discrete and second countable.
Let $\mu$ be a nonzero, left translation invariant, finitely additive measure on the algebra of all Borel subsets of $G$
(or of arbitrary subsets, in the discrete case).
For $C \subset {\Lambda}^3$, define
\[\nu(C) = \mu(CW). \]
Since finitely many left cosets of ${\Lambda}$ cover ${\Lambda}^{12}$, and ${\Lambda}^{12}W = G$, we must have $\nu({\Lambda})=\mu({\Lambda}W) >0$.
If $C \cap C' = \emptyset$, then $CW \cap C'W = \emptyset$ since if $c \in C, c' \in C', w,w' \in W$ then
$cw = c'w' $ only if $w=w'$ and hence $c=c'$. It follows that $\nu$ is a finitely additive, left translation invariant measure on ${\Lambda}^3$.
\end{proof}
The proof is easier in case ${\Lambda}$ is cocompact: in place of a maximal subset, take $W$ to be a small open neighborhood of $1$, meeting ${\Lambda}^6$ in $1$. By cocompactness, $\mu({\Lambda}W) \neq 0$ since finitely many translates of ${\Lambda}W$ cover $G$; the rest of the proof is the same.
\begin{cor}\label{amenable-lie}
Let $G$ be an amenable locally compact group, and ${\Lambda}$ a discrete approximate subgroup.
Then there exists a subgroup $\widetilde{\Lambda}$ of $\langle {\Lambda} \rangle$
and a homomorphism $f: \widetilde{\Lambda} \to L$, $L$ a Lie group,
such that ${\Lambda} \sim \chi(f)$.
\end{cor}
\begin{proof} Consider $G$ as a structure for a large language, where every Borel relation is named.
By \lemref{amenable1}, ${\Lambda}$ is amenable, and the resulting measure $\nu$ is a definable (finitely additive) measure.
Take a (sufficiently) saturated elementary extension (e.g. ultrapower)
$(G^* ,\cdot,{\Lambda}^*,\cdots)$ of $(G,\cdot,{\Lambda},\cdots)$. Then ${\Lambda}^*$ is
a {\em near subgroup}, in the sense of \cite{nqf}: the ideal of all subsets $Y$ of ${\Lambda}^3$
with $\nu(Y) =\nu(Y ^{-1}) = 0$ is an intersection of (two) S1-ideals, hence itself an S1-ideal.
Theorem {4.2} of \cite{nqf} gives the conclusion.
\end{proof}
For approximate subgroups of soluble Lie groups, this (and more) is proved in \cite{machado2}. For abelian groups it is a theorem of Meyer \cite{meyer}.
\begin{rem} The lemma and corollary remain valid if $G$ is an amenable {\em approximate} group. \end{rem}
\begin{question} Let $\Lambda$ be an approximate lattice in a non-amenable locally compact group.
Could $\Lambda$ be definably amenable in a reasonable language, or even for that matter outright amenable? For
co-compact $\Lambda$, \cite{BjH} answer negatively for metric amenability.
\end{question}
\newpage
\section{Approximate lattices in semisimple groups}
\label{discreteapprox}
Let $G$ be a locally compact group.
Recall the definition of an approximate lattice from the introduction, and Appendix \ref{approxlattices}.
The definitions below parallel those in \cite{BjH}; we allow ourselves to define {\em Meyer sets} for all lattices (in our sense),
so that a Meyer set in the sense of \cite{BjH} is what we call a {\em uniform Meyer set}. (The requirement that $M,S$
be commensurable is prima facie stronger than that in \cite{BjH}, but they are equivalent, see \lemref{commens}.)
When we have a product $G \times H$ of two groups, we will usually denote the projections by $\pi_1,\pi_2$.
\begin{defn}[\cite{BjH}] \label{modelset} Let $G$ be a locally compact group. \begin{itemize}
\item A subset $S \subset G$ is a {\em (uniform) model set} if
there exists a locally compact group $H$, a (co-compact) lattice $\Gamma \subset G \times H$,
and a compact $C \subset H$ with nonempty interior, such that $\Gamma$ has dense image under the projection $\pi_2$, $\pi_1$ is injective on $\Gamma$, and
\[S= \pi_1 ( \Gamma \cap (G \times C)). \]
$S$ is also called a {\em cut-and-project set} for $\Gamma$.
\item A {\em (uniform) Meyer set} is a subset $M$ of a (uniform) model set $S$, commensurable to $S$.
\end{itemize}
\end{defn}
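To illustrate \defref{modelset} in the simplest abelian setting (our illustration, with hypothetical helper names): take $G=H=\Rr$, let $\Gamma=\{(m+n\varphi,\, m+n\varphi') : m,n \in \Zz\}$ be the Galois embedding of $\Zz[\varphi]$, where $\varphi=(1+\sqrt 5)/2$ and $\varphi'=(1-\sqrt 5)/2$, and take the window $C=[-1,1]$. The resulting cut-and-project set (the Fibonacci quasicrystal) is a uniform model set: uniformly discrete and relatively dense, with only finitely many gap lengths, yet not a subgroup.

```python
import math

PHI = (1 + math.sqrt(5)) / 2    # varphi
PHI_C = (1 - math.sqrt(5)) / 2  # Galois conjugate varphi'

def model_set(window=1.0, N=30):
    """Cut-and-project set for the lattice {(m + n*PHI, m + n*PHI_C)} in R x R:
    keep the 'physical' coordinate m + n*PHI whenever the 'internal' one
    m + n*PHI_C lands in the compact window [-window, window]."""
    pts = []
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if abs(m + n * PHI_C) <= window:
                pts.append(m + n * PHI)
    return sorted(pts)

S = model_set()
gaps = [b - a for a, b in zip(S, S[1:])]
# Uniformly discrete and relatively dense: gaps bounded away from 0 and from
# infinity, with only finitely many gap lengths (a Meyer-set hallmark).
assert min(gaps) > 0.5 and max(gaps) < 1.7
assert len({round(g, 6) for g in gaps}) <= 3
```

On the computed range the gaps take exactly the values $\varphi-1$, $1$ and $\varphi$, up to floating-point error.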
This generalizes a construction of lattices, fundamental to the definition of an arithmetic lattice, where $C$ is taken to be a compact
{\em subgroup}; see \cite{margulis} p.1. Here it is replaced, in effect, by an approximate subgroup.
The injectivity and density assumptions in \defref{modelset} are inessential: we can replace $H$ by the closure $H'$ of $\pi_2(\Gamma)$, then factor out the closure of the normal subgroup
$\pi_2( \ker(\pi_1))$ from $H'$, without changing the cut-and-project sets.
Finding a
cut-and-project model $\Gamma \leq G \times H$ for an approximate sublattice $\Lambda$ is equivalent to the existence of a
homomorphism $f: \widetilde{\Lam} \to H$, viewed as a partial map $G \to H$, with the interesting property of interchanging compact sets in the image with discrete sets in the domain; the relation between them is that $\Gamma$ is the graph of $f$.
The identity map on the ring $\Zz[1/2]$, as a partial map from the $2$-adic completion to the real one, or vice versa, is a suggestive example of this phenomenon.
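To make the $\Zz[1/2]$ example concrete (our illustration; the helper names are ours): bounding the $2$-adic absolute value cuts out a discrete subset of $\Rr$, e.g. $\{r \in \Zz[1/2] : |r|_2 \le 1\} = \Zz$, while bounding the real absolute value leaves a $2$-adically unbounded set.

```python
from fractions import Fraction

def v2(r):
    """2-adic valuation of a nonzero rational; |r|_2 = 2**(-v2(r))."""
    r = Fraction(r)
    n, d, v = r.numerator, r.denominator, 0
    while n % 2 == 0:
        n //= 2; v += 1
    while d % 2 == 0:
        d //= 2; v -= 1
    return v

# A finite sample of the dyadic rationals a / 2^k:
dyadics = {Fraction(a, 2**k) for a in range(-64, 65) if a != 0
           for k in range(7)}

# Cut with the 2-adic window |r|_2 <= 1 (i.e. v2(r) >= 0): only the
# integers in the sample survive -- a discrete subset of R.
window = sorted(r for r in dyadics if v2(r) >= 0)
assert all(r.denominator == 1 for r in window)
assert all(b - a >= 1 for a, b in zip(window, window[1:]))

# Conversely, bounding the *real* absolute value keeps a set that is
# precompact in R but 2-adically unbounded (denominators 2^k grow):
real_window = [r for r in dyadics if abs(r) <= 1]
assert min(v2(r) for r in real_window) == -6
```

The two cuts exchange the roles of compactness and discreteness exactly as in the preceding paragraph.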
\begin{problem}[Bj\"orklund and Hartnick, \cite{BjH} p. 2918] \label{bjhp1} In what locally compact second countable groups $G$ is every cocompact approximate lattice a Meyer set?
\end{problem}
This is equivalent to the apparently weaker statement that every approximate lattice
$\Lambda$ is commensurable to a cut-and-project set:
\begin{lem} Let $G,H$ be locally compact groups, $\Gamma$ a lattice in $G\times H$ projecting injectively to $G$, $\Lambda$ an approximate subgroup of $G$
commensurable to a cut-and-project set of $\Gamma$. Then there exists a locally compact $H'$ and lattice $\Gamma' \leq G \times H'$,
such that $\Lambda$ is a Meyer set of $\Gamma'$, i.e. commensurable to and contained in a cut-and-project set of $\Gamma'$.
\end{lem}
\begin{proof} Let $\widetilde{\Lam}$ be the projection of $\Gamma$ to $G$, and $f: \widetilde{\Lam} \to H$ the homomorphism whose graph is $\Gamma$.
Let $\omega$ be the commensurator of $\Lambda$. By \lemref{commsmooth}, there exists $f' : \omega \to H'$ such that
$\chi(f')=\chi(f)= \omega$. Let $\Gamma' \leq G \times H'$ be the graph of $f'$. By \lemref{commlem}, $\Lambda \subset \comm(\omega)$. Since $\Lambda \sim \chi(f')$, there exists
a compact $C \subset H'$ with $\Lambda \subset (f') ^{-1}(C)$. Thus $\Lambda$ is a Meyer set of $\Gamma'$.
\end{proof}
Note that this may require a change of target $H$; even if $G,H$ are connected Lie groups, it may be necessary to augment $H$ with totally disconnected factors. This will be clear in the arithmetic setting below, where $p$-adic factors appear.
Existing results, as described in \secref{amenable}, fall into the amenable class.
We will complement this with a positive answer, without the restriction to cocompact approximate lattices,
for almost simple groups over local fields, and their finite products.
\ssec{Abstractly semisimple groups}
\begin{defn} \label{ass}
A locally compact group $G$ is {\em abstractly semisimple} if \begin{enumerate}
\item $G$ has no nontrivial normal abelian subgroups.
\item Outside the center, $G$ has no discrete conjugacy classes.
\end{enumerate}
\end{defn}
\begin{rems} \label{assrems} \begin{enumerate}
\item \label{assd} Abstract semisimplicity remains true upon passing to a finite index normal subgroup $H$ of $G$.
\item \label{products} Abstract semisimplicity is preserved by finite products.
\item \label{adeless}
The class is also closed under restricted infinite products $\Pi'_i G_i$ with respect to compact subgroups $C_i$ (see \defref{groupdefs}(\ref{restricted})), provided each $C_i$
has trivial centralizer in $G_i$.
\item \label{agss} A semisimple algebraic group over a local field is abstractly semisimple, at least upon moving to a finite index subgroup, and factoring out a finite center. These operations will not alter the validity of the results we will prove.
\item \label{bt1}
Let $G$ be an abstractly semisimple locally compact group. Then for some compact normal subgroup $N$, $G/N$ has a finite index subgroup of the form $G_1 \times G_2$ with $G_1$
a totally disconnected abstractly semisimple locally compact group, and $G_2$ a connected semisimple Lie group.
\end{enumerate}
\end{rems}
\begin{proof}
(\ref{assd}) If $X$ is a discrete conjugacy class of $H$, then the union of the $G$-conjugates of $X$ is a conjugacy class of $G$ and is still discrete (the union being finite), so trivial. If $A$ is a normal abelian subgroup of $H$, let $A_1,\ldots,A_n$ be the conjugate subgroups. Then $\cap_{i=1}^n A_i$ is $G$-normal and abelian, so trivial. We show by reverse induction on $k=n,\ldots,1$ that the
intersection of any $k$ of the groups $A_i$ is trivial. Assume this is true for $k+1$. If $A,B$
are each the intersection of $k$ of the $A_i$ and are distinct, then $A \cap B = (1)$ by the inductive hypothesis; so
the various $k$-intersections commute, and so generate a commutative $G$-normal subgroup, hence trivial. For $k=1$
this gives $A=(1)$.
(\ref{products}) The statement on discrete conjugacy classes is clear, since
a conjugacy class in the product is a product of conjugacy classes. If $A$ is a normal abelian subgroup of $G$,
the projection of $A$ to each factor is normal and abelian so trivial, hence $A$ is trivial.
(\ref{adeless}) The proof that there are no normal abelian subgroups is the same as in (\ref{products}).
If $X$ is a discrete conjugacy class of $G$,
pick $x \in X$; then the centralizer $C_G(x)$ is an open subgroup of $G$; thus for some finite $I_0 \subset I$,
$1_{I_0} \times \Pi_{i \in I \smallsetminus I_0} C_i$ is a subset of $C_G(x)$. From this it follows that $x \in G_0:= \Pi_{i \in I_0} G_i \times 1_{I \smallsetminus I_0}$.
So the projection
$\pi_0 X$
of $X$ to $G_0$ must already be discrete. By (\ref{products}), $\pi_0 X = (1)$. Since $x \in G_0$ we have $x=1$.
(\ref{agss}) By factoring out the center and moving
to a finite index subgroup, we may take $G$ to be a finite product of simple
groups. Then it is clear that $G$ has no nontrivial normal abelian subgroups, nor open ones. If $a^G$ is a discrete conjugacy class,
then $C_G(a)$ is open, and using the chain condition on centralizers, $C_G(a^G)=C_G(F)$
for some finite set $F$ of conjugates of $a$; so $C_G(a^G)$ is an open normal subgroup; hence $C_G(a^G)= G$ so $a$ is
central.
(\ref{bt1}) If $G$ is connected, this is Yamabe's theorem (with $G_1=1$). Moreover in this case, $G$ has a unique maximal
compact normal subgroup. In general, factoring out the maximal compact normal subgroup $N$ of the connected component, $G^0$ becomes
a semisimple Lie group. It has only finitely many outer automorphisms, by the structure theory of semisimple Lie groups and Borel-Tits; thus up to finite index, $G/N$ decomposes as a product of $G^0$ with the totally disconnected group $C_G(G^0)$.
\end{proof}
As a consequence of (3), if $G$ is a semisimple algebraic group over $\Qq$, then $G(\Aa_{\Qq})$ has a co-compact subgroup that is abstractly semisimple modulo a compact center. For instance
$SL_2(\Aa_{\Qq}) / Z$ is abstractly semisimple, where $Z$ is the center $\{\pm 1\}^{\{\infty,3,5,7,11\cdots\}}$.
Let $G$ be an abstractly semisimple group, and let
$\Lambda$ be a discrete approximate subgroup.
\thmref{ag2} explains $\Lambda$ in terms of a quasi-homomorphism into a Lie group; we would like to show that an ordinary homomorphism will already work. In case the error set is central,
this is already carried out in \propref{centralcase}, assuming in a strong sense that $\widetilde{\Lam}$ is centerless:
the centrality of the error set is played against the semisimplicity of $G$, showing we can factor out the centre of $L$ without harm.
The general case, where the error set is contained in a rigid normal subgroup $A$, is more complicated;
$A$, when factored out, may require taking with it a bigger normal subgroup, that needs to be shown to be solvable.
We postpone the technical details to \propref{ctp} at the end of the section, and prove the theorem assuming it.
\begin{defn} Let $G$ be a topological group, and $\omega$ a commensurability class of approximate subgroups. $\omega$ is
{\em strictly dense} if it does not belong\footnote{as defined in the beginning of \secref{approxsg}} to a closed subgroup of infinite index in $G$.
\end{defn}
Note that lattices are never strictly dense; see also the Example below. In view of this we will also say that $\Lambda$ is
{a {\em strictly approximate lattice}} if the commensurability class of $\Lambda$ is strictly dense.
In case $\Lambda \in \omega$ and $\langle \Lambda \rangle \in {\langle \omega \rangle}_{min}$ in the sense of \corref{minimalclass}, $\Lambda$ is a fortiori
strictly dense in $G$, of course.
\begin{example} \label{nondense}
If $G=G_1 \times G_2$, $\Lambda_1$ an approximate lattice in $G_1$,
$\Lambda_2$ a lattice in $G_2$, then $\Lambda_1 \times \Lambda_2$ is an approximate lattice in $G$ whose commensurability class is not strictly dense;
it belongs to the closed subgroup $G_1 \times \Lambda_2$.
\end{example}
We prove the first
{laminar}ity result assuming strict density. Later, in the classical setting at least, we will recognize \exref{nondense} as
the only obstruction to strict density, so that the theorem applies to all approximate lattices.
\begin{thm} \label{discrete1} Let $G$ be an abstractly semisimple locally compact group. Let $\Lambda$ be a discrete approximate
subgroup of $G$, with strictly dense commensurability class. Then there exist a connected semisimple real Lie group $\mathsf{L}$ and a discrete subgroup
$\Gamma \leq G \times \mathsf{L}$ with cut-and-project sets commensurably covering $\Lambda$.
\noindent{{\bf Supplements:}}
\begin{itemize}
\item $\mathsf{L}$ can be taken to be centerless and to have no nontrivial compact normal subgroups.
\item We can take the projection $\Gamma \to \mathsf{L}$ to be injective with dense image, and $\Gamma \to G$ to
be injective.
\item
If $\Lambda$ is an approximate lattice in $G$, then $\Gamma$ is a lattice in $G \times \mathsf{L}$, and the cut-and-project sets of $\Gamma$ are commensurable to $\Lambda$.
\end{itemize}
\end{thm}
\begin{proof}
By \thmref{ag2}, there exists a subgroup $\widetilde{\Lam} \leq \langle \Lambda \rangle$, a Lie group ${L}$ with
$L/Z(L)$ connected,
a normal compact subset $K$ of a rigid abelian subgroup $A \leq {L}$,
and a quasi-homomorphism
\[f: \widetilde{\Lam} \to {L}:K\]
with $f(\widetilde{\Lam})A$ dense in $L$,
such that $f^{-1}(C)$ commensurably covers $\Lambda$
for some compact $C \subset {L}$ with nonempty interior, and in fact is commensurable with $\Lambda$.
We use the commensurability here to conclude that $f ^{-1}(C)$ is discrete in $G$; so $f$ is discrete-on-compact.
By assumption, $cl(\widetilde{\Lam})$ has finite index in $G$; so it is still abstractly semisimple (\remref{assrems} (\ref{assd})). Thus we may assume
$cl(\widetilde{\Lam})=G$; and (i),(ii) of \propref{ctp} hold.
By \propref{ctp}, $L$ has a normal subgroup $R$ such that $L/R$ is semisimple, centerless, without nontrivial normal compact subgroups; and the induced map $\bar{f}: \widetilde{\Lam} \to L/R$ is injective and discrete-on-compact.
Since $A \subset R$, $\bar{f}$
is a homomorphism.
The cut-and-project sets of $L/R$ can only be bigger than those of $L$.
Let $\mathsf{L} = L/R$, and let $\Gamma$ be the graph of the composed map $\bar{f}: \widetilde{\Lam} \to L \to L/R=\mathsf{L}$. Since $\bar{f}$ is
discrete-on-compact, $\Gamma$ is discrete. The projection $\Gamma \to G$ is injective by definition; and since $f(\widetilde{\Lam})A$ is dense in $L$ and $A \subset R$, the projection $\Gamma \to \mathsf{L}$ has dense image.
That the kernel of $\Gamma \to \mathsf{L}$ is trivial follows from injectivity of $\bar{f}$; the kernel on the left is trivial since $\bar{f}$ is a function.
Assume now that $\Lambda$ is an approximate lattice, and let $X$ be a cut-and-project set of $\Gamma$ commensurably covering $\Lambda$. As $\Gamma$ is discrete, it follows that $X$ intersects each compact subset of $G$ in a finite set, so that $X$ is discrete. Now $X^4$ is contained in a cut-and-project set of $\Gamma$, so by the same argument $X^4$ is discrete.
The commensurability of $\Lambda$ and $X$ now follows from \lemref{commens}. The fact that $\Gamma$ is a
lattice, cocompact if $\Lambda$ is, comes from \lemref{latticecoherence}.
\end{proof}
\begin{rem} \label{discrete1r} Assume $G$ has the centralizer chain condition (3) (as is automatic in a linear group).
Then in \thmref{discrete1},
the strict density requirement can be replaced by the weaker conditions (1),(2).
\begin{enumerate}
\item $\Lambda$ is weakly Zariski dense. This refers to the topology on $G$ whose basic closed sets are defined by equations $ax=xb$ or $b x ^{-1} ax = x ^{-1} a x b$, with $a,b \in G$. Notably normalizers $N_G(C_G(X))$ of centralizer groups $C_G(X)$ are $\mathsf{wz}$-closed.
\item (Weak irreducibility) There is no closed subgroup $H$ of $G$
such that $\Lambda \subset N_G(H)$ and $\Lambda / H$ is commensurably covered by an infinite discrete subgroup of $N_G(H)/H$.
\item \label{weaklin} Any centralizer group $C_G(X)$ is the centralizer group $C_G(X_0)$ of a finite subset $X_0$ of $X$.
(Or at least, if $H$ is a closed subgroup and $C_G(x) \cap H$ is open in $H$, then the intersection of conjugates of
$C_G(x) \cap H$ in $H$ is still open in $H$.)
\end{enumerate}\end{rem}
\begin{proof}
Let $\widetilde{\Lam}$ be a subgroup as in the beginning of the proof of \thmref{discrete1}. It suffices to check that $cl(\widetilde{\Lam})$ is
abstractly semisimple (and then proceed with the proof of \thmref{discrete1}.) If $A$ is an abelian subgroup of $G$ normalized by $cl(\widetilde{\Lam})$, then the centralizer group $C_G(C_G(A))$
is also normalized, hence by weak Zariski density it is normal in $G$, and thus by abstract semi-simplicity of $G$ it is trivial;
so $A=1$ also. Now let $a \in cl(\widetilde{\Lam})$ and suppose the conjugacy class of $a$ in $cl(\widetilde{\Lam})$ is discrete. Then
$cl(\widetilde{\Lam}) \cap C_G(a)$ is an open subgroup of $cl(\widetilde{\Lam})$. By (3), $cl(\widetilde{\Lam}) \cap C_G(a^{cl(\widetilde{\Lam})})$
is still an open subgroup $H$ of $cl(\widetilde{\Lam})$. Thus $cl(\widetilde{\Lam}) / H$ is discrete. As $\Lambda$ is commensurably covered by
$cl(\widetilde{\Lam})$, by (2), it follows that $cl(\widetilde{\Lam}) / H$ is finite, i.e. a finite index subgroup of
$cl(\widetilde{\Lam})$ centralizes $a$. By weak Zariski density, a finite index subgroup of $G$ centralizes $a$. So $a^G$ is finite and hence
trivial, and $a=1$.
\end{proof}
\begin{cor} \label{semisimple1}
Let $G$ be a finite product of non-compact, centerless, simple algebraic groups over local fields.
Let $\Lambda$ be an approximate lattice of $G$. Then there exists a decomposition $G=G_1 \times G_2$ such that $\Lambda$ is commensurable to $\Lambda_1 \times {\Lambda}_2'$,
where $\Lambda_1 \leq \langle \Lambda \rangle$ is an approximate lattice of $G_1$ and $\Lambda_2'$ is a lattice of $G_2$, and
there exists a semisimple real Lie group $\mathsf{L}$ and a lattice
$\Gamma \leq G_1 \times \mathsf{L}$ with cut-and-project sets commensurable to $\Lambda_1$. \bigskip
\noindent{\bf Supplements:} \begin{enumerate}
\item If $\Lambda$ is a uniform approximate lattice, then $\Gamma$ is a uniform lattice, as is $\Lambda_2'$.
\item The projections $\Gamma \to G_1, \Gamma \to \mathsf{L}$ can be taken to be injective and a dense embedding, respectively.
\item
The projection of $\Gamma$ to $G_1$ is a subgroup $\widetilde{\Lam}_1$. One can choose $\Gamma$ so that $\widetilde{\Lam}_1$ is minimal, in the sense
that no subgroup of $\widetilde{\Lam}_1$ of infinite index commensurably contains $\Lambda_1$; and so that $\mathsf{L}$ is connected and without normal compact subgroups. In this case,
$\mathsf{L}$ is uniquely determined, and $\Gamma,\widetilde{\Lam}_1$ are determined up to commensurability.
\item There exists no nontrivial discrete subset of $G_1$ normalized by $\Lambda_1$;
in particular, no nontrivial subset normalized by $\Lambda_1$ and commensurably covered by $\Lambda_1$.
\end{enumerate}
\end{cor}
\begin{proof}
Let $\widetilde{\Lam}, \mathsf{L}$ and $f: \widetilde{\Lam} \to \mathsf{L}:K$ be as given by \thmref{ag2}, applied to $\Lambda \subset \langle \Lambda \rangle$. We may at this point replace $\Lambda$ by the commensurable $\Lambda^2 \cap \widetilde{\Lam}$, and so assume $\Lambda \leq \widetilde{\Lam}$.
If $cl(\widetilde{\Lam})$ is itself abstractly semisimple,
we proceed as in \thmref{discrete1},
obtaining $\mathsf{L}$ and a discrete $\Gamma \leq G \times \mathsf{L}$ with a cut-and-project set $X$ commensurably covering $\Lambda$.
We repeat the argument that $\Gamma$ is a lattice (in $G \times \mathsf{L}$ and not only in $cl(\widetilde{\Lam}) \times \mathsf{L}$):
since $\Gamma$ is discrete, it follows that $X$ intersects each compact of $G$ in a finite set, so that $X$ is discrete. Now $X^4$ is contained in a cut-and-project set of $\Gamma$, so by the same argument $X^4$ is discrete.
The commensurability of $\Lambda$ and $X$ now follows from \lemref{commens}. The fact that $\Gamma$ is a
lattice, cocompact if $\Lambda$ is, comes from \lemref{latticecoherence}.
The statements on injectivity and density are as in
\thmref{discrete1}. Supplement (3) follows from \propref{sensible}.
We thus need to handle the case where $cl(\widetilde{\Lam})$ is not abstractly semisimple. In any case, $\Lambda$ and thus $cl(\widetilde{\Lam})$
are Zariski dense by \propref{boreldensity}. Let $A$ be an abelian subgroup of $G$ normalized by $\Lambda$. To show $A$ is trivial it suffices to show the triviality of every projection to one of the simple algebraic factors of $G$. Let $\pi_i:G \to G_i$ be such a projection; then $\pi_i(A)$ is abelian and normalized by $\pi_i(\Lambda)$; hence the double centralizer of $\pi_i(A)$, a Zariski closed group, is normalized by $\pi_i(\Lambda)$; so by Zariski density of the latter, it is trivial.
In a similar way one can show that a centralizer subgroup of $G$ normalized by $\Lambda$ is normal.
Indeed let $C=C_G(Y)$; both $C$ and $N_G(C)$ are defined by coordinatewise conditions, so that if $\pi_i(C)$ is normal in $G_i$ for each $i$,
then $C$ is normal in $G$. Now $\pi_i(C)$ is a centralizer subgroup of $G_i$, hence Zariski closed; and thus so is
$N_{G_i}(\pi_i(C))$; since the latter contains the Zariski dense set $\pi_i(\Lambda)$, it must equal $G_i$.
The remaining issue is condition (ii) of abstract semisimplicity, forbidding discrete conjugacy classes.
Let $D$ be the set of all elements $a \in cl(\widetilde{\Lam})$ with $a^{cl(\widetilde{\Lam})}$ discrete in $G$; and let $G_1 = C_G(D)$
be the centralizer.
$G_1$ is clearly normalized by $cl(\widetilde{\Lam})$ and hence by $\Lambda$, so by the above, it is a closed normal subgroup of $G$.
Let $G_2=C_G(G_1)$; as $G$ is a product of simple groups, it is clear that $G_1$ is a product of some of them, and
$G_2$ of the others. We have $G \cong G_1 \times G_2$ naturally; $G_2=C_G(G_1) $ contains $D$ by definition.
By linearity of $G$, $G_1= \cap_{a \in D} C_G(a)$ is a finite intersection, i.e. equals $C_G(D_0)$ for some finite $D_0 \subset D$.
For $a \in D_0$, let $U$ be a neighborhood of $a$ with $U \cap a^{cl(\widetilde{\Lam})} =\{a\}$; then
$C_G(a) \cap {cl(\widetilde{\Lam})} = \{g \in cl(\widetilde{\Lam}): a^g \in U \}$, so it forms an open
subgroup of $cl(\widetilde{\Lam})$; and hence $G_1 \cap cl(\widetilde{\Lam})$ is open in $cl(\widetilde{\Lam})$.
It follows that $cl(\widetilde{\Lam}) / (G_1 \cap cl(\widetilde{\Lam})) \cong cl(\widetilde{\Lam}) G_1 / G_1$ is discrete as a topological space, and so also as
a subgroup of $G/G_1 \cong G_2$.
The image $\pi_2(cl(\widetilde{\Lam})) = \pi_2(\widetilde{\Lam})$ is thus a discrete, closed subgroup of $G_2$.
A fortiori, $\pi_2(\Lambda)$ is discrete in $G_2$.
By \propref{productdec},
$\Lambda_1 = \Lambda^2 \cap G_1$ is an approximate lattice in $G_1$; $\Lambda_2=\Lambda^2 \cap G_2 $
is an approximate lattice in $G_2$; and $\Lambda_1 \times \Lambda_2$ is commensurable with $\Lambda$. Since
$\pi_2(\widetilde{\Lam})$ is a discrete subgroup containing $\Lambda_2$, it is commensurable with $\Lambda_2$ by \lemref{commens};
and so $\Lambda_1 \times \pi_2(\widetilde{\Lam})$ is also commensurable with $\Lambda$; the second factor $\pi_2(\widetilde{\Lam})$ is a lattice in $G_2$.
Let $\Lambda_2' = \pi_2(\widetilde{\Lam})$.
Now $G_1$ has no discrete conjugacy classes and so is abstractly semisimple.
$f(\Lambda_2)A$ is a normal subgroup, and contained in a compact set (since $f(\Lambda)$ is) so trivial.
Since $\Lambda_1 \times \Lambda_2$ is commensurable with $\Lambda$, $cl(f(\Lambda_1)A) = cl(f(\Lambda_1 \times \Lambda_2)A)$
has finite index in $\mathsf{L}$; as $\mathsf{L}$ is connected, $cl(f(\Lambda_1)A)=\mathsf{L}$.
We are now in the situation of the first paragraph with respect to $G_1$.
If we repeat the proof with respect to $G'=G_1$, $\Lambda'=\Lambda_1$, we obtain a new $D$ (call it $D'$);
it is not clear that $D' \subset D$, since the discreteness of $a^{cl(\langle \Lambda' \rangle)}$ for $a \in G_1$ does not obviously
imply discreteness of $a^{cl(\widetilde{\Lam})}$; while $\Lambda'$ is contained in $\Lambda^2$ and commensurable to $\Lambda$,
$\langle \Lambda' \rangle$ is contained in $\widetilde{\Lam}$ but may not have finite index in it. Thus on the face of it the new $G_1$ may be smaller.
However if we assume:
\\ (*) no subgroup of infinite index in $\widetilde{\Lam}$ commensurably contains $\Lambda$,
then $\langle \Lambda_1 \times \Lambda_2 \rangle $ must be commensurable with $\langle \Lambda \rangle$. In this case discreteness of $a^{cl(\langle \Lambda' \rangle)}$ does imply
discreteness of $a^{cl(\widetilde{\Lam})}$, so $D' \subset D$ and $G_1'=G_1$.
Now upon one iteration, we gain (3) and do not lose it again. Thus after at most two iterations the process will settle down,
and the $G_1$ obtained at that point will satisfy (3).
For the same reason, it suffices to prove supplement (4) under the additional assumption (*) that no subgroup of infinite index
in $\widetilde{\Lam}$ commensurably contains $\Lambda$.
Suppose $S$ is a discrete subset of $G_1$, normalized by $\Lambda_1$, and
$1 \neq a \in S$. Let $\Lambda' = \Lambda_1 \times \Lambda_2$. Since $\Lambda_1$ normalizes $S$, as does $\Lambda_2$,
so does $\Lambda'$, and hence the group $\widetilde{\Lam}'$ generated by $\Lambda'$ normalizes $S$.
Now $S$ is closed in $G$, so $cl(\widetilde{\Lam} ')$ normalizes $S$ too. By (*), $\widetilde{\Lam} ' $ has finite index in $\widetilde{\Lam}$. Hence
$cl(\widetilde{\Lam}')$ has finite index in $cl(\widetilde{\Lam})$. Thus $a^{cl(\widetilde{\Lam})}$ is discrete. In the notation within the proof, we have $a \in D$
and hence
$G_1$ commutes with $a$. But $a \in G_1$ so $a$ also commutes with $G_2$, and hence is central in $G_1 \times G_2=G$,
contradicting our assumptions. This proves (4) under (*), and hence as noted above in general.
\end{proof}
This solves Problem 1 of \cite{BjH} for semisimple algebraic groups over local fields, and their finite products.
\begin{center}{*} \end{center}
We still owe a technical lemma. When applied to lattices in (real) Lie groups, it is a classical statement
going back to Zassenhaus and Auslander on the interaction of a lattice with the solvable radical. For us
the main import will be an ability to convert approximate homomorphisms with error in a basic abelian group, to actual homomorphisms into a quotient Lie group.
\begin{prop} \label{ctp} \label{7.15} Let $L$ be a Lie group, with $L/Z(L)$ connected, containing a normal closed
abelian subgroup $A \trianglelefteq L$, and a compact, normal, symmetric subset $K \subset A$.
Assume $A=VD$, where $V \cong \Rr^n$ is closed and normal in $L$, and $D$ is closed and central in $L$.
Further assume that either $V$ is rigid, or $K=(1)$.
Let $G$ be a second countable locally compact group, with a subgroup ${H}$
satisfying:
\begin{enumerate}[label=(\roman*)]
\item $G$ has no nontrivial abelian subgroups normalized by ${H}$.
\item $H$ has no conjugacy classes that are discrete in $G$, other than (1).
\end{enumerate}
Let $f: {H} \to L:K$ be a discrete-on-compact quasi-homomorphism, with $f({H})A$ dense in $L$.
Then $f$ induces an injective, discrete-on-compact homomorphism $\ff: {H} \to L/R$, where $R$ is a closed normal solvable subgroup of $L$, and $L/R$ is semisimple and centerless, and without nontrivial normal compact subgroups.
\end{prop}
\begin{proof}
We first show:
\claim{1} $f ^{-1}(A) = 1$.
\begin{proof}
If $A$ is rigid, let $g \in f ^{-1}(A)$, and let $g^{{H}}$ be the set of ${H}$-conjugates of
$g$. Then
$f(g^{{H}}) \subset f(g)^{L} K^3$;
since $A$ is rigid, $f(g)^{L}$ and hence also $f(g)^{L} K^3$ are precompact.
So $g^{{H}} \subset f ^{-1} f(g^{{H}}) $ is discrete (in particular closed) in $G$. By (ii) we have $g=1$, showing that $f ^{-1} (A) =1$.
Now assume instead of rigidity that $K=1$. Then $f$ is a homomorphism. Hence $f ^{-1}(1)$ is a normal subgroup
of $H$, discrete in $G$, and it follows directly from (ii) that $f ^{-1}(1)=1$. Thus $f$ is an injective homomorphism; so
$f ^{-1}(A)$ is an abelian subgroup of $G$. It is normalized by $H$, hence trivial by (i).
\end{proof}
It is only in the above proof of Claim 1 that we use the rigidity of all of $A$, rather than just $K$.
\claim{2} There exists a closed, normal, solvable subgroup $I$ of $L$ containing $A$ such that
the induced homomorphism
$f_1: {H} \to L/I$ is discrete-on-compact.
\begin{proof}
Any Lie group has a compact neighborhood $U$ of $1$, satisfying (*) $[U,U] \subset U$; if $\mathfrak{u}$ is any compact neighborhood of $0$ in the Lie algebra, then for small enough $t>0$, $exp(t \mathfrak{u})$ will do. Moreover,
we can arrange topological nilpotence (**): letting $U_1=U, U_{n+1} =[U,U_n]$, we have $\cap_n U_n = (1)$.
It follows that every locally compact group $G$ has a neighborhood satisfying (*): by Yamabe's theorem, $G$ has an open subgroup $S$ containing a compact normal subgroup $N$, with $S/N$ a Lie group; pull back from $S/N$ any neighborhood with this property.
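As a numerical aside (a toy check in $2\times 2$ real matrices, illustration only, not used in the proof), the mechanism behind (*) and (**) is that commutators of elements near the identity of a linear group are quadratically closer to it, so the iterates $U_{n+1}=[U,U_n]$ of a small enough neighborhood shrink to the identity:

```python
# Toy numerical check (illustration only): for a linear group, commutators of
# elements near the identity I are quadratically close to I, so a small ball U
# satisfies [U,U] subset U, and the iterates U_{n+1} = [U, U_n] shrink to {I}.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def commutator(g, h):
    # [g, h] = g h g^{-1} h^{-1}
    return mat_mul(mat_mul(g, h), mat_mul(mat_inv(g), mat_inv(h)))

def dist_to_id(g):
    # max-entry distance from the identity matrix
    return max(abs(g[i][j] - (1.0 if i == j else 0.0))
               for i in range(2) for j in range(2))

eps = 1e-2
g = [[1 + eps, eps], [0.0, 1.0]]
h = [[1.0, 0.0], [eps, 1 - eps]]

# [g, h] is much closer to the identity than g and h themselves
assert dist_to_id(commutator(g, h)) < 0.1 * min(dist_to_id(g), dist_to_id(h))

# iterating x -> [g, x] drives x to the identity: this is (**)
x = h
for _ in range(5):
    x = commutator(g, x)
assert dist_to_id(x) < 1e-8
```

The quadratic contraction here corresponds to the Lie-algebra estimate $\|[X,Y]\| \lesssim \|X\|\,\|Y\|$ behind the choice $U = \exp(t\,\mathfrak{u})$ above.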
Fix a compact neighborhood $W$ of $1$ in $G$ with $[W,W] \subset W$, and a compact neighborhood $U$ of $1$ in $L$ with (*) and (**). Writing $\bar{U_n}$ for the image of $U_n$ modulo $A$, we can also ensure that $\cap_n \bar{U_n} = (1)$.
By assumption, $A= V D$ is a product of a vector group $V$, normal
in ${L}$, with a closed group $D$ central in ${L}$.
Fix a Euclidean inner product on $V \cong \Rr^n$; let $B_r(V)$ be the ball around $0$ of radius $r$; normalize it so that
$K^3 \subset B_1(V) \times D$.
Also fix a compact $B_1(D) \subset D$
such that $K^3 \subset B_1(V) \times B_1(D)$. Let $B_r (D) = B_1(D)^{ \lfloor r \rfloor }$, and let
$A_r = B_r(V) B_r(D)$.
The inner product induces an operator norm on $End(V)$. For $g,x \in {L}$, we will write $x{^g} $ for $g x g ^{-1}$.
Let $ad: L \to End(A)^{op}$ be the homomorphism mapping $g$ to the endomorphism $x \mapsto x{^g}$ of $A$.
Write $||x||$ for the operator norm of $ad(x)-1$ when restricted to $V$. This is a continuous function on ${L}$.
We can shrink $U$ so as to satisfy:
\begin{enumerate} \setcounter{enumi}{1}
\item \hspace{20pt} $||g|| < 1/10$ for $g \in U^2$.
\end{enumerate}
In fact, $ad([x,y]) = [ad(x),ad(y)]$ so it suffices to pull back a neighborhood $\Omega$ of $1$ in $GL_n(V)$
with the analogous property, and such that $[\Omega,\Omega] \subseteq \Omega$ (so as not to lose this property for $U$.)
\claim{2.1} Let $n \geq 1, r \geq 2$. We have \[ [U A_r, U_n A_r ] \subset U_{n+1} A_{r/4} . \]
\begin{proof}
Consider elements
$\gamma_i = u_i a_i$, $u_i \in {U}, a_i \in A$. Assume $u_1 \in U$, $u_2 \in {U}_n$, and $a_i \in A_r$, $r \geq 2$.
We compute
\[ [u_1a_1,u_2a_2] = [u_1,u_2] ( [u_2 ^{-1},a_1] [a_1,a_2] [a_2,u_1 ^{-1}] )^{u_2u_1} \]
since $[a_1,a_2]=1$,
\[ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\gamma_1,\gamma_2] = [u_1,u_2] ad_{u_2u_1} ( ^{u_2 ^{-1} -1} a_1 + ^{1-u_1 ^{-1}} a_2) \]
where multiplication inside the abelian group $A$ is written additively.
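In more detail, the displayed identity is a routine verification (with $x^g = gxg^{-1}$ as above, and the $A$-components written additively):

```latex
\[
\gamma_1\gamma_2 = u_1u_2\,\bigl(a_1^{u_2^{-1}} + a_2\bigr), \qquad
\gamma_2\gamma_1 = u_2u_1\,\bigl(a_2^{u_1^{-1}} + a_1\bigr),
\]
\[
[\gamma_1,\gamma_2] = \gamma_1\gamma_2(\gamma_2\gamma_1)^{-1}
= u_1u_2\, b \,u_1^{-1}u_2^{-1}
= [u_1,u_2]\, b^{u_2u_1},
\qquad
b = {}^{(u_2^{-1}-1)}a_1 + {}^{(1-u_1^{-1})}a_2,
\]
```

using $u_1u_2 = [u_1,u_2]\,u_2u_1$ in the last step.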
If $u_1 \in {U}$ and $u_2 \in {U}_n$ then $[u_1,u_2] \in {U}_{n+1}$.
And if $a_1,a_2 \in B_r(V)+D$, then $ ^{u_2 ^{-1} -1} a_1$ and
$ ^{1-u_1 ^{-1}} a_2$ lie in $A_{r/10}$ by (2), so $( ^{u_2 ^{-1} -1} a_1 + ^{1-u_1 ^{-1}} a_2) \in A_{r/5}$,
and again by (2), $ad( u_2 u_1 ) ( ^{u_2 ^{-1} -1} a_1 + ^{1-u_1 ^{-1}} a_2) \in A_{r/4}$.
\end{proof}
(Actually since $D$ is central we have the stronger statement $[U B_r(V)D, U_n B_r(V) D] \subset U_{n+1} B_{r/4}(V)$.)
\claim{2.2} For $r \geq 2$, the set $\Delta_r = U A_r \cap f(W \cap {H})$ is finite.
\begin{proof}
Since $f$ is discrete-on-compact, and $UA_r $ is compact, $f ^{-1} (UA_r)$ is discrete, so $W \cap f ^{-1} (UA_r)$
is finite. Applying $f$, we see that $f(W \cap {H}) \cap UA_r$ is finite.
\end{proof}
\claim{2.3} For any $r \geq 2$, $\Delta_r$ generates a nilpotent group modulo $A$.
\begin{proof} Let $v_1,v_2 \in \Delta_r$. So for some $u_1,u_2 \in U$, $a_1,a_2 \in A_r$, $w_i \in W \cap {H}$,
we have $v_i = u_ia_i= f(w_i)$.
By the proof of Claim 2.1, $[v_1,v_2] =[u_1a_1,u_2a_2] = [u_1,u_2] a$ with $a \in A_{r/4}$.
On the other hand,
$ f([w_1,w_2]) = [f(w_1),f(w_2)] k $ for some $k \in K^3 \subset A_1$; so
$[v_1,v_2] = [f(w_1),f(w_2)] = f([w_1,w_2]) k ^{-1}$. Hence $[v_1,v_2] k \in f(W \cap {H}) \cap UA_{r/4+1}$.
Since $r \geq 2$ this shows that $[v_1,v_2] k \in \Delta_r$.
Let $\mathbf{\Delta_r} = \Delta_r/A$ be the image of the finite set $\Delta_r$ in $L/A$. We have just shown that $\mathbf{\Delta_r}$
is closed under the commutator. Let $F_1 = \mathbf{\Delta_r}$ and $F_{n+1} =[F_1,F_{n}]$. Then $F_1 \supset F_2 \supset \cdots$ and the sequence must stabilize at some $F_k$ with $[F_1,F_k] = F_k$. Since $F_k \subset {\bar{U}}_1$, and $[{\bar{U}}_1,{\bar{U}}_n] \subset {\bar{U}}_{n+1}$,
it follows inductively that $F_k \subset {\bar{U}}_n$ for each $n$. Thus $F_k \subset \cap_n {\bar{U}}_n = (1)$. This proves
that $\mathbf{\Delta_r} $ generates a nilpotent group.
\end{proof}
Write ${\Gamma}$ for the graph of $f$, and (with slight abuse of notation)
\[{\Gamma A} := \Gamma (1 \times A) = \{(w, f(w) a): \ w \in {H} , a \in A \} \]
While ${\Gamma}$ may not be a subgroup of $G \times L$, since $f$ is an $A$-quasi-homomorphism,
${\Gamma A}$ is a subgroup of $G \times L$; hence so is the closure $cl( {\Gamma A})$.
Let $\pi_2: G \times L \to L$ be the second projection. Define
\[ C := cl( {\Gamma A} )^0, \]
the identity component of $cl({\Gamma A})$. As $A$ is normal, $C$
is normalized by ${\Gamma}$.
By a theorem of Zassenhaus (see \cite{dixon}), all soluble subgroups $S$ of the linear group ${L}/ Z({L})$
are soluble of some fixed degree; hence the same holds for $L$. Call this bound ${\bb}={\bb}(L)$.
\footnote{The theorem is easy to prove for characteristic zero linear groups, such as $L/Z(L)$: by passing to Zariski closure it suffices to consider Zariski closed subgroups $H$; the solvable degree of the connected part $H^0$ is clearly bounded by the dimension; while Jordan's theorem presents the finite quotient $H/H^0$ as an extension of a group of bounded order by a normal abelian subgroup.}
\footnote{In fact we can show here, using connectedness of $C^0$, that $\pi_2(C)/A$ is nilpotent.}
\claim{2.4} ${\Gamma A} \cap ( W \times U) $ generates a ${\bb}$-step soluble subgroup of $G \times L $.
\begin{proof} An element of ${\Gamma A}$ has the form $(w, f(w) a)$ with $w \in {H} $. Thus an element of
${\Gamma A} \cap ( W \times U) $ has the form $(w, f(w) a)$ with $w \in {H} \cap W$ and $f(w) a \in U$.
Letting $\pi_2$ denote the second projection $G \times L \to L$, we have
$\pi_2({\Gamma A} \cap ( W \times U) ) \subset (U \cap f(W \cap {H}) A)$. For a finite $F \subset {\Gamma A} \cap ( W \times U)$,
$\pi_2(F) \subset U \cap f(W \cap {H}) A_r$ for some $r \geq 2$. Now
\[ \pi_2 (F) / A \subset (U \cap f(W \cap {H}) A_r ) / A = (UA_r \cap f(W \cap {H}) ) / A = \mathbf{\Delta_r}.\]
So $\pi_2 (F) / A$ generates a nilpotent group, by Claim 2.3. Hence $\pi_2(F)$ generates a soluble group. By definition of ${\bb}$,
it is ${\bb}$-step soluble.
Let $z$ be an element of the
${\bb}$-th derived group of $\langle F \rangle$. Write $z = (x,f(x)a)$ with $x \in {H}$.
Then $\pi_2(z) = f(x)a \in A$ so $f(x) \in A$ and hence $x=1$ by Claim 1. Thus $\langle F \rangle$ is ${\bb}$-step soluble.
Since this holds for all finite $F \subset {\Gamma A} \cap ( W \times U)$, the group
$\langle {\Gamma A} \cap ( W \times U) \rangle$ is ${\bb}$-solvable.
\end{proof}
It follows that $cl( {\Gamma A} \cap ( W \times U) )$ still generates a ${\bb}$-solvable subgroup $S$ of $G \times L$.
Now $ cl( {\Gamma A}) \cap (W \times U) = cl ( {\Gamma A} \cap (W \times U)) $, so $S$ is generated by an open subset of
$cl({\Gamma A})$; hence $S$ is an open subgroup of $cl({\Gamma A})$. In particular, $C = cl( {\Gamma A})^0 =S^0 \leq S$ and so $C$ is solvable.
In particular the first projection $\pi_1(C)$ is soluble. Since $C$ is normalized by ${\Gamma A}$, $\pi_1(C)$
is normalized by ${H}$. By assumption (i), $\pi_1(C) = (1)$. Thus $C = (1) \times I$ for some $I$, which is clearly
a closed subgroup of $L$. $I$ is normalized by $\pi_2({\Gamma A}) = f({H})A$, which is dense in $L$ by assumption. So $I$ is normal
in $L$; and $I$ is soluble.
\begin{center}{*} \end{center}
It remains only to prove that the induced homomorphism $f_1: {H} \to L/I$ is discrete-on-compact.
By Claim 1, $f ^{-1}(I)$ is soluble, and is normalized by ${H}$; hence by assumption (i), $f ^{-1}(I)=(1)$ and so $f_1$ is
injective. So we are in a position to use \lemref{doc} (3); we thus need to show, for a single compact neighborhood $W' \subset G$, that $f_1(W' \cap {H})$ is discrete in $L/I$.
Let us write also ${\Gamma} I$ for ${\Gamma} (1 \times I)$.
Since $cl({\Gamma A})$ contains $cl({\Gamma})$ and $ (1 \times I)$ and is closed, while on the other hand $A \subset I$, we
have $cl({\Gamma A}) = cl({\Gamma}I)$.
We saw that $cl({\Gamma A})^0= 1 \times I$. The group $cl({\Gamma A})/(1 \times I)$
is thus totally disconnected. So it has a profinite open subgroup, that we can write as $P' / (1 \times I)$, with $P'$ an open subgroup of $cl({\Gamma A})$.
The image of $P'$ in $L/I$, namely $\pi_2(P') / I$, is also profinite, in particular totally disconnected; but by the von Neumann--Cartan theorem, being a closed subgroup of $L/I$, it is itself a Lie group.
Thus $\pi_2(P')/I$ is finite; so $ \pi_2 ^{-1}(I) \cap P'$ has finite index in $P'$; being closed, it is an open subgroup,
hence open in $cl({\Gamma A})$ too. Let $P=\{a: (a,1) \in P' \}$; then $P \times I$ is a clopen subgroup of $cl({\Gamma} I)$.
Let us now show that $f_1(W' \cap {H})$ is discrete. Otherwise, some sequence of distinct points of $f_1(W' \cap {H})$ has a limit point
in $L/I$; and it follows that the $f_1$-images of some sequence of distinct points of $(W')(W') ^{-1} \cap {H}$ converge to $1$ in $L/I$.
Refining further, using compactness of $W'$ and of $W''=(W')(W') ^{-1}$, we find
a sequence $(a_i)$ of distinct elements of ${H} \cap W''$, converging to some $a \in W''$, such that $f_1(a_i) \to 1$.
Let $U'$ be a precompact open neighborhood of $1$ in $L$; then $U' I / I$ is an open subset of $L/I$ containing the identity;
so $f(a_i) \in U' I$ for almost all $i$; write $f(a_i) = u_i c_i$, with $c_i \in I$ and $u_i \in U'$. We have $(a_i, u_i c_i) \in \Gamma$,
so $(a_i,u_i ) \in {\Gamma} I$. Since $U'$ is precompact, we may assume $u_i \to u \in cl(U')$. Then $(a,u) \in cl({\Gamma}I)$.
Since $P \times I$ is open in $cl({\Gamma} I)$, almost all pairs $(a_i,u_i)$ lie
in a single coset of $P \times I$. In particular, the $u_i$ lie eventually in a single coset
of $I$; this implies that modulo $I$ they are finite in number. This contradicts the assumption of distinctness of the
elements $f_1(a_i) = f(a_i) I = u_i I$, and
ends the proof of Claim 2.
\end{proof}
\begin{center}{*} \end{center}
We return to the proof of the Proposition.
The composed map ${H} \to L \to L/A$ is a group homomorphism, injective by Claim 1. Since $I/A$ is solvable, $f ^{-1}(I) $ is
solvable. By (i) we have:
$ f ^{-1}(I)=1 $. Let $A_1=I$ and let $f_1: {H} \to L/A_1$ be the composition ${H} \to L \to L/A_1$. So $f_1$ is injective. By Claim 2, $f_1$ is discrete-on-compact.
We have now achieved the main goal, of replacing the quasi-homomorphism $f$ by an actual homomorphism $f_1$,
still discrete-on-compact. Moreover $f_1$ is injective. We still wish to change it further so that the target Lie group becomes semisimple
and centerless. To this end we repeat the procedure inductively, constructing a sequence $A_1 \leq A_2 \leq \cdots $
of closed normal subgroups of $L$, such that $f$ induces a discrete-on-compact, injective homomorphism $f_k: {H} \to L/A_k$.
Assume $A_k,f_k$ have been defined and satisfy these conditions. We proceed as follows.
In case $L/A_k$ has a nontrivial center, let $A_{k+1}/A_k$ be the center of $L/A_k$.
Then the composed homomorphism $f_{k+1}: {H} \to L/A_{k+1}$ is discrete-on-compact by Claim (2),
applied to $f_k$.
The kernel of $f_{k+1}$ is an abelian normal subgroup of ${H}$, hence trivial by assumption (i).
Assume then that $L/A_k$ has trivial center.
If $L/A_k$ has a nontrivial normal compact subgroup, choose $A_{k+1}$ so that $A_{k+1}/A_k$ is such a subgroup. Then $f_{k+1} :{H} \to L/A_{k+1}$
remains discrete-on-compact trivially, since the pullback of a compact set from $L/A_{k+1}$ to $L/A_k$ is compact.
The kernel, being the pullback of a compact set, is discrete in $G$, and normal in $H$, hence must be trivial by (ii). (Note that
$f_k$ is an injective homomorphism at this point, i.e. $K=1$.)
Assume therefore that $L/A_k$ has no such normal subgroups.
If $L/A_k$ has a nontrivial normal closed connected abelian subgroup, let $A'/A_k$ be such a subgroup.
Then $A'/A_k$
is a connected locally compact abelian group so it is isomorphic to $\Rr^n \times B$ with $B$ a maximal compact subgroup.
If $B'$ is a conjugate of $B$ in $L/A_k$, then $BB'$ is also compact, so $BB'=B$. It follows that $B$ is normal in $L/A_k$,
so it would have been chosen in the previous step, unless $B=1$. Thus $B=1$, so $A'/A_k$ is a vector group.
In this case the hypotheses of the Proposition apply to $G,{H}$ with $f_k: {H} \to L/A_k$
and $A'/A_k$ in place of $f$ and $A$.
By Claim 2 (now with $K=1$), there exists a closed, solvable normal subgroup $A_{k+1}$ of $L$ containing $A'$,
such that the induced map $f_{k+1}: {H} \to L/A_{k+1}$ is discrete-on-compact. Again the kernel of $f_{k+1}$ is soluble and so trivial by assumption (i).
Assume $L/A_k$ has no normal subgroups of the types considered so far. If $L/A_k$ has a nontrivial normal closed abelian subgroup $A'$, the connected component of $A'$ is trivial. Since $A'$ is a Lie group, it is discrete, and so central, returning to a previously considered case.
The remaining possibility is that $L/A_k$ has no nontrivial closed normal subgroups that are either abelian or compact. When this
occurs the construction terminates.
Note that $\dim(L/A_{k}) \geq \dim(L/A_{k+1})$; equality holds only in case $A_{k+1}/A_k$
is discrete; this cannot recur twice in a row, since if $A_{k+2}/A_{k+1}$ and $A_{k+1}/A_k$ are both
discrete then so is $A_{k+2}/A_k$, and it is central in $L/A_k$ by connectedness, so $A_{k+2}$ would have been chosen
already at stage $k$. Thus $\dim(L/A_k)$ descends and strictly so at every other step, so the construction proceeds at most $2 \dim(L)$ times.
If $A_k$ is the last stage, then $f_k: {H} \to L/A_k$ is a discrete-on-compact homomorphism to a centerless Lie group, with no normal abelian subgroups, hence semisimple, and with no compact normal subgroups.
\end{proof}
When $\Gamma$ is a discrete subgroup of a connected Lie group $G$, Claim 2 of
\propref{ctp} is the ``Zassenhaus Lemma'' of \cite{auslander}.
Curiously, though Auslander explicitly invokes an effective version of (**),
his proof appears to make no use of it; he uses rather the exponential shrinking within $A$ due to (2).
In the approximate situation, (2) cannot be used in this way since
upon iterated approximate commutation, the $A$-component remains bounded but need not
approach $0$. We used instead, after some rearrangement, the convergence of the elements $u_i$ towards $1$.
\newpage
\section{Arithmeticity, recognition, extensions} \label{sec8}
Let $K$ be a global field.
Let $\Aa=\Aa_K$ be the adele ring of $K$. If $S$ is a
finite set of places of $K$, we may write
$\Aa = {\Aa_S} \times \Aa'_S$, where ${\Aa_S}= \Pi_{v \in S} K_v$, and $\Aa'_S$ is the restricted direct product
of the $K_v, v \notin S$. Let $\pi_S: \Aa \to {\Aa_S}$ be the projection.
\begin{defn}[Arithmetic approximate lattice] \label{arithmodelset} Let $K$ be a global field, $H$
a connected, almost simple algebraic $K$-subgroup of $SL_n$.
Let $C$ be a compact open neighborhood of $1$ in $H(\Aa'_S)$, and
\[ \Lambda = \pi_S( H(K) \cap ({\Aa_S} \times C) ). \]
If $\psi: G \to H({\Aa_S})$ is a topological group isomorphism,
we refer to $\psi ^{-1}(\Lambda)$ as an {\em arithmetic approximate lattice}.
An {\em arithmetic Meyer set} (in $H({\Aa_S})$) is a subset of an arithmetic approximate lattice, commensurable to it. \end{defn}
\begin{rem} If one demands that $S$ contain all archimedean primes $v$ where
$H(K_v)$ is not compact, one may (without changing the commensurability class) choose $C$ to be a subgroup;
in that case one obtains the definition of an arithmetic lattice. (\cite{margulis} p.1)\end{rem}
\begin{rems} \label{8.2}
\begin{enumerate}
\item If interested only in the approximate subgroup $\Lambda$, one can take a finite set $S''= S \sqcup S'$; let $\Oo_{K,S''}$ be the ring of $S''$-integers in $K$; let $C$ be a compact neighborhood of $1$ in $H(\Aa_{S'})$
and let $\Lambda = \pi_S(H(\Oo_{K,S''}) \cap ({\Aa_S} \times C))$. One can take $S'$ to consist just of archimedean primes, if desired.
Since $K$ embeds into ${\Aa_S}$, and also into $\Aa_{S'}$, we have a map $f: H(\Oo_{K,S''}) \to H(\Aa_{S'})$,
showing incidentally that $\Lambda$ is laminar.
\item On the other hand, consider a pair $(\Lambda,\widetilde{\Lam})$, where
$\widetilde{\Lam}$ is any subgroup of $G$ commensurating $\Lambda$ and containing $\Lambda$. If $(G,\Lambda)$ is arithmetic,
laminarity is also shown by a homomorphism $f: \widetilde{\Lam} \to H(\Aa_K)$, with $ \Lambda \in \chi(f)$,
and $f(\widetilde{\Lam}) \leq H(K)$.
This is because
$\widetilde{\Lam} \leq H(K)$ (\lemref{A18}), and by \defref{arithmodelset}, $\Lambda = j ^{-1}(C)$ where $j: \widetilde{\Lam} \to H(K)$ is the inclusion.
For this, if $\widetilde{\Lam}$ is not finitely generated, a finite product would not suffice.
Note here that if $C$ is chosen to contain $H(\Oo_{K_v})$ whenever $v$ is non-archimedean, then $f(\widetilde{\Lam})$
contains $H(\Oo_K)$.
Take the Weil restriction of scalars $H'$ of $H$ from $\Oo_K$ to $\Zz$. \footnote{$\Oo_K$ is isomorphic to $\Zz^n$
as an abelian group, and so a choice of basis gives a polynomial interpretation of the ring $\Oo_K$ over the ring $\Zz$,
with universe $\Zz^n$. Any affine scheme $W \subset \Aa^N$ over $\Oo_K$ becomes interpreted as $W^* \subset \Aa^{Nn}$
over $\Zz$, in such a way that $W(\Oo_K)$ can be identified with $W^*(\Zz)$.} We obtain a homomorphism $h: \widetilde{\Lam} \to H'(\Aa_{\Qq})$
with $ \Lambda \in \chi(h) $, and $H'(\Zz) \leq h(\widetilde{\Lam}) \leq H'(\Qq)$.
\end{enumerate}\end{rems}
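To make the cut-and-project mechanism of \remref{8.2}(1) concrete, here is a toy computation, one dimension down and abelian (so an illustration of the mechanism only, not an instance of \defref{arithmodelset}, which concerns almost simple groups): the classical Meyer set obtained from $\Oo_K=\Zz[\sqrt 2]$, $K=\Qq(\sqrt 2)$, by keeping the points whose Galois conjugate lies in a window.

```python
import math

# Toy abelian cut-and-project set (illustration only): K = Q(sqrt(2)),
# O_K = Z[sqrt(2)], embedded in R x R by the two real places
#     a + b*sqrt(2)  |->  (a + b*sqrt(2), a - b*sqrt(2)).
# Keeping the points whose second (Galois-conjugate) coordinate lies in the
# window C = [-1, 1] and projecting to the first coordinate gives a Meyer set:
# uniformly discrete, relatively dense, and a sum of two points lands in the
# set cut out by the doubled window -- the approximate-subgroup phenomenon.

SQRT2 = math.sqrt(2)

def window(a, b, r=1.0):
    """Does the conjugate a - b*sqrt(2) lie in the window [-r, r]?"""
    return abs(a - b * SQRT2) <= r

# enumerate the part of the Meyer set coming from |a|, |b| <= N
N = 200
Lam = sorted(a + b * SQRT2
             for a in range(-N, N + 1)
             for b in range(-N, N + 1) if window(a, b))

# window doubling: a sum of two points has conjugate of absolute value <= 2
pairs = [(a, b) for a in range(-5, 6) for b in range(-5, 6) if window(a, b)]
for (a1, b1) in pairs:
    for (a2, b2) in pairs:
        assert window(a1 + a2, b1 + b2, r=2.0)

# uniform discreteness (gap >= 1/2, since |a^2 - 2b^2| >= 1 for (a,b) != 0)
# and relative denseness on the enumerated range
gaps = [y - x for x, y in zip(Lam, Lam[1:])]
assert min(gaps) > 0.4
assert max(gaps) < 3.0
```

In \defref{arithmodelset} the same pattern appears with $H(K)$ diagonally embedded in $H({\Aa_S}) \times H(\Aa'_S)$ and the compact set $C$ playing the role of the window.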
For simplicity we restrict to groups of adjoint type below. This allows us to use the simpler definition of `arithmetic lattice'
in \cite{margulis}, p.1; and it is convenient to have centerless groups.
For classification up to commensurability this is not a real issue:
if $G$ is a group with finite center $Z$, and $\pi: G \to G/Z$ is the projection, then any $X \subset G$ is commensurable with
$\pi ^{-1} (\pi(X))$.
\begin{thm} \label{arith1}
Let $G$ be a finite product of noncompact, adjoint,
simple algebraic groups over local fields. Then any approximate lattice $\Lambda_0$ in $G$ is commensurable with a
product $\Lambda$ of lattices and arithmetic approximate lattices of direct factors of $G$. \smallskip
\noindent{{\bf Supplements}} \begin{itemize}
\item We may choose $\Lambda \subset \langle \Lambda_0 \rangle $.
\item In case no lattices appear in the product, $\Lambda_0$ is an arithmetic Meyer set.
\end{itemize}
\end{thm}
\begin{proof}
By \corref{semisimple1}, $\Lambda_0$ is commensurable to a cut-and-project set of a lattice $\Gamma \leq G \times \mathsf{L}$.
Here $\mathsf{L}$ is a centerless semisimple Lie group, and can be viewed as the connected component (of finite index) in a semisimple
adjoint algebraic group over $\Rr$.
Passing to a finite
index sublattice of $\Gamma$ if necessary,
we can express $\Gamma$ as a product of irreducible lattices $\Gamma_i \leq G_i \times \mathsf{L}_i$, where $G=\Pi_i G_i$
and $\mathsf{L}= \Pi_i \mathsf{L}_i$, in the sense of \cite{margulis}, p. 1. If $U_i$ is a compact open neighborhood of $1$ in $\mathsf{L}_i$,
the cut-and-project set in $G$ for $\Gamma$ and $\Pi_i U_i$ is the product of the corresponding sets for the $\Gamma_i$ and $U_i$.
Thus $\Lambda_0$ is commensurable to the cut-and-project set of $\Gamma$ with respect to $\Pi_i U_i$. From this it is also
clear that $\Lambda_0$ is commensurable to $\Pi_i (\pi_i(\Lambda_0))$; moreover evidently $\Lambda_0 \subset \Pi_i (\pi_i(\Lambda_0))$.
Thus proving the result for each factor $\pi_i(\Lambda_0)$ will suffice, so
we may restrict attention to a single such factor; renaming, we take $\Gamma$ to be an irreducible lattice
in $G \times \mathsf{L}$.
In case $\mathsf{L} $ is trivial, $\Gamma$ is a lattice in $G$, commensurable with $\Lambda_0$, and there is nothing further
to show. Assume from now on that $\mathsf{L}$ (and $G$) are nontrivial. Then,
$G \times \mathsf{L}$ has rank at least $2$, and so Theorem 1 of \cite{margulis}, p. 2 applies: in the notation of \defref{arithmodelset}, we may identify $G$ with $H(K_S)$, $\Gamma$ with $H(K)$ as embedded diagonally in $H(K_S) \times H(\Aa'_S)$
and $\Lambda$ with $\pi_1( \Gamma \cap (H(K_S) \times C) )$, with $C$ a compact open neighborhood of $1$ in $H(\Aa'_S)$.
This yields the arithmeticity of $\Gamma$.
It remains to show that $\Lambda_0$ is contained in a cut-and-project set $\Lambda$ of $\Gamma$.
We have in any case
$\Lambda_0 \subset \Lambda F_0$ for some finite set $F_0$. Note that one can take $F_0 \subset \Lambda_0 \Lambda$.
By \lemref{commlem}, $F_0 \subset \Lambda_0 \Lambda \subset \comm(\Lambda)$.
By Borel density (\secref{boreldensity}), $\pi_1(H(K) \cap (H(K_S) \times C))$ is Zariski dense in $H$. (It is here
that we use the assumption that $G=H(K_S)$ has no compact factors.) By \lemref{A18},
$\comm(\Lambda) \leq H(K)$.
So $F_0 \subset H(K)$.
Now we may view $F_0$ as a subset
of $H(\Aa'_S)$ as well; and it is clear that $F_0C$ is compact and $\Lambda_0 \subset \pi_1( H(K) \cap (H(K_S) \times F_0C) )$.
\end{proof}
\begin{cor}\label{char-p} Let $G$ be a semisimple algebraic group over a local field of positive characteristic, or a finite product of such groups. Then every approximate lattice in $G$ is commensurable with a lattice. \end{cor}
\begin{proof} By \thmref{arith1}, any approximate lattice in $G$ is commensurable with a product of lattices and arithmetic approximate lattices. In the arithmetic case, the field $K$ will also be of positive characteristic (say by Borel-Tits), and so
the complementary factor $H(\Aa'_S)$ will be totally disconnected. One can thus take the compact open neighborhood
$C \subset H(\Aa'_S)$ to be a subgroup; and in this case, the cut-and-project set is a lattice. \end{proof}
\ssec{Recognition}
We will find that an abstractly semisimple locally compact group, possessing a strictly dense
approximate lattice, is itself of adelic origin. The method of proof
involves two uses of complementarity, relating two locally compact groups by bouncing off an
approximate subgroup of a finite dimensional Lie group.
Margulis arithmeticity is invoked, this time also via the commensurator criterion.
\begin{thm} \label{arith2} Let $G$ be an abstractly semisimple locally compact group.
Let ${\lambda}$ be a strictly dense commensurability class of approximate lattices in $G$. Then
there exists a semisimple algebraic group $H$ over $\Qq$, and a subgroup $M$ of $H(\Qq)$ containing
$H(\Zz)$, such that $G$ is compactly isogenous to the closure $T$ of $M$ in
$H(\Aa_{\Qq})$.
\end{thm}
\begin{proof}
By \thmref{discrete1} there exists a connected semisimple real Lie group $\mathsf{L}$ and a lattice
$\Gamma \leq G \times \mathsf{L}$ such that $\pi_1: \Gamma \to G$ is injective,
$\pi_2: \Gamma \to \mathsf{L}$ is injective with dense image; and with $\widetilde{\Lam}= \pi_1(\Gamma)$ and
$f= \pi_2 \circ \pi_1 ^{-1}$, we have ${\lambda}= \chi(f)$. Moreover $\mathsf{L}$ is centerless and has no nontrivial compact normal subgroups.
By the strict density of ${\lambda}$, $\widetilde{\Lam}$ is dense in $G$.
Let $\widetilde{\Omega} =f(\widetilde{\Lam})$. Then $\Gamma$ is also
the graph of a map $f ^{-1}: \widetilde{\Omega} \to \widetilde{\Lam} \subset G$.
Now let $W$ be a
compact neighborhood of $1$ in $G$, and let $\Omega= f(W \cap \widetilde{\Lam})$. By \lemref{latticecoherence},
applied on the right this time,
$\Omega$ is an approximate lattice in $\mathsf{L}$.
We now apply \thmref{arith1} to $(\mathsf{L},\Omega)$. We find a direct product decomposition $\mathsf{L}=\Pi_{i=1}^k L_i$, and approximate
lattices $\Omega_i \leq L_i$ with $\Pi_i \Omega_i$ commensurable with $\Omega$, such that each $\Omega_i$ is an irreducible lattice,
or an arithmetic approximate lattice. Let $\pi_i: \mathsf{L} \to L_i$ be the projection.
Since $f$ has dense image, so does $\pi_i f$ for each $i$. As $\Omega_i$ is discrete, $\pi_i \widetilde{\Omega}$ and $\Omega_i$
cannot be commensurable. Hence in case $\Omega_i$ is a lattice, while it may have rank one, it is still arithmetic,
by the commensurator criterion of Theorem 1 of \cite{margulis} (p.2; note finite generation is automatic in characteristic zero, according to point (b)).
The fundamental commensurability class of the locally compact group $G$ is preserved under $\widetilde{\Lam}$-conjugation;
hence the commensurability class of $\Omega$ is preserved under $\widetilde{\Omega}$-conjugation. Thus $\widetilde{\Omega} \subset \comm(\Pi_i \Omega_i)$. Let $\widetilde{\Omega}_i = \pi_i(\widetilde{\Omega})$. It follows that $\widetilde{\Omega}_i \leq \comm(\Omega_i)$.
By \remref{8.2} (2), there exist homomorphisms $h_i: \widetilde{\Omega}_i \to H_i(\Aa_{\Qq})$ with image between $H_i(\Zz)$
and $H_i(\Qq)$, and with
$\Omega_i \in \chi(h_i)$.
Let $H = \Pi_i H_i$ and $h= \Pi_i h_i$; then $\Omega \in \chi(h)$, and $H(\Zz) \leq h(\widetilde{\Omega}) \leq H(\Qq)$. Let
$M=h(\widetilde{\Omega})$ and $T=cl(M)$. Then $h: \widetilde{\Omega} \to T$ is a homomorphism with dense image and $\Omega \in \chi(h)$.
But we already know a homomorphism with dense image from $\widetilde{\Omega}$ to a locally compact group, with canonical commensurability class $\Omega$:
namely $f ^{-1}: \widetilde{\Omega} \to G$. By \propref{uniquelc}, $T$ and $G$ are compactly isogenous, by an isogeny respecting $\widetilde{\Omega}$.
\end{proof}
\begin{center}{*} \end{center}
Let us refer to abstractly semisimple locally compact groups $T$ of the form appearing in \thmref{arith2}
as {\em Margulis groups}.
\begin{rem} \label{arith2b} Up to compact isogeny,
and up to subgroups of profinite index, any Margulis group is itself a restricted product of semisimple groups over $\Rr$ and the $\Qq_p$, with respect to the open subgroups $\Zz_p$. \end{rem}
\begin{proof} By the proof of \remref{assrems} (\ref{bt1}), after factoring out a compact normal subgroup,
we have $T= T_{na} \times T_{ar}$, where $T_{ar}$ is a connected semisimple Lie group,
and $T_{na}$ is the closure of $\widetilde{\Lam}$ in $H(\Aa_{fin})$, where $\Aa_{fin}$ denotes the finite adeles, so that $H(\Aa_{fin}) = \Pi'_p H(\Qq_p)$. It suffices to prove the statement for $T_{na}$.
$T_{na}$ contains the closure of $H(\Zz)$ in $H(\Aa_{fin})$, which in turn contains an open subgroup of $\Pi_{p} H(\Zz_p)$.
Hence $T_{na}$ is open in $H(\Aa_{fin})$. We may take $H$ to be centerless, and a product of groups $H_1,\ldots,H_m$
that are almost $\Qq$-simple; rewrite the restricted product as $\Pi'_q H_q$, where $q=(p,j)$ and $H_q = H_j(\Qq_p)$.
Let $\pi_q: \Aa \to H_q$ be the projection.
Let $I$ be the set of $q$
such that $\pi_q(T)$ is not compact.
Let $T'$ be the projection of $T$ to $\Pi'_{q \in I} H_q$. The kernel of $T \to T'$ is compact. For $q=(p,j) \in I$, the projection $\pi_q(T)$ is open and unbounded in $H_j(\Qq_p)$. Hence, by the theorem of Tits in the title of \cite{prasad} (Theorem T),
$\pi_q(T)$ contains $H_j(\Qq_p)^+$. Since the groups $H_j(\Qq_p)^+$ are simple and pairwise non-isomorphic,
the image of $T'$ in any finite product $\Pi_{q \in I_0} H_q$ must contain $\Pi_{q \in I_0} H_q^+$. But $T'$ is open, hence closed, and so it contains the full restricted product $\Pi'_{q \in I} H_q^+$.
\end{proof}
This gives in particular an analogue for approximate lattices of some of the results of \cite{oh}.
\ssec{A non-laminar approximate lattice} \label{rough}
Let $\Lambda=SL_2(\Zz[1/p])$, a lattice in $G=SL_2(\Rr) \times SL_2(\Qq_p)$.
We will define a central extension $\hat{G}$ of $G$ by $\Qq_p$, and
a certain extension of $\Lambda$
by a laminar approximate lattice in $\Qq_p$, yielding an approximate lattice in $\hat{G}$ that is not {laminar}.
The group $SL_2(\Rr)$ has a nontrivial central extension, determined by a bounded $2$-cocycle $\beta$ taking values
in $\{-1,0,1\} \subset \Zz$. One can see by computing commutators that restricted to $\Lambda=SL_2(\Zz[1/p])$, and to any finite
index subgroup of $\Lambda$,
$\beta$ remains a nontrivial class. Indeed there exists a retraction from $SL_2(\Rr)$ to a circle group $C: \ \ x^2+y^2=1$ within
it, inducing an isomorphism of fundamental groups. The restriction of $\beta$ to $C$ is already nontrivial. On this circle, the points of $\Zz[1/p]$, and of any finite index subgroup, are dense, and the nontriviality of the extension
can be witnessed already here.
Let $\hat{\Lambda}$ be the central extension of $\Lambda$ by $ \Zz[1/p]$ determined by $\beta$,
\[1 \to \Zz[1/p] \to \hat{\Lambda} \to \Lambda \to 1 \]
Concretely, we can let $\hat{\Lambda} = \Lambda \times \Zz[1/p]$, with multiplication
\[(\lambda_1,a_1)(\lambda_2,a_2) = (\lambda_1\lambda_2,a_1+a_2+\beta(\lambda_1,\lambda_2))\]
Let $ {\delta}(\lambda,a)=-a$, where now $a \in \Zz[1/p]$ is viewed as a real number. Then $ {\delta}: \hat{\Lambda} \to \Rr$ is a quasimorphism with defect bounded by $1$. Let
$\Delta$ be the approximate kernel $\Delta = \{x: -2< {\delta}(x)<2 \}$. By \propref{converse},
$\Delta$ is an approximate subgroup of $\hat{\Lambda}$. We have $\pi_1(\Delta) =\Lambda$,
so if a subgroup $S$ of $\hat{\Lambda}$ contains an approximate subgroup commensurable to $\Delta$,
then $\pi_1(S)$ has finite index in $\Lambda$, and thus $\Zz[1/p] S $ has finite index in $\hat{\Lambda}$. It follows that $ {\delta}$ is not a homomorphism on $S$. By \propref{notsame},
the commensurability class of the approximate subgroup $\Delta$ of $\hat{\Lambda}$ is not {laminar}.
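For concreteness, the quasimorphism property of $\delta$ can be read off directly from the twisted multiplication:
\begin{align*}
\delta\big((\lambda_1,a_1)(\lambda_2,a_2)\big) = -\big(a_1+a_2+\beta(\lambda_1,\lambda_2)\big) = \delta(\lambda_1,a_1) + \delta(\lambda_2,a_2) - \beta(\lambda_1,\lambda_2),
\end{align*}
so the defect $|\delta(xy)-\delta(x)-\delta(y)| = |\beta(\lambda_1,\lambda_2)| \leq 1$ is uniformly bounded, since $\beta$ takes values in $\{-1,0,1\}$.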
Now view $\beta$ as a $2$-cocycle with values in $\Qq_p$, and construct in the same way a central extension
\[1 \to \Qq_p \to \widehat{SL_2(\Rr)_{\Qq_p}} \to SL_2(\Rr) \to 1.\]
Here $\widehat{SL_2(\Rr)_{\Qq_p}}$ is obtained from the universal covering group of $SL_2(\Rr)$ by taking a fiber product with $\Qq_p$
over $\Zz$. This also describes the locally compact group topology on $\widehat{SL_2(\Rr)_{\Qq_p}}$; it has an open subgroup isomorphic
to the inverse limit of coverings of $SL_2(\Rr)$ of order $p^n$.
Taking products with $SL_2(\Qq_p)$, we obtain
\[1 \to \Qq_p \to \hat{G} \to G \to 1.\]
We can view the universe of $\hat{G}$ as being $G \times \Qq_p$, with multiplication twisted by $\beta$ as above.
Viewing $\hat{G}$ as a product in the Borel category, it is easy to see that the product measure is a Haar measure.
Now $\widehat{\Lambda}$ is a subgroup of $\hat{G}$, and $\Delta$ is a non-laminar approximate subgroup of
$\hat{G}$. One can check that $\hat{\Lambda}$ is discrete, and indeed a lattice:
if $C_1 \subset G$ is a Borel set of finite measure with $C_1 \Lambda = G$, then $(C_1 \times \Zz_p) \hat{\Lambda} = \hat{G}$,
and of course $\Zz_p \subset \Qq_p$ is compact and hence of finite measure.
\ssec{Approximate lattice extensions} \label{othergroups}
Let $G$ be a connected linear algebraic group over a local field $k$ (the discussion extends naturally to a finite product of such groups). Then $G$ has soluble radical $R=R(G)$; it is a soluble Zariski closed normal subgroup, with $\bG:=G/R$ semisimple. One can either choose $R$ connected, or such that $G/R$ is centerless; the latter will be a little more convenient for us.
We define $G_{ss} = G/R$.
Trying to reformulate the Bj\"orklund-Hartnick problem, given the amenable and semisimple answers, we may first ask:
\begin{question} \label{othergroups-q} Is $\Lambda/R$ above always an approximate lattice in $G/R$?
For connected $G$ at least, I am fairly sure that a slight modification (and simplification) of \lemref{ctp} gives a positive answer.
\end{question}
N.B.: Machado has answered this positively at least over characteristic zero local fields.
\bigskip
Let us consider the case where ${{\bf \Lambda}}:=\Lambda/R$ is an approximate lattice; then we know it is a product of
arithmetic approximate lattices, and lattices. In case $\langle \Lambda \rangle$ is dense in $G$ (as is natural to assume),
$\langle {{\bf \Lambda}} \rangle$ will be dense in $G/R$, and then all the components of ${{\bf \Lambda}}$ will be arithmetic. By \corref{minimalclass}, $\langle {{\bf \Lambda}} \rangle$ contains an element $\Gamma$ of ${\langle \bar{\omega} \rangle } _{\min}$; and by pulling back such an element, we may
reduce to the case that $\langle \Lambda \rangle \in {\langle \bar{\omega} \rangle } _{\min}$. In this case, $\langle \Lambda \rangle$ is an $S$-arithmetic group
(of course it is not a lattice of $G/R$, but of some other product of semisimple groups). Now we may further ask:
\begin{question} \label{fine?}Let $\bG$ be a product of semisimple groups over local fields, and ${{\bf \Lambda}}$ an approximate lattice in $\bG$,
with $\langle {{\bf \Lambda}} \rangle$ isomorphic to a given $S$-arithmetic group $\widetilde{\Lam}$. When is it true that for all $G$
with $G_{ss}\cong \bG$,
every approximate lattice $\Lambda$ of $G$ with $\Lambda /R = {{\bf \Lambda}}$ is {laminar}?
\end{question}
\begin{prop} \label{othergr} If $H^2_b({{\bf \Lambda}},V) =(0)$ for all rigid actions of ${{\bf \Lambda}}$ on $V=\Rr^n$ induced from a continuous action of $G$,
then \qref{fine?} has a positive answer.\end{prop}
\begin{proof}
Sketch (not well checked): this follows using \corref{minimalclass} from \propref{cohcor}, taking into account that $H^2_b(\Lambda,V) = H^2_b({{\bf \Lambda}}, V^R)$
(by the mapping theorem \cite{gromov} p.40, extended to nonconstant coefficients \cite{noskov}, cf. \cite{monod}). \end{proof}
Conversely if $H^2_b({{\bf \Lambda}},V) \neq (0)$, let $\beta$ be a nontrivial bounded $2$-cocycle, and use it to form an extension $\hat{\Lambda}$ of ${{\bf \Lambda}}$ by the module $V$. By Gromov's mapping theorem \cite{gromov} p.40, extended to nontrivial coefficients \cite{noskov}, we have
$H^2_b (\hat{\Lambda}) = H^2_b({{\bf \Lambda}})$ canonically since $V$ is amenable. Thus
pulled back to $\hat{\Lambda}$, $\beta $ still represents a nontrivial class in $H^2_b(\hat{\Lambda})$;
but now it maps to $0$ as a class in $H^2(\hat{\Lambda})$. So there exists a quasimorphism $\delta$ with $d {\delta}=-\beta$.
This gives a non-laminar approximate subgroup, as in Example \ref{rough}. It may well be possible to embed this in an appropriate algebraic group, again as in
\ref{rough}, so as to produce a negative answer for this instance of Question \ref{fine?}; if so, then \propref{othergr} gives the complete answer to \qref{fine?}.
The bounded cohomology of $S$-arithmetic groups, as ${{\bf \Lambda}}$ must be here, is studied in depth in \cite{burger-monod}, \cite{monod}, providing, I believe, both
instances where $H^2_b({{\bf \Lambda}},V)$ vanishes for all $V$ and instances where it does not.
\medskip
Emmanuel Breuillard pointed out to me that many examples of abstractly semisimple groups not embeddable in a
restricted product of semisimple algebraic groups are known, including certain automorphism groups of trees and non-$p$-adic
analytic pro-$p$ groups. Such groups thus do not contain a strictly approximate lattice.
\section{Proof of Lemma~\ref{lemma:product_gaussians}}
First, recall Bernstein's lemma.
Let $X_1, \ldots, X_p$ be zero-mean independent random variables
satisfying the Orlicz norm bound $\norm{X_i}_{\psi_1} \leq K$.
Then as long as $p \geq 2 \log(1/\delta)$, with probability at least $1-\delta$,
\begin{align*}
\sum_{i=1}^{p} X_i \leq K \sqrt{2p \log(1/\delta)} \:.
\end{align*}
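As a quick numerical sanity check (ours, not part of the proof), one can simulate this bound for sums of products of independent standard Gaussians, which are zero-mean sub-exponential with $\psi_1$-norm at most $\sqrt{2}$; all parameter values below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
p, delta, K = 200, 0.05, np.sqrt(2.0)  # K bounds the psi_1-norm of f*g for f, g ~ N(0,1)
bound = K * np.sqrt(2 * p * np.log(1 / delta))

trials = 2000
# Each row holds p independent products of two standard Gaussians (zero-mean, sub-exponential)
sums = (rng.standard_normal((trials, p)) * rng.standard_normal((trials, p))).sum(axis=1)
violation_rate = np.mean(sums > bound)
print(violation_rate)
```

Empirically the violation rate sits far below $\delta$, as the Orlicz-norm constant makes the bound conservative.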
Next, let $Q$ be an $m \times n$ matrix. Let $u_1, ..., u_{M_\varepsilon}$ be an
$\varepsilon$-net for the $m$-dimensional $\ell_2$ ball, and similarly let
$v_1, ..., v_{N_\varepsilon}$ be an $\varepsilon$-net for the $n$-dimensional $\ell_2$ ball.
For each $\norm{u}_2=1$ and $\norm{v}_2=1$, let $u_i$, $v_j$ denote the
elements in the respective nets such that $\norm{u - u_i}_2 \leq \varepsilon$
and $\norm{v - v_j}_2 \leq \varepsilon$. Then,
\begin{align*}
u^* Q v &= (u - u_i + u_i)^* Q v = (u - u_i)^* Q v + u_i^* Q ( v - v_j + v_j ) \\
&= (u - u_i)^* Q v + u_i^* Q ( v - v_j ) + u_i^* Q v_j \:.
\end{align*}
Hence,
\begin{align*}
u^* Q v \leq 2 \varepsilon \norm{Q}_2 + u_i^* Q v_j \leq 2 \varepsilon \norm{Q}_2 +\max_{1 \leq i \leq M_\varepsilon, 1 \leq j \leq N_\varepsilon} u_i^* Q v_j \:.
\end{align*}
Since $u,v$ are arbitrary on the sphere,
\begin{align*}
\norm{Q}_2 \leq \frac{1}{1-2\varepsilon} \max_{1 \leq i \leq M_\varepsilon, 1 \leq j \leq N_\varepsilon} u_i^* Q v_j \:.
\end{align*}
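In low dimensions this net bound is easy to test numerically. The sketch below (ours) builds an explicit $1/4$-net of the unit circle, so the prefactor is $1/(1-2\varepsilon) = 2$:

```python
import numpy as np

eps = 0.25
num = 64                           # angular spacing 2*pi/64, covering radius ~0.05 < eps
angles = 2 * np.pi * np.arange(num) / num
net = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # net points, used as both u_i and v_j

rng = np.random.default_rng(1)
Q = rng.standard_normal((2, 2))
true_norm = np.linalg.norm(Q, 2)   # spectral norm
net_max = np.max(net @ Q @ net.T)  # max over pairs u_i^T Q v_j
print(true_norm, net_max)
```

Here `net_max` can never exceed `true_norm`, while the net argument guarantees `true_norm <= 2 * net_max`.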
Now we study the problem at hand. Choose $\varepsilon = 1/4$.
By a standard volume comparison argument,
we have that $M_\varepsilon \leq
9^m$ and $N_\varepsilon \leq 9^n$, and that
\begin{align*}
\bignorm{ \sum_{k=1}^{N} f_k g_k^* }_2 \leq 2 \max_{1 \leq i \leq M_\varepsilon, 1 \leq j \leq N_\varepsilon} \sum_{k=1}^{N} (u_i^* f_k) (g_k^* v_j) \:.
\end{align*}
Note that $u_i^* f_k \sim N(0, u_i^* \Sigma_f u_i)$ and $g_k^* v_j \sim N(0,
v_j^* \Sigma_g v_j)$. By independence of $f_k$ and $g_k$, $(u_i^*
f_k)(g_k^* v_j)$ is a zero-mean sub-exponential random variable, and therefore $\norm{(u_i^*
f_k)(g_k^* v_j)}_{\psi_1} \leq \sqrt{2}
\norm{\Sigma_f}_2^{1/2} \norm{\Sigma_g}_2^{1/2}$.
Hence, for each pair $u_i, v_j$ we have with probability at least $1 - \delta/9^{m+n}$,
\begin{align*}
\sum_{k=1}^{N} (u_i^* f_k) (g_k^* v_j) \leq 2\norm{\Sigma_f}_2^{1/2}\norm{\Sigma_g}_2^{1/2} \sqrt{N (m + n) \log(9/\delta)} \:.
\end{align*}
Taking a union bound over all pairs in the $\varepsilon$-net yields the claim.
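Combining the pieces, the resulting bound $\bignorm{\sum_k f_k g_k^*}_2 \leq 4\norm{\Sigma_f}_2^{1/2}\norm{\Sigma_g}_2^{1/2}\sqrt{N(m+n)\log(9/\delta)}$ can be checked by simulation; this is our own sketch, with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, N, delta = 3, 2, 100, 0.05
Sigma_f, Sigma_g = 2.0 * np.eye(m), 0.5 * np.eye(n)
bound = 4 * np.sqrt(np.linalg.norm(Sigma_f, 2) * np.linalg.norm(Sigma_g, 2)) \
        * np.sqrt(N * (m + n) * np.log(9 / delta))

trials = 500
norms = []
for _ in range(trials):
    F = rng.multivariate_normal(np.zeros(m), Sigma_f, size=N)  # rows are f_k
    G = rng.multivariate_normal(np.zeros(n), Sigma_g, size=N)  # rows are g_k
    norms.append(np.linalg.norm(F.T @ G, 2))                   # ||sum_k f_k g_k^*||_2
violation_rate = np.mean(np.array(norms) > bound)
print(violation_rate)
```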
\section{Proof of Proposition~\ref{prop:data_dependent}}
\label{app:data_dependent}
For this proof we need a lemma similar to Lemma~\ref{lemma:product_gaussians}. The following is a standard result in high-dimensional statistics \cite{wainwright2019high}, and we state it here without proof.
\begin{lemma}
\label{lem:operator_norm}
Let $W \in \ensuremath{\mathbb{R}}^{N \times n}$ be a matrix with each entry i.i.d. $\mathcal{N}(0, \sigma_w^2)$. Then, with probability $1 - \delta$, we have
\begin{align*}
\|W\|_2 \leq \sigma_w(\sqrt{N} + \sqrt{n} + \sqrt{2 \log(1/\delta)}).
\end{align*}
\end{lemma}
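A quick simulation (ours, with arbitrary dimensions) illustrates the bound of Lemma~\ref{lem:operator_norm}:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, sigma_w, delta = 400, 20, 1.5, 0.01
bound = sigma_w * (np.sqrt(N) + np.sqrt(n) + np.sqrt(2 * np.log(1 / delta)))

trials = 100
norms = np.array([np.linalg.norm(sigma_w * rng.standard_normal((N, n)), 2)
                  for _ in range(trials)])
print(norms.max(), bound)
```

Empirically the largest observed operator norm concentrates near $\sigma_w(\sqrt{N} + \sqrt{n})$, below the high-probability bound.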
As before we use $Z$ to denote the $N \times (n + p)$ matrix with rows equal to $z_\ell^\top = \begin{bmatrix}
(x^{(\ell)})^\top & (u^{(\ell)})^\top
\end{bmatrix}$. Also, we denote by $W$ the $N \times n$ matrix with rows equal to $(w^{(\ell)})^\top$. Therefore, the error matrix for the ordinary least squares estimator satisfies
\begin{align*}
E = \begin{bmatrix}
(\Ah - A)^\top \\
(\widehat{B} - B)^\top
\end{bmatrix} = (Z^\top Z)^{-1} Z^\top W,
\end{align*}
when the matrix $Z$ has rank $n + p$. Under the assumption that $N \geq n + p$ we consider the singular value decomposition $Z = U \Lambda V^\top$, where $V, \Lambda \in \ensuremath{\mathbb{R}}^{(n + p) \times (n + p) }$ and $U \in \ensuremath{\mathbb{R}}^{N \times (n + p)}$. Therefore, when $\Lambda$ is invertible,
\begin{align*}
E = V (\Lambda^\top \Lambda)^{-1} \Lambda^\top U^\top W = V \Lambda^{-1} U^\top W.
\end{align*}
This implies that
\begin{align*}
E E^\top &= V \Lambda^{-1} U^\top W W^\top U \Lambda^{-1} V^\top \preceq \| U^\top W\|_2^2 V \Lambda^{-2} V^\top= \| U^\top W\|_2^2 (Z^\top Z)^{-1}.
\end{align*}
Since the columns of $U$ are orthonormal, it follows that the entries of $U^\top W$ are i.i.d. $\mathcal{N}(0, \sigma_w^2)$. Hence, the conclusion follows by Lemma~\ref{lem:operator_norm}.
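The key matrix inequality $EE^\top \preceq \|U^\top W\|_2^2 (Z^\top Z)^{-1}$ can also be verified numerically on a random instance (our sketch; dimensions arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, p = 50, 4, 2
Z = rng.standard_normal((N, n + p))     # regressor matrix, full column rank a.s.
W = rng.standard_normal((N, n))         # noise matrix
E = np.linalg.solve(Z.T @ Z, Z.T @ W)   # least squares error matrix (Z^T Z)^{-1} Z^T W

U, _, _ = np.linalg.svd(Z, full_matrices=False)
gap = np.linalg.norm(U.T @ W, 2) ** 2 * np.linalg.inv(Z.T @ Z) - E @ E.T
min_eig = np.linalg.eigvalsh(gap).min()
print(min_eig)                           # nonnegative up to roundoff
```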
\section{Derivation of the LQR cost as an $\mathcal{H}_2$ norm}\label{app:h2}
In this section, we consider the transfer function description of the infinite horizon LQR optimal control problem. In particular,
we show how it can be recast as an equivalent $\mathcal{H}_2$ optimal control problem in terms of the system response variables defined in
Theorem \ref{thm:param}.
Recall that stable and achievable system responses $(\tf \Phi_x,\tf \Phi_u)$, as characterized in equation \eqref{eq:achievable}, describe the closed-loop map from disturbance signal $\tf w$ to the state and control action $(\tf x, \tf u)$ achieved by the controller $\tf K = \tf \Phi_u \tf \Phi_x^{-1}$, i.e.,
\begin{equation*}
\begin{bmatrix} \tf x \\ \tf u \end{bmatrix} = \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix} \tf w.
\end{equation*}
Letting $\tf \Phi_x = \sum_{t=1}^\infty \Phi_x(t) z^{-t}$ and $\tf \Phi_u = \sum_{t=1}^\infty \Phi_u(t) z^{-t}$, we can then equivalently write for any $t \geq 1$
\begin{equation}
\begin{bmatrix} x_t \\ u_t \end{bmatrix} = \sum_{k=1}^t\begin{bmatrix} \Phi_x(k) \\ \Phi_u(k) \end{bmatrix} w_{t-k}.
\label{eq:time_response}
\end{equation}
For a disturbance process distributed as $w_t \stackrel{\mathclap{\text{\scriptsize{ \tiny i.i.d.}}}}{\sim} \mathcal{N}(0,\sigma_w^2 I_n)$, it follows from equation \eqref{eq:time_response} that
\begin{align*}
\mathbb{E}\left[ x_t^* Q x_t\right] &= \sigma_w^2\sum_{k=1}^t \Tr(\Phi_x(k)^* Q\Phi_x(k)) \:, \\
\mathbb{E}\left[ u_t^* R u_t\right] &= \sigma_w^2\sum_{k=1}^t \Tr(\Phi_u(k)^* R\Phi_u(k)) \:.
\end{align*}
We can then write
\begin{align*}
\lim_{T\to \infty} \frac{1}{T} \sum_{t=1}^T \mathbb{E}\left[ x_t^* Q x_t + u_t^* R u_t\right] &= \sigma_w^2\left[\sum_{t=1}^\infty \Tr(\Phi_x(t)^* Q\Phi_x(t)) + \Tr (\Phi_u(t)^* R\Phi_u(t))\right] \\
&= \sigma_w^2\sum_{t=1}^\infty \bignorm{\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2} \end{bmatrix}\begin{bmatrix} \Phi_x(t) \\ \Phi_u(t) \end{bmatrix}}_F^2 \\
&= \frac{\sigma_w^2}{2\pi} \int_{\mathbb{T}} \bignorm{\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2} \end{bmatrix}\begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix}}_F^2 \; dz \\
&=\sigma_w^2 \bignorm{\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2} \end{bmatrix}\begin{bmatrix} \tf \Phi_x \\ \tf\Phi_u \end{bmatrix}}_{\mathcal{H}_2}^2 \:,
\end{align*}
where the second to last equality is due to Parseval's Theorem.
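These identities can be checked numerically for a concrete stable closed loop. For instance, with zero input ($\tf \Phi_u = 0$ and $\Phi_x(t) = A^{t-1}$) the limit above equals $\sigma_w^2\Tr(QP)$, where $P = APA^* + I$ is the steady-state covariance; the example matrices below are ours:

```python
import numpy as np

A = np.array([[0.5, 0.2], [0.0, 0.3]])  # a stable example matrix (ours)
Q = np.diag([2.0, 1.0])
sigma_w = 1.0
T = 200                                  # truncation horizon; rho(A) = 0.5, so sums have converged

# sigma_w^2 * sum_t Tr(Phi_x(t)^* Q Phi_x(t)) with Phi_x(t) = A^(t-1),
# alongside the partial sums of P = sum_k A^k (A^k)^T
series = 0.0
P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(T):
    series += sigma_w**2 * np.trace(Ak.T @ Q @ Ak)
    P += Ak @ Ak.T
    Ak = A @ Ak

cost = sigma_w**2 * np.trace(Q @ P)      # sigma_w^2 * Tr(Q P)
print(series, cost)
```

The two quantities agree term by term by cyclicity of the trace.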
\input{fir_appendix.tex}
\input{common_lyapunov.tex}
\section{Numerical Bootstrap Validation}
\label{sec:bootstrap-experiments}
We evaluate the efficacy of the bootstrap
procedure introduced in Algorithm~\ref{alg:bootstrap}. Recall that even
though we provide theoretical bounds in
Proposition~\ref{prop:independent_estimation}, for practical purposes and for
handling dependent data, we want bounds that are the least conservative
possible.
For given state dimension $n$, input dimension $p$, and scalar $\rho$, we generate upper triangular
matrices $A \in \mathbb{R}^{n \times n}$ with all diagonal
entries equal to $\rho$ and with strictly upper triangular entries drawn i.i.d. from
$\mathcal{N}(0,1)$, clipped at magnitude $1$. By construction, these matrices have spectral radius $\rho$.
The entries of $B\in \mathbb{R}^{n \times p}$ are sampled
i.i.d. from $\mathcal{N}(0,1)$, clipped at magnitude $1$. The variance
terms $\sigma_u^2$ and $\sigma_w^2$ are fixed to be $1$.
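A minimal sketch of this construction (variable names ours); since the matrix is upper triangular, its eigenvalues are its diagonal entries, so the spectral radius is exactly $\rho$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, rho = 3, 1, 0.9

# Diagonal fixed at rho; strictly upper triangular part i.i.d. N(0,1) clipped at magnitude 1
A = rho * np.eye(n) + np.triu(np.clip(rng.standard_normal((n, n)), -1, 1), k=1)
B = np.clip(rng.standard_normal((n, p)), -1, 1)

spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
print(spectral_radius)
```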
Recall from Section~\ref{sec:bootstrap} that $M$ represents the number of
trials used for the bootstrap estimation, and $\widehat \epsilon_A$, $\widehat \epsilon_B$ are the bootstrap estimates for $\epsilon_A$, $\epsilon_B$. To check the validity of the
bootstrap procedure we empirically estimate the fraction of experiments in which $A$
and $B$ lie in the balls $B_{\widehat{A}}(\widehat{\epsilon}_A)$ and $B_{\widehat{B}}(\widehat{\epsilon}_B)$, where $B_{X}(r) = \{X'\colon \ltwonorm{X' - X} \leq r\}$.
Our findings are
summarized in
Figures~\ref{fig:bootstrap_AB1}
and~\ref{fig:bootstrap_AB2}.
Although not plotted, the theoretical bounds found
in Section~\ref{sec:estimation} would be orders of magnitude larger
than the true $\epsilon_A$ and $\epsilon_B$, while the bootstrap bounds offer a
good approximation.
\def 0.38{0.38}
\def 0.49{0.49}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Estimation Error in $A$}
\centerline{\includegraphics[width=\columnwidth]{./plots/eps_A_n=3_p=1_rho=0900_fig.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Correctness of Bootstrap Estimate}
\centerline{\includegraphics[width=\columnwidth]{./plots/prob_A_n=3_p=1_rho=0900_fig.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Estimation Error in $B$}
\centerline{\includegraphics[width=\columnwidth]{./plots/eps_B_n=3_p=1_rho=0900_fig.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Correctness of Bootstrap Estimate}
\centerline{\includegraphics[width=\columnwidth]{{./plots/prob_B_n=3_p=1_rho=0900_fig.pdf}}}
\end{subfigure}
\caption{\small In these simulations: $n = 3$, $p = 1$, $\rho = 0.9$, and $M = 2000$. In (a), the spectral distances to $A$ (shown in the solid lines) are compared with
the bootstrap estimates (shown in the dashed lines). In (b), the probability that $A$ lies in $B_{\widehat{A}}(\widehat{\epsilon}_A)$, estimated from $2000$ trials.
In (c), the spectral distances to $B$ are compared with the bootstrap estimates. In (d), the probability that $B$ lies in $B_{\widehat{B}}(\widehat{\epsilon}_B)$, estimated from $2000$ trials.}
\label{fig:bootstrap_AB1}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Estimation Error in $A$}
\centerline{\includegraphics[width=\columnwidth]{{./plots/eps_A_n=6_p=2_rho=1010_fig.pdf}}}
\end{subfigure}
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Correctness of Bootstrap Estimate}
\centerline{\includegraphics[width=\columnwidth]{{./plots/prob_A_n=6_p=2_rho=1010_fig.pdf}}}
\end{subfigure}
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Estimation Error in $B$}
\centerline{\includegraphics[width=\columnwidth]{{./plots/eps_B_n=6_p=2_rho=1010_fig.pdf}}}
\end{subfigure}
\begin{subfigure}[b]{0.38\textwidth}
\caption{\footnotesize Correctness of Bootstrap Estimate}
\centerline{\includegraphics[width=\columnwidth]{{./plots/prob_B_n=6_p=2_rho=1010_fig.pdf}}}
\end{subfigure}
\caption{\small In these simulations: $n = 6$, $p = 2$, $\rho = 1.01$, and $M = 2000$. In (a), the spectral distances to $A$ are compared with the bootstrap estimates. In (b), the probability that $A$ lies in $B_{\widehat{A}}(\widehat{\epsilon}_A)$, estimated from $2000$ trials. In (c), the spectral distances to $B$ are compared with the bootstrap estimates. In (d), the probability that $B$ lies in $B_{\widehat{B}}(\widehat{\epsilon}_B)$, estimated from $2000$ trials.}
\label{fig:bootstrap_AB2}
\end{figure}
\section{ Experiments with Varying Rollout Lengths}\label{app:eps_v_trial_figs}
Here we include results of experiments in which we fix the number of trials ($N=6$) and vary the rollout length.
Figure \ref{fig:eps_v_trial} displays the estimation errors. The estimation errors on $A$ decrease more quickly than in the fixed rollout length case, consistent with the idea that longer rollouts of easily excitable systems allow for better identification due to a higher signal-to-noise ratio. Figure \ref{fig:perf_v_trial} shows that the stabilizing performance of the nominal controller is somewhat better than in the fixed rollout length case (Figure \ref{fig:perf_v_rollout}). This fact is likely related to the smaller errors in the estimation of $A$ (Figure \ref{fig:eps_v_trial}).
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\caption{\footnotesize Least Squares Estimation Errors}
\centerline{\includegraphics[width=\columnwidth]{./plots/eps_v_trial.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\caption{\footnotesize Accuracy of Bootstrap Estimates}
\centerline{\includegraphics[width=\columnwidth]{./plots/epserror_v_trial.pdf}}
\end{subfigure}
\caption{\small The resulting errors from 100 identification experiments with a total of $N=6$ rollouts are plotted against the length of the rollouts. In (a), the median of the least squares estimation errors decreases with $T$. In (b), the ratio of the bootstrap estimates to the true estimates. Shaded regions display quartiles.}
\label{fig:eps_v_trial}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\caption{\footnotesize LQR Cost Suboptimality}
\centerline{\includegraphics[width=\columnwidth]{./plots/subopt_v_trial.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\caption{\footnotesize Frequency of Stabilization}
\centerline{\includegraphics[width=\columnwidth]{./plots/stabilize_v_trial.pdf}}
\end{subfigure}
\caption{\small The performance of controllers synthesized on the results of the 100 identification experiments is plotted against the length of rollouts. In (a), the median suboptimality of nominal and robustly synthesized controllers is compared, with shaded regions displaying quartiles, which go off to infinity when stabilizing controllers are not frequently found. In (b), the frequency with which the synthesis methods found stabilizing controllers.}
\label{fig:perf_v_trial}
\end{figure}
\section{A Common Lyapunov Relaxation for Proportional Control}
\label{appendix:common_lyap}
We unpack each of the norms in~\eqref{eq:robustLQRbnd-static2} as linear matrix inequalities. First, by the KYP Lemma, the $\mathcal{H}_\infty$ constraint is satisfied if and only if there exists a matrix $P_\infty$ satisfying
\begin{align*}
\begin{bmatrix}
(\widehat{A}+\widehat{B} K)^* P_\infty(\widehat{A}+\widehat{B} K)-P_\infty & (\widehat{A}+\widehat{B} K)^*P_\infty\\
P_\infty (\widehat{A}+\widehat{B} K) & P_\infty
\end{bmatrix}+
\begin{bmatrix}
\gamma^{-2} \begin{bmatrix} \tfrac{\epsilon_A}{\sqrt{\alpha}} \\ \tfrac{\epsilon_B}{\sqrt{1-\alpha}} K\end{bmatrix}^* \begin{bmatrix} \tfrac{\epsilon_A}{\sqrt{\alpha}} \\ \tfrac{\epsilon_B}{\sqrt{1-\alpha}} K\end{bmatrix} & 0\\
0 & - I
\end{bmatrix} \preceq 0 \:.
\end{align*}
Applying the Schur complement lemma, we can reformulate this as the equivalent matrix inequality
\begin{align*}
\begin{bmatrix}
-P_\infty^{-1} & 0 & 0 & (\widehat{A}+\widehat{B} K) & I\\
0 & -\gamma^2 I & 0 & \tfrac{\epsilon_A}{\sqrt{\alpha}} I & 0\\
0 & 0 & -\gamma^2 I & \tfrac{\epsilon_B}{\sqrt{1-\alpha}} K & 0\\
(\widehat{A} + \widehat{B} K)^* &\tfrac{\epsilon_A}{\sqrt{\alpha}} I & \tfrac{\epsilon_B}{\sqrt{1-\alpha}} K^* & -P_\infty & 0\\
I & 0 & 0 & 0 & -I
\end{bmatrix} \preceq 0 \:.
\end{align*}
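The Schur complement step used here is generic: for symmetric blocks with $C \prec 0$, the block matrix $\begin{bmatrix} M & N \\ N^* & C \end{bmatrix} \preceq 0$ if and only if $M - NC^{-1}N^* \preceq 0$. A numerical illustration on random data (ours, not the specific matrices above):

```python
import numpy as np

rng = np.random.default_rng(5)
k = 3
G = rng.standard_normal((k, k))
C = -(G @ G.T + np.eye(k))               # symmetric negative definite block
Nb = rng.standard_normal((k, k))
schur_part = Nb @ np.linalg.inv(C) @ Nb.T

def max_eig(M):
    return np.linalg.eigvalsh(M).max()

M_ok = schur_part - np.eye(k)            # Schur complement M - N C^{-1} N^T = -I < 0
M_bad = schur_part + np.eye(k)           # Schur complement = +I > 0
block_ok = np.block([[M_ok, Nb], [Nb.T, C]])
block_bad = np.block([[M_bad, Nb], [Nb.T, C]])
print(max_eig(block_ok), max_eig(block_bad))
```

As expected, the first block matrix is negative semidefinite and the second is not.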
Then, conjugating by the matrix $\operatorname{diag}(I,I,I,P_\infty^{-1},I)$ and setting $X_\infty = P_\infty^{-1}$, we are left with
\begin{align*}
\begin{bmatrix}
-X_\infty & 0 & 0 & (\widehat{A}+\widehat{B} K)X_\infty & I\\
0 & -\gamma^2 I & 0 & \tfrac{\epsilon_A}{\sqrt{\alpha}} X_\infty & 0\\
0 & 0 & -\gamma^2 I & \tfrac{\epsilon_B}{\sqrt{1-\alpha}} KX_\infty & 0\\
X_\infty(\widehat{A} + \widehat{B} K)^* &\tfrac{\epsilon_A}{\sqrt{\alpha}} X_\infty & \tfrac{\epsilon_B}{\sqrt{1-\alpha}} X_\infty K^* & -X_\infty & 0\\
I & 0 & 0 & 0 & -I
\end{bmatrix} \preceq 0 \:.
\end{align*}
Finally, applying the Schur complement lemma again gives the more compact inequality
\begin{align*}
\begin{bmatrix}
-X_\infty+I & 0 & 0 & (\widehat{A}+\widehat{B} K)X_\infty\\
0 & -\gamma^2 I & 0 & \tfrac{\epsilon_A}{\sqrt{\alpha}} X_\infty \\
0 & 0 & -\gamma^2 I & \tfrac{\epsilon_B}{\sqrt{1-\alpha}} KX_\infty \\
X_\infty(\widehat{A} + \widehat{B} K)^* &\tfrac{\epsilon_A}{\sqrt{\alpha}} X_\infty & \tfrac{\epsilon_B}{\sqrt{1-\alpha}} X_\infty K^* & -X_\infty \\
\end{bmatrix} \preceq 0 \:.
\end{align*}
For convenience, we permute the rows and columns of this inequality, conjugate by $\operatorname{diag}(I,I,\sqrt{\alpha}I,\sqrt{1-\alpha} I )$, and use the equivalent form
\begin{align*}
\begin{bmatrix} -X_\infty+I & (\widehat{A}+\widehat{B} K)X_\infty & 0 & 0 \\
X_\infty(\widehat{A}+\widehat{B} K)^* & -X_\infty & \epsilon_A X_\infty & \epsilon_B X_\infty K^* \\
0 & \epsilon_A X_\infty & -\alpha\gamma^2 I & 0\\
0 & \epsilon_B KX_\infty & 0 & -(1-\alpha)\gamma^2 I \end{bmatrix} \preceq 0 \:.
\end{align*}
For the $\mathcal{H}_2$ norm, we have that under proportional control $K$, the average cost is given by
$ \operatorname{Trace}((Q +K^*R K)X_2)$ where $X_2$ is the steady state covariance. That is, $X_2$ satisfies the Lyapunov equation
\begin{align*}
X_2 = (\widehat{A}+\widehat{B} K) X_2(\widehat{A}+\widehat{B} K)^* +I \,.
\end{align*}
But note that we can relax this expression to a matrix inequality
\begin{align}\label{eq:htwo-lyap}
X_2 \succeq (\widehat{A}+\widehat{B} K) X_2(\widehat{A}+\widehat{B} K)^* +I \:,
\end{align}
and $ \operatorname{Trace}((Q +K^*R K)X_2)$ will remain an upper bound on the squared $\mathcal{H}_2$ norm. Rewriting this matrix inequality with Schur complements and combining with our derivation for the $\mathcal{H}_\infty$ norm, we can reformulate~\eqref{eq:robustLQRbnd-static2} as a nonconvex semidefinite program
\begin{align}\label{eq:robustLQRbnd-static-nc}
\begin{array}{ll}
\operatorname{minimize}\limits_{X_2,X_\infty,K,\gamma} &\frac{1}{(1-\gamma)^2} \operatorname{Trace}((Q +K^*R K)X_2) \\
\mbox{subject to} & \begin{bmatrix} X_2 -I & (\widehat{A}+\widehat{B} K)X_2 \\ X_2(\widehat{A}+\widehat{B} K)^* & X_2\end{bmatrix}\succeq 0\\
&\begin{bmatrix} X_\infty-I & (\widehat{A}+\widehat{B} K)X_\infty & 0 & 0 \\
X_\infty(\widehat{A}+\widehat{B} K)^* & X_\infty & \epsilon_A X_\infty & \epsilon_B X_\infty K^* \\
0 & \epsilon_A X_\infty & \alpha\gamma^2 I & 0\\
0 & \epsilon_B KX_\infty & 0 & (1-\alpha)\gamma^2 I \end{bmatrix} \succeq 0 \:.
\end{array}
\end{align}
The common Lyapunov relaxation simply imposes that $X_2 = X_\infty$. Under this identification, we note that the first LMI becomes redundant and we are left with the SDP
\begin{align*}
\begin{array}{ll}
\operatorname{minimize}\limits_{X,K,\gamma} &\frac{1}{(1-\gamma)^2} \operatorname{Trace}((Q +K^*R K)X) \\
\mbox{subject to}
&\begin{bmatrix} X-I & (\widehat{A}+\widehat{B} K)X & 0 & 0 \\
X(\widehat{A}+\widehat{B} K)^* & X & \epsilon_A X & \epsilon_B X K^* \\
0 & \epsilon_A X & \alpha\gamma^2 I & 0\\
0 & \epsilon_B KX & 0 & (1-\alpha)\gamma^2 I \end{bmatrix} \succeq 0 \:.
\end{array}
\end{align*}
Now though this appears to be nonconvex, we can perform the standard variable substitution $Z=KX$ and rewrite the cost to yield~\eqref{eq:robustLQRbnd-common-lyap}.
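As a numerical sanity check on the relaxation step \eqref{eq:htwo-lyap} (our sketch, with an arbitrary stable closed-loop matrix): any $X$ feasible for the relaxed inequality dominates the exact steady-state covariance, so the associated trace remains an upper bound on the cost:

```python
import numpy as np

Acl = np.array([[0.6, 0.1], [0.0, 0.4]])     # stands in for A_hat + B_hat K (ours)
Q = np.diag([1.0, 2.0])

# Exact steady-state covariance X2 = Acl X2 Acl^T + I via its convergent series
X2 = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(300):
    X2 += Ak @ Ak.T
    Ak = Acl @ Ak

# Any X with X >= Acl X Acl^T + I is feasible for the relaxation and dominates X2
X = X2 + 0.5 * np.eye(2)
slack = X - (Acl @ X @ Acl.T + np.eye(2))    # should be positive semidefinite
print(np.trace(Q @ X2), np.trace(Q @ X))     # relaxed trace upper-bounds the exact cost
```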
\subsection{Finite impulse response approximation}\label{sec:approx-FIR}
An elementary approach to reducing the aforementioned semi-infinite program to a finite dimensional one is to only optimize over the first $L$ elements of the transfer functions $\tf \Phi_x$ and $\tf \Phi_u$, effectively taking a finite impulse response (FIR) approximation. Since these are both stable maps, we expect the effects of such an approximation to be negligible as long as the optimization horizon $L$ is chosen to be sufficiently large -- in what follows, we show that this is indeed the case.
By restricting our optimization to FIR approximations of $\tf \Phi_x$ and $\tf \Phi_u$, we can cast the $\mathcal{H}_2$ cost as a second order cone constraint. The only difficulty arises in posing the $\mathcal{H}_\infty$ constraint as a semidefinite program. Though there are several ways to cast $\mathcal{H}_\infty$ constraints as linear matrix inequalities, we use the formulation in Theorem 5.8 of Dumitrescu's text to take advantage of the FIR structure in our problem~\cite{dumitrescu2007positive}. We note that using Dumitrescu's formulation, the resulting problem is affine in $\alpha$ when $\gamma$ is fixed, and hence we can solve for the optimal value of $\alpha$.
Then the resulting system response elements can be cast as a dynamic feedback controller using Theorem 2 of~\citet{anderson17}.
\subsubsection{Sub-optimality guarantees}
In this subsection we show that optimizing over FIR approximations incurs only a small degradation in performance relative to the solution to the infinite-horizon problem. In particular, this degradation in performance decays exponentially in the FIR horizon $L$, where the rate of decay is specified by the decay rate of the spectral elements of the optimal closed loop system response $\Res{A + BK_\star}$.
Before proceeding, we introduce additional concepts and notation needed to formalize guarantees in the FIR setting. A linear-time-invariant transfer function is stable if and only if it is exponentially stable, i.e., $\tf \Phi = \sum_{t=0}^\infty z^{-t}\Phi(t) \in \mathcal{RH}_\infty$ if and only if there exist constants $C > 0$ and $\rho \in [0,1)$ such that for every spectral element $\Phi(t)$, $t\geq 0$, it holds that
\begin{equation}
\twonorm{\Phi(t)} \leq C \rho^t.
\label{eq:exp_decay}
\end{equation}
In what follows, we pick $C_\star$ and $\rho_\star$ to be any such constants satisfying $\twonorm{\Res{A + B K_\star}(t)} \leq C_\star \rho_\star^t$ for all $t\geq 0$.
We introduce a version of the optimization problem~\eqref{eq:robustLQR} with a finite number of decision variables:
\begin{align}\label{eq:robustFIRbnd}
\begin{split}
\mbox{minimize}_{\gamma\in[0,1)}\frac{1}{1 - \gamma}&\min_{\tf\Phi_x, \tf \Phi_u, V} \left\|\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix}\right\|_{\mathcal{H}_2}\\
& \text{s.t.} \begin{bmatrix}zI-\widehat{A}&-\widehat{B}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix} = I + \frac{1}{z^L}V, \\ &\left\|\begin{bmatrix} \tfrac{\epsilon_A}{\sqrt{\alpha}} \tf \Phi_x \\ \tfrac{\epsilon_B}{\sqrt{1-\alpha}} \tf\Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty} + \twonorm{V} \leq \gamma\\
&\tf\Phi_x = \sum_{t=1}^L \frac{1}{z^t}\Phi_x(t), \, \tf\Phi_u = \sum_{t=1}^L \frac{1}{z^t}\Phi_u(t).
\end{split}
\end{align}
In this optimization problem we search over finite response transfer functions $\tf \Phi_x$ and $\tf \Phi_u$.
Given a feasible solution $\tf \Phi_x $, $ \tf \Phi_u$ of problem \eqref{eq:robustFIRbnd}, we can implement the controller $\tf K_L = \tf \Phi_u \tf \Phi_x^{-1}$ with an equivalent state-space representation $(A_K, B_K, C_K, D_K)$ using the
response elements $\{ \Phi_x(k) \}_{k=1}^{L}$ and
$\{ \Phi_u(k) \}_{k=1}^{L}$ via Theorem 2 of \cite{anderson17}.
The slack term $V$ accounts for the error introduced by truncating the infinite response transfer functions of problem \eqref{eq:robustLQR}.
Intuitively, if the truncated tail is sufficiently small, then the effects of this approximation should be negligible on performance. The next result formalizes this intuition.
\begin{theorem}\label{thm:FIR_subopt}
Set $\alpha = 1/2$ in \eqref{eq:robustFIRbnd} and let $C_\star > 0$ and $\rho_\star \in [0,1)$ be such that $\twonorm{\Res{(A + B K_\star)}(t)} \leq C_\star \rho_\star^t$ for all $t \geq 0$. Then, if $\tf K_L$ is synthesized via \eqref{eq:robustFIRbnd}, the relative error in the LQR cost satisfies
\begin{align*}
\frac{J(A, B, \tf K_L) - J_\star}{J_\star} \leq 10 (\epsilon_A + \epsilon_B \twonorm{K_\star}) \hinfnorm{\Res{A + B K_\star}},
\end{align*}
as long as
\begin{align*}
\epsilon_A + \epsilon_B \twonorm{K_\star} \leq \frac{1 - \rho_\star}{10C_\star} \; \text{ and }\; L \geq \frac{4 \log\left(\frac{C_\star}{(\epsilon_A + \epsilon_B \twonorm{K_\star})\hinfnorm{\Res{A + B K_\star}}} \right)}{1 - \rho_\star}.
\end{align*}
\end{theorem}
The proof of this result, deferred to Appendix \ref{app:FIR}, is conceptually the same as that of the infinite horizon setting. The main difference is that care must be taken to ensure that the approximation horizon $L$ is sufficiently large so as to ensure stability and performance of the resulting controller. From the theorem statement, we see that for such an appropriately chosen FIR approximation horizon $L$, our performance bound is the same, up to universal constants, to that achieved by the solution to the infinite horizon problem. Furthermore, the approximation horizon $L$ only needs to grow logarithmically with respect to one over the estimation rate in order to preserve the same statistical rate as the controller produced by the infinite horizon problem. Finally, an end-to-end sample complexity result analogous to that stated in Corollary \ref{coro:lqr_cost_iid} can be easily obtained by simply substituting in the sample-complexity bounds on $\epsilon_A$ and $\epsilon_B$ specified in Proposition \ref{prop:independent_estimation}.
\subsection{Static controller and a common Lyapunov approximation}\label{sec:approx-CL}
As we have reiterated above, when the dynamics are known, the optimal LQR control law takes the form $u_t = K x_t$ for a properly chosen static gain matrix $K$.
We can reparameterize the optimization problem~\eqref{eq:robustLQRbnd} to restrict our attention to such static control policies:
\begin{align}\label{eq:robustLQRbnd-static}
\begin{split}
\mbox{minimize}_{\gamma\in[0,1)}\frac{1}{1 - \gamma}&\min_{\tf\Phi_x, \tf \Phi_u,K} \left\|\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix}\right\|_{\mathcal{H}_2}\\
& \text{s.t.} \begin{bmatrix}zI-\widehat{A}&-\widehat{B}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix} = I,~~\left\|\begin{bmatrix} \tfrac{\epsilon_A}{\sqrt{\alpha}} \tf \Phi_x \\ \tfrac{\epsilon_B}{\sqrt{1-\alpha}} \tf\Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty}\leq \gamma\\
&\qquad \tf\Phi_x, \tf \Phi_u \in\frac{1}{z}\mathcal{RH}_\infty\,,~~ K=\tf \Phi_u \tf \Phi_x^{-1}.
\end{split}
\end{align}
Under this reparameterization, the problem is no longer convex. Here we present a simple application of the \emph{common Lyapunov relaxation} that allows us to find a controller $K$ using semidefinite programming.
Note that the equality constraints imply:
\begin{align*}
I=\begin{bmatrix} zI-\widehat{A}&-\widehat{B}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix} =\begin{bmatrix} zI-\widehat{A}&-\widehat{B}\end{bmatrix}\begin{bmatrix} I \\ K \end{bmatrix} \tf \Phi_x
=(zI-\widehat{A}-\widehat{B} K) \tf \Phi_x\,,
\end{align*}
revealing that we must have
\begin{align*}
\tf \Phi_x = (zI - \widehat{A}-\widehat{B} K)^{-1} ~~\mbox{and}~~\tf \Phi_u= K(zI - \widehat{A}-\widehat{B} K)^{-1}\,.
\end{align*}
With these identifications,~\eqref{eq:robustLQRbnd-static} can be reformulated as
\begin{align}\label{eq:robustLQRbnd-static2}
\begin{split}
\mbox{minimize}_{\gamma\in[0,1)}\frac{1}{1 - \gamma}&\min_{K} \left\|\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2}K\end{bmatrix}(zI - \widehat{A}-\widehat{B} K)^{-1} \right\|_{\mathcal{H}_2}\\
& \text{s.t.} \left\|\begin{bmatrix} \tfrac{\epsilon_A}{\sqrt{\alpha}} \\ \tfrac{\epsilon_B}{\sqrt{1-\alpha}} K\end{bmatrix} (zI - \widehat{A} -\widehat{B} K)^{-1}\right\|_{\mathcal{H}_\infty}\leq \gamma\,.
\end{split}
\end{align}
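For a fixed stabilizing gain $K$, the squared $\mathcal{H}_2$ objective above equals $\operatorname{Trace}((Q + K^*RK)X)$, where $X$ solves the discrete Lyapunov equation $X = (\widehat{A}+\widehat{B}K)X(\widehat{A}+\widehat{B}K)^* + I$. The following numpy sketch evaluates this quantity; the model and gain below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def dlyap(A_cl, W):
    # solve X = A_cl X A_cl^T + W by vectorization (fine for small n)
    n = A_cl.shape[0]
    x = np.linalg.solve(np.eye(n * n) - np.kron(A_cl, A_cl), W.flatten())
    return x.reshape(n, n)

def static_lqr_cost(A, B, K, Q, R):
    # squared H2 norm of [Q^{1/2}; R^{1/2} K] (zI - A - BK)^{-1}
    A_cl = A + B @ K
    assert np.max(np.abs(np.linalg.eigvals(A_cl))) < 1, "K must stabilize (A, B)"
    X = dlyap(A_cl, np.eye(A.shape[0]))   # steady-state state covariance
    return np.trace((Q + K.T @ R @ K) @ X)

# illustrative placeholder model and gain (not from the paper)
A_hat = np.array([[0.9, 0.2], [0.0, 0.8]])
B_hat = np.eye(2)
Q, R = np.eye(2), np.eye(2)
K = np.array([[-0.5, -0.1], [0.0, -0.4]])
cost = static_lqr_cost(A_hat, B_hat, Q=Q, R=R, K=K)
```

This gives a cheap way to score candidate static gains on the nominal model before invoking the robust synthesis machinery.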
Using standard techniques from the robust control literature, we can upper bound this problem via the semidefinite program
\begin{align}\label{eq:robustLQRbnd-common-lyap}
\begin{array}{ll}
\operatorname{minimize}\limits_{X,Z,W,\alpha,\gamma} &\frac{1}{(1-\gamma)^2}
\left\{ \operatorname{Trace}(Q W_{11}) + \operatorname{Trace}(R W_{22}) \right\}\\
\mbox{subject to}
& \begin{bmatrix} X & X & Z^* \\
X & W_{11} & W_{12} \\
Z & W_{21} & W_{22} \end{bmatrix} \succeq 0\\
&\begin{bmatrix} X-I & \widehat{A}X+\widehat{B} Z & 0 & 0 \\
(\widehat{A}X+\widehat{B} Z)^* & X & \epsilon_A X & \epsilon_B Z^* \\
0 & \epsilon_A X & \alpha\gamma^2 I & 0\\
0 & \epsilon_B Z & 0 & (1-\alpha)\gamma^2 I \end{bmatrix} \succeq 0\,.
\end{array}
\end{align}
Note that this optimization problem is affine in $\alpha$ when $\gamma$ is fixed. Hence, in practice we can find the optimal value of $\alpha$ as well. A static controller can then be extracted from this optimization problem by setting $K=Z X^{-1}$. A full derivation of this relaxation can be found in Appendix~\ref{appendix:common_lyap}. Note that this compact SDP is simpler to solve than the truncated FIR approximation. As demonstrated experimentally in the following section, the cost of this simplification is that the common Lyapunov approach provides a controller with slightly higher LQR cost.
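As a sanity check on the structure of this relaxation, note that in the zero-uncertainty limit $\epsilon_A = \epsilon_B = 0$ the second LMI reduces, by a Schur complement, to $X \succeq (\widehat{A}X + \widehat{B}Z)X^{-1}(\widehat{A}X+\widehat{B}Z)^* + I$, which is satisfied on the boundary by the closed-loop Lyapunov solution $X$ with $Z = KX$. A small numpy sketch (placeholder model and gain) verifies this and recovers $K = ZX^{-1}$:

```python
import numpy as np

def dlyap(A_cl, W):
    # solve X = A_cl X A_cl^T + W by vectorization (fine for small n)
    n = A_cl.shape[0]
    x = np.linalg.solve(np.eye(n * n) - np.kron(A_cl, A_cl), W.flatten())
    return x.reshape(n, n)

# placeholder nominal model and stabilizing gain (illustrative only)
A_hat = np.array([[0.9, 0.2], [0.0, 0.8]])
B_hat = np.eye(2)
K = np.array([[-0.5, -0.1], [0.0, -0.4]])
A_cl = A_hat + B_hat @ K

X = dlyap(A_cl, np.eye(2))   # X = A_cl X A_cl^T + I, boundary-feasible for eps = 0
Z = K @ X
M = A_hat @ X + B_hat @ Z    # equals (A_hat + B_hat K) X since K = Z X^{-1}
lmi = np.block([[X - np.eye(2), M], [M.T, X]])
min_eig = np.min(np.linalg.eigvalsh(lmi))   # ~0: PSD on the boundary
K_rec = Z @ np.linalg.inv(X)                # recover the static gain
```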
\subsection{Least Squares Estimation as a Random Matrix Problem}
We begin by explicitly writing the form of the least squares estimator. First, fixing notation to simplify the presentation, let $\Theta := \begin{bmatrix} A & B \end{bmatrix}^* \in \ensuremath{\mathbb{R}}^{(n + p) \times n}$
and let $z_t := \begin{bmatrix} x_t \\ u_t \end{bmatrix} \in \ensuremath{\mathbb{R}}^{n + p}$.
Then the system dynamics can be rewritten, for all $t \geq 0$, as
\begin{align*}
x_{t+1}^* = z_t^* \Theta + w_t^* \:.
\end{align*}
In a single rollout, we collect
\begin{align}\label{eq:single_trial}
X := \begin{bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_T^* \end{bmatrix} \:, \:\:
Z := \begin{bmatrix} z_0^* \\ z_1^* \\ \vdots \\ z_{T-1}^* \end{bmatrix} \:, \:\:
W := \begin{bmatrix} w_0^* \\ w_1^* \\ \vdots \\ w_{T-1}^* \end{bmatrix} \:.
\end{align}
The system dynamics give the identity $ X = Z \Theta + W $.
Resetting the state of the system to $x_0=0$ each time,
we can perform $N$ rollouts and collect $N$ datasets like \eqref{eq:single_trial}. Having the ability to reset the system to a state independent of past observations will be important for the analysis in the following section, and it is also practically important for potentially unstable systems.
Denote the data for each rollout as $(X^{(\ell)}, Z^{(\ell)}, W^{(\ell)})$.
With slight abuse of notation, let
$X_N$ be composed of vertically stacked $X^{(\ell)}$, and similarly for
$Z_N$ and $W_N$. Then we have
\begin{align*}
X_N = Z_N \Theta + W_N \:.
\end{align*}
The full data least squares estimator for $\Theta$ is (assuming for now invertibility of $Z_N^* Z_N$),
\begin{align}\label{eq:LS_est}
\widehat{\Theta} = (Z_N^* Z_N)^{-1} Z_N^* X_N = \Theta + (Z_N^* Z_N)^{-1} Z_N^* W_N \:.
\end{align}
Then the estimation error is given by
\begin{align}\label{eq:est_error}
E := \widehat{\Theta} - \Theta = (Z_N^* Z_N)^{-1} Z_N^* W_N \:.
\end{align}
The magnitude of this error is the quantity of interest in determining confidence sets around estimates $(\widehat A,
\widehat B)$. However, since $W_N$ and $Z_N$ are not independent, this
estimator is difficult to analyze using standard methods. While this type of
analysis is an open problem of interest, in this paper we turn instead to a simplified estimator.
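Although analyzing \eqref{eq:est_error} is subtle because of this dependence, the estimator itself is straightforward to form. A minimal numpy simulation (an illustrative stable system, not the example studied later) assembles the rollout data and computes \eqref{eq:LS_est}:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, T, N = 3, 3, 10, 200
A = 0.9 * np.eye(n)                 # illustrative stable system
B = np.eye(n, p)
sigma_u = sigma_w = 1.0
Theta = np.vstack([A.T, B.T])       # Theta = [A B]^*, shape (n+p) x n

Z_rows, X_rows = [], []
for _ in range(N):                  # N rollouts, each reset to x_0 = 0
    x = np.zeros(n)
    for _ in range(T):
        u = sigma_u * rng.standard_normal(p)
        Z_rows.append(np.concatenate([x, u]))
        x = A @ x + B @ u + sigma_w * rng.standard_normal(n)
        X_rows.append(x)
Z_N, X_N = np.array(Z_rows), np.array(X_rows)

# full-data least squares estimate (assumes Z_N^* Z_N invertible)
Theta_hat = np.linalg.solve(Z_N.T @ Z_N, Z_N.T @ X_N)
E = Theta_hat - Theta               # the estimation error
err = np.linalg.norm(E, 2)
```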
\subsection{Theoretical Bounds on Least Squares Error}
\label{sec:th_bounds}
In this section, we work out the statistical rate for the least squares estimator which uses just the last sample of each trajectory $({x}_T^{(\ell)}, {x}_{T-1}^{(\ell)}, {u}_{T-1}^{(\ell)})$. This estimation procedure is made precise in Algorithm~\ref{alg:independent_estimation}.
Our analysis ideas are analogous to those used to prove statistical rates for standard linear regression, and they leverage recent tools in nonasymptotic analysis of random matrices. The result is presented above in Proposition~\ref{prop:independent_estimation}.
\begin{center}
\begin{algorithm}[h!]
\caption{Estimation of linear dynamics with independent data}
\begin{algorithmic}[1]
\FOR{$\ell$ from $1$ to $N$}
\STATE ${x}_0^{(\ell)} = 0$
\FOR{$t$ from $0$ to $T - 1$}
\STATE ${x}_{t + 1}^{(\ell)} = A {x}_t^{(\ell)} + B {u}_t^{(\ell)} + {w}_t^{(\ell)}$ with ${w}_t^{(\ell)} \stackrel{\mathclap{\text{\scriptsize{ \tiny i.i.d.}}}}{\sim} \mathcal{N}(0, \sigma_w^2 I_n)$ and ${u}_t^{(\ell)} \stackrel{\mathclap{\text{\scriptsize{ \tiny i.i.d.}}}}{\sim} \mathcal{N}(0, \sigma_u^2 I_p)$.
\ENDFOR
\ENDFOR
\STATE $(\Ah, \widehat{B}) \in \arg\min_{(A,B)} \sum_{\ell = 1}^N \frac{1}{2} \ltwonorm{ A{x}_{T - 1}^{(\ell)} + B{u}_{T - 1}^{(\ell)} - {x}_{T}^{(\ell)}}^2$
\end{algorithmic}
\label{alg:independent_estimation}
\end{algorithm}
\end{center}
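A minimal numpy sketch of Algorithm~\ref{alg:independent_estimation} (with an illustrative stable system) is given below; only the last transition of each rollout enters the regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, T, N = 3, 3, 6, 2000
A = 0.9 * np.eye(n)                 # illustrative system (not the paper's example)
B = np.eye(n, p)
sigma_u = sigma_w = 1.0

Z_last, X_last = [], []
for _ in range(N):
    x = np.zeros(n)
    for _ in range(T):
        u = sigma_u * rng.standard_normal(p)
        z = np.concatenate([x, u])  # after the loop: (x_{T-1}, u_{T-1})
        x = A @ x + B @ u + sigma_w * rng.standard_normal(n)
    Z_last.append(z)
    X_last.append(x)                # x_T
Z_last, X_last = np.array(Z_last), np.array(X_last)

Theta_hat, *_ = np.linalg.lstsq(Z_last, X_last, rcond=None)
A_hat, B_hat = Theta_hat[:n].T, Theta_hat[n:].T
err_A = np.linalg.norm(A_hat - A, 2)
err_B = np.linalg.norm(B_hat - B, 2)
```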
In the context of Proposition~\ref{prop:independent_estimation}, a single data point from each $T$-step rollout is used. We emphasize that this strategy results in independent data, which can be seen by defining the estimator matrix directly. The previous estimator \eqref{eq:LS_est} is amended as follows: the matrices defined in \eqref{eq:single_trial} instead include only the final timestep of each trial,
$X_N = \begin{bmatrix} x_T^{(1)} & x_T^{(2)} & \hdots & x_T^{(N)} \end{bmatrix}^* $, and similar modifications are made to $Z_N$ and $W_N$. The estimator~\eqref{eq:LS_est} uses these modified matrices, which now contain independent rows. To see this, recall the definition of $G_T$ and $F_T$ from \eqref{eq:gramians},
\begin{align*}
G_T = \begin{bmatrix} A^{T-1} B & A^{T-2} B & \hdots & B \end{bmatrix} \:, \:\:
F_T = \begin{bmatrix} A^{T-1} & A^{T-2} & \hdots & I_n \end{bmatrix} \:.
\end{align*}
We can unroll the system dynamics and see that
\begin{align} \label{eq:unrolled_state_eq}
x_T = G_T \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{T-1} \end{bmatrix} + F_T \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_{T-1}\end{bmatrix} \:.
\end{align}
Using Gaussian excitation $u_t\sim \mathcal{N}(0,\sigma^2_u I_p)$ gives
\begin{align}\label{eq:row_distribution}
\begin{bmatrix} x_T \\ u_T \end{bmatrix} \sim \mathcal{N}\left(0, \begin{bmatrix} \sigma^2_u G_TG_T^* + \sigma^2_w F_TF_T^* & 0 \\ 0 & \sigma^2_u I_p \end{bmatrix} \right) \:.
\end{align}
Since $F_TF_T^* \succ 0$, as long as both $\sigma_u,\sigma_w$ are positive, this is a non-degenerate distribution.
Therefore, bounding the estimation error can be achieved via proving a result on the error in random design linear regression with vector valued observations.
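The identity \eqref{eq:unrolled_state_eq} can be checked directly in simulation; the sketch below (random illustrative matrices) builds $G_T$ and $F_T$ and compares the unrolled expression against forward simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, T = 3, 2, 5
A = 0.3 * rng.standard_normal((n, n))   # illustrative random matrices
B = rng.standard_normal((n, p))

# G_T and F_T as defined above
G_T = np.hstack([np.linalg.matrix_power(A, T - 1 - k) @ B for k in range(T)])
F_T = np.hstack([np.linalg.matrix_power(A, T - 1 - k) for k in range(T)])

U = rng.standard_normal((T, p))         # inputs u_0, ..., u_{T-1}
W = rng.standard_normal((T, n))         # noise w_0, ..., w_{T-1}
x = np.zeros(n)
for t in range(T):                      # forward simulation from x_0 = 0
    x = A @ x + B @ U[t] + W[t]

x_unrolled = G_T @ U.flatten() + F_T @ W.flatten()
```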
First, we present a lemma which bounds the spectral norm of the product of two independent Gaussian matrices.
\begin{lemma}
\label{lemma:product_gaussians}
Fix a $\delta \in (0, 1)$ and $N \geq 2 \log(1/\delta)$.
Let $f_k \in \ensuremath{\mathbb{R}}^m$, $g_k \in \ensuremath{\mathbb{R}}^n$ be independent random vectors
$f_k \sim \mathcal{N}(0, \Sigma_f)$ and $g_k \sim \mathcal{N}(0, \Sigma_g)$
for $1 \leq k \leq N$. With probability at least $1-\delta$,
\begin{align*}
\bigspectralnorm{ \sum_{k=1}^{N} f_k g_k^* } \leq 4 \spectralnorm{\Sigma_f}^{1/2}\spectralnorm{\Sigma_g}^{1/2} \sqrt{N(m+n)\log(9/\delta)} \:.
\end{align*}
\end{lemma}
We believe this bound to be standard, and include a proof in the appendix for completeness.
Lemma~\ref{lemma:product_gaussians} shows that if $X$ is $n_1 \times N$ with i.i.d. $\mathcal{N}(0, 1)$ entries
and $Y$ is $N \times n_2$ with i.i.d. $\mathcal{N}(0, 1)$ entries, and $X$ and $Y$ are independent,
then with probability at least $1-\delta$ we have
\begin{align*}
\spectralnorm{ X Y } \leq 4 \sqrt{N(n_1 + n_2) \log(9/\delta)} \:.
\end{align*}
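The following numpy sketch spot-checks this specialization of Lemma~\ref{lemma:product_gaussians} on a single draw (the bound is loose in practice, so a violation is very unlikely):

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, N, delta = 20, 20, 100, 0.05
X = rng.standard_normal((n1, N))
Y = rng.standard_normal((N, n2))

lhs = np.linalg.norm(X @ Y, 2)                        # spectral norm of the product
bound = 4 * np.sqrt(N * (n1 + n2) * np.log(9 / delta))
```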
Next, we state a standard nonasymptotic bound on
the minimum singular value of a standard Wishart matrix
(see e.g. Corollary 5.35 of~\cite{vershynin10}).
\begin{lemma}
\label{lemma:wishart_lower_bound}
Let $X \in \ensuremath{\mathbb{R}}^{N \times n}$ have i.i.d. $\mathcal{N}(0, 1)$ entries.
With probability at least $1-\delta$,
\begin{align*}
\sqrt{\lambda_{\min}(X^* X)} \geq \sqrt{N} - \sqrt{n} - \sqrt{2\log(1/\delta)} \:.
\end{align*}
\end{lemma}
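A quick numerical spot-check of Lemma~\ref{lemma:wishart_lower_bound} on a single draw:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, delta = 400, 25, 0.01
X = rng.standard_normal((N, n))

smin = np.sqrt(np.min(np.linalg.eigvalsh(X.T @ X)))   # minimum singular value of X
bound = np.sqrt(N) - np.sqrt(n) - np.sqrt(2 * np.log(1 / delta))
```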
We combine the previous lemmas into a statement on the error of random design regression.
\begin{lemma}
\label{lemma:random_design_bound}
Let $z_1, ..., z_N \in \ensuremath{\mathbb{R}}^{n}$ be i.i.d. from $\mathcal{N}(0, \Sigma)$ with $\Sigma$ invertible.
Let $Z^* := \begin{bmatrix} z_1 & ... & z_N \end{bmatrix}$.
Let $W \in \ensuremath{\mathbb{R}}^{N \times p}$ with each entry i.i.d. $\mathcal{N}(0, \sigma_w^2)$ and independent of $Z$.
Let $E := (Z^* Z)^{\dag} Z^* W$, and
suppose that
\begin{align}
N \geq 8n + 16 \log(2/\delta) \label{eq:N_assumption} \:.
\end{align}
For any fixed matrix $Q$, we have
with probability at least $1-\delta$,
\begin{align*}
\spectralnorm{QE} \leq 16 \sigma_w \spectralnorm{Q \Sigma^{-1/2}} \sqrt{\frac{(n+p)\log(18/\delta)}{N}} \:.
\end{align*}
\end{lemma}
\begin{proof}
First, observe that $Z$ is equal in distribution to
$X \Sigma^{1/2}$, where $X \in \ensuremath{\mathbb{R}}^{N \times n}$ has i.i.d. $\mathcal{N}(0, 1)$ entries.
By Lemma~\ref{lemma:wishart_lower_bound}, with probability at least $1-\delta/2$,
\begin{align*}
\sqrt{\lambda_{\min}(X^* X)} \geq \sqrt{N} - \sqrt{n} - \sqrt{2\log(2/\delta)} \geq \sqrt{N}/2 \:.
\end{align*}
The last inequality uses \eqref{eq:N_assumption} combined with the inequality
$(a+b)^2 \leq 2(a^2 + b^2)$.
Furthermore, by Lemma~\ref{lemma:product_gaussians} and \eqref{eq:N_assumption},
with probability at least $1-\delta/2$,
\begin{align*}
\spectralnorm{X^* W} \leq 4 \sigma_w \sqrt{N(n+p)\log(18/\delta)} \:.
\end{align*}
Let $\mathcal{E}$ denote the event which is the intersection
of the two previous events.
By a union bound, $\Pr(\mathcal{E}) \geq 1-\delta$.
We continue the rest of the proof assuming the event $\mathcal{E}$ holds.
Since $X^* X$ is invertible,
\begin{align*}
QE = Q(Z^* Z)^{\dag} Z^* W = Q (\Sigma^{1/2} X^* X \Sigma^{1/2})^{\dag} \Sigma^{1/2} X^* W = Q \Sigma^{-1/2} (X^* X)^{-1} X^* W \:.
\end{align*}
Taking operator norms on both sides,
\begin{align*}
\spectralnorm{QE} \leq \spectralnorm{Q \Sigma^{-1/2}} \spectralnorm{(X^* X)^{-1}} \spectralnorm{ X^* W} = \spectralnorm{Q \Sigma^{-1/2}} \frac{\spectralnorm{X^* W}}{\lambda_{\min}(X^* X)} \:.
\end{align*}
Combining the inequalities above,
\begin{align*}
\frac{\spectralnorm{X^* W}}{\lambda_{\min}(X^* X)} \leq 16\sigma_w \sqrt{\frac{(n+p)\log(18/\delta)}{N}} \:.
\end{align*}
The result now follows.
\end{proof}
Using this result on random design linear regression, we are now ready to analyze the estimation errors of the identification in Algorithm~\ref{alg:independent_estimation} and provide a proof of Proposition~\ref{prop:independent_estimation}.
\begin{proof}
Consider the least squares estimation error \eqref{eq:est_error} with the modified single-sample-per-rollout matrices.
Recall that the rows of the design matrix $Z_N$ are distributed as independent normals, as in \eqref{eq:row_distribution}. Then applying Lemma~\ref{lemma:random_design_bound} with $Q_A = \begin{bmatrix} I_n & 0 \end{bmatrix}$, so that $Q_A E$ isolates the error in the estimate of $A$, we conclude
that with probability at least $1 - \delta/2$,
\begin{align}
\spectralnorm{\widehat{A} - A} \leq \frac{16 \sigma_w}{\sqrt{\lambda_{\min}(\sigma_u^2 G_TG_T^* + \sigma_w^2 F_TF_T^*)}}\sqrt{\frac{(n + 2p)\log(36/\delta)}{N}} \:,
\end{align}
as long as $N \geq 8(n + p) + 16\log(4/\delta)$.
Now applying Lemma~\ref{lemma:random_design_bound} under the same condition on $N$ with
$Q_B = \begin{bmatrix} 0 & I_p \end{bmatrix}$, we have
with probability at least $1 - \delta/2$,
\begin{align}
\spectralnorm{\widehat{B} - B} \leq \frac{16\sigma_w}{\sigma_u}\sqrt{\frac{(n + 2p)\log(36/\delta)}{N}} \:.
\end{align}
The result follows by application of the union bound.
\end{proof}
There are several interesting points to make about the guarantees offered by Proposition~\ref{prop:independent_estimation}. First, as mentioned in the introduction, there are $n(n + p)$ parameters to learn and our bound states that we need $O(n + p)$ measurements, each measurement providing $n$ values. Hence, this appears to be an optimal dependence with respect to the parameters $n$ and $p$. Second, note that intuitively, if the system amplifies the control and noise inputs in all directions of the state-space, as captured by the minimum eigenvalues of the control and disturbance Gramians $G_TG_T^*$ or $F_TF_T^*$, respectively, then the system has a larger ``signal-to-noise'' ratio and the system matrix $A$ is easier to estimate.
On the other hand, this measure of the excitability of the system has no impact on learning $B$. Unlike in Fiechter's work~\cite{fiechter1997pac}, we do not need to assume that $G_T G_T^*$ is invertible. As long as the process noise is not degenerate, it will excite all modes of the system.
Finally, we note that Proposition~\ref{prop:independent_estimation} offers a data independent guarantee for the estimation of the parameters $(A,B)$. We can also provide data dependent guarantees, which will be less conservative in practice. The next result shows how we can use the observed states and inputs to obtain more refined confidence sets than the ones offered by Proposition~\ref{prop:independent_estimation}. The proof is deferred to Appendix~\ref{app:data_dependent}.
\begin{proposition}
\label{prop:data_dependent}
Assume we have $N$ independent samples $(y^{(\ell)}, x^{(\ell)}, u^{(\ell)})$ such that
\begin{align*}
y^{(\ell)} = A x^{(\ell)} + B u^{(\ell)} + w^{(\ell)},
\end{align*}
where $w^{(\ell)}$ are i.i.d. $\mathcal{N}(0,\sigma_w^2 I_n)$ and are independent from $x^{(\ell)}$ and $u^{(\ell)}$. Also, let us assume that $N \geq n + p$. Then, with probability $1 - \delta$, we have
\begin{align*}
\begin{bmatrix}
(\widehat{A} - A)^\top \\
(\widehat{B} - B)^\top
\end{bmatrix}
\begin{bmatrix}
(\widehat{A} - A) & (\widehat{B} - B)
\end{bmatrix}
\preceq C(n, p, \delta) \left(\sum_{\ell = 1}^N
\begin{bmatrix}
x^{(\ell)}\\
u^{(\ell)}
\end{bmatrix}
\begin{bmatrix}
(x^{(\ell)})^\top & (u^{(\ell)})^\top
\end{bmatrix}
\right)^{-1},
\end{align*}
where $C(n, p, \delta) = \sigma_w^2 (\sqrt{n + p} + \sqrt{n} + \sqrt{2 \log(1/\delta)})^{2}$.
If the matrix on the right hand side has zero as an eigenvalue, we define the inverse of that eigenvalue to be infinity.
\end{proposition}
Proposition~\ref{prop:data_dependent} is a general result that does not require the inputs $u^{(\ell)}$ to be normally distributed and
it allows the states $x^{(\ell)}$ to be arbitrary as long as all the samples $(y^{(\ell)}, x^{(\ell)}, u^{(\ell)})$ are independent and the process noise
$w^{(\ell)}$ is normally distributed. Nonetheless, both Propositions~\ref{prop:independent_estimation} and \ref{prop:data_dependent} require estimating $(A, B)$ from independent samples. In practice, one would collect rollouts from the system, which consist of many dependent measurements. In that case, using all the data is preferable. Since the guarantees offered in this section do not apply in that case, in the next section we study a different procedure for estimating the size of the estimation error.
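The confidence set of Proposition~\ref{prop:data_dependent} is easy to evaluate numerically. The sketch below (illustrative true parameters and Gaussian covariates, though any covariate distribution is allowed) forms both sides of the semidefinite inequality and checks the eigenvalue gap:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, N, delta = 3, 2, 100, 0.05
sigma_w = 1.0
A = 0.1 * np.ones((n, n))            # illustrative true parameters
B = rng.standard_normal((n, p))

# independent covariates; the proposition allows arbitrary x, u
Xc = rng.standard_normal((N, n))
Uc = rng.standard_normal((N, p))
W = sigma_w * rng.standard_normal((N, n))
Y = Xc @ A.T + Uc @ B.T + W

Z = np.hstack([Xc, Uc])
Theta_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)
Delta = Theta_hat - np.vstack([A.T, B.T])   # stacked errors [(A_hat-A)^T; (B_hat-B)^T]

C = sigma_w**2 * (np.sqrt(n + p) + np.sqrt(n) + np.sqrt(2 * np.log(1 / delta)))**2
gap = C * np.linalg.inv(Z.T @ Z) - Delta @ Delta.T
min_eig_gap = np.min(np.linalg.eigvalsh(gap))    # >= 0 when the bound holds
```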
\subsection{Estimating Model Uncertainty with the Bootstrap}\label{sec:bootstrap}
In the previous sections we offered theoretical guarantees on the performance of the least squares estimation of $A$ and $B$ from independent samples. However, there are two important limitations to using such guarantees in practice to offer upper bounds on $\epsilon_A = \spectralnorm{A - \Ah}$ and $\epsilon_B = \spectralnorm{B - \widehat{B}}$. First, using only one sample per system rollout is empirically less efficient than using all available data for estimation. Second, even optimal statistical analyses often do not recover constant factors that match practice. For purposes of robust control, it is important to obtain upper bounds on $\epsilon_A$ and $\epsilon_B$
that are not too conservative.
Thus, we aim to find $\widehat{\epsilon}_A$ and $\widehat{\epsilon}_B$ such that $\epsilon_A \leq \widehat{\epsilon}_A$ and $\epsilon_B \leq \widehat{\epsilon}_B$ with high probability.
We propose a vanilla bootstrap method for estimating $\widehat{\epsilon}_A$ and $\widehat{\epsilon}_B$.
Bootstrap methods have had a profound impact in both theoretical and applied statistics since their introduction~\cite{efron79}. These methods are used to estimate statistical quantities (e.g. confidence intervals) by sampling synthetic data from an empirical distribution determined by the available data. For the problem at hand we propose the procedure described in Algorithm~\ref{alg:bootstrap}.\footnote{We assume that $\sigma_u$ and $\sigma_w$ are known. Otherwise they
can be estimated from data.}
\begin{center}
\begin{algorithm}[h!]
\caption{Bootstrap estimation of $\epsilon_A$ and $\epsilon_B$}
\begin{algorithmic}[1]
\STATE \textbf{Input:} confidence parameter $\delta$, number of trials $M$, data $\{({x}_{t}^{(i)}, {u}_t^{(i)})\}_{\substack{1 \leq i \leq N\\ 1 \leq t \leq T}}$, and $(\Ah, \widehat{B})$ a minimizer of
$
\sum_{\ell = 1}^N \sum_{t = 0}^{T - 1} \frac{1}{2}\ltwonorm{A{x}_{t}^{(\ell)} + B {u}_{t}^{(\ell)} - {x}_{t + 1}^{(\ell)}}^2.
$
\FOR{$M$ trials}
\FOR{$\ell$ from $1$ to $N$}
\STATE $\widehat{{x}}_0^{(\ell)} = {x}_0^{(\ell)}$
\FOR{$t$ from $0$ to $T - 1$}
\STATE $\widehat{{x}}_{t + 1}^{(\ell)} = \Ah \widehat{{x}}_t^{(\ell)} + \widehat{B} \widehat{{u}}_t^{(\ell)} + \widehat{{w}}_t^{(\ell)}$ with $\widehat{{w}}_t^{(\ell)} \stackrel{\mathclap{\text{\scriptsize{ \tiny i.i.d.}}}}{\sim} \mathcal{N}(0, \sigma_w^2 I_n)$ and $\widehat{{u}}_t^{(\ell)} \stackrel{\mathclap{\text{\scriptsize{ \tiny i.i.d.}}}}{\sim} \mathcal{N}(0, \sigma_u^2 I_p)$.
\ENDFOR
\ENDFOR
\STATE $(\widetilde{A}, \widetilde{B}) \in \arg\min_{(A,B)} \sum_{\ell = 1}^N \sum_{t = 0}^{T - 1} \frac{1}{2} \ltwonorm{ A\widehat{{x}}_{t}^{(\ell)} + B\widehat{{u}}_{t}^{(\ell)} - \widehat{{x}}_{t + 1}^{(\ell)}}^2$.
\STATE record $\widetilde{\epsilon}_A = \ltwonorm{\Ah - \widetilde{A}}$ and $\widetilde{\epsilon}_B = \ltwonorm{\widehat{B} - \widetilde{B}}$.
\ENDFOR
\STATE \textbf{Output:} $\widehat{\epsilon}_A$ and $\widehat{\epsilon}_B$, the $100(1- \delta)$th percentiles of the $\widetilde{\epsilon}_A$'s and the $\widetilde{\epsilon}_B$'s.
\end{algorithmic}
\label{alg:bootstrap}
\end{algorithm}
\end{center}
For $\widehat{\epsilon}_A$ and $\widehat{\epsilon}_B$ estimated by Algorithm~\ref{alg:bootstrap} we intuitively have
\begin{align*}
\mathbb{P}(\spectralnorm{A - \Ah} \leq \widehat{\epsilon}_A) \approx 1 - \delta \quad \text{and} \quad \mathbb{P}(\spectralnorm{B - \widehat{B}} \leq \widehat{\epsilon}_B) \approx 1 - \delta.
\end{align*}
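A minimal numpy sketch of Algorithm~\ref{alg:bootstrap} follows, with an illustrative system and a small number of bootstrap trials $M$ to keep the example fast; in practice $M$ would be much larger:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, T, N, M, delta = 2, 2, 6, 50, 100, 0.05
A_true = 0.5 * np.eye(n)             # illustrative system
B_true = np.eye(n, p)
sigma_u = sigma_w = 1.0

def rollout(A, B, rng):
    # one T-step rollout from x_0 = 0 with Gaussian inputs and noise
    Z, X = [], []
    x = np.zeros(n)
    for _ in range(T):
        u = sigma_u * rng.standard_normal(p)
        Z.append(np.concatenate([x, u]))
        x = A @ x + B @ u + sigma_w * rng.standard_normal(n)
        X.append(x)
    return np.array(Z), np.array(X)

def fit(data):
    # full-data least squares over a list of (Z, X) rollouts
    Theta, *_ = np.linalg.lstsq(
        np.vstack([d[0] for d in data]), np.vstack([d[1] for d in data]), rcond=None)
    return Theta[:n].T, Theta[n:].T

data = [rollout(A_true, B_true, rng) for _ in range(N)]
A_hat, B_hat = fit(data)

# parametric bootstrap: resample synthetic rollouts from the fitted model
eps_A, eps_B = [], []
for _ in range(M):
    boot = [rollout(A_hat, B_hat, rng) for _ in range(N)]
    A_tilde, B_tilde = fit(boot)
    eps_A.append(np.linalg.norm(A_hat - A_tilde, 2))
    eps_B.append(np.linalg.norm(B_hat - B_tilde, 2))

eps_A_hat = np.quantile(eps_A, 1 - delta)
eps_B_hat = np.quantile(eps_B, 1 - delta)
```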
There are many known guarantees for
the bootstrap, particularly for the parametric version we use. We do not
discuss these results here; for more details see texts by Van Der Vaart and
Wellner~\cite{van1996weak}, Shao and Tu~\cite{shao2012jackknife}, and
Hall~\cite{hall2013bootstrap}. Instead, in
Appendix~\ref{sec:bootstrap-experiments} we show empirically the performance of
the bootstrap for our estimation problem. For mission critical systems, where empirical validation is insufficient, the statistical error bounds presented in Section~\ref{sec:th_bounds} give guarantees on the size of $\epsilon_A$, $\epsilon_B$. In general, data dependent error guarantees will be less conservative. In follow up work we offer guarantees similar to the ones presented in Section~\ref{sec:th_bounds} for estimation of linear dynamics from dependent data \cite{simchowitz18}.
\subsection{Estimation of Example System}
We focus experiments on a particular example system. Consider the LQR problem instance specified by
\begin{align} \label{eq:exampledynamics}
A = \begin{bmatrix} 1.01 & 0.01 & 0\\
0.01 & 1.01 & 0.01\\
0 & 0.01 & 1.01\end{bmatrix}, ~~ B = I, ~~ Q = 10^{-3} I, ~~ R =I \:.
\end{align}
The dynamics correspond to a marginally unstable graph Laplacian system where adjacent nodes are weakly connected, each node receives direct input, and input size is penalized relatively more than state. Dynamics described by graph Laplacians arise naturally in consensus and distributed averaging problems. For this system, we perform the full data identification procedure in \eqref{eq:ls_problem}, using inputs with variance $\sigma_u^2=1$ and noise with variance $\sigma_w^2=1$. The errors are estimated via the bootstrap (Algorithm \ref{alg:bootstrap}) using $M=2,000$ trials and confidence parameter $\delta = 0.05$.
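The claimed marginal instability, and the excitation term $\lambda_{\min}(\sigma_u^2 G_TG_T^* + \sigma_w^2 F_TF_T^*)$ appearing in the estimation bound, can be computed directly for this instance:

```python
import numpy as np

A = np.array([[1.01, 0.01, 0.00],
              [0.01, 1.01, 0.01],
              [0.00, 0.01, 1.01]])
B = np.eye(3)
sigma_u = sigma_w = 1.0
T = 6

spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))   # slightly above 1

G_T = np.hstack([np.linalg.matrix_power(A, T - 1 - k) @ B for k in range(T)])
F_T = np.hstack([np.linalg.matrix_power(A, T - 1 - k) for k in range(T)])
excitation = np.min(np.linalg.eigvalsh(
    sigma_u**2 * G_T @ G_T.T + sigma_w**2 * F_T @ F_T.T))
```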
The behavior of the least squares estimates and the bootstrap error estimates are illustrated in Figure \ref{fig:eps_v_rollout}. The rollout length is fixed to $T=6$, and the number of rollouts used in the estimation is varied. As expected, increasing the number of rollouts corresponds to decreasing errors. For large enough $N$, the bootstrapped error estimates are of the same order of magnitude as the true errors. In Appendix~\ref{app:eps_v_trial_figs} we show plots for the setting in which the number of rollouts is fixed to $N=6$ while the rollout length is varied.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\caption{\small Least Squares Estimation Errors}
\centerline{\includegraphics[width=\columnwidth]{./plots/eps_v_rollout.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\caption{\small Accuracy of Bootstrap Error Estimates}
\centerline{\includegraphics[width=\columnwidth]{./plots/epserror_v_rollout.pdf}}
\end{subfigure}
\caption{\small The resulting errors from 100 repeated least squares identification experiments with rollout length $T=6$ are plotted against the number of rollouts. In (a), the median of the least squares estimation errors decreases with $N$. In (b), the ratio of the bootstrap estimates to the true errors hovers around 2. Shaded regions display quartiles.}
\label{fig:eps_v_rollout}
\end{figure}
\subsection{Controller Synthesis on Estimated System}
\label{sec:synthesis-experiments}
Using the estimates of the system in~\eqref{eq:exampledynamics}, we synthesize controllers using two robust control schemes: the convex problem in~\eqref{eq:robustFIRbnd} with filters of length $L=32$ and $V$ set to $0$, and the common Lyapunov (CL) relaxation of the static synthesis problem~\eqref{eq:robustLQRbnd-static}.
Once the FIR responses $\{ \Phi_x(k) \}_{k=1}^{L}$ and
$\{ \Phi_u(k) \}_{k=1}^{L}$ are found, we need a way to implement the system responses as a controller.
We represent the dynamic controller $\tf K = \tf \Phi_u \tf \Phi_x^{-1}$ by finding
an equivalent state-space realization $(A_K, B_K, C_K, D_K)$ via Theorem 2 of \cite{anderson17}. In what follows, we compare the performance of these controllers with the nominal LQR controller (the solution to \eqref{eq:lqr-classic} with $\widehat{A}$ and $\widehat{B}$ as model parameters), and explore the trade-off between robustness, complexity, and performance.
The relative performance of the nominal controller is compared with robustly synthesized controllers in Figure \ref{fig:perf_v_rollout}. For both robust synthesis procedures, two controllers are compared: one using the true errors on $A$ and $B$, and the other using the bootstrap estimates of the errors. The robust static controller generated via the common Lyapunov approximation performs slightly worse than the more complex FIR controller, but it still achieves reasonable control performance. Moreover, the conservative bootstrap estimates also result in worse control performance, but the degradation of performance is again modest.
Furthermore, the experiments show that the nominal controller often outperforms the robust controllers \emph{when it is stabilizing.} On the other hand, the nominal controller is not guaranteed to stabilize the true system, and as shown in Figure \ref{fig:perf_v_rollout}, it only does so in roughly 80 of the 100 instances after $N=60$ rollouts. It is also important to note a distinction between stabilization for nominal and robust controllers. When the nominal controller is not stabilizing, there is no indication to the user (though sufficient conditions for stability can be checked using our result in Corollary~\ref{lemma:robust-sls} or structured singular value methods~\cite{qiu1995formula}). On the other hand, the robust synthesis procedure will return as infeasible, alerting the user by default that the uncertainties are too high. We observe similar results when we fix the number of trials but vary the rollout length. These figures are provided in Appendix~\ref{app:eps_v_trial_figs}.
Figure \ref{fig:FIR} explores the trade-off between performance and complexity for the computational approximations, both for FIR truncation and the common Lyapunov relaxation. We examine the tradeoff both in terms of the bound on the LQR cost (given by the value of the objective) as well as the actual achieved value. It is interesting that for smaller numbers of rollouts (and therefore larger uncertainties), the benefit of using more complex FIR models is negligible, both in terms of the actual costs and the upper bound. This trend makes sense: as uncertainties decrease to zero, the best robust controller should approach the nominal controller, which is associated with infinite impulse response (IIR) transfer functions. Furthermore, for the experiments presented here, FIR length of $L = 32$ seems to be sufficient to characterize the performance of the robust synthesis procedure in~\eqref{eq:robustLQRbnd}. Additionally, we note that static controllers are able to achieve costs of a similar magnitude.
The SLS framework guarantees a stabilizing controller for the true system provided that the computational approximations are feasible for \emph{any} value of $\gamma$ between 0 and 1, as long as the system errors $(\epsilon_A,\epsilon_B)$ are upper bounds on the true errors. Figure \ref{fig:perf_v_rollout_relaxed} displays the controller performance for robust synthesis when $\gamma$ is set to 0.999. Simply ensuring a stable model and neglecting to optimize the nominal cost yields controllers that perform nearly an order of magnitude better than those where we search for the optimal value of $\gamma$. This observation aligns with common practice in robust control: constraints ensuring stability are only active when the cost tries to drive the system up against a safety limit. We cannot provide end-to-end sample complexity guarantees for this method and leave such bounds as an enticing challenge for future work.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\caption{\small LQR Cost Suboptimality}
\centerline{\includegraphics[width=\columnwidth]{./plots/subopt_v_rollout.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\caption{\small Frequency of Finding Stabilizing Controller}
\centerline{\includegraphics[width=\columnwidth]{./plots/stabilize_v_rollout.pdf}}
\end{subfigure}
\caption{\small The performance of controllers synthesized on the results of the 100 identification experiments is plotted against the number of rollouts. Controllers are synthesized nominally, using FIR truncation, and using the common Lyapunov (CL) relaxation. In (a), the median suboptimality of nominal and robustly synthesized controllers is compared, with shaded regions displaying quartiles, which go off to infinity in the case that a stabilizing controller was not found. In (b), we plot the frequency with which the synthesis methods found stabilizing controllers.}
\label{fig:perf_v_rollout}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\caption{\small LQR Cost Suboptimality Bound}
\centerline{\includegraphics[width=\columnwidth]{./plots/subopt_bound_v_rollout_FIR.pdf}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\caption{\small LQR Cost Suboptimality}
\centerline{\includegraphics[width=\columnwidth]{./plots/subopt_v_rollout_FIR.pdf}}
\end{subfigure}
\caption{\small The performance of controllers synthesized with varying FIR filter lengths on the results of 10 of the identification experiments using true errors. The median suboptimality of robustly synthesized controllers does not appear to change for FIR lengths greater than 32, and the common Lyapunov (CL) synthesis tracks the performance in both upper bound and actual cost.}
\label{fig:FIR}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centerline{\small LQR Cost Suboptimality}
\centerline{\includegraphics[width=\columnwidth]{./plots/subopt_v_rollout_fixedtau.pdf}}
\end{subfigure}
\caption{\small The performance of controllers synthesized on the results of 100 identification experiments is plotted against the number of rollouts. The plot compares the median suboptimality of nominal controllers with fixed-$\gamma$ robustly synthesized controllers ($\gamma = 0.999$).}
\label{fig:perf_v_rollout_relaxed}
\end{figure}
\section{Proof of Theorem \ref{thm:FIR_subopt}}
\label{app:FIR}
To understand the effect of restricting the optimization to FIR transfer functions we need to understand the decay of the transfer functions $\Res{\Ah+\widehat{B}K_\star}$ and $K_\star \Res{\Ah+\widehat{B}K_\star}$. To this end we consider $C_\star > 0$ and $\rho_\star \in (0, 1)$ such that $\| (A + B K_\star)^t \|_2 \leq C_\star \rho_\star^t$ for all $t \geq 0$. Such $C_\star$ and $\rho_\star$ exist because $K_\star$ stabilizes the system $(A, B)$. The next lemma quantifies how well $K_\star$ stabilizes the system $(\Ah, \widehat{B})$ when the estimation error is small.
\begin{lemma}
\label{lem:simple_perturbation}
Suppose $\epsilon_A + \epsilon_B \|K_\star\|_2 \leq \frac{1 - \rho_\star}{2 C_\star}$. Then,
\begin{align*}
\| (\Ah + \widehat{B}K_\star)^t \|_2 \leq C_\star \left( \frac{1 + \rho_\star}{2} \right)^t \; \text{, for all } \; t \geq 0.
\end{align*}
\end{lemma}
\begin{proof}
The claim is obvious when $t = 0$. Fix an integer $t \geq 1$ and denote $M = A + B K_\star$. Then,
if $\Delta = \Delta_A + \Delta_B K_\star$, we have $\Ah + \widehat{B} K_\star = M + \Delta$.
Consider the expansion of $(M+\Delta)^t$ into $2^t$ terms.
Label all these terms as $T_{i,j}$ for $i=0, ..., t$ and $j=1, ..., {t \choose i}$
where $i$ denotes the degree of $\Delta$ in the term.
Using the fact that $\norm{M^s}_2 \leq C_\star \rho_\star^s$ for all $s \geq 0$, we have
$\norm{T_{i,j}}_2 \leq C_\star^{i+1} \rho_\star^{t-i} \norm{\Delta}_2^i$.
Hence by triangle inequality:
\begin{align*}
\norm{(M + \Delta)^t}_2 &\leq \sum_{i=0}^{t} \sum_{j} \norm{T_{i,j}}_2 \\
&\leq \sum_{i=0}^{t} {t \choose i} C_\star^{i+1} \rho_\star^{t-i} \norm{\Delta}^i_2 \\
&= C_\star \sum_{i=0}^{t} {t \choose i} (C_\star \norm{\Delta}_2)^i \rho_\star^{t-i} \\
&= C_\star (C_\star \norm{\Delta}_2 + \rho_\star)^t \\
&\leq C_\star \left(\frac{1 + \rho_\star}{2} \right)^t \:,
\end{align*}
where the last inequality uses the fact $\norm{\Delta}_2 \leq \epsilon_A + \epsilon_B \norm{K_\star}_2 \leq \frac{ 1 -\rho_\star}{2C_\star}$.
\end{proof}
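As a sanity check, the geometric decay guaranteed by Lemma \ref{lem:simple_perturbation} is easy to verify numerically. The sketch below uses a hypothetical $2\times 2$ stand-in for $A + BK_\star$, fits $C_\star$ empirically for a chosen $\rho_\star$, and draws a perturbation of exactly the norm the lemma allows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable closed-loop matrix standing in for A + B K_star.
M = np.array([[0.5, 0.2],
              [0.0, 0.4]])

# Pick rho_star in (spectral radius, 1) and fit C_star empirically so that
# ||M^t||_2 <= C_star * rho_star^t over the horizon considered.
rho_star = 0.6
powers = [np.linalg.matrix_power(M, t) for t in range(30)]
C_star = max(np.linalg.norm(P, 2) / rho_star**t for t, P in enumerate(powers))

# Perturbation with ||Delta||_2 = (1 - rho_star) / (2 C_star), the largest
# size the lemma permits.
Delta = rng.standard_normal((2, 2))
Delta *= (1 - rho_star) / (2 * C_star) / np.linalg.norm(Delta, 2)

Mp = M + Delta  # plays the role of Ahat + Bhat K_star
bound_ok = all(
    np.linalg.norm(np.linalg.matrix_power(Mp, t), 2)
    <= C_star * ((1 + rho_star) / 2) ** t + 1e-9
    for t in range(30)
)
print(bound_ok)
```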
For the remainder of this discussion, we use the following notation to denote the restriction of a system response to its first $L$ time-steps:
\begin{equation}
\tf\Phi_{x}(1:L) = \sum_{t=1}^L \frac{1}{z^t}\Phi_x(t), \ \tf\Phi_{u}(1:L) = \sum_{t=1}^L \frac{1}{z^t}\Phi_u(t).
\label{eq:FIR_restriction}
\end{equation}
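In code, a system response is just its sequence of impulse-response matrices, so the restriction \eqref{eq:FIR_restriction} amounts to truncating that sequence. The sketch below (hypothetical $2\times 2$ closed-loop matrix) also computes the $\ell_1$ tail sum, which upper bounds the $\mathcal{H}_\infty$ norm of the discarded tail and decays geometrically for a stable matrix.

```python
import numpy as np

# Hypothetical stable closed-loop matrix; Phi_x(t) = M^(t-1) for t = 1, 2, ...
M = np.array([[0.5, 0.1],
              [0.0, 0.3]])
L = 8

def resolvent_coeffs(M, horizon):
    """First `horizon` impulse-response matrices of (zI - M)^{-1}."""
    return [np.linalg.matrix_power(M, t - 1) for t in range(1, horizon + 1)]

phi_full = resolvent_coeffs(M, 40)
phi_fir = phi_full[:L]          # the restriction Phi_x(1:L)

# The H-infinity norm of the discarded tail is at most this l1 tail sum.
tail = sum(np.linalg.norm(P, 2) for P in phi_full[L:])
print(len(phi_fir), tail)
```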
To prove Theorem \ref{thm:FIR_subopt} we must relate the optimal controller $K_\star$ with the optimal solution of the optimization problem \eqref{eq:robustFIRbnd}. In the next lemma we use $K_\star$ to construct a feasible solution for problem \eqref{eq:robustFIRbnd}. As before, we denote $\zeta = (\epsilon_A + \epsilon_B \norm{K_\star}_2) \hinfnorm{\Res{A + B K_\star}}$.
\begin{lemma}\label{lem:FIR_feasibility}
Set $\alpha = 1/2$ in problem \eqref{eq:robustFIRbnd}, and assume that $\epsilon_A + \epsilon_B \|K_\star\|_2 \leq \frac{1 - \rho_\star}{2 C_\star}$, $\zeta < 1/5$, and
\begin{equation}
L \geq \frac{4 \log\left(\frac{C_\star}{\zeta} \right)}{1 - \rho_\star}.
\label{eq:L-feasible}
\end{equation}
Then, optimization problem \eqref{eq:robustFIRbnd} is feasible, and the following is one such feasible solution:
\begin{equation}
\widetilde{\tf\Phi}_x = \Res{\widehat{A}+\widehat{B}K_\star}(1:L),~~ \widetilde{\tf \Phi}_u = K_\star\Res{\widehat{A}+\widehat{B}K_\star}(1:L),~~\widetilde{V}= - \Res{\widehat{A}+\widehat{B}K_\star}(L+1),~~\tilde \gamma = \frac{4 \zeta}{1 - \zeta}.
\end{equation}
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:simple_perturbation} and the assumption on $\zeta$ we have that $\norm{(\Ah + \widehat{B} K_\star)^t}_2 \leq C_\star \left(\frac{1 + \rho_\star}{2} \right)^t$ for all $t \geq 0$. In particular, since $\Res{\Ah + \widehat{B} K_\star} (L + 1) = (\Ah + \widehat{B} K_\star)^{L} $, we have $\norm{\widetilde V} = \norm{(\Ah + \widehat{B} K_\star)^L } \leq C_\star \left(\frac{1 + \rho_\star}{2} \right)^L \leq \zeta$. The last inequality is true because we assumed $L$ is sufficiently large.
Once again, since $\Res{\Ah + \widehat{B} K_\star} (L + 1) = (\Ah + \widehat{B} K_\star)^{L} $, it can be easily seen that our choice of $\widetilde {\tf \Phi}_x$, $\widetilde {\tf \Phi}_u$, and $\widetilde V$ satisfy the linear constraint of problem \eqref{eq:robustFIRbnd}. It remains to prove that
\begin{align*}
\sqrt{2}\bighinfnorm{\begin{bmatrix}{\epsilon_A}{\widetilde{\tf\Phi}_x}\\ {\epsilon_B}{\widetilde{\tf\Phi}_u}\end{bmatrix}} + \twonorm{\widetilde V} \leq \tilde \gamma < 1.
\end{align*}
The second inequality holds because of our assumption on $\zeta$. We already know that $ \twonorm{\widetilde V} \leq \zeta$. Now, we bound:
\begin{align*}
\bighinfnorm{\begin{bmatrix}{\epsilon_A}{\widetilde{\tf\Phi}_x}\\ {\epsilon_B}{ \widetilde{\tf\Phi}_u}\end{bmatrix}} &\leq (\epsilon_A +\epsilon_B\twonorm{K_\star})\hinfnorm{\Res{\Ah + \widehat{B}K_\star}(1:L)} \\
&\leq (\epsilon_A +\epsilon_B \twonorm{K_\star})(\hinfnorm{\Res{\Ah + \widehat{B}K_\star}} + \hinfnorm{\Res{\Ah + \widehat{B}K_\star} (L + 1 : \infty)}).
\end{align*}
These inequalities follow from the definition of $(\widetilde{\tf\Phi}_x, \widetilde{\tf \Phi}_u)$ and the triangle inequality.
Now, we recall that $\Res{\Ah + \widehat{B} K_\star} = \Res{A + B K_\star} (I + \tf \Delta)^{-1}$, where $\tf \Delta = - (\Delta_A + \Delta_B K_\star) \Res{A + B K_\star}$. Then, since $\hinfnorm{\tf \Delta} \leq \zeta$ (due to Proposition \ref{prop:bound}), we have $\hinfnorm{\Res{\Ah + \widehat{B} K_\star}} \leq \frac{1}{1 - \zeta} \hinfnorm{\Res{A + B K_\star}}$.
We can upper bound
\begin{align*}
\hinfnorm{\Res{\Ah + \widehat{B}K_\star}(L + 1 : \infty)} \leq \sum_{t = L + 1}^\infty \twonorm{\Res{\Ah + \widehat{B} K_\star}({t})} \leq C_\star \left(\frac{1 + \rho_\star}{2}\right)^L \sum_{t = 0}^\infty \left( \frac{1 + \rho_\star}{2}\right)^t = \frac{2 C_\star}{1 - \rho_\star} \left(\frac{1 + \rho_\star}{2}\right)^L.
\end{align*}
Then, since we assumed that $\epsilon_A$ and $\epsilon_B$ are sufficiently small and that $L$ is sufficiently large, we obtain
\begin{align*}
(\epsilon_A +\epsilon_B \twonorm{K_\star}) \hinfnorm{\Res{\Ah + \widehat{B}K_\star}(L + 1 : \infty)} \leq \zeta.
\end{align*}
Therefore,
\begin{align*}
\bighinfnorm{\begin{bmatrix}{\epsilon_A}{\widetilde{\tf\Phi}_x}\\ {\epsilon_B}{ \widetilde{\tf\Phi}_u}\end{bmatrix}} &\leq \frac{\zeta}{1 - \zeta} + \zeta \leq \frac{2\zeta}{1 - \zeta}.
\end{align*}
The conclusion follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:FIR_subopt}]
As all of the assumptions of Lemma \ref{lem:FIR_feasibility} are satisfied, optimization problem \eqref{eq:robustFIRbnd} is feasible. We denote by $(\tf \Phi_x^\star, \tf \Phi_u^\star, V_\star, \gamma_\star)$ its optimal solution, and define
\[
\hat{\tf{\Delta}} := \Delta_A \tf \Phi_{x}^\star + \Delta_B \tf \Phi_{u}^\star + \frac{1}{z^L}V_\star.
\]
Then, we have
\begin{align*}
\begin{bmatrix}
zI - A & - B
\end{bmatrix}
\begin{bmatrix}
\tf \Phi_{x}^\star \\ \tf \Phi_{u}^\star
\end{bmatrix} = I + \hat{\tf{\Delta}}.
\end{align*}
Applying the triangle inequality, and leveraging Proposition \ref{prop:bound}, we can verify that
\[
\hinfnorm{\hat{\tf{\Delta}}} \leq \sqrt2\bighinfnorm{\begin{bmatrix}\epsilon_A \tf\Phi_{x}^\star \\ \epsilon_B \tf \Phi_{u}^\star \end{bmatrix}} + \twonorm{V_\star} \leq \gamma_\star < 1,
\]
where the last two inequalities are true because the optimal solution is a feasible point of the optimization problem \eqref{eq:robustFIRbnd}.
We now apply Lemma \ref{lemma:robust-sls} to characterize the response achieved by the FIR approximate controller $\tf K_L$ on the true system $(A,B)$:
\begin{align*}
J(A,B,\tf K_L) &= \bightwonorm{\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2} \end{bmatrix} \begin{bmatrix}\tf \Phi_{x}^\star\\ \tf \Phi_{u}^\star \end{bmatrix}(I+\hat{\tf{\Delta}})^{-1} }\\
& \leq \frac{1}{1-\gamma_\star}\bightwonorm{\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2} \end{bmatrix} \begin{bmatrix}{\tf \Phi_{x}^\star}\\{\tf \Phi_{u}^\star}\end{bmatrix}}.
\end{align*}
Denote by $(\widetilde{\tf\Phi}_x, \widetilde{\tf\Phi}_u, \widetilde{V}, \tilde{\gamma})$ the feasible solution constructed in Lemma \ref{lem:FIR_feasibility}, and let $J_L( \Ah , \widehat{B}, K_\star)$ denote the truncation of the LQR cost achieved by controller $K_\star$ on system $(\Ah, \widehat{B})$ to its first $L$ time-steps.
Then,
\begin{align*}
\frac{1}{1-\gamma_\star}\bightwonorm{\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2} \end{bmatrix} \begin{bmatrix}{\tf \Phi_{x}^\star}\\{\tf \Phi_{u}^\star}\end{bmatrix}}
&\leq \frac{1}{1- \tilde{\gamma}}\bightwonorm{\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0& R^\frac{1}{2} \end{bmatrix} \begin{bmatrix}{\widetilde{\tf \Phi}_x}\\{ \widetilde{\tf \Phi}_u}\end{bmatrix}}\\
&= \frac{1}{1- \tilde \gamma} {J_L(\widehat{A},\widehat{B},K_\star)} \\
& \leq \frac{1}{1- \tilde \gamma} {J(\widehat{A},\widehat{B},K_\star)} \\
& \leq \frac{1}{1- \tilde \gamma} \frac{1}{1-\hinfnorm{\tf \Delta}}{J_\star},
\end{align*}
where $\tf \Delta = - (\Delta_A + \Delta_B K_\star) \Res{A + B K_\star}$. The first inequality follows from the optimality of $(\tf \Phi_{x}^\star, \tf\Phi_{u}^\star, V_\star, \gamma_\star)$, the equality and second inequality from the fact that $(\widetilde{\tf\Phi}_x, \widetilde{\tf\Phi}_u)$ are truncations of the response of $K_\star$ on $(\widehat{A},\widehat{B})$ to the first $L$ time steps, and the final inequality follows from arguments similar to those in the proof of Theorem \ref{thm:lqr_cost}, together with an application of Theorem \ref{thm:robust}.
Noting that
\[
\hinfnorm{\tf \Delta} = \bighinfnorm{({\Delta_A}+{\Delta_B}K_\star)\Res{A + B K_\star}} \leq \zeta < 1,
\]
we then have that
\[
{J(A,B,\tf K_L)} \leq \frac{1}{1- \tilde \gamma} \frac{1}{1-\zeta}{J_\star} \:.
\]
Recalling that $\tilde \gamma = \frac{4 \zeta}{1 - \zeta}$, we obtain
\begin{align*}
\frac{J(A,B,\tf K_L) - J_\star}{J_\star} \leq \frac{1 - \zeta}{1- 5\zeta} \frac{1}{1-\zeta} - 1 = \frac{5 \zeta}{(1 - 5\zeta)} \leq 10 \zeta,
\end{align*}
where the last inequality holds because $\zeta \leq 1/10$. The conclusion follows.
\end{proof}
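The closing chain of inequalities is elementary arithmetic; a quick numerical check over the admissible range of $\zeta$:

```python
import numpy as np

# Closing arithmetic of the proof: with gamma_tilde = 4*zeta/(1 - zeta),
#   1/(1 - gamma_tilde) * 1/(1 - zeta) - 1 = 5*zeta/(1 - 5*zeta) <= 10*zeta
# for all zeta in (0, 1/10].
zs = np.linspace(1e-4, 0.1, 200)
lhs = 1.0 / (1.0 - 4 * zs / (1 - zs)) / (1.0 - zs) - 1.0
rhs = 5 * zs / (1 - 5 * zs)
ok = bool(np.allclose(lhs, rhs) and np.all(rhs <= 10 * zs + 1e-12))
print(ok)
```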
\section{Introduction}
\label{sec:intro}
\input{learning-lqr-intro}
\subsection{Related Work}
\label{sec:related}
\input{related}
\section{System Identification through Least-Squares}
\label{sec:estimation}
\input{estimation}
\section{Robust Synthesis}
\label{sec:robust}
\input{robust-synthesis}
\section{Sub-optimality Guarantees}
\label{sec:subopt}
\input{subopt}
\section{Computation}
\label{sec:computation}
\input{computation}
\section{Numerical Experiments}
\label{sec:experiments}
\input{experiments}
\section{Conclusions and Future Work}
\label{sec:conclusions}
\input{conclusions}
\section*{Acknowledgements}
We thank Ross Boczar, Qingqing Huang, Laurent Lessard, Michael Littman, Manfred Morari, Andrew Packard, Anders Rantzer, Daniel Russo, and Ludwig Schmidt for many helpful comments and suggestions. We also thank the anonymous referees for making several suggestions that have significantly improved the paper and its presentation. SD is supported by an NSF Graduate Research Fellowship under Grant No. DGE 1752814. NM is generously funded by grants from the AFOSR and NSF, and by gifts from Huawei and Google. BR is generously supported by NSF award CCF-1359814, ONR awards N00014-14-1-0024 and N00014-17-1-2191, the DARPA Fundamental Limits of Learning (Fun LoL) Program, a Sloan Research Fellowship, and a Google Faculty Award.
\begin{small}
\bibliographystyle{abbrvnat}
\subsection{Problem Statement and Our Contributions}
The standard optimal control problem aims to find a control sequence that minimizes an expected cost. We assume a dynamical system with \emph{state} $x_t\in\ensuremath{\mathbb{R}}^n$ can be acted on by a \emph{control} $u_t \in \ensuremath{\mathbb{R}}^p$ and obeys the stochastic dynamics
\begin{align}
x_{t+1} = f_t(x_t,u_t,w_t)
\end{align}
where $w_t$ is a random process with $w_t$ independent of $w_{t'}$ for all $t\neq t'$. Optimal control then seeks to minimize
\begin{align}
\label{eq:general_control}
\begin{array}{ll}
\mbox{minimize} & \mathbb{E}\left[ \frac{1}{T} \sum_{t=1}^T c_t(x_t,u_t)\right]\\
\mbox{subject to} & x_{t+1} = f_t(x_t,u_t,w_t)
\end{array}\,.
\end{align}
Here, $c_t$ denotes the state-control cost at every time step, { and the input $u_t$ is allowed to depend on the current state $x_t$ and all previous states and actions. In this generality, problem~\eqref{eq:general_control} encapsulates many of the problems considered in the reinforcement learning literature.}
The simplest optimal control problem with continuous state is the Linear Quadratic Regulator (LQR), in which costs are a fixed quadratic function of state and control and the dynamics are linear and time-invariant:
\begin{align}\label{eq:lqr-classic}
\begin{array}{ll}
\mbox{minimize} & \mathbb{E}\left[ \frac{1}{T} \sum_{t=1}^T x_t^* Q x_t + u_{t-1}^* R u_{t-1} \right]\\
\mbox{subject to} & x_{t+1} = A x_t + B u_t + w_t
\end{array}\,.
\end{align}
Here $Q$ (resp. $R$) is an $n \times n$ (resp. $p\times p$) positive definite matrix, $A$ and $B$ are called the \emph{state transition matrices}, and $w_t\in \ensuremath{\mathbb{R}}^n$ is Gaussian noise with zero mean and covariance $\Sigma_w$. Throughout, $M^*$ denotes the Hermitian transpose of the matrix $M$.
In what follows, we will be concerned with the \emph{infinite time horizon}
variant of the LQR problem where we let the time horizon $T$ go to infinity and
minimize the average cost. When the dynamics are known, this problem has a
celebrated closed form solution based on the solution of matrix Riccati
equations~\cite{ZDGBook}. Indeed, the optimal solution sets $u_t = K x_t$ for
a fixed $p \times n$ matrix $K$, and the corresponding
optimal cost will serve as our gold-standard baseline to which we will compare
the achieved cost of all algorithms.
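For concreteness, the closed-form solution can be computed by iterating the discrete-time Riccati recursion to convergence; the system matrices and costs below are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical system and LQR costs for illustration.
A = np.array([[1.01, 0.01],
              [0.00, 1.00]])
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)

# Riccati value iteration:
#   P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA.
P = Q.copy()
for _ in range(2000):
    BtP = B.T @ P
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + BtP @ B, BtP @ A)

# Optimal static gain: u_t = K_star x_t, with A + B K_star stable.
K_star = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
rho = max(abs(np.linalg.eigvals(A + B @ K_star)))
print(rho)
```

In practice one would call a dedicated Riccati solver (e.g. SciPy's `solve_discrete_are`) rather than iterate by hand; the loop above keeps the sketch dependency-free.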
In the case when the state transition matrices are unknown, fewer results have been established
about what cost is achievable. We will assume that we can conduct experiments
of the following form: given some initial state $x_0$, we can evolve the
dynamics for $T$ time steps using any control sequence $\{u_0,\ldots,
u_{T-1}\}$, measuring the resulting output $\{x_1,\ldots, x_T\}$. {If we run
$N$ such independent experiments}, what infinite time horizon control cost is
achievable using only the data collected? For simplicity of bookkeeping, in our analysis we further assume that we can
prepare the system in initial state $x_0 = 0$.
In what follows we will examine the performance of the Coarse-ID control framework in this scenario. We will estimate the errors accrued by least squares estimates $(\widehat{A},\widehat{B})$ of the system dynamics. This estimation error is not easily handled by standard techniques because the design matrix is highly correlated with the model to be estimated. Regardless, for theoretical tractability,
we can build a least squares estimate using only the final sample $(x_{T},x_{T-1},u_{T-1})$ of each of the $N$ experiments. Indeed, in Section~\ref{sec:estimation} we prove the following
\begin{proposition}
\label{prop:independent_estimation}
Define the matrices
\begin{align}
G_T = \begin{bmatrix}
A^{T - 1} B & A^{T - 2}B & \ldots & B
\end{bmatrix} \quad \text{and} \quad
F_T = \begin{bmatrix}
A^{T - 1} & A^{T - 2} & \ldots & I_n
\end{bmatrix} \:. \label{eq:gramians}
\end{align}
Assume we collect data from the linear, time-invariant system initialized at
$x_0 = 0$, using inputs $u_t~\stackrel{\mathclap{\text{\scriptsize{ \tiny i.i.d.}}}}{\sim} \mathcal{N}(0,\sigma_u^2 I_p)$ for $t=1,...,T$. Suppose
that the process noise is $w_t~\stackrel{\mathclap{\text{\scriptsize{ \tiny i.i.d.}}}}{\sim} \mathcal{N}(0,\sigma_w^2 I_n)$ and that
\begin{align*}
N \geq 8(n + p) + 16\log(4/\delta) \:.
\end{align*}
Then, with probability at least $1-\delta$,
the least squares estimator using only the final sample of each trajectory satisfies both
the inequality
\begin{align}\label{eq:A_error_bound}
\ltwonorm{\widehat{A} - A} \leq \frac{16 \sigma_w}{\sqrt{\lambda_{\min}(\sigma_u^2 G_TG_T^* + \sigma_w^2 F_TF_T^*)}}\sqrt{\frac{(n + 2p)\log(36/\delta)}{N}} \:,
\end{align}
and the inequality
\begin{align}\label{eq:B_error_bound}
\ltwonorm{\widehat{B} - B} \leq \frac{16\sigma_w}{\sigma_u}\sqrt{\frac{(n + 2p)\log(36/\delta)}{N}}\:.
\end{align}
\end{proposition}
The details of the estimation procedure are described in Section~\ref{sec:estimation} below. Note that this estimation result seems to yield an optimal dependence in terms of the number of parameters:
$(A,B)$ together have $n(n + p)$ parameters to learn and each measurement consists of $n$ values. Moreover, this proposition further illustrates that not all linear systems are equally easy to estimate. The matrices $G_T G_T^*$ and $F_T F_T^*$ are finite time \emph{controllability Gramians} for the control and noise inputs, respectively. These are standard objects in control: each eigenvalue/vector pair of such a Gramian characterizes how much input energy is required to move the system in that particular direction of the state-space. Therefore $\lambda_{\min}\left(\sigma_u^2G_T G_T^* + \sigma_w^2F_T F_T^*\right)$ quantifies the least controllable, and hence most difficult to excite and estimate, mode of the system. This property is captured nicely in our bound, which indicates that for systems whose modes are all easily excitable (i.e., all modes of the system amplify the applied inputs and disturbances), the identification task becomes easier.
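A minimal simulation of this estimator is sketched below: a hypothetical stable system is rolled out $N$ times from $x_0 = 0$, and only the final transition $(x_{T-1}, u_{T-1}, x_T)$ of each rollout enters the regression, as in Proposition~\ref{prop:independent_estimation}.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, T, N = 2, 1, 6, 2000
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
sigma_u, sigma_w = 1.0, 0.5

# N independent rollouts from x_0 = 0, keeping only the final transition.
X_prev, U_prev, X_new = [], [], []
for _ in range(N):
    x = np.zeros(n)
    for t in range(T):
        u = sigma_u * rng.standard_normal(p)
        x_next = A @ x + B @ u + sigma_w * rng.standard_normal(n)
        if t == T - 1:
            X_prev.append(x); U_prev.append(u); X_new.append(x_next)
        x = x_next

# Least squares: regress x_T on [x_{T-1}; u_{T-1}] to recover [A B].
Z = np.hstack([np.array(X_prev), np.array(U_prev)])   # N x (n + p)
Y = np.array(X_new)                                   # N x n
Theta = np.linalg.lstsq(Z, Y, rcond=None)[0].T        # n x (n + p)
A_hat, B_hat = Theta[:, :n], Theta[:, n:]

eps_A = np.linalg.norm(A_hat - A, 2)
eps_B = np.linalg.norm(B_hat - B, 2)
print(eps_A, eps_B)
```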
While we cannot compute the operator norm error bounds~\eqref{eq:A_error_bound} and~\eqref{eq:B_error_bound} without knowing the true system matrices $(A,B)$, we present a data-dependent bound in Proposition~\ref{prop:data_dependent}. Moreover, as we show in Section~\ref{sec:bootstrap}, a simple bootstrap procedure can efficiently upper bound the errors $\epsilon_A:=\ltwonorm{A-\widehat{A}}$ and $\epsilon_B:=\ltwonorm{B-\widehat{B}}$ from simulation.
With our estimates $(\widehat{A},\widehat{B})$ and error bounds $(\epsilon_A,\epsilon_B)$ in hand, we can turn to the problem of synthesizing a controller.
We can assert with high probability that $A = \widehat{A} + \Delta_A$, and $B = \widehat{B} + \Delta_B$, for $\ltwonorm{\Delta_A} \leq \epsilon_A$ and $\ltwonorm{\Delta_B} \leq \epsilon_B$, where the size of the error terms is determined by the number of samples $N$ collected.
In light of this, it is natural to pose the following robust variant of the standard LQR optimal control problem \eqref{eq:lqr-classic}, which computes a robustly stabilizing controller that seeks to minimize the worst-case performance of the system given the (high-probability) norm bounds on the perturbations $\Delta_A$ and $\Delta_B$:
\begin{equation}
\begin{array}{rl}
\mbox{minimize} \:\: \displaystyle\sup\limits_{\substack{\|\Delta_A\|_2\leq \epsilon_A \\ \|\Delta_B\|_2\leq \epsilon_B}} &
\lim_{T\to \infty} \frac{1}{T}\sum_{t=1}^T \mathbb{E}\left[ x_t^* Q x_t + u_{t-1}^* R u_{t-1} \right]\\
\mbox{subject to} & x_{t+1} = (\hat{A} + \Delta_A) x_t + (\hat{B} + \Delta_B) u_t + w_t
\end{array} \,.
\label{eq:robust_lqr}
\end{equation}
Although classic methods exist for computing such controllers~\cite{feron1997analysis,paganini1995necessary,sznaier2002convex,wu1995optimal}, they typically require solving nonconvex optimization problems, and it is not readily obvious how to extract interpretable measures of controller performance as a function of the perturbation sizes $\epsilon_A$ and $\epsilon_B$. To that end, we leverage the recently developed System Level Synthesis (SLS) framework \cite{SysLevelSyn1} to create an alternative robust synthesis procedure. Described in detail in Section~\ref{sec:robust}, SLS lifts the system description into a higher dimensional space that enables efficient search for controllers. At the cost of some conservatism, we are able to guarantee robust stability of the resulting closed-loop system for all admissible perturbations and bound the performance gap between the resulting controller and the optimal LQR controller. This is summarized in the following proposition.
\begin{proposition}\label{prop:sls-robust-stylized}
Let $(\widehat{A},\widehat{B})$ be estimated via the independent data collection scheme used in Proposition~\ref{prop:independent_estimation} and $\tf \widehat{K}$ synthesized using robust SLS. Let $\widehat{J}$ denote the infinite time horizon LQR cost accrued by using the controller $\tf \widehat{K}$ and $J_\star$ denote the optimal LQR cost achieved when $(A,B)$ are known. Then the relative error in the LQR cost is bounded as
\begin{align}
\frac{\widehat{J} - J_\star}{J_\star} \leq \mathcal{O}\left(\mathcal{C}_{\mathrm{LQR}}\sqrt{\frac{(n + p )\log(1/\delta)}{N}} \right)
\end{align}
with probability $1-\delta$ provided $N$ is sufficiently large.
\end{proposition}
The complexity term $\mathcal{C}_{\mathrm{LQR}}$ depends on the rollout length $T$, the true dynamics, the matrices $(Q,R)$ which define the LQR cost, and the variances $\sigma_u^2$ and $\sigma_w^2$ of the control and noise inputs, respectively. The $1-\delta$ probability comes from the probability of estimation error from Proposition~\ref{prop:independent_estimation}. The particular form of $\mathcal{C}_{\mathrm{LQR}}$ and concrete requirements on $N$ are both provided in Section~\ref{sec:subopt}.
Though the optimization problem formulated by SLS is infinite dimensional, in Section~\ref{sec:computation} we provide two finite dimensional upper bounds on the optimization that inherit the stability guarantees of the SLS formulation. Moreover, we show via numerical experiments in Section \ref{sec:experiments} that the
controllers synthesized by our optimization do indeed provide stabilizing
controllers with small relative error. We further show that settings exist
wherein a na{\"{i}}ve synthesis procedure that ignores the uncertainty in the
state-space parameter estimates produces a controller that performs poorly (or
has unstable closed-loop behavior) relative to the controller synthesized using the SLS
procedure.
\subsection{Useful Results from System Level Synthesis}
The SLS framework focuses on the \emph{system responses} of a closed-loop system. As a motivating example, consider linear dynamics under a fixed static state-feedback control policy $K$, i.e., let $u_k = Kx_k$. Then, the closed-loop map from the disturbance process $\{w_0, w_1, \dots\}$ to the state $x_k$ and control input $u_k$ at time $k$ is given by
\begin{equation}
\begin{array}{rcl}
x_k &=& \sum_{t=1}^{k} (A + B K)^{k-t}w_{t-1} \:, \\
u_k &=& \sum_{t=1}^k K(A + B K)^{k-t}w_{t-1} \:.
\end{array}
\label{eq:impulse-response}
\end{equation}
Letting $\Phi_x(k) := (A + B K)^{k-1}$ and $\Phi_u(k) := K(A + B K)^{k-1}$, we can rewrite {Eq.~\eqref{eq:impulse-response}} as
\begin{equation}
\begin{bmatrix} x_k \\ u_k \end{bmatrix} =
\sum_{t=1}^k \begin{bmatrix}\Phi_x(k-t+1) \\ \Phi_u(k-t+1) \end{bmatrix}w_{t-1} \:,
\label{eq:phis}
\end{equation}
where $\{\Phi_x(k),\Phi_u(k)\}$ are called the \emph{closed-loop system response elements} induced by the static controller $K$.
Note that even when the control is a linear function of the state and its past history (i.e. a linear dynamic controller), the expression~\eqref{eq:phis} is valid. Though we conventionally think of the control policy as a function mapping states to input, whenever such a mapping is linear, both the control input and the state can be written as linear functions of the disturbance signal $w_t$. With such an identification, the dynamics require that the $\{\Phi_x(k),\Phi_u(k)\}$ must obey the constraints
\begin{equation}
\Phi_x(k+1) = A \Phi_x(k) + B \Phi_u(k) \:, \:\: \Phi_x(1) = I \:, \:\: \forall k \geq 1 \:.
\label{eq:time-achievability}
\end{equation}
As we describe in more detail below in Theorem \ref{thm:param}, these constraints are in fact both necessary and sufficient. Working with closed-loop system responses allows us to cast optimal control problems as optimization problems over elements $\{\Phi_x(k),\Phi_u(k)\}$, constrained to satisfy the affine equations~\eqref{eq:time-achievability}. Comparing equations \eqref{eq:impulse-response} and \eqref{eq:phis}, we see that the former is non-convex in the controller $K$, whereas the latter is affine in the elements $\{\Phi_x(k),\Phi_u(k)\}$.
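The achievability constraints are easy to check numerically for the static-feedback responses above; in the sketch below, the pair $(A, B)$ and the gain $K$ are hypothetical and chosen only so that $A + BK$ is stable.

```python
import numpy as np

# Hypothetical pair with a stabilizing static gain K (A + B K is stable).
A = np.array([[1.1, 0.0],
              [0.1, 0.9]])
B = np.array([[1.0],
              [0.0]])
K = np.array([[-0.8, 0.0]])   # closed loop: [[0.3, 0.0], [0.1, 0.9]]

M = A + B @ K
Phi_x = [np.linalg.matrix_power(M, k - 1) for k in range(1, 11)]
Phi_u = [K @ P for P in Phi_x]

# Achievability: Phi_x(1) = I and Phi_x(k+1) = A Phi_x(k) + B Phi_u(k).
ok = bool(
    np.allclose(Phi_x[0], np.eye(2))
    and all(np.allclose(Phi_x[k + 1], A @ Phi_x[k] + B @ Phi_u[k])
            for k in range(9))
)
print(ok)
```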
As we work with infinite horizon problems, it is notationally more convenient to work with \emph{transfer function} representations of the above objects, which can be obtained by taking a $z$-transform of their time-domain representations. The frequency domain variable $z$ can be informally thought of as the time-shift operator, i.e., $z\{x_k,x_{k+1},\dots\} = \{x_{k+1},x_{k+2},\dots\}$, allowing for a compact representation of LTI dynamics. We use boldface letters to denote such transfer function signals in the frequency domain, e.g., $\tf{\Phi}_x(z) = \sum_{k = 1}^\infty\Phi_x(k) z^{-k}$. Then, the constraints \eqref{eq:time-achievability} can be rewritten as
\begin{equation*}
\begin{bmatrix} zI - A & - B \end{bmatrix} \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix} = I \:,
\end{equation*}
{and the corresponding (not necessarily static) control law $\tf u = \tf K \tf x$ is given by $\tf K = \tf \Phi_u \tf \Phi^{-1}_x$.}
The relevant frequency domain connections for LQR are illustrated in Appendix~\ref{app:h2}.
We formalize our discussion by introducing notation that is common in the controls
literature. For a thorough introduction to the functional analysis
commonly used in control theory, see Chapters 2
and 3 of~\citet{ZDGBook}.
Let $\mathbb{T}$ (resp. $\mathbb{D}$) denote the unit circle (resp. open unit
disk) in the complex plane.
The restriction of the Hardy spaces $\mathcal{H}_\infty(\mathbb{T})$ and $\mathcal{H}_2(\mathbb{T})$ to matrix-valued
real-rational functions that are analytic on the complement of $\mathbb{D}$
will be referred to as $\mathcal{RH}_\infty$ and $\mathcal{RH}_2$,
respectively.
In controls parlance, this corresponds to (discrete-time) stable
matrix-valued transfer functions.
For these two function spaces, the $\mathcal{H}_\infty$ and $\mathcal{H}_2$ norms simplify to
\begin{align}
\norm{\tf G}_{\mathcal{H}_\infty} = \sup_{z \in \mathbb{T}} \: \norm{G(z)}_2 \:, \:\:
\norm{\tf G}_{\mathcal{H}_2} = \sqrt{ \frac{1}{2\pi} \int_{\mathbb{T}} \norm{G(z)}_F^2 \; dz } \:.
\end{align}
Finally, the notation $\frac{1}{z} \mathcal{RH}_\infty$ refers to the set
of transfer functions $\tf G$ such that $z \tf G \in \mathcal{RH}_\infty$.
Equivalently, $\tf G \in \frac{1}{z} \mathcal{RH}_\infty$
if $\tf G \in \mathcal{RH}_\infty$ and $\tf G$ is strictly proper.
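Both norms are straightforward to estimate numerically for a concrete stable transfer function, e.g. the resolvent $\tf G(z) = (zI - M)^{-1}$ of a hypothetical stable $M$; by Parseval's identity, the frequency-domain $\mathcal{H}_2$ norm coincides with the $\ell_2$ energy of the impulse response.

```python
import numpy as np

# G(z) = (zI - M)^{-1} for a hypothetical stable M.
M = np.array([[0.5, 0.2],
              [0.0, 0.4]])
thetas = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)

def G(z):
    return np.linalg.inv(z * np.eye(2) - M)

# H-infinity norm: supremum of the spectral norm over the unit circle.
hinf = max(np.linalg.norm(G(np.exp(1j * th)), 2) for th in thetas)

# H2 norm two ways: frequency average of ||G||_F^2 (the definition above)
# and, by Parseval, the impulse-response energy sum_t ||M^(t-1)||_F^2.
h2_freq = np.sqrt(np.mean([np.linalg.norm(G(np.exp(1j * th)), 'fro') ** 2
                           for th in thetas]))
h2_time = np.sqrt(sum(np.linalg.norm(np.linalg.matrix_power(M, t), 'fro') ** 2
                      for t in range(200)))
print(hinf, h2_freq, h2_time)
```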
The most important transfer function for the LQR problem is the map
from the state sequence to the control actions: the control policy.
Consider an arbitrary transfer function $\tf K$ denoting the map from state to control action, $\tf u = \tf K \tf x$. Then the closed-loop transfer matrices from the process noise $\tf w$ to the state $\tf x$ and control action $\tf u$ satisfy
\begin{equation}
\begin{bmatrix} \tf x \\ \tf u \end{bmatrix} = \begin{bmatrix} (zI - A-B\tf K)^{-1} \\ \tf K (zI-A-B \tf K)^{-1} \end{bmatrix} \tf w.
\label{eq:response}
\end{equation}
We then have the following theorem parameterizing the set of stable closed-loop transfer matrices, as described in equation \eqref{eq:response}, that are achievable by a given stabilizing controller $\tf K$.
\begin{theorem}[State-Feedback Parameterization~\cite{SysLevelSyn1}]
The following are true:
\begin{itemize}
\item The affine subspace defined by
\begin{equation}
\begin{bmatrix} zI - A & - B \end{bmatrix} \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix} = I, \ \tf \Phi_x, \tf \Phi_u \in \frac{1}{z}\mathcal{RH}_\infty
\label{eq:achievable}
\end{equation}
parameterizes all system responses \eqref{eq:response} from $\tf w$ to $(\tf x, \tf u)$, achievable by an internally stabilizing state-feedback controller $\tf K$.
\item For any transfer matrices $\{\tf \Phi_x, \tf \Phi_u\}$ satisfying \eqref{eq:achievable}, the controller $\tf K = \tf \Phi_u \tf \Phi_x^{-1}$ is internally stabilizing and achieves the desired system response \eqref{eq:response}.
\end{itemize}
\label{thm:param}
\end{theorem}
Note that in particular, $\{\tf \Phi_x, \tf \Phi_u\}=\{ (zI - A-B\tf K)^{-1} , \tf K (zI-A-B \tf K)^{-1} \}$ as in~\eqref{eq:response} are elements of the affine space defined by~\eqref{eq:achievable} whenever $\tf K$ is a causal stabilizing controller.
We will also make extensive use of a robust variant of Theorem \ref{thm:param}.
\begin{theorem}[Robust Stability~\cite{virtual}]
Suppose that the transfer matrices $\{\tf\Phi_x, \tf \Phi_u\} \in \frac{1}{z}\mathcal{RH}_\infty$ satisfy
\begin{equation}
\begin{bmatrix} zI - A & - B \end{bmatrix} \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix} = I + \tf \Delta.
\end{equation}
Then the controller $\tf K = \tf \Phi_u \tf \Phi_x^{-1}$ stabilizes the system described by $(A,B)$ if and only if $(I+\tf \Delta)^{-1} \in \mathcal{RH}_\infty$. Furthermore, the resulting system response is given by
\begin{equation}
\begin{bmatrix} \tf x \\ \tf u \end{bmatrix} = \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix}(I+\tf \Delta)^{-1} \tf w.
\end{equation}
\label{thm:robust}
\end{theorem}
\begin{coro}
Under the assumptions of Theorem \ref{thm:robust}, if $\|\tf \Delta\| < 1$ for some induced norm $\|\cdot\|$, then the controller $\tf K = \tf \Phi_u \tf \Phi_x^{-1}$ stabilizes the system described by $(A,B)$.
\label{coro:sufficient}
\end{coro}
\begin{proof}
Follows immediately from the small gain theorem, see for example Section 9.2 in \cite{ZDGBook}.
\end{proof}
\subsection{Robust LQR Synthesis}
We return to the problem setting where estimates $(\widehat{A}, \widehat{B})$ of a true system $(A,B)$ satisfy
\[\|\Delta_A\|_2\leq\epsilon_A,~~\|\Delta_B\|_2\leq\epsilon_B\]
where $\Delta_A := \widehat{A}-A$ and $\Delta_B := \widehat{B}-B$ and where we wish to minimize the LQR cost for the worst instantiation of the parametric uncertainty.
Before proceeding, we must formulate the LQR problem in terms of the system responses $\{\Phi_x(k),\Phi_u(k)\}$. It follows from Theorem \ref{thm:param} and the standard equivalence between infinite horizon LQR and $\mathcal{H}_2$ optimal control that, for a disturbance process distributed as $w_t \overset{i.i.d.}{\sim{}} \mathcal{N}(0,\sigma_w^2 I)$, the standard LQR problem \eqref{eq:lqr-classic} can be equivalently written as
\begin{equation}
\min_{\tf\Phi_x, \tf\Phi_u} \sigma_w^2 \left\|\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2}\end{bmatrix}\begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix}\right\|_{\mathcal{H}_2}^2 \text{ s.t. equation \eqref{eq:achievable}}.
\label{eq:lqr2}
\end{equation}
We provide a full derivation of this equivalence in Appendix~\ref{app:h2}. Going forward, we drop the $\sigma_w^2$ multiplier in the objective function as it affects neither the optimal controller nor the sub-optimality guarantees that we compute in Section \ref{sec:subopt}.
We begin with a simple sufficient condition under which any controller $\tf K$ that stabilizes $(\widehat{A},\widehat{B})$ also stabilizes the true system $(A,B)$. To state the lemma, we introduce one additional piece of notation. For a matrix $M$, we let $\Res{M}$ denote the resolvent
\begin{equation}\label{eq:phi-def}
\Res{M} := (zI - M)^{-1}\,.
\end{equation}
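The resolvent is exactly the transfer function whose impulse response is the sequence of matrix powers; a quick numerical check of the expansion $\Res{M} = \sum_{t \geq 1} M^{t-1} z^{-t}$ for a hypothetical stable $M$:

```python
import numpy as np

# Hypothetical stable matrix; on |z| = 1 (> spectral radius) the power
# series of the resolvent converges geometrically.
M = np.array([[0.5, 0.2],
              [0.0, 0.4]])
z = np.exp(1j * 0.3)
res = np.linalg.inv(z * np.eye(2) - M)
series = sum(np.linalg.matrix_power(M, t - 1) * z ** (-t) for t in range(1, 200))
match = bool(np.allclose(res, series))
print(match)
```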
We can now state our robustness lemma.
\begin{lemma}\label{lemma:robust-sls}
Let the controller $\tf K$ stabilize $(\widehat{A}, \widehat{B})$ and $(\tf\Phi_x,\tf\Phi_u)$ be its corresponding system response \eqref{eq:response} on system $(\widehat{A},\widehat{B})$. Then if $\tf K$ stabilizes $(A,B)$, it achieves the following LQR cost
\begin{equation}
J(A,B,\tf K) := \left\|\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix}\left(I+\begin{bmatrix}\Delta_A& \Delta_B\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix}\right)^{-1}\right\|_{\mathcal{H}_2}\:.
\end{equation}
Furthermore, letting
\begin{equation}\label{eq:deltahat}
\tf{\hat\Delta} := \begin{bmatrix}\Delta_A& \Delta_B\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix} = (\Delta_A + \Delta_B \tf K)\Res{\widehat{A}+\widehat{B}\tf K} \:,
\end{equation}
a sufficient condition for $\tf K$ to stabilize $(A,B)$ is that $\hinfnorm{\tf{\hat{\Delta}}} <1$.
\label{lem:sufficient}
\end{lemma}
\begin{proof}
Follows immediately from Theorems \ref{thm:param}, \ref{thm:robust} and Corollary \ref{coro:sufficient} by noting that for system responses $(\tf \Phi_x, \tf \Phi_u)$ satisfying
\[
\begin{bmatrix} zI - \widehat{A} & - \widehat{B} \end{bmatrix} \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix} = I,
\]
it holds that
\[
\begin{bmatrix} zI - A & - B \end{bmatrix} \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix} = I + \hat{\tf{\Delta}}
\]
for $\hat{\tf{\Delta}}$ as defined in equation \eqref{eq:deltahat}.
\end{proof}
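The identity in the proof can be verified pointwise on the unit circle. In the sketch below the estimates, errors, and gain are hypothetical, with $\Delta_A = \widehat{A} - A$ and $\Delta_B = \widehat{B} - B$ matching the convention used above, and $K$ chosen so that $\widehat{A} + \widehat{B}K$ is stable.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 2, 1

# Hypothetical nominal model and (small) estimation errors.
A_hat = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
B_hat = np.array([[0.0],
                  [1.0]])
Delta_A = 0.01 * rng.standard_normal((n, n))
Delta_B = 0.01 * rng.standard_normal((n, p))
A, B = A_hat - Delta_A, B_hat - Delta_B

K = np.array([[-0.2, -0.5]])  # a gain stabilizing (A_hat, B_hat)

# Check, at a sample point z on the unit circle, that
# [zI - A, -B] [Phi_x; Phi_u] = I + (Delta_A + Delta_B K) R(A_hat + B_hat K).
z = np.exp(1j * 0.7)
R_hat = np.linalg.inv(z * np.eye(n) - (A_hat + B_hat @ K))
Phi_x, Phi_u = R_hat, K @ R_hat
lhs = (z * np.eye(n) - A) @ Phi_x - B @ Phi_u
rhs = np.eye(n) + (Delta_A + Delta_B @ K) @ R_hat
match = bool(np.allclose(lhs, rhs))
print(match)
```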
We can therefore recast the robust LQR problem \eqref{eq:robust_lqr} in the following equivalent form
\begin{align}\label{eq:robustLQR}
\begin{split}
&\min_{\tf\Phi_x, \tf \Phi_u} \sup\limits_{\substack{\|\Delta_A\|_2\leq \epsilon_A \\ \|\Delta_B\|_2\leq \epsilon_B}} J(A,B,\tf K)\\
& \text{s.t.} \begin{bmatrix}zI-\widehat{A}&-\widehat{B}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix} = I,~~\tf\Phi_x, \tf \Phi_u \in\frac{1}{z}\mathcal{RH}_\infty \:.
\end{split}
\end{align}
The resulting robust control problem is one subject to real-parametric
uncertainty, a class of problems known to be computationally
intractable~\cite{braatz94}. Although effective computational heuristics
(e.g., DK iteration \cite{ZDGBook}) exist, the performance of the resulting controller on the
true system is difficult to characterize analytically in terms of the size of
the perturbations.
To circumvent this issue, we take a slightly conservative approach and find an upper bound on the cost $J(A,B,\tf K)$ that is independent of the uncertainties $\Delta_A$ and $\Delta_B$. First, note that if $\hinfnorm{\hat{\tf{\Delta}}} < 1$, we can write
\begin{align} \label{eq:upperbnd1}
J(A,B,\tf K) \leq \hinfnorm{(I+\hat{\tf{\Delta}})^{-1}}J(\widehat{A},\widehat{B},\tf K) \leq \frac{1}{1-\hinfnorm{\hat{\tf{\Delta}}}}J(\widehat{A},\widehat{B},\tf K).
\end{align}
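At a fixed frequency, the second inequality in \eqref{eq:upperbnd1} reduces to the Neumann-series bound $\|(I+\Delta)^{-1}\|_2 \leq (1-\|\Delta\|_2)^{-1}$ for a strict contraction $\Delta$, which is easy to probe numerically (an illustrative sketch only, assuming numpy; not part of the derivation):

```python
import numpy as np

# Check  ||(I + Delta)^{-1}||_2 <= 1 / (1 - ||Delta||_2)  on random
# strict contractions (the Neumann-series bound behind the inequality).
rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(2, 8))
    Delta = rng.standard_normal((n, n))
    Delta *= 0.9 / np.linalg.norm(Delta, 2)   # spectral norm = 0.9 < 1
    lhs = np.linalg.norm(np.linalg.inv(np.eye(n) + Delta), 2)
    rhs = 1.0 / (1.0 - np.linalg.norm(Delta, 2))
    assert lhs <= rhs + 1e-9
```

Each random $\Delta$ is rescaled to a strict contraction, mirroring the assumption $\hinfnorm{\hat{\tf\Delta}}<1$.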
Because $J(\widehat{A},\widehat{B},\tf K)$ captures the performance of the controller $\tf K$ on the nominal system $(\widehat{A},\widehat{B})$, it is not subject to any uncertainty. It therefore remains to compute a tractable bound for $\hinfnorm{\hat{\tf{\Delta}}}$, which we do using the following fact.
\begin{proposition}
For any $\alpha \in (0,1)$ and $\tf{\hat{\Delta}}$ as defined in \eqref{eq:deltahat}
\begin{equation}\label{eq:ben-tri-bound}
\| \tf{\hat{\Delta}} \|_{\mathcal{H}_\infty}
\leq \left\|\begin{bmatrix} \tfrac{\epsilon_A}{\sqrt{\alpha}} \tf \Phi_x \\ \tfrac{\epsilon_B}{\sqrt{1-\alpha}} \tf\Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty} =\colon H_\alpha(\tf\Phi_x,\tf\Phi_u) \:.
\end{equation}
\label{prop:bound}
\end{proposition}
\begin{proof}
Note that for any block matrix of the form $\begin{bmatrix} M_1 & M_2 \end{bmatrix}$, we have
\begin{equation}\label{eq:block-norm-bound}
\left\|\begin{bmatrix} M_1 & M_2 \end{bmatrix}\right\|_2
\leq \left(\left\| M_1 \right\|_2^2 + \left\| M_2 \right\|_2^2\right)^{1/2}\,.
\end{equation}
To verify this assertion, note that
\[
\left\|\begin{bmatrix} M_1 & M_2 \end{bmatrix}\right\|_2^2
= \lambda_{\mathrm{max}}(M_1 M_1^*+ M_2 M_2^*)
\leq \lambda_{\mathrm{max}}(M_1 M_1^*)+ \lambda_{\mathrm{max}}(M_2 M_2^*)
= \left\| M_1 \right\|_2^2 + \left\| M_2 \right\|_2^2\,.
\]
With~\eqref{eq:block-norm-bound} in hand, we have
\begin{align*} \left\| \begin{bmatrix} \Delta_A & \Delta_B \end{bmatrix} \begin{bmatrix} \tf \Phi_x \\ \tf \Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty}
&=\left\| \begin{bmatrix} \frac{\sqrt{\alpha}}{\epsilon_A}\Delta_A & \frac{\sqrt{1-\alpha}}{\epsilon_B}\Delta_B\ \end{bmatrix} \begin{bmatrix} \frac{\epsilon_A}{\sqrt{\alpha}} \tf\Phi_x \\ \frac{\epsilon_B}{\sqrt{1-\alpha}}\tf\Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty} \\
&\leq \left\| \begin{bmatrix} \frac{\sqrt{\alpha}}{\epsilon_A}\Delta_A & \frac{\sqrt{1-\alpha}}{\epsilon_B}\Delta_B\ \end{bmatrix}\right\|_2
\left\| \begin{bmatrix} \frac{\epsilon_A}{\sqrt{\alpha}} \tf\Phi_x \\ \frac{\epsilon_B}{\sqrt{1-\alpha}}\tf\Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty}\leq
\left\| \begin{bmatrix} \frac{\epsilon_A}{\sqrt{\alpha}} \tf\Phi_x \\ \frac{\epsilon_B}{\sqrt{1-\alpha}}\tf\Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty} ,
\end{align*}
completing the proof.
\end{proof}
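At a fixed frequency, the bound \eqref{eq:ben-tri-bound} rests on the block-norm inequality \eqref{eq:block-norm-bound}, which can likewise be probed numerically on random matrices (an illustrative sketch, assuming numpy):

```python
import numpy as np

# Probe  ||[M1 M2]||_2^2 <= ||M1||_2^2 + ||M2||_2^2  on random matrices.
rng = np.random.default_rng(1)
for _ in range(200):
    m, n1, n2 = (int(k) for k in rng.integers(2, 6, size=3))
    M1 = rng.standard_normal((m, n1))
    M2 = rng.standard_normal((m, n2))
    block = np.linalg.norm(np.hstack([M1, M2]), 2)
    bound = np.hypot(np.linalg.norm(M1, 2), np.linalg.norm(M2, 2))
    assert block <= bound + 1e-9
```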
The following corollary is then immediate.
\begin{coro}
Let the controller $\tf K$ and resulting system response $(\tf \Phi_x, \tf \Phi_u)$ be as defined in Lemma \ref{lem:sufficient}. Then if $H_\alpha(\tf \Phi_x, \tf \Phi_u) < 1$, the controller $\tf K = \tf\Phi_u \tf\Phi_x^{-1}$ stabilizes the true system $(A,B)$.
\label{coro:stable}
\end{coro}
Applying Proposition \ref{prop:bound} in conjunction with the bound \eqref{eq:upperbnd1}, we arrive at the following upper bound to the cost function of the robust LQR problem \eqref{eq:robust_lqr}, which is independent of the perturbations $(\Delta_A,\Delta_B)$:
\begin{align}\label{eq:upperbnd}
\sup\limits_{\substack{\|\Delta_A\|_2\leq \epsilon_A \\ \|\Delta_B\|_2\leq \epsilon_B}} J(A,B,\tf K) &\leq \left\|\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix}\right\|_{\mathcal{H}_2}\frac{1}{1 - H_\alpha(\tf\Phi_x,\tf\Phi_u)} = \frac{J(\widehat{A},\widehat{B},\tf K)}{1 - H_\alpha(\tf\Phi_x,\tf\Phi_u)}\:.
\end{align}
The upper bound is only valid when $H_\alpha(\tf \Phi_x, \tf \Phi_u) < 1$, which guarantees the stability of the closed-loop system as in Corollary~\ref{coro:stable}.
We remark that Corollary~\ref{coro:stable} and the bound in \eqref{eq:upperbnd} are of interest independent of the synthesis procedure for $\tf K$. In particular, they can be applied to the optimal LQR controller $\widehat{K}$ computed using the nominal system $(\widehat{A},\widehat{B})$.
As the next lemma shows, the right hand side of Equation~\eqref{eq:upperbnd} can be efficiently optimized by an appropriate decomposition. The proof of the lemma is immediate.
\begin{lemma} \label{lem:innerouter}
For functions $f:\mathcal{X}\to\mathbb{R}$ and $g:\mathcal{X}\to\mathbb{R}$ and constraint set $C\subseteq \mathcal{X}$, consider
\begin{align*}
\min_{x\in C} \frac{f(x)}{1-g(x)} \:.
\end{align*}
Assuming that $f(x) \geq 0$ and $0 \leq g(x) < 1$ for all $x\in C$, this optimization problem can be reformulated as an outer single-variable problem and an inner constrained optimization problem (the objective value of an optimization over the empty set is defined to be infinity):
\begin{align*}
\min_{x\in C} \frac{f(x)}{1-g(x)} = \min_{\gamma\in[0,1)} \tfrac{1}{1-\gamma} \min_{x\in C} \{ f(x) ~|~ g(x)\leq\gamma\}
\end{align*}
\end{lemma}
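The identity in Lemma \ref{lem:innerouter} can be checked on a toy one-dimensional instance, with $C$ a fine grid and the inner minimization done by enumeration (an illustrative sketch, assuming numpy; the functions $f$ and $g$ below are arbitrary stand-ins satisfying the lemma's hypotheses):

```python
import numpy as np

# Toy instance: C is a fine grid on [0, 2], with f >= 0 and 0 <= g < 1 on C.
x = np.linspace(0.0, 2.0, 2001)
f = (x - 1.0) ** 2 + 0.5
g = x / 3.0

direct = np.min(f / (1.0 - g))          # left-hand side of the identity

# right-hand side: outer scan over gamma, inner constrained minimization
gammas = np.linspace(0.0, 0.999, 4000)
vals = []
for gam in gammas:
    feas = g <= gam
    inner = np.min(f[feas]) if feas.any() else np.inf
    vals.append(inner / (1.0 - gam))
decomposed = min(vals)

assert decomposed >= direct - 1e-12     # never below the true optimum
assert abs(decomposed - direct) < 1e-3  # matches up to gamma-grid error
```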
Then combining Lemma \ref{lem:innerouter} with the upper bound in \eqref{eq:upperbnd} results in the following optimization problem:
\begin{align}\label{eq:robustLQRbnd}
\begin{split}
\mbox{minimize}_{\gamma\in[0,1)}\frac{1}{1 - \gamma}&\min_{\tf\Phi_x, \tf \Phi_u} \left\|\begin{bmatrix} Q^\frac{1}{2} & 0 \\ 0 & R^\frac{1}{2}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix}\right\|_{\mathcal{H}_2}\\
& \text{s.t.} \begin{bmatrix}zI-\widehat{A}&-\widehat{B}\end{bmatrix}\begin{bmatrix} \tf\Phi_x \\ \tf\Phi_u \end{bmatrix} = I,~~\left\|\begin{bmatrix} \tfrac{\epsilon_A}{\sqrt{\alpha}} \tf \Phi_x \\ \tfrac{\epsilon_B}{\sqrt{1-\alpha}} \tf\Phi_u \end{bmatrix} \right\|_{\mathcal{H}_\infty}\leq \gamma\\
&\qquad \tf\Phi_x, \tf \Phi_u \in\frac{1}{z}\mathcal{RH}_\infty.
\end{split}
\end{align}
We note that this optimization objective is jointly quasi-convex in $(\gamma, \tf \Phi_x, \tf \Phi_u)$. Hence, as a function of $\gamma$ alone the objective is quasi-convex, and furthermore is smooth in the feasible domain. Therefore, the outer optimization with respect to $\gamma$ can effectively be solved with methods like golden section search. We remark that the inner optimization is a convex problem, though an infinite dimensional one. We show in Section~\ref{sec:computation} that a simple finite impulse response truncation yields a finite dimensional problem with similar guarantees of robustness and performance.
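A minimal sketch of this outer search follows (numpy assumed). The function \texttt{inner\_cost} below is a hypothetical stand-in for the optimal value of the inner convex problem as a function of $\gamma$, which in our setting is nonincreasing, so the combined objective is quasi-convex in $\gamma$:

```python
import numpy as np

def golden_section(h, lo, hi, tol=1e-6):
    """Minimize a quasi-convex scalar function h on [lo, hi]."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0      # 1 / golden ratio
    a, b = lo, hi
    while b - a > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if h(c) <= h(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# hypothetical stand-in for the inner problem: optimal value of the
# constrained H2 minimization as a function of gamma (nonincreasing)
def inner_cost(gamma):
    return 1.0 + 4.0 * (1.0 - gamma) ** 2

objective = lambda g: inner_cost(g) / (1.0 - g)   # minimized at gamma = 1/2
g_star = golden_section(objective, 0.0, 0.999)
assert abs(g_star - 0.5) < 1e-4
```

In practice every evaluation of \texttt{inner\_cost} solves one instance of the inner convex program in \eqref{eq:robustLQRbnd}.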
We further remark that because $\gamma \in [0,1)$, any feasible solution $(\tf \Phi_x, \tf \Phi_u)$ to optimization problem \eqref{eq:robustLQRbnd} generates a controller $\tf K = \tf \Phi_u \tf \Phi_x^{-1}$ satisfying the conditions of Corollary \ref{coro:stable}, and hence stabilizes the true system $(A,B)$. Therefore, even if the solution is approximated, as long as it is feasible, it will be stabilizing. As we show in the next section, for sufficiently small estimation error bounds $\epsilon_A$ and $\epsilon_B$, we can further bound the sub-optimality of the performance achieved by our robustly stabilizing controller relative to that achieved by the optimal LQR controller $K_\star$.
\section{Introduction}
In this paper we show how to get rid of some unnatural features of
Lagrangean mechanics, such as multivaluedness and neglecting
total divergences. There is almost nothing new: we simply consider
Hamilton--Jacobi equation and its characteristics. The only point is
in introducing, instead of $M\times{\Bbb R}\,$, a principal $G$- bundle $U$
over the spacetime $M$, where $G={\Bbb R}\,$ or $U(1)$.
Even if $U$ is trivial, it is, in a natural way, only
a bundle and not a product. This corresponds to ``up to a total divergence''
phrases. The Hamilton--Jacobi equation is simply a $G$-invariant hypersurface
in the space of contact elements of $U$.
In quantization, the wave functions are sections
of a line bundle associated to $U$, no topological ambiguity remains, so the
correspondence classical $\leftrightarrow$ quantum is very clear. The
space of characteristics ${\frak{Ch}}$ carries a natural contact structure; the
phase space ${\frak{Ph}}$ emerges as the quotient of ${\frak{Ch}}/G$. Thus
${\frak{Ch}}\rightarrow {\frak{Ph}}$ is a principal $U(1)$- (or ${\Bbb R}\,$- ) bundle; the contact structure
gives us a connection.
The plan of the paper is as follows:
In Section 2 we present basic facts of contact geometry, its connection
with symplectic geometry and geometrical quantization,
with first order PDE and the method
of characteristics and with asymptotics of linear PDE. In Section 3
we introduce the point of view described above and discuss its correspondence
with Lagrangians. For example, it may contain some additional topological
information (obviously the topological quantization
ambiguity has to be hidden somewhere).
The bundle ${\frak{Ch}}\rightarrow{\frak{Ph}}$ and quantization are discussed in Section 4.
We conclude with the fact that one can replace the group $U(1)$ by any
Lie group almost without changing anything. Finally we mention
the obvious open problem --
what happens if we do not consider extremal curves, but surfaces etc.
\section{Basic notions of contact geometry}
A {\em contact structure} on a manifold $M$ is a field of hyperplanes
$HM\subset TM$
(a subbundle of codimension 1) satisfying a maximal nonintegrability condition.
It can be formulated as follows: as for any subbundle of $TM$, we have a map
$\sigma:\bigwedge^2HM\rightarrow TM/HM$ satisfying (and defined by) the fact that for any
1-form $\alpha$ on $M$ vanishing on $HM$, the formula
$$ \alpha\left(\sigma(u,v)\right)=d\alpha(u,v) $$
holds for any $u,v\in H_xM$, $x\in M$. Alternatively, we may extend $u$
and $v$ to
sections of $HM$; their commutator at $x$ (when considered mod $HM$)
is $\sigma(u,v)$. The maximal nonintegrability condition requires $\sigma$ to be
regular. In that case, $M$ is clearly odd-dimensional. Any two contact
manifolds with the same dimension are locally isomorphic (a form of Darboux
theorem).
We call a vector field on $M$ {\em contact} if its flow preserves the contact
structure. There is a 1-1 correspondence between contact vector fields and
sections of the line bundle $TM/HM$. More precisely, for any $w\in{\cal C}^\infty(TM/HM)$
there is a unique contact $v$ that becomes $w$ when considered mod $HM$.
The proof is easy: choose any $v'$ that is $w$ mod $HM$. As a rule, $v'$ is not
contact, so it generates an infinitesimal deformation of the contact structure
-- say $\beta:HM\rightarrow TM/HM$. But due to the nondegeneracy of $\sigma$ there is
a unique $v''\in{\cal C}^\infty(HM)$ producing the same deformation. Thus $v=v'-v''$ is
the required contact field. The field $w$ is called the {\em contact
hamiltonian} of $v$.
An important example of contact geometry emerges when $M$ is a principal
$G$-bundle over a symplectic manifold $(N,\omega)$, where $G={\Bbb R}\,$ or $U(1)$.
Suppose we are given a connection on $M$ such that its curvature is $\omega$.
The horizontal distribution makes $M$ into a contact manifold. We can use
the connection 1-form $\alpha$ to identify sections of $TM/HM$ (contact
hamiltonians) with functions on $M$. The local flow generated by a contact
field $v$ preserves the structure of $G$-bundle iff $v$ is
$G$-invariant, i.e. iff its contact hamiltonian $f$ is (the pullback of)
a function on $N$. Then the field $v$ is projected onto a well-defined
vector field $v_N$ on $N$ whose flow preserves $\omega$; in fact, $f$ is a
hamiltonian generating $v_N$. We may put these facts together:
The Lie algebra ${\cal C}^\infty(N)$ (with the Poisson bracket) is isomorphic to the Lie
algebra of $G$-invariant contact fields on $M$. A function $f$ on $N$
and the corresponding hamiltonian vector field $v_N$ are combined together
($f$ as the vertical part and $v_N$ as the horizontal part) to form a contact
field $v$ on $M$.
This point of view is useful in geometrical quantization. Here one considers
a line bundle $L\rightarrow N$ associated to $M\rightarrow N$, and represents the Lie algebra
$({\cal C}^\infty(N),\{,\})$ by operators on the space ${\cal C}^\infty(L)$. The sections of $L$ are
simply functions on $M$ equivariant with respect to $G$ and the action of
a function $f\in{\cal C}^\infty(N)$ on such a section is given by the derivative with
respect to the corresponding contact vector field.
The classical example of a contact manifold is the space of contact elements
(i.e. hyperplanes in the tangent space) of a manifold $M$, which we denote as
$CM$. The distribution $H(CM)$ is given as follows: take an $x\in CM$; it
corresponds to a hyperplane $H$ in $T_{\pi(x)}M$, where $\pi:CM\rightarrow M$ is the
natural projection. Then $H_x(CM)$ is $(d_x\pi)^{-1}(H)$.
Contact geometry, in particular on $CM$, was invented to give a geometrical
meaning to first order partial differential equations and to Lagrange method
of characteristics. Suppose $E\subset CM$ is a hypersurface; it will represent
the equation. Any hypersurface $\Sigma\subset M$ can be lifted to $CM$: for any point
$x\in \Sigma$ take the hyperplane $T_x\Sigma$ to be a point of the lift $\tilde\Sigma$.
$\tilde\Sigma$ is a Legendre submanifold of $CM$, i.e. $T\tilde\Sigma\subset H(CM)$ and
$\tilde\Sigma$ has the maximal possible dimension
(${\rm dim}\,CM=2\,{\rm dim}\,\tilde\Sigma+1$). $\Sigma$
is said to solve the equation if $\tilde\Sigma\subset E$. This has a nice interpretation
due to Monge: For any $x\in M$ we take the enveloping cone of the hyperplanes
$\pi^{-1}(x)\cap E$ in $T_xM$. In this way we obtain a field of cones in $M$.
Then $\Sigma$ solves the equation if it is tangent to the cones everywhere.
Lie's point of view is to forget about $M$ and to take as a solution any
Legendre submanifold contained in $E$. Such a solution may look singular in $M$
(singularities emerge upon the projection $\pi:CM\rightarrow M$). This definition uses
only the contact structure on $CM$ and thus allows using the entire
(pseudo)group
of contact transformations.
Now we will describe the method of characteristics. The hyperplane field
$H(CM)$ cuts a hyperplane field $HE$ on $E$ (there may be points where the
contact hyperplane touches $E$; generally these are isolated and we ignore
them). The field $HE$ does not make $E$ into a contact manifold: the form $\sigma$
becomes degenerate when we restrict ourselves from $H_x(CM)$ to $H_xE$.
Thus at any $x\in E$ there appears a direction along which $\sigma$ is
degenerate. The integral curves of this direction field are called
{\em characteristics}. For example, if the Monge cones coming from $E$
are the null cones of some pseudo-riemannian metrics on $M$ then
the projections
of the characteristics are the light-like geodesics in $M$.
Generally, if $F$ is a manifold with a hyperplane field $HF$, and the form
$\sigma:\bigwedge^2HF\rightarrow TF/HF$ has constant rank, then the bundle of kernels of $\sigma$,
$KF\subset HF$, is integrable. Moreover, if one takes an open $U\subset F$ small
enough, so that the integral manifolds of $KF$ in $U$ form a manifold ${\frak{Ch}}$,
then there is a well-defined contact structure on ${\frak{Ch}}$ coming from the
projection of $HF$. Coming back to the case of $E\subset CM$, it gives us a method
of finding the Legendre submanifolds contained in $E$. Just take a submanifold
that is almost Legendre -- up to the dimension, which is less by 1. Suppose
that the characteristics intersect it transversally. Then their union forms
a Legendre submanifold.
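As a concrete illustration, consider the eikonal equation $|\nabla S|^2=n(x)^2$ with a variable ``refractive index'' $n$, for which $E$ is the zero set of $F(x,p)=\frac12(|p|^2-n(x)^2)$ and the characteristics obey $\dot x=\partial F/\partial p$, $\dot p=-\partial F/\partial x$. The toy numerical sketch below (numpy assumed; the specific $n$ is an arbitrary choice) integrates one characteristic and checks that it stays on $E$, i.e.\ that $F$ is conserved along it:

```python
import numpy as np

def n(x):                      # variable propagation speed
    return 1.0 + 0.1 * x[0]

def grad_n(x):
    return np.array([0.1, 0.0])

def F(x, p):                   # the hypersurface E is F(x, p) = 0
    return 0.5 * (p @ p - n(x) ** 2)

def rk4_step(x, p, h):         # one step of  x' = dF/dp,  p' = -dF/dx
    def f(s):
        x_, p_ = s[:2], s[2:]
        return np.concatenate([p_, n(x_) * grad_n(x_)])
    s = np.concatenate([x, p])
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    s = s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return s[:2], s[2:]

theta = 0.3
x = np.array([0.0, 0.0])
p = n(x) * np.array([np.cos(theta), np.sin(theta)])   # start on F = 0
for _ in range(1000):
    x, p = rk4_step(x, p, 1e-3)

assert abs(F(x, p)) < 1e-8     # F is conserved along characteristics
```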
Let us look at vector fields on $E$ with flow preserving the field $HE$;
we shall call them contact, too.
First of all, there are {\em characteristic vector fields}, i.e. fields
touching the characteristics. Thus it is no longer true that if we choose
a $w\in{\cal C}^\infty (TE/HE)$ then there is a unique $v\in{\cal C}^\infty (TE)$ equal to $w$
mod $HE$: we can always add a characteristic field to $v$. On the other hand,
$w$ cannot be arbitrary. The flow of a contact field has to preserve the
characteristic foliation. If ${\frak{Ch}}$ is the space of characteristics, each
contact field on $E$ can be projected onto a contact field on
${\frak{Ch}}$ (recall ${\frak{Ch}}$ is a contact manifold). This is the basis for conservation
laws.
For example if a contact field $v\in HE$ (i.e. $w=0$)
at a point $x\in E$ then
$v\in HE$ ($w=0$)
along the characteristic $\gamma_x$ running through $x$.
Let us also notice that any contact vector field on $E$ can
be prolonged to a contact vector field on $CM$
(with the flow preserving $E$).
Hypersurfaces $E\subset CM$ often come from an equation of the type $Df=0$,
where $D:{\cal C}^\infty (M)\rightarrow{\cal C}^\infty (M)$ is a linear differential
operator. Take the symbol $s_D$ of $D$ (a function on $T^*M$ defined by
$D\exp(i\lambda g)=\left((i\lambda)^n s_D(dg)+O(\lambda^{n-1})\right)\exp(i\lambda g)$, $\lambda\rightarrow\infty$, where $n$ is the
degree of $D$ and $g\in{\cal C}^\infty(M)$).
The equation $s_D=0$ specifies a hypersurface $E\subset CM$.
The singularities of solutions of $Df=0$ are located on hypersurfaces solving
the equation corresponding to $E$; also, if $f=a(x)\exp(i\lambda S(x))$,
$\lambda\rightarrow\infty$ is an asymptotic solution of $Df=0$ then the levels
$S(x)=const$ solve the $E$-equation.
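For instance, for the wave operator $D=\partial_t^2-\partial_x^2$ the recipe above gives $s_D(dg)=g_t^2-g_x^2$, whose zero set consists of the null cones, in line with the remark on light-like geodesics. A short symbolic sanity check (assuming sympy is available):

```python
import sympy as sp

t, x, lam = sp.symbols('t x lam', real=True)
g = sp.Function('g')(t, x)

u = sp.exp(sp.I * lam * g)
Du = sp.diff(u, t, 2) - sp.diff(u, x, 2)      # wave operator, degree n = 2

# leading coefficient of (i*lam)^2 in e^{-i lam g} D e^{i lam g}
expr = sp.expand(Du / u)
symbol = sp.simplify(expr.coeff(lam, 2) / sp.I**2)

expected = sp.diff(g, t)**2 - sp.diff(g, x)**2
assert sp.simplify(symbol - expected) == 0
```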
\section{The geometry of Lagrangean mechanics}
We shall deal with first-order variational principles. Suppose that at each
point $x$ of a manifold $M$ (the space-time or extended configuration space)
there is a 1-homogeneous function $\Lambda_x:T_xM\rightarrow{\Bbb R}\,$ (and suppose everything
is smooth outside the zero section of $TM$). Then on each oriented curve $\gamma$,
$\Lambda$ specifies a 1-form, so we may compute its integral $S(\gamma)=\int_\gamma\Lambda$.
We are looking for extremals of $S$ (in this paper, extremal means
stationary).
There are several reasons why this point of view is not entirely satisfactory.
First of all, even in the simplest problems, $\Lambda_x$ is not defined on all
$T_xM$, but only on an open conic subset. Even worse, $\Lambda$ may be multivalued.
An example is drawn on the following two figures. On the first one, we suppose
that $\Lambda_x$ is positive (outside 0). The figure represents the endpoints of
vectors satisfying $\Lambda_x(v)=1$; it is called the {\em wave diagram} in the
beautiful elementary book \cite{burke}. The dashed lines represent a covector
$p$ corresponding to the drawn vector (they are $p=0$ and $p=1$); $p$ is
called the {\em momentum}.
$$\epsfxsize 35mm
\epsfbox{wdiag.eps}$$
Obviously, we may use the field of wave diagrams instead of $\Lambda$. But we
may work as well with diagrams of the following shape; they correspond to
multivalued $\Lambda$'s:
$$\epsfxsize 30mm
\epsfbox{wdiag2.eps}$$
However, the real problem is that $\Lambda$ is unnatural. The reason is that it is
defined only up to a closed 1-form. For example, in the presence of an
`electromagnetic
field' $F\in{\cal C}^\infty (\bigwedge^2 T^*M)$, $dF=0$, we take
as the actual $\Lambda$ (the one from which we compute $S$) $\Lambda+A$,
where $dA=F$. Of course $A$ need not exist globally and it is not
defined uniquely.
This problem appears also in Noether theorem: we take as an infinitesimal
symmetry any vector field $v$ whose flow preserves $\Lambda$ up to some $df$.
It is desirable to have a picture in which $v$ is an actual symmetry.
A way out is in the following construction: Let $U\rightarrow M$ be a principal
$G$-bundle, where $G=U(1)$ or ${\Bbb R}\,$ (you may imagine that we added the action
$S$ to $M$ as a new coordinate; of course this interpretation is rather
limited). Suppose we are given a $G$-invariant hypersurface $E\subset CU$;
we are interested in its characteristics. Their projections to $M$ are the
extremals for certain (multivalued) $\Lambda$ (if $c_1(U)\ne0$
then either $\Lambda$ exists only locally or we must admit an elmg. field $F$).
We simply replaced $\Lambda$ by the corresponding Hamilton--Jacobi equation $E$,
but the new point of view is rid of the problems listed
above. {\em For this reason we take $E\subset CU$ and its characteristics
as fundamental
and the Lagrangian $\Lambda$ as a derived, sometimes ill-defined notion.}
The correspondence between $E$ and $\Lambda$ is as follows: Let $\alpha$ be an
arbitrary connection
1-form on $U$. To find the wave diagram at a point $x\in M$, take a point
$y\in U$ above $x$. The intersection of the Monge cone in $T_yU$ with the
hyperplane $\alpha=1$ is the wave diagram. We have to take the curvature $F$
as the elmg. field. We see that the transformation $\Lambda\rightarrow\Lambda+A$, $F\rightarrow F-dA$
($A$ a 1-form) corresponds
simply to a change of the connection.
If we start with $\Lambda$ and $F$, we have to suppose that
the periods of $F$ are integral (or at least commesurable) to find a $U$
admitting a connection with $F$ as the curvature. Notice that
if $H^1(M,G)\ne 0$, the picture
$E\subset CU$ contains
more information than the pair $(\Lambda,F)$ .
The inequivalent choices of $U$ together with a connection correspond
to the elements of the group $H^1(M,G)$ (this group acts there freely and
transitively). The subgroup $H^1(M,{\bf Z})\otimes G$ corresponds
to equivalent $U$'s (with inequivalent connections); if $G=U(1)$, even
the quotient
group may be nontrivial (it is ${\rm Tor}\,H^2(M,{\bf Z})$). These ambiguities
are clearly connected with quantization.
A well known example is the following: Let the Monge cones on $U$ be the
light cones of a Lorentzian metrics and suppose the vector field $u_G$
generating the action of $G$ is spacelike. As a connection on $U$ take the
orthogonal complements of $u_G$. Then the wave diagrams are the (pseudo)spheres
of a Lorentzian metrics on $M$. This picture describes a charged relativistic
particle and its antiparticle in an elmg. field given by the curvature of the
connection.\footnote{The connection dissects each light cone in $U$ into two
halves. Thus the lightlike geodesics in $U$ (the characteristics) are (at least
locally, and globally if there is a time orientation) divided into 3 classes;
two of them are projected onto particles and
antiparticles worldlines respectively, while the curves in the third class are
horizontal and they are projected onto lightlike geodesics in $M$.} In the
nonrelativistic limit the field $u_G$ becomes lightlike and the antiparticle
disappears.
Let us look at Noether theorem. In the $(\Lambda,F)$-picture
one takes as a symmetry a vector field $v$ together with a function $f$
satisfying
$$v(\Lambda)+F(v,.)+df=0$$
($v(.)$ denotes the Lie derivative); then $p(v)+f$ is constant on extremals.
But for $E\subset CU$ we simply take a $G$-invariant vector field on $U$
preserving $E$. In fact one easily sees the full statement of Noether theorem,
claiming a 1-1 correspondence between conservation laws and $G$-invariant
contact fields
on $E$ modulo characteristic fields.
\section{A U(1)-bundle over the phase space and quantization}
Let us suppose that the characteristics in $E$ form a manifold ${\frak{Ch}}$.
It inherits a contact structure. Notice
that $E$ is a $G$-bundle; we shall also suppose that the group $G$ acts nicely
on ${\frak{Ch}}$ so that ${\frak{Ch}}$ becomes a $G'$-bundle where $G'=G/H$ and $H\subset G$ is
discrete. Its base ${\frak{Ph}}={\frak{Ch}}/G'$ is the phase space.
Wherever the contact hyperplanes
on ${\frak{Ch}}$ may be used as a connection for ${\frak{Ch}}\rightarrow {\frak{Ph}}$, the curvature is
the usual symplectic form on ${\frak{Ph}}$. The points of ${\frak{Ph}}$ where this is
impossible
are usually deleted and they should be regarded as ideal. For example,
the full ${\frak{Ph}}$ of a relativistic particle in 1+1 dimensions is shown in the following
picture:
$$\epsfxsize 25mm
\epsfbox{phase.eps}$$
One half of the cylinder corresponds to particles, the other half to
antiparticles and the connecting lines to lightlike geodesics.
We see that there is a completely natural $U(1)$- or ${\Bbb R}\,$-bundle ${\frak{Ch}}$ over
the phase space, together with a natural connection. It is important in the
view of use of such a bundle in quantization. Notice that ${\frak{Ch}}$ is even prior
to ${\frak{Ph}}$.
Let us now look at quantization using wave functions in $M$. This may have
nothing to do with quantum mechanics: we simply look for a wave equation
that leads to a given classical picture in a limit. Usually, one considers
linear equations $D_hf=0$ ($h$ being a parameter in $D$) and looks for the
high-frequency asymptotics as $h\ra0$ and the wavelength is of order
$h$. It is however much nicer if $D$ is fixed; an outline of the theory
was given at the end of Section 2. Thus let $D$ be a $G$-invariant
linear diff. operator on $U$. If we consider only $G$-equivariant functions
(with the weight $1/h$), we get an operator $D_h$ on the corresponding
associated bundle.
For example, the Schroedinger equation comes from
$$\left({1\over 2m}\triangle+V(x,t){\partial^2\over\partial s^2}+{\partial^2\over\partial s\partial t}
\right)\psi(x,t,s)=0,$$
where $s$ is the new coordinate (here $U=M\times{\Bbb R}\,$):
just notice that $\partial/\partial s$ becomes $i/\hbar$ for $\psi$ with the weight
$1/\hbar$.
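Indeed, a short symbolic computation (assuming sympy, in one spatial dimension with $\triangle=\partial_x^2$) confirms that on such equivariant $\psi$ the operator above reduces to the usual Schroedinger equation $i\hbar\,\partial_t\varphi=-\frac{\hbar^2}{2m}\triangle\varphi+V\varphi$:

```python
import sympy as sp

x, t, s = sp.symbols('x t s', real=True)
m, hbar = sp.symbols('m hbar', positive=True)
V = sp.Function('V')(x, t)
phi = sp.Function('phi')(x, t)

# s-equivariant wave function of weight 1/hbar
psi = phi * sp.exp(sp.I * s / hbar)

Dpsi = (sp.diff(psi, x, 2) / (2 * m)
        + V * sp.diff(psi, s, 2)
        + sp.diff(psi, s, t))

# multiplying D psi = 0 by -hbar^2 e^{-is/hbar} removes the s-dependence
reduced = sp.expand(-hbar**2 * Dpsi * sp.exp(-sp.I * s / hbar))
schroedinger = (-hbar**2 / (2 * m) * sp.diff(phi, x, 2)
                + V * phi - sp.I * hbar * sp.diff(phi, t))
assert sp.simplify(reduced - schroedinger) == 0
```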
Let $E\subset CU$ be given by $s_D=0$ where $s_D$ is the symbol of $D$
(notice that the Monge cone in $T_xU$
is dual to the cone $s_{D,x}=0$ in $T^*_xU$).
In the obvious sense the equation $D_hf_h=0$ gives the classical $E$-theory
as $h\rightarrow 0$. For example, take a (nonequivariant!) solution of $Df=0$ with
a singularity on a narrow strip along a characteristic of $E$. If we take
the Fourier component $f_h$ for $h\ra0$, it is significantly non-zero only
close to the projection of the characteristic to $M$. Perhaps an interesting
point is that the equation $Df=0$ contains $D_hf_h=0$ for any $h$.
Thus given $E$, quantization simply means a $G$-invariant $D$ giving $E$
by $s_D=0$. Of course, the Monge cones of $E$ have to be algebraic.
Finally, let us return to ${\frak{Ch}}\rightarrow {\frak{Ph}}$. We have a situation typical
to integral geometry: ${\frak{Ch}}\leftarrow E\rightarrow U$. In geometrical quantization
one considers sections of bundles associated to ${\frak{Ch}}\rightarrow {\frak{Ph}}$, but here we take
all possible $h$'s at once, so we consider all the functions on ${\frak{Ch}}$ instead.
One should expect a correspondence between certain such functions and
functions on $U$ satisfying $Df=0$. A polarization on ${\frak{Ph}}$ gives us
a $G$-invariant Legendrean foliation (if it is real) or (if it is completely
complex) a $G$-invariant
(codimension 1
and nondegenerate) $CR$-structure on ${\frak{Ch}}$.
The foliation gives us a complete system of solutions of the Hamilton--Jacobi
equation. Thus functions on ${\frak{Ch}}$, constant on the leaves of the foliations,
should correspond to solutions of $Df=0$ that are (integral) linear
combinations of functions singular along hypersurfaces in the complete system.
The $CR$-case is somewhat more complicated.
The discussion above is useless in this complete generality
(and several important points were omitted), but it might be interesting
for some classes of $D$'s.
\section{Conclusion}
In the present paper $G$ was always 1-dimensional, but one can consider a
principal $G$ bundle $U\rightarrow M$ and a hypersurface $E\subset CU$ for another
Lie group $G$. The manifold ${\frak{Ch}}$ is still contact, but ${\frak{Ph}}={\frak{Ch}}/G$ is no
longer symplectic; it carries only an analogue of symplectic structure.
Characteristics of $E$ represent particles in a Yang--Mills field.
We can also consider a $G$-invariant operator $D:{\cal C}^\infty(U)\rightarrow{\cal C}^\infty(U)$. Suppose
$V$ is a $G$-module and the dual $V^*$ contains a cyclic vector $\alpha$.
Let $I$ be the ideal in $U({\frak g})$ of elements annihilating
$\alpha$. Then we can
embed $V$ into the regular representation (namely onto functions annihilated by
$I$) via $v\mapsto\alpha(gv)$. In this way the functions on $U$ annihilated by
$I$ are sections of the vector bundle associated to $V$. Thus $D$
becomes an operator on these sections. We see the situation is quite analogous
to 1-dimensional $G$.
Perhaps the real problem is to go from extremal curves to surfaces and higher.
The problems with Lagrangians remain the same.
\subsection*{Acknowledgement}
This work was partially supported by the grant GA\v CR 201/96/0310.
\section{Introduction}
The Blandford-Znajek (BZ) mechanism \cite{Blandford1977} is believed to be
one of the most efficient ways to extract rotation energy from spinning black holes (BHs),
which operates in BH systems on all mass scales, from the stellar-mass BHs of gamma-ray
bursts to the supermassive BHs of active galactic nuclei. In the past decade,
we have studied the BZ mechanism from different approaches and the cross-check among
these different approaches has facilitated substantial progress in understanding the underlying
detailed physics. Taking the simple monopole magnetic field
configuration as an example, the solutions obtained from different approaches
are in quantitative agreement, see e.g. \cite{Komissarov2001,Komissarov2004,Komissarov2004e, McKinney2004}
for general relativistic magnetohydrodynamic simulations,
\cite{Tanabe2008,Pan2015,Pan2015b, Gralla2014, Gralla2015, Penna2015, Grignani2018} for analytic solutions
and \cite{Contopoulos2013,Nathanail2014,Mahlmann2018} for numerical solutions.
But for other magnetic field configurations, there is no such good agreement, e.g.,
different approaches do not even reach a consensus on the solution uniqueness for the uniform field configuration.
Several force-free electrodynamics (FFE) simulations \cite{Komissarov2004e, Komissarov2005,
Komissarov2007,Palenzuela2010,Paschalidis2013,Yang2015,Carrasco2017} have been done and
the BH magnetospheres in these simulations all settle down to a steady state with similar final field configuration,
which is an indicator for solution uniqueness.
From the viewpoint of numerical solutions, the structure of a BH magnetosphere in axisymmetric and
steady state is governed by the Grad-Shafranov (GS) equation, which is a second-order differential equation of
the magnetic flux $A_\phi(r,\theta)$, with two eigenfunctions $I(A_\phi)$ and $\Omega(A_\phi)$ to be
determined. For common field configurations, the two eigenfunctions are determined by requiring that the
magnetic field lines smoothly cross the light surfaces (LSs), where the GS equation degenerates to first order.
But for the uniform field configuration, there exists only one LS, which is insufficient for determining
the two eigenfunctions. Following this argument, there should exist infinitely many solutions \cite{Nathanail2014}.
In addition, a family of analytic solutions was presented for slowly spinning BHs,
and no instability mode was found for any of these solutions \cite{Yang2015}. Therefore the solution stability
is not likely the explanation for the solution uniqueness.
To explain the discrepancy about the solution uniqueness from different approaches, Pan et al. \cite{Pan2017}
proposed that the two eigenfunctions $\Omega(A_\phi)$ and $I(A_\phi)$ are connected by the radiation condition at infinity instead of
being independent, which was readily confirmed by recent high-accuracy FFE simulations done
by East and Yang \cite{East2018}. In addition, other interesting features of the BH
magnetosphere structure show up in the simulations, e.g., an equatorial current sheet naturally develops
within the ergosphere, and magnetic dominance is marginally lost on the current sheet, i.e., $B^2-E^2\approx 0$.
Motivated by these simulation results, we revisit the uniform field solution and investigate the role of the
marginally force-free equatorial boundary condition in the BH magnetosphere structure.
We find that the qualitative properties of the BH magnetosphere structure, including the shape of the LS,
the near-horizon field line configuration, and the source of the Poynting flux, can be attributed to the marginally
force-free equatorial boundary condition without invoking the GS equation.
We also propose an algorithm for numerically solving the GS equation while
self-consistently imposing the marginally force-free equatorial boundary condition.
As a result, we find our numerical solutions are in good agreement with the FFE simulations.
The paper is organized as follows. In Section \ref{sec:basic}, we outline the basic governing equations.
In Section \ref{sec:uni_sol}, we clarify the radiation condition, boundary conditions and the numerical algorithm
for the uniform field solution. In Section \ref{sec:discussion}, we generalize the discussion
to more field configurations. A summary is given in Section \ref{sec:summary}.
Throughout this paper, we use the convention $G=c=M=1$ unless otherwise specified,
where $M$ is the mass of the BH.
\section{Basic equations}
\label{sec:basic}
In this paper, we adopt Kerr-Schild
coordinates, with the line element
\[
\begin{aligned}
ds^2 =
&-\left( 1-\frac{2r}{\Sigma} \right)dt^2 + \left( \frac{4
r}{\Sigma} \right) dr dt + \left(1+\frac{2r}{\Sigma} \right) dr^2 \\
&+ \Sigma d\theta^2 - \frac{4 a r \sin^2\theta}{\Sigma} d\phi dt
- 2 a \left(1+\frac{2r}{\Sigma}\right) \sin^2\theta d\phi dr \\
& + \frac{\beta}{\Sigma} \sin^2\theta d\phi^2
\end{aligned}
\]
where $\mu\equiv\cos\theta$, $\Sigma = r^2 + a^2 \mu^2$, $\Delta = r^2 -2r + a^2$,
$\beta = \Delta\Sigma + 2r(r^2 + a^2)$, and $a$ is the dimensionless BH spin.
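As a quick consistency check on this line element, one can verify numerically that its determinant satisfies $\sqrt{-\det g}=\Sigma\sin\theta$, as for Kerr in Boyer-Lindquist coordinates (the Kerr-Schild shifts of $t$ and $\phi$ have unit Jacobian). A minimal Python sketch, independent of the paper's computation:

```python
import math

def kerr_schild_metric(r, theta, a):
    """Metric components g[mu][nu] in (t, r, theta, phi), read off the line element."""
    mu = math.cos(theta)
    s2 = math.sin(theta)**2
    Sigma = r**2 + a**2*mu**2
    Delta = r**2 - 2*r + a**2
    beta = Delta*Sigma + 2*r*(r**2 + a**2)
    g = [[0.0]*4 for _ in range(4)]
    g[0][0] = -(1 - 2*r/Sigma)
    g[0][1] = g[1][0] = 2*r/Sigma          # (4r/Sigma) dr dt
    g[1][1] = 1 + 2*r/Sigma
    g[2][2] = Sigma
    g[0][3] = g[3][0] = -2*a*r*s2/Sigma    # -(4ar sin^2/Sigma) dphi dt
    g[1][3] = g[3][1] = -a*(1 + 2*r/Sigma)*s2
    g[3][3] = beta*s2/Sigma
    return g, Sigma

def det4(m):
    # Laplace expansion along the first row (4x4 matrices only)
    def det3(a):
        return (a[0][0]*(a[1][1]*a[2][2]-a[1][2]*a[2][1])
              - a[0][1]*(a[1][0]*a[2][2]-a[1][2]*a[2][0])
              + a[0][2]*(a[1][0]*a[2][1]-a[1][1]*a[2][0]))
    d = 0.0
    for j in range(4):
        minor = [[m[i][k] for k in range(4) if k != j] for i in range(1, 4)]
        d += (-1)**j * m[0][j] * det3(minor)
    return d

# sqrt(-det g) should equal Sigma*sin(theta) at any point off the axis
g, Sigma = kerr_schild_metric(r=3.1, theta=1.0, a=0.9)
print(math.sqrt(-det4(g)), Sigma*math.sin(1.0))
```

The two printed numbers agree to machine precision, which also guards against typos in the metric components.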
In the force-free approximation, the electromagnetic energy greatly exceeds that of matter.
Consequently, the force-free magnetosphere is governed by the energy
conservation equation of the electromagnetic field,
conventionally called the GS equation.
In the Kerr spacetime,
the axisymmetric and steady GS equation can be written in a compact form \cite{Pan2017}
\begin{eqnarray}
\label{eq:GSg}
&&\phantom{+}
\left[A_{\phi,rr} + \frac{\sin^2\theta}{\Delta}A_{\phi,\mu\mu} \right] \mathcal K(r,\theta; \Omega )\nonumber \\
&&
+\left[A_{\phi,r} \partial_r^\Omega + \frac{\sin^2\theta}{\Delta}A_{\phi,\mu} \partial_\mu^\Omega\right] \mathcal K(r,\theta; \Omega ) \nonumber \\
&&
+ \frac{1}{2}\left[A_{\phi,r}^2 + \frac{\sin^2\theta}{\Delta}A_{\phi,\mu}^2\right] \Omega' \partial_\Omega \mathcal K(r,\theta; \Omega )\nonumber \\
&&
- \frac{\Sigma}{\Delta}II' = 0 \ ,
\end{eqnarray}
where the LS function
\begin{equation}
\label{eq:ls}
\mathcal K(r,\theta; \Omega )= \frac{\beta}{\Sigma}\Omega^2 \sin^2\theta
-\frac{4ra}{\Sigma}\Omega \sin^2\theta
-\left(1-\frac{2r}{\Sigma}\right),
\end{equation}
the primes designate derivatives with respect to $A_\phi$,
$\partial_i^\Omega$ ($i=r,\mu$) denotes the partial derivative
with respect to the coordinate $i$ with $\Omega$ fixed, and $\partial_\Omega$ is the derivative with
respect to $\Omega$. {\bf The GS equation degenerates to first order on the LS,
where the LS function $\mathcal K(r,\theta; \Omega )$ vanishes.}
\section{Uniform field solution}
\label{sec:uni_sol}
\subsection{Solution uniqueness and radiation condition}
For common field configurations, there exist two LSs on which
the LS function vanishes and the GS equation degenerates from second order to first order. As proposed
by Contopoulos et al. \cite{Contopoulos2013}, one can adjust the two eigenfunctions $\Omega(A_\phi)$
and $I(A_\phi)$ so that field lines smoothly cross the two LSs; the solution
$\{\Omega(A_\phi), I(A_\phi), A_\phi(r,\theta)\}$ is then uniquely determined. But for vertical field lines, there
exists only one LS, which is insufficient for determining the two eigenfunctions.
In this case, many solutions are expected \cite{Nathanail2014, Mahlmann2018}, but the many-solutions scenario is in
conflict with several previous FFE simulations \cite{Komissarov2004e, Komissarov2005,
Komissarov2007,Palenzuela2010,Paschalidis2013,Yang2015,Carrasco2017}.
To explain the discrepancy regarding the uniqueness of the uniform field solution,
Pan et al. \cite{Pan2016a, Pan2017} proposed that the two eigenfunctions are not independent;
instead, they are related by the radiation condition at infinity, {\bf which is formulated as
$\hat E_\theta = \hat B_\phi$, with $\hat E_\theta$ and $\hat B_\phi$ being the $\theta$
component of the electric field and the $\phi$ component of the magnetic field measured by zero-angular-momentum observers, respectively.
For the uniform field solution, the radiation condition is explicitly expressed as }
\begin{equation} I = 2\Omega A_\phi, \label{eq:rad}\end{equation}
which has been readily confirmed by recent high-accuracy FFE simulations \cite{East2018}.
Combined with suitable boundary conditions, we expect a unique uniform field solution, as indicated
by the previous FFE simulations.
\subsection{Boundary conditions}
The boundary conditions at infinity (inner infinity $r=r_+$ and outer infinity $r\rightarrow\infty$)
and on the polar axis can be simply set as
\begin{equation}
\begin{aligned}
A_{\phi,r}|_{r=r_+, \infty} &= 0, \\
A_\phi|_{\mu = 1} &= 0,\\
\end{aligned}
\end{equation}
where $r_+$ is the radius of the event horizon,
while the equatorial boundary condition was more uncertain until recent high-accuracy simulations
came out, showing that there exists an equatorial current sheet within the ergosphere on which
magnetic dominance is marginally lost, i.e., $(B^2-E^2)/B_0^2$ goes to a small positive value
as the current sheet is approached, where $B_0$ is the uniform field strength
at infinity \cite{East2018}.\footnote{The marginally force-free equatorial boundary condition
is not unique to BH magnetospheres; it is also found in dissipative pulsar magnetospheres
\cite[see e.g.][]{Gruzinov2008, Gruzinov2011}.}
Motivated by the simulation results,
we choose the following equatorial boundary condition in our numerical solutions,
\begin{subequations}
\begin{align}
A_{\phi,\mu}(\mu = 0, r > 2) &= 0, \label{eq:bc2}\\
B^2-E^2 (\mu = 0, r_+ \leq r \leq 2) &=0.\label{eq:bc3}
\end{align}
\end{subequations}
In fact, Equation (\ref{eq:bc3})
is neither a Dirichlet nor a Neumann boundary condition, since
\begin{equation}
\begin{aligned}
&B^2-E^2 \label{eq:B2mE2}\\
= & \frac{1}{\Sigma \sin^2\theta} \left[ -\mathcal{K} \left(A_{\phi,r}^2 +\frac{\sin^2\theta}{\Delta}A_{\phi,\mu}^2 \right)+\frac{\Sigma}{\Delta}I^2\right],
\end{aligned}
\end{equation}
which involves both derivatives $A_{\phi,\mu}$ and $A_{\phi,r}$ on the boundary.
As we will see later, it is numerically non-trivial to impose this boundary condition in computation.
We note the coordinate singularity $1/\Delta$ in the expression for $B^2-E^2$.
To avoid possible numerical difficulties, we use the prescription
\begin{equation}
\label{eq:nbc}
\int_{A_\phi^{\rm HE}}^{A_\phi^{\rm EE}} \left(\frac{B^2-E^2}{B^2+E^2}\right)^2 dA_\phi \Bigg/ \left(A_\phi^{\rm EE}- A_\phi^{\rm HE}\right) < 10^{-3},
\end{equation}
in our computation, as a proxy of the marginally force-free equatorial boundary condition (\ref{eq:bc3}),
where $A_\phi^{\rm HE}$ and $A_\phi^{\rm EE}$ are the magnetic flux enclosed by the horizon and by the ergosphere, respectively;
``HE" and ``EE" are short for Horizon-Equator and Ergosphere-Equator, respectively.
For definiteness, we choose $B^2+E^2$ to be the energy density measured by zero-angular-momentum observers.
Explicitly, we have
\begin{equation}
\begin{aligned}
& B^2+E^2 \\
=& \frac{1}{\Sigma \sin^2\theta} \left[ \left(\mathcal{K}+\frac{\Delta\Sigma}{\beta} \right)
\left(A_{\phi,r}^2 +\frac{\sin^2\theta}{\Delta}A_{\phi,\mu}^2 \right)
+\frac{\Sigma}{\Delta}I^2\right] .
\end{aligned}
\end{equation}
\subsection{Generic properties of the BH magnetosphere structure}
\label{subsec:features}
Before delving into the details of numerically solving the GS equation,
we point out here that the radiation condition (\ref{eq:rad}) and the marginally
force-free boundary condition (\ref{eq:bc3})
by themselves contain rich information about the BH magnetosphere structure.
Let us first find where the LS intersects the equator, $r_{\rm LS}|_{\mu=0}$.
{\bf At this point $r_{\rm LS}|_{\mu=0}$, where the LS function $\mathcal K$ vanishes,
$I$ must also vanish to satisfy the marginally force-free boundary condition (see Equation [\ref{eq:B2mE2}]),
which in turn indicates a vanishing angular velocity $\Omega$ from
the radiation condition (\ref{eq:rad}), i.e., $\Omega(\mu=0, r= r_{\rm LS}|_{\mu=0}) =0$.
Plugging $\Omega(\mu=0, r= r_{\rm LS}|_{\mu=0}) =0$ back into $\mathcal K = 0$,
we obtain $r_{\rm LS}|_{\mu=0}=2$, i.e.,
to satisfy the boundary condition (\ref{eq:bc3}), the LS must intersect the equator at $r=2$, which also
justifies our choice of equatorial boundary conditions (\ref{eq:bc2},\ref{eq:bc3}).}
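This conclusion can be checked directly from the LS function (\ref{eq:ls}): on the equator ($\mu=0$) with $\Omega=0$, it reduces to $-(1-2/r)$, which vanishes at $r=2$ for any spin. A minimal numerical sketch:

```python
import math

def K(r, mu, Omega, a):
    """LS function K(r, theta; Omega) of the text, with mu = cos(theta)."""
    Sigma = r**2 + a**2*mu**2
    Delta = r**2 - 2*r + a**2
    beta = Delta*Sigma + 2*r*(r**2 + a**2)
    s2 = 1.0 - mu**2  # sin^2(theta)
    return beta/Sigma*Omega**2*s2 - 4*r*a/Sigma*Omega*s2 - (1.0 - 2*r/Sigma)

# With Omega = 0 on the equator (mu = 0), K vanishes at r = 2 for any spin a:
for a in (0.3, 0.7, 0.99):
    print(a, K(2.0, 0.0, 0.0, a))  # ~0 for each spin
```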
From the above analysis, we expect several generic properties of the magnetosphere that are independent
of the GS equation:
(1) the LS runs from $r=r_+$ to $r=2$ as $\theta$ varies from $0$ to $\pi/2$;
(2) since $I$ vanishes at $r_{\rm LS}|_{\mu=0}$, we expect no current sheet within the magnetosphere except the equatorial current
sheet extending from $r_+$ to $2$, which gives rise to a cusp ($A_{\phi,\mu} \neq 0$) in the equatorial magnetic field lines;
(3) magnetic field lines entering the ergosphere end up either on the horizon or on the equatorial current sheet,
both of which carry electric current and therefore Poynting flux (see \cite{Punsly1990} for a physical realization of
equatorial current sheet sourcing Poynting flux).
With the guidance of the qualitative properties above, we now proceed to numerically
solve the GS equation and quantify these properties.
\subsection{Numerical method}
\label{subsec:algorithm}
In our computation, we define a new radial coordinate $R=r/(1+r)$, confine our
computation domain $R\times \mu$ in the region $[R(r_+), 1]\times [0,1]$,
and implement a uniform $512\times 64$ grid.
We aim to find a pair of eigenfunctions $\Omega(A_\phi)$ and $I(A_\phi)$ that satisfy the radiation condition (\ref{eq:rad})
and enable field lines to smoothly cross the LS,
together with a suitable normal derivative $A_{\phi,\mu}(\mu =0, r_+\leq r \leq 2)$ on the equator
that guarantees the boundary condition (\ref{eq:bc3}).
The numerical algorithm of searching for the desired eigenfunctions and the equatorial
boundary condition $\{\Omega(A_\phi), I(A_\phi), A_{\phi,\mu}(\mu =0, r_+\leq r \leq 2)\}$
is detailed in the following steps.
1. We choose an initial guess for the field configuration, eigenfunctions
$\{ \Omega(A_\phi), I(A_\phi)\}$
and equatorial boundary condition as follows
\begin{subequations}
\begin{align}
A_\phi &= \frac{B_0}{2}r^2 \sin^2\theta, \label{eq:gss1} \\
\Omega & = 0.5\Omega_{\rm H}\left(1-A_\phi/A_\phi^{\rm HE}\right), \label{eq:gss2}\\
I & = \Omega_{\rm H} A_\phi\left(1-A_\phi/A_\phi^{\rm HE}\right) , \\
A_{\phi,\mu}(\mu =0, r_+\leq r \leq 2) & = - (r/r_+)^3, \label{eq:gss3}
\end{align}
\end{subequations}
where $\Omega_{\rm H} = a/2r_+$ is the angular velocity of the BH.
2. We evolve the GS equation (\ref{eq:GSg}) using the well-known relaxation method \cite{Press1987}
and adjust $II'(A_\phi)$ until field lines smoothly cross the LS
\cite[see e.g.][for more details]{Contopoulos2013, Nathanail2014, Pan2017, Mahlmann2018}.
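The relaxation method cited here is the standard iterative scheme for elliptic problems: sweep the grid repeatedly, replacing each interior value using its neighbors until the updates fall below a tolerance. As a toy illustration of the idea on the 2-D Laplace equation (not the full GS equation, and independent of the paper's implementation):

```python
def relax_laplace(u, tol=1e-10, max_iter=20000):
    """Gauss-Seidel relaxation for u_xx + u_yy = 0 on a square grid
    with fixed (Dirichlet) boundary values."""
    n = len(u)
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, n-1):
            for j in range(1, n-1):
                new = 0.25*(u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1])
                diff = max(diff, abs(new - u[i][j]))
                u[i][j] = new
        if diff < tol:
            break
    return u

# Boundary: u = 1 on one edge, 0 on the other three; the interior relaxes
# to the discrete harmonic solution. By symmetry the center value is 1/4.
n = 17
u = [[0.0]*n for _ in range(n)]
for j in range(n):
    u[0][j] = 1.0
u = relax_laplace(u)
print(u[n//2][n//2])  # ~0.25
```

In the actual algorithm the same sweep-until-converged structure is applied to the GS operator of Equation (\ref{eq:GSg}) while $II'(A_\phi)$ is adjusted at the LS.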
3. Usually the current $I$ found in Step 2 neither satisfies the radiation condition (\ref{eq:rad})
nor guarantees the boundary condition (\ref{eq:bc3}). We adjust
$A_{\phi,\mu}(\mu = 0, r_+ \leq r\leq 2)$ as follows,
\begin{equation}
\label{eq:bc_crt}
A_{\phi,\mu}|_{\rm new} = A_{\phi,\mu}|_{\rm old} + \zeta_1\times[2\Omega A_\phi(2\Omega A_\phi)' -II'],
\end{equation}
where $\zeta_1$ is an empirical step size. For each new $A_{\phi,\mu}$, we repeat Step 2 and the iterative
correction (\ref{eq:bc_crt}) until $A_{\phi,\mu}(\mu = 0, r_+ \leq r\leq 2)$ converges, i.e., until the condition $2\Omega A_\phi(2\Omega A_\phi)' = II'$ is
achieved for $A_\phi\in (A_\phi^{\rm HE}, A_\phi^{\rm EE})$.
4. The remaining task is to adjust $\Omega(0< A_\phi < A_\phi^{\rm HE})$ to enforce the radiation condition (\ref{eq:rad})
for $A_\phi \in (0, A_\phi^{\rm HE})$, and to adjust $\Omega(A_\phi^{\rm HE} \leq A_\phi \leq A_\phi^{\rm EE})$
to enforce the boundary condition (\ref{eq:bc3}) for $A_\phi \in (A_\phi^{\rm HE}, A_\phi^{\rm EE})$.
The first part is straightforward, i.e.,
\begin{equation}
2A_\phi\Omega_{\rm new} = I|_{0 < A_\phi < A_\phi^{\rm HE}},
\end{equation}
and the second part can be realized by iterative correction
\begin{equation}
2A_\phi(\Omega_{\rm new} - \Omega_{\rm old}) = - \zeta_2\times\Delta (B^2-E^2)|_{\mu=0, r_+\leq r \leq 2},
\end{equation}
where $\zeta_2$ is again an empirical step size, and we have multiplied the correction term by the factor $\Delta$
to avoid numerical difficulties in the vicinity of the event horizon.
To eliminate the unphysical discontinuity in the angular
velocity at $A_\phi^{\rm HE}$, we fit $\Omega_{\rm new}(A_\phi)$ on the whole range
$(0, A_\phi^{\rm EE})$ with a fifth-order polynomial.
5. For the new angular velocity $\Omega_{\rm new}(A_\phi)$ obtained in Step 4, we repeat Steps 2 to 4
until both the radiation condition (\ref{eq:rad}) and the numerical prescription (\ref{eq:nbc}) for the boundary condition (\ref{eq:bc3}) are satisfied.
\subsection{Numerical results}
In Figure \ref{fig:field_lines}, we plot the magnetic field lines enclosing a BH with spin $a =0.99$ as an example,
which explicitly displays the properties we anticipated in Section \ref{subsec:features} and agrees with
the simulation results in detail \cite{East2018}.
In Figure \ref{fig:omega}, we show the angular velocity function $\Omega(A_\phi)$ for different BH spins and
compare it with the counterpart obtained from the simulations \cite{East2018}.
For reference, we also plot the leading-order analytic solution in the slow-rotation
limit \cite{Beskin2013, Pan2014, Gralla2015, East2018},
\begin{equation}
\Omega = \Omega_{\rm H}\frac{\sqrt{1-\psi}} {1+\sqrt{1-\psi}},
\end{equation}
where $\psi = A_\phi/(2B_0M^2)$.
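As a sanity check on this expression (an illustrative sketch, not the paper's code): it gives $\Omega=\Omega_{\rm H}/2$ on the polar field line ($\psi=0$) and $\Omega=0$ at $\psi=1$, consistent with the vanishing angular velocity found above at the ergosphere-equator intersection.

```python
import math

def omega_slow(psi, Omega_H):
    """Leading-order slow-rotation angular velocity; psi = A_phi/(2 B0 M^2)."""
    s = math.sqrt(1.0 - psi)
    return Omega_H * s / (1.0 + s)

Omega_H = 1.0
print(omega_slow(0.0, Omega_H))  # 0.5: half the horizon frequency on the axis
print(omega_slow(1.0, Omega_H))  # 0.0: the field line reaching the equator at r = 2
```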
From our numerical solutions, we find that the magnetic flux entering the ergosphere, $A_\phi^{\rm EE}$, increases with
the BH spin and approaches $2.75 B_0 M^2$ for extremal spins (upper panel of Figure \ref{fig:omega}),
which is $\approx 5\%$ lower than the simulation result (Figure 3 in Ref. \cite{East2018}),
while the angular velocity $\Omega$ as a function
of the normalized magnetic flux $A_\phi/A_\phi^{\rm EE}$ agrees with the
simulation results to high precision.\footnote{We have done a test and
found that the $\approx 5\%$ difference in $A_\phi^{\rm EE}$ does not arise from the slightly
different equatorial boundary conditions used in this work and found in East and Yang's simulations.
The difference is more likely due to the relative numerical bias between the two algorithms. }
\begin{figure}
\includegraphics[scale=0.6]{f1}
\caption{\label{fig:field_lines} The configuration of field lines for the magnetosphere of a Kerr BH with spin $a=0.99$,
where the solid/red line is the ergosphere and the dashed/black line is the LS, both of which intersect
the equator at $r=2 M$. }
\end{figure}
\begin{figure}
\includegraphics[scale=0.7]{f2a}
\includegraphics[scale=0.7]{f2b}
\caption{\label{fig:omega} Upper panel: the angular velocity $\Omega(A_\phi)$ for different BH spins
obtained from our numerical solutions.
Lower Panel: comparison of our numerical results (solid lines) with the simulation results of
Ref. \cite{East2018} (dashed lines). For reference, we also plot the leading order analytic solution in
dash-dotted lines.}
\end{figure}
With the angular velocity $\Omega(A_\phi)$ obtained, the energy extraction rate from the BH is given by
\begin{equation}
\label{eq:Edot}
\dot E = 4\pi \int_0^{A_\phi^{\rm EE}} \Omega \times I \ d A_\phi.
\end{equation}
It is straightforward to obtain the energy extraction rate in the slow-rotation limit
\begin{equation}
\label{eq:lowEdot}
\dot E = 128\pi \left(\frac{17}{24}-\ln 2\right) B_0^2 M^4 \Omega_{\rm H}^2.
\end{equation}
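Equation (\ref{eq:lowEdot}) follows from inserting the slow-rotation $\Omega(\psi)$ and the radiation condition $I=2\Omega A_\phi$ into Equation (\ref{eq:Edot}), with $A_\phi = 2B_0 M^2\psi$ and $A_\phi^{\rm EE}=2B_0M^2$ in this limit. A quick numerical quadrature confirms the coefficient $128\pi(17/24-\ln 2)$ (a sketch, not the paper's code):

```python
import math

def edot_slow_numeric(Omega_H, B0=1.0, M=1.0, n=200000):
    """Midpoint-rule evaluation of Edot = 4*pi*int_0^{A_EE} Omega*I dA_phi
    with I = 2*Omega*A_phi and the leading-order slow-rotation Omega(psi)."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        psi = (k + 0.5) * h
        s = math.sqrt(1.0 - psi)
        Om = Omega_H * s / (1.0 + s)
        A = 2.0 * B0 * M**2 * psi
        total += Om * (2.0 * Om * A) * (2.0 * B0 * M**2) * h  # dA_phi = 2 B0 M^2 dpsi
    return 4.0 * math.pi * total

Omega_H = 0.1
analytic = 128.0 * math.pi * (17.0/24.0 - math.log(2.0)) * Omega_H**2
print(edot_slow_numeric(Omega_H), analytic)  # the two values agree closely
```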
\begin{figure}
\includegraphics[scale=0.7]{f3}
\caption{\label{fig:Edot} Comparison of the energy extraction rates $\dot E(\Omega_{\rm H})$ obtained from
three different approaches: the leading-order analytic solution (\ref{eq:lowEdot}),
our numerical solutions and the high-resolution force-free simulations \cite{East2018}.}
\end{figure}
In Figure \ref{fig:Edot}, we compare the energy extraction rates $\dot E(\Omega_{\rm H})$
derived from our numerical solutions with East and Yang's simulation results
\cite{East2018}, where the data points are taken from either the simulations
or our numerical solutions, while the solid lines are corresponding polynomial
fitting curves which we require to approach Equation (\ref{eq:lowEdot}) for small
spins and to be flat for extremal spins. As expected, our energy extraction
rate $\dot E(\Omega_{\rm H})$ is $\approx 10\%$ lower than the corresponding simulation results,
due to the $\approx 5\%$ smaller magnetic flux $A_\phi^{\rm EE}$.
To summarize, the uniform field solution is indeed unique, as confirmed both by the high-accuracy FFE
simulations and by our numerical solutions. The structure of the BH magnetosphere
is largely shaped by the radiation condition and the marginally force-free
equatorial boundary condition.
\section{Discussion}
\label{sec:discussion}
\subsection{Application to general field configurations}
In realistic astrophysical environments, we expect the field lines far away from the central BH
to be closer to parabolas than strictly vertical. In several previous studies of such field
configurations \cite[e.g.][]{Tchekhovskoy2010,Nathanail2014,Mahlmann2018},
due to a lack of knowledge of the equatorial boundary condition, the
equator within the ergosphere was intentionally excluded from the computation domain by manually
introducing a ``wall" extending from the horizon-equator intersection to infinity. Such a
simplification obviously misses the magnetic field lines rooted in the
equatorial current sheet, which contribute about half of the total Poynting flux for extremal spins in
the case of the uniform field configuration.
Due to the resemblance of the near-horizon field lines in the two cases,
it is reasonable to expect that an equatorial current sheet develops within the ergosphere,
where magnetic dominance is lost; therefore the marginally force-free boundary
condition (\ref{eq:bc3}) should also be a good working approximation for studying
a BH magnetosphere embedded in parabolic magnetic field lines.
It is straightforward to solve the GS equation and self-consistently
impose the marginally force-free boundary condition following the
algorithm detailed in Section \ref{subsec:algorithm}.
Though we do not numerically solve the GS equation for the general parabolic field configurations,
the qualitative properties we summarized in Section \ref{subsec:features} also apply here,
since these properties are the consequence of the radiation condition and the marginally force-free equatorial boundary condition,
while the GS equation only serves to quantify them.
\subsection{Near-horizon magnetic field lines}
In a previous study \cite{Pan2016a}, we made a claim that ``in the steady axisymmetric force-free magnetosphere
around a Kerr BH, all magnetic field lines that cross the infinite-redshift surface must intersect the event horizon".
This claim is based on the radiation condition
\begin{equation}
\label{eq:Grad}
I = \Omega \times \mathcal F(A_\phi),
\end{equation}
and the assumption of no current sheet within the ergosphere,
where the function $\mathcal F(A_\phi)$ is of $\mathcal O(A_\phi)$ and is field configuration dependent.
The basic logic for obtaining the claim above is as follows.
The angular velocity $\Omega$ must be nonzero for all field lines entering the ergosphere due to the frame-dragging effect;
as a result, $I$ must be nonzero for these field lines according to the radiation condition.
If there is a field line entering the ergosphere and crossing the equator, the electric current either flows
towards the equator from both the $+z$ and $-z$ sides, or flows away from the equator to infinity in
both the $+z$ and $-z$ directions. In either case, charge conservation is violated
if there exists no equatorial current sheet.
However, the high-accuracy FFE simulations show that an equatorial current sheet inevitably develops within the ergosphere, where
the force-free condition marginally breaks down. Therefore the above claim should be generalized as
``in the steady axisymmetric force-free magnetosphere
around a Kerr BH, all magnetic field lines that cross the infinite-redshift surface
must intersect the event horizon or end up on the equatorial current sheet". Specifically,
this claim excludes the existence of field lines entering the ergosphere and crossing the
equator vertically.
\section{Summary}
\label{sec:summary}
In the force-free limit, the structure of a steady and axisymmetric BH magnetosphere is governed by
the GS equation, which is a second-order differential equation for the magnetic flux $A_\phi$, with two
eigenfunctions $\Omega(A_\phi)$ and $I(A_\phi)$ to be determined. For common field configurations, there exist
two LSs on which the GS equation degenerates to first order, and the two eigenfunctions are determined
by the requirement that magnetic field lines smoothly cross the two LSs.
For the uniform field configuration, there is only one
LS, which is insufficient for determining both $\Omega(A_\phi)$ and $I(A_\phi)$.
Therefore the solution uniqueness of the uniform field configuration has been a controversial problem.
To tackle this problem, we proposed that the two functions are related by the radiation condition (\ref{eq:rad}),
instead of being independent \cite{Pan2017}, which was readily confirmed by recent high-accuracy
FFE simulations \cite{East2018}. In addition, these simulations also provide a close look at the
equatorial boundary condition: an equatorial current sheet develops within the ergosphere, on which magnetic dominance is marginally lost, i.e., $B^2-E^2\approx 0$.
Motivated by these simulation results, we revisit the problem of the uniform field solution in this paper.
We find that the radiation condition (\ref{eq:rad}) and the marginally force-free boundary
condition (\ref{eq:bc3}) are rather informative: they dictate the BH magnetosphere structure
in various aspects, including the shape of the LS, the near-horizon field line configuration and
the source of the BZ flux (see Section \ref{subsec:features} for details). {\bf In particular, we find that the LS
intersects the ergosphere at the equator, which was also observed in previous simulations \cite[e.g.][]{Carrasco2017, East2018},
and we now understand the underlying physics: it follows from the radiation condition and the marginally force-free condition.}
Beyond these qualitative properties,
we also propose an algorithm for numerically solving the GS equation while self-consistently imposing
the marginally force-free equatorial boundary condition. As a result, we find good agreement
between our numerical solutions and the high-accuracy FFE simulations.
In realistic astrophysical environments, we expect the magnetic field lines far away from the central BH
to be closer to parabolic than strictly vertical. However, we also expect the marginally
force-free equatorial boundary condition to be a good working approximation for studying parabolic field configurations,
due to the resemblance of the near-horizon field configurations in the two cases.
Though we do not numerically solve the GS equation for the parabolic configurations in this paper, the qualitative
properties of the uniform field solution summarized in Section \ref{subsec:features} also apply here,
since these properties are dictated by the radiation condition
and the marginally force-free boundary condition, while the GS equation only serves to quantify them.
\acknowledgements
ZP thanks William East and Huan Yang for stimulating discussions and sharing their simulation results.
ZP also thanks Cong Yu and Lei Huang for reading through a previous version of this manuscript and providing useful suggestions.
ZP was supported by the UC Davis Dissertation Year Fellowship when this work was started.
This research was also supported by Perimeter Institute for Theoretical Physics.
Research at Perimeter Institute is supported by the Government of Canada
through the Department of Innovation, Science and Economic Development Canada
and by the Province of Ontario through the Ministry of Research, Innovation and Science.
\section{Introduction}\label{sec:intro}
Mollison has introduced in \cite{Mo1}, \cite{Mo2} a stochastic spatial general
epidemic model on $\Z^d$,
describing the evolution of individuals submitted to infection by contact
contamination of infected neighbors. More precisely, each site of $\Z^d$
can be healthy, infected, or immune. At time 0, there is an infected
individual at the origin, and all other sites are occupied by healthy
individuals.
An infected individual emits germs according to a Poisson process;
it stays infected for a random time, then it recovers and becomes
immune to further infection.
A germ emitted from $x\in\Z^d$ goes to one of the neighbors $y\in\Z^d$ of
$x$ chosen at random. If the individual at $y$ is healthy
then it becomes
infected and begins to emit germs; if this individual is infected or
immune, nothing happens. The germ emission processes and the durations of infections of
different individuals are mutually independent.\\ \\
Since its introduction, this epidemic model has given rise to many
studies, and to other models that are variations
of this ``SIR'' (Susceptible-Infected-Recovered) structure.
A first direction is
whether the
different states asymptotically survive or not, according
to the values of the involved parameters (e.g. the infection and
recovery rates). A second direction is the derivation of a shape
theorem for the asymptotic behavior of infected individuals,
when there is no extinction of the infection (throughout this paper,
extinction is understood as extinction of the infection). \\ \\
Kelly in \cite{K} proved that for $d=1$, extinction is almost sure for
the spatial general epidemic model.
Kuulasmaa in \cite{Ku}
has studied the threshold behavior of this model in dimension $d\ge 2$.
He proved that the process has a critical infection rate below which extinction is
almost certain, and above which there is survival.
His work (as well as the following ones on this model)
is based on the analysis of a directed oriented percolation model,
that he calls a locally dependent random graph, in correspondence
with the epidemic model.
See also the related paper \cite{KZ}.\\ \\
Cox \& Durrett have derived in \cite{CD2} a shape theorem
for the set of infected individuals
in the spatial general epidemic model on $\Z^2$, when there is no extinction,
and the contamination rule is nearest neighbor.
This result was extended to a finite range
contamination rule by Zhang in \cite{Z}. The proofs in \cite{CD2}, \cite{Z}
are based on the correspondence with the locally dependent random graph, and they refer to
\cite{CD1} (which deals with first passage percolation).
They rely on the introduction of circuits to delimit and control
open paths, hence cannot
be used above dimension $d=2$.\\ \\
Chabot proved in \cite{C} the shape theorem for the infected individuals
of the spatial general epidemic model in dimension
$d\ge 3$, with the restriction to deterministic
durations of infection: in that case the oriented percolation model is comparable
to a non-oriented Bernoulli percolation model (as noticed in \cite{Ku}, the
case with constant durations of infection is the only one where the edges are independent).
He also exploited
the papers \cite{AP} by Antal \& Pisztora,
and \cite{GM} by Grimmett \& Marstrand on dynamic renormalization
to deal with dimension $d\ge 3$.
He introduced random neighborhoods for points in
$\Z^d$, and with these, instead of circuits, he was able to extend the proof of \cite{CD2}. \\ \\
In the present work, we prove
the shape theorem for infected individuals in
the spatial general epidemic model in dimension $d\ge 3$, when
the durations of infection are random variables.
There, the comparison with non-oriented
percolation done in \cite{C} is no longer valid. Our approach requires adapting techniques
of \cite{GM}, and deriving sub-exponential estimates to play the role of the exponential estimates of \cite{AP}.
It is then possible to follow the skeleton of \cite{C}.\\ \\
In Section \ref{sec:model} we define the spatial general epidemic model, the locally dependent random graph,
and we state our main result, Theorem \ref{th:shape}. In Section \ref{sec:appliquer_GM}
we derive the necessary percolation estimates on the locally dependent random graph for Theorem \ref{th:shape}. We prove the latter
in Section \ref{sec:Nicolas}, thanks to an analysis of the passage times for the
epidemic.
\section{ The model: definition and result }\label{sec:model}
Let $d\ge 3$. The epidemic model on $\Z^d$ is represented by a Markov process $(\eta_t)_{t\ge 0}$
of state space $\Omega=\{0,i,1\}^{\Z^d}$. The value $\eta_t(x)\in\{0,i,1\}$ is the state
of individual $x$ at time $t$: state 1 if the individual is healthy (but not immune),
state $i$ if it is infected, or state 0
if it is immune.
To describe how the epidemic propagates, we
introduce
a locally dependent oriented bond percolation model on $\Z^d$.
For $x=(x_1,\ldots,x_d)\in\Z^d, y=(y_1,\ldots,y_d)\in\Z^d$,
$\| x-y\|_1=\sum_{i=1}^d |x_i-y_i|$ denotes the $l^1$ norm of
$x-y$,
and we write $x\sim y$ if $x,y$ are neighbors, that is $\| x-y\|_1=1$.
Let $(T_x, e(x,y): x,y \in \Z^d, x\sim y)$ be independent
random variables on a probability space,
whose probability is denoted by $P_\lambda$ for
a parameter $\lambda>0$, such that
1) the $T_x$'s are positive with a common distribution
satisfying $P_\lambda(T_x=0)<1$;
2) the $e(x,y)$'s are exponentially distributed with parameter $\lambda$.\\
We define
\begin{equation}\label{def:open-closed-bonds}
X(x,y)=
\cases{
1 & if $e(x,y)<T_x$;\cr
0 & otherwise.\cr
}
\end{equation}
The oriented bond $(x,y)$ is said to be \textit{open with passage time}
$e(x,y)$ (abbreviated $\lambda$\textit{-open}, or \textit{open} when the parameter is fixed)
if $X(x,y)=1$, and \textit{closed}
(with infinite passage time) if $X(x,y)=0$.\\ \\
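For a single oriented bond, this construction gives $P_\lambda(X(x,y)=1)=E_\lambda\bigl[1-e^{-\lambda T_x}\bigr]$; in particular, for the deterministic duration $T_x\equiv 1$ considered in \cite{C}, this equals $1-e^{-\lambda}$. A small Monte Carlo sketch (illustrative only; the duration law passed in is an assumption):

```python
import math
import random

def open_bond_prob_mc(lam, sample_T, n=200000, seed=0):
    """Monte Carlo estimate of P(e(x,y) < T_x) for one oriented bond,
    with e ~ Exp(lam) and T_x drawn from the supplied duration law."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        T = sample_T(rng)
        e = rng.expovariate(lam)
        if e < T:
            hits += 1
    return hits / n

lam = 1.5
# With deterministic infection time T_x = 1, P(open) = 1 - exp(-lam) exactly.
est = open_bond_prob_mc(lam, lambda rng: 1.0)
print(est, 1 - math.exp(-lam))
```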
For a given infected individual $x$, $T_x$ denotes the amount of time $x$ stays infected;
during this time of infection, $x$ emits germs according to a Poisson process of parameter $2d\lambda$;
when $T_x$ is over,
$x$ recovers and
its state becomes 0 forever.
A germ emitted from $x$ reaches one of the $2d$ neighbors of $x$, chosen uniformly at random. If
this neighbor
is in state 1, it immediately changes to state $i$, and begins
to emit germs according to the same rule; if this neighbor is in state 0
or $i$, nothing happens. \\ \\
We denote by $C_o^o$ the set of sites $x\in\Z^d$ that will
ever become infected if, at time 0, the origin $o=(0,\ldots,0)$
is the only infected site while all other sites are healthy, that is
\be\label{eq:C_o}
C_o^o=\{x\in\Z^d:\exists\, t\ge 0, \eta_t(x)=i|\eta_0(o)=i,\forall\, z\not= o, \eta_0(z)=1\}.
\ee
It was proven in \cite[(1.2)]{CD2} (see also \cite[p. 322]{Mo1}, \cite[Lemma 3.1]{Ku}) that $C_o^o$
is the set of sites that can be reached from the origin following
{\sl an open path}, that is a path of open oriented bonds.
More generally, for each $x\in \Z^d$ we define the
\textit{ingoing and outgoing clusters to and from} $x$ to be
\be\label{def:clusters_of_x}
C_x^i =\{y\in \Z^d:y\rightarrow x\},\qquad C_x^o =\{y\in \Z^d:x\rightarrow y\},
\ee
and the corresponding critical values to be
\be\label{def:critical_of_x}
\lambda_c^i= \inf \{\lambda: P(\vert C_x^i\vert =+\infty)>0\},\qquad \lambda_c^o
= \inf \{\lambda: P(\vert C_x^o\vert= +\infty)>0\},
\ee
where
``$x\to y$'' means that there exists (at least) an open path
$\Gamma_{x,y}=(x_0=x,x_1,\ldots,x_n=y)$ from $x$ to $y$, and
$|A|$ denotes the cardinality of a set $A$.
Although we use the symbol $o$ both for the origin and for
the outgoing cluster, we believe no confusion will arise: in
the former case it appears as a subscript, while in the latter it appears as a superscript.
We will prove in Section \ref{sec:appliquer_GM} the following proposition.
\begin{proposition}\label{cor:lambda-i=lambda-o}
$$\lambda_c^i=\lambda_c^o.$$
This common value will be denoted by $\lambda_c=\lambda_c(\Z^d)$.
\end{proposition}
We can now state our main result:
\begin{theorem}\label{th:shape}
Assume $\lambda>\lambda_c$. Define, for $t\ge 0$,
\begin{eqnarray*}\label{eq:immune_at_t}
\xi_t &=& \{x\in\Z^d: x \mbox{ is immune at time } t\} = \{x\in\Z^d: \eta_t(x)=0\};\\
\label{eq:infected_at_t}
\zeta_t &=& \{x\in\Z^d: x \mbox{ is infected at time } t\} = \{x\in\Z^d: \eta_t(x)=i\}.
\end{eqnarray*}
Then there exists a convex subset $D\subset \R^d$ such that, for
all $\eps>0$ we have
\begin{equation}\label{eq:shape}
\Bigl((1-\eps)tD \cap C_o^o \Bigr)\subset \Bigl( \xi_t \cup \zeta_t\Bigr)
\subset\Bigl( (1+\eps)tD \cap C_o^o \Bigr)\mbox{ a.s. for}\ t \mbox{ large enough;}
\end{equation}
and if $E(T_z^d)<\infty$ we also have
\begin{equation}\label{eq:couronne}
\zeta_t \subset\Bigl( (1+\eps)tD \setminus (1-\eps)tD \Bigr) \mbox{ a.s. for}\ t \mbox{ large enough.}
\end{equation}
\end{theorem}
In other words, the epidemic spreads linearly in time, with an asymptotic shape given by a convex set.
To derive this theorem we follow some of the fundamental steps of \cite{CD2}, but since in
dimensions three and higher circuits are not as useful as in dimension 2, this is not a
straightforward adaptation.
For the percolation model,
we first construct for each site $z\in\Z^d$ a neighborhood ${\mathcal V}(z)$ in such a way that any two
neighborhoods are connected by open paths. For $x,y\in\Z^d$,
we show that the time $\tau(x,y)$ for the epidemic to go from $x$ to $y$ is comparable
(in a sense made precise later on) to
the time ${\widehat\tau}(x,y)$ it takes to go from ${\mathcal V}(x)$ to ${\mathcal V}(y)$.
Then we approximate the passage times between different sites by a subadditive process,
and we derive through Kingman's subadditive theorem a radial limit $\varphi(x)$
(for all $x$), which is asymptotically the linear growth speed of the epidemic in direction $x$.
We establish that ${\widehat\tau}(o,z)$ grows at most linearly in $z$. Finally we
prove an asymptotic shape theorem for
$\widehat\tau(o,\cdot)$, from which we deduce Theorem \ref{th:shape}.
\section{Percolation estimates}\label{sec:appliquer_GM}
In this section we collect some results concerning the locally dependent random graph, that is the oriented percolation model given by the random variables $(X(x,y),x,y\in \Z^d)$ introduced in Section \ref{sec:model}.
Note that although these r.v.'s are not independent, the random vectors $\{X(x,x+{\rm e}_1),\ldots,X(x,x+{\rm e}_d),X(x,x-{\rm e}_1),\ldots,X(x,x-{\rm e}_d): x\in \Z^d\}$ (where $({\rm e}_1,\ldots,{\rm e}_d)$
denotes the canonical basis of $\Z^d$) are i.i.d. This small dependence
forces us to explain why some results known for independent percolation remain valid in this context.
\begin{remark}\label{rk:FKG}
The function $X(x,y) $ is increasing in the independent random variables $T_x$
and $-e(x,y)$. It then follows as in \cite[Lemma (2.1)]{CD2}
that the random variables
$(X(x,y): x,y \in \Z^d, y\sim x)$ satisfy the following property: if $U$ and $V$ are bounded increasing functions of the random variables $(X(x,y): x,y \in \Z^d, y\sim x)$,
then $E(UV)\ge E(U)E(V)$.
\end{remark}
For $n\in\N\setminus\{0\}$, let $B_n=[-n,n]^d$, let $\partial B_n$
denote the boundary vertex points of $B_n$, and,
for $x\in\R^d$, $B(x,n)=x+B_n$.
For $A,R\subset\Z^d$, ``$A\rightarrow R$'' means that
there exists an open path
$\Gamma_{x,y}$ from some $x\in A$ to some $y\in R$.
\begin{theorem}\label{th:outgoing-decay}
Suppose $\lambda <{\lambda}_c^o$. Then there exists $\beta_o>0$ such that for all $n>0$,
$$P_\lambda(o\rightarrow \partial B_n)\leq \exp (-\beta_o n).$$
\end{theorem}
This is a special case of \cite[Theorem (3.1)]{BGS}, whose proof can
be adapted to obtain:
\begin{theorem}\label{th:ingoing-decay}
Suppose $\lambda <{\lambda}_c^i$. Then there exists $\beta_i>0$ such that for all $n>0$,
$$P_\lambda(\partial B_n \rightarrow o)\leq \exp (-\beta_i n).$$
\end{theorem}
It is worth noting that in the context of our paper, by Remark \ref{rk:FKG}, \cite[Theorem (3.1)]{BGS} can
be proved using the BK inequality instead of the Reimer inequality
(see \cite[Theorems (2.12), (2.19)]{G}).\\ \\
Theorems \ref{th:outgoing-decay}, \ref{th:ingoing-decay} yield
Proposition \ref{cor:lambda-i=lambda-o}:\\
\begin{proof}{proposition}{cor:lambda-i=lambda-o}
Suppose $\lambda <{\lambda}_c^i$. Then by translation invariance and Theorem \ref{th:ingoing-decay}
we have that for any $x\in \partial B_n$, $P_\lambda(o\rightarrow x)\leq \exp (-\beta_i n)$.
Adding over all points of $\partial B_n$ we get
$P_\lambda(o\rightarrow \partial B_n)\leq K'n^d \exp (-\beta_i n)$
for some constant $K'$, which implies that
$\lim_{n\to +\infty} P_\lambda(o\rightarrow \partial B_n)=0$. Therefore $\lambda\leq \lambda_c^o$ and
$\lambda_c^i\leq \lambda_c^o$. The other inequality is obtained similarly.
\end{proof}
\mbox{}\\ \\
{}From now on, we assume $\lambda>\lambda_c(\Z^d)$ and use the following notation:
For $x,y \in\Z^d,A\subset\Z^d$,
\newline \textit{(i)} $\{ x\rightarrow y \ \mbox {within }\ A \}$
is the event on which there exists an open path
$\Gamma_{x,y}=(x_0=x,x_1,\ldots,x_n=y)$ from $x$ to $y$ such that
$x_i\in A$ for all $i\in\{0,\ldots,n-1\}$. Note that the end point $y$ need not belong to $A$.
\newline \textit{(ii)} $\{x\to y \hbox{ outside } A\}$ is the event on which
there exists an open path
$\Gamma_{x,y}=(x_0=x,x_1,\ldots,x_n=y)$ from $x$ to $y$ such that
none of the $x_i$'s ($i\in\{0,\ldots,n\}$) belongs to $A$.
\begin{definition}\label{def:within-outside}
For $x\in\Z^d,A\subset\Z^d$ let
\begin{eqnarray*}\label{eq:C_xA}
C_x^i(A) &=&\{y\in A:y\rightarrow x\ \hbox{ within }A \}\qquad \mbox{and}\cr
C_x^o(A) &=&\{y\in A:x\rightarrow y\ \hbox{ within }A\}.
\end{eqnarray*}
\end{definition}
Note that with this definition $C_x^i(A)\subset A$ and $C_x^o(A)\subset A$.
The next proposition, on percolation in slabs,
follows from the methods of \cite{GM} or \cite[Chapter 7]{G}.
\begin{proposition}\label{prop:clusters_rentrant-sortant_infinis}
For any $k\in\N\setminus\{0\}$, let $S_k=\Z^{d-1}\times \{0,1,\dots,k\}$
denote the slab of thickness $k$. Then
for $k$ large enough we have $\inf_{x\in S_k}P_{\lambda}(\vert C_x^i(S_k)\vert =+\infty)>0$
and $\inf_{x\in S_k}P_{\lambda}(\vert C_x^o(S_k)\vert =+\infty)>0$.
\end{proposition}
More precisely,
to adapt
\cite[Chapter 7]{G}, it is convenient to define the processes for
different values of $\lambda$ on the same probability space, whose probability is
denoted by $P$. This is done as follows:
Let $e_1(x,y)$ be a collection of independent exponential r.v.'s as before, but now
with parameter $1$. Then let $e_{\lambda}(x,y)={\lambda}^{-1}e_1(x,y)$,
and
\begin{equation}\label{def:sprinkling_open-closed-bonds}
X_{\lambda}(x,y)=
\cases{
1 & if $e_{\lambda}(x,y)<T_x$;\cr
0 & otherwise.\cr
}
\end{equation}
The following lemma implies that given $K$ and $\delta_1>0$, there exists $\eps_1>0$ such that for any
$\lambda \in [0,K]$ the random field $\{X_{\lambda +\delta_1}(u,v):u,v \in \Z^d\}$ is stochastically above the random field
$\{\max \{X_{\lambda }(u,v),Y(u,v)\}:u,v \in \Z^d\}$
where the random variables $Y(u,v)$ are i.i.d. Bernoulli with parameter $\eps_1$ and are independent of the random variables $X_{\lambda}(u,v)$.
This lemma justifies the use of this \textit{sprinkling technique}. Its proof is elementary and will be omitted. Then, with Lemma \ref{lem:sprinkling} one can adapt the proof of \cite[Theorem (7.2)]{G} to get Proposition \ref{prop:clusters_rentrant-sortant_infinis}.
\begin{lemma}\label{lem:sprinkling}
Let $K$ and $\delta_1$ be strictly positive. Then there exists $\eps>0$ such that for any $\lambda\leq K$, $u\in \Z^d$,
$$P(X_{\lambda +\delta_1 } (u,v)=1\ \forall v\sim u\vert \ \ X_{\lambda}(x,y): x,y\in \Z^d,\ x\sim y)>\eps\ \ \ a.s.$$
\end{lemma}
We introduce now some notation:
For $A\subset\Z^d$
we define the
{\sl exterior vertex boundary} $\Delta_V A$ as:
$$\Delta_V A= \{x\in\Z^d:x\notin A, x\sim y \mbox{ for some } y\in A\}.$$
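For a finite set $A$ the exterior vertex boundary can be computed directly from the definition; the following sketch (function name ours, not from the paper) enumerates the $2d$ neighbors of each site of $A$:

```python
def exterior_vertex_boundary(A, d):
    """Exterior vertex boundary of a finite set A of sites of Z^d:
    the sites outside A that are l^1-neighbors of some site of A."""
    A = set(A)
    boundary = set()
    for y in A:
        for i in range(d):          # coordinate direction
            for s in (-1, 1):       # the two neighbors in that direction
                x = tuple(y[j] + (s if j == i else 0) for j in range(d))
                if x not in A:
                    boundary.add(x)
    return boundary
```

For example, in $d=2$ the boundary of the single site $\{o\}$ consists of its four neighbors.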
If $x\rightarrow y$ let $D(x,y)$ be the smallest number
of bonds required to build an open path from $x$ to $y$.
For $A\subset\Z^d$, $x\in A,y\in \Delta_V A$,
``$D(x,y)<m \mbox{ within }\ A$''
means that there is an open path $\Gamma_{x,y}$
using less than $m$ bonds from
$x$ to $y$ whose sites are all in $A$ except for $y$.\\
The rest of this section provides some upper bounds for the tail of the conditional distribution of $D(x,y)$ given the event $\{x\rightarrow y\}$. These estimates are not optimal
and better results can be obtained by adapting the methods of \cite{AP}. Instead of getting exponential decays in $\|x-y\|_1$ (or in $n$) we get exponential decays in
$\|x-y\|_1^{1/d}$ (or in $n^{1/d}$). We have adopted this approach because the weaker results suffice for our purposes and are easier to obtain.
\begin{lemma}\label{lem:connections}
There exist $\delta>0$, ${\rm C}_1>0$ and $k\in \N\setminus\{0\}$ such that
\noindent
(i) $\forall n>0,\ x\in B_{n+k}\setminus B_n,\ y\in (B_{n+k}\setminus B_n)\cup \Delta_V(B_{n+k}\setminus B_n ) $ we have:
$$P(x\rightarrow y \ \mbox{ within }\ B_{n+k}\setminus B_n)>\delta. $$
(ii) Let
\begin{eqnarray*}\label{eq:A_nm}
A_{n,m}&=&\{z: -k+n\leq z_1<n, -\infty<z_2\leq k\}\cup\cr
&& \{z:-k+n\leq z_1\leq m+k, 0<z_2\leq k\}\cup \cr
&&\{z:m<z_1\leq m+k, -\infty<z_2\leq k\}.
\end{eqnarray*}
$\forall n<m,\ \forall x \in A_{n,m},\forall y\in A_{n,m}\cup \Delta_V A_{n,m}$, we have:
$$P(D(x,y)<{\rm C}_1(\| x-y\|_1 +\vert x_2 \vert +\vert y_2 \vert)\ \mbox{ within } A_{n,m})>\delta.$$
\end{lemma}
Again, the proof of this lemma relies on the methods of \cite[Chapter 7]{G} and \cite{GM}. Since it is not an entirely
straightforward adaptation
we make some remarks that we believe will help the reader.
\begin{remark}\label{rk:sites-renormalises}
In \cite[Chapter 7]{G}, renormalised sites, which we will call r-sites, are introduced. These are the hypercubes $B_k=[-k,k]^d $ and their translates
$B(2kx,k)=[-k,k]^d +2kx, \ x\in \Z^d$. We will denote by $x$ both the point in $\Z^d$ and the r-site centered at $2kx$ since we believe no confusion will arise from this. Loosely speaking, these r-sites are called occupied if they are well connected with their neighbors. Crucial for this method is the fact that
for any given $p\in (0,1)$ the set of occupied r-sites dominates a Bernoulli product measure of density $p$ if $k$ is large enough. In \cite[Chapter 7]{G} and \cite{GM}, $p$ is taken above the critical
value for non-oriented Bernoulli percolation on some subset of $\Z^d$, but here it is convenient
to take it above the critical value
of 2-dimensional oriented bond percolation. This choice of $p$ and the corresponding choice of $k$ guarantee the following property:\\
\newline {\rm (P)} There exists $\gamma>0$ such that for any pair of r-sites $u$ and $v$ the probability that there exists a path of occupied r-sites going from $u\in \Z^d$ to $v\in \Z^d$
which uses at most
$2\|u-v\|_1$ r-sites is at least $\gamma$.\\
\newline From this property {\rm (P)} one deduces that there exist $\delta>0$ and ${\rm C}_1$
such that for any pair $x,y \in \Z^2\times [-k,k]^{d-2}$
the probability that there is an open path from $x$ to $y$ contained in $\Z^2 \times [-k,k]^{d-2}$
that uses at most ${\rm C}_1\|x-y\|_1$ bonds is at least $\delta$.
\end{remark}
\begin{remark}\label{rk:ajouter-x2}
We have to add $\vert x_2 \vert +\vert y_2 \vert$ because if
$x\in \{z: -k+n\leq z_1<n, -\infty<z_2\leq 0\}$ and
$y\in \{z:m<z_1\leq m+k, -\infty<z_2\leq 0\}$, to move from $x$ to $y$ staying in $A_{m,n}$ we need to reach first the set
$ \{z:-k+n\leq z_1\leq m+k, 0<z_2\leq k\}$ (i.e. to increase the second coordinate until it is positive).
\end{remark}
\begin{lemma}\label{lem:reaching faces}
Let $k$ be given by Lemma \ref{lem:connections} and let $x$ and $y$ be points in $\Z^d$.
For $n\in \Z$ let $H_n=\{x\in \Z^d:x_1=n\}$ and define the events
\begin{eqnarray*}
J_n&=&\{x\rightarrow H_{x_1-1-ik}\ \mbox{within}\ B(x,nk),
i=0,\dots,\lfloor n/2\rfloor\}\cap \cr
&&\quad\{ H_{y_1+1+ik}\rightarrow y\ \mbox{within}\ B(y,nk),
i=0,\dots,\lfloor n/2\rfloor\},\cr
G_n&=&\{x\rightarrow \partial B(x,kn), \ \partial B(y,kn)\rightarrow y\},
\end{eqnarray*}
where, for any $a\in\R$, $\lfloor a\rfloor$ denotes the greatest integer not greater than
$a$. Then, there exists $\beta>0$ such that
\[
P(J_n\vert G_n)\ge 1-\exp(-\beta n).
\]
\end{lemma}
\begin{proof}{lemma}{lem:reaching faces}
By translation invariance we may assume that $x$ is the origin.
When $G_n$ occurs there are open paths from $o$ to $B(o,ik)\setminus B(o,(i-1)k)$
and
to $y$ from
$B(y,ik)\setminus B(y,(i-1)k)$ for $i=2,\dots,n$. Hence we conclude by part \textit{(i)} of Lemma \ref{lem:connections}.
\end{proof}
\begin{lemma}\label{lem:APp}
Let $k$ be given by Lemma \ref{lem:connections} and let $G_n$ be as in Lemma \ref{lem:reaching faces}.
Then, there exist constants
${\rm C}_2$, ${\rm C}_3$ and $\alpha_2>0$ such that for all $x,y \in \Z^d$, $n\in \N$ we have
$$P(D(x,y)> {\rm C}_2 \|x-y\|_1+{\rm C}_3(nk)^d \vert \ G_n)\leq \exp(-\alpha_2 n). $$
\end{lemma}
\begin{proof}{lemma}{lem:APp}
Again, by translation invariance we may assume that $x$ is the origin and without loss of generality, we also assume that $y_1>0$ and $y_2\ge 0$.
By Lemma \ref{lem:reaching faces} it suffices to show that
$$P(D(o,y)> {\rm C}_2 \|x-y\|_1+{\rm C}_3(nk)^d \vert \ J_n)$$
decays exponentially in $n$.
Let $r=\lfloor n/2\rfloor$, and for $1\le j\le r$ let
\begin{eqnarray}\label{eq:A_0etA_j}
A_0&=&\{z: -k\leq z_1<0,\, -\infty<z_2\leq y_2+k\}\cup\cr
&& \{z:-k\le z_1\le y_1+k,\, y_2<z_2\leq y_2+k\}\cup \cr
&&\{z:y_1<z_1\leq y_1+k,\, -\infty<z_2\leq y_2+k\},\cr
A_j&=&\{z: -(j+1)k\leq z_1<-jk,\, -\infty<z_2\leq y_2+k\}\cup\cr
&&\{z:-(j+1)k\le z_1 \le y_1+(j+1)k, \cr
&&\quad\qquad y_2+jk<z_2\leq y_2+(j+1)k\}\cup\cr
&& \{z:y_1+jk<z_1\leq y_1+(j+1)k,\,
-\infty<z_2\leq y_2+(j+1)k\}.
\end{eqnarray}
\begin{figure}[htp]
\centering
\input{enr4.pdf_t}
\caption{The event $W_3$}
\label{fig:dessin_lemme3.4-2}
\end{figure}
Note that the sets $A_0,\dots, A_{r}$ are disjoint. Figure 1
should help the reader to visualize them.
On the event $J_n$, we can reach from the origin
each of these sets by means of an open path contained in $B_{nk}$
and from each of these sets we can reach $y$ by means of an open
path contained in $B(y,nk)$. Hence, on $J_{n}$ for each
$i\in \{0,\dots,r\}$ there exists a random point
$U_i\in B_{nk}\cap A_i$ and there is an open path from the
origin to $U_i$ such that all its sites except $U_i$ are in
$B_{nk}\cap (\cap_{j=i}^{r}A_j^c)$.
If there are many possible values of $U_i$ we choose
the first one in some arbitrary deterministic order. Similarly,
there is a point $V_i\in B(y,nk)\cap \Delta_V A_i$
and an open path from $V_i$ to $y$ such that all its
sites are in $B(y,nk)\cap (\cap_{j=i}^{r}A_j^c)$.
Let $u_i$ and $v_i$ be possible values of $U_i$ and $V_i$ respectively.
Then define
\begin{eqnarray*}
F_i(u_i,v_i)&=&\{U_i=u_i,V_i=v_i\}, \cr
E_i(u_i,v_i)&=&\{D(u_i,v_i)< {\rm C}\| u_i-v_i\|_1\ \mbox{within}\ A_i\}\ \mbox{and} \cr
W_i&=&\cup_{u_i,v_i}\left(F_i(u_i,v_i)\cap E_i(u_i,v_i)\right),
\end{eqnarray*}
where the union is over all possible values of $U_i$ and $V_i$.
Now we define a subset of $\Z^d$
\begin{equation}
R_i=\Big(B_{nk}\cup B(y,nk)
\cup A_0\cup \dots \cup A_{i-1}\Big)\cap \Big( A_i^c\cap \dots \cap A_{r}^c\Big),
\end{equation}
and we denote by $\sigma_i$ the $\sigma$-algebra generated by $\{T_x,e(x,y):x \in R_i,x\sim y\}$.
Then, noting that $\mathbf{1}_{F_i(u_i,v_i)}\Pi_{j=0}^{i-1}\mathbf{1}_{W_j^c} $ is $\sigma_i$-measurable, write for $i=1,\dots,r$:
\begin{eqnarray*}
&& P\left(W_i\cap J_n \cap(
\cap_{j=0}^{i-1}W_j^c)\right)=\sum_{u_i,v_i}E\left(\mathbf{1}_{F_i(u_i,v_i)}\mathbf{1}_{E_i(u_i,v_i)}
\mathbf{1}_{J_n}(\Pi_{j=0}^{i-1}\mathbf{1}_{W_j^c})\right)\cr
&=&
\sum_{u_i,v_i}E\left(\mathbf{1}_{F_i(u_i,v_i)}(\Pi_{j=0}^{i-1}\mathbf{1}_{W_j^c}) E( \mathbf{1}_{J_n} \mathbf{1}_{E_i(u_i,v_i)}\vert \sigma_i)\right)\cr
&\ge&
\sum_{u_i,v_i}P(E_i(u_i,v_i))E \left( \mathbf{1}_{F_i(u_i,v_i)}(\Pi_{j=0}^{i-1}\mathbf{1}_{W_j^c}) E( \mathbf{1}_{J_n} \vert \sigma_i)\right)\cr
&=&\sum_{u_i,v_i}P(E_i(u_i,v_i)) E\left( \mathbf{1}_{F_i(u_i,v_i)}(\Pi_{j=0}^{i-1}\mathbf{1}_{W_j^c}) \mathbf{1}_{J_n} \right)\cr
&\ge&
\delta \sum_{u_i,v_i}P\left(F_i(u_i,v_i)\cap J_n \cap(\cap_{j=0}^{i-1}W_j^c)\right) =\delta P\left(J_n\cap(\cap_{j=0}^{i-1}W_j^c)\right),
\end{eqnarray*}
where the sums are over all possible values of $U_i$ and $V_i$, the first inequality follows from the facts that $E_i(u_i,v_i)$
is independent of $\sigma_i$ and both $J_n$ and $E_i(u_i,v_i)$ are increasing events,
the second inequality from
part \textit{(ii)} of Lemma
\ref{lem:connections} and the last equality from the fact that $J_n$ is contained in the union of the $F_i(u_i,v_i)$'s which are disjoint. Now, proceeding by
induction one gets:
$$P\left(J_n\cap(\cap _{j=0}^{r-1}W_j^c)\right)\le (1-\delta)^rP(J_n).$$
Since we can choose ${\rm C}_2$ and ${\rm C}_3$ in such a way that the event
$\{D(o,y)> {\rm C}_2 \|x-y\|_1+{\rm C}_3(nk)^d \}$ does not occur if any of
the $W_i$'s occurs, the lemma follows.
\end{proof}\\ \\
Noting that, at the cost of modifying the constant $\alpha_2$, the statement of the above lemma holds with ${\rm C}_3=1/k^d$, we get:
\begin{lemma}\label{lem:ap2}
(i) Let
${\rm C}_2$ be as in Lemma \ref{lem:APp}. Then, there exists $\alpha_3>0$ such that
$P(D(x,y)\geq {\rm C}_2\| x-y\|_1+n^d\vert x\rightarrow y)\leq \exp(-\alpha_3 n)$;
(ii) $P(x\rightarrow y \vert \ \vert C_x^o \vert =+\infty, \ \vert C_y^i \vert =+\infty )=1$.
\end{lemma}
\section{The shape theorem}\label{sec:Nicolas}
Let
\begin{equation}\label{eq:widetildeC}
\widetilde{C}= \{x\in\Z^d: x\to \infty \hbox{ and }
\infty\to x\}.
\end{equation}
\begin{remark}\label{rk:csq-ap2}
As a consequence of Lemma \ref{lem:ap2} part \textit{(ii)}, if two sites $x,y$ of
$\Z^d$ belong to $\widetilde{C}$, then $x\to y$ and $y\to x$.
\end{remark}
\subsection{Neighborhoods in $\widetilde{C}$}\label{subsec:widetildeC}
\begin{definition}\label{def:racines}
For $x\in\Z^d$, let
\[
\cases{
R_x^o=\{y\in\Z^d: x\to y \hbox{ outside } \widetilde{C} \} & (outgoing root from $x$);\cr
R_x^i=\{y\in\Z^d: y\to x \hbox{ outside } \widetilde{C} \} & (incoming root to $x$).\cr
}
\]
In particular
$x$ belongs to $R_x^o$ and $R_x^i$ if and only if $x\notin\widetilde{C}$.
\end{definition}
Our next lemma says that the tail of the distribution of the radius of $R_o^o\cup R_o^i$ decays exponentially.
\begin{lemma}\label{lem:exp_decay_R}
There exists $\sigma_1=\sigma_1(\lambda,d)>0$ such that, for all $n\in\N$,
$$P\left((R_o^o\cup R_o^i)\cap \partial B_n\neq\emptyset\right)\le \exp(- \sigma_1 n).$$
\end{lemma}
\begin{proof}{lemma}{lem:exp_decay_R}
For $n\in\N\setminus\{0\}$, $R_o^o\cap \partial B_{2n}\neq\emptyset$ means that
there exists an open path $o\to \partial B_{2n}$ outside
$\widetilde{C}$. This implies that there exists $x\in \partial B_n$
satisfying $o\to x\to \partial B_{2n}$ outside $\widetilde{C}$. Similarly,
$R_o^i\cap \partial B_{2n}\neq\emptyset$ implies that there exists
$x\in \partial B_n$ satisfying $\partial B_{2n}\to x\to o$ outside
$\widetilde{C}$. Then for such a point, either the cluster $C_x^o$ or the cluster $C_x^i$ is
finite, and has a radius larger than or equal to $n$.
We adapt to our case \cite[Theorems (8.18), (8.21)]{G} to get the existence
of $\sigma_0=\sigma_0(\lambda,d)>0$ such that:
\begin{equation}\label{eq:analogue_G-thm8.21}
\cases{
P(C_x^o \cap \partial B(x,n)\neq\emptyset, |C_x^o| <+\infty) \le \exp(- \sigma_0 n); \cr
P(C_x^i \cap \partial B(x,n)\neq\emptyset, |C_x^i| <+\infty) \le \exp(- \sigma_0 n).
}
\end{equation}
Hence
\begin{eqnarray*}
&&P\left((R_o^o\cup R_o^i)\cap \partial B_{2n}\neq\emptyset\right) \cr
&\le& P\left(R_o^o\cap \partial B_{2n}\neq\emptyset\right)
+P\left(R_o^i\cap \partial B_{2n}\neq\emptyset\right)\cr
&\le& 2\sum_{x\in\partial B_n} P(|C_x^o| <+\infty, x\to \partial B(x,n))\cr
&&
+2\sum_{x\in\partial B_n} P(|C_x^i| <+\infty, \partial B(x,n)\to x)\cr
& \le & 4|\partial B_n| \exp(- \sigma_0 n)
\end{eqnarray*}
which implies the result.
\end{proof}\\ \\
To define the neighborhood ${\mathcal V}(x)$ in $\widetilde{C}$ of a site $x$,
we introduce the smallest box whose interior contains $R_x^o$ and $R_x^i$,
which contains elements of $\widetilde{C}$, and is such that two elements
of $\widetilde{C}$ in this box are connected by an open path which does not
exit a slightly larger box. For this last condition, which will enable us to bound the
passage time through ${\mathcal V}(x)$, we use the parameter ${\rm C}_2$ obtained in Lemma \ref{lem:APp}.
\begin{definition}\label{def:k_x}
Let
${\rm C}'={\rm C_2}d+2$.
Let $\kappa(x)$ be the smallest $l\in\N\setminus\{0\}$
such that
\[
\cases{
(i)\,\,\,\,\, \partial B(x,l) \cap \left(R_x^o\cup R_x^i\right)=\emptyset;\cr
(ii)\,\,\, B(x,l) \cap \widetilde{C} \not= \emptyset;\cr
(iii)\, \forall\, (y,z) \in (B(x,l) \cap \widetilde{C})^2,\,y\to z \hbox{ within } B(x,{\rm C}'l).\cr
}
\]
\end{definition}
\begin{remark}\label{rk:(i)}
By \textit{(i)} above, $R_x^o\cup R_x^i\subset B(x,\kappa(x))$.
\end{remark}
The random variable $\kappa(x)$ has a sub-exponential tail:
\begin{lemma}\label{lem:k_x-exp_tail}
There exists a constant $\sigma=\sigma(\lambda,d)>0$ such that, for any $n\in\N$,
\[
P(\kappa(x)\ge n)\le \exp(-\sigma n^{1/d}).
\]
\end{lemma}
\begin{proof}{lemma}{lem:k_x-exp_tail}
We show that, for each of the three conditions
in Definition \ref{def:k_x}, the probability that it fails at level $n$ decreases
exponentially in $n^{1/d}$: \par
\noindent
\textit{(i)} By translation invariance, we have by Lemma \ref{lem:exp_decay_R},
\be\label{eq:non_i}
P\left( \partial B(x,n)\cap\left(R_x^o\cup R_x^i\right)\not=\emptyset\right) \le \exp(-\sigma_1 n).
\ee
\noindent
\textit{(ii)}
There exist $m\in\N, \sigma_2=\sigma_2(\lambda,d)>0$ such that
\be\label{eq:non_ii}
P(B(x,n) \cap \widetilde{C}=\emptyset) \le \exp(-\sigma_2\lfloor n/(m+1)\rfloor).
\ee
Indeed, since
$\{ \vert C_x^i(S_m)\vert =+\infty \}$
and $\{ \vert C_x^o(S_m)\vert =+\infty \}$ are increasing events,
it follows from
Proposition \ref{prop:clusters_rentrant-sortant_infinis} and the FKG inequality
(see Remark \ref{rk:FKG})
that for
$m=m(\lambda,d)$ large enough we have
$\inf_{x\in S_m}P(\vert C_x^i(S_m)\vert =\vert C_x^o(S_m)\vert =+\infty)>0$.
Then, \eqref{eq:non_ii} follows from the facts that $B(x,n)$ intersects $\lfloor n/(m+1)\rfloor$ disjoint translates of $S_m$,
and events in disjoint slabs are independent.\\ \\
\noindent
\textit{(iii)} There exists $ \sigma_3=\sigma_3(\lambda,d)>0$ such that
\begin{eqnarray}\label{eq:sigma_3}
& P\left(\exists\,
(y,z) \in (B(x,n) \cap \widetilde{C})^2,\,y\not\to z \hbox{ within } B(x,{\rm C}'n)\right)\cr
&\le \exp (- \sigma_3 n^{1/d}).
\end{eqnarray}
Indeed, if no open path from $y$ to $z$ (both in $B(x,n)\cap \widetilde C$) is contained in
$B(x,{\rm C}'n)$, then $D(y,z)\ge 2({\rm C}'-1)n$. Given our choice of ${\rm C}'$
this implies
that $D(y,z)\ge {\rm C_2}\|y-z\|_1 +n$. Therefore, \eqref{eq:sigma_3} follows from part \textit{(i)} of
Lemma \ref{lem:ap2}.
\end{proof}\\ \\
We define the (site)
neighborhood in $\widetilde{C}$ of $x$ by
\begin{equation}\label{def:calV-x}
{\mathcal V}(x)=B(x,\kappa(x))\cap \widetilde{C}.
\end{equation}
\begin{remark}\label{rk:calV-a-2-points}
By condition \textit{(ii)} in Definition \ref{def:k_x}, ${\mathcal V}(x)\not=\emptyset$.
\end{remark}
By condition \textit{(iii)} in Definition \ref{def:k_x},
for all $y,z$ in ${\mathcal V}(x)$, there exists
at least one open path from
$y$ to $z$, denoted by $\Gamma^*_{y,z}$, contained in
$B(x,{\rm C}'\kappa(x))$. If there are several such paths we
choose the first one according to some deterministic
order.
We finally
define an ``edge'' neighborhood $\overline\Gamma(x)$ of $x$:
\begin{eqnarray}\label{def:barGamma-x}
\overline\Gamma(x)&=&\{(y',z')\subset B(x,\kappa(x)),
(y',z')\hbox{ open}\}\cup\cr
&&\qquad\{(y',z')\in\Gamma^*_{y,z},y,z\in {\mathcal V}(x)\}.
\end{eqnarray}
Those neighborhoods satisfy
\begin{equation}\label{eq:borner_calV-x_et_Gamma-x}
{\mathcal V}(x)\subset B(x,\kappa(x));\qquad\overline\Gamma(x)\subset B(x,{\rm C}'\kappa(x)).
\end{equation}
\subsection{Radial limits}\label{subsec:Radial-limits}
We now come back to the spatial epidemic model. Indeed, the underlying percolation
model does not give any information on the time needed by the epidemic to
cover $\widetilde{C}$. We first define an approximation for the passage time of the epidemic, then we
prove the existence of radial limits for this approximation and for the epidemic.
For this we follow the construction in \cite{CD2}.\\ \\
For $x,y\in\Z^d$, if $x\not=y,x\to y$ and $\Gamma_{x,y}=(x_0=x,x_1,\ldots,x_n=y)$
denotes an open path from $x$ to $y$,
we define the \textit{passage time on} $\Gamma_{x,y}$ to be
(see \eqref{def:open-closed-bonds})
\begin{equation}\label{eq:passage-time-on-Gamma}
\overline{\tau}(\Gamma_{x,y})=\sum_{i=0}^{n-1}e(x_i,x_{i+1})
\end{equation}
and, if $x=y$, we put $\overline{\tau}(\Gamma_{x,x})=0$.\\
For $x,y\in\Z^d$, we define the \textit{passage time from $x$ to} $y$ to be
\begin{equation}\label{eq:passage-time-from-x-to-y}
\tau(x,y)=
\cases{\dsp{\inf_{\{\Gamma_{x,y}\}}}\,\overline{\tau}(\Gamma_{x,y})
&
if $x\not=y,x\to y$, \cr
0 & if $x=y$,\cr
+\infty & otherwise.
}
\end{equation}
where the infimum is over all possible open paths from $x$
to $y$. By analogy with \cite{CD1}, \cite{CD2}, we also define
\begin{eqnarray}\label{def:approx-passage-time}
\widehat{\tau}(x,y)&=&\dsp{\inf_{x'\in{\mathcal V}(x), y'\in{\mathcal V}(y)} \tau(x',y')}
;\cr
u(x)&=& \cases{\dsp{\sum_{(y',z')\in\overline\Gamma(x)} \tau(y',z')}&
if $\overline\Gamma(x)\not=\emptyset$, \cr
0 & otherwise.
}
\end{eqnarray}
By Remarks \ref{rk:csq-ap2}, \ref{rk:calV-a-2-points},
$\widehat{\tau}(x,y)$
is well defined.
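Since bond passage times are nonnegative, computing $\tau(x,y)$ of \eqref{eq:passage-time-from-x-to-y} on a finite piece of the lattice is a standard shortest-path computation over the open oriented bonds. A minimal sketch using Dijkstra's algorithm (the encoding of the open bonds as a dictionary, and the function name, are our own choices):

```python
import heapq

def passage_time(times, x, y):
    """tau(x, y): infimum over open oriented paths from x to y of the
    sum of the passage times along the path.  `times` maps each OPEN
    oriented bond (u, v) to its passage time e(u, v); closed bonds are
    simply absent (infinite passage time)."""
    out = {}
    for (u, v), t in times.items():
        out.setdefault(u, []).append((v, t))
    dist = {x: 0.0}
    heap = [(0.0, x)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == y:                       # first extraction is optimal
            return d
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, t in out.get(u, []):
            nd = d + t
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                  # x does not reach y
```

Note that, in agreement with \eqref{eq:passage-time-from-x-to-y}, the sketch returns $0$ when $x=y$ and $+\infty$ when there is no open path from $x$ to $y$.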
\begin{remark}\label{rk:widehat-tau=0}
If ${\mathcal V}(x)\cap{\mathcal V}(y)\not=\emptyset$, then $\widehat{\tau}(x,y)=0$.
\end{remark}
We now show that if $y\in C_x^o\setminus R_x^o$, $\widehat{\tau}(x,y)$ approximates
$\tau(x,y)$.
\begin{lemma}\label{lem:comparaison-approximation}
For $x\in\Z^d$, if $y\in C_x^o\setminus R_x^o$, we have
\[
\widehat{\tau}(x,y) \le \tau(x,y)\le u(x)+\widehat{\tau}(x,y)+u(y).
\]
\end{lemma}
\begin{proof}{lemma}{lem:comparaison-approximation}
Let $\Gamma_{x,y}$ be an open path
from $x$ to $y$ such that $\tau(x,y)=\overline{\tau}(\Gamma_{x,y})$.
Since $y\notin R_x^o$ this path must intersect $\widetilde{C}$.
Let $c_1$ and $c_2$ be the first and last points we encounter in $\widetilde{C}$ when moving
from $x$ to $y$
along $\Gamma_{x,y}$. By condition \textit{(i)} of Definition \ref{def:k_x}, $c_1 \in{\mathcal V}(x)$ and $c_2 \in{\mathcal V}(y)$: indeed (for instance for $c_1$), either $x\in\widetilde{C}$ and $c_1=x$, or the point $a\in\partial
B(x,\kappa(x))\cap\Gamma_{x,y}$ does not belong to $R_x^o$ and $c_1$ is the first point on $\Gamma_{x,y}$ between
$x$ and $a$; we might have $c_1=c_2$, if ${\mathcal V}(x)\cap{\mathcal V}(y)\not=\emptyset$.
We have
\[
\Gamma_{x,y}=\Gamma_{x,c_1}\vee\Gamma_{c_1,c_2}\vee\Gamma_{c_2,y}
\]
where
$\vee$ denotes the concatenation of paths, $\Gamma_{x,c_1}$ is an open path from $x$ to $c_1$ contained in $B(x,\kappa(x))$, $\Gamma_{c_1,c_2}$ is an open path from $c_1$ to $c_2$
and $\Gamma_{c_2,y}$ is an open path from $c_2$ to $y$ contained in $B(y,\kappa(y))$.
We then obtain the first inequality of the lemma since:
$$\widehat{\tau} (x,y)\le \overline{\tau}(\Gamma_{c_1,c_2})\le \overline{\tau}(\Gamma_{x,y})=\tau(x,y).$$
To prove the second inequality of the lemma let $\Gamma_{d_1,d_2}$ be an open path from $d_1 \in{\mathcal V}(x)$ to $d_2\in{\mathcal V}(y)$
such that $\overline{\tau}(\Gamma_{d_1,d_2})=\widehat{\tau}(x,y)$.
Since the open paths $\Gamma_{x,c_1}$ from $x$ to $c_1$ and $\Gamma^*_{c_1,d_1}$ (which exists by Remark \ref{rk:csq-ap2}) from $c_1$ to $d_1$ have edges in $\overline{\Gamma}(x)$ (see \eqref{def:barGamma-x}) the
open path $\Gamma_{x,d_1}=\Gamma_{x,c_1}\vee\Gamma^*_{c_1,d_1}$ from $x$ to $d_1$ satisfies
$\overline{\tau}(\Gamma_{x,d_1})\le u(x)$. Similarly, there is an open path $\Gamma_{d_2,y}$ from $d_2$ to $y$ such that $\overline{\tau}(\Gamma_{d_2,y})\le u(y)$.
We conclude with
$$\tau(x,y)\le \overline{\tau}(\Gamma_{x,d_1})+\overline{\tau}(\Gamma_{d_1,d_2})+\overline{\tau}(\Gamma_{d_2,y})\le u(x)+\widehat{\tau}(x,y)+u(y).$$
\end{proof}
\begin{lemma}\label{lem:sous-additif}
For all $x,y,z\in\Z^d$, we have the \textit{subadditivity property}
\begin{equation}\label{eq:sous-additif}
\widehat{\tau}(x,z)\le\widehat{\tau}(x,y)+u(y)+\widehat{\tau}(y,z).
\end{equation}
\end{lemma}
\begin{proof}{lemma}{lem:sous-additif}
Let $\Gamma_{a,b}$ be an open path from $a\in{\mathcal V}(x)$ to $b\in{\mathcal V}(y)$ such that
$\widehat{\tau}(x,y)=\overline{\tau}(\Gamma_{a,b})$. Similarly, let $\Gamma_{c,d}$ be an open path from
$c\in {\mathcal V}(y)$ to $d\in{\mathcal V}(z)$ such that
$\widehat{\tau}(y,z)=\overline{\tau}(\Gamma_{c,d})$ (we might have $a=b$, $c=d$ or $b=c$). Since both $b$ and $c$ are in ${\mathcal V}(y)$ there exists an open path $\Gamma^*_{b,c}$ from $b$ to $c$ such that
$\overline{\tau}(\Gamma^*_{b,c})\le u(y)$ (see Remark \ref{rk:csq-ap2} and \eqref{def:barGamma-x}). The lemma then follows since the concatenation of these three paths is
an open path from a point of ${\mathcal V}(x)$ to a point of ${\mathcal V}(z)$ and
$$\widehat{\tau}(x,z)\le \overline{\tau}(\Gamma_{a,b}) + \overline{\tau}(\Gamma_{b,c})+\overline{\tau}(\Gamma_{c,d})\le \widehat{\tau}(x,y)+u(y)+\widehat{\tau}(y,z).$$
\end{proof}\\ \\
Before stating our next lemma we introduce some notation.
For $x,y\in \Z^d$, let
$$\overline{D}(x,y)=
\inf_{x'\in {\mathcal V}(x),y'\in {\mathcal V}(y)} D(x',y').$$
Note that unlike $D(x,y)$, $\overline{D}(x,y)$
is always finite.
\begin{lemma}\label{lem:bar-D}
There exist constants ${\rm C}_4$ and $\alpha_4>0$ such that
$$P(\overline{D}(x,y)\ge {\rm C_4}\|x-y\|_1 +n)\le \exp(-\alpha_4 n^{1/d}),
\qquad \forall\, x,y \in \Z^d,n\in \N.$$
\end{lemma}
\begin{proof}{lemma}{lem:bar-D}
Let $K$ be a positive constant. Then
\begin{eqnarray*}
&&P(\overline{D}(x,y)\ge K\|x-y\|_1 +(2d+1)Kn)\cr
&\le& P(\kappa(x)>n)+P(\kappa(y)>n)\cr
&&+\,P(\overline{D}(x,y)\ge K\|x-y\|_1 +(2d+1)Kn,\kappa(x)\le n, \kappa(y)\le n)\cr
&\le& P(\kappa(x)>n)+P(\kappa(y)>n)\cr
&&+\sum_{x'\in B(x,n),y'\in B(y,n)}P(D(x',y')\ge K\|x-y\|_1 +(2d+1)Kn, x'\rightarrow y' )\cr
&\le& P(\kappa(x)>n)+P(\kappa(y)>n)\cr
&&+\sum_{x'\in B(x,n),y'\in B(y,n)}P(D(x',y')\ge K\|x'-y'\|_1 +Kn, x'\rightarrow y').
\end{eqnarray*}
Taking $K={\rm C_2}$ (given in Lemma \ref{lem:ap2}) the result follows from Lemmas \ref{lem:ap2}
and \ref{lem:k_x-exp_tail}.
\end{proof}\\ \\
Of course, the random variables $u(x)$ and $\widehat{\tau}(x,y)$ are almost surely finite, but we will need better control of their size, provided by our next lemma.
\begin{lemma}\label{lem:regularite-approx-tau}
For all $x,y\in\Z^d$, $r\in\N\setminus\{0\}$, $u(x)$ and $\widehat{\tau}(x,y)$ have a finite $r$-th moment.
\end{lemma}
\begin{proof}{lemma}{lem:regularite-approx-tau}
By Lemma \ref{lem:k_x-exp_tail},
$u(x)$ is bounded above by a sum of passage times $e(y,z)$ with $y$ and $z$ in the box $B(x,Y)$, where $Y$ is a random variable whose moments are all finite.
By Lemmas \ref{lem:k_x-exp_tail} and \ref{lem:bar-D} the same holds for $\widehat{\tau}(x,y)$. Therefore it suffices to show that if
$(X_i,i\in \N)$ is a sequence of i.i.d. random variables and
$N$ is a random variable taking values in $\N$, then $\sum_{i=1}^N X_i$ has all its moments finite whenever both the $X_i$'s and $N$ do. To prove this write:
\begin{eqnarray*}
E(\vert \sum_{i=1}^N X_i\vert^r)&= &\sum_{n=1}^{\infty}E(\vert X_1+\dots+X_n\vert^r {\bf 1}_{\{N=n\}})\cr
&\leq&\sum_{n=1}^{\infty}[E(\vert X_1+\dots+X_n\vert^{2r})P(N=n)]^{1/2} \cr
&\leq&\sum_{n=1}^{\infty}[E((\vert X_1\vert +\dots+\vert X_n\vert)^{2r})P(N=n)]^{1/2}\cr
&\leq&\sum_{n=1}^{\infty} [n^{2r}{\rm C}_{2r}P(N=n)]^ {1/2}
\end{eqnarray*}
where the second line
follows from the Cauchy--Schwarz inequality,
the factor $n^{2r}$ counts the number of terms in the expansion of $(\vert X_1\vert +\dots+\vert X_n\vert)^{2r}$ and the constant ${\rm C}_{2r}$ depends on the distribution
of the $X_i$'s.
As $N$ has all its moments finite, Markov's inequality gives $P(N=n)\le E(N^{2r+4})/n^{2r+4}$, so $P(N=n)$ decreases faster than $n^{-2r-4}$ and the sum is finite.
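For definiteness, one admissible choice of the constant is ${\rm C}_{2r}=E(\vert X_1\vert^{2r})$: by the generalized H\"older inequality, each of the $n^{2r}$ terms of the expansion satisfies
\[
E\big(\vert X_{i_1}\vert\cdots\vert X_{i_{2r}}\vert\big)\le \prod_{j=1}^{2r}E\big(\vert X_{i_j}\vert^{2r}\big)^{1/(2r)}=E(\vert X_1\vert^{2r}).
\]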
\end{proof}\\ \\
We now construct a process $(\vartheta_\cdot)$ which is subadditive in every
direction, and has a.s., by Kingman's Theorem, a radial limit denoted by $\mu$. We will then
check that $\widehat\tau(o,\cdot)$ also has, in every
direction, the same radial limit, and we will extend this conclusion
to $\tau(o,\cdot)$ on the set $C_o^o$ of sites that have ever been infected.
Hence we first prove
\begin{theorem}\label{th:radial-limits}
For all $z\in\Z^d$, there exists $\mu(z)\in\R^+$
such that almost surely
\begin{equation}\label{eq:lim-widehat_tau}
\lim_{n\to +\infty}\frac {\widehat{\tau}(o,nz)}n =\mu(z),
\end{equation}
\begin{equation}\label{eq:radial-limit_2}
\lim_{n\to +\infty}\left[\frac{\tau (o,nz)}n - \mu(z)\right]{\bf 1}_{\{nz\in C_o^o\}}=0.
\end{equation}
\end{theorem}
\begin{proof}{theorem}{th:radial-limits}
\textit{(i)}
For all $z\in\Z^d$, $(m,n)\in\N^2$, let
\begin{equation}\label{eq:vartheta_theta}
\vartheta_z(m,n)=\widehat{\tau}(mz,nz)+u(nz).
\end{equation}
The process $(\vartheta_z(m,n))_{(m,n)\in\N^2}$ satisfies the hypotheses of Kingman's subadditive ergodic theorem (see \cite[Theorem VI.2.6]{lig1}) by \eqref{eq:sous-additif}.
Hence there exists $\mu(z)\in\R^+$ such that
\begin{equation}\label{eq:lim-vartheta_theta}
\lim_{n\to +\infty}\frac 1n \vartheta_z(0,n)=\mu(z)
\,\mbox{ a.s. and in }L^1.
\end{equation}
Since the random variables $(u(z):z\in \Z^d)$ are identically distributed,
it follows from Lemma \ref{lem:regularite-approx-tau} and Chebyshev's inequality that
$\sum_{n=0}^\infty P(u(nz)>n\eps)<+\infty$
for all $\eps>0$, so that by the Borel--Cantelli Lemma
\begin{equation}\label{eq:lim-u}
\lim_{n\to +\infty}\frac {u(nz)}n =0,\,\mbox{ a.s.}
\end{equation}
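Explicitly, the summability invoked above is a routine moment bound: since $u(nz)$ has the same law as $u(o)$,
\[
P(u(nz)>n\eps)=P(u(o)>n\eps)\le \frac{E(u(o)^2)}{(n\eps)^2},
\]
which is summable in $n$.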
Thus by \eqref{eq:lim-vartheta_theta}, \eqref{eq:lim-u} we have \eqref{eq:lim-widehat_tau} for all $z\in\Z^d$.\\ \\
\textit{(ii)} Since $ R_o^o$ is a.s. finite, if $nz\in C_o^o$, then $nz\in C_o^o\setminus R_o^o$ for $n$ large enough. Hence, from
Lemma \ref{lem:comparaison-approximation}, for $n$ large enough we have
\[
\left|\dsp{\frac{\tau(o,nz)}n}-\mu(z)\right|{\bf 1}_{\{nz\in C_o^o\setminus R_o^o\}}
\le \dsp{\frac{u(o) + u(nz)}n}+\left|\dsp{\frac{\widehat{\tau}(o,nz)}n}-\mu(z)\right|
\]
and we conclude by \eqref{eq:lim-u} and \eqref{eq:lim-widehat_tau}.
\end{proof}
\subsection{Extending $\mu$}\label{subsec:extending_mu}
We have proved the existence of a linear propagation speed in every direction of $\Z^d$.
However, to derive an asymptotic shape result, in particular for the approximating passage times
$(\widehat\tau(x,y),x,y\in\Z^d)$, we need to control this propagation speed uniformly in all directions.
For this we study $\mu(z),z\in\Z^d$, and
we follow \cite{CD1} to construct a Lipschitz, convex and homogeneous function $\varphi$
which extends $\mu$ to $\R^d$. The asymptotic shape of the epidemic will be given
by the convex set $D$ defined later on in \eqref{eq:A-et-D}. \par
\begin{lemma}\label{lem:g-et-son-extension}
The function $g$ defined on $\Z^d$ by
\begin{equation}\label{eq:g_on-Z}
\forall z\in\Z^d, \quad g(z)=E(\vartheta_z(0,1))
\end{equation}
has a barycentric extension to $\R^d$.
\end{lemma}
\begin{proof}{lemma}{lem:g-et-son-extension}
By \eqref{eq:lim-vartheta_theta}, we have a.s., for all $z\in\Z^d$,
\begin{equation}\label{eq:mu_z}
\lim_{n\to +\infty}\frac{\vartheta_z(0,n)}{n}=\inf_{n\in\N} E\left(\frac{\vartheta_z(0,n)}{n}\right)=\inf_{n\in\N} \frac{g(nz)}{n}=\mu(z).
\end{equation}
To construct a barycentric extension of $g$, we decompose $[0,1]^d$ into simplexes: since each point of a simplex has a unique barycentric decomposition, $g$ will be defined at such a point as the corresponding barycentric combination of its values at the extremal points. This (arbitrary) construction will be translation invariant.
\begin{enumerate}
\item Let $M$ denote the center of $[0,1]^d$. We define $g(M)$
to be the mean of the values of $g$ on the $2^d$ elements of $\Z^d\cap[0,1]^d$.
\item For the center $c$ of each face ${\bf F}$ of $[0,1]^d$ (which is a cube of dimension $d-1$) we define $g(c)$ to be the mean of the values of $g$ on the $2^{d-1}$ elements of $\Z^d\cap{\bf F}$. We proceed similarly on each sub-cube, up to sub-faces of dimension 2.
\item Now, on sub-faces $F$ of dimension 2, we link the center to the 4 elements of $\Z^d\cap F$, to obtain 4 triangles, or simplexes, of dimension 2. On each of them, we define $g(x)$ for each point $x$ to be the barycentric combination of the values of $g$ on the 3 extremal points.
\item We deal with dimension 3 sub-faces by taking barycentric combinations between the dimension 2 simplexes and the center of the dimension 3 sub-face. This way we have decomposed each dimension 3 sub-face into $4\times 6$ simplexes, on which for each point $x$ we define $g(x)$ in a barycentric way. We go on in the same way until the dimension $d$ cube, that is $[0,1]^d$.
\end{enumerate}
The function $g$ is continuous on $[0,1]^d$. Then, for $z=(z_1,\ldots,z_d)\in \Z^d$, to each $d$-cube $\prod_{i=1}^d[z_i,z_i+1]$ we associate the $2^{d-1}d!$ simplexes translated from those described in $[0,1]^d$, and we define as previously $g(x)$ for each point $x\in\prod_{i=1}^d[z_i,z_i+1]$ to be the barycentric combination of the values of $g$ on $d+1$ extremal points.
\end{proof}\\ \\
Following \cite[Lemma 3.2]{CD1}, we define a sequence of functions $(g_n)_{n\ge 0}$ by
\begin{equation}\label{eq:gn}
\forall x\in\R^d,\quad g_n(x)=\frac{g(nx)}{n}.
\end{equation}
\begin{lemma}\label{lem:gn_lipsch}
The elements of the sequence $(g_n)_{n\ge 0}$ are Lipschitz functions with a common Lipschitz constant
denoted by $\gamma^*$, hence the sequence is equicontinuous.
\end{lemma}
\begin{proof}{lemma}{lem:gn_lipsch}
It is enough to prove that $g$ is a Lipschitz function.
For all $x,y\in\Z^d$, by subadditivity and symmetry of $\vartheta_\cdot$ on $\Z^d$ we get
\beq\label{eq:g-subadd-1}
g(x)+g(y) &\ge& g(x+y),\\
\label{eq:g-subadd-2}
g(x+y)+g(-y) &\ge& g(x),\\
\label{eq:g-sym}
g(-y)=g(y) &\le& \|y\|_1 g({\rm e}_1).
\eeq
Indeed,
\begin{eqnarray*}
g(x+y) &=& E(\vartheta_{x+y}(0,1))=E\left(\widehat{\tau}(o,x+y)+u(x+y)\right)\cr
&\le& E\left(\widehat{\tau}(o,x)+u(x)\right)+E\left(\widehat{\tau}(x,x+y)\right)+E\left(u(x+y)\right)\cr
&=& g(x) +E\left(\widehat{\tau}(o,y)\right)+ E\left(u(o)\right)\cr
&=& g(x)+g(y)
\end{eqnarray*}
where we have used successively \eqref{eq:g_on-Z}, \eqref{eq:vartheta_theta}, \eqref{eq:sous-additif}
and the translation invariance of the distributions of $\widehat{\tau}$ and $u$. Then, writing \eqref{eq:g-subadd-1} for $x=(x+y)+(-y)$ gives \eqref{eq:g-subadd-2}.
For \eqref{eq:g-sym}, we first use the symmetry of $\vartheta_\cdot$ on $\Z^d$ to get
$g(-y)=g(y)$, which we then combine with \eqref{eq:g-subadd-1} to write
\[
g(y) \le \sum_{i=1}^d g(|y_i|{\rm e}_i)\le\sum_{i=1}^d |y_i|g({\rm e}_i)=\|y\|_1 g({\rm e}_1).
\]
Therefore by \eqref{eq:g-subadd-1}, \eqref{eq:g-subadd-2}, \eqref{eq:g-sym},
\begin{equation}\label{eq:g-sym-part3}
|g(x+y)-g(x)| \le g(y){\bf 1}_{\{g(x+y)\ge g(x)\}} + g(-y){\bf 1}_{\{g(x+y)< g(x)\}}\le \|y\|_1 g({\rm e}_1).
\end{equation}
If we now take $x,y\in\R^d$, from the previous barycentric construction,
let $(x_0=x,\ldots,x_n=x+y)$ be the sequence of points on the simplexes crossed by
$[x,x+y]$. Then
\[
|g(x+y)-g(x)| \le \sum_{k=0}^{n-1} |g(x_{k+1})-g(x_k)|.
\]
Thus, since $\|y\|_1=\sum_{k=0}^{n-1} \|x_{k+1}-x_k\|_1$, it suffices to show that the Lipschitz constant of $g$ on a given simplex does not depend on the simplex. Assuming now that $x,y$ belong to the same simplex, they are written uniquely as a barycentric combination of the $d+1$ extremal points $(z_0,\ldots,z_d)$ of that simplex, $z_0$ being the center of the cube translated from $[0,1]^d$ containing the simplex. Similarly, each $z_i,0\le i\le d$, is a barycentric combination of the $2^d$ extremal points $(c_l,1\le l\le 2^d)$ of that cube, with coefficients given by the barycentric construction:
\begin{eqnarray}\label{eq:xy_bary}
&x=\dsp{\sum_{i=0}^d \alpha_i z_i;\qquad
y=\sum_{i=0}^d \beta_i z_i;\qquad
\sum_{i=0}^d \alpha_i=\sum_{i=0}^d \beta_i=1};\cr
&\dsp{z_0=\sum_{l=1}^{2^d}\kappa_l c_l; z_i=\sum_{l=1}^{2^d} \iota_l c_l,\,1\le i\le d;\,
\sum_{l=1}^{2^d} \kappa_l=\sum_{l=1}^{2^d} \iota_l=1}.
\end{eqnarray}
As $(z_i-z_0,1\le i\le d)$ is a basis of the vector space $\R^d$, denoting by $\|.\|_1^*$ the $l^1$-norm w.r.t. this basis, there exists a constant $\gamma_0>0$ such that
\begin{equation}\label{eq:normes-equivalentes}
\forall z\in\R^d,\quad \frac{1}{\gamma_0}\|z\|_1\le \|z\|_1^*\le\gamma_0\|z\|_1.
\end{equation}
Since $[0,1]^d$ is decomposed into a finite number $2^{d-1}d!$ of simplexes, \eqref{eq:normes-equivalentes} is valid for all these simplexes, for a constant
$\gamma>0$ which is the maximum of all the $\gamma_0$'s. We have, using \eqref{eq:g-sym-part3}, \eqref{eq:xy_bary},
\eqref{eq:normes-equivalentes},
\begin{eqnarray*}\label{eq:g_xy_bary}
|g(x)-g(y)| &=& \left| \sum_{i=0}^d (\alpha_i-\beta_i)g(z_i)\right|
= \left| \sum_{i=0}^d (\alpha_i-\beta_i)(g(z_i)-g(z_0))\right|\cr
&\le& \sum_{i=0}^{d} |\alpha_i-\beta_i|
\left| \sum_{l=1}^{2^d} (\iota_l-\kappa_l)(g(c_l)-g(c_0))\right|\cr
&\le& \sum_{i=0}^{d} |\alpha_i-\beta_i|\sum_{l=1}^{2^d} |\iota_l-\kappa_l|\,\|c_l-c_0\|_1 g({\rm e}_1)\cr
&\le& \sum_{i=0}^{d} |\alpha_i-\beta_i|\, 2^d\times 2\times 2d\times g({\rm e}_1)
=2^{d+2}dg({\rm e}_1)\|x-y\|_1^*\cr
&\le& 2^{d+2}\gamma dg({\rm e}_1)\|x-y\|_1
\end{eqnarray*}
hence there exists $\gamma^*>0$ such that
\begin{equation}\label{eq:g_lipsch_R4}
\forall x,y\in\R^d,\quad |g(x)-g(y)| \le \gamma^*\|x-y\|_1.
\end{equation}
\end{proof}
\begin{lemma}\label{lem:definir_phi}
The sequence $(g_n)_{n\ge 0}$ converges uniformly on each compact
subset of $\R^d$ to a function $\varphi$ which extends $\mu$ to $\R^d$,
is Lipschitz with Lipschitz constant $\gamma^*$, convex and homogeneous
(that is, $\varphi(\alpha_1 x)=\alpha_1\varphi(x)$ for all
$x\in\R^d$ and $\alpha_1>0$).
\end{lemma}
\begin{proof}{lemma}{lem:definir_phi}
\textit{(i)} For $x\in \Z^d$,
$$g_n(x)=\frac{g(nx)}{n}= \frac{E(\vartheta_{nx}(0,1))}{n}=\frac{E(\vartheta_{x}(0,n))}{n}.$$
Hence by \eqref{eq:lim-vartheta_theta} we get
\begin{equation}\label{limg_n}
\lim_{m\to\infty} g_m(x)=\mu(x) \ \ \forall \ x\in \Z^d.
\end{equation}
Let now $x\in \Q^d$,
and
\begin{equation}\label{eq:Nx}
N_x=\min\{k\ge 1,k\in \N :kx\in\Z^d\}.
\end{equation}
Then, $g_{nN_x}(x)=g(nN_xx)/(nN_x)$ converges to $\mu(N_xx)/N_x$ as $n$ goes to infinity. To prove the convergence of $g_m(x)$ along the whole sequence, write
$m=n(m)N_x +j(m)$ where $j(m)\in \{0,\ldots,N_x-1\}$, so that
\begin{eqnarray*}
g_m(x)&=&\frac{g(mx)}{m}=\frac{g(n(m)N_xx+j(m)x)}{n(m)N_x+j(m)}\cr
&=&\frac{g(n(m)N_xx)}{n(m)N_x}\times\frac{n(m)N_x}{n(m)N_x+j(m)}\cr
&&\quad+ \frac{ g(n(m)N_xx+j(m)x)-g(n(m)N_xx) }{n(m)N_x+j(m)}.
\end{eqnarray*}
By \eqref{eq:g_lipsch_R4}, the second term of the last right hand side above converges
to $0$ as $m$
goes to infinity.
Therefore,
\be\label{eq:lim-sur-Q}
\lim_{m\to\infty} g_m(x)=\lim_{m\to\infty} \frac{g(n(m)N_xx)}{n(m)N_x}\times\frac{n(m)N_x}{n(m)N_x+j(m)}=\frac{\mu(N_xx)}{N_x}, \qquad \forall x\in \Q^d.
\ee
It follows from Lemma \ref{lem:gn_lipsch} and the Arzel\`a--Ascoli Theorem that any subsequence of $(g_m)_{m\ge 0}$ has a further subsequence that converges uniformly on compact subsets of
$\R^d$ to a Lipschitz function with the same Lipschitz constant
$\gamma^*$ as the $g_m$'s (cf. \eqref{eq:g_lipsch_R4}). By \eqref{eq:lim-sur-Q},
any such limit must be equal to $\mu(N_xx)/N_x$ for all $x\in \Q^d$;
being Lipschitz, it is therefore uniquely determined on the whole of $\R^d$. Hence the limit does not depend on the subsequence,
and the whole sequence $(g_m)_{m\ge 0}$ converges uniformly on compact subsets of $\R^d$ to a Lipschitz function $\varphi$, so that
\begin{equation}\label{eq:phi_on-Q}
\forall\, x\in\Q^d,\quad \lim_{n\to +\infty}g_n(x)=\varphi(x),
\end{equation}
and $\varphi$ extends $\mu$ to $\R^d$ by \eqref{limg_n}.\\ \\
\textit{(ii)} To prove that $\varphi$ is homogeneous we start by noting that for $z\in \Z^d$ and $k\in \N$ we have:
\be\label{eq:homo-Z}
\varphi (z)=\mu(z)=\lim_{n\to +\infty} \frac{\vartheta_z(0,n)}{n}=\lim_{n\to +\infty} \frac{\vartheta_z(0,nk)}{nk}=\frac{\mu(kz)}{k}=\frac{\varphi(kz)}{k}.
\ee
Now let $x\in \Q^d$ and recall that $\varphi(x)=\mu(N_xx)/N_x =\varphi(N_xx)/N_x $. Then if $n$ is a multiple of $N_x$, we let $k=n/N_x\in \N$
and write by \eqref{eq:homo-Z},
\be\label{eq:homo-Q}
\varphi(x)=\frac{\varphi (N_xx)}{N_x}=\frac{\varphi(kN_xx)}{kN_x}=\frac{\varphi(nx)}{n}.
\ee
Since $N_x$ is a multiple of $N_{kx}$, \eqref{eq:homo-Q} implies:
$$\varphi(kx)=\frac{\varphi(N_xkx)}{N_x}=\frac{\varphi(kN_xx)}{N_x}
=\frac{k\varphi(N_xx)}{N_x}=k\varphi(x),\qquad \forall \ x\in \Q^d, k\in \N .$$
Hence, if $r=n/m$ and $x\in \Q^d$ we have:
$$\varphi(rx)= n\varphi((1/m)x)=(n/m)\varphi(x),$$
so that $\varphi$ is homogeneous on $\Q^d$.\\ \\
\textit{(iii)} To prove that $\varphi$ is convex on $\Q^d$, take $x,y \in \Q^d$ and $\alpha \in \Q \cap (0,1)$.
Then let $k_1,k_2$ be elements in $\N$ such that $k_1\alpha \in \N$, $k_2 x \in \Z^d$ and $k_2 y \in \Z^d$. Using subadditivity of $g$ and homogeneity of $\varphi$ write:
\begin{eqnarray*}
\varphi (\alpha x+(1-\alpha)y)&=&\lim_{n \to\infty}\frac{g(n\alpha x+n(1-\alpha)y)}{n}\cr
&=&\lim_{n \to\infty} \frac {g(nk_1\alpha k_2x+nk_1(1-\alpha)k_2y)}{nk_1k_2}\cr
&\leq&\lim_{n \to\infty} \frac {g(nk_1\alpha k_2x)+g(nk_1(1-\alpha)k_2y)}{nk_1k_2}\cr
&=&\frac{\varphi(k_1k_2\alpha x)+\varphi(k_1k_2(1-\alpha)y)}{k_1k_2}\cr
&=&\alpha \varphi(x)+(1-\alpha)\varphi(y).
\end{eqnarray*}
Since $\varphi$ is continuous, homogeneity and convexity extend from $\Q^d$ to all of $\R^d$.
\end{proof}
\subsection{Behavior of $\widehat \tau$}\label{subsec:behavior-widehat_tau}
Our next result says that for $z\in\Z^d$, $\widehat \tau(o,z)$ grows at most linearly in $\|z\|_{\infty}$.
\begin{theorem}\label{th:widehat_t-serie}
There exist $K=K(\lambda,d)>0$ and $\alpha>0$ such that
\begin{eqnarray*}
P(\widehat\tau(o,z)>K\|z\|_\infty)&\leq& \exp(-\alpha\|z\|_\infty^{1/d}),\ \ \forall \ z\in \Z^d,\cr
P(\widehat\tau(o,z)>K(\|z\|_\infty+n))&\leq& \exp(-\alpha n^{1/d}),\ \ \forall \ z\in \Z^d,n\in \N,\cr
\sum_{z\in\Z^d} P({\widehat\tau}(o,z)>K\|z\|_\infty)&<&+\infty.
\end{eqnarray*}
\end{theorem}
\begin{proof}{theorem}{th:widehat_t-serie}
Let $K\ge 0, z\in\Z^d$. Then write:
\begin{eqnarray}\label{eq:calcul1-widehat_t-serie}
&&P({\widehat\tau}(o,z)>K(\|z\|_\infty+n))\cr
&\le & P(4\kappa(z)>\|z\|_\infty+n)+ P(4\kappa(o)>\|z\|_\infty+n) +
P(A)
\end{eqnarray}
where
\begin{eqnarray}\label{eq:calcul2-widehat_t-serie}
A&=&\{{\widehat\tau}(o,z)>K(\|z\|_\infty +n), 4\kappa(z) \le \|z\|_\infty +n, 4\kappa(o) \le \|z\|_\infty +n\}\\\nonumber
&\subset& \dsp{\cup_{(x,y)\in B(o,(\|z\|_\infty+n)/4)\times B(z,(\|z\|_\infty+n)/4)}\{ x\to y, \tau(x,y)>K(\|z\|_\infty+n)\}}.
\end{eqnarray}
Noting that if $(x,y)\in B(o,(\|z\|_\infty+n)/4)\times B(z,(\|z\|_\infty+n)/4)$ we have
\begin{eqnarray}\label{eq:x-y and z}
\|z\|_\infty-n&\le& 2\|x-y\|_\infty\le 3\|z\|_\infty+n\quad\mbox{ and}\cr
3(\|z\|_\infty+n)&=& 3\|z\|_\infty+n+2n\ge 2(\|x-y\|_\infty+n),
\end{eqnarray}
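For the reader's convenience, let us sketch \eqref{eq:x-y and z}: writing $m=\|z\|_\infty+n$, for $x\in B(o,m/4)$ and $y\in B(z,m/4)$ the triangle inequality gives
\[
\|x-y\|_\infty\ge \|z\|_\infty-\frac m2=\frac{\|z\|_\infty-n}2
\quad\mbox{and}\quad
\|x-y\|_\infty\le \|z\|_\infty+\frac m2=\frac{3\|z\|_\infty+n}2,
\]
and the second line of \eqref{eq:x-y and z} follows by adding $2n$ to the inequality $3\|z\|_\infty+n\ge 2\|x-y\|_\infty$.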
from \eqref{eq:calcul2-widehat_t-serie}, for ${\rm C}_2$ given in Lemma \ref{lem:ap2}, we get:
\begin{eqnarray}\label{eq:calcul5-widehat_t-serie}
&&P(A)
\le\sum_{x\in B(o,(\|z\|_\infty+n)/4)}\ \ \sum_{y\in B(z,(\|z\|_\infty+n)/4)} \cr
&&\Big(P( 3\tau(x,y)> 2K(\|x-y\|_\infty +n), D(x,y)<({\rm C}_2+1)(\|x-y\|_1+n))\cr
&&\quad+P(x\to y, D(x,y)\geq ({\rm C}_2+1)(\|x-y\|_1+n ))\Big).
\end{eqnarray}
It now follows from Lemma \ref{lem:ap2} part \textit{(i)} that
we have
\begin{eqnarray}\label{eq:calcul6-widehat_t-serie}
&&P(x\to y, D(x,y)\geq ({\rm C}_2+1)(\|x-y\|_1+n ))\cr
&\le& \exp(-\alpha_3 (\|x-y\|_1+n)^{1/d})\cr
&\le& \exp(-\alpha_3 (\|x-y\|_\infty+n)^{1/d}).
\end{eqnarray}
Then, taking $K$ large enough, by large deviation results for exponential
variables, we also have, for some $\alpha_5>0$,
\begin{eqnarray}\label{eq:calcul7-widehat_t-serie}
&& P( 3\tau(x,y)>2K(\|x-y\|_\infty +n), D(x,y)<({\rm C}_2+1)(\|x-y\|_1+n)) \cr
&\le& P( 3\tau(x,y)>2K(\|x-y\|_\infty +n), D(x,y)<({\rm C}_2+1)d(\|x-y\|_\infty+n))\cr
&\le& \exp(-\alpha_5(\|x-y\|_\infty+n)).
\end{eqnarray}
Hence, from \eqref{eq:x-y and z}--\eqref{eq:calcul7-widehat_t-serie}, for some constants $R$
and $\alpha_6>0$ we have:
\begin{eqnarray*}
&&P(A) \le R (\|z\|_\infty+n)^{2d}\exp(-\alpha_6 (\|z\|_\infty+n)^{1/d}),
\end{eqnarray*}
which gives, by modifying the constants,
\begin{eqnarray}\label{eq:calcul10-widehat_t-serie}
&&P(A) \le R' \exp(-\alpha_7 (\|z\|_\infty+n)^{1/d}) .
\end{eqnarray}
All the statements of the Theorem now follow from \eqref{eq:calcul10-widehat_t-serie}, \eqref{eq:calcul1-widehat_t-serie} and Lemma \ref{lem:k_x-exp_tail}.
\end{proof}
\subsection{Asymptotic shape for $\widehat \tau$}\label{subsec:shape-widehat_t}
\begin{theorem}\label{th:At-encadre}
Let $\eps>0$, and
\begin{eqnarray}\label{eq:A-et-D}
{\widehat A}_t&=&\{z\in\Z^d:{\widehat\tau}(o,z)\le t\},\cr
D&=&\{x\in\R^d:\varphi(x)\le 1\}.
\end{eqnarray}
Then, a.s. for $t$ large enough,
\begin{equation}\label{eq:At-encadre}
(1-\eps)tD\cap\Z^d\subset{\widehat A}_t\subset(1+\eps)tD\cap\Z^d.
\end{equation}
\end{theorem}
\begin{remark}\label{rk:D-borne}
The set $D$ is bounded: indeed,
passage times along edges are bounded below by exponentially distributed passage times,
hence the epidemic cannot propagate faster than the corresponding
first passage percolation process, whose passage times have exponential distribution of
parameter $\lambda/(2d)$, and which, by \cite[Theorem (1.15)]{Ke}, grows linearly with a convex asymptotic shape.
\end{remark}
In the sequel $K$ is a fixed constant satisfying the conclusions of Theorem \ref{th:widehat_t-serie}, $\gamma^*$ is the Lipschitz constant of $\varphi$ (see \eqref{eq:g_lipsch_R4}) and
$N_x$ was defined in \eqref{eq:Nx} for any $x\in \Q^d\setminus \{o\}$.
\begin{lemma}\label{lem:covering B_0}
Let $\rho>0$ and let $\delta \le \rho/(2K)$. Then, for all $x\in \Q^d\setminus \{o\}$,
\begin{equation}\label{eq:f1}
\sum_{k>0}P(\sup_{z\in B(kN_xx,\delta kN_x)\cap\Z^d}\widehat \tau(kN_xx,z)\ge kN_x\rho)<\infty,
\end{equation}
\begin{equation}\label{eq:f2}
\sum_{k>0}P(\sup_{z\in B(kN_xx,\delta kN_x)\cap\Z^d}\widehat \tau(z,kN_xx)\ge kN_x\rho)<\infty.
\end{equation}
\end{lemma}
\begin{proof}{lemma}{lem:covering B_0}
Let $k>0,z\in B(o,\delta kN_x)\cap\Z^d$. By Theorem \ref{th:widehat_t-serie}
we have:
\begin{eqnarray*}
P(\widehat \tau (o,z)\ge kN_x\rho)&\le& P(\widehat \tau (o,z)\ge K\|z\|_\infty +\lfloor kN_x\rho/2\rfloor)\cr
&\le& \exp(-\alpha \lfloor kN_x\rho/2\rfloor^{1/d}).
\end{eqnarray*}
Therefore, for some constant ${\rm C}$,
\begin{eqnarray*}
\sum_{k>0}P(\sup_{z\in B(o,\delta kN_x)\cap\Z^d}\widehat \tau(o,z)\ge kN_x\rho)
&\le& \sum_{k>0} {\rm C} (\delta kN_x)^d \exp(-\alpha \lfloor kN_x\rho/2\rfloor^{1/d})\cr
&<&\infty.
\end{eqnarray*}
Now \eqref{eq:f1} follows from the translation invariance of $\widehat \tau$.
The proof of \eqref{eq:f2} is analogous.
\end{proof}\\ \\
For $x=(x_1,\ldots,x_d)\in\Q^d\setminus\{o\}$ and $\delta>0$, we define
the cone associated to $x$ of amplitude $\delta$ as
\begin{equation}\label{eq:cone_de_x}
C(x,\delta)=\Z^d\cap \Big(\cup_{t\ge 0}B(xt,\delta t)\Big).
\end{equation}
\begin{lemma}\label{lem:included-cone}
Let $x\in \Q^d\setminus \{o\}$. Then for any $0<\delta' <\delta$
the set $C(x,\delta')\setminus \cup_{k\ge 0} B(kN_x x,\delta k N_x)$ is finite.
\end{lemma}
The proof of this lemma is elementary and left to the reader.\\ \\
\begin{proof}{theorem}{th:At-encadre}
Fix $\eps\in (0,1)$ and let $\rho,\delta $ and $\iota$ be three small
positive parameters such that $\delta \le \rho/(2K)$, whose values
will be determined later.
The set ${\mathcal Y}=\{x\in \Q^d:1-2\iota <\varphi(x)<1-\iota\}$ is the set of rational points of a ring-shaped region,
comprised between two homothetic copies of the boundary of $D$:
indeed, by Lemma \ref{lem:definir_phi},
$\varphi$
is homogeneous, and it is positive except that $\varphi(o)=0$. Hence the (compact) closure of ${\mathcal Y}$,
which is covered
by balls of the same radius centered at the rational points of ${\mathcal Y}$,
is in fact covered by a finite number of such balls.
Thus there exists a finite subset $Y$ of ${\mathcal Y}$
such that
$\Z^d \subset \cup_{x\in Y}C(x,\delta/2)$ (if the balls cover the ring, the cones associated to them cover the whole space). Hence, to prove the first
inclusion of \eqref{eq:At-encadre} it suffices to show that for any
$x\in Y$ and any sequences $(t_n)_{n>0}$ and
$(z_n)_{n>0}$ such that $t_n \uparrow \infty$ in
$\R^+$,
$z_n\in C(x,\delta/2)\cap\Z^d$ with $\|z_n\|_\infty\ge n$ and
$\varphi(z_n)\le (1-\eps) t_n$, we have $\widehat \tau (o,z_n)\le t_n$
a.s. for $n$ sufficiently large. So, let $(t_n)_{n>0}$ and
$(z_n)_{n>0}$ be such sequences. Using Lemma \ref{lem:included-cone},
let $k_n\in \N $ be such that $z_n\in B(k_nN_xx, \delta k_nN_x)$, hence
$k_n\ge {\rm C}n$ for some constant ${\rm C}$. Since by Lemma \ref{lem:definir_phi},
$\varphi$
is Lipschitz with Lipschitz constant $\gamma^*$, write, for $\gamma=\gamma^*d$:
$$k_n N_x (1-2\iota)\le \varphi(k_nN_xx)\le \varphi(z_n)
+\gamma \delta k_n N_x\le (1-\eps)t_n+\gamma \delta k_n N_x. $$
Therefore
$$k_nN_x\le \Big(\frac{1-\eps}{1-2\iota -\gamma \delta}\Big)t_n.$$
It now follows from this inequality and the subadditivity property
\eqref{eq:sous-additif} of $\widehat \tau$ that:
$$\frac{\widehat \tau(o,z_n)}{t_n}\le
\Big(\frac{1-\eps}{1-2\iota
-\gamma \delta}\Big)\Big(\frac{\widehat \tau(o,k_nN_xx)}{k_nN_x}
+\frac{u(k_nN_xx)}{k_nN_x}
+\frac{\widehat \tau(k_nN_xx,z_n)}{k_nN_x}\Big).$$
Therefore, by Theorem \ref{th:radial-limits},
Lemma \ref{lem:regularite-approx-tau}
(the variables $u(.)$ are identically distributed, and
$k_n\ge {\rm C}n$), Lemmas \ref{lem:definir_phi} and
\ref{lem:covering B_0} we obtain:
$$\limsup_{n\to +\infty} \frac{\widehat \tau(o,z_n)}{t_n}\le
\Big(\frac{1-\eps}{1-2\iota -\gamma \delta}\Big)\Big(\varphi(x)
+\rho \Big)\qquad \mbox{a.s.}$$
Since $x\in Y$ this implies:
$$\limsup_{n\to +\infty} \frac{\widehat \tau(o,z_n)}{t_n}
\le \Big(\frac{1-\eps}{1-2\iota -\gamma \delta}\Big)\Big(1-\iota+\rho \Big)\qquad \mbox{a.s.}$$
Taking $\iota$, $\rho$ and $\delta$ small enough, the right hand
side is strictly less than $1$ which proves that $\widehat \tau (o,z_n)\le t_n$
a.s. for $n$ sufficiently large.
Similarly, to prove the second inclusion of \eqref{eq:At-encadre} it suffices
to show that for any $x\in Y$ and any sequences $t_n \uparrow \infty$ in
$\R^+$ and
$z_n$ in $C(x,\delta/2)\cap\Z^d$ such that $\varphi(z_n)\ge (1+\eps) t_n$ we have
$\widehat \tau (o,z_n)> t_n$ a.s. for $n$ sufficiently large. As before, we let
$(t_n)_{n>0}$ and
$(z_n)_{n>0}$ be such sequences and we let $k_n\in \N $ be such that
$z_n\in B(k_nN_xx, \delta k_nN_x)$. Then,
$$k_nN_x(1-\iota)\ge \varphi(k_nN_xx)\ge \varphi(z_n)-\gamma \delta k_nN_x
\ge (1+\eps)t_n -\gamma \delta k_nN_x.$$
Therefore,
$$k_nN_x\ge \Big( \frac{1+\eps}{1-\iota +\gamma \delta}\Big)t_n.$$
Proceeding then as for the first inclusion, we get:
$$\frac{\widehat \tau(o,z_n)}{t_n}\ge \Big(\frac{1+\eps}{1-\iota
+\gamma \delta}\Big) \Big( \frac{\widehat \tau (o,k_nN_xx)}{k_nN_x}
-\frac{u(z_n)}{k_nN_x}-\frac{\widehat \tau (z_n,k_nN_xx)}{k_nN_x}\Big),$$
and
\begin{eqnarray*}
\liminf_{n\to +\infty} \frac{\widehat \tau(o,z_n)}{t_n}
&\ge& \Big(\frac{1+\eps}{1-\iota +\gamma \delta}\Big) \Big(\varphi(x) -\rho\Big)\qquad \mbox{a.s.}\cr
&\ge&\Big(\frac{1+\eps}{1-\iota +\gamma \delta}\Big) \Big(1-2\iota-\rho\Big)\qquad \mbox{a.s.}
\end{eqnarray*}
Now, taking $\iota$, $\rho$ and $\delta$ small enough,
the right hand side is strictly bigger than $1$ and the
second inclusion of \eqref{eq:At-encadre} is proved.
\end{proof}
\subsection{ Asymptotic shape for the epidemic }\label{subsec:shape-thm}
We can now prove our main result:
\begin{proof}{theorem}{th:shape}
\textit{(i)} We first show that
the infection grows at least linearly as $t$ goes to
infinity, that is,
given $\eps>0$,
\[
P\Big(\big(\zeta_t\cup \xi_t\big) \supset \big((1-\eps)tD\cap C_o^o \big)\mbox{ for all $t$ large enough} \Big)=1.
\]
Since $R_o^o$ is finite a.s. this will follow from:
\begin{equation}\label{eq:ne-stagne-pas2}
P\Big(\big(\zeta_t\cup \xi_t \big)\supset \big((1-\eps)tD\cap (C_o^o \setminus R_o^o)\big)\mbox{ for all $t$ large enough} \Big)=1.
\end{equation}
Let $z\in (1-\eps)tD\cap (C_o^o \setminus R_o^o)$, then by Theorem \ref{th:At-encadre},
\be\label{eq:tau_z-borne}
{\widehat\tau}(o,z)\le(1-\eps/2)t,\, \mbox{a.s. for } t \mbox{ large enough,}
\ee
and by Lemma \ref{lem:comparaison-approximation},
${\tau}(o,z)\le(1-\eps/2)t+u(o)+u(z)$.
Since $u(o)< \infty$ a.s. we have $u(o)<(\eps/4)t$ a.s. for $t$ large enough. Hence
\eqref{eq:ne-stagne-pas2} will follow if we show that
$\sup_{z \in tD}u(z)\leq (\eps/4) t\,$ a.s. for $t$ large enough. To derive this, it is enough to show that
$\sup_{z \in (n+1)D}u(z)\leq (\eps/4)n\,$ a.s. for $n\in\N$ large enough.
By Remark \ref{rk:D-borne}, $D$ is bounded, hence the number of points of $\Z^d$ in $(n+1)D$ is
less than ${\rm C_5}(n+1)^d$ for some constant ${\rm C_5}$.
Then write
\begin{eqnarray*}
P\left(\sup_{z \in (n+1)D}u(z)\geq \frac{\eps n}4\right)&\leq&
{\rm C_5}(n+1)^d P\left(u(o)\geq \frac{\eps n}4\right)\cr&\leq&
{\rm C_5}(n+1)^d \frac{4^{d+2}}{(\eps n)^{d+2}}E(u(o)^{d+2}).
\end{eqnarray*}
Thus, by Lemma \ref{lem:regularite-approx-tau},
$\sum_{n\in\N} P(\sup_{z \in (n+1)D}u(z)\geq \eps n/4)<\infty$,
and \eqref{eq:ne-stagne-pas2} follows from
the Borel--Cantelli Lemma.\\ \\
\textit{(ii)} Next we show that
\begin{equation}\label{eq:ne-deborde-pas}
P\Big(\big(\zeta_t\cup \xi_t\big) \subset\big( (1+\eps)tD\cap C_o^o\big) \mbox{ for all $t$ large enough} \Big)=1.
\end{equation}
If $z$ belongs to $\xi_t$ or $\zeta_t$, then ${\tau}(o,z)\le t$,
hence by Lemma \ref{lem:comparaison-approximation},
${\widehat\tau}(o,z)\le t+u(o)+u(z)$ for $z\in C_o^o\setminus R_o^o$; since, as in part \textit{(i)}, the terms $u(o)$ and $u(z)$ are negligible with respect to $t$, this implies
$z\in(1+\eps)tD$ for $t$ large enough by Theorem \ref{th:At-encadre}. Since $R_o^o$ is finite \eqref{eq:ne-deborde-pas} follows.\\ \\
\textit{(iii)} Finally, assuming $E(\vert T_z\vert^d)<\infty$, we show that
\begin{equation}\label{eq:brule-rapidement}
P(\zeta_t\cap(1-\eps)tD=\emptyset \mbox{ for $t$ large enough})=1.
\end{equation}
Let $z\in(1-\eps)tD\cap C_o^o$, then, by \eqref{eq:tau_z-borne}, $\tau(o,z)\le (1-\eps/2)t$ if $t$ is large enough.
Hence, \eqref{eq:brule-rapidement} follows if we show that $T_z\ge (\eps/2)\tau(o,z)$ occurs only for a finite number of $z$'s. But from \eqref{eq:ne-deborde-pas} we get
that for some $\delta>0$ we have $\tau(o,z) \ge \delta \|z\|_\infty$ except for a finite number of $z$'s.
Therefore, it suffices to show that for any $\delta'>0$ the event $\{T_z\ge \delta'\|z\|_\infty\}$ can only occur for a finite number of $z$'s. This will follow from
the Borel--Cantelli Lemma
once we prove that $\sum_{z\in\Z^d} P(T_z\ge \delta' \|z\|_\infty)<\infty$. To do so we write, since the $T_z$'s are identically distributed:
$$\sum_{z\in\Z^d} P(T_z\ge \delta' \|z\|_\infty)=\sum_{n\in\N}\sum_{z:\|z\|_\infty=n}P(T_z\ge \delta' n)\le c\sum_{n\in\N} n^{d-1}P(T_o\ge \delta' n)$$
for some constant $c$, and this last series converges because $T_o$ has a finite moment of order $d$.
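Indeed, a layer-cake decomposition makes the last step explicit: with $p_k=P(k\le T_o/\delta'<k+1)$,
\[
\sum_{n\ge 1} n^{d-1}P(T_o\ge \delta' n)=\sum_{k\ge 1}p_k\sum_{n=1}^{k} n^{d-1}\le \sum_{k\ge 1}p_k\,k^d\le E\big((T_o/\delta')^d\big)<\infty.
\]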
\end{proof}
\mbox{}\\ \\
\noindent{\bf Acknowledgements.} We thank Geoffrey Grimmett for useful discussions.
This work was initiated during the semester
``Interacting Particle Systems, Statistical Mechanics and Probability Theory'' at CEB, IHP (Paris), whose hospitality is acknowledged. Part of this paper was written while
E.A. was visiting IMPA, Rio de Janeiro and thanks are given for the hospitality encountered there.
In the past decades, two major theories have allowed many breakthroughs in the understanding of surface group representations.
On one side, non-abelian Hodge theory gives a bijective correspondence between conjugacy classes of representations of the fundamental group of a closed Riemann surface into a semi-simple Lie group and holomorphic objects on the Riemann surface called \emph{Higgs bundles}. This theory, developed by Hitchin, Simpson, Corlette and many others, has proven very useful in describing the topology of character varieties of surface groups (see \cite{hitchinselfduality}, \cite{hitchin} or \cite{gothen}).
On the other side, Labourie showed that many surface group representations share a certain dynamical property called the \emph{Anosov} property.
This property has strong geometric and dynamical implications similar to the \emph{quasi-Fuchsian} property for surface group representations in $\PSL(2,\C)$.
A recent trend in the field is to try to link these two seemingly disparate theories (see for instance \cite{AlessandriniLi,BaragliaThesis,CollierLi}).
Such links are far from being well-understood.
For instance, there is no known Higgs bundle characterization of Anosov representations.
The main obstacle is that finding the representation associated to a given Higgs bundle involves solving a highly transcendental system of PDEs called the \emph{Higgs bundles equations}.
However, in some cases the Higgs bundle equations simplify, and one can hope to reach a reasonably good understanding of their solutions. These simplifications happen when the Higgs bundle is \emph{cyclic}. Unfortunately, not every Higgs bundle is cyclic. Nevertheless, it turns out that restricting to cyclic Higgs bundles is enough to study representations into most Lie groups of real rank $2$. This was used by Labourie \cite{labouriecyclic} to study Hitchin representations into $\PSL(3,\R)$, $\PSp(4,\R)$ and $\mathrm{G}_2$, by the first author \cite{colliercyclic} to study some maximal representations in $\PSp(4,\R)$ and by the first author with Alessandrini \cite{PSp4maximalRepsAC} to study {\em all} maximal representations into $\PSp(4,\R)$.
The goal of this paper is to derive from Higgs bundle theory several geometric properties of representations of surface groups into Hermitian Lie groups of rank $2$. According to the work of Burger--Iozzi--Wienhard \cite{burgeriozziwienhard}, it is enough to restrict to representations into the Lie groups $\SO_0(2,n+1)$, $n\geq 1$ (see Remark \ref{rmk:LieGroupsRank2}).
\subsection*{Geometrization of maximal representations}
Hitchin representations into split real Lie groups \cite{labouriehyperconvex} and maximal representations into Hermitian Lie groups \cite{BILW} are two important families of Anosov representations.
One very nice feature of Anosov representations is that they are holonomies of certain geometric structures on closed manifolds. More precisely, for every Anosov representation $\rho$ of a hyperbolic group $\Gamma$ in a semi-simple Lie group $G$, Guichard and Wienhard \cite{wienhardanosov} construct a $\rho$-invariant open domain $\Omega$ in a certain \emph{flag manifold} $G/P$ on which $\rho(\Gamma)$ acts properly discontinuously and co-compactly.
In our setting, their result can be reformulated as follows. Let $\R^{2,n+1}$ denote the vector space $\R^{n+3}$ with the quadratic form
\[\mathbf{q}(\mathbf{x}) = x_1^2+x_2^2 - x_3^2 - \ldots - x_{n+3}^2~.\]
We denote by $\Ein^{1,n}$ the space of isotropic lines in $\R^{2,n+1}$ and by $\Pho(\R^{2,n+1})$ the space of \emph{photons} in $\Ein^{1,n}$ or, equivalently, of totally isotropic planes in $\R^{2,n+1}$. By Witt's theorem, $\SO_0(2,n+1)$ acts transitively on both $\Ein^{1,n}$ and $\Pho(\R^{2,n+1})$.
\begin{theo}[Guichard--Wienhard \cite{wienhardanosov}] \label{t:GuicharWienhard}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. If $\rho: \Gamma \to \SO_0(2,n+1)$ is a maximal representation $(n\geq 2)$, then there exists an open domain $\Omega_\rho$ in $\Pho(\R^{2,n+1})$ on which $\Gamma$ acts properly discontinuously and co-compactly via $\rho$.
\end{theo}
In particular, the representation $\rho$ is the holonomy of a \emph{photon structure} on the closed manifold $\rho(\Gamma) \backslash \Omega_\rho$ (see Definition \ref{d:FiberedPhotonStructure}). One drawback of the construction of Guichard--Wienhard is that it a priori gives neither the topology of the domain $\Omega_\rho$ nor the topology of its quotient by $\rho(\Gamma)$. In forthcoming work \cite{guichardwienhardtopology}, a very clever -- but very indirect -- argument is used to describe this topology in the case of Hitchin representations in $\SL(2n,\R)$. In an earlier paper, they focus on Hitchin representations into $\SO_0(2,3)$\footnote{To be more accurate, Guichard and Wienhard study Hitchin representations into $\PSL(4,\R)$ and in particular in $\PSp(4,\R)$, and their action on the projective space $\ProjR{3}$. By a low dimension exceptional isomorphism, $\PSp(4,\R)$ is isomorphic to $\SO_0(2,3)$ and $\ProjR{3}$ identifies (as a $\PSp(4,\R)$-homogeneous space) with $\Pho(\R^{2,3})$.} and give a more explicit parametrization of (the two connected components of) $\Omega_\rho$ by triples of distinct points in $\ProjR{1}$, thus identifying $\rho(\Gamma) \backslash \Omega_\rho$ with the unit tangent bundle of $\Sigma$. In this parametrization, however, the circle bundle structure of the manifold is not apparent.
Here, we will construct photon structures on certain fiber bundles over $\Sigma$ with holonomy any prescribed maximal representation in $\SO_0(2,n+1)$ in such a way that the fibers are ``geometric''. We will show that these photon structures coincide with the Guichard--Wienhard structures, and thus describe the topology of Guichard--Wienhard's manifolds in this setting.
\begin{MonThm}\label{t:photontructures}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. If $\rho: \Gamma \to \SO_0(2,n+1)$ is a maximal representation $(n\geq 2)$, then there exists a fiber bundle $\pi : M \to \Sigma$ with fibers diffeomorphic to $\mathrm{O}(n)/\mathrm{O}(n-2)$, and a $\Pho(\R^{2,n+1})$-structure on $M$ with holonomy $\rho \circ \pi_*$. Moreover, the developing map of this photon structure induces an isomorphism from each fiber of $\pi$ to a copy of $\Pho(\R^{2,n}) \subset \Pho(\R^{2,n+1})$.
Conversely, if $\pi : M \to \Sigma$ is a fiber bundle with fibers diffeomorphic to $\mathrm{O}(n)/\mathrm{O}(n-2)$, then any photon structure on $M$ whose developing map induces an isomorphism from each fiber of $\pi$ to a copy of $\Pho(\R^{2,n}) \subset \Pho(\R^{2,n+1})$ has holonomy $\rho\circ \pi_*$, where $\rho:\Gamma \to \SO_0(2,n+1)$ is a maximal representation.
\end{MonThm}
\begin{MonCoro}
The manifold $\rho(\Gamma) \backslash \Omega_\rho$ in Guichard--Wienhard's Theorem \ref{t:GuicharWienhard} is diffeomorphic to an $\mathrm{O}(n)/\mathrm{O}(n-2)$-bundle over $\Sigma$.
\end{MonCoro}
\begin{rmk}
The proof of Theorem \ref{t:photontructures} in Section \ref{s:Geometrization} gives additional information on the topology of the fiber bundle $M$, which depends on certain topological invariants of the representation $\rho$.
\end{rmk}
Hitchin representations into $\SO_0(2,3)$ are the special class of maximal representations that also have a Guichard--Wienhard domain of discontinuity in $\Ein^{1,2}$. In a manner similar to \cite{guichardwienhardsl4}, this domain can be parametrized by triples of distinct points in $\ProjR{1}$ so that its quotient by $\rho(\Gamma)$ is homeomorphic to the unit tangent bundle to $\Sigma$. Here, we recover this $\Ein^{1,2}$ structure (referred to as a conformally flat Lorentz structure) on the unit tangent bundle to $\Sigma$ in such a way that the fibers are ``geometric'':
\begin{MonThm}\label{t:einsteinstructures}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. Let $T^1\Sigma$ denote the unit tangent bundle to $\Sigma$ and $\pi : T^1\Sigma \to \Sigma$ the bundle projection. If $\rho: \Gamma \to \SO_0(2,3)$ is a Hitchin representation, then there exists a $\Ein^{1,2}$-structure on $T^1\Sigma$ with holonomy $\rho \circ \pi_*$. Moreover, the developing map of this $\Ein^{1,2}$-structure induces an isomorphism from each fiber of $\pi$ to a copy of $\Ein^{1,0} \subset \Ein^{1,2}$.
\end{MonThm}
For the group $\SO_0(2,2),$ Alessandrini and Li \cite{AlessandriniLi} used Higgs bundle techniques to construct anti-de Sitter structures on circle bundles over $\Sigma$, recovering a result of Salein and Gu\'eritaud-Kassel \cite{Salein,GueritaudKassel}.
\subsection*{Length spectrum of maximal representations in rank $2$}
Some Anosov representations of surface groups, such as Hitchin representations into real split Lie groups or maximal representations into Hermitian Lie groups, have the additional property of forming connected components of the whole space of representations. There have been several attempts to propose a unifying characterization of these representations (see \cite{MartoneZhang} and \cite{guichardwienhardpositivity}). Note that quasi-Fuchsian representations into $\PSL(2,\C)$ do not form components; indeed, they can be continuously deformed into representations with non-discrete image.
The property of lying in a connected component consisting entirely of Anosov representations seems to be related to certain geometric controls of the representation ``from below'' such as an upper bound on the entropy or a \emph{collar lemma}. To be more precise, let us introduce the \emph{length spectrum} of a representation.
\begin{defi} \label{d:LengthSpectrumIntro}
Let $\rho$ be a representation of $\Gamma$ into $\SL(n,\R)$, $n\geq 2$. Let $[\Gamma]$ denote the set of conjugacy classes in $\Gamma$. The \emph{length spectrum} of $\rho$ is the function
\[\function{L_\rho}{[\Gamma]}{\R_+}{\gamma}{\frac{1}{2} \log\left|\frac{\lambda_1(\rho(\gamma))}{\lambda_n(\rho(\gamma))}\right |~,}\]
where $\lambda_1(A)$ and $\lambda_n(A)$ denote the complex eigenvalues of $A$ with highest and lowest modulus respectively.
\end{defi}
\begin{rmk}
Since the spectrum of any element of $\SO_0(2,n+1) \subset \SL(n+3,\R)$ is invariant under the involution $\lambda \mapsto \lambda^{-1}$, the above definition simplifies to
\[L_\rho(\gamma) = \log |\lambda_1(\rho(\gamma))|\]
for representations into $\SO_0(2,n+1)$.
\end{rmk}
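This simplification can be verified directly. If $A \in \SO_0(2,n+1)$ and $J$ denotes the diagonal Gram matrix of $\mathbf{q}$, then $A^T J A = J$, so
\[A^{-1} = J^{-1} A^T J~,\]
and $A^{-1}$ is conjugate to $A^T$, hence has the same spectrum as $A$. The spectrum of $A$ is therefore invariant under $\lambda \mapsto \lambda^{-1}$, so that $\lambda_{n+3}(\rho(\gamma)) = \lambda_1(\rho(\gamma))^{-1}$ and
\[L_\rho(\gamma) = \frac{1}{2}\log\left|\frac{\lambda_1(\rho(\gamma))}{\lambda_1(\rho(\gamma))^{-1}}\right| = \log|\lambda_1(\rho(\gamma))|~.\]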
The length spectrum of a representation captures many of its algebraic, geometric and dynamical properties. Several results suggest that the length spectra of Hitchin and maximal representations are somehow always ``bigger'' than that of a Fuchsian representation. The first of these results deals with the ``average behavior'' of the length spectrum.
\begin{defi}
Let $\rho$ be a representation of $\Gamma$ into $\SL(n,\R)$. The \emph{entropy} of $\rho$ is the number
\[h(\rho) = \limsup_{R\to +\infty} \frac{1}{R} \log \sharp \{\gamma \in [\Gamma] \mid L_\rho(\gamma) \leq R\}~.\]
\end{defi}
\begin{theo}[Potrie--Sambarino \cite{PotrieSambarino}]
If $\rho: \Gamma \to \SL(n,\R)$ is a Hitchin representation, then
\[h(\rho) \leq \frac{2}{n-1}~,\]
with equality if and only if $\rho$ is conjugate to $m_{irr} \circ j$, where $j:\Gamma \to \SL(2,\R)$ is a Fuchsian representation and $m_{irr} : \SL(2,\R) \to \SL(n,\R)$ is the irreducible representation.
\end{theo}
Another ``geometric control'' on Hitchin representations is a generalization of the classical \emph{collar lemma} for Fuchsian representations. It roughly says that, if $\gamma$ and $\eta$ are two essentially intersecting curves on $\Sigma,$ then $L_\rho(\gamma)$ and $L_\rho(\eta)$ cannot both be small.
Such a collar lemma was obtained by Lee and Zhang for Hitchin representations into $\SL(n,\R)$ \cite{LeeZhang} and by Burger and Pozzetti \cite{BurgerPozzetti} for maximal representations into $\Sp(2n,\R)$. More precisely, they prove:
\begin{theo}
There exists a constant $C$ such that, for any $\gamma$ and $\eta$ in $[\Gamma]$ represented by essentially intersecting curves on $\Sigma$ and for any Hitchin (resp. maximal) representation $\rho$ of $\Gamma$ into $\SL(n,\R)$ (resp. $\Sp(2n,\R)$), one has
\[\left(e^{L_\rho(\gamma)} -1\right)\cdot \left(e^{L_\rho(\eta)} -1\right) \geq C~.\]
\end{theo}
Motivated by a question of Zhang, the second author proved a stronger statement for Hitchin representations into $\SL(3,\R)$ which implies both results above:
\begin{theo}[Tholozan, \cite{TholozanConvex}]
If $\rho: \Gamma \to \SL(3,\R)$ is a Hitchin representation, then there exists a Fuchsian representation $j: \Gamma \to \SL(2,\R)$ such that
\[L_\rho \geq L_{m_{irr} \circ j}~.\]
\end{theo}
We will prove a similar statement for maximal representations into $\SO_0(2,n+1)$. A maximal representation $\rho: \Gamma \to \SO_0(2,n+1)$ is said to be \textit{in the Fuchsian locus} if $\rho(\Gamma)$ preserves a copy of $\R^{2,1}$ in $\R^{2,n+1}$ (see Definition \ref{d:FuchsianLocus}).
\begin{MonThm}\label{t:domination}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. If $\rho: \Gamma \to \SO_0(2,n+1)$ is a maximal representation $(n\geq 0)$, then either $\rho$ is in the Fuchsian locus, or there exists a Fuchsian representation $j$ and a constant $\lambda >1$ such that
\[L_\rho \geq \lambda L_j~.\]
\end{MonThm}
As a direct consequence of the fact that Fuchsian representations into $\SO_0(2,1)$ have entropy $1$, we obtain the following:
\begin{MonCoro}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. If $\rho: \Gamma \to \SO_0(2,n+1)$ is a maximal representation $(n\geq 0)$, then the entropy $h(\rho)$ satisfies
\[h(\rho) \leq 1\]
with equality if and only if $\rho$ is in the Fuchsian locus.
\end{MonCoro}
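Let us sketch the deduction. If $\rho$ is not in the Fuchsian locus, Theorem \ref{t:domination} provides a Fuchsian representation $j$ and $\lambda > 1$ with $L_\rho \geq \lambda L_j$, so every $\gamma \in [\Gamma]$ with $L_\rho(\gamma) \leq R$ satisfies $L_j(\gamma) \leq R/\lambda$. Hence
\[h(\rho) \leq \limsup_{R\to +\infty} \frac{1}{R} \log \sharp \{\gamma \in [\Gamma] \mid L_j(\gamma) \leq R/\lambda\} = \frac{h(j)}{\lambda} = \frac{1}{\lambda} < 1~,\]
using that Fuchsian representations have entropy $1$.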
As a direct consequence of Theorem \ref{t:domination} and Keen's collar lemma \cite{KeenCollarLemma}, we can also deduce a sharp collar lemma for maximal representations into $\SO_0(2,n+1)$:
\begin{MonCoro}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two and $\rho:\Gamma\to\SO_0(2,n+1)$ be a maximal representation. If $\gamma$ and $\eta$ are two elements in $[\Gamma]$ represented by essentially intersecting curves on $\Sigma$, then
\[\sinh\left(\frac{L_\rho(\gamma)}{2}\right)\cdot \sinh\left(\frac{L_\rho(\eta)}{2}\right) > 1~.\]
\end{MonCoro}
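Indeed, outside the Fuchsian locus, Theorem \ref{t:domination} combined with the monotonicity of $\sinh$ and Keen's lemma applied to $j$ gives
\[\sinh\left(\frac{L_\rho(\gamma)}{2}\right)\cdot \sinh\left(\frac{L_\rho(\eta)}{2}\right) \geq \sinh\left(\frac{\lambda L_j(\gamma)}{2}\right)\cdot \sinh\left(\frac{\lambda L_j(\eta)}{2}\right) > \sinh\left(\frac{L_j(\gamma)}{2}\right)\cdot \sinh\left(\frac{L_j(\eta)}{2}\right) > 1~,\]
while in the Fuchsian locus $L_\rho$ coincides with the length spectrum of a Fuchsian representation and Keen's lemma applies directly.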
\subsection*{Labourie's conjecture for maximal representations in rank $2$}
A drawback of non-abelian Hodge theory is that it parameterizes representations of a surface group in a way that depends on the choice of a complex structure on the surface. In particular, such parameterizations do not have a natural action of the mapping class group of $\Sigma.$ One way to overcome this issue is to find a canonical way to associate to a given surface group representation a complex structure on the surface. To this intent, Labourie \cite{labourieenergy} suggested the following approach.
Let $\Teich(\Sigma)$ denote the Teichm\"uller space of marked complex structures on $\Sigma$. For each reductive representation $\rho$ of $\Gamma$ into a semi-simple Lie group $\mathrm{G},$ one can associate a functional on $\Teich(\Sigma)$ called the \emph{energy functional}.
\begin{defi}
The {\em energy functional} $\E_\rho$ is the function that associates to a complex structure $J$ on $\Sigma$ the energy of the $\rho$-equivariant harmonic map from $(\widetilde{\Sigma}, J)$ to the Riemannian symmetric space $G/K$.
\end{defi}
The existence of such an equivariant harmonic map was proven by Corlette \cite{corlette}.
By a theorem of Sacks-Uhlenbeck and Schoen-Yau \cite{sacksuhlenbeck,schoenyau}, $J$ is a critical point of $\E_\rho$ if and only if the $\rho$-equivariant harmonic map from $(\widetilde{\Sigma}, J)$ to $G/K$ is weakly conformal or, equivalently, if its image is a branched minimal surface in $G/K$.
Labourie showed in \cite{labourieenergy} that if the representation $\rho$ is Anosov, then its energy functional is proper, and thus admits a critical point.
He conjectured that, for Hitchin representations, this critical point is unique.
\begin{conj}[Labourie]
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. If $\rho$ is a Hitchin representation of $\Gamma$ into a real split Lie group $G$, then there is a unique complex structure $J\in\Teich(\Sigma)$ on $\Sigma$ such that the $\rho$-equivariant harmonic map from $(\widetilde{\Sigma}, J)$ to $G/K$ is weakly conformal.
\end{conj}
Labourie's conjecture was proven independently by Loftin \cite{loftin} and Labourie \cite{labouriecubic} for $G= \SL(3,\R)$, and then recently by Labourie \cite{labouriecyclic} for other split real Lie groups of rank $2$ (namely, $\PSp(4,\R)$ and $\mathrm{G}_2$). Using the same strategy as Labourie, this was generalized by Alessandrini and the first author \cite{colliercyclic,PSp4maximalRepsAC} to all maximal representations into $\PSp(4,\R)$. Here we give a new proof of their result and extend it to any Hermitian Lie group of rank $2$.
\begin{MonThm} \label{t:LabourieConjecture}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. If $\rho$ is a maximal representation of $\Gamma$ into a Hermitian Lie group $G$ of rank $2$, then there is a unique complex structure $J\in\Teich(\Sigma)$ such that the $\rho$-equivariant harmonic map from $(\widetilde{\Sigma}, J)$ to $G/K$ is conformal. Moreover, this conformal harmonic map is an embedding.
\end{MonThm}
\begin{rmk} \label{rmk:LieGroupsRank2}
Theorem \ref{t:LabourieConjecture} reduces to a theorem concerning maximal representations into $\SO_0(2,n)$. Indeed, the Hermitian Lie groups of rank $2$ are (up to a cover): $\PU(1,n) \times \PU(1,n)$, $\PSp(4,\R)$, $\PU(2,n)$ and $\SO_0(2,n)$ ($n\geq 5$). By \cite{Toledo}, maximal representations into $\PU(1,n) \times \PU(1,n)$ are conjugate to maximal representations into $\mathrm{P}(\U(1,1) \times \U(n-1)) \times \mathrm{P}(\U(1,1) \times \U(n-1))$. By \cite{burgeriozziwienhard}, maximal representations into $\PU(2,n)$ are all conjugate to maximal representations into $\mathrm{P}(\U(2,2)\times \U(n-2))$. Finally, $\PU(1,1)\times \PU(1,1)$ is isomorphic to $\PSO_0(2,2)$, $\PSp(4,\R)$ is isomorphic to $\SO_0(2,3)$ and $\PU(2,2)$ is isomorphic to $\PSO_0(2,4)$.
\end{rmk}
Note that Labourie's conjecture does not hold for quasi-Fuchsian representations. Indeed, Huang and Wang \cite{HuangWang15} constructed quasi-Fuchsian manifolds containing arbitrarily many minimal surfaces. The conjecture seems to be related to the property of lying in a connected component of Anosov representations.
\subsection*{Maximal surfaces in $\H^{2,n}$ and strategy of the proof}
Let $\H^{2,n}$ be the space of negative definite lines in $\R^{2,n+1}$. The space $\H^{2,n}$ is an open domain in $\ProjR{n+2}$ on which $\SO_0(2,n+1)$ acts transitively, preserving a pseudo-Riemannian metric of signature $(2,n)$ with constant sectional curvature $-1$. The boundary of $\H^{2,n}$ in $\ProjR{n+2}$ is the space $\Ein^{1,n}$. The cornerstone of all the above results will be the following theorem:
\begin{MonThm} \label{t:ExistenceUniquenessMaximalSurface}
Let $\Gamma$ be the fundamental group of a closed oriented surface $\Sigma$ of genus at least two. If $\rho:\Gamma\to\SO_0(2,n+1)$ is a maximal representation, then there exists a unique $\rho$-equivariant maximal space-like embedding of the universal cover of $\Sigma$ into $\H^{2,n}$.
\end{MonThm}
This theorem generalizes a well-known existence result for maximal surfaces in certain anti-de Sitter $3$-manifolds. More precisely, for $n=1$, maximal representations are exactly the holonomies of globally hyperbolic Cauchy-compact anti-de Sitter $3$-manifolds (see \cite{Mess}). In this particular case, our theorem is due to Barbot, B\'eguin and Zeghib \cite{BBZ} (see also \cite{toulisse} for the case with cone singularities).
The existence part of Theorem \ref{t:ExistenceUniquenessMaximalSurface} will be proven in Section \ref{s:maximalsurface} using Higgs bundle theory. More precisely, we will see that, given a maximal representation $\rho:\Gamma\to\SO_0(2,n+1)$, any critical point of the energy functional $\EE_\rho$ gives rise to a $\rho$-equivariant maximal space-like embedding of $\widetilde{\Sigma}$ with the same conformal structure. The uniqueness part of Theorem \ref{t:ExistenceUniquenessMaximalSurface} will then directly imply Theorem \ref{t:LabourieConjecture}. Our proof will use the pseudo-Riemannian geometry of $\H^{2,n}$ in a manner similar to \cite{BonsanteSchlenker}. Note that in a recent paper \cite{DancigerGueritaudKasselPseudoHyperbolic}, Danciger, Gu\'eritaud and Kassel also use the geometry of the pseudo-hyperbolic space to understand special properties of Anosov representations.
We show in Subsection \ref{subsection-Gaussmaps} that the $\rho$-equivariant minimal surface in the Riemannian symmetric space is the Gauss map of the maximal surface in $\H^{2,n}$. In the case $n=1$, this interpretation recovers the equivalence between the existence of a unique maximal surface in globally hyperbolic anti-de Sitter $3$-manifolds and the result of Schoen \cite{schoen} giving the existence of a unique minimal Lagrangian diffeomorphism isotopic to the identity between hyperbolic surfaces (the equivalence was proved in \cite{krasnovschlenker}).
Now, to each negative definite line $x\in\H^{2,n}$, one can associate a copy of $\Pho(\R^{2,n}) \subset \Pho(\R^{2,n+1})$ defined as the set of photons contained in $x^\perp$. Moreover, the copies of $\Pho(\R^{2,n})$ associated to such lines $x$ and $y$ are disjoint if and only if $x$ and $y$ are joined by a space-like geodesic. This remark allows us to construct a $\Pho(\R^{2,n+1})$ structure on a fiber bundle over $\Sigma$ from the data of any $\rho$-equivariant space-like embedding of $\widetilde{\Sigma}$, and as a result, prove Theorem \ref{t:photontructures}.
The $\Ein^{1,2}$-structures associated to Hitchin representations in $\SO_0(2,3)$ from Theorem \ref{t:einsteinstructures} are constructed from the unique maximal space-like surface of Theorem \ref{t:ExistenceUniquenessMaximalSurface} as follows. To each unit tangent vector $v$ of the maximal space-like $\rho$-equivariant embedding of $\widetilde{\Sigma}$ in $\H^{2,2}$, one can associate a point in $\Ein^{1,2} = \partial_\infty \H^{2,2}$ by ``following the geodesic determined by $v$ to infinity''. In this way, one obtains a $\rho$-equivariant map from $T^1 \widetilde{\Sigma}$ to $\Ein^{1,2}$. Using a maximum principle involving the components of the solution to Higgs bundle equations, we will prove that this map is a local diffeomorphism. Note that this is specific to Hitchin representations and is not true for other maximal representations.
Finally, to prove Theorem \ref{t:domination}, we introduce the length spectrum of the maximal $\rho$-equivariant embedding as an intermediate comparison. On the one hand, this length spectrum is larger than the length spectrum of the conformal metric of curvature $-1$ on the maximal surface, and on the other hand, it is less than the length spectrum of the representation $\rho$. This should be compared to \cite{DeroinTholozan} where Deroin and the second author prove that for any representation $\rho$ into the isometry group of $\H^n$, there exists a Fuchsian representation $j$ such that $L_j\geq L_\rho$. Here, both inequalities are reversed because of the pseudo-Riemannian geometry on $\H^{2,n}$.
\smallskip
\noindent\textbf{Acknowledgments.}
When we started this project, Olivier Guichard and Anna Wienhard very kindly shared their working notes on Einstein structures associated to Hitchin representations with us. For this we are very grateful.
The authors gratefully acknowledge support from the NSF grants DMS-1107452, 1107263 and 1107367 “RNMS: GEometric structures And Representation varieties” (the GEAR Network). N. Tholozan's research is partially supported by the ANR project : DynGeo. B. Collier's research is supported by the National Science Foundation under Award No. 1604263.
\section{Maximal representations in $\SO_0(2,n+1)$}
For the rest of the paper, $\Sigma$ will be a closed surface of genus $g\geq2$. We denote by $\Gamma$ its fundamental group and by $\widetilde{\Sigma}$ its universal cover.
Recall that the group $\Gamma$ is \emph{Gromov hyperbolic} and that its \emph{boundary at infinity}, denoted by $\partial_\infty \Gamma$, is homeomorphic to a circle.
\subsection{The Toledo invariant}
Let $\R^{2,n+1}$ denote the space $\R^{n+3}$ endowed with the quadratic form
\[\mathbf{q}: (x_1, \ldots ,x_{n+3}) \mapsto x_1^2 + x_2^2 - x_3^2 - \ldots - x_{n+3}^2~.\]
The Lie group $\SO_0(2,n+1)$ is the identity component of the group of linear transformations of $\R^{2,n+1}$ preserving $\mathbf{q}$. Its subgroup $\SO(2)\times \SO(n+1)$ is a maximal compact subgroup.
To a representation $\rho:\Gamma\to\SO_0(2,n+1)$, one can associate a principal $\SO_0(2,n+1)$-bundle $P_\rho$ whose total space is the quotient of $\widetilde{\Sigma}\times \SO_0(2,n+1)$ by the action of $\Gamma$ by deck transformations:
\[\gamma\cdot (x,y) = (x\cdot\gamma^{-1}, \rho(\gamma)y)~.\]
Since the quotient of $\SO_0(2,n+1)$ by a maximal compact subgroup is contractible, this principal bundle admits a reduction of structure group to a principal $\SO(2)\times \SO(n+1)$-bundle $B_\rho$ which is unique up to gauge equivalence. Finally, the quotient of $B_\rho$ by the right action of $\SO(n+1)$ gives a principal $\SO(2)$-bundle $M_\rho$ on $\Sigma$.
\begin{defi}
The \emph{Toledo invariant} $\tau(\rho)$ of the representation $\rho$ is the Euler class of the $\SO(2)$-bundle $M_\rho$.
\end{defi}
The Toledo invariant is locally constant and invariant by conjugation. It thus defines a map
\[\tau : \xymatrix{\Rep(\Gamma,\SO_0(2,n+1))\ar[r]& \Z},\]
where $\Rep(\Gamma,\SO_0(2,n+1))$ denotes the set of conjugacy classes of representations of $\Gamma$ into $\SO_0(2,n+1)$.
It is proven in \cite{domic} that the Toledo invariant satisfies the \emph{Milnor--Wood inequality}:
\begin{prop}
For each representation $\rho:\Gamma\to\SO_0(2,n+1)$ the Toledo invariant satisfies
\[ |\tau(\rho)| \leq 2g-2~.\]
\end{prop}
This leads to the following definition:
\begin{defi}
A representation $\rho: \Gamma \to \SO_0(2,n+1)$ is \emph{maximal} if $|\tau(\rho)| = 2g-2$.
\end{defi}
\subsection{Maximal representations are Anosov}
The Toledo invariant and the notion of maximal representation can be defined more generally for representations of $\Gamma$ into Hermitian Lie groups. In \cite{burgeriozziwienhard}, Burger, Iozzi and Wienhard study these representations. They prove in particular that for any Hermitian Lie group $G$ \emph{of tube type}, there exist maximal representations of $\Gamma$ into $G$ that have Zariski dense image. This applies in particular to maximal representations in $\SO_0(2,n+1)$.
In that same paper, they exhibit a very nice geometric property of maximal representations that was reinterpreted in \cite{BILW} as the \emph{Anosov property} introduced independently by Labourie in \cite{labouriehyperconvex}. Here we describe one of the main consequences of their work in our setting.
Let $\Ein^{1,n} \subset \ProjR{n+2}$ denote the space of isotropic lines in $\R^{2,n+1}$. The group $\SO_0(2,n+1)$ acts transitively on $\Ein^{1,n}$ and preserves the conformal class of a pseudo-Riemannian metric of signature $(1,n)$.
We will say that three isotropic lines $[e_1], [e_2]$ and $[e_3]$ in $\Ein^{1,n}$ are \emph{in a space-like configuration} if the quadratic form $\mathbf{q}$ restricted to the vector space spanned by $e_1$, $e_2$ and $e_3$ has signature $(2,1)$.
\begin{theo}[Burger--Iozzi--Wienhard, \cite{burgeriozziwienhard}] \label{t:AnosovCurve}
If $\rho: \Gamma \to \SO_0(2,n+1)$ is a maximal representation, then there is a unique $\rho$-equivariant continuous embedding
\[\xi: \partial_\infty \Gamma \to \Ein^{1,n}~.\]
Moreover, the image of $\xi$ is a \emph{space-like curve}, meaning that the images of any three distinct points in $\partial_\infty \Gamma$ are in a space-like configuration.
\end{theo}
The Anosov property implies that maximal representations are \emph{loxodromic}. In particular, the limit curve $\xi$ can be reconstructed from the attracting and repelling eigenvectors of $\rho(\gamma)$ for $\gamma \in \Gamma$. More precisely, we have the following:
\begin{coro} \label{c:AnosovProximal}
For every $\gamma \in \Gamma$, if $\gamma_+$ and $\gamma_-$ denote the attracting and repelling fixed points of $\gamma$ in $\partial_\infty \Gamma$, then there exists $\lambda >1$ such that $\xi(\gamma_+)$ and $\xi(\gamma_-)$ are the eigen-directions of $\rho(\gamma)$ for the eigenvalues $\lambda$ and $\lambda^{-1}$ respectively. Moreover, the $2$-plane spanned by $\xi(\gamma_+)$ and $\xi(\gamma_-)$ is non-degenerate with respect to $\mathbf{q}$, and the restriction of $\rho(\gamma)$ to its perpendicular has spectral radius strictly less than $\lambda$.
\end{coro}
For $n=0$, maximal representations into $\SO_0(2,1)$ correspond to Fuchsian representations \cite{GoldmanTopologicalComponents}. The isometric inclusion
$$\begin{array}{lll}
~\R^{2,1} & \longrightarrow & \R^{2,n+1} \\
(x_1,x_2,x_3) & \longmapsto & (x_1,x_2,x_3,0,\cdots,0)
\end{array}$$
defines an inclusion $j: \SO_0(2,1)\hookrightarrow\SO_0(2,n+1)$ which preserves the Toledo invariant.
Thus, given a Fuchsian representation $\rho_{Fuch}:\Gamma\to\SO_0(2,1)$, the representation $j\circ\rho_{Fuch}:\Gamma\to\SO_0(2,n+1)$ is maximal.
Moreover, if $\alpha:\Gamma\to\mathrm{O}(n)$ is an orthogonal representation and $\det(\alpha):\Gamma\to\mathrm{O}(1)$ the determinant representation, then
\[\rho=\rho_{Fuch}\otimes \det(\alpha)\oplus\alpha:\Gamma\to\SO_0(2,n+1)\]
is still maximal.
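As a quick check, the twist by $\det(\alpha)$ guarantees that $\rho$ takes values in the special orthogonal group: since $\det\alpha(\gamma) = \pm 1$ and $\det\rho_{Fuch}(\gamma) = 1$, one has
\[\det\rho(\gamma) = \det\big(\rho_{Fuch}(\gamma)\otimes\det\alpha(\gamma)\big)\cdot\det\alpha(\gamma) = \big(\det\alpha(\gamma)\big)^{3}\cdot\det\alpha(\gamma) = 1~.\]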
\begin{defi}\label{d:FuchsianLocus}
A maximal representation $\rho:\Gamma\to\SO_0(2,n+1)$ lies in the {\em Fuchsian locus} if it preserves a three-dimensional linear subspace of $\R^{2,n+1}$ to which the restriction of $\mathbf{q}$ has signature $(2,1)$;
equivalently, \[\rho=\rho_{Fuch}\otimes \det(\alpha)\oplus \alpha\]
for $\rho_{Fuch}:\Gamma\to\SO_0(2,1)$ a Fuchsian representation and $\alpha:\Gamma\to\mathrm{O}(n)$.
\end{defi}
\subsection{Harmonic metrics and Higgs bundles}
We now recall the non-abelian Hodge correspondence between representations of $\Gamma$ into $\SO_0(2,n+1)$ and $\SO_0(2,n+1)$-Higgs bundles. This correspondence holds for any real reductive Lie group $G$, but we will restrict the discussion to our group of interest.
When the surface $\Sigma$ is endowed with a complex structure, we will denote the associated Riemann surface by $X.$ The canonical bundle of $X$ will be denoted by $\mathcal{K}$ and the trivial bundle will be denoted by $\Oo.$ We also denote the Riemannian symmetric space of $\SO_0(2,n+1)$ by $\mathfrak{X}$, namely
\[\mathfrak{X}=\SO_0(2,n+1)/(\SO(2)\times\SO(n+1)).\]
We start by recalling the notion of a harmonic metric.
\begin{defi}
Let $\rho:\Gamma\to\SO_0(2,n+1)$ be a representation and let $P_\rho$ be the associated flat $\SO_0(2,n+1)$-bundle. A {\em metric} on $P_\rho$ is a reduction of structure group to $\SO(2)\times \SO(n+1)$. Equivalently, a metric is a $\rho$-equivariant map
\[\textbf{h}_\rho:\xymatrix{\widetilde \Sigma\ar[r]&\mathfrak{X}}.\]
\end{defi}
The differential $d\textbf{h}_\rho$ of a metric $\textbf{h}_\rho$ is a section of $T^*\widetilde\Sigma \otimes \textbf{h}_\rho^* T\mathfrak{X}$.
Given a metric $g$ on $\Sigma$, one can define the norm $\Vert d\textbf{h}_\rho \Vert$ of $d\textbf{h}_\rho$ which, by equivariance of $\textbf{h}_\rho$, is invariant under the action of $\Gamma$ on $\widetilde\Sigma$ by deck transformations.
In particular, $\Vert d\textbf{h}_\rho \Vert$ descends to a function on $\Sigma$. The {\em energy} of $\textbf{h}_\rho$ is the $L^2$-norm of $d\textbf{h}_\rho$, namely:
\[\EE(\textbf{h}_\rho)=\int_\Sigma\Vert d\textbf{h}_\rho\Vert^2dv_g.\]
Note that the energy of $\textbf{h}_\rho$ depends only on the conformal class of the metric $g,$ and so, only on the Riemann surface structure $X$ associated to $g$.
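Conformal invariance is immediate in dimension $2$: under a conformal change $g' = e^{2u}g$, the integrand is unchanged, since
\[\Vert d\textbf{h}_\rho\Vert^2_{g'}\, dv_{g'} = \big(e^{-2u}\Vert d\textbf{h}_\rho\Vert^2_{g}\big)\big(e^{2u}\,dv_{g}\big) = \Vert d\textbf{h}_\rho\Vert^2_{g}\, dv_{g}~.\]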
\begin{defi}
A metric $\textbf{h}_\rho: \widetilde X\rightarrow \mathfrak{X}$ on $P_\rho$ is \textit{harmonic} if it is a critical point of the energy functional.
\end{defi}
The complex structure on $X$ and the Levi-Civita connection on $\mathfrak{X}$ induce a holomorphic structure $\nabla^{0,1}$ on the bundle $\big( T^*X\otimes \textbf{h}_\rho^*T\mathfrak{X}\big)\otimes\C$. The following is classical:
\begin{prop}\label{p-harmonicholomorphic}
A metric $\textbf{h}_\rho:\widetilde X\rightarrow \mathfrak{X}$ is harmonic if and only if the $(1,0)$ part $\partial \textbf{h}_\rho$ of $d\textbf{h}_\rho$ is holomorphic, that is
\[\nabla^{0,1}\partial \textbf{h}_\rho=0.\]
\end{prop}
A representation $\rho:\Gamma\rightarrow \SO_0(2,n+1)$ is {\em completely reducible} if any $\rho(\Gamma)$-invariant subspace of $\R^{n+3}$ has a $\rho(\Gamma)$-invariant complement. For completely reducible representations, we have the following theorem.
\begin{theo}[Corlette \cite{corlette}]\label{t:Corlette}
A representation $\rho:\Gamma\rightarrow \SO_0(2,n+1)$ is completely reducible if and only if, for each Riemann surface structure $X$ on $\Sigma,$ there exists a harmonic metric
$\textbf{h}_\rho:\widetilde X\rightarrow \mathfrak{X}.$ Moreover, a harmonic metric is unique up to the action of the centralizer of $\rho.$
\end{theo}
\begin{rmk}
In \cite{burgeriozziwienhard}, it is shown that all maximal representations are completely reducible and that the centralizer of a maximal representation is compact. Thus, for maximal representations there exists a unique harmonic metric.
\end{rmk}
For a completely reducible representation $\rho$, the energy of the harmonic metric $\textbf{h}_\rho$ defines a function on the Teichm\"uller space $\Teich(\Sigma)$ of $\Sigma$
\begin{equation}
\label{eq:energyfunction}
\EE_\rho:\xymatrix@R=0em{\Teich(\Sigma)\ar[r]&\R^{\geq0}\\X\ar@{|->}[r]&\EE(\textbf{h}_\rho)}.
\end{equation}
The critical points of the energy are determined by the following.
\begin{prop}[Sacks-Uhlenbeck \cite{sacksuhlenbeck}, Schoen-Yau \cite{schoenyau}]\label{p:conformal Branched Minimal Imm}
A harmonic metric $\textbf{h}_\rho$ is a critical point of $\EE_\rho$ if and only if it is weakly conformal, i.e. $\tr(\partial \textbf{h}_\rho \otimes \partial \textbf{h}_\rho)=0.$ This is equivalent to $\textbf{h}_\rho$ being a branched minimal immersion.
\end{prop}
For Anosov representations, Labourie has shown that the energy function \eqref{eq:energyfunction} is smooth and proper, and so, has a critical point. As a corollary we have:
\begin{prop}[Labourie \cite{labourieenergy}]\label{p:Labourie Existence}
For each maximal representation there exists a Riemann surface structure on $\Sigma$ for which the harmonic metric is weakly conformal.
\end{prop}
We now recall the notion of a Higgs bundle on a Riemann surface $X$.
\begin{defi}
An $\SL(n,\C)$-Higgs bundle on $X$ is a pair $(\Ee,\Phi)$ where $\Ee$ is a rank $n$ holomorphic vector bundle with $\Lambda^n\Ee=\Oo$ and $\Phi\in H^0(\End(\Ee)\otimes \mathcal{K})$ is a holomorphic endomorphism of $\Ee$ twisted by $\mathcal{K}$ with $\tr(\Phi)=0.$
\end{defi}
Higgs bundles were originally defined by Hitchin \cite{hitchinselfduality} for the group $\SL(2,\C)$ and generalized by Simpson \cite{simpsonVHS} for any complex semi-simple Lie group.
More generally, Higgs bundles can be defined for real reductive Lie groups. For the group $\SO_0(2,n+1)$ the appropriate vector bundle definition is the following.
\begin{defi}
An $\SO_0(2,n+1)$-Higgs bundle over a Riemann surface $X$ is a tuple $(\Uu,q_\Uu,\Vv,q_\Vv,\eta)$ where
\begin{itemize}
\item $\Uu$ and $\Vv$ are respectively rank $2$ and rank $(n+1)$ holomorphic vector bundles on $X$ with trivial determinant,
\item $q_\Uu$ and $q_\Vv$ are non-degenerate holomorphic sections of $\Sym^2(\Uu^*)$ and $\Sym^2(\Vv^*)$,
\item $\eta$ is a holomorphic section of $\Hom(\Uu,\Vv)\otimes \mathcal{K}.$
\end{itemize}
\end{defi}
The non-degenerate sections $q_\Uu$ and $q_\Vv$ define holomorphic isomorphisms
\[\xymatrix{q_\Uu:\Uu\to\Uu^*&\text{and}&q_\Vv:\Vv\to\Vv^*}.\]
Given an $\SO_0(2,n+1)$-Higgs bundle $(\Uu,q_\Uu,\Vv,q_\Vv,\eta)$, we get an $\SL(n+3,\C)$-Higgs bundle $(\Ee,\Phi)$ by setting $\Ee=\Uu\oplus\Vv$ and
\begin{equation}\label{eq:SL(n+3,C)HiggsBundle}
\Phi=\mtrx{0&\eta^\dagger\\\eta&0}:\Uu\oplus\Vv\longrightarrow (\Uu\oplus\Vv)\otimes \mathcal{K},
\end{equation}
where $\eta^\dagger=q_{\Uu}^{-1}\circ\eta^T\circ q_\Vv\in H^0(\Hom(\Vv,\Uu)\otimes \mathcal{K}).$
Note that
\[\Phi^T\mtrx{q_\Uu&\\&-q_\Vv}+\mtrx{q_{\Uu}&\\&-q_\Vv}\Phi=0.\]
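Indeed, writing $Q=\mtrx{q_\Uu&\\&-q_\Vv}$, the upper right block of $\Phi^TQ+Q\Phi$ is
\[q_\Uu\eta^\dagger-\eta^Tq_\Vv=q_\Uu\circ q_\Uu^{-1}\circ\eta^T\circ q_\Vv-\eta^T\circ q_\Vv=0,\]
and the lower left block is its transpose, using the symmetry of $q_\Uu$ and $q_\Vv$.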
Appropriate notions of poly-stability exist for $\mathrm{G}$-Higgs bundles \cite{HiggsPairsSTABILITY}. However, for our considerations, the following definition will suffice.
\begin{defi}
An $\SL(n,\C)$-Higgs bundle $(\Ee,\Phi)$ is stable if for all sub-bundles $\Ff\subset\Ee$ with $\Phi(\Ff)\subset\Ff\otimes \mathcal{K}$ we have $\deg(\Ff)<0;$ $(\Ee,\Phi)$ is called poly-stable if it is a direct sum of stable $\SL(n_j,\C)$-Higgs bundles.
An $\SO_0(2,n+1)$-Higgs bundle is poly-stable if and only if the $\SL(n+3,\C)$-Higgs bundle \eqref{eq:SL(n+3,C)HiggsBundle} is poly-stable.
\end{defi}
\noindent\textbf{From Higgs bundles to representations.} A Hermitian metric $h$ on a bundle $\Ee$ defines an isomorphism $h:\Ee\to\overline{\Ee^*}.$ Poly-stability is equivalent to the existence of a Hermitian metric solving certain gauge theoretic equations which we refer to as the {\em Higgs bundle equations}.
This was proven by Hitchin \cite{hitchinselfduality} for $\SL(2,\C)$ and Simpson \cite{simpsonVHS} for semi-simple complex Lie groups, see \cite{HiggsPairsSTABILITY} for the statement for real reductive groups. The $\SO_0(2,n+1)$-version we need is the following.
\begin{theo}
\label{t:Hitchin-Simpson}
An $\SO_0(2,n+1)$-Higgs bundle $(\Uu,q_\Uu,\Vv,q_\Vv,\eta)$ is poly-stable if and only if there exist Hermitian metrics $h_\Uu$ and $h_\Vv$ on $\Uu$ and $\Vv$ satisfying $\overline{h_\Vv^{-1}}q_\Vv=\overline{q_\Vv^{-1}}h_\Vv$ and $\overline{h_\Uu^{-1}}q_\Uu=\overline{q_\Uu^{-1}}h_\Uu$ such that
\begin{equation}
\label{eq: SO(2,n+1) Higgs bundle Equations}
\left\{\begin{array}{l}
F_{h_\Uu}+\eta^\dagger\wedge(\eta^\dagger)^{*_h}+\eta^{*_h}\wedge\eta=0 \\
F_{h_\Vv}+\eta\wedge\eta^{*_h}+(\eta^\dagger)^{*_h}\wedge \eta^\dagger=0
\end{array}\right.
\end{equation}
Here $F_{h_\Uu}$ and $F_{h_\Vv}$ denote the curvature of the Chern connections of $h_\Uu$ and $h_\Vv$ and $\eta^{*_h}$ denotes the Hermitian adjoint of $\eta$, i.e. $h_\Vv(u,\eta(v))= h_\Uu(\eta^{*_h}(u),v).$
\end{theo}
If $(h_\Uu,h_\Vv)$ solves the Higgs bundle equations \eqref{eq: SO(2,n+1) Higgs bundle Equations}, then the metric $h=h_\Uu\oplus h_\Vv$ on $\Ee=\Uu\oplus\Vv$ solves the $\SL(n+3,\C)$-Higgs bundle equations
\[F_h+[\Phi,\Phi^{*_h}]=0.\]
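Indeed, since $h$ is block diagonal, we have $F_h=F_{h_\Uu}\oplus F_{h_\Vv}$ and $\Phi^{*_h}=\mtrx{0&\eta^{*_h}\\(\eta^\dagger)^{*_h}&0}$, so that
\[[\Phi,\Phi^{*_h}]=\Phi\wedge\Phi^{*_h}+\Phi^{*_h}\wedge\Phi=\mtrx{\eta^\dagger\wedge(\eta^\dagger)^{*_h}+\eta^{*_h}\wedge\eta&0\\0&\eta\wedge\eta^{*_h}+(\eta^\dagger)^{*_h}\wedge\eta^\dagger},\]
and the $\SL(n+3,\C)$-Higgs bundle equation decomposes into the two equations of \eqref{eq: SO(2,n+1) Higgs bundle Equations}.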
Given a solution $(h_\Uu,h_\Vv)$ to the Higgs bundle equations, the connection
\begin{equation}\label{eq:flat conn of Higgs bundle}
\nabla=\mtrx{\nabla_{h_\Uu}&\\&\nabla_{h_\Vv}}+\mtrx{0&\eta^\dagger\\\eta&0}+\mtrx{0&\eta^{*_h}\\(\eta^\dagger)^{*_h}&0}
\end{equation}
is a {\em flat} connection on $\Ee=\Uu\oplus\Vv$. Moreover, the conjugation
\begin{equation}\label{eq:RealConj}
\overline{q_\Ee^{-1}}\circ h=\mtrx{\overline{q_\Uu^{-1}}\circ h_\Uu&\\&-\overline{q_\Vv^{-1}}\circ h_\Vv}:\Uu\oplus\Vv\to\overline\Uu\oplus\overline\Vv
\end{equation}
is preserved by $\nabla.$ Denote the associated real bundle by $E_\nabla.$ The orthogonal structure $q_\Uu\oplus -q_\Vv$ restricts to a $\nabla$-parallel signature $(2,n+1)$ metric $g_\Uu\oplus g_\Vv$ on $E_\nabla.$ The holonomy of $\nabla$ gives a representation $\rho: \Gamma \to \SO_0(2,n+1)$ which is completely reducible.
\noindent\textbf{From representations to Higgs bundles.} Let $(E_\rho,\nabla,g)$ be the flat rank $(n+3)$ vector bundle with signature $(2,n+1)$ metric $g$ and flat connection $\nabla$ associated to a representation $\rho:\Gamma\to\SO_0(2,n+1)$. A metric on $E_\rho$,
\[\textbf{h}_\rho:\xymatrix{\widetilde \Sigma\ar[r]&\mathfrak{X}}\]
is equivalent to a splitting $E_\rho=U\oplus V$, where $U$ is a rank $2$ orthogonal bundle on which $g_U=g|_U$ is positive definite and $V$ is a rank $(n+1)$ bundle on which $-g_V=-g|_V$ is positive definite.
Moreover, the flat connection $\nabla$ decomposes as
\begin{equation}\label{eq:flatconnection decomp}
\nabla=\mtrx{\nabla_U&\\&\nabla_V}+\mtrx{&\Psi^\dagger\\\Psi&}
\end{equation}
where $\nabla_U$ and $\nabla_V$ are connections on $U$ and $V$ such that $g_U$ and $g_V$ are covariantly constant, $\Psi$ is a one form valued in the bundle $\Hom(U,V)$ and $\Psi^\dagger=g_U^{-1}\Psi^T g_V$.
The one form $(\Psi+\Psi^\dagger)\in\Omega^1(\Sigma,\Hom(U,V)\oplus\Hom(V,U))$ is identified with the differential of the metric $\textbf{h}_\rho.$
If $X$ is a Riemann surface structure on $\Sigma$, then the Hermitian extension $h_U\oplus h_V$ of $g_U\oplus-g_V$ to the complexification of $E_\rho$ defines a Hermitian metric. The complex linear extensions of $\nabla_U,\nabla_V,\Psi$ and $\Psi^\dagger$ all decompose into $(1,0)$ and $(0,1)$ parts, and $\nabla_U^{0,1}$ and $\nabla_V^{0,1}$ define holomorphic structures. In this case, Proposition \ref{p-harmonicholomorphic} reads:
\begin{prop}
A metric $\textbf{h}_\rho:\widetilde X\to \mathfrak{X}$ is harmonic if and only if $\nabla_{U,V}^{0,1}\Psi^{1,0}=0$ (or equivalently $\nabla_{V,U}^{0,1}(\Psi^\dagger)^{1,0}=0$).
\end{prop}
Given a harmonic metric $\textbf{h}_\rho,$ the Hermitian adjoints of $\Psi^{1,0}$ and $\Psi^{0,1}$ are given by $(\Psi^{1,0})^{*}=(\Psi^\dagger)^{0,1}$ and $(\Psi^{0,1})^{*}=(\Psi^{\dagger})^{1,0}$.
With respect to a harmonic metric, the flatness equations $F_\nabla=0$ decompose as
\begin{equation}\label{eq:flatness EQ harmonic metric}
\left\{\begin{array}{l}
F_{\nabla_U}+\Psi^{1,0}\wedge(\Psi^{1,0})^*+((\Psi^{\dagger})^{1,0})^{*}\wedge (\Psi^{\dagger})^{1,0} = 0 \\
F_{\nabla_V}+(\Psi^{\dagger})^{1,0}\wedge((\Psi^{\dagger})^{1,0})^{*}+(\Psi^{1,0})^{*}\wedge\Psi^{1,0}=0 \\
\nabla_{U,V}^{0,1}\Psi^{1,0}=0
\end{array}\right..
\end{equation}
Note that setting $\Psi^{1,0}=\eta,$ the Higgs bundle equations \eqref{eq: SO(2,n+1) Higgs bundle Equations} are the same as the decomposition of the flatness equations \eqref{eq:flatness EQ harmonic metric} with respect to a harmonic metric.
Thus, if $\Uu$ and $\Vv$ are the holomorphic bundles $(U\otimes \C,\nabla_U^{0,1})$ and $(V\otimes \C,\nabla_V^{0,1}),$ then $(\Uu,q_\Uu,\Vv,q_\Vv,\Psi^{1,0})$ is a poly-stable $\SO_0(2,n+1)$-Higgs bundle, where $q_\Uu$ is the $\C$-linear extension of $g_U$ to $U\otimes \C$ (similarly for $q_\Vv$).
\begin{prop}\label{p:Minimal Imm Tr(Phi2)=0}
Let $\rho:\Gamma\to\SO_0(2,n+1)$ be a completely reducible representation and $X$ be a Riemann surface structure on $\Sigma.$ If $(\Uu,q_\Uu,\Vv,q_\Vv,\eta)$ is the Higgs bundle associated to $\rho$, then the harmonic metric $\textbf{h}_\rho$ is a branched minimal immersion if and only if $\tr(\eta\otimes\eta^\dagger) =0.$
\end{prop}
\begin{proof}
The derivative of the harmonic metric is identified with the $1$-form $\Psi+\Psi^\dagger$ from \eqref{eq:flatconnection decomp}. By Proposition \ref{p:conformal Branched Minimal Imm}, $\textbf{h}_\rho$ is a branched minimal immersion if and only if
\[\tr\left(\mtrx{0&(\Psi^{\dagger})^{0,1}\\\Psi^{0,1}&0}^2\right)=0.\]
This is equivalent to $\tr(\eta\otimes\eta^\dagger)=0.$
\end{proof}
\begin{defi}
An $\SO_0(2,n+1)$-Higgs bundle $(\Uu,q_\Uu,\Vv,q_\Vv,\eta)$ will be called \emph{conformal} if $\tr(\eta\otimes\eta^\dagger)=0.$
\end{defi}
\subsection{Maximal Higgs bundle parameterizations}\label{Higgsbundleparametrization}
We now describe the Higgs bundles which give rise to maximal $\SO_0(2,n+1)$-representations.
\begin{prop}
The isomorphism class of an $\SO_0(2,n+1)$-Higgs bundle is determined by the data $(\Ll,\Vv,q_\Vv,\beta,\gamma)$ where $\Ll$ is a holomorphic line bundle on $X$, $\beta\in H^0(\Ll\otimes\Vv\otimes \mathcal{K})$ and $\gamma\in H^0(\Ll^{-1}\otimes \Vv\otimes \mathcal{K}).$ If $(\Uu,q_\Uu,\Vv,q_\Vv,\eta)$ is poly-stable, then the Toledo invariant of the corresponding representation is the degree of $\Ll.$
\end{prop}
\begin{proof}
The group $\SO(2,\C)$ is isomorphic to the group of $2\times2$ matrices $A$ with $\det(A)=1$ such that $A^T\mtrx{0&1\\1&0}A=\mtrx{0&1\\1&0}$. Since the bundle $(\Uu,q_\Uu)$ is the associated bundle of a holomorphic principal $\SO(2,\C)$-bundle, up to isomorphism we have
\[(\Uu,q_\Uu)=\left(\Ll\oplus\Ll^{-1},\mtrx{0&1\\1&0}:\Ll\oplus\Ll^{-1}\to(\Ll\oplus\Ll^{-1})^*\right).\]
With respect to the splitting $\Uu=\Ll\oplus\Ll^{-1}$, the holomorphic section $\eta\in H^0(\Hom(\Uu,\Vv)\otimes \mathcal{K})$ decomposes as $\beta\oplus\gamma$ where $\beta\in H^0(\Ll\otimes \Vv\otimes \mathcal{K})$ and $\gamma\in H^0(\Ll^{-1}\otimes\Vv\otimes \mathcal{K})$.
Since the degree of $\Ll$ is the degree of the $\SO(2)$-bundle whose complexification is $\Uu,$ the Toledo invariant of the associated representation is the degree of $\Ll.$
\end{proof}
\begin{rmk}
The $\SL(n+3,\C)$-Higgs bundle $(\Ee,\Phi)$ associated to $(\Ll,\Vv,q_\Vv,\beta,\gamma)$
is given by $\Ee=\Ll\oplus\Ll^{-1}\oplus\Vv$ and
\begin{equation}\label{eq:betagammaHiggsfield}
\Phi=\mtrx{0&0&\beta^\dagger\\0&0&\gamma^\dagger\\\gamma&\beta&0}:\Ee\to\Ee\otimes \mathcal{K}.
\end{equation}
\end{rmk}
The Milnor-Wood inequality can be seen directly for poly-stable Higgs bundles.
\begin{prop}\label{p:maximal Higgs bundle Param}
If $(\Ll,\Vv,q_\Vv,\beta,\gamma)$ is a poly-stable $\SO_0(2,n+1)$-Higgs bundle, then $\deg(\Ll)\leq 2g-2$. Furthermore, if $\deg(\Ll)=2g-2,$ then
\begin{itemize}
\item $\Vv$ admits a $q_\Vv$-orthogonal decomposition $\Vv=\Ii\oplus\Vv_0$ where $\Vv_0$ is a holomorphic rank $n$ bundle and $\Ii=\Lambda^n\Vv_0$ satisfies $\Ii^2=\Oo.$
\item $\Ll\cong \mathcal{K} \Ii$
\item $\gamma\cong\mtrx{1\\0}:\mathcal{K}\Ii\to \Ii \mathcal{K}\oplus \Vv_0\otimes \mathcal{K}$ and $\beta=\mtrx{q_2\\\beta_0}:\mathcal{K}^{-1}\Ii\to \Ii \mathcal{K}\oplus\Vv_0\otimes \mathcal{K}$ where $q_2\in H^0(\mathcal{K}^2)$ and $\beta_0\in H^0(\mathcal{K}^2\otimes\Ii\otimes\Vv_0).$
\end{itemize}
\end{prop}
\begin{proof}
The poly-stable $\SL(n+3,\C)$ Higgs bundle $(\Ee,\Phi)$ associated to $(\Ll,\Vv,q_\Vv,\beta,\gamma)$
has $\Ee=\Ll\oplus\Ll^{-1}\oplus\Vv$ and $\Phi$ is given by \eqref{eq:betagammaHiggsfield}.
If $\deg(\Ll)>0$, then by poly-stability $\gamma\neq0.$ If the image of $\gamma$ is isotropic, then we have a sequence
\[\xymatrix{&0\ar[r]&\Ll \mathcal{K}^{-1}\ar[r]^\gamma&\ker(\gamma^\dagger)\ar[r]&\Vv\ar[r]^{\gamma^\dagger}&\Ll^{-1}\mathcal{K}\ar[r]&0}.\]
Since $\deg(\ker(\gamma^\dagger))=\deg(\Ll)-(2g-2)$ and $\Ll\oplus \ker(\gamma^\dagger)$ is an invariant sub-bundle, we have
$\deg(\Ll)\leq g-1.$
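Explicitly, applying poly-stability to the invariant sub-bundle $\Ll\oplus\ker(\gamma^\dagger)$ gives
\[\deg(\Ll)+\deg(\ker(\gamma^\dagger))=2\deg(\Ll)-(2g-2)\leq0.\]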
Thus, for $\deg(\Ll)>g-1$ the composition $\gamma^\dagger\circ\gamma$ is a non-zero element of $H^0((\Ll^{-1}\mathcal{K})^2)$, and we conclude $\deg(\Ll)\leq 2g-2.$
If $\deg(\Ll)=2g-2,$ then $(\Ll^{-1}\mathcal{K})^2=\Oo$ and $\gamma$ is nowhere vanishing. Set $\Ii=\Ll \mathcal{K}^{-1}$, then $\Ll=\Ii \mathcal{K}$ and $\Ii$ defines an orthogonal line sub-bundle of $\Vv$.
Taking the $q_\Vv$-orthogonal complement of $\Ii$ gives a holomorphic decomposition $\Vv=\Ii\oplus(\Ii)^\perp$.
Since $\Lambda^{n+1}\Vv=\Oo,$ we conclude $\Vv=\Ii\oplus\Vv_0$ where $\Ii=\Lambda^n\Vv_0.$
Since the image of $\gamma$ is identified with $\Ii,$ we can take $\gamma\cong\mtrx{1\\0}:\mathcal{K}\Ii\to \Ii \mathcal{K}\oplus \Vv_0\otimes \mathcal{K}$.
Finally, the holomorphic section $\beta$ of $\Hom(\Ii \mathcal{K}^{-1},\Ii\oplus\Vv_0)\otimes \mathcal{K}$ decomposes as
\[\beta= q_2\oplus \beta_0\]
where $q_2$ is a holomorphic quadratic differential and $\beta_0\in H^0(\Vv_0\otimes\Ii\otimes \mathcal{K}^2).$
\end{proof}
\begin{rmk}
Higgs bundles with $\deg(\Ll)=2g-2$ will be called maximal Higgs bundles. They are determined by tuples $(\Vv_0,q_{\Vv_0},\beta_0,q_2)$ from Proposition \ref{p:maximal Higgs bundle Param}.
\end{rmk}
\begin{prop}
If $\rho\in\Rep^{max}(\Gamma,\SO_0(2,n+1))$ is a maximal representation, $X$ is a Riemann surface structure on $\Sigma$ and the Higgs bundle corresponding to $\rho$ is defined by the data $(\Vv_0,q_{\Vv_0},\beta_0,q_2),$ then the harmonic metric
is a minimal immersion if and only if the holomorphic quadratic differential $q_2$ vanishes.
\end{prop}
\begin{proof}
By Proposition \ref{p:Minimal Imm Tr(Phi2)=0}, the harmonic metric associated to a poly-stable Higgs bundle $(\Uu,q_\Uu,\Vv,q_\Vv,\eta)$ is a branched minimal immersion if and only if $\tr(\eta\otimes\eta^\dagger)=0.$ For a maximal Higgs bundle determined by $(\Vv_0,q_{\Vv_0},\beta_0,q_2)$
\[\eta=\mtrx{1&q_2\\0&\beta_0}:\Ii \mathcal{K}\oplus \Ii \mathcal{K}^{-1}\to\Ii \mathcal{K}\oplus\Vv_0 \mathcal{K}\ \ \ \ \ \text{and}\ \ \ \ \ \eta^\dagger=\mtrx{ q_2&\beta_0^\dagger\\1&0}:\Ii\oplus\Vv_0\to\Ii \mathcal{K}^2\oplus \Ii.\]
A computation shows $\tr(\eta\otimes\eta^\dagger)=2q_2$, thus, by Proposition \ref{p:Minimal Imm Tr(Phi2)=0}, the harmonic map is a branched minimal immersion if and only if $q_2=0.$
Finally, $\eta+\eta^\dagger$ is nowhere vanishing, hence the branched minimal immersion is branch point free.
\end{proof}
Given a maximal representation $\rho,$ by Proposition \ref{p:Labourie Existence}, we can always find a Riemann surface structure in which the corresponding Higgs bundle is a maximal conformal Higgs bundle. A maximal conformal Higgs bundle is determined by $(\Vv_0,q_{\Vv_0},\beta_0)$:
\[(\Uu,q_\Uu,\Vv,q_\Vv,\eta)=\left(\mathcal{K}\Ii\oplus \mathcal{K}^{-1}\Ii,\ \mtrx{0&1\\1&0},\ \Ii\oplus\Vv_0,\ \mtrx{1&0\\0&q_{\Vv_0}},\ \mtrx{1&0\\0&\beta_0}\right).\]
The associated $\SL(n+3,\C)$-Higgs bundle will be represented schematically by
\[
\xymatrix@R=-.2em{\mathcal{K}\Ii\ar[r]^1&\Ii\ar[r]^1&\mathcal{K}^{-1}\Ii\ar[ddl]^{\beta_0}\\&\oplus&\\&\Vv_0\ar[uul]^{\beta_0^\dagger}&}\]
Such a Higgs bundle is an example of a {\em cyclic} Higgs bundle.
\begin{defi}
An $\SL(n,\C)$-Higgs bundle $(\Ee,\Phi)$ is called \emph{cyclic of order $k$} if there is a holomorphic splitting $\Ee = \Ee_1\oplus \ldots \oplus \Ee_k$ such that $\Phi$ maps $\Ee_i$ into $\Ee_{i+1}\otimes \mathcal{K}$ (for $i < k$) and $\Ee_k$ to $\Ee_1\otimes \mathcal{K}$.
\end{defi}
\begin{prop}[Simpson, \cite{KatzMiddleInvCyclicHiggs}]\label{p:CyclicOrthogonalSplitting}
If the Higgs bundle $(\Ee,\Phi)$ is cyclic of order $k$, then the cyclic splitting $\Ee=\Ee_1\oplus\ldots\oplus\Ee_k$ is orthogonal with respect to the Hermitian metric $h$ which solves the Higgs bundle equations $F_h+[\Phi,\Phi^{*_h}]=0$.
\end{prop}
The symmetries satisfied by solutions of the Higgs bundle equations \eqref{eq: SO(2,n+1) Higgs bundle Equations} and Proposition \ref{p:CyclicOrthogonalSplitting} give a further simplification of the Higgs bundle equations for maximal conformal $\SO_0(2,n+1)$-Higgs bundles.
\begin{prop}\label{p:MaximalConformalHiggsbundleEQ}
For a poly-stable maximal conformal $\SO_0(2,n+1)$-Higgs bundle determined by $(\Vv_0,q_{\Vv_0},\beta_0)$, if $(h_\Uu,h_\Vv)$ solves the Higgs bundle equations \eqref{eq: SO(2,n+1) Higgs bundle Equations}, then
\begin{itemize}
\item $h_\Uu=\mtrx{h_{\Ii \mathcal{K}}&\\&h_{\Ii \mathcal{K}}^{-1}}$ where $h_{\Ii \mathcal{K}}$ is a metric on $\Ii \mathcal{K}$ and $h_{\Ii \mathcal{K}}^{-1}$ is the induced metric on $\Ii \mathcal{K}^{-1}$
\item $h_{\Vv}=\mtrx{h_\Ii&\\&h_{\Vv_0}}$ where $h_\Ii$ is a flat metric on $\Ii$ and $h_{\Vv_0}$ is a metric on $\Vv_0$ satisfying $\overline{h_{\Vv_0}^{-1}}q_{\Vv_0}=\overline{q_{\Vv_0}^{-1}}h_{\Vv_0}$
\end{itemize}
Furthermore, the Higgs bundle equations \eqref{eq: SO(2,n+1) Higgs bundle Equations} simplify as
\begin{equation}
\label{eq:CycliHiggsBundleEQ}
\left\{\begin{array}{l}
F_{h_{\Ii \mathcal{K}}}+\beta_0^\dagger\wedge (\beta_0^\dagger)^{*_h}+1^{*_h}\wedge 1=0 \\
F_{\Vv_0}+\beta_0\wedge\beta_0^{*_h}+(\beta_0^\dagger)^{*_h}\wedge\beta_0^\dagger=0
\end{array}\right.
\end{equation}
\end{prop}
\begin{proof}
The symmetry $\overline{h_\Uu^{-1}}q_\Uu=\overline{q_\Uu^{-1}}h_\Uu$ implies $h_\Uu=\mtrx{h_{\Ii \mathcal{K}}&\\&h_{\Ii \mathcal{K}}^{-1}}$ where $h_{\Ii \mathcal{K}}$ is a metric on $\Ii \mathcal{K}$ and $h_{\Ii \mathcal{K}}^{-1}$ is the induced metric on $\Ii \mathcal{K}^{-1}$.
The splitting of $h_\Vv$ follows from Proposition \ref{p:CyclicOrthogonalSplitting}.
The Higgs bundle equations \eqref{eq: SO(2,n+1) Higgs bundle Equations} with $\eta=\mtrx{1&0\\0&\beta_0}$ and $h_\Uu=h_{\Ii \mathcal{K}}\oplus h_{\Ii \mathcal{K}}^{-1}$ and $h_\Vv=h_\Ii\oplus h_{\Vv_0}$ simplify to
\[ \left\{ \begin{array}{l}
F_{h_{\Ii \mathcal{K}}}+\beta_0^\dagger\wedge (\beta_0^\dagger)^{*_h}+1^{*_h}\wedge 1=0\\
F_{h_{\Ii \mathcal{K}}^{-1}}+1\wedge1^{*_h}+\beta_0^{*_h}\wedge \beta_0=0 \\
F_{h_\Ii}+1\wedge1^{*_h}+1^{*_h}\wedge 1=0 \\
F_{\Vv_0}+\beta_0\wedge\beta_0^{*_h}+(\beta_0^\dagger)^{*_h}\wedge\beta_0^\dagger=0
\end{array}\right.\]
Note that the first two equations are the same and the third equation implies the metric $h_{\Ii}$ is flat.
\end{proof}
For a maximal poly-stable conformal $\SO_0(2,n+1)$-Higgs bundle determined by $(\Vv_0,q_{\Vv_0},\beta_0)$, the associated flat bundle $E_\rho\subset (\Ii \mathcal{K}\oplus \Ii \mathcal{K}^{-1}\oplus \Ii\oplus\Vv_0)$ is the fixed point locus of $\overline{q_\Ee^{-1}}\circ h$ defined by \eqref{eq:RealConj}. Using Proposition \ref{p:MaximalConformalHiggsbundleEQ}, we have
\[\overline{q_\Ee^{-1}}\circ h=\mtrx{&1&&\\1&&&\\&&-1&\\&&&-\overline{q_{\Vv_0}^{-1}}}\mtrx{h_{\Ii \mathcal{K}}&&&\\&h_{\Ii \mathcal{K}}^{-1}&&\\ &&h_\Ii&\\&&&h_{\Vv_0}}\]\[=\mtrx{&h_{\Ii \mathcal{K}}^{-1}&&\\h_{\Ii \mathcal{K}}&&&\\ &&-h_\Ii&\\&&&-\overline{q_{\Vv_0}^{-1}}\circ h_{\Vv_0}}:\Ii \mathcal{K}\oplus \Ii \mathcal{K}^{-1}\oplus \Ii\oplus\Vv_0\longrightarrow \overline{\Ii \mathcal{K}}\oplus \overline{\Ii \mathcal{K}^{-1}}\oplus \overline{\Ii}\oplus\overline{\Vv_0}.\]
Thus the flat bundle $E_\rho=U\oplus V$ of a maximal representation decomposes further. This decomposition will play an essential role in the rest of the paper.
\begin{theo}\label{p:decompositionbundle}
The flat bundle associated to a poly-stable maximal conformal $\SO_0(2,n+1)$-Higgs bundle determined by $(\Vv_0,q_{\Vv_0},\beta_0)$ decomposes as
\[E_\rho=U\oplus\ell\oplus V_0\] where $U\subset\Uu$ is a positive definite rank two sub-bundle,
$\ell\subset\Ii$ is a negative definite line sub-bundle consisting of the fixed points of $-h_{\Ii}:\Ii\to\overline\Ii$ and $V_0\subset \Vv_0$ is a negative definite rank $n$ bundle. In this splitting the flat connection is given by
\[\nabla=\mtrx{\nabla_{h_\Uu}&1+1^{*_h}&\beta_0^\dagger+\beta_0^{*_h}\\1+1^{*_h}&\nabla_{h_\Ii}&0\\\beta_0+(\beta_0^\dagger)^{*_h}&0&\nabla_{h_{\Vv_0}}}.\]
\end{theo}
From now on we will only consider poly-stable maximal $\SO_0(2,n+1)$-Higgs bundles. For notational convenience, we will drop the subscript $0$ and write the decomposition of the flat bundle $E_\rho$ as $E_\rho=U\oplus \ell\oplus V.$
\subsection{Connected components of maximal representations}
Given a maximal $\SO_0(2,n+1)$-Higgs bundle
\begin{equation}\label{eq: SO(2,n+1) Higgs}
\xymatrix@R=0em{\mathcal{K}\Ii\ar[r]^1&\Ii\ar[r]^1&\mathcal{K}^{-1}\Ii\ar[ddl]^{\beta}\\&\oplus&\\&\Vv\ar[uul]^{\beta^\dagger}&},
\end{equation}
the Stiefel-Whitney classes $sw_1\in H^1(\Sigma,\Z/2)$ and $sw_2\in H^2(\Sigma,\Z/2)$ of $\Vv$ define characteristic classes which help distinguish the connected components of maximal Higgs bundles.
Thus, the space of maximal representations decomposes as
\[\Rep^{max}(\Gamma,\SO_0(2,n+1))=\bigsqcup_{\substack{sw_1\in H^1(\Sigma,\Z/2)\\sw_2\in H^2(\Sigma,\Z/2)}} \Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,n+1))\]
where $\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,n+1))$ is the set of maximal representations such that the Stiefel-Whitney classes of the bundle $\Vv$ are $sw_1$ and $sw_2$.
When $n>2,$ these characteristic classes distinguish the connected components of maximal $\SO_0(2,n+1)$-Higgs bundles; in other words, each of the sets $\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,n+1))$ is non-empty and connected \cite{MaxRepsHermSymmSpace}. Thus, for $n>2,$ the space $\Rep^{max}(\Gamma,\SO_0(2,n+1))$ has $2^{2g+1}$ connected components.
\begin{prop}\label{p:Fuchsian Locus n>2}
For $n>2$ each connected component of maximal $\SO_0(2,n+1)$-representations contains a point in the Fuchsian locus from Definition \ref{d:FuchsianLocus}.
\end{prop}
\begin{proof}
Let $\rho_{Fuch}:\Gamma\to\SO_0(2,1)$ be a Fuchsian representation and $\alpha:\Gamma\to\mathrm{O}(n)$ be an orthogonal representation. Consider the maximal $\SO_0(2,n+1)$-representation
\[\rho=\rho_{Fuch}\otimes \det(\alpha)\oplus \alpha\]
in the Fuchsian locus. The associated conformal Higgs bundle is given by
\[\xymatrix@R=0em{\mathcal{K}\Ii\ar[r]^1&\Ii\ar[r]^1&\mathcal{K}^{-1}\Ii\\&\oplus&\\&\Vv&},\]
where $\Vv$ is the flat orthogonal bundle associated to the representation $\alpha.$
\end{proof}
The case of maximal $\SO_0(2,3)$-representations is slightly different. Namely, when the first Stiefel-Whitney class of $\Vv$ vanishes, the structure group of $\Vv$ reduces to $\SO(2).$ In this case, $\Vv$ is isomorphic to $\mathcal{N}\oplus\mathcal{N}^{-1}$ for some line bundle $\mathcal{N}$ with nonnegative degree.
Furthermore, the holomorphic section $\beta$ decomposes as $\beta=(\mu,\nu)\in H^0(\mathcal{N}^{-1}\mathcal{K}^2)\oplus H^0(\mathcal{N}\mathcal{K}^2).$
By stability, if $\deg(\mathcal{N})>0$, then $\mu\neq0$.
Thus, we have a bound $0\leq \deg(\mathcal{N})\leq 4g-4.$
Moreover, if $\deg(\mathcal{N})=4g-4$, then $\mathcal{N}=\mathcal{K}^2$ and the Higgs bundle \eqref{eq: SO(2,n+1) Higgs} lies in the Hitchin component.
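Indeed, a non-zero section $\mu\in H^0(\mathcal{N}^{-1}\mathcal{K}^2)$ forces
\[\deg(\mathcal{N}^{-1}\mathcal{K}^2)=4g-4-\deg(\mathcal{N})\geq0,\]
and in the case of equality $\mu$ is nowhere vanishing, so that $\mathcal{N}^{-1}\mathcal{K}^2=\Oo.$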
\begin{prop}\cite{MaxRepsHermSymmSpace}\label{p:Connected Components Maximal SO(2,3)}
The space of maximal $\SO_0(2,3)$-representations decomposes as
\[\bigsqcup\limits_{sw_1\neq 0,\ sw_2}\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,3))\ \sqcup\bigsqcup_{0\leq d\leq 4g-4}\Rep_{d}^{max}(\Gamma,\SO_0(2,3)).\]
Here the Higgs bundles corresponding to representations in $\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,3))$ are given by \eqref{eq: SO(2,n+1) Higgs} with Stiefel-Whitney classes of $\Vv$ given by $sw_1$ and $sw_2$ and, for representations in $\Rep^{max}_{ d}(\Gamma,\SO_0(2,3)),$ the corresponding Higgs bundles have $\Vv=\mathcal{N}\oplus\mathcal{N}^{-1}$ with $\deg(\mathcal{N})=d.$
Moreover, each of the above spaces is connected.
\end{prop}
\begin{rmk}\label{r: Gothen v nonGothen components}
The components $\Rep_{d}^{max}(\Gamma,\SO_0(2,3))$ are the $\SO_0(2,3)$-versions of maximal $\Sp(4,\R)$-representations discovered by Gothen \cite{gothen}. Hence, we will call the $4g-4$ components
$\bigsqcup\limits_{0<d\leq 4g-4}\Rep^{max}_{d}(\Gamma,\SO_0(2,3))$ \textit{Gothen components}. In particular, Hitchin representations are Gothen representations corresponding to $d=4g-4$. The remaining components
\[\bigsqcup\limits_{sw_1\neq 0,\ sw_2}\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,3))\ \sqcup\ \Rep^{max}_{0}(\Gamma,\SO_0(2,3))\] will be called {\em non-Gothen components}.
\end{rmk}
The Gothen components and the non-Gothen components have important differences.
In particular, the Gothen components are smooth, and all representations in Gothen components that are not Hitchin representations are Zariski dense \cite{bradlow,collierthesis}, while every non-Gothen component contains representations in the Fuchsian locus. Thus we have:
\begin{prop}\label{p:Fuchsian Locus n=2}
The connected component of a maximal $\SO_0(2,3)$-representation $\rho$ contains representations in the Fuchsian locus (see Definition \ref{d:FuchsianLocus}) if and only if $\rho$ is a non-Gothen representation.
\end{prop}
\section{Maximal space-like surfaces in $\H^{2,n}$}\label{s:maximalsurface}
In this section, we look at the action of a maximal representation $\rho: \Gamma \to \SO_0(2,n+1)$ on the pseudo-Riemannian symmetric space $\H^{2,n}$. We show that this action preserves a unique maximal space-like surface, whose Gauss map gives a minimal surface in the Riemannian symmetric space $\mathfrak{X}$ of $\SO_0(2,n+1)$. As a corollary, we prove Labourie's conjecture for maximal $\SO_0(2,n+1)$-representations (Theorem \ref{t:LabourieConjecture}).
\subsection{The space $\H^{2,n}$}
In this subsection, we recall without proof some classical facts about the pseudo-Riemannian symmetric space $\H^{2,n}$.
\begin{defi}
The space $\H^{2,n} \subset \ProjR{n+3}$ is the set of lines in $\R^{2,n+1}$ on which the quadratic form $\mathbf{q}$ is negative. The space $\hat{\H}^{2,n}$ is the set of vectors $u$ in $\R^{2,n+1}$ such that $\mathbf{q}(u) = -1$.
\end{defi}
The natural projection from $\hat{\H}^{2,n}$ to $\H^{2,n}$ is a covering of degree $2$. The restriction of the quadratic form $\mathbf{q}$ induces a pseudo-Riemannian metric on $\H^{2,n}$ of signature $(2,n)$ and sectional curvature $-1$. The group $\SO_0(2,n+1)$ acts transitively on $\H^{2,n}$ preserving this pseudo-Riemannian metric.
\begin{rmk}
The space $\H^{2,1}$ is a Lorentz manifold called the \emph{anti-de Sitter space} of dimension $3$. Some of the results presented in this section generalize known results for $\H^{2,1}$ (see \cite{BonsanteSchlenker}). Note, however, that the Lie group $\SO_0(2,2)$ is isomorphic to a two-to-one cover of $\PSL(2,\R) \times \PSL(2,\R)$, so the case $n=1$ is quite special.
\end{rmk}
\noindent\textbf{Compactification.} The space $\H^{2,n}$ is compactified by the space of isotropic lines in $\R^{2,n+1}$:
\begin{defi}
The Einstein Universe $\Ein^{1,n} \subset \ProjR{n+3}$ is the set of isotropic lines in $\R^{2,n+1}$. The space $\hat{\Ein}^{1,n}$ is the quotient of the space of isotropic vectors in $\R^{2,n+1}$ by the action of $\R_{>0}$ by homotheties.
\end{defi}
The space $\Ein^{1,n}$ carries a natural conformal class of pseudo-Riemannian metrics of signature $(1,n)$ which is invariant under the action of $\SO_0(2,n+1)$. It is thus the local model for conformally flat Lorentz manifolds.
\medskip
\noindent\textbf{Geodesics.} The geodesics of $\H^{2,n}$ are the intersections of $\H^{2,n}$ with projective planes. These geodesics fall into three categories:
\begin{itemize}
\item \emph{space-like geodesics} are projectivizations of a plane of signature $(1,1)$,
\item \emph{light-like geodesics} are projectivizations of a plane of signature $(0,1)$,
\item \emph{time-like geodesics} are projectivizations of a plane of signature $(0,2)$.
\end{itemize}
Let $u$ and $v$ be two vectors in $\R^{2,n+1}$ such that $\mathbf{q}(u) = \mathbf{q}(v) = -1$ and $v \neq \pm u$. Then the projections $[u]$ and $[v]$ of $u$ and $v$ in $\H^{2,n}$ are joined by a unique geodesic, which is the projectivization of the plane spanned by $u$ and $v$. If this geodesic is space-like, then one can define the space-like distance $d_{\H^{2,n}}([u],[v])$ between $[u]$ and $[v]$ as the length of the geodesic segment joining them. Though this function is not an actual distance, it will be useful later on.
\begin{prop} \label{p:Distancespace-likePoints}
The points $[u]$ and $[v]$ are joined by a space-like geodesic if and only if $\left|\mathbf{q}(u,v)\right| >1$. In that case, we have
\[d_{\H^{2,n}}([u],[v])=\cosh^{-1}\left|\mathbf{q}(u,v)\right|~.\]
\end{prop}
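Both statements follow from the Gram matrix of $u$ and $v$: the plane they span has signature $(1,1)$ if and only if
\[\det\mtrx{\mathbf{q}(u)&\mathbf{q}(u,v)\\\mathbf{q}(u,v)&\mathbf{q}(v)}=1-\mathbf{q}(u,v)^2<0,\]
i.e. $\left|\mathbf{q}(u,v)\right|>1$. In that case, writing $v=\pm(\cosh(t)u+\sinh(t)w)$ with $\mathbf{q}(w)=1$ and $\mathbf{q}(u,w)=0$, the arc length parameter $t\geq0$ satisfies $\cosh(t)=\left|\mathbf{q}(u,v)\right|$, which gives the distance formula.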
\noindent\textbf{Warped product structure.} It is sometimes very useful to picture $\hat{\H}^{2,n}$ as a product $\H^2\times \S^n$ endowed with a ``twisted'' metric. To do so, consider an orthogonal splitting $\R^{2,n+1}= \R^{2,0}\oplus \R^{0,n+1}$.
\begin{prop} \label{p:WarpedProduct}
Let $\D$ be the disc of radius $1$ in $\R^2$, and $\S^n$ the sphere of radius $1$ in $\R^{n+1}$.
\begin{itemize}
\item The map
\[\function{F}{\D \times \S^n}{\hat{\H}^{2,n}}{(u,v)}{\left(\frac{2}{1-\norm{u}^2} u, \frac{1+\norm{u}^2}{1-\norm{u}^2} v\right)}\]
is a homeomorphism.
\item We have
\begin{equation} \label{eq:WarpedProductMetric} F^*g_{\H^{2,n}} = \frac{4}{(1-\norm{u}^2)^2}\, g_\D \oplus - \left( \frac{1+ \norm{u}^2}{1-\norm{u}^2}\right)^2 g_{\S^n}~,\end{equation}
where $g_\D$ is the flat metric $d x^2 + d y^2$ and $g_{\S^n}$ is the spherical metric of curvature $1$ on $\S^n$.
\item The map
\[\function{\partial_\infty F}{\partial \D \times \S^n}{\hat{\Ein}^{1,n}}{(u,v)}{(u,v)}\]
is a homeomorphism that extends $F$ continuously.
\end{itemize}
\end{prop}
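For the first point, one checks directly that $F(u,v)$ lies in $\hat{\H}^{2,n}$:
\[\mathbf{q}\big(F(u,v)\big)=\frac{4\norm{u}^2}{(1-\norm{u}^2)^2}-\frac{(1+\norm{u}^2)^2}{(1-\norm{u}^2)^2}=-\frac{(1-\norm{u}^2)^2}{(1-\norm{u}^2)^2}=-1.\]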
\subsection{Extremal surfaces}
Here we recall some basic facts about extremal immersions and refer to \cite{spivak} for more details.
Consider a $2$-dimensional oriented surface $S$ and an $n$-dimensional manifold $(M,g)$ endowed with a signature $(p,q)$ metric, with $p\geq 2$. A \textit{space-like immersion} of $S$ in $M$ is an immersion $\iota: S \hookrightarrow M$ such that the pull-back metric $\iota^*g$ on $S$ is positive definite. Given such an immersion, one gets an orthogonal splitting
$$\iota^*TM = TS \oplus NS,$$
where $NS$ is the orthogonal complement of $TS$ with respect to $\iota^*g$. We denote by $g_T$ and $g_N$ the restrictions of $\iota^*g$ to $TS$ and $NS$ respectively and by $\nabla$ the pull-back of the Levi-Civita connection of $M$.
For $X$ and $Y$ vector fields on $S$ and $\xi$ a section of $NS$, the decomposition of $\nabla$ along $TS$ and $NS$ gives
$$\left\{ \begin{array}{lll}
\nabla_X Y & = & \nabla^T_X Y + \mathrm{II}(X,Y) \\
\nabla_X \xi & = & -B(X,\xi) + \nabla^N_X \xi
\end{array}\right..$$
Here, $\nabla^T$ is the Levi-Civita connection of $(S,g_T)$, $\nabla^N$ is a metric connection on $NS$, $\mathrm{II}\in \Omega^1(S,\Hom(TS,NS))$ is called the \textit{second fundamental form} and $B\in\Omega^1(S,\Hom(NS,TS))$ is called the \textit{shape operator}.
Since $\nabla$ is torsion-free, the second fundamental form is symmetric, namely, $\mathrm{II}\in \Omega^0(\Sym^2(TS)^*\otimes NS)$. Note also that
$$g_N\big(\mathrm{II}(X,Y),\xi\big) = g_T\big(B(X,\xi),Y\big).$$
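This compatibility between $\mathrm{II}$ and $B$ follows from differentiating the relation $g(Y,\xi)=0$ along $X$ and using that $\nabla$ preserves $g$:
$$0=X\cdot g(Y,\xi)=g(\nabla_XY,\xi)+g(Y,\nabla_X\xi)=g_N\big(\mathrm{II}(X,Y),\xi\big)-g_T\big(B(X,\xi),Y\big).$$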
The \textit{mean curvature vector field} of the immersion $\iota: S \hookrightarrow M$ is given by
$$H:=\text{tr}_{g_T} (\mathrm{II})\in \Omega^0(NS).$$
When $\iota(S)$ has co-dimension $1$, a choice of unit normal to the immersion allows $\mathrm{II}$ and $H$ to be interpreted as real valued tensors. The following is classical:
\begin{prop}
The space-like immersion $\iota: S \hookrightarrow M$ is a critical point of the area functional if and only if $H=0$.
\end{prop}
We will call such an immersion an \textit{extremal immersion}. When $(NS,g_N)$ is positive definite, an extremal immersion locally minimizes the area and will be called a \textit{minimal immersion}. On the other hand, when $(NS,g_N)$ is negative definite, an extremal immersion locally maximizes the area and will be called a \textit{maximal immersion}.
\begin{rmk}
When $S$ is endowed with a conformal structure, $\iota$ is a space-like extremal immersion if and only if it is harmonic and conformal \cite{eellssampson}.
\end{rmk}
\subsection{Existence of maximal space-like surfaces}
In this subsection, we prove the existence part of Theorem \ref{t:ExistenceUniquenessMaximalSurface}.
\begin{prop}\label{p:ExistenceMaximalSurface}
Let $\rho:\Gamma \longrightarrow \SO_0(2,n+1)$ be a maximal representation. Then there exists a $\rho$-equivariant maximal space-like immersion $u: \widetilde\Sigma \longrightarrow \H^{2,n}$.
\end{prop}
\begin{proof}
Let $X\in \Teich(\Sigma)$ be a critical point of the energy functional $\EE_\rho : \Teich(\Sigma)\to \R^{\geq0}$; such an $X$ exists by Proposition \ref{p:Labourie Existence}.
By Theorem \ref{p:decompositionbundle}, the flat $\R^{2,n+1}$-bundle $E_\rho$ with holonomy $\rho$ decomposes orthogonally as
$$E_\rho=\ell\oplus U\oplus V,$$
where $\ell$ is a negative definite line sub-bundle, $U$ is positive definite of rank 2 and $V$ is negative definite of rank $n$.
By pulling-back $E_\rho$ to the universal cover $\pi: \widetilde X\to X$, one sees that the negative definite line sub-bundle $\ell$ defines a $\rho$-equivariant map
$$u: \widetilde X \longrightarrow \H^{2,n}.$$
We will prove that $u$ is a conformal harmonic immersion.
Over a local chart $U\subset \widetilde X$, the map $u$ can be lifted to a map into $\hat{\H}^{2,n}\subset \R^{2,n+1}$. The Levi-Civita connection of $\hat{\H}^{2,n}$ is the restriction of the connection $\nabla$ on $\R^{2,n+1}$. Because $\hat{\H}^{2,n}$ is umbilical, $u$ satisfies the harmonic equation of Proposition \ref{p-harmonicholomorphic} if and only if $\nabla_{\overline{\partial}_z}\nabla_{\partial_z} u$ is parallel to $u$.
Let
$$(\mathcal{E},\Phi)=
\xymatrix@R=0em{\mathcal{K}\Ii\ar[r]^1&\Ii\ar[r]^1&\mathcal{K}^{-1}\Ii\ar[ddl]^{\beta}\\&\oplus&\\&\Vv\ar[uul]^{\beta^\dagger}&},$$
be the Higgs bundle associated to $\rho$ as in Subsection \ref{Higgsbundleparametrization} and let $h$ be the Hermitian metric on $\mathcal{E}$ solving the Higgs bundle equations. The map $u$ is locally given by a constant norm section of $\ell\subset \Ii$. Writing
$$\nabla = A + \Phi + \Phi^*,$$
where $A$ is the Chern connection of $(\mathcal{E},h)$, one gets
\begin{eqnarray*}
\nabla_{\overline{\partial}_z}\nabla_{\partial_z} u & = & \left[\big((A^{0,1}+\Phi^*)(\overline{\partial}_z)\big)\circ (A^{1,0} + \Phi)(\partial_z)\right] (u) \\
& = & \left[\big((A^{0,1}+\Phi^*)(\overline{\partial}_z)\big)\circ \Phi(\partial_z)\right] (u) \\
& = & \left[\Phi^*\big(\overline{\partial}_z\big)\circ\Phi(\partial_z)\right](u).
\end{eqnarray*}
For the first equality, we used the fact that the Chern connection is diagonal in the splitting and that $u$ has constant norm, while for the second equality, we used the holomorphicity of $\Phi$.
In particular, $\Phi(\partial_z) u$ is a section of $\Ll^{-1}$. Since the splitting $\mathcal{E}=\mathcal{L}\oplus \mathcal{I}\oplus \mathcal{L}^{-1} \oplus \mathcal{V}$ is orthogonal with respect to the metric $h$, $\Phi^*\big(\overline{\partial}_z\big)$ sends $\mathcal{L}^{-1}$ to $\mathcal{I}$. Thus, $\nabla_{\overline{\partial}_z}\nabla_{\partial_z} u$ is a section of $\mathcal{I}$, and hence parallel to $u$.
Locally, the differential $du$ corresponds to $\nabla u=(\Phi+\Phi^*)u=(1+1^*)u$ which is nowhere vanishing. In particular, $u$ is an immersion.
The Hopf differential of $u$ is locally given by
$$u^* q_{\H}^{2,0} = q_{\H}\big(\nabla_{\partial_z}u,\nabla_{\partial_z}u\big)dz^2,$$
where $q_\H$ is the $\C$-linear extension of the metric $g_\H$ on $\H^{2,n}$. But $\nabla_{\partial_z}u$ is a section of $\mathcal{L}^{-1}$ which is isotropic with respect to the $\C$-bilinear symmetric form $q$ on $\mathcal{E}$. In particular, the Hopf differential is zero.
\end{proof}
\begin{rmk}
In the splitting $E=\ell\oplus U \oplus V$, the bundle $V$ is canonically identified with $NX:=(u^*N\widetilde X)/\rho(\Gamma)$, where $N\widetilde X$ is the normal bundle of the maximal space-like immersion $u: \widetilde X\longrightarrow \H^{2,n}$. In particular, the topology of the (quotient of the) normal bundle to $u: \widetilde X \longrightarrow \H^{2,n}$ characterizes the connected component of $\rho\in\Rep^{max}\big(\Gamma,\SO_0(2,n+1)\big)$.
\end{rmk}
\begin{rmk}\label{r:geometricinterpretationofhiggsbundles}
The component of the Higgs field $\beta\in \Omega^{1,0}\big(X,\Hom(\mathcal{L}^{-1},\mathcal{V})\big)$ is identified with the $(1,0)$-part of the second fundamental form $\mathrm{II}\in \Omega^1\big(X, \Hom(TX,NX)\big) $ of the maximal immersion $u$.
\end{rmk}
\subsection{Gauss maps}\label{subsection-Gaussmaps}
Given a maximal representation $\rho\in \Rep^{max}\big(\Gamma,\SO_0(2,n+1)\big)$, let $u: \widetilde X \rightarrow \H^{2,n}$ be the $\rho$-equivariant maximal space-like immersion associated to a critical point $X\in\Teich(\Sigma)$ of the energy functional.
In this Subsection, we describe different Gauss maps of the maximal surface $u$. In particular, we show that the $\rho$-equivariant minimal surface in the Riemannian symmetric space of $\SO_0(2,n+1)$ associated to the critical point $X$ is a Gauss map of the maximal surface $u$.
The \textit{main Grassmannian} $\GR\big(\R^{2,n+1}\big)$ is defined to be the set of triples $(F_0,F_1,F_2)$ where
\begin{itemize}
\item $F_0\in \H^{2,n}$ is a negative definite line in $\R^{2,n+1}$,
\item $F_1$ is a positive definite oriented $2$-plane in $\R^{2,n+1}$ orthogonal to $F_0$,
\item $F_2= (F_0\oplus F_1)^\bot$.
\end{itemize}
The stabilizer of a triple $(F_0,F_1,F_2)\in \GR\big(\R^{2,n+1}\big)$ is the subgroup
\[\text{H}:=\SO(2)\times \text{S}(\mathrm{O}(1)\times \mathrm{O}(n)).\]
Hence, the main Grassmannian is the reductive homogeneous space
$$\GR\big(\R^{2,n+1}\big)\cong \SO_0(2,n+1)/\text{H}.$$
The natural inclusion $\iota_1: \text{H}\hookrightarrow \text{S}\big(\text{O}(2,1)\times \text{O}(n)\big)$ gives rise to a projection
$$\begin{array}{llll}
p_1 : & \GR\big(\R^{2,n+1}\big) & \longrightarrow & \Gr_{(2,1)}\big(\R^{2,n+1} \big) \\
& (F_0,F_1,F_2) & \longmapsto & F_0\oplus F_1
\end{array} $$
where $\Gr_{(2,1)}\big(\R^{2,n+1} \big)=\SO(2,n+1)/\text{S}(\text{O}(2,1)\times\text{O}(n))$ is the Grassmannian of signature $(2,1)$ linear subspaces of $\R^{2,n+1}$.
Similarly, we have a projection associated to the inclusion $\iota_2: \text{H} \hookrightarrow \text{SO}(2)\times \text{SO}(n+1)$
$$\begin{array}{llll}
p_2 : & \GR\big(\R^{2,n+1}\big) & \longrightarrow & \text{Gr}_{(2,0)}\big(\R^{2,n+1}\big) \\
& (F_0,F_1,F_2) & \longmapsto & F_1
\end{array}$$
where $\Gr_{(2,0)}\big(\R^{2,n+1}\big)$ is the Grassmannian of oriented space-like 2-planes in $\R^{2,n+1}$. The Grassmannian $\Gr_{(2,0)}\big(\R^{2,n+1}\big)$ is isomorphic to the Riemannian symmetric space $\mathfrak{X}$ of $\SO_0(2,n+1).$
For $M,N\in\so(2,n+1)\subset \mathfrak{sl}(n+3,\mathbb{R}),$ the Killing form is given by
$$\langle M,N\rangle = (n+1)\text{tr}(MN).$$
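Note that this is consistent with the general formula for the Killing form of $\so(p,q)\subset \mathfrak{sl}(m,\R)$ with $m=p+q$:
$$\langle M,N\rangle = (m-2)\,\text{tr}(MN),$$
which for $m = n+3$ gives the factor $m-2 = n+1$ above.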
In particular, the Killing form is non-degenerate on the Lie algebra $\mathfrak{h}$ of $\text{H}$. Denote by $\mathfrak{m}$ the orthogonal complement of $\mathfrak{h}$.
The vector space decomposition $\mathfrak{h}\oplus \mathfrak{m}$ of $\frak{so}(2,n+1)$ is $\text{Ad}(\text{H})$-invariant. Hence, the Maurer-Cartan form of $\SO_0(2,n+1),$ $\omega\in\Omega^1\big(\SO_0(2,n+1),\so(2,n+1)\big)$, decomposes as
$$\omega = \omega_{\mathfrak{h}}+\omega_{\mathfrak{m}},$$
where $\omega_\mathfrak{h}\in\Omega^1(\SO_0(2,n+1),\mathfrak{h})$ and $\omega_\mathfrak{m}\in\Omega^1(\SO_0(2,n+1),\mathfrak{m})$.
The $H$-equivariant form $\omega_\mathfrak{m}$ vanishes on vertical directions of the principal $\text{H}$-bundle $\SO_0(2,n+1) \longrightarrow \GR(\R^{2,n+1})$, and so descends to $\omega_\mathfrak{m}\in\Omega^1\big(\GR(\R^{2,n+1}),\text{Ad}_H(\mathfrak{m})\big)$, where $\text{Ad}_H(\mathfrak{m})=\SO_0(2,n+1) \underset{\text{Ad}(\text{H})}{\times} \mathfrak{m}$ is the associated bundle with fiber $\mathfrak{m}$. For each point $x\in\GR(\R^{2,n+1})$, the form $\omega_\mathfrak{m}$ gives an isomorphism $T_x\GR(\R^{2,n+1}) \cong \mathfrak{m}$, and thus defines an identification
$$T\GR(\R^{2,n+1}) \cong \text{Ad}_H(\mathfrak{m}).$$
Finally, since the Killing form is $\text{Ad}(\text{H})$-invariant and the splitting $\mathfrak{h}\oplus\mathfrak{m}$ is orthogonal, the Killing form defines a pseudo-Riemannian metric on $\GR(\R^{2,n+1})$ of signature $(2n+2,n)$.
The same construction applies to the homogeneous spaces $\Gr_{(2,1)}\big(\R^{2,n+1} \big)$ and $\Gr_{(2,0)}\big(\R^{2,n+1} \big)$, where the Killing form induces a metric of signature $(2n,n)$ and $(2n+2,0)$ respectively.
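As a sanity check, the dimensions are consistent with the announced signatures. For the main Grassmannian,
$$\dim \GR\big(\R^{2,n+1}\big) = \dim \SO_0(2,n+1) - \dim \text{H} = \frac{(n+3)(n+2)}{2}-\Big(1+\frac{n(n-1)}{2}\Big) = 3n+2 = (2n+2)+n,$$
and similarly $\dim \Gr_{(2,1)}\big(\R^{2,n+1}\big) = 3n = 2n + n$ and $\dim \Gr_{(2,0)}\big(\R^{2,n+1}\big) = 2n+2$.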
\begin{defi}
Given a space-like immersion $v: S \longrightarrow \H^{2,n}$ of a surface $S$, the \textit{main Gauss map of $v$} is the map
$$ \mathcal{G} : S \longrightarrow \GR\big(\R^{2,n+1}\big)$$
which sends a point $x\in S$ to the triple
$$(F_0(x),F_1(x),F_2(x)):=\big(v(x),dv(T_xS),(v(x)\oplus dv(T_xS))^\bot\big).$$
The \textit{first} and \textit{second Gauss map} are respectively defined to be $G_1= p_1\circ \mathcal{G}$ and $G_2=p_2 \circ \mathcal{G}$.
\end{defi}
We have the following:
\begin{prop}\label{p-gaussmaps}
The main, first and second Gauss maps of a $\rho$-equivariant maximal immersion $u: \widetilde X \longrightarrow \H^{2,n}$ are extremal space-like immersions.
\end{prop}
\begin{proof}
Since the calculations for each of the Gauss maps are similar, we will only prove the result for the main Gauss map.
It is proved in \cite{ishihara} that the three Gauss maps of a maximal immersion are harmonic. Thus, to prove the result we will show the Gauss maps are also conformal.
Recall that, given a signature $(2,n+1)$ scalar product $\textbf{q}$ on $\mathbb{R}^{n+3}$, the Lie algebra of $\SO_0(2,n+1)$ is
$$\so(2,n+1)=\big\{M\in \mathfrak{gl}_{n+3}(\mathbb{R}),~\textbf{q}M^T\textbf{q}=-M\big\}.$$
Writing the matrices in blocks, with
$$\textbf{q}=\left(\begin{array}{lll} -1 & 0 & 0\\ 0 & I_2 & 0 \\
0 & 0 & -I_n \end{array}\right),$$
where $I_k$ is the identity matrix of size $k\times k$, we get that
$$\mathfrak{h}=\left\{
\left(\begin{array}{lll}
0 & 0 & 0 \\
0 & A & 0 \\
0 & 0 & B
\end{array}\right),~A\in\so(2),~B\in\so(n) \right\}, $$
$$\mathfrak{m}=\left\{
\left(\begin{array}{lll}
0 & A & B \\
A^T & 0 & C \\
-B^T & C^T & 0
\end{array}\right),~A\in\mathcal{M}_{1,2}(\mathbb{R}),~B\in\mathcal{M}_{1,n}(\mathbb{R}),~C\in\mathcal{M}_{2,n}(\mathbb{R})\right\}.$$
In particular, if $M= \left(\begin{array}{lll}
0 & A & B \\
A^T & 0 & C \\
-B^T & C^T & 0
\end{array}\right)\in\mathfrak{m}$, then
$$\langle M,M\rangle= 2(n+1)\left(AA^T-BB^T+\text{tr}\left(CC^T\right)\right).$$
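This follows from a block-by-block computation of the trace: since $A$ and $B$ are row vectors, $AA^T$ and $BB^T$ are scalars, and
$$\text{tr}\left(M^2\right) = AA^T - BB^T + \text{tr}\left(A^TA\right) + \text{tr}\left(CC^T\right) - \text{tr}\left(B^TB\right) + \text{tr}\left(C^TC\right) = 2\left(AA^T-BB^T+\text{tr}\left(CC^T\right)\right).$$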
More explicitly, if $p=(F_0,F_1,F_2)\in \GR(\R^{2,n+1})$, then we have an identification
$$T_p\GR(\R^{2,n+1})=\Hom(F_0,F_1)\oplus \Hom(F_0,F_2)\oplus \Hom(F_1,F_2),$$
and if $\varphi=(\varphi_1,\varphi_2,\varphi_3)\in T_p\GR(\R^{2,n+1})$, then the metric $g_{\GR}$ induced by the Killing form is given by
$$g_{\GR}(\varphi,\varphi) = 2(n+1)\left(\varphi_1\varphi_1^\dagger-\varphi_2\varphi_2^\dagger+\text{tr}\left(\varphi_3\varphi_3^\dagger\right)\right),$$
where $\varphi_i^\dagger$ denotes the adjoint of $\varphi_i$, obtained from the dual map $\varphi_i^*$ by using the identifications $F_j \cong F_j^*$ given by the induced scalar products.
Consider now a maximal immersion $u:\widetilde{X}\longrightarrow \H^{2,n}$, and let $\mathcal{G}: \widetilde X \longrightarrow \GR\big(\R^{2,n+1}\big)$ be its associated main Gauss map. Given a point $x\in\widetilde{X}$, we get a canonical identification
$$T_{\mathcal{G}(x)}\GR \cong \Hom\big(F_0(x),F_1(x)\big)\oplus \Hom\big(F_0(x),F_2(x)\big)\oplus \Hom\big(F_1(x),F_2(x)\big).$$
In particular, according to this splitting, we can write the differential as
$$d\mathcal{G}=\lambda+\mu+\nu.$$
Moreover, $\lambda\in \Omega^1\big(\widetilde{X},\mathcal{G}^*\Hom(F_0,F_1)\big)$ corresponds to the differential of $u$, $\mu\in\Omega^1\big(\widetilde{X},\mathcal{G}^*\Hom(F_0,F_2)\big)$ is zero and $\nu \in\Omega^1\big(\widetilde{X},\mathcal{G}^*\Hom(F_1,F_2)\big)$ is identified with the second fundamental form of the immersion.
Denote by $\partial\mathcal{G}$ the $\mathbb{C}$-linear part of $d\mathcal{G}$ and by $q_{\GR}$ the $\C$-linear extension of $g_{\GR}$. Then $\partial\mathcal{G}=\partial u + \beta$, where $\beta$ is the $(1,0)$-part of the second fundamental form, and so is identified with the component of the Higgs field sending $\mathcal{L}^{-1}$ to $\mathcal{V}$ (see Remark \ref{r:geometricinterpretationofhiggsbundles}).
The Hopf differential of $\mathcal{G}$ is thus given by
\begin{eqnarray*}
\text{Hopf}(\mathcal{G}) & = & q_{\GR}(\partial\mathcal{G},\partial\mathcal{G}) \\
& = & 2(n+1)\text{Hopf}(u) - 2(n+1)\text{tr}\big(\beta \beta^\dagger\big) \\
& = & - 2(n+1)\text{tr}\big(\beta \beta^\dagger\big) \\
& = & 0.
\end{eqnarray*}
For the last equation, we used the fact that $\beta^\dagger$ sends $\mathcal{V}$ to $\mathcal{L}$ (see subsection \ref{Higgsbundleparametrization}).
Finally, a similar computation shows that $\mathcal{G}^*g_{\GR}= (n+1)\Vert \Phi\Vert^2$, which never vanishes. In particular, $\mathcal{G}$ is a space-like immersion.
\end{proof}
\subsection{Uniqueness of the maximal surface}
Let $\rho\in\Rep^{max}(\Gamma,\SO_0(2,n+1))$ be a maximal representation. In this subsection, we prove the following theorem:
\begin{theo} \label{t:UniquenessMaximalSurfacePrecise}
Let $S_1$ and $S_2$ be two connected $\rho$-invariant maximal space-like surfaces in $\H^{2,n}$ on which $\rho(\Gamma)$ acts co-compactly. Then $S_1 = S_2$.
\end{theo}
As a corollary, we prove Labourie's conjecture for maximal representations into Hermitian Lie groups of rank $2$.
\begin{coro}\label{c:UniquenessCriticalPoint}
Let $\rho$ be a maximal representation from $\Gamma$ into a Hermitian Lie group of rank $2$. Then the energy functional $\EE_\rho$ on $\Teich(\Sigma)$ has a unique critical point $X$. Moreover, the corresponding minimal immersion $f: \widetilde X \rightarrow \mathfrak{X}$ is an embedding.
\end{coro}
Note that when $n=1$, Theorem \ref{t:UniquenessMaximalSurfacePrecise} was obtained by Barbot, B\'eguin, Zeghib \cite{BBZ} and its corollary was obtained by Schoen \cite{schoen} (see Remark \ref{r:ComparisonBonsanteSchlenker} for details).
\begin{proof}[Proof of Corollary \ref{c:UniquenessCriticalPoint} assuming Theorem \ref{t:UniquenessMaximalSurfacePrecise}]
By \cite{burgeriozziwienhard}, the Zariski closure of $\rho(\Gamma)$ is of tube type; thus, we can assume that $\rho$ takes values in $\SO_0(2,n+1)$.
Let $X_1$ and $X_2$ be two critical points of $\EE_\rho$. Proposition \ref{p:ExistenceMaximalSurface} constructs two $\rho$-equivariant maximal space-like immersions $u_1: \widetilde{X}_1 \to \H^{2,n}$ and $u_2: \widetilde{X}_2 \to \H^{2,n}$. By Theorem \ref{t:UniquenessMaximalSurfacePrecise}, these two immersions have the same image $S$. Moreover, since $S$ is homeomorphic to a disc (see Corollary \ref{c:2LiftsBoundary}), both $u_1$ and $u_2$ are diffeomorphisms onto $S$. The map $u_2\circ u_1^{-1}$ induces a biholomorphism from $X_1$ to $X_2$ that is homotopic to the identity. Hence $X_1 = X_2$ in $\Teich(\Sigma)$.
Finally, by Proposition \ref{p-gaussmaps}, the minimal $\rho$-equivariant immersion $f_1: \widetilde{X}_1 \to \mathfrak{X}=\Gr_{(2,0)}\left (\R^{2,n+1}\right)$ is the second Gauss map of the map $u_1$. Corollary \ref{c:2LiftsBoundary} will show that $u_1$ is an embedding, and Corollary \ref{c:IntersectionSpacelikeSurface} will show that every negative definite linear subspace of $\R^{2,n+1}$ of dimension $n+1$ intersects $u_1(\widetilde{X}_1)$ exactly once. In particular, the second Gauss map of $u_1$ is injective, which concludes the proof of Corollary \ref{c:UniquenessCriticalPoint}.
\end{proof}
In order to prove Theorem \ref{t:UniquenessMaximalSurfacePrecise}, we first need some elementary results about space-like surfaces in $\H^{2,n}$ invariant under the action of a maximal representation.
Fix a connected $\rho$-invariant space-like surface $S$ in $\H^{2,n}$ on which $\rho(\Gamma)$ acts co-compactly. We denote by $\partial_\infty S$ the topological boundary of $S$ in the compactification $\H^{2,n} \cup \Ein^{1,n}$. Let $\hat{S}$ denote the inverse image of $S$ by the projection from $\hat{\H}^{2,n}$ to $\H^{2,n}$.
\begin{prop} \label{p:SLipschitzGraph}
The lift $\hat{S}$ of $S$ has at most two connected components diffeomorphic to discs. Moreover, if we identify $\hat{\H}^{2,n}$ with $\D \times \S^n$ as in Proposition \ref{p:WarpedProduct}, then each of these connected components identifies with the graph of a Lipschitz map from $\D$ to $\S^n$.
\end{prop}
\begin{rmk}
We will see in Corollary \ref{c:2LiftsBoundary} that $\hat{S}$ indeed has two connected components and that $S$ itself is homeomorphic to a disc.
\end{rmk}
\begin{proof}
Denote by $g_{\H^2}$ the metric $\frac{4}{(1-\norm{z}^2)^2} g_\D$ on $\D$, and let $\pi: \hat S \to \D$ be the projection on the first factor.
We have
\[\pi^* g_{\H^2} \geq g_{\H^{2,n}}~,\]
where $g_{\H^{2,n}}$ is the metric induced on $\hat S$. Since $\hat{S}$ is space-like and $\rho(\Gamma)$ acts co-compactly on $\hat{S}$, the metric $g_{\H^{2,n}}$ is a complete Riemannian metric on $\hat{S}$. Therefore, $\pi^* g_{\H^2}$ is also a complete Riemannian metric on $\hat{S}$. It follows that $\pi: \hat{S} \to \D$ is a proper immersion, hence a covering map. Since $\D$ is simply connected, $\pi$ maps each connected component of $\hat{S}$ diffeomorphically onto $\D$; since $\hat{S} \to S$ is a double cover and $S$ is connected, $\hat{S}$ has at most $2$ connected components, each diffeomorphic to a disc.
Let $\hat{S}_0$ be one of the connected components of $\hat{S}$. Since the projection from $\hat{S}_0$ to $\D$ is a diffeomorphism, $\hat{S}_0$ is the graph of a $\mathcal{C}^1$ map $f: \D \to \S^n$. For every $z \in \D$ and every $v\in T_z\D$, we have
\[ \frac{4}{\left(1-\norm{z}^2\right)^2} \norm{v}^2 - \left(\frac{1+\norm{z}^2}{1-\norm{z}^2}\right)^2\norm{\mathrm{d} f_z(v)}^2 >0\]
since $\hat{S}_0$ is space-like. Therefore,
\begin{equation}\label{LipschitzControl}\norm{\mathrm{d} f_z(v)} < \frac{2}{1+ \norm{z}^2}\norm{v} \leq 2\norm{v} \end{equation}
and $f$ is Lipschitz.
\end{proof}
Note that one can choose the identification of $\hat{\H}^{2,n}$ with $\D \times \S^n$ so that $\{0\} \times \S^n$ is the intersection of $\hat{\H}^{2,n}$ with any given negative definite linear subspace of $\R^{2,n+1}$ of dimension $n+1$. One thus obtains the following corollary:
\begin{coro} \label{c:IntersectionSpacelikeSurface}
Any negative definite subspace of $\R^{2,n+1}$ of dimension $n+1$ intersects $S$ exactly once.
\end{coro}
Let $\xi: \partial_\infty \Gamma \to \Ein^{1,n}$ be the $\rho$-equivariant boundary map from Theorem \ref{t:AnosovCurve}.
\begin{lem} \label{l:GammaOrbitsS}
For every $\gamma \in \Gamma$, there exists a point $x\in S$ such that
\[\rho(\gamma)^n \cdot x \tend{n\to +\infty} \xi(\gamma_+)~.\]
\end{lem}
\begin{proof}
Fix $\gamma \in \Gamma$. By Corollary \ref{c:AnosovProximal}, one can find isotropic vectors $e_+$ and $e_- $ in $\mathbb{R}^{2,n+1}$ with $\scal{e_+}{e_-} = 1$ and a $\lambda >1$ such that $\rho(\gamma)\cdot e_+ = \lambda e_+$ and $\rho(\gamma)\cdot e_- = \frac{1}{\lambda} e_-$. Moreover, if $V$ denotes the orthogonal of the vector space spanned by $e_-$ and $e_+$, then the restriction of $\rho(\gamma)$ to $V$ has spectral radius strictly less than $\lambda$.
Let $x$ be a point in $S$ and $\hat{x}$ be a lift of $x$ in $\hat{\H}^{2,n}$. Up to scaling $e_-$ and $e_+$, we can write
\[\hat{x} = \alpha(e_- + e_+) + v~,\]
for some $\alpha \in \R$ and some $v\in V$. We thus have
\[\rho(\gamma)^n \cdot \hat{x} = \lambda^n \alpha e_+ + \lambda^{-n} \alpha e_- + \rho(\gamma)^n v~.\]
Since $\rho(\gamma)_{|V}$ has spectral radius strictly less than $\lambda$, we deduce that $\rho(\gamma)^n \cdot x$ converges (in $\ProjR{n+2}$) to $[e_+] = \xi(\gamma_+)$ unless $\alpha = 0$.
Assume by contradiction that $\rho(\gamma)^n \cdot x$ does not converge to $\xi(\gamma_+)$ for any $x \in S$. In this case, $S$ is included in $\Proj(V)$. However, this is not possible because the intersection of $\H^{2,n}$ with $\Proj(V)$ is a sub-manifold of signature $(1,n-1)$, and hence, cannot contain a space-like surface.
\end{proof}
\begin{coro} \label{p:BoundaryMaximalSurface}
The boundary of $S$ in $\H^{2,n} \cup \Ein^{1,n}$ is the image of $\xi$. We denote it by $\partial_\infty S$.
\end{coro}
\begin{proof}
Let $\hat{S}_0$ be a connected component of $\hat{S}$. By Proposition \ref{p:SLipschitzGraph}, $\hat{S}_0$ is the graph of a Lipschitz map $f: \D \to \S^n$. The map $f$ extends to a continuous map $\partial f: \partial \D \to \S^n$ and the boundary of $\hat{S}_0$ is the graph of $\partial f$ (seen as a subset of $\hat{\Ein}^{1,n}$). In particular, it is a topological circle, and so is its projection to $\Ein^{1,n}$.
Now, by Lemma \ref{l:GammaOrbitsS}, $\partial_\infty S$ contains $\xi(\gamma_+)$ for every $\gamma \in \Gamma$. Since the set $\{\gamma_+, \gamma \in \Gamma\}$ is dense in $\partial_\infty \Gamma$, we deduce that $\partial_\infty S$ contains the image of $\xi$. Since the image of $\xi$ is also a topological circle, we conclude that $\partial_\infty S$ is exactly the image of $\xi$.
\end{proof}
\begin{lem} \label{l:HyperplaneSeparatesS}
Let $x$ be a point in $S$. Then $S \cup \partial_\infty S$ does not intersect $x^\perp$.
\end{lem}
\begin{proof}
Let $\hat{x}$ be a lift of $x$ in $\hat{\H}^{2,n}$ and $\hat{S}_0$ the lift of $S$ containing $\hat{x}$. Since the space $\hat{\H}^{2,n}$ is homogeneous, we can choose an identification of $\hat{\H}^{2,n}$ with $\D \times \S^n$ so that $\hat{x}$ is identified to the point $(0,v_0)$ for some $v_0 \in \S^n$.
Let $f:\overline{\D} \to \S^n$ be such that $\hat{S}_0 \cup \partial_\infty \hat{S}_0$ is the graph of $f$. In particular, we have $f(0) = v_0$. For $z$ a point in $\overline{\D}$, we have
\begin{eqnarray*}
d_{\S^n}(f(z), v_0) & \leq & \int_0^{\norm{z}} \norm{\dt f\big(t \tfrac{z}{\norm{z}}\big)} \mathrm{d} t \\
& < & \int_0^{\norm{z}} \frac{2}{1+t^2} \, \mathrm{d} t \quad \textrm{by \eqref{LipschitzControl}}\\
& = & 2 \arctan(\norm{z}) \leq \frac{\pi}{2}~.
\end{eqnarray*}
Since points orthogonal to $f(z)$ are at a distance $\frac{\pi}{2}$ in $\S^n$, $v_0$ is not orthogonal to $f(z)$, and we conclude that the point $(z, f(z)) \in \hat{\H}^{2,n} \cup \hat{\Ein}^{1,n}$ is not in the orthogonal of $\hat{x}$. Since this is true for any $z\in \overline{\D}$ and since $\hat{S}_0 \cup \partial_\infty \hat{S}_0$ is the graph of $f$, this concludes the proof of the lemma.
\end{proof}
\begin{coro} \label{c:2LiftsBoundary}
The lift of $S \cup \partial_\infty S$ to $\hat{\H}^{2,n} \cup \hat{\Ein}^{1,n}$ has two connected components, and $S$ is homeomorphic to a disc.
\end{coro}
\begin{proof}
The projection from $\hat{S} \cup\partial_\infty \hat{S}$ to $S\cup \partial_\infty S$ is a covering of degree $2$. Let $x$ be a point in $\hat{S}$. Then the function from $\partial_\infty \hat{S}$ to $\{-1,1\}$ associating to $y$ the sign of $\scal{x}{y}$ is well-defined (the scalar product does not vanish, by Lemma \ref{l:HyperplaneSeparatesS}) and continuous. Since $\scal{x}{-y} = - \scal{x}{y}$, this function takes both possible values and $\hat{S} \cup \partial_\infty \hat{S}$ thus has two connected components.
The covering of degree $2$ from $\hat{S}$ to $S$ is thus a trivial covering. Since each connected component of $\hat{S}$ is homeomorphic to a disc, so is $S$.
\end{proof}
\begin{defi}
Let $\partial_\infty \hat{S}_0$ be one connected component of $\partial_\infty \hat{S}$. The convex hull of $\partial_\infty \hat{S}_0$ is the set of vectors $u\in \hat{\H}^{2,n}$ such that any linear form on $\R^{2,n+1}$ which is positive on $\partial_\infty \hat{S}_0$ is positive on $u$. The convex hull of $\partial_\infty S$, denoted $\Conv(\partial_\infty S)$, is the projection to $\H^{2,n}$ of the convex hull of either connected component of $\partial_\infty \hat{S}$.
\end{defi}
\begin{prop} \label{p:MinimalSurfaceinConvexHull}
Assume that $S$ is a maximal surface. Then $S$ is included in the convex hull of $\partial_\infty S$.
\end{prop}
\begin{proof}
Let us choose a connected component $\hat{S}_0$ of $\hat{S}$, and let $\phi$ be a linear form on $\R^{2,n+1}$ which is positive on $\partial_\infty \hat{S}_0$. If $u_0$ is a point in $\hat{S}_0$ and $\dot{u}_0$ a tangent vector to $\hat{S}_0$ at $u_0$, then we have
\[\Hess_{u_0}\phi_{|\hat{S}_0}(\dot{u}_0) = \mathbf{q}(\dot{u}_0)\phi(u_0) + \phi(\mathrm{II}(\dot{u}_0,\dot{u}_0))~,\]
where $\mathrm{II}$ denotes the second fundamental form of $\hat{S}_0$ in $\hat{\H}^{2,n}$. Since $\hat{S}_0$ is a maximal surface, the trace of $\mathrm{II}$ with respect to the metric induced by $\mathbf{q}$ on $\hat{S}_0$ vanishes. We deduce that $\phi$ satisfies the equation
\[\Delta \phi_{|\hat{S}_0} = 2\,\phi_{|\hat{S}_0}~,\]
where $\Delta$ is the Laplace operator of the metric induced by $\mathbf{q}$ on $\hat{S}_0$.
Now, by assumption, $\phi_{|\hat{S}_0}$ is positive in a neighborhood of $\partial_\infty \hat{S}_0$. The classical maximum principle then implies that $\phi$ is positive on $\hat{S}_0$.
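Let us sketch the maximum principle argument, writing the equation above as $\Delta \phi_{|\hat{S}_0} = c\,\phi_{|\hat{S}_0}$ with $c>0$. In the graph model of Proposition \ref{p:SLipschitzGraph}, a neighborhood of $\partial_\infty \hat{S}_0$ corresponds to a neighborhood of $\partial \D$, so $\phi_{|\hat{S}_0}$ is positive outside a compact subset of $\hat{S}_0$. If $\phi_{|\hat{S}_0}$ were non-positive somewhere, it would thus attain a non-positive minimum at an interior point $p$, where
$$0 \leq \Delta \phi_{|\hat{S}_0}(p) = c\,\phi(p) \leq 0,$$
so that $\phi(p)=0$; the strong maximum principle (applied to $-\phi_{|\hat{S}_0}$, which is non-positive, satisfies $\Delta(-\phi)-c(-\phi)=0$ and attains the interior maximum $0$) then forces $\phi_{|\hat{S}_0}\equiv 0$, contradicting the positivity near $\partial_\infty \hat{S}_0$.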
Therefore, $\hat{S}_0$ is included in $\Conv\left(\partial_\infty\hat{S}_0 \right)$ and $S$ is included in $\Conv(\partial_\infty S)$.
\end{proof}
We now turn to the proof of Theorem \ref{t:UniquenessMaximalSurfacePrecise}. Let $S_1$ and $S_2$ be two maximal $\rho$-invariant space-like surfaces in $\H^{2,n}$ on which $\rho$ acts co-compactly. Assume by contradiction that $S_1$ and $S_2$ are distinct. Let us start by lifting $S_1$ and $S_2$ to $\hat{\H}^{2,n}$ so that the two lifts have the same boundary. To simplify notations, we still denote those lifts by $S_1$ and $S_2$. Let $\scal{\cdot}{\cdot}$ denote the symmetric bilinear form associated to the quadratic form $\mathbf{q}$ on $\R^{2,n+1}$.
\begin{lem} \label{l:InfMoreThan0}
For all $(u,v)\in S_1\times S_2$,
\[\scal{u}{v} < 0~.\]
\end{lem}
\begin{proof}
By Lemma \ref{l:HyperplaneSeparatesS}, for any $u \in S_1$, the linear form $\scal{u}{\cdot}$ is negative on $\partial_\infty S_1$. Moreover, since $\partial_\infty S_2 = \partial_\infty S_1$, Proposition \ref{p:MinimalSurfaceinConvexHull} implies that $S_2$ is included in $\Conv\left(\partial_\infty S_1\right)$. Therefore, the linear form $\scal{u}{\cdot}$ is negative on $S_2$.
\end{proof}
\begin{lem} \label{l:InfLessThan1}
If $S_1 \neq S_2$, then there exists $(u, v)\in S_1 \times S_2$ such that
\[\scal{u}{v} > -1~.\]
\end{lem}
\begin{proof}
Assume that $S_1$ is not included in $S_2$. Let $x$ be a point in $S_1$ which is not in $S_2$. Choose an identification of $\hat{\H}^{2,n}$ with $\D \times \S^n$ for which $x$ is identified with $(0,v_1)$ for some $v_1 \in \S^n$. Since $S_2$ is the graph of some function $f:\D \to \S^n$, there exists $v_2 \in \S^n$ such that $y = (0,v_2) \in S_2$. Since $x\not\in S_2$, we have $v_2 \neq v_1$ and therefore
\[\scal{x}{y} = - \scal{v_1}{v_2} > -1~.\]
\end{proof}
\begin{lem}
The function \[\function{B}{S_1 \times S_2}{\R_{<0}}{(u,v)}{\scal{u}{v}}\] achieves its maximum.
\end{lem}
\begin{proof}
Let $(u_n,v_n) \in \left(S_1 \times S_2 \right)^\N$ be a maximizing sequence for $B$. Since $\rho(\Gamma)$ preserves $B$ and acts co-compactly on $S_1$, we can assume that $\left(u_n\right)_{n\in\N}$ is bounded in $S_1$. Up to extracting a sub-sequence, we can assume that $u_n$ converges to $u \in S_1$. By Lemma \ref{l:InfLessThan1}, we know that $B(u_n,v_n) > -1$ for $n$ sufficiently large.
Assume by contradiction that $(v_n)_{n\in\mathbb{N}}$ is unbounded in $S_2$. Up to extracting a sub-sequence, there exists $\epsilon_n\tend{n\to +\infty} 0$ such that $\epsilon_n v_n$ converges to a vector $v \in \partial_\infty S_2$. Since $B(u_n,v_n)$ is bounded, we have \[B(u,v) = \lim_{n\to +\infty} \epsilon_n B(u_n,v_n) = 0~.\]
The vector $v$ is thus in $u^\perp$. Since $\partial_\infty S_1 = \partial_\infty S_2$, this contradicts Lemma \ref{l:HyperplaneSeparatesS}.
\end{proof}
We now have all the tools needed to apply the maximum principle to $B$ and prove Theorem \ref{t:UniquenessMaximalSurfacePrecise}.
\begin{proof}[Proof of Theorem \ref{t:UniquenessMaximalSurfacePrecise}]
Let $(u_0,v_0) \in S_1\times S_2$ be a point where $B$ achieves its maximum. By Lemmas \ref{l:InfMoreThan0} and \ref{l:InfLessThan1}, we have
\[-1 < B(u_0,v_0) < 0~.\]
For $\dot{u}_0 \in T_{u_0}S_1$ and $\dot{v}_0 \in T_{v_0}S_2$, let $(u(t))_{t\in(-\epsilon, \epsilon)}$ and $(v(t))_{t\in(-\epsilon, \epsilon)}$ be geodesic paths on $S_1$ and $S_2$ respectively, satisfying $u(0) = u_0$, $u'(0) = \dot{u}_0$ and $v(0) = v_0$, $v'(0) = \dot{v}_0$.
Since $B(u(t),v_0)$ is maximal at $t=0$, we have $\scal{\dot{u}_0}{v_0} = 0$. Since $\mathbf{q}(u(t)) = -1$ for all $t$, we also have $\scal{\dot{u}_0}{u_0} = 0$. Similarly, we have
$\scal{\dot{v}_0}{u_0} = \scal{\dot{v}_0}{v_0} = 0$. We thus obtain that $T_{u_0} S_1$ and $T_{v_0} S_2$ are both orthogonal to $u_0$ and $v_0$.
The second derivative of $B(u(t),v(t))$ at $t=0$ is given by
\begin{equation} \label{eq:SecondVariationScalarProduct}\ddt_{|t=0} B(u(t),v(t)) = \xymatrix@=.2em{2 \scal{\dot{u}_0}{\dot{v}_0} + \scal{\mathrm{II}_1(\dot{u}_0,\dot{u}_0)}{v_0} + \scal{\mathrm{II}_2(\dot{v}_0,\dot{v}_0)}{u_0}\\ +\ \mathbf{q}(\dot{u}_0) \scal{u_0}{v_0} + \mathbf{q}(\dot{v}_0) \scal{u_0}{v_0}~,}\end{equation}
where $\mathrm{II}_1: T_{u_0}S_1 \times T_{u_0} S_1 \to u_0^\perp$ and $\mathrm{II}_2: T_{v_0}S_2 \times T_{v_0} S_2 \to v_0^\perp$ denote respectively the second fundamental forms of $S_1$ and $S_2$ in $\hat{\H}^{2,n}$. Our goal is to find $\dot{u}_0$ and $\dot{v}_0$ such that this second derivative is positive.
Since $S_1$ is a maximal surface in $\hat{\H}^{2,n}$, the quadratic form $\beta_1:w \mapsto \scal{\mathrm{II}_1(w,w)}{v_0}$ on $\left(T_{u_0}(S_1),\mathbf{q}\right)$ has two opposite eigenvalues $\lambda$ and $-\lambda$. Similarly, the quadratic form $\beta_2: w \mapsto \scal{\mathrm{II}_2(w,w)}{u_0}$ on $\left(T_{v_0}(S_2),\mathbf{q}\right)$ has two opposite eigenvalues $\mu$ and $-\mu$.
Up to switching $S_1$ and $S_2$, we may assume that $\lambda \geq \mu\geq0$. We now choose $\dot{u}_0$ and $\dot{v}_0$ such that
\[\xymatrix{\mathbf{q}(\dot{u}_0) = 1&\text{and}&\dot{v}_0 =\dfrac{p(\dot{u}_0)}{\sqrt{\mathbf{q}(p(\dot{u}_0))}}}\] where $\beta_1(\dot{u}_0) = \lambda$ and $p:\{u_0,v_0\}^\perp \to T_{v_0}S_2$ denotes the orthogonal projection.
Since $\mathbf{q}(u_0) = \mathbf{q}(v_0) = -1$ and $|\scal{u_0}{v_0}| < 1$, the restriction of $\mathbf{q}$ to $\Vect(u_0,v_0)$ is negative definite. The restriction of $\mathbf{q}$ to $\Vect(u_0,v_0)^\perp$ thus has signature $(2,n-1)$. Since $T_{v_0}S_2$ is a space-like plane in $\Vect(u_0,v_0)^\perp$, we can write $\dot{u}_0 = p(\dot{u}_0) + w$ where $\mathbf{q}(w) \leq 0$. We thus have
\[\mathbf{q}(p(\dot{u}_0)) = \mathbf{q}(\dot{u}_0) - \mathbf{q}(w) \geq \mathbf{q}(\dot{u}_0) = 1~,\]
and therefore
\[\scal{\dot{u}_0}{\dot{v}_0} = \sqrt{\mathbf{q}(p(\dot{u}_0))} \geq 1~.\]
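In detail, writing $\dot{u}_0 = p(\dot{u}_0) + w$ with $w$ orthogonal to $T_{v_0}S_2$, we get
$$\scal{\dot{u}_0}{\dot{v}_0} = \frac{\scal{p(\dot{u}_0)+w}{p(\dot{u}_0)}}{\sqrt{\mathbf{q}(p(\dot{u}_0))}} = \frac{\mathbf{q}(p(\dot{u}_0))}{\sqrt{\mathbf{q}(p(\dot{u}_0))}} = \sqrt{\mathbf{q}(p(\dot{u}_0))}~.$$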
Let us now get back to Equation \eqref{eq:SecondVariationScalarProduct}. With our choices of $\dot{u}_0$ and $\dot{v}_0$, we have $\beta_1(\dot{u}_0) = \lambda$ and $\beta_2(\dot{v}_0) \geq -\mu \geq -\lambda$. Since $\scal{u_0}{v_0} = B(u_0,v_0) >-1$, we have
\begin{eqnarray*}
\ddt_{|t=0} B(u(t),v(t)) & = & 2 \scal{\dot{u}_0}{\dot{v}_0} + 2 \scal{u_0}{v_0} + \beta_1(\dot{u}_0) + \beta_2(\dot{v}_0) \\
&\geq & 2 \scal{u_0}{v_0} + 2 \\
& > & 0~.
\end{eqnarray*}
This contradicts the maximality of $B$ at $(u_0,v_0)$.
\end{proof}
\begin{rmk}[Comparison with the work of Labourie and Bonsante--Schlenker] \label{r:ComparisonBonsanteSchlenker}
In the case of $\SO_0(2,2)$, Corollary \ref{c:UniquenessCriticalPoint} was proven directly by Schoen \cite{schoen} (see also Labourie \cite{labourieminimallagrangian}). This case is quite special because $\SO_0(2,2)$ is a degree $2$ cover of $\PSL(2,\R) \times \PSL(2,\R)$ and $\SO_0(2,2)/\mathrm{S}(\mathrm{O}(2) \times \mathrm{O}(2))$ identifies with $\H^2\times \H^2$.
Krasnov -- Schlenker \cite{krasnovschlenker} and Bonsante -- Schlenker \cite{BonsanteSchlenker} later clarified the link between maximal surfaces in $\H^{2,1}$ and minimal surfaces in $\H^2 \times \H^2$. In \cite{BonsanteSchlenker}, they gave an intrinsic proof of the uniqueness of a maximal surface in $\H^{2,1}$ in a more general setting. In their proof, they maximize the time-like distance between a point in $S_1$ and a point in $S_2$ and derive a contradiction from a maximum principle. This approach requires an estimate on the curvature of the maximal surface. Our strategy above is inspired by their proof, except that we apply the maximum principle to the scalar product instead of the space-like distance, which does not require any curvature estimate and spares us some extra technical difficulties.
\end{rmk}
\subsection{Length spectrum of maximal representations}
In this section, we exploit the pseudo-Riemannian geometry of $\H^{2,n}$ and the existence of a $\rho$-equivariant maximal space-like embedding of $\widetilde{\Sigma}$ to obtain a comparison of the length spectrum of $\rho$ with that of a Fuchsian representation.
In our setting, we define the length spectrum of a representation $\rho$ as follows.
\begin{defi}
Let $\rho$ be a representation of $\Gamma$ into $\SO_0(2,n+1)$. The \emph{length spectrum} of $\rho$ is the function $L_\rho: \Gamma \to \R_+$ that associates to an element $\gamma \in \Gamma$ the logarithm of the spectral radius of $\rho(\gamma)$ (seen as a square matrix of size $n+3$).
\end{defi}
\begin{rmk}
Since, for $A \in \SO_0(2,n+1)$, $A$ and $A^{-1}$ have the same spectral radius, this definition coincides with Definition \ref{d:LengthSpectrumIntro}.
\end{rmk}
\begin{theo} \label{t:LengthSpectrumComparison}
If $\rho: \Gamma \to \SO_0(2,n+1)$ is a maximal representation, then either $\rho$ is in the Fuchsian locus (see Definition \ref{d:FuchsianLocus}), or there exists a Fuchsian representation $j: \Gamma \to \SO_0(2,1)$ and $\lambda >1$ such that
\[L_\rho \geq \lambda L_j~.\]
\end{theo}
\begin{rmk}
The representation $\rho$ is in the Fuchsian locus if and only if it stabilizes a totally geodesic space-like copy of $\H^2$ in $\H^{2,n}$. The induced action of $\rho$ on $\H^2$ gives a Fuchsian representation $j$ such that $L_j= L_\rho$.
\end{rmk}
\begin{rmk}
Let $m_{irr}$ denote the irreducible representation of $\SO_0(2,1)$ into $\PSL(n,\R)$. For a Hitchin representation $\rho: \Gamma \to \PSL(n,\R)$, one could hope to find a Fuchsian representation $j:\Gamma \to \SO_0(2,1)$ such that
\[L_\rho \geq L_{m_{irr} \circ j} = \frac{n-1}{2}L_j~.\]
However, this statement fails to be true for $n\geq 4$ (see \cite[Section 3.3]{LeeZhang}). In particular, it is not true for Hitchin representations into $\SO_0(2,3)$ for which, nonetheless, Theorem \ref{t:LengthSpectrumComparison} gives a weaker result.
\end{rmk}
In order to prove Theorem \ref{t:LengthSpectrumComparison}, let us fix a maximal representation $\rho: \Gamma \to \SO_0(2,n+1)$ and let $u: \widetilde{\Sigma} \to \H^{2,n}$ be a $\rho$-equivariant maximal space-like embedding. The pseudo-Riemannian metric on $\H^{2,n}$ induces a Riemannian metric $g_u$ on $\Sigma$ by restriction. By Poincar\'e's Uniformization Theorem, the metric $g_u$ is conformal to a unique metric $g_P$ of constant curvature $-1$.
\begin{lem} \label{l:ComparisonInducedMetric}
Either $\rho $ is in the Fuchsian locus, or there exists $\lambda >1$ such that $g_u \geq \lambda g_P$.
\end{lem}
\begin{proof}
Let $\kappa(g_u)$ denote the Gauss curvature of $g_u$. Recall that $\kappa(g_u)$ can be computed from the second fundamental form by the formula:
\[\kappa(g_u)_x = -1 - \sum_{i=1}^n \det_{g_u} \scal{\mathrm{II}_{u(x)}(\cdot)}{e_i}\]
where $(e_i)_{1\leq i \leq n}$ is an orthonormal basis of the orthogonal of $T_{u(x)} u(\widetilde\Sigma)$. (Note that the minus sign in front of the sum comes from the fact that the metric of $\H^{2,n}$ is negative definite on this orthogonal.)
Since $u(\widetilde\Sigma)$ is maximal, the quadratic form $\scal{\mathrm{II}_{u(x)}(\cdot)}{e_i}$ has trace $0$ with respect to $g_u$ and thus $\det_{g_u} \scal{\mathrm{II}_{u(x)}(\cdot)}{e_i} \leq 0$, with equality if and only if $\mathrm{II}_{u(x)} = 0$. Therefore, $\kappa(g_u) \geq -1$, and if $\kappa(g_u) = -1$ everywhere, then $u(\widetilde\Sigma)$ is totally geodesic.
The Lemma now follows from the classical \emph{Ahlfors--Schwarz--Pick lemma} (see for instance \cite{Wolpert82}).
\end{proof}
Let $g$ be a Riemannian metric on $\Sigma$ and denote by $d_g$ the associated distance on $\widetilde{\Sigma}$. We define the length spectrum of $g$ as the map
\[\function{L_g}{\Gamma}{\mathbb{R}_+}{\gamma}{\lim_{n\to +\infty} \frac{1}{n}\ d_g(x,\gamma^n \cdot x)}~,\]
where $x$ is any point in $\widetilde{\Sigma}$.
From now on, we assume that $\rho$ does not preserve a copy of $\H^2$. It follows from Lemma \ref{l:ComparisonInducedMetric} that $L_{g_u} \geq \lambda L_{g_P}$ for some $\lambda >1$. Let $j$ be the Fuchsian representation uniformizing $g_P$, i.e. such that there exists a $j$-equivariant isometry from $(\widetilde{\Sigma}, g_P)$ to $\H^2$. We then have
\[\lambda L_{j} = \lambda L_{g_P} \leq L_{g_u}~.\]
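The first equality $L_j = L_{g_P}$ can be checked directly, using the standard fact that a hyperbolic element of $\SO_0(2,1)$ translating along its axis in $\H^2$ by a length $\ell$ has eigenvalues $e^{\ell}$, $1$, $e^{-\ell}$ on $\R^{2,1}$:

```latex
\[
L_j(\gamma) ~=~ \log\big(e^{\ell_{g_P}(\gamma)}\big) ~=~ \ell_{g_P}(\gamma) ~=~ L_{g_P}(\gamma)~,
\]
```

where $\ell_{g_P}(\gamma)$ (a notation introduced only for this verification) denotes the length of the closed $g_P$-geodesic in the free homotopy class of $\gamma$.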
In order to prove Theorem \ref{t:LengthSpectrumComparison}, it is thus enough to show the following:
\begin{lem} \label{l:ComparisonInducedLengthSpectrum}
We have
\[L_\rho \geq L_{g_u}~.\]
\end{lem}
In order to prove this lemma, we need another characterization of $L_\rho$. Recall that, if $x$ and $y$ are joined by a space-like geodesic, $d_{\H^{2,n}}(x,y)$ denotes the length of the space-like geodesic segment between $x$ and $y$; we set $d_{\H^{2,n}}(x,y)=0$ otherwise.
\begin{prop} \label{p:PseudoRiemannianLength}
For any $\gamma \in \Gamma$ and any $x \in u(\widetilde{\Sigma})$, we have
\[L_\rho(\gamma) = \lim_{n\to + \infty} \frac{1}{n} d_{\H^{2,n}}(x, \rho(\gamma)^n \cdot x)~.\]
\end{prop}
\begin{proof}
By Corollary \ref{c:AnosovProximal}, one can find two isotropic vectors $e_+$ and $e_- \in \mathbb{R}^{2,n+1}$ with $\scal{e_+}{e_-} = 1$ such that $\rho(\gamma)\cdot e_+ = e^{L_\rho(\gamma)} e_+$ and $\rho(\gamma)\cdot e_- = e^{- L_\rho(\gamma)} e_-$. Moreover, if $V$ denotes the orthogonal of the vector space spanned by $e_-$ and $e_+$, then the spectral radius of the restriction of $\rho(\gamma)$ to $V$ is strictly less than $e^{L_\rho(\gamma)}$.
Let $v\in \mathbb{R}^{2,n+1}$ be a vector of norm $-1$ whose projection $[v]$ to $\H^{2,n}$ lies in $u(\widetilde \Sigma)$. Up to multiplying $v$ by $-1$ and scaling $e_-$ and $e_+$, we can write
\[v = \alpha(e_- + e_+) + w~,\]
with $w\in V$. By Proposition \ref{p:BoundaryMaximalSurface}, we have
\[\rho(\gamma)^n \cdot [v] \tend{n\to +\infty} [e_+]~.\]
Hence $\alpha \neq 0$.
We have \[\frac{1}{n} d_{\H^{2,n}}([v], \rho(\gamma)^n\cdot [v])~ =~ \frac{1}{n} \cosh^{-1} \left(\scal{v}{\rho(\gamma)^n \cdot v} \right)~.\] The right side of the equation is given by
\[\frac{1}{n} \cosh^{-1} \left(\scal{\alpha e_- + \alpha e_+ + w}{\alpha e^{-nL_\rho(\gamma)} e_- + \alpha e^{n L_\rho(\gamma)} e_+ + \rho(\gamma)^n \cdot w} \right),\]
and thus,
\[\frac{1}{n} d_{\H^{2,n}}([v], \rho(\gamma)^n\cdot [v])~=~\frac{1}{n} \cosh^{-1} \left(\alpha^2 \big(e^{nL_\rho(\gamma)} + e^{-nL_\rho(\gamma)}\big) + \scal{w}{\rho(\gamma)^n \cdot w}\right)~.\]
Since the spectral radius of $\rho(\gamma)$ restricted to $V$ is strictly less than $e^{L_\rho(\gamma)}$, the term $\scal{w}{\rho(\gamma)^n \cdot w}$ is negligible compared to $e^{nL_\rho(\gamma)}$ and we obtain
\[\frac{1}{n} d_{\H^{2,n}}([v], \rho(\gamma)^n\cdot [v]) \tend{n\to +\infty} L_\rho(\gamma).\]
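The last limit follows from the standard asymptotics $\cosh^{-1}(t) = \log\big(t+\sqrt{t^2-1}\big) = \log(2t) + o(1)$ as $t\to +\infty$. Indeed, since $\alpha \neq 0$, the argument of $\cosh^{-1}$ above is of the form $\alpha^2 e^{nL_\rho(\gamma)}\big(1+o(1)\big)$, hence

```latex
\[
\frac{1}{n}\cosh^{-1}\!\Big(\alpha^2 e^{nL_\rho(\gamma)}\big(1+o(1)\big)\Big)
~=~ \frac{1}{n}\Big(nL_\rho(\gamma) + \log\big(2\alpha^2\big) + o(1)\Big)
~\tend{n\to+\infty}~ L_\rho(\gamma)~.
\]
```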
\end{proof}
In order to conclude the proof of Lemma \ref{l:ComparisonInducedLengthSpectrum}, it suffices to prove the following:
\begin{prop} \label{p:ComparisonAmbientMetric}
If $x$ and $y \in u(\widetilde{\Sigma})$ are joined by a space-like geodesic segment, then we have
$d_u(x,y) \leq d_{\H^{2,n}}(x,y)$.
\end{prop}
\begin{rmk}
Though we don't need it, one could easily deduce from the computations in the proof of Lemma \ref{l:HyperplaneSeparatesS} that two distinct points in $u(\widetilde{\Sigma})$ are \emph{always} joined by a space-like geodesic.
\end{rmk}
\begin{proof}[Proof of Proposition \ref{p:ComparisonAmbientMetric}]
Recall that, according to Proposition \ref{p:WarpedProduct}, the space $\hat{\H}^{2,n}$ is isometric to a warped product
\[\H^2 \times \S^n\]
with the metric
\[g = g_{\H^2} \oplus \big(- w(x_1)\, g_{\S^n}\big)~,\]
for some positive function $w$ on $\H^2$. In this warped product structure, the horizontal slices $\H^2 \times \{x_2\}$ are totally geodesic.
Let $x$ and $y$ be two points in $u(\widetilde{\Sigma})$ and let $\hat{x}$ and $\hat{y}$ be lifts of $x$ and $y$ to $\hat{\H}^{2,n}$ belonging to the same lift $\hat{S}$ of $u(\widetilde{\Sigma})$. Let us choose a warped product structure on $\hat{\H}^{2,n}$ such that $\hat{x}$ and $\hat{y}$ belong to the same horizontal slice.
Let $\pi$ denote the restriction to $\hat{S}$ of the projection on the $\H^2$ factor with respect to this warped product structure. We then have
\[d_{\H^{2,n}}(x,y) = d_{\H^2}(\pi(x), \pi(y))~.\]
By Proposition \ref{p:SLipschitzGraph}, $\pi$ is a diffeomorphism. Moreover, given the warped product structure of the metric $g_{\H^{2,n}}$, we have
\begin{equation} \label{eq:ProjectionMetric}
\pi^*g_{\H^2} \geq g_{\H^{2,n}}
\end{equation}
on $u(\widetilde{\Sigma})$.
Let $c: [0,1] \to \H^2$ denote the geodesic segment between $\pi(x)$ and $\pi(y)$. We have
\begin{eqnarray*}
d_u(x,y) &\leq & \int_0^1 \sqrt{g_{\H^{2,n}}\left(\dt\, (\pi^{-1}\circ c)(t) \right)}\ \mathrm{d}t\\
& \leq & \int_0^1 \sqrt{g_{\H^2}\left(\dt c(t) \right)}\ \mathrm{d} t = d_{\H^2}(\pi(x), \pi(y)) = d_{\H^{2,n}}(x,y)~,
\end{eqnarray*}
where the second inequality follows from \eqref{eq:ProjectionMetric}.
\end{proof}
We can now conclude that for any $\gamma \in \Gamma$,
\begin{eqnarray*}
L_{g_u}(\gamma) &= & \lim_{n\to +\infty} \frac{1}{n}d_u(x,\gamma^n\cdot x) \\
&\leq & \lim_{n\to +\infty} \frac{1}{n}d_{\H^{2,n}}(x,\gamma^n\cdot x) = L_\rho(\gamma)~,
\end{eqnarray*}
which proves Lemma \ref{l:ComparisonInducedLengthSpectrum} and thus Theorem \ref{t:LengthSpectrumComparison}.
\section{Geometrization of maximal representations} \label{s:Geometrization}
In this section, we realize maximal representations in $\SO_0(2,n+1)$ as holonomies of geometric structures. More precisely, we prove the following two theorems:
\begin{theo}\label{t: fibered photons and maximal reps}
The holonomy map gives a bijective correspondence between maximal representations in $\SO_0(2,n+1)$ and fibered photon structures on iterated sphere bundles over $\Sigma$.
\end{theo}
\begin{theo} \label{t:EinsteinHitchin}
For any Hitchin representation $\rho\in \Hit(\Gamma,\SO_0(2,3))$, there exists a maximally fibered conformally flat Lorentz structure on the unit tangent bundle $\pi: T^1\Sigma \longrightarrow \Sigma$ whose holonomy is $ \rho\circ\pi_*$.
\end{theo}
The notions of fibered photon structure, iterated sphere bundles, and maximally fibered conformally flat Lorentz structures are described in the next subsections.
\subsection{$(G,X)$-structures}\label{(G,X)-structures}
Here we recall the basic theory of $(G,X)$-structures. For more details, the reader is referred to \cite{goldmangeometricstructures}.
In this subsection, $G$ will be a semi-simple Lie group, $X=G/H$ a homogeneous space and $M$ a manifold such that $\dim(M)=\dim(X)$.
\begin{defi}
A $(G,X)$-structure on $M$ is a maximal atlas of charts taking values in $X$ whose transition functions are restrictions of elements of $G$.
Two $(G,X)$-structures on $M$ are \textit{equivalent} if there exists a diffeomorphism $f: M \longrightarrow M$ isotopic to the identity whose expression in local charts is given by elements in $G$.
\end{defi}
One can associate to a $(G,X)$-structure on $M$ a \textit{developing pair} $(\dev,\rho)$ where
$$\rho: \pi_1 (M) \longrightarrow G$$
is called the \textit{holonomy} of the structure and
$$\dev: \widetilde{M} \longrightarrow X$$
is a locally injective $\rho$-equivariant map called the \textit{developing map}.
The developing pair is not uniquely defined. Given two developing pairs $(\dev_1,\rho_1)$ and $(\dev_2,\rho_2)$, if there exists an element $g\in G$ so that
$$\left\{\begin{array}{l}
\dev_1=g\circ\dev_2 \\
\rho_1(\gamma)=g \circ \rho_2(\gamma) \circ g^{-1},~\forall \gamma \in \pi_1(M)
\end{array}\right.,$$
then $(\dev_1,\rho_1)$ and $(\dev_2,\rho_2)$ correspond to equivalent $(G,X)$-structures.
It is well-known (see for example \cite{goldmangeometricstructures}) that a developing pair fully determines the $(G,X)$-structure on $M$.
In particular, if $\mathcal{D}_{(G,X)}(M)$ is the moduli space of equivalence classes of $(G,X)$-structures on $M$, then we get a well-defined map
$$\textbf{hol}: \mathcal{D}_{(G,X)}(M) \longrightarrow \Rep(\pi_1(M),G),$$
where $\Rep(\pi_1(M),G):=\Hom\big(\pi_1(M),G\big)/G$ is the representation variety.
The well-known \emph{Ehresmann--Thurston principle} states that this map induces a local homeomorphism from the set of equivalence classes of $(G,X)$-structures on $M$ to the representation variety.
\begin{theo}[Thu80, Chapter 3]
Let $\rho_0$ be the holonomy of a $(G,X)$-structure on the closed manifold $M$. Then any representation $\rho:\pi_1(M) \to G$ sufficiently close to $\rho_0$ is the holonomy of a $(G,X)$-structure on $M$ close to the initial one, which is unique up to equivalence.
\end{theo}
Given $\rho\in\Rep(\pi_1(M),G)$, one can associate a flat homogeneous $X$-bundle $\mathcal{X}_\rho$ defined by
$$\mathcal{X}_\rho:= \left(P_\rho \times X\right)/G,$$
where $P_\rho$ is the flat principal $G$-bundle of holonomy $\rho$ and $G$ acts diagonally on $P_\rho \times X$.
The homogeneous bundle $\mathcal{X}_\rho$ is equipped with a flat structure, that is, an integrable distribution of the same dimension as $M$ transverse to the fibers of $p: \mathcal{X}_\rho \longrightarrow M$. It follows that for each $x\in \mathcal{X}_\rho$, we have a splitting
$$T_x \mathcal{X}_\rho = T^v_x\mathcal{X}_\rho \oplus T^h_x\mathcal{X}_\rho.$$
Here $T^v_x\mathcal{X}_\rho= \ker(dp_x)$ is the vertical tangent space and $T^h_x\mathcal{X}_\rho$ is the horizontal tangent space given by the distribution. Note also that the projection $p: \mathcal{X}_\rho \longrightarrow M$ identifies $T_x^h\mathcal{X}_\rho$ with $T_{p(x)}M$.
In this language, a point in the fiber $\textbf{hol}^{-1}(\rho)$ is given by a section $s$ of $\mathcal{X}_\rho$ that is transverse to the horizontal distribution.
\subsection{Fibered photon structures}\label{isotropicplanes}
\begin{defi}
A \textit{photon} in $\R^{2,n+1}$ is an isotropic 2-plane. We denote by $\Pho(\R^{2,n+1})$ the set of photons in $\R^{2,n+1}$.
\end{defi}
\begin{rmk}
Equivalently, a photon is a projective line inside the space of isotropic lines $\Ein^{1,n}\subset \ProjR{n+1}$. Such a projective line necessarily corresponds to an isotropic plane in $\R^{2,n+1}$, so the restriction of the conformal metric of $\Ein^{1,n}$ to this line is degenerate.
\end{rmk}
The group $\SO_0(2,n+1)$ acts transitively on $\Pho\big(\R^{2,n+1}\big)$ and the stabilizer of a photon is a parabolic subgroup denoted $P_2$. We thus get an identification
$$\Pho\big(\R^{2,n+1}\big) \cong \SO_0(2,n+1)/P_2.$$
Note that $\dim\left(\Pho\big(\R^{2,n+1}\big)\right)=2n-1$.
By considering $\Pho(\R^{2,n+1})$ as a sub-manifold of the Grassmannian of $2$-planes in $\R^{n+3}$, one gets that $T_V\Pho(\R^{2,n+1})\subset \Hom(V, \R^{2,n+1}/V)$. Given a negative definite line $\ell\in\H^{2,n}$, $\ell^\bot\cong \R^{2,n}\subset \R^{2,n+1}$ and one gets a natural embedding
$$\Pho\big(\ell^\bot\big) \hookrightarrow \Pho\big(\R^{2,n+1}\big).$$
The following lemma is straightforward (by a dimension argument):
\begin{lem}\label{l:projectionphotons}
For $V\in \Pho(\ell^\bot)\subset \Pho\big(\R^{2,n+1} \big)$, the post-composition with the orthogonal projection $p_\ell: \R^{2,n+1} \to \ell$ gives a linear morphism from $T_V \Pho(\R^{2,n+1})$ to $\Hom(V,\ell)$ the kernel of which is exactly $T_V \Pho(\ell^\perp)$.
\end{lem}
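The dimension count behind Lemma \ref{l:projectionphotons} is the following. Deformations of $V$ inside $\Pho(\ell^\perp)$ project trivially to $\ell$, so $T_V \Pho(\ell^\perp)$ is contained in the kernel, and the dimensions match exactly when the map is surjective:

```latex
\[
\dim T_V\Pho\big(\R^{2,n+1}\big) - \dim \Hom(V,\ell) ~=~ (2n-1) - 2 ~=~ 2n-3 ~=~ \dim T_V\Pho\big(\ell^\perp\big)~,
\]
```

since $\ell^\perp \cong \R^{2,n}$ and the space of photons in $\R^{2,n}$ has dimension $2(n-1)-1 = 2n-3$.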
Consider an orthogonal splitting $\R^{2,n+1}=E\oplus F$, where $(E,g_E)$ is a positive definite linear subspace of dimension $2$ with scalar product $g_E$, and $(F,g_F)=E^\bot$. For each photon $V\in \Pho(\R^{2,n+1})$, the orthogonal projection $p_E: \R^{2,n+1} \to E$ restricts to an isomorphism between $V$ and $E$. In particular, each photon is the graph of a linear map $\varphi: E \to F$ and we get the following identification
$$\begin{array}{lll}
\mathrm{O}(E,F) & \overset{\sim}{\longrightarrow} & \Pho\big(\R^{2,n+1} \big) \\
\varphi & \longmapsto & \Gamma_\varphi:= \text{graph}(\varphi)
\end{array}. $$
Here,
$$\mathrm{O}(E,F):=\big\{\varphi\in \text{Hom}(E,F),~\forall x,y\in E, g_E(x,y)=-g_F(\varphi(x),\varphi(y)) \big\}.$$
Fixing an orthonormal basis $(e_1,e_2)$ of $E$, a map $\varphi\in \mathrm{O}(E,F)$ is fully determined by the pair of orthonormal vectors $(\varphi(e_1),\varphi(e_2))\in F^2$. The group $\mathrm{O}(n+1)$ acts transitively on the set of pairs of orthonormal vectors in $F$ and the stabilizer of a pair is conjugate to $\mathrm{O}(n-1)$. In particular, $ \mathrm{O}(E,F)\cong \mathrm{O}(n+1)/\mathrm{O}(n-1)$.
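As a consistency check, the identification $\mathrm{O}(E,F)\cong \mathrm{O}(n+1)/\mathrm{O}(n-1)$ recovers the dimension of $\Pho\big(\R^{2,n+1}\big)$ computed above:

```latex
\[
\dim \mathrm{O}(E,F) ~=~ \dim \mathrm{O}(n+1) - \dim \mathrm{O}(n-1)
~=~ \frac{n(n+1)}{2} - \frac{(n-1)(n-2)}{2} ~=~ 2n-1~.
\]
```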
More geometrically, the space of pairs of orthonormal vectors in $F$ is canonically identified with the unit tangent bundle of the unit sphere in $F$. We call $\mathrm{O}(E,F)$ an \textit{iterated sphere}. Note that in particular, when $\dim(F)=2$, this space has two connected components.
\begin{defi}
An \textit{iterated sphere bundle} over $\Sigma$ is a homogeneous bundle with fiber $\mathrm{O}(n)/\mathrm{O}(n-2)$.
\end{defi}
Given two iterated sphere bundles $M_1$ and $M_2$ over $\Sigma$, the set $\text{Diff}(M_1,M_2)$ is the set of bundle diffeomorphisms $f: M_1 \longrightarrow M_2$ covering the identity $\text{id}: \Sigma \longrightarrow \Sigma$. Such a diffeomorphism preserves the homogeneous bundle structure if and only if its expression in each fiber is given by an element in $\mathrm{O}(n)$.
In particular, the set $\mathrm{O}(M_1,M_2)$ of diffeomorphisms $f\in \text{Diff}(M_1,M_2)$ preserving the homogeneous structure is a principal $\mathrm{O}(n)$-bundle over $\Sigma$. Moreover, the iterated sphere bundles $M_1$ and $M_2$ are equivalent if and only if $\mathrm{O}(M_1,M_2)$ admits a global section, that is, if and only if $\mathrm{O}(M_1,M_2)$ is topologically trivial.
For $n=2$, an iterated sphere bundle is just an $\mathrm{O}(2)$-bundle. When $n>2$, the topology of an $\mathrm{O}(n)$-bundle over $\Sigma$ is classified by its first and second Stiefel-Whitney class $w_1\in H^1(\Sigma,\Z_2)$ and $w_2\in H^2(\Sigma,\Z_2)$ respectively.
\begin{defi}
The \textit{characteristic classes} $\chi(M)$ of an iterated sphere bundle $M\longrightarrow \Sigma$ are the characteristic classes of the principal $\mathrm{O}(n)$-bundle $\mathrm{O}(N,M)$ where $N:=\Sigma\times \mathrm{O}(n)/\mathrm{O}(n-2)$ is the trivial iterated sphere bundle. In particular, $\chi(M)\in H^1(\Sigma,\Z_2)\times H^2(\Sigma,\Z_2)$.
\end{defi}
From the previous discussion, we have the following:
\begin{coro}
Two iterated sphere bundles over $\Sigma$ are topologically equivalent if and only if they have the same characteristic classes.
\end{coro}
We now define a special class of photon structures:
\begin{defi} \label{d:FiberedPhotonStructure}
A \textit{fibered photon structure} is a $\big(\SO_0(2,n+1),\Pho(\R^{2,n+1})\big)$-structure on an iterated sphere bundle $\pi: M \to \Sigma$ such that any associated developing map sends each fiber $M_x$ to a copy of $\Pho\big(\R^{2,n}\big)$ in $\Pho\big(\R^{2,n+1}\big)$.
\end{defi}
\begin{rmk} Note that when $n=2,$ the fiber of an iterated sphere bundle is the disjoint union of two circles, and that $\Pho(\R^{2,2})$ is also the disjoint union of two circles.
By definition, the developing map of a fibered photon structure defines a map
\[dev:\widetilde\Sigma\times\mathrm{O}(n)\slash\mathrm{O}(n-2)\longrightarrow\Pho(\R^{2,n+1})\]
which sends the fibers bijectively onto copies of $\Pho(\R^{2,n})$, and which is equivariant with respect to the holonomy representation $\rho: \Gamma \to \SO_0(2,n+1)$.
In particular, for $n=2,$ the image of $\dev$ has two connected components.
\end{rmk}
Given a fibered photon structure on $M$, one can associate a map
$$\begin{array}{llll}
u: & \widetilde\Sigma & \longrightarrow & \H^{2,n} \\
& x & \longmapsto & F_x^\bot,
\end{array}$$
where $F_x\cong \R^{2,n}$ is such that $\dev(M_x)=\Pho(F_x)$. Such a map is equivariant with respect to a representation $\rho \in\Rep\big(\Gamma, \SO_0(2,n+1) \big)$, so one gets a well-defined map
$$\textbf{hol}: \mathcal{D}_{f}(\Sigma) \longrightarrow \Rep(\Gamma,\SO_0(2,n+1)),$$
where $\mathcal{D}_{f}(\Sigma)$ is the moduli space of fibered photon structures on $\Sigma$.
\begin{lem}
The map $\textbf{hol}$ takes values in the set of maximal representations.
\end{lem}
\begin{proof}
By local injectivity, the developing map of a fibered photon structure sends the fibers of all points in a neighborhood of $x\in \widetilde\Sigma$ to disjoint copies of $\Pho\big(\R^{2,n}\big)$ in $\Pho\big(\R^{2,n+1}\big)$. For distinct $x,y\in \widetilde\Sigma$, we have $F_x \cap F_y = \big(u(x) \oplus u(y) \big)^\bot$. But $\dev(M_x)\cap \dev(M_y)= \emptyset$ if and only if $\big(u(x) \oplus u(y) \big)^\bot$ does not contain any isotropic plane, that is, if and only if $\big(u(x) \oplus u(y) \big)^\bot\cong \R^{1,n}$. It follows that the geodesic passing through $u(x)$ and $u(y)$ is space-like, that is, $u$ is a space-like surface.
The second Gauss map of $u$ (see Subsection \ref{subsection-Gaussmaps}) gives a reduction of the structure group of the principal $\SO_0(2,n+1)$-bundle $P_\rho$ to an $\SO(2)\times \SO(n+1)$-principal bundle. The quotient of this bundle by $\SO(n+1)$ is a circle bundle that is canonically identified with $T^1\Sigma$ via $u$. In particular, the Toledo invariant of $\rho$ is $2g-2$.
\end{proof}
\subsection{Geometrization}\label{ss:geometrization}
Let $\rho\in\Rep^{max}\big(\Gamma,\SO_0(2,n+1)\big)$ be a maximal representation. By Theorem \ref{p:decompositionbundle}, the flat vector bundle $E$ with holonomy $\rho$ splits as
$$E= \ell\oplus U\oplus V,$$
where $\ell$ is a negative definite line sub-bundle, $(U,g_U)$ is a positive definite rank $2$ sub-bundle of Euler class $2g-2$ and $(V,g_V)$ is a rank $n$ negative definite sub-bundle. Recall also that the characteristic classes of $V$ characterize the connected components of $\Rep^{max}\big(\Gamma,\SO_0(2,n+1)\big)$.
Set
$$\mathrm{O}(U,V):=\big\{ \varphi\in \Hom(U,V),~g_U(u_1,u_2)=-g_V(\varphi(u_1),\varphi(u_2)),~\forall u_1,u_2\in U\big\}.$$
We have the following:
\begin{lem}
The canonical projection $\pi: \mathrm{O}(U,V) \longrightarrow \Sigma$ turns $\mathrm{O}(U,V)$ into an iterated sphere bundle. Moreover, the Stiefel-Whitney classes of $\mathrm{O}(U,V)$ are equal to the Stiefel-Whitney classes of $V$.
\end{lem}
We denote by $\Pho(E)=\big(\widetilde\Sigma \times \Pho(\R^{2,n+1}) \big)/\Gamma$ the flat homogeneous $\Pho(\R^{2,n+1})$-bundle over $\Sigma$ associated to $\rho$. The fiber of $\Pho(E)$ over $p\in\Sigma$ is the set of photons in $E_p$.
By Subsection \ref{(G,X)-structures}, a photon structure on $\mathrm{O}(U,V)$ with holonomy $\rho\circ \pi_*$ is equivalent to a section $s\in \Omega^0\big(\mathrm{O}(U,V),\pi^*\Pho(E)\big)$ which is transverse to the flat structure.
Define $s\in \Omega^0\big(\mathrm{O}(U,V),\pi^*\Pho(E)\big)$ by
$$\begin{array}{llll}
s: & \mathrm{O}(U,V) & \longrightarrow & \pi^*\Pho(E) \\
& x=(p,\varphi) & \longmapsto & \text{graph}(\varphi)\subset U_p\oplus V_p
\end{array}. $$
Here $x\in \mathrm{O}(U,V)$ is such that $\pi(x)=p\in\Sigma$ and $\varphi\in\mathrm{O}(U_p,V_p)$.
\begin{prop}\label{p:s gives fibered photon structure}
The section $s$ introduced above defines a fibered photon structure on $\mathrm{O}(U,V)$ of holonomy $\rho$.
\end{prop}
\begin{proof}
By construction, $s$ maps bijectively the fiber of $\mathrm{O}(U,V)$ over $p\in\Sigma$ to the set of photons in $U_p\oplus V_p=\ell_p^\bot$. In particular, for any $x=(p,\varphi)\in \mathrm{O}(U,V)$, $ds_x$ restricts to an isomorphism between $T^v_x\mathrm{O}(U,V)$ and $T_{s(x)}\Pho(\ell_p^\bot)\subset T_{s(x)}\Pho(E_p)$. From Lemma \ref{l:projectionphotons}, $s$ is transverse to the flat structure if the post-composition of the restriction of $ds_x$ to $T^h_x\mathrm{O}(U,V)$ with the orthogonal projection on $\ell_p$ is injective.
From Subsection \ref{Higgsbundleparametrization}, the cyclic Higgs bundle associated to $\rho$ splits as
\[
(\mathcal{E},\Phi)\ \ =\ \ \vcenter{\xymatrix@R=-.2em{\Ll\ar[r]^1&\Ii\ar[r]^1&\Ll^{-1}\ar[ddl]^{\beta}\\&\oplus&\\&\Vv\ar[uul]^{\beta^\dagger}&}}\]
where $\Ii=\Lambda^{n}\Vv$ is a square root of the trivial bundle and $\mathcal{L}=\mathcal{KI}$.
Moreover, this splitting is orthogonal with respect to the Hermitian metric $h$ solving the Higgs bundle equations. In particular, for $\mathcal{E}= \Ii\oplus \Ll \oplus \Ll^{-1} \oplus \Vv$, we have
$$h=\left(\begin{array}{llll}
h_{\Ii} & & & \\
& h_{\Ll} & & \\
& & h_{\Ll}^{-1} & \\
& & & h_{\Vv}
\end{array} \right) $$
\noindent where $h_{\Ii}$ (respectively $h_{\Ll}$ and $h_{\Vv}$) is a Hermitian metric on $\mathcal{I}$ (respectively $\mathcal{L}$ and $\mathcal{V}$). The conjugate linear involution $\lambda$ fixing $E_\rho$ preserves $\mathcal{I}$, $\mathcal{L}\oplus \mathcal{L}^{-1}$ and $\mathcal{V}$. The $(1,0)$ part $\nabla^{1,0}$ of the flat connection $\nabla= A + \Phi + \Phi^*$ (where $A$ is the Chern connection of $h$ and $\Phi^*=\Phi^{*h}$) is written
$$\nabla^{1,0} = \left(\begin{array}{llll} A_{\Ii}^{1,0} & 1 & 0 & 0 \\
0 & A_{\Ll}^{1,0} & 0 & \eta \\
1 & 0 & A_{\Ll^{-1}}^{1,0} & 0 \\
0 & 0 & \eta^\dagger & A_{\Vv}^{1,0}\end{array} \right) $$
while the $(0,1)$-part $\nabla^{0,1}$ writes
$$\nabla^{0,1} = \left(\begin{array}{llll} \overline{\partial}_{\Ii} & 0 & 1^* & 0 \\
1^* & \overline{\partial}_{\Ll} & 0 & 0 \\
0 & 0 & \overline{\partial}_{\Ll^{-1}} & (\eta^\dagger)^* \\
0 & \eta^* & 0 & \overline{\partial}_{\Vv} \end{array}\right)$$
where $\eta^*$ is the $(0,1)$-form dual to $\eta\in \Omega^{1,0}\big(X,\Hom(\Vv,\mathcal{L})\big)$, the dual being taken using the Hermitian metrics $h_{\Ll}$ and $h_{\Vv}$ (and similarly for $1^*$ and $(\eta^\dagger)^*$).
If $\epsilon$ is a local frame of $\mathcal{L}$ with $h_{\Ll}(\epsilon,\epsilon)=1$, the bundle $U=\text{Fix}\big(\lambda_{\vert \mathcal{L}\oplus \mathcal{L}^{-1}}\big)$ is locally generated by the orthonormal frame $\frac{1}{\sqrt{2}}e_1,\frac{1}{\sqrt{2}}e_2$ where
$$\left\{ \begin{array}{lll}
e_1 & = & \epsilon + \lambda(\epsilon) \\
e_2 & = & i(\epsilon-\lambda(\epsilon))
\end{array}\right.. $$
In particular, the image of the section $s$ is given by the sub-bundle generated by $\xi_1$ and $\xi_2$ where $\xi_i = e_i + \varphi(e_i)\in U\oplus V$.
We thus get
$$\xi_1= \left( \begin{array}{l} 0 \\ h_{\Ll}^{-1/2} \\ h_{\Ll}^{1/2} \\ \varphi(e_1) \end{array}\right),~ \xi_2= \left( \begin{array}{l} 0 \\ ih_{\Ll}^{-1/2} \\ -ih_{\Ll}^{1/2} \\ \varphi(e_2) \end{array}\right).$$
Denoting by $p_\mathcal{I}: \mathcal{E} \longrightarrow \mathcal{I}$ the orthogonal projection, we obtain
$$\left\{\begin{array}{lll}
p_\mathcal{I}(\nabla_{\partial_z}\xi_1) & = & 1(\partial_z)h_{\Ll}^{-1/2} \\
p_\mathcal{I}(\nabla_{\overline{\partial}_z}\xi_1) & = & 1^*(\overline{\partial}_z)h_{\Ll}^{1/2}\\
p_\mathcal{I}(\nabla_{\partial_z}\xi_2) & = & i1(\partial_z)h_{\Ll}^{-1/2} \\
p_\mathcal{I}(\nabla_{\overline{\partial}_z}\xi_2) & = & -i1^*(\overline{\partial}_z)h_{\Ll}^{1/2}
\end{array}\right..$$
where $1 \in \Omega^{1,0}\big(X,\Hom(\mathcal{L},\mathcal{I})\big)$ and $1^*$ is the dual of $1$ with respect to the Hermitian metrics on $\mathcal{I}$ and $\mathcal{L}$.
In particular, the post-composition of the restriction of $ds$ to $T^h\mathrm{O}(U,V)$ with the projection on $\ell$ is given by the matrix
$$\left(\begin{array}{ll}
1(\partial_z)h_{\Ll}^{-1/2} & i1(\partial_z)h_{\Ll}^{-1/2} \\
1^*(\overline{\partial}_z)h_{\Ll}^{1/2} & -i1^*(\overline{\partial}_z)h_{\Ll}^{1/2}
\end{array}\right).$$
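Explicitly, setting $a = 1(\partial_z)h_{\Ll}^{-1/2}$ and $b = 1^*(\overline{\partial}_z)h_{\Ll}^{1/2}$ (auxiliary notation for this computation only), the determinant is

```latex
\[
\det\left(\begin{array}{ll} a & ia \\ b & -ib \end{array}\right)
~=~ -2i\,ab ~=~ -2i\, 1(\partial_z)\, 1^*(\overline{\partial}_z)~,
\]
```

and $1$ vanishes nowhere.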
Since the determinant of this matrix is nowhere vanishing, $s$ is transverse to the flat structure.
Finally, note that the section $s$ maps $\mathrm{O}(U,V)_p$ to $\Pho(\ell_p^\bot)$, so the associated equivariant map $u: \widetilde\Sigma \to \H^{2,n}$ is the maximal space-like embedding.
\end{proof}
\medskip
\noindent\textbf{The case $\SO_0(2,3)$.} In the case of $\SO_0(2,3)$, one can say more about the topology of $\mathrm{O}(U,V)$. Recall from Remark \ref{r: Gothen v nonGothen components} that the space of maximal $\SO_0(2,3)$-representations decomposes as
\[\bigsqcup\limits_{sw_1\neq 0,\ sw_2}\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,3))\ \sqcup\bigsqcup_{0\leq d\leq 4g-4}\Rep_{d}^{max}(\Gamma,\SO_0(2,3)),\]
and the components $\bigsqcup\limits_{0< d\leq 4g-4}\Rep_{d}^{max}(\Gamma,\SO_0(2,3))$ are called Gothen components while the rest of the components are called non-Gothen components.
By Theorem \ref{p:decompositionbundle}, a flat vector bundle $E$ with holonomy representation $\rho\in\Rep^{max}(\Gamma,\SO_0(2,3))$ splits orthogonally as $E=\ell\oplus U\oplus V$ where $\ell$ is a negative definite line sub-bundle, $U$ is an oriented positive definite rank two bundle canonically identified with $T\Sigma$ and $V=(\ell\oplus U)^\bot$.
In this case, the post-composition by elements of $\mathrm{O}(2)$ turns the iterated sphere bundle $\mathrm{O}(U,V)\to\Sigma$ into a principal $\mathrm{O}(2)$-bundle with the same first and second Stiefel-Whitney classes as $V.$
\begin{lem}
The total space of the $\mathrm{O}(2)$-bundle $\mathrm{O}(U,V)\to\Sigma$ is connected if and only if the first Stiefel-Whitney class of $\mathrm{O}(U,V)$ is nonzero.
\end{lem}
\begin{proof}
An $\mathrm{O}(2)$-bundle is disconnected (with two connected components) if and only if the structure group can be reduced to the connected component of the identity $\SO(2).$ Such a reduction of structure exists if and only if the first Stiefel-Whitney class of the orthogonal bundle vanishes.
\end{proof}
Recall that, for each maximal $\SO_0(2,3)$-representation $\rho$, the associated fibered photon structure on $\mathrm{O}(U,V)$ gives rise to a $\rho$-equivariant injective developing map $\dev:\widetilde \Sigma\times\mathrm{O}(2)\to \Pho(\R^{2,3})$ which sends each fiber bijectively onto a copy of $\Pho(\R^{2,2}).$ In particular, the image of $\dev$ has two connected components. The geometry of the quotient $\rho(\Gamma)\backslash \dev(\widetilde\Sigma\times \mathrm{O}(2))$ is given by the following:
\begin{lem}\label{l: O(U,V) bundle description}
Let $\rho$ be a maximal $\SO_0(2,3)$-representation and $\mathrm{O}(U,V)$ be the associated $\mathrm{O}(2)$-bundle.
\begin{itemize}
\item If $\rho$ is in the Gothen component $\Rep^{max}_d(\Gamma,\SO_0(2,3))$, then $\mathrm{O}(U,V)$ is the disjoint union of two circle bundles with degrees $2g-2+d$ and $2g-2-d.$
\item If $\rho$ is in the non-Gothen component $\Rep^{max}_0(\Gamma,\SO_0(2,3))$, then $\mathrm{O}(U,V)$ is the disjoint union of two circle bundles each with degree $2g-2$.
\item If $\rho$ is in the non-Gothen component $\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,3))$, then $\mathrm{O}(U,V)$ is connected.
\end{itemize}
\end{lem}
\begin{proof}
If $\rho$ is in the Gothen component $\Rep^{max}_{d}(\Gamma,\SO_0(2,3))$, we can choose a unique orientation on $V$ such that $\deg(V)=d>0$.
In that case, the two connected components of $\mathrm{O}(U,V)$ will be denoted respectively by $\SO(U,V)$ and $\SO(U,\overline V)$ corresponding to those bundle maps $\varphi\in \Hom(U,V)$ that preserve and reverse the orientation respectively.
The complex structure $J_U : U \longrightarrow U$ given by the rotation of angle $\pi/2$ defines a canonical identification between $U$ and $\text{Ker}(J_U-i\text{Id})\cong \mathcal{K}^{-1} \subset U\otimes \mathbb{C}$.
In the same way, the complex structure $J_V: V \longrightarrow V$ identifies $V$ with a holomorphic line sub-bundle $\mathcal{N}\subset V\otimes \C$, and $\mathcal{N}$ has degree $d$.
Under these identifications, $\SO(U,V)$ corresponds to those vectors in $\text{Hom}(\mathcal{K}^{-1},\mathcal{N})= \mathcal{KN}$ of norm $1$. So the degree of $\SO(U,V)$ is $2g-2+d$. In the same way, one gets that the degree of $\SO(U,\overline V)$ is $2g-2-d$.
For a non-Gothen representation in $\Rep^{max}_0(\Gamma,\SO_0(2,3))$, the bundle $\mathrm{O}(U,V)$ is disconnected since the first Stiefel-Whitney class of $V$ vanishes.
However, in this case, the degree of $V$ is zero. Thus $\mathrm{O}(U,V)$ is the disjoint union of two circle bundles of degree $2g-2.$
Finally, for non-Gothen representations in $\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,3))$, the first Stiefel-Whitney class of $V$ is non-zero, hence $\mathrm{O}(U,V)$ is connected.
\end{proof}
\subsection{Einstein structures for $\SO_0(2,3)$-Hitchin representations} \label{ss:EinsteinHitchin}
Here we prove Theorem \ref{t:EinsteinHitchin}, namely that one can associate to any $\SO_0(2,3)$-Hitchin representation a maximally fibered conformally flat Lorentz structure on the unit tangent bundle of $\Sigma$.
More generally, we construct these structures for special $\SO_0(2,3)$ representations which give rise to cyclic Higgs bundles.
\begin{defi}
A \textit{conformally flat Lorentz structure} (CFL structure) on a three dimensional manifold $M$ is a $(G,X)$-structure with $G=\SO_0(2,3)$ and $X=\Ein^{1,2}$.
\end{defi}
A \textit{space-like circle} in $\Ein^{1,2}$ is the intersection of a 3-dimensional linear subspace of $\R^{2,3}$ of signature $(2,1)$ with $\Ein^{1,2}$. The set of space-like circles in $\Ein^{1,2}$ is the pseudo-Riemannian symmetric space
$$\Gr_{(2,1)}(\R^{2,3}):= \SO_0(2,3)/\text{S}(\mathrm{O}(2,1)\times \mathrm{O}(2)).$$
\begin{defi}
A CFL structure on a circle bundle $\pi: M\longrightarrow \Sigma$ is called \textit{fibered} if the developing map sends each fiber onto a space-like circle in $\Ein^{1,2}$ and the holonomy is trivial along the fiber.
\end{defi}
In particular, the holonomy of a fibered CFL structure can be written as $\rho\circ\pi_*$ for some representation $\rho: \Gamma \to \SO_0(2,3)$. Also, as for fibered photon structures, one can associate to a fibered CFL structure on $M$ a $\rho$-equivariant map
$$\Psi: \widetilde{\Sigma} \longrightarrow \Gr_{(2,1)}(\R^{2,3}).$$
The map $\Psi$ sends a point $x\in\widetilde\Sigma$ to the element in $\Gr_{2,1}(\R^{2,3})$ corresponding to the space-like circle $\dev(\pi^{-1}(x))$.
\begin{defi}
A fibered CFL structure will be called \textit{maximal} if $\Psi(\widetilde\Sigma)$ is a space-like extremal surface.
\end{defi}
Consider a representation $\rho\in \Rep\big(\Gamma,\SO_0(2,3)\big)$ such that there exists a Riemann surface structure $X\in \mathcal{T}(\Sigma)$ satisfying the property that the associated $\SO_0(2,3)$-Higgs bundle $(\mathcal{E},\Phi)$ is cyclic and has the form
\begin{displaymath} (\mathcal{E},\Phi)=
\xymatrix{
\mathcal{K}\mathcal L \ar@/_/[r]_{1} & \mathcal L \ar@/_/[r]_{\beta} & \mathcal{O} \ar@/_/[r]_{\beta} & \mathcal{L}^{-1} \ar@/_1pc/[lll]_{\gamma} \ar@/_/[r]_{1} & \mathcal{K}^{-1}\mathcal{L}^{-1} \ar@/_1pc/[lll]_{\gamma}}
\end{displaymath}
where $\mathcal L$ is a holomorphic line bundle of degree $0\leq d\leq 2g-2$ and $\beta\in H^0(X,\mathcal{L}^{-1}\mathcal{K})$ is non-zero. In that case, the splitting $\mathcal{E}=\mathcal{K}\mathcal{L}\oplus \mathcal L\oplus\mathcal{O}\oplus \mathcal{L}^{-1}\oplus \mathcal{K}^{-1}\mathcal{L}^{-1}$ is orthogonal with respect to the Hermitian metric $h$ solving the Higgs bundle equations.
Note that Hitchin representations satisfy this property with $\mathcal{L}=\mathcal{K}$.
The conjugate linear involution $\lambda: \mathcal{E} \longrightarrow \mathcal{E}$ fixing the flat $\SO_0(2,3)$ bundle $E$ fixes $\mathcal{O}$, $\mathcal L\oplus \mathcal{L}^{-1}$ and $\mathcal{K}\mathcal L\oplus \mathcal{K}^{-1}\mathcal{L}^{-1}$, and one gets a splitting
$$E= \ell\oplus U \oplus V,$$
where $\ell=\text{Fix}(\lambda_{\vert \mathcal{O}})$ is trivial, $U=\text{Fix}(\lambda_{\vert \mathcal L\oplus \mathcal{L}^{-1}})$ and $V=\text{Fix}(\lambda_{\vert \mathcal{K}\mathcal L\oplus \mathcal{K}^{-1}\mathcal{L}^{-1}})$.
Let $\pi: M\longrightarrow \Sigma$ be the circle bundle of Euler class $d$.
Consider a tautological section $s_2: M \longrightarrow \pi^*U$ normalized so that $\Vert s_2 \Vert^2=1$ where the norm is taken with respect to the signature $(2,3)$ metric on $\pi^*E$.
If $s_1$ is the section of $\pi^*\ell$ normalized such that $\Vert s_1 \Vert^2=-1$, then the non-zero section $s=s_1+s_2$ has zero norm. The section $s$ thus defines a section $\sigma$ of the flat homogeneous bundle $\pi^*\Ein(E)$ where
$$\Ein(E):=\big(P_\rho\times \Ein^{1,2}\big)/\SO_0(2,3)$$
and $P_\rho$ is the flat $\SO_0(2,3)$-bundle with holonomy $\rho$.
More concretely, the fiber of $\pi^*\Ein(E)$ over $x\in M$ is the set of isotropic vectors in $(\pi^* E)_x$.
More geometrically, given a unit tangent vector $v\in T^1_x\Sigma$, the half-geodesic generated by $du_x(v)$ (where $u: \widetilde\Sigma \to \H^{2,n}$ is the maximal surface) intersects $\partial\H^{2,n}$ in a point corresponding to $s(x,v)$.
\begin{prop}
The section $\sigma\in\Omega^0\big(M,\pi^*\Ein(E)\big)$ introduced above defines a maximally fibered CFL structure on $M$.
\end{prop}
\begin{proof}
In the splitting $\mathcal{E}=\mathcal{KL}\oplus \mathcal L\oplus\mathcal{O}\oplus \mathcal{L}^{-1}\oplus \mathcal{K}^{-1}\mathcal{L}^{-1}$, the Higgs field $\Phi$ and its dual $\Phi^*$ with respect to $h$ have the following expression:
$$\Phi=\left(\begin{array}{lllll}
0 & 0 & 0 & \gamma & 0 \\
1 & 0 & 0 & 0 & \gamma \\
0 & \beta & 0 & 0 & 0 \\
0 & 0 & \beta & 0 & 0 \\
0 & 0 & 0 & 1 & 0
\end{array}\right)~~~~~~\text{and}~~~~~~\Phi^*=\left(\begin{array}{lllll}
0 & 1^* & 0 & 0 & 0 \\
0 & 0 & \beta^* & 0 & 0 \\
0 & 0 & 0 & \beta^* & 0 \\
\gamma^* & 0 & 0 & 0 & 1^* \\
0 & \gamma^* & 0 & 0 & 0
\end{array}\right), $$
where $\beta^*\in \Omega^{0,1}\big(X,\Hom(\mathcal{O},\mathcal{L})\big)\cong \Omega^{0,1}\big(X,\Hom(\mathcal{L}^{-1},\mathcal{O})\big)$ is the form dual to $\beta$ using the Hermitian metric on $\mathcal L$ and $\mathcal{O}$ (and similarly for $1^*$ and $\gamma^*$).
Consider a local chart $(z,\theta)$ on $\widetilde{\Sigma}\times S^1$, where $z$ is holomorphic. In this chart, the sections $s_1$ and $s_2$ are given by
$$s_1=\left(\begin{array}{l} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array}\right)~~~~\text{and}~~~~s_2 = \frac{1}{\sqrt{2}}\left(\begin{array}{l} 0 \\ \mu^{-1}e^{i\theta} \\ 0 \\ \mu e^{-i\theta} \\ 0 \end{array}\right),$$
where $\mu$ is the norm of the local section $\left(\begin{array}{l} 0 \\ e^{i\theta} \\ 0 \\ 0 \\ 0 \end{array}\right)$ with respect to $\pi^*h$. In particular, if $l$ is the local section of $\pi^* \Ll$ corresponding to $e^{i\theta}$, then the restriction of $\pi^*h$ to $\pi^*(\Ll\oplus \Ll^{-1})$ is locally given by
$$\pi^*h_{\vert \pi^*(\Ll\oplus \Ll^{-1})} = \mu^2 l^{-1}\otimes\overline{l}^{-1} + \mu^{-2}l \otimes \overline l.$$
Writing the flat connection $\nabla= A +\Phi + \Phi^*$ (where $A=d + \partial \log h$ is the Chern connection of $(\pi^*\mathcal{E},\pi^*h)$), one obtains
$$\nabla s_1 = \left(\begin{array}{l}0 \\ \beta^*(s_1) \\ 0 \\ \beta(s_1) \\ 0\end{array}\right).$$
The calculations for $s_2$ are more tedious. We get
$$A_{\partial_\theta}s_2=\frac{1}{\sqrt{2}} \left(\begin{array}{l}0 \\ i\mu^{-1}e^{i\theta} \\ 0 \\ -i\mu e^{-i\theta} \\ 0\end{array}\right),~ A_{\partial_z}s_2=\frac{1}{\sqrt{2}} \left(\begin{array}{l}0 \\ \mu^{-2}\partial_z\mu\, e^{i\theta} \\ 0 \\ -\partial_z\mu\, e^{-i\theta} \\ 0\end{array}\right),~A_{\overline{\partial}_z}s_2=\frac{1}{\sqrt{2}} \left(\begin{array}{l}0 \\ -\mu^{-2}\overline{\partial}_z\mu\, e^{i\theta} \\ 0 \\ \overline{\partial}_z\mu\, e^{-i\theta} \\ 0\end{array}\right),$$
\noindent and
$$\Phi(\partial_z)(s_2) =\frac{1}{\sqrt{2}} \left(\begin{array}{l} \gamma(\partial_z)(\mu e^{-i\theta}) \\ 0 \\ \beta(\partial_z)(\mu^{-1}e^{i\theta}) \\ 0 \\ 1(\partial_z)(\mu e^{-i\theta}) \end{array}\right),~ \Phi^*(\overline\partial_z)(s_2) =\frac{1}{\sqrt{2}} \left(\begin{array}{l} 1^*(\overline\partial_z)(\mu e^{i\theta}) \\ 0 \\ \beta^*(\overline\partial_z)(\mu^{-1}e^{-i\theta}) \\ 0 \\ \gamma^*(\overline\partial_z)(\mu e^{i\theta})\end{array}\right).$$
So finally, we get
$$\nabla_{\partial_z}s=\frac{1}{\sqrt 2}\left(\begin{array}{l} \gamma(\partial_z)(\mu e^{-i\theta}) \\ \mu^{-2}\partial_z\mu\, e^{i\theta} \\ \beta(\partial_z)(\mu^{-1}e^{i\theta}) \\ -\partial_z\mu e^{-i\theta} + \sqrt{2}\beta(\partial_z)(s_1) \\ 1(\partial_z)(\mu e^{-i\theta})\end{array}\right),$$
$$\nabla_{\overline{\partial}_z}s=\frac{1}{\sqrt{2}} \left(\begin{array}{l} 1^*(\overline\partial_z)(\mu e^{i\theta}) \\ -\mu^{-2}\overline{\partial}_z\mu e^{i\theta} + \sqrt{2}\beta^*(\overline{\partial}_z)(s_1) \\ \beta^*(\overline\partial_z)(\mu^{-1}e^{-i\theta}) \\ \overline{\partial}_z\mu e^{-i\theta} \\ \gamma^*(\overline\partial_z)(\mu e^{i\theta})\end{array}\right) $$
and
$$\nabla_{\partial_\theta}s =\frac{1}{\sqrt{2}} \left(\begin{array}{l}0 \\ i\mu^{-1}e^{i\theta} \\ 0 \\ -i\mu e^{-i\theta} \\ 0\end{array}\right).$$
The section $\sigma\in \Omega^0\big(M,\pi^*\Ein(E)\big)$ is transverse to the flat structure if and only if the sections $\{s,\nabla_{\partial_z}s,\nabla_{\overline{\partial}_z}s,\nabla_{\partial_\theta}s \}$ generate a 4-dimensional space at each point. In particular the non-vanishing of the determinant $\big\vert s_1~s_2~ \nabla_{\partial_z}s~\nabla_{\overline{\partial}_z}s~\nabla_{\partial_\theta}s\big\vert$ is a sufficient condition.
The three vectors $\{s_1, s_2, \nabla_{\partial_\theta} s\}$ span the bundle $\pi^*(\Ll\oplus \mathcal{O} \oplus \Ll^{-1})$ at each point. In particular, the determinant $\big\vert s_1~s_2~ \nabla_{\partial_z}s~\nabla_{\overline{\partial}_z}s~\nabla_{\partial_\theta}s\big\vert$ vanishes exactly when the first and last component of $\{\nabla_{\partial_z}s,~\nabla_{\overline{\partial}_z}s\}$ are proportional, that is when $\Vert \gamma \Vert^2 = \Vert 1\Vert^2$.
Because the section $\gamma\in H^0(X,\mathcal{K}^2\mathcal{L}^2)$ is holomorphic, we have
$$ \Delta \log\Vert \gamma\Vert^2 = -2F_{\mathcal{KL}},$$
where $F_{\mathcal{KL}}$ is the curvature of the bundle $\mathcal{KL}$ with respect to the Hermitian metric $h$.
By the Higgs bundle equation, $F_{\mathcal{KL}} = \Vert 1 \Vert^2-\Vert \gamma\Vert^2$ and we obtain
$$\Delta \log \Vert \gamma\Vert^2=2\Vert \gamma\Vert^2-2\Vert 1 \Vert^2.$$
The maximum principle applies: at a maximum of $\Vert \gamma \Vert^2$, one has $\Vert \gamma\Vert^2<\Vert 1\Vert^2$ and so $\Vert 1 \Vert^2\neq \Vert \gamma \Vert^2$ on $\Sigma$. In particular, $\sigma\in\Omega^0\big(M,\pi^*\Ein(E)\big)$ defines a CFL structure on $M$.
Note also that the associated developing map sends the fiber of $M$ over $x$ to the space-like circle corresponding to the signature $(2,1)$ linear subspace $\ell_x\oplus U_x$, so the CFL structure is fibered. Finally, the corresponding equivariant map $\Psi: \widetilde\Sigma \longrightarrow \Gr_{2,1}(\R^{2,3})$ is the first Gauss map of the maximal surface $u: \widetilde\Sigma\longrightarrow \H^{2,2}$. By Proposition \ref{p-gaussmaps}, $\Psi$ is extremal.
\end{proof}
\begin{rmk}
For $\Ll=\mathcal{K},$ the above construction gives maximally fibered CFL structures on $T^1\Sigma$ whose holonomy factors through a Hitchin representation. But note also that, for any $d\in \mathbb{Z}$ with $\vert d\vert < 2g-2$, our construction gives examples of maximally fibered CFL structures on a degree $d$ circle bundle over $\Sigma$ whose holonomy factors through representations in the connected component of $\Rep(\Gamma,\SO_0(2,3))$ of Toledo invariant $d$. Unfortunately, for $|d|<2g-2$, these representations do not form an open domain of the character variety and we do not know how to characterize the representations arising this way. One can show that these representations do not come from representations in $\SO(2,2),$ so these CFL structures do not come from AdS structures on the circle bundle. It would be interesting to understand whether these representations are Anosov and whether the Einstein structures constructed above are deformations of anti-de Sitter structures.
\end{rmk}
\section{Relation with Guichard-Wienhard construction}
In this section, we show that both the fibered photon structure of Theorem \ref{t: fibered photons and maximal reps} and the maximal CFL structures of Theorem \ref{t:EinsteinHitchin} agree with the geometric structures constructed by Guichard-Wienhard in \cite{wienhardanosov}. As a corollary, we describe the topology of the geometric structures of Guichard-Wienhard.
\subsection{Geometrization ``\`a la Guichard-Wienhard''} Here we explain the construction of geometric structures in \cite{wienhardanosov} in the case of Anosov representations of a surface group in $\SO_0(2,n+1)$.
Let $P_1$ and $P_2$ be respectively the stabilizer of an isotropic line and of an isotropic 2-plane in $\R^{2,n+1}$.
In particular, $\SO_0(2,n+1)/P_1\cong \Ein^{1,n}$ is the Einstein Universe and $\SO_0(2,n+1)/P_2\cong \Pho(\R^{2,n+1})$ is the set of photons in $\R^{2,n+1}$.
Given a representation $\rho\in \Rep(\Gamma,\SO_0(2,n+1))$ which is $P_i$-Anosov ($i=1,2$), there exists a continuous $\rho$-equivariant map
\[\xi_i: \partial_\infty\Gamma \longrightarrow \SO_0(2,n+1)/P_i.\]
The following was established in \cite{labouriehyperconvex} for Hitchin representations and \cite{BILW} for maximal representations.
\begin{prop}\label{p:AnosovnessofMaximalAndHitchin}
If $\rho\in \Rep(\Gamma,\SO_0(2,n+1))$ is a maximal representation then it is $P_1$-Anosov. If $\rho\in \Rep(\Gamma,\SO_0(2,3))$ is a Hitchin representation, then $\rho$ is both $P_1$-Anosov and $P_2$-Anosov.
\end{prop}
If $\rho$ is $P_1$-Anosov, define the subset $K_\rho^2\subset \Pho(\R^{2,n+1})$ by
\[ K_\rho^2:=\left\{ V\in\Pho(\R^{2,n+1})\ |\ \xi_1(x)\subset V\ \text{for\ some}\ x\in \partial_\infty\Gamma \right\},\]
and if $\rho$ is $P_2$-Anosov, define the subset $K_\rho^1\subset \Ein^{1,n}$ by
\[K_\rho^1:=\left\{ \ell\in\Ein^{1,n}\ |\ \ell\subset \xi_2(x)\ \text{for\ some}\ x\in \partial_\infty\Gamma \right\}.\]
Note that $K_\rho^2$ is homeomorphic to $\S^1\times \S^{n-1}$. If $\rho\in \Rep(\Gamma,\SO_0(2,3))$ is a $P_2$-Anosov representation, then $K_\rho^1$ is homeomorphic to $\S^1\times\S^1.$
The following is proved in \cite{wienhardanosov}:
\begin{theo}\label{t: GW properly discontinuous}
If $\rho\in \Rep\big(\Gamma, \SO_0(2,n+1)\big)$ is $P_1$-Anosov, then $\rho(\Gamma)$ acts properly discontinuously and co-compactly on the set
\[\Omega_\rho^2= \Pho(\R^{2,n+1})\setminus K_\rho^2.\]
Also, if $\rho\in \Rep\big(\Gamma, \SO_0(2,n+1)\big)$ is $P_2$-Anosov, then $\rho(\Gamma)$ acts properly discontinuously and co-compactly on the set
\[\Omega_\rho^1= \Ein^{1,n}\setminus K_\rho^1.\]
Moreover, the topology of the quotient $\rho(\Gamma)\backslash\Omega_\rho^i$ remains constant as the representation $\rho$ is varied continuously (Theorem 9.2 of \cite{wienhardanosov}).
\end{theo}
\begin{rmk}
Recall that the dimension of $\Pho(\R^{2,n+1})$ is $2n-1$. Thus, the space $K_\rho^2$ has codimension $n-1$. However, since little is known about the regularity of the map $\xi_1$, we cannot automatically conclude connectivity statements for $\Omega_\rho^2$.
\end{rmk}
\subsection{Equivalence of the photon structures}
Here we prove that the fibered photon structures constructed in Theorem \ref{t: fibered photons and maximal reps} are equivalent to those of Guichard-Wienhard.
\begin{theo}\label{t:Equialent Photon Structures}
Let $\rho$ be a maximal representation from $\Gamma$ to $\SO_0(2,n+1)$. Let $\mathrm{O}(U,V)$ be the iterated sphere bundle of Section \ref{ss:geometrization} and $\dev$ the developing map of the photon structure on $\mathrm{O}(U,V)$ constructed in Proposition \ref{p:s gives fibered photon structure}. Then $\dev$ takes values in $\Omega_\rho^2$ and induces a diffeomorphism from $\mathrm{O}(U,V)$ to $\rho(\Gamma) \backslash \Omega_\rho^2$.
\end{theo}
\begin{proof}
Let $\rho\in\Rep^{max}(\Gamma,\SO_0(2,n+1))$ be a maximal representation, denote by $u:\widetilde\Sigma\to\mathbb{H}^{2,n}$ the $\rho$-equivariant maximal surface and by $\xi:\partial_\infty\Gamma\to \Ein^{1,n}\cong\partial \mathbb{H}^{2,n}$ the $\rho$-equivariant continuous map given by the Anosov property of $\rho.$ Recall that the boundary of $u(\widetilde\Sigma)$ corresponds to $\xi(\partial_\infty\Gamma).$ We will show that the developing map of the fibered photon structure of Theorem \ref{t: fibered photons and maximal reps} maps bijectively onto the Guichard-Wienhard domain $\Omega_\rho^2.$
In the construction of the fibered photon structure of Theorem \ref{t: fibered photons and maximal reps}, the developing map sends the fiber of the iterated sphere bundle over a point $x\in \widetilde\Sigma$ bijectively to the set of photons contained in the orthogonal of $u(x)$ in $\R^{2,n+1}$.
By Lemma \ref{l:HyperplaneSeparatesS}, the boundary of $u(\widetilde\Sigma)$ does not intersect $u(x)^\perp$ for any $x\in\widetilde\Sigma$.
In particular, the image of the developing map of the space-like fibered photon structure associated to $\rho$ is contained in the domain $\Omega_\rho^2$.
For the other inclusion, suppose $V\subset\Pho(\R^{2,n+1})$ is a photon and denote its orthogonal by $V^\perp.$ The restriction of the quadratic form $\mathbf{q}$ to $V^\perp$ is non-positive, and vanishes exactly on the subspace $V$. Thus, the subspace $V^\perp$ can be approximated by a sequence $W_k$ of $(n+1)$-dimensional negative definite subspaces.
By Corollary \ref{c:IntersectionSpacelikeSurface}, each subspace $W_{k}$ intersects the surface $u(\widetilde\Sigma)$ in exactly one point. Thus, $V^\perp$ intersects either $u(\widetilde \Sigma)$ or its boundary. This gives rise to a dichotomy:
\begin{itemize}
\item If $V^\perp$ intersects $u(\widetilde\Sigma)$ at a point $x,$ then $V$ is contained in $\Pho(x^\perp)$ and in the image of developing map of the fibered photon structure.
\item If $V^\perp$ intersects the boundary of $u(\widetilde\Sigma)$ at a point $\xi(x),$ then $V$ contains the negative isotropic line $\xi(x)$, and so $V$ belongs to $K^2_\rho$.
\end{itemize}
Therefore, the developing map of the fibered photon structure from Theorem \ref{t: fibered photons and maximal reps} maps surjectively onto $\Omega_\rho^2$.
\end{proof}
The following corollary is immediate:
\begin{coro}
If $\rho: \Gamma \longrightarrow \SO_0(2,n+1)$ is a maximal representation, then the quotient $\rho(\Gamma)\backslash\Omega^2_\rho$ of the Guichard-Wienhard discontinuity domain is homeomorphic to an iterated sphere bundle over $\Sigma$ and the topology of the bundle characterizes the connected component of $\rho$.
\end{coro}
By Lemma \ref{l: O(U,V) bundle description}, for $\SO_0(2,3)$ we can say a little more.
\begin{coro}
For $\rho: \Gamma \longrightarrow \SO_0(2,3)$ maximal, the quotient $\rho(\Gamma)\backslash\Omega^2_\rho$ of the Guichard-Wienhard discontinuity domain
\begin{itemize}
\item is homeomorphic to a connected $\mathrm{O}(2)$-bundle over $\Sigma$ with Stiefel-Whitney classes $(sw_1,sw_2)$ if $\rho\in\Rep^{max}_{sw_1,sw_2}(\Gamma,\SO_0(2,3))$,
\item is homeomorphic to the disjoint union of two circle bundles of degree $2g-2+d$ and $2g-2-d$ if $\rho\in\Rep^{max}_{d}(\Gamma,\SO_0(2,3))$.
\end{itemize}
\end{coro}
\medskip
\noindent\textbf{The Hitchin component.} For a representation $\rho\in \Hit\big(\Gamma,\SO_0(2,3) \big)$ in the Hitchin component, we have more information about the quotient $\rho(\Gamma)\backslash\Omega^2_\rho$.
More explicitly, Guichard and Wienhard proved in \cite{guichardwienhardsl4} that the quotient of the domain of discontinuity by a Hitchin representation in $\text{PSp}(4,\R)$ gives rise to non-equivalent $(\text{PSp}(4,\R),\ProjR3)$-structures, one being convex, the other not. Using the isomorphism $\text{PSp}(4,\R)\cong \SO_0(2,3)$, the homogeneous space $\Pho(\R^{2,3})$ of photons in $\R^{2,3}$ is identified with the space of lines in $(\R^4,\omega)$, where $\omega$ is a symplectic form on $\R^4$. In particular, a $(\text{PSp}(4,\R),\ProjR3)$-structure is equivalent to a photon structure. We show the following
\begin{prop}
Given a Hitchin representation $\rho\in\Hit(\Gamma,\SO_0(2,3))$, the photon structure on the degree $6g-6$ circle bundle $\SO(U,V)$ constructed in Subsection \ref{ss:geometrization} is equivalent to the non-convex projective structure described above, while the photon structure on the degree $-2g+2$ circle bundle $\SO(U,\overline V)$ corresponds to the convex one.
\end{prop}
\begin{proof}
We will prove the result for the Fuchsian locus, which will give the result for the Hitchin component by continuity. Given a Fuchsian representation $j: \pi_1(\Sigma) \longrightarrow \PSL_2(\R)$, let $\rho:= m_{irr} \circ j \in\Hit(\Gamma,\text{PSp}(4,\R))$ be the image of $j$ by the irreducible representation $m_{irr}: \PSL_2(\R) \longrightarrow \text{PSp}(4,\R)$ corresponding to the action of $\PSL_2(\R)$ on $\R^4 \cong \text{Sym}^3(\R^2)$. Here, we identify $\R^2$ (respectively $\R^4$) with the set of degree 1 (respectively degree 3) homogeneous polynomials in 2 variables. The convex connected component of $\Omega^2_\rho$ corresponds to those polynomials having a real root and two complex conjugate ones, while the non-convex component corresponds to the set of polynomials having 3 distinct real roots.
The uniformization $u: \widetilde\Sigma \longrightarrow \H^2$ associated to $j$ gives an equivariant identification $T^1\widetilde\Sigma \cong \partial_\infty {\H^2}^{(3)}$ where $\partial_\infty {\H^2}^{(3)}$ is the set of triples $(x_-,x_t,x_+)\in (\partial_\infty \H^2)^3$ of pairwise distinct points that are positively oriented. Indeed, given $(x,v)\in T^1\widetilde\Sigma$, there is a unique triple $(x_-,x_t,x_+)\in\partial_\infty {\H^2}^{(3)}$ such that the geodesic $\gamma$ passing through $du_x(v)$ intersects $\partial_\infty \H^2$ in the future at $x_+$, in the past at $x_-$, and the geodesic orthogonal to $\gamma$ at $x$ intersects the boundary at $x_t$ with $(x_-,x_t,x_+)$ positively oriented.
The developing map $\dev'$ corresponding to the non-convex projective structure is given by
$$\begin{array}{llll}
\dev': & T^1\widetilde\Sigma & \longrightarrow & \ProjR3 \\
& ([P_1],[P_2],[P_3]) & \longmapsto & [P_1P_2P_3]
\end{array}.$$
Note here that $\dev'$ is invariant under the $\Z_3$-action on $T^1\widetilde\Sigma$ generated by $([P_1],[P_2],[P_3]) \to ([P_2],[P_3],[P_1])$. In particular, $\dev'$ descends to a $\rho$-equivariant injective map
$$\dev^*: T^1\widetilde\Sigma /\Z_3 \longrightarrow \Omega^2_\rho.$$
The quotient of the image of $\dev^*$ by $\rho(\Gamma)$ is thus a circle bundle of degree $6g-6$.
Note also that the developing map $\dev: T^1 \widetilde\Sigma \longrightarrow \Omega^2_\rho$ corresponding to the convex foliated projective structure of Guichard--Wienhard is injective so the associated geometric structure is equivalent to the one on $\SO(U,\overline V)$.
\end{proof}
\subsection{Equivalence of Einstein structures}
For a Hitchin representation $\rho:\Gamma\to\SO_0(2,3)$ there is a Guichard-Wienhard domain $\Omega^1_\rho$ in $\Ein^{1,2}$ by Proposition \ref{p:AnosovnessofMaximalAndHitchin}.
Guichard--Wienhard's theorem (Theorem \ref{t: GW properly discontinuous}) implies that the action of $\rho(\Gamma)$ on $\Omega^1_\rho$ is properly discontinuous and co-compact. Actually, one can be a bit more precise. Mimicking their construction of projective structures associated to Hitchin representations into $\SL(4,\R)$ (see \cite{guichardwienhardsl4}), one can give\footnote{This construction was done in some working notes that Guichard and Wienhard kindly shared with us.} a $\rho$-equivariant parametrization of $\Omega^1_\rho$ by the set $\partial_\infty \Gamma^{(3)}$ of oriented triples of distinct points in $\partial_\infty \Gamma$. It follows that $\rho(\Gamma) \backslash \Omega^1_\rho$ is homeomorphic to $T^1\Sigma$. However, the circle bundle structure is not appearing in this construction.
Here, we prove that the conformally flat $3$-manifold associated to $\rho$ by Theorem \ref{t:EinsteinHitchin} is isomorphic (as a conformally flat $3$-manifold) to $\rho(\Gamma) \backslash \Omega^1_\rho$.
\begin{theo} \label{t:EquivalenceEinsteinGW}
Let $\rho: \Gamma \to \SO_0(2,3)$ be a Hitchin representation. Then the developing map $\dev_\rho$ constructed in Section \ref{ss:EinsteinHitchin} is a global homeomorphism from $T^1\widetilde{\Sigma}$ to $\Omega^1_\rho$.
\end{theo}
The proof is less straightforward than that of Theorem \ref{t:Equialent Photon Structures}. We first prove the following lemma, which settles the case when $\rho$ is Fuchsian, and then argue by continuity, using the Ehresmann--Thurston principle.
\begin{lem} \label{l:GWStructureFuchsianCase}
Suppose that $\rho = m_{irr} \circ j$, where $j: \Gamma \to \PSL(2,\R)$ is a Fuchsian representation and $m_{irr}: \PSL(2,\R) \to \SO_0(2,3)$ is the irreducible representation. The developing map $\dev_\rho$ constructed in Section \ref{ss:EinsteinHitchin} is a diffeomorphism onto $\Omega^1_\rho$.
\end{lem}
Lemma \ref{l:GWStructureFuchsianCase} shows in particular that for $\rho_0 = m_{irr} \circ j$, the manifold $\rho_0(\Gamma) \backslash \Omega^1_{\rho_0}$ is homeomorphic to $T^1 \Sigma$. Now, when $\rho$ varies continuously, the topology of $\rho(\Gamma) \backslash \Omega^1_\rho$ does not vary, and its Einstein structure varies continuously by \cite[Theorem 9.2]{wienhardanosov}. Therefore, the developing map $\dev_\rho$ constructed in the proof of Theorem \ref{t:EinsteinHitchin} and the identification of $T^1 \Sigma$ with $\rho(\Gamma) \backslash \Omega^1_\rho$ given by Theorem 9.2 of \cite{wienhardanosov} give two Einstein structures on $T^1\Sigma$ with the same holonomy $\rho$ and depending continuously on $\rho$. Since the two Einstein structures coincide at $\rho_0 = m_{irr}\circ j$, they coincide on the whole connected component of $\rho_0$ according to the Ehresmann--Thurston principle. This concludes the proof of Theorem \ref{t:EquivalenceEinsteinGW}.
\begin{proof}[Proof of Lemma \ref{l:GWStructureFuchsianCase}]
The specificity of the Fuchsian case is that the developing map extends as a $\PSL(2,\R)$-equivariant map from $T^1 \H^2$ to $\Ein^{1,2}$.
Let us recall that the irreducible representation of $\SL(2,\R)$ in dimension $n+1$ is given by the action of $\SL(2,\R)$ on the space $\R_n[X,Y]$ of homogeneous polynomials of degree $n$ in two variables $X$ and $Y$. This action preserves the bilinear form $Q_n$ given in the coordinate system
\[(X^n, X^{n-1}Y, \ldots , XY^{n-1} , Y^n)\] by the matrix
\[\left(\begin{matrix}
&&&& a_{n,0} \\
&&&-a_{n,1}&\\
&&\iddots&&\\
&(-1)^{n-1} a_{n,n-1}&&&\\
(-1)^n a_{n,n}&&&&
\end{matrix}\right)\]
where $a_{n,k} = \frac{k! (n-k)!}{n!}$.
This bilinear form is anti-symmetric for $n$ odd and symmetric of signature $(n/2, n/2+1)$ when $n$ is even. In particular, for $n=2$, the quadratic form $-2 Q_2$ is the discriminant of quadratic polynomials, and this representation gives the isomorphism $\PSL(2,\R) \simeq \SO_0(2,1)$. The hyperbolic plane $\H^2$ thus identifies with the projectivisation of the set of quadratic polynomials with negative discriminant (that is, scalar products on $\R^2$) while $\partial_\infty \H^2$ identifies with the projectivisation of the set of quadratic polynomials with vanishing discriminant (that is, squares of linear forms).
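For instance, writing a quadratic polynomial as $P = c_0 X^2 + c_1 XY + c_2 Y^2$, the coefficients $a_{2,0}=a_{2,2}=1$ and $a_{2,1}=\frac{1}{2}$ give
\[ Q_2(P) = 2 c_0 c_2 - \tfrac{1}{2} c_1^2~, \qquad -2Q_2(P) = c_1^2 - 4 c_0 c_2~, \]
which is indeed the discriminant of $P$.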
Let $j: \Gamma \to \PSL(2,\R)$ be a Fuchsian representation. We identify $j$ with its composition with the isomorphism $\PSL(2,\R) \simeq \SO_0(2,1)$. Now, $\R^{2,3}$ identifies with $\left(\R_4[X,Y], - Q_4\right)$, and the irreducible representation described above is the representation $m_{irr}$.
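For the computations below it is convenient to record the explicit form of $Q_4$: writing a quartic polynomial as $P = c_0 X^4 + c_1 X^3Y + c_2 X^2Y^2 + c_3 XY^3 + c_4 Y^4$, the coefficients $a_{4,0}=a_{4,4}=1$, $a_{4,1}=a_{4,3}=\frac{1}{4}$ and $a_{4,2}=\frac{1}{6}$ give
\[ Q_4(P) = 2 c_0 c_4 - \tfrac{1}{2} c_1 c_3 + \tfrac{1}{6} c_2^2~. \]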
In this setting, the boundary map $\xi_0: \partial_\infty \Gamma \to \Ein^{1,2}$ given by the Anosov property of $\rho_0$ is identified with the $\PSL(2,\R)$-equivariant map
\[\function{\xi_0}{\partial_\infty \H^2}{\Ein^{1,2}}{[L^2]}{[L^4]~.}\]
(Here, $[L^2]$ denotes the projective class of the square of a linear form on $\R^2$.)
Moreover, given a point $[L^2]$ in $\partial_\infty{\H^2}$, the photon $\xi_1([L^2])$ is the tangent to $\xi_0$ at $[L^2]$. It is thus the projectivisation of the space of polynomials of the form $L^3 L'$, where $L'$ is a linear form. We conclude that the domain $\Omega^1_{\rho_0}$ of Guichard and Wienhard is the complement in $\Ein^{1,2}$ of the set of polynomials having a triple root.
On the other side, the $\rho_0$-invariant maximal surface in $\H^{2,2}$ is the image of the $\PSL(2,\R)$-equivariant map
\[\function {f}{\H^2}{\H^{2,2}}{[P]}{[P^2]~.}\]
(Here, $[P]$ denotes the projective class of a positive definite quadratic form on $\R^2$.)
Let $P$ be a positive definite quadratic form on $\R^2$. The tangent space to this maximal surface at the point $f([P])$ is the projective space of polynomials of the form $PQ$, with $Q\in \R_2[X,Y]$. Since none of these polynomials has a triple root, the intersection of this tangent space with $\Ein^{1,2}$ is contained in the domain $\Omega^1_{\rho_0}$. By construction of the developing map $\dev_{\rho_0}$ it follows that $\dev_{\rho_0}$ takes values into $\Omega^1_{\rho_0}$.
Let $[P]$ and $[Q]$ be two distinct points in $\H^2$. Then the intersection between the tangent spaces to $f(\H^2)$ at $f([P])$ and $f([Q])$ is the point $[PQ]$, which never belongs to $\Ein^{1,2}$. Indeed, up to applying an element of $\PSL(2,\R)$, one can assume that $[P] = [X^2+Y^2]$ and $[Q] = [aX^2 + bY^2]$. One easily computes that
\[Q_4(PQ) = \frac{1}{6}(a+b)^2 + 2ab~,\]
which never vanishes when $a$ and $b$ are positive. By construction, it follows that $\dev_{\rho_0}$ is injective.
Let us finally prove that $\dev_{\rho_0}$ is surjective onto $\Omega^1_{\rho_0}$. Let $P$ be a non-zero polynomial of degree $4$ such that $Q_4(P) = 0$. Assume by contradiction that $[P]$ is not in the image of $\dev_{\rho_0}$. Then $P$ is not divisible by a positive definite quadratic form and $P$ thus splits as a product of $4$ linear forms. If all these linear forms are co-linear, then $[P]$ belongs to the image of $\xi_0$ and thus not to $\Omega^1_{\rho_0}$. Otherwise, one can assume (up to applying an element of $\PSL(2,\R)$) that $P$ has the form
\[XY(aX+bY)(cX+dY)~.\]
One then computes that
\[Q_4(P) = \frac{1}{6}\left((ad)^2 + (bc)^2 -adbc\right)~.\]
Since the polynomial $A^2 +B^2 - AB$ is positive definite, the fact that $Q_4(P)$ vanishes implies that both $ad$ and $bc$ vanish, from which we easily deduce that $P$ is divisible by $X^3$ or $Y^3$. Therefore, $P$ belongs to the complement of $\Omega^1_{\rho_0}$.
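The value of $Q_4(P)$ used above can be checked directly: expanding gives
\[ P = ac\, X^3Y + (ad+bc)\, X^2Y^2 + bd\, XY^3~, \]
so the coefficients $a_{4,1}=a_{4,3}=\frac{1}{4}$ and $a_{4,2}=\frac{1}{6}$ in the matrix of $Q_4$ yield
\[ Q_4(P) = -\tfrac{1}{2}(ac)(bd) + \tfrac{1}{6}(ad+bc)^2 = \tfrac{1}{6}\left((ad)^2 + (bc)^2 - adbc\right)~. \]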
By contraposition, we deduce that, if $[P]$ belongs to $\Omega^1_{\rho_0}$, then $P$ is divisible by a positive definite quadratic form. Therefore, the developing map $\dev_{\rho_0}$ is surjective onto $\Omega^1_{\rho_0}$. This concludes the proof of Lemma \ref{l:GWStructureFuchsianCase}.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
Kundt spacetimes \cite{KundtSpacetimes,ExactSolutions,GenKundt} have been seen to form an important class of spacetimes for the study of scalar curvature invariants on Lorentzian manifolds. In \cite{CharBySpi} and \cite{SCPI} the authors identified a large subclass, the degenerate Kundt spacetimes, for which there exist smooth deformations of the metrics leaving the scalar curvature invariants fixed while going outside the orbit of the metrics. Conversely, they also showed that in dimension four, any Lorentzian metric having such a deformation must belong to the degenerate Kundt class. The key feature which gives rise to degeneracies in the scalar polynomial curvature invariants is the presence of some null direction for which the curvature tensor and all its covariant derivatives are of type $II.$ An open question of current interest is whether all such metrics in arbitrary dimensions belong to the Kundt class.
Another interesting aspect is the rich connection between Kundt spacetimes and \emph{CSI} metrics, defined as metrics for which all scalar polynomial curvature invariants are constant across the manifold. In
\cite{CharBySpi} the authors proved that a four-dimensional Lorentzian manifold with constant scalar curvature invariants is either locally homogeneous or a Kundt spacetime. In \cite{CSI}, \cite{CSI3} and \cite{CSI4} they found necessary and sufficient equations for Kundt metrics in dimension three and four to be $CSI.$ This classification shows examples of Lorentzian CSI metrics which do not contain a single local Killing vector field, in sharp contrast to the Riemannian signature, where any CSI metric is automatically locally homogeneous.
Locally homogeneous metrics can be characterized by having a locally transitive collection of Killing vector fields. In \cite{herviknew} the author, in an attempt to give a similar characterization of CSI metrics, gave a generalization of Killing vector fields, the nil-Killing vector fields, defined by requiring that the Lie derivative of the metric gives a nilpotent operator. It was noted that any Kundt vector field lies within this class. The nil-Killing vector fields were further studied in \cite{IDIFF} and \cite{NilKilling}, showing that under the assumption of algebraic preservation, they constitute a Lie algebra. It was seen that any CSI metric has a locally transitive collection of nil-Killing vector fields due to the local homogeneity of the transverse metric.
In this paper we use the flow properties of Kundt and nil-Killing vector fields in order to construct natural $G$-structures $P\rightarrow M$, for which they are realized as infinitesimal automorphisms of $P.$ These $G$-structures are shown to have a number of intrinsic properties. They give rise to an algebraic classification of tensors allowing for full contractions of even-ranked tensors of type $II.$ In addition they have a natural class of metrics associated to them which constitutes an affine space over symmetric rank two tensors of type $III.$
We go on to define \emph{Kundt structures} by an integrability criterion together with the requirement that certain local infinitesimal automorphisms exist. The collection of metrics belonging to such $G$-structures is automatically contained in the Kundt class.
Thus we obtain machinery producing natural classes of metrics, defined up to type $III$ tensors, in such a way that even-ranked type $II$ tensors play a prominent role and infinitesimal automorphisms have properties analogous to those of the Kundt vector fields. Since all constructed deformations leaving the scalar polynomial invariants fixed have been achieved by deforming the metric in the direction of type $III,$ we hope that these $G$-structures can be employed to understand deformations of metrics for which the curvature tensor and all its covariant derivatives are of type $II.$
Motivated by the characterization of CSI metrics through the application of nil-Killing vector fields, we characterize all $G$-invariant Kundt structures on homogeneous manifolds $G/H$. The idea is to present a CSI metric by a homogeneous space, in such a way that the left multiplication maps have the properties given by the flows of nil-Killing vector fields in analogy with representing a homogeneous Lorentzian manifold as the quotient of the isometry group by the isotropy group.
\section{G-structures}
Here we briefly present the notion of $G$-structures given in \cite{KN1} and \cite{KN2}. Suppose that $M$ is an $n$-dimensional manifold and $G\subset GL(n,\mathbb{R})$ is a Lie group. Recall that a $G$-structure on $M$ is a sub-principal bundle $P\overset{\pi}{\rightarrow}M$ with structure group $G$ of the principal frame bundle. Hence given $x\in M$ each fibre $\pi^{-1}(x)$ is a subset of $GL(\mathbb{R}^{n},T_{x}M)$ which is invariant under the right action of $G$ and on which $G$ acts freely.
If $f$ is a diffeomorphism of $M$ then $f$ induces an automorphism of the frame bundle by letting its derivative act on frames. We say that $f$ is an automorphism of the $G$-structure $P\overset{\pi}{\rightarrow} M$ if the induced automorphism of the frame bundle maps $P$ into itself. We let $Aut(P)$ denote the group of such diffeomorphisms.
A vector field $X\in \mathcal{X}(M)$ with flow $\phi_{t}$ is said to be an infinitesimal automorphism of $P\overset{\pi}{\rightarrow} M$ if $\phi_{t}\in Aut(P),$ for all $t.$
\section{The structure group}
In this section we shall discuss the structure group $GN$, which will be considered in the rest of the paper. The elements of $GN$ are linear transformations on Minkowski space characterized by two properties: they preserve the algebraic structure given by some fixed null-line $\lambda$, and they preserve arbitrary full contractions when acting on even-ranked tensors which are of type $II$ with respect to $\lambda$.
\newline
Consider $\mathbb{R}^{n}$ endowed with the Minkowski inner-product
\begin{equation}
\eta= \omega^{1}\otimes\omega^{2} + \omega^{2}\otimes\omega^{1} +\omega^{3}\otimes\omega^{3}+\cdots + \omega^{n}\otimes\omega^{n},
\end{equation}
where $\{\omega^{1},\dots, \omega^{n}\}$ is the dual of the standard basis $\{e_{1},\dots, e_{n}\}.$ In order to follow convention we shall rename the basis elements by letting $\{k,l,m_{1},\dots, m_{n-2}\}=\{e_{1},\dots, e_{n}\}$.
We let $GN\subset GL(n,\mathbb{R})$ denote the Lie group of invertible linear transformations $f$ satisfying the following:
\begin{enumerate}[i)]
\item $f(k)\in \mathbb{R}k,$
\item $\eta(f(w),f(\tilde{w}))=\eta(w,\tilde{w}),\forall w,\tilde{w}\in \{k\}^{\perp},$
\item $\eta(f(k),f(z))=\eta(k,z),\forall z\in \mathbb{R}^{n}.$
\end{enumerate}
Note in particular that $GN$ is not contained in $\mathcal{O}(1,n-1).$
The elements of $GN$ can be written as matrices of the form
\begin{equation}
f=
\begin{bmatrix}
a & b_{1} & a_{1} &\dots & a_{n-2}\\
0 & a^{-1} & 0 & \dots & 0 \\
0 & b_{2} & c_{11} & \dots & c_{1(n-2)}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & b_{n-1} & c_{(n-2)1} & \dots & c_{(n-2)(n-2)}
\end{bmatrix},
\end{equation}
where $a\neq0,$ $a_{i},b_{i}$ are arbitrary numbers and $[c_{ij}]$ is an orthogonal matrix.
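As an informal sanity check of this parametrization (not part of the formal development; the code and all names in it are ours), the defining conditions $i)-iii)$ can be verified numerically, here for $n=4$ with an arbitrary choice of the parameters $a$, $a_{i}$, $b_{i}$ and a plane rotation for $[c_{ij}]$:

```python
# Verify conditions i)-iii) defining GN for the parametrized matrix,
# with n = 4 and the null frame {k, l, m1, m2} (columns of F below).
import math

n = 4
# Minkowski inner product: eta(k,l) = 1, eta(mi,mj) = delta_ij.
eta = [[0, 1, 0, 0],
       [1, 0, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

def pair(u, v):
    """eta(u, v) for component vectors in the frame {k, l, m1, m2}."""
    return sum(eta[i][j] * u[i] * v[j] for i in range(n) for j in range(n))

# Parameters: a != 0, arbitrary a_i and b_i, [c_ij] orthogonal
# (here a rotation by an angle t in the m1-m2 plane).
a, t = 2.0, 0.3
c, s = math.cos(t), math.sin(t)
F = [[a,   0.7,  0.1, -0.4],   # columns: f(k), f(l), f(m1), f(m2)
     [0.0, 1/a,  0.0,  0.0],
     [0.0, 0.5,  c,   -s  ],
     [0.0, -1.3, s,    c  ]]

def col(j):
    return [F[i][j] for i in range(n)]

k, fk = [1.0, 0.0, 0.0, 0.0], col(0)
# i)   f(k) is proportional to k.
assert all(abs(fk[i]) < 1e-12 for i in range(1, n))
# ii)  eta is preserved on {k}^perp = span{k, m1, m2}.
basis_perp = [[1.0, 0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
image_perp = [col(0), col(2), col(3)]
for u, fu in zip(basis_perp, image_perp):
    for v, fv in zip(basis_perp, image_perp):
        assert abs(pair(fu, fv) - pair(u, v)) < 1e-12
# iii) eta(f(k), f(z)) = eta(k, z) for every basis vector z.
for j in range(n):
    z = [1.0 if i == j else 0.0 for i in range(n)]
    assert abs(pair(fk, col(j)) - pair(k, z)) < 1e-12
print("GN conditions i)-iii) verified")
```

Note that the block only illustrates one choice of parameters; the conditions hold for any $a\neq 0$, any $a_{i},b_{i}$ and any orthogonal $[c_{ij}]$ by the computation above.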
Letting $\lambda$ be the null-line spanned by $k$, we have the following characterization, which is a special case of a result from section 2 of \cite{NilKilling}:
\begin{proposition}
Suppose that $f\in GL(n,\mathbb{R})$; then the following are equivalent:
\begin{enumerate}[i)]
\item $f\in GN$.
\item $f(\lambda)\subset \lambda$ and $f^{*}\eta -\eta$ is of type $III$ w.r.t. $\lambda.$
\item The induced map $f^*$ on tensors preserves algebraic type and for any given even ranked tensor $T$ of type $II$ w.r.t. $\lambda,$ full contractions are preserved, i.e., \begin{equation}
Tr(f^{*}T)=Tr(T).
\end{equation}
\end{enumerate}
\end{proposition}
In particular we see from \cite{NilKilling} that if $f\in GN$ and $T$ is any even-ranked type $II$ tensor, then the metrics $\eta$ and $f^{*}\eta$ induce the same full contractions of $T.$
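Both claims can be checked numerically for a concrete example; the following pure-Python sketch (our own, illustrative only; $n=3$ with an arbitrary choice of $GN$ parameters) verifies that $f^{*}\eta-\eta$ is of type $III$ and that the full contraction of a type $II$ two-tensor is unchanged:

```python
# Numerical illustration (n = 3, null frame {k, l, m}): for f in GN,
# f*eta - eta is of type III, and the full contraction
# Tr(T) = eta^{ab} T_{ab} of a type II two-tensor is preserved.
n = 3
eta = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]  # eta is its own inverse in this frame

# f in GN with a = 1:  f(k) = k,  f(l) = b1*k + l + b2*m,  f(m) = a1*k + m.
b1, b2, a1 = 0.7, -1.2, 0.4
F = [[1.0, b1,  a1 ],
     [0.0, 1.0, 0.0],
     [0.0, b2,  1.0]]

def pullback(T):
    """(f*T)_{ab} = T_{cd} F^c_a F^d_b, i.e. the matrix F^T T F."""
    return [[sum(F[c][i] * T[c][d] * F[d][j] for c in range(n) for d in range(n))
             for j in range(n)] for i in range(n)]

def trace(T):
    """Full contraction eta^{ab} T_{ab}."""
    return sum(eta[i][j] * T[i][j] for i in range(n) for j in range(n))

# f*eta - eta is nonzero only in the (l,l) and (l,m) slots,
# i.e. it lies in lambda* (x)_s Lambda*: a type III tensor.
D = pullback(eta)
diff = [[D[i][j] - eta[i][j] for j in range(n)] for i in range(n)]
type_III_slots = {(1, 1), (1, 2), (2, 1)}          # (l,l), (l,m), (m,l)
for i in range(n):
    for j in range(n):
        if (i, j) not in type_III_slots:
            assert abs(diff[i][j]) < 1e-12

# A symmetric type II two-tensor: the positive boost-order
# components (the (k,k) and (k,m) slots) vanish.
p, q, r, s2 = 0.9, -0.3, 1.1, 2.5
T = [[0.0, p,  0.0],
     [p,   q,  r  ],
     [0.0, r,  s2 ]]
assert abs(trace(pullback(T)) - trace(T)) < 1e-12   # Tr(f*T) = Tr(T)
print("type III difference and trace preservation verified")
```

In this example $Tr(T)=2p+s_{2}$, and the check confirms that the contraction is insensitive to the type $III$ change of $\eta$.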
\section{GN-structures}\label{KSGNStruct}
In this section we shall study $GN$-structures, and show that they induce an algebraic classification of tensors in analogy with the boost-order classification induced by a Lorentzian metric with a null-distribution \cite{AlgClass}. Moreover, we shall see that each $GN$-structure gives rise to a collection of metrics which forms an affine space over symmetric rank $2$ tensors of type $III$ in the algebraic classification.
\newline
We start by showing that to each $GN$-structure, $P\overset{\pi}{\rightarrow} M$, we can associate two distributions $\lambda,\Lambda$ on $M$ of dimension $1$ and codimension $1,$ respectively.
Given $x\in M$ and $u,\tilde{u}\in \pi^{-1}(x),$ there exists an element $a\in GN$ such that $\tilde{u}=ua.$ Since $a$ leaves the subspaces $\mathbb{R}k$ and $\{k\}^{\perp}$ invariant it follows that $$\tilde{u}(\mathbb{R}k)=ua(\mathbb{R}k)=u(\mathbb{R}k)$$ and $$\tilde{u}(\{k\}^{\perp})=ua(\{k\}^{\perp})=u(\{k\}^{\perp}).$$ Thus at each point $x\in M$ we have two well-defined subspaces
\begin{equation}
\lambda_{x} :=\{u(\mathbb{R}k):u\in \pi^{-1}(x)\}\subset T_{x}M,
\end{equation}
\begin{equation}
\Lambda_{x} :=\{u(\{k\}^{\perp}):u\in \pi^{-1}(x)\}\subset T_{x}M,
\end{equation}
of dimension one and codimension one respectively. We let $\lambda$ and $\Lambda$ be the distributions defined by these subspaces at each point in $M,$ and refer to them as the distributions associated with the $GN$-structure $P\overset{\pi}{\rightarrow} M$.
On the cotangent bundle we let $\lambda^{*}$ and $\Lambda^{*}$ be the annihilators of $\Lambda$ and $\lambda$ respectively, i.e.,
\begin{equation}
\lambda^{*}_{x}=\{\omega\in T^{*}_{x}M:\omega(X)=0, \, \forall \, X\in \Lambda_{x}\}
\end{equation}
and
\begin{equation}
\Lambda^{*}_{x}=\{\omega\in T^{*}_{x}M:\omega(X)=0, \, \forall \, X\in \lambda_{x}\}.
\end{equation}
Using this we give an algebraic classification of tensors as follows: Let
\begin{equation}
TM^1=TM,\quad TM^0=\Lambda,\quad TM^{-1}=\lambda
\end{equation}
and
\begin{equation}
T^{*}M^{1}=T^{*}M,\quad T^{*}M^{0}=\Lambda^{*},\quad T^{*}M^{-1}=\lambda^{*}.
\end{equation}
Let $\mathcal{D}^{r}_{t}(M)$ denote the $C^{\infty}(M)$-module of tensors with $r$ covariant and $t$ contravariant factors. Following standard terminology from \cite{AlgClass} we define the sub-module of tensors with boost-order $s$ by
\begin{equation}\label{KSBoostorder}
\bigoplus_{\substack{s_{1}+\cdots +s_{r+t}=s\\s_{i}\in \{-1,0,1\}}}T^{*}M^{s_{1}}\otimes \cdots\otimes T^{*}M^{s_{r}}\otimes TM^{s_{r+1}}\otimes\cdots\otimes TM^{s_{r+t}}.
\end{equation}
Tensors of boost-order $0$ and $-1$ are said to be of type $II$ and $III$ respectively. Thus $GN$-structures induce an algebraic boost-order classification of tensors similar to that of a Lorentzian manifold with a given null-distribution.
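For rank-2 covariant tensors this classification can be made concrete in a small sketch (ours, illustrative only, $n=3$): an index contributes weight $+1$ from the $k$-dual factor, $-1$ from the $l$-dual factor (which spans $\lambda^{*}$), and $0$ from the $m$-duals, and the boost order is the maximal total weight of a nonzero component.

```python
# Boost order of a covariant rank-2 tensor in a null frame
# {k, l, m1, ..., m_{n-2}}: weight +1 for the k-dual index, -1 for
# the l-dual index (spanning lambda*), 0 for the m-dual indices.
def boost_order(T, tol=1e-12):
    n = len(T)
    w = [1, -1] + [0] * (n - 2)        # weight of each dual-basis index
    return max(w[a] + w[b]
               for a in range(n) for b in range(n)
               if abs(T[a][b]) > tol)

# The Minkowski metric itself has boost order 0: it is of type II.
eta = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
assert boost_order(eta) == 0
# A symmetric tensor with only (l,l) and (l,m) components is of type III.
S = [[0, 0, 0], [0, 2.0, 0.5], [0, 0.5, 0]]
assert boost_order(S) == -1
```

The same rule extends to arbitrary rank by summing one weight per tensor factor, as in \eqref{KSBoostorder}.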
Next we proceed to show that each $GN$-structure induces a natural collection of Lorentzian metrics. Recall that the Lie subgroup $Sim(n-2)$ of $\mathcal{O}(1,n-1)$ is given by
\begin{equation}
Sim(n-2):=\{f\in \mathcal{O}(1,n-1):f(k)\in \mathbb{R}k\}.
\end{equation}
We see that $Sim(n-2)$ is a Lie subgroup of $GN,$ satisfying the relation
\begin{equation}
Sim(n-2)=GN\cap\mathcal{O}(1,n-1).
\end{equation}
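To illustrate this relation concretely (again an informal check of our own), a null rotation about $k$ preserves $\eta$ and fits the $GN$ parametrization with $a=1$ and $[c_{ij}]$ the identity, so it lies in $GN\cap\mathcal{O}(1,n-1)=Sim(n-2)$:

```python
# A null rotation about k:  k -> k,  l -> l + z_i m_i - (|z|^2/2) k,
# m_i -> m_i - z_i k.  It has the GN matrix form (a = 1, C = identity)
# and preserves eta exactly, hence lies in Sim(n-2) = GN cap O(1,n-1).
n = 4
eta = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
z = [0.6, -1.1]
z2 = sum(x * x for x in z)
F = [[1.0, -z2 / 2, -z[0], -z[1]],   # columns: f(k), f(l), f(m1), f(m2)
     [0.0, 1.0,      0.0,   0.0 ],
     [0.0, z[0],     1.0,   0.0 ],
     [0.0, z[1],     0.0,   1.0 ]]

# Pullback f*eta = F^T eta F; it equals eta, so f is an isometry.
P = [[sum(F[c][i] * eta[c][d] * F[d][j] for c in range(n) for d in range(n))
      for j in range(n)] for i in range(n)]
for i in range(n):
    for j in range(n):
        assert abs(P[i][j] - eta[i][j]) < 1e-12
print("null rotation about k lies in Sim(n-2)")
```

By contrast, a generic element of $GN$ (arbitrary $b_{i}$, $a_{i}$) changes $\eta$ by a type $III$ tensor and is not an isometry.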
Each $Sim(n-2)$-structure $Q\rightarrow M$ induces a metric $g$ on $M$ by declaring the elements of $Q$ to be null-frames for $g$. Using this we have the following definition.
\begin{definition}
Let $P\rightarrow M$ be a $GN$-structure on $M.$ A metric $g$ on $M$ is said to belong to $P$ if it is determined by a $Sim(n-2)$-subprincipal bundle of $P.$ The collection of such metrics is denoted by $\mathcal{M}.$
\end{definition}
\begin{characterisation}\label{metricsbelonging}
Let $P\rightarrow M$ be a $GN$-structure with associated distributions $\lambda$ and $\Lambda$. A metric $g$ belongs to $P$ if and only if with respect to $g$ the following are satisfied:
\begin{enumerate}[i)]
\item $\lambda$ is a null distribution.
\item $\lambda^{\perp}=\Lambda.$
\item If $(X,Y,Z_{i})\in P$, then $g(X,Y)=1$ and $g(Z_{i},Z_{j})=\delta_{ij},$ for all $i,j.$
\end{enumerate}
\end{characterisation}
\begin{proof}
Suppose first that $g$ satisfies the three conditions $i)-iii)$. By $i)$, $\lambda$ is a null-distribution with respect to this metric. Let $Q\rightarrow M$ be the $Sim(n-2)$-structure defined as the collection of null-frames with respect to $g$ whose first frame element points along $\lambda.$ The proof of this implication will be completed if we show that $Q\subset P.$ Given a point $p\in M$ and an element $(X,Y,Z_{i})$ of $P$ in the fibre above $p$, conditions $ii)$ and $iii)$ imply that we can find a vector $\tilde{Y}$ such that $(X,\tilde{Y},Z_{i})$ is a null-frame for $g$ and $Y=cX+\tilde{Y}+d^{k}Z_{k},$ for some coefficients $d^{k}$, for $k=1,\dots, n-2$. Now define $a\in GL(\mathbb{R}^{n})$ by its action on the standard null-frame $\{k,l,m_{i}\}$ on $\mathbb{R}^{1,n-1}$ by $$a(l)=ck+l +d^{i}m_{i},\quad a(k)=k, \quad a(m_{i})=m_{i},$$ for all $i.$ Then clearly $a\in GN$ and $(X,\tilde{Y},Z_{i})a=(X,Y,Z_{i})$. It follows that $(X,\tilde{Y},Z_{i})\in P$, and since $Sim(n-2)\subset GN$ acts transitively on the fibres of $Q$, the fibre of $Q$ above $p$ is contained in $P.$ This shows that $Q\subset P,$ so the metric belongs to $P.$ Conversely, if $g$ is determined by a $Sim(n-2)$-subprincipal bundle $Q\subset P,$ then $i)$ and $ii)$ are immediate since the elements of $Q$ are null-frames for $g$ whose first entry spans $\lambda,$ and $iii)$ follows since any element of $P$ is obtained from an element of $Q$ by the right action of $GN,$ which preserves the products in question.
\end{proof}
\begin{proposition}\label{KSaffine}
Given a $GN$-structure $P\rightarrow M$, the collection $\mathcal{M}$ of metrics belonging to $P$ is a non-empty affine space with respect to covariant symmetric rank $2$ tensors of type $III$.
\end{proposition}
\begin{proof}
Suppose that $g\in \mathcal{M}$ and $T$ is a symmetric two-tensor of type $III.$ Let $\lambda$ and $\Lambda$ be the distributions induced by $P$. By definition we see that $T\in \lambda^{*}\otimes_{s}\Lambda^{*}.$ It follows easily that $\lambda$ is a null-distribution for the metric $g+T$ such that $\lambda^{\perp}=\Lambda.$ Furthermore, if $(X,Y,Z_{i})\in P$, then $T(X,Y)=0$ since $X\in\lambda$. Moreover $Z_{i}\in \Lambda$ and hence $T(Z_{i},Z_{j})=0,$ for all $i,j$. Thus by characterisation \ref{metricsbelonging}, $g+T$ belongs to $P.$
Now suppose that $\tilde{g}\in \mathcal{M}$. If $(X,Y,Z_{i})\in P,$ then
\begin{equation}
(\tilde{g}-g)(X,Y)=\tilde{g}(X,Y)-g(X,Y)=0,
\end{equation}
and
\begin{equation}
(\tilde{g}-g)(Z_{i},Z_{j})=\tilde{g}(Z_{i},Z_{j})-g(Z_{i},Z_{j})=\delta_{ij} -\delta_{ij}=0.
\end{equation}
It follows readily that $\tilde{g}-g\in \lambda^{*}\otimes_{s}\Lambda^{*}$ and hence is a tensor of type $III.$
Now let us show that the collection $\mathcal{M}$ of metrics belonging to a $GN$-structure $P\rightarrow M$ is always non-empty. Given a point $p\in M$ we can find a neighborhood $U$ and a local section $s:U\rightarrow P.$ Let $g$ be the unique Lorentzian metric on $U$ such that $s(q)$ is a null-frame for $g,$ for all $q\in U.$ Then $g$ belongs to the $GN$-structure given by the restriction of $P$ to $U.$
Proceeding in this way we can find a locally finite covering $\{U_{\alpha}\}_{\alpha \in I}$ of $M$ and metrics $g_{\alpha}$ on $U_{\alpha}$ belonging to the restriction of $P$ to $U_{\alpha},$ for all $\alpha\in I.$ Now let $\{\psi_{\alpha}\}_{\alpha\in I}$ be a partition of unity subordinate to $\{U_{\alpha}\}_{\alpha \in I}$. Since any two metrics $g_{\alpha},g_{\beta}$ differ by a tensor of type $III$ on a non-empty intersection $U_{\alpha}\cap U_{\beta},$ it follows that
\begin{equation}
\sum_{\alpha\in I}\psi_{\alpha}g_{\alpha}
\end{equation}
gives a well-defined Lorentzian metric which belongs to $P.$ This finishes the proof.
\end{proof}
We have another characterisation of $GN$-structures in terms of certain quadruples, which will be needed later. On a manifold $M$ of dimension $n$ a \emph{null-quadruple} is defined to be a quadruple $(\lambda,\Lambda,g^{\perp},f)$ consisting of the following data:
\begin{enumerate}[i)]
\item Two distributions $\lambda$ and $\Lambda$ of dimension $1$ and $n-1$ respectively such that $\lambda\subset \Lambda.$
\item A symmetric positive-definite bilinear form $g^{\perp}$ on the quotient bundle $\Lambda/\lambda.$
\item A nowhere-vanishing section $f$ of the dual bundle $(\lambda\otimes TM/\Lambda)^{*}.$
\end{enumerate}
\begin{proposition}
On a manifold $M$ there is a $1:1$ correspondence between $GN$-structures $P\rightarrow M$ and null-quadruples $(\lambda,\Lambda,g^{\perp},f).$
\end{proposition}
\begin{proof}
If $P\rightarrow M$ is a $GN$-structure on $M$, let $\lambda$ and $\Lambda$ be the associated distributions. If $p\in M$ then taking any element $(X,Y,Z_{i})\in P$ above $p$ we see that $(X,Z_{i})$ determines a degenerate symmetric bilinear form $\beta$ on $\Lambda_{p}$ by setting $\beta(X,\cdot)=0$ and $\beta(Z_{i},Z_{j})=\delta_{ij}.$ Due to the form of $GN$ this gives a well-defined degenerate symmetric two-form on $\Lambda$ which is independent of the choice of element in $P.$ Moreover $\beta$ induces a positive definite symmetric bilinear form $g^{\perp}$ on the quotient bundle $\Lambda/\lambda$.
Given $p\in M,$ we define the map $f_{p}:\lambda_{p}\otimes T_{p}M/\Lambda_{p}\rightarrow\mathbb{R}$ as follows: Let $(X,Y,Z_{i})$ be an element of $P$ in the fibre above $p$ whose first entry is $X.$ Then we set \begin{equation}f_{p}(X\otimes[Y])=1,\end{equation} where $[Y]$ is the coset of $Y$ in $T_{p}M/\Lambda_{p}$, and extend $f_p$ to $\lambda_{p}\otimes T_{p}M/\Lambda_{p}$ by linearity. This map is well-defined since if $(\tilde{X},\tilde{Y},\tilde{Z}_{i})$ is another element of $P$, then due to the form of $GN$ we see that $\tilde{X}=bX$ and $\tilde{Y}=cX+b^{-1}Y+d^{i}Z_{i}$, for some $b,c,d^{i}\in \mathbb{R},$ which implies that $\tilde{X}\otimes[\tilde{Y}]=X\otimes[Y].$ Hence $(\lambda,\Lambda,g^{\perp},f)$ defines a null-quadruple.
Conversely, suppose we have a null-quadruple $(\lambda,\Lambda,g^{\perp},f)$. At each point $p\in M$ define $P_{p}$ to be the collection of bases $(X,Y,Z_{i})$ for $T_{p}M$ such that $X\in \lambda_{p}$, $f(X\otimes[Y])=1$ and the classes of $Z_{i}$ in $\Lambda_{p}/\lambda_{p}$ form an orthonormal basis with respect to $g^{\perp}$. Now if $a\in GN$ and $(X,Y,Z_{i})\in P_{p}$, then letting $(\tilde{X},\tilde{Y},\tilde{Z}_{i}):=(X,Y,Z_{i})a,$ we know that $$(\tilde{X},\tilde{Y},\tilde{Z}_{i})=(bX,cX+b^{-1}Y+d^{j}Z_{j},f^{i}X + \tensor{S}{_i^m}Z_{m}),$$
where $b,c,d^{k},f^{k}$ are constants and $\tensor{S}{_i^m}$ is an orthogonal $(n-2)\times(n-2)$ matrix. Clearly \begin{equation}
f(\tilde{X}\otimes[\tilde{Y}])=f((bX)\otimes[cX+b^{-1}Y+d^{j}Z_{j}])=f(X\otimes[Y])=1,
\end{equation} and since $\tensor{S}{_{i}^m}$ is orthogonal, the classes of $\tilde{Z}_{i}$ are again an orthonormal basis for $g^{\perp}$. Thus $(\tilde{X},\tilde{Y},\tilde{Z}_{i})\in P_{p},$ from which it follows that $P_{p}$ is invariant under the right action of $GN.$
Let us show that $GN$ acts transitively on $P_{p}$. Given any other element $(X^{\prime},Y^{\prime},Z_{i}^{\prime})\in P_{p}$, the conditions $f(X\otimes[Y])=f(X^{\prime}\otimes[Y^{\prime}])=1$ on the line bundle $\lambda\otimes TM/\Lambda$ force $X^{\prime}\otimes[Y^{\prime}]=X\otimes[Y],$ which implies that $X^{\prime}=bX$ and $Y^{\prime}=cX+b^{-1}Y+d^{j}Z_{j}$ for some constants $b,c,d^{j}$. Using this and the fact that the classes of $Z^{\prime}_{i}$ also give an orthonormal basis for $g^{\perp},$ it is clear that we can construct $a\in GN$ such that $(X^{\prime},Y^{\prime},Z^{\prime}_{i})=(X,Y,Z_{i})a.$ Thus it follows that $\cup_{p\in M}P_{p}$ is a $GN$-structure.
Clearly the correspondence between $GN$-structures and null-quadruples is a bijection.
\end{proof}
It is useful to characterize automorphisms and metrics belonging to $GN$-structures through this correspondence. Suppose therefore that $P\rightarrow M$ is a $GN$-structure corresponding to a null-quadruple $(\lambda,\Lambda,g^{\perp},f)$.
A diffeomorphism $\phi:M\rightarrow M$ is an automorphism of $P$ if and only if $\phi$ satisfies the following:
\begin{enumerate}[i)]
\item $\phi_{*}(\lambda)=\lambda$ and $\phi_{*}(\Lambda)=\Lambda,$
\item $\phi^{*}g^{\perp}=g^{\perp},$
\item $\phi^{*}f=f.$
\end{enumerate}
where the pull-backs in $ii)$ and $iii)$ are well-defined due to $i)$.
Furthermore it is clear from characterisation \ref{metricsbelonging} that a metric $g$ belongs to $P$ if and only if
\begin{enumerate}[i)]
\item $g(W,W^{\prime})=g^{\perp}([W],[W^{\prime}])$, for all $W,W^{\prime}\in \Lambda$,
\item $g(X,Y)=f(X\otimes[Y]),$ for all $X\in \lambda$ and $Y\in TM$.
\end{enumerate}
Let us now discuss some general features of this framework. Let $P\rightarrow M$ be a $GN$-structure corresponding to a null-quadruple $(\lambda,\Lambda,g^{\perp},f).$ If a metric $g$ belongs to $P,$ then since $\lambda$ is a null-distribution w.r.t. $g$ and $\lambda^{\perp}=\Lambda,$ it follows that the algebraic boost-order classification of tensors induced by $(M,g,\lambda)$ is the same as the algebraic classification derived from $P\rightarrow M$ given above in \eqref{KSBoostorder}.
Furthermore, if $\tilde{g}$ is another metric belonging to $P,$ then by proposition \ref{KSaffine}, $\tilde{g}-g$ is a tensor of type $III$ w.r.t. $\lambda.$ Now suppose that $T$ is an even-ranked type $II$ tensor. Then any full contraction of $T$ performed with respect to either $g$ or $\tilde{g}$ must be the same, since any contribution of a tensor of type $III$ must vanish.
Hence any $GN$-structure $P\rightarrow M$ gives well-defined full contractions
\begin{equation}
Tr(T),
\end{equation}
of any even-ranked tensor $T$ of type $II.$
Lastly, if $X$ is a vector field belonging to the null-distribution $\lambda,$ then using the functional $f\in(\lambda\otimes TM/\Lambda)^{*},$ the $GN$-structure allows us to define a dual form $X^{\natural}$ by
\begin{equation}
X^{\natural}(Z)=f(X\otimes [Z]),\quad \forall\, Z\in \mathcal{X}(M).
\end{equation}
We see that $X^{\natural}$ vanishes on $\Lambda,$ showing that $X^\natural\in \lambda^{*}.$ Clearly if $g\in \mathcal{M},$ then this dual coincides with the metric dual for vector fields belonging to $\lambda.$
\newline
Next we consider connections on $GN$-structures:
\begin{proposition}\label{KSConnections}
Let $P\rightarrow M$ be a $GN$-structure corresponding to a null-quadruple $(\lambda,\Lambda,g^{\perp},f)$ and suppose that $g$ is a metric belonging to $P.$ If $\Gamma$ is a linear connection on $M$ with covariant derivative $\nabla,$ then
the following are equivalent:
\begin{enumerate}[i)]
\item $\Gamma$ is a connection in $P$.
\item $\nabla_{Z} \lambda\subset \lambda$, $\nabla_{Z}\Lambda\subset \Lambda$ and $\nabla_{Z}g$ is of type $III,$ for all vector fields $Z.$
\item $\nabla$ restricts to an affine connection on $\lambda$ and $\Lambda$ such that $g^\perp$ and $f$ are covariantly constant w.r.t. the connections induced on $\Lambda/\lambda$ and $\lambda \otimes TM/\Lambda$ respectively.
\end{enumerate}
\end{proposition}
\begin{proof}
$"i)\Leftrightarrow ii)"$
Let $\{U_{\alpha}\}_{\alpha\in I}$ be a covering with local frames $\{k^{\alpha},l^{\alpha},m_{i}^{\alpha}\}_{\alpha\in I}$ of $P,$ such that $\{k^{\alpha},l^{\alpha},m_{i}^{\alpha}\}_{\alpha\in I}$ are null-frames for $g.$ Then $\Gamma$ is a connection on $P$ if and only if each local connection one-form
$\{A_{\alpha}\}_{\alpha\in I}$ has values in the Lie algebra $\mathfrak{gn}$ of $GN.$
Suppose that $\Gamma$ is a connection in $P$. If $Z\in \mathcal{X}(M),$ then
\begin{equation}
\begin{gathered}
(\nabla_{Z}g)(k^{\alpha},l^{\alpha})=-g(\nabla_{Z}k^{\alpha},l^{\alpha})-g(k^{\alpha},\nabla_{Z}l^{\alpha})\\=-g(\tensor{(A_{\alpha}(Z))}{^1_1}k^{\alpha},l^{\alpha})-g(k^{\alpha},\tensor{(A_{\alpha}(Z))}{^2_2}l^{\alpha})=0,
\end{gathered}
\end{equation}
where the last equality holds since $\tensor{(A_{\alpha}(Z))}{^1_1}=-\tensor{(A_{\alpha}(Z))}{^2_2}.$ By the same reasoning $(\nabla_{Z}g)(m^{\alpha}_{i},m^{\alpha}_{j})=0,$ since $\tensor{(A_{\alpha}(Z))}{^{i}_{j}}$ is skew-symmetric for $i,j\geq 3.$ It therefore follows that $\nabla_{Z}g$ is of type $III,$ $\nabla_{Z}\lambda\subset \lambda$ and $\nabla_{Z}\Lambda \subset \Lambda.$ In order to prove the converse one follows the same reasoning to show that $ii)$ implies that $A_{\alpha}(Z)\in \mathfrak{gn}.$
$ii)$ and $iii)$ are clearly equivalent since $g$ induces $g^{\perp}$ and $f$ on $\Lambda/\lambda$ and $\lambda\otimes TM/\Lambda$ respectively.
\end{proof}
\section{GN-structures and Nil-Killing vector fields}
Recall from \cite{herviknew,IDIFF} that if $(M,g)$ is a Lorentzian manifold then a vector field $X$ is said to be nil-Killing if $\tensor{(\mathcal{L}_{X}g)}{^{a}_{b}}$ is nilpotent. Given a point $p\in M,$ one can show that the operator corresponding to $\mathcal{L}_{X}g$ is nilpotent if and only if there is some null-line $\lambda_{p}\subset T_{p}M$ such that $(\mathcal{L}_{X}g)_{p}$ is of type $III$ w.r.t. $\lambda_{p}.$ Moreover, this null-line is unique provided that $X$ is not Killing.
Therefore if $\lambda$ is a null-distribution on $M$ we say that a vector field $X$ is nil-Killing with respect to $\lambda$ if $\mathcal{L}_{X}g$ is of type $III$ w.r.t. $\lambda.$ One can see that this is equivalent to
\begin{equation}
\mathcal{L}_{X}g(Y,Z)=0, \, \mathcal{L}_{X}g(W,\tilde{W})=0,
\end{equation}
for all $Y\in \lambda,$ $Z\in TM$ and $W,\tilde{W}\in \lambda^{\perp}.$
If in addition $[X,\lambda]\subset\lambda$, then $X$ preserves algebraic structure in the sense that if $\phi_{t}$ is the flow of $X,$ then $\phi_{t}$ preserves the boost-order of tensors, given by $\lambda.$ We shall refer to such vector fields as algebra preserving nil-Killing vector fields w.r.t. $\lambda$. It was seen in \cite{IDIFF} that the collection of such vector fields
\begin{equation}
\mathfrak{g}_{(g,\lambda)} :=\{\text{Algebra preserving nil-Killing vector fields w.r.t. } \lambda\}
\end{equation}
constitute a Lie algebra. From \cite{NilKilling} we have the following characterization:
\begin{proposition}\label{flowchar}
Let $(M,g,\lambda)$ be a Lorentzian manifold with a null vector distribution $\lambda.$ A vector field $X$ is an algebra preserving nil-Killing vector field w.r.t. $\lambda$ iff. for each $t$ the flow $\phi_{t}$ of $X$ satisfies the following:
\begin{enumerate}[i)]
\item $(\phi_{t})_{*}(\lambda)\subset\lambda.$
\item $\phi_{t}^{*}g-g$ is of type $III$ w.r.t. $\lambda.$
\end{enumerate}
\end{proposition}
Hence we see that such vector fields have the same characteristics as the elements in the structure group $GN$. Indeed we have the following result:
\begin{proposition}\label{nil}
Suppose that $P\rightarrow M$ is a $GN$-structure, and let $\lambda$ be its null-distribution. Given a metric $g$ belonging to $P$, a vector field $X$ is an infinitesimal automorphism of $P$ iff. $X$ is an algebra preserving nil-Killing vector field w.r.t. $(M,g,\lambda).$
\end{proposition}
\begin{proof}
Let $(\lambda,\Lambda,g^{\perp},f)$ be the null-quadruple corresponding to $P\rightarrow M$ and $\phi_{t}$ the flow of $X$.
By proposition \ref{flowchar}, $X$ is an algebra preserving nil-Killing vector field w.r.t. $(M,g,\lambda)$ iff. \begin{equation}(\phi_{t})_{*}(\lambda)\subset \lambda, \end{equation}\begin{equation}g((\phi_{t})_{*}(Z),(\phi_{t})_{*}(Y))=g(Z,Y),\quad g((\phi_{t})_{*}(W_1),(\phi_{t})_{*}(W_2))=g(W_{1},W_{2}),\end{equation} for all $t,$ $Z\in \lambda$, $Y\in TM$ and $W_1,W_2\in \lambda^{\perp}.$
Since $g$ belongs to $P$ this holds iff. \begin{equation}(\phi_{t})_{*}(\lambda)\subset \lambda,\quad (\phi_{t})_{*}(\Lambda)\subset \Lambda,\end{equation} \begin{equation} f((\phi_{t})_{*}Z\otimes[(\phi_{t})_{*}Y])=f(Z\otimes[Y]),\quad g^\perp((\phi_{t})_{*}(W_1),(\phi_{t})_{*}(W_2))=g^\perp(W_{1},W_{2}),\end{equation} for all $t,$ $Z\in \lambda$, $Y\in TM$ and $W_1,W_2\in \lambda^{\perp}.$
By the characterisation of automorphisms and metrics belonging to $P$ in terms of null-quadruples given in section \ref{KSGNStruct}, this is true iff. $X$ is an infinitesimal automorphism of $P.$
\end{proof}
If $(M,g,\lambda)$ is a Lorentzian manifold with a null-distribution, then we obtain an associated null-quadruple $(\lambda,\lambda^{\perp},g^{\perp},f),$ where $g^{\perp}$ and $f$ are induced by $g$ in the natural manner. We denote the corresponding $GN$-structure by \begin{equation}P(M,g,\lambda).\end{equation} By construction it is clear that the metric $g$ belongs to $P(M,g,\lambda)$.
\begin{corollary}\label{autalg}
Let $(M,g,\lambda)$ be a Lorentzian manifold with a null-distribution, and let $P(M,g,\lambda)$ be the associated $GN$-structure. Then
\begin{equation}
\mathfrak{g}_{(g,\lambda)}=aut[P(M,g,\lambda)],
\end{equation}
where $aut[P(M,g,\lambda)]$ is the Lie algebra of infinitesimal automorphisms of the $GN$-structure.
\end{corollary}
\medskip
\section{Kundt structures}\label{KSKundtStructuresSection}
In this section we shall use proposition \ref{nil} to give conditions that ensure that the metrics belonging to a $GN$-structure are Kundt spacetimes with respect to the null-distribution associated to the $GN$-structure.
First let us recall the definition of a Kundt spacetime \cite{KundtSpacetimes,GenKundt,ExactSolutions}. Let $(M,g,\lambda)$ be a Lorentzian manifold with a null-distribution $\lambda.$ Such a triple is said to be Kundt if $\lambda^{\perp}$ is integrable and for each point $p\in M$ there exists a vector field $X$ belonging to $\lambda$ such that $X$ is affinely geodesic, shear-free and divergence-free, i.e.,
\begin{equation}\label{KSGeoShearDiv}
\nabla_{X}X=0,\quad \nabla_{(a}X_{b)}\nabla^{a}X^{b}=0\quad \text{ and } \quad \nabla_{a}X^{a}=0, \end{equation}
respectively. In this case we say that $X$ is a Kundt vector field of $(M,g,\lambda)$.
Without assuming integrability, it was seen in \cite{NilKilling} that a vector field $X$ belonging to a null-distribution $\lambda$ satisfies \eqref{KSGeoShearDiv} if and only if
$X$ is nil-Killing w.r.t. $\lambda.$ Since $X$ belongs to $\lambda,$ such a vector field automatically preserves the algebraic structure given by $\lambda.$
The following definition gives an analogue of these conditions for $GN$-structures.
\begin{definition}
A $GN$-structure $P\rightarrow M$ with associated distributions $\lambda$ and $\Lambda$ is said to be a Kundt structure if $\Lambda$ is integrable and about each point $p\in M$ there exists a local vector field $X$ belonging to the distribution $\lambda$ which is an infinitesimal automorphism of $P.$
\end{definition}
Now the following results on metrics belonging to Kundt structures are corollaries of proposition \ref{nil}.
\begin{corollary}\label{KSKundtCorollary}
If $P\rightarrow M$ is a Kundt structure with associated null-distribution $\lambda,$ then for each metric $g$ belonging to $P,$ the triple $(M,g,\lambda)$ is a Kundt spacetime.
\end{corollary}
\begin{proof}
If $p\in M$, then since $P\rightarrow M$ is a Kundt structure we can find a vector field $X$ in a neighborhood about $p$ belonging to $\lambda$ such that $X$ is an infinitesimal automorphism of $P.$ It follows from proposition \ref{nil} that $X$ is nil-Killing w.r.t. $(M,g,\lambda).$ By assumption $\lambda^{\perp}=\Lambda$ is integrable, and therefore $(M,g,\lambda)$ is a Kundt spacetime.
\end{proof}
\begin{corollary}
If $(M,g,\lambda)$ is a Kundt spacetime, then the induced $GN$-structure $P(M,g,\lambda)$ is a Kundt structure.
\end{corollary}
\begin{proof}
This follows from the fact that the collection of local nil-Killing vector fields of $(M,g,\lambda)$ is equal to the collection of local infinitesimal automorphisms of $P(M,g,\lambda).$
\end{proof}
The following proposition gives a characterization of Kundt structures in terms of connections. \begin{proposition}
If $P\rightarrow M$ is a $GN$-structure, then the following are equivalent:
\begin{enumerate}[i)]
\item $P$ is a Kundt structure.
\item For each point $p\in M$ there is a neighborhood $U$ such that the restriction of $P$ to $U$ has a torsion-free connection.
\end{enumerate}
\end{proposition}
\begin{proof}
$"ii)\Rightarrow i)"$
First we show that $\Lambda$ is integrable. Let $U$ be an open set with a torsion-free connection $\nabla$ belonging to the restriction of $P$ to $U.$ If $W_{1},W_{2}\in \mathcal{X}(U)$ belong to $\Lambda,$ then $\nabla_{W_i}W_{j}\in \Lambda,$ for $i,j=1,2,$ by proposition \ref{KSConnections}. Therefore since $\nabla$ is torsion-free this implies that
\begin{equation}[W_{1},W_{2}]=\nabla_{W_{1}}W_{2}-\nabla_{W_{2}}W_{1}\in \Lambda.\end{equation}
Since we can cover $M$ by open sets having torsion-free connections belonging to $P,$ this shows that $\Lambda$ is an integrable distribution.
Now let us show that $P$ has the desired local infinitesimal automorphisms belonging to $\lambda.$ Suppose $p\in M$ and choose a neighborhood $U$ with a torsion-free connection $\nabla$ belonging to $P$ on $U.$ Since $\Lambda$ is integrable we can, by shrinking $U$ if necessary, choose a vector field $X$ belonging to $\lambda$ on $U$ such that $X^{\natural}$ is a closed one-form. By proposition \ref{KSConnections}, we see that $\nabla X^{\natural}=\omega\otimes X^{\natural}$, for some one-form $\omega$.
Since $\nabla$ is torsion-free it follows that \begin{equation}Alt(\nabla X^{\natural})=dX^{\natural}=0,\end{equation}
and therefore $\omega$ is proportional to $X^{\natural}$, from which we see that $\nabla X^{\natural}$ is of type $III.$
Now let $g$ be any metric belonging to $P.$ It follows again by proposition \ref{KSConnections} that $\nabla_{Z} g$ is a tensor of type $III,$ for all $Z\in \mathcal{X}(M)$. Let us show that if $Z\in \mathcal{X}(M),$ then $\nabla_{Z}(X^\natural)=(\nabla_{Z}X)^{\natural}$. Suppose that $Z^\prime\in \mathcal{X}(M),$ then
\begin{equation}
\begin{gathered}
(\nabla_{Z}X^{\natural})(Z^{\prime})=Z(X^{\natural}(Z^{\prime}))-X^{\natural}(\nabla_{Z}Z^{\prime})\\=Z(g(X,Z^{\prime}))-g(X,\nabla_{Z}Z^{\prime}) =(\nabla_{Z}g)(X,Z^{\prime}) + g(\nabla_{Z}X,Z^{\prime})\\
=(\nabla_{Z}X)^\natural(Z^{\prime}),
\end{gathered}
\end{equation}
where the last equality holds since $(\nabla_{Z}g)(X,Z^{\prime})=0,$ $\nabla_{Z}g$ being of type $III$ and $X$ belonging to $\lambda.$
If $Z,Z^{\prime}\in \mathcal{X}(M),$ then by the torsion-free property
\begin{equation}
\begin{gathered}
\mathcal{L}_{X}g(Z,Z^{\prime})\\=Xg(Z,Z^{\prime})-g([X,Z],Z^{\prime})-g(Z,[X,Z^{\prime}])\\
=Xg(Z,Z^{\prime}) - g(\nabla_{X}Z-\nabla_{Z}X,Z^{\prime})-g(Z,\nabla_{X}Z^{\prime}-\nabla_{Z^\prime}X)\\
=(\nabla_Xg)(Z,Z^{\prime}) +(\nabla_{Z}X)^{\natural}(Z^{\prime})+ (\nabla_{Z^{\prime}}X)^{\natural}(Z)\\
=(\nabla_Xg)(Z,Z^{\prime}) +(\nabla_{Z}X^{\natural})(Z^{\prime})+ (\nabla_{Z^{\prime}}X^{\natural})(Z),
\end{gathered}
\end{equation}
and therefore $\mathcal{L}_Xg=\nabla_{X}g+S(\nabla X^{\natural}),$ where $S$ denotes symmetrization. It follows that $\mathcal{L}_Xg$ is of type $III$ w.r.t. the null-distribution $\lambda$ associated to $P,$ and therefore $X$ is nil-Killing w.r.t. $(M,g,\lambda)$. Thus by proposition \ref{nil}, $X$ is a local infinitesimal automorphism of $P,$ showing that $P$ is Kundt.
``$i)\Rightarrow ii)$''
Let $(\lambda,\Lambda,g^{\perp},f)$ be the null-quadruple associated to $P.$ Given $p\in M$, by integrability we can find a neighborhood $U$ about $p$ with coordinates $(u,v,x^{k})$ such that $\frac{\partial}{\partial v}$ is an infinitesimal automorphism of $P$, $\frac{\partial}{\partial v}\in \lambda$, $\frac{\partial}{\partial x^{i}}\in \Lambda$ and $du=(\frac{\partial}{\partial v})^{\natural}$.
Define a metric $g$ on $U$ by
\begin{equation}
g=2dudv + \tilde{g}_{ij}(u,v,x^{k})dx^{i}dx^{j},
\end{equation}
in such a way that $\tilde{g}_{ij}$ is compatible with $g^{\perp}$. Then the metric $g$ belongs to $P$ on $U$ and since $\frac{\partial}{\partial v}$ is an infinitesimal automorphism of $P$ it follows that $\tilde{g}_{ij}$ is independent of $v.$ Therefore its Levi-Civita connection is a torsion-free connection belonging to $P$ over $U.$
\end{proof}
Regarding the curvature of a torsion-free connection belonging to a $GN$-structure, we have the following result:
\begin{proposition}
Let $P\rightarrow M$ be a $GN$-structure. If $\nabla$ is a torsion-free connection which belongs to $P,$ then its curvature tensor $R^{\nabla}$ is of type $II.$
\end{proposition}
\begin{proof}
Fix a metric $g$ belonging to $P$. Let $R$ be the covariant four-tensor defined by
\begin{equation}
R(X,Y,Z,W)=g((R^{\nabla})(X,Y)Z,W).
\end{equation}
The connection $\nabla$ belongs to $P$ and therefore its associated curvature two-form on $P$ is $\mathfrak{gn}$-valued, where $\mathfrak{gn}$ is the Lie algebra of $GN.$ Since in addition $g$ belongs to $P$ this implies that
\begin{equation}\label{Rmskew}
R(X,Y,Z,U) = -R(X,Y,U,Z)\quad \text{ and } \quad R(X,Y,W,W^{\prime}) = -R(X,Y,W^{\prime},W),
\end{equation}
for all $X,Y,U\in \mathcal{X}(M)$, $Z\in \lambda$ and $W,W^{\prime}\in \Lambda.$
It is therefore clear, writing with a slight abuse of notation, that $R^{\nabla}$ is of type $II$ if and only if the terms
\begin{equation}\label{positivecomp}
R(\lambda,\Lambda,\lambda,TM), \quad R(\Lambda,\Lambda,\lambda,\Lambda),\quad R(TM,\lambda,\lambda,\Lambda),
\quad R(\Lambda,\lambda,\Lambda,\Lambda),
\end{equation}
vanish.
If $p\in M,$ choose a vector field $X\in \lambda$ in a neighborhood of $p$ such that $X^{\natural}$ is closed. Then there exists some smooth function $f$ such that $\nabla X=f(X^{\natural}\otimes X).$ Hence we see that if $W,W^{\prime}\in \Lambda$, then
\begin{equation}
R^{\nabla}(W,W^{\prime})X=[\nabla_{W},\nabla_{W^{\prime}}]X-\nabla_{[W,W^{\prime}]}X=0.
\end{equation}
Therefore the first two expressions in \eqref{positivecomp} vanish. The vanishing of $R(TM,\lambda,\lambda,\Lambda)$ follows from the fact that $\nabla \lambda \subset \lambda$ by proposition \ref{KSConnections}.
Lastly, suppose that $W,W^{\prime},U,U^{\prime}\in \Lambda$.
Since $\nabla$ is torsion-free, we know that $R^{\nabla}$ satisfies the first Bianchi identity. Using \eqref{Rmskew} together with the first Bianchi identity one can show that
\begin{equation}
R(W,W^{\prime},U,U^{\prime})=R(U,U^{\prime},W,W^{\prime}),
\end{equation}
by mimicking the proof of the analogous statement in the Riemannian setting. In particular this shows that the last expression in \eqref{positivecomp} vanishes, finishing the proof.
\end{proof}
The above proposition shows that if $\nabla$ is a torsion-free connection of a $GN$-structure $P\rightarrow M,$ then its curvature tensor $R^\nabla$ is of type $II.$ By the results of section \ref{KSGNStruct} we can use the $GN$-structure to perform full contractions of tensors in the algebra generated by $R^{\nabla},$ giving smooth functions associated to $\nabla.$
As an example, this gives us a way to define the scalar curvature of torsion-free connections belonging to $P.$
\newline
Next we show that if $P\rightarrow M$ is a Kundt-structure, then we have an induced map
\begin{equation}
\Phi:\mathcal{M}\rightarrow \Omega^{0}(\Lambda/\lambda)
\end{equation}
from the space of metrics belonging to $P$ into the space of sections of $\Lambda/\lambda.$
If $X$ is a local infinitesimal automorphism of $P$ belonging to $\lambda$ such that $X^{\natural}$ is closed and $g\in \mathcal{M},$ then $\mathcal{L}_{X}g$ is of type $III$ and can therefore be written uniquely as
\begin{equation}
\mathcal{L}_{X}g=X^{\natural}\otimes_{S}\omega,
\end{equation}
where $\omega\in \Lambda^{*}$. Now supposing that $\tilde{X}$ is another local infinitesimal automorphism such that $\tilde{X}^{\natural}$ is closed, then there exists a smooth function $f$ such that $\tilde{X}=fX$ and $W(f)=0,$ for all $W\in \Lambda, $ implying that $df\in \lambda^{*}.$ Therefore
\begin{equation}
\tilde{X}^{\natural}\otimes_{S}\tilde{\omega}=\mathcal{L}_{\tilde{X}}g=\mathcal{L}_{fX}g=df\otimes_{S}X^{\natural}+f\mathcal{L}_{X}g = \tilde{X}^{\natural}\otimes_{S} \Big(\frac{1}{f}df+\omega\Big).
\end{equation}
It follows that the classes $[\omega],[\tilde{\omega}]$ are the same in $\Lambda^{*}/\lambda^{*}.$ Using the induced metric $g^{\perp}$ we take the dual of $[\omega]$ on $\Lambda/\lambda$ to find a local section in $\Lambda/\lambda.$
Since the resulting section is independent of which infinitesimal automorphism with closed dual we use, this process extends globally to give a well-defined section $\Phi(g)\in\Omega^{0}(\Lambda/\lambda)$.
By taking the norm with respect to $g^{\perp},$ the map $\Phi:\mathcal{M}\rightarrow \Omega^{0}(\Lambda/\lambda)$ gives rise to a map \begin{equation}\Theta:\mathcal{M}\rightarrow C^{\infty}(M),\end{equation} defined by
\begin{equation}
\Theta(g)=g^{\perp}(\Phi(g),\Phi(g)),
\end{equation}
for all $g\in \mathcal{M}.$
\section{Degenerate Kundt metrics}
Recall from \cite{KundtSpacetimes} that a Kundt spacetime $(M,g,\lambda)$ is said to be degenerate if the Riemann curvature tensor and all its covariant derivatives $\nabla^{m}Rm$ are of type $II$ with respect to $\lambda.$ We have the following characterization from \cite{NilKilling}:
\begin{proposition}\label{DegkundtChar}
Let $(M,g,\lambda)$ be a Kundt spacetime and let $X$ be a Kundt vector field defined on an open set $U\subset M.$
Then
\begin{enumerate}[i)]
\item The Riemannian curvature is of type $II$ on $U$ iff. $(\mathcal{L}_{X})^{2}g$ is of boost-order $\leq -2$ w.r.t. $\lambda$
\item $(U,g,\lambda)$ is a degenerate Kundt spacetime iff. $(\mathcal{L}_{X})^{2}g$ is of boost-order $\leq -2$ and $(\mathcal{L}_{X})^3g=0.$
\end{enumerate}
\end{proposition}
Suppose $P\rightarrow M$ is a Kundt structure with corresponding null-quadruple \begin{equation}(\lambda,\Lambda,g^{\perp},f).\end{equation} Let \begin{equation}\mathcal{M}_{Deg}\subset \mathcal{M}\end{equation} be the collection of metrics $g\in \mathcal{M}$ such that the corresponding Kundt spacetime $(M,g,\lambda)$ is degenerate.
As a result of proposition \ref{nil} and corollary \ref{KSKundtCorollary} we can characterize the metrics $g\in\mathcal{M}_{Deg}$ as those for which $(\mathcal{L}_{X})^{2}g$ is of boost-order $\leq -2$ and $(\mathcal{L}_{X})^{3}g=0,$ whenever $X$ is a local infinitesimal automorphism of $P$ belonging to $\lambda.$
We can use this characterization in order to show that $P\rightarrow M$ induces a map
\begin{equation}
\Psi:\mathcal{M}_{Deg}\rightarrow C^{\infty}(M),
\end{equation}
defined as follows: If $X$ is any local infinitesimal automorphism of $P$ on an open set $U\subset M$ and $g\in \mathcal{M}_{Deg},$ then $(\mathcal{L}_{X})^2g$ is of boost-order $\leq -2$ and therefore there exists a smooth function $H\in C^{\infty}(U)$ such that
\begin{equation}
(\mathcal{L}_{X})^{2}g=H(X^{\natural}\otimes X^{\natural}).
\end{equation}
If $\tilde{X}$ is another infinitesimal automorphism on $U$, then there exists a smooth function $f$ satisfying $X(f)=0,$ such that $\tilde{X}=fX.$ This gives
\begin{equation}
\begin{gathered}
\tilde{H}(\tilde{X}^{\natural}\otimes\tilde{X}^{\natural})=(\mathcal{L}_{\tilde{X}})^{2}g\\
=(\mathcal{L}_{fX})^{2}g=\mathcal{L}_{fX}(df\otimes_{S}X^{\natural}+f\mathcal{L}_{X}g)\\
=f\mathcal{L}_{fX}\mathcal{L}_{X}g=f\big[df\otimes_{S} (i_{X}\mathcal{L}_{X}g)+f(\mathcal{L}_{X})^{2}g\big]\\
=f^{2}(\mathcal{L}_{X})^{2}g=H(\tilde{X}^{\natural}\otimes \tilde{X}^{\natural}),
\end{gathered}
\end{equation}
where the term containing the contraction $i_{X}\mathcal{L}_{X}g$ vanishes, showing that $\tilde{H}=H.$ Since the functions we have constructed are independent of the choice of infinitesimal automorphism, we can use these to construct a global function $\Psi(g)\in C^{\infty}(M)$.
The metric $g$ also satisfies $(\mathcal{L}_{X})^3g=0,$ and therefore $\mathcal{L}_{X}\Psi(g)=0,$ showing that $\Psi(g)$ is invariant under the local automorphisms of $P.$
If $g\in \mathcal{M}_{Deg}$, then we can find coordinates $(u,v,x^{k})$ such that $\frac{\partial}{\partial v}$ is a local infinitesimal automorphism of $P$ with $du=(\frac{\partial}{\partial v})^{\natural}$, and the metric can be expressed as
\begin{equation}\label{DegkundtCoordinates}
g=2du(dv+Hdu + W_{i}dx^{i}) + \tilde{g}_{ij}(u,x^{k})dx^{i}dx^{j}
\end{equation}
where
the functions $H,W_{i}$ take the form
\begin{equation}
H(u,v,x^{k})=v^2H^{(2)}(u,x^k)+vH^{(1)}(u,x^k)+H^{(0)}(u,x^k)
\end{equation}
and
\begin{equation}
W_{i}(u,v,x^{k})=vW^{(1)}_{i}(u,x^{k}) +W^{(0)}_{i}(u,x^{k}).
\end{equation}
In terms of these coordinates we see that $\Psi(g)=2H^{(2)}$, and moreover the section of $\Lambda/\lambda$ given by $\Phi(g)$, which we found in section \ref{KSKundtStructuresSection}, can be expressed in terms of the functions $W_{i}^{(1)}.$
It therefore follows from \cite{KundtSpacetimes} that if $\tilde{g},g\in \mathcal{M}_{Deg}$ satisfy
\begin{equation}
\Phi(g)=\Phi(\tilde{g})\quad \text{ and } \quad \Psi(g)=\Psi(\tilde{g}),
\end{equation}
then $g$ and $\tilde{g}$ have identical scalar polynomial curvature invariants.
Moreover, if $g\in \mathcal{M}_{Deg}$ has constant scalar polynomial curvature invariants across the manifold, then it follows from \cite{CSI4} that $\Psi(g)$ and $\Theta(g)$ differ only by a constant.
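The $v$-dependence in \eqref{DegkundtCoordinates} can be sanity-checked with a few lines of code. Modelling the metric components as polynomials in $v$ (with placeholder numbers standing in for the $(u,x^{k})$-dependent functions $H^{(k)}$ and $W^{(k)}_{i}$ — these values are purely illustrative assumptions), repeated Lie differentiation along $\partial_v$ reduces to repeated $v$-differentiation: only the quadratic part of $H$ survives two derivatives, and all third derivatives vanish, consistent with the degenerate Kundt conditions.

```python
def ddv(poly):
    """d/dv of a polynomial in v, given as a coefficient list [c0, c1, c2, ...]."""
    return [i * c for i, c in enumerate(poly)][1:] or [0]

# Placeholder coefficients standing in for the (u, x^k)-dependent functions:
H = [7.0, 5.0, 3.0]       # H = H^(0) + v*H^(1) + v^2*H^(2), so H^(2) = 3.0
W = [4.0, 2.0]            # W_i = W_i^(0) + v*W_i^(1)
g_tilde = [6.0]           # transverse components, independent of v

# Lie derivatives along d/dv act on these components as d/dv:
d2H = ddv(ddv(H))         # only the coefficient 2*H^(2) of the du^2 term survives
d2W = ddv(ddv(W))         # the du dx^i terms die after two derivatives
d3H = ddv(d2H)            # third derivatives vanish identically
print(d2H, d2W, d3H)
```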
\section{Homogeneous GN-structures}
In this section we shall classify left-invariant $GN$ and Kundt structures on homogeneous spaces and study their properties.
We shall consider homogeneous spaces which are quotients $K/H$ of Lie groups $K$ and closed Lie subgroups $H,$ where we assume that the action of $K$ on $K/H$ is effective. By abuse of notation, if $a\in K$ we let $L_{a}$ denote left multiplication by $a$ on both $K$ and $K/H.$ We denote the coset of the identity element $e$ of $K$ by $o$, refer to it as the origin in $K/H$, and let $\mathfrak{k}$ and $\mathfrak{h}$ be the Lie algebras of $K$ and $H$ respectively. We have a projection $\pi:K\rightarrow K/H$, whose derivative at the identity $(d\pi)_{e}:\mathfrak{k}\rightarrow T_{o}K/H$ induces an isomorphism $\mathfrak{k}/\mathfrak{h}\cong T_{o}K/H,$ which we henceforth use to identify these two vector spaces. Under this identification the vector space quotient $q:\mathfrak{k}\rightarrow \mathfrak{k}/\mathfrak{h}$ and $(d\pi)_{e}$ are the same. If $h\in H,$ then $Ad(h):\mathfrak{k}\rightarrow \mathfrak{k}$ leaves $\mathfrak{h}$ invariant. Therefore we have an induced linear map $Ad(h):\mathfrak{k}/\mathfrak{h}\rightarrow \mathfrak{k}/\mathfrak{h}$.
For each element $A\in \mathfrak{k},$ the one-parameter subgroup $a_{t}=\exp(tA)$ induces a one-parameter group of diffeomorphisms on $K/H$ given by left multiplication $L_{a_{t}}$, for $t\in \mathbb{R},$ which defines a vector field on $K/H$ that we denote by $A^{*}.$ Under this association we have
\begin{equation}
[A,B]^{*}=-[A^{*},B^{*}],
\end{equation}
for all $A,B\in \mathfrak{k}$.
A $G$-structure $Q\rightarrow K/H$ on the homogeneous space is said to be $K$-invariant if for each $a\in K,$ the left multiplication $L_{a}:K/H\rightarrow K/H$ is an automorphism of $Q.$ In this case $A^{*}$ is an infinitesimal automorphism of $Q,$ for each $A\in \mathfrak{k},$ since the diffeomorphisms of the flow of $A^{*}$ are given by left multiplication by elements in $K.$
Now let us classify the left invariant $GN$-structures and Kundt structures on $K/H.$ In our classification we shall consider quadruples $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta)$ where
\begin{enumerate}[i)]
\item $\mathfrak{h}\subset\mathfrak{a}\subset \mathfrak{b}\subset\mathfrak{k}$ and $\mathfrak{a},\mathfrak{b}$ are $ad(H)$-invariant subspaces such that $\mathfrak{h}$ and $\mathfrak{b}$ are of codimension $1$ in $\mathfrak{a}$ and $\mathfrak{k}$ respectively.
\item $(\cdot,\cdot)$ is a positive definite inner product on $\mathfrak{b}/\mathfrak{a}$ which is $ad(H)$-invariant, i.e. for each $h\in H$, the linear map $ad(h):\mathfrak{b}/\mathfrak{a}\rightarrow \mathfrak{b}/\mathfrak{a}$, whose existence is ensured by i), is an isometry of $(\cdot,\cdot).$
\item $\beta:\mathfrak{a}/\mathfrak{h}\otimes \mathfrak{k}/\mathfrak{b}\rightarrow \mathbb{R}$ is a non-zero $ad(H)$-invariant functional.
\end{enumerate}
We shall refer to these as $ad(H)$-invariant quadruples.
\begin{theorem}\label{KSGNHom}
There is a 1:1 correspondence between $K$-invariant $GN$-structures $P\rightarrow K/H$ on $K/H$ and $ad(H)$-invariant quadruples $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta)$ on $\mathfrak{k}/\mathfrak{h}.$ Under this correspondence the $K$-invariant Kundt structures are given by the $ad(H)$-invariant quadruples $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta)$ such that
\begin{enumerate}[i)]
\item $\mathfrak{b}$ is a Lie subalgebra of $\mathfrak{k}$,
\item The induced maps $[a,-]\in End(\mathfrak{b}/\mathfrak{a})$ are skew-adjoint with respect to the inner product $(\cdot,\cdot),$ for all $a\in\mathfrak{a}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose that we are given a $K$-invariant $GN$-structure on $K/H$ and let $(\lambda,\Lambda,g^{\perp},f)$ be its associated null-quadruple. From the $K$-invariance it follows that the subspaces $\Lambda_{0}, \lambda_{0}\subset \mathfrak{k}/\mathfrak{h}$, obtained by evaluating $\Lambda$ and $\lambda$ at the origin,
and the metric $g^{\perp}_{0}$ on $\Lambda_{0}/\lambda_{0}$ are $ad(H)$-invariant, and the functional $f_{0}:\lambda_{0}\otimes (\mathfrak{k}/\mathfrak{h})/\Lambda_{0}\rightarrow \mathbb{R}$ satisfies $f_{0}\circ ad(h)= f_{0},$ for all $h\in H.$
Now letting $q:\mathfrak{k}\rightarrow \mathfrak{k}/\mathfrak{h}$ denote the quotient map, we define the following $ad(H)$-invariant subspaces of $\mathfrak{k}$:
\begin{equation}
\mathfrak{a}:=q^{-1}(\lambda_{0}),\quad\mathfrak{b}:=q^{-1}(\Lambda_{0}),
\end{equation}
Then $\mathfrak{h}$ has codimension $1$ in $\mathfrak{a}$ and $\mathfrak{b}$ has codimension $1$ in $\mathfrak{k}$, since $\lambda_{0}$ and $\Lambda_{0}$ have dimension $1$ and codimension $1$ respectively.
Let $\beta:\mathfrak{a}/\mathfrak{h}\otimes \mathfrak{k}/\mathfrak{b}\rightarrow \mathbb{R}$ be defined through $f_{0}$ by using the identifications $\mathfrak{a}/\mathfrak{h}=\lambda_{0}$ and $\mathfrak{k}/\mathfrak{b}\cong (\mathfrak{k}/\mathfrak{h})/(\mathfrak{b}/\mathfrak{h})=(\mathfrak{k}/\mathfrak{h})/\Lambda_{0}.$ Then by the $ad(H)$-invariance of $f_{0}$ we see that $\beta$ must also be $ad(H)$-invariant.
Lastly we can use the isomorphism $\mathfrak{b}/\mathfrak{a}\cong(\mathfrak{b}/\mathfrak{h})/(\mathfrak{a}/\mathfrak{h})=\Lambda_{0}/\lambda_{0}$, together with $g^{\perp}$ to induce an $ad(H)$-invariant positive definite inner product $(\cdot,\cdot)$ on $\mathfrak{b}/\mathfrak{a}$.
Thus we have constructed a correspondence from $K$-invariant $GN$-structures to $ad(H)$-invariant quadruples for $\mathfrak{k}/\mathfrak{h}.$ Since any two $K$-invariant $GN$-structures giving the same $ad(H)$-invariant quadruple must have identical fibres over the origin of $K/H$, the $K$-invariance implies that they must coincide everywhere, so the correspondence is injective. Surjectivity follows from the fact that we may induce an $ad(H)$-invariant null-quadruple $(\lambda_{0},\Lambda_{0},g^{\perp}_{0},f_{0})$ over the origin of $K/H$ from any $ad(H)$-invariant quadruple $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta)$ and extend it to a $GN$-structure by using the left translations by $K.$
\newline
Let $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta)$ be an $ad(H)$-invariant quadruple corresponding to a $K$-invariant $GN$-structure $P\overset{\pi}{\rightarrow} K/H$.
First we show that $P$ is integrable iff. $\mathfrak{b}$ is a Lie subalgebra of $\mathfrak{k}$: Let $\Lambda$ be the codimension $1$ distribution on $K/H$ associated to $P$ and let $\tilde{\Lambda}$ be the distribution on $K$ defined by $\tilde{\Lambda}_{a}=(L_{a})_{*}(\mathfrak{b})$, for all $a\in K.$ From the definition it is clear that $\tilde{\Lambda}$ is integrable iff. $\mathfrak{b}$ is a Lie subalgebra of $\mathfrak{k}$. It suffices therefore to prove that $\Lambda$ is integrable iff. $\tilde{\Lambda}$ is integrable. Let $U\subset K/H$ be a neighborhood such that there exists a local trivialization $\pi^{-1}(U)\overset{\Phi}{\rightarrow} U\times H$ for the principal $H$-bundle $K\overset{\pi}{\rightarrow} K/H$ and a frame $\{X_{1},\dots, X_{k}\}$ over $U$ for $\Lambda$. We use $\Phi$ to construct vector fields $\{\tilde{X}_{1},\dots,\tilde{X}_{k}\}$ on $\pi^{-1}(U)$ such that $\pi_{*}(\tilde{X}_{i})=X_{i},$ for $1\leq i\leq k.$ Let $Y_{1},\dots,Y_{m}$ be a basis for $\mathfrak{h}$. Since $\mathfrak{b}$ is $ad(H)$-invariant it follows that $[Y_{i},\tilde{X}_{l}]$ and $[Y_{i},Y_{j}]$ belong to $\tilde{\Lambda}$ for all $1\leq i,j \leq m$ and $1\leq l\leq k$. Thus $\tilde{\Lambda}$ is integrable on $\pi^{-1}(U)$ iff. $[\tilde{X}_{i},\tilde{X}_{j}]$ belongs to $\tilde{\Lambda}$, for all $1\leq i,j\leq k$. Since $\tilde{X}_{i}$ and $X_{i}$ are $\pi$-related this holds iff. $\Lambda$ is integrable over $U.$ This shows that $\Lambda$ is integrable iff. $\tilde{\Lambda}$ is integrable and hence that $\Lambda$ is integrable iff. $\mathfrak{b}$ is a Lie subalgebra of $\mathfrak{k}.$
Let $\lambda$ be the 1-distribution associated to $P$. Here we show that there are local infinitesimal automorphisms of $P$ pointing along $\lambda$ iff. $[a,-]\in End(\mathfrak{b}/\mathfrak{a})$ is skew-adjoint with respect to $(\cdot,\cdot)$, for all $a\in\mathfrak{a}$: First we choose a metric $g$ and a vector field $X$ defined on a neighborhood $U$ of the origin in $K/H$ such that $g$ belongs to $P$ and $X$ belongs to $\lambda$. Let $N\in \mathfrak{a}\setminus \mathfrak{h} $ and $W_{i}\in \mathfrak{b}$, for $i=1,2$; then $({N^{*}})_{o}\in \lambda_{o}$ and $({W_{i}}^{*})_{o}\in \Lambda_{o}.$ Since $N^{*}$ is an infinitesimal automorphism of $P$, we see that $\mathcal{L}_{N^{*}}g$ is nilpotent with respect to $X$. Therefore at the origin we have
\begin{equation}\label{Neq}
\begin{gathered}
\mathcal{L}_{N^{*}}g(({{W}_{1}}^{*})_{o},({{W}_{2}}^{*})_{o})\\
=({N}^{*})_{o}g({W_{1}}^{*},{{W}_{2}}^{*}) -g([{N}^{*},{{W}_{1}}^{*}]_{o},({{W}_{2}}^{*})_{o})-g(({{W}_{1}}^{*})_{o},[{N}^{*},{{W}_{2}}^{*}]_{o})\\
=({N}^{*})_{o}g({W_{1}}^{*},{{W}_{2}}^{*}) +g({[N,W_{1}]^{*}}_{o},({W_{2}}^{*})_{o})+g(({W_{1}}^{*})_{o},{[N,W_{2}]^{*}}_{o})\\
=({N}^{*})_{o}g({W_{1}}^{*},{{W}_{2}}^{*})+([N,{\overline{W}}_{1}],\overline{W}_{2})+(\overline{W}_{1},[N,\overline{W}_{2}])\\
=0,
\end{gathered}
\end{equation}
where $\overline{W}_{i}$ denotes the coset of $W_{i}$ in $\mathfrak{b}/\mathfrak{a}$, for $i=1,2.$
Furthermore since ${{W}_{i}}^{*}$, for $i=1,2,$ are infinitesimal automorphisms of $P$, it follows that
\begin{equation}\label{eqX}
\begin{gathered}
\mathcal{L}_{X}g(({{W}_{1}}^{*})_{o},({{W}_{2}}^{*})_{o}) \\ =X_{o}(g({{W}_{1}}^{*},{{W}_{2}}^{*}))-g([X,{{W}_{1}}^{*}]_{o},({{W}_{2}}^{*})_{o})-g(({{W}_{1}}^{*})_{o},[X,{{W}_{2}}^{*}]_{o})\\
=X_{o}(g({{W}_{1}}^{*},{{W}_{2}}^{*})),
\end{gathered}
\end{equation}
where the last equality holds since $[X,W_{i}^*]\propto X$.
Suppose first that $P$ has local infinitesimal automorphisms belonging to $\lambda$. Letting $X$ be such an automorphism, we know that $\mathcal{L}_{X}g$ is nilpotent with respect to $X$. Since $(N^{*})_{o}$ and $X_{o}$ are collinear, equation \eqref{eqX} implies that
$({N}^{*})_{o}g({W_{1}}^{*},{{W}_{2}}^{*})=0$ and therefore by \eqref{Neq}, $[N,-]$ is skew-adjoint on $\mathfrak{b}/\mathfrak{a}.$
Conversely, suppose that $[N,-]$ is skew-adjoint on $\mathfrak{b}/\mathfrak{a}$; then by \eqref{Neq} we see that $$({N}^{*})_{o}g({W_{1}}^{*},{{W}_{2}}^{*})=0,$$ but since $({N}^{*})_{o}$ is collinear with $X_{o}$ it follows from \eqref{eqX} that $$\mathcal{L}_{X}g(({{W}_{1}}^{*})_{o},({{W}_{2}}^{*})_{o})=0,$$ and since the $W_{i}$ were arbitrary this in turn implies that $$\mathcal{L}_{X}g(Y,Y^{'})=0,$$
for all $Y,Y^{'}\in \Lambda_{o}.$ Now suppose that $b\in K$, with $\pi(b)\in U.$ Since $L_{b^{-1}}$ preserves $\lambda,$ it follows that there exists some smooth function $f$ such that $(L_{b^{-1}})_{*}(X)=fX.$ Therefore if $Y,Y^{'}\in \Lambda_{o}$, we have
\begin{equation}
\begin{gathered}
\mathcal{L}_{X}g((L_{b})_{*}Y,(L_{b})_{*}Y^{'})\\
=(L_{b})^{*}\mathcal{L}_{X}g(Y,Y^{'})=\mathcal{L}_{fX}((L_{b})^{*}g)(Y,Y^{'})=0,
\end{gathered}
\end{equation}
where the last equality holds since $(L_{b})^{*}g$ belongs to $P$. Thus $X$ satisfies \begin{equation}\mathcal{L}_{X}g(\Lambda,\Lambda)=0,\end{equation} on $U.$
Now if $Z\in \mathfrak{k}\setminus\mathfrak{b}$, then by shrinking $U$ if necessary, we get that $(Z^{*})_{p}\notin \Lambda_{p},$ for all $p\in U,$ and we can find a vector field $X$ on $U$ belonging to $\lambda$ such that $g(X,Z^{*})$ is constant on $U.$ Then
\begin{equation}
\mathcal{L}_{{X}}g({X},{Z}^{*})=Xg(X,Z^{*})-g([{X},{X}],{Z}^{*})-g({X},[{X},{Z}^{*}])=-g({X},[X,Z^{*}])=0.
\end{equation}
Thus $X$ is an infinitesimal automorphism for $P$ on $U.$ If $p\in K/H$, then we may find some $a\in K$ such that $\pi(a)=p.$ Since
\begin{equation}
\mathcal{L}_{(L_{a})_{*}X}((L_{a^{-1}})^{*}g)=(L_{a^{-1}})^{*}\mathcal{L}_{X}g,
\end{equation}
it follows that $(L_{a})_{*}(X)$ is an infinitesimal automorphism of $P$ defined on $L_{a}(U)$. Hence if $[N,-]\in End(\mathfrak{b}/\mathfrak{a})$ is skew-adjoint for all $N\in \mathfrak{a}$, then there are locally defined infinitesimal automorphisms of $P$ about each point of $K/H.$ This concludes the proof.
\end{proof}
Given a Lie group $G$ with Lie algebra $\mathfrak{g}$ we now consider left-invariant $GN$-structures $P\rightarrow G$. By theorem \ref{KSGNHom} these are in $1:1$ correspondence with quadruples $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta),$ where $\mathfrak{a}$ and $\mathfrak{b}$ are of dimension one and codimension one respectively and satisfy $\mathfrak{a}\subset \mathfrak{b}\subset \mathfrak{g}.$
\begin{example}
Every 3-dimensional Lie group whose Lie algebra $\mathfrak{g}$ is not isomorphic to $\mathfrak{su}(2)$ admits a left-invariant Kundt structure.
This can be seen as follows: If $\mathfrak{g}$ is not isomorphic to $\mathfrak{su}(2)$, then it has a codimension 1 subalgebra $\mathfrak{b}$. If $\mathfrak{b}$ is abelian, then choose any one-dimensional subspace $\mathfrak{a}\subset \mathfrak{b}$. If $\mathfrak{b}$ is non-abelian, then set $\mathfrak{a}=[\mathfrak{b},\mathfrak{b}]$, which is automatically one-dimensional. In either case the induced map $[A,-]:\mathfrak{b}/\mathfrak{a}\rightarrow \mathfrak{b}/\mathfrak{a}$ is zero, for all $A\in \mathfrak{a}.$ Therefore if $(\cdot,\cdot)$ is any positive definite inner product on $\mathfrak{b}/\mathfrak{a}$ and $\beta:\mathfrak{a}\otimes \mathfrak{g}/\mathfrak{b}\rightarrow \mathbb{R}$ is any non-zero linear functional, the quadruple $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta)$ corresponds to a left-invariant Kundt structure.
The algebra $\mathfrak{su}(2)$ does not contain a codimension one subalgebra, and therefore does not support a quadruple corresponding to a Kundt structure.
\end{example}
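The construction in the example can be verified on a concrete algebra. The sketch below uses the purely illustrative choice $\mathfrak{g}=\mathfrak{aff}(1)\oplus\mathbb{R}$ with basis $(e_1,e_2,e_3)$ and the single bracket relation $[e_1,e_2]=e_2$ (this particular algebra is an assumption made for the sake of the check; any non-$\mathfrak{su}(2)$ three-dimensional algebra would do), and checks that $\mathfrak{b}=\mathrm{span}\{e_1,e_2\}$ is a subalgebra, that $\mathfrak{a}=[\mathfrak{b},\mathfrak{b}]=\mathrm{span}\{e_2\}$ is one-dimensional, and that the induced map $[A,-]$ on $\mathfrak{b}/\mathfrak{a}$ vanishes for $A\in\mathfrak{a}$.

```python
def bracket(x, y):
    """Lie bracket of coordinate vectors x, y in the basis (e1, e2, e3),
    for the illustrative algebra with [e1, e2] = e2 and all other brackets zero."""
    # [x, y] = (x1*y2 - x2*y1) * e2
    return [0.0, x[0] * y[1] - x[1] * y[0], 0.0]

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

# b = span{e1, e2} is a codimension-1 subalgebra: [e1, e2] has no e3-component.
assert bracket(e1, e2)[2] == 0

# a = [b, b] = span{e2} is one-dimensional: [e1, e2] = e2.
assert bracket(e1, e2)[1] != 0

# The induced map [A, -] : b/a -> b/a is zero for A in a: [e2, e1] lies in
# a = span{e2}, i.e. its e1-component (the b/a-component) vanishes.
assert bracket(e2, e1)[0] == 0
print("quadruple conditions verified")
```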
The next result suggests that in arbitrary dimension left-invariant Kundt structures need not be as ubiquitous.
\begin{proposition}
Suppose that $N$ is a nilpotent Lie group with Lie algebra $\mathfrak{n}.$ If $(\mathfrak{a},\mathfrak{b},(\cdot,\cdot),\beta)$ is a quadruple on $\mathfrak{n}$ corresponding to a left-invariant Kundt structure $P\rightarrow N$, then
\begin{equation}
[\mathfrak{a},\mathfrak{b}]\subset \mathfrak{a}\quad \text{ and } \quad [\mathfrak{a},\mathfrak{n}]\subset \mathfrak{b}.
\end{equation}
Consequently any left-invariant vector field in $\mathfrak{a}$ is an infinitesimal automorphism of $P.$
\end{proposition}
\begin{proof}
If $A\in \mathfrak{a}$, then $ad(A)$ is nilpotent, and therefore the induced map $ad(A):\mathfrak{b}/\mathfrak{a}\rightarrow \mathfrak{b}/\mathfrak{a}$ is also nilpotent. Since this map is assumed to be skew-adjoint, it follows that it must be identically zero. This shows that $[A,B]\in \mathfrak{a}$, for all $B\in \mathfrak{b},$ from which we see that $[\mathfrak{a},\mathfrak{b}]\subset \mathfrak{a}.$
Since $\mathfrak{b}$ is a subalgebra, it follows again from nilpotency by a standard argument that $[\mathfrak{a},\mathfrak{n}]\subset \mathfrak{b}.$
Now let $X$ be a left-invariant vector field on $N,$ such that at the identity $X_e\in \mathfrak{a}$. Then left-invariance implies that $X$ belongs to the null-distribution $\lambda$ associated to the Kundt structure. If $g$ is a metric belonging to $P,$ then left-invariance of $(\cdot,\cdot)$ and $\beta$ implies that if $W,W^{\prime}\in \mathfrak{b}$ and $Z\in \mathfrak{n}$, then $g(W,W^{\prime})$ and $g(X,Z)$ are constant on $N,$ and therefore
\begin{equation}
\mathcal{L}_{X}g(W,W^{\prime})=-g([X,W],W^\prime)-g(W,[X,W^{\prime}])=0,
\end{equation}
and
\begin{equation}
\mathcal{L}_{X}g(X,Z)=-g([X,X],Z)-g(X,[X,Z])=0,
\end{equation}
from which we see that $X$ is nil-Killing w.r.t. $\lambda.$ Hence proposition \ref{nil} implies that $X$ is an infinitesimal automorphism of $P.$
\end{proof}
\section{Conclusion}
In this paper we have identified the $G$-structures to which Kundt and nil-Killing vector fields are infinitesimal automorphisms. The properties of these $G$-structures are studied, showing that they give rise to an intrinsic boost-order classification of tensors and the ability to perform full traces of even-ranked tensors of type $II$.
Along the way we have shown that the existence of a Kundt vector field can be characterized in terms of the $G$-structure having a torsion-free connection.
Lastly, our desire to represent Kundt-CSI spacetimes in terms of nil-Killing vector fields has led us to characterize left-invariant Kundt structures on homogeneous spaces. Here we observed some special restrictions such a structure places upon nilpotent Lie groups.
\section*{Acknowledgements}
I would like to thank Boris Kruglikov, Lode Wylleman and my PhD advisor Sigbjørn Hervik for helpful discussions concerning this project.
\medskip
Betatron x-ray sources \cite{PRL2004Rousse,Corde2013} from laser-plasma interaction have the potential to become invaluable tools to reveal ultrafast dynamics at the atomic length scale. In particular, their broadband spectrum and femtosecond duration are ideal features for femtosecond (fs) x-ray absorption spectroscopy applications \cite{Mahieu2018}. However, they remain marginal among the commonly used x-ray sources, mainly because of their limited photon energies and relatively low average flux. In this letter, we show that the use of density tailored plasmas can dramatically improve the efficiency of Betatron sources and push their energy and flux into the typical range of conventional synchrotron facilities.
A Betatron source reproduces the principle of synchrotron radiation in a millimeter scale laser-produced plasma \cite{PRL2004Kiselev,PRL2004Rousse}. An ion cavity created in the wake of an intense femtosecond laser simultaneously acts as an accelerator and a wiggler. Electrons trapped in the cavity are accelerated in the longitudinal direction ($\hat{z}$) and are wiggled in the transverse direction ($\hat{x}$,$\hat{y}$) by strong space-charge electromagnetic fields. When electrons reach relativistic energies, they emit synchrotron-like radiation in the x-ray energy range -- the so-called Betatron radiation. All the features of the emitted radiation depend on the electron orbits, which are defined by the Lorentz factor $\gamma\gg1$ of the electron, its transverse oscillation amplitude $r_\beta$, and the background electron plasma density $n_e$. The oscillation frequency of the electron is given by ${\omega_\beta = \omega_{p}/\sqrt{2\gamma}}$, where ${\omega_{p} =k_{p}c= \sqrt{4\pi c^2 r_e n_e}}$ is the plasma frequency, and $r_e$ is the classical electron radius \cite{Corde2013}. We define the wiggler parameter $K= r_\beta k_{p}\sqrt{\gamma/2}$. For $K\gg 1$, typical for laser-plasma accelerators, the Betatron radiation is emitted into an aperture angle $\theta\approx (1+K)/\gamma$ and has a broad spectrum extending up to a critical frequency $\omega_c=\frac{3}{2} K \gamma^2 \omega_\beta$, after which it rapidly drops. The effective number of photons produced by each electron per oscillation period can be estimated as $N_{ph} \simeq K/30$. When produced with tens of Terawatts class lasers, the Betatron radiation is emitted by electrons with energies in the hundred MeV range. The source delivers few-femtosecond x-ray pulses with a broadband spectrum extending up to a few keV and containing about $10^6$~photons/shot/0.1\%~BW at 1 keV \cite{PoP2005TaPhuoc,PoP2015Schnell}.
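The scaling laws above can be combined into a quick order-of-magnitude estimate. The following Python sketch evaluates $\omega_p$, $K$, the critical photon energy $\hbar\omega_c$, the aperture angle and the photon yield per period; the parameter values ($n_e=10^{19}$~cm$^{-3}$, $\gamma=200$, $r_\beta=1$~$\mu$m) are illustrative assumptions for a tens-of-TW-class shot, not the measured experimental values.

```python
import math

# Physical constants (CGS units, matching the formulas in the text)
c = 2.998e10          # speed of light [cm/s]
r_e = 2.818e-13       # classical electron radius [cm]
hbar_eV = 6.582e-16   # reduced Planck constant [eV*s]

def betatron_parameters(n_e, gamma, r_beta):
    """Evaluate the scaling laws quoted in the text.

    n_e    : plasma electron density [cm^-3]
    gamma  : electron Lorentz factor (>> 1)
    r_beta : transverse oscillation amplitude [cm]
    """
    omega_p = math.sqrt(4 * math.pi * c**2 * r_e * n_e)   # plasma frequency
    k_p = omega_p / c
    omega_beta = omega_p / math.sqrt(2 * gamma)           # Betatron frequency
    K = r_beta * k_p * math.sqrt(gamma / 2)               # wiggler parameter
    omega_c = 1.5 * K * gamma**2 * omega_beta             # critical frequency
    return {
        "K": K,
        "E_crit_keV": hbar_eV * omega_c / 1e3,            # critical energy [keV]
        "theta_mrad": 1e3 * (1 + K) / gamma,              # aperture angle [mrad]
        "N_ph_per_period": K / 30,                        # photons per oscillation
    }

# Illustrative values: n_e ~ 1e19 cm^-3, ~100 MeV electrons (gamma ~ 200),
# r_beta ~ 1 micron. These are assumptions, not the measured parameters.
p = betatron_parameters(n_e=1e19, gamma=200, r_beta=1e-4)
print(p)
```

With these inputs the wiggler parameter comes out of order a few and the critical energy in the few-keV range, consistent with the spectra quoted above.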
Several paths to increase the flux and photon energy of the Betatron source have been studied. The most straightforward way is to increase the laser power. It results in an increase of the electron energy \cite{NatPhys2006Leemans, Kim2013, Kim2017} and therefore in the emission of brighter and more energetic radiation \cite{NatPhys2010Kneip, Wang2012, Fazel2016, Wood2018}. However, this comes at the cost of a lower repetition rate, inherent to large scale lasers, which is unattractive for applications. Alternatively, one promising option is to tailor the plasma density profile in order to control the electron orbits. In reference \cite{TaPhuoc2008}, longitudinal density tailoring was studied theoretically and numerically. It was shown that $r_\beta$, $\gamma$ and $\omega_\beta$ can be increased for appropriately chosen density profiles. This results in a drastically improved efficiency of the Betatron source. Here we present an experimental study of the Betatron x-ray radiation in plasmas with controllable longitudinal and transverse density gradients. We show that $\omega_c$ and the integrated radiated energy can be increased by an order of magnitude as compared with the commonly used constant density plasma, referred to as the reference case.
\section{Radiation enhancement with tailored density profiles}
\Cref{fig1} schematically shows how density tailored plasmas can be used to modify the orbits of the electrons oscillating in the cavity. Two interaction scenarios, which can be realized experimentally, are presented. In \cref{fig1}a, the plasma density profile has a longitudinal density up-ramp along the laser propagation ($\hat{z}$-axis). As the laser pulse travels through the plasma, the wakefield amplitude grows, hence the electron oscillation frequency $\omega_\beta$ increases and the plasma ion cavity shrinks (the cavity radius is $\propto n_e^{-1/2}$). Shrinking the cavity counteracts the dephasing which occurs when particles start to overrun the plasma wake. Electrons are therefore maintained in the strongest field region \cite{Rittershofer2014,Guillaume2015} and reach a higher energy as compared to the reference case. With $\gamma$ and $\omega_\beta$ being increased, the Betatron radiation is expected to become brighter and more energetic.
A density gradient along the transverse direction (e.g. along $\hat{y}$) provides another degree of optimization. \Cref{fig1}b represents the case where a tilted density feature refracts the laser pulse and the associated plasma wake. When the laser traverses the ascending gradient its axis is deviated, and in the descending gradient the deviation direction reverses. For a sufficiently sharp gradient, $l_\text{grad}\lesssim\lambda_\beta=2\pi c/\omega_\beta$, the electrons' oscillation amplitude $r_\beta$ is increased by a quantity equal to the shift of the laser axis \cite{Yu2018}, thus leading to the emission of more energetic and brighter x-rays without angular deviation. Both longitudinal and transverse density gradients can be combined to further enhance the efficiency of the Betatron source \cite{TaPhuoc2008}.
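The sharpness condition $l_\text{grad}\lesssim\lambda_\beta$ can likewise be checked numerically. In the sketch below the density and Lorentz factor are the same illustrative assumptions as before ($n_e=10^{19}$~cm$^{-3}$, $\gamma=200$), for which the Betatron wavelength is of order a couple of hundred microns:

```python
import math

c = 2.998e10        # speed of light [cm/s]
r_e = 2.818e-13     # classical electron radius [cm]

def betatron_wavelength_um(n_e, gamma):
    """lambda_beta = 2*pi*c / omega_beta, with omega_beta = omega_p / sqrt(2*gamma).

    n_e in cm^-3; returns the Betatron wavelength in microns."""
    omega_p = math.sqrt(4 * math.pi * c**2 * r_e * n_e)
    omega_beta = omega_p / math.sqrt(2 * gamma)
    return 1e4 * 2 * math.pi * c / omega_beta   # cm -> micron

def gradient_is_sharp(l_grad_um, n_e, gamma):
    """Transverse-kick condition quoted in the text: l_grad <~ lambda_beta."""
    return l_grad_um <= betatron_wavelength_um(n_e, gamma)

lam = betatron_wavelength_um(n_e=1e19, gamma=200)
print(lam, gradient_is_sharp(50.0, 1e19, 200))
```

A 50~$\mu$m-scale density feature therefore satisfies the condition at these parameters, while a 500~$\mu$m ramp would not.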
\section{Results}
The experiment has been performed at Laboratoire d'Optique Appliquée using a 50 TW, 30 fs, linearly polarized (along $\hat{x}$-axis) laser, focused with an $f/20$ parabolic mirror onto a gas target containing a mixture of helium (99~$\%$) and nitrogen ($1 \%$) gases. Accelerated electrons were bent towards a scintillator screen using a static magnet (1 Tesla field over 40~cm). However, in our parameter range ($n_e>10^{19}$~cm$^{-3}$, gas length $\sim 5$~mm) the propagation length exceeds the de-phasing length, and the measured spectra are not representative of the energy of the electrons when they emit most of the Betatron radiation. X-rays were observed using either a deep-depletion x-ray CCD (for radiation up to 15~keV) or a scintillator screen imaged with a 16-bit camera (for radiation up to 100~keV). Pairs of Ross filters were used to characterize the radiation in the range from 2 to 80~keV. Plasma density profiles were estimated using a Nomarski interferometer. A second laser pulse (300~mJ~/~30~fs) was used as a machining beam to estimate where electrons are injected and where x-ray radiation is produced along the laser propagation axis \cite{Thaury2013,Pai:PoP2005}.
\subsection{Slow longitudinal ramp}
We first studied the case of a slow longitudinal gradient. For this, we compared the Betatron radiation from two nozzles: one with a constant density profile and the other with an up-ramp density profile. The density measurements are shown in \cref{fig3}a. In both cases the x-ray emission was maximized by adjusting the nozzle position with respect to the laser focus and the gas pressure. The x-ray signal was measured through an array of Aluminium, Copper and Titanium filters. The Betatron spectra that best fit the measured x-ray signals are represented by the shaded areas in \cref{fig4}. As expected, the x-ray signal is improved significantly. The critical energy shifts from $\simeq 5$ to $\simeq 10$~keV and the flux is enhanced by a factor $\simeq 3$. In order to verify that the signal enhancement results from the change of electron orbits associated with the up-ramp density, we have estimated the x-ray emission regions using the method described in \cite{Thaury2013}. We have found that the plasma lengths over which Betatron radiation is produced were $1.5 \pm 0.5$~mm for the up-ramp density case, and $2 \pm 0.5$~mm for the constant density case. Such a difference in propagation lengths cannot account for the observed signal enhancement, which confirms that the density gradient itself has the major effect.
This result is confirmed by test-particle simulations based on the ideal ion cavity model \cite{Phuoc2005}. With this simplified approach we have identified the basic parameters which fit the Betatron radiation features \cite{Corde2011}. In \cref{fig4}, the fit of the experimental spectrum for the constant density case is obtained using as initial conditions $r_\beta = 1.25$ $\mu$m, $n_e=10^{19}$~cm$^{-3}$ and a 2~mm propagation distance (this choice of parameters is not unique, but that does not matter since we focus on the relative differences between the spectra calculated with and without the gradient). The spectrum for the up-ramp is well reproduced using the same initial conditions, but with a density profile that increases from $10^{19}$~cm$^{-3}$ to $2\times10^{19}$~cm$^{-3}$ over 2~mm.
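The test-particle picture can be sketched in a few lines: in the ideal ion cavity the transverse restoring force is linear, so at fixed energy the electron obeys $\ddot{r}=-(\omega_p^2/2\gamma)\,r$ and oscillates at $\omega_\beta=\omega_p/\sqrt{2\gamma}$. The following minimal integrator is an illustration of the principle, not the actual simulation code used for \cref{fig4}; it checks this frequency numerically.

```python
import math

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12

def betatron_orbit(gamma, n_e_cm3, r0_um, n_steps=20000):
    """Integrate r'' = -(omega_p^2 / (2*gamma)) * r with velocity Verlet,
    i.e. a test electron at fixed energy in an ideal ion cavity.
    Returns times and transverse positions over three betatron periods."""
    omega_p = math.sqrt(n_e_cm3 * 1e6 * e**2 / (eps0 * m_e))
    omega_b = omega_p / math.sqrt(2 * gamma)
    dt = 3 * (2 * math.pi / omega_b) / n_steps
    r, v = r0_um * 1e-6, 0.0
    ts, rs = [], []
    for i in range(n_steps):
        a = -(omega_p**2 / (2 * gamma)) * r
        v += 0.5 * dt * a
        r += dt * v
        v += 0.5 * dt * (-(omega_p**2 / (2 * gamma)) * r)
        ts.append((i + 1) * dt)
        rs.append(r)
    return ts, rs

ts, rs = betatron_orbit(gamma=400, n_e_cm3=1e19, r0_um=1.25)
# Period from two successive downward zero crossings of r(t)
zeros = [ts[i] for i in range(1, len(rs)) if rs[i - 1] > 0 >= rs[i]]
omega_measured = 2 * math.pi / (zeros[1] - zeros[0])
print(f"measured betatron frequency: {omega_measured:.3e} rad/s")
```

In a density up-ramp, $\omega_p$ in the restoring force would grow with $z$, which is precisely the mechanism invoked above to keep the electrons in phase with the shrinking cavity.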
\subsection{Sharp tilted ramp}
The effect of the sharp tilted transverse density gradient was then studied. It was created by inserting a $100$~$\mu$m diameter wire into the gas flow from a tilted supersonic nozzle. For an accurate estimate of the density profile we performed two-dimensional gas flow modeling using OpenFOAM software \cite{OpenFOAM:2009}. The result is shown in \cref{fig3}b and \cref{fig3}c.
The x-ray radiation was characterized with and without the wire. \Cref{fig5} presents typical x-ray beam profiles. Without the wire, the x-ray beam is quasi-circular with a mean divergence $\theta_r\simeq 20 \pm 2$~mrad. When the wire is inserted, the radiation profile becomes elliptical with its major axis along the $\hat{y}$-axis. The mean divergences are ${\theta_x=13 \pm 2}$~mrad and ${\theta_y=29 \pm 3}$~mrad.
The analysis of the x-ray beam profiles provides useful information as there is a direct correlation between the x-ray beam profile and the electron orbits \cite{PRL2006TaPhuoc}. In particular, the decrease of the x-ray beam divergence along the $\hat{x}$-axis (perpendicular to the shock tilt) is a signature of an increase of $\gamma$. From the divergence measurement, we could estimate an electron energy gain of the order of 40-50~$\%$, assuming that the oscillation amplitude along the $\hat{x}$-axis, $r_{\beta_x}$, remains constant. In the direction of the density gradient tilt, the radiation divergence is significantly increased. This is the consequence of an increase of the electron oscillation amplitude $r_{\beta_y}$ along the $\hat{y}$-axis. The measured asymmetry with the wire translates to the ratio of the oscillation amplitudes $\theta_y/\theta_x = r_{\beta_y}/r_{\beta_x} \simeq 2.2$.
These deductions confirm that the transverse density gradient produced by the shock makes it possible to increase both $\gamma$ and $r_\beta$. The spectra with and without the wire were measured using Ross filter pairs. They are presented in \cref{fig6} together with the reference spectrum. The thin solid lines correspond to the spectra obtained using the test-particle simulations with the same parameters as before, but with $r_\beta = 2.5$~$\mu$m in the case of the wire, which is in good agreement with the ratio of $r_{\beta_y}/r_{\beta_x}$ deduced from the x-ray beam profiles. Fitting the obtained data with the standard synchrotron spectra, we can estimate the critical energies at $50$~keV and $10$~keV with and without the wire respectively, and at $5$~keV for the reference case. The total radiated x-ray energy is further increased 2.5 times when using the wire. From a series of systematic shots, we have found that the effect was sensitive to the wire $\hat{z}$-position within a $\pm 500$~$\mu$m interval, and that the shock is required to be sharp to the level of typically a hundred microns. Optimizing the wire position ensured that electrons are trapped and accelerated to the maximum energies before the shock.
\subsection{Particle-in-Cell simulations}
For an additional insight into the physics of the x-ray enhancement, we performed particle-in-cell (PIC) simulations. We have used the quasi-3D pseudo-spectral code FBPIC \cite{Lehe:CPC2016}. The target was considered to be a fully ionized He plasma with 2\% of N$^{+5}$ ion plasma. We considered the cases of gas flow without and with the tilted shock (gray colors in \cref{fig4}). The density profile is an asymmetric Gaussian defined by $n(z<0) = n_0 \exp(-z^2/L_\mathrm{l})$, and $n(z>0) = n_0 \exp(-z^2/L_\mathrm{r})$, where $L_\mathrm{l}=3$~mm and $L_\mathrm{r}=1$~mm. The peak density without wire insertion is $n_0=1.64\times 10^{19}$~cm$^{-3}$. A shock with a peak density $n_0/2$ is added at $z_s=-1.2$~mm; it has an asymmetric Gaussian density profile with $L_\mathrm{sl}=0.1$~mm and $L_\mathrm{sr}=0.7$~mm. The shock tilt angle estimated from CFD simulations is $\theta_s=20^\circ$, and the tilt is introduced by replacing $z \to z - z_s + y\tan\theta_s$.
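Written out explicitly, this profile can be sketched as follows. This is an illustrative reimplementation: it assumes the shock term is simply added on top of the main profile, and it reproduces the $\exp(-z^2/L)$ form exactly as given above (lengths in mm, densities in cm$^{-3}$).

```python
import math

def asym_gaussian(z, peak, L_left, L_right):
    """Asymmetric Gaussian with different widths on each side of z = 0,
    written with the exp(-z^2/L) form used in the text (lengths in mm)."""
    L = L_left if z < 0 else L_right
    return peak * math.exp(-z**2 / L)

def density(z, y, n0=1.64e19, z_s=-1.2, theta_s_deg=20.0):
    """Plasma density (cm^-3) at (z, y), both in mm: main profile plus shock.

    The tilt enters only through the shock term, via the substitution
    z -> z - z_s + y * tan(theta_s).
    """
    main = asym_gaussian(z, n0, L_left=3.0, L_right=1.0)
    z_shock = z - z_s + y * math.tan(math.radians(theta_s_deg))
    shock = asym_gaussian(z_shock, n0 / 2, L_left=0.1, L_right=0.7)
    return main + shock
```

At $y=0$ the shock peaks at $z=z_s$; moving in $y$ shifts the shock position by $-y\tan\theta_s$, which is what refracts the laser and displaces the propagation axis.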
\Cref{fig7} shows the plasma density in gray scale in the $(z,y)$-plane, as well as the laser centroid (thick curve) and particle (thin curves) trajectories, colored according to the laser peak field and the particle energy respectively. When only the longitudinal gradient is considered (see \cref{fig7}a), the acceleration is continuous, and the oscillation amplitudes do not change significantly during the interaction. When the shock is added in \cref{fig7}b, the tilt produces laser refraction and leads to a displacement of the propagation axis by $\approx4.5$~$\mu$m. It induces a kick on the accelerated electrons, increasing their oscillation amplitude $r_\beta$. Moreover, the sharp rise of the plasma density at the shock relocates particles to higher accelerating and focusing fields, which boosts electron energies and induces higher-frequency oscillations. The spectra for each case were calculated to estimate the overall effect on the betatron emission. They are shown with the thick solid curves in \cref{fig6}. A good agreement with the experimental measurements of the photon energy distributions (blue and green curves) is obtained. The total energy produced per charge is increased by a factor of 3, which is close to the experimental values.
\section{Conclusion}
In conclusion, we have demonstrated that the efficiency of Betatron sources can be significantly improved by using longitudinal and transverse density gradients. The radiation produced has a critical energy ten times higher than the Betatron radiation produced in a homogeneous plasma. We anticipate that this progress will represent a significant milestone in the development of table-top femtosecond x-ray sources.
*Corresponding author. Electronic address: [email protected]\\
Acknowledgments\\
MK and UC would like to acknowledge the project ADONIS (CZ 02.1.01/0.0/0.0/16-019/0000789) from ERDF and the Project LQ1606 obtained with the financial support of the Ministry of Education, Youth and Sports as part of targeted support from the National Programme of Sustainability II.
\section{Introduction}
\label{sec:intro}
\subsection{Need for interpretability}
Many metrics have been developed to evaluate the performance of machine learning (ML) systems. In the case of supervised systems, these metrics compare the output of the algorithm to a ground truth, in order to evaluate its ability to reproduce a label given by a physician. However, the users (patients and clinicians) may want more information before relying on such systems. On which features does the model rely to compute its results? Are these features close to the way a clinician thinks? If not, why? This questioning from the actors of the medical field is justified, as errors in real life may lead to dramatic consequences.
Trust into ML systems cannot be built only based on a set of metrics evaluating the performance of the system.
Indeed, various examples of machine learning systems taking correct decisions for the wrong reasons exist, e.g.~\cite{ribeiroWhyShouldTrust2016,fongInterpretableExplanationsBlack2017,degraveAIRadiographicCOVID192021}. Thus, even though their performance is high, they may be unreliable and, for instance, not generalize well to slightly different data sets. One can try to prevent this issue by interpreting the model with an appropriate method whose output will highlight the reasons why the model took its decision.
In \cite{ribeiroWhyShouldTrust2016}, the authors show a now classical case of a system that correctly classifies images for wrong reasons. They purposely designed a biased data set in which wolves are always in a snowy environment whereas huskies are not. Then, they trained a classifier to differentiate wolves from huskies: this classifier had a good accuracy, but it classified huskies with a snowy background as wolves, and wolves that were not in the snow as huskies. Using an interpretability method, they further highlighted that the classifier was looking at the background and not at the animal (see Figure~\ref{fig: ribeiro_husky_snow}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.6\textwidth]{figures/introduction/ribeiro_husky_snow.png}
\caption[Example of an interpretability method highlighting why a network took the wrong decision.]{Example of an interpretability method highlighting why a network took the wrong decision. The explained classifier was trained on the binary task ``Husky'' vs ``Wolf''. The pixels used by the model are actually in the background and highlight the snow. \\
Adapted from \citep{ribeiroWhyShouldTrust2016}. Permission to reuse was kindly granted by the authors.}
\label{fig: ribeiro_husky_snow}
\end{figure}
Another study \cite{fongInterpretableExplanationsBlack2017} detected a bias in ImageNet (a widely used data set of natural images) as the interpretation of images with the label ``chocolate sauce'' highlighted the importance of the spoon. Indeed, ImageNet ``chocolate sauce'' images often contained spoons, leading to a spurious correlation. There are also examples of similar problems in medical applications. For instance, a recent paper \cite{degraveAIRadiographicCOVID192021} showed with interpretability methods that some deep learning systems detecting COVID-19 from chest radiographs actually relied on confounding factors rather than on the actual pathological features. Indeed, their model focused on other regions than the lungs to evaluate the COVID-19 status (edges, diaphragm and cardiac silhouette). Of note, their model was trained on public data sets which were used by many studies.
\subsection{How to interpret models}
According to \cite{liptonMythosModelInterpretability2018}, model interpretability can be broken down into two categories: transparency and post-hoc explanations.
A model can be considered as transparent when it (or each part of it) can be fully understood as such, or when the learning process is understandable. A natural and common candidate that fits, at first sight, these criteria is the linear regression algorithm, where coefficients are usually seen as the individual contributions of the input features. Another candidate is the decision tree approach, where model predictions can be broken down into a series of understandable operations. One can reasonably consider these models as transparent: one can easily identify the features that were used to take the decision. However, one may need to be cautious not to push the medical interpretation too far. Indeed, the fact that a feature has not been used by the model does not mean that it is not associated with the target. It just means that the model did not need it to increase its performance. For instance, a classifier aiming at diagnosing Alzheimer's disease may need only a set of regions (for instance from the medial temporal lobe of the brain) to achieve an optimal performance. This does not mean that other brain regions are not affected by the disease, just that they were not used by the model to take its decision. This is the case for example for sparse models like LASSO, but also for standard multiple linear regressions. Moreover, features given as input to transparent models are often highly engineered, and choices made before the training step (preprocessing, feature selection) may also hurt the transparency of the whole framework. Nevertheless, in spite of these caveats, such models can reasonably be considered transparent, in particular when compared to deep neural networks, which are intrinsically black boxes.
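This caveat can be illustrated with a toy example (synthetic data, unrelated to any neuroimaging model): below, a second feature is strongly correlated with the target, yet a univariate linear fit on the first feature alone already explains the data, so a sparse model would have no reason to use the second one.

```python
import random
import statistics

random.seed(0)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 0.05) for v in x1]     # x2 nearly duplicates x1
y = [2 * v + random.gauss(0, 0.1) for v in x1]   # target driven by x1 only

def pearson(a, b):
    """Pearson correlation coefficient (population statistics)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

# Univariate least-squares fit of y on x1 alone
beta = pearson(x1, y) * statistics.pstdev(y) / statistics.pstdev(x1)
resid = [yi - beta * xi for xi, yi in zip(x1, y)]
r2 = 1 - statistics.pvariance(resid) / statistics.pvariance(y)

print(f"corr(x2, y) = {pearson(x2, y):.2f}   (strong association)")
print(f"R^2 of fit using x1 only = {r2:.2f}  (x2 was never needed)")
```

A model reading off only the fitted coefficients would conclude that x2 is irrelevant, even though it is almost perfectly associated with the target.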
The second category of interpretability methods, post-hoc interpretations, allows dealing with non-transparent models. Xie et al.~\cite{xieExplainableDeepLearning2020} proposed a taxonomy in three categories: \textit{visualization} methods consist in extracting an attribution map of the same size as the input whose intensities allow knowing where the algorithm focused its attention, \textit{distillation} approaches consist in reproducing the behavior of a black-box model with a transparent one, and \textit{intrinsic} strategies include interpretability components within the framework, which are trained along with the main task (for example, a classification). In the present work, we focus on this second category of methods (post-hoc), and propose a new taxonomy including other methods of interpretation (see Figure~\ref{fig:taxonomy}). Post-hoc interpretability is the most widely used category nowadays, as it allows interpreting deep learning methods, which have become the state of the art for many tasks in neuroimaging, as in other application fields.
\subsection{Chapter content and outline}
This chapter focuses on methods developed to interpret non-transparent machine learning systems, mainly deep learning systems,
computing classification or regression tasks from high-dimensional inputs. The interpretability of other frameworks (in particular generative models such as variational autoencoders or generative adversarial networks) is not covered as there are not enough studies addressing them. This may be because high-dimensional outputs (such as images) are easier to interpret ``as such'', whereas low-dimensional outputs (such as scalars) are less transparent.
Most interpretability methods presented in this chapter produce an attribution map: an array with the same dimensions as that of the input (up to a resizing), that can be overlaid on top of the input in order to exhibit an explanation of the model prediction. In the literature, many different terms may coexist to name this output such as saliency map, interpretation map or heatmap. To avoid misunderstandings, in the following, we will only use the term ``attribution map''.
The chapter is organized as follows. Section~\ref{sec:section2} presents the most commonly used interpretability methods proposed for computer vision, independently of medical applications. It also describes metrics developed to evaluate the reliability of interpretability methods. Then, section~\ref{sec:section3} details their application to neuroimaging. Finally, section~\ref{sec:limitations} discusses current limitations of interpretability methods, presents benchmarks conducted in the neuroimaging field and gives some advice to the readers who would like to interpret their own models.
Mathematical notations and abbreviations used during this chapter are summarized in Table~\ref{tab:notations} and Table~\ref{tab:abbreviations}. A short reminder on neural network training procedure and a brief description of the diseases mentioned in the present chapter are provided in Appendices~\ref{appendix:network} and~\ref{appendix:diseases}.
\begin{table}
\caption{Mathematical notations}
\label{tab:notations}
\noindent\hrulefill
\begin{itemize}
\item $X_0$ is the input tensor given to the network, and $X$ refers to any input, sampled from the set $\mathcal{X}$.
\item $y$ is a vector of target classes corresponding to the input.
\item $f$ is a network of $L$ layers. The first layer is the closest to the input, the last layer is the closest to the output. A layer is a function.
\item $g$ is a transparent function which aims at reproducing the behaviour of $f$.
\item $w$ and $b$ are the weights and the bias associated to a linear function (for example in a fully-connected layer).
\item $u$ and $v$ are locations (set of coordinates) corresponding to a node in a feature map. They belong respectively to the set $\mathcal{U}$ and $\mathcal{V}$.
\item $A^{(l)}_k(u)$ is the value of the feature map computed by layer $l$, of $K$ channels at channel $k$, at position $u$.
\item $R^{(l)}_k(u)$ is the value of a property back-propagated through layer $l+1$, of $K$ channels at channel $k$, at position $u$. $R^{(l)}$ and $A^{(l)}$ have the same number of channels.
\item $o_c$ is the output node of interest (in a classification framework, it corresponds to the node of the class $c$).
\item $S_c$ is an attribution map corresponding to the output node $o_c$.
\item $m$ is a mask of perturbations. It can be applied to $X$ to compute its perturbed version $X^m$.
\item $\Phi$ is a function producing a perturbed version of an input $X$.
\item $\Gamma_c$ is the function computing the attribution map $S_c$ from the black-box function $f$ and an input $X_0$.
\end{itemize}
\noindent\hrulefill
\end{table}
\begin{table}
\caption{Abbreviations}
\label{tab:abbreviations}
\noindent\hrulefill
\begin{itemize}
\item \textbf{CAM} Class activation maps
\item \textbf{CNN} Convolutional neural network
\item \textbf{CT} Computed tomography
\item \textbf{Grad-CAM} Gradient-weighted class activation mapping
\item \textbf{LIME} Local interpretable model-agnostic explanations
\item \textbf{LRP} Layer-wise relevance propagation
\item \textbf{MRI} Magnetic resonance imaging
\item \textbf{SHAP} SHapley Additive exPlanations
\item \textbf{T1w} T1-weighted [Magnetic Resonance Imaging]
\end{itemize}
\noindent\hrulefill
\end{table}
\section{Interpretability methods}
\label{sec:section2}
This section presents the main interpretability methods proposed in the domain of computer vision. We restrict ourselves to the methods that have been applied to the neuroimaging domain (the applications themselves being presented in Section~\ref{sec:section3}).
The outline of this section is largely inspired from the one proposed by Xie et al.~\cite{xieExplainableDeepLearning2020}:
\begin{enumerate}
\item \textbf{weight visualization} consists in directly visualizing the weights learned by the model, which is natural for linear models but much less informative for deep learning networks,
\item \textbf{feature map visualization} consists in displaying intermediate results produced by a deep learning network to better understand its operation principle,
\item \textbf{back-propagation methods} back-propagate a signal through the machine learning system from the output node of interest $o_c$ to the level of the input to produce an attribution map,
\item \textbf{perturbation methods} locally perturb the input and evaluate the difference in performance between using the original input and the perturbed version to infer which parts of the input are relevant for the machine learning system,
\item \textbf{distillation} approximates the behavior of a black-box model with a more transparent one, and then draws conclusions from this new model,
\item \textbf{intrinsic} includes the only methods of this chapter that are not post-hoc explanations: in this case, interpretability is obtained thanks to components of the framework that are trained at the same time as the model.
\end{enumerate}
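To make the perturbation idea (item 4) concrete, here is a minimal occlusion sketch on a toy one-dimensional ``image'' with a hand-made scoring function; in practice the window would be a 2D or 3D patch and the score would come from the trained network. The example is entirely illustrative.

```python
def occlusion_map(x, score, window=3, baseline=0.0):
    """Attribution by occlusion: replace a sliding window of the input with a
    baseline value and record how much the model score drops."""
    ref = score(x)
    attribution = [0.0] * len(x)
    for start in range(len(x) - window + 1):
        perturbed = list(x)
        for i in range(start, start + window):
            perturbed[i] = baseline
        drop = ref - score(perturbed)
        for i in range(start, start + window):
            attribution[i] += drop / window  # spread the drop over the window
    return attribution

# Toy 'model': only positions 4, 5 and 6 of the input contribute to the score
weights = [0, 0, 0, 0, 1, 2, 1, 0, 0, 0]
score = lambda x: sum(w * v for w, v in zip(weights, x))

attr = occlusion_map([1.0] * 10, score)
print(attr)  # highest values where occlusion hurts the score the most
```

The resulting attribution peaks exactly over the positions the toy model relies on, which is the behavior one hopes for when occluding patches of a medical image.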
Finally, for the methods producing an attribution map, a section is dedicated to the metrics used to evaluate different properties (for example reliability or human-intelligibility) of the maps.
We caution readers that this taxonomy is not perfect: some methods may belong to several categories (for example, LIME and SHAP could belong either to perturbation or to distillation methods). Moreover, interpretability is still an active research field, so some categories may (dis)appear or be fused in the future.
The interpretability methods were (most of the time) originally proposed in the context of a classification task. In this case, the network outputs an array of size $C$, corresponding to the number of different labels existing in the data set, and the goal is to know how the output node corresponding to a particular class $c$ interacts with the input or with other parts of the network. However, these techniques can be extended to other tasks: for example for a regression task, we will just have to consider the output node containing the continuous variable learned by the network. Moreover, some methods do not depend on the nature of the algorithm (e.g. standard-perturbation or LIME) and can be applied to any machine learning algorithm.
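As a preview of the back-propagation family (item 3), the simplest attribution map is the absolute gradient of the class score with respect to the input. For the toy differentiable model below the gradient can be written by hand; for a real network one would rely on automatic differentiation. The model and its weights are purely illustrative.

```python
import math

# Toy differentiable 'network': o_c(x) = sum_i w_i * tanh(x_i),
# whose input gradient is d o_c / d x_i = w_i * (1 - tanh(x_i)^2).
w = [0.0, 2.0, -1.0, 0.5]

def score(x):
    return sum(wi * math.tanh(xi) for wi, xi in zip(w, x))

def saliency(x):
    """Standard gradient back-propagation: S_c(i) = |d o_c / d x_i|."""
    return [abs(wi * (1 - math.tanh(xi) ** 2)) for wi, xi in zip(w, x)]

x0 = [0.3, 0.1, -0.2, 0.8]
s = saliency(x0)

# Sanity check of one component against a finite difference
eps = 1e-6
x_shift = list(x0)
x_shift[1] += eps
fd = (score(x_shift) - score(x0)) / eps
print(s, fd)
```

The saliency is exactly zero for the input component with zero weight, and largest where the model is most sensitive, which is the information an attribution map is meant to convey.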
\tikzstyle{root}=[rectangle, draw=black, rounded corners, fill=lightgray, drop shadow, text centered, anchor=north, text=black, text width=5cm, font=\small]
\tikzstyle{category}=[rectangle, draw=black, rounded corners, fill=lightgray, drop shadow, text centered, anchor=north, text=black, text width=3.5cm, font=\footnotesize]
\tikzstyle{description}=[rectangle, draw=black, rounded corners, fill=white, drop shadow, anchor=north, text=black, text width=3.5cm, font=\scriptsize]
\tikzstyle{myarrow}=[stealth-, thick]
\begin{sidewaysfigure}
\centering
\begin{tikzpicture}[node distance=2cm]
\node (Root) [root]
{
\textbf{Interpretability methods}
};
\node (Weight) [category, below left=1cm and 5.5cm of Root]
{
\textbf{Weight \\visualization}
};
\node (WeightDescription) [description, below=0.2cm of Weight]
{
\begin{itemize}[leftmargin=*]
\item[+] \textcolor{Green}{Standard approach for linear models}
\item[--] \textcolor{Red}{Usually uninformative for neural networks}
\end{itemize}
};
\node (Feature) [category, below left=1cm and 1.5cm of Root]
{
\textbf{Feature \\visualization}
};
\node (FeatureDescription) [description, below=0.2cm of Feature]
{
\begin{itemize}[leftmargin=*]
\item[--] \textcolor{Red}{Mostly ad-hoc procedures}
\end{itemize}
};
\node (Backprop) [category, below left=1cm and -2.5cm of Root]
{
\textbf{Back-propagation \\methods}
};
\node (Gradient) [category, below left=5.5cm and 1cm of Backprop]
{
\textbf{Gradient \\back-propagation}
};
\node (Standard) [category, below left=1cm and 0.25cm of Gradient]
{
\textbf{Standard gradient back-propagation}
};
\node (StandardDescription) [description, below=0.2cm of Standard]
{
\begin{itemize}[leftmargin=*]
\item Also called ``saliency map''
\item Very widely used approach
\item[+] \textcolor{Green}{Simple concept and implementation}
\item[+] \textcolor{Green}{A good first choice}
\item[--] \textcolor{Red}{Produces scattered maps}
\end{itemize}
};
\node (Guided) [category, below=1cm of Gradient]
{
\textbf{Guided \\back-propagation}
};
\node (GuidedDescription) [description, below=0.2cm of Guided]
{
\begin{itemize}[leftmargin=*]
\item[--] \textcolor{Red}{Has severe defects \cite{adebayoSanityChecksSaliency2018}}
\end{itemize} };
\node (GradCAM) [category, below right=1cm and 0.25cm of Gradient]
{
\textbf{Gradient-weighted class activation map (Grad-CAM)}
};
\node (GradCAMDescription) [description, below=0.2cm of GradCAM]
{
\begin{itemize}[leftmargin=*]
\item Very widely used approach
\item[+] \textcolor{Green}{A good first choice}
\item[+] \textcolor{Green}{Non-scattered maps}
\item[--] \textcolor{Red}{Blurry maps due to upsampling}
\end{itemize}
};
\node (Relevance) [category, below right=5.5cm and 1.5cm of Backprop]
{
\textbf{Relevance \\back-propagation}
};
\node (LRP) [category, below left=1cm and -1.75cm of Relevance]
{
\textbf{Layer-wise \\relevance (LRP)}
};
\node (LRPDescription) [description, below=0.2cm of LRP]
{
\begin{itemize}[leftmargin=*]
\item Choose its extensions rather than the original LRP
\end{itemize} };
\node (Taylor) [category, below right=1cm and -1.75cm of Relevance]
{
\textbf{Deep Taylor decomposition}
};
\node (TaylorDescription) [description, below=0.2cm of Taylor]
{
\begin{itemize}[leftmargin=*]
\item Same principle as LRP but with different back-propagation rule
\end{itemize}
};
\node (Perturbation) [category, below right=1cm and -2.5cm of Root]
{
\textbf{Perturbation \\methods}
};
\node (PerturbationDescription) [description, below=0.2cm of Perturbation]
{
\begin{itemize}[leftmargin=*]
\item[+] \textcolor{Green}{Model-agnostic (can be applied to any model)}
\item[+] \textcolor{Green}{Detects image parts that are necessary for a correct decision}
\item[--] \textcolor{Red}{The perturbed data may be outside the training distribution}
\item[--] \textcolor{Red}{Computationally expensive}
\end{itemize}
};
\node (Distillation) [category, below right=1cm and 1.5cm of Root]
{
\textbf{Distillation \\methods}
};
\node (DistillationDescription) [description, below=0.2cm of Distillation]
{
\begin{itemize}[leftmargin=*]
\item Approximate a black-box model with an interpretable one
\item The approximation can be local (e.g. LIME, SHAP) or global
\item So far, global distillation has been rarely used in neuroimaging
\end{itemize}
};
\node (Intrinsinc) [category, below right=1cm and 5.5cm of Root]
{
\textbf{Intrinsic \\methods}
};
\node (IntrinsincDescription) [description, below=0.2cm of Intrinsinc]
{
\begin{itemize}[leftmargin=*]
\item Interpretability is built-in the model and not {\sl post-hoc}
\item[+] \textcolor{Green}{Can improve interpretability and performance at the same time}
\item[--] \textcolor{Red}{Cannot be applied to an arbitrary model}
\end{itemize}
};
\draw[myarrow] (Weight.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Feature.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Backprop.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Perturbation.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Distillation.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Intrinsinc.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Gradient.east) -| (Backprop.south);
\draw[myarrow] (Standard.north) -- ++(0,0.5) -| (Gradient.south);
\draw[myarrow] (Guided.north) -- ++(0,0.5) -| (Gradient.south);
\draw[myarrow] (GradCAM.north) -- ++(0,0.5) -| (Gradient.south);
\draw[myarrow] (Relevance.west) -| (Backprop.south);
\draw[myarrow] (LRP.north) -- ++(0,0.5) -| (Relevance.south);
\draw[myarrow] (Taylor.north) -- ++(0,0.5) -| (Relevance.south);
\end{tikzpicture}
\caption{Taxonomy of the main interpretability methods.}
\label{fig:taxonomy}
\end{sidewaysfigure}
\subsection{Weight visualization}
At first sight, one can be tempted to directly visualize the weights learned by the algorithm. This method is really simple, as it does not require further processing. However, even though it can make sense for linear models, it is not very informative for most networks unless they are specially designed for this interpretation.
This is the case for AlexNet \cite{krizhevskyImageNetClassificationDeep2012}, a convolutional neural network (CNN) trained on natural images (ImageNet). In this network, the size of the kernels in the first layer is large enough ($11\times11$) to distinguish patterns of interest. Moreover, as the three channels of the first layer correspond to the three color channels of the images (red, green and blue), the values of the kernels can also be represented in terms of colors (this is not the case for hidden layers, in which the meaning of the channels is lost). The 96 kernels of the first layer were illustrated in the original article as in Figure~\ref{fig:alexnet_weights}. However, for hidden layers, this kind of interpretation may be misleading as non-linear activation layers are added between the convolutional or fully-connected layers; this is why the authors only visualized the weights of the first layer.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/weights/AlexNet_weights.png}
\caption[Convolutional kernels learned by the first convolutional layer of AlexNet.]{96 convolutional kernels of size $3@11\times11$ learned by AlexNet's first convolutional layer from the $3@224\times224$ input images. \\
Adapted from \citep{krizhevskyImageNetClassificationDeep2012}. Permission to reuse was kindly granted by the authors.}
\label{fig:alexnet_weights}
\end{figure}
To understand the weight visualization in hidden layers of a network, Voss et al.~\cite{vossVisualizingWeights2021} proposed to add some context to the input and the output channels. In this way, they enriched the weight visualization with feature visualization methods able to generate an image corresponding to the input node and the output node (see Figure~\ref{fig:context_weights}). However, the feature visualization methods used to bring this context can themselves be difficult to interpret, so the approach only shifts the interpretability problem from weights to features.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/weights/context_weights.png}
\caption[Weight visualization using feature maps context.]{The weights of small kernels in hidden layers (here $5\times5$) can be really difficult to interpret alone. Here, some context allows a better understanding of how a kernel modulates the interaction between the concepts conveyed by the input and the output. \\
Adapted from \citep{vossVisualizingWeights2021} (CC BY 4.0).}
\label{fig:context_weights}
\end{figure}
\subsection{Feature map visualization}
Feature maps are the results of intermediate computations done from the input and resulting in the output value. Then, it seems natural to visualize them, or link them to concepts to understand how the input is successively transformed into the output.
Methods described in this section aim at highlighting which concepts a feature map (or part of it) $A$ conveys.
\subsubsection{Direct interpretation}
The output of a convolution has the same shape as its input: a 2D image processed by a convolution will become another 2D image (the size may vary). Thus, it is possible to directly visualize these feature maps and compare them to the input to understand the operations performed by the network. However, the number of filters of convolutional layers (often a hundred) makes the interpretation difficult, as a high number of images must be interpreted for a single input.
Instead of directly visualizing the feature map $A$, it is possible to study the latent space comprising the values taken at the level of $A$ by all the samples of a data set. One can then study the deformations of the input by drawing trajectories between samples in this latent space, or more simply look at the distribution of some label in a manifold learned from the latent space. In this way, it is possible to better understand which patterns were detected, or at which layer in the network classes begin to be separated (in the classification case). There is often no theoretical framework to illustrate these techniques, thus we refer to studies in the context of the medical application (see Section~\ref{sec:application_FM} for references).
\subsubsection{Input optimization}
Olah et al.~\cite{olahFeatureVisualization2017a} proposed to compute an input that maximizes the value of a feature map $A$ (see Figure~\ref{fig:FM_parts}). However, this technique leads to unrealistic images that may themselves be difficult to interpret, particularly for neuroimaging data. To get a better insight into the behavior of layers or filters, another simple technique, illustrated by the same authors, consists in isolating the inputs that lead to the highest activation of $A$. The combination of both methods, displayed in Figure~\ref{fig:FM_examples}, allows a better understanding of the concepts conveyed by $A$ in a GoogleNet trained on natural images.
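The mechanics of this input optimization can be sketched with gradient ascent on the input. In the toy example below, a single made-up linear filter stands in for a channel of a trained network; real implementations run the ascent through the network with automatic differentiation and add regularizers to keep the image plausible.

```python
import numpy as np

# Toy sketch of activation maximization: the "network" is reduced to a
# single linear filter w (made up), so the activation to maximize is
# A(x) = w . x. Real implementations use autograd on a trained CNN and
# regularize x to keep it natural-looking.
rng = np.random.default_rng(0)
w = rng.normal(size=64)            # stand-in for a channel of a trained network
x = np.zeros(64)                   # start from a blank "image"

for _ in range(100):               # gradient ascent on the input
    grad = w                       # dA/dx is w itself for a linear activation
    x += 0.1 * grad / (np.linalg.norm(grad) + 1e-8)

activation = float(w @ x)          # grows as x aligns with the filter
```

After the ascent, the optimized input is aligned with the filter, which is exactly what makes the resulting image informative about the feature.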
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/feature_maps/FM_parts.png}
\caption[Optimization of the input for different levels of feature maps.]{Optimization of the input for different levels of feature maps. \\
Adapted from \citep{olahFeatureVisualization2017a} (CC BY 4.0).}
\label{fig:FM_parts}
\end{figure}
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/feature_maps/FM_examples.png}
\caption[Association of input optimization with examples.]{Interpretation of a neuron of a feature map by optimizing the input associated with a bunch of training examples maximizing this neuron. \\
Adapted from \citep{olahFeatureVisualization2017a} (CC BY 4.0).}
\label{fig:FM_examples}
\end{figure}
\subsection{Back-propagation methods}
\label{subsec:backprop}
The goal of these interpretability methods is to link the value of an output node of interest $o_c$ to the image $X_0$ given as input to a network. They do so by back-propagating a signal from $o_c$ to $X_0$: this process (backward pass) can be seen as the opposite of the operation performed when computing the output value from the input (forward pass).
Any property can be back-propagated as soon as its value at the level of a feature map $l-1$ can be computed from its value in the feature map $l$. In this section, the back-propagated properties are gradients or the relevance of a node $o_c$.
\subsubsection{Gradient back-propagation}
During network training, gradients corresponding to each layer are computed according to the loss to update the weights. Thus, we can see these gradients as the difference needed at the layer level to improve the final result: by adding this difference to the weights, the probability of the true class $y$ increases.
In the same way, the gradients can be computed at the image level to find how the input should vary to change the value of $o_c$ (see example in Figure~\ref{fig:simonyan_gradients}). This gradient computation was proposed by \cite{simonyanDeepConvolutionalNetworks2013}, in which the attribution map $S_c$ corresponding to the input image $X_0$ and the output node $o_c$ is computed according to the following equation:
\begin{equation}
S_c = \frac{\partial{o_c}}{\partial{X}}\Bigr|_{\substack{X=X_0}}
\end{equation}
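A minimal sketch of this computation, with a toy linear softmax classifier standing in for a trained CNN (all values are made up); deep learning frameworks obtain $S_c$ with a single backward pass, while the gradient of the softmax output is written analytically here:

```python
import numpy as np

# Toy linear softmax classifier in place of a trained CNN.
rng = np.random.default_rng(0)
n_pixels, n_classes = 16, 3
W = rng.normal(size=(n_classes, n_pixels))
x0 = rng.normal(size=n_pixels)               # the "input image" X_0

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

c = 1                                         # output node of interest o_c
p = softmax(W @ x0)
# Analytic gradient: d p_c / d x = sum_k p_c (delta_ck - p_k) W_k
S_c = sum(p[c] * ((k == c) - p[k]) * W[k] for k in range(n_classes))

# Sanity check against a finite difference at the first pixel.
eps = 1e-6
x_eps = x0.copy()
x_eps[0] += eps
fd = (softmax(W @ x_eps)[c] - p[c]) / eps
```

The attribution map has the same shape as the input, which is what allows overlaying it on the image.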
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/section2/backpropagation/simonyan_BP.png}
\caption[Attribution map of an image found with gradients back-propagation.]{Attribution map of an image found with gradients back-propagation.\\
Adapted from \cite{simonyanDeepConvolutionalNetworks2013}. Permission to reuse was kindly granted by the authors.}
\label{fig:simonyan_gradients}
\end{figure}
Due to its simplicity, this method is the most commonly used to interpret deep learning networks. Its attribution map is often called a ``saliency map''; however, this term is also used in some articles to refer to any attribution map, which is why we avoid it in this chapter.
This method was modified to derive many similar methods based on gradients computation described in the following paragraphs.
\paragraph{gradient$\odot$input}
This method is the point-wise product of the gradient map described at the beginning of the section and the input. Evaluated in \cite{shrikumarNotJustBlack2017a}, it was presented as an improvement over the gradients method, though the original paper does not give strong arguments about the nature of this improvement.
\paragraph{DeconvNet \& guided back-propagation}
The key difference between this procedure and the standard back-propagation method is the way the gradients are back-propagated through the ReLU layer.
The ReLU layer is a commonly used activation function that sets to 0 the negative input values, and does not affect positive input values. The derivative of this function in layer $l$ is the indicator function $\mathbb{1}_{A^{(l)}>0}$: it outputs 1 (resp. 0) where the feature maps computed during the forward pass were positive (resp. negative).
Springenberg et al.~\cite{springenbergStrivingSimplicityAll2014} proposed to back-propagate the signal differently. Instead of applying the indicator function of the feature map $A^{(l)}$ computed during the forward pass, they directly applied ReLU to the back-propagated values $R^{(l+1)}=\frac{\partial{o_c}}{\partial{A^{(l+1)}}}$, which corresponds to multiplying them by the indicator function $\mathbb{1}_{R^{(l+1)}>0}$. This ``backward deconvnet'' method allows back-propagating only the positive gradients and, according to the authors, results in a reconstructed image showing the part of the input image that most strongly activates this neuron.
The guided back-propagation method (equation~\ref{eq: guided backprop}) combines the standard back-propagation (equation~\ref{eq: standard back-propagation}) with the backward deconvnet (equation~\ref{eq: deconvnet}): when back-propagating gradients through ReLU layers, a value is set to 0 if the corresponding top gradients or bottom data is negative. This adds an additional guidance to the standard back-propagation by preventing backward flow of negative gradients.
\begin{equation}\label{eq: standard back-propagation}
R^{(l)} = \mathbb{1}_{A^{(l)}>0} * R^{(l+1)}
\end{equation}
\begin{equation}\label{eq: deconvnet}
R^{(l)} = \mathbb{1}_{R^{(l+1)}>0} * R^{(l+1)}
\end{equation}
\begin{equation}\label{eq: guided backprop}
R^{(l)} = \mathbb{1}_{A^{(l)}>0} * \mathbb{1}_{R^{(l+1)}>0} * R^{(l+1)}
\end{equation}
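The three rules above reduce to element-wise operations, which can be sketched on toy arrays:

```python
import numpy as np

# The three ReLU backward rules: A_l are the forward activations at
# layer l, R_next is the signal back-propagated from layer l+1.
def standard_bp(A_l, R_next):
    return (A_l > 0) * R_next                   # standard back-propagation

def deconvnet_bp(A_l, R_next):
    return (R_next > 0) * R_next                # backward deconvnet

def guided_bp(A_l, R_next):
    return (A_l > 0) * (R_next > 0) * R_next    # guided back-propagation

A_l = np.array([1.0, -2.0, 3.0, -4.0])
R_next = np.array([0.5, 0.5, -0.5, -0.5])
```

On this example, guided back-propagation keeps only the entries that both rules would keep, illustrating how it prevents the backward flow of negative gradients.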
Any back-propagation procedure can be ``guided'', as it only concerns the way ReLU functions are managed during back-propagation (this is the case for example for guided Grad-CAM).
While it was initially adopted by the community, this method showed severe defects, as discussed later in Section~\ref{sec:limitations}.
\paragraph{CAM \& Grad-CAM}
In this setting, attribution maps are computed at the level of a feature map produced by a convolutional layer, and then upsampled to be overlaid and compared with the input. The first method, class activation maps (CAM) was proposed by Zhou et al.~\cite{zhouLearningDeepFeatures2015}, and can be only applied to CNNs with the following specific architecture:
\begin{enumerate}
\item a series of convolutions associated with activation functions and possibly pooling layers. These convolutions output a feature map $A$ with $N$ channels,
\item a global average pooling that extracts the mean value of each channel of the feature map produced by the convolutions,
\item a single fully-connected layer.
\end{enumerate}
The CAM corresponding to $o_c$ will be the mean of the channels of the feature map produced by the convolutions, weighted by the weights $w_{kc}$ learned in the fully-connected layer
\begin{equation}
S_c = \sum_{k=1}^N w_{kc} * A_k \enspace .
\end{equation}
This map has the same size as $A_k$, which might be smaller than the input if the convolutional part performs downsampling operations (which is very often the case). Then, the map is upsampled to the size of the input to be overlaid on the input.
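The CAM computation and the upsampling step can be sketched on toy shapes (all values are random stand-ins for a trained network, and the $28\times28$ input size is made up):

```python
import numpy as np

# Sketch of CAM: N channels of HxW feature maps and the weights of the
# final fully-connected layer (random stand-ins).
rng = np.random.default_rng(0)
N, H, W_ = 4, 7, 7
A = rng.normal(size=(N, H, W_))          # feature maps after the convolutions
w = rng.normal(size=(N, 3))              # fully-connected weights, 3 classes

c = 0
S_c = np.tensordot(w[:, c], A, axes=1)   # sum_k w_kc * A_k -> (H, W)

# Nearest-neighbour upsampling to an assumed 28x28 input size.
S_up = np.kron(S_c, np.ones((4, 4)))
```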
Selvaraju et al.~\cite{selvarajuGradCAMVisualExplanations2017} proposed an extension of CAM that can be applied to any architecture: Grad-CAM (illustrated on Figure~\ref{fig:selvaraju_gradcam}). As in CAM, the attribution map is a linear combination of the channels of a feature map computed by a convolutional layer. But, in this case, the weights of each channel are computed using gradient back-propagation
\begin{equation}
\alpha_{kc} = \frac{1}{\lvert \mathcal{U} \rvert} \sum_{u \in \mathcal{U}} \frac{\partial {o_c}}{\partial A_{k}(u)} \enspace .
\end{equation}
The final map is then the linear combination of the feature maps weighted by the coefficients. A ReLU activation is then applied to the result to only keep the features that have a positive influence on class $c$
\begin{equation}
S_c = \mathrm{ReLU}\left(\sum_{k=1}^N \alpha_{kc} * A_k\right) \enspace .
\end{equation}
Similarly to CAM, this map is then upsampled to the input size.
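The Grad-CAM weighting can be sketched in the same way, assuming the feature maps $A$ and their gradients $\partial o_c / \partial A$ are available from a forward and a backward pass (random stand-ins below):

```python
import numpy as np

# Sketch of Grad-CAM: A and dA = d o_c / d A would come from a trained
# network; random values stand in for them here.
rng = np.random.default_rng(0)
N, H, W_ = 4, 7, 7
A = rng.normal(size=(N, H, W_))
dA = rng.normal(size=(N, H, W_))

alpha = dA.mean(axis=(1, 2))                           # one weight alpha_kc per channel
S_c = np.maximum(0.0, np.tensordot(alpha, A, axes=1))  # ReLU(sum_k alpha_k A_k)
```

The final ReLU guarantees a non-negative map, keeping only the features with a positive influence on class $c$.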
\begin{figure}
\centering
\includegraphics{figures/section2/backpropagation/selvaraju_gradcam.png}
\caption[Grad-CAM explanations highlighting two different objects in an image.]{Grad-CAM explanations highlighting two different objects in an image. (A) the original image, (B) the explanation based on the ``dog'' node, (C) the explanation based on the ``cat'' node. \\ \textcopyright 2017 IEEE. Reprinted, with permission, from \cite{selvarajuGradCAMVisualExplanations2017}.}
\label{fig:selvaraju_gradcam}
\end{figure}
Grad-CAM can be applied to any feature map produced by a convolution, but in practice the last convolutional layer is very often chosen. The authors argue that this layer is ``the best compromise between high-level semantics and detailed spatial information'' (the latter is lost in fully-connected layers, as the feature maps are flattened).
Because of the upsampling step, CAM and Grad-CAM produce maps containing connected zones, which are more human-friendly than the scattered maps often obtained with gradient back-propagation. However, the smaller the feature maps $A_k$, the blurrier the upsampled attribution maps, leading to a possible loss of interpretability.
\subsubsection{Relevance back-propagation}
\label{subsubsec: Relevance BP}
Instead of back-propagating gradients to the level of the input or of the last convolutional layer, Bach et al.~\cite{bachPixelWiseExplanationsNonLinear2015} proposed to back-propagate the score obtained by a class $c$, which is called the relevance. This score corresponds to $o_c$ after some postprocessing (for example softmax), as its value must be positive if class $c$ was identified in the input. At the end of the back-propagation process, the goal is to find the relevance $R_u$ of each feature $u$ of the input (for example, of each pixel of an image) such that $o_c = \sum_{u \in \mathcal{U}} R_u$.
In their paper, Bach et al.~\cite{bachPixelWiseExplanationsNonLinear2015} take the example of a fully-connected function defined by a matrix of weights $w$ and a bias $b$ at layer $l+1$. The value of a node $v$ in feature map $A^{(l+1)}$ is computed during the forward pass by the following formula:
\begin{equation}
A^{(l+1)}(v) = b + \sum_{u \in \mathcal{U}} w_{uv} A^{(l)}(u)
\end{equation}
During the back-propagation of the relevance, $R^{(l)}(u)$, the value of the relevance at the level of layer $l$, is computed from the values of the relevance $R^{(l+1)}(v)$, which are distributed according to the weights $w$ and the activations $A^{(l)}(u)$ computed during the forward pass:
\begin{equation}
R^{(l)}(u) = \sum_{v\in \mathcal{V}} R^{(l+1)}(v) \frac{A^{(l)}(u) w_{uv}}{\sum\limits_{u' \in \mathcal{U}} A^{(l)}(u') w_{u'v}} \enspace .
\end{equation}
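This redistribution rule can be sketched for a single linear layer without bias, on toy positive values (so the denominator cannot vanish); note that the total relevance is conserved across the layer:

```python
import numpy as np

# Sketch of the LRP redistribution rule for one linear layer (no bias).
rng = np.random.default_rng(0)
A_l = rng.uniform(0.1, 1.0, size=5)          # activations A^(l)(u)
W = rng.uniform(0.1, 1.0, size=(5, 3))       # weights w_uv
R_next = np.array([1.0, 2.0, 3.0])           # relevance R^(l+1)(v)

z = A_l[:, None] * W                          # z_uv = A^(l)(u) w_uv
R_l = (z / z.sum(axis=0)) @ R_next            # relevance R^(l)(u)

# Conservation: R_l.sum() equals R_next.sum().
```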
The main issue of the method comes from the fact that the denominator may become (close to) zero, leading to the explosion of the back-propagated relevance. Moreover, it was shown by~\cite{shrikumarNotJustBlack2017a} that when all activations are piece-wise linear (such as ReLU or leaky ReLU) the layer-wise relevance propagation (LRP) method reproduces the output of gradient$\odot$input, questioning the usefulness of the method.
This is why Samek et al.~\cite{samekEvaluatingVisualizationWhat2017} proposed two variants of the standard LRP method~\cite{bachPixelWiseExplanationsNonLinear2015}. Moreover, they describe the behavior of the back-propagation in layers other than the linear ones (the convolutional layers following the same formula as the linear ones). They illustrated their method with a neural network trained on MNIST (see Figure~\ref{fig:samek_LRP}). To simplify the equations in the following paragraphs, we now denote the weighted activations as $z_{uv} = A^{(l)}(u) w_{uv}$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/backpropagation/samek_LRP.png}
\caption[LRP attribution maps explaining the decision of a neural network trained on MNIST.]{LRP attribution maps explaining the decision of a neural network trained on MNIST.\\
\textcopyright 2017 IEEE. Reprinted, with permission, from \cite{samekEvaluatingVisualizationWhat2017}.}
\label{fig:samek_LRP}
\end{figure}
\paragraph{$\epsilon$-rule}
The $\epsilon$-rule introduces a parameter $\epsilon > 0$, used to avoid numerical instability. Though it avoids the case of a null denominator, this variant breaks the rule of relevance conservation across layers
\begin{equation}
R^{(l)}(u) = \sum_{v \in \mathcal{V}} R^{(l+1)}(v) \frac{z_{uv}}{\sum\limits_{u' \in \mathcal{U}} z_{u'v} + \epsilon \times sign\left(\sum\limits_{u' \in \mathcal{U}} z_{u'v}\right)} \enspace .
\end{equation}
\paragraph{$\beta$-rule}
The $\beta$-rule keeps the conservation of the relevance by treating separately the positive weighted activations $z^+_{uv}$ from the negative ones $z^-_{uv}$
\begin{equation}
R^{(l)}(u) = \sum_{v \in \mathcal{V}} R^{(l+1)}(v) \left((1 + \beta)\frac{z^+_{uv}}{\sum\limits_{u' \in \mathcal{U}} z^+_{u'v}} - \beta \frac{z^-_{uv}}{\sum\limits_{u' \in \mathcal{U}} z^-_{u'v}}\right) \enspace .
\end{equation}
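Both variants can be sketched on a toy linear layer whose weights have mixed signs (all values are made up), which is precisely the situation these rules address:

```python
import numpy as np

# Sketch of the epsilon- and beta-rules on a toy linear layer.
A_l = np.array([0.5, 1.0, 0.8, 0.3, 0.9])        # activations at layer l
W = np.array([[ 1.0, -0.5,  0.8],
              [-0.3,  0.7, -0.2],
              [ 0.6, -0.9,  0.4],
              [-0.8,  0.5, -0.6],
              [ 0.2,  0.3,  0.9]])               # mixed-sign weights (made up)
R_next = np.array([1.0, 2.0, 3.0])               # relevance at layer l+1
z = A_l[:, None] * W                             # z_uv = A^(l)(u) w_uv

def lrp_epsilon(z, R_next, eps=0.01):
    # epsilon-rule: stabilized denominator; conservation only approximate.
    denom = z.sum(axis=0)
    return (z / (denom + eps * np.sign(denom))) @ R_next

def lrp_beta(z, R_next, beta=1.0):
    # beta-rule: positive and negative contributions treated separately;
    # conservation holds exactly.
    zp, zn = np.clip(z, 0, None), np.clip(z, None, 0)
    return ((1 + beta) * zp / zp.sum(axis=0) - beta * zn / zn.sum(axis=0)) @ R_next
```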
Though these two LRP variants improve the numerical stability of the procedure, they require choosing parameter values that may change the patterns in the obtained attribution map.
\paragraph{Deep Taylor decomposition}
Deep Taylor decomposition \citep{montavonExplainingNonlinearClassification2017} was proposed by the same team as the original LRP method and its variants. It is based on similar principles: the value of the score obtained by a class $c$ is back-propagated, but the back-propagation rule is based on first-order Taylor expansions.
The back-propagation from node $v$ at the level of $R^{(l+1)}$ to node $u$ at the level of $R^{(l)}$ can be written
\begin{equation}
R^{(l)}(u) = \sum_{v \in \mathcal{V}} \frac{\partial R^{(l+1)}(v)}{\partial A^{(l)}(u)} \Bigr|_{\substack{\tilde{A}^{(l)}(u^{(v)})}} \left( A^{(l)}(u) - \tilde{A}^{(l)}(u^{(v)}) \right) \enspace .
\end{equation}
This rule involves a root point $\tilde{A}^{(l)}(u^{(v)})$ which is close to $A^{(l)}(u)$ and meets a set of constraints depending on $v$.
\subsection{Perturbation methods}
\label{subsec:perturbations}
Instead of relying on a backward pass (from the output to the input) as in the previous section, perturbation methods rely on the difference between the value of $o_c$ computed with the original input and with a locally perturbed input. This process is less abstract for humans than back-propagation methods, as we can reproduce it ourselves: if the part of the image that is needed to find the correct output is hidden, we are not able to predict correctly either. Moreover, it is model-agnostic and can be applied to any algorithm or deep learning architecture.
The main drawback of these techniques is that the nature of the perturbation is crucial, leading to different attribution maps depending on the perturbation function used. Moreover, Montavon et al.~\cite{montavonMethodsInterpretingUnderstanding2018} suggest that the perturbation rule should keep the perturbed input within the training data distribution. Indeed, if this is not the case, one cannot know whether the network performance dropped because of the location or because of the nature of the perturbation.
\subsubsection{Standard perturbation}
Zeiler and Fergus~\cite{zeilerVisualizingUnderstandingConvolutional2014} proposed the most intuitive method relying on perturbations. This standard perturbation procedure consists in removing information locally in a specific zone of an input $X_0$ and evaluating if it modifies the output node $o_c$. The more the perturbation degrades the task performance, the more crucial this zone is for the network to correctly perform the task. To obtain the final attribution map, the input is perturbed according to all possible locations. Examples of attribution maps obtained with this method are displayed in Figure~\ref{fig:standard_perturbation}.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/perturbation/standard_perturbations.png}
\caption[Attribution maps obtained with standard perturbation.]{Attribution maps obtained with standard perturbation. Here the perturbation is a gray patch covering a specific zone of the input as shown in the left column. The attribution maps (second row) display the probability of the true label: the lower the value, the more important this zone is for the network to correctly identify the label. This kind of perturbation takes the perturbed input out of the training distribution. \\
Reprinted by permission from Springer Nature Customer Service Centre GmbH: Springer Nature, ECCV 2014: Visualizing and Understanding Convolutional Networks, \citep{zeilerVisualizingUnderstandingConvolutional2014}, 2014.}
\label{fig:standard_perturbation}
\end{figure}
As evaluating the impact of the perturbation at each pixel location is computationally expensive, one can choose not to perturb the image at each pixel location, but to skip some of them (i.e. scan the image with a stride $>$ 1). This will lead to a smaller attribution map, which needs to be upsampled to be compared to the original input (in the same way as CAM \& Grad-CAM).
However, in addition to the problem of the nature of the perturbation previously mentioned, this method presents two drawbacks:
\begin{itemize}
\item the attribution maps depend on the size of the perturbation: if the perturbation is too large, it is not local anymore; if it is too small, it is not meaningful anymore (a single-pixel perturbation cannot cover a pattern),
\item input pixels are considered independently from each other: if the result of a network relies on a combination of pixels that cannot all be covered at the same time by the perturbation, their influence may not be detected.
\end{itemize}
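The basic procedure can be sketched as follows, with a hypothetical scoring function standing in for a trained network: its score only depends on a $4\times4$ region of interest, which the occlusion should reveal.

```python
import numpy as np

# Sketch of standard perturbation (occlusion sensitivity) on a toy
# 1-channel image. `predict` is a made-up stand-in for a network whose
# score depends only on the region rows 8:12, cols 8:12.
def predict(img):
    return float(img[8:12, 8:12].sum())

img = np.ones((16, 16))
patch, stride = 4, 4
h = (16 - patch) // stride + 1
base = predict(img)
attribution = np.zeros((h, h))

for i in range(h):
    for j in range(h):
        perturbed = img.copy()
        perturbed[i*stride:i*stride+patch, j*stride:j*stride+patch] = 0.0  # "gray" patch
        attribution[i, j] = base - predict(perturbed)   # drop in score

# The largest drop occurs where the patch covers the region the
# "network" relies on (cell (2, 2) here).
```

With a stride larger than one, as here, the resulting map is smaller than the input and must be upsampled before being overlaid on it.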
\subsubsection{Optimized perturbation}
To deal with these two issues,
Fong and Vedaldi~\cite{fongInterpretableExplanationsBlack2017} proposed to optimize a perturbation mask covering the whole input. This perturbation mask $m$ has the same size as the input $X_0$. Its application is associated with a perturbation function $\Phi$ and leads to the computation of the perturbed input $X_0^m$. Its value at a coordinate $u$ reflects the quantity of information remaining in the perturbed image:
\begin{itemize}
\item if $m(u) = 1$, the pixel at location $u$ is not perturbed and has the same value in the perturbed input as in the original input ($X_0^m(u)=X_0(u)$).
\item if $m(u) = 0$ the pixel at location $u$ is fully perturbed and the value in the perturbed image is the one given by the perturbation function only ($X_0^m(u)=\Phi(X_0)(u)$).
\end{itemize}
This principle can be extended to any value between 0 and 1 with a linear interpolation
\begin{equation}
X_0^m(u)= m(u)X_0(u) + (1-m(u))\Phi(X_0)(u) \enspace .
\end{equation}
Then, the goal is to optimize this mask $m$ according to three criteria:
\begin{enumerate}
\item the perturbed input $X_0^m$ should lead to the lowest performance possible,
\item the mask $m$ should perturb the minimum number of pixels possible, and
\item the mask $m$ should produce connected zones (i.e. avoid the scattered aspect of gradient maps).
\end{enumerate}
These three criteria are optimized using the following loss:
\begin{equation}
f(X_0^m) + \lambda_1 \lVert 1 - m \rVert^{\beta_1}_{\beta_1} + \lambda_2 \lVert \nabla m \rVert^{\beta_2}_{\beta_2}
\end{equation}
with $f$ a function that decreases as the performance of the network decreases.
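The interpolation and the three-term loss can be sketched as follows; the perturbation function $\Phi$ (here, replacement by the image mean) and all hyperparameter values are made up:

```python
import numpy as np

# Sketch of the mask interpolation and of the three-term loss.
def loss(m, X0, f, lam1=1.0, lam2=1.0, b1=1.0, b2=1.0):
    phi = np.full_like(X0, X0.mean())                # Phi(X0), a made-up choice
    Xm = m * X0 + (1 - m) * phi                      # interpolated perturbed input
    reg_area = (np.abs(1 - m) ** b1).sum()           # perturb few pixels
    reg_tv = ((np.abs(np.diff(m, axis=0)) ** b2).sum()
              + (np.abs(np.diff(m, axis=1)) ** b2).sum())  # connected zones
    return f(Xm) + lam1 * reg_area + lam2 * reg_tv

X0 = np.arange(16.0).reshape(4, 4)
f = lambda X: float(X.sum())                         # toy "performance" term

no_perturbation = loss(np.ones((4, 4)), X0, f)       # mask of ones: only f(X0) remains
full_perturbation = loss(np.zeros((4, 4)), X0, f)    # area regularizer dominates
```

In practice the mask is optimized by gradient descent on this loss, which the sketch omits.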
However, the method also presents two drawbacks:
\begin{itemize}
\item The values of hyperparameters must be chosen ($\lambda_1$, $\lambda_2$, $\beta_1$, $\beta_2$) to find a balance between the three optimization criteria of the mask,
\item The mask may not highlight the most important pixels of the input but instead create artifacts in the perturbed image to artificially degrade the performance of the network (see Figure~\ref{fig:optimized_artifacts}).
\end{itemize}
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/perturbation/artifacts.png}
\caption[Example of artifacts created by optimized perturbation method.]{In this example, the network learned to classify objects in natural images. Instead of masking the maypole at the center of the image, it creates artifacts in the sky to degrade the performance of the network. \\
\textcopyright 2017 IEEE. Reprinted, with permission, from \citep{fongInterpretableExplanationsBlack2017}.}
\label{fig:optimized_artifacts}
\end{figure}
\subsection{Distillation}
\label{subsec:distillation}
Approaches described in this section aim at developing a transparent method to reproduce the behavior of a black-box one. It is then possible to apply simple interpretability methods (such as weight visualization) to the transparent method instead of considering the black box.
\subsubsection{Local approximation}
\paragraph{LIME}
Ribeiro et al.~\cite{ribeiroWhyShouldTrust2016} proposed Local Interpretable Model-agnostic Explanations (LIME). This approach is:
\begin{itemize}
\item \textbf{local}, as the explanation is valid in the vicinity of a specific input $X_0$,
\item \textbf{interpretable}, as an interpretable model $g$ (linear model, decision tree...) is computed to reproduce the behavior of $f$ on $X_0$, and
\item \textbf{model-agnostic}, as it does not depend on the algorithm trained.
\end{itemize}
This last property comes from the fact that the vicinity of $X_0$ is explored by sampling perturbed versions of $X_0$. Thus, LIME shares the advantage (model-agnostic) and the drawback (dependence on the perturbation function) of the perturbation methods presented in Section~\ref{subsec:perturbations}. Moreover, the authors specify that, in the case of images, they group pixels of the input in $d$ super-pixels (contiguous patches of similar pixels).
The loss to be minimized to find $g$ specific to the input $X_0$ is the following:
\begin{equation}
\mathcal{L}(f, g, \pi_{X_0}) + \Omega(g) \enspace ,
\end{equation}
where $\pi_{X_0}$ is a function that defines the locality of $X_0$ (i.e. $\pi_{X_0}(X)$ increases as $X$ becomes closer to $X_0$),
$\mathcal{L}$ measures how unfaithful $g$ is in approximating $f$ according to $\pi_{X_0}$, and
$\Omega$ is a measure of the complexity of $g$.
Ribeiro et al.~\cite{ribeiroWhyShouldTrust2016} limited their search to sparse linear models, however other assumptions could be made on $g$.
$g$ is not applied to the input directly but to a binary mask $m \in \{0, 1\}^d$ that transforms the input $X$ in $X^m$ and is applied according to a set of $d$ super-pixels. For each super-pixel $u$:
\begin{enumerate}
\item if $m(u) = 1$ the super-pixel $u$ is not perturbed,
\item if $m(u) = 0$ the super-pixel $u$ is perturbed (i.e. it is grayed).
\end{enumerate}
They used $\pi_{X_0}(X) = \exp{\left(-\frac{(X - X_0)^2}{\sigma^2}\right)}$ and $\mathcal{L}(f, g, \pi_{X_0}) = \sum_{m} \pi_{X_0}(X_0^m) * (f(X_0^m) - g(m))^2$. Finally, $\Omega(g)$ is the number of non-zero weights of $g$, and its value is limited to $K$. This way they select the $K$ super-pixels in $X_0$ that best explain the algorithm result $f(X_0)$.
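These choices can be sketched end-to-end on a toy black box; the model $f$, the kernel width and the number of sampled masks are made up, and the linear model $g$ is fitted by weighted least squares without the sparsity constraint:

```python
import numpy as np

# End-to-end sketch of LIME on a toy black box.
rng = np.random.default_rng(0)
d = 6
X0 = rng.uniform(1.0, 2.0, size=d)              # one value per super-pixel
f = lambda X: float(X[0] + 2 * X[2])            # toy black box: features 0 and 2 matter

masks = rng.integers(0, 2, size=(200, d)).astype(float)
ys, ws = [], []
for m in masks:
    Xm = m * X0                                  # m(u) = 0 -> super-pixel grayed to 0
    ys.append(f(Xm))
    ws.append(np.exp(-np.sum((Xm - X0) ** 2)))   # pi_{X0} with sigma = 1
ys, ws = np.array(ys), np.array(ws)

# Weighted least squares via the sqrt-weight trick: g(m) = m . coef.
sw = np.sqrt(ws)[:, None]
coef, *_ = np.linalg.lstsq(masks * sw, ys * sw[:, 0], rcond=None)
# coef recovers the contribution of each super-pixel to f(X0).
```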
\paragraph{SHAP}
Lundberg and Lee~\cite{lundbergUnifiedApproachInterpreting2017a} proposed SHAP (SHapley Additive exPlanations), a theoretical framework that encompasses several existing interpretability methods, including LIME. In this framework, each of the $N$ features (again, super-pixels for images) is associated with a coefficient $\phi$ that denotes its contribution to the result. The contribution of each feature is evaluated by perturbing the input $X_0$ with a binary mask $m$ (see the paragraph on LIME). Then the goal is to find an interpretable model $g$ specific to $X_0$, such that
\begin{equation}
g(m) = \phi_0 + \sum_{i=1}^N{\phi_i m_i}
\end{equation}
with $\phi_0$ being the output when the input is fully perturbed.
The authors look for an expression of $\phi$ that respects three properties:
\begin{itemize}
\item \textbf{Local accuracy}\quad $g$ and $f$ should match in the vicinity of $X_0$: $g(m) = f(X_0^m)$.
\item \textbf{Missingness}\quad Perturbed features should not contribute to the result: $m_i = 0 \rightarrow \phi_i = 0$.
\item \textbf{Consistency}\quad Let's denote as $m \setminus i$ the mask $m$ in which $m_i = 0$. For any two models $f^1$ and $f^2$, if for all $m \in \{0, 1\}^N$
$f^1(X_0^{m}) - f^1(X_0^{m \setminus i}) \ge f^2(X_0^{m}) - f^2(X_0^{m \setminus i})$,
then $\phi^1_i \ge \phi^2_i$ ($\phi^k$ are the coefficients associated with model $f^k$).
\end{itemize}
Lundberg and Lee~\cite{lundbergUnifiedApproachInterpreting2017a} show that only one expression is possible for the coefficients $\phi$, which can be approximated with different algorithms:
\begin{equation}
\phi_i = \sum_{m \in \{0, 1\}^N} \frac{|m|! (N - |m| - 1)!}{N!} \left[ f(X_0^{m}) - f(X_0^{m \setminus i}) \right] \enspace .
\end{equation}
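For a small $N$, the coefficients can be computed exactly; the sketch below uses the classic Shapley weighting, with the sum running over the masks $m$ that include feature $i$, on a made-up toy model:

```python
import itertools
from math import factorial

import numpy as np

# Exact Shapley coefficients for a small N on a toy black box.
N = 3
X0 = np.array([1.0, 2.0, 3.0])
f = lambda X: float(X.sum() + X[0] * X[1])      # interaction between features 0 and 1

def f_masked(m):
    return f(np.array(m) * X0)                   # masked-off features grayed to 0

def shapley(i):
    phi = 0.0
    for m in itertools.product([0, 1], repeat=N):
        if m[i] == 0:
            continue
        m_minus = list(m)
        m_minus[i] = 0                           # the mask m \ i
        s = sum(m)                               # |m|, feature i included
        weight = factorial(s - 1) * factorial(N - s) / factorial(N)
        phi += weight * (f_masked(m) - f_masked(m_minus))
    return phi
```

On this toy model the interaction term is split evenly between features 0 and 1, and the coefficients sum to the difference between the unperturbed and the fully perturbed outputs.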
\subsubsection{Model translation}
Contrary to local approximation, which provides an explanation according to a specific input $X_0$, model translation consists in finding a transparent model that reproduces the behavior of the black-box model on the whole data set.
As it was rarely employed in neuroimaging frameworks, this section only discusses the distillation to decision trees proposed in \cite{frosstDistillingNeuralNetwork2017} (preprint). For a more extensive review of model translation methods, we refer the reader to \cite{xieExplainableDeepLearning2020}.
After training a machine learning system $f$, a binary decision tree $g$ is trained to reproduce its behavior. This tree is trained on a set of inputs $X$, and each inner node $i$ learns a matrix of weights $w_i$ and a bias $b_i$. The forward pass of $X$ in node $i$ of the tree is as follows: if $\mathrm{sigmoid}(w_iX + b_i) > 0.5$, then the right leaf node is chosen, else the left leaf node is chosen. After the end of the decision tree's training, it is possible to visualize at which level which classes were separated, to better understand which classes are similar for the network. It is also possible to visualize the matrices of weights learned by each inner node to identify patterns learned at each class separation. An illustration of this distillation process, on the MNIST data set (hand-written digits), can be found in Figure~\ref{fig: frosst_tree}.
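The routing rule of a single inner node can be sketched as follows (weights and bias are made up):

```python
import numpy as np

# Routing rule of one inner node of the soft decision tree.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w_i = np.array([0.5, -1.0, 2.0])   # made-up node weights
b_i = -0.1                          # made-up node bias

def route(X):
    # Right child if sigmoid(w_i . X + b_i) > 0.5, left child otherwise.
    return "right" if sigmoid(w_i @ X + b_i) > 0.5 else "left"
```

Since the sigmoid crosses $0.5$ at zero, each node effectively tests the sign of $w_iX + b_i$, i.e. on which side of a learned hyperplane the input lies.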
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/distillation/frosst_tree.png}
\caption[Visualization of a soft decision tree trained on MNIST.]{Visualization of a soft decision tree trained on MNIST. \\
Adapted from \cite{frosstDistillingNeuralNetwork2017}. Permission to reuse was kindly granted by the authors.}
\label{fig: frosst_tree}
\end{figure}
\subsection{Intrinsic}
\label{subsec:intrisinc}
Contrary to the previous sections, in which interpretability methods could be applied to (almost) any network after the end of the training procedure, the following methods require designing the framework before the training phase, as the interpretability components and the network are trained simultaneously. In the papers presented in this section~\citep{xuShowAttendTell2016, wangResidualAttentionNetwork2017a, baMultipleObjectRecognition2015}, the advantages of these methods are twofold: they improve both the interpretability and the performance of the network. However, the drawback is that they have to be implemented before training the network, thus they cannot be applied in all cases.
\subsubsection{Attention modules}
Attention is a concept in machine learning that consists in producing an attribution map from a feature map and using it to improve learning of another task (such as classification, regression, reconstruction...) by making the algorithm focus on the part of the feature map highlighted by the attribution map.
In the deep learning domain, we take as reference \cite{xuShowAttendTell2016}, in which a network is trained to produce a descriptive caption of natural images. This network is composed of three parts:
\begin{enumerate}
\item a convolutional encoder that reduces the dimension of the input image to the size of the feature maps $A$,
\item an attention module that generates an attribution map $S_t$ from $A$ and the previous hidden state of the long short-term memory (LSTM) network,
\item an LSTM decoder that computes the caption from its previous hidden state, the previous word generated, $A$ and $S_t$.
\end{enumerate}
As $S_t$ is of the same size as $A$ (smaller than the input), the result is then upsampled to be overlaid on the input image. As one attribution map is generated per word generated by the LSTM, it is possible to know where the network focused when generating each word of the caption (see Figure~\ref{fig:attention_lstm}).
In this example, the attribution map is given to an LSTM, which uses it to generate a context vector $z_t$ by applying a function $\phi$ to $A$ and $S_t$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/intrinsic/LSTM_caption.png}
\caption[Attribution maps obtained with attention modules.]{Examples of images correctly captioned by the network. The focus of the attribution map is highlighted in white and the associated word in the caption is underlined. \\
Adapted from \citep{xuShowAttendTell2016}. Permission to reuse was kindly granted by the authors.}
\label{fig:attention_lstm}
\end{figure}
More generally in CNNs, the point-wise product of the attribution map $S$ and the feature map $A$ is used to generate the refined feature map $A'$, which is given to the next layers of the network. Adding an attention module requires making new choices for the architecture of the model: its location (on lower or higher feature maps) may impact the performance of the network. Moreover, it is possible to stack several attention modules along the network, as was done in \cite{wangResidualAttentionNetwork2017a}.
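This refinement step reduces to a point-wise product, which can be sketched on toy arrays (in practice $S$ is produced by a small trainable branch of the network):

```python
import numpy as np

# Sketch of the refinement step of an attention module: A' = S * A.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 7, 7))               # C x H x W feature map (toy values)
S = rng.uniform(0.0, 1.0, size=(1, 7, 7))    # attention map, broadcast over channels
A_refined = S * A                             # point-wise product
```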
\subsubsection{Modular Transparency}
Contrary to the studies of the previous sections, the frameworks of this category are composed of several networks (modules) that interact with each other. Each module is a black box, but the transparency of its function, or the nature of the interactions between modules, allows understanding how the system works globally and extracting interpretability metrics from it.
A large variety of setups can be designed following this principle, and no more detailed general rule can be drawn for this section. We will take the example described in~\cite{baMultipleObjectRecognition2015}, which was later adapted to neuroimaging data (see Section~\ref{sec:application_intrinsic}), to illustrate this section, though it may not be representative of all the aspects of modular transparency.
Ba et al.~\cite{baMultipleObjectRecognition2015} proposed a framework (illustrated in Figure~\ref{fig: ba_modular}) to perform the analysis of an image in the same way as a human would, by looking at successive relevant locations in the image. To perform this task, they assembled a set of networks that interact with each other:
\begin{itemize}
\item \textbf{Glimpse network}\quad This network takes as input a patch of the input image and the location of its center, and outputs a context vector that will be processed by the recurrent network. This vector thus conveys information on the main features of a patch and on its location.
\item \textbf{Recurrent network}\quad This network takes as input the successive context vectors and updates its hidden state, which will be used to find the next location to look at and to perform the learned task at the global scale (in the original paper, a classification of the whole input image).
\item \textbf{Emission network}\quad This network takes as input the current state of the recurrent network and outputs the next location to look at. This will allow computing the patch that will feed the glimpse network.
\item \textbf{Context network}\quad This network takes the whole image as input at the beginning of the task and outputs the first context vector to initialize the recurrent network.
\item \textbf{Classification network}\quad This network takes as input the current state of the recurrent network and outputs a prediction for the class label.
\end{itemize}
The global framework can be seen as interpretable as it is possible to review the successive processed locations.
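The interaction between these modules can be sketched as a simple loop; in this heavily simplified NumPy toy, every network is replaced by a hypothetical stand-in function, and the context network is replaced by a fixed initial location.

```python
import numpy as np

rng = np.random.default_rng(0)

def glimpse_network(patch, loc):
    # Context vector: a few patch features concatenated with the location.
    return np.concatenate([patch.ravel()[:8], loc])

def emission_network(hidden):
    # Next location to look at, squashed into image coordinates [0, 1).
    return np.abs(np.tanh(hidden[:2]))

def classification_network(hidden):
    return int(hidden.sum() > 0)  # toy binary prediction

image = rng.random((32, 32))
hidden = np.zeros(10)             # state of the "recurrent network"
loc = np.array([0.5, 0.5])        # initial location (a context network
                                  # would normally provide it)
trajectory = []
for _ in range(5):
    r, c = (loc * 24).astype(int)
    patch = image[r:r + 8, c:c + 8]            # glimpse at current location
    ctx = glimpse_network(patch, loc)
    hidden = np.tanh(hidden + 0.1 * ctx[:10])  # recurrent state update
    loc = emission_network(hidden)             # where to look next
    trajectory.append(loc.copy())
prediction = classification_network(hidden)
```

Here `trajectory` is precisely the interpretable output: the sequence of locations the system chose to attend to.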
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/intrinsic/ba_modular.png}
\caption[Framework with modular transparency browsing an image to compute the output at the global scale.]{Framework with modular transparency browsing an image to compute the output at the global scale. \\
Adapted from \citep{baMultipleObjectRecognition2015}. Permission to reuse was kindly granted by the authors.}
\label{fig: ba_modular}
\end{figure}
\subsection{Interpretability metrics}
\label{sec:evaluation_metrics}
To evaluate the reliability of the methods presented in the previous sections, one cannot only rely on qualitative evaluation. This is why interpretability metrics that evaluate attribution maps were proposed.
These metrics may evaluate different properties of attribution maps.
\begin{itemize}
\item \textbf{Fidelity} evaluates if the zones highlighted by the map influence the decision of the network.
\item \textbf{Sensitivity} evaluates how the attribution map changes according to small changes in the input $X_0$.
\item \textbf{Continuity} evaluates if two close data points lead to similar attribution maps.
\end{itemize}
In the following, $\Gamma$ is an interpretability method computing an attribution map $S$ from a black-box network $f$ and an input $X_0$.
\subsubsection{(In)fidelity}
Yeh et al.~\cite{yehFidelitySensitivityExplanations2019} proposed a measure of the infidelity of $\Gamma$ based on perturbations applied according to a vector $m$ of the same shape as the attribution map $S$. The explanation is unfaithful if perturbations applied to zones highlighted by $S$ in $X_0$ lead to negligible changes in $f(X_0^m)$ or, on the contrary, if perturbations applied to zones not highlighted by $S$ in $X_0$ lead to significant changes in $f(X_0^m)$. The associated formula is
\begin{equation}
\text{INFD}(\Gamma, f, X_0) = \mathbb{E}_{m} \left[ \left( \sum_{i}\sum_{j}m_{ij} \Gamma(f, X_0)_{ij} - (f(X_0) - f(X_0^m)) \right)^2 \right] \enspace .
\end{equation}
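A minimal Monte-Carlo sketch of this measure is given below; following Yeh et al., the perturbed input is taken as $X_0^m = X_0 - m$, while the function names and the toy linear model are purely illustrative.

```python
import numpy as np

def infidelity(attr, f, x0, perturbations):
    """Monte-Carlo estimate of infidelity (after Yeh et al., 2019).

    attr: attribution map Gamma(f, x0), same shape as x0.
    perturbations: iterable of perturbation vectors m (same shape as x0);
    the perturbed input is taken as x0 - m.
    """
    vals = []
    for m in perturbations:
        dot = np.sum(m * attr)      # effect of m predicted by the attribution
        drop = f(x0) - f(x0 - m)    # effect of m actually observed
        vals.append((dot - drop) ** 2)
    return np.mean(vals)

# Toy check: for a linear model, the gradient is a perfectly faithful
# explanation, so the infidelity vanishes (up to floating-point error).
w = np.array([1.0, -2.0, 3.0])
f = lambda x: float(w @ x)
x0 = np.array([0.5, 0.5, 0.5])
attr = w                                     # gradient of the linear model
ms = [np.eye(3)[i] * 0.1 for i in range(3)]
assert infidelity(attr, f, x0, ms) < 1e-12
```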
\subsubsection{Sensitivity}
Yeh et al.~\cite{yehFidelitySensitivityExplanations2019} also gave a measure of sensitivity. As suggested by the definition, it relies on the construction of attribution maps for inputs $\tilde{X_0}$ similar to $X_0$. As changes are small, sensitivity depends on a scalar $\epsilon$ set by the user, which corresponds to the maximum difference allowed between $X_0$ and $\tilde{X_0}$. Sensitivity then corresponds to the following formula:
\begin{equation}
\text{SENS}_{\text{max}}(\Gamma, f, X_0, \epsilon) = \max_{\lVert \tilde{X_0} - X_0 \rVert \le \epsilon} \lVert \Gamma(f, \tilde{X_0}) - \Gamma(f, X_0) \rVert \enspace .
\end{equation}
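As the maximum cannot be computed exactly, it is usually estimated by sampling inputs in the $\epsilon$-ball around $X_0$; the sketch below is one possible (illustrative) Monte-Carlo estimator.

```python
import numpy as np

def max_sensitivity(explain, f, x0, eps, n_samples=50, seed=0):
    """Monte-Carlo estimate of SENS_max: largest change in the attribution
    map over random inputs within an eps-ball around x0."""
    rng = np.random.default_rng(seed)
    base = explain(f, x0)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.uniform(-1, 1, size=x0.shape)
        d *= eps / max(np.linalg.norm(d), 1e-12)   # stay within the eps-ball
        worst = max(worst, np.linalg.norm(explain(f, x0 + d) - base))
    return worst

# For a linear model, the gradient explanation is constant over the input
# space, so its sensitivity is exactly zero.
w = np.array([1.0, -2.0])
f = lambda x: float(w @ x)
grad_explain = lambda f, x: w
print(max_sensitivity(grad_explain, f, np.zeros(2), eps=0.1))  # → 0.0
```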
\subsubsection{Continuity}
Continuity is very similar to sensitivity, except that it compares different data points belonging to the input domain $\mathcal{X}$, whereas sensitivity may generate similar inputs with a perturbation method.
This measure was introduced in \cite{montavonMethodsInterpretingUnderstanding2018} and can be computed using the following formula:
\begin{equation}
\text{CONT}(\Gamma, f, \mathcal{X}) = \max_{X_1, X_2 \in \mathcal{X}~\& ~X_1 \neq X_2} \frac{\lVert \Gamma(f, X_1) - \Gamma(f, X_2) \rVert_1}{\lVert X_1 - X_2 \rVert_2} \enspace .
\end{equation}
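A direct (illustrative) implementation of this measure over a finite data set:

```python
import numpy as np

def continuity(explain, f, X):
    """Largest ratio between the attribution-map change (L1 norm) and the
    input change (L2 norm) over all pairs of distinct points of X."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            num = np.abs(explain(f, X[i]) - explain(f, X[j])).sum()
            den = np.linalg.norm(X[i] - X[j])
            worst = max(worst, num / den)
    return worst

# Toy check with an identity "explanation": the measure reduces to the
# largest L1/L2 ratio of the input differences, here sqrt(2).
X = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])]
f = lambda x: float(x.sum())
identity_explain = lambda f, x: x
assert np.isclose(continuity(identity_explain, f, X), np.sqrt(2))
```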
\vspace{1cm}
As these metrics rely on perturbation, they are also influenced by the nature of the perturbation and may lead to different results, which is a major issue (see Section~\ref{sec:limitations}).
Other metrics were also proposed that depend on the task learned by the network: for example, in the case of classification, statistical tests can be conducted between the saliency maps of different classes to assess whether they differ according to the class they explain.
\section{Application of interpretability methods to neuroimaging data}
\label{sec:section3}
In this section, we provide a non-exhaustive review of applications of interpretability methods to neuroimaging data. In most cases, the focus of the articles is prediction/classification rather than the interpretability method itself, which is just seen as a tool to analyze the results. Thus, authors usually do not motivate their choice of an interpretability method. Another key consideration here is the spatial registration of brain images, which places brain regions at roughly the same position across subjects. This technique is of paramount importance, as attribution maps computed for registered images can then be averaged or used to automatically determine the most important brain areas, which would not be possible with unaligned images.
All the studies presented in this section are summarized in Table~\ref{tab:section3}.
This section ends with the presentation of benchmarks conducted in the literature to compare different interpretability methods in the context of brain disorders.
\input{studies_table_wo_preprocessing}
\subsection{Weight visualization applied to neuroimaging}
\label{sec:application_weights}
As the focus of this chapter is on non-transparent models, such as deep learning ones, weight visualization was only rarely found. However, this was the method chosen by
Cecotti and Gr\"{a}ser~\cite{cecottiConvolutionalNeuralNetworks2011}, who developed a CNN architecture adapted to weight visualization to detect P300 signals in electroencephalograms (EEG). The input of this network is a matrix with rows corresponding to the 64 electrodes and columns to 78 time points. The two first layers of the networks are convolutions with rectangular filters: the first filters (size 1$\times$64) combines the electrodes, whereas the second ones (13$\times$1) find time patterns. Then, it is possible to retrieve a coefficient per electrode by summing the weights associated with this electrode across the different filters, and to visualize the results in the electroencephalogram space as show in Figure~\ref{fig:cecotti_weights}.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.5\textwidth]{figures/section3/weights/cecotti_weights.png}
\caption[Relative importance of the electrodes for signal detection in EEG using CNN weight visualization]{Relative importance of the electrodes for signal detection in EEG using two different architectures (CNN-1 and CNN-3) and two subjects (A and B) using CNN weight visualization. Dark values correspond to weights with a high absolute value while white values correspond to weights close to 0.\\
\textcopyright 2011 IEEE. Reprinted, with permission, from \citep{cecottiConvolutionalNeuralNetworks2011}.}
\label{fig:cecotti_weights}
\end{figure}
\subsection{Feature map visualization applied to neuroimaging}
\label{sec:application_FM}
Contrary to the limited application of weight visualization, there is an extensive literature about leveraging individual feature maps and latent spaces to better understand how models work. This goes from the visualization of these maps or their projections \citep{ohClassificationVisualizationAlzheimer2019, abrolDeepResidualLearning2020, biffiExplainableAnatomicalShape2020}, to the analysis of neuron behavior \citep{martinez-murciaStudyingManifoldStructure2020, lemingEnsembleDeepLearning2020}, through sampling in latent spaces \citep{biffiExplainableAnatomicalShape2020}.
Oh et al.~\cite{ohClassificationVisualizationAlzheimer2019} displayed the feature maps associated with the convolutional layers of CNNs trained for various Alzheimer's disease status classification tasks (Figure~\ref{fig: oh_FM}). In the first two layers, the extracted features were similar to white matter, cerebrospinal fluid and skull segmentations, while the last layer showcased sparse, global and nearly binary patterns. They used this example to emphasize the advantage of using CNNs to extract very abstract and complex features rather than using custom algorithms for feature extraction~\cite{ohClassificationVisualizationAlzheimer2019}.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.8\textwidth]{figures/section3/feature_maps/oh_FM.png}
\caption[Representation of a selection of feature maps.]{Representation of a selection of feature maps (outputs of 4 filters on 10 for each layer) obtained for a single individual. \\
Adapted from \citep{ohClassificationVisualizationAlzheimer2019} (CC BY 4.0).}
\label{fig: oh_FM}
\end{figure}
Another way to visualize a feature map is to project it in a two or three-dimensional space to understand how it is positioned with respect to other feature maps. Abrol et al.~\cite{abrolDeepResidualLearning2020} projected the features obtained after the first dense layer of a ResNet architecture onto a two-dimensional space using the classical t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction technique. For the classification task of Alzheimer's disease statuses, they observed that the projections were correctly ordered according to the disease severity, supporting the correctness of the model~\cite{abrolDeepResidualLearning2020}. They partitioned these projections into three groups: Far-AD (more extreme Alzheimer's Disease patients), Far-CN (more extreme Cognitively Normal participants) and Fused (a set of images at the intersection of AD and CN groups). Using a t-test, they were able to detect and highlight voxels presenting significant differences between groups.
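Such a projection of internal features can be sketched as follows; to keep the sketch dependency-free, a plain PCA projection is used here instead of t-SNE (available, e.g., as sklearn.manifold.TSNE), and the feature matrix is synthetic.

```python
import numpy as np

def project_2d(features):
    """Project high-dimensional feature vectors to 2-D with PCA.

    Abrol et al. used t-SNE; PCA is substituted here only to keep the
    sketch free of external dependencies.
    """
    X = features - features.mean(axis=0)
    # Principal directions from the SVD of the centered feature matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 128))   # e.g. outputs of a dense layer
emb = project_2d(feats)
assert emb.shape == (100, 2)   # one 2-D point per subject, ready to plot
```

Groups such as Far-AD, Far-CN and Fused can then be defined by partitioning `emb`.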
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.95\textwidth]{figures/section3/feature_maps/abrol_FM.png}
\caption[Difference in neuroimaging space between groups defined thanks to t-SNE projection.]{Difference in neuroimaging space between groups defined thanks to t-SNE projection. Voxels showing significant differences post false discovery rate (FDR) correction (p \textless 0.05) are highlighted. \\
Reprinted from Journal of Neuroscience Methods, 339, \citep{abrolDeepResidualLearning2020}, 2020, with permission from Elsevier.}
\label{fig: abrol_FM}
\end{figure}
Biffi et al.~\cite{biffiExplainableAnatomicalShape2020} not only used feature map visualization, but also sampled the feature space. Indeed, they trained a ladder variational autoencoder framework to learn hierarchical latent representations of 3D hippocampal segmentations of control subjects and Alzheimer’s disease patients. A multi-layer perceptron was jointly trained on top of the highest two-dimensional latent space to classify anatomical shapes. While lower spaces needed a dimensionality reduction technique (i.e. t-SNE), the highest latent space could directly be visualized, as well as the anatomical variability it captured in the initial input space, by leveraging the generative process of the model. This sampling enabled an easy visualization and quantification of the anatomical differences between each class.
Finally, it may be very informative to better understand the behavior of neurons and what they encode. After training deep convolutional autoencoders to reconstruct MR images, segmented gray matter maps and white matter maps, Martinez-Murcia et al.~\cite{martinez-murciaStudyingManifoldStructure2020} computed correlations between each individual hidden neuron value and clinical information (e.g. age, mini-mental state examination), which allowed them to determine to what extent this information was encoded in the latent space. In this way, they determined which clinical data were the most strongly associated with the latent representations.
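This type of analysis can be sketched as follows; the latent matrix and clinical variable are synthetic, and the computation is a plain Pearson correlation between each neuron and each clinical variable.

```python
import numpy as np

def neuron_clinical_correlations(Z, clinical):
    """Pearson correlation between each latent neuron (columns of Z,
    n_subjects x n_neurons) and each clinical variable (columns of
    `clinical`, n_subjects x n_vars)."""
    Zc = (Z - Z.mean(0)) / Z.std(0)
    Cc = (clinical - clinical.mean(0)) / clinical.std(0)
    return Zc.T @ Cc / len(Z)      # n_neurons x n_vars correlation matrix

rng = np.random.default_rng(0)
age = rng.uniform(60, 90, size=200)                 # synthetic clinical data
Z = np.column_stack([age + rng.normal(0, 1, 200),   # neuron encoding age
                     rng.normal(0, 1, 200)])        # unrelated neuron
corr = neuron_clinical_correlations(Z, age[:, None])
# corr[0, 0] is close to 1 (age neuron), corr[1, 0] close to 0.
```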
Using a collection of nine different MRI data sets, Leming et al.~\cite{lemingEnsembleDeepLearning2020} trained CNNs for various classification tasks (autism vs typically developing, male vs female and task vs rest). They computed a diversity coefficient for each filter of the second layer based on its output feature map. They counted how many different data sets maximally activated each value of this feature map: if the values were mainly activated by one source of data, the coefficient would be close to 0, whereas if they were activated by all data sets, it would be close to 1. This allowed assessing the stratification of the layer, i.e. understanding whether a given filter was mostly maximally activated by one phenotype or by a diverse population. They found that a few filters were only maximally activated by images from a single MRI data set, and that the diversity coefficient was not normally distributed across filters, having generally two peaks at the beginning and at the end of the spectrum, respectively exhibiting the stratified and strongly diverse distributions of the filters.
\subsection{Back-propagation methods applied to neuroimaging}
\label{sec:application_BP}
Back-propagation methods are the most popular methods to interpret models, and a wide range of these algorithms have been used to study brain disorders: standard and guided back-propagation \citep{huDeepLearningBasedClassification2021, ohClassificationVisualizationAlzheimer2019, riekeVisualizingConvolutionalNetworks2018, eitelTestingRobustnessAttribution2019, bohleLayerwiseRelevancePropagation2019}, gradient$\odot$input \citep{eitelUncoveringConvolutionalNeural2019, eitelTestingRobustnessAttribution2019, dyrbaComparisonCNNVisualization2020}, Grad-CAM \citep{burdujaAccurateEfficientIntracranial2020, dyrbaComparisonCNNVisualization2020}, guided Grad-CAM \citep{tangInterpretableClassificationAlzheimer2019}, LRP \citep{eitelUncoveringConvolutionalNeural2019, eitelTestingRobustnessAttribution2019, dyrbaComparisonCNNVisualization2020, bohleLayerwiseRelevancePropagation2019}, DeconvNet \citep{dyrbaComparisonCNNVisualization2020} and deep Taylor Decomposition \citep{dyrbaComparisonCNNVisualization2020}.
\subsubsection{Single interpretation}
Some studies implemented a single back-propagation method, and exploited it to find which brain regions are exploited by their algorithm \citep{ohClassificationVisualizationAlzheimer2019, lemingEnsembleDeepLearning2020, huDeepLearningBasedClassification2021}, to validate interpretability methods \citep{eitelUncoveringConvolutionalNeural2019} or to provide attribution maps to physicians to improve clinical guidance \citep{burdujaAccurateEfficientIntracranial2020}.
Oh et al.~\cite{ohClassificationVisualizationAlzheimer2019} used the standard back-propagation method to interpret CNNs for classification of Alzheimer's disease statuses. They showed that the attribution maps associated with the prediction of the conversion of prodromal patients to dementia included more complex representations, less focused on the hippocampi, than the ones associated with classification between demented patients from cognitively normal participants (see Figure~\ref{fig: oh_BP}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section3/backpropagation/oh_BP.png}
\caption[Distribution of discriminant regions obtained with gradient back-propagation.]{Distribution of discriminant regions obtained with gradient back-propagation in the classification of demented patients and cognitively normal participants (top part, AD vs CN) and the classification of stable and progressive mild cognitive impairment (bottom part, sMCI vs pMCI). \\
Adapted from \citep{ohClassificationVisualizationAlzheimer2019} (CC BY 4.0).}
\label{fig: oh_BP}
\end{figure}
In the context of autism, Leming et al.~\cite{lemingEnsembleDeepLearning2020} used the Grad-CAM algorithm to determine the most important brain connections from functional connectivity matrices. However, the authors pointed out that without further work, this visualization method did not allow understanding the underlying reason for the attribution of a given feature: for instance, one cannot know if a set of edges is important because it is under-connected or over-connected. Finally, Hu et al.~\cite{huDeepLearningBasedClassification2021} used attribution maps produced by guided back-propagation to quantify the difference in the regions used by their network to characterize Alzheimer's disease or fronto-temporal dementia.
The goal of Eitel et al.~\cite{eitelUncoveringConvolutionalNeural2019} was different. Instead of identifying brain regions related to the classification task, they showed with LRP that transfer learning between networks trained on different diseases (Alzheimer's disease to multiple sclerosis) and different MRI sequences enabled obtaining attribution maps focused on a smaller number of lesion areas. However, the authors pointed out that it would be necessary to confirm their results on larger data sets.
Finally, Burduja et al.~\cite{burdujaAccurateEfficientIntracranial2020} trained a CNN-LSTM model to detect various hemorrhages from brain computed tomography (CT) scans. For each positive slice coming from controversial or difficult scans, they generated Grad-CAM based attribution maps and asked a group of radiologists to classify them as correct, partially correct or incorrect. This classification allowed them to determine patterns for each class of maps, and better understand which characteristics radiologists expected from these maps to be considered as correct and thus useful in practice. In particular, radiologists described maps including any type of hemorrhage as incorrect as soon as some of the hemorrhages were not highlighted, while the model only needed to detect one hemorrhage to correctly classify the slice as pathological.
\subsubsection{Comparison of several interpretability methods}
Papers described in this section used several interpretability methods and compared them in their particular context. However, as the benchmarking of interpretability methods is the focus of Section~\ref{subsec: which method}, which also includes other types of interpretability than back-propagation, we will only focus here on the conclusions that were drawn from the attribution maps.
Dyrba et al.~\cite{dyrbaComparisonCNNVisualization2020} compared DeconvNet, guided back-propagation, deep Taylor decomposition, gradient$\odot$input, LRP (with various rules) and Grad-CAM methods for the classification of Alzheimer's disease, mild cognitive impairment and normal cognition statuses. In accordance with the literature, they observed that the highest attention was given to the hippocampus for both prodromal and demented patients.
B\"{o}hle et al.~\cite{bohleLayerwiseRelevancePropagation2019} compared two methods, LRP with $\beta$-rule and guided back-propagation for Alzheimer's disease status classification. They found that LRP attribution maps highlight the individual differences between patients, and then that they could be used as a tool for clinical guidance.
\subsection{Perturbation methods applied to neuroimaging}
\label{sec:application_peturbation}
The standard perturbation method has been widely used in the study of Alzheimer's disease \citep{baeTransferLearningPredicting2019, riekeVisualizingConvolutionalNetworks2018, nigriExplainableDeepCNNs2020, eitelTestingRobustnessAttribution2019} and related symptoms (amyloid-$\beta$ pathology) \citep{tangInterpretableClassificationAlzheimer2019}. However, most of the time, authors do not train their model with perturbed images. Hence, to generate attribution maps, the perturbation method uses images outside the distribution of the training set, which may call into question the relevance of the predictions and thus the reliability of the attribution maps.
\subsubsection{Variants of the perturbation method tailored to neuroimaging}
Several variations of the perturbation method have been developed to adapt to neuroimaging data.
The most common variation in brain imaging is the brain area perturbation method, which consists in perturbing entire brain regions according to a given brain atlas, as done in \cite{riekeVisualizingConvolutionalNetworks2018, abrolDeepResidualLearning2020, ohClassificationVisualizationAlzheimer2019}.
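The brain area perturbation method can be sketched as follows; the atlas, model and baseline value are toy placeholders, but the logic (occluding one labeled region at a time and recording the output drop) matches the description above.

```python
import numpy as np

def brain_area_perturbation(f, image, atlas, baseline=0.0):
    """Region-level attribution: for each atlas label, occlude the whole
    region with a baseline value and record the drop in the model output."""
    ref = f(image)
    scores = {}
    for label in np.unique(atlas):
        if label == 0:                # 0 = background in this toy atlas
            continue
        perturbed = image.copy()
        perturbed[atlas == label] = baseline
        scores[int(label)] = ref - f(perturbed)
    return scores

# Toy check: a model that only looks at region 2 of a 2-label atlas.
atlas = np.zeros((8, 8), dtype=int)
atlas[:4] = 1
atlas[4:] = 2
f = lambda img: float(img[atlas == 2].sum())
img = np.ones((8, 8))
scores = brain_area_perturbation(f, img, atlas)
print(scores)   # → {1: 0.0, 2: 32.0}
```

Only the region actually used by the model receives a non-zero score, which is precisely what makes the resulting region-level map less scattered than a voxel-wise one.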
In their study of Alzheimer's disease, Abrol et al.~\cite{abrolDeepResidualLearning2020} obtained high values in their attribution maps for the usually discriminant brain regions, such as the hippocampus, the amygdala, the inferior and superior temporal gyri, and the fusiform gyrus. Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018} also obtained results in accordance with the medical literature, and noted that the brain area perturbation method led to a less scattered attribution map than the standard method (Figure~\ref{fig: rieke_perturbation}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.7\textwidth]{figures/section3/perturbation/rieke_perturbation.png}
\caption[Mean attribution maps obtained on demented patients obtained with the standard and the brain area perturbation methods.]{Mean attribution maps obtained on demented patients. The first row corresponds to the standard and the second one to the brain area perturbation method. \\
Reprinted by permission from Springer Nature Customer Service Centre GmbH: Springer Nature, MLCN 2018, DLF 2018, IMIMIC 2018: Understanding and Interpreting Machine Learning in Medical Image Computing Applications, \citep{riekeVisualizingConvolutionalNetworks2018}, 2018.}
\label{fig: rieke_perturbation}
\end{figure}
Oh et al.~\cite{ohClassificationVisualizationAlzheimer2019} used the method to compare the attribution maps of two different tasks: (1) demented patients vs cognitively normal participants and (2) stable vs progressive mild cognitive impairment. They noted that the regions targeted for the first task were shared with the second one (medial temporal lobe), but that some regions were specific to the second task (parts of the parietal lobe).
Guti\'{e}rrez-Becker and Wachinger~\cite{gutierrez-beckerDeepMultistructuralShape2018} adapted the standard perturbation method to a network that classified clouds of points extracted from neuroanatomical shapes of brain regions (e.g. left hippocampus) between different states of Alzheimer's disease. For the perturbation step, the authors set to $0$ the coordinates of a given point $x$ and the ones of its neighbors to then assess the relevance of the point $x$. This method allows easily generating and visualizing a 3D attribution map of the shapes under study.
\subsubsection{Advanced perturbation methods}
More advanced perturbation based methods have also been used in the literature. Nigri et al.~\cite{nigriExplainableDeepCNNs2020} compared a classical perturbation method to a swap test. The swap test replaces the classical perturbation step by a swapping step where patches are exchanged between the input brain image and a reference image chosen according to the model prediction. This exchange is possible as brain images were registered and thus brain regions are positioned in roughly the same location in each image.
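The swapping step can be sketched as follows; the patch size, model and reference image are illustrative placeholders, and the sketch assumes (as in the paper) that images are registered so that patches at the same coordinates correspond to the same brain regions.

```python
import numpy as np

def swap_importance(f, image, reference, patch=4):
    """Swap-test sketch (after Nigri et al.): replace each patch of `image`
    with the matching patch of a registered `reference` image and record
    the resulting change in the model output."""
    base = f(image)
    heat = np.zeros_like(image, dtype=float)
    H, W = image.shape
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            swapped = image.copy()
            swapped[r:r+patch, c:c+patch] = reference[r:r+patch, c:c+patch]
            heat[r:r+patch, c:c+patch] = base - f(swapped)
    return heat

# Toy check: a summing model with a zero reference; each swapped patch
# removes its own contribution (16 ones) from the output.
f = lambda img: float(img.sum())
heat = swap_importance(f, np.ones((8, 8)), np.zeros((8, 8)))
assert np.all(heat == 16.0)
```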
Finally, Thibeau-Sutre et al.~\cite{thibeau-sutreVisualizationApproachAssess2020} used the optimized version of the perturbation method to assess the robustness of CNNs in identifying regions of interest for Alzheimer's disease detection. They applied optimized perturbations on gray matter maps extracted from T1w MR images; the perturbation consisted in increasing the value of the voxels to transform patients into controls. This process aimed at simulating gray matter reconstruction to identify the most important regions that needed to be ``de-atrophied'' to be considered normal again. However, they revealed a lack of robustness of the CNN: different retrainings led to different attribution maps (shown in Figure~\ref{fig: thibeausutre_occlusion}) even though the performance did not change.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section3/perturbation/thibeausutre_perturbation.png}
\caption[Attribution maps obtained with the optimized perturbation methods.]{Coronal view of the mean attribution masks on demented patients obtained for five reruns of the same network with the optimized perturbation method. \\
Adapted with permission from Medical Imaging 2020: Image Processing, \citep{thibeau-sutreVisualizationApproachAssess2020}.}
\label{fig: thibeausutre_occlusion}
\end{figure}
\subsection{Distillation methods applied to neuroimaging}
\label{sec:application_distillation}
Distillation methods are less commonly used, but some very interesting use cases can be found in the literature on brain disorders, with methods such as LIME \citep{mageshExplainableMachineLearning2020} or SHAP \citep{ballIndividualVariationUnderlying2020}.
Magesh et al.~\cite{mageshExplainableMachineLearning2020} used LIME to interpret a CNN for Parkinson's disease detection from single-photon emission computed tomography (SPECT) scans. Most of the time, the most relevant regions were the putamen and the caudate (which is clinically relevant), and some patients also showed an anomalous increase in dopamine activity in nearby areas, which is a characteristic feature of late-stage Parkinson's disease. The authors did not specify how they extracted the ``super-pixels'' necessary for the application of the method, though it could have been interesting to consider neuroanatomical regions instead of creating the voxel groups with an agnostic method.
Ball et al.~\cite{ballIndividualVariationUnderlying2020} used SHAP to obtain explanations at the individual level from three different models trained to predict participants' age from regional cortical thicknesses and areas: a regularised linear model, Gaussian process regression and XGBoost (Figure~\ref{fig: ball_SHAP}). The authors exhibited a set of regions driving predictions for all models, and showed that regional attention was highly correlated on average with the weights of the regularised linear model. However, they showed that while being consistent across models and training folds, the explanations of SHAP at the individual level were generally not correlated with the feature importance obtained from the weight analysis of the regularised linear model.
The authors also exemplified that the global contribution of a region to the final prediction error (``brain age delta''), even with a high SHAP value, was in general small, which indicated that this error was best explained by changes spread across several regions~\cite{ballIndividualVariationUnderlying2020}.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.7\textwidth]{figures/section3/distillation/ball_SHAP.png}
\caption[Mean absolute SHAP values averaged across all subjects for regional thickness and area.]{Mean absolute feature importance (SHAP values) averaged across all subjects for XGBoost on regional thicknesses (red) and areas (green).\\
Adapted from \citep{ballIndividualVariationUnderlying2020} (CC BY 4.0).}
\label{fig: ball_SHAP}
\end{figure}
\subsection{Intrinsic methods applied to neuroimaging}
\label{sec:application_intrinsic}
\subsubsection{Attention modules}
Attention modules have been increasingly used in the past couple of years, as they often allow a boost in performance while being rather easy to implement and interpret.
To diagnose various brain diseases from brain CT images,
Fu et al.~\cite{fuAttentionbasedFullSlice2021} built a model integrating a ``two step attention'' mechanism that selects both the most important slices and the most important pixels in each slice. The authors then leveraged these attention modules to retrieve the five most suspicious slices and highlight the areas with the most significant attention.
In their study of Alzheimer's disease,
Jin et al.~\cite{jinGeneralizableReproducibleNeuroscientifically2020} used a 3D attention module to capture the most discriminant brain regions used for Alzheimer's disease diagnosis. As shown in Figure~\ref{fig: jin_attention}, they obtained significant correlations between the attention patterns (and the regional attention scores) obtained for two independent databases, which indicated a strong reproducibility of the results.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.9\textwidth]{figures/section3/intrinsic/jin_attention.png}
\caption[Attribution maps generated by an attention mechanism module.]{Attribution maps (left: in-house database, right: ADNI database) generated by an attention mechanism module, indicating the discriminant power of various brain regions for Alzheimer's disease diagnosis. \\
Adapted from \citep{jinGeneralizableReproducibleNeuroscientifically2020} (CC BY 4.0). }
\label{fig: jin_attention}
\end{figure}
\subsubsection{Modular transparency}
Modular transparency has often been used in brain imaging analysis. A common practice consists in first generating a disease probability map with a black-box model, before feeding this map to a classifier to generate a final prediction, as done in \citep{qiuDevelopmentValidationInterpretable2020, leeInterpretableAlzheimerDisease2019}.
Qiu et al.~\cite{qiuDevelopmentValidationInterpretable2020} used a convolutional network to generate an attribution map from patches of the brain, highlighting brain regions associated with Alzheimer's disease diagnosis (see Figure~\ref{fig: qiu_modular}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.9\textwidth]{figures/section3/intrinsic/qiu_modular.png}
\caption[Example of modular transparency using random patch learning.]{Randomly selected samples of T1-weighted full MRI volumes are used as input to learn the Alzheimer's disease status at the individual level (Step 1). The application of the model to whole images leads to the generation of participant-specific disease probability maps of the brain (Step 2). \\
Adapted from Brain: A Journal of Neurology, 143, \citep{qiuDevelopmentValidationInterpretable2020}, 2020, with permission of Oxford University Press. }
\label{fig: qiu_modular}
\end{figure}
Lee et al.~\cite{leeInterpretableAlzheimerDisease2019} first parcellated gray matter density maps into 93 regions. For each of these regions, several deep neural networks were trained on randomly selected voxels and their outputs were averaged to obtain a mean regional disease probability. Then, by concatenating these regional probabilities, they generated a region-wise disease probability map of the brain, which was further used to perform Alzheimer's disease detection.
The approach of Ba et al.~\cite{baMultipleObjectRecognition2015} was also applied to Alzheimer's disease detection~\cite{woodNEURODRAM3DRecurrent2019} (preprint). Though this work is still a preprint, the idea is interesting as it aims at reproducing the way a radiologist looks at an MR image. The main difference with \cite{baMultipleObjectRecognition2015} lies in the initialization, as the context network does not take the whole image as input but clinical data of the participant. Then the framework browses the image in the same way as in the original paper: a patch is processed by a recurrent neural network, and from its internal state the emission network learns which patch should be looked at next. After a fixed number of iterations, the internal state of the recurrent neural network is processed by a classification network that gives the final outcome. The whole system is interpretable as the trajectory of the locations (illustrated in Figure~\ref{fig: wood_modular}) processed by the framework allows understanding which regions are more important for the diagnosis. However, this framework may have a high dependency on clinical data: as the initialization depends on scores used to diagnose Alzheimer's disease, the classification network may learn to classify based on the initialization only, and most of the trajectory may then be irrelevant to the assessment of the correct label.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.9\textwidth]{figures/section3/intrinsic/wood_modular.png}
\caption[Trajectory taken by the framework trained based on the work of Ba et al.~\citep{baMultipleObjectRecognition2015}.]{Trajectory taken by the framework for a participant from the ADNI test set. A bounding box around the first location attended to is included to indicate the approximate size of the glimpse that the recurrent neural network receives; this is the same for all subsequent locations. \\
Adapted from \citep{woodNEURODRAM3DRecurrent2019}. Permission to reuse was kindly granted by the authors.}
\label{fig: wood_modular}
\end{figure}
Another framework, the DaniNet, proposed by Ravi et al.~\cite{raviDegenerativeAdversarialNeuroimage2022}, is composed of multiple networks, each with a defined function, as illustrated in Figure~\ref{fig: ravi_modular}.
\begin{itemize}
\item The conditional deep autoencoder (in orange) learns to reduce the size of the slice $x$ to a latent variable $Z$ (encoder part), and then to reconstruct the original image based on $Z$ and two additional variables: the diagnosis and age (generator part). Its performance is evaluated thanks to the reconstruction loss $L^{rec}$.
\item Discriminator networks (in yellow) either force the encoder to take temporal progression into account ($D_z$) or try to determine whether the outputs of the generator are real or generated images ($D_b$).
\item Biological constraints (in grey) force an earlier generated image of a participant to be less atrophied than a later one (voxel loss) and learn to find the diagnosis from regions of the generated images (regional loss).
\item Profile weight functions (in blue) aim at finding appropriate weights for each loss to compute the total loss.
\end{itemize}
The assembly of all these components allows learning a longitudinal model that characterizes the progression of the atrophy of each region of the brain. This atrophy evolution can then be visualized thanks to a neurodegeneration simulation generated by the trained model by sampling missing intermediate values.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.8\textwidth]{figures/section3/intrinsic/ravi_modular.png}
\caption[Pipeline used for training the DaniNet framework.]{Pipeline used for training the proposed DaniNet framework that aims to learn a longitudinal model of the progression of Alzheimer's disease. \\
Adapted from \citep{raviDegenerativeAdversarialNeuroimage2022} (CC BY 4.0).}
\label{fig: ravi_modular}
\end{figure}
\subsection{Benchmarks conducted in the literature}
\label{subsec:benchmarks}
This section describes studies that compared several interpretability methods. We separated evaluations based on metrics from those which are purely qualitative. Indeed, even if the interpretability metrics are not mature yet, it is essential to try to measure quantitatively the difference between methods rather than to only rely on human perception, which may be biased.
\subsubsection{Quantitative evaluations}
Eitel and Ritter~\cite{eitelTestingRobustnessAttribution2019} tested the robustness of four methods: standard perturbation, gradient$\odot$input, guided back-propagation and LRP. To evaluate these methods, the authors trained the same model ten times with random initializations and generated attribution maps for each of the ten runs. For each method, they exhibited significant differences between the averaged true positives/negatives attribution maps of the ten runs. To quantify this variance, they computed the $\ell^2$-norm between the attribution maps and determined for each model the brain regions with the highest attribution. They concluded that LRP and guided back-propagation were the most consistent methods, both in terms of distance between attribution maps and of most relevant brain regions. However, this study makes a strong assumption: to draw these conclusions, the network should provide stable interpretations across retrainings. Unfortunately, Thibeau-Sutre et al.~\cite{thibeau-sutreVisualizationApproachAssess2020} showed that the robustness of the interpretability method and that of the network should be studied separately, as their network retraining was not robust. Indeed, they first showed that the interpretability method they chose (optimized perturbation) was robust according to different criteria, then observed that network retraining led to different attribution maps. The robustness of an interpretability method thus cannot be assessed with the protocol described in~\cite{eitelTestingRobustnessAttribution2019}. Moreover, the fact that guided back-propagation is one of the most stable methods agrees with the results of \cite{adebayoSanityChecksSaliency2018}, who observed that guided back-propagation always gives the same result independently of the weights learned by the network (see Section~\ref{sec:theoretical_limitations}).
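The distance-based part of such a protocol can be sketched in a few lines of numpy. This is a toy illustration, not the original study's code: the maps, the number of retrainings and the normalization choice are ours.

```python
import numpy as np

def pairwise_l2_distances(maps):
    """Mean pairwise l2 distance between attribution maps
    produced by several retrainings of the same model."""
    flat = [m.ravel() / (np.linalg.norm(m) + 1e-12) for m in maps]  # scale-normalize
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(flat) for b in flat[i + 1:]]
    return float(np.mean(dists))

rng = np.random.default_rng(0)
base = rng.random((8, 8))
# A "consistent" method: maps differ only by small noise across retrainings.
stable = [base + 0.01 * rng.standard_normal((8, 8)) for _ in range(10)]
# An "inconsistent" one: unrelated maps at each retraining.
unstable = [rng.random((8, 8)) for _ in range(10)]
assert pairwise_l2_distances(stable) < pairwise_l2_distances(unstable)
```

Note that, as discussed above, a low distance only indicates that the method is consistent if the retrained networks themselves are consistent; the two sources of variability must be disentangled.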
B\"{o}hle et al.~\cite{bohleLayerwiseRelevancePropagation2019} measured the benefit of LRP with $\beta$-rule compared to guided back-propagation by comparing the intensities of the mean attribution map of demented patients and the one of cognitively normal controls. They concluded that LRP allowed a stronger distinction between these two classes than guided back-propagation, as there was a greater difference between the mean maps for LRP. Moreover, they found a stronger correlation between the intensities of the LRP attribution map in the hippocampus and the hippocampal volume than for guided back-propagation. But as \cite{adebayoSanityChecksSaliency2018} demonstrated that guided back-propagation has serious flaws, it does not allow drawing strong conclusions.
Nigri et al.~\cite{nigriExplainableDeepCNNs2020} compared the standard perturbation method to a swap test (see Section~\ref{sec:application_peturbation}) using two properties: continuity and sensitivity. The continuity property is verified if two similar input images have similar explanations. The sensitivity property states that the most salient areas in an explanation map should have the greatest impact on the prediction when removed. The authors carried out experiments with several types of models, and both properties were consistently verified for the swap test, while the standard perturbation method showed a significant absence of continuity and no conclusive fidelity values~\cite{nigriExplainableDeepCNNs2020}.
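The sensitivity property lends itself to a minimal numpy sketch. The linear ``model'', the gradient$\odot$input explanation and all names below are our own toy choices, not those of the cited study:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.random(100)                    # toy linear "model" with positive weights
x = rng.random(100)                    # one input sample
predict = lambda v: float(w @ v)

attribution = w * x                    # gradient-times-input explanation map

def drop_when_removed(indices):
    """Prediction change when the given input features are zeroed out."""
    x_pert = x.copy()
    x_pert[indices] = 0.0
    return abs(predict(x) - predict(x_pert))

k = 10
salient = np.argsort(-attribution)[:k]             # most salient features
random_idx = rng.choice(100, size=k, replace=False)
# Sensitivity: removing the most salient features should affect the
# prediction at least as much as removing randomly chosen ones.
assert drop_when_removed(salient) >= drop_when_removed(random_idx)
```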
Finally Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018} compared four visualization methods: standard back-propagation, guided back-propagation, standard perturbation and brain area perturbation. They computed the Euclidean distance between the mean attribution maps of the same class for two different methods and observed that both gradient methods were close, whereas brain area perturbation was different from all others. They concluded that as interpretability methods lead to different attribution maps, one should compare the results of available methods and not trust only one attribution map.
\subsubsection{Qualitative evaluations}
Some works compared interpretability methods using a purely qualitative evaluation.
First, Eitel et al.~\cite{eitelUncoveringConvolutionalNeural2019} generated attribution maps using the LRP and gradient$\odot$input methods and obtained very similar results. This could be expected as it was shown that there is a strong link between LRP and gradient$\odot$input (see Section~\ref{subsubsec: Relevance BP}).
Dyrba et al.~\cite{dyrbaComparisonCNNVisualization2020} compared DeconvNet, guided back-propagation, deep Taylor decomposition, gradient$\odot$input, LRP (with various rules) and Grad-CAM. The different methods roughly exhibited the same highlighted regions, but with a significant variability in focus, scatter and smoothness, especially for the Grad-CAM method. These conclusions were derived from a visual analysis. According to the authors, LRP and deep Taylor decomposition delivered the most promising results with a highest focus and less scatter~\cite{dyrbaComparisonCNNVisualization2020}.
Tang et al.~\cite{tangInterpretableClassificationAlzheimer2019} compared two interpretability methods that seemed to have different properties: guided Grad-CAM would provide a fine-grained view of feature salience, whereas standard perturbation highlights the interplay of features among classes. A similar conclusion was drawn by Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018}.
\subsubsection{Conclusions from the benchmarks}
The most extensively compared method is LRP, and each time it compared favorably to the other methods assessed. However, its equivalence with gradient$\odot$input for networks using ReLU activations still questions the usefulness of the method, as gradient$\odot$input is much easier to implement. Moreover, the studies reaching this conclusion are not very insightful: \cite{eitelTestingRobustnessAttribution2019} may suffer from methodological biases, \cite{bohleLayerwiseRelevancePropagation2019} compared LRP only to guided back-propagation, which was shown to be irrelevant \citep{adebayoSanityChecksSaliency2018}, and \cite{dyrbaComparisonCNNVisualization2020} only performed a qualitative assessment.
As proposed in conclusion by Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018}, a good way to assess the quality of interpretability methods could be to produce some form of ground truth for the attribution maps, for example by implementing simulation models that control for the level of separability or location of differences.
\section{Limitations and recommendations}
\label{sec:limitations}
Many methods have been proposed for the interpretation of deep learning models. The field is not mature yet and none of them has become a standard. Moreover, although a large panel of methods has been applied to neuroimaging data, the value of the results obtained from these interpretability methods is often still not clear. Furthermore, many applications suffer from methodological issues, making their results (partly) irrelevant. In spite of this, we believe that using interpretability methods is highly useful, in particular to spot cases where the model exploits biases in the data set.
\subsection{Limitations of the methods}
\label{sec:theoretical_limitations}
It is often unclear whether interpretability methods really highlight the features used by the algorithm they interpret. For instance, Adebayo et al.~\cite{adebayoSanityChecksSaliency2018} showed that the attribution maps produced by some interpretability methods (guided back-propagation and guided Grad-CAM) may not be correlated at all with the weights learned by the network during its training procedure. They demonstrated this with a simple test called ``cascading randomization''. In this test, the weights of a network trained on natural images are randomized layer by layer, until the network is fully randomized. At each step, an attribution map is produced with a set of interpretability methods and compared to the original ones (attribution maps produced without randomization). In the case of guided back-propagation and guided Grad-CAM, all attribution maps remained identical, which means that the results of these methods were independent of the training procedure.
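The logic of the test can be sketched on a toy two-layer linear network whose input gradient serves as the attribution map. The network, sizes and correlation measure below are our own simplifications of the protocol, not the original setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def gradient_map(layers):
    """Input gradient of a toy linear network y = W2 @ W1 @ x,
    used here as a stand-in attribution map (it equals W2 @ W1)."""
    W1, W2 = layers
    return (W2 @ W1).ravel()

trained = [rng.standard_normal((8, 16)), rng.standard_normal((1, 8))]
reference = gradient_map(trained)      # map of the "trained" network

# Cascading randomization: starting from the output layer, replace the
# weights of one more layer at each step and recompute the attribution map.
layers = [w.copy() for w in trained]
for i in reversed(range(len(layers))):
    layers[i] = rng.standard_normal(layers[i].shape)
    corr = float(np.corrcoef(reference, gradient_map(layers))[0, 1])
    print(f"layers >= {i} randomized: correlation with original map = {corr:+.2f}")
# A faithful method should see the map decorrelate as weights are destroyed;
# a method whose output is independent of the learned weights (the failure
# mode of guided back-propagation) would keep producing the original map.
```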
Unfortunately, this type of failure does not affect only interpretability methods, but also the metrics designed to evaluate their reliability, which makes the problem even more complex. Tomsett et al.~\cite{tomsettSanityChecksSaliency2020} investigated this issue by evaluating interpretability metrics with three properties:
\begin{itemize}
\item \textbf{inter-rater interpretability} assesses whether a metric always ranks different interpretability methods in the same way across samples of the data set,
\item \textbf{inter-method reliability} checks that the scores given by a metric to each saliency method fluctuate in the same way between images,
\item \textbf{internal consistency} evaluates whether different metrics measuring the same property (for example fidelity) produce correlated scores on a set of attribution maps.
\end{itemize}
They concluded that the investigated metrics were not reliable, though it is difficult to know the origin of this unreliability due to the tight coupling of model, interpretability method and metric.
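The first of these properties can be made concrete with a small numpy sketch. The scores, the three hypothetical methods and the consistency measure (fraction of samples sharing the modal ranking) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 50  # maps scored for 3 methods, e.g. gradient / LRP / perturbation

def ranking_consistency(scores):
    """Fraction of samples sharing the modal method ranking.
    scores: (n_samples, n_methods) metric values, higher = better."""
    orders = np.argsort(-scores, axis=1)           # method ranking per sample
    _, counts = np.unique(orders, axis=0, return_counts=True)
    return counts.max() / len(orders)

# A metric with good inter-rater interpretability ranks the methods the same
# way on (almost) every sample...
reliable = np.tile([0.9, 0.6, 0.3], (n_samples, 1)) \
    + 0.05 * rng.standard_normal((n_samples, 3))
# ...while an unreliable one shuffles the ranking from sample to sample.
unreliable = rng.random((n_samples, 3))
assert ranking_consistency(reliable) > ranking_consistency(unreliable)
```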
\subsection{Methodological advice}
\label{sec:methodological_advice}
Using interpretability methods is increasingly common in medical research. Even though this field is not yet mature and the methods have limitations, we believe that using an interpretability method is usually worthwhile, as it may spot cases where the model bases its decisions on irrelevant features. However, there are methodological pitfalls to avoid and good practices to adopt in order to make a fair and sound analysis of your results.
You should first clearly state in your paper which interpretability method you use, as several variants exist for most methods (see section~\ref{sec:section2}), and its parameters should be clearly specified. Implementation details may also be important: for the Grad-CAM method, attribution maps can be computed at various levels in the network; for a perturbation method, the size and the nature of the perturbation greatly influence the result.
The data on which methods are applied should also be made explicit: for a classification task, results may be completely different if samples are true positives or true negatives, or if they are taken from the train or test sets.
Taking a step back from the interpretability method and especially attribution maps is fundamental as they present several limitations~\cite{bohleLayerwiseRelevancePropagation2019}.
First, there is no ground truth for such maps, which are usually assessed visually by the authors. Comparing the obtained results with the machine learning literature is a good first step, but be aware that you will most of the time find a paper to support your findings, so we suggest relying on established clinical references.
Second, attribution maps are usually sensitive to the interpretability method, its parameters (e.g. $\beta$ for LRP), but also to the final scale used to display maps. A slight change in one of these variables may significantly impact the interpretation.
Third, an attribution map is a way to measure the impact of pixels on the prediction of a given model, but it does not provide underlying reasons (e.g. pathological shape) or explain potential interactions between pixels. A given pixel might have a low attribution when considered on its own, but have a huge impact on the prediction when combined with another.
Fourth, the quality of a map strongly depends on the performance of the associated model. Indeed, low performance models are more likely to use wrong features. However, even in this case, attribution maps may be leveraged, e.g. to determine if the model effectively relies on irrelevant features (such as visual artefacts) or if there are biases in the data set~\cite{lapuschkinAnalyzingClassifiersFisher2016}.
One must also be very careful when trying to establish new medical findings from model interpretations, as we do not always know how interpretability methods react when applied to correlated features. Even if a feature seems of no interest to a model, this does not mean that it is not useful in the study of the disease (for example, a model may not use information from the frontal lobe when diagnosing Alzheimer's disease dementia, but this does not mean that this region is not affected by the disease).
Finally, we suggest implementing different interpretability methods to obtain complementary insights from attribution maps. For instance, using LRP in addition to the standard back-propagation method provides a different type of information, as standard back-propagation gives the sensitivity of the output with respect to the input, while LRP shows the contribution of each input feature to the output. Moreover, using several methods allows comparing them quantitatively with interpretability metrics (see section~\ref{sec:evaluation_metrics}).
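The sensitivity/contribution distinction is easiest to see on a linear model, where the plain gradient reduces to the weights and gradient$\odot$input (which LRP reduces to on ReLU networks) yields per-feature contributions summing to the output. The toy weights and input below are arbitrary:

```python
import numpy as np

w = np.array([2.0, -1.0, 0.5])         # toy linear model f(x) = w @ x
x = np.array([1.0, 3.0, -2.0])

sensitivity = w                        # standard back-propagation: df/dx
contribution = w * x                   # gradient-times-input

# The gradient says how the output would react to a change of each input
# and is the same whatever the input; the contributions explain the value
# actually produced, and sum exactly to the model output.
assert np.isclose(contribution.sum(), w @ x)
```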
\subsection{Which method should I choose?}
\label{subsec: which method}
We conclude this section on how to choose an interpretability method. Some benchmarks were conducted to assess the properties of some interpretability methods compared to others (see Section~\ref{subsec:benchmarks}). Though these are good initiatives, there are still not enough studies (and some of them suffer from methodological flaws) to draw solid conclusions. This is why we give in this section some practical advice to the reader to choose an interpretability method based on more general concepts.
Before implementing an interpretability method, we suggest reviewing the following points to help you choose carefully.
\begin{itemize}
\item \textbf{Implementation complexity}\quad Some methods are more difficult to implement than others, and may require substantial coding efforts. However, many of them have already been implemented in libraries or GitHub repositories (e.g. \cite{uozbulak_pytorch_vis_2021}), so we suggest looking online before trying to re-implement them. This is especially true for model-agnostic methods, such as LIME, SHAP or perturbations, for which no modification of your model is required. For model-specific methods, such as back-propagation ones, the implementation will depend on the model, but if its structure is a common one (e.g. a regular CNN with feature extraction followed by a classifier), it is also very likely that an adequate implementation is already available (e.g. Grad-CAM on CNN in \cite{uozbulak_pytorch_vis_2021}).
\item \textbf{Time cost}\quad Computation time greatly differs from one method to another, especially when the input data is large. For instance, perturbing high-dimensional images is time-consuming, and it would be much faster to use standard back-propagation.
\item \textbf{Method parameters}\quad The number of parameters to set varies between methods, and their choice may greatly influence the result. For instance, the patch size, the step size (distance between two patches) as well as the type of perturbation (e.g. white patches or blurry patches) must be chosen for the standard perturbation method, while the standard back-propagation does not need any parameter. Thus, without prior knowledge on the interpretability results, methods with no or only a few parameters are a good option.
\item \textbf{Literature}\quad Finally, our last piece of advice is to look into the literature to determine the methods that have commonly been used in your domain of study. A highly used method does not guarantee its quality (e.g. guided back-propagation~\cite{adebayoSanityChecksSaliency2018}), but it is usually a good first try.
\end{itemize}
To sum up, we suggest that you choose (or at least begin with) an interpretability method that is easy to implement, time efficient, with no parameters (or only a few) to tune and commonly used. In the context of brain image analysis, we suggest using the standard back-propagation or Grad-CAM methods.
Before using a method you do not know well, you should check that other studies have not shown this method to be irrelevant (as is the case for guided back-propagation and guided Grad-CAM), or equivalent to another method (for example, LRP and gradient$\odot$input on networks with ReLU activation layers).
Regarding interpretability metrics, there is no consensus in the community as the field is not mature yet. A general piece of advice would be to use different metrics and to confront them with human observers, following for example the methodology described in~\cite{ribeiroWhyShouldTrust2016}.
\section{Conclusion}
Interpretability of machine learning models is an important topic, in particular in the medical field. First, this is a natural need expressed by clinicians, who are potential users of medical decision support systems. Moreover, it has been shown on many occasions that models with high performance can actually be using irrelevant features. This is dangerous because it means that they are exploiting biases in the training data sets and thus may dramatically fail when applied to new data sets or deployed in clinical routine.
Interpretability is a very active field of research and many approaches have been proposed. They have been extensively applied in neuroimaging, and very often allowed highlighting clinically relevant regions of the brain that were used by the model. However, comparative benchmarks are not entirely conclusive and it is currently not clear which approach is the most adapted for a given aim. In other words, it is very important to keep in mind that the field of interpretability is not yet mature. It is not yet clear which are the best methods or even if the most widely used approaches will still be considered a standard in the near future.
That being said, we still strongly recommend that a classification or regression model be studied with at least one interpretability method. Indeed, evaluating the performance of the model is not sufficient in itself and the additional use of an interpretation method may allow detecting biases and models that perform well but for bad reasons and thus would not generalize to other settings.
\clearpage
\section*{Appendices}
\renewcommand{\thesubsection}{\Alph{subsection}}
\setcounter{subsection}{0}
\subsection{Short reminder on network training procedure}
\label{appendix:network}
During the training phase, a neural network updates its weights so that a series of inputs match their corresponding target labels:
\begin{enumerate}
\item \textit{Forward pass}\quad The network processes the input image to compute the output value.
\item \textit{Loss computation}\quad The difference between the true labels and the output values is computed according to a criterion (cross-entropy, mean squared error...). This difference is called the loss, and should be as low as possible.
\item \textit{Backward pass}\quad For each learnable parameter of the network, the gradients with respect to the loss are computed.
\item \textit{Weight update}\quad Weights are updated according to the gradients and an optimizer rule (stochastic gradient descent, Adam, Adadelta...).
\end{enumerate}
As a network is a composition of functions, the gradients of the weights of a layer $l$ with respect to the loss can be easily obtained according to the values of the gradients in the following layers. This way of computing gradients layer per layer is called back-propagation.
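The four steps above can be sketched end to end on a toy linear regression, using plain numpy instead of a deep learning framework (the data, learning rate and number of steps are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))       # inputs
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                         # target labels

w = np.zeros(3)                        # learnable parameters
lr = 0.1
for step in range(200):
    y_hat = X @ w                          # 1. forward pass
    loss = np.mean((y_hat - y) ** 2)       # 2. loss computation (MSE)
    grad = 2 * X.T @ (y_hat - y) / len(y)  # 3. backward pass: dloss/dw
    w -= lr * grad                         # 4. weight update (plain SGD)
assert np.allclose(w, true_w, atol=1e-3)
```

In a deep network the gradient of step 3 is not written by hand but obtained layer by layer through back-propagation, as described above.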
\subsection{Description of the main brain disorders mentioned in the reviewed studies}
\label{appendix:diseases}
This appendix aims at shortly presenting the diseases considered by the studies reviewed in Section~\ref{sec:section3}.
The majority of the studies focused on the classification of Alzheimer's disease (AD), a neurodegenerative disease of the elderly. Its pathological hallmarks are senile plaques formed by amyloid-$\beta$ protein and neurofibrillary tangles that are tau protein aggregates. Both can be measured in vivo using either PET imaging or CSF biomarkers. Several other biomarkers of the disease exist. In particular, atrophy of gray and white matter measured from T1w MRI is often used, even though it is not specific of AD.
There is strong and early atrophy in the hippocampi that can be linked to the memory loss, even though other clinical signs are found and other brain areas are altered.
The following diagnosis statuses are often used:
\begin{itemize}
\item \textbf{AD} refers to demented patients,
\item \textbf{CN} refers to cognitively normal participants,
\item \textbf{MCI} refers to patients with mild cognitive impairment (they have an objective cognitive decline but it is not yet sufficient to cause a loss of autonomy),
\item \textbf{stable MCI} refers to MCI patients who stayed stable during a defined period (often three years),
\item \textbf{progressive MCI} refers to MCI patients who progressed to Alzheimer's disease during a defined period (often three years).
\end{itemize}
Most of the studies analysed T1w MRI data, except \cite{tangInterpretableClassificationAlzheimer2019} where the patterns of amyloid-$\beta$ in the brain are studied.
Fronto-temporal dementia is another neurodegenerative disease in which the neuronal loss dominates in the frontal and temporal lobes. Behavior and language are the most affected cognitive functions.
Parkinson's disease is also a neurodegenerative disease. It primarily affects dopaminergic neurons in the substantia nigra. A commonly used neuroimaging technique to detect this loss of dopaminergic neurons is SPECT, as it uses a ligand that binds to dopamine transporters. Patients are affected by motor symptoms such as tremor, slowed movements and gait disorders, but also by sleep disorders, depression and other non-motor symptoms.
Multiple sclerosis is a demyelinating disease with a neurodegenerative component affecting younger people (it begins between the ages of 20 and 50). It causes demyelination of the white matter in the brain (brain stem, basal ganglia, tracts near the ventricles), optic nerve and spinal cord. This demyelination results in autonomic, visual, motor and sensory problems.
Intracranial hemorrhage may result from a physical trauma or nontraumatic causes such as a ruptured aneurysm. Different subtypes exist depending on the location of the hemorrhage.
Autism is a spectrum of neurodevelopmental disorders affecting social interaction and communication. Diagnosis is based on clinical signs (behavior), and the patterns that may exist in the brain are not yet reliably described as they overlap with those of the neurotypical population.
Some brain characteristics that may be related to brain disorders and detected in CT scans were considered in the data set CQ500:
\begin{itemize}
\item \textbf{Midline Shift} is a shift of the center of the brain past the center of the skull.
\item \textbf{Mass Effect} is caused by the presence of an intracranial lesion (for example a tumor) that is compressing nearby tissues.
\item \textbf{Calvarial Fractures} are fractures of the skull.
\end{itemize}
Finally, one study \citep{ballIndividualVariationUnderlying2020} learned to predict the age of cognitively normal participants. Such an algorithm can help in diagnosing brain disorders, as affected patients tend to have a predicted brain age greater than their chronological age, which indicates that they deviate from the normal distribution.
\clearpage
\section*{Acknowledgments}
The research leading to these results has received funding from the French government under management of Agence Nationale de la Recherche as part of the ``Investissements d'avenir'' program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute) and reference ANR-10-IAIHU-06 (Agence Nationale de la Recherche-10-IA Institut Hospitalo-Universitaire-6).
\clearpage
\bibliographystyle{spbasicsort}
\renewcommand{\bibsection}{\subsection*{\refname}}
\renewcommand{\bibpreamble}{\scriptsize \begin{multicols}{2}}
\renewcommand{\bibpostamble}{\end{multicols}}
\mdfsetup{skipabove=\topskip, skipbelow=\topskip}
\newcounter{nicebox}
\newenvironment{nicebox}[1][]{%
\refstepcounter{nicebox}%
\ifstrempty{#1}%
{\mdfsetup{%
frametitle={%
\tikz[baseline=(current bounding box.east),outer sep=0pt]
\node[anchor=east,rectangle,fill=blue!20]
{\strut Theorem~\thetheo};}}
}%
{\mdfsetup{%
frametitle={%
\tikz[baseline=(current bounding box.east),outer sep=0pt]
\node[anchor=east,rectangle,fill=blue!20]
{\strut Box~\thenicebox:~#1};}}%
}%
\mdfsetup{innertopmargin=10pt,linecolor=blue!20, linewidth=2pt,topline=true, frametitleaboveskip=\dimexpr-\ht\strutbox\relax,}
\begin{mdframed}[]\relax%
}{\end{mdframed}}
\newfloat{floatbox}{thp}{lop}
\floatname{floatbox}{Box}
\newcommand{Box}{Box}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathAlphabet{\mathsfit}{\encodingdefault}{\sfdefault}{m}{sl}
\SetMathAlphabet{\mathsfit}{bold}{\encodingdefault}{\sfdefault}{bx}{n}
\section{Introduction}
\label{sec:intro}
\subsection{Need for interpretability}
Many metrics have been developed to evaluate the performance of machine learning (ML) systems. In the case of supervised systems, these metrics compare the output of the algorithm to a ground truth, in order to evaluate its ability to reproduce a label given by a physician. However, the users (patients and clinicians) may want more information before relying on such systems. On which features does the model rely to compute its results? Are these features close to the way a clinician thinks? If not, why? This questioning from medical practitioners and patients is justified, as errors in real life may lead to dramatic consequences.
Trust in ML systems cannot be built only on a set of metrics evaluating the performance of the system.
Indeed, there are various examples of machine learning systems taking correct decisions for the wrong reasons, e.g.~\cite{ribeiroWhyShouldTrust2016,fongInterpretableExplanationsBlack2017,degraveAIRadiographicCOVID192021}. Thus, even though their performance is high, they may be unreliable and, for instance, not generalize well to slightly different data sets. One can try to prevent this issue by interpreting the model with an appropriate method whose output will highlight the reasons why the model took its decision.
In \cite{ribeiroWhyShouldTrust2016}, the authors showed a now classical case of a system that correctly classifies images for the wrong reasons. They purposely designed a biased data set in which wolves always appear in a snowy environment whereas huskies do not. They then trained a classifier to differentiate wolves from huskies: this classifier had a good accuracy, but classified huskies with a snowy background as wolves, and wolves outside of the snow as huskies. Using an interpretability method, they further highlighted that the classifier was looking at the background and not at the animal (see Figure~\ref{fig: ribeiro_husky_snow}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.6\textwidth]{figures/introduction/ribeiro_husky_snow.png}
\caption[Example of an interpretability method highlighting why a network took the wrong decision.]{Example of an interpretability method highlighting why a network took the wrong decision. The explained classifier was trained on the binary task ``Husky'' vs ``Wolf''. The pixels used by the model are actually in the background and highlight the snow. \\
Adapted from \citep{ribeiroWhyShouldTrust2016}. Permission to reuse was kindly granted by the authors.}
\label{fig: ribeiro_husky_snow}
\end{figure}
Another study \cite{fongInterpretableExplanationsBlack2017} detected a bias in ImageNet (a widely used data set of natural images) as the interpretation of images with the label ``chocolate sauce'' highlighted the importance of the spoon. Indeed, ImageNet ``chocolate sauce'' images often contained spoons, leading to a spurious correlation. There are also examples of similar problems in medical applications. For instance, a recent paper \cite{degraveAIRadiographicCOVID192021} showed with interpretability methods that some deep learning systems detecting COVID-19 from chest radiographs actually relied on confounding factors rather than on the actual pathological features. Indeed, their model focused on other regions than the lungs to evaluate the COVID-19 status (edges, diaphragm and cardiac silhouette). Of note, their model was trained on public data sets which were used by many studies.
\subsection{How to interpret models}
According to \cite{liptonMythosModelInterpretability2018}, model interpretability can be broken down into two categories: transparency and post-hoc explanations.
A model can be considered transparent when it (or every part of it) can be fully understood as such, or when the learning process is understandable. A natural and common candidate that, at first sight, fits these criteria is the linear regression algorithm, whose coefficients are usually seen as the individual contributions of the input features. Another candidate is the decision tree approach, whose predictions can be broken down into a series of understandable operations. One can reasonably consider these models as transparent: one can easily identify the features that were used to take the decision. However, one should be cautious not to push the medical interpretation too far. Indeed, the fact that a feature has not been used by the model does not mean that it is not associated with the target. It just means that the model did not need it to increase its performance. For instance, a classifier aiming at diagnosing Alzheimer's disease may only need a set of regions (for instance from the medial temporal lobe of the brain) to achieve an optimal performance. This does not mean that other brain regions are not affected by the disease, just that they were not used by the model to take its decision. This is the case not only for sparse models like the LASSO, but also for standard multiple linear regressions. Moreover, the features given as input to transparent models are often highly engineered, and choices made before the training step (preprocessing, feature selection) may also hurt the transparency of the whole framework. Nevertheless, in spite of these caveats, such models can reasonably be considered transparent, in particular when compared to deep neural networks, which are intrinsically black boxes.
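This caveat about unused features can be made concrete with a small numpy example. The variable names and data are invented: two perfectly redundant ``regions'' predict a target, and a model reaching zero error with one of them tells us nothing about the other.

```python
import numpy as np

rng = np.random.default_rng(4)
region_a = rng.random(100)
region_b = region_a.copy()             # perfectly redundant second feature
target = 3.0 * region_a

# A model using region_a alone reaches zero error...
w = np.linalg.lstsq(region_a[:, None], target, rcond=None)[0]
assert np.allclose(region_a * w, target)

# ...yet region_b is just as associated with the target: being unused by
# the model does not mean being unrelated to the disease.
assert np.isclose(np.corrcoef(region_b, target)[0, 1], 1.0)
```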
The second category of interpretability methods, post-hoc interpretations, allows dealing with non-transparent models. Xie et al.~\cite{xieExplainableDeepLearning2020} proposed a taxonomy in three categories: \textit{visualization} methods consist in extracting an attribution map of the same size as the input whose intensities allow knowing where the algorithm focused its attention, \textit{distillation} approaches consist in reproducing the behavior of a black-box model with a transparent one, and \textit{intrinsic} strategies include interpretability components within the framework, which are trained along with the main task (for example, a classification). In the present work, we focus on this second category of methods (post-hoc) and propose a new taxonomy including other methods of interpretation (see Figure~\ref{fig:taxonomy}). Post-hoc interpretability is the most widely used category nowadays, as it allows interpreting deep learning methods, which have become the state of the art for many tasks in neuroimaging, as in other application fields.
\subsection{Chapter content and outline}
This chapter focuses on methods developed to interpret non-transparent machine learning systems, mainly deep learning systems,
computing classification or regression tasks from high-dimensional inputs. The interpretability of other frameworks (in particular generative models such as variational autoencoders or generative adversarial networks) is not covered, as there are not enough studies addressing them. This may be because high-dimensional outputs (such as images) are easier to interpret ``as such'', whereas low-dimensional outputs (such as scalars) are less transparent.
Most interpretability methods presented in this chapter produce an attribution map: an array with the same dimensions as those of the input (up to a resizing), which can be overlaid on the input in order to exhibit an explanation of the model prediction. In the literature, many different terms coexist to name this output, such as saliency map, interpretation map or heatmap. To avoid misunderstandings, in the following we will only use the term ``attribution map''.
The chapter is organized as follows. Section~\ref{sec:section2} presents the most commonly used interpretability methods proposed for computer vision, independently of medical applications. It also describes metrics developed to evaluate the reliability of interpretability methods. Then, section~\ref{sec:section3} details their application to neuroimaging. Finally, section~\ref{sec:limitations} discusses current limitations of interpretability methods, presents benchmarks conducted in the neuroimaging field and gives some advice to the readers who would like to interpret their own models.
Mathematical notations and abbreviations used during this chapter are summarized in Table~\ref{tab:notations} and Table~\ref{tab:abbreviations}. A short reminder on neural network training procedure and a brief description of the diseases mentioned in the present chapter are provided in Appendices~\ref{appendix:network} and~\ref{appendix:diseases}.
\begin{table}
\caption{Mathematical notations}
\label{tab:notations}
\noindent\hrulefill
\begin{itemize}
\item $X_0$ is the input tensor given to the network, and $X$ refers to any input, sampled from the set $\mathcal{X}$.
\item $y$ is a vector of target classes corresponding to the input.
\item $f$ is a network of $L$ layers. The first layer is the closest to the input, the last layer is the closest to the output. A layer is a function.
\item $g$ is a transparent function which aims at reproducing the behaviour of $f$.
\item $w$ and $b$ are the weights and the bias associated to a linear function (for example in a fully-connected layer).
\item $u$ and $v$ are locations (set of coordinates) corresponding to a node in a feature map. They belong respectively to the set $\mathcal{U}$ and $\mathcal{V}$.
\item $A^{(l)}_k(u)$ is the value of the feature map computed by layer $l$, of $K$ channels at channel $k$, at position $u$.
\item $R^{(l)}_k(u)$ is the value of a property back-propagated through layer $l+1$, of $K$ channels at channel $k$, at position $u$. $R^{(l)}$ and $A^{(l)}$ have the same number of channels.
\item $o_c$ is the output node of interest (in a classification framework, it corresponds to the node of the class $c$).
\item $S_c$ is an attribution map corresponding to the output node $o_c$.
\item $m$ is a mask of perturbations. It can be applied to $X$ to compute its perturbed version $X^m$.
\item $\Phi$ is a function producing a perturbed version of an input $X$.
\item $\Gamma_c$ is the function computing the attribution map $S_c$ from the black-box function $f$ and an input $X_0$.
\end{itemize}
\noindent\hrulefill
\end{table}
\begin{table}
\caption{Abbreviations}
\label{tab:abbreviations}
\noindent\hrulefill
\begin{itemize}
\item \textbf{CAM} Class activation maps
\item \textbf{CNN} Convolutional neural network
\item \textbf{CT} Computed tomography
\item \textbf{Grad-CAM} Gradient-weighted class activation mapping
\item \textbf{LIME} Local interpretable model-agnostic explanations
\item \textbf{LRP} Layer-wise relevance
\item \textbf{MRI} Magnetic resonance imaging
\item \textbf{SHAP} SHapley Additive exPlanations
\item \textbf{T1w} T1-weighted [Magnetic Resonance Imaging]
\end{itemize}
\noindent\hrulefill
\end{table}
\section{Interpretability methods}
\label{sec:section2}
This section presents the main interpretability methods proposed in the domain of computer vision. We restrict ourselves to the methods that have been applied to the neuroimaging domain (the applications themselves being presented in Section~\ref{sec:section3}).
The outline of this section is largely inspired from the one proposed by Xie et al.~\cite{xieExplainableDeepLearning2020}:
\begin{enumerate}
\item \textbf{weight visualization} consists in directly visualizing weights learned by the model, which is natural for linear models but much less informative for deep learning networks,
\item \textbf{feature map visualization} consists in displaying intermediate results produced by a deep learning network to better understand its operation principle,
\item \textbf{back-propagation methods} back-propagate a signal through the machine learning system from the output node of interest $o_c$ to the level of the input to produce an attribution map,
\item \textbf{perturbation methods} locally perturb the input and evaluate the difference in performance between using the original input and the perturbed version to infer which parts of the input are relevant for the machine learning system,
\item \textbf{distillation} approximates the behavior of a black-box model with a more transparent one, and then draws conclusions from this new model,
\item \textbf{intrinsic} includes the only methods of this chapter that are not post-hoc explanations: in this case, interpretability is obtained thanks to components of the framework that are trained at the same time as the model.
\end{enumerate}
Finally, for the methods producing an attribution map, a section is dedicated to the metrics used to evaluate different properties (for example reliability or human-intelligibility) of the maps.
We caution readers that this taxonomy is not perfect: some methods may belong to several categories (for example, LIME and SHAP could belong either to perturbation or distillation methods). Moreover, interpretability is still an active research field, so some categories may (dis)appear or be fused in the future.
The interpretability methods were (most of the time) originally proposed in the context of a classification task. In this case, the network outputs an array of size $C$, corresponding to the number of different labels existing in the data set, and the goal is to know how the output node corresponding to a particular class $c$ interacts with the input or with other parts of the network. However, these techniques can be extended to other tasks: for a regression task, for example, one just has to consider the output node containing the continuous variable learned by the network. Moreover, some methods do not depend on the nature of the algorithm (e.g. standard perturbation or LIME) and can be applied to any machine learning algorithm.
\tikzstyle{root}=[rectangle, draw=black, rounded corners, fill=lightgray, drop shadow, text centered, anchor=north, text=black, text width=5cm, font=\small]
\tikzstyle{category}=[rectangle, draw=black, rounded corners, fill=lightgray, drop shadow, text centered, anchor=north, text=black, text width=3.5cm, font=\footnotesize]
\tikzstyle{description}=[rectangle, draw=black, rounded corners, fill=white, drop shadow, anchor=north, text=black, text width=3.5cm, font=\scriptsize]
\tikzstyle{myarrow}=[stealth-, thick]
\begin{sidewaysfigure}
\centering
\begin{tikzpicture}[node distance=2cm]
\node (Root) [root]
{
\textbf{Interpretability methods}
};
\node (Weight) [category, below left=1cm and 5.5cm of Root]
{
\textbf{Weight \\visualization}
};
\node (WeightDescription) [description, below=0.2cm of Weight]
{
\begin{itemize}[leftmargin=*]
\item[+] \textcolor{Green}{Standard approach for linear models}
\item[--] \textcolor{Red}{Usually uninformative for neural networks}
\end{itemize}
};
\node (Feature) [category, below left=1cm and 1.5cm of Root]
{
\textbf{Feature \\visualization}
};
\node (FeatureDescription) [description, below=0.2cm of Feature]
{
\begin{itemize}[leftmargin=*]
\item[--] \textcolor{Red}{Mostly ad-hoc procedures}
\end{itemize}
};
\node (Backprop) [category, below left=1cm and -2.5cm of Root]
{
\textbf{Back-propagation \\methods}
};
\node (Gradient) [category, below left=5.5cm and 1cm of Backprop]
{
\textbf{Gradient \\back-propagation}
};
\node (Standard) [category, below left=1cm and 0.25cm of Gradient]
{
\textbf{Standard gradient back-propagation}
};
\node (StandardDescription) [description, below=0.2cm of Standard]
{
\begin{itemize}[leftmargin=*]
\item Also called ``saliency map''
\item Very widely used approach
\item[+] \textcolor{Green}{Simple concept and implementation}
\item[+] \textcolor{Green}{A good first choice}
\item[--] \textcolor{Red}{Produces scattered maps}
\end{itemize}
};
\node (Guided) [category, below=1cm of Gradient]
{
\textbf{Guided \\back-propagation}
};
\node (GuidedDescription) [description, below=0.2cm of Guided]
{
\begin{itemize}[leftmargin=*]
\item[--] \textcolor{Red}{Has severe defects \cite{adebayoSanityChecksSaliency2018}}
\end{itemize} };
\node (GradCAM) [category, below right=1cm and 0.25cm of Gradient]
{
\textbf{Gradient-weighted class attribution map (Grad-CAM)}
};
\node (GradCAMDescription) [description, below=0.2cm of GradCAM]
{
\begin{itemize}[leftmargin=*]
\item Very widely used approach
\item[+] \textcolor{Green}{A good first choice}
\item[+] \textcolor{Green}{Non-scattered maps}
\item[--] \textcolor{Red}{Blurry maps due to upsampling}
\end{itemize}
};
\node (Relevance) [category, below right=5.5cm and 1.5cm of Backprop]
{
\textbf{Relevance \\back-propagation}
};
\node (LRP) [category, below left=1cm and -1.75cm of Relevance]
{
\textbf{Layer-wise \\relevance (LRP)}
};
\node (LRPDescription) [description, below=0.2cm of LRP]
{
\begin{itemize}[leftmargin=*]
\item Choose its extensions rather than the original LRP
\end{itemize} };
\node (Taylor) [category, below right=1cm and -1.75cm of Relevance]
{
\textbf{Deep Taylor decomposition}
};
\node (TaylorDescription) [description, below=0.2cm of Taylor]
{
\begin{itemize}[leftmargin=*]
\item Same principle as LRP but with different back-propagation rule
\end{itemize}
};
\node (Perturbation) [category, below right=1cm and -2.5cm of Root]
{
\textbf{Perturbation \\methods}
};
\node (PerturbationDescription) [description, below=0.2cm of Perturbation]
{
\begin{itemize}[leftmargin=*]
\item[+] \textcolor{Green}{Model-agnostic (can be applied to any model)}
\item[+] \textcolor{Green}{Detects image parts that are necessary for a correct decision}
\item[--] \textcolor{Red}{The perturbed data may be outside the training distribution}
\item[--] \textcolor{Red}{Computationally expensive}
\end{itemize}
};
\node (Distillation) [category, below right=1cm and 1.5cm of Root]
{
\textbf{Distillation \\methods}
};
\node (DistillationDescription) [description, below=0.2cm of Distillation]
{
\begin{itemize}[leftmargin=*]
\item Approximate a black-box model with an interpretable one
\item The approximation can be local (e.g. LIME, SHAP) or global
\item So far, global distillation has been rarely used in neuroimaging
\end{itemize}
};
\node (Intrinsinc) [category, below right=1cm and 5.5cm of Root]
{
\textbf{Intrinsic \\methods}
};
\node (IntrinsincDescription) [description, below=0.2cm of Intrinsinc]
{
\begin{itemize}[leftmargin=*]
\item Interpretability is built into the model and not {\sl post-hoc}
\item[+] \textcolor{Green}{Can improve interpretability and performance at the same time}
\item[--] \textcolor{Red}{Cannot be applied to an arbitrary model}
\end{itemize}
};
\draw[myarrow] (Weight.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Feature.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Backprop.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Perturbation.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Distillation.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Intrinsinc.north) -- ++(0,0.5) -| (Root.south);
\draw[myarrow] (Gradient.east) -| (Backprop.south);
\draw[myarrow] (Standard.north) -- ++(0,0.5) -| (Gradient.south);
\draw[myarrow] (Guided.north) -- ++(0,0.5) -| (Gradient.south);
\draw[myarrow] (GradCAM.north) -- ++(0,0.5) -| (Gradient.south);
\draw[myarrow] (Relevance.west) -| (Backprop.south);
\draw[myarrow] (LRP.north) -- ++(0,0.5) -| (Relevance.south);
\draw[myarrow] (Taylor.north) -- ++(0,0.5) -| (Relevance.south);
\end{tikzpicture}
\caption{Taxonomy of the main interpretability methods.}
\label{fig:taxonomy}
\end{sidewaysfigure}
\subsection{Weight visualization}
At first sight, one can be tempted to directly visualize the weights learned by the algorithm. This method is really simple, as it does not require further processing. However, even though it can make sense for linear models, it is not very informative for most networks unless they are specifically designed for such an interpretation.
This is the case for AlexNet \cite{krizhevskyImageNetClassificationDeep2012}, a convolutional neural network (CNN) trained on natural images (ImageNet). In this network, the size of the kernels in the first layer ($11\times11$) is large enough to distinguish patterns of interest. Moreover, as the three channels in the first layer correspond to the three color channels of the images (red, green and blue), the values of the kernels can also be represented in terms of colors (this is not the case for hidden layers, in which the meaning of the channels is lost). The 96 kernels of the first layer were illustrated in the original article as in Figure~\ref{fig:alexnet_weights}. However, for hidden layers, this kind of interpretation may be misleading, as non-linear activation layers are inserted between the convolutional or fully-connected layers; this is why the authors only visualized the weights of the first layer.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/weights/AlexNet_weights.png}
\caption[Convolutional kernels learned by the first convolutional layer of AlexNet.]{96 convolutional kernels of size $3@11\times11$ learned by the first convolutional layer on the $3@224\times224$ input images by AlexNet. \\
Adapted from \citep{krizhevskyImageNetClassificationDeep2012}. Permission to reuse was kindly granted by the authors.}
\label{fig:alexnet_weights}
\end{figure}
To understand the weight visualization in hidden layers of a network, Voss et al.~\cite{vossVisualizingWeights2021} proposed to add some context to the input and the output channels. In this way, they enriched the weight visualization with feature visualization methods able to generate an image corresponding to the input node and the output node (see Figure~\ref{fig:context_weights}). However, the feature visualization methods used to bring this context can themselves be difficult to interpret, so this only moves the interpretability problem from the weights to the features.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/weights/context_weights.png}
\caption[Weight visualization using feature maps context.]{The weights of small kernels in hidden layers (here $5\times5$) can be really difficult to interpret alone. Here, some context allows a better understanding of how a kernel modulates the interaction between concepts conveyed by the input and the output. \\
Adapted from \citep{vossVisualizingWeights2021} (CC BY 4.0).}
\label{fig:context_weights}
\end{figure}
\subsection{Feature map visualization}
Feature maps are the results of the intermediate computations performed on the input to produce the output value. It thus seems natural to visualize them, or to link them to concepts, to understand how the input is successively transformed into the output.
Methods described in this section aim at highlighting which concepts a feature map (or part of it) $A$ conveys.
\subsubsection{Direct interpretation}
The output of a convolution has the same shape as its input: a 2D image processed by a convolution will become another 2D image (the size may vary). It is thus possible to directly visualize these feature maps and compare them to the input to understand the operations performed by the network. However, the number of filters of convolutional layers (often a hundred) makes the interpretation difficult, as a high number of images must be interpreted for a single input.
Instead of directly visualizing the feature map $A$, it is possible to study the latent space gathering the values of all the samples of a data set at the level of the feature map $A$. One can then study the deformations of the input by drawing trajectories between samples in this latent space, or more simply look at the distribution of some label in a manifold learned from the latent space. In such a way, it is possible to better understand which patterns were detected, or at which layer in the network classes begin to be separated (in the classification case). There is often no theoretical framework to illustrate these techniques, so we refer to studies in the context of medical applications (see Section~\ref{sec:application_FM} for references).
\subsubsection{Input optimization}
Olah et al.~\cite{olahFeatureVisualization2017a} proposed to compute an input that maximizes the value of a feature map $A$ (see Figure~\ref{fig:FM_parts}). However, this technique leads to unrealistic images that may themselves be difficult to interpret, particularly for neuroimaging data. To gain a better insight into the behavior of layers or filters, another simple technique illustrated by the same authors consists in isolating the inputs that led to the highest activation of $A$. The combination of both methods, displayed in Figure~\ref{fig:FM_examples}, allows a better understanding of the concepts conveyed by $A$ in a GoogleNet trained on natural images.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/feature_maps/FM_parts.png}
\caption[Optimization of the input for different levels of feature maps.]{Optimization of the input for different levels of feature maps. \\
Adapted from \citep{olahFeatureVisualization2017a} (CC BY 4.0).}
\label{fig:FM_parts}
\end{figure}
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/feature_maps/FM_examples.png}
\caption[Association of input optimization with examples.]{Interpretation of a neuron of a feature map by optimizing the input associated with a bunch of training examples maximizing this neuron. \\
Adapted from \citep{olahFeatureVisualization2017a} (CC BY 4.0).}
\label{fig:FM_examples}
\end{figure}
\subsection{Back-propagation methods}
\label{subsec:backprop}
The goal of these interpretability methods is to link the value of an output node of interest $o_c$ to the image $X_0$ given as input to a network. They do so by back-propagating a signal from $o_c$ to $X_0$: this process (backward pass) can be seen as the reverse of the operation performed when computing the output value from the input (forward pass).
Any property can be back-propagated, provided that its value at the level of feature map $l-1$ can be computed from its value at feature map $l$. In this section, the back-propagated properties are the gradients or the relevance of a node $o_c$.
\subsubsection{Gradient back-propagation}
During network training, gradients corresponding to each layer are computed according to the loss to update the weights. Then, we can see these gradients as the difference needed at the layer level to improve the final result: by adding this difference to the weights, the probability of the true class $y$ increases.
In the same way, the gradients can be computed at the image level to find how the input should vary to change the value of $o_c$ (see an example in Figure~\ref{fig:simonyan_gradients}). This gradient computation was proposed by \cite{simonyanDeepConvolutionalNetworks2013}, in which the attribution map $S_c$ corresponding to the input image $X_0$ and the output node $o_c$ is computed according to the following equation:
\begin{equation}
S_c = \frac{\partial{o_c}}{\partial{X}}\Bigr|_{\substack{X=X_0}}
\end{equation}
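This computation can be sketched as follows. A deep learning framework would obtain the gradient by automatic differentiation; a finite-difference approximation is used here only to keep the example self-contained, and the function \texttt{f} standing for the scalar output $o_c$ is a toy linear scorer, not an actual network:

```python
import numpy as np

def gradient_attribution(f, X0, eps=1e-5):
    """Approximate S_c = d o_c / d X at X0 by central finite differences.
    `f` maps an input array to the scalar output o_c of interest."""
    S = np.zeros_like(X0, dtype=float)
    for idx in np.ndindex(X0.shape):
        Xp, Xm = X0.copy(), X0.copy()
        Xp[idx] += eps
        Xm[idx] -= eps
        S[idx] = (f(Xp) - f(Xm)) / (2 * eps)
    return S

# Toy "network": a linear scorer, whose gradient map is its weight map.
w = np.array([[1.0, -2.0], [0.5, 3.0]])
f = lambda X: float((w * X).sum())
X0 = np.ones((2, 2))
S_c = gradient_attribution(f, X0)   # close to w
```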
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/section2/backpropagation/simonyan_BP.png}
\caption[Attribution map of an image found with gradients back-propagation.]{Attribution map of an image found with gradients back-propagation.\\
Adapted from \cite{simonyanDeepConvolutionalNetworks2013}. Permission to reuse was kindly granted by the authors.}
\label{fig:simonyan_gradients}
\end{figure}
Due to its simplicity, this method is the most commonly used to interpret deep learning networks. Its attribution map is often called a ``saliency map''; however, this term is also used in some articles to refer to any attribution map, which is why we chose to avoid it in this chapter.
This method was modified to derive many similar methods based on gradients computation described in the following paragraphs.
\paragraph{gradient$\odot$input}
This method is the point-wise product of the gradient map described at the beginning of the section and the input. Evaluated in \cite{shrikumarNotJustBlack2017a}, it was presented as an improvement of the gradients method, though the original paper does not give strong arguments on the nature of this improvement.
\paragraph{DeconvNet \& guided back-propagation}
The key difference between this procedure and the standard back-propagation method is the way the gradients are back-propagated through the ReLU layer.
The ReLU layer is a commonly used activation function that sets to 0 the negative input values, and does not affect positive input values. The derivative of this function in layer $l$ is the indicator function $\mathbb{1}_{A^{(l)}>0}$: it outputs 1 (resp. 0) where the feature maps computed during the forward pass were positive (resp. negative).
Springenberg et al.~\cite{springenbergStrivingSimplicityAll2014} proposed to back-propagate the signal differently. Instead of applying the indicator function of the feature map $A^{(l)}$ computed during the forward pass, they directly applied ReLU to the back-propagated values $R^{(l+1)}=\frac{\partial{o_c}}{\partial{A^{(l+1)}}}$, which corresponds to multiplying them by the indicator function $\mathbb{1}_{R^{(l+1)}>0}$. This ``backward deconvnet'' method allows back-propagating only the positive gradients and, according to the authors, results in a reconstructed image showing the part of the input image that most strongly activates the neuron.
The guided back-propagation method (equation~\ref{eq: guided backprop}) combines the standard back-propagation (equation~\ref{eq: standard back-propagation}) with the backward deconvnet (equation~\ref{eq: deconvnet}): when back-propagating gradients through ReLU layers, a value is set to 0 if the corresponding top gradients or bottom data is negative. This adds an additional guidance to the standard back-propagation by preventing backward flow of negative gradients.
\begin{equation}\label{eq: standard back-propagation}
R^{(l)} = \mathbb{1}_{A^{(l)}>0} * R^{(l+1)}
\end{equation}
\begin{equation}\label{eq: deconvnet}
R^{(l)} = \mathbb{1}_{R^{(l+1)}>0} * R^{(l+1)}
\end{equation}
\begin{equation}\label{eq: guided backprop}
R^{(l)} = \mathbb{1}_{A^{(l)}>0} * \mathbb{1}_{R^{(l+1)}>0} * R^{(l+1)}
\end{equation}
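The three back-propagation rules above differ only in how they treat the ReLU. A minimal numpy sketch (operating on flat arrays rather than real feature maps) makes the difference concrete:

```python
import numpy as np

def relu_backward(R_next, A, rule="standard"):
    """Back-propagate R^{(l+1)} through a ReLU, given the forward
    activations A^{(l)} of the layer below."""
    if rule == "standard":    # standard back-propagation
        return (A > 0) * R_next
    if rule == "deconvnet":   # backward deconvnet: keep positive gradients
        return (R_next > 0) * R_next
    if rule == "guided":      # guided back-propagation: both conditions
        return (A > 0) * (R_next > 0) * R_next
    raise ValueError(rule)

A = np.array([-1.0, 2.0, 3.0, -4.0])   # forward activations
R = np.array([ 5.0, -6.0, 7.0,  8.0])  # incoming gradients
relu_backward(R, A, "standard")   # [0., -6., 7., 0.]
relu_backward(R, A, "deconvnet")  # [5., 0., 7., 8.]
relu_backward(R, A, "guided")     # [0., 0., 7., 0.]
```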
Any back-propagation procedure can be ``guided'', as it only concerns the way ReLU functions are managed during back-propagation (this is the case for example for guided Grad-CAM).
While it was initially adopted by the community, this method showed severe defects as discussed later in section \ref{sec:limitations}.
\paragraph{CAM \& Grad-CAM}
In this setting, attribution maps are computed at the level of a feature map produced by a convolutional layer, and then upsampled to be overlaid and compared with the input. The first method, class activation maps (CAM) was proposed by Zhou et al.~\cite{zhouLearningDeepFeatures2015}, and can be only applied to CNNs with the following specific architecture:
\begin{enumerate}
\item a series of convolutions associated with activation functions and possibly pooling layers. These convolutions output a feature map $A$ with $N$ channels,
\item a global average pooling that extracts the mean value of each channel of the feature map produced by the convolutions,
\item a single fully-connected layer.
\end{enumerate}
The CAM corresponding to $o_c$ will be the mean of the channels of the feature map produced by the convolutions, weighted by the weights $w_{kc}$ learned in the fully-connected layer
\begin{equation}
S_c = \sum_{k=1}^N w_{kc} * A_k \enspace .
\end{equation}
This map has the same size as $A_k$, which might be smaller than the input if the convolutional part performs downsampling operations (which is very often the case). Then, the map is upsampled to the size of the input to be overlaid on the input.
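A minimal sketch of the CAM computation (before upsampling), assuming a feature map of shape $(K, H, W)$ and hypothetical class weights \texttt{w\_c}:

```python
import numpy as np

def cam(A, w_c):
    """CAM for class c: A is the feature map of shape (K, H, W) output
    by the convolutions, w_c the K weights of the fully-connected layer
    for class c. The (H, W) result is then upsampled to the input size."""
    return np.tensordot(w_c, A, axes=1)   # weighted sum over channels

A = np.arange(8.0).reshape(2, 2, 2)   # K=2 channels of size 2x2
w_c = np.array([1.0, 2.0])
S_c = cam(A, w_c)   # [[8., 11.], [14., 17.]]
```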
Selvaraju et al.~\cite{selvarajuGradCAMVisualExplanations2017} proposed an extension of CAM that can be applied to any architecture: Grad-CAM (illustrated on Figure~\ref{fig:selvaraju_gradcam}). As in CAM, the attribution map is a linear combination of the channels of a feature map computed by a convolutional layer. But, in this case, the weights of each channel are computed using gradient back-propagation
\begin{equation}
\alpha_{kc} = \frac{1}{\lvert \mathcal{U} \rvert} \sum_{u \in \mathcal{U}} \frac{\partial {o_c}}{\partial A_{k}(u)} \enspace .
\end{equation}
The final map is then the linear combination of the feature maps weighted by the coefficients. A ReLU activation is then applied to the result to only keep the features that have a positive influence on class $c$
\begin{equation}
S_c = ReLU(\sum_{k=1}^N \alpha_{kc} * A_k) \enspace .
\end{equation}
Similarly to CAM, this map is then upsampled to the input size.
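The two steps above (gradient pooling, then a ReLU on the weighted sum of channels) can be sketched as follows; the gradients are given directly as an array instead of being computed by a framework, and the upsampling step is omitted:

```python
import numpy as np

def grad_cam(A, dOc_dA):
    """Grad-CAM from a feature map A (K, H, W) and the gradients of o_c
    with respect to A (same shape)."""
    alpha = dOc_dA.mean(axis=(1, 2))    # global-average-pooled gradients
    S = np.tensordot(alpha, A, axes=1)  # weighted sum over channels
    return np.maximum(S, 0.0)           # ReLU keeps positive evidence only

A = np.ones((2, 3, 3))
grads = np.stack([np.full((3, 3), 3.0), np.full((3, 3), -2.0)])
S_c = grad_cam(A, grads)   # alpha = [3, -2] -> map of ones
```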
\begin{figure}
\centering
\includegraphics{figures/section2/backpropagation/selvaraju_gradcam.png}
\caption[Grad-CAM explanations highlighting two different objects in an image.]{Grad-CAM explanations highlighting two different objects in an image. (A) the original image, (B) the explanation based on the ``dog'' node, (C) the explanation based on the ``cat'' node. \\ \textcopyright 2017 IEEE. Reprinted, with permission, from \cite{selvarajuGradCAMVisualExplanations2017}.}
\label{fig:selvaraju_gradcam}
\end{figure}
Grad-CAM can be applied to any feature map produced by a convolution, but in practice the last convolutional layer is very often chosen. The authors argue that this layer is ``the best compromise between high-level semantics and detailed spatial information'' (the latter is lost in fully-connected layers, as the feature maps are flattened).
Because of the upsampling step, CAM and Grad-CAM produce maps that are more human-friendly, as they contain more connected zones, contrary to other attribution maps obtained with gradient back-propagation, which can look very scattered. However, the smaller the feature maps $A_k$, the blurrier the resulting attribution maps, leading to a possible loss of interpretability.
\subsubsection{Relevance back-propagation}
\label{subsubsec: Relevance BP}
Instead of back-propagating gradients to the level of the input or of the last convolutional layer, Bach et al.~\cite{bachPixelWiseExplanationsNonLinear2015} proposed to back-propagate the score obtained by a class $c$, which is called the relevance. This score corresponds to $o_c$ after some postprocessing (for example softmax), as its value must be positive if class $c$ was identified in the input. At the end of the back-propagation process, the goal is to find the relevance $R_u$ of each feature $u$ of the input (for example, of each pixel of an image) such that $o_c = \sum_{u \in \mathcal{U}} R_u$.
In their paper, Bach et al.~\cite{bachPixelWiseExplanationsNonLinear2015} take the example of a fully-connected layer defined by a matrix of weights $w$ and a bias $b$ at layer $l+1$. The value of a node $v$ in feature map $A^{(l+1)}$ is computed during the forward pass by the following formula:
\begin{equation}
A^{(l+1)}(v) = b + \sum_{u \in \mathcal{U}} w_{uv} A^{(l)}(u)
\end{equation}
During the back-propagation of the relevance, $R^{(l)}(u)$, the value of the relevance at the level of layer $l$, is computed from the values of the relevance $R^{(l+1)}(v)$, which are distributed according to the weights $w$ learned during the forward pass and the values of $A^{(l)}(u)$:
\begin{equation}
R^{(l)}(u) = \sum_{v\in \mathcal{V}} R^{(l+1)}(v) \frac{A^{(l)}(u) w_{uv}}{\sum\limits_{u' \in \mathcal{U}} A^{(l)}(u') w_{u'v}} \enspace .
\end{equation}
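A minimal sketch of this rule for a single linear layer (the bias being omitted for simplicity), which also shows the conservation of relevance across the layer:

```python
import numpy as np

def lrp_linear(A, w, R_next):
    """Basic LRP rule for a linear layer: redistribute R^{(l+1)} onto the
    inputs proportionally to the weighted activations z_uv = A(u) * w_uv."""
    z = A[:, None] * w            # shape (U, V)
    denom = z.sum(axis=0)         # sum over u', one value per output node v
    return (z / denom * R_next).sum(axis=1)

A = np.array([1.0, 2.0])                  # activations A^{(l)}
w = np.array([[1.0, 0.0], [1.0, 2.0]])    # weights of the linear layer
R_next = np.array([3.0, 4.0])             # relevance R^{(l+1)}
R = lrp_linear(A, w, R_next)              # [1., 6.]
# relevance is conserved: R.sum() == R_next.sum() == 7
```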
The main issue of the method comes from the fact that the denominator may become (close to) zero, leading to the explosion of the back-propagated relevance. Moreover, it was shown in~\cite{shrikumarNotJustBlack2017a} that when all activations are piece-wise linear (such as ReLU or leaky ReLU), the layer-wise relevance (LRP) method reproduces the output of gradient$\odot$input, questioning the usefulness of the method.
This is why Samek et al.~\cite{samekEvaluatingVisualizationWhat2017} proposed two variants of the standard LRP method~\cite{bachPixelWiseExplanationsNonLinear2015}. Moreover, they described the behavior of the back-propagation in layers other than the linear ones (the convolutional layers following the same formula as the linear ones). They illustrated their method with a neural network trained on MNIST (see Figure~\ref{fig:samek_LRP}). To simplify the equations in the following paragraphs, we now denote the weighted activations as $z_{uv} = A^{(l)}(u) w_{uv}$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/backpropagation/samek_LRP.png}
\caption[LRP attribution maps explaining the decision of a neural network trained on MNIST.]{LRP attribution maps explaining the decision of a neural network trained on MNIST.\\
\textcopyright 2017 IEEE. Reprinted, with permission, from \cite{samekEvaluatingVisualizationWhat2017}.}
\label{fig:samek_LRP}
\end{figure}
\paragraph{$\epsilon$-rule}
The $\epsilon$-rule integrates a parameter $\epsilon > 0$, used to avoid numerical instability. Though it avoids the case of a null denominator, this variant breaks the rule of relevance conservation across layers
\begin{equation}
R^{(l)}(u) = \sum_{v \in \mathcal{V}} R^{(l+1)}(v) \frac{z_{uv}}{\sum\limits_{u' \in \mathcal{U}} z_{u'v} + \epsilon \times sign\left(\sum\limits_{u' \in \mathcal{U}} z_{u'v}\right)} \enspace .
\end{equation}
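A sketch of the $\epsilon$-rule on a near-cancelling toy example, showing that the stabilizer bounds the relevance at the price of breaking its conservation (the sign of a null denominator is taken as positive here, a convention not specified in the formula above):

```python
import numpy as np

def lrp_epsilon(A, w, R_next, eps=0.01):
    """epsilon-rule: the stabilizer keeps the denominator away from zero,
    but a fraction of the relevance is absorbed, so conservation across
    layers no longer holds."""
    z = A[:, None] * w
    denom = z.sum(axis=0)
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)
    return (z / denom * R_next).sum(axis=1)

# Near-cancelling activations: the basic rule would divide by 0.001
A = np.array([1.0, -1.0])
w = np.array([[1.0], [0.999]])
R_next = np.array([1.0])
R = lrp_epsilon(A, w, R_next)   # bounded, but R.sum() < R_next.sum()
```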
\paragraph{$\beta$-rule}
The $\beta$-rule keeps the conservation of the relevance by treating separately the positive weighted activations $z^+_{uv}$ from the negative ones $z^-_{uv}$
\begin{equation}
R^{(l)}(u) = \sum_{v \in \mathcal{V}} R^{(l+1)}(v) \left((1 + \beta)\frac{z^+_{uv}}{\sum\limits_{u' \in \mathcal{U}} z^+_{u'v}} - \beta \frac{z^-_{uv}}{\sum\limits_{u' \in \mathcal{U}} z^-_{u'v}}\right) \enspace .
\end{equation}
Though these two LRP variants improve the numerical stability of the procedure, they require choosing parameter values that may change the patterns in the resulting saliency map.
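For comparison, the $\beta$-rule can be sketched analogously (a hedged numpy sketch under our own naming; it assumes every output node receives at least one positive and one negative weighted activation, otherwise the corresponding denominator is zero):

```python
import numpy as np

def lrp_beta(a, w, relevance_out, beta=1.0):
    """One beta-rule LRP step through a linear layer (positive/negative split)."""
    z = a[:, None] * w                        # weighted activations z_uv
    z_pos = np.clip(z, 0, None)               # z+_uv
    z_neg = np.clip(z, None, 0)               # z-_uv
    frac_pos = z_pos / z_pos.sum(axis=0, keepdims=True)
    frac_neg = z_neg / z_neg.sum(axis=0, keepdims=True)
    weights = (1 + beta) * frac_pos - beta * frac_neg
    return (weights * relevance_out[None, :]).sum(axis=1)
```

Unlike the $\epsilon$-rule, the relevance here is conserved exactly: the output sums to the incoming relevance.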
\paragraph{Deep Taylor decomposition}
Deep Taylor decomposition \citep{montavonExplainingNonlinearClassification2017} was proposed by the same team as the original LRP method and its variants. It is based on similar principles: the score obtained by a class $c$ is back-propagated, but the back-propagation rule relies on first-order Taylor expansions.
The back-propagation from node $v$ at the level of $R^{(l+1)}$ to $u$ at the level of $R^{(l)}$ can be written
\begin{equation}
R^{(l)}(u) = \sum_{v \in \mathcal{V}} \frac{\partial R^{(l+1)}(v)}{\partial A^{(l)}(u)} \Bigr|_{\tilde{A}^{(l)}_{v}(u)} \left( A^{(l)}(u) - \tilde{A}^{(l)}_{v}(u) \right) \enspace .
\end{equation}
This rule involves a root point $\tilde{A}^{(l)}_{v}(u)$, which is close to $A^{(l)}(u)$ and meets a set of constraints depending on $v$.
\subsection{Perturbation methods}
\label{subsec:perturbations}
Instead of relying on a backward pass (from the output to the input) as in the previous section, perturbation methods rely on the difference between the value of $o_c$ computed on the original input and on a locally perturbed input. This process is less abstract for humans than back-propagation methods, as we can reproduce it ourselves: if the part of the image needed to find the correct output is hidden, we are also unable to predict correctly. Moreover, it is model-agnostic and can be applied to any algorithm or deep learning architecture.
The main drawback of these techniques is that the nature of the perturbation is crucial: different perturbation functions lead to different attribution maps. Moreover, Montavon et al.~\cite{montavonMethodsInterpretingUnderstanding2018} suggest that the perturbation rule should keep the perturbed input within the training data distribution. Indeed, if this is not the case, one cannot know whether the network performance dropped because of the location or the nature of the perturbation.
\subsubsection{Standard perturbation}
Zeiler and Fergus~\cite{zeilerVisualizingUnderstandingConvolutional2014} proposed the most intuitive perturbation method. This standard procedure consists in removing information locally in a specific zone of an input $X_0$ and evaluating whether this modifies the output node $o_c$. The more the perturbation degrades the task performance, the more crucial this zone is for the network to correctly perform the task. To obtain the final attribution map, the input is perturbed at all possible locations. Examples of attribution maps obtained with this method are displayed in Figure~\ref{fig:standard_perturbation}.
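The procedure can be sketched as follows (a minimal 2-D numpy sketch; the names, square patch shape and constant perturbation value are our own choices):

```python
import numpy as np

def occlusion_map(f, x, patch=8, value=0.0):
    """Slide a constant patch over a 2-D input and record the score drop."""
    h, w = x.shape
    base = f(x)                                     # score o_c on the original input
    smap = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            x_p = x.copy()
            x_p[i:i + patch, j:j + patch] = value   # local perturbation
            smap[i, j] = base - f(x_p)              # large drop = important zone
    return smap
```

Note that the output map is smaller than the input and must be upsampled for comparison, as discussed below.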
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/perturbation/standard_perturbations.png}
\caption[Attribution maps obtained with standard perturbation.]{Attribution maps obtained with standard perturbation. Here the perturbation is a gray patch covering a specific zone of the input, as shown in the left column. The attribution maps (second row) display the probability of the true label: the lower the value, the more important the zone is for the network to correctly identify the label. This kind of perturbation takes the perturbed input out of the training distribution. \\
Reprinted by permission from Springer Nature Customer Service Centre GmbH: Springer Nature, ECCV 2014: Visualizing and Understanding Convolutional Networks, \citep{zeilerVisualizingUnderstandingConvolutional2014}, 2014.}
\label{fig:standard_perturbation}
\end{figure}
As evaluating the impact of the perturbation at each pixel location is computationally expensive, one can choose to skip some locations (i.e. scan the image with a stride $>$ 1). This leads to a smaller attribution map, which needs to be upsampled for comparison with the original input (in the same way as CAM \& Grad-CAM).
However, in addition to the problem of the nature of the perturbation previously mentioned, this method presents two drawbacks:
\begin{itemize}
\item the attribution maps depend on the size of the perturbation: if the perturbation is too large, it is not local anymore; if it is too small, it is not meaningful anymore (a single-pixel perturbation cannot cover a pattern),
\item input pixels are considered independently from each other: if the result of a network relies on a combination of pixels that cannot all be covered at the same time by the perturbation, their influence may not be detected.
\end{itemize}
\subsubsection{Optimized perturbation}
To deal with these two issues,
Fong and Vedaldi~\cite{fongInterpretableExplanationsBlack2017} proposed to optimize a perturbation mask covering the whole input. This perturbation mask $m$ has the same size as the input $X_0$. Its application is associated with a perturbation function $\Phi$ and leads to the computation of the perturbed input $X_0^m$. Its value at a coordinate $u$ reflects the quantity of information remaining in the perturbed image:
\begin{itemize}
\item if $m(u) = 1$, the pixel at location $u$ is not perturbed and has the same value in the perturbed input as in the original input ($X_0^m(u)=X_0(u)$).
\item if $m(u) = 0$ the pixel at location $u$ is fully perturbed and the value in the perturbed image is the one given by the perturbation function only ($X_0^m(u)=\Phi(X_0)(u)$).
\end{itemize}
This principle can be extended to any value between 0 and 1 with a linear interpolation
\begin{equation}
X_0^m(u)= m(u)X_0(u) + (1-m(u))\Phi(X_0)(u) \enspace .
\end{equation}
Then, the goal is to optimize this mask $m$ according to three criteria:
\begin{enumerate}
\item the perturbed input $X_0^m$ should lead to the lowest performance possible,
\item the mask $m$ should perturb the minimum number of pixels possible, and
\item the mask $m$ should produce connected zones (i.e. avoid the scattered aspect of gradient maps).
\end{enumerate}
These three criteria are optimized using the following loss:
\begin{equation}
f(X_0^m) + \lambda_1 \lVert 1 - m \rVert^{\beta_1}_{\beta_1} + \lambda_2 \lVert \nabla m \rVert^{\beta_2}_{\beta_2}
\end{equation}
with $f$ a function that decreases as the performance of the network decreases.
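This objective can be sketched as follows for a 1-D input (a hedged sketch: the smoothness term uses simple finite differences, and all hyperparameter values are illustrative, not those of the original paper):

```python
import numpy as np

def mask_loss(f, x, m, phi, lam1=1.0, lam2=1.0, beta1=1.0, beta2=2.0):
    """Loss of the optimized-perturbation mask for a 1-D input x.

    f: scalar model score (decreases as performance decreases),
    m: mask in [0, 1] with the same shape as x, phi: perturbation function.
    """
    x_m = m * x + (1 - m) * phi(x)                    # perturbed input
    sparsity = np.sum(np.abs(1 - m) ** beta1)         # perturb as few pixels as possible
    smoothness = np.sum(np.abs(np.diff(m)) ** beta2)  # favor connected zones
    return f(x_m) + lam1 * sparsity + lam2 * smoothness
```

In practice $m$ is initialized and then optimized by gradient descent on this loss, which requires $f$ and $\phi$ to be differentiable.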
However, the method also presents two drawbacks:
\begin{itemize}
\item The values of hyperparameters must be chosen ($\lambda_1$, $\lambda_2$, $\beta_1$, $\beta_2$) to find a balance between the three optimization criteria of the mask,
\item The mask may not highlight the most important pixels of the input but instead create artifacts in the perturbed image to artificially degrade the performance of the network (see Figure~\ref{fig:optimized_artifacts}).
\end{itemize}
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section2/perturbation/artifacts.png}
\caption[Example of artifacts created by optimized perturbation method.]{In this example, the network learned to classify objects in natural images. Instead of masking the maypole at the center of the image, it creates artifacts in the sky to degrade the performance of the network. \\
\textcopyright 2017 IEEE. Reprinted, with permission, from \citep{fongInterpretableExplanationsBlack2017}.}
\label{fig:optimized_artifacts}
\end{figure}
\subsection{Distillation}
\label{subsec:distillation}
Approaches described in this section aim at developing a transparent method to reproduce the behavior of a black-box one. Then it is possible to consider simple interpretability methods (such as weight visualization) on the transparent method instead of considering the black box.
\subsubsection{Local approximation}
\paragraph{LIME}
Ribeiro et al.~\cite{ribeiroWhyShouldTrust2016} proposed Local Interpretable Model-agnostic Explanations (LIME). This approach is:
\begin{itemize}
\item \textbf{local}, as the explanation is valid in the vicinity of a specific input $X_0$,
\item \textbf{interpretable}, as an interpretable model $g$ (linear model, decision tree...) is computed to reproduce the behavior of $f$ on $X_0$, and
\item \textbf{model-agnostic}, as it does not depend on the algorithm trained.
\end{itemize}
This last property comes from the fact that the vicinity of $X_0$ is explored by sampling perturbed versions of $X_0$. LIME thus shares the advantage (model-agnostic) and drawback (dependence on the perturbation function) of the perturbation methods presented in Section~\ref{subsec:perturbations}. Moreover, the authors specify that, in the case of images, they group pixels of the input into $d$ super-pixels (contiguous patches of similar pixels).
The loss to be minimized to find $g$ specific to the input $X_0$ is the following:
\begin{equation}
\mathcal{L}(f, g, \pi_{X_0}) + \Omega(g) \enspace ,
\end{equation}
where $\pi_{X_0}$ is a proximity function that defines the locality of $X_0$ (i.e. $\pi_{X_0}(X)$ increases as $X$ becomes closer to $X_0$),
$\mathcal{L}$ measures how unfaithful $g$ is in approximating $f$ according to $\pi_{X_0}$, and
$\Omega$ is a measure of the complexity of $g$.
Ribeiro et al.~\cite{ribeiroWhyShouldTrust2016} limited their search to sparse linear models, however other assumptions could be made on $g$.
$g$ is not applied to the input directly but to a binary mask $m \in \{0, 1\}^d$ that transforms the input $X$ into $X^m$ and is applied according to a set of $d$ super-pixels. For each super-pixel $u$:
\begin{enumerate}
\item if $m(u) = 1$ the super-pixel $u$ is not perturbed,
\item if $m(u) = 0$ the super-pixel $u$ is perturbed (i.e. it is grayed).
\end{enumerate}
They used $\pi_{X_0}(X) = \exp\left(-\frac{\lVert X - X_0 \rVert^2}{\sigma^2}\right)$ and $\mathcal{L}(f, g, \pi_{X_0}) = \sum_{m} \pi_{X_0}(X_0^m) \left(f(X_0^m) - g(m)\right)^2$. Finally, $\Omega(g)$ is the number of non-zero weights of $g$, and its value is limited to $K$. This way, they select the $K$ super-pixels in $X_0$ that best explain the algorithm result $f(X_0)$.
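A bare-bones version of this procedure can be written as follows (a simplified sketch: plain features stand in for super-pixels, graying a feature means setting it to zero, and the sparsity constraint $\Omega$ is omitted; all names are ours):

```python
import numpy as np

def lime_fit(f, x0, n_samples=200, sigma=1.0, seed=0):
    """Fit a local weighted linear surrogate g(m) of f around x0."""
    rng = np.random.default_rng(seed)
    d = x0.size
    masks = rng.integers(0, 2, size=(n_samples, d))    # random binary masks m
    x_pert = masks * x0                                # perturbed inputs X_0^m
    w = np.exp(-np.sum((x_pert - x0) ** 2, axis=1) / sigma ** 2)  # pi_{X_0}
    y = np.array([f(xi) for xi in x_pert])
    feats = np.hstack([masks, np.ones((n_samples, 1))])  # mask features + intercept
    sw = np.sqrt(w)[:, None]                           # weighted least squares
    coef, *_ = np.linalg.lstsq(feats * sw, y * sw[:, 0], rcond=None)
    return coef[:-1]                                   # one weight per feature
```

The returned coefficients play the role of the sparse linear explanation: the larger a coefficient, the more the corresponding feature contributes to $f(X_0)$ locally.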
\paragraph{SHAP}
Lundberg and Lee~\cite{lundbergUnifiedApproachInterpreting2017a} proposed SHAP (SHapley Additive exPlanations), a theoretical framework that encompasses several existing interpretability methods, including LIME. In this framework each of the $N$ features (again, super-pixels for images) is associated with a coefficient $\phi$ that denotes its contribution to the result. The contribution of each feature is evaluated by perturbing the input $X_0$ with a binary mask $m$ (see paragraph on LIME). Then the goal is to find an interpretable model $g$ specific to $X_0$, such that
\begin{equation}
g(m) = \phi_0 + \sum_{i=1}^N{\phi_i m_i}
\end{equation}
with $\phi_0$ being the output when the input is fully perturbed.
The authors look for an expression of $\phi$ that respects three properties:
\begin{itemize}
\item \textbf{Local accuracy}\quad $g$ and $f$ should match in the vicinity of $X_0$: $g(m) = f(X_0^m)$.
\item \textbf{Missingness}\quad Perturbed features should not contribute to the result: $m_i = 0 \rightarrow \phi_i = 0$.
\item \textbf{Consistency}\quad Let us denote by $m \setminus i$ the mask $m$ in which $m_i = 0$. For any two models $f^1$ and $f^2$, if for all $m \in \{0, 1\}^N$
$f^1(X_0^{m}) - f^1(X_0^{m \setminus i}) \ge f^2(X_0^{m}) - f^2(X_0^{m \setminus i})$,
then
$\phi^1_i \ge \phi^2_i$ (where $\phi^k$ are the coefficients associated with model $f^k$).
\end{itemize}
Lundberg and Lee~\cite{lundbergUnifiedApproachInterpreting2017a} show that only one expression is possible for the coefficients $\phi$, which can be approximated with different algorithms:
\begin{equation}
\phi_i = \sum_{m \in \{0, 1\}^N} \frac{|m|! (N - |m| - 1)!}{N!} \left[ f(X_0^{m}) - f(X_0^{m \setminus i}) \right] \enspace .
\end{equation}
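For a small number of features, these coefficients can be computed exactly by brute force (a didactic sketch with exponential cost; following the formula above, the sum runs over the masks that keep feature $i$):

```python
from itertools import product
from math import factorial

def shapley(f, n):
    """Exact Shapley coefficients of a set function f over n binary features.

    f maps a tuple of n bits (1 = feature kept) to a scalar.
    """
    phi = [0.0] * n
    for i in range(n):
        for m in product([0, 1], repeat=n):
            if m[i] == 0:                      # only masks containing feature i
                continue
            m_minus = tuple(0 if j == i else b for j, b in enumerate(m))
            s = sum(m) - 1                     # size of m \ i
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            phi[i] += weight * (f(m) - f(m_minus))
    return phi
```

For an additive model, the coefficients recover the individual contributions exactly; the approximation algorithms mentioned above exist precisely because this exact computation is intractable for large $N$.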
\subsubsection{Model translation}
Contrary to local approximation, which provides an explanation according to a specific input $X_0$, model translation consists in finding a transparent model that reproduces the behavior of the black-box model on the whole data set.
As it was rarely employed in neuroimaging frameworks, this section only discusses the distillation to decision trees proposed in \cite{frosstDistillingNeuralNetwork2017} (preprint). For a more extensive review of model translation methods, we refer the reader to \cite{xieExplainableDeepLearning2020}.
After training a machine learning system $f$, a binary decision tree $g$ is trained to reproduce its behavior. This tree is trained on a set of inputs $X$, and each inner node $i$ learns a matrix of weights $w_i$ and biases $b_i$. The forward pass of $X$ in node $i$ of the tree is as follows: if $\mathrm{sigmoid}(w_iX + b_i) > 0.5$, then the right child node is chosen, else the left child node is chosen. After training the decision tree, it is possible to visualize at which level which classes were separated, to better understand which classes are similar for the network. It is also possible to visualize the matrices of weights learned by each inner node to identify the patterns learned at each class separation. An illustration of this distillation process, on the MNIST data set (hand-written digits), can be found in Figure~\ref{fig: frosst_tree}.
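The routing rule can be sketched as follows (a minimal sketch with heap-style node indexing; the greedy hard routing shown here is a simplification of the soft tree of the original paper):

```python
import numpy as np

def route(x, nodes, depth):
    """Greedily route input x through a binary tree of (w, b) inner nodes.

    nodes: dict mapping a node index to its (weights, bias);
    children of node i are indexed 2*i + 1 (left) and 2*i + 2 (right).
    """
    idx = 0                                            # start at the root
    for _ in range(depth):
        w, b = nodes[idx]
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # sigmoid(w.x + b)
        idx = 2 * idx + (2 if p > 0.5 else 1)          # right if p > 0.5 else left
    return idx                                         # leaf index
```

The sequence of visited nodes, together with the learned $w_i$ patterns, is what makes the distilled tree interpretable.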
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/distillation/frosst_tree.png}
\caption[Visualization of a soft decision tree trained on MNIST.]{Visualization of a soft decision tree trained on MNIST. \\
Adapted from \cite{frosstDistillingNeuralNetwork2017}. Permission to reuse was kindly granted by the authors.}
\label{fig: frosst_tree}
\end{figure}
\subsection{Intrinsic}
\label{subsec:intrisinc}
Contrary to the previous sections, in which interpretability methods could be applied to (almost) any network after the end of the training procedure, the following methods require designing the framework before the training phase, as the interpretability components and the network are trained simultaneously. In the papers presented in this section~\citep{xuShowAttendTell2016, wangResidualAttentionNetwork2017a, baMultipleObjectRecognition2015}, the advantages of these methods are dual: they improve both the interpretability and the performance of the network. However, the drawback is that they have to be implemented before training the network, so they cannot be applied in all cases.
\subsubsection{Attention modules}
Attention is a concept in machine learning that consists in producing an attribution map from a feature map and using it to improve the learning of another task (such as classification, regression or reconstruction) by making the algorithm focus on the parts of the feature map highlighted by the attribution map.
In the deep learning domain, we take as reference \cite{xuShowAttendTell2016}, in which a network is trained to produce a descriptive caption of natural images. This network is composed of three parts:
\begin{enumerate}
\item a convolutional encoder that reduces the dimension of the input image to the size of the feature maps $A$,
\item an attention module that generates an attribution map $S_t$ from $A$ and the previous hidden state of the long short-term memory (LSTM) network,
\item an LSTM decoder that computes the caption from its previous hidden state, the previous word generated, $A$ and $S_t$.
\end{enumerate}
As $S_t$ is of the same size as $A$ (smaller than the input), the result is then upsampled to be overlaid on the input image. As one attribution map is generated per word generated by the LSTM, it is possible to know where the network focused when generating each word of the caption (see Figure~\ref{fig:attention_lstm}).
In this example, the attribution map is given to an LSTM, which uses it to generate a context vector $z_t$ by applying a function $\phi$ to $A$ and $S_t$.
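A toy version of such an attention step can be sketched as follows (a hedged, generic formulation, not the exact $\phi$ of the cited paper: positions of the feature map are scored against a query vector, and the context vector is their attention-weighted sum):

```python
import numpy as np

def soft_attention(features, query):
    """Compute an attention map over positions and the resulting context vector.

    features: array (positions, channels); query: array (channels,).
    """
    scores = features @ query                  # one score per spatial position
    e = np.exp(scores - scores.max())          # numerically stable softmax
    alpha = e / e.sum()                        # attribution map S over positions
    context = alpha @ features                 # context vector z
    return alpha, context
```

Because `alpha` sums to one over positions, it can directly be upsampled and overlaid on the input image, as in Figure~\ref{fig:attention_lstm}.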
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/intrinsic/LSTM_caption.png}
\caption[Attribution maps obtained with attention modules.]{Examples of images correctly captioned by the network. The focus of the attribution map is highlighted in white and the associated word in the caption is underlined. \\
Adapted from \citep{xuShowAttendTell2016}. Permission to reuse was kindly granted by the authors.}
\label{fig:attention_lstm}
\end{figure}
More generally in CNNs, the point-wise product of the attribution map $S$ and the feature map $A$ is used to generate the refined feature map $A'$, which is given to the next layers of the network. Adding an attention module requires new architectural choices: its location (on lower or higher feature maps) may impact the performance of the network. Moreover, it is possible to stack several attention modules along the network, as was done in \cite{wangResidualAttentionNetwork2017a}.
\subsubsection{Modular Transparency}
Contrary to the studies of the previous sections, the frameworks of this category are composed of several networks (modules) that interact with each other. Each module is a black box, but the transparency of the functions, or of the nature of the interactions between them, allows understanding how the system works globally and extracting interpretability metrics from it.
A large variety of setups can be designed following this principle, and it is not possible to draw a more detailed general rule for this section. We will take the example described in~\cite{baMultipleObjectRecognition2015}, which was adapted to neuroimaging data (see Section~\ref{sec:application_intrinsic}), to illustrate this section, though it may not be representative of all the aspects of modular transparency.
Ba et al.~\cite{baMultipleObjectRecognition2015} proposed a framework (illustrated in Figure~\ref{fig: ba_modular}) to perform the analysis of an image in the same way as a human, by looking at successive relevant locations in the image. To perform this task, they assemble a set of networks that interact together:
\begin{itemize}
\item \textbf{Glimpse network}\quad This network takes as input a patch of the input image and the location of its center to output a context vector that will be processed by the recurrent network. Then this vector conveys information on the main features in a patch and its location.
\item \textbf{Recurrent network}\quad This network takes as input the successive context vectors and updates its hidden state, which will be used to find the next location to look at and to perform the learned task at the global scale (in the original paper, a classification of the whole input image).
\item \textbf{Emission network}\quad This network takes as input the current state of the recurrent network and outputs the next location to look at. This will allow computing the patch that will feed the glimpse network.
\item \textbf{Context network}\quad This network takes as input the whole input at the beginning of the task and outputs the first context vector to initialize the recurrent network.
\item \textbf{Classification network}\quad This network takes as input the current state of the recurrent network and outputs a prediction for the class label.
\end{itemize}
The global framework can be seen as interpretable as it is possible to review the successive processed locations.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/section2/intrinsic/ba_modular.png}
\caption[Framework with modular transparency browsing an image to compute the output at the global scale.]{Framework with modular transparency browsing an image to compute the output at the global scale. \\
Adapted from \citep{baMultipleObjectRecognition2015}. Permission to reuse was kindly granted by the authors.}
\label{fig: ba_modular}
\end{figure}
\subsection{Interpretability metrics}
\label{sec:evaluation_metrics}
To evaluate the reliability of the methods presented in the previous sections, one cannot only rely on qualitative evaluation. This is why interpretability metrics that evaluate attribution maps were proposed.
These metrics may evaluate different properties of attribution maps.
\begin{itemize}
\item \textbf{Fidelity} evaluates if the zones highlighted by the map influence the decision of the network.
\item \textbf{Sensitivity} evaluates how the attribution map changes according to small changes in the input $X_0$.
\item \textbf{Continuity} evaluates if two close data points lead to similar attribution maps.
\end{itemize}
In the following, $\Gamma$ is an interpretability method computing an attribution map $S$ of the black-box network $f$ and an input $X_0$.
\subsubsection{(In)fidelity}
Yeh et al.~\cite{yehFidelitySensitivityExplanations2019} proposed a measure of infidelity of $\Gamma$ based on perturbations applied according to a vector $m$ of the same shape as the attribution map $S$. The explanation is unfaithful if perturbations applied to zones of $X_0$ highlighted by $S$ lead to negligible changes in $f(X_0^m)$ or, on the contrary, if perturbations applied to zones not highlighted by $S$ lead to significant changes in $f(X_0^m)$. The associated formula is
\begin{equation}
\text{INFD}(\Gamma, f, X_0) = \mathbb{E}_{m} \left[ \left( \sum_{i}\sum_{j}m_{ij} \Gamma(f, X_0)_{ij} - \left( f(X_0) - f(X_0^m) \right) \right)^2 \right] \enspace .
\end{equation}
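A Monte-Carlo estimate of this quantity can be sketched as follows (a hedged sketch using Gaussian perturbations $m$ and defining $X_0^m = X_0 - m$; the perturbation scale and sample count are arbitrary choices of ours):

```python
import numpy as np

def infidelity(f, x, attribution, n_samples=100, scale=0.1, seed=0):
    """Monte-Carlo estimate of infidelity with random Gaussian perturbations."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_samples):
        m = scale * rng.standard_normal(x.shape)
        # squared gap between the attribution's prediction and the actual change
        vals.append((np.sum(m * attribution) - (f(x) - f(x - m))) ** 2)
    return float(np.mean(vals))
```

For a linear model, the gradient is a perfectly faithful attribution, so this estimate is (numerically) zero.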
\subsubsection{Sensitivity}
Yeh et al.~\cite{yehFidelitySensitivityExplanations2019} also proposed a measure of sensitivity. As suggested by the definition, it relies on the construction of attribution maps for inputs $\tilde{X_0}$ similar to $X_0$. As changes are small, sensitivity depends on a scalar $\epsilon$ set by the user, which corresponds to the maximum difference allowed between $X_0$ and $\tilde{X_0}$. Sensitivity then corresponds to the following formula:
\begin{equation}
\text{SENS}_{\text{max}}(\Gamma, f, X_0, \epsilon) = \max_{\lVert \tilde{X_0} - X_0 \rVert \le \epsilon} \lVert \Gamma(f, \tilde{X_0}) - \Gamma(f, X_0) \rVert \enspace .
\end{equation}
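In practice, this maximum is intractable and is approximated by sampling, e.g. (a minimal sketch; uniform sampling in the $\epsilon$-ball is our own choice):

```python
import numpy as np

def sensitivity_max(explainer, f, x, eps=0.1, n_samples=50, seed=0):
    """Approximate SENS_max by sampling perturbed inputs within an eps-ball."""
    rng = np.random.default_rng(seed)
    base = explainer(f, x)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.uniform(-1.0, 1.0, size=x.shape)
        d = eps * d / max(np.linalg.norm(d), 1e-12)   # project onto the eps-ball
        worst = max(worst, float(np.linalg.norm(explainer(f, x + d) - base)))
    return worst
```

A low value indicates that the attribution map is stable under small input changes, which is usually desirable.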
\subsubsection{Continuity}
Continuity is very similar to sensitivity, except that it compares different data points belonging to the input domain $\mathcal{X}$, whereas sensitivity may generate similar inputs with a perturbation method.
This measure was introduced in \cite{montavonMethodsInterpretingUnderstanding2018} and can be computed using the following formula:
\begin{equation}
\text{CONT}(\Gamma, f, \mathcal{X}) = \max_{X_1, X_2 \in \mathcal{X}~\& ~X_1 \neq X_2} \frac{\lVert \Gamma(f, X_1) - \Gamma(f, X_2) \rVert_1}{\lVert X_1 - X_2 \rVert_2} \enspace .
\end{equation}
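Over a finite data set, this can be sketched directly (a didactic sketch with quadratic cost in the number of inputs):

```python
import numpy as np

def continuity(explainer, f, X):
    """CONT over a finite set of inputs X (one input per row)."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            num = np.sum(np.abs(explainer(f, X[i]) - explainer(f, X[j])))  # L1
            den = np.linalg.norm(X[i] - X[j])                              # L2
            worst = max(worst, num / den)
    return worst
```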
\vspace{1cm}
As these metrics rely on perturbation, they are also influenced by the nature of the perturbation and may lead to different results, which is a major issue (see Section~\ref{sec:limitations}).
Other metrics were also proposed and depend on the task learned by the network: for example in the case of a classification, statistical tests can be conducted between saliency maps of different classes to assess whether they differ according to the class they explain.
\section{Application of interpretability methods to neuroimaging data}
\label{sec:section3}
In this section, we provide a non-exhaustive review of applications of interpretability methods to neuroimaging data. In most cases, the focus of articles is prediction/classification rather than the interpretability method, which is just seen as a tool to analyze the results. Thus, authors do not usually motivate their choice of an interpretability method. Another key consideration here is the spatial registration of brain images, which enables having brain regions roughly at the same position between subjects. This technique is of paramount importance as attribution maps computed for registered images can then be averaged or used to automatically determine the most important brain areas, which would not be possible with unaligned images.
All the studies presented in this section are summarized in Table~\ref{tab:section3}.
This section ends with the presentation of benchmarks conducted in the literature to compare different interpretability methods in the context of brain disorders.
\input{studies_table_wo_preprocessing}
\subsection{Weight visualization applied to neuroimaging}
\label{sec:application_weights}
As the focus of this chapter is on non-transparent models, such as deep learning ones, weight visualization was only rarely found. However, this was the method chosen by
Cecotti and Gr\"{a}ser~\cite{cecottiConvolutionalNeuralNetworks2011}, who developed a CNN architecture adapted to weight visualization to detect P300 signals in electroencephalograms (EEG). The input of this network is a matrix whose rows correspond to the 64 electrodes and columns to 78 time points. The first two layers of the network are convolutions with rectangular filters: the first filters (size 1$\times$64) combine the electrodes, whereas the second ones (13$\times$1) find time patterns. Then, it is possible to retrieve a coefficient per electrode by summing the weights associated with this electrode across the different filters, and to visualize the results in the electroencephalogram space, as shown in Figure~\ref{fig:cecotti_weights}.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.5\textwidth]{figures/section3/weights/cecotti_weights.png}
\caption[Relative importance of the electrodes for signal detection in EEG using CNN weight visualization]{Relative importance of the electrodes for signal detection in EEG using two different architectures (CNN-1 and CNN-3) and two subjects (A and B) using CNN weight visualization. Dark values correspond to weights with a high absolute value while white values correspond to weights close to 0.\\
\textcopyright 2011 IEEE. Reprinted, with permission, from \citep{cecottiConvolutionalNeuralNetworks2011}.}
\label{fig:cecotti_weights}
\end{figure}
\subsection{Feature map visualization applied to neuroimaging}
\label{sec:application_FM}
Contrary to the limited application of weight visualization, there is an extensive literature about leveraging individual feature maps and latent spaces to better understand how models work. This goes from the visualization of these maps or their projections \citep{ohClassificationVisualizationAlzheimer2019, abrolDeepResidualLearning2020, biffiExplainableAnatomicalShape2020}, to the analysis of neuron behavior \citep{martinez-murciaStudyingManifoldStructure2020, lemingEnsembleDeepLearning2020}, through sampling in latent spaces \citep{biffiExplainableAnatomicalShape2020}.
Oh et al.~\cite{ohClassificationVisualizationAlzheimer2019} displayed the feature maps associated with the convolutional layers of CNNs trained for various Alzheimer's disease status classification tasks (Figure~\ref{fig: oh_FM}). In the first two layers, the extracted features were similar to white matter, cerebrospinal fluid and skull segmentations, while the last layer showcased sparse, global and nearly binary patterns. They used this example to emphasize the advantage of using CNNs to extract very abstract and complex features rather than relying on custom feature extraction algorithms~\cite{ohClassificationVisualizationAlzheimer2019}.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.8\textwidth]{figures/section3/feature_maps/oh_FM.png}
\caption[Representation of a selection of feature maps.]{Representation of a selection of feature maps (outputs of 4 filters on 10 for each layer) obtained for a single individual. \\
Adapted from \citep{ohClassificationVisualizationAlzheimer2019} (CC BY 4.0).}
\label{fig: oh_FM}
\end{figure}
Another way to visualize a feature map is to project it in a two or three-dimensional space to understand how it is positioned with respect to other feature maps. Abrol et al.~\cite{abrolDeepResidualLearning2020} projected the features obtained after the first dense layer of a ResNet architecture onto a two-dimensional space using the classical t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction technique. For the classification task of Alzheimer's disease statuses, they observed that the projections were correctly ordered according to the disease severity, supporting the correctness of the model~\cite{abrolDeepResidualLearning2020}. They partitioned these projections into three groups: Far-AD (more extreme Alzheimer's Disease patients), Far-CN (more extreme Cognitively Normal participants) and Fused (a set of images at the intersection of AD and CN groups). Using a t-test, they were able to detect and highlight voxels presenting significant differences between groups.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.95\textwidth]{figures/section3/feature_maps/abrol_FM.png}
\caption[Difference in neuroimaging space between groups defined thanks to t-SNE projection.]{Difference in neuroimaging space between groups defined thanks to t-SNE projection. Voxels showing significant differences post false discovery rate (FDR) correction (p \textless 0.05) are highlighted. \\
Reprinted from Journal of Neuroscience Methods, 339, \citep{abrolDeepResidualLearning2020}, 2020, with permission from Elsevier.}
\label{fig: abrol_FM}
\end{figure}
Biffi et al.~\cite{biffiExplainableAnatomicalShape2020} not only used feature map visualization, but also sampled the feature space. Indeed, they trained a ladder variational autoencoder framework to learn hierarchical latent representations of 3D hippocampal segmentations of control subjects and Alzheimer’s disease patients. A multi-layer perceptron was jointly trained on top of the highest two-dimensional latent space to classify anatomical shapes. While lower spaces needed a dimensionality reduction technique (i.e. t-SNE), the highest latent space could directly be visualized, as well as the anatomical variability it captured in the initial input space, by leveraging the generative process of the model. This sampling enabled an easy visualization and quantification of the anatomical differences between each class.
Finally, it may be very informative to better understand the behavior of neurons and what they are encoding. After training deep convolutional autoencoders to reconstruct MR images, segmented gray matter maps and white matter maps, Martinez-Murcia et al.~\cite{martinez-murciaStudyingManifoldStructure2020} computed correlations between each individual hidden neuron value and clinical information (e.g. age, mini-mental state examination), which allowed them to determine to which extent this information was encoded in the latent space. This way, they determined which clinical data were the most strongly associated with the latent representations.
Using a collection of nine different MRI data sets, Leming et al.~\cite{lemingEnsembleDeepLearning2020} trained CNNs for various classification tasks (autism vs typically developing, male vs female and task vs rest). They computed a diversity coefficient for each filter of the second layer based on its output feature map. They counted how many different data sets maximally activated each value of this feature map: if they were mainly activated by one source of data, the coefficient would be close to 0, whereas if they were activated by all data sets, it would be close to 1. This allows assessing the layer stratification, i.e. understanding whether a given filter was mostly maximally activated by one phenotype or by a diverse population. They found that a few filters were only maximally activated by images from a single MRI data set, and that the diversity coefficient was not normally distributed across filters: it generally showed two peaks at the beginning and at the end of the spectrum, exhibiting respectively the stratified and strongly diverse filters.
\subsection{Back-propagation methods applied to neuroimaging}
\label{sec:application_BP}
Back-propagation methods are the most popular methods to interpret models, and a wide range of these algorithms have been used to study brain disorders: standard and guided back-propagation \citep{huDeepLearningBasedClassification2021, ohClassificationVisualizationAlzheimer2019, riekeVisualizingConvolutionalNetworks2018, eitelTestingRobustnessAttribution2019, bohleLayerwiseRelevancePropagation2019}, gradient$\odot$input \citep{eitelUncoveringConvolutionalNeural2019, eitelTestingRobustnessAttribution2019, dyrbaComparisonCNNVisualization2020}, Grad-CAM \citep{burdujaAccurateEfficientIntracranial2020, dyrbaComparisonCNNVisualization2020}, guided Grad-CAM \citep{tangInterpretableClassificationAlzheimer2019}, LRP \citep{eitelUncoveringConvolutionalNeural2019, eitelTestingRobustnessAttribution2019, dyrbaComparisonCNNVisualization2020, bohleLayerwiseRelevancePropagation2019}, DeconvNet \citep{dyrbaComparisonCNNVisualization2020} and deep Taylor Decomposition \citep{dyrbaComparisonCNNVisualization2020}.
\subsubsection{Single interpretation}
Some studies implemented a single back-propagation method, and exploited it to find which brain regions are exploited by their algorithm \citep{ohClassificationVisualizationAlzheimer2019, lemingEnsembleDeepLearning2020, huDeepLearningBasedClassification2021}, to validate interpretability methods \citep{eitelUncoveringConvolutionalNeural2019} or to provide attribution maps to physicians to improve clinical guidance \citep{burdujaAccurateEfficientIntracranial2020}.
Oh et al.~\cite{ohClassificationVisualizationAlzheimer2019} used the standard back-propagation method to interpret CNNs for classification of Alzheimer's disease statuses. They showed that the attribution maps associated with the prediction of the conversion of prodromal patients to dementia included more complex representations, less focused on the hippocampi, than the ones associated with classification between demented patients from cognitively normal participants (see Figure~\ref{fig: oh_BP}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section3/backpropagation/oh_BP.png}
\caption[Distribution of discriminant regions obtained with gradient back-propagation.]{Distribution of discriminant regions obtained with gradient back-propagation in the classification of demented patients and cognitively normal participants (top part, AD vs CN) and the classification of stable and progressive mild cognitive impairment (bottom part, sMCI vs pMCI). \\
Adapted from \citep{ohClassificationVisualizationAlzheimer2019} (CC BY 4.0).}
\label{fig: oh_BP}
\end{figure}
In the context of autism, Leming et al.~\cite{lemingEnsembleDeepLearning2020} used the Grad-CAM algorithm to determine the most important brain connections from functional connectivity matrices. However, the authors pointed out that, without further work, this visualization method did not allow understanding the underlying reason for the attribution of a given feature: for instance, one cannot know whether a set of edges is important because it is under-connected or over-connected. Finally, Hu et al.~\cite{huDeepLearningBasedClassification2021} used attribution maps produced by guided back-propagation to quantify the difference in the regions used by their network to characterize Alzheimer's disease or fronto-temporal dementia.
The goal of Eitel et al.~\cite{eitelUncoveringConvolutionalNeural2019} was different. Instead of identifying brain regions related to the classification task, they exhibited with LRP that transfer learning between networks trained on different diseases (Alzheimer's disease to multiple sclerosis) and different MRI sequences enabled obtaining attribution maps focused on a smaller number of lesion areas. However, the authors pointed out that it would be necessary to confirm their results on larger data sets.
Finally, Burduja et al.~\cite{burdujaAccurateEfficientIntracranial2020} trained a CNN-LSTM model to detect various hemorrhages from brain computed tomography (CT) scans. For each positive slice coming from controversial or difficult scans, they generated Grad-CAM based attribution maps and asked a group of radiologists to classify them as correct, partially correct or incorrect. This classification allowed them to determine patterns for each class of maps, and better understand which characteristics radiologists expected from these maps to be considered as correct and thus useful in practice. In particular, radiologists described maps including any type of hemorrhage as incorrect as soon as some of the hemorrhages were not highlighted, while the model only needed to detect one hemorrhage to correctly classify the slice as pathological.
\subsubsection{Comparison of several interpretability methods}
Papers described in this section used several interpretability methods and compared them in their particular context. However, as the benchmark of interpretability methods is the focus of section~\ref{subsec: which method}, which also includes other types of interpretability than back-propagation, we will only focus here on the conclusions drawn from the attribution maps.
Dyrba et al.~\cite{dyrbaComparisonCNNVisualization2020} compared DeconvNet, guided back-propagation, deep Taylor decomposition, gradient$\odot$input, LRP (with various rules) and Grad-CAM methods for classification of Alzheimer's disease, mild cognitive impairment and normal cognition statuses. In accordance with the literature, they obtained the highest attention for the hippocampus for both prodromal and demented patients.
B\"{o}hle et al.~\cite{bohleLayerwiseRelevancePropagation2019} compared two methods, LRP with $\beta$-rule and guided back-propagation, for Alzheimer's disease status classification. They found that LRP attribution maps highlighted the individual differences between patients, and could thus be used as a tool for clinical guidance.
\subsection{Perturbation methods applied to neuroimaging}
\label{sec:application_peturbation}
The standard perturbation method has been widely used in the study of Alzheimer's disease \citep{baeTransferLearningPredicting2019, riekeVisualizingConvolutionalNetworks2018, nigriExplainableDeepCNNs2020, eitelTestingRobustnessAttribution2019} and related symptoms (amyloid-$\beta$ pathology) \citep{tangInterpretableClassificationAlzheimer2019}. However, most of the time, authors do not train their model with perturbed images. Hence, to generate explanation maps, the perturbation method uses images outside the distribution of the training set, which may call into question the relevance of the predictions and thus the reliability of the attribution maps.
\subsubsection{Variants of the perturbation method tailored to neuroimaging}
Several variations of the perturbation method have been developed to adapt to neuroimaging data.
The most common variation in brain imaging is the brain area perturbation method, which consists in perturbing entire brain regions according to a given brain atlas, as done in \cite{riekeVisualizingConvolutionalNetworks2018, abrolDeepResidualLearning2020, ohClassificationVisualizationAlzheimer2019}.
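A brain area perturbation step can be sketched as follows (toy Python; `model`, the image representation and the atlas are hypothetical stand-ins for a real CNN, a 3D volume and an actual anatomical atlas):

```python
def region_importance(model, image, atlas, baseline=0.0):
    """Occlude each atlas region in turn and record the drop in the
    model's score; larger drops indicate more important regions.

    `model` is any callable mapping an image (dict voxel -> value) to a
    scalar score; `atlas` maps a region name to the set of voxels it
    contains.
    """
    reference = model(image)
    importance = {}
    for region, voxels in atlas.items():
        perturbed = dict(image)
        for v in voxels:
            perturbed[v] = baseline  # replace the region by a baseline value
        importance[region] = reference - model(perturbed)
    return importance

# Toy example: the "model" only looks at voxels 0 and 1.
toy_model = lambda img: img[0] + img[1]
image = {0: 1.0, 1: 2.0, 2: 5.0}
atlas = {"hippocampus": {0, 1}, "occipital": {2}}
scores = region_importance(toy_model, image, atlas)
print(scores)  # {'hippocampus': 3.0, 'occipital': 0.0}
```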
In their study of Alzheimer's disease, Abrol et al.~\cite{abrolDeepResidualLearning2020} obtained high values in their attribution maps for the usually discriminant brain regions, such as the hippocampus, the amygdala, the inferior and superior temporal gyri, and the fusiform gyrus. Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018} also obtained results in accordance with the medical literature, and noted that the brain area perturbation method led to a less scattered attribution map than the standard method (Figure~\ref{fig: rieke_perturbation}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.7\textwidth]{figures/section3/perturbation/rieke_perturbation.png}
\caption[Mean attribution maps obtained on demented patients obtained with the standard and the brain area perturbation methods.]{Mean attribution maps obtained on demented patients. The first row corresponds to the standard and the second one to the brain area perturbation method. \\
Reprinted by permission from Springer Nature Customer Service Centre GmbH: Springer Nature, MLCN 2018, DLF 2018, IMIMIC 2018: Understanding and Interpreting Machine Learning in Medical Image Computing Applications, \citep{riekeVisualizingConvolutionalNetworks2018}, 2018.}
\label{fig: rieke_perturbation}
\end{figure}
Oh et al.~\cite{ohClassificationVisualizationAlzheimer2019} used the method to compare the attribution maps of two different tasks: (1) demented patients vs cognitively normal participants and (2) stable vs progressive mild cognitive impairment. They noted that the regions targeted for the first task were shared with the second one (medial temporal lobe), but that some regions were specific to the second task (parts of the parietal lobe).
Guti\'{e}rrez-Becker and Wachinger~\cite{gutierrez-beckerDeepMultistructuralShape2018} adapted the standard perturbation method to a network that classified clouds of points extracted from neuroanatomical shapes of brain regions (e.g. left hippocampus) between different states of Alzheimer's disease. For the perturbation step, the authors set to $0$ the coordinates of a given point $x$ and the ones of its neighbors to then assess the relevance of the point $x$. This method allows easily generating and visualizing a 3D attribution map of the shapes under study.
\subsubsection{Advanced perturbation methods}
More advanced perturbation based methods have also been used in the literature. Nigri et al.~\cite{nigriExplainableDeepCNNs2020} compared a classical perturbation method to a swap test. The swap test replaces the classical perturbation step by a swapping step where patches are exchanged between the input brain image and a reference image chosen according to the model prediction. This exchange is possible as brain images were registered and thus brain regions are positioned in roughly the same location in each image.
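The swapping step can be sketched as follows (a toy version in which registered images are represented as flat lists of patch values; the reference selection strategy of the original paper is not reproduced):

```python
def swap_importance(model, image, reference, patch_indices):
    """For each patch, swap its content between the input image and a
    reference image and measure the change in prediction. Because the
    images are registered, the same indices correspond to the same
    anatomical location in both images.
    """
    base = model(image)
    importance = []
    for idx in patch_indices:
        swapped = list(image)
        swapped[idx] = reference[idx]  # take the reference patch instead
        importance.append(base - model(swapped))
    return importance

toy_model = lambda img: sum(img[:2])  # only the first two patches matter
image = [4.0, 1.0, 7.0]
reference = [0.0, 0.0, 0.0]           # e.g. from a control participant
imp = swap_importance(toy_model, image, reference, [0, 1, 2])
print(imp)  # [4.0, 1.0, 0.0]
```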
Finally, Thibeau-Sutre et al.~\cite{thibeau-sutreVisualizationApproachAssess2020} used the optimized version of the perturbation method to assess the robustness of CNNs in identifying regions of interest for Alzheimer's disease detection. They applied optimized perturbations to gray matter maps extracted from T1w MR images, and the perturbation method consisted in increasing the value of the voxels to transform patients into controls. This process aimed at simulating gray matter reconstruction to identify the most important regions that needed to be ``de-atrophied'' to be considered again as normal. However, they unveiled a lack of robustness of the CNN: different retrainings led to different attribution maps (shown in Figure~\ref{fig: thibeausutre_occlusion}) even though the performance did not change.
\begin{figure}[!tbh]
\centering
\includegraphics[width=\textwidth]{figures/section3/perturbation/thibeausutre_perturbation.png}
\caption[Attribution maps obtained with the optimized perturbation methods.]{Coronal view of the mean attribution masks on demented patients obtained for five reruns of the same network with the optimized perturbation method. \\
Adapted with permission from Medical Imaging 2020: Image Processing, \citep{thibeau-sutreVisualizationApproachAssess2020}.}
\label{fig: thibeausutre_occlusion}
\end{figure}
\subsection{Distillation methods applied to neuroimaging}
\label{sec:application_distillation}
Distillation methods are less commonly used, but some very interesting use cases can be found in the literature on brain disorders, with methods such as LIME \citep{mageshExplainableMachineLearning2020} or SHAP \citep{ballIndividualVariationUnderlying2020}.
Magesh et al.~\cite{mageshExplainableMachineLearning2020} used LIME to interpret a CNN for Parkinson's disease detection from single-photon emission computed tomography (SPECT) scans. Most of the time the most relevant regions were the putamen and the caudate (which is clinically relevant), and some patients also showed an anomalous increase in dopamine activity in nearby areas, which is a characteristic feature of late-stage Parkinson's disease. The authors did not specify how they extracted the ``super-pixels'' necessary to the application of the method, though it would have been interesting to consider neuroanatomical regions instead of creating the voxel groups with an agnostic method.
Ball et al.~\cite{ballIndividualVariationUnderlying2020} used SHAP to obtain explanations at the individual level from three different models trained to predict participants' age from regional cortical thicknesses and areas: a regularised linear model, Gaussian process regression and XGBoost (Figure~\ref{fig: ball_SHAP}). The authors exhibited a set of regions driving predictions for all models, and showed that regional attention was highly correlated on average with weights of the regularised linear model. However, they showed that while being consistent across models and training folds, explanations of SHAP at the individual level were generally not correlated with feature importance obtained from the weight analysis of the regularised linear model.
The authors also exemplified that the global contribution of a region to the final prediction error (``brain age delta''), even with a high SHAP value, was in general small, which indicated that this error was best explained by changes spread across several regions~\cite{ballIndividualVariationUnderlying2020}.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.7\textwidth]{figures/section3/distillation/ball_SHAP.png}
\caption[Mean absolute SHAP values averaged across all subjects for regional thickness and area.]{Mean absolute feature importance (SHAP values) averaged across all subjects for XGBoost on regional thicknesses (red) and areas (green).\\
Adapted from \citep{ballIndividualVariationUnderlying2020} (CC BY 4.0).}
\label{fig: ball_SHAP}
\end{figure}
\subsection{Intrinsic methods applied to neuroimaging}
\label{sec:application_intrinsic}
\subsubsection{Attention modules}
Attention modules have been increasingly used in the past couple of years, as they often allow a boost in performance while being rather easy to implement and interpret.
To diagnose various brain diseases from brain CT images, Fu et al.~\cite{fuAttentionbasedFullSlice2021} built a model integrating a ``two step attention'' mechanism that selects both the most important slices and the most important pixels in each slice. The authors then leveraged these attention modules to retrieve the five most suspicious slices and highlight the areas with the most significant attention.
In their study of Alzheimer's disease, Jin et al.~\cite{jinGeneralizableReproducibleNeuroscientifically2020} used a 3D attention module to capture the most discriminant brain regions used for Alzheimer's disease diagnosis. As shown in Figure~\ref{fig: jin_attention}, they obtained significant correlations between the attention patterns of two independent databases, as well as between their regional attention scores, which indicated a strong reproducibility of the results.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.9\textwidth]{figures/section3/intrinsic/jin_attention.png}
\caption[Attribution maps generated by an attention mechanism module.]{Attribution maps (left: in-house database, right: ADNI database) generated by an attention mechanism module, indicating the discriminant power of various brain regions for Alzheimer's disease diagnosis. \\
Adapted from \citep{jinGeneralizableReproducibleNeuroscientifically2020} (CC BY 4.0). }
\label{fig: jin_attention}
\end{figure}
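The interpretable core of such attention modules is attention-weighted pooling over regions; a minimal sketch (real modules learn the attention logits with convolutional layers, not fixed scores as here):

```python
import math

def attention_pool(features, scores):
    """Soft attention over regions: the scores are turned into weights
    by a softmax, and the output is the weighted sum of regional
    features. The weights themselves are the interpretable by-product:
    they indicate which regions drive the prediction.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    pooled = sum(w * f for w, f in zip(weights, features))
    return pooled, weights

features = [0.2, 0.9, 0.1]  # one scalar feature per brain region
scores = [0.0, 3.0, 0.0]    # learned attention logits (hypothetical)
pooled, weights = attention_pool(features, scores)
print(weights[1] == max(weights))  # True: region 1 gets most attention
```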
\subsubsection{Modular transparency}
Modular transparency has often been used in brain imaging analysis. A possible practice consists in first generating a disease probability map with a black-box model, before feeding this map to a classifier to generate the final prediction, as done in \citep{qiuDevelopmentValidationInterpretable2020, leeInterpretableAlzheimerDisease2019}.
Qiu et al.~\cite{qiuDevelopmentValidationInterpretable2020} used a convolutional network to generate an attribution map from patches of the brain, highlighting brain regions associated with Alzheimer's disease diagnosis (see Figure~\ref{fig: qiu_modular}).
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.9\textwidth]{figures/section3/intrinsic/qiu_modular.png}
\caption[Example of modular transparency using random patch learning.]{Randomly selected samples of T1-weighted full MRI volumes are used as input to learn the Alzheimer's disease status at the individual level (Step 1). The application of the model to whole images leads to the generation of participant-specific disease probability maps of the brain (Step 2). \\
Adapted from Brain: A Journal of Neurology, 143, \citep{qiuDevelopmentValidationInterpretable2020}, 2020, with permission of Oxford University Press. }
\label{fig: qiu_modular}
\end{figure}
Lee et al.~\cite{leeInterpretableAlzheimerDisease2019} first parcellated gray matter density maps into 93 regions. For each of these regions, several deep neural networks were trained on randomly selected voxels and their outputs were averaged to obtain a mean regional disease probability. Then, by concatenating these regional probabilities, they generated a region-wise disease probability map of the brain, which was further used to perform Alzheimer's disease detection.
The approach of Ba et al.~\cite{baMultipleObjectRecognition2015} was also applied to Alzheimer's disease detection~\cite{woodNEURODRAM3DRecurrent2019}. Though that work is still a preprint, the idea is interesting as it aims at reproducing the way a radiologist looks at an MR image. The main difference with \cite{baMultipleObjectRecognition2015} is the initialization, as the context network does not take as input the whole image but clinical data of the participant. Then the framework browses the image in the same way as in the original paper: a patch is processed by a recurrent neural network and, from its internal state, the glimpse network learns which patch should be looked at next. After a fixed number of iterations, the internal state of the recurrent neural network is processed by a classification network that gives the final outcome. The whole system is interpretable as the trajectory of the locations (illustrated in Figure~\ref{fig: wood_modular}) processed by the framework allows understanding which regions are more important for the diagnosis. However, this framework may have a high dependency on clinical data: as the initialization depends on scores used to diagnose Alzheimer's disease, the classification network may learn to classify based on the initialization only, and most of the trajectory may be negligible to assess the correct label.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.9\textwidth]{figures/section3/intrinsic/wood_modular.png}
\caption[Trajectory taken by the framework trained based on the work of Ba et al.~\citep{baMultipleObjectRecognition2015}.]{Trajectory taken by the framework for a participant from the ADNI test set. A bounding box around the first location attended to is included to indicate the approximate size of the glimpse that the recurrent neural network receives; this is the same for all subsequent locations. \\
Adapted from \citep{woodNEURODRAM3DRecurrent2019}. Permission to reuse was kindly granted by the authors.}
\label{fig: wood_modular}
\end{figure}
Another framework, the DaniNet, proposed by Ravi et al.~\cite{raviDegenerativeAdversarialNeuroimage2022}, is composed of multiple networks, each with a defined function, as illustrated in Figure~\ref{fig: ravi_modular}.
\begin{itemize}
\item The conditional deep autoencoder (in orange) learns to reduce the size of the slice $x$ to a latent variable $Z$ (encoder part), and then to reconstruct the original image based on $Z$ and two additional variables: the diagnosis and age (generator part). Its performance is evaluated thanks to the reconstruction loss $L^{rec}$.
\item Discriminator networks (in yellow) either force the encoder to take temporal progression into account ($D_z$) or try to determine whether the outputs of the generator are real or generated images ($D_b$).
\item Biological constraints (in grey) force the previously generated image of the same participant to be less atrophied than the next one (voxel loss) and learn to find the diagnosis from regions of the generated images (regional loss).
\item Profile weight functions (in blue) aim at finding appropriate weights for each loss to compute the total loss.
\end{itemize}
The assembly of all these components allows learning a longitudinal model that characterizes the progression of the atrophy of each region of the brain. This atrophy evolution can then be visualized thanks to a neurodegeneration simulation generated by the trained model by sampling missing intermediate values.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.8\textwidth]{figures/section3/intrinsic/ravi_modular.png}
\caption[Pipeline used for training the DaniNet framework.]{Pipeline used for training the proposed DaniNet framework that aims to learn a longitudinal model of the progression of Alzheimer's disease. \\
Adapted from \citep{raviDegenerativeAdversarialNeuroimage2022} (CC BY 4.0).}
\label{fig: ravi_modular}
\end{figure}
\subsection{Benchmarks conducted in the literature}
\label{subsec:benchmarks}
This section describes studies that compared several interpretability methods. We separated evaluations based on metrics from those which are purely qualitative. Indeed, even if the interpretability metrics are not mature yet, it is essential to try to measure quantitatively the difference between methods rather than to only rely on human perception, which may be biased.
\subsubsection{Quantitative evaluations}
Eitel and Ritter~\cite{eitelTestingRobustnessAttribution2019} tested the robustness of four methods: standard perturbation, gradient$\odot$input, guided back-propagation and LRP. To evaluate these methods, the authors trained the same model ten times with random initializations and generated attribution maps for each of the ten runs. For each method, they exhibited significant differences between the averaged true positive/negative attribution maps of the ten runs. To quantify this variance, they computed the $\ell_2$-norm between the attribution maps, and determined for each model the brain regions with the highest attribution. They concluded that LRP and guided back-propagation were the most consistent methods, both in terms of distance between attribution maps and most relevant brain regions. However, this study makes a strong assumption: to draw these conclusions, the network should provide stable interpretations across retrainings. Unfortunately, Thibeau-Sutre et al.~\cite{thibeau-sutreVisualizationApproachAssess2020} showed that the robustness of the interpretability method and that of the network should be studied separately, as their network retraining was not robust. Indeed, they first showed that the interpretability method they chose (optimized perturbation) was robust according to different criteria, then observed that network retraining led to different attribution maps. The robustness of an interpretability method thus cannot be assessed from the protocol described in~\cite{eitelTestingRobustnessAttribution2019}. Moreover, the fact that guided back-propagation is one of the most stable methods matches the results of \cite{adebayoSanityChecksSaliency2018}, who observed that guided back-propagation always gave the same result independently of the weights learned by the network (see Section~\ref{sec:theoretical_limitations}).
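The variance quantification used in this kind of protocol boils down to pairwise $\ell_2$ distances between the attribution maps of independently retrained models; a minimal sketch:

```python
import math
from itertools import combinations

def l2_distance(map_a, map_b):
    """Euclidean (l2) distance between two flattened attribution maps."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(map_a, map_b)))

def retraining_variability(maps):
    """Mean pairwise distance across the maps produced by several
    retrainings of the same architecture; a large value signals that
    either the method or the training procedure is unstable."""
    pairs = list(combinations(maps, 2))
    return sum(l2_distance(a, b) for a, b in pairs) / len(pairs)

# Attribution maps (flattened) from three retrainings of the same model
runs = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(round(retraining_variability(runs), 3))  # 0.943
```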
B\"{o}hle et al.~\cite{bohleLayerwiseRelevancePropagation2019} measured the benefit of LRP with $\beta$-rule compared to guided back-propagation by comparing the intensities of the mean attribution map of demented patients and the one of cognitively normal controls. They concluded that LRP allowed a stronger distinction between these two classes than guided back-propagation, as there was a greater difference between the mean maps for LRP. Moreover, they found a stronger correlation between the intensities of the LRP attribution map in the hippocampus and the hippocampal volume than for guided back-propagation. But as \cite{adebayoSanityChecksSaliency2018} demonstrated that guided back-propagation has serious flaws, it does not allow drawing strong conclusions.
Nigri et al.~\cite{nigriExplainableDeepCNNs2020} compared the standard perturbation method to a swap test (see Section~\ref{sec:application_peturbation}) using two properties: continuity and sensitivity. The continuity property is verified if two similar input images have similar explanations. The sensitivity property states that the most salient areas in an explanation map should have the greatest impact on the prediction when removed. The authors carried out experiments with several types of models, and both properties were consistently verified for the swap test, while the standard perturbation method showed a significant absence of continuity and no conclusive fidelity values~\cite{nigriExplainableDeepCNNs2020}.
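The sensitivity property can be probed with a simple deletion test, sketched below (assuming that ``removing'' a feature means setting it to a baseline value):

```python
def deletion_curve(model, image, attribution, baseline=0.0):
    """Remove features from most to least salient and record the model
    score after each removal; a rapid drop supports the sensitivity
    property of the attribution map."""
    order = sorted(range(len(image)), key=lambda i: attribution[i], reverse=True)
    current = list(image)
    scores = [model(current)]
    for idx in order:
        current[idx] = baseline
        scores.append(model(current))
    return scores

toy_model = lambda img: img[0] * 2 + img[1]  # feature 0 matters most
image = [1.0, 1.0, 1.0]
attribution = [2.0, 1.0, 0.0]                # a faithful map for this model
curve = deletion_curve(toy_model, image, attribution)
print(curve)  # [3.0, 1.0, 0.0, 0.0]
```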
Finally Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018} compared four visualization methods: standard back-propagation, guided back-propagation, standard perturbation and brain area perturbation. They computed the Euclidean distance between the mean attribution maps of the same class for two different methods and observed that both gradient methods were close, whereas brain area perturbation was different from all others. They concluded that as interpretability methods lead to different attribution maps, one should compare the results of available methods and not trust only one attribution map.
\subsubsection{Qualitative evaluations}
Some works compared interpretability methods using a purely qualitative evaluation.
First, Eitel et al.~\cite{eitelUncoveringConvolutionalNeural2019} generated attribution maps using the LRP and gradient$\odot$input methods and obtained very similar results. This could be expected as it was shown that there is a strong link between LRP and gradient$\odot$input (see Section~\ref{subsubsec: Relevance BP}).
Dyrba et al.~\cite{dyrbaComparisonCNNVisualization2020} compared DeconvNet, guided back-propagation, deep Taylor decomposition, gradient$\odot$input, LRP (with various rules) and Grad-CAM. The different methods roughly exhibited the same highlighted regions, but with a significant variability in focus, scatter and smoothness, especially for the Grad-CAM method. These conclusions were derived from a visual analysis. According to the authors, LRP and deep Taylor decomposition delivered the most promising results with a highest focus and less scatter~\cite{dyrbaComparisonCNNVisualization2020}.
Tang et al.~\cite{tangInterpretableClassificationAlzheimer2019} compared two interpretability methods that seemed to have different properties: guided Grad-CAM would provide a fine-grained view of feature salience, whereas standard perturbation highlights the interplay of features among classes. A similar conclusion was drawn by Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018}.
\subsubsection{Conclusions from the benchmarks}
The most extensively compared method is LRP, and each time it was found to perform best. However, its equivalence with gradient$\odot$input for networks using ReLU activations still calls into question the usefulness of the method, as gradient$\odot$input is much easier to implement. Moreover, the studies reaching this conclusion are not very insightful: \cite{eitelTestingRobustnessAttribution2019} may suffer from methodological biases, \cite{bohleLayerwiseRelevancePropagation2019} compared LRP only to guided back-propagation, which was shown to be irrelevant \citep{adebayoSanityChecksSaliency2018}, and \cite{dyrbaComparisonCNNVisualization2020} only performed a qualitative assessment.
As proposed in conclusion by Rieke et al.~\cite{riekeVisualizingConvolutionalNetworks2018}, a good way to assess the quality of interpretability methods could be to produce some form of ground truth for the attribution maps, for example by implementing simulation models that control for the level of separability or location of differences.
\section{Limitations and recommendations}
\label{sec:limitations}
Many methods have been proposed for the interpretation of deep learning models. The field is not mature yet and none of them has become a standard. Moreover, a large panel of studies has been applied to neuroimaging data, but the value of the results obtained from the interpretability methods is often still not clear. Furthermore, many applications suffer from methodological issues, making their results (partly) irrelevant. In spite of this, we believe that using interpretability methods is highly useful, in particular to spot cases where the model exploits biases in the dataset.
\subsection{Limitations of the methods}
\label{sec:theoretical_limitations}
It is often not clear whether interpretability methods really highlight features relevant to the algorithm they interpret. For instance, Adebayo et al.~\cite{adebayoSanityChecksSaliency2018} showed that the attribution maps produced by some interpretability methods (guided back-propagation and guided Grad-CAM) may not be correlated at all with the weights learned by the network during its training procedure. They demonstrated this with a simple test called ``cascading randomization''. In this test, the weights of a network trained on natural images are randomized layer by layer, until the network is fully randomized. At each step, an attribution map is produced with a set of interpretability methods and compared to the original maps (those produced without randomization). In the case of guided back-propagation and guided Grad-CAM, all attribution maps were identical, which means that the results of these methods were independent of the training procedure.
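The cascading randomization test can be sketched on a toy deep linear model, for which the input gradient is simply the product of the layer weights (a hypothetical minimal setting, not the networks used in the original paper):

```python
import random

def attribution(weights, n_inputs):
    """Input gradient of a deep linear model y = w_L * ... * w_1 * sum(x):
    each input's attribution is the product of all layer weights."""
    prod = 1.0
    for w in weights:
        prod *= w
    return [prod] * n_inputs

def cascading_randomization(weights, n_inputs, seed=0):
    """Randomize the layers one by one (starting from the last) and
    return the attribution map obtained at each stage; a sound method
    should produce maps that diverge from the original as more layers
    are randomized."""
    rng = random.Random(seed)
    maps = [attribution(weights, n_inputs)]
    current = list(weights)
    for layer in range(len(current) - 1, -1, -1):
        current[layer] = rng.uniform(-1.0, 1.0)
        maps.append(attribution(current, n_inputs))
    return maps

maps = cascading_randomization([0.5, 2.0], n_inputs=3)
# A weight-dependent method yields a different map once layers are randomized.
print(maps[0] != maps[-1])  # True
```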
Unfortunately, this type of failure does not only affect interpretability methods but also the metrics designed to evaluate their reliability, which makes the problem even more complex. Tomsett et al.~\cite{tomsettSanityChecksSaliency2020} investigated this issue by evaluating interpretability metrics with three properties:
\begin{itemize}
\item \textbf{inter-rater interpretability} assesses whether a metric always ranks different interpretability methods in the same way for different samples in the data set,
\item \textbf{inter-method reliability} checks that the scores given by a metric on each saliency method fluctuate in the same way between images,
\item \textbf{internal consistency} evaluates if different metrics measuring the same property (for example fidelity) produce correlated scores on a set of attribution maps.
\end{itemize}
They concluded that the investigated metrics were not reliable, though it is difficult to know the origin of this unreliability due to the tight coupling of model, interpretability method and metric.
\subsection{Methodological advice}
\label{sec:methodological_advice}
Using interpretability methods is more and more common in medical research. Even though this field is not yet mature and the methods have limitations, we believe that using an interpretability method is usually a good thing because it may spot cases where the model took decisions from irrelevant features. However, there are methodological pitfalls to avoid and good practices to adopt to make a fair and sound analysis of your results.
You should first clearly state in your paper which interpretability method you use as there exist several variants for most of the methods (see section~\ref{sec:section2}), and its parameters should be clearly specified. Implementation details may also be important: for the Grad-CAM method, attribution maps can be computed at various levels in the network; for a perturbation method, the size and the nature of the perturbation greatly influence the result.
The data on which methods are applied should also be made explicit: for a classification task, results may be completely different if samples are true positives or true negatives, or if they are taken from the train or test sets.
Taking a step back from the interpretability method and especially attribution maps is fundamental as they present several limitations~\cite{bohleLayerwiseRelevancePropagation2019}.
First, there is no ground truth for such maps, which are usually visually assessed by authors. Comparing obtained results with the machine learning literature is a good first step, but be aware that you will most of the time find a paper to support your findings, so we suggest relying on established clinical references.
Second, attribution maps are usually sensitive to the interpretability method, its parameters (e.g. $\beta$ for LRP), but also to the final scale used to display maps. A slight change in one of these variables may significantly impact the interpretation.
Third, an attribution map is a way to measure the impact of pixels on the prediction of a given model, but it does not provide underlying reasons (e.g. pathological shape) or explain potential interactions between pixels. A given pixel might have a low attribution when considered on its own, but have a huge impact on the prediction when combined with another.
Fourth, the quality of a map strongly depends on the performance of the associated model. Indeed, low performance models are more likely to use wrong features. However, even in this case, attribution maps may be leveraged, e.g. to determine if the model effectively relies on irrelevant features (such as visual artefacts) or if there are biases in the data set~\cite{lapuschkinAnalyzingClassifiersFisher2016}.
One must also be very careful when trying to establish new medical findings using model interpretations, as we do not always know how the interpretability methods react when applied to correlated features. Thus, even if a feature seems to be of no interest to a model, this does not mean that it is not useful in the study of the disease (for example, a model may not use information from the frontal lobe when diagnosing Alzheimer's disease dementia, but this does not mean that this region is not affected by the disease).
Finally, we suggest implementing different interpretability methods to obtain complementary insights from attribution maps. For instance, using LRP in addition to the standard back-propagation method provides a different type of information, as standard back-propagation gives the sensitivity of the output with respect to the input, while LRP shows the contribution of each input feature to the output. Moreover, implementing several methods allows a quantitative comparison between them using interpretability metrics (see section~\ref{sec:evaluation_metrics}).
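The distinction between sensitivity and contribution can be illustrated with a toy linear model (a purely hypothetical example, not drawn from the reviewed studies): for $f(x)=w\cdot x$, the gradient saliency is simply the weight vector, independent of the input, whereas gradient$\odot$input attributes $w_i x_i$ of the output value to each feature.

```python
# Toy linear model f(x) = w . x: sensitivity vs contribution.
# For a linear model the gradient equals the weights, regardless of x,
# while gradient * input attributes w_i * x_i of the output to feature i.

def gradient_saliency(w, x):
    # Sensitivity of the output to each input feature (here: the weights).
    return list(w)

def gradient_times_input(w, x):
    # Contribution of each input feature to the output value.
    return [wi * xi for wi, xi in zip(w, x)]

w = [2.0, -1.0, 0.5]
x = [1.0, 0.0, 4.0]

sens = gradient_saliency(w, x)        # [2.0, -1.0, 0.5]
contrib = gradient_times_input(w, x)  # feature 2 contributes nothing (x_2 = 0)

# Contributions sum exactly to the model output; sensitivities do not in general.
output = sum(wi * xi for wi, xi in zip(w, x))
```

For deep non-linear models the gradient does depend on the input, but the distinction between "how sensitive is the output to this feature" and "how much did this feature contribute" remains.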
\subsection{Which method should I choose?}
\label{subsec: which method}
We conclude this section with advice on how to choose an interpretability method. Some benchmarks were conducted to assess the properties of interpretability methods compared to others (see Section~\ref{subsec:benchmarks}). Though these are good initiatives, there are still not enough studies (and some of them suffer from methodological flaws) to draw solid conclusions. This is why we give in this section some practical advice to help the reader choose an interpretability method based on more general considerations.
Before implementing an interpretability method, we suggest reviewing the following points to help you choose carefully.
\begin{itemize}
\item \textbf{Implementation complexity}\quad Some methods are more difficult to implement than others, and may require substantial coding efforts. However, many of them have already been implemented in libraries or github repositories (e.g. \cite{uozbulak_pytorch_vis_2021}), so we suggest looking online before trying to re-implement them. This is especially true for model-agnostic methods, such as LIME, SHAP or perturbations, for which no modification of your model is required. For model-specific methods, such as back-propagation ones, the implementation will depend on the model, but if its structure is a common one (e.g. regular CNN with feature extraction followed by a classifier), it is also very likely that an adequate implementation is already available (e.g. Grad-CAM on CNN in \cite{uozbulak_pytorch_vis_2021}).
\item \textbf{Time cost}\quad Computation time greatly differs from one method to another, especially when the input data are large. For instance, perturbing high-dimensional images is computationally expensive; using standard back-propagation would be much faster.
\item \textbf{Method parameters}\quad The number of parameters to set varies between methods, and their choice may greatly influence the result. For instance, the patch size, the step size (distance between two patches) as well as the type of perturbation (e.g. white patches or blurry patches) must be chosen for the standard perturbation method, while the standard back-propagation does not need any parameter. Thus, without prior knowledge on the interpretability results, methods with no or only a few parameters are a good option.
\item \textbf{Literature}\quad Finally, our last piece of advice is to look into the literature to determine the methods that have commonly been used in your domain of study. The popularity of a method does not guarantee its quality (e.g. guided back-propagation~\cite{adebayoSanityChecksSaliency2018}), but it is usually a good first try.
\end{itemize}
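To make the role of these parameters concrete, a minimal occlusion-style perturbation loop might look as follows (a schematic sketch; the toy model and all parameter values are our own illustrative choices): the attribution of a patch is the drop in the model score when that patch is replaced by a baseline value.

```python
import numpy as np

def occlusion_map(model, image, patch=4, step=4, baseline=0.0):
    """Attribution = drop in model score when a patch is replaced by a baseline."""
    h, w = image.shape
    ref = model(image)
    attribution = np.zeros_like(image, dtype=float)
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = baseline
            attribution[i:i + patch, j:j + patch] = ref - model(perturbed)
    return attribution

# Stand-in "model": the score is the mean intensity of the top-left quadrant,
# so only pixels in that quadrant can influence the prediction.
def toy_model(img):
    return float(img[:8, :8].mean())

img = np.ones((16, 16))
amap = occlusion_map(toy_model, img, patch=4, step=4)
# Attribution is positive in the top-left quadrant and zero elsewhere.
```

The patch size, step and baseline directly shape the resulting map, which is exactly why they must be reported.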
To sum up, we suggest that you choose (or at least begin with) an interpretability method that is easy to implement, time efficient, with no parameters (or only a few) to tune and commonly used. In the context of brain image analysis, we suggest using the standard back-propagation or Grad-CAM methods.
Before using a method you do not know well, you should check that other studies have not shown it to be unreliable (as is the case for guided back-propagation and guided Grad-CAM), or that it is not equivalent to another method (for example, LRP on networks with ReLU activation layers is equivalent to gradient$\odot$input).
Regarding interpretability metrics, there is no consensus in the community as the field is not mature yet. General advice would be to use different metrics and compare them against human observers, following for example the methodology described in~\cite{ribeiroWhyShouldTrust2016}.
\section{Conclusion}
Interpretability of machine learning models is an important topic, in particular in the medical field. First, this is a natural need expressed by clinicians who are potential users of medical decision support systems. Moreover, it has been shown on many occasions that models with high performance can actually be using irrelevant features. This is dangerous because it means that they are exploiting biases in the training data sets and thus may dramatically fail when applied to new data sets or deployed in clinical routine.
Interpretability is a very active field of research and many approaches have been proposed. They have been extensively applied in neuroimaging, and very often allowed highlighting clinically relevant regions of the brain that were used by the model. However, comparative benchmarks are not entirely conclusive and it is currently not clear which approach is the most adapted for a given aim. In other words, it is very important to keep in mind that the field of interpretability is not yet mature. It is not yet clear which are the best methods or even if the most widely used approaches will still be considered a standard in the near future.
That being said, we still strongly recommend that a classification or regression model be studied with at least one interpretability method. Indeed, evaluating the performance of the model is not sufficient in itself and the additional use of an interpretation method may allow detecting biases and models that perform well but for bad reasons and thus would not generalize to other settings.
\clearpage
\section*{Appendices}
\renewcommand{\thesubsection}{\Alph{subsection}}
\setcounter{subsection}{0}
\subsection{Short reminder on network training procedure}
\label{appendix:network}
During the training phase, a neural network updates its weights to make a series of inputs match their corresponding target labels:
\begin{enumerate}
\item \textit{Forward pass}\quad The network processes the input image to compute the output value.
\item \textit{Loss computation}\quad The difference between the true labels and the output values is computed according to a criterion (cross-entropy, mean squared error...). This difference is called the loss, and should be as low as possible.
\item \textit{Backward pass}\quad For each learnable parameter of the network, the gradient of the loss with respect to that parameter is computed.
\item \textit{Weight update}\quad Weights are updated according to the gradients and an optimizer rule (stochastic gradient descent, Adam, Adadelta...).
\end{enumerate}
As a network is a composition of functions, the gradients of the loss with respect to the weights of a layer $l$ can be easily obtained from the values of the gradients in the following layers. This way of computing gradients layer per layer is called back-propagation.
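The four steps above can be sketched for a tiny two-layer network written directly in NumPy (a hypothetical, framework-independent example with hand-picked numbers), including the layer-per-layer backward pass:

```python
import numpy as np

# Tiny two-layer network: x -> ReLU(x @ W1) -> (h @ W2) -> scalar output,
# "trained" on a single example with a squared-error loss.
x = np.array([[1.0, -2.0, 0.5]])
y = np.array([[1.0]])
W1 = np.array([[0.5, -0.2],
               [0.1,  0.3],
               [-0.4, 0.8]])
W2 = np.array([[1.0],
               [-0.5]])

def forward(W1, W2):
    h = np.maximum(x @ W1, 0.0)           # 1. forward pass (layer 1, ReLU)
    out = h @ W2                          #    forward pass (layer 2)
    loss = float(((out - y) ** 2).sum())  # 2. loss computation
    return h, out, loss

h, out, loss = forward(W1, W2)

# 3. Backward pass: gradients flow from the loss back through each layer.
d_out = 2.0 * (out - y)        # dLoss/d_out
gW2 = h.T @ d_out              # dLoss/dW2, from the layer-2 inputs
d_h = d_out @ W2.T             # propagate the gradient to the hidden layer
d_h = d_h * (h > 0)            # ReLU gate: no gradient through inactive units
gW1 = x.T @ d_h                # dLoss/dW1

# 4. Weight update (plain gradient descent with a small learning rate).
lr = 0.01
W1_new, W2_new = W1 - lr * gW1, W2 - lr * gW2
```

One update step along these gradients lowers the loss on this example, and each layer's gradient is obtained only from quantities already computed in the layers after it.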
\subsection{Description of the main brain disorders mentioned in the reviewed studies}
\label{appendix:diseases}
This appendix aims at shortly presenting the diseases considered by the studies reviewed in Section~\ref{sec:section3}.
The majority of the studies focused on the classification of Alzheimer's disease (AD), a neurodegenerative disease of the elderly. Its pathological hallmarks are senile plaques formed by amyloid-$\beta$ protein and neurofibrillary tangles that are tau protein aggregates. Both can be measured in vivo using either PET imaging or CSF biomarkers. Several other biomarkers of the disease exist. In particular, atrophy of gray and white matter measured from T1w MRI is often used, even though it is not specific to AD.
There is strong and early atrophy in the hippocampi that can be linked to the memory loss, even though other clinical signs are found and other brain areas are altered.
The following diagnosis statuses are often used:
\begin{itemize}
\item \textbf{AD} refers to demented patients,
\item \textbf{CN} refers to cognitively normal participants,
\item \textbf{MCI} refers to patients with mild cognitive impairment (they have an objective cognitive decline, but it is not yet sufficient to cause a loss of autonomy),
\item \textbf{stable MCI} refers to MCI patients who stayed stable during a defined period (often three years),
\item \textbf{progressive MCI} refers to MCI patients who progressed to Alzheimer's disease during a defined period (often three years).
\end{itemize}
Most of the studies analysed T1w MRI data, except \cite{tangInterpretableClassificationAlzheimer2019} where the patterns of amyloid-$\beta$ in the brain are studied.
Fronto-temporal dementia is another neurodegenerative disease in which the neuronal loss dominates in the frontal and temporal lobes. Behavior and language are the most affected cognitive functions.
Parkinson's disease is also a neurodegenerative disease. It primarily affects dopaminergic neurons in the substantia nigra. A commonly used neuroimaging technique to detect this loss of dopaminergic neurons is SPECT, as it uses a ligand that binds to dopamine transporters. Patients are affected by motor symptoms such as tremor, slowed movements and gait disorder, but also by sleep disorders, depression and other non-motor symptoms.
Multiple sclerosis is a demyelinating disease with a neurodegenerative component affecting younger people (it begins between the ages of 20 and 50). It causes demyelination of the white matter in the brain (brain stem, basal ganglia, tracts near the ventricles), optic nerve and spinal cord. This demyelination results in autonomic, visual, motor and sensory problems.
Intracranial hemorrhage may result from a physical trauma or nontraumatic causes such as a ruptured aneurysm. Different subtypes exist depending on the location of the hemorrhage.
Autism is a spectrum of neurodevelopmental disorders affecting social interaction and communication. Diagnosis is based on clinical signs (behavior), and the brain patterns that may exist are not yet reliably described as they overlap with those of the neurotypical population.
Some brain characteristics that may be related to brain disorders and detected in CT scans were considered in the data set CQ500:
\begin{itemize}
\item \textbf{Midline Shift} is a shift of the center of the brain past the center of the skull.
\item \textbf{Mass Effect} is caused by the presence of an intracranial lesion (for example a tumor) that is compressing nearby tissues.
\item \textbf{Calvarial Fractures} are fractures of the skull.
\end{itemize}
Finally, one study \citep{ballIndividualVariationUnderlying2020} learned to predict the age of cognitively normal participants. Such an algorithm can help in diagnosing brain disorders, as affected patients may exhibit a predicted brain age greater than their chronological age, indicating that they fall outside the normal distribution.
\clearpage
\section*{Acknowledgments}
The research leading to these results has received funding from the French government under management of Agence Nationale de la Recherche as part of the ``Investissements d'avenir'' program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute) and reference ANR-10-IAIHU-06 (Agence Nationale de la Recherche-10-IA Institut Hospitalo-Universitaire-6).
\clearpage
\bibliographystyle{spbasicsort}
\renewcommand{\bibsection}{\subsection*{\refname}}
\renewcommand{\bibpreamble}{\scriptsize \begin{multicols}{2}}
\renewcommand{\bibpostamble}{\end{multicols}}
\mdfsetup{skipabove=\topskip, skipbelow=\topskip}
\newcounter{nicebox}
\newenvironment{nicebox}[1][]{%
\refstepcounter{nicebox}%
\ifstrempty{#1}%
{\mdfsetup{%
frametitle={%
\tikz[baseline=(current bounding box.east),outer sep=0pt]
\node[anchor=east,rectangle,fill=blue!20]
{\strut Theorem~\thetheo};}}
}%
{\mdfsetup{%
frametitle={%
\tikz[baseline=(current bounding box.east),outer sep=0pt]
\node[anchor=east,rectangle,fill=blue!20]
{\strut Box~\thenicebox:~#1};}}%
}%
\mdfsetup{innertopmargin=10pt,linecolor=blue!20, linewidth=2pt,topline=true, frametitleaboveskip=\dimexpr-\ht\strutbox\relax,}
\begin{mdframed}[]\relax%
}{\end{mdframed}}
\newfloat{floatbox}{thp}{lop}
\floatname{floatbox}{Box}
\newcommand{Box}{Box}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathAlphabet{\mathsfit}{\encodingdefault}{\sfdefault}{m}{sl}
\SetMathAlphabet{\mathsfit}{bold}{\encodingdefault}{\sfdefault}{bx}{n} |
\section{Introduction}
The basic mechanism underlying the activity of active galactic nuclei (AGN) is the accretion of matter onto the central supermassive black holes (SMBHs) of mass $M_{\rm BH}\sim10^{5}-10^{10} M_{\rm \odot}$. AGN are the most luminous ($10^{41}-10^{47}$erg~s$^{-1}$) persistent sources of electromagnetic radiation in the universe and have long been of central interest for several reasons. They are considered the only probes of the changing demographics and accretion history of SMBHs over cosmic time. The evolution of galaxies is closely linked with the growth and energy output of SMBHs (e.g. \citealt{kh13}). The feedback from AGN can regulate the star formation in their host galaxies and hence plays a key role in galaxy evolution (see e.g. \citealt{fab12}).
AGN emit over the entire electromagnetic spectrum, from radio to gamma-rays, with each waveband allowing us to probe different aspects of the physics of accretion onto the central SMBH. The broadband X-ray spectrum of a large fraction of AGN generally consists of the following main components: a primary power-law continuum, a soft X-ray excess emission below $\sim2$\keV{}, iron (Fe)~K emission complex at around 6$-$7\keV{}, Compton hump above $10$\keV{} and complex absorption. The power-law continuum is thought to be produced by inverse-Compton scattering of lower energy optical/UV seed photons from the accretion disc in a corona consisting of hot electrons \citep{st80,ha91,ha93}. The X-ray continuum emission from the hot corona is bent down onto the accretion disc due to strong gravity and gives rise to X-ray reflection features by the processes of Compton scattering, photoelectric absorption, fluorescent line emission and bremsstrahlung \citep{rf05}. The most notable reflection feature is the Fe~K complex in the 6$-$7\keV{} energy range. The reflection spectrum is blurred by the combination of Doppler shifts, relativistic beaming and gravitational redshifts in the strong gravitational field very close to the black hole \citep{fa89,la91}. Moreover, we observe absorption features in the X-ray spectrum which can be caused by the presence of absorbing clouds along the line-of-sight or wind launched from the surface of the accretion disc (e.g. \citealt{po03,to11,re14,par17,pin18}).
The broad Fe~K$_{\alpha}$ emission line and soft X-ray excess emission are considered direct probes of the innermost accretion disc and black hole spin. However, the physical origin of the soft excess (SE) emission in many AGN remains highly debated (e.g. \citealt{cr06,do12}). The SE can be modelled physically by thermal Comptonization of the optical/UV seed photons in an optically thick, warm corona (e.g. \citealt{de07,lo12,po18}) and/or relativistically broadened lines from the inner ionized accretion disc (e.g. \citealt{cr06,fa09,na11,ga14}). Recently, \citet{ga16} have attempted to solve this issue by providing a new reflection model where the density of the disc atmosphere is a free parameter varying over the range $n_{\rm e}=10^{15}-10^{19}$~cm$^{\rm -3}$. The main effect of high density on the reflection model occurs below 3\keV{} and is very important for powerful coronae and low-mass, highly-accreting black holes. The high-density relativistic reflection model has been successfully applied to one black-hole binary, Cygnus~X-1 \citep{tom18}, and one AGN, IRAS~13224--3809 \citep{ji18}.
AGN are variable in all wavebands over timescales from a few seconds to years, depending upon the physical processes they are governed by. The X-ray emission from AGN is ubiquitous and shows strong variability which depends on energy and flux of the source (e.g. \citealt{mark07,va11,al13,par15,ma17,al18}). This strong variability implies that the X-ray emission originates in the inner regions of the central engine. Despite many efforts to understand the variable AGN emission, the exact origin of the energy-dependent variability of AGN is not clearly understood because of the presence of multiple emission components and the interplay between them. There are a few approaches that can probe the nature and origin of energy-dependent variability in AGN. One compelling approach is to measure the time-lag between different energy bands of X-ray emission (e.g. \citealt{pa01,mc04,ar06,ams08}). The measured time lag could be both frequency and energy dependent (e.g. \citealt{dm11,ka14}). Depending on the source geometry and emission mechanism, the time lags have positive and/or negative values. The positive or hard lag refers to the delayed hard photon variations relative to the soft photons and is detected at relatively low frequencies. The hard lag was first seen in X-ray binaries (e.g. \citealt{mi88,no99,ko01}). The origin of the positive or hard lag is explained in the framework of the viscous propagating-fluctuation scenario \citep{ko01,au06,ut11,hogg16}. The negative or soft time-lag (soft photon variations are delayed with respect to the hard photons) is usually observed at higher frequencies. The soft lag was first discovered in the low-mass, bright AGN 1H~0707--495 \citep{fa09} and interpreted as a sign of reverberation close to the black hole (e.g. \citealt{zo10,em11,zo12,ca13,ka14}). An alternative explanation for the soft lag is reflection from distant clouds distributed along the line of sight or from the accretion disc wind \citep{mi10}.
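These lags are commonly measured from the phase of the cross spectrum between two energy bands; a minimal sketch with synthetic light curves (not real data) is:

```python
import numpy as np

def phase_lag(soft, hard, dt):
    """Time lag of `hard` relative to `soft` at each Fourier frequency,
    from the phase of the cross spectrum conj(S) * H.
    Positive values mean the hard band lags the soft band."""
    S, H = np.fft.rfft(soft), np.fft.rfft(hard)
    freq = np.fft.rfftfreq(len(soft), d=dt)
    cross = np.conj(S) * H
    with np.errstate(divide="ignore", invalid="ignore"):
        lag = -np.angle(cross) / (2.0 * np.pi * freq)
    return freq, lag

# Synthetic test: the "hard" band is the "soft" band delayed by 5 time bins.
dt = 10.0                              # seconds per bin
n = 1024
t = np.arange(n)
soft = np.sin(2.0 * np.pi * t / n)     # one full cycle, no spectral leakage
hard = np.roll(soft, 5)                # hard lags soft by 5 bins = 50 s
freq, lag = phase_lag(soft, hard, dt)
# lag[1] recovers ~ +50 s at the fundamental frequency
```

Real analyses average the cross spectrum over light-curve segments and frequency bins before taking the phase; this sketch keeps only the core sign convention.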
Another important perspective to probe the origin of energy-dependent variability is to examine the fractional root-mean-squared (rms) variability amplitude as a function of energy, the so-called fractional rms spectrum \citep{ed02,va03}. The modelling of the fractional rms spectra can shed light on the variable emission mechanisms and has been performed in a handful of AGN (e.g. MCG--6-30-15: \citealt{mi07}, 1H~0707--495: \citealt{fa12}, RX~J1633.3+4719: \citealt{ma16}, PG~1404+226: \citealt{md17}, Ark~120: \citealt{ma17}). This tool acts as a bridge between the energy spectrum and the observed variability and can effectively identify the constant and variable emission components present in the observed energy spectrum. It provides an orthogonal way to probe the variable components and the causal connection between them. Moreover, the Fourier frequency-resolved fractional rms spectrum allows us to understand both frequency and energy dependence of variability and hence different physical processes occurring on various timescales.
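Concretely, the fractional rms in a given energy band follows the standard excess-variance prescription (e.g. \citealt{va03}); a minimal sketch with a synthetic, noise-free light curve is:

```python
import numpy as np

def fractional_rms(flux, err):
    """Fractional rms variability amplitude
    F_var = sqrt((S^2 - <err^2>)) / <x>,
    where S^2 is the sample variance of the light curve and <err^2>
    the mean squared measurement error."""
    mean = flux.mean()
    s2 = flux.var(ddof=1)
    excess = s2 - np.mean(err ** 2)
    return np.sqrt(excess) / mean if excess > 0 else 0.0

# Synthetic light curve: unit-amplitude sinusoid around a mean of 10, no noise.
n = 1000
t = np.arange(n)
flux = 10.0 + np.sin(2.0 * np.pi * t / n)
err = np.zeros(n)
fvar = fractional_rms(flux, err)
# Expected: sqrt(0.5) / 10 ~ 0.0707 (the variance of a unit sinusoid is 1/2)
```

Computing this quantity per energy bin yields the fractional rms spectrum; with measurement errors included, the subtraction of $\langle\sigma_{\rm err}^{2}\rangle$ removes the noise contribution.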
Here we perform the broadband (0.3$-$50\keV{}) spectral and timing studies of the highly-accreting AGN Mrk~1044 using data from \xmm{}, \swift{} and \nustar{}. We explore the new high-density relativistic reflection model as the origin of both soft and hard X-ray excess emission as well as the underlying variability mechanisms in the source. Mrk~1044 is a radio-quiet narrow-line Seyfert~1 galaxy at redshift $z=0.016$. The central SMBH mass of Mrk~1044 obtained from the H$_\beta$ reverberation mapping is $3\times10^6 M_{\rm \odot}$ \citep{wa01,du15}. The dimensionless mass accretion rate as estimated by \citet{du15} using the standard thin disc equation is $\dot{m}=\frac{\dot{M}c^{2}}{L_{\rm E}}=20.1\left(\frac{l_{44}}{\cos i}\right)^{3/2}\left(\frac{M_{BH}}{10^{7}M_{\odot}}\right)^{-2}=16.6^{+25.1}_{-10.1}$, where $L_{\rm E}$ is the Eddington luminosity, $M_{BH}$ is the SMBH mass, $i$ is the disc inclination angle, and $l_{44}$ is the AGN continuum luminosity at the rest-frame wavelength of $5100\textrm{\AA}$, $L_{5100}$, in units of $10^{44}$~erg~s$^{-1}$.
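For reference, the scaling of this accretion-rate estimate can be evaluated numerically; the value of $l_{44}/\cos i$ used below is simply back-solved to reproduce the quoted best value, not an observed quantity:

```python
def mdot(l44_over_cosi, mbh_msun):
    """Dimensionless accretion rate of Du et al. (2015):
    mdot = 20.1 * (l44 / cos i)^(3/2) * (M_BH / 1e7 Msun)^(-2)."""
    return 20.1 * l44_over_cosi ** 1.5 * (mbh_msun / 1e7) ** -2

# With M_BH = 3e6 Msun (Mrk 1044), l44/cos i ~ 0.177 reproduces mdot ~ 16.6.
print(round(mdot(0.177, 3e6), 1))  # → 16.6
# Note the steep M_BH^-2 dependence: halving the mass quadruples mdot,
# which is why the uncertainty on mdot is so asymmetric and large.
```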
The paper is organized as follows. In Section~\ref{sec:obs}, we describe the observations and data reduction method. In Section~\ref{sec:spec}, we present the broadband (0.3$-$50\keV{}) spectral analysis and results. In Section~\ref{sec:time}, we present the timing analysis and results. In Section~\ref{sec:EDV}, we present the energy-dependent variability study of the source in different Fourier frequencies. We summarize and discuss our results in Section~\ref{sec:discussion}.
\section{Observations and Data Reduction}
\label{sec:obs}
\subsection{\xmm{}}
\xmm{} \citep{ja01} observed Mrk~1044 on 27th January 2013 (Obs.~ID 0695290101) with a total duration of $\sim130$\ks{}. We analyzed the data from the European Photon Imaging Camera (EPIC-pn; \citealt{st01}) and Reflection Grating Spectrometer (RGS; \citealt{den01}). The EPIC-pn camera was operated in the small window (\textsc{sw}) mode with the thin filter to reduce pile-up. The log of the \xmm{}/EPIC-pn observation used in this work is shown in Table~\ref{table0}. The data sets were processed with the Scientific Analysis System (\textsc{sas}~v.15.0.0) and the updated (as of 2017 December 31) calibration files. We processed the EPIC-pn data using \textsc{epproc} and produced calibrated photon event files. We filtered the processed pn events with \textsc{pattern}$\leq4$ and \textsc{flag}$==0$, taking both single and double pixel events but removing bad pixel events. To exclude the proton flare intervals, we created a \textsc{gti} (Good Time Interval) file above 10\keV{} for the full field with \textsc{rate}$<0.5$\rm~cts~s$^{-1}$ using the task \textsc{tabgtigen}, thereby maximizing the signal-to-noise ratio. We checked for pile-up with the task \textsc{epatplot} and found none in the EPIC-pn data. We extracted the source spectrum from a circular region of radius 30~arcsec centred on the source and the background spectrum from a nearby source-free circular region with a radius of 30~arcsec. We produced the Redistribution Matrix File (\textsc{rmf}) and Ancillary Region File (\textsc{arf}) with the \textsc{sas} tasks \textsc{rmfgen} and \textsc{arfgen}, respectively. We extracted the background-subtracted, deadtime- and vignetting-corrected source light curves for different energy bands from the cleaned pn event file using the task \textsc{epiclccorr}. Finally, we grouped the EPIC-pn spectral data with the task \textsc{specgroup} in order to oversample by at least a factor of 5 and to have a minimum of 20 counts per energy bin.
The net count rate estimated for EPIC-pn is ($17.48\pm0.01$)\rm~cts~s$^{-1}$ resulting in a total of $1.26\times10^{6}$ pn counts.
We processed the RGS data with the task \textsc{rgsproc}. The response files were produced using the \textsc{sas} task \textsc{rgsrmfgen}. We combined the spectra and response files for RGS~1 and RGS~2 using the task \textsc{rgscombine}. Finally, we grouped the RGS spectral data with a minimum of 25 counts per energy bin using the \textsc{ftools} \citep{bl95} task \textsc{grppha}.
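The minimum-counts grouping applied to the spectra above can be sketched as a simple greedy pass over the channel counts (a schematic of the core behaviour of tools such as \textsc{grppha}; the oversampling constraint used for the EPIC-pn data is ignored here):

```python
def group_min_counts(counts, min_counts=20):
    """Greedily merge adjacent spectral channels until each group holds
    at least `min_counts`; a trailing under-filled group is merged back
    into the previous one so no counts are lost."""
    groups, current = [], 0
    for c in counts:
        current += c
        if current >= min_counts:
            groups.append(current)
            current = 0
    if current > 0:            # leftover channels: fold into the last group
        if groups:
            groups[-1] += current
        else:
            groups.append(current)
    return groups

# Hypothetical channel counts, not taken from the Mrk 1044 data.
channels = [3, 8, 15, 40, 2, 2, 30, 5]
print(group_min_counts(channels))  # → [26, 40, 39]
```

Such grouping ensures that Gaussian statistics (and hence the $\chi^{2}$ fit statistic) are a reasonable approximation in every bin.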
\subsection{\nustar{}}
Mrk~1044 was observed with the \nustar{} telescope \citep{ha13} on February 8, 2016 (Obs.ID~60160109002) starting at 07:31:08 UT for a total duration of $\sim40$\ks{} with its two co-aligned focal plane modules FPMA and FPMB. The observation log of \nustar{} is shown in Table~\ref{table0}. We analyzed the data sets with the \nustar{} Data Analysis Software (\textsc{nustardas}) package (v.1.6.0) and the updated (as of 2017 December 31) calibration database (version 20170222). We produced the calibrated and cleaned photon event files with the task \textsc{nupipeline}. We used the script \textsc{nuproducts} to extract the source spectra and light curves from a circular region of radius 50~arcsec centred on the source, while the background spectra and light curves were extracted from two same-sized circular regions free from source contamination and the edges of the CCD. We generated the background-subtracted FPMA and FPMB light curves with the \textsc{ftools} task \textsc{lcmath}. Finally, we grouped the spectra using the \textsc{grppha} tool with a minimum of 30 counts per bin. The net count rate is $\sim0.18$\rm~cts~s$^{-1}$, resulting in a total of $\sim4000$ counts with a net exposure time of about 22\ks{} for both FPMA and FPMB.
\subsection{\swift{}}
\swift{} \citep{ge04} observed Mrk~1044 several times, with one observation quasi-simultaneous with \nustar{} on February 8, 2016 (Obs.ID~00080912001) starting at 12:09:57 UT for a total duration of $\sim6.2$\ks{}. The details of the \swift{} observations used in this work are listed in Table~\ref{table0}. Here we analyzed the data from the X-Ray Telescope (XRT; \citealt{bu05}) with the XRT Data Analysis Software (\textsc{xrtdas}) package (v.3.2.0) and the recent (as of 2017 December 31) calibration database (version 20171113). We generated the cleaned event files with the task \textsc{xrtpipeline}. We extracted the source spectra from a circular region of radius 40~arcsec centred on the source position, and background spectra were produced from a nearby source-free circular region of radius 40~arcsec with the script \textsc{xrtproducts}. We combined the spectra and response files for all observations with the tool \textsc{addascaspec}. Finally, we grouped the spectral data with the \textsc{grppha} tool so that we had a minimum of 20 counts per energy bin. The total net XRT count rate is $\sim0.6$\rm~cts~s$^{-1}$, resulting in a total of $\sim16500$ counts with a net exposure time of $\sim29.4$\ks{}.
\begin{table*}
\centering
\caption{\xmm{}/EPIC-pn, \nustar{} and \swift{}/XRT observations of Mrk~1044. The net exposure time is the live time of the instrument after background filtering.}
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{cccccccc}
\hline
Satellite & Camera & Obs. ID & Date & Net exposure & Net count rate \\
& & & (yyyy-mm-dd) & (ks) & (counts~s$^{-1}$) \\
\hline
\xmm{} & EPIC-pn & 0695290101 & 2013-01-27 & 72.0 & 17.5 \\ [0.2cm]
\hline
\nustar{} & FPMA & 60160109002 & 2016-02-08 & 21.7 & 0.18 \\ [0.2cm]
& FPMB & 60160109002 & 2016-02-08 & 21.6 & 0.18 \\ [0.2cm]
\hline
\swift{} & XRT & 00035760001 & 2007-07-25 & 3.0 & 0.71 \\ [0.2cm]
& & 00035760002 & 2007-08-01 & 7.6 & 0.32 \\ [0.2cm]
& & 00035760003 & 2008-03-02 & 5.5 & 0.21 \\ [0.2cm]
& & 00091316001 & 2012-06-05 & 3.0 & 1.66 \\ [0.2cm]
& & 00080912001 & 2016-02-08 & 6.1 & 0.59 \\ [0.2cm]
& & 00093018003 & 2017-09-08 & 4.1 & 0.72 \\ [0.2cm]
\hline
\end{tabular}}
\end{center}
\label{table0}
\end{table*}
\begin{figure*}
\includegraphics[scale=0.32,angle=-90]{fig1a.ps}
\includegraphics[scale=0.32,angle=-90]{fig1b.ps}
\caption{Left: The \xmm{}/EPIC-pn 3$-$10\keV{} spectrum, the \textsc{zpowerlw} model ($\Gamma = 1.96$) modified by the Galactic absorption and the residuals. A strong residual in the Fe~K region (6$-$7\keV{}) is clearly seen. The data are binned up for clarity. Right: The EPIC-pn 3$-$10\keV{} spectrum, the best-fitting phenomenological model, \textsc{tbabs$\times$(zgauss1$+$zgauss2$+$zpowerlw)} and the residual spectrum. The model consists of an absorbed power-law along with narrow and broad iron emission lines.}
\label{pn_hard_pow}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.32,angle=-90]{fig2a.ps}
\includegraphics[scale=0.32,angle=-90]{fig2b.ps}
\caption{Left: The EPIC-pn 3$-$10\keV{} spectral data, the fitted distant reflection model [\textsc{tbabs$\times$(xillverd+zpowerlw)}] and the residuals which show a broad excess emission in the $6.5-6.9$\keV{} energy range. The data are binned up for clarity. Right: The EPIC-pn 3$-$10\keV{} spectrum, the best-fitting reflection model, \textsc{tbabs$\times$(relxilld+xillverd+zpowerlw)} along with the model components and the residuals. The best-fitting model consists of three main components: a primary power-law emission (in dash-dotted), an ionized, relativistic disc reflection with a single power-law emissivity profile (in dashed line) and a neutral, distant reflection (in dotted line).}
\label{pn_hard_best}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.32,angle=-90]{fig3a.ps}
\includegraphics[scale=0.32,angle=-90]{fig3b.ps}
\caption{Left: The full band (0.3$-$10\keV{}) EPIC-pn spectrum, the hard band (3$-$10\keV{}) best-fitting spectral model, \textsc{tbabs$\times$(relxilld+xillverd+zpowerlw)} extrapolated down to 0.3\keV{} and the ratio of data to the model. Right: The full band (0.3$-$10\keV{}) EPIC-pn spectrum, the best-fitting model, \textsc{tbabs$\times$gabs$\times$(relxilld+xillverd+zpowerlw)} and the residuals. The model components are: a primary power-law emission (in dash-dotted), an ionized, high-density relativistic disc reflection with a broken power-law emissivity profile (in dashed), a neutral distant reflection (in dotted), and a broad Gaussian absorption line at around 0.7\keV{}.}
\label{pn_ldr}
\end{figure*}
\section{Broadband (0.3$-$50\keV{}) Spectral Analysis: A First Look}
\label{sec:spec}
We performed the spectral analysis of Mrk~1044 in \textsc{xspec}~v.12.9.0n \citep{ar96} and \textsc{spex}\footnote[1]{\url{https://www.sron.nl/spex}}~v.3.04.00 \citep{ka96}. We employed $\chi^{2}$ statistics and estimated the errors at the 90~per~cent confidence limit, corresponding to $\Delta\chi^{2}=2.71$, unless specified otherwise.
\subsection{The 3$-$10\keV{} EPIC-pn spectrum}
AGN are well known to exhibit a complex soft X-ray excess below $\sim2$\keV{} and hence we first focused on the hard X-ray emission above 3\keV{}. We began the spectral analysis by fitting the 3$-$10\keV{} EPIC-pn spectrum of Mrk~1044 with a simple power-law (\textsc{zpowerlw}) corrected by the Galactic absorption model (\textsc{tbabs}), assuming the cross-sections and solar interstellar medium (ISM) abundances of \citet{aspl09}. The Galactic neutral hydrogen column density was fixed at $N_{\rm H}=3.6\times10^{20}$\rm~cm$^{-2}$ \citep{wil13}. The fitting of the 3$-$10\keV{} data with the \textsc{tbabs$\times$zpowerlw} model provided an unacceptable fit with $\chi^{2}$/d.o.f = 220.2/165 and a strong residual in the Fe~K region (6$-$7\keV{}), as shown in Figure~\ref{pn_hard_pow} (left). In order to assess the presence of neutral Fe~K emission from the `torus' or other distant material, we modelled an emission line that is expected to be unresolved by the EPIC-pn camera. Any additional broad contribution to the line profile would then come from material closer to the black hole, e.g., the inner disc. Therefore, we first added one narrow Gaussian line (\textsc{zgauss1}) to model the neutral Fe~K emission by fixing the line width at $\sigma_{N}=10$\ev{}, which improved the fit statistic to $\chi^{2}$/d.o.f = 186.6/163 ($\Delta\chi^{2}$=$-$33.6 for 2 d.o.f). However, the residual plot shows a broad excess emission centred at $\sim6.8$\keV{}. Then, we added another Gaussian line (\textsc{zgauss2}) and set the line width free to vary. The fitting of the 3$-$10\keV{} EPIC-pn spectrum with the phenomenological model, \textsc{tbabs$\times$(zgauss1$+$zgauss2$+$zpowerlw)}, provided a statistically acceptable fit with $\chi^{2}$/d.o.f = 154.3/160 ($\Delta\chi^{2}$=$-$32.3 for 3 d.o.f), as shown in Fig.~\ref{pn_hard_pow} (right).
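The significance of such $\Delta\chi^{2}$ improvements can be gauged from the $\chi^{2}$ survival function, which for two additional free parameters has the simple closed form $\exp(-\Delta\chi^{2}/2)$ (a back-of-the-envelope check, assuming the usual $\Delta\chi^{2}$ statistics apply):

```python
import math

def delta_chi2_pvalue_2dof(delta_chi2):
    """Chance probability of a chi-square improvement when adding 2 free
    parameters: the chi2 survival function with k = 2 is exp(-x / 2)."""
    return math.exp(-delta_chi2 / 2.0)

# Adding the narrow Gaussian line: delta chi2 = 33.6 for 2 d.o.f.
p = delta_chi2_pvalue_2dof(33.6)
print(f"{p:.1e}")  # → 5.1e-08, i.e. the line is required at high confidence
```

For other numbers of added parameters the general incomplete-gamma form of the $\chi^{2}$ survival function (or an F-test) would be needed instead of this two-parameter shortcut.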
The centroid energies of the narrow and broad emission lines are $E_{1}=6.43^{+0.05}_{-0.04}$\keV{} and $E_{2}=6.84^{+0.14}_{-0.15}$\keV{}, which are representative of the neutral Fe~K$_{\alpha}$ line and a highly-ionized Fe line, respectively. The line width of the ionized Fe emission line is $\sigma_{B}=0.31^{+0.23}_{-0.14}$\keV{}. The best-fitting values for the equivalent width of the Fe~K$_{\alpha}$ and ionized Fe emission lines are ${\rm EW_{1}}=36.9^{+24.8}_{-23.5}$\ev{} and ${\rm EW_{2}}=131.7^{+57.2}_{-62.8}$\ev{}, respectively. Thus, we infer that the 3$-$10\keV{} EPIC-pn spectrum of Mrk~1044 is well described by a power-law-like primary emission along with narrow neutral and broad ionized iron emission lines.
Since modelling of the iron emission features with Gaussian lines is not physically realistic, we explored the physically-motivated reflection models \citep{ga14,ga16} to fit the iron emission lines and hence understand the physical conditions of the accretion disc. First, we modelled the narrow Fe~K$_{\alpha}$ emission feature with the reflection (\textsc{xillverd}) model \citep{ga16} without relativistic blurring, as appropriate for distant reflection and with density fixed at $n_{\rm e}=10^{15}$cm$^{\rm -3}$, the lowest value afforded by the model. We also fixed the ionization parameter of the \textsc{xillverd} model at its minimum value ($\log\xi=0$) as the Fe~K$_{\alpha}$ line is neutral. The inclination angle of the distant reflector was fixed at a higher value, $i=60^{\circ}$ and decoupled from the relativistically blurred reflection component modelling the disc reflection. Since the incident continuum as assumed by the high-density reflection model is a power-law {\it without} a variable high-energy cut-off parameter, we have used a simple power-law (\textsc{zpowerlw}) model as the illuminating continuum. The fitting of the 3$-$10\keV{} spectrum with the model, \textsc{tbabs$\times$(xillverd+zpowerlw)} provided a $\chi^{2}$/d.o.f = 184.1/163 with a broad excess emission at $\sim6.5-6.9$\keV{} energy range (see Figure~\ref{pn_hard_best}, left). The origin of the broad Fe line could be the relativistic reflection from the inner regions of an ionized accretion disc. Therefore, we have added the high-density relativistic reflection model (\textsc{relxilld}; \citealt{ga16}) with a single power-law emissivity profile. 
The relevant parameters of the \textsc{relxilld} model are: emissivity index ($q$, where emissivity of the relativistic reflection is defined by $\epsilon\propto r^{-q}$), inner disc radius ($r_{\rm in}$), outer disc radius ($r_{\rm out}$), black hole spin ($a$), disc inclination angle ($\theta^{\circ}$), ionization parameter, electron density, iron abundance, reflection fraction ($R$) and photon index. We fixed the SMBH spin and outer disc radius at $a=0.998$ and $r_{\rm out}=1000r_{\rm g}$, respectively, where $r_{\rm g}=GM_{\rm BH}/c^{2}$. The reflection fraction of the \textsc{relxilld} model was fixed at $R=-1$ to obtain only the reflected emission. The photon index and iron abundance for the relativistic reflection (\textsc{relxilld}) component were tied with the distant reflection (\textsc{xillverd}) component. In \textsc{xspec}, the 3$-$10\keV{} model reads as \textsc{tbabs$\times$(relxilld+xillverd+zpowerlw)} which provided a reasonably good fit with $\chi^{2}$/d.o.f = 157.2/157. The best-fitting values for the photon index, emissivity index, disc inclination angle, ionization parameter and electron density are $\Gamma=2.25^{+0.07}_{-0.09}$, $q=10^{+0p}_{-5.7}$, $\theta^{\circ}=44.5^{+5.2}_{-2.3}$, $\xi= 492^{+223}_{-382}$~erg~cm~s$^{-1}$ and $\log(n_{\rm e}$/cm$^{\rm -3})=16.4^{+2.6p}_{-1.4p}$, respectively. Therefore, we infer that the hard X-ray (3$-$10\keV{}) spectrum of Mrk~1044 is well explained by a neutral distant reflection as well as an ionized, relativistic disc reflection where the emission is centrally concentrated in the inner regions of the accretion disc. The 3$-$10\keV{} EPIC-pn spectral data, the best-fitting physical model along with all the model components and the deviations of the observed data from the model are shown in Fig.~\ref{pn_hard_best} (right).
\begin{table*}
\centering
\caption{The best-fitting model parameters for the EPIC-pn (0.3$-$10\keV{}) spectrum. Parameters with notations `(f)' and `$\ast$' indicate fixed and tied values, respectively. Errors are quoted at a 90~per~cent confidence level and estimated from the MCMC output.}
\begin{center}
\scalebox{0.95}{%
\begin{tabular}{cccccc}
\hline
Component & Parameter & EPIC-pn & Description \\
& & (0.3$-$10\keV{}) & \\ [0.2cm]
\hline
Galactic absorption (\textsc{tbabs}) & $N_{\rm H}$(10$^{20}$~cm$^{-2}$) & $4.6^{+0.2}_{-0.3}$ & Galactic neutral hydrogen column density \\[0.2cm]
Intrinsic absorption (\textsc{gabs}) & $E_{\rm abs}$(eV) & $705.5^{+8.3}_{-12.6}$ & Absorption line energy in the observed ($z=0.016$) frame \\ [0.2cm]
& $\sigma_{\rm abs}$(eV) & $77.5^{+15.8}_{-8.0}$ & Absorption line width \\ [0.2cm]
& $\tau_{\rm abs}$(10$^{-2}$) & $1.8^{+0.6}_{-0.2}$ & Absorption line depth \\ [0.2cm]
Relativistic reflection (\textsc{relxilld}) & $q_{\rm in}$ & $9.73^{+0.24}_{-1.13}$ & Inner emissivity index\\ [0.2cm]
& $q_{\rm out}$ & $2.4^{+0.5}_{-0.2}$ & Outer emissivity index\\ [0.2cm]
& $r_{\rm br}$($r_{\rm g}$) & $3.2^{+0.3}_{-0.3}$ & Break disc radius\\ [0.2cm]
& $a$ & $0.992^{+0.005}_{-0.016}$ & SMBH spin \\ [0.2cm]
& $\theta^{\circ}$ & $46.4^{+1.9}_{-5.0}$ & Disc inclination angle\\ [0.2cm]
& $r_{\rm in}$($r_{\rm g}$) & $1.29^{+0.12}_{-0.05}$ & Inner disc radius\\ [0.2 cm]
& $r_{\rm out}$($r_{\rm g}$) & 1000(f) & Outer disc radius \\ [0.2cm]
& $\Gamma$ & 2.29$^{\ast}$& Blurred reflection photon index \\ [0.2cm]
& $A_{\rm Fe}$ & 2.2$^{+0.5}_{-0.6}$& Iron abundance (solar)\\ [0.2cm]
& $\log(n_{\rm e}$/cm$^{\rm -3}$) & 16.2$^{+0.3}_{-0.1}$& Electron density of the disc \\ [0.2cm]
& $\xi_{\rm blur}$(erg~cm~s$^{-1}$) & 909$^{+90}_{-201}$& Ionization parameter for the disc\\ [0.2cm]
& $N_{\rm blur}$(10$^{-4}$) & 2.4$^{+0.1}_{-1.1}$ & Normalization of the relativistic reflection component \\ [0.2cm]
Distant reflection (\textsc{xillverd}) & $\Gamma$ & 2.29$^{\ast}$ & Distant reflection photon index\\ [0.2cm]
& $A_{\rm Fe}$ & 2.2$^{\ast}$ & Iron abundance (solar)\\ [0.2cm]
& $\log(n_{\rm e}$/cm$^{\rm -3}$) & 15(f) & Density of the distant reflector \\ [0.2cm]
& $\xi_{\rm distant}$(erg~cm~s$^{-1}$)& 1(f) & Ionization parameter of the distant reflector\\ [0.2cm]
& $i^{\circ}$ & 60(f) & Inclination angle of the distant reflector \\ [0.2cm]
& $N_{\rm distant}$(10$^{-6}$) & 4.6$^{+2.4}_{-2.3}$& Normalization of the distant reflection component \\ [0.2cm]
Incident continuum (\textsc{zpowerlw}) & $\Gamma$ & 2.29$^{+0.01}_{-0.03}$ & Photon index of the incident continuum \\ [0.2cm]
& $N_{\rm PL}$(10$^{-3}$) & 2.1$^{+0.9}_{-0.5}$ & Normalization of the incident continuum \\ [0.2cm]
\textsc{flux} & $F_{0.3-2}$(10$^{-11}$) & 2.26 & Observed 0.3$-$2\keV{} flux in units of erg~cm$^{-2}$~s$^{-1}$ \\ [0.2cm]
& $F_{2-10}$(10$^{-11}$) & 1.07 & Observed 2$-$10\keV{} flux in units of erg~cm$^{-2}$~s$^{-1}$ \\ [0.2cm]
& $\chi^2$/$\nu$ & 239.5/225 & Fit statistic \\ [0.2cm]
\hline
\end{tabular}}
\end{center}
\label{table1}
\end{table*}
\begin{table*}
\centering
\caption{The best-fitting physical model parameters for the Galactic and intrinsic absorptions, which are obtained from the fitting of the RGS (0.38$-$1.8\keV{}) spectrum.}
\begin{center}
\scalebox{0.95}{%
\begin{tabular}{cccccc}
\hline
Component&Parameter & RGS (0.38$-$1.8\keV{}) &Description \\
\hline
Galactic absorption (\textsc{hot}) & $N_{\rm H}$(10$^{20}$~cm$^{-2}$) & $3.8^{+0.1}_{-0.1}$ & Galactic neutral hydrogen column density \\[0.2cm]
Intrinsic absorption (\textsc{xabs}) & $v_{\rm out}$($c$) & $0.1^{+0.01}_{-0.01}$ & Outflow velocity of the wind \\ [0.2cm]
& $N_{\rm H}^{\rm abs}$(10$^{20}$~cm$^{-2}$) & $4.7^{+0.9}_{-1.2}$ & Column density \\ [0.2cm]
& $\log(\xi_{\rm abs}$/erg~cm~s$^{-1}$) & $1.8^{+0.6}_{-0.2}$ & Ionization state \\ [0.2cm]
& $\sigma_{\rm v}$(km~s$^{-1}$) & $14147^{+3648}_{-2344}$ & Velocity broadening \\ [0.2cm]
& $\chi^2$/$\nu$ & 2353/2346 & Fit statistic \\ [0.2cm]
\hline
\end{tabular}}
\end{center}
\label{table1b}
\end{table*}
\begin{figure*}
\includegraphics[scale=0.32,angle=-0]{fig6a.eps}
\includegraphics[scale=0.32,angle=-0]{fig6b.eps}
\caption{Left: The 0.38$-$1.8\keV{} RGS spectrum, the spectral model, \textsc{hot$\times$reds$\times$(relxilld+xillverd+powerlaw)} and the residuals, showing a broad absorption feature at $\sim0.7$\keV{} in the observer's frame. Right: The 0.38$-$1.8\keV{} RGS spectrum, the best-fitting spectral model, \textsc{hot$\times$reds$\times$xabs$\times$(relxilld+xillverd+powerlaw)} and the residual plot. The data are binned up by a factor of 4 for plotting purposes only.}
\label{rgs}
\end{figure*}
\begin{table*}
\centering
\caption{The best-fitting model parameters for the joint fitting of \swift{}/XRT (0.3$-$6\keV{}), \xmm{}/EPIC-pn (0.3$-$10\keV{}) and \nustar{} (3$-$50\keV{}) spectra of Mrk~1044. Parameters with notations `(f)' and `$\ast$' indicate fixed and tied values, respectively.}
\begin{center}
\scalebox{0.95}{%
\begin{tabular}{ccccccccc}
\hline
Component&Parameter & \xmm{}/EPIC-pn & \swift{}/XRT & \nustar{} \\
& & (0.3$-$10\keV{})& (0.3$-$6\keV{}) & (3$-$50\keV{}) \\
\hline
Galactic absorption (\textsc{tbabs}) & $N_{\rm H}$(10$^{20}$~cm$^{-2}$) & $4.6^{+0.2}_{-0.2}$ & $4.6^{\ast}$ & $4.6^{\ast}$ \\[0.2cm]
Intrinsic absorption (\textsc{gabs}) & $E_{\rm abs}$(eV) & $707.2^{+7.1}_{-7.0}$ & $707.2^{\ast}$ & $707.2^{\ast}$ \\ [0.2cm]
& $\sigma_{\rm abs}$(eV) & $77.9^{+9.4}_{-8.3}$ & $77.9^{\ast}$ & $77.9^{\ast}$ \\ [0.2cm]
& $\tau_{\rm abs}$(10$^{-2}$) & $1.9^{+0.2}_{-0.2}$ & $1.9^{\ast}$ & $1.9^{\ast}$ \\ [0.2cm]
Relativistic reflection (\textsc{relxilld}) & $q_{\rm in}$ & $9.4^{+0.5}_{-0.7}$ & $9.4^{\ast}$ & $9.4^{\ast}$ \\ [0.2cm]
& $q_{\rm out}$ & $2.4^{+0.3}_{-0.2}$ & $2.4^{\ast}$ & $2.4^{\ast}$ \\ [0.2cm]
& $r_{\rm br}$($r_{\rm g}$) & $3.2^{+0.1}_{-0.1}$ & $3.2^{\ast}$ & $3.2^{\ast}$ \\ [0.2cm]
& $a$ & $0.997^{+0.001}_{-0.016}$ & $0.997^{\ast}$ & $0.997^{\ast}$ \\ [0.2cm]
& $\theta^{\circ}$ & $47.2^{+1.0}_{-2.5}$ & $47.2^{\ast}$ & $47.2^{\ast}$ \\ [0.2cm]
& $r_{\rm in}$($r_{\rm g}$) & $1.31^{+0.08}_{-0.05}$ & $1.31^{\ast}$ & $1.31^{\ast}$ \\ [0.2 cm]
& $r_{\rm out}$($r_{\rm g}$) & 1000(f) & 1000(f) & 1000(f) \\ [0.2cm]
& $\Gamma$ & $2.28^{\ast}$ & $2.31^{\ast}$ & $1.89^{\ast}$ \\ [0.2cm]
& $A_{\rm Fe}$ & $2.3^{+0.1}_{-0.2}$& $2.3^{\ast}$ & $2.3^{\ast}$ \\ [0.2cm]
& $\log(n_{\rm e}$/cm$^{\rm -3}$) & $16.7^{+0.3}_{-0.7}$ & $16.7^{\ast}$ & $16.7^{\ast}$ \\ [0.2cm]
& $\xi_{\rm blur}$(erg~cm~s$^{-1}$) & $812^{+131}_{-64}$ & $81^{+108}_{-29}$ & $1980^{+235}_{-908}$ \\ [0.2cm]
& $N_{\rm blur}$(10$^{-4}$) & $1.2^{+0.1}_{-0.4}$ & $1.2^{\ast}$ & $1.2^{\ast}$\\ [0.2cm]
Distant reflection (\textsc{xillverd}) & $\Gamma$ & $2.28^{\ast}$ & $2.31^{\ast}$ & $1.89^{\ast}$ \\ [0.2cm]
& $A_{\rm Fe}$ & $2.3^{\ast}$ & $2.3^{\ast}$ & $2.3^{\ast}$ \\ [0.2cm]
& $\log(n_{\rm e}$/cm$^{\rm -3}$) & 15(f) & $15^{\ast}$& $15^{\ast}$ \\ [0.2cm]
& $\xi_{\rm distant}$(erg~cm~s$^{-1}$)& 1(f) &1(f) & 1(f)\\ [0.2cm]
& $i^{\circ}$ & $60$(f) & $60^{\ast}$ & $60^{\ast}$ \\ [0.2cm]
& $N_{\rm distant}$(10$^{-6}$) & $2.3^{+0.7}_{-0.7}$ & $2.3^{\ast}$ & $2.3^{\ast}$ \\ [0.2cm]
Incident continuum (\textsc{zpowerlw}) & $\Gamma$ & $2.28^{+0.01}_{-0.01}$ & $2.31^{+0.07}_{-0.06}$ & $1.89^{+0.14}_{-0.06}$ \\ [0.2cm]
& $N_{\rm PL}$(10$^{-4}$) & $9.3^{+2.6}_{-1.2}$ & $11.0^{+1.9}_{-2.2}$ & $4.5^{+4.1}_{-4.4}$ \\ [0.2cm]
& $\chi^2$/$\nu$ & 735/694 & --- & --- \\ [0.2cm]
\hline
\end{tabular}}
\end{center}
\label{table2}
\end{table*}
\begin{figure*}
\includegraphics[scale=0.32,angle=-90]{fig7a.ps}
\includegraphics[scale=0.32,angle=-90]{fig7b.ps}
\caption{Left-hand panel: The \swift{}/XRT (0.3$-$6\keV{}: in green crosses), \nustar{}/FPMA (3$-$50\keV{}: in black circles) and FPMB (3$-$50\keV{}: in red triangles) spectral data, the primary power-law (\textsc{zpowerlw}: $\Gamma\sim2.13$) continuum model modified by the Galactic absorption (\textsc{tbabs}) fitted in the 0.3$-$50\keV{} band and the residuals as a function of energy. The residual plot shows a soft X-ray excess emission below $\sim1$\keV{}, a narrow emission line at $\sim6.4$\keV{} and a hard X-ray excess emission at around 15$-$30\keV{} energy range. The spectra are binned for clarity. Right-hand panel: The combined \swift{}/XRT (green crosses), \xmm{}/EPIC-pn (blue plus), \nustar{}/FPMA (black circles) and FPMB (red triangles) spectra, the best-fitting model, \textsc{tbabs$\times$gabs$\times$(relxilld+xillverd+zpowerlw)} and the residuals. The best-fitting model (in grey solid) consists of the following components: a primary power-law emission (in dash-dotted), a distant reflection (in dotted) for the 6.4\keV{} narrow Fe~K$_{\alpha}$ emission, and an ionized, high-density relativistic disc reflection (in dashed) responsible for the soft X-ray excess emission, broad Fe line and Compton hump.}
\label{sw_nu}
\end{figure*}
\subsection{The 0.3$-$10\keV{} EPIC-pn spectrum}
\label{full-pn}
To examine the presence of any excess emission in the soft X-ray band, we extrapolated the hard band (3$-$10\keV{}) best-fitting spectral model, \textsc{tbabs$\times$(relxilld$+$xillverd$+$zpowerlw)}, down to 0.3\keV{}. Figure~\ref{pn_ldr} (left) shows the full band (0.3$-$10\keV{}) EPIC-pn spectral data, the extrapolated spectral model and the data-to-model ratio, which unveils an excess emission along with a broad absorption dip at $\sim0.7$\keV{} in the observed frame. For spectral analysis, we ignored the 1.8$-$2.5\keV{} band due to instrumental features present around the Si and Au detector edges at $\sim1.8$\keV{} and $\sim2.2$\keV{}, respectively (see e.g. \citealt{mar14,ma14}). Then we fitted the full band (0.3$-$10\keV{}) EPIC-pn spectrum with the hard band (3$-$10\keV{}) best-fitting model, where the emissivity profile had a single power-law shape without any break. This resulted in a poor fit with $\chi^{2}$/d.o.f = 551.6/230. To fit the soft X-ray excess emission, we allowed the emissivity profile of the relativistic disc reflection to follow a broken power-law shape: $\epsilon\propto r^{-q_{\rm in}}$ for $r<r_{\rm br}$ and $\epsilon\propto r^{-q_{\rm out}}$ for $r>r_{\rm br}$, where $r_{\rm br}$, $q_{\rm in}$ and $q_{\rm out}$ are the break radius, inner and outer emissivity indices, respectively. The fitting of the 0.3$-$10\keV{} EPIC-pn spectrum with the model, \textsc{tbabs$\times$(relxilld$+$xillverd$+$zpowerlw)}, where the emissivity has a broken power-law shape, improved the fit statistics to $\chi^{2}$/d.o.f = 431/228 ($\Delta\chi^{2}$=$-$120.6 for 2 d.o.f). To model the absorption dip at $\sim0.7$\keV{}, we multiplied the model by a Gaussian absorption line (\textsc{gabs}). 
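The broken power-law emissivity profile described above can be sketched numerically as follows; this is an illustrative snippet (not part of the fitting code), with parameter values taken from the best-fitting EPIC-pn model ($q_{\rm in}\simeq9.7$, $q_{\rm out}\simeq2.4$, $r_{\rm br}\simeq3.2\,r_{\rm g}$):

```python
import numpy as np

# Broken power-law emissivity of the relativistic disc reflection:
# eps ~ r^-q_in for r < r_br and eps ~ r^-q_out for r > r_br, made
# continuous at the break radius.  Defaults are the best-fitting
# EPIC-pn values (radii in units of r_g).
def emissivity(r, q_in=9.73, q_out=2.4, r_br=3.2):
    r = np.asarray(r, dtype=float)
    inner = r ** (-q_in)
    outer = r_br ** (q_out - q_in) * r ** (-q_out)  # equals inner at r_br
    return np.where(r < r_br, inner, outer)

# The steep inner index concentrates the emission within a few r_g:
r = np.array([1.3, 3.2, 10.0, 100.0])
eps = emissivity(r)
```

The normalisation factor $r_{\rm br}^{\,q_{\rm out}-q_{\rm in}}$ makes the two branches meet at the break radius, as required for a physically continuous profile.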
The fitting of the 0.3$-$10\keV{} EPIC-pn spectral data with the high-density relativistic reflection plus constant density distant reflection as well as an intrinsic absorption [\textsc{tbabs$\times$gabs$\times$(relxilld+xillverd+zpowerlw)}] provided a statistically acceptable fit with $\chi^{2}$/d.o.f = 239.5/225. We also checked for the possible presence of warm absorbers and created an \textsc{xstar} table model \citep{ka01} with the default turbulent velocity of 300~km~s$^{-1}$, where we varied $\log\xi_{\rm wa}$ and $\log N_{\rm H}^{\rm wa}$ from $-5$ to $+5$ and $20$ to $25$, respectively. However, the modelling of the $\sim0.7$\keV{} absorption dip with the low-velocity warm absorber model(s) does not improve the fit statistics. An absorber with $\log\xi_{\rm wa}\sim0-1$ and $N_{\rm H}^{\rm wa}\sim5\times10^{20}$~cm$^{-2}$ yields minimal continuum curvature below $\sim0.6$\keV{} and an Fe~M UTA (unresolved transition array) feature close to $\sim0.8$\keV{}, which is too high for what is observed here. Lower ionization values, e.g., closer to $\log\xi_{\rm wa}\sim-1$, move the Fe~M UTA close to $\sim0.7$\keV{} but add moderate continuum curvature below $\sim0.5-0.6$\keV{} even for low column densities such as $N_{\rm H}^{\rm wa}\sim5\times10^{20}$~cm$^{-2}$. However, we did not see any extra continuum curvature below $\sim0.6$\keV{} even by fixing the Galactic neutral hydrogen column density at $N_{\rm H}=3.6\times10^{20}$\rm~cm$^{-2}$ \citep{wil13}. The centroid energies of the absorption line in the observed and rest frames are $\sim705.5$\ev{} and $\sim717.1$\ev{}, respectively. The $717.1$\ev{} absorption feature in the rest frame of the source can be due to either O~VII or O~VIII with corresponding outflow velocities of $\sim0.25c$ or $\sim0.1c$, where $c$ is the speed of light in vacuum. 
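The ion identification above amounts to a first-order Doppler blueshift estimate; a minimal sketch, assuming laboratory rest energies of $\sim574$\ev{} (O~VII He$\alpha$) and $\sim654$\ev{} (O~VIII Ly$\alpha$):

```python
# Outflow velocity from the blueshift of the rest-frame 717.1 eV
# absorption feature, using the first-order Doppler relation
# v/c ~ (E_obs - E_rest) / E_rest.  Rest energies assumed here:
# O VII He-alpha ~574.0 eV, O VIII Ly-alpha ~653.6 eV.
E_feature = 717.1  # eV, line energy in the source rest frame

candidates = {"O VII": 574.0, "O VIII": 653.6}
velocities = {ion: (E_feature - E_rest) / E_rest
              for ion, E_rest in candidates.items()}
# O VII requires an outflow of ~0.25c, O VIII a slower ~0.1c
```

The two candidate identifications thus map directly onto the two quoted outflow velocities; the RGS fit below discriminates between them.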
The interstellar hydrogen column density obtained from the 0.3$-$10\keV{} EPIC-pn spectral fitting is $N_{\rm H}=4.6^{+0.2}_{-0.3}\times10^{20}$\rm~cm$^{-2}$ where we considered the cross-sections and solar ISM abundances of \citet{aspl09}. The full band (0.3$-$10\keV{}) EPIC-pn spectrum, the best-fitting model, \textsc{tbabs$\times$gabs$\times$(relxilld+xillverd+zpowerlw)} and the deviations of the observed data from the best-fitting spectral model are shown in Fig.~\ref{pn_ldr} (right). In order to ensure that the fitted parameter values are not stuck in a local minimum, we performed a Markov Chain Monte Carlo (MCMC) analysis in \textsc{xspec}. We used the Goodman-Weare algorithm with 200 walkers and a total chain length of $10^{6}$. Figure~\ref{prob} shows the probability distributions of various parameters. To verify that the spectral parameters are not degenerate, we have shown the variation of different spectral parameters with the disc density and ionization parameter. Figure~\ref{cont} shows the contour plots between the disc density ($\log n_{\rm e}$) and other spectral parameters ($\Gamma$, $A_{\rm Fe}$, $\log\xi_{\rm blur}$, $q_{\rm in}$, $q_{\rm out}$, $r_{\rm br}$, $\theta^{\circ}$) and between the disc ionization parameter ($\log\xi_{\rm blur}$) and two other spectral parameters ($r_{\rm in}$ and $q_{\rm in}$), which indicate that there is no degeneracy in the parameter space and the fitted parameters are independently constrained. The best-fitting spectral model parameters and their corresponding 90~per~cent confidence levels are summarized in Table~\ref{table1}. Thus, the \xmm{}/EPIC-pn spectral analysis indicates that the SE emission in Mrk~1044 results from the relativistic reflection off an ionized, high-density accretion disc with an ionization parameter of $\xi=909^{+90}_{-201}$~erg~cm~s$^{-1}$ and an electron density of $\log(n_{\rm e}$/cm$^{\rm -3})=16.2^{+0.3}_{-0.1}$. 
We do not require any extra low-temperature Comptonization component to model the SE emission from the source.
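The Goodman-Weare algorithm behind \textsc{xspec}'s \textsc{chain} command is an affine-invariant ensemble (`stretch move') sampler; the following minimal standalone sketch uses a toy Gaussian log-probability in place of the actual spectral fit statistic (all names here are illustrative, not the analysis pipeline):

```python
# Minimal sketch of the Goodman-Weare affine-invariant "stretch move"
# ensemble sampler underlying XSPEC's chain command (200 walkers, as in
# the analysis).  A toy Gaussian log-probability stands in for the
# actual -chi^2/2 of the spectral model.
import numpy as np

def log_prob(theta):
    return -0.5 * np.sum(theta ** 2)      # placeholder fit statistic

def gw_sample(log_prob, p0, nsteps, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    walkers = p0.copy()
    nw, ndim = walkers.shape
    lp = np.array([log_prob(w) for w in walkers])
    out = []
    for _ in range(nsteps):
        for k in range(nw):
            j = int(rng.integers(nw - 1))
            j += (j >= k)                  # partner walker j != k
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a  # g(z) ~ 1/sqrt(z)
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp_prop = log_prob(prop)
            # Goodman-Weare acceptance: min(1, z^(d-1) p(Y)/p(X))
            if np.log(rng.random()) < (ndim - 1) * np.log(z) + lp_prop - lp[k]:
                walkers[k], lp[k] = prop, lp_prop
        out.append(walkers.copy())
    return np.concatenate(out)

nwalkers, ndim = 200, 2
p0 = np.random.default_rng(1).normal(size=(nwalkers, ndim))
samples = gw_sample(log_prob, p0, nsteps=200)

# 90 per cent confidence interval of the first parameter from the chain
lo, hi = np.percentile(samples[:, 0], [5.0, 95.0])
```

The quoted parameter errors are the corresponding 5th and 95th percentiles of the marginal posterior distributions from the MCMC output.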
\subsection{The 0.38$-$1.8\keV{} RGS spectrum}
To confirm the presence of the $\sim0.7$\keV{} absorption line in the observed frame and also to detect any emission or absorption features, we modelled the high-resolution RGS spectrum in \textsc{spex}~v.3.04. Since RGS data alone cannot constrain the continuum, we applied the full band (0.3$-$10\keV{}) EPIC-pn continuum model without any absorption component or redshift correction to the RGS data and multiplied a constant component to account for the cross-calibration uncertainties. We fixed all the continuum model parameters to the best-fitting EPIC-pn values. The EPIC-pn continuum model is then corrected for redshift and the Galactic ISM absorption with the models \textsc{reds} and \textsc{hot} in \textsc{spex}, respectively. The temperature of the \textsc{hot} model was fixed at $T=0.5$\ev{} \citep{pin13}, while the column density was set to vary freely. The fitting of the 0.38$-$1.8\keV{} RGS spectrum with the model, \textsc{hot$\times$reds$\times$(relxilld+xillverd+powerlaw)}, provided a $\chi^{2}$/d.o.f = 2411.5/2250 with a significant residual at $\sim0.7$\keV{} in the observed frame (Figure~\ref{rgs}, left). To model the absorption feature, we used the photoionized absorption model (\textsc{xabs}) in \textsc{spex}~v.3.04. The \textsc{xabs} model calculates the absorption by a slab of material in photoionization equilibrium. The relevant parameters of the model are: column density ($N_{\rm H}^{\rm abs}$), line width ($\sigma_{\rm v}$), line-of-sight velocity ($v_{\rm out}$) and ionization parameter ($\log\xi_{\rm abs}$), where $\xi_{\rm abs}=\frac{L}{nr^{2}}$, $L$ is the source luminosity, $n$ is the hydrogen density and $r$ is the distance of the slab from the ionizing source. The modelling of the broad absorption dip with the \textsc{xabs} model improved the fit statistics to $\chi^{2}$/d.o.f = 2353/2346 ($\Delta \chi^{2}=-58.5$ for 4 d.o.f). The best-fitting parameters of the absorber are summarized in Table~\ref{table1b}. 
The fitting of the rest-frame $\sim0.72$\keV{} absorption feature with a single absorber provides an outflow velocity of $v=(0.1\pm0.01)c$. If we fix the outflow velocity to $v=0.25c$ corresponding to the O~VII wind, then the fit statistics get worse by $\Delta\chi^{2}$=$8.1$ for 1 d.o.f. Thus our analysis prefers the slower ($0.1c$) O~VIII outflow over the faster ($0.25c$) O~VII outflow. The 0.38$-$1.8\keV{} RGS spectrum, the best-fitting spectral model, \textsc{hot$\times$reds$\times$xabs$\times$(relxilld+xillverd+powerlaw)} and the residuals are shown in Fig.~\ref{rgs} (right).
\subsection{Joint fitting of \swift{}, \xmm{} and \nustar{} spectra}
We jointly fitted the \swift{}/XRT, \xmm{}/EPIC-pn and \nustar{}/FPMA, FPMB spectra to obtain tighter constraints on the reflection continuum including the SE emission and investigate the presence of any hard X-ray excess emission above 10\keV{}. We tied all the parameters together between the four data sets and multiplied a constant component to take care of the cross-normalization factors. This factor was kept fixed at 1 for the FPMA and varied for the FPMB, XRT and EPIC-pn spectral data. Initially, we fitted the quasi-simultaneous \swift{}/XRT and \nustar{}/FPMA, FPMB spectral data with a simple power-law (\textsc{zpowerlw}) model modified by the Galactic absorption (\textsc{tbabs}), which provided a poor fit with $\chi^{2}$/d.o.f = 601/329. The deviations of the observed data from the absorbed power-law model are shown in Figure~\ref{sw_nu} (left). The residuals show the presence of a soft X-ray excess emission below $\sim1$\keV{}, a narrow emission feature at $\sim6.4$\keV{} and a hard X-ray excess emission in the energy range 15$-$30\keV{}, which most likely represents the Compton reflection hump as observed in many AGN (e.g. NGC~1365: \citealt{ri13}, MCG--6-30-15: \citealt{mar14}, 1H~0707--495: \citealt{ka15}, Ark~120: \citealt{po18}).
Then we applied the best-fitting EPIC-pn (0.3$-$10\keV{}) spectral model to the combined XRT, EPIC-pn, FPMA and FPMB spectral data sets. The joint fitting of all four spectral data sets (0.3$-$50\keV{}) with the EPIC-pn spectral model, \textsc{tbabs$\times$gabs$\times$(relxilld+xillverd+zpowerlw)} provided a poor fit with $\chi^{2}$/d.o.f = 960/700. Since we are using multi-epoch observations, the spectral parameters might undergo significant variability. First, we set the photon index to vary between the \swift{}/XRT, \xmm{}/EPIC-pn and \nustar{} data sets. This improved the fit statistics to $\chi^{2}$/d.o.f = 776/698 ($\Delta\chi^{2}$=$-184$ for 2 d.o.f). Then we set the normalization of the incident continuum to vary between the observations, which improved the fit statistics to $\chi^{2}$/d.o.f = 765/696 ($\Delta\chi^{2}$=$-11$ for 2 d.o.f). Another spectral parameter that might show significant variability is the ionization parameter for the relativistic disc reflection component, which is directly proportional to the incident continuum flux. Therefore, we allowed $\log\xi_{\rm blur}$ to vary, which provided an improvement in the fit statistics with $\chi^{2}$/d.o.f = 735/694 ($\Delta\chi^{2}$=$-30$ for 2 d.o.f) without any significant residuals. The cross-normalization factors obtained from the best-fitting model are $\sim1.02$ for FPMB, $\sim2.56$ for XRT and $\sim2.43$ for EPIC-pn, which is expected given the non-simultaneous observations over a long $\sim10$~year period. The best-fitting values for the electron density, inner radius, break radius and inclination angle of the disc are $\log(n_{\rm e}$/cm$^{\rm -3})=16.7^{+0.3}_{-0.7}$, $r_{\rm in}=1.31^{+0.08}_{-0.05}r_{\rm g}$, $r_{\rm br}=3.2^{+0.1}_{-0.1}r_{\rm g}$ and $\theta^{\circ}=47.2^{+1.0}_{-2.5}$, respectively. The inclusion of the data above 10\keV{} prefers a moderately higher value of the disc density parameter. 
If we fix $\log(n_{\rm e}$/cm$^{\rm -3}$) to 15, then the broadband fit statistics worsen by $\Delta\chi^{2}$=$12.25$ for 1 d.o.f. An F-test indicates that the high-density disc model is preferred at the $\sim3.4\sigma$ confidence level compared to a low-density disc. Thus, the broadband (0.3$-$50\keV{}) spectral analysis suggests that the soft X-ray excess, highly ionized broad Fe line and Compton hump in Mrk~1044 are well explained by relativistic disc reflection off an ionized, high-density accretion disc with a broken power-law emissivity profile. The \swift{}/XRT, \xmm{}/EPIC-pn, \nustar{}/FPMA and FPMB spectral data sets, the best-fitting model, \textsc{tbabs$\times$gabs$\times$(relxilld+xillverd+zpowerlw)} along with the components and the deviations of the broadband (0.3$-$50\keV{}) data from the best-fitting model are shown in Fig.~\ref{sw_nu} (right). The best-fitting broadband spectral model parameters are summarized in Table~\ref{table2}. If we replace the simple power-law (\textsc{zpowerlw}) model with a power-law with a high-energy exponential cut-off (\textsc{cutoffpl}), then the estimated lower limit on the high-energy cut-off is $\sim128$\keV{}.
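The $\sim3.4\sigma$ preference quoted above can be reproduced with a standard F-test for one additional free parameter; a sketch using the numbers from the text (the exact convention of \textsc{xspec}'s \textsc{ftest} may differ slightly):

```python
# Converting the delta-chi^2 for the free-density disc into a significance
# with an F-test: fixing log(n_e) = 15 worsens the broadband fit by
# delta_chi2 = 12.25 for 1 d.o.f. relative to chi^2/nu = 735/694.
from scipy.stats import f as f_dist, norm

chi2_best, dof_best = 735.0, 694          # free-density broadband fit
delta_chi2, delta_dof = 12.25, 1          # penalty for fixing log n_e = 15

F = (delta_chi2 / delta_dof) / (chi2_best / dof_best)
p = f_dist.sf(F, delta_dof, dof_best)     # null: extra parameter not needed

# Equivalent two-sided Gaussian significance (~3.4 sigma, as in the text)
sigma = norm.isf(p / 2.0)
```

For a single extra parameter the significance is approximately $\sqrt{\Delta\chi^{2}\,\nu/\chi^{2}}$, consistent with the value quoted above.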
\begin{figure*}
\includegraphics[scale=0.32,angle=-0]{fig8a.eps}
\includegraphics[scale=0.32,angle=-0]{fig8b.eps}
\caption{Left: The background-subtracted, deadtime corrected \xmm{}/EPIC-pn light curves in the full (0.3$-$10\keV{}), soft (S=0.3$-$2\keV{}) and hard (H=2$-$10\keV{}) X-ray bands and the hardness ratio (H/S). Right: The background-subtracted \nustar{}/FPMA+FPMB light curves in three different energy bands: full (3$-$50\keV{}), soft (S=3$-$10\keV{}), hard (H=10$-$50\keV{}) and the corresponding hardness ratio (H/S) of Mrk~1044. The bin size used in all the panels is 400\s{}.}
\label{lc}
\end{figure*}
\begin{figure}
\includegraphics[width=0.47\textwidth,angle=-0]{fig9.eps}
\caption{The variation of the X-ray hardness ratio, H/S (S=0.3$-$2\keV{}, H=2$-$10\keV{}) as a function of the total (0.3$-$10\keV{}) X-ray count rate, representing a `softer-when-brighter' behaviour of Mrk~1044 as commonly observed in radio-quiet Seyfert~1 galaxies.}
\label{hr_flux}
\end{figure}
\begin{figure*}
\includegraphics[width=0.45\textwidth,angle=-0]{fig10a.eps}
\includegraphics[width=0.45\textwidth,angle=-0]{fig10b.eps}
\caption{The 0.3$-$2\keV{} vs. 2$-$10\keV{} flux$-$flux plot for Mrk~1044 (left). The time bin size used is 400\s{}. The binned flux$-$flux plot for Mrk~1044 (right). The dashed and dotted lines represent the linear (${\rm H}=m\times {\rm S}+c$) and power-law (${\rm H}=\alpha\times {\rm S}^{\beta}$) fits to the data.}
\label{flux_flux}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.47\textwidth,angle=-0]{fig11a.eps}
\includegraphics[width=0.47\textwidth,angle=-0]{fig11b.eps}
\caption{Frequency-dependent time lags between the $E_{1}$ (0.3$-$0.8\keV{}) and $E_{3}$ (1.5$-$5\keV{}) light curves (left) and between the $E_{2}$ (0.8$-$1\keV{}) and $E_{3}$ (1.5$-$5\keV{}) light curves (right). Lags were calculated relative to the soft band ($E_{1}=0.3-0.8$\keV{}: left and $E_{2}=0.8-1$\keV{}: right), and positive lag implies that the hard band variations are delayed relative to the soft band variations.}
\label{lag_fre}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.47\textwidth,angle=-0]{fig12a.eps}
\includegraphics[width=0.47\textwidth,angle=-0]{fig12b.eps}
\caption{Poisson noise subtracted coherence between 0.3$-$0.8\keV{} and 1.5$-$5\keV{} (left) and between 0.8$-$1\keV{} and 1.5$-$5\keV{} (right).}
\label{coh_fre}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.47\textwidth,angle=-0]{fig13a.eps}
\includegraphics[width=0.47\textwidth,angle=-0]{fig13b.eps}
\caption{Left: The lag-energy spectrum of Mrk~1044 in the lowest frequency range $\nu\sim[2-6]\times10^{-5}$\hz{}, where a hard lag is observed. The profile has a power-law like shape of the form $\tau(E)=1036.9\times E^{0.75}-912.2$\s{} and is shown as the solid, green line. Right: The lag-energy spectrum of Mrk~1044 in the high-frequency range $\nu\sim[1-2]\times10^{-4}$\hz{}, where a soft lag is detected. The high-frequency lag peaks at 0.8$-$1\keV{} and is very similar to the ratio of 0.3$-$10\keV{} data to the 3$-$10\keV{} best-fitting reflection model extrapolated down to 0.3\keV{}, as shown in the left, bottom panel of Fig.~\ref{pn_ldr}.}
\label{lag_E}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.47\textwidth,angle=-0]{fig14a.eps}
\includegraphics[width=0.47\textwidth,angle=-0]{fig14b.eps}
\caption{Energy-dependent, noise corrected coherence in the lowest frequency range $\nu\sim[2-6]\times10^{-5}$\hz{} (left) and high-frequency range $\nu\sim[1-2]\times10^{-4}$\hz{} (right).}
\label{coh_E}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.47\textwidth,angle=-0]{fig15a.eps}
\includegraphics[width=0.47\textwidth,angle=-0]{fig15b.eps}
\caption{Left: The 0.3$-$10\keV{} EPIC-pn mean (in circle) and frequency-averaged absolute rms (in square) spectra. Right: The 0.3$-$10\keV{} frequency-averaged fractional rms spectrum.}
\label{rms_data}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.47\textwidth,angle=-0]{fig16a.eps}
\includegraphics[width=0.47\textwidth,angle=-0]{fig16b.eps}
\caption{The 0.3$-$10\keV{} absolute rms spectrum of Mrk~1044 in the low-frequency range, $\nu_{\rm low}\sim[1.7-10]\times10^{-5}$\hz{} (left) and high-frequency range, $\nu_{\rm high}\sim[1-10]\times10^{-4}$\hz{} (right). A hint of enhanced variability is found at $\sim0.85$\keV{} in both frequency bands.}
\label{rms_nu_resol}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.47\textwidth,angle=-0]{fig17a.eps}
\includegraphics[width=0.47\textwidth,angle=-0]{fig17b.eps}
\caption{The 0.3$-$10\keV{} fractional rms spectrum of Mrk~1044 in the low-frequency range, $\nu_{\rm low}\sim[1.7-10]\times10^{-5}$\hz{} (left) and high-frequency range, $\nu_{\rm high}\sim[1-10]\times10^{-4}$\hz{} (right).}
\label{fvar_nu_resol}
\end{figure*}
\begin{figure}
\includegraphics[width=0.47\textwidth,angle=-0]{fig18.eps}
\caption{The 0.3$-$10\keV{} frequency-averaged $F_{\rm var}$ spectrum, the best-fitting variability spectral model (in solid blue) and the data-to-model ratio. The best-fitting model has one constant distant reflection component and two either uncorrelated or moderately anti-correlated variable spectral components: an ionized high-density relativistic reflection continuum with variable normalization ($\Delta N_{\rm blur}$) and a direct power-law continuum with variable normalization ($\Delta N_{\rm PL}$) and spectral index ($\Delta \Gamma$), where $\Delta N_{\rm PL}$ and $\Delta \Gamma$ are positively correlated. The dotted line represents the data-to-model ratio of 1. Error-bars are 1$\sigma$.}
\label{rms_model}
\end{figure}
\section{Timing Analysis}
\label{sec:time}
To understand the time and energy dependence of the variability and the causal connection between the variable emission components, we explored the timing properties of Mrk~1044 using various model-independent approaches.
\subsection{Time series and hardness ratio}
Initially, we derived the background-subtracted, deadtime and vignetting corrected, full band (0.3$-$10\keV{}) EPIC-pn time series of Mrk~1044 with a time bin size of 400\s{}, as shown in the top, left panel of Figure~\ref{lc}. The source noticeably exhibits strong short-term variability. By applying Scargle's Bayesian Blocks segmentation algorithm \citep{sc13} to the time series, we found that the X-ray emission from Mrk~1044 varied by $\sim50$~per~cent in about 800\s{}. The max-to-min amplitude variation of the source count rate is $\sim3.1$ on a timescale of $\sim22.8$~hr. To characterise the flux variations of the source, we computed the fractional rms variability, $F_{\rm var}$, and its $1\sigma$ error using the formula given in \citet{va03}. The estimated fractional rms amplitude in the full band (0.3$-$10\keV{}) is $F_{\rm var, 0.3-10}=(19.7\pm0.1$)~per~cent. We also investigated the energy dependence of variability by generating light curves in two different energy bands: soft (S=0.3$-$2\keV{}) and hard (H=2$-$10\keV{}), which are mostly dominated by the SE and primary emission, respectively. The soft and hard X-ray light curves of Mrk~1044 are shown in Fig.~\ref{lc} (left). The variability trend in these two bands is found to be similar. However, the soft band is more variable than the hard band, with fractional rms variabilities of $F_{\rm var,0.3-2}=(20.3\pm0.1$)~per~cent and $F_{\rm var,2-10}=(15.5\pm0.5$)~per~cent, respectively, which is indicative of the presence of multi-component variability. The max-to-min amplitude variations of the count rate in the 0.3$-$2\keV{} and 2$-$10\keV{} bands are $\sim3.2$ and $\sim2.9$ on timescales of $\sim22.4$~hr and $\sim22.7$~hr, respectively. In the lower, left panel of Fig.~\ref{lc}, we show the temporal variations of the hardness ratio (HR), defined by H/S. The $\chi^{2}$ test revealed significant variability in the source hardness with $\chi^{2}$/d.o.f=1028/318. 
The max-to-min amplitude variation of the hardness ratio is $\sim2.2$ on $\sim1.2$~hr timescales. Thus, we infer that Mrk~1044 showed a significant variability in flux as well as in spectral shape during the 2013 \xmm{} observation.
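The quoted $\sim50$~per~cent change in $\sim800$\s{} can be cross-checked with a simple differencing pass over the binned light curve. A minimal sketch (with an illustrative toy array, not the actual data; the value in the text comes from the Bayesian blocks segmentation, which is more robust against noise):

```python
import numpy as np

def max_fractional_change(rate, dt, window):
    """Largest fractional change |delta rate| / rate over any interval of
    length `window` seconds in an evenly sampled light curve with bin
    size `dt` seconds."""
    k = int(window // dt)                    # number of bins per interval
    r = np.asarray(rate, dtype=float)
    return np.max(np.abs(r[k:] - r[:-k]) / r[:-k])
```

For 400\s{} bins and an 800\s{} window, a step from 1.0 to 1.5 cts~s$^{-1}$ two bins apart registers as a 50~per~cent change.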
We also studied the timing behaviour of the source in the 3$-$50\keV{} energy range during the \nustar{} observation in 2016. The full (3$-$50\keV{}), soft (3$-$10\keV{}) and hard (10$-$50\keV{}) X-ray, background-subtracted, combined FPMA and FPMB light curves of Mrk~1044 are shown in Fig.~\ref{lc} (right). The max-to-min amplitude variations in the 3$-$50\keV{}, 3$-$10\keV{} and 10$-$50\keV{} bands are $\sim2.2$, $\sim2.4$ and $\sim2.6$ on timescales of $\sim6.1$~hr, $\sim6.1$~hr and $\sim4.6$~hr, respectively. According to the $\chi^{2}$ test, the variability in the full (3$-$50\keV{}), soft (3$-$10\keV{}) and hard (10$-$50\keV{}) bands is statistically significant, with $\chi^{2}$/d.o.f=200/58, 180/58 and 67/58, respectively. The fractional variability analysis confirms these results, indicating moderate variability with amplitudes of $F_{\rm var,3-50}=(14.4\pm1.3$)~per~cent, $F_{\rm var,3-10}=(15.1\pm1.5$)~per~cent and $F_{\rm var,10-50}=(7.8\pm4.8$)~per~cent in the full, soft and hard bands, respectively. In the bottom-right panel of Fig.~\ref{lc}, we show the time series of the hardness ratio (HR=10$-$50\keV{}/3$-$10\keV{}). Although the source was variable in flux, the $\chi^{2}$ test revealed no significant variability in the hardness ($\chi^{2}$/d.o.f=42/58) during the 2016 \nustar{} observation.
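The $\chi^{2}$/d.o.f.\ values quoted above test the light curves against a constant-flux hypothesis. A minimal sketch of such a test (assuming plain arrays of rates and errors; the actual pipeline details may differ):

```python
import numpy as np

def chi2_constant(rates, errors):
    """Chi-square test of the constant-flux hypothesis.

    Fits the error-weighted mean (the best-fitting constant) and returns
    chi^2 together with the degrees of freedom N - 1.
    """
    x = np.asarray(rates, dtype=float)
    e = np.asarray(errors, dtype=float)
    w = 1.0 / e**2
    mean = np.sum(w * x) / np.sum(w)          # weighted-mean constant model
    chi2 = np.sum(((x - mean) / e) ** 2)
    return chi2, len(x) - 1
```

A reduced $\chi^{2}$ well above unity (e.g.\ 1028/318 for the EPIC-pn hardness ratio) rejects constancy; values near unity (42/58 for the \nustar{} hardness) do not.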
\subsection{Hardness$-$flux and flux$-$flux analyses}
During the 2013 \xmm{} observation, the source was variable both in flux and in spectral shape, as shown in Fig.~\ref{lc} (left). To understand the interrelationship between the spectral shape and flux variations, we studied the variation of the hardness ratio (HR=H/S: H=2$-$10\keV{} and S=0.3$-$2\keV{}) as a function of the total X-ray (0.3$-$10\keV{}) count rate. Figure~\ref{hr_flux} shows the hardness$-$intensity diagram of Mrk~1044, derived with a time bin size of 400\s{}. We found that the spectral hardness decreases with flux, i.e.\ a `softer-when-brighter' behaviour, which is usually observed in radio-quiet Seyfert~1 galaxies (e.g. \citealt{mev03}, 1H~0707--495: \citealt{wi14}, Ark~120: \citealt{ma17,lo18}). To quantify the statistical significance of the observed `softer-when-brighter' trend, we estimated the Spearman rank correlation coefficient between the hardness ratio and the X-ray count rate, which is $\sim-0.5$ with a null hypothesis probability of $p\sim1.2\times10^{-21}$. Although there is considerable scatter in the diagram, the moderate anti-correlation between the hardness ratio and the total X-ray count rate is statistically significant.
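The rank correlation behind the quoted coefficient can be sketched as follows (with synthetic, anti-correlated toy data, not the actual EPIC-pn rates):

```python
import numpy as np
from scipy.stats import spearmanr

# Toy hardness-intensity data with a built-in softer-when-brighter trend.
rng = np.random.default_rng(0)
rate = rng.uniform(1.0, 3.0, 320)          # toy 0.3-10 keV count rates
hr = 1.0 / rate + 0.05 * rng.normal(size=320)   # hardness drops with flux

# Spearman's rho is rank-based, so it captures any monotonic trend
# without assuming linearity; p is the null-hypothesis probability.
rho, p_value = spearmanr(rate, hr)
```

A strongly negative $\rho$ with a tiny $p$-value, as found for the real data ($\rho\sim-0.5$, $p\sim10^{-21}$), establishes the trend despite the scatter.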
We also studied the connection between the 0.3$-$2\keV{} and 2$-$10\keV{} band count rates using the flux$-$flux analysis, a model-independent approach to spectral variability pioneered by \citet{ch01} and \citet{ta03}. We derived the flux$-$flux plot for Mrk~1044 with time bins of 400\s{} (Figure~\ref{flux_flux}, left). We fitted the flux$-$flux plot with a linear model of the form ${\rm H}=m\times {\rm S}+c$, which provided a poor fit with $\chi^{2}$/d.o.f = 866/318, slope $m\sim0.07$ and a hard offset $c\sim0.44$~cts~s$^{-1}$. To investigate whether spectral pivoting of the primary power-law emission is a plausible cause of the observed variability, we fitted the data with a simple power-law model of the form ${\rm H}=\alpha\times {\rm S}^{\beta}$, which also resulted in a statistically unacceptable fit with $\chi^{2}$/d.o.f = 862/318, $\alpha\sim0.7$ and $\beta\sim0.22$. The poor quality of these fits could be caused by the large scatter in the data. For greater clarity, we binned the flux$-$flux plot in the flux domain; the binned flux$-$flux plot is shown in the right panel of Fig.~\ref{flux_flux}. However, fitting the binned flux$-$flux plot with the linear and power-law models again provided unacceptable fits with $\chi^{2}$/d.o.f = 37.4/16 and $\chi^{2}$/d.o.f = 28.9/16, respectively. To effectively distinguish between a linear and a power-law fit, a wide range of flux is required, whereas the EPIC-pn count rate spans only a factor of $\sim2-3$ for Mrk~1044. The error bars on the binned flux$-$flux points are too small to yield a good fit without adding some systematics. Another possible reason for the bad fit could be the presence of a degree of independent variability between the soft and hard bands.
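The two flux$-$flux fits can be reproduced schematically with a weighted least-squares routine. A sketch with synthetic data (the quoted best-fitting values come from the real EPIC-pn light curves):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(S, m, c):
    """H = m*S + c: a non-zero c is a constant hard offset."""
    return m * S + c

def plaw(S, a, b):
    """H = a*S**b: b != 1 mimics spectral pivoting of the power law."""
    return a * S ** b

def fit_flux_flux(soft, hard, hard_err, model, p0):
    """Weighted fit of `model` to the flux-flux plot; returns best-fitting
    parameters, chi^2 and degrees of freedom."""
    popt, _ = curve_fit(model, soft, hard, sigma=hard_err,
                        absolute_sigma=True, p0=p0)
    chi2 = np.sum(((hard - model(soft, *popt)) / hard_err) ** 2)
    return popt, chi2, len(soft) - len(popt)
```

Comparing the $\chi^{2}$/d.o.f.\ of the two models mirrors the model selection in the text; with only a factor $\sim2-3$ span in count rate, the two shapes are hard to distinguish.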
\subsection{Frequency-dependent lag and coherence}
To probe the physical processes dominating on various timescales, we evaluated the time lag as a function of temporal frequency using the Fourier method described in \citet{vn97,no99}. The time lag between two different energy bands is given by $\tau(\nu)=\phi(\nu)/2\pi \nu$, where $\phi(\nu)={\rm arg}\langle C(\nu)\rangle$ is the phase of the average cross power spectrum $C(\nu)$. The cross power spectrum is $C(\nu)=S^{*}(\nu)H(\nu)$, where $S(\nu)$ and $H(\nu)$ are the discrete Fourier transforms of two different time series $s(t)$ and $h(t)$, respectively. First, we extracted light curves in three different energy bands: $E_{1}=0.3-0.8$\keV{} (super-soft band, dominated by the free-free emission within the high-density disc), $E_{2}=0.8-1$\keV{} (soft band, corresponding to the Fe-L line and dominated by the disc reflection) and $E_{3}=1.5-5$\keV{} (dominated by the primary power-law continuum). To obtain evenly sampled light curves, we used a time bin size of $400$\s{}. We computed the time lag by averaging the cross power spectra over five segments of the time series and then averaging in logarithmically spaced frequency bins (each bin spans $\nu\rightarrow 1.5\nu$). The resulting lags between $E_{1}$ and $E_{3}$ and between $E_{2}$ and $E_{3}$ as a function of temporal frequency are shown in the left and right panels of Figure~\ref{lag_fre}, respectively. The lags were estimated relative to the $E_{1}$ and $E_{2}$ bands, so a positive lag implies that the $E_{1}$ and $E_{2}$ bands lead the $E_{3}$ variations. In the lowest frequency range, $\nu\sim[2-6]\times10^{-5}$\hz{}, we detected hard lags of $1173\pm327$\s{} and $815\pm267$\s{} between the 0.3$-$0.8\keV{} and 1.5$-$5\keV{} bands and between the 0.8$-$1\keV{} and 1.5$-$5\keV{} bands, respectively.
Therefore, at the lowest frequencies ($\leq 6\times10^{-5}$\hz{}), or longest timescales ($\geq16.7$\ks{}), the primary-emission-dominated hard band lags behind the reflection-dominated super-soft band (0.3$-$0.8\keV{}) by $1173\pm327$\s{} and the soft band (0.8$-$1\keV{}) by $815\pm267$\s{}. However, as the frequency increases, the time lag between the 0.3$-$0.8\keV{} and 1.5$-$5\keV{} bands becomes consistent with zero. At frequencies $\nu\sim[1-2]\times10^{-4}$\hz{}, we measured a lag of $-183\pm145$\s{}, i.e.\ the soft band (0.8$-$1\keV{}), dominated by the relativistic disc reflection, lags behind the primary power-law dominated hard band (1.5$-$5\keV{}) by $183\pm145$\s{}.
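The cross-spectral lag estimate above can be sketched with plain FFTs. A minimal version (sign conventions differ between implementations; in this sketch a positive lag means the second series $h(t)$ is delayed relative to the first, $s(t)$, and the logarithmic frequency rebinning used in the text is omitted):

```python
import numpy as np

def cross_spectrum_lag(s, h, dt, nseg):
    """Frequency-dependent time lag between two evenly sampled series.

    Splits each series into `nseg` equal segments, averages the cross
    power spectrum C(nu) = S(nu) * conj(H(nu)) over segments and converts
    its phase into a lag tau(nu) = phi(nu) / (2 pi nu).  With this sign
    convention, tau > 0 means h(t) lags s(t).
    """
    n = len(s) // nseg                         # samples per segment
    freqs = np.fft.rfftfreq(n, d=dt)[1:]       # drop the zero-frequency bin
    C = np.zeros(len(freqs), dtype=complex)
    for k in range(nseg):
        seg_s = s[k * n:(k + 1) * n]
        seg_h = h[k * n:(k + 1) * n]
        S = np.fft.rfft(seg_s - np.mean(seg_s))[1:]
        H = np.fft.rfft(seg_h - np.mean(seg_h))[1:]
        C += S * np.conj(H)                    # accumulate cross spectrum
    phi = np.angle(C / nseg)                   # phase of the averaged C(nu)
    return freqs, phi / (2.0 * np.pi * freqs)
```

Recovering a known delay from a shifted sinusoid is a useful sanity check before applying this to real light curves.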
We examined the significance of the lags by calculating the Poisson-noise-subtracted coherence as a function of frequency, following the prescription of \citet{vn97}. The resulting coherence is high ($>0.5$) over the entire frequency range, as shown in Figure~\ref{coh_fre}. The high coherence implies that the time series in all three energy bands are linearly well correlated and that the physical processes responsible for the soft X-ray excess, Fe emission complex and primary power-law emission are linked to each other.
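The coherence lives in the same Fourier framework as the lags. A minimal sketch of the raw (noise-uncorrected) version; the Poisson-noise subtraction of \citet{vn97} additionally removes the noise power from each term, which is omitted here for clarity:

```python
import numpy as np

def coherence(s, h, dt, nseg):
    """Raw coherence gamma^2(nu) = |<C(nu)>|^2 / (<|S|^2> <|H|^2>),
    averaged over `nseg` equal, evenly sampled segments.  gamma^2 = 1
    means a perfect linear relation between the two series at that
    frequency; Poisson noise (not corrected here) biases it downward."""
    n = len(s) // nseg
    freqs = np.fft.rfftfreq(n, d=dt)[1:]
    C = np.zeros(len(freqs), dtype=complex)
    Ps = np.zeros(len(freqs))
    Ph = np.zeros(len(freqs))
    for k in range(nseg):
        seg_s = s[k * n:(k + 1) * n]
        seg_h = h[k * n:(k + 1) * n]
        S = np.fft.rfft(seg_s - np.mean(seg_s))[1:]
        H = np.fft.rfft(seg_h - np.mean(seg_h))[1:]
        C += S * np.conj(H)
        Ps += np.abs(S) ** 2
        Ph += np.abs(H) ** 2
    return freqs, np.abs(C / nseg) ** 2 / ((Ps / nseg) * (Ph / nseg))
```

Two perfectly linearly related series give $\gamma^{2}=1$ at every frequency, which is the limit the measured ($>0.5$) coherence is compared against.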
\section{Energy-dependent variability}
\label{sec:EDV}
In order to study the energy dependence of the variable spectral components, we determined the combined spectral-timing characteristics of Mrk~1044 using several techniques: the lag-energy spectrum, the coherence spectrum and rms variability spectra on various timescales.
\subsection{Lag-energy and coherence spectra}
The variation of the time lag as a function of energy compares the relative delays of the different energy spectral components and hence provides important insights into their origin. First, we extracted light curves in ten different energy bands: 0.3$-$0.4\keV{}, 0.4$-$0.5\keV{}, 0.5$-$0.6\keV{}, 0.6$-$0.8\keV{}, 0.8$-$1\keV{}, 1$-$1.5\keV{}, 1.5$-$2\keV{}, 2$-$3\keV{}, 3$-$5\keV{} and 5$-$10\keV{}. Then, we estimated the lag between the time series in each energy band and a reference band, defined as the full (0.3$-$10\keV{}) band minus the energy band of interest. This allows us to obtain a high signal-to-noise ratio in the reference band and to avoid correlated Poisson noise. Here, a positive lag means that the variation in the chosen energy band is delayed relative to the reference band. In the left panel of Figure~\ref{lag_E}, we show the lag-energy spectrum of Mrk~1044 in the lowest frequency range, $\nu\sim[2-6]\times10^{-5}$\hz{}, where the hard lag was observed. The low-frequency lag increases with energy and the lag-energy profile can be well described by a power law of the form $\tau(E)=1036.9\times E^{0.75}-912.2$\s{}, shown as the solid, green line in Fig.~\ref{lag_E} (left). The low-frequency lag-energy spectrum of Mrk~1044 is similar to that observed in other AGN (e.g. Mrk~335 and Ark~564: \citealt{ka13}, MS~2254.9$-$3712: \citealt{al15}).
The lag-energy spectrum of Mrk~1044 in the high-frequency range $\nu\sim[1-2]\times10^{-4}$\hz{}, where we found a hint of a soft lag, is shown in the right panel of Fig.~\ref{lag_E}. The lag-energy profile peaks at 0.8$-$1\keV{}, which could indicate a larger contribution of reprocessed soft X-ray excess emission from the ionized accretion disc, resulting in emission delayed with respect to the direct nuclear emission. This is consistent with our energy spectral fitting. However, we do not detect any Fe-K lag because of the poor signal-to-noise ratio in the hard band.
The noise-corrected coherence as a function of energy for these two frequency ranges, $\nu\sim[2-6]\times10^{-5}$\hz{} and $\nu\sim[1-2]\times10^{-4}$\hz{}, is shown in the left and right panels of Figure~\ref{coh_E}, respectively. The coherence is nearly consistent with unity over the entire energy range, indicating that the physical processes dominating the different energy bands are well connected with each other.
\subsection{The absolute and fractional rms variability spectra}
The variation of the rms variability amplitude as a function of energy allows us to distinguish between constant and variable spectral components and helps us understand the origin of the energy-dependent variability in accreting systems \citep{gi05,mi07,fa12,ma16,md17,ma17}. Moreover, modelling of the fractional rms spectrum can quantify the variability amplitude of each variable spectral component and probe the causal connection between them.
\subsubsection{Deriving the absolute and fractional rms spectra}
We derived the \xmm{}/EPIC-pn (0.3$-$10\keV{}) frequency-averaged ($\sim 7.8\times10^{-6}-1.25\times10^{-3}$\hz{}) absolute and fractional rms variability spectra of Mrk~1044 following the prescription of \citet{va03}. The absolute rms ($\sigma_{\rm rms}$) is the square root of the excess variance ($\sigma_{\rm XS}^2$), and the fractional rms ($F_{\rm var}$) is the square root of the normalized excess variance $\sigma_{\rm NXS}^2$, i.e.\ the excess variance divided by the square of the mean ($\overline{x}$) of the time series:
\begin{equation}
\sigma_{\rm rms}=\sqrt{\sigma_{\rm XS}^2}=\sqrt{S^2-\overline{\sigma_{\rm err}^2}}
\label{rms}
\end{equation}
\begin{equation}
F_{\rm var}=\sqrt{\sigma_{\rm NXS}^2}=\sqrt{\frac{\sigma_{\rm XS}^{2}}{\overline{x}^2}}=\sqrt{\frac{S^2-\overline{\sigma_{\rm err}^2}}{\overline{x}^2}},
\label{fvar}
\end{equation}
where $\sigma_{\rm XS}^2$ is defined by the sample variance $S^2$ minus the mean squared error $\overline{\sigma_{\rm err}^2}$.
The sample variance, $S^2$ is defined by
\begin{equation}
S^{2} = \frac{1}{N-1}\sum\limits^{N}_{i=1}(x_{\rm i} - \overline{x})^{2},
\end{equation}
and mean squared error, $\overline{\sigma_{\rm err}^2}$ is:
\begin{equation}
\overline{\sigma_{\rm err}^{2}}=\frac{1}{N}\sum\limits_{i=1}^{N}\sigma_{\rm err,i}^{2}.
\end{equation}
The uncertainty on $F_{\rm var}$ was estimated using the equation~(B2) of \citet{va03} and is given by:
\begin{equation}
{\rm err}(F_{\rm var})=\frac{1}{2F_{\rm var}}\sqrt{\Big(\sqrt{\frac{2}{N}}\cdot\frac{\overline{\sigma_{\rm err}^{2}}}{\overline{x}^{2}}\Big)^{2}+\Big(\sqrt{\frac{\overline{\sigma_{\rm err}^{2}}}{N}}\cdot\frac{2F_{\rm var}}{\overline{x}}\Big)^{2}}.
\label{fvar_err}
\end{equation}
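Equations~(\ref{rms})$-$(\ref{fvar_err}) translate directly into code. A minimal sketch (plain arrays of rates and errors are assumed; the actual analysis also involves deadtime and background corrections):

```python
import numpy as np

def fvar(rates, errors):
    """Fractional rms variability amplitude and its 1-sigma uncertainty,
    following the excess-variance formulae of Vaughan et al. (2003)."""
    x = np.asarray(rates, dtype=float)
    err = np.asarray(errors, dtype=float)
    N = len(x)
    mean = x.mean()
    S2 = x.var(ddof=1)                 # sample variance S^2
    mse = np.mean(err ** 2)            # mean squared measurement error
    sigma_xs2 = S2 - mse               # excess variance sigma_XS^2
    F = np.sqrt(sigma_xs2) / mean      # F_var = sqrt(sigma_XS^2) / mean
    dF = (1.0 / (2.0 * F)) * np.sqrt(
        (np.sqrt(2.0 / N) * mse / mean ** 2) ** 2
        + (np.sqrt(mse / N) * 2.0 * F / mean) ** 2)
    return F, dF
```

Note that $\sigma_{\rm XS}^2$ can come out negative for weakly variable data, in which case $F_{\rm var}$ is undefined and only an upper limit can be quoted.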
We derived the deadtime-corrected, background-subtracted EPIC-pn light curves in 19 different energy bands with time bins of 400\s{}. Then we computed the frequency-averaged ($\sim 7.8\times10^{-6}-1.25\times10^{-3}$\hz{}) absolute rms ($\sigma_{\rm rms}$), in units of counts~s$^{-1}$~keV$^{-1}$, and fractional rms ($F_{\rm{var}}$) for each time series. We also derived the EPIC-pn mean spectrum by rebinning the response matrix with the same energy binning as the rms spectra. The EPIC-pn mean and absolute rms spectra are shown in the left panel, and the fractional rms spectrum in the right panel, of Figure~\ref{rms_data}. The fractional rms amplitude of the source decreases with energy in a convex shape, with hints of dips at $\sim0.7$\keV{} and $\sim0.9$\keV{}.
Then we explored the frequency-resolved variability spectra of Mrk~1044 by calculating the absolute and fractional rms variability amplitudes in two broad frequency bands: $\nu_{\rm{low}}\sim[1.7-10]\times10^{-5}$\hz{} and $\nu_{\rm{high}}\sim[1-10]\times10^{-4}$\hz{}. The corresponding timescales are $\sim 5-60$\ks{} and $\sim 0.5-10$\ks{}, respectively. To obtain the low-frequency variability spectra, we extracted the corrected light curves with the time bin size of $\Delta t=5$\ks{} and segment length of $t=60$\ks{}. For the high-frequency variability spectra, the chosen time bin size and segment length of the time series were $\Delta t=500$\s{} and $t=10$\ks{}, respectively. The derived low and high-frequency absolute and fractional rms spectra of Mrk~1044 are shown in the left and right panels of Figure~\ref{rms_nu_resol} and Figure~\ref{fvar_nu_resol}, respectively.
\subsubsection{Modelling the fractional rms spectrum}
To identify the variable spectral components responsible for the observed energy-dependent variability of Mrk~1044, we modelled the 0.3$-$10\keV{} $F_{\rm var}$ spectrum following the approach of \citet{ma17}. The $F_{\rm var}$ spectrum can be regarded as representing the fractional rms variation of the average energy spectrum. The best-fitting average energy spectral model of Mrk~1044 consists of a direct power-law continuum ($f_{\rm DPC}$), an ionized high-density relativistic reflection continuum ($f_{\rm RRC}$) and a neutral distant reflection ($f_{\rm dist}$), modified by the Galactic ($f_{\rm GA}$) and intrinsic ($f_{\rm abs}$) absorption components, and can be expressed as
\begin{equation}
f(E)=f_{\rm GA}f_{\rm abs}[f_{\rm DPC}(E)+f_{\rm RRC}(E)+f_{\rm dist}(E)].
\end{equation}
If we consider that the observed energy-dependent variability of Mrk~1044 is caused by variations in the normalization ($N_{\rm PL}$) and photon index ($\Gamma$) of the hot Comptonization component as well as in the normalization ($N_{\rm blur}$) of the relativistic reflection component, then the variations in the average energy spectrum can be written as
\begin{small}
\begin{equation}
\Delta f(E)=\Delta f_{\rm DPC}(E,N_{\rm PL},\Gamma)+\Delta f_{\rm RRC}(E,N_{\rm blur}),
\end{equation}
\end{small}
where
\begin{small}
\begin{equation}
\Delta f_{\rm DPC}(E,N_{\rm PL},\Gamma)=f_{\rm DPC}(N_{\rm PL}+\Delta N_{\rm PL},\Gamma+\Delta \Gamma)-f_{\rm DPC}(N_{\rm PL},\Gamma),
\label{eq1}
\end{equation}
\end{small}
and
\begin{small}
\begin{equation}
\Delta f_{\rm RRC}(E,N_{\rm blur})=f_{\rm RRC}(N_{\rm blur}+\Delta N_{\rm blur})-f_{\rm RRC}(N_{\rm blur}).
\label{eq2}
\end{equation}
\end{small}
Therefore, we can write the expression for the fractional rms spectrum $F_{\rm var}(E)$ using the equation~(3) of \citet{ma17}:
\begin{small}
\begin{equation}
F_{\rm var}(E)=\frac{\sqrt{<(\Delta f_{\rm DPC}(E,N_{\rm PL},\Gamma)+\Delta f_{\rm RRC}(E,N_{\rm blur}))^2>}}{f_{\rm DPC}(E)+f_{\rm RRC}(E)+f_{\rm dist}(E)}.
\label{eq3}
\end{equation}
\end{small}
We simplified the expression for the fractional rms spectrum by expanding the first terms on the right-hand sides of equations~(\ref{eq1}) and (\ref{eq2}) in Taylor series around the variable parameters ($N_{\rm PL}$, $\Gamma$) and $N_{\rm blur}$, respectively, and neglecting the higher-order (second-order derivatives onward) terms. We can also obtain the correlation coefficients from the numerator of equation~(\ref{eq3}): $\alpha$ between $\Delta N_{\rm PL}$ and $\Delta \Gamma$, and $\beta$ between $\Delta N_{\rm PL}$ and $\Delta N_{\rm blur}$. We then fitted the observed fractional rms spectrum of Mrk~1044 by implementing equation~(\ref{eq3}) in \textsc{ISIS}~v.1.6.2-40 \citep{ho00} as a local model.
Initially, we fitted the 0.3$-$10\keV{} $F_{\rm var}$ spectrum of Mrk~1044 with a model consisting of a variable hot Comptonization component ($\Delta f_{\rm DPC}$) with free parameters $\Delta N_{\rm PL}$ and $\Delta \Gamma$, which we allowed to be correlated through the correlation coefficient $\alpha$. This provided a statistically unacceptable fit with $\chi^{2}$/d.o.f = 80.1/16. Then we included variability in the normalization ($\Delta N_{\rm blur}$) of the ionized disc reflected emission and allowed the variations in the normalization of the direct power-law continuum, $\Delta N_{\rm PL}$, and of the inner disc reflection, $\Delta N_{\rm blur}$, to be connected through the correlation coefficient $\beta$. This model describes the observed fractional rms spectrum of Mrk~1044 well, with $\chi^{2}$/d.o.f = 12.8/14 ($\Delta\chi^{2}$=$-67.3$ for 2 d.o.f). The best-fitting values for the fractional rms spectral model parameters are: $\frac{\Delta N_{\rm PL}}{N_{\rm PL}}=62.8^{+12.6}_{-11.6}$~per~cent, $\frac{\Delta \Gamma}{\Gamma}=8.6^{+5.4}_{-2.5}$~per~cent, $\alpha=1.0^{+0p}_{-0.21}$, $\frac{\Delta N_{\rm blur}}{N_{\rm blur}}=25.0^{+2.5}_{-3.0}$~per~cent and $\beta=-0.34^{+0.34}_{-0.33}$. The 0.3$-$10\keV{} frequency-averaged fractional rms variability spectrum, the best-fitting variability spectral model and the data-to-model ratio are shown in Figure~\ref{rms_model}. Thus we infer the presence of a less variable inner disc reflection with variable flux and a more variable direct coronal emission, where the flux variations of the coronal emission and of the relativistic disc reflection are either uncorrelated or moderately anti-correlated with each other.
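To illustrate how equation~(\ref{eq3}) behaves, the sketch below evaluates the first-order Taylor expansion with toy spectral shapes: a bare power law for the direct continuum, an ad hoc log-Gaussian bump standing in for the relativistic reflection and a faint hard power law for the distant reflection (these are illustrative assumptions, not the actual best-fitting model components). The default amplitudes are the best-fitting values quoted above; the toy shapes reproduce only the qualitative decline of $F_{\rm var}$ with energy, not the measured normalization, and the $\Delta \Gamma$$-$$\Delta N_{\rm blur}$ cross term is neglected:

```python
import numpy as np

def fvar_model(E, Gamma=2.25, dN=0.628, dG=0.086, alpha=1.0,
               dNb=0.25, beta=-0.34):
    """Toy F_var(E) from a first-order Taylor expansion.

    dN, dG and dNb are the fractional rms amplitudes of N_PL, Gamma and
    N_blur; alpha and beta are the correlation coefficients between
    (dN_PL, dGamma) and (dN_PL, dN_blur), respectively.
    """
    fDPC = E ** (-Gamma)                                   # direct continuum
    fRRC = 2.0 * np.exp(-0.5 * ((np.log10(E) + 0.15) / 0.3) ** 2)  # toy RRC
    fdist = 0.05 * E ** (-1.8)                             # toy distant refl.
    tN = fDPC * dN                                         # (df/dN) sigma_N
    tG = -fDPC * np.log(E) * Gamma * dG                    # (df/dGamma) sigma_G
    tR = fRRC * dNb                                        # (df/dNb) sigma_Nb
    var = (tN ** 2 + tG ** 2 + 2.0 * alpha * tN * tG
           + tR ** 2 + 2.0 * beta * tN * tR)               # correlated sum
    return np.sqrt(var) / (fDPC + fRRC + fdist)
```

With $\alpha=1$ the continuum terms combine as $(t_{N}+t_{\Gamma})^{2}$, and a negative $\beta$ suppresses the variability around the reflection bump, which is how the model dilutes $F_{\rm var}$ where the less variable reflection dominates.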
\section{Summary and Discussion}
\label{sec:discussion}
We present the first results from broadband (0.3$-$50\keV{}) spectral and timing studies of the highly accreting narrow-line Seyfert~1 galaxy Mrk~1044, using \xmm{} ($\sim130$\ks{}), \nustar{} ($\sim22$\ks{}) and \swift{} ($\sim29$\ks{}) observations. We perform time-averaged spectral modelling together with frequency- and energy-dependent time-lag, coherence, and absolute and fractional rms variability spectral analyses. We investigate the underlying physical processes (Comptonization, reverberation and propagating fluctuations) around the SMBH and disentangle the emission components responsible for the observed energy-dependent variability on various timescales. The main results of our work are summarized below:
\begin{enumerate}
\item The time-averaged \xmm{} (0.3$-$10\keV{}) spectrum of Mrk~1044 shows strong soft X-ray excess emission below $\sim2$\keV{}, a narrow Fe~K$_{\alpha}$ emission line at $\sim6.4$\keV{}, a broad Fe emission line at $\sim6.84$\keV{} and a possible O~VIII wind with an outflow velocity of $(0.1\pm0.01)c$.
\item The broadband (0.3$-$50\keV{}) quasi-simultaneous \swift{} and \nustar{} spectra confirm the presence of a soft X-ray excess below $\sim1$\keV{}, a narrow Fe~K$_{\alpha}$ emission line at $\sim6.4$\keV{} and reveal a Compton hump at $\sim15-30$\keV{}.
\item The fitting of the average energy spectrum requires an ionized high-density relativistic reflection model with a broken power-law emissivity profile of the accretion disc, to describe the soft X-ray excess, broad Fe line and Compton hump. The best-fitting values for the electron density, inner radius and break radius of the disc are $n_{\rm e}=5.2^{+5.2}_{-4.2}\times10^{16}$~cm$^{\rm -3}$, $r_{\rm in}=1.31^{+0.08}_{-0.05}r_{\rm g}$ and $r_{\rm br}=3.2^{+0.1}_{-0.1}r_{\rm g}$, respectively.
\item During the 2013 \xmm{} observation, Mrk~1044 shows a significant variability with changes in the total X-ray (0.3$-$10\keV{}) count rate by $\sim50$~per~cent on a timescale of $\sim800$\s{}. The hardness$-$flux analysis shows that source hardness decreases with the total X-ray (0.3$-$10\keV{}) count rate, suggesting a softer-when-brighter trend as usually observed in radio-quiet narrow-line Seyfert~1 AGN (Fig.~\ref{hr_flux}).
\item At low frequencies ($\sim[2-6]\times10^{-5}$\hz{}), the hard band (1.5$-$5\keV{}) which is dominated by the illuminating continuum lags behind the relativistic reflection dominated super-soft (0.3$-$0.8\keV{}) and soft (0.8$-1$\keV{}) bands by ($1173\pm327$)\s{} and ($815\pm267$)\s{}, respectively. As the frequency increases, the time-lag between the super-soft and hard bands disappears. However, we do see a negative lag between the soft (0.8$-$1\keV{}) and hard (1.5$-$5\keV{}) bands where the soft band lags behind the hard band by ($183\pm145$)\s{} at higher frequencies ($\sim[1-2]\times10^{-4}$\hz{}) (Fig.~\ref{lag_fre}).
\item The low-frequency lag-energy spectrum is featureless and has a power-law like shape, while the high-frequency lag spectrum has a maximum value at 0.8$-$1\keV{} where the Fe-L emission line peaks and can be attributed to the delayed emission from the inner accretion disc (Fig.~\ref{lag_E}). However, we do not see any Fe-K lag. The non-detection of the Fe-K lag has previously been reported in another Seyfert~1 galaxy MCG--6-30-15 \citep{ka14}.
\item The source variability decreases with energy during both 2013 and 2016 observations. The fractional rms amplitudes in the 0.3$-$2\keV{}, 2$-$10\keV{}, 3$-$10\keV{} and 10$-$50\keV{} bands are $F_{\rm var,0.3-2}=(20.3\pm0.1$)~per~cent, $F_{\rm var,2-10}=(15.5\pm0.5$)~per~cent, $F_{\rm var,3-10}=(15.1\pm1.5$)~per~cent, $F_{\rm var,10-50}=(7.8\pm4.8$)~per~cent, respectively. The modelling of the \xmm{}/EPIC-pn frequency-averaged ($\sim 7.8\times10^{-6}-1.25\times10^{-3}$\hz{}) fractional rms spectrum reveals that the observed energy-dependent variability of Mrk~1044 is mainly driven by two components: more variable illuminating continuum and less variable relativistic reflection from an ionized high-density accretion disc.
\end{enumerate}
\begin{figure}
\includegraphics[width=0.47\textwidth,angle=-0]{fig19.eps}
\caption{The variation of the disc electron density as a function of the dimensionless mass accretion rate derived from the radiation pressure solution of \citet{sz94}, and considering $\alpha=0.1$, $M_{\rm BH}=3\times10^{6}M_{\odot}$, $f=0.9$ and $r=10$. The dotted and dashed lines represent the observed value of the electron density and its 90~per~cent confidence limits, respectively.}
\label{density_mdot}
\end{figure}
\subsection{Origin of the soft and hard X-ray excess emission: high-density disc reflection}
We investigate the origin of the soft and hard X-ray excess emission, including the Fe~K emission complex and Compton hump, in this low-mass, highly accreting AGN. Modelling of the hard band (3$-$10\keV{}) EPIC-pn spectrum of the source requires a hot Comptonization component with a photon index of $\Gamma=2.25^{+0.07}_{-0.09}$, along with an ionized relativistic reflection with a single power-law emissivity profile for the accretion disc to fit the broad Fe emission line, and a neutral, non-relativistic reflection from distant material to fit the narrow Fe~K$_{\alpha}$ emission line. The extrapolation of the 3$-$10\keV{} best-fitting model down to 0.3\keV{} reveals strong residuals below $\sim2$\keV{} due to the soft X-ray excess emission. Modelling of the full band (0.3$-$10\keV{}) EPIC-pn spectrum, including the soft X-ray excess, requires a broken power-law emissivity profile and a high electron density ($n_{\rm e}\sim5\times10^{16}$~cm$^{\rm -3}$) for the accretion disc. Moreover, the photon index ($\Gamma\sim2.25$), disc inclination angle ($\theta\sim45^{\circ}$) and inner emissivity index ($q_{\rm in}\sim8-10$) obtained from modelling only the 3$-$10\keV{} spectrum, which describes the Fe emission complex, are consistent with those inferred from fitting the full band (0.3$-$10\keV{}) spectrum including the soft X-ray excess. We find that the shape of the inner emissivity profile ($\epsilon_{\rm in}\propto r^{-9}$ for $r<r_{\rm br}$) describing the broad Fe emission line remains unaffected even after the inclusion of the soft band data, and we only require an additional, flatter power-law emissivity profile ($\epsilon_{\rm out}\propto r^{-2}$ for $r>r_{\rm br}$) of the disc to explain the soft X-ray excess. The physical implication of the broken power-law emissivity profile is that the disc regions responsible for the broad Fe emission line and the soft X-ray excess are different and separated by the break radius.
The best-fitting value of the break radius obtained from the full band (0.3$-$10\keV{}) EPIC-pn spectrum is $r_{\rm br}\sim3r_{\rm g}$, which implies that the soft X-ray excess originates from the high-density accretion disc above $\sim3r_{\rm g}$, while the broad Fe emission line arises in the innermost regions of the disc, below the break radius.
We also detect a Compton hump at around 15$-$30\keV{} during the 2016 \nustar{} observation. The inclusion of the 10$-$50\keV{} spectral data does not affect the spectral model parameters (disc inclination angle, spin, emissivity indices, break radius) obtained from fitting of the 0.3$-$10\keV{} spectrum only. The best-fit value of the disc electron density as derived from the modelling of the broadband (0.3$-$50\keV{}) spectral data is $n_{\rm e}=5.2^{+5.2}_{-4.2}\times10^{16}$~cm$^{\rm -3}$. Mrk~1044 is known to be a highly accreting AGN with the dimensionless mass accretion rate of $\dot{m}=\frac{\dot{M}c^{2}}{L_{\rm E}}=16.6^{+25.1}_{-10.1}$ \citep{du15}. At high accretion rate, the inner region of a standard $\alpha$-disc \citep{ss73} is radiation pressure-dominated and the electron density of the disc can be written as \citep{sz94}
\begin{equation}
n_e=\frac{1}{\sigma_{\rm T}R_{\rm S}} \frac{256\sqrt{2}}{27}\alpha^{-1}r^{3/2}\dot m^{-2} [1-(3/r)]^{-1} (1-f)^{-3},
\label{density}
\end{equation}
where $\sigma_{\rm T}=6.64\times10^{-25}$~cm$^2$ is the Thomson scattering cross section, $R_{\rm S}=2GM_{\rm BH}/c^{2}$ is the Schwarzschild radius, $M_{\rm BH}$ is the black hole mass, $\alpha=0.1$ is the disc viscosity parameter, $r=R/R_{\rm S}$, $R$ is the characteristic disc radius, $\dot{m}=\frac{\dot{M}c^{2}}{L_{\rm E}}$ is the dimensionless mass accretion rate and $f$ is the fraction of the total power released by the disc into the corona. The variation of the disc electron density ($n_e$) with the dimensionless mass accretion rate ($\dot{m}$) for $\alpha=0.1$, $M_{\rm BH}=3\times10^{6}M_{\odot}$, $f=0.9$ and $r=10$ is shown as the solid curve in Figure~\ref{density_mdot}. As evident from Fig.~\ref{density_mdot}, the assumption of a constant disc density ($n_{\rm e}=10^{15}$~cm$^{\rm -3}$) is not physically realistic for low-mass AGN, even when the mass accretion rate is very high. The observed best-fitting value for the disc electron density of the source and its 90~per~cent confidence limits are shown as the dotted and dashed lines in Fig.~\ref{density_mdot}, respectively. The corresponding dimensionless mass accretion rate of Mrk~1044 estimated using equation~(\ref{density}) is $\dot{m}\approx10-32$, which is in agreement with that found by \citet{du15}. We further verified the SMBH mass using the X-ray variability technique pioneered by \citet{po12}. The relation between the SMBH mass ($M_{\rm BH,7}$), in units of $10^{7}M_{\odot}$, and the normalized excess variance ($\sigma_{\rm NXS}^{2}$), measured from 2$-$10\keV{} light curves with 10\ks{} segments and a bin size of 250\s{}, can be written as
\begin{equation}
\log(\sigma_{\rm NXS}^{2})=(-1.83\pm0.1)+(-1.04\pm0.09)\log(M_{\rm BH,7}).
\label{mass_Fvar}
\end{equation}
The SMBH mass of Mrk~1044 measured using equation~(\ref{mass_Fvar}) is $M_{\rm BH}=(4-5)\times10^{6}M_{\odot}$ which is close to that measured by \citet{du15}.
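Both checks are simple inversions of the formulae above. A sketch evaluating equation~(\ref{density}) for $\dot{m}$ given the measured density, and equation~(\ref{mass_Fvar}) for $M_{\rm BH}$ (the $\sigma_{\rm NXS}^{2}$ value in the usage line is hypothetical, chosen only to illustrate the inversion, since the measured value is not quoted in the text):

```python
import numpy as np

SIGMA_T = 6.64e-25        # Thomson cross section [cm^2]
RG_SUN = 1.48e5           # GM_sun / c^2 [cm]

def mdot_from_density(n_e, M_bh=3e6, alpha=0.1, f=0.9, r=10.0):
    """Invert the SZ94 radiation-pressure-zone density relation,
    n_e = K * mdot**-2, for the dimensionless accretion rate mdot."""
    R_s = 2.0 * RG_SUN * M_bh                      # Schwarzschild radius [cm]
    K = ((256.0 * np.sqrt(2.0) / 27.0) / (SIGMA_T * R_s)
         * r ** 1.5 / alpha / (1.0 - 3.0 / r) / (1.0 - f) ** 3)
    return np.sqrt(K / n_e)

def mass_from_sigma_nxs(sigma_nxs2):
    """Invert log(sigma_NXS^2) = -1.83 - 1.04 log(M_BH,7) for M_BH
    in solar masses (central values of the Ponti et al. relation)."""
    return 10.0 ** ((-1.83 - np.log10(sigma_nxs2)) / 1.04) * 1e7

# Hypothetical sigma_NXS^2, for illustration of the inversion only:
M_bh_est = mass_from_sigma_nxs(0.034)
```

Feeding the measured density and its 90~per~cent limits into `mdot_from_density` recovers $\dot{m}\approx10-32$, as quoted in the text.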
We detected a broad absorption line at a rest-frame energy of $\sim0.72$\keV{}, a single feature that is among the strongest lines present in both the EPIC-pn and RGS spectra. Modelling of the RGS spectrum with a photoionized absorption model infers the presence of an O~VIII wind moving with an outflow velocity of $(0.1\pm0.01)c$. The existence of a persistent (flux-independent) O~VIII outflow has previously been reported in another highly accreting NLS1 AGN, IRAS~13224--3809 \citep{pin18}.
Our broadband spectral analysis also reveals that the central SMBH of Mrk~1044 is spinning rapidly, with a spin parameter of $a=0.997^{+0.001}_{-0.016}$. The high spin ($a\approx1$) is a natural consequence of the high radiative efficiency of prograde accretion (see Fig.~5 of \citealt{ki08}), which is further supported by the very high accretion rate of Mrk~1044; it is therefore not an artefact of shortcomings of the relativistic reflection model. Hence we conclude that the origin of the soft X-ray excess, broad Fe emission line and Compton hump is relativistic reflection from an ionized, high-density accretion disc around a rapidly rotating SMBH.
\subsection{Probing Comptonization, propagation fluctuation and reverberation scenarios}
We detected hard lags where the $1.5-5$\keV{} band emission lags behind the emission from the 0.3$-$0.8\keV{} and 0.8$-$1\keV{} bands by $1173\pm327$\s{} and $815\pm267$\s{}, respectively in the lowest frequency range $(2-6)\times10^{-5}$\hz{}. Here we explore both the Compton scattering and propagation fluctuation scenarios for the origin of the hard lag. In the framework of Compton up-scattering, a soft X-ray photon with energy $E_{\rm S}$ after $N$ scatterings produces a hard X-ray photon of energy $E_{\rm H}$ \citep{zd85}
\begin{equation}
E_{\rm H}=A^{N}E_{\rm S},
\label{comp}
\end{equation}
where $A=1+4\theta+16\theta^{2}$, $\theta=\frac{k_{B}T_{e}}{m_{e}c^{2}}$ and $T_{e}$ and $m_{e}$ are the electron temperature and rest mass, respectively.
If $t_{c}$ is the time delay between successive scatterings and the hard X-ray photons are produced within the optically thin, hot corona after $N$ scatterings, then the Comptonization timescale is
\begin{equation}
t_{\rm comp}=Nt_{c}.
\end{equation}
For an optically thin, hot ($T_{e}\sim100$\keV{}) corona of size $r\sim10r_g$, the Comptonization timescales of $\sim0.3$\keV{} and $\sim0.8$\keV{} photons, which eventually produce $\sim5$\keV{} photons within the corona, are $\sim470$\s{} and $\sim300$\s{}, respectively. The estimated Comptonization timescales are much smaller than the observed hard lags. Therefore, the origin of the hard lag is not entirely due to the Compton up-scattering of the soft X-ray photons from the disc. We then investigated the viscous propagation fluctuation scenario proposed by \citet{ko01}. In this scenario, the fluctuations produced at different radii in the disc propagate inwards and then hit the edge of the X-ray emitting corona on timescales corresponding to the local viscous timescale. The viscous timescale at the disc emission radius $r$ is defined by
\begin{equation}
t_{\rm vis} = \frac{1}{\alpha}\left(\frac{r}{h}\right)^{2}t_{\rm dyn},
\end{equation}
where $\alpha\sim0.1$ is the viscosity parameter and $h$ is the disc height, which is much smaller than the disc radius $r$ for a standard thin disc. However, in the inner regions of the disc, $r$ is only a few $r_{\rm g}$ and one can take $h\approx r$, so the viscous timescale becomes $t_{\rm vis}\approx\frac{1}{\alpha}t_{\rm dyn}$. For a SMBH of mass $M_{\rm BH}$, the dynamical timescale is $t_{\rm dyn}=\sqrt{\frac{r^3}{GM_{\rm BH}}}\approx500\left(\frac{M_{\rm BH}}{10^{8}M_{\rm \odot}}\right)\left(\frac{r}{r_{\rm g}}\right)^{3/2}$\s{}. From the modelling of the full band \xmm{}/EPIC-pn spectrum with the relativistic reflection model, we found that the soft X-ray photons are produced in the disc above the break radius $r_{\rm br}\sim3r_{\rm g}$. Therefore, the minimum viscous timescale required for a soft X-ray fluctuation to hit the edge of the corona is estimated to be $t_{\rm vis}\sim780$\s{}. Thus the observed hard lags of $1173\pm327$\s{} and $815\pm267$\s{} between the super-soft ($0.3-0.8$\keV{}) and hard ($1.5-5$\keV{}) bands and between the soft ($0.8-1$\keV{}) and hard ($1.5-5$\keV{}) bands can indeed correspond to the sum of the fluctuation propagation timescale ($\sim780$\s{}) between the inner disc and the edge of the corona and the Comptonization timescales ($\sim470$\s{} and $\sim300$\s{}) of the $\sim0.3$\keV{} and $\sim0.8$\keV{} seed photons within the corona, respectively. In the propagation fluctuation scenario, we expect a log-linear lag-energy spectrum. However, the shape of the low-frequency lag-energy spectrum is not entirely consistent with log-linear lags; a possible reason for the dilution of the log-linear behaviour is the Compton up-scattering of the soft X-ray photons in the corona \citep{ut11}. Thus we conclude that both propagation fluctuation and Comptonization are responsible for the observed time-lags on longer timescales.
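These order-of-magnitude estimates can be reproduced with a short numerical sketch. Here the black-hole mass ($M_{\rm BH}\approx3\times10^{6}M_{\odot}$), the corona size ($r\sim10r_{\rm g}$) and one scattering per coronal light-crossing time ($t_{c}\approx r/c$) are illustrative assumptions of ours; the temperature, seed-photon energies and the $t_{\rm dyn}$ normalisation are the values quoted above.

```python
import math

# Quantities quoted in the text
kT_e, m_e_c2 = 100.0, 511.0                # keV: coronal temperature, electron rest-mass energy
theta = kT_e / m_e_c2
A = 1.0 + 4.0 * theta + 16.0 * theta**2    # mean energy gain per scattering, Eq. (comp)

def n_scatter(E_soft, E_hard=5.0):
    """Number of scatterings boosting a seed photon from E_soft to E_hard (keV)."""
    return math.log(E_hard / E_soft) / math.log(A)

# Illustrative assumptions (not quoted in this section)
M_BH = 3.0e6                               # black-hole mass in solar masses
t_g = 500.0 * (M_BH / 1.0e8)               # light-crossing time of one r_g in s, from the t_dyn normalisation
t_c = 10.0 * t_g                           # delay between scatterings for a corona of size ~10 r_g

t_comp_03 = n_scatter(0.3) * t_c           # ~470 s quoted in the text
t_comp_08 = n_scatter(0.8) * t_c           # ~300 s quoted in the text

# Viscous timescale at the break radius r_br ~ 3 r_g, with h ~ r and alpha ~ 0.1
alpha = 0.1
t_vis = 500.0 * (M_BH / 1.0e8) * 3.0**1.5 / alpha   # ~780 s quoted in the text
```

With these inputs the sketch gives $t_{\rm comp}\approx480$\s{} and $\approx310$\s{} for the two seed-photon energies, and $t_{\rm vis}\approx780$\s{}, close to the values quoted above.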
We also detected a soft lag where the 0.8$-$1\keV{} band emission lags behind the harder 1.5$-$5\keV{} band emission by $38-328$\s{} in the higher frequency range $\sim(1-2)\times10^{-4}$\hz{}. We estimated the expected soft lag for Mrk~1044 using the scaling relation between the soft lag amplitude ($|\tau|$) and SMBH mass ($M_{\rm BH}$) from \citet{dm13}: $\log|\tau| = 1.98[\pm0.08] + 0.59[\pm0.11]\log(M_{\rm BH})$, where $M_{\rm BH}$ is the SMBH mass in units of $10^{7}M_{\odot}$. For Mrk~1044, the scaling relation predicts a soft lag amplitude in the range $|\tau|=44-49$\s{}, which is consistent with our measured time-lag. The origin of the soft lag can be explained in the context of the reverberation scenario, where the soft X-ray excess emission is produced by relativistic reflection from an ionized accretion disc and the lag amplitude corresponds to the light-crossing time between the direct power-law emitting hot corona and the soft X-ray emitting inner disc. For Mrk~1044, the measured soft lag of ($38-328$)\s{} implies that the physical separation between the corona and the soft X-ray emitting inner disc is in the range $r=(2.5-22)r_{\rm g}$. This radius is consistent with our average energy spectral fitting with the high-density relativistic reflection model, which revealed that the soft X-ray excess emission originates above the break radius of the accretion disc ($r_{\rm br}\sim3r_{\rm g}$). However, we do not detect an Fe-K lag in Mrk~1044, which may have several causes. Since Mrk~1044 is a low-mass AGN and the Fe-K emission originates from the innermost region ($r\sim1.3r_{\rm g}$) of the disc, the predicted Fe-K lag amplitude is only $\sim20$\s{}, which is difficult to detect given the signal-to-noise in the hard band.
Another reason for the non-detection of the Fe-K lag could be due to the absence of correlated variability between the direct continuum and disc reflection or changes in the height or radius of the corona, which we discuss in the next section.
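The scaling-relation prediction and the corona-disc separation implied by the measured lags follow from simple arithmetic. The sketch below assumes $M_{\rm BH}\approx3\times10^{6}M_{\odot}$, a value consistent with the predicted lag range quoted above but not stated explicitly in this section.

```python
import math

M_BH = 3.0e6                              # assumed SMBH mass in solar masses
M7 = M_BH / 1.0e7                         # mass in units of 1e7 Msun

# De Marco et al. (2013) soft-lag vs. mass scaling, central values
tau_pred = 10.0 ** (1.98 + 0.59 * math.log10(M7))   # predicted soft-lag amplitude in s

# Light-crossing time of one gravitational radius, t_g = G M / c^3
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
t_g = G * M_BH * Msun / c**3              # ~15 s for 3e6 Msun

# Corona-disc separation implied by the measured soft lags of 38-328 s
r_min, r_max = 38.0 / t_g, 328.0 / t_g    # in units of r_g
```

This gives $|\tau|\approx47$\s{} and $r\approx(2.6-22)r_{\rm g}$, matching the ranges quoted above.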
\subsection{Origin of the broadband X-ray variability and disc/corona connection}
The X-ray emission from the source is variable and the variability amplitude is both energy and frequency dependent. The source showed a decrease in fractional rms amplitude with energy, and the shape of the variability spectrum is very similar in the low ($\nu_{\rm low}\sim[1.7-10]\times10^{-5}$\hz{}) and high ($\nu_{\rm high}\sim[1-10]\times10^{-4}$\hz{}) frequency bands, although the amplitude differs between timescales (see Fig.~\ref{fvar_nu_resol}). The fractional variability amplitudes of the source in the $0.3-10$\keV{}, $0.3-2$\keV{} and $2-10$\keV{} bands are $(15.9\pm0.1)$~per~cent, $(16.5\pm0.1)$~per~cent and $(11.1\pm0.4)$~per~cent, respectively, on timescales of $\sim 5-60$\ks{}, and $(17.1\pm0.1)$~per~cent, $(17.6\pm0.2)$~per~cent and $(13.4\pm0.5)$~per~cent, respectively, on timescales of $\sim0.5-10$\ks{}. Thus, the X-ray emission from Mrk~1044 is more variable on short timescales ($<10$\ks{}) in all three energy bands, implying that the shorter timescale variability originates in a more compact emitting region, radiating at smaller disc radii and hence closer to the SMBH. The modelling of the frequency-averaged fractional rms spectrum reveals the presence of two variability components: direct coronal and reflected inner disc emission, both of which vary either in an uncorrelated or moderately anti-correlated manner, with the disc reflection variability about a factor of $2$ smaller than the coronal variability. The primary power-law emission from the hot corona is variable both in flux and in spectral index. We find that the flux and spectral index variability are positively correlated with each other at the $\sim79-100$~per~cent level, which can be explained in the context of Compton cooling of the soft seed photons in the hot corona.
The lack of positive correlation between the hot coronal and inner disc reflected emission indicates that either the primary X-ray emitting hot corona is compact and moving closer to or further from the central SMBH, or the corona is extended and is expanding or contracting with time (see \citealt{wi14}). In this scenario, the total number of photons emitted from the source remains constant and the observed energy-dependent X-ray variability is mainly caused by changes in the location or geometry of the corona, i.e. in its height if the corona is compact or in its radius if the corona is extended.
\section{Acknowledgments}
We thank the anonymous referee for useful comments that improved the quality of the paper. LM gratefully acknowledges financial support from the University Grants Commission (UGC), Government of India. LM is highly grateful to the University of Cambridge for supporting an academic visit. WNA and ACF acknowledge support from the European Union Seventh Framework Programme (FP7/2013-2017) under grant agreement no. 312789, StrongGravity. WNA, ACF and CP acknowledge support from the European Research Council through Advanced Grant 340442, on Feedback. This research has made use of archival data of \xmm{}, \swift{} and \nustar{} observatories through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA Goddard Space Flight Center. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the NASA. This research has made use of the XRT Data Analysis Software (XRTDAS) developed under the responsibility of the ASI Science Data Center (ASDC), Italy. This research has made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (Caltech, USA). This research has made use of ISIS functions (ISISscripts) provided by ECAP/Remeis observatory and MIT (http://www.sternwarte.uni-erlangen.de/isis/). Figures in this paper were made with the graphics package \textsc{pgplot} and GUI scientific plotting package \textsc{veusz}. This research made use of the \textsc{python} packages \textsc{numpy}, \textsc{scipy} and \textsc{matplotlib}.
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\sect{Introduction\label{intro}}
One of the aims of this paper is to introduce ${{\bf Z}_2}$ graded versions of
Drinfeld's quasi-Hopf algebras \cite{Dri90}, which are referred to as
quasi-Hopf superalgebras. We then introduce elliptic quantum
supergroups, which are defined as quasi-triangular quasi-Hopf
superalgebras arising from twisting the normal quantum supergroups by
twistors which satisfy the graded shifted cocycle condition,
thus generalizing Drinfeld's quasi-Hopf twisting
procedure \cite{Bab96,Fro97,Jim97,Arn97,Enr97}
to the supersymmetric case.
We adopt the approach in \cite{Jim97}
and construct two types of twistors, i.e. the face type twistor
associated to any Kac-Moody superalgebra ${\cal G}$ with a
symmetrizable generalized Cartan matrix and the vertex type twistor
associated to $\widehat{sl(n|n)}$ in a non-standard
simple root system in which all simple roots are odd (or fermionic).
It should be pointed out that the face type twistors for certain
classes of {\it non-affine} simple superalgebras were also constructed in
\cite{Arn97}.
The elliptic quantum groups \cite{Fod94,Fel95}
are believed to provide the
underlying algebraic structures for integrable models based on elliptic
solutions of the (dynamical) Yang-Baxter equation, such as Baxter's
8-vertex model \cite{Bax72}, the ABF model \cite{And84}
and their group theoretical generalizations \cite{Bel81,Jim88}.
The elliptic quantum supergroups described in this paper are
expected to play a similar role in supersymmetric integrable models
based on elliptic solutions \cite{Baz85,Deg91}
of the graded (dynamical) Yang-Baxter equation.
\sect{Quasi-Hopf Superalgebras}
\begin{Definition}\label{quasi-bi}:
A ${{\bf Z}_2}$ graded quasi-bialgebra is a ${{\bf Z}_2}$ graded
unital associative algebra $A$
over a field $K$ which is equipped with algebra homomorphisms $\epsilon:
A\rightarrow K$ (co-unit), $\Delta: A\rightarrow A\otimes A$ (co-product)
and an invertible homogeneous element $\Phi\in A\otimes A\otimes A$
(co-associator) satisfying
\begin{eqnarray}
&& (1\otimes\Delta)\Delta(a)=\Phi^{-1}(\Delta\otimes 1)\Delta(a)\Phi,~~
\forall a\in A,\label{quasi-bi1}\\
&&(\Delta\otimes 1\otimes 1)\Phi \cdot (1\otimes 1\otimes\Delta)\Phi
=(\Phi\otimes 1)\cdot(1\otimes\Delta\otimes 1)\Phi\cdot (1\otimes
\Phi),\label{quasi-bi2}\\
&&(\epsilon\otimes 1)\Delta=1=(1\otimes\epsilon)\Delta,\label{quasi-bi3}\\
&&(1\otimes\epsilon\otimes 1)\Phi=1.\label{quasi-bi4}
\end{eqnarray}
\end{Definition}
Equations (\ref{quasi-bi2}), (\ref{quasi-bi3}) and (\ref{quasi-bi4}) imply
that $\Phi$ also obeys
\begin{equation}
(\epsilon\otimes 1\otimes 1)\Phi=1=(1\otimes 1\otimes\epsilon)\Phi.\label{e(phi)=1}
\end{equation}
The multiplication rule for the tensor products is ${{\bf Z}_2}$ graded and is defined
for homogeneous elements $a,b,a',b'\in A$ by
\begin{equation}
(a\otimes b)(a'\otimes b')=(-1)^{[b][a']}\,(aa'\otimes bb'),
\end{equation}
where $[a]\in{\bf Z}_2$
denotes the grading of the element $a$.
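As a simple illustration of the sign rule, take homogeneous odd
elements $a,b\in A$ with $[a]=[b]=1$. Then
\begin{equation}
(a\otimes 1)(1\otimes b)=a\otimes b,~~~~
(1\otimes b)(a\otimes 1)=(-1)^{[b][a]}\,a\otimes b=-a\otimes b,
\end{equation}
so that elements sitting in different tensor slots anti-commute whenever
both are odd.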
\begin{Definition}\label{quasi-hopf}: A quasi-Hopf superalgebra is
a ${{\bf Z}_2}$ graded quasi-bialgebra $(A,\Delta,\epsilon,\Phi)$ equipped with a
${{\bf Z}_2}$ graded algebra anti-homomorphism $S: A\rightarrow A$ (antipode) and
canonical elements $\alpha,~\beta\in A$ such that
\begin{eqnarray}
&& m\cdot (1\otimes\alpha)(S\otimes 1)\Delta(a)=\epsilon(a)\alpha,~~~\forall
a\in A,\label{quasi-hopf1}\\
&& m\cdot (1\otimes\beta)(1\otimes S)\Delta(a)=\epsilon(a)\beta,~~~\forall a\in A,
\label{quasi-hopf2}\\
&& m\cdot (m\otimes 1)\cdot (1\otimes\beta\otimes\alpha)(1\otimes S\otimes
1)\Phi^{-1}=1,\label{quasi-hopf3}\\
&& m\cdot(m\otimes 1)\cdot (S\otimes 1\otimes 1)(1\otimes\alpha\otimes
\beta)(1\otimes 1\otimes S)\Phi=1.\label{quasi-hopf4}
\end{eqnarray}
\end{Definition}
Here $m$ denotes the usual product map on $A$: $m\cdot (a\otimes b)=ab,~
\forall a,b\in A$. Note that since $A$ is associative we have
$m\cdot(m\otimes 1)=m\cdot (1\otimes m)$.
For the homogeneous elements $a,b\in A$, the antipode satisfies
\begin{equation}
S(ab)=(-1)^{[a][b]}S(b)S(a),
\end{equation}
which extends to inhomogeneous elements through linearity.
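For instance, iterating this rule on a product of three homogeneous
elements gives
\begin{equation}
S(abc)=(-1)^{[a][b]+[a][c]+[b][c]}S(c)S(b)S(a).
\end{equation}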
Applying $\epsilon$ to (\ref{quasi-hopf3}) and (\ref{quasi-hopf4})
we obtain, in view
of (\ref{quasi-bi4}), $\epsilon(\alpha)\epsilon(\beta)=1$. It follows that the canonical
elements $\alpha, \beta$ are both even. By applying $\epsilon$ to
(\ref{quasi-hopf1}), we have $\epsilon(S(a))=\epsilon(a),~\forall a\in A$.
In the following we show that the category of quasi-Hopf superalgebras
is invariant under a kind of gauge transformation. Let $(A,\Delta,\epsilon,\Phi)$
be a quasi-Hopf superalgebra, with $\alpha,\beta, S$ satisfying
(\ref{quasi-hopf1})-(\ref{quasi-hopf4}), and let $F\in A\otimes A$
be an invertible homogeneous element satisfying the co-unit properties
\begin{equation}
(\epsilon\otimes 1)F=1=(1\otimes \epsilon)F.\label{e(f)=1}
\end{equation}
It follows that $F$ is even. Throughout we set
\begin{eqnarray}
&&\Delta_F(a)=F\Delta(a)F^{-1},~~~\forall a\in A,\label{twisted-d}\\
&&\Phi_F=(F\otimes 1)(\Delta\otimes
1)F\cdot\Phi\cdot(1\otimes\Delta)F^{-1}(1\otimes F^{-1}).\label{twisted-phi}
\end{eqnarray}
\begin{Theorem}\label{t-quasi-hopf}:
$(A,\Delta_F,\epsilon,\Phi_F)$ defined by (\ref{twisted-d},
\ref{twisted-phi}) together with
$\alpha_F,\beta_F, S_F$ given by
\begin{equation}
S_F=S,~~~\alpha_F=m\cdot(1\otimes\alpha)(S\otimes 1)F^{-1},~~~
\beta_F=m\cdot(1\otimes\beta)(1\otimes S)F,\label{twisted-s-ab}
\end{equation}
is also a quasi-Hopf superalgebra. The element $F$ is referred to as
a twistor, throughout.
\end{Theorem}
The proof of this theorem is elementary. For demonstration we
show in some detail the proof of the antipode properties.
Care has to be taken
of the gradings in tensor product multiplications and also in extending
the antipode to the whole algebra. First of all let us state
\begin{Lemma}\label{L1}: For any elements $\eta\in A\otimes A$ and
$\xi\in A\otimes A\otimes A$,
\begin{eqnarray}
&&m\cdot(1\otimes\alpha_F)(S\otimes 1)\eta=m\cdot(1\otimes \alpha)
(S\otimes 1)(F^{-1}\eta),\label{L11}\\
&&m\cdot(1\otimes\beta_F)(1\otimes S)\eta=m\cdot(1\otimes\beta)
(1\otimes S)(\eta F),\label{L12}\\
&&m\cdot(m\otimes 1)\cdot(1\otimes\beta_F\otimes\alpha_F)
(1\otimes S\otimes 1)\xi\nonumber\\
&&~~~~=m\cdot(m\otimes 1)\cdot
(1\otimes\beta\otimes\alpha)(1\otimes S\otimes 1)
[(1\otimes F^{-1})\cdot\xi\cdot(F\otimes 1)],\label{L13}\\
&&m\cdot(m\otimes 1)\cdot(S\otimes 1\otimes 1)(1\otimes\alpha_F\otimes
\beta_F)(1\otimes 1\otimes S)\xi\nonumber\\
&&~~~~=m\cdot(m\otimes 1)\cdot
(S\otimes 1\otimes 1)(1\otimes\alpha\otimes\beta)(1\otimes 1\otimes
S)\nonumber\\
&&~~~~\cdot[(F^{-1}\otimes 1)\cdot\xi\cdot(1\otimes F)].\label{L14}
\end{eqnarray}
\end{Lemma}
\vskip.1in
\noindent {\em Proof}: Write
$F=f_i\otimes f^i$ and $F^{-1}=\bar{f}_i\otimes\bar{f}^i$. Here
and throughout, summation convention on repeated indices is
assumed. Then
(\ref{twisted-s-ab}) can be written as
\begin{equation}
\alpha_F=S(\bar{f}_i)\alpha\bar{f}^i,~~~~~~~\beta_F=f_i\beta S(f^i).\label{twisted-ab}
\end{equation}
Further write $\eta=\eta_k\otimes\eta^k$ and
$\xi=\sum_ix_i\otimes y_i\otimes z_i$.
Then
\begin{eqnarray}
{\rm l.h.s.~of ~(\ref{L11})}&=&m\cdot (1\otimes S(\bar{f}_i)\alpha\bar{f}^i)
(S(\eta_k)\otimes\eta^k)=m\cdot(S(\eta_k)\otimes S(\bar{f}_i)\alpha
\bar{f}^i\eta^k)\nonumber\\
&=&S(\eta_k)S(\bar{f}_i)\alpha\bar{f}^i\eta^k
=S(\bar{f}_i\eta_k)\alpha\bar{f}^i\eta^k
\times (-1)^{[\eta_k][\bar{f}_i]},\nonumber\\
{\rm r.h.s.~of~(\ref{L11})}&=&m\cdot(1\otimes\alpha)(S\otimes 1)
(\bar{f}_i\eta_k\otimes
\bar{f}^i\eta^k)\times(-1)^{[\bar{f}^i][\eta_k]}\nonumber\\
& =& S(\bar{f}_i\eta_k)\alpha\bar{f}^i\eta^k\times (-1)^{[\bar{f}_i][\eta_k]},\nonumber
\end{eqnarray}
thus proving (\ref{L11}). (\ref{L12}) can be proved
similarly. As for (\ref{L13}) we have:
\begin{eqnarray}
{\rm l.h.s.~of~(\ref{L13})}&=&\sum_ix_i\beta_FS(y_i)\alpha_Fz_i
=\sum_ix_if_j\beta S(f^j)S(y_i)S(\bar{f}_k)\alpha\bar{f}^kz_i\nonumber\\
&=&\sum_ix_if_j\beta S(\bar{f}_ky_if^j)\alpha\bar{f}^kz_i
\times(-1)^{[y_i]([f_j]+[\bar{f}_k])+[\bar{f}_k][f_j]},\nonumber\\
{\rm r.h.s.~of~(\ref{L13})}&=&m\cdot(m\otimes 1)\cdot(1\otimes\beta\otimes\alpha)
(1\otimes S\otimes 1)\nonumber\\
& &\cdot\sum_i[x_if_j\otimes \bar{f}_ky_if^j\otimes\bar{f}^kz_i]
\times(-1)^{[y_i]([f_j]+[\bar{f}_k])+[\bar{f}_k][f_j]},\nonumber\\
&=&\sum_ix_if_j\beta S(\bar{f}_ky_if^j)\alpha\bar{f}^kz_i
\times(-1)^{[y_i]([f_j]+[\bar{f}_k])+[\bar{f}_k][f_j]},\nonumber
\end{eqnarray}
where we have used the fact that the element $F$ is even.
(\ref{L14}) is proved similarly.
Now let us prove the property (\ref{quasi-hopf1}) for $\alpha_F$ and
$\Delta_F$. We write, following Sweedler,
\begin{equation}
\Delta(a)=\sum_{(a)}a_{(1)}\otimes a_{(2)}.
\end{equation}
Then, in view of lemma \ref{L1},
\begin{eqnarray}
m\cdot(1\otimes\alpha_F)(S\otimes 1)\Delta_F(a)&=&m\cdot(1\otimes\alpha)(S\otimes
1)(F^{-1}\Delta_F(a))\nonumber\\
&=&m\cdot(1\otimes\alpha)(S\otimes 1)(\Delta(a)F^{-1})\nonumber\\
&=&m\cdot(1\otimes\alpha)\sum_{(a)}(S(a_{(1)}\bar{f}_i)\otimes a_{(2)}
\bar{f}^i)\times (-1)^{[\bar{f}_i][a_{(2)}]}\nonumber\\
&=&S(\bar{f}_i)\sum_{(a)}S(a_{(1)})\alpha a_{(2)}\bar{f}^i\times (-1)^{[\bar{f}_i]
([a_{(1)}]+[a_{(2)}])}\nonumber\\
&=&S(\bar{f}_i)\sum_{(a)}S(a_{(1)})\alpha a_{(2)}\bar{f}^i
\times(-1)^{[\bar{f}_i][a]}\nonumber\\
&=&(-1)^{[\bar{f}_i][a]}S(\bar{f}_i)\;\sum_{(a)}S(a_{(1)})\alpha
a_{(2)}\bar{f}^i\nonumber\\
&\stackrel{(\ref{quasi-hopf1})}{=}&
S(\bar{f}_i)\epsilon(a)\alpha\bar{f}^i\times(-1)^{[\bar{f}_i][a]}\nonumber\\
&=&S(\bar{f}_i)\epsilon(a)\alpha\bar{f}^i\stackrel{(\ref{twisted-ab})}{=}\epsilon(a)\alpha_F,
\end{eqnarray}
where we have used the fact that
\begin{equation}
\epsilon(a)=0,~~~~{\rm if~} [a]=1.\label{e(a)=0}
\end{equation}
The property (\ref{quasi-hopf2}) for
$\beta_F$ and $\Delta_F$ is proved similarly. We then prove property
(\ref{quasi-hopf3}), which reads in terms of the twisted objects
\begin{equation}
m\cdot (m\otimes 1)\cdot (1\otimes\beta_F\otimes\alpha_F)(1\otimes S\otimes
1)\Phi_F^{-1}=1.\label{extra1}
\end{equation}
Let us write
\begin{equation}
\Phi^{-1}=\sum_\nu \bar{X}_\nu\otimes\bar{Y}_\nu\otimes\bar{Z}_\nu.
\end{equation}
Then, in view of (\ref{L13}),
\begin{eqnarray}
{\rm l.h.s.~of~(\ref{extra1})}&=&m\cdot(m\otimes 1)\cdot
(1\otimes\beta\otimes\alpha)(1\otimes S\otimes 1)[(1\otimes F^{-1})
\Phi_F^{-1}(F\otimes 1)]\nonumber\\
&=&m\cdot(m\otimes 1)\cdot
(1\otimes\beta\otimes\alpha)(1\otimes S\otimes 1)[(1\otimes\Delta)F\cdot
\Phi^{-1}\cdot(\Delta\otimes 1)F^{-1}]\nonumber\\
&=&m\cdot(m\otimes 1)\cdot
(1\otimes\beta\otimes\alpha)(1\otimes S\otimes 1)\nonumber\\
& &\cdot \sum_{\nu,(f),(\bar{f})}
[f_i\bar{X}_\nu\bar{f}_{j(1)}\otimes f_{(1)}^i\bar{Y}_\nu
\bar{f}_{j(2)}\otimes f^i_{(2)}\bar{Z}_\nu\bar{f}^j]\nonumber\\
& &\times (-1)^{([\bar{X}_\nu]+[\bar{f}_{j(1)}])([f^i_{(1)}]+[f^i_{(2)}])
+[\bar{Z}_\nu]([\bar{f}_{j(1)}]+[\bar{f}_{j(2)}])
+[\bar{Y}_\nu]([\bar{f}_{j(1)}]+[f^i_{(2)}])
+[f^i_{(2)}][\bar{f}_{j(2)}]}\nonumber\\
&=&\sum_\nu f_i\bar{X}_\nu\sum_{(\bar{f})}\bar{f}_{j(1)}\beta
S(\bar{f}_{j(2)})S(\bar{Y}_\nu)\sum_{(f)}S(f_{(1)}^i)\alpha
f^i_{(2)}\bar{Z}_\nu\bar{f}^j\nonumber\\
& &\cdot (-1)^{([\bar{X}_\nu]+[\bar{Y}_\nu])([f^i_{(1)}]+[f^i_{(2)}])
+([\bar{Y}_\nu]+[\bar{Z}_\nu])([\bar{f}_{j(1)}]+[\bar{f}_{j(2)}])
+([f^i_{(1)}]+[f^i_{(2)}])([\bar{f}_{j(1)}]+[\bar{f}_{j(2)}])}\nonumber\\
&=&\sum_\nu f_i\bar{X}_\nu\sum_{(\bar{f})}\bar{f}_{j(1)}\beta
S(\bar{f}_{j(2)})S(\bar{Y}_\nu)\sum_{(f)}S(f_{(1)}^i)\alpha
f^i_{(2)}\bar{Z}_\nu\bar{f}^j\nonumber\\
& &\cdot (-1)^{([\bar{X}_\nu]+[\bar{Y}_\nu])[f^i]
+([\bar{Y}_\nu]+[\bar{Z}_\nu])[\bar{f}_j]
+[f^i][\bar{f}_j]}\nonumber\\
&=&\sum_\nu f_i\bar{X}_\nu\cdot (-1)^{([\bar{X}_\nu]+[\bar{Y}_\nu])[f^i]
+([\bar{Y}_\nu]+[\bar{Z}_\nu])[\bar{f}_j]
+[f^i][\bar{f}_j]}\nonumber\\
& &\cdot \sum_{(\bar{f})}\bar{f}_{j(1)}\beta
S(\bar{f}_{j(2)})S(\bar{Y}_\nu)\sum_{(f)}S(f_{(1)}^i)\alpha
f^i_{(2)}\bar{Z}_\nu\bar{f}^j\nonumber\\
&\stackrel{(\ref{quasi-hopf1}, \ref{quasi-hopf2})}{=}&
\sum_\nu f_i\bar{X}_\nu\epsilon(\bar{f}_{j})\beta
S(\bar{Y}_\nu)\epsilon(f^i)\alpha
\bar{Z}_\nu\bar{f}^j
\cdot(-1)^{([\bar{X}_\nu]+[\bar{Y}_\nu])[f^i]
+([\bar{Y}_\nu]+[\bar{Z}_\nu])[\bar{f}_j]
+[f^i][\bar{f}_j]}\nonumber\\
&\stackrel{(\ref{e(a)=0})}{=}&
\sum_\nu f_i\bar{X}_\nu\epsilon(\bar{f}_{j})\beta
S(\bar{Y}_\nu)\epsilon(f^i)\alpha
\bar{Z}_\nu\bar{f}^j\nonumber\\
&=&m\cdot(m\otimes 1)\cdot
(1\otimes\beta\otimes\alpha)(1\otimes S\otimes 1)\nonumber\\
& &\cdot [((1\otimes\epsilon)F\otimes 1)\cdot
\Phi^{-1}\cdot((\epsilon\otimes 1)F^{-1}\otimes 1)]\nonumber\\
&\stackrel{(\ref{e(f)=1})}{=}&m\cdot(m\otimes 1)\cdot
(1\otimes\beta\otimes\alpha)(1\otimes S\otimes 1)\Phi^{-1}
\stackrel{(\ref{quasi-hopf3})}{=}1.\nonumber
\end{eqnarray}
The property (\ref{quasi-hopf4}) for the twisted objects, which reads,
\begin{equation}
m\cdot(m\otimes 1)\cdot (S\otimes 1\otimes 1)(1\otimes\alpha_F\otimes
\beta_F)(1\otimes 1\otimes S)\Phi_F=1,
\end{equation}
is proved in a similar way.
\begin{Definition}\label{quasi-quasi}: A quasi-Hopf
superalgebra $(A,\Delta,\epsilon,\Phi)$ is called quasi-triangular if there
exists an invertible homogeneous element ${\cal R}\in A\otimes A$ such that
\begin{eqnarray}
&&\Delta^T(a){\cal R}={\cal R}\Delta(a),~~~~\forall a\in A,\label{dr=rd}\\
&&(\Delta\otimes 1){\cal R}=\Phi^{-1}_{231}{\cal R}_{13}\Phi_{132}{\cal R}_{23}\Phi^{-1}_{123},
\label{d1r}\\
&&(1\otimes \Delta){\cal R}=\Phi_{312}{\cal R}_{13}\Phi^{-1}_{213}{\cal R}_{12}\Phi_{123}.
\label{1dr}
\end{eqnarray}
\end{Definition}
Throughout, $\Delta^T=T\cdot\Delta$ with $T$ being the graded twist map
which is defined, for homogeneous elements $a,b\in A$, by
\begin{equation}
T(a\otimes b)=(-1)^{[a][b]}b\otimes a;
\end{equation}
and $\Phi_{132}$ {\it etc} are derived from $\Phi\equiv\Phi_{123}$
with the help of $T$
\begin{eqnarray}
&&\Phi_{132}=(1\otimes T)\Phi_{123},\nonumber\\
&&\Phi_{312}=(T\otimes 1)\Phi_{132}=(T\otimes 1)
(1\otimes T)\Phi_{123},\nonumber\\
&&\Phi^{-1}_{231}=(1\otimes T)\Phi^{-1}_{213}=(1\otimes T)
(T\otimes 1)\Phi^{-1}_{123},\nonumber
\end{eqnarray}
and so on. We remark that our convention differs from the usual one
which employs the inverse permutation on the positions (c.f.
\cite{Jim97}).
It is easily shown that the properties (\ref{dr=rd})-(\ref{1dr})
imply the graded Yang-Baxter type equation,
\begin{equation}
{\cal R}_{12}\Phi^{-1}_{231}{\cal R}_{13}\Phi_{132}{\cal R}_{23}\Phi^{-1}_{123}
=\Phi^{-1}_{321}{\cal R}_{23}\Phi_{312}{\cal R}_{13}\Phi^{-1}_{213}{\cal R}_{12},
\label{quasi-ybe}
\end{equation}
which is referred to as the graded quasi-Yang-Baxter equation,
and the co-unit properties of ${\cal R}$:
\begin{equation}
(\epsilon\otimes 1){\cal R}=1=(1\otimes \epsilon){\cal R}.\label{e(R)=1}
\end{equation}
\begin{Theorem}\label{t-quasi-quasi}: Let
$(A,\Delta,\epsilon,\Phi,{\cal R})$ be a
quasi-triangular quasi-Hopf superalgebra. Then $(A, \Delta_F, \epsilon, \Phi_F, {\cal R}_F)$
is also a quasi-triangular quasi-Hopf superalgebra, with the choice of
${\cal R}_F$ given by
\begin{equation}
{\cal R}_F=F^T {\cal R} F^{-1},\label{twisted-R}
\end{equation}
where $F^T=T\cdot F\equiv F_{21}$. Here $\Delta_F$ and $\Phi_F$ are given
by (\ref{twisted-d}) and (\ref{twisted-phi}), respectively.
\end{Theorem}
The proof of this theorem is an elementary
computation. As an example,
let us illustrate the proof of the property (\ref{d1r}) for $\Delta_F,
{\cal R}_F$ and $\Phi_F$. Applying the homomorphism $T\otimes 1$ to
$(\Phi^{-1}_F)_{123}$, one obtains
\begin{eqnarray}
(\Phi_F^{-1})_{213}&=&F_{13}(T\otimes 1)(1\otimes \Delta)F\cdot
\Phi^{-1}_{213}\cdot(\Delta^T\otimes 1)F^{-1}\cdot (F^T)^{-1}_{12}\nonumber\\
&=&F_{13}\sum_{(f)}(-1)^{[f^i_{(1)}][f_i]}(f^i_{(1)}\otimes f_i\otimes
f^i_{(2)})\Phi^{-1}_{213}(\Delta^T\otimes 1)F^{-1}\cdot (F^T)^{-1}_{12},
\label{1}
\end{eqnarray}
which gives rise to, by applying
the homomorphism $1\otimes T$ to both sides,
\begin{eqnarray}
(\Phi^{-1}_F)_{231}&=&F_{12}\sum_{(f)}(-1)^{([f^i_{(1)}]+[f^i_{(2)}])[f_i]}
(f^i_{(1)}\otimes
f^i_{(2)}\otimes f_i)\Phi^{-1}_{231}(1\otimes T)(\Delta^T\otimes 1)F^{-1}
\cdot (F^T)^{-1}_{13}\nonumber\\
&=&F_{12}(\Delta\otimes 1)F^T\cdot\Phi^{-1}_{231}(1\otimes T)(\Delta^T\otimes
1)F^{-1}\cdot (F^T)^{-1}_{13}.\label{2}
\end{eqnarray}
Then,
\begin{eqnarray}
(\Delta_F\otimes 1){\cal R}_F&=&(F\otimes 1)(\Delta\otimes 1){\cal R}_F\cdot(F^{-1}\otimes
1)\nonumber\\
&=&F_{12}(\Delta\otimes 1)(F^T{\cal R} F^{-1})\cdot F^{-1}_{12}\nonumber\\
&=&F_{12}(\Delta\otimes 1)F^T(\Delta\otimes 1){\cal R}(\Delta\otimes 1)F^{-1}\cdot
F^{-1}_{12}\nonumber\\
&\stackrel{(\ref{d1r})}{=}&F_{12}(\Delta\otimes 1)F^T\cdot\Phi^{-1}_{231}{\cal R}_{13}
\Phi_{132}{\cal R}_{23}\Phi^{-1}_{123}(\Delta\otimes 1)F^{-1}\cdot
F^{-1}_{12}\nonumber\\
&\stackrel{(\ref{2})}{=}&(\Phi^{-1}_F)_{231}(F^T)_{13}(1\otimes T)
(\Delta^T\otimes 1)F\cdot {\cal R}_{13}
\Phi_{132}{\cal R}_{23}\Phi^{-1}_{123}(\Delta\otimes 1)F^{-1}\cdot
F^{-1}_{12}\nonumber\\
&\stackrel{(\ref{twisted-phi})}{=}&(\Phi^{-1}_F)_{231}(F^T)_{13}(1\otimes T)
(\Delta^T\otimes 1)F\nonumber\\
& & \cdot {\cal R}_{13}
\Phi_{132}{\cal R}_{23}(1\otimes \Delta)F^{-1}\cdot
F^{-1}_{23}(\Phi^{-1}_F)_{123}\nonumber\\
&=&(\Phi^{-1}_F)_{231}(F^T)_{13}(1\otimes T)[
(\Delta^T\otimes 1)F\cdot {\cal R}_{12}]\nonumber\\
& &\cdot \Phi_{132}{\cal R}_{23}(1\otimes \Delta)F^{-1}\cdot
F^{-1}_{23}(\Phi^{-1}_F)_{123}\nonumber\\
&\stackrel{(\ref{dr=rd})}{=}&(\Phi^{-1}_F)_{231}(F^T)_{13}(1\otimes T)[
{\cal R}_{12}(\Delta\otimes 1)F]\nonumber\\
& &\cdot \Phi_{132}(1\otimes \Delta^T)F^{-1}\cdot {\cal R}_{23}
F^{-1}_{23}(\Phi^{-1}_F)_{123}\nonumber\\
&=&(\Phi^{-1}_F)_{231}(F^T)_{13}{\cal R}_{13}(1\otimes T)[
(\Delta\otimes 1)F]\nonumber\\
& &\cdot \Phi_{132}(1\otimes \Delta^T)F^{-1}\cdot {\cal R}_{23}
F^{-1}_{23}(\Phi^{-1}_F)_{123}\nonumber\\
&\stackrel{(\ref{twisted-R})}{=}&(\Phi^{-1}_F)_{231}({\cal R}_F)_{13}F^{-1}_{13}
(1\otimes T)[(\Delta\otimes 1)F]\nonumber\\
& &\cdot \Phi_{132}(1\otimes \Delta^T)F^{-1}
(F^T)^{-1}_{23}({\cal R}_F)_{23}(\Phi^{-1}_F)_{123}\nonumber\\
&=&(\Phi^{-1}_F)_{231}({\cal R}_F)_{13}(1\otimes T)[F^{-1}_{12}
(\Delta\otimes 1)F
\Phi_{123}(1\otimes \Delta)F^{-1}\cdot
F^{-1}_{23}]\nonumber\\
& &\cdot ({\cal R}_F)_{23}(\Phi^{-1}_F)_{123}\nonumber\\
&\stackrel{(\ref{twisted-phi})}{=}&(\Phi^{-1}_F)_{231}({\cal R}_F)_{13}(1\otimes T)
(\Phi_F)_{123}\cdot
({\cal R}_F)_{23}(\Phi^{-1}_F)_{123}\nonumber\\
&=&(\Phi^{-1}_F)_{231}({\cal R}_F)_{13}(\Phi_F)_{132}
({\cal R}_F)_{23}(\Phi^{-1}_F)_{123}.
\end{eqnarray}
\vskip.1in
Let us now consider the special case in which $A$ arises from a normal
quasi-triangular Hopf superalgebra via twisting with $F$.
A quasi-triangular Hopf superalgebra is a quasi-triangular
quasi-Hopf superalgebra with $\alpha=\beta=1,~\Phi=1\otimes 1\otimes 1$.
Hence $A$ has the following ${{\bf Z}_2}$ graded quasi-Hopf superalgebra structure,
\begin{eqnarray}
&&\Delta_F(a)=F\Delta(a)F^{-1},~~~~\forall a\in A,\nonumber\\
&&\Phi_F=F_{12}\cdot(\Delta\otimes 1)F\cdot(1\otimes\Delta)F^{-1}\cdot
F_{23}^{-1},\nonumber\\
&&\alpha_F=m\cdot(S\otimes 1)F^{-1},~~~~\beta_F=m\cdot(1\otimes S)F,\nonumber\\
&&{\cal R}_F=F^T {\cal R} F^{-1}.\label{twisting-qg}
\end{eqnarray}
The twisting procedure is particularly interesting
when the twistor $F\in A\otimes A$ depends on an element $\lambda\in A$,
i.e. $F=F(\lambda)$, and is a shifted cocycle in the following sense.
Here $\lambda$ is assumed to depend on one (or possibly several) parameters.
\begin{Definition}: A twistor $F(\lambda)$ depending
on $\lambda\in A$ is a shifted cocycle if it satisfies the graded
shifted cocycle condition:
\begin{equation}
F_{12}(\lambda)\cdot (\Delta\otimes 1)F(\lambda)=F_{23}(\lambda+h^{(1)})\cdot
(1\otimes \Delta)F(\lambda),\label{shifted-cocycle}
\end{equation}
where $h^{(1)}=h\otimes 1\otimes 1$ and $h\in A$ is fixed.
\end{Definition}
Let $(A,\Delta_\lambda,\epsilon,\Phi(\lambda),{\cal R}(\lambda))$ be the quasi-triangular quasi-Hopf
superalgebra obtained from twisting the quasi-triangular Hopf superalgebra
by the twistor $F(\lambda)$. Then
\begin{Proposition}: We have
\begin{eqnarray}
&&\Phi(\lambda)\equiv\Phi_F=F_{23}(\lambda+h^{(1)})F_{23}(\lambda)^{-1},\label{phi-lambda}\\
&&\Delta^T_\lambda(a)\,{\cal R}(\lambda)={\cal R}(\lambda)\Delta_\lambda(a),~~~~\forall a\in
A,\label{d-lambda}\\
&&(\Delta_\lambda\otimes 1){\cal R}(\lambda)=\Phi_{231}(\lambda)^{-1}{\cal R}_{13}(\lambda){\cal R}_{23}
(\lambda+h^{(1)}),\label{d-lambda-1-r}\\
&&(1\otimes \Delta_\lambda){\cal R}(\lambda)={\cal R}_{13}(\lambda+h^{(2)}){\cal R}_{12}(\lambda)\Phi_{123}(\lambda).
\label{1-d-lambda-r}
\end{eqnarray}
As a corollary, ${\cal R}(\lambda)$ satisfies the graded dynamical Yang-Baxter
equation
\begin{equation}
{\cal R}_{12}(\lambda+h^{(3)}){\cal R}_{13}(\lambda){\cal R}_{23}(\lambda+h^{(1)})
={\cal R}_{23}(\lambda){\cal R}_{13}(\lambda+h^{(2)}){\cal R}_{12}(\lambda).\label{dynamical-ybe}
\end{equation}
\end{Proposition}
\sect{Quantum Supergroups}
Let ${\cal G}$ be a Kac-Moody superalgebra \cite{Kac77,Kac78}
with a symmetrizable generalized
Cartan matrix $A=(a_{ij})_{i,j,\in I}$. As is well-known, a given Kac-Moody
superalgebra allows many inequivalent systems of simple roots.
A system of simple roots is called distinguished if it has
the minimal number of odd roots.
Let $\{\alpha_i,~i\in I\}$ denote a
chosen set of simple roots. Let $(~,~)$ be a fixed
invariant bilinear form on the root space of ${\cal G}$. Let ${\cal H}$ be
the Cartan subalgebra and throughout we identify the
dual ${\cal H}^*$ with ${\cal H}$ via $(~,~)$.
The generalized Cartan matrix $A=(a_{ij})_{i,j\in I}$ is
defined from the simple roots by
\begin{equation}
a_{ij}=\left \{
\begin{array}{l}
\frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)},~~~~{\rm if}~(\alpha_i,\alpha_i)\neq 0\\
(\alpha_i,\alpha_j),~~~~{\rm if}~(\alpha_i,\alpha_i)=0
\end{array}
\right .
\end{equation}
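For example, for an odd isotropic simple root such as
$\alpha_1=\varepsilon_1-\delta_1$ in the $sl(1|1)$ case treated below, where
$(\varepsilon_1,\varepsilon_1)=1=-(\delta_1,\delta_1)$ and $(\varepsilon_1,\delta_1)=0$,
one has
\begin{equation}
(\alpha_1,\alpha_1)=(\varepsilon_1,\varepsilon_1)+(\delta_1,\delta_1)=0,~~~~
a_{11}=(\alpha_1,\alpha_1)=0,
\end{equation}
so that the second branch of the definition applies.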
As we mentioned in the previous section,
quantum Kac-Moody superalgebras are quasi-triangular quasi-Hopf
superalgebras with $\alpha=\beta=1,~\Phi=1\otimes 1\otimes 1$.
We shall not give the standard relations obeyed by the simple
generators (or Chevalley generators) $\{h_i,~e_i,~f_i,~i\in I\}$ of
$U_q({\cal G})$, but mention that for certain types of Dynkin diagrams
extra $q$-Serre relations are
needed in the defining relations. We adopt the following
graded Hopf algebra structure
\begin{eqnarray}
\Delta(h)&=&h\otimes 1+1\otimes h,\nonumber\\
\Delta(e_i)&=&e_i\otimes 1+t_i\otimes e_i,~~~~
\Delta(f_i)=f_i\otimes t_i^{-1}+1\otimes f_i,\nonumber\\
\epsilon(e_i)&=&\epsilon(f_i)=\epsilon(h)=0,\nonumber\\
S(e_i)&=&-t_i^{-1}e_i,~~~~S(f_i)=-f_it_i,~~~~S(h)=-h,\label{e-s}
\end{eqnarray}
where $i\in I$, $t_i=q^{h_i}$ and $h\in {\cal H}$.
The canonical element ${\cal R}$ is called the universal R-matrix of $U_q({\cal G})$, which
satisfies the basic properties (i.e. (\ref{dr=rd})-(\ref{1dr}) with
$\Phi=1\otimes 1\otimes 1$, and (\ref{e(R)=1}))
\begin{eqnarray}
&&\Delta^T(a){\cal R}={\cal R}\Delta(a),~~~~\forall a\in U_q({\cal G}),\nonumber\\
&&(\Delta\otimes 1){\cal R}={\cal R}_{13}{\cal R}_{23},\nonumber\\
&&(1\otimes\Delta){\cal R}={\cal R}_{13}{\cal R}_{12},\nonumber\\
&&(\epsilon\otimes 1){\cal R}=(1\otimes\epsilon){\cal R}=1,\label{D-hR}
\end{eqnarray}
and the graded Yang-Baxter equation (c.f. (\ref{quasi-ybe}) with
$\Phi=1\otimes 1\otimes 1$)
\begin{equation}
{\cal R}_{12}{\cal R}_{13}{\cal R}_{23}={\cal R}_{23}{\cal R}_{13}{\cal R}_{12}.
\end{equation}
The Hopf superalgebra $U_q({\cal G})$ contains two important Hopf subalgebras
$U_q^+$ and $U_q^-$ which are generated by
$e_i$ and
$f_i$, respectively. By Drinfeld's
quantum double construction, the
universal R-matrix ${\cal R}$ can be written in the form
\begin{equation}
{\cal R}=\left(1\otimes 1+\sum_t\,a^t\otimes a_t\right)\cdot q^{-{\cal T}}
\end{equation}
where $\{a^t\}\in U_q^+,~\{a_t\}\in U_q^-$. The element ${\cal T}$
is defined as follows.
If the symmetrical Cartan matrix is non-degenerate,
then ${\cal T}$ is the usual canonical element of ${\cal H}\otimes{\cal H}$.
Let $\{h_l\}$ be a basis of ${\cal H}$ and
$\{h^l\}$ be its dual basis. Then ${\cal T}$ can be written as
\begin{equation}
{\cal T}=\sum_lh_l\otimes h^l.\label{T}
\end{equation}
In the case of a degenerate symmetrical
Cartan matrix, we extend the Cartan subalgebra ${\cal H}$ by adding some elements
to it in such a way that the extended symmetrical Cartan matrix is
non-degenerate \cite{Kho91}. Then ${\cal T}$ stands for the canonical element
of the extended Cartan subalgebra. It still takes the
form (\ref{T}) but now $\{h_l\}~(\{h^l\})$ is understood to be the
(dual) basis of the extended Cartan subalgebra. After such enlargement,
one has $h=\sum_l(h^l,h)h_l=\sum_l(h_l,h)h^l$ for any given $h$
in the enlarged Cartan subalgebra.
For later use, we work out the explicit form of the
universal R-matrix for the
simplest quantum affine superalgebra ${U_q[\widehat{sl(1|1)}]}$. This algebra is generated
by Chevalley generators $\{e_i, f_i, h_i, d, i=0,1\}$
with $e_i,~f_i$ odd and $h_i,~d$ even. Here and throughout $d$ stands
for the derivation operator. Let us write $h_i=\alpha_i$.
Then we have $h_0=\delta-\varepsilon_1+\delta_1,~h_1=\varepsilon_1-\delta_1$, where
$\{\varepsilon_1,\delta_1,\delta\}$ satisfy
$(\varepsilon_1,\varepsilon_1)=1
=-(\delta_1,\delta_1),~(\varepsilon_1,\delta_1)=(\delta,\delta)=(\delta,\varepsilon_1)=(\delta,\delta_1)=0$.
We extend the Cartan
subalgebra by adding to it the element $h_{\rm ex}=\varepsilon_1+\delta_1$.
A basis for the enlarged Cartan subalgebra is thus
$\{h_{\rm ex},h_0,h_1,d\}$.
It is easily shown that the dual basis
is $\{h^{\rm ex},h^0,h^1,c\}$, where $h^{\rm ex}=\frac{1}{2}(\varepsilon_1-\delta_1)=
\frac{1}{2}h_1,~
h^0=d,~h^1=\varepsilon_1+d-\frac{1}{2}(\varepsilon_1-\delta_1)=d+\frac{1}{2}h_{\rm ex}$.
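This dual basis is easy to check numerically. The pure-Python sketch below (not from the paper) models the weight space by the span of $\{\varepsilon_1,\delta_1,\delta,\Lambda_0\}$, where $\Lambda_0$ represents the derivation $d$ with the standard affine pairing $(\delta,\Lambda_0)=1$, $(\Lambda_0,\Lambda_0)=(\Lambda_0,\varepsilon_1)=(\Lambda_0,\delta_1)=0$ -- an assumed convention, as the text does not spell out how $d$ pairs.

```python
from fractions import Fraction as F

# Coordinates in the ordered basis (eps1, delta1, delta, Lambda0).
# Gram matrix of the bilinear form: (eps1,eps1)=1, (delta1,delta1)=-1,
# (delta,Lambda0)=1, all other pairings zero (assumed affine convention).
G = [[F(1), 0, 0, 0],
     [0, F(-1), 0, 0],
     [0, 0, 0, F(1)],
     [0, 0, F(1), 0]]

def bil(u, v):
    return sum(u[i] * G[i][j] * v[j] for i in range(4) for j in range(4))

# Basis {h_ex, h_0, h_1, d} of the enlarged Cartan subalgebra:
h_ex = [F(1), F(1), 0, 0]            # eps1 + delta1
h0   = [F(-1), F(1), F(1), 0]        # delta - eps1 + delta1
h1   = [F(1), F(-1), 0, 0]           # eps1 - delta1
d    = [0, 0, 0, F(1)]               # Lambda0
basis = [h_ex, h0, h1, d]

# Claimed dual basis {h^ex, h^0, h^1, c}:
hex_d = [F(1, 2), F(-1, 2), 0, 0]    # (eps1 - delta1)/2 = h1/2
h0_d  = [0, 0, 0, F(1)]              # d
h1_d  = [F(1, 2), F(1, 2), 0, F(1)]  # d + h_ex/2
c     = [0, 0, F(1), 0]              # delta
dual = [hex_d, h0_d, h1_d, c]

pairings = [[bil(b, e) for e in dual] for b in basis]
```

With exact rational arithmetic, the pairing matrix comes out as the identity, confirming that the two quadruples are indeed dual with respect to the bilinear form.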
As is well-known, ${U_q[\widehat{sl(1|1)}]}$ can also be realized in terms of the Drinfeld
generators \cite{Dri88}
$\{X^\pm_n, H_n, H^{\rm ex}_n, n\in {\bf Z},c,d\}$, where
$X^\pm_n$ are odd and all other generators are even. The relations
satisfied by the Drinfeld generators read \cite{Zha97}
\begin{eqnarray}
&&[c,a]=[H_0,a]=[d,d]=[H_n,H_m]=[H^{\rm ex}_n,H^{\rm ex}_m]=0,~~~
\forall a\in {U_q[\widehat{sl(1|1)}]},\nonumber\\
&&q^{H^{\rm ex}_0}X^\pm_nq^{-H^{\rm ex}_0}=q^{\pm 2}X^\pm_n,\nonumber\\
&&[d,X^\pm_n]=nX^\pm_n,~~~[d,H_n]=nH_n,~~~[d,H^{\rm ex}_n]=nH^{\rm
ex}_n,\nonumber\\
&&[H_n, H_m^{\rm ex}]=\delta_{n+m,0}\frac{[2n]_q[nc]_q}{n},\nonumber\\
&&[H^{\rm ex}_n,
X^\pm_m]=\pm\frac{[2n]_q}{n}X^\pm_{n+m}q^{\mp|n|c/2},\nonumber\\
&&[H_n,X^\pm_m]=0=[X^\pm_n,X^\pm_m],\nonumber\\
&&[X^+_n, X^-_m]=\frac{1}{q-q^{-1}}\left(q^{\frac{c}{2}(n-m)}
\psi^+_{n+m}-q^{-\frac{c}{2}(n-m)}\psi^-_{n+m}\right),\label{Drinfeld-sl11}
\end{eqnarray}
where $[x]_q=(q^x-q^{-x})/(q-q^{-1})$,
$[a,b]\equiv ab-(-1)^{[a][b]}ba$ denotes the supercommutator
and $\psi^\pm_{\pm n}$ are related to $H_{\pm n}$ by relations
\begin{equation}
\sum_{n\geq 0}\psi^\pm_{\pm n}z^{\mp n}=q^{\pm H_0}\exp\left(
\pm(q-q^{-1})\sum_{n>0}H_{\pm n}z^{\mp n}\right).
\end{equation}
The relationship between the Drinfeld generators and the
Chevalley generators is
\begin{eqnarray}
&&e_1=X^+_0,~~~~f_1=X^-_0,~~~~h_1=H_0,~~~~h_{\rm ex}=H^{\rm ex}_0,\nonumber\\
&&e_0=X^-_1 q^{-H_0},~~~~f_0=-q^{H_0}X^+_{-1},~~~~h_0=c-H_0.
\end{eqnarray}
With the help of the Drinfeld generators, we find
the following universal R-matrix
\begin{equation}
{\cal R}={\cal R}'\cdot q^{-{\cal T}},
\end{equation}
where
\begin{eqnarray}
{\cal T}&=&h_{\rm ex}\otimes h^{\rm ex}+h_0\otimes h^0+h_1\otimes h^1
+d\otimes c\nonumber\\
&=&\frac{1}{2}(H_0\otimes H_0^{\rm ex}
+H^{\rm ex}_0\otimes H_0)+c\otimes d+d\otimes c,\nonumber\\
{\cal R}'&=&{\cal R}^< \,{\cal R}^0 \,{\cal R}^>,\nonumber\\
{\cal R}^<&=&\prod^{\rightarrow}_{n\geq 0}\exp\left[(q-q^{-1})
(q^{-nc/2}X^+_n\otimes q^{nc/2}X^-_{-n})\right],\nonumber\\
{\cal R}^0&=&\exp\left[-(q-q^{-1})\sum_{n=1}^\infty\frac{n}{[2n]_q}
(H_n\otimes H^{\rm ex}_{-n}+H^{\rm ex}_n\otimes H_{-n})\right],\nonumber\\
{\cal R}^>&=&\prod^{\leftarrow}_{n\geq 0}\exp\left[-(q-q^{-1})
(X^-_{n+1}q^{nc/2-H_0}\otimes q^{-nc/2+H_0}X^+_{-n-1})\right].
\label{sl11-R}
\end{eqnarray}
Here and throughout,
\begin{equation}
\prod_{k\geq 0}^{\rightarrow}A_k=A_0A_1A_2\cdots,~~~~~
\prod_{k\geq 0}^{\leftarrow}A_k=\cdots A_2A_1A_0.
\end{equation}
It seems to us that even for this simplest quantum affine superalgebra
${U_q[\widehat{sl(1|1)}]}$ the universal R-matrix has not been written down
in its explicit form before.
Let us compute the image of ${\cal R}$ in the 2-dimensional evaluation
representation $(\pi,V)$ of ${U_q[\widehat{sl(1|1)}]}$, where $V={\bf C}^{1|1}={\bf C} v_1\oplus{\bf C}
v_2$ with $v_1$ even and $v_2$ odd.
Let $e_{ij}$ be the $2\times 2$ matrix whose $(i,j)$-element is unity
and zero otherwise.
In the homogeneous gradation, the simple
generators are represented by
\begin{eqnarray}
&&e_1=\sqrt{[\theta]_q}e_{12},~~~f_1=\sqrt{[\theta]_q}e_{21},~~~
h_1=\theta(e_{11}+e_{22}),~~~h_{\rm ex}=2e_{11}+c_0(e_{11}+e_{22}),\nonumber\\
&&e_0=z\sqrt{[\theta]_q}e_{21},~~~f_0=-z^{-1}\sqrt{[\theta]_q}e_{12},~~~
h_0=-\theta(e_{11}+e_{22}),\label{rep-V}
\end{eqnarray}
where $\theta$ and $c_0$ are arbitrary constants. Then it can be shown
that the Drinfeld generators are represented by
\begin{eqnarray}
&&H_n=z^n\frac{[n\theta]_q}{n}(e_{11}+e_{22}),~~~~
H^{\rm ex}_n=z^n\frac{[2n]_q}{n}q^{n\theta}e_{11}+z^nc_n(e_{11}+e_{22}),\nonumber\\
&&X^+_n=z^nq^{n\theta}\sqrt{[\theta]_q}e_{12},~~~~
X^-_n=z^nq^{n\theta}\sqrt{[\theta]_q}e_{21},
\end{eqnarray}
where again $c_n$ are arbitrary constants. In the following we set
$c_n$ to be zero. Then the image $R_{VV}(z;\theta,\theta')=(\pi_\theta\otimes\pi_{\theta'}){\cal R}$
depends on two extra non-additive parameters $\theta,~\theta'$, and is given by
\begin{eqnarray}
R_{VV}(z;\theta,\theta')&=& \frac{q^{-\theta-\theta'}-z}
{1-zq^{-\theta-\theta'}}e_{11}\otimes e_{11}+e_{22}\otimes e_{22}
+\frac{q^{-\theta'}-zq^{-\theta}}
{1-zq^{-\theta-\theta'}}e_{11}\otimes e_{22}\nonumber\\
& & +\frac{q^{-\theta}-zq^{-\theta'}}
{1-zq^{-\theta-\theta'}}e_{22}\otimes e_{11}
+\sqrt{[\theta]_q[\theta']_q}q^{-\theta}\frac{q-q^{-1}}
{1-zq^{-\theta-\theta'}}e_{12}\otimes e_{21}\nonumber\\
& & -\sqrt{[\theta]_q[\theta']_q}q^{-\theta'}\frac{z(q-q^{-1})}
{1-zq^{-\theta-\theta'}}e_{21}\otimes e_{12}. \label{sl11-r}
\end{eqnarray}
(\ref{sl11-r}) is nothing but the R-matrix obtained in \cite{Bra94} by
solving the Jimbo equation.
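As a numerical sanity check on the formulas of this section, the pure-Python sketch below (not from the paper; the parameter values are arbitrary) verifies two things in the two-dimensional evaluation representation with $c_n=0$: that the Chevalley generators $e_0=X^-_1q^{-H_0}$ and $f_0=-q^{H_0}X^+_{-1}$ reproduce (\ref{rep-V}), and that the supercommutator relation for $[X^+_n,X^-_m]$ in (\ref{Drinfeld-sl11}) holds, with $\psi^+_{n+m}$ extracted from a truncated expansion of its generating function. We assume the evaluation representation has level zero, $\pi(c)=0$, so that for $n+m>0$ only $\psi^+_{n+m}$ contributes.

```python
import math

q, th, z = 1.3, 0.7, 0.9        # generic numerical values; pi(c) = 0 assumed

def box(x):                     # [x]_q = (q^x - q^{-x})/(q - q^{-1})
    return (q**x - q**(-x)) / (q - 1/q)

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scal(c, A):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

E12 = [[0.0, 1.0], [0.0, 0.0]]
E21 = [[0.0, 0.0], [1.0, 0.0]]
s = math.sqrt(box(th))          # sqrt([theta]_q), positive for q > 1, th > 0

def Xp(n): return scal(z**n * q**(n*th) * s, E12)
def Xm(n): return scal(z**n * q**(n*th) * s, E21)

# (i) Chevalley generators from Drinfeld ones; q^{+-H_0} = q^{+-theta}*identity here.
e0 = scal(q**(-th), Xm(1))      # X^-_1 q^{-H_0}, expected z*sqrt([th]_q)*e_21
f0 = scal(-q**th, Xp(-1))       # -q^{H_0} X^+_{-1}, expected -(1/z)*sqrt([th]_q)*e_12

# (ii) psi^+ modes from q^{H_0} exp((q-q^{-1}) sum_{n>0} H_n u^{-n});
# in this representation H_n = z^n [n*th]_q / n * identity, so scalars suffice.
K = 10
a = [0.0] + [(q - 1/q) * z**n * box(n*th)/n for n in range(1, K+1)]
e = [1.0] + [0.0]*K             # exp of the series: i*e_i = sum_j j*a_j*e_{i-j}
for i in range(1, K+1):
    e[i] = sum(j * a[j] * e[i-j] for j in range(1, i+1)) / i
psi_plus = [q**th * e[k] for k in range(K+1)]

n, m = 2, 1                     # supercommutator of two odd elements = anticommutator
lhs = [[matmul(Xp(n), Xm(m))[i][j] + matmul(Xm(m), Xp(n))[i][j]
        for j in range(2)] for i in range(2)]
rhs = scal(psi_plus[n+m] / (q - 1/q), [[1.0, 0.0], [0.0, 1.0]])
```

Check (ii) uses only the level-zero specialization; at nonzero level the relation carries the additional $q^{\pm c(n-m)/2}$ factors shown in (\ref{Drinfeld-sl11}).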
\sect{Elliptic Quantum Supergroups}
Following Jimbo et al \cite{Jim97}, we define elliptic quantum supergroups
to be quasi-triangular quasi-Hopf superalgebras obtained from
twisting the normal quantum supergroups (which are quasi-triangular quasi-Hopf
superalgebras with $\alpha=\beta=1,~\Phi=1\otimes 1\otimes 1$)
by twistors which satisfy the graded
shifted cocycle condition.
\subsection{Elliptic Quantum Supergroups of Face Type}
Let $\rho$ be an element in the (extended) Cartan subalgebra such
that $(\rho,\alpha_i)=(\alpha_i,\alpha_i)/2$ for all $i\in I$, and
\begin{equation}
\phi={\rm Ad}(q^{\frac{1}{2}\sum_lh_lh^l-\rho})
\end{equation}
be an automorphism of $U_q({\cal G})$. Here
$\{h_l\},~\{h^l\}$ are as in (\ref{T}) and are
the dual basis of the (extended) Cartan
subalgebra. Namely,
\begin{equation}
\phi(e_i)=e_it_i,~~~~\phi(f_i)=t_i^{-1}f_i,~~~~\phi(q^h)=q^h.
\end{equation}
In the following we consider the special case in which
the element $\lambda$ introduced before
belongs to the (extended) Cartan subalgebra. Let
\begin{equation}
\phi_\lambda=\phi^2\cdot{\rm Ad}(q^{2\lambda})={\rm Ad}(q^{\sum_lh_lh^l-2\rho+
2\lambda})
\end{equation}
be an automorphism depending on the element $\lambda$ and ${\cal R}$ be the
universal R-matrix of $U_q({\cal G})$. Following Jimbo et al \cite{Jim97},
we define a twistor $F(\lambda)$ by the infinite product
\begin{equation}
F(\lambda)=\prod_{k\geq 1}^{\leftarrow}\left(\phi_\lambda^k\otimes 1\right)
\left(q^{\cal T}{\cal R}\right)^{-1}.\label{twistor-f}
\end{equation}
It is easily seen that $F(\lambda)$ is a formal power series in the
parameter(s) contained in $\lambda$, with leading term 1; therefore the
infinite product makes sense. The twistor $F(\lambda)$ is referred to as the
face type twistor.
It can be shown that $F(\lambda)$ satisfies the graded shifted cocycle condition
\begin{equation}
F_{12}(\lambda)(\Delta\otimes 1)F(\lambda)=F_{23}(\lambda+h^{(1)})(1\otimes\Delta)F(\lambda),
\label{shifted-f}
\end{equation}
where, if $\lambda=\sum_l\lambda_lh^l$, then $\lambda+h^{(1)}=\sum_l(\lambda_l+h_l^{(1)})
h^l$.
The proof of (\ref{shifted-f}) is identical to the non-super case given by
Jimbo et al \cite{Jim97}, apart from the use of the graded tensor products.
Moreover, it is easily seen that $F(\lambda)$ obeys the co-unit property
\begin{equation}
(\epsilon\otimes 1)F(\lambda)=(1\otimes\epsilon)F(\lambda)=1.
\end{equation}
We have
\begin{Definition} {\bf (Face type elliptic quantum supergroup)}: We define
elliptic quantum supergroup ${\cal B}_{q,\lambda}({\cal G})$ of face type to be
the quasi-triangular quasi-Hopf superalgebra
$(U_q({\cal G}),\Delta_\lambda,\epsilon,\Phi(\lambda),{\cal R}(\lambda))$ together with the graded algebra
anti-homomorphism $S$ defined by (\ref{e-s})
and $\alpha_\lambda=m\cdot(S\otimes 1)F(\lambda)^{-1}$,
$\beta_\lambda=m\cdot(1\otimes S)F(\lambda)$. Here $\epsilon$ is defined by (\ref{e-s}),
and
\begin{eqnarray}
&&\Delta_\lambda(a)=F(\lambda)\Delta(a)F(\lambda)^{-1},~~~\forall a\in U_q({\cal G}),\nonumber\\
&&{\cal R}(\lambda)=F(\lambda)^T{\cal R} F(\lambda)^{-1},\nonumber\\
&&\Phi(\lambda)=F_{23}(\lambda+h^{(1)})F_{23}(\lambda)^{-1}.
\end{eqnarray}
\end{Definition}
We now consider the particularly interesting case where ${\cal G}$ is of affine type.
Then $\rho$ contains two parts
\begin{equation}
\rho=\bar{\rho}+gd,
\end{equation}
where $g=(\psi,\psi+2\bar{\rho})/2$, $\bar{\rho}$ is the graded half-sum
of positive roots of the non-affine part $\bar{{\cal G}}$ and $\psi$ is highest
root of $\bar{{\cal G}}$; $d$ is the derivation operator which gives the
homogeneous gradation
\begin{equation}
[d, e_i]=\delta_{i0}e_i,~~~~[d,f_i]=-\delta_{i0}f_i,~~~~i\in I.
\end{equation}
We also set
\begin{equation}
\lambda=\bar{\lambda}+(r+g)d+s'c,~~~~r,s'\in {\bf C},
\end{equation}
where $\bar{\lambda}$ stands for the projection of $\lambda$ onto the
(extended) Cartan subalgebra of $\bar{{\cal G}}$.
Denoting by $\{\bar{h}_j\},~\{\bar{h}^j\}$ the dual basis of the
(extended) Cartan subalgebra of $\bar{{\cal G}}$
and setting $p=q^{2r}$, we can decompose $\phi_\lambda$ into two parts
\begin{equation}
\phi_\lambda={\rm Ad}(p^dq^{2cd})\cdot\bar{\phi}_\lambda,~~~~
\bar{\phi}_\lambda={\rm
Ad}(q^{\sum_j\bar{h}_j\bar{h}^j+2(\bar{\lambda}-\bar{\rho})}).
\end{equation}
Introduce a formal parameter $z$ (which will
be identified with spectral parameter) into ${\cal R}$ and $F(\lambda)$ by setting
\begin{eqnarray}
&&{\cal R}(z)={\rm Ad}(z^d\otimes 1){\cal R},\nonumber\\
&&F(z,\lambda)={\rm Ad}(z^d\otimes 1)F(\lambda),\nonumber\\
&&{\cal R}(z,\lambda)={\rm Ad}(z^d\otimes 1){\cal R}(\lambda)=F(z^{-1},\lambda)^T
{\cal R}(z)F(z,\lambda)^{-1}.
\end{eqnarray}
Then it can be shown from the definition of $F(\lambda)$
that $F(z,\lambda)$ satisfies the difference equation
\begin{eqnarray}
&&F(pq^{2c^{(1)}}z,\lambda)=(\bar{\phi}_\lambda\otimes 1)^{-1}(F(z,\lambda))\cdot
q^{\cal T} {\cal R}(pq^{2c^{(1)}}z),\nonumber\\
&&F(0,\lambda)=F_{\bar{{\cal G}}}(\bar{\lambda}).\label{dif-face}
\end{eqnarray}
The initial condition follows from the fact that ${\cal R}(z)q^{d\otimes c
+c\otimes d}|_{z=0}$ reduces to the universal R-matrix of $U_q(\bar{{\cal G}})$.
Let us give some examples.
\vskip.1in
\noindent\underline{\bf The case ${\cal B}_{q,\lambda}[sl(1|1)]$}:
\vskip.1in
In this case the universal R-matrix is given simply by
\begin{eqnarray}
&&{\cal R}=\exp[(q-q^{-1})\,e\otimes f]\;q^{-{\cal T}}=[1+(q-q^{-1})\,e\otimes f]
\;q^{-{\cal T}},\nonumber\\
&&{\cal T}=\frac{1}{2}(h\otimes h_{\rm ex}+h_{\rm ex}\otimes h).
\end{eqnarray}
Let us write
\begin{equation}
\lambda=(s'+1)\frac{1}{2}h+s\frac{1}{2}h_{\rm ex},~~~~s',s\in {\bf C}.
\end{equation}
Since $h$ commutes with everything, $\phi_\lambda$ is independent of $s'$.
Setting $w=q^{2(s+h)}$, we have
\begin{equation}
\phi_\lambda={\rm Ad}(w^{\frac{1}{2}h_{\rm ex}}).
\end{equation}
The formula for the twistor becomes
\begin{eqnarray}
F(w)&=&\prod_{k\geq 1}\left(1-(q-q^{-1})w^k\,q^{-h}e\otimes fq^h\right)\nonumber\\
&=&1-(q-q^{-1})\sum_{k=1}^\infty w^k\,q^{-h}e\otimes fq^h\nonumber\\
&=&1-(q-q^{-1})\frac{w}{1-w}\,q^{-h}e\otimes fq^h.
\end{eqnarray}
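The collapse of this infinite product into a single geometric-series term rests on the factors commuting and on the nilpotency $(q^{-h}e\otimes fq^h)^2=0$, which holds because $e$ and $f$ are odd with $e^2=f^2=0$. The small sketch below (not from the paper) illustrates the telescoping numerically, modelling $e\otimes f$ by the plain Kronecker product $e_{12}\otimes e_{21}$, which also squares to zero.

```python
# 4x4 matrices as nested lists; X models the image of e (x) f and squares to zero.
def kron(A, B):
    n = len(A) * len(B)
    return [[A[i//2][j//2] * B[i%2][j%2] for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

E12 = [[0, 1], [0, 0]]
E21 = [[0, 0], [1, 0]]
X = kron(E12, E21)               # nilpotent: X @ X = 0

q, w, N = 1.3, 0.4, 60           # |w| < 1 so the geometric series converges
lam = q - 1/q
I4 = [[float(i == j) for j in range(4)] for i in range(4)]

def factor(k):                   # 1 - lam * w^k * X
    return [[I4[i][j] - lam * w**k * X[i][j] for j in range(4)] for i in range(4)]

P = I4
for k in range(1, N+1):          # the factors commute, so the ordering is immaterial
    P = matmul(P, factor(k))

# Closed form from the text: 1 - lam * w/(1-w) * X
closed = [[I4[i][j] - lam * (w/(1-w)) * X[i][j] for j in range(4)] for i in range(4)]
```

The truncated product at $N=60$ factors agrees with the closed form up to the negligible tail $w^{N+1}/(1-w)$.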
\vskip.1in
\noindent\underline{\bf The case ${\cal B}_{q,\lambda}[\widehat{sl(1|1)}]$}:
\vskip.1in
Taking a basis $\{c,d,h,h_{\rm ex}\}$ of the enlarged Cartan subalgebra
of $\widehat{sl(1|1)}$, we write
\begin{equation}
\lambda=rd+s'c+(s''+1)\frac{1}{2}h+s\frac{1}{2}h_{\rm ex},~~~~r,s',s'',s\in {\bf C}.
\end{equation}
Then $\phi_\lambda$ is independent of $s'$ and $s''$. Set
\begin{equation}
p=q^{2r},~~~~~~w=q^{2(s+h)},
\end{equation}
and set $F(z;p,w)\equiv F(z,\lambda)$. Then (\ref{dif-face}) takes the form
\begin{eqnarray}
&&F(pq^{2c^{(1)}}z;p,w)=(\bar{\phi}_w^{-1}\otimes 1)(F(z;p,w))\cdot
q^{\cal T} {\cal R}(pq^{2c^{(1)}}z),\label{dif-face-sl11}\\
&&F(0;p,w)=F_{sl(1|1)}(w),\label{initial-sl11-face}
\end{eqnarray}
where $\bar{\phi}_w={\rm Ad}(w^{\frac{1}{2}h_{\rm ex}})$.
The image of (\ref{dif-face-sl11}) in the two-dimensional representation
$(\pi, V)$ given by (\ref{rep-V}) (by setting $\theta=1$)
yields a difference equation for
$F_{VV}(z;p,w)=(\pi\otimes \pi)F(z;p,w)$. Noting that $\pi\cdot\bar{\phi}_w
={\rm Ad}(D_w^{-1})\cdot \pi$, where $D_w=e_{11}+we_{22}$, we find
\begin{equation}
F_{VV}(pz;p,w)={\rm Ad}(D_w\otimes 1)(F_{VV}(z;p,w))\cdot K
R_{VV}(pz),\label{dif-face-sl11-image}
\end{equation}
where $K=(\pi\otimes\pi)q^{\cal T}=q^2 e_{11}\otimes e_{11}+q e_{11}\otimes
e_{22}+q e_{22}\otimes e_{11}+e_{22}\otimes e_{22}$ and $R_{VV}(pz)$
is given by (\ref{sl11-r}) (with $\theta=\theta'=1$).
(\ref{dif-face-sl11-image}) is a system of difference equations of
$q$-KZ equation type \cite{Fre92}, and can be solved with the help of
the $q$-hypergeometric series. The solution with the initial condition
(\ref{initial-sl11-face}) is given by
\begin{eqnarray}
F_{VV}(z;p,w)&=&{}_1\phi_0(z;p,w)e_{11}\otimes e_{11}
+e_{22}\otimes e_{22}\nonumber\\
& & +f_{11}(z;p,w)e_{11}\otimes e_{22}
+f_{22}(z;p,w)e_{22}\otimes e_{11}\nonumber\\
& &+f_{12}(z;p,w)e_{12}\otimes e_{21}
+f_{21}(z;p,w)e_{21}\otimes e_{12},\label{face-solution}
\end{eqnarray}
where
\begin{eqnarray}
&&{}_1\phi_0(z;p,w)=\frac{(pq^{-2}z;p)_\infty}{(pq^2z;p)_\infty},\nonumber\\
&&f_{11}(z;p,w)={}_2\phi_1\left(
\begin{array}{c}
wq^{-2}~~q^{-2}\\
w
\end{array}
;p, pq^2z\right),\nonumber\\
&&f_{12}(z;p,w)=-\frac{w(q-q^{-1})}{1-w}\;{}_2\phi_1\left(
\begin{array}{c}
wq^{-2}~~pq^{-2}\\
pw
\end{array}
;p, pq^2z\right),\nonumber\\
&&f_{21}(z;p,w)=\frac{zpw^{-1}(q-q^{-1})}{1-pw^{-1}}\;{}_2\phi_1\left(
\begin{array}{c}
pw^{-1}q^{-2}~~pq^{-2}\\
p^2w^{-1}
\end{array}
;p, pq^2z\right),\nonumber\\
&&f_{22}(z;p,w)={}_2\phi_1\left(
\begin{array}{c}
pw^{-1}q^{-2}~~q^{-2}\\
pw^{-1}
\end{array}
;p, pq^2z\right).
\end{eqnarray}
Here
\begin{eqnarray}
&&{}_2\phi_1\left(
\begin{array}{c}
q^a~~q^b\\
q^c
\end{array}
;p, x\right)=\sum_{n=0}^\infty\frac{(q^a;p)_n(q^b;p)_n}
{(p;p)_n(q^c;p)_n}x^n,\nonumber\\
&&(a;p)_n=\prod_{k=0}^{n-1}(1-ap^k),~~~~(a;p)_0=1.
\end{eqnarray}
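These $q$-series are straightforward to evaluate numerically. The sketch below (pure Python, not from the paper; numerical values arbitrary) implements the truncated $q$-Pochhammer symbol and checks a scalar consequence of (\ref{dif-face-sl11-image}): since the $e_{11}\otimes e_{11}$ component involves no mixing, ${}_1\phi_0$ must satisfy $f(pz)/f(z)=(1-pq^{2}z)/(1-pq^{-2}z)$, a relation one can also read off directly from its product form.

```python
def qpoch(a, p, n=None, terms=200):
    """(a; p)_n; with n=None, a truncation of (a; p)_infinity (needs |p| < 1)."""
    N = terms if n is None else n
    out = 1.0
    for k in range(N):
        out *= 1.0 - a * p**k
    return out

p, q, z = 0.3, 1.2, 0.05

def phi10(x):                    # {}_1 phi_0(x; p, w); note it does not involve w
    return qpoch(p * q**-2 * x, p) / qpoch(p * q**2 * x, p)

lhs = phi10(p * z) / phi10(z)
rhs = (1 - p * q**2 * z) / (1 - p * q**-2 * z)
```

The same `qpoch` routine can be reused to evaluate the ${}_2\phi_1$ coefficients $f_{ij}(z;p,w)$ term by term.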
\subsection{Elliptic Quantum Supergroups of Vertex Type}
As we mentioned before, a given Kac-Moody superalgebra ${\cal G}$ admits
many inequivalent
simple root systems. By means of the ``extended'' Weyl transformation
method introduced in \cite{Lei85},
one can transform from one simple root system to another inequivalent
one \cite{Fra89}.
For ${\cal G}=\widehat{sl(n|n)}$, there exists a simple root system in which
all simple roots are odd (or fermionic). This system can be constructed
from the distinguished
simple root system by using the ``extended''
Weyl operation repeatedly. We find the following simple
roots, all of which are odd (or fermionic)
\begin{eqnarray}
&&\alpha_0=\delta-\varepsilon_1+\delta_n\,,\nonumber\\
&&\alpha_{2j}=\delta_j-\varepsilon_{j+1}\,,~~~~~j=1,2,\cdots, n-1,\nonumber\\
&&\alpha_{2i-1}=\varepsilon_i-\delta_i\,,~~~~~i=1, 2,\cdots,n\label{roots}
\end{eqnarray}
with $\delta,~\{\varepsilon_i\}_{i=1}^n$ and $\{\delta_i\}_{i=1}^n$ satisfying
\begin{eqnarray}
&&(\delta,\delta)=(\delta,\varepsilon_i)=(\delta,\delta_i)=0,~~~~(\varepsilon_i,\varepsilon_j)=\delta_{ij},\nonumber\\
&&(\delta_i,\delta_j)=-\delta_{ij},~~~~(\varepsilon_i,\delta_j)=0.
\end{eqnarray}
Such a simple root system is usually called non-standard.
It seems to us that $\widehat{sl(n|n)}$ is the only non-twisted affine
superalgebra which has a non-standard system of simple
roots, all of which are fermionic.
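For concreteness, one can tabulate the inner products of these fermionic simple roots; by the convention stated at the beginning of the previous section, the Cartan matrix is then simply $a_{ij}=(\alpha_i,\alpha_j)$, since every $(\alpha_i,\alpha_i)$ vanishes. The sketch below (not from the paper) does this for $n=2$, i.e. $\widehat{sl(2|2)}$, and exhibits the purely odd cyclic structure.

```python
# Coordinates in the ordered basis (eps1, eps2, delta1, delta2, delta) for n = 2.
diag = [1, 1, -1, -1, 0]   # (eps_i,eps_j) = d_ij, (delta_i,delta_j) = -d_ij,
                           # delta isotropic and orthogonal to everything else

def bil(u, v):
    return sum(diag[i] * u[i] * v[i] for i in range(5))

eps   = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]
dlt   = [[0, 0, 1, 0, 0], [0, 0, 0, 1, 0]]
delta = [0, 0, 0, 0, 1]

def sub(u, v): return [u[i] - v[i] for i in range(5)]
def add(u, v): return [u[i] + v[i] for i in range(5)]

alpha = [None] * 4
alpha[0] = add(sub(delta, eps[0]), dlt[1])   # alpha_0 = delta - eps_1 + delta_2
alpha[1] = sub(eps[0], dlt[0])               # alpha_1 = eps_1 - delta_1
alpha[2] = sub(dlt[0], eps[1])               # alpha_2 = delta_1 - eps_2
alpha[3] = sub(eps[1], dlt[1])               # alpha_3 = eps_2 - delta_2

gram = [[bil(a, b) for b in alpha] for a in alpha]
```

Every diagonal entry of the Gram matrix vanishes (all simple roots are isotropic), and the only nonzero pairings are between cyclically adjacent roots, with alternating signs.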
As will be shown below,
for ${\cal G}=\widehat{sl(n|n)}$ with the above fermionic simple roots,
one can construct a different type of twistor. Following Jimbo
et al \cite{Jim97}, we say this twistor is of {\it vertex type}.
Let us write $h_i=\alpha_i~(i=0,1,\cdots,2n-1)$ with $\alpha_i$ given by
(\ref{roots}). We extend the Cartan
subalgebra of $\widehat{sl(n|n)}$
by adding to it the element $h_{\rm ex}=\sum_{i=1}^n
(\varepsilon_i+\delta_i)$. A basis of the extended Cartan subalgebra is
$\{h_{\rm ex}, h_0,h_1,\cdots,h_{2n-1},d\}$.
Denote by $\{h^{\rm ex},h^0,h^1,\cdots,h^{2n-1},c\}$ the dual basis. We have
\begin{eqnarray}
&&h^{\rm ex}=\frac{1}{2n}\sum_{i=1}^n(\varepsilon_i-\delta_i),\nonumber\\
&&h^{2k}=d+\sum_{i=1}^k(\varepsilon_i-\delta_i)-\frac{k}{n}\sum_{i=1}^n
(\varepsilon_i-\delta_i),\nonumber\\
&&h^{2k+1}=d+\sum_{i=1}^{k+1}\varepsilon_i-\sum_{i=1}^k\delta_i-\frac{2k+1}{2n}\sum_{i=1}^n
(\varepsilon_i-\delta_i),
\end{eqnarray}
where $k=0,1,\cdots,n-1$. The canonical element ${\cal T}$ in the extended
Cartan subalgebra reads
\begin{equation}
{\cal T}=h_{\rm ex}\otimes h^{\rm ex}+\sum_{i=0}^{2n-1}(h_i\otimes h^i)
+d\otimes c.
\end{equation}
Let $\tau$ be the diagram automorphism of $U_q[\widehat{sl(n|n)}]$ such that
\begin{equation}
\tau(e_i)=e_{i+1~{\rm mod}~2n},~~~~\tau(f_i)=f_{i+1~{\rm mod}~2n},~~~~
\tau(h_i)=h_{i+1~{\rm mod}~2n}.
\end{equation}
Obviously, the automorphism $\tau$ is even, since it preserves the grading
of the generators; moreover, $\tau^{2n}=1$. Then we can show
\begin{eqnarray}
&&\tau(h_{\rm ex})=-h_{\rm ex}+\xi c,~~~~
\tau(c)=c,~~~~\tau(h^{\rm ex})=-h^{\rm ex}+\frac{1}{2n}c,\nonumber\\
&&\tau(h^{2k})=h^{2k+1~{\rm mod}2n}+\frac{\xi}{2n}\sum_{i=1}^n
(\varepsilon_i-\delta_i)-\frac{\xi+n-2k-1}{2n}c,\nonumber\\
&&\tau(h^{2k+1})=h^{2k+2~{\rm mod}2n}+\frac{\xi}{2n}\sum_{i=1}^n
(\varepsilon_i-\delta_i)-\frac{n-2k-1}{2n}c,
\end{eqnarray}
where $k=0,1,\cdots,n-1$ and $\xi$ is an arbitrary constant.
Introduce the element
\begin{equation}
\tilde{\rho}=\sum_{i=0}^{2n-1}h^i+\xi nh^{\rm ex},
\end{equation}
which gives the principal gradation
\begin{equation}
[\tilde{\rho},e_i]=e_i,~~~~[\tilde{\rho},f_i]=f_i,~~~~i=0,1,\cdots,2n-1.
\end{equation}
It is easily shown that
\begin{equation}
\tau(\tilde{\rho})=\tilde{\rho},~~~~~(\tau\otimes\tau){\cal T}={\cal T}.
\end{equation}
Notice also that
\begin{eqnarray}
&&(\tau\otimes\tau)\cdot\Delta=\Delta\cdot\tau,\nonumber\\
&&(\tau\otimes\tau){\cal R}={\cal R}.
\end{eqnarray}
Here the second relation is deduced from the uniqueness of the universal
R-matrix of $U_q[\widehat{sl(n|n)}]$. It can be shown that
\begin{equation}
\sum_{k=1}^{2n}(\tau^{k}\otimes 1){\cal T}={\tilde{\rho}}\otimes c+c\otimes{\tilde{\rho}}
-\frac{2(n^2-1)-3\xi}{6}c\otimes c.
\end{equation}
Therefore, if we set
\begin{equation}
{\tilde{\cal T}}=\frac{1}{2n}\left({\tilde{\rho}}\otimes c+c\otimes{\tilde{\rho}}
-\frac{2(n^2-1)-3\xi}{6}c\otimes c\right),
\end{equation}
then we have
\begin{equation}
\sum_{k=1}^{2n}(\tau^k\otimes 1)({\cal T}-{\tilde{\cal T}})=0.\label{tau-t-t}
\end{equation}
Introduce an automorphism
\begin{equation}
{\tilde{\phi}}_r=\tau\cdot{\rm Ad}\left(q^{\frac{r+c}{n}{\tilde{\rho}}}\right),
\end{equation}
which depends on a parameter $r\in{\bf C}$.
Then the $2n$-fold product
\begin{equation}
\prod_{2n\geq k\geq 1}^{\leftarrow}({\tilde{\phi}}_r^k\otimes 1)\left(
q^{\tilde{\cal T}}{\cal R}\right)^{-1}
\end{equation}
is a formal power series in $p^{\frac{1}{2n}}$ where $p=q^{2r}$.
Moreover, it has leading term 1 thanks to the relation (\ref{tau-t-t}).
Following Jimbo et al \cite{Jim97}, we define the vertex type twistor
\begin{equation}
E(r)=\lim_{N\rightarrow\infty}\prod_{2nN\geq k\geq 1}^{\leftarrow}
\left({\tilde{\phi}}_r^k\otimes 1\right)
\left(q^{\tilde{\cal T}}{\cal R}\right)^{-1}.\label{twistor-e}
\end{equation}
Then one can show that
$E(r)$ satisfies the graded shifted cocycle condition
\begin{equation}
E_{12}(r)(\Delta\otimes 1)E(r)=E_{23}(r+c^{(1)})(1\otimes\Delta)E(r).
\end{equation}
Moreover, $E(r)$ obeys the co-unit property
\begin{equation}
(\epsilon\otimes 1)E(r)=(1\otimes\epsilon)E(r)=1.
\end{equation}
We have
\begin{Definition} {\bf (Vertex type elliptic quantum supergroup)}: We define
elliptic quantum supergroup ${\cal A}_{q,p}[\widehat{sl(n|n)}]$
of vertex type to be
the quasi-triangular quasi-Hopf superalgebra
$(U_q[\widehat{sl(n|n)}],\Delta_r,\epsilon,\Phi(r),{\cal R}(r))$
together with the graded algebra
anti-homomorphism $S$ defined by (\ref{e-s})
and $\alpha_r=m\cdot(S\otimes 1)E(r)^{-1}$,
$\beta_r=m\cdot(1\otimes S)E(r)$. Here $\epsilon$ is defined by (\ref{e-s}),
and
\begin{eqnarray}
&&\Delta_r(a)=E(r)\Delta(a)E(r)^{-1},~~~\forall a\in U_q({\cal G}),\nonumber\\
&&{\cal R}(r)=E(r)^T{\cal R} E(r)^{-1},\nonumber\\
&&\Phi(r)=E_{23}(r+c^{(1)})E_{23}(r)^{-1}.
\end{eqnarray}
\end{Definition}
Similar to the face type case, introduce a formal parameter $\zeta$
(or spectral parameter) into ${\cal R}$ and $E(r)$ by the formulae
\begin{eqnarray}
&&\tilde{\cal R}(\zeta)={\rm Ad}(\zeta^{\tilde{\rho}}\otimes 1){\cal R},\nonumber\\
&&E(\zeta,r)={\rm Ad}(\zeta^{\tilde{\rho}}\otimes 1)E(r),\nonumber\\
&&\tilde{\cal R}(\zeta,r)={\rm Ad}(\zeta^{\tilde{\rho}}\otimes 1){\cal R}(r)=E(\zeta^{-1},r)^T
\tilde{\cal R}(\zeta)E(\zeta,r)^{-1}.
\end{eqnarray}
Then it can be shown from the definition of $E(r)$
that $E(\zeta,r)$ satisfies the difference equation
\begin{eqnarray}
&&E(p^\frac{1}{2n}q^{\frac{1}{n}c^{(1)}}\zeta,r)=(\tau\otimes 1)^{-1}
(E(\zeta,r))\cdot q^{\tilde{\cal T}} \tilde{\cal R}
(p^\frac{1}{2n}q^{\frac{1}{n}c^{(1)}}\zeta),\label{dif-vertex1}\\
&&E(0,r)=1.\label{initial-vertex}
\end{eqnarray}
The initial condition follows from (\ref{tau-t-t}) and the fact that
we are working in the principal gradation.
(\ref{dif-vertex1}) implies that
\begin{equation}
E\left(\left(p^\frac{1}{2n}q^{\frac{1}{n}c^{(1)}}\right)^{2n}\zeta,r\right)
=E(\zeta,r))\cdot \prod_{2n-1\geq k\geq 0}^{\leftarrow}
\;q^{\tilde{\cal T}}\; (\tau\otimes 1)^{2n-k} \tilde{\cal R}\left(\left(
p^\frac{1}{2n}q^{\frac{1}{n}c^{(1)}}\right)^{2n-k}\zeta\right).\label{dif-vertex2}
\end{equation}
Some remarks are in order. In the non-super case \cite{Jim97},
$\pi$ and $\tau$ commute in the sense that
$\pi\cdot\tau={\rm Ad}(h)\cdot\pi$ with $h$ obeying $hv_i=v_{i+1~{\rm
mod}~m}$, where $\{v_i\}$ is a basis of
the vector module $V={\bf C}^m={\bf C} v_1\oplus\cdots\oplus{\bf C} v_m$ of
${\cal A}_{q,p}(\hat{sl}_m)$ and
$\tau$ is the cyclic diagram automorphism of $\hat{sl}_m$. In
the super (or ${{\bf Z}_2}$ graded) case,
however, $\pi$ and $\tau$ do not ``commute''
in the above sense. This is
because $\tau$ is grading-preserving while the
$2n$-dimensional defining representation space
$V={\bf C}^{n|n}={\bf C} v_1\oplus\cdots\oplus{\bf C} v_{2n}$ is graded.
So to compute the image, one has
to work out the action of $\tau$ at the universal level and then
apply the representation $\pi$. Therefore, the knowledge of
the universal R-matrix in its explicit form is required. This
makes the image computation of the twistor more involved
in the supersymmetric case.
As an example, consider the simplest case of elliptic
quantum affine superalgebra ${\cal A}_{q,p}[\widehat{sl(1|1)}]$.
Let us calculate the image in the two-dimensional representation $(\pi,
V)$, ~$V={\bf C}^{1|1}$.
As remarked above, we have to work at the universal level
first and then apply the representation. We have
\begin{Lemma}\label{L2}: In the principal gradation,
the action of $\tau$ on the Drinfeld generators
is represented on $V$ by
\begin{eqnarray}
&&\tau(X^+_n)=(-1)^nz^{2n+1}q^{-n}e_{12},~~~~
\tau(X^-_n)=(-1)^{n+1}z^{2n-1}q^{-n}e_{21},\nonumber\\
&&\tau(H_n)=(-1)^{n+1}z^{2n}\frac{[n]_q}{n}(e_{11}+e_{22}),\nonumber\\
&&\tau(H^{\rm ex}_n)=(-1)^{n+1}z^{2n}\frac{[2n]_q}{n}\left(q^{-n}e_{11}
+\frac{q-q^{-1}}{2}[n]_q(e_{11}+e_{22})\right).
\label{tau-rep}
\end{eqnarray}
\end{Lemma}
\vskip.1in
Applying $\pi\otimes
\pi$ to both sides of (\ref{dif-vertex2}) and writing
$E_{VV}(\zeta;p)\equiv (\pi\otimes\pi)E(\zeta,r)$, where
$p=q^{2r}$, we get
\begin{equation}
E_{VV}(p\zeta;p)=E_{VV}(\zeta;p)\cdot(\pi\otimes \pi)\left((\tau\otimes 1)
\tilde{{\cal R}}(p^\frac{1}{2}\zeta)\right)\cdot\tilde{{\cal R}}_{VV}(p\zeta),\label{evv}
\end{equation}
where $\tilde{\cal R}_{VV}(\zeta)=(\pi\otimes\pi)\tilde{\cal R}(\zeta)$.
In view of (\ref{tau-rep}) and the explicit formula (\ref{sl11-R}) of the
universal R-matrix, (\ref{evv}) is a system of eight difference equations.
We can also proceed directly. We have, with the help of Lemma \ref{L2},
\begin{eqnarray}
&&(\pi\otimes\pi)(\tau^{2k}\otimes 1)\,{\rm Ad}\left((p^k\zeta)^{\tilde{\rho}}\otimes
1\right){\cal R}^{-1}q^{-{\tilde{\cal T}}}=K\cdot \bar{E}_{2k},\nonumber\\
&&(\pi\otimes\pi)(\tau^{2k-1}\otimes 1)\,{\rm Ad}\left((p^{k-\frac{1}{2}}
\zeta)^{\tilde{\rho}}\otimes
1\right){\cal R}^{-1}q^{-{\tilde{\cal T}}}=\rho_{2k-1}\cdot
K^{-1}\cdot \bar{E}_{2k-1},
\end{eqnarray}
where $K=(\pi\otimes \pi)q^{\cal T}$ and
\begin{eqnarray}
\rho_{2k-1}&=&\frac{(1+q^2p^{2k-1}\zeta^2)(1+q^{-2}p^{2k-1}
\zeta^2)}{(1+p^{2k-1}\zeta^2)^2},\nonumber\\
\bar{E}_{2k}&=& \frac{1}{1-q^2p^{2k}\zeta^2}\left((1-q^{-2}p^{2k}\zeta^2)
e_{11}\otimes e_{11}+(1-q^2p^{2k}\zeta^2)e_{22}\otimes e_{22}\right.\nonumber\\
& & +(1-p^{2k}\zeta^2)e_{11}\otimes e_{22}
+(1-p^{2k}\zeta^2)e_{22}\otimes e_{11}\nonumber\\
& &\left. -(q-q^{-1})p^k\zeta
e_{12}\otimes e_{21}
+(q-q^{-1})p^k\zeta
e_{21}\otimes e_{12}\right), \label{sl11-E2k}\\
\bar{E}_{2k-1}&=& \frac{1}{1+q^{-2}p^{2k-1}\zeta^2}\left((1+q^2p^{2k-1}\zeta^2)
e_{11}\otimes e_{11}+(1+q^{-2}p^{2k-1}\zeta^2)e_{22}\otimes e_{22}
\right.\nonumber\\
& & +(1+p^{2k-1}\zeta^2)
e_{11}\otimes e_{22}
+(1+p^{2k-1}\zeta^2)e_{22}\otimes e_{11}\nonumber\\
& &\left. +(q-q^{-1})p^{k-\frac{1}{2}}\zeta e_{12}\otimes e_{21}
-(q-q^{-1})p^{k-\frac{1}{2}}\zeta
e_{21}\otimes e_{12}\right). \label{sl11-E2k-1}
\end{eqnarray}
Then
\begin{equation}
E_{VV}(\zeta;p)=\prod_{k\geq 1}^{\leftarrow}
\rho_{2k-1}K\bar{E}_{2k}K^{-1}\bar{E}_{2k-1}
=\rho(\zeta;p)\left(E^1_{VV}(\zeta;p)
+E^2_{VV}(\zeta;p)\right),
\end{equation}
where
\begin{eqnarray}
\rho(\zeta;p)&=&\frac{(-pq^2\zeta^2;p^2)_\infty}{(pq\zeta;p)_\infty
(-pq\zeta;p)_\infty},\label{rho}\\
E^1_{VV}(\zeta;p)&=&\prod_{k\geq 1}^{\leftarrow}\frac{1}{(1+p^{2k-1}
\zeta^2)^2}\left((1-q^{-2}p^{2k}\zeta^2)(1+q^2p^{2k-1}\zeta^2)
e_{11}\otimes e_{11}\right.\nonumber\\
& &+(1-q^2p^{2k}\zeta^2)(1+q^{-2}p^{2k-1}\zeta^2)e_{22}\otimes e_{22}
\nonumber\\
& &+(q-q^{-1})p^{k-\frac{1}{2}}\zeta(1-q^{-2}p^{2k}\zeta^2)e_{12}\otimes
e_{12}\nonumber\\
& &\left. -(q-q^{-1})p^{k-\frac{1}{2}}\zeta(1-q^2p^{2k}\zeta^2)
e_{21}\otimes e_{21}\right),\label{E1}\\
E^2_{VV}(\zeta;p)&=&\prod_{k\geq 1}^{\leftarrow}\frac{1}{1+p^{2k-1}
\zeta^2}\left((1-p^{2k}\zeta^2)e_{11}\otimes e_{22}
+(1-p^{2k}\zeta^2)e_{22}\otimes e_{11}\right.\nonumber\\
& &\left.-(q-q^{-1})p^k\zeta e_{12}\otimes e_{21}
+(q-q^{-1})p^k\zeta e_{21}\otimes e_{12}\right).\label{E2}
\end{eqnarray}
The infinite product in $E^2_{VV}(\zeta;p)$ can be calculated
directly and we find
\begin{equation}
E^2_{VV}(\zeta;p)=b_E(\zeta)(e_{11}\otimes e_{22}+e_{22}\otimes e_{11})
+c_E(\zeta)(e_{12}\otimes e_{21}-e_{21}\otimes e_{12}),\label{sol-E2}
\end{equation}
where
\begin{equation}
b_E(\zeta)\pm c_E(\zeta)=\frac{(pq^{\pm 1}\zeta;p)_\infty
(-pq^{\mp 1}\zeta;p)_\infty}{(-p\zeta^2;p^2)_\infty}.
\end{equation}
As for $E^1_{VV}(\zeta;p)$, it can be written as
\begin{eqnarray}
E^1_{VV}(\zeta;p)&=&X_{11}(\zeta;p) e_{11}\otimes e_{11}
+X_{22}(\zeta;p) e_{22}\otimes e_{22}\nonumber\\
& & +X_{12}(\zeta;p) e_{12}\otimes e_{12}
+X_{21}(\zeta;p) e_{21}\otimes e_{21},\label{E1=X}
\end{eqnarray}
where $X_{ij}(\zeta;p)$ are the solution to the
following system of four difference equations
\begin{eqnarray}
&&X_{11}(p\zeta;p)=\frac{1}{1-q^{-2}p^2\zeta^2}\left((1+q^{-2}p\zeta^2)
X_{11}(\zeta;p)-p^\frac{1}{2}\zeta
(q-q^{-1})X_{12}(\zeta;p)\right),\nonumber\\
&&X_{12}(p\zeta;p)=\frac{1}{1-q^{2}p^2\zeta^2}\left(-p^\frac{1}{2}
\zeta (q-q^{-1})X_{11}(\zeta;p)+(1+q^{2}p\zeta^2)
X_{12}(\zeta;p)\right),\nonumber\\
&&X_{21}(p\zeta;p)=\frac{1}{1-q^{-2}p^2\zeta^2}\left(p^\frac{1}{2}
\zeta (q-q^{-1})X_{22}(\zeta;p)+(1+q^{-2}p\zeta^2)
X_{21}(\zeta;p)\right),\nonumber\\
&&X_{22}(p\zeta;p)=\frac{1}{1-q^{2}p^2\zeta^2}\left((1+q^{2}p\zeta^2)
X_{22}(\zeta;p)+p^\frac{1}{2}\zeta
(q-q^{-1})X_{21}(\zeta;p)\right).\label{dif-E1}
\end{eqnarray}
\vskip.3in
\noindent {\bf Acknowledgements.}
Our interest in quasi-Hopf algebras was ignited by
Bo-Yu Hou's beautiful lecture on the paper of Jimbo et al \cite{Jim97}
when Y.-Z.Z. was visiting Northwest University, Xi'an, in December 1997.
Y.-Z.Z. thanks Bo-Yu Hou for interesting him in the subject and for
helpful discussions. The financial support from the Australian Research
Council through a Queen Elizabeth II Fellowship Grant for Y.-Z.Z. is
also gratefully acknowledged.
\vskip.3in
\section{Introduction}
In this paper we investigate how much of the theory of the Tutte polynomial $T_G(x,y)$ \cite{tutte2}, which is a large and important branch of graph and matroid theory, can be generalized to hypergraphs and polymatroids. We will see that some of the basic definitions carry over in a new and interesting way. From among the many important properties of the Tutte polynomial, we find an elegant generalization of the equation relating planar dual graphs. The deletion--contraction rule also has an extension to our context but, interestingly, such formulas play a lesser role here than in the classical case.
The central object of our theory is the lattice polytope $Q_\mathscr H$, associated to a hypergraph $\mathscr H$, which generalizes the well known spanning tree polytope of ordinary graphs. The lattice points in $Q_\mathscr H$ are called hypertrees. A hypertree, see Definition \ref{def:hiperfa}, is essentially the valence distribution, taken at one side of the bipartition, of a spanning tree in the bipartite graph naturally associated to $\mathscr H$. It is the use of this concept that sets apart the current paper from earlier work.
From $Q_\mathscr H$, we will read off the one-variable polynomial invariants $I_\mathscr H$ and $X_\mathscr H$ of $\mathscr H$ which generalize $T_G(x,1)$ and $T_G(1,y)$, respectively. They are called the interior and exterior polynomials. Both have positive integer coefficients, the sum of which is the number of hypertrees. Their definitions involve a direct, yet non-obvious generalization of Tutte's idea of ordering the (hyper)edges and using the order to define their internal and external activities with respect to (hyper)trees.
The hypertree polytope is
naturally embedded as
the set of bases in a certain integer polymatroid. The latter can be viewed as a generalization to hypergraphs of the cycle matroid of a graph. The interior and exterior polynomials can in fact be defined for all integer (extended) polymatroids, in other words for all integer-valued submodular set functions regardless of whether they are non-decreasing. In this paper we will merely mention this possibility as we intend to keep the focus on bipartite graphs and hypergraphs.
As the counterpart of the planar duality relation $T_{G^*}(x,y)=T_G(y,x)$ we obtain the formula $I_{\mathscr H^*}=X_\mathscr H$, where $\mathscr H^*$ is a naturally associated dual to $\mathscr H$ in cases when the hypergraph $\mathscr H$ can be represented with a plane bipartite graph drawing.
Our invariants have another fundamental property which does not manifest itself in the classical case. Namely, any hypergraph $\mathscr H$ has an abstract dual $\overline{\mathscr H}$ that results from interchanging the roles of its vertices and hyperedges. For such a pair, we conjecture the identity $I_\mathscr H=I_{\overline{\mathscr H}}$.
A weaker statement is that $\mathscr H$ and $\overline{\mathscr H}$ have the same number of hypertrees. This was recently proven by Alexander Postnikov. Indeed the discovery of $Q_\mathscr H$ (and its dual relationship with $Q_{\overline{\mathscr H}}$) should be attributed to him. In his beautiful paper \cite{post} he puts these polytopes in several important contexts.
The polynomials $I_\mathscr H$ and $X_\mathscr H$, however, seem to appear for the first time in this article. I read Postnikov's work after most results presented here had been obtained. Prior to that I was only able to prove that $Q_\mathscr H$ and $Q_{\overline{\mathscr H}}$ had the same number of lattice points when $\mathscr H$ was planar. This brings us to the second half of the paper.
We will revisit a delicate
picture that William Tutte \cite{tutte1} introduced before his discovery of the dichromate. It consists of three plane bipartite graphs that together triangulate the sphere $S^2$. Their planar duals are canonically directed so that a so-called arborescence number can be associated to each. Tutte's Tree Trinity Theorem states that in such a triple, the three arborescence numbers coincide. We will see that this value is also the number of hypertrees in any of the six hypergraphs found in Tutte's picture. The six hypertree polytopes, for which we will derive a concise determinant formula based on work of Kenneth Berman, form three centrally symmetric pairs. The total number of interior and exterior polynomials associated to the six hypergraphs is reduced from $12$ to $6$ by our planar duality result, and the conjecture above on abstract duality implies that the number is in fact only $3$.
The notions that we are about to introduce also have a strong connection to low-dimensional topology. Indeed, the results contained in this paper are by-products of the author's investigations in knot theory, especially of research done on the Homfly polynomial.
There is a well known method, called the median construction, that one may apply to a plane graph to get an alternating link diagram. When the graph is bipartite, the link has a natural orientation. The interior polynomial was originally developed to describe a pattern found in the Homfly polynomials of these associated links. Then in joint work with Andr\'as Juh\'asz and Jacob Rasmussen, we found a manifestation of the hypertree polytope in Heegaard Floer theory. But as the former of these two results remains a conjecture at the time of this writing, it feels prudent to save the exploration of topological aspects for future papers.
In another forthcoming paper, joint with Alex Bene, we extend the theory of the hypertree polytope and interior and exterior polynomials to topological bipartite graphs and hypergraphs, in the same way that the Bollob\'as--Riordan polynomial generalizes the Tutte polynomial for fatgraphs.
The paper is organized as follows. Section \ref{sec:prelim} fixes terminology and recalls Tutte's definition of the dichromate.
In Section \ref{sec:hyper}, we introduce (that is to say, recall from \cite{post}) the hypertree polytope and in Section \ref{sec:geometry} we examine its basic geometry. After all this preparation, in Section \ref{sec:poly} we define interior and exterior polynomials. The material on polymatroids is found in subsections \ref{ssec:polymat} and \ref{ssec:polypoly}. Section \ref{sec:prop} establishes a variety of properties of the new invariants.
In Section \ref{sec:sejtes} we state the main unsolved problem of the paper, the abstract duality conjecture, along with some supporting evidence. We discuss planar duality in Section \ref{sec:dual}. Finally, Section \ref{sec:trinity} introduces tri\-ni\-ties of plane bipartite graphs and some earlier work on them, and in Section \ref{sec:moretrinity} we lay out our new results on that subject.
{\bf Acknowledgements.} Part of the research reported here was completed while the author worked at the University of Tokyo. My understanding of issues pertaining to the current paper has greatly benefited from discussions with my collaborators Alex Bene, Andr\'as Juh\'asz, and Jake Rasmussen. I also want to express my gratitude to L\'aszl\'o Feh\'er, L\'aszl\'o Lov\'asz, Hitoshi Murakami, Rich\'ard Rim\'anyi, Oliver Riordan, G\'abor Tardos, and Vera V\'ertesi for stimulating discussions. Last but not least, I thank P\'eter Juh\'asz for checking many examples by computer.
\section{Preliminaries}\label{sec:prelim}
We will use the standard notion of a \emph{graph} as an ordered pair $G=(V,E)$ with finite vertex set $V$ and finite edge (multi)set $E$. Loop edges and multiple edges are allowed.
By $G'=(V',E')$ being a \emph{subgraph} of $G$
we will mean
$V'=V$ and $E'\subset E$ (thus subgraphs of a given graph are equivalent to their edge sets). For example, if a subgraph contains no edge adjacent to some vertex $v$, then $\{v\}$ (more precisely, $(\{v\},\varnothing)$) is a connected component of the subgraph.
We will write $G\setminus\{e\}$ for the graph that results from removing (one copy of) the edge $e$ from $E$. (In \eqref{eq:delcontr} below and in subsection \ref{ssec:delcontr} this will also appear as $G-e$.) The symbol $G-\{v\}$ will stand for the graph that remains after removing the vertex $v$ and all its adjacent edges from $G$.
Graphs can be viewed as one-dimensional cell complexes and in that sense the \emph{nullity} of a graph, $n(G)$, is the rank of its first homology group. (Thus we may also refer to the nullity as the \emph{first Betti number}.)
\subsection{Review of the Tutte polynomial}\label{oldtutte}
Tutte's original construction \cite{tutte2} of the two-variable polynomial $T_G(x,y)$, associated to the graph $G=(V,E)$, proceeds as follows. Order $E$ arbitrarily.
Consider the set $\mathscr T$ of \emph{spanning trees}, that is, connected and cycle-free subgraphs, of $G$. (In order for $\mathscr T$ to be non-empty, $G$ needs to be connected. This will almost always be assumed.)
\begin{Def}\label{def:oldactive}
Given a spanning tree $\Gamma\in\mathscr T$, call an edge $e\in\Gamma$ \emph{internally active} with respect to $\Gamma$ if, after removing $e$ from $\Gamma$, connectedness of the subgraph cannot be restored by adding an edge to $\Gamma\setminus\{e\}$ that is smaller than $e$.
An edge $e\not\in\Gamma$ is \emph{externally active} if, after adding $e$ to $\Gamma$, cycle-freeness cannot be restored by removing an edge from $\Gamma\cup\{e\}$ that is smaller than $e$.
\end{Def}
In fact Tutte put `larger' instead of `smaller' in both cases above. But as we will see in Theorem \ref{thm:tutte} below, reversing the order does not affect the resulting polynomial.
\begin{Def}\label{def:tuttepoly}
Let $\iota(\Gamma)$ and $\varepsilon(\Gamma)$ denote the number of internally and externally active edges, respectively, with respect to $\Gamma$ (under the given order). Then, the \emph{Tutte polynomial} or \emph{dichromate} of the
graph $G$ is the generating function
\[T_G(x,y)=\sum_{\Gamma\in\mathscr T}x^{\iota(\Gamma)}y^{\varepsilon(\Gamma)}.\]
\end{Def}
\begin{tetel}[Tutte \cite{tutte2}]\label{thm:tutte}
The polynomial $T_G(x,y)$ is independent of the chosen edge order.
\end{tetel}
A multitude of properties of $T_G(x,y)$ is known. One of them is the deletion--contraction relation
\begin{equation}\label{eq:delcontr}
\begin{array}{rcll}
T_G&=&xT_{G/e}&\text{if }e\text{ is a bridge}\\
T_G&=&yT_{G-e}&\text{if }e\text{ is a loop}\\
T_G&=&T_{G-e}+T_{G/e}&\text{if }e\text{ is neither a bridge nor a loop.}
\end{array}
\end{equation}
(A bridge is an edge whose removal disconnects the graph. The contraction $G/e$ will be formally defined in subsection \ref{ssec:delcontr}.) For an excellent survey on why the Tutte polynomial is central in graph theory, see \cite{em}.
\begin{pelda}\label{ex:egy}
We show a graph and its Tutte polynomial.
\unitlength .04in
\[\begin{picture}(36,22)
\put(1,6){\circle*{2}}
\put(1,16){\circle*{2}}
\put(11,1){\circle*{2}}
\put(11,11){\circle*{2}}
\put(11,21){\circle*{2}}
\put(21,6){\circle*{2}}
\put(21,16){\circle*{2}}
\thicklines
\put(1,6){\line(0,1){10}}
\put(11,11){\line(0,1){10}}
\put(21,6){\line(0,1){10}}
\put(1,6){\line(2,1){10}}
\put(1,16){\line(2,1){10}}
\put(11,1){\line(2,1){10}}
\put(1,6){\line(2,-1){10}}
\put(11,11){\line(2,-1){10}}
\put(11,21){\line(2,-1){10}}
\put(25,11){\vector(1,0){10}}
\put(25,10){\line(0,1){2}}
\end{picture}
\begin{array}[b]{l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l}
+y^3&&&&&&\\
+3y^2&+3xy^2&&&&&\\
+2y&+7xy&+6x^2y&+3x^3y&&&\\
&+2x&+6x^2&+7x^3&+6x^4&+3x^5&+x^6.
\end{array}
\]
\end{pelda}
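The relation \eqref{eq:delcontr} also yields a straightforward (if exponential-time) recursive algorithm for $T_G$. The following Python sketch is only an illustration and not part of our formal development; the edge list encodes the graph of Example \ref{ex:egy}, with an ad hoc labeling of its vertices that we read off the figure.

```python
from collections import defaultdict

def shift(poly, di, dj):
    # multiply a polynomial (dict of (i, j) -> coeff) by x^di * y^dj
    return {(i + di, j + dj): c for (i, j), c in poly.items()}

def connected(edges, u, v):
    # is there a path from u to v using the given edges?
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for n in adj[w] - seen:
            seen.add(n); stack.append(n)
    return False

def contract(edges, u, v):
    # identify the endpoint v with u in every edge
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def tutte(edges):
    """Tutte polynomial of a connected multigraph, given as a list of
    (u, v) pairs, computed by deletion--contraction; returns a dict
    mapping (i, j) to the coefficient of x^i y^j."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                          # loop: T_G = y T_{G-e}
        return shift(tutte(rest), 0, 1)
    if not connected(rest, u, v):       # bridge: T_G = x T_{G/e}
        return shift(tutte(contract(rest, u, v)), 1, 0)
    out = defaultdict(int)              # generic edge: T_{G-e} + T_{G/e}
    for poly in (tutte(rest), tutte(contract(rest, u, v))):
        for key, c in poly.items():
            out[key] += c
    return dict(out)

# the nine edges of the graph of Example ex:egy (ad hoc vertex labels)
G = [("a", "p"), ("a", "q"), ("a", "r"), ("b", "q"), ("b", "r"),
     ("b", "s"), ("c", "p"), ("c", "q"), ("c", "s")]
```

Evaluating `tutte(G)` reproduces the thirteen coefficients displayed above; in particular the coefficients sum to $T_G(1,1)=50$, the number of spanning trees.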
\subsection{Hypergraphs and bipartite graphs}
For the purposes of this paper these structures are almost equivalent, as follows.
\begin{Def} A \emph{bipartite graph} is a triple $G=(V_0,V_1,E)$, where $V_0$ and $V_1$ are disjoint finite sets, called \emph{color classes}, and $E$ is a finite set of edges, each connecting an element of $V_0$ to an element of $V_1$ (multiple edges are not allowed). We will treat $(V_0,V_1,E)$, $(V_1,V_0,E)$, and the graph $(V_0\cup V_1,E)$ as the same object.
\end{Def}
\begin{Def} A \emph{hypergraph} is a pair $\mathscr H=(V,E)$, where $V$ is a finite set and $E$ is a finite multiset of non-empty subsets of $V$. Elements of $V$ are called \emph{vertices} and the elements of $E$ are the \emph{hyperedges}.
\end{Def}
Thus, hyperedges with multiplicity (that is, several copies of the same subset of $V$) are allowed. On the other hand, obviously, each hyperedge contains each of its elements exactly once.
A hypergraph is both a generalization and a special case of a graph. The first point is obvious. Conversely, the sets $V$ and $E$ that constitute the hypergraph $\mathscr H$ may be viewed as the color classes of a bipartite graph $\bip\mathscr H$: we connect $v\in V$ to $e\in E$ with an edge if $v\in e$. We will refer to the result as the \emph{bipartite graph associated to the hypergraph} $\mathscr H$.
The construction of $\bip\mathscr H$ above is reversible if we specify one of the two color classes in the bipartite graph $G=(V_0,V_1,E)$. We will use the notation
\begin{equation}\label{eq:absdual}
\mathscr G_0=(V_1,V_0)\quad\text{and}\quad\mathscr G_1=(V_0,V_1)
\end{equation}
for the resulting pair of hypergraphs. (The index of the hypergraph is chosen to emphasize its hyperedges.) Note that $\bip\mathscr G_0=\bip\mathscr G_1=G$.
\begin{Def}
The bipartite graph $G$ above is said to \emph{induce} the hypergraphs $\mathscr G_0$ and $\mathscr G_1$. Two hypergraphs are called \emph{abstract duals} if they can be obtained in the form \eqref{eq:absdual}. In other words, the abstract dual $\overline{\mathscr H}=(E,V)$ of a hypergraph $\mathscr H=(V,E)$ is defined by interchanging the roles of its vertices and hyperedges.
\end{Def}
\section{Hypertrees}\label{sec:hyper}
To generalize the approach of Subsection \ref{oldtutte} to hypergraphs, first we need a notion corresponding to spanning trees. The rest of the paper is built around this concept, and almost all novelty contained herein stems from its use.
\begin{Def}\label{def:hiperfa}
Let $\mathscr H=(V,E)$ be a hypergraph so that its associated bipartite graph $\bip\mathscr H$ is connected. By a \emph{hypertree} in $\mathscr H$ we mean a function (vector) $f\colon E\to\ensuremath{\mathbf N}=\{\,0,1,\ldots\,\}$ so that a spanning tree of $\bip\mathscr H$ can be found which has degree $f(e)+1$ at each $e\in E$. Such a spanning tree is said to \emph{realize} or to \emph{induce} $f$.
\end{Def}
If $f$ is a hypertree, then $0\le f(e)\le|e|-1$ for each $e\in E$. The condition that $\bip\mathscr H$ be connected is
not essential. We could talk about `hyperforests' realized by spanning forests but, partly to avoid such
terminology, we will generally assume that the bipartite graphs associated to our hypergraphs are connected.
\begin{megj}\label{rem:altalanosit}
Hypertrees generalize spanning trees of graphs: there, an edge $e$ is in the tree if $f(e)=1$ and not in the tree if $f(e)=0$. In our case, we allow hyperedges to be in hypertrees only ``partially.''
\end{megj}
The number of realizations may vary greatly from hypertree to hypertree. However, this phenomenon will not be incorporated into our theory in any way. For example, if $f$ is a hypertree and $f(e)=0$, then any edge of $\bip\mathscr H$ adjacent to the hyperedge $e$ can be chosen to be part of a realization of $f$, irrespective of the rest of the construction. The next lemma claims only slightly more.
\begin{lemma}\label{lem:anchor}
Let $\mathscr H=(V,E)$ be a hypergraph and $f\colon E\to\ensuremath{\mathbf N}$ a hypertree. For any collection of edges $\alpha_e$ of $\bip\mathscr H$ so that $\alpha_e$ is adjacent to $e$ for all $e\in E$, there is a realization of $f$ that contains $\alpha_e$ for all $e$.
\end{lemma}
\begin{proof}
Start with an arbitrary realization $\Gamma$ of $f$. If $e$ is such that $\alpha_e\not\in\Gamma$, then add $\alpha_e$ to $\Gamma$. This creates a unique cycle in the subgraph which goes through $e$. Now if we remove the edge of the cycle that is adjacent to $e$ but different from $\alpha_e$, the result is another realization of $f$. Carry this process out at every hyperedge.
\end{proof}
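On small examples, Lemma \ref{lem:anchor} is easy to confirm by exhaustive search. The Python fragment below is merely an illustration (the hypergraph, with vertices $p,q,r,s$ and hyperedges $a,b,c$, and all names are our ad hoc choices): for every hypertree $f$ and every system of incident edges $\alpha_e$, it finds a realization of $f$ containing all of the $\alpha_e$.

```python
from itertools import combinations, product

# ad hoc example: vertices p,q,r,s; hyperedges a={p,q,r}, b={q,r,s}, c={p,q,s}
HYP = {"a": "pqr", "b": "qrs", "c": "pqs"}
EDGES = [(e, v) for e in sorted(HYP) for v in HYP[e]]
NODES = set(HYP) | {v for vs in HYP.values() for v in vs}

def is_spanning_tree(t):
    # t has |NODES| - 1 edges, so it is a tree iff it is connected
    adj = {n: set() for n in NODES}
    for e, v in t:
        adj[e].add(v); adj[v].add(e)
    seen, stack = {t[0][0]}, [t[0][0]]
    while stack:
        w = stack.pop()
        for n in adj[w] - seen:
            seen.add(n); stack.append(n)
    return len(seen) == len(NODES)

trees = [t for t in combinations(EDGES, len(NODES) - 1) if is_spanning_tree(t)]

def hypertree(t):
    # the valence of the spanning tree t at each hyperedge, minus one
    return tuple(sum(1 for e, _ in t if e == h) - 1 for h in sorted(HYP))

# Lemma lem:anchor by brute force: for every hypertree f and every choice
# of one incident edge alpha_e per hyperedge, some realization of f
# contains all of the alpha_e.
for f in {hypertree(t) for t in trees}:
    realizations = [set(t) for t in trees if hypertree(t) == f]
    for alphas in product(*[[(e, v) for v in HYP[e]] for e in sorted(HYP)]):
        assert any(set(alphas) <= r for r in realizations)
```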
\subsection{The hypertree polytope}\label{ssec:politop}
It turns out that hypertrees are exactly the integer lattice points in a convex polytope. This fact was first realized by Postnikov \cite{post}. But as the author discovered it independently and our points of view and proofs are different, we shall give a full account here. This will be followed by a sampling of Postnikov's ideas, including a sketch of the proof of his duality theorem.
To describe the polytope, let $E'\subset E$ be a non-empty subset of hyperedges and let $\bigcup E'\subset V$ denote the set of its neighbors in $\bip\mathscr H$. Let $\bip\mathscr H\big|_{E'}$, the \emph{bipartite graph induced by $E'$}, be the graph with color classes $E'$ and $\bigcup E'$ and edges inherited from $\bip\mathscr H$. Let us denote the number of its connected components by $c(E')$.
\begin{tetel}\label{thm:politop}
Let $\mathscr H=(V,E)$ be a hypergraph so that its associated bipartite graph $\bip\mathscr H$ is connected. The hypertrees $f\colon E\to\ensuremath{\mathbf N}$ of $\mathscr H$ are exactly the integer solutions of the following system of linear inequalities in $\ensuremath{\mathbf R}^E$:
\begin{subequations}\label{eq:hypertree}
\begin{alignat}{2}
f(e)&\ge0&\quad&\text{for all }e\in E\label{eq:hypertree1}\\
\sum_{e\in E'}f(e)&\le|\textstyle\bigcup E'|-c(E')&\quad&\text{for all non-empty }E'\subset E\label{eq:hypertree2}\\
\sum_{e\in E}f(e)&=|V|-1.\label{eq:hypertree3}
\end{alignat}
\end{subequations}
\end{tetel}
\begin{megj}\label{rem:-1}
We get an equivalent system of inequalities if we replace \eqref{eq:hypertree2} with $\sum_{e\in E'}f(e)\le|\bigcup E'|-1$, still required for all non-empty $E'\subset E$. This is because if $\bip\mathscr H\big|_{E'}$ is not connected, then its connected components are also of the form $\bip\mathscr H\big|_{E''}$ and we get the stronger version of the inequality by summing the inequalities associated to these smaller subsets $E''$.
\end{megj}
\begin{megj}\label{rem:nemnegkijon}
The conditions \eqref{eq:hypertree1} follow from \eqref{eq:hypertree2} and \eqref{eq:hypertree3}: this is obvious if $E=\{e\}$ and otherwise, with $E'=E\setminus\{e\}$, we have
\[f(e)=|V|-1-\sum_{e'\in E'}f(e')\ge(|V|-|\textstyle\bigcup E'|)+(c(E')-1),\]
where the right hand side is the sum of two non-negative quantities.
\end{megj}
\begin{proof}[Proof of Theorem \ref{thm:politop}]
The conditions \eqref{eq:hypertree} are necessary because of the well known facts that
\begin{enumerate}[(i)]
\item\label{a} the number of edges in a spanning forest (maximal cycle-free subgraph) of a graph is the number of vertices minus the number of connected components of the graph and
\item\label{b} any cycle-free subgraph is part of a spanning forest.
\end{enumerate}
Indeed, for a given hypertree $f$ and non-empty subset $E'\subset E$, any spanning tree of $\bip\mathscr H$ that realizes $f$ has an intersection with $\bip\mathscr H\big|_{E'}$ which is a cycle-free subgraph of the latter. As such, it may have at most $|E'|+|\bigcup E'|-c(E')$ edges. Since each of those has exactly one of its ends at an element of $E'$, we have
\[\sum_{e\in E'}(f(e)+1)\le|E'|+|\bigcup E'|-c(E'),\]
which is just another form of \eqref{eq:hypertree2}. The claim \eqref{eq:hypertree1} is obvious and \eqref{eq:hypertree3} is immediate from \eqref{a} above.
To see why \eqref{eq:hypertree} is also sufficient, let us formulate a lemma that is marginally stronger than what we currently need, but the form in which we state it will be useful later. Recall that
a graph has nullity zero if and only if it is cycle-free.
\begin{lemma}\label{lem:kormentes}
Suppose that the integer vector $f\colon E\to\ensuremath{\mathbf N}$ satisfies conditions \eqref{eq:hypertree1} and \eqref{eq:hypertree2} (including for $E'=E$) but not necessarily \eqref{eq:hypertree3}. Then there exists a cycle-free subgraph in $\bip\mathscr H$ that has valence $f(e)+1$ at all elements $e\in E$.
\end{lemma}
Let the vector $f$ satisfy the conditions and choose an arbitrary subgraph $P$ of $\bip\mathscr H$ whose valence at each $e\in E$ is $f(e)+1$. This is possible because \eqref{eq:hypertree2} applied to $E'=\{e\}$ says $f(e)\le|e|-1$. If $P$ is cycle-free, we are done. Assume it is not.
It suffices to show that there is another subgraph of $\bip\mathscr H$ that has the same valences at elements of $E$ as $P$ (prescribed by $f$) but whose nullity is one less than that of $P$. The subgraph $P$ has a connected component $C$ which contains a cycle. Let $C$ intersect $E$ in the set $E'$. Applying \eqref{eq:hypertree2} to $E'$ we see that there is a hyperedge $e\in E'$ which is connected by an edge $\alpha$ of $\bip\mathscr H$ to a vertex which is not in $C$. (Otherwise, $C$ and $\bip\mathscr H\big|_{E'}$ would form a connected and not cycle-free union; as such a subgraph contains a spanning tree of $\bip\mathscr H\big|_{E'}$, we would get a contradiction between \eqref{eq:hypertree2} and \eqref{a}.)
Now if $e$ is part of a cycle in $C$, we are done because we may remove an edge of that cycle (adjacent to $e$) and replace it with $\alpha$, whereby valences at hyperedges remain the same but the nullity reduces by one. Otherwise, each edge of $C$ coming out of $e$ leads to a different connected component of $C-\{e\}$. At least one of these (call it $C'$) still contains a cycle. Replace one such edge with $\alpha$. This results in a subgraph $P'$ that has the right valences and the same nullity as $P$, but which has a non-tree component $C'$ containing fewer elements of $E$ than $C$ did. Repeat the procedure of this paragraph to $P'$ and $C'$. After a finite number of iterations, the desired reduction in the nullity will occur. This finishes the proof of the lemma.
If we apply Lemma \ref{lem:kormentes} to a vector $f$ that satisfies \eqref{eq:hypertree}, the resulting subgraph is not just cycle-free but because of \eqref{eq:hypertree3} it is actually a spanning tree.
\end{proof}
\begin{Def}
Let $\mathscr H$ be a hypergraph. The \emph{hypertree polytope} of $\mathscr H$, denoted with $Q_\mathscr H$, is the set of solutions in $\ensuremath{\mathbf R}^E$ of the inequalities \eqref{eq:hypertree}.
\end{Def}
By Theorem \ref{thm:politop}, the set of hypertrees in $\mathscr H$ can be written as $Q_\mathscr H\cap\ensuremath{\mathbf Z}^E$. Furthermore, $Q_\mathscr H$ is a lattice polytope, that is $Q_\mathscr H=\conv(Q_\mathscr H\cap\ensuremath{\mathbf Z}^E)$, where $\conv$ denotes the usual convex hull.
The hypertree polytope generalizes the usual spanning tree polytope of a graph (cf.\ Remarks \ref{rem:altalanosit} and \ref{rem:-1}). Equation \eqref{eq:hypertree3} means that $Q_\mathscr H$ is part of an affine hyperplane in $\ensuremath{\mathbf R}^E$. Thus, $Q_\mathscr H$ is situated in the lattice cut out from $\ensuremath{\mathbf Z}^E$ by the hyperplane.
\begin{pelda}\label{ex:ketto}
The graph used in Example \ref{ex:egy} is in fact bipartite. Let us denote it with $G$ and label its color classes with $V_0=\{\,a,b,c\,\}$ and $V_1=\{\,p,q,r,s\,\}$ according to Figure \ref{fig:betuzes}.
\begin{figure}[htbp]
\unitlength 3pt
\begin{picture}(36,30)
\put(8,9){\circle*{2}}
\put(8,19){\circle*{2}}
\put(18,4){\circle*{2}}
\put(18,14){\circle*{2}}
\put(18,24){\circle*{2}}
\put(28,9){\circle*{2}}
\put(28,19){\circle*{2}}
\thicklines
\put(8,9){\line(0,1){10}}
\put(18,14){\line(0,1){10}}
\put(28,9){\line(0,1){10}}
\put(8,9){\line(2,1){10}}
\put(8,19){\line(2,1){10}}
\put(18,4){\line(2,1){10}}
\put(8,9){\line(2,-1){10}}
\put(18,14){\line(2,-1){10}}
\put(18,24){\line(2,-1){10}}
\put(17,10){$q$}
\put(4,19){$p$}
\put(17,0){$r$}
\put(30,19){$s$}
\put(17,26){$c$}
\put(4,8){$a$}
\put(30,8){$b$}
\end{picture}
\caption{A bipartite graph.}
\label{fig:betuzes}
\end{figure}
There are $T_G(1,1)=50$ spanning trees in $G$. They give rise to seven hypertrees in $\mathscr G_0$, and the same number in $\mathscr G_1$, cf.\ Theorem \ref{thm:post}. These are shown, with a concrete spanning tree realizing each, in Figure \ref{fig:ujfak}. Because of \eqref{eq:hypertree3}, it is natural to view the polytopes $Q_{\mathscr G_0}$ and $Q_{\mathscr G_1}$ in barycentric coordinate systems with basepoints labeled with
\[\mathbf a(3,0,0),\,\mathbf b(0,3,0),\,\mathbf c(0,0,3)\]
and
\[\mathbf p(2,0,0,0),\,\mathbf q(0,2,0,0),\,\mathbf r(0,0,2,0),\,\mathbf s(0,0,0,2),\]
respectively. In Figure \ref{fig:polytopes}, we indicated individual hypertrees by dots and the two hypertree polytopes by thickened edges.
\begin{figure}[htbp]
\labellist
\small
\pinlabel $\mathbf a$ at -10 15
\pinlabel $\mathbf b$ at 340 15
\pinlabel $\mathbf c$ at 160 270
\pinlabel $\mathbf q$ at 575 290
\pinlabel $\mathbf r$ at 455 75
\pinlabel $\mathbf s$ at 710 -10
\pinlabel $\mathbf p$ at 770 150
\endlabellist
\centering
\includegraphics[width=4in]{polytopes}
\caption{Hypertree polytopes associated to a pair of abstract dual hypergraphs.}
\label{fig:polytopes}
\end{figure}
\end{pelda}
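The counts in this example are easy to confirm by machine. The following Python sketch (an illustration only, with ad hoc names) enumerates the hypertrees of $\mathscr G_0$ and $\mathscr G_1$ in two ways, directly from spanning trees as in Definition \ref{def:hiperfa} and as the integer solutions of the system \eqref{eq:hypertree} in the simplified form of Remark \ref{rem:-1}, and checks that the two descriptions agree.

```python
from itertools import combinations, product

V0, V1 = "abc", "pqrs"          # color classes as in Figure fig:betuzes
E = [("a", "p"), ("a", "q"), ("a", "r"), ("b", "q"), ("b", "r"),
     ("b", "s"), ("c", "p"), ("c", "q"), ("c", "s")]
NODES = set(V0) | set(V1)

def is_tree(t):
    # t has |NODES| - 1 edges, so it is a spanning tree iff connected
    adj = {n: set() for n in NODES}
    for u, v in t:
        adj[u].add(v); adj[v].add(u)
    seen, stack = {t[0][0]}, [t[0][0]]
    while stack:
        w = stack.pop()
        for n in adj[w] - seen:
            seen.add(n); stack.append(n)
    return len(seen) == len(NODES)

TREES = [t for t in combinations(E, len(NODES) - 1) if is_tree(t)]

def from_trees(hyperedges):
    # hypertrees as valence distributions of spanning trees (Def. def:hiperfa)
    return {tuple(sum(1 for x, y in t if e in (x, y)) - 1 for e in hyperedges)
            for t in TREES}

def from_inequalities(hyperedges, vertices):
    # integer solutions of eq:hypertree2 (in the form of Remark rem:-1)
    # together with eq:hypertree3
    nb = {e: {w for x, y in E if e in (x, y) for w in (x, y) if w != e}
          for e in hyperedges}
    idx = list(range(len(hyperedges)))
    sols = set()
    for f in product(*[range(len(nb[e])) for e in hyperedges]):
        if sum(f) != len(vertices) - 1:
            continue
        if all(sum(f[i] for i in sub)
               <= len(set().union(*[nb[hyperedges[i]] for i in sub])) - 1
               for k in idx for sub in combinations(idx, k + 1)):
            sols.add(f)
    return sols
```

Running the two enumerations confirms the fifty spanning trees and the seven hypertrees in each of $\mathscr G_0$ and $\mathscr G_1$, in accordance with Theorems \ref{thm:politop} and \ref{thm:post}.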
The hypertree polytope is of course a polytope, i.e., convex, bounded, and closed. In particular it is closed under convex linear combinations. For its integer lattice points, that is the hypertrees themselves, convexity translates to the following obvious (from Theorem \ref{thm:politop}) but useful property.
\begin{lemma}\label{lem:discreteconvex}
Let $\{f_i\}$ be some set of hypertrees in the hypergraph $\mathscr H=(V,E)$ and let $g\colon E\to\ensuremath{\mathbf N}$ be a function so that $g(e)\ge0$ for all $e\in E$ and $\sum_{e\in E}g(e)=|V|-1$. If it is also true that for all non-empty $E'\subset E$, there is an index $i$ so that $\sum_{e\in E'}g(e)\le\sum_{e\in E'}f_i(e)$, then $g$ is a hypertree.
\end{lemma}
\subsection{Postnikov's approach}
In the terminology of \cite{post}, a hypertree is a draconian sequence \cite[Definition 9.2]{post} and the hypertree polytope is
a trimmed generalized permutohedron \cite[Definition 11.2]{post}. Let us now summarize
Corollary 11.8, Theorem 12.9, and some other surrounding statements from \cite{post}.
\begin{tetel}[Postnikov]\label{thm:post}
Let $G=(V_0,V_1,E)$ be a connected bipartite graph with associated hypergraphs $\mathscr G_0$ and $\mathscr G_1$ as in \eqref{eq:absdual}. Then $|Q_{\mathscr G_0}\cap\ensuremath{\mathbf Z}^{V_0}|=|Q_{\mathscr G_1}\cap\ensuremath{\mathbf Z}^{V_1}|$. In other words, abstract dual hypergraphs have the same number of hypertrees.
\end{tetel}
It turns out that $Q_{\mathscr G_0}$ and $Q_{\mathscr G_1}$ have even more in common. Conjecture \ref{conj:dual} below proposes a generalization of Postnikov's theorem.
Without going into the fine details of an elegant but complicated proof, we will quote some of Postnikov's observations. Of course, an equivalent statement is
\[|Q_{\mathscr H}\cap\ensuremath{\mathbf Z}^E|=|Q_{\overline{\mathscr H}}\cap\ensuremath{\mathbf Z}^V|,\]
where $\mathscr H=(V,E)$ is an arbitrary hypergraph and $\overline{\mathscr H}=(E,V)$ is its abstract dual. We will use this latter formulation. Denote the standard generators of $\ensuremath{\mathbf R}^E$ with $u_e$, $e\in E$. Recall that the \emph{Minkowski sum} of two subsets $A,B$ of a vector space is the set of vectors one can obtain as $a+b$ with $a\in A$ and $b\in B$. The \emph{Minkowski difference} $A-B$ is defined as the set of vectors $x$ so that $x+B\subset A$. It can also be thought of as the set of all translates of $B$ that are contained in $A$. See \cite[Chapter 3]{bojler} for a thorough introduction to these operations.
Next, let us quote \cite[Lemma 11.7]{post}.
\begin{all}[Postnikov]\label{pro:felbont}
The hypertree polytope $Q_{\mathscr H}\subset\ensuremath{\mathbf R}^E$ of the hypergraph $\mathscr H=(V,E)$ can be written as
\[
Q_{\mathscr H}=\Big(\sum_{v\in V}\Delta_v\Big)-\Delta_E,
\]
where $\Sigma$ denotes the Minkowski sum of the simplices $\Delta_v=\conv\{\,u_e\mid v\in e\,\}$, $\Delta_E=\conv\{\,u_e\mid e\in E\,\}$ is the unit simplex and the right hand side is a Minkowski difference.
\end{all}
Of course, if we denote the standard basis for $\ensuremath{\mathbf R}^V$ with $u_v$, $v\in V$ and define the simplices $\Delta_e=\conv\{\,u_v\mid v\in e\,\}\subset\ensuremath{\mathbf R}^V$ for all $e\in E$, as well as $\Delta_V=\conv\{\,u_v\mid v\in V\,\}$, then
\[Q_{\overline{\mathscr H}}=\Big(\sum_{e\in E}\Delta_e\Big)-\Delta_V.\]
The Minkowski sums (in Postnikov's terminology, the (untrimmed) generalized permutohedra)
\[Q_{\mathscr H}^+=\sum_{v\in V}\Delta_v\quad\text{and}\quad Q_{\overline{\mathscr H}}^+=\sum_{e\in E}\Delta_e\] are related through the so-called root polytope
\[Q=\conv\{\,u_e+u_v\mid v\in e\,\}\subset\ensuremath{\mathbf R}^E\oplus\ensuremath{\mathbf R}^V\]
(an $(|E|+|V|-2)$-dimensional polytope whose vertices are indexed by the edges of $\bip\mathscr H$) as described below.
\begin{all}\label{pro:csel}
If we define the affine subspaces $S_E$ and $S_V$ of $\ensuremath{\mathbf R}^E\oplus\ensuremath{\mathbf R}^V$ as
\[S_E=\pi_V^{-1}\left(\frac1{|V|}\sum_{v\in V}u_v\right)\quad\text{and}\quad S_V=\pi_E^{-1}\left(\frac1{|E|}\sum_{e\in E}u_e\right),\]
where $\pi_V\colon\ensuremath{\mathbf R}^E\oplus\ensuremath{\mathbf R}^V\to\ensuremath{\mathbf R}^V$ and $\pi_E\colon\ensuremath{\mathbf R}^E\oplus\ensuremath{\mathbf R}^V\to\ensuremath{\mathbf R}^E$ are the projections, then, up to translation,
\[Q_{\mathscr H}^+=|V|\left(Q\cap S_E\right)\quad\text{and}\quad Q_{\overline{\mathscr H}}^+=|E|\left(Q\cap S_V\right).\]
\end{all}
Studying triangulations of $Q$, Postnikov observes that a set of vertices of $Q$ is affinely independent if and only if the corresponding edges form a forest in $\bip\mathscr H$. The simplices $T_\Gamma$ that arise this way from spanning trees $\Gamma$ have the same volume.
Most importantly, if we form the intersections $T_\Gamma\cap S_E$ and $T_\Gamma\cap S_V$ and identify them with subsets of $Q_{\mathscr H}^+$ and $Q_{\overline{\mathscr H}}^+$, respectively, as described in Proposition \ref{pro:csel}, then those subsets contain unique translates of $\Delta_E$ and $\Delta_V$, respectively, by integer vectors. (Also, they are essentially disjoint from other integer translates of those unit simplices.) In fact, those integer vectors are the two hypertrees induced by $\Gamma$.
Thus Postnikov finds that any triangulation of $Q$ gives a bijection between the integer lattice points of the Minkowski differences, which are the hypertree polytopes. We also see that the volume of the root polytope associated to a bipartite graph is proportional to the number of hypertrees in each of its induced hypergraphs.
\section{The geometry of the hypertree polytope}\label{sec:geometry}
\subsection{Polymatroids}\label{ssec:polymat}
It turns out that for our central concepts, namely the interior and exterior polynomials which will be introduced in the next section, the right level of generality is that of a polymatroid. Moreover, basic submodular function techniques will be useful to simplify our arguments, most notably the proof that our polynomials are well defined, even in the hypergraph case. We will recall some elements of this theory here, using \cite[Volume B, Chapter 44]{sch} as basic reference.
\begin{Def}
Let $S$ be a finite ground set and $\mu\colon\mathscr P(S)\to\ensuremath{\mathbf R}$ a set function, i.e., a function defined on all subsets of $S$. We say that $\mu$ is \emph{submodular} if
\[\mu(U)+\mu(V)\ge\mu(U\cap V)+\mu(U\cup V)\]
holds for all subsets $U,V$ of $S$. The set function $\mu$ is called \emph{non-decreasing} if $U\subset V\subset S$ implies $\mu(U)\le\mu(V)$.
\end{Def}
For an arbitrary set function $\nu\colon\mathscr P(S)\to\ensuremath{\mathbf R}$, we define the polyhedra
\begin{equation*}\begin{split}
P_\nu=\{\,\mathbf{x}\in\ensuremath{\mathbf R}^S\mid\mathbf x\ge\mathbf0,\;\mathbf x\cdot\mathbf i_U\le\nu(U)\text{ for all }U\subset S\,\}\\\text{and }
EP_\nu=\{\,\mathbf{x}\in\ensuremath{\mathbf R}^S\mid\mathbf x\cdot\mathbf i_U\le\nu(U)\text{ for all }U\subset S\,\}.
\end{split}\end{equation*}
Here $\mathbf x\ge\mathbf0$ means that all entries in $\mathbf x$ are non-negative and $\mathbf i_U$ is the indicator function of the subset $U$ so that the dot product $\mathbf x\cdot\mathbf i_U$ becomes the sum of entries in $\mathbf x$ corresponding to elements of $U$.
\begin{Def}
If $\mu\colon\mathscr P(S)\to\ensuremath{\mathbf R}$ is a submodular set function, then $P_\mu$ and $EP_\mu$ are called the \emph{polymatroid} and \emph{extended polymatroid}, respectively, of $\mu$. A vector $\mathbf x\in EP_\mu$ is called a \emph{base} if $\mathbf x\cdot\mathbf i_S=\mu(S)$. We denote the set of bases with $B_\mu$ and call it the \emph{base polytope}.
\end{Def}
The notions of submodular set function and extended polymatroid are essentially equivalent as described in \cite[Section 44.4]{sch}. By the same token, non-decreasing submodular set functions are equivalent to polymatroids. Integer-valued submodular functions correspond to (extended) polymatroids that are \emph{integer}, meaning that they are the convex hulls of their integer lattice points. As to bases, we note
\begin{lemma}\label{lem:nincstobbbazis}
If $\mu$ is submodular and non-decreasing, then we have $B_\mu\subset P_\mu$.
\end{lemma}
\begin{proof}
Let $\mathbf x$ be a base, $s\in S$, and $U=S\setminus\{s\}$. Then the $s$-component of $\mathbf x$ is $\mu(S)-\mathbf x\cdot\mathbf i_U\ge\mu(S)-\mu(U)\ge\mu(S)-\mu(S)=0$.
\end{proof}
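To connect this machinery back to hypergraphs: by Theorem \ref{thm:politop}, the hypertree polytope $Q_\mathscr H$ is cut out by the set function $E'\mapsto|\bigcup E'|-c(E')$, and one can check that this function is submodular and non-decreasing, so that $Q_\mathscr H$ is the base polytope of an integer polymatroid, as asserted in the introduction. The Python fragment below (an ad hoc illustration, not part of the formal development) verifies both claims by brute force for the hypergraph $\mathscr G_0$ of Example \ref{ex:ketto}.

```python
from itertools import combinations, product

# G_0 of Example ex:ketto: vertices p,q,r,s; hyperedges a,b,c
HYP = {"a": set("pqr"), "b": set("qrs"), "c": set("pqs")}
H = sorted(HYP)

def components(sub):
    # number of connected components of bip H restricted to sub
    pools = [{e} | HYP[e] for e in sub]
    merged = True
    while merged:
        merged = False
        for i in range(len(pools)):
            for j in range(i + 1, len(pools)):
                if pools[i] & pools[j]:
                    pools[i] |= pools.pop(j)
                    merged = True
                    break
            if merged:
                break
    return len(pools)

def mu(sub):
    # the set function of Theorem thm:politop; mu(empty) = 0
    if not sub:
        return 0
    return len(set().union(*[HYP[e] for e in sub])) - components(sub)

subsets = [s for k in range(len(H) + 1) for s in combinations(H, k)]

# submodular and non-decreasing, checked over all pairs of subsets
for U, W in product(subsets, repeat=2):
    U, W = set(U), set(W)
    assert mu(U) + mu(W) >= mu(U & W) + mu(U | W)
    if U <= W:
        assert mu(U) <= mu(W)

# integer points of the base polytope B_mu = hypertrees of G_0
bases = [f for f in product(range(3), repeat=3)
         if sum(f) == mu(set(H))
         and all(sum(f[H.index(e)] for e in s) <= mu(set(s))
                 for s in subsets if s)]
```

The enumeration returns exactly the seven hypertrees of $\mathscr G_0$ visible in Figure \ref{fig:polytopes}.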
The following definition expresses a basic property of polymatroids that will have an important role in our treatment.
\begin{Def}\label{def:tight}
Let $\nu\colon\mathscr P(S)\to\ensuremath{\mathbf R}$ be a set function and $\mathbf x\in EP_\nu$. We say that the subset $U\subset S$ is \emph{tight} at $\mathbf x$ if $\mathbf x\cdot\mathbf i_U=\nu(U)$.
The set function $\nu$ is called \emph{tight} if for all $\mathbf x\in EP_\nu$, the set of subsets that are tight at $\mathbf x$ is closed under taking unions and intersections.
\end{Def}
\begin{all}[Theorem 44.2 in \cite{sch}]\label{pro:szoros}
Submodular set functions are tight.
\end{all}
Our next claim is on certain $3$-dimensional intersections of the base polytope $B_\mu$.
\begin{lemma}\label{lem:teglalap}
Let $\mu\colon\mathscr P(S)\to\ensuremath{\mathbf R}$ be a submodular set function and let $p$, $q$, $r$, and $s$ be distinct elements of $S$.
Fix the values of all components other than these four
and denote the resulting subset of $B_\mu$ by $B'$. Then, assuming $B'\ne\varnothing$, the face of $B'$ along which the sum of the $p$ and $q$ components takes its maximum is a (possibly degenerate) rectangle with sides parallel to the vectors $\mathbf i_{\{p\}}-\mathbf i_{\{q\}}$ and $\mathbf i_{\{r\}}-\mathbf i_{\{s\}}$.
\end{lemma}
\begin{proof}
The set $B'$ is cut out of the appropriate $3$-dimensional affine subspace $A$ as the intersection of fourteen half-spaces corresponding to the non-trivial subsets of $\{\,p,q,r,s\,\}$. Six of these have normal vectors such as $\mathbf i_{\{p\}}+\mathbf i_{\{q\}}-\mathbf i_{\{r\}}-\mathbf i_{\{s\}}$, so that their intersection is a (possibly degenerate) cuboid $C$; see Figure \ref{fig:muszaki}. The other eight half-spaces have normal vectors such as $\pm(\mathbf i_{\{p\}}+\mathbf i_{\{q\}}+\mathbf i_{\{r\}}-3\mathbf i_{\{s\}})$ and they cut off pieces of $C$ near its vertices. Our task is to show that no segment of positive length from any edge of $C$ can remain in $B'$.
\begin{figure}[htbp]
\labellist
\small
\pinlabel $\mathbf p$ at 530 1130
\pinlabel $\mathbf q$ at -20 980
\pinlabel $\mathbf r$ at 290 660
\pinlabel $\mathbf s$ at 255 810
\pinlabel $\mathbf x$ at 1130 990
\endlabellist
\centering
\includegraphics[width=4in]{muszaki}
\caption{Upper left: the portion of the simplex $\mu(S)\Delta_S$ that lies in the affine subspace $A$. If $\sigma$ denotes $\mu(S)$ minus the sum of the fixed components then, writing the components in the order $p,q,r,s$, the other four components of the points shown are $\mathbf p=(\sigma,0,0,0)$ and so on. Upper right: the cuboid $C$ with the normal vectors of its faces. Bottom: the polytope $B'=B_\mu\cap A$ and the direction vectors of two of its edges.}
\label{fig:muszaki}
\end{figure}
If the opposite were the case and, for instance, such an interior point $\mathbf x$ remained on the edge of $C$ that we thickened in Figure \ref{fig:muszaki}, then $\mathbf x$ would be a base in $B_\mu$ at which the sets $\{\,p,q\,\}$ and $\{\,p,r\,\}$ are tight but neither their union $\{\,p,q,r\,\}$ nor their intersection $\{p\}$ is tight. As this is impossible,
we see that the correct form of $B'$ is as shown in the lower panel of Figure \ref{fig:muszaki}.
\end{proof}
Submodular functions are relevant in this paper due to the following fact, pointed out to the author by L\'aszl\'o Lov\'asz.
\begin{all}\label{pro:submodular}
Let $\mathscr H=(V,E)$ be a hypergraph. The function
\begin{equation}\label{eq:subm}
\mu(E')=|\textstyle\bigcup E'|-c(E')
\end{equation}
of Theorem \ref{thm:politop}, extended to the empty set as $\mu(\varnothing)=0$, is non-decreasing and submodular on the set $E$ of hyperedges.
\end{all}
\begin{proof}
We leave the proof of the first assertion to the reader. As to the second, according to \cite[Theorem 44.1]{sch}, it suffices to prove
\begin{equation}\label{eq:laci}
\big(\mu(E'\cup\{e_1\})-\mu(E')\big)+\big(\mu(E'\cup\{e_2\})-\mu(E')\big)\ge\mu(E'\cup\{\,e_1,e_2\,\})-\mu(E')
\end{equation}
for all $E'\subset E$ and distinct $e_1,e_2\in E\setminus E'$. Let $m_{12}$ denote the number of connected components of $\bip\mathscr H\big|_{E'}$ that are connected to both $e_1$ and $e_2$. Aside from these, let $e_i$, $i=1,2$, be connected to another $m_i$ components of $\bip\mathscr H\big|_{E'}$. Furthermore, let $n_{12}$ denote the number of those common elements of $e_1$ and $e_2$ that are not in $\bigcup E'$, and assume that other than these, $e_i$ contains $n_i$ elements outside of $\bigcup E'$. Then we have
\begin{equation*}\begin{split}
\mu(E'\cup\{e_1\})-\mu(E')=n_1+n_{12}+m_1+m_{12}-1\text{ and}\\
\mu(E'\cup\{e_2\})-\mu(E')=n_2+n_{12}+m_2+m_{12}-1,
\end{split}\end{equation*}
whereas the right hand side of \eqref{eq:laci} is
\[\mu(E'\cup\{\,e_1,e_2\,\})-\mu(E')=\left\{\begin{array}{l>{\hspace{-12pt}}r}
n_1+n_2+m_1+m_2-2&\text{if }n_{12}=m_{12}=0\\
n_1+n_2+n_{12}+m_1+m_2+m_{12}-1&\text{otherwise.}
\end{array}\right.\]
The required inequality holds in all cases (with equality if and only if $n_{12}=m_{12}=0$ or one of the two is $1$ and the other $0$).
\end{proof}
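The proposition can be checked mechanically on small instances. The Python sketch below is purely illustrative (the three-hyperedge example hypergraph and all identifiers are ours): it computes $\mu(E')=|\bigcup E'|-c(E')$ directly from the bipartite graph and verifies that $\mu$ is non-decreasing and submodular.

```python
from itertools import combinations

# a small example hypergraph on V = {1,2,3,4} with three hyperedges
E = [frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({1, 4})]

def comp_count(edges):
    """Connected components of Bip H restricted to the hyperedges in `edges`."""
    comps = []  # pairs (hyperedges of the component, vertices it covers)
    for e in edges:
        merged_e, merged_v, rest = [e], set(e), []
        for ce, cv in comps:
            if cv & e:  # an old component can only attach through e itself
                merged_e, merged_v = merged_e + ce, merged_v | cv
            else:
                rest.append((ce, cv))
        comps = rest + [(merged_e, merged_v)]
    return len(comps)

def mu(sub):
    """mu(E') = |union E'| - c(E') of equation (eq:subm); mu(empty) = 0."""
    chosen = [E[i] for i in sub]
    if not chosen:
        return 0
    return len(set().union(*chosen)) - comp_count(chosen)

subsets = [frozenset(s) for r in range(len(E) + 1)
           for s in combinations(range(len(E)), r)]
assert all(mu(A) <= mu(B) for A in subsets for B in subsets if A <= B)
assert all(mu(A) + mu(B) >= mu(A | B) + mu(A & B)
           for A in subsets for B in subsets)
print([mu({i}) for i in range(len(E))], mu(range(len(E))))
```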
\begin{Def}\label{def:PH}
The \emph{polymatroid of the hypergraph} $\mathscr H$ is the polymatroid $P_\mathscr H=P_\mu$ associated to the non-decreasing submodular set function $\mu$ of equation \eqref{eq:subm}.
\end{Def}
The next proposition is just a reformulation of Theorem \ref{thm:politop} and Lemma \ref{lem:kormentes}, which was used in its proof.
\begin{all}
The integer lattice points in the polymatroid $P_\mathscr H$ of the hypergraph $\mathscr H$ are exactly the functions $f\colon E\to\ensuremath{\mathbf N}$ so that $\bip\mathscr H$ has a cycle-free subgraph with valence $f(e)+1$ at each $e\in E$. The hypertree polytope $Q_\mathscr H$ coincides with the base polytope of $P_\mathscr H$.
\end{all}
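As a sanity check of this description, the following illustrative Python sketch (our own toy example: the triangle graph viewed as a hypergraph) enumerates the lattice points of $P_\mathscr H$ with coordinate sum $|V|-1$ using the polymatroid inequalities. For a graph, the hypertrees found this way are exactly the indicator vectors of its spanning trees.

```python
from itertools import combinations, product

# the triangle C_3 viewed as a hypergraph: each hyperedge has two vertices
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
V = set().union(*E)

def comp_count(edges):
    """Connected components of Bip H restricted to `edges`."""
    comps = []
    for e in edges:
        merged_e, merged_v, rest = [e], set(e), []
        for ce, cv in comps:
            if cv & e:
                merged_e, merged_v = merged_e + ce, merged_v | cv
            else:
                rest.append((ce, cv))
        comps = rest + [(merged_e, merged_v)]
    return len(comps)

def mu(sub):
    chosen = [E[i] for i in sub]
    return len(set().union(*chosen)) - comp_count(chosen) if chosen else 0

def is_hypertree(f):
    """Theorem thm:politop: inequalities (eq:hypertree2) plus sum = |V| - 1."""
    return (sum(f) == len(V) - 1 and
            all(sum(f[i] for i in sub) <= mu(sub)
                for r in range(1, len(E) + 1)
                for sub in combinations(range(len(E)), r)))

hypertrees = sorted(f for f in product(*(range(len(e)) for e in E))
                    if is_hypertree(f))
print(hypertrees)  # one hypertree per spanning tree of the triangle
```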
Each graph has a so-called cycle matroid. This is a classical construction that goes back to the very beginning of matroid theory. The rank function of any matroid is submodular and therefore defines a polymatroid. (In fact, one way to define a matroid is as a polymatroid associated to a non-negative integer-valued, non-decreasing submodular set function that assigns $0$ to $\varnothing$ and $0$ or $1$ to singleton sets.) Thus every graph has a natural polymatroid associated to it, and Definition \ref{def:PH} generalizes this association to hypergraphs.
\subsection{Hyperedges in sequence}
We are going to need the following elementary observations later.
\begin{lemma}\label{lem:erdo}
Let the spanning tree $\Gamma\subset\bip\mathscr H$ induce the hypertree $f\colon E\to\ensuremath{\mathbf N}$. The inequality \eqref{eq:hypertree2} is sharp for $E'\subset E$ if and only if $\Gamma\cap(\bip\mathscr H\big|_{E'})$ is a spanning forest in the latter. Given such a $\Gamma$ and $E'$, if we remove $\Gamma\cap(\bip\mathscr H\big|_{E'})$ from $\Gamma$ and replace it with another spanning forest $F$ of $\bip\mathscr H\big|_{E'}$, then the result $\tilde\Gamma$ is another spanning tree of $\bip\mathscr H$. It realizes a hypertree which agrees with $f$ on the set $E\setminus E'$.
\end{lemma}
\begin{proof}
The first assertion is trivial from the proof of Theorem \ref{thm:politop}. For the second, notice that paths in $\Gamma$ and $\tilde\Gamma$ are in a one-to-one correspondence, as follows. Any path $\varphi\subset\Gamma$ has maximal segments that fall within $\bip\mathscr H\big|_{E'}$ and obviously each such segment is in one connected component of $\bip\mathscr H\big|_{E'}$. Hence, we get a new path $\tilde\varphi\subset\tilde\Gamma$ by replacing each segment with the unique connection that exists between its endpoints in $F$. This correspondence of paths has an inverse constructed in the analogous way. As $\varphi$ and $\tilde\varphi$ share the same endpoints, we see that $\tilde\Gamma$ is connected and cycle-free. The final claim of the lemma is of course just stating the obvious.
\end{proof}
Now suppose that the set $E$ of hyperedges in $\mathscr H=(V,E)$ has been ordered and labeled so that
\[e_1<e_2<\cdots<e_{|E|}.\]
Think of $\mathscr H$ (and of $\bip\mathscr H$) as being built step-by-step by adding one hyperedge at a time in the prescribed order. We will use the notation
\[G_k=\bip\mathscr H\big|_{\{e_1,\ldots,e_k\}}\]
for the bipartite graphs of the intermediate stages. In each step, the nullity of the graph may go up. We record this by introducing the \emph{nullity-jump function} of the chosen order,
\[nj(e_k)=n(G_k)-n(G_{k-1}),\]
where $k=1,2,\ldots,|E|$. Obviously, $nj(e)\ge0$ for all $e$ and $\sum_{e\in E}nj(e)=n(\bip\mathscr H)$.
\begin{lemma}\label{lem:moho}
For any order of the hyperedges, the function $g\colon E\to\ensuremath{\mathbf N}$ given by $g(e)=|e|-1-nj(e)$ is a hypertree. It is the unique hypertree that makes the inequality \eqref{eq:hypertree2} true with an equality sign for all subsets $\{\,e_1,\ldots,e_k\,\}$, $k=1,2,\ldots,|E|$.
\end{lemma}
\begin{proof}
Using the order, it is easy to construct a spanning tree that realizes $g$. The $k$'th stage of the construction will be a spanning forest $F_k$ of the bipartite graph $G_k$. All edges of $\bip\mathscr H$ that are adjacent to $e_1$ will be part of $F_1$ (note that $nj(e_1)=0$). Suppose that a forest $F_{k-1}\subset G_{k-1}$, with the valences $g(e_i)+1$ at $e_1,\ldots,e_{k-1}$, respectively, has been defined.
The graph $G_k-\{e_k\}$ consists of $G_{k-1}$ and some isolated points. Suppose that $e_k$ is joined by an edge to $c$ of its connected components. It is easy to see that $nj(e_k)=|e_k|-c$, since the `first edge' of $G_k$ to connect $e_k$ to one of the $c$ components does not increase the nullity, whereas all others after the first increase it by $1$. We define $F_k$ by adding to $F_{k-1}$ a collection of $c=|e_k|-nj(e_k)=g(e_k)+1$ edges, all adjacent to $e_k$ and leading to different components of $G_k-\{e_k\}$. This is a spanning forest of $G_k$. After the last stage, $F_{|E|}$ is a spanning tree of $G_{|E|}=\bip\mathscr H$ which realizes $g$.
From the fact that $F_k$ is a spanning forest of $G_k$ and the first claim in Lemma \ref{lem:erdo} it is immediate that \eqref{eq:hypertree2} is sharp for $g$ and each set $E'=\{\,e_1,\ldots,e_k\,\}$. No other hypertree can have that property for the simple reason that
\[g(e_k)=\mu(\{\,e_1,\ldots,e_k\,\})-\mu(\{\,e_1,\ldots,e_{k-1}\,\})\]
is uniquely determined as the difference of two consecutive right-hand sides.
\end{proof}
\begin{megj}
The greedy algorithm offers another way of defining the hypertree $g$ of the previous lemma: if for all $k=1,\ldots,|E|$, we choose $f(e_k)$ to be the maximal value so that the valences $f(e_i)+1$, $i=1,\ldots,k$, can be realized by a cycle-free subgraph of $G_k$, then it can be shown that $f=g$.
\end{megj}
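Lemma \ref{lem:moho} can be confirmed computationally. The illustrative Python sketch below (continuing our own three-hyperedge example) computes $g(e)=|e|-1-nj(e)$, checks that it is a hypertree in the sense of the polymatroid inequalities, and verifies that \eqref{eq:hypertree2} is sharp on every prefix $\{\,e_1,\ldots,e_k\,\}$.

```python
from itertools import combinations

E = [frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({1, 4})]
V = set().union(*E)

def comp_count(edges):
    comps = []
    for e in edges:
        merged_e, merged_v, rest = [e], set(e), []
        for ce, cv in comps:
            if cv & e:
                merged_e, merged_v = merged_e + ce, merged_v | cv
            else:
                rest.append((ce, cv))
        comps = rest + [(merged_e, merged_v)]
    return len(comps)

def mu(sub):
    chosen = [E[i] for i in sub]
    return len(set().union(*chosen)) - comp_count(chosen) if chosen else 0

def nullity(k):
    if k == 0:
        return 0
    chosen = E[:k]
    return (sum(len(e) for e in chosen) - k -
            len(set().union(*chosen)) + comp_count(chosen))

def is_hypertree(f):
    return (sum(f) == len(V) - 1 and
            all(sum(f[i] for i in sub) <= mu(sub)
                for r in range(1, len(E) + 1)
                for sub in combinations(range(len(E)), r)))

nj = [nullity(k) - nullity(k - 1) for k in range(1, len(E) + 1)]
g = [len(E[k]) - 1 - nj[k] for k in range(len(E))]
assert is_hypertree(g)
for k in range(1, len(E) + 1):  # (eq:hypertree2) is sharp on each prefix
    assert sum(g[:k]) == mu(range(k))
print(g)
```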
We will also refer to the hypertree of Lemma \ref{lem:moho} as the \emph{greedy hypertree} of the given order. The following definition will be fundamental in the next section.
\begin{Def}\label{def:transfer}
Let $\mathscr H=(V,E)$ be a hypergraph, $f\colon E\to\ensuremath{\mathbf N}$ a hypertree and $a,b\in E$ hyperedges. We say that \emph{$f$ is such that a transfer of valence is possible} from $a$ to $b$ if the function obtained from $f$ by reducing $f(a)$ by $1$ and increasing $f(b)$ by $1$ is also a hypertree.
\end{Def}
\begin{lemma}\label{cor:nemeles}
For any non-empty collection $E'\subset E$ of hyperedges, there exists a hypertree $f\colon E\to\ensuremath{\mathbf N}$ so that \eqref{eq:hypertree2} is sharp for $f$ and $E'$. If $E'$ and $f$ are so that \eqref{eq:hypertree2} is not sharp, then $f$ is such that it is possible to transfer valence from some element of $E\setminus E'$ to some element of $E'$.
\end{lemma}
\begin{proof}
Choose any order in which the elements of $E'$ are the smallest and construct its greedy hypertree $g\colon E\to\ensuremath{\mathbf N}$. By Lemma \ref{lem:moho}, $g$ has the required property.
To prove the second claim, we argue indirectly, using Propositions \ref{pro:submodular} and \ref{pro:szoros}. Let $f$ be a hypertree and $E'\subset E$ a subset so that no transfer of valence is possible from any element of $E\setminus E'$ to any element of $E'$. By Theorem \ref{thm:politop}, this implies that for any $a\in E'$ and $b\in E\setminus E'$, there exists a subset $U_{a,b}\subset E$ of hyperedges so that with the set function $\mu$ of \eqref{eq:subm}, we have
\[a\in U_{a,b},\quad b\not\in U_{a,b},\quad\text{and}\quad\sum_{x\in U_{a,b}}f(x)=\mu(U_{a,b}).\]
In other words, $U_{a,b}$ is tight at $f$.
Then so is $E'=\bigcup_{a\in E'}\left(\bigcap_{b\in E\setminus E'}U_{a,b}\right)$, i.e., \eqref{eq:hypertree2} is sharp for $E'$ and $f$.
\end{proof}
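The second claim of the lemma can be watched in action on a small case. In the illustrative Python sketch below (triangle example and identifiers ours), the hypertree $f=(0,1,1)$ makes \eqref{eq:hypertree2} strict for $E'=\{e_1\}$, and accordingly a transfer of valence from $e_2\in E\setminus E'$ to $e_1\in E'$ yields another hypertree.

```python
from itertools import combinations

E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]  # triangle C_3
V = set().union(*E)

def comp_count(edges):
    comps = []
    for e in edges:
        merged_e, merged_v, rest = [e], set(e), []
        for ce, cv in comps:
            if cv & e:
                merged_e, merged_v = merged_e + ce, merged_v | cv
            else:
                rest.append((ce, cv))
        comps = rest + [(merged_e, merged_v)]
    return len(comps)

def mu(sub):
    chosen = [E[i] for i in sub]
    return len(set().union(*chosen)) - comp_count(chosen) if chosen else 0

def is_hypertree(f):
    return (sum(f) == len(V) - 1 and
            all(sum(f[i] for i in sub) <= mu(sub)
                for r in range(1, len(E) + 1)
                for sub in combinations(range(len(E)), r)))

def transfer(f, a, b):
    """Definition def:transfer: move one unit of valence from a to b;
    return the new hypertree, or None if the result is not a hypertree."""
    g = list(f)
    g[a] -= 1
    g[b] += 1
    return tuple(g) if g[a] >= 0 and is_hypertree(g) else None

f = (0, 1, 1)               # a hypertree of the triangle
assert is_hypertree(f)
assert f[0] < mu({0})       # (eq:hypertree2) is strict for E' = {e_1}
print(transfer(f, 1, 0))    # valence moves from e_2 to e_1: prints (1, 0, 1)
```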
\begin{lemma}\label{lem:mohofelett}
Let $\mathscr H=(V,E)$ be a hypergraph with its hyperedges ordered as above. Let $f\colon E\to\ensuremath{\mathbf N}$ be a hypertree and $e\in E$ a hyperedge so that $f(e)>g(e)$, where $g$ is the greedy hypertree of the order. Then there exists a hyperedge $x<e$ so that $f$ is such that a transfer of valence is possible from $e$ to $x$.
\end{lemma}
\begin{proof}
Assume the contrary. Then for all $x<e$, there exists a set $U_{e,x}$ of hyperedges that is tight at $f$, contains $x$, and does not contain $e$. Let $U=\bigcup_{x<e}U_{e,x}$. Then we have $e\not\in U$ and $\{\,x\mid x<e\,\}\subset U$ while, by Proposition \ref{pro:szoros}, $U$ is also tight at $f$.
Now define a new order $<'$ on $E$ by letting the elements of $U$ be smallest while keeping the original order among them. (It does not matter how the rest of $E$ gets ordered.) Let $g'$ be the greedy hypertree of $<'$. By Lemma \ref{lem:moho}, it is such that both $\{\,x\mid x<e\,\}$ and $U$ are tight at $g'$. Fix realizations $\Gamma'$ of $g'$ and $\Gamma$ of $f$. By Lemma \ref{lem:erdo}, these are such that both $\Gamma\cap(\bip\mathscr H\big|_U)$ and $\Gamma'\cap(\bip\mathscr H\big|_U)$ are spanning forests in $\bip\mathscr H\big|_U$. If in $\Gamma$ we replace the former with the latter, the result, again by Lemma \ref{lem:erdo}, is a spanning tree in $\bip\mathscr H$. Now for the hypertree $f'$ realized by this spanning tree we have
\begin{multline*}
\sum_{x\le e}f'(x)=f'(e)+\sum_{x<e}f'(x)=f(e)+\sum_{x<e}g'(x)=f(e)+\sum_{x<e}g(x)\\
>g(e)+\sum_{x<e}g(x)=\sum_{x\le e}g(x)=\mu(\{\,x\mid x\le e\,\}),
\end{multline*}
which contradicts Theorem \ref{thm:politop}.
\end{proof}
\section{Internal and external polynomials}\label{sec:poly}
\subsection{Activities}
Let $\mathscr H=(V,E)$ be a hypergraph so that $\bip\mathscr H$ is connected. Just as in Subsection \ref{oldtutte}, we order
the set $E$ arbitrarily. With regard to a fixed hypertree $f$, we make the following definitions.
\begin{Def}\label{def:activity}
A hyperedge $e\in E$ is \emph{internally active} with respect to the hypertree $f$ if it is not possible to decrease $f(e)$ by $1$ and increase $f$ of a hyperedge smaller than $e$ by $1$ so that another hypertree results.
A hyperedge $e\in E$ is \emph{externally active} with respect to $f$ if it is not possible to increase $f(e)$ by $1$ and to decrease $f$ of a smaller hyperedge by $1$ so that another hypertree results.
\end{Def}
Recall that in Definition \ref{def:transfer} the operation on hypertrees used above was termed a transfer of valence. In this language, a hyperedge is internally active with respect to a hypertree if it cannot transfer valence to smaller hyperedges, and it is externally active if valence cannot be transferred to it from smaller hyperedges. Regarding transfers of valence, we will need the following lemma later.
\begin{lemma}\label{lem:rombusz}
Let $a$, $b$, and $c$ be hyperedges in $\mathscr H=(V,E)$ and let $f$ be a hypertree such that $a$ can transfer valence to $b$ and $b$ can transfer valence to $c$. Then $f$ is also such that $a$ can transfer valence to $c$. In other words, regarding the rhombus of Figure \ref{fig:rombusz} (explained below), if the three lattice points indicated by full dots are hypertrees then so is the one represented by the hollow dot.
\end{lemma}
\begin{proof}
This is immediate from Lemma \ref{lem:discreteconvex} but we think it worthwhile to examine the picture. In $\ensuremath{\mathbf R}^E$, the three-dimensional affine subspace through $f$ spanned by $\{\,u_a,u_{b},u_{c}\,\}$ forms a two-dimensional intersection $Q_0$ with $Q_\mathscr H$. The triangle we labelled with $abc$ in Figure \ref{fig:rombusz} is in fact spanned by the vectors whose $u_a$, $u_{b}$, and $u_{c}$-components are equal to $(f(a)+f(b)+f(c),0,0)$, $(0,f(a)+f(b)+f(c),0)$, and $(0,0,f(a)+f(b)+f(c))$, respectively, whereas their other components are identical to those of $f$.
\begin{figure}[htbp]
\unitlength .8pt
\begin{picture}(180,160)
\put(10,10){\line(1,0){160}}
\put(10,10){\line(3,5){80}}
\put(170,10){\line(-3,5){80}}
\thicklines
\put(80,30){\circle*{6}}
\put(62,60){\circle*{6}}
\put(116,30){\circle*{6}}
\put(98,60){\circle{6}}
\put(80,30){\line(1,0){36}}
\put(80,30){\line(-3,5){18}}
\put(80,30){\line(3,5){16}}
\put(2,5){$a$}
\put(173,5){$b$}
\put(87,148){$c$}
\put(66,26){$f$}
\put(121,26){$f_1$}
\put(46,60){$f_2$}
\put(103,60){$\hat f$}
\end{picture}
\caption{A rhombus of hypertrees.}
\label{fig:rombusz}
\end{figure}
The conditions in the lemma are that $f$, $f_1$, and $f_2$ are in $Q_\mathscr H$ and the conclusion is that $\hat f$ is, too. Now this is obvious from the fact (Theorem \ref{thm:politop}) that $Q_0$ is cut out from the plane by lines parallel to the sides of the triangle.
\end{proof}
It is easy to see that the notions of Definition \ref{def:activity} generalize the ones found in Definition \ref{def:oldactive}. It will be slightly more convenient for us to work with their negations in what follows.
\begin{Def}
Let $\mathscr H=(V,E)$ be a hypergraph so that $\bip\mathscr H$ is connected. With respect to a given hypertree $f\colon E\to\ensuremath{\mathbf N}$ and some order of the hyperedges, let the number of internally inactive hyperedges of $\mathscr H$ be denoted by $\bar\iota(f)$ and the number of externally inactive hyperedges by $\bar\varepsilon(f)$. Then, let the \emph{internal polynomial} and the \emph{external polynomial} of $\mathscr H=(V,E)$ be defined as
\begin{equation}\label{polinomok}
I_\mathscr H(\xi)=\sum_f \xi^{\bar\iota(f)}\quad\text{and}\quad X_\mathscr H(\eta)=\sum_f \eta^{\bar\varepsilon(f)},
\end{equation}
respectively, where both summations are over all hypertrees $f$ in $\mathscr H$.
\end{Def}
These notions generalize the valuations
\begin{equation}\label{eq:regitutte}
\xi^{|V|-1}T_G(1/\xi,1)\quad\text{and}\quad\eta^{|E|-|V|+1}T_G(1,1/\eta),
\end{equation}
respectively, of the classical Tutte polynomial $T_G$ of the graph $G=(V,E)$.
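As a sanity check for these definitions, the brute-force Python sketch below (illustrative; the triangle example and all identifiers are ours) computes both polynomials for $C_3$, whose Tutte polynomial is $T(x,y)=x^2+x+y$. The specializations \eqref{eq:regitutte} give $\xi^2T(1/\xi,1)=1+\xi+\xi^2$ and $\eta\,T(1,1/\eta)=1+2\eta$, and the activity counts computed below reproduce exactly these coefficients.

```python
from collections import Counter
from itertools import combinations, product

E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]  # triangle C_3
V = set().union(*E)

def comp_count(edges):
    comps = []
    for e in edges:
        merged_e, merged_v, rest = [e], set(e), []
        for ce, cv in comps:
            if cv & e:
                merged_e, merged_v = merged_e + ce, merged_v | cv
            else:
                rest.append((ce, cv))
        comps = rest + [(merged_e, merged_v)]
    return len(comps)

def mu(sub):
    chosen = [E[i] for i in sub]
    return len(set().union(*chosen)) - comp_count(chosen) if chosen else 0

def is_hypertree(f):
    return (sum(f) == len(V) - 1 and
            all(sum(f[i] for i in sub) <= mu(sub)
                for r in range(1, len(E) + 1)
                for sub in combinations(range(len(E)), r)))

def can_transfer(f, a, b):
    """Is a transfer of valence possible from a to b at f?"""
    g = list(f)
    g[a] -= 1
    g[b] += 1
    return g[a] >= 0 and is_hypertree(g)

hypertrees = [f for f in product(*(range(len(e)) for e in E)) if is_hypertree(f)]

def ibar(f):  # number of internally inactive hyperedges w.r.t. f
    return sum(any(can_transfer(f, e, x) for x in range(e))
               for e in range(len(E)))

def ebar(f):  # number of externally inactive hyperedges w.r.t. f
    return sum(any(can_transfer(f, x, e) for x in range(e))
               for e in range(len(E)))

I = Counter(ibar(f) for f in hypertrees)  # coefficient of xi^k is I[k]
X = Counter(ebar(f) for f in hypertrees)  # coefficient of eta^k is X[k]
print(dict(I), dict(X))  # I = 1 + xi + xi^2,  X = 1 + 2*eta
```

The order of the hyperedges here is simply the list order $e_1<e_2<e_3$; as proved below (Theorem \ref{thm:independent}), any other order gives the same counts.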
There are two ways of treating hypergraphs whose associated bipartite graphs are disconnected. One is to assign to them the product of the polynomials associated to their connected components. The other is to extend the definition verbatim, which results in the value $0$ for both polynomials because a disconnected graph has no spanning trees and therefore the set of hypertrees is empty. In this paper we take the latter approach. This will hardly matter since we almost always assume $\bip\mathscr H$ to be connected.
Our first order of business, however, is to address well-definedness.
\begin{tetel}\label{thm:independent}
The formulas \eqref{polinomok} for the internal and external polynomials do not depend on the chosen order of the hyperedges.
\end{tetel}
We will need the following statement.
\begin{lemma}\label{lem:nyil}
Let $e_1\ne e_2$ be hyperedges in the hypergraph $\mathscr H=(V,E)$ and let $E_0=E\setminus\{\,e_1,e_2\,\}$. Fix a function $f_0\colon E_0\to\ensuremath{\mathbf N}$ and consider its extensions to $E$ that are hypertrees in $\mathscr H$. Among these, let $f_1$ and $f_2$ be such that $f_1(e_1)>f_2(e_1)$, and let $x\in E_0$ be a hyperedge.
\begin{enumerate}
\item If $f_1$ is such that valence can be transferred from $e_1$ to $x$ then $f_2$ is such that valence can be transferred from $e_2$ to $x$.
\item If $f_1$ is such that valence can be transferred from $x$ to $e_2$ then $f_2$ is such that valence can be transferred from $x$ to $e_1$.
\end{enumerate}
In other words, regarding both `staples' of Figure \ref{fig:nyil}, if the three lattice points indicated by full dots are hypertrees then so is the one represented by the hollow dot.
\end{lemma}
\begin{proof}
The argument is very similar to the proof of Lemma \ref{lem:rombusz}. Examining Figure \ref{fig:nyil}, it is easy to see that if $f_1$, $f_2$, and $g_1$ satisfy all constraints of Theorem \ref{thm:politop}, then so does $\hat g_1$. Likewise, if $f_1$, $f_2$, and $g_2$ are in $Q_\mathscr H$ then so is $\hat g_2$.
\begin{figure}[htbp]
\unitlength .8pt
\begin{picture}(393,160)
\put(10,10){\line(1,0){160}}
\put(10,10){\line(3,5){80}}
\put(170,10){\line(-3,5){80}}
\put(0,5){$x$}
\put(173,5){$e_1$}
\put(85,148){$e_2$}
\put(110,26){$f_1$}
\put(68,98){$f_2$}
\put(55,26){$g_1$}
\put(58,56){$\hat g_1$}
\put(230,10){\line(1,0){160}}
\put(230,10){\line(3,5){80}}
\put(390,10){\line(-3,5){80}}
\put(220,5){$x$}
\put(393,5){$e_1$}
\put(305,148){$e_2$}
\put(330,26){$f_1$}
\put(288,98){$f_2$}
\put(348,56){$g_2$}
\put(322,98){$\hat g_2$}
\thicklines
\put(70,90){\line(3,-5){36}}
\put(70,90){\line(-3,-5){16}}
\put(106,30){\line(-1,0){36}}
\put(70,90){\circle*{6}}
\put(106,30){\circle*{6}}
\put(70,30){\circle*{6}}
\put(52,60){\circle{6}}
\put(290,90){\line(3,-5){36}}
\put(290,90){\line(1,0){33}}
\put(326,30){\line(3,5){18}}
\put(290,90){\circle*{6}}
\put(326,30){\circle*{6}}
\put(344,60){\circle*{6}}
\put(326,90){\circle{6}}
\end{picture}
\caption{Two `staples' formed by hypertrees.}
\label{fig:nyil}
\end{figure}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:independent}]
We will emulate Tutte's original proof, i.e., analyze the effect of changing the relative position in the order of two adjacent hyperedges. Let $a,b\in E$ and assume that $o_1$ is an order on $E$ so that $a<_1b$ with no other hyperedge in between, whereas the order $o_2$ only differs from $o_1$ in that $b<_2a$. The rest of the hyperedges are partitioned into two sets $E_-$ and $E_+$ so that in both orders, elements of $E_-$ are smaller than $a$ and $b$ while elements of $E_+$ are larger than both.
Let $f\colon E\to\ensuremath{\mathbf N}$ be a hypertree. Our goal is to compare the values $\bar\iota_1(f)$ and $\bar\iota_2(f)$ (as well as $\bar\varepsilon_1(f)$ and $\bar\varepsilon_2(f)$) resulting from the two orders. If a hyperedge differs from both $a$ and $b$, then it is easy to check that (in the internal as well as in the external sense) it is active with respect to $f$ in $o_1$ if and only if the same holds in $o_2$. Let us now separate three cases.
\begin{enumerate}[I.]
\item\label{case1} If $f$ is such that no transfer of valence is possible between $a$ and $b$, then the activity statuses of $a$ and $b$ are also unaffected by the change in the order. Hence in such cases $\bar\iota_1(f)=\bar\iota_2(f)$ and $\bar\varepsilon_1(f)=\bar\varepsilon_2(f)$.
\item\label{case2} Next, assume that $f$ is such that valence can be transferred between $a$ and $b$ in both directions. Then in the order $o_1$ the hyperedge $b$ is not active whereas with respect to $o_2$, the hyperedge $a$ is not active. (This holds in both the internal and external senses.) Now according to Lemma \ref{lem:rombusz}, if $a$ is not active in $o_1$ (i.e., there is some $x\in E_-$ so that valence can be transferred from $a$ to $x$ (internal case) or from $x$ to $a$ (external case)), then $b$ is not active in $o_2$ and vice versa. Therefore we again have $\bar\iota_1(f)=\bar\iota_2(f)$ and $\bar\varepsilon_1(f)=\bar\varepsilon_2(f)$.
\item\label{case3} Lastly, let $f$ be such that valence can be transferred between $a$ and $b$ but only in one direction. Before analyzing activities, let us establish that these kinds of hypertrees are partitioned into pairs. Indeed, the line
\[\left\{\,g\colon E\to\ensuremath{\mathbf R}\biggm| g\big|_{E\setminus\{a,b\}}=f\big|_{E\setminus\{a,b\}}, \sum_{e\in E}g(e)=|V|-1\,\right\}\subset\ensuremath{\mathbf R}^E\]
intersects the polytope $Q_\mathscr H$ in a segment\footnote{The endpoints of this segment are $f$ and another lattice point $f^*$. This can be shown using a property of polymatroids called total dual integrality. However we will not need this presently.}. Among the lattice points of the segment, $f$ represents one extreme; let the other extreme be denoted by $f^*$. This is necessarily different from $f$ by our assumption, and if a transfer of valence is possible only from $a$ to $b$ (and not in the other direction) at one of $f$ and $f^*$, then the opposite transfer (and only that) is possible at the other one. It is also clear that $(f^*)^*=f$, so we get the desired pairs. (The lattice points between $f$ and $f^*$ were discussed in case \ref{case2}, whereas case \ref{case1} is when the segment degenerates to a point.)
Without loss of generality we may assume that $f$ is such that $a$ can transfer valence to $b$ and the opposite transfer is possible at $f^*$. Let us first examine the hyperedges $a$ and $b$ themselves. We will start with their internal activities.
\begin{enumerate}
\item Assume that $\bar\iota_1(f)=\bar\iota_2(f)$. We will show that in this case $\bar\iota_1(f^*)=\bar\iota_2(f^*)$ holds, too. As the activity status of $b$ with respect to $f$ is the same in $o_1$ as in $o_2$, a similar property has to hold for $a$. Because $a$ is not active with respect to $f$ in $o_2$, this is only possible if there exists a hyperedge $x\in E_-$ so that $f$ is such that valence can be transferred from $a$ to $x$. By Lemma \ref{lem:nyil}, this implies that $b$ is inactive with respect to $f^*$ in both $o_1$ and $o_2$. Whether $a$ is active with respect to $f^*$ depends on the same thing in both orders: namely, on whether $f^*$ is such that valence can be transferred from $a$ to some element $y\in E_-$.
\item\label{bbb} The only way $\bar\iota_1(f)$ and $\bar\iota_2(f)$ can be different is if $f$ is such that $a$ cannot transfer valence to any element of $E_-$. In such cases $b$ has the same property by Lemma \ref{lem:rombusz} so that $\bar\iota_1(f)=\bar\iota_2(f)-1$ ($b$ is active with respect to $f$ in both orders and $a$ is active only in $o_1$). Examining $f^*$ now, by Lemma \ref{lem:nyil} we see that it has to be such that $b$ cannot transfer valence to any element of $E_-$. This implies, using Lemma \ref{lem:rombusz}, that $a$ has the same property. Therefore we have $\bar\iota_1(f^*)=\bar\iota_2(f^*)+1$ as $a$ is active with respect to $f^*$ in both orders and $b$ is active in $o_2$ only.
\end{enumerate}
External activities can be handled in almost the same way.
\begin{enumerate}[(a')]
\item If $\bar\varepsilon_1(f)=\bar\varepsilon_2(f)$, then (since the activity status of $a$ with respect to $f$ is independent of order) there must be some $x\in E_-$ so that valence can be transferred from $x$ to $b$. Then by Lemma \ref{lem:nyil}, $f^*$ is such that valence can be transferred from $x$ to $a$, making $a$ inactive with respect to $f^*$ in both orders. Since the activity status of $b$ with respect to $f^*$ is the same in both orders, we obtain $\bar\varepsilon_1(f^*)=\bar\varepsilon_2(f^*)$.
\item\label{bbbb} If $\bar\varepsilon_1(f)\ne\bar\varepsilon_2(f)$, that is if $f$ is such that no $x\in E_-$ can transfer valence to $b$, then the same holds for transfers of valence from $x$ to $a$ by Lemma \ref{lem:rombusz}. Therefore $a$ is active with respect to $f$ in both orders, whereas $b$ is active in $o_2$ and inactive in $o_1$. By the preceding paragraph, a similar analysis has to apply to $f^*$, i.e., $b$ is active with respect to $f^*$ in both orders, whereas $a$ is active in $o_1$ and inactive in $o_2$. So we have obtained $\bar\varepsilon_1(f)=\bar\varepsilon_2(f)+1$ and $\bar\varepsilon_1(f^*)=\bar\varepsilon_2(f^*)-1$.
\end{enumerate}
\end{enumerate}
The only hypertrees whose $\bar\iota$ or $\bar\varepsilon$ values actually changed as a result of switching from $o_1$ to $o_2$ were described in the cases (b) and (b') above. We saw that they occur in pairs of the form $\{f,f^*\}$. Now to complete our proof it suffices to show that for such a pair and any hyperedge $y$ different from $a$ and $b$, the activity status
of $y$ with respect to $f$ is the same as with respect to $f^*$. (We saw that the choice between $o_1$ and $o_2$ does not matter for $y$, only for $a$ and $b$. Switching to a new hypertree, on the other hand, could in principle make a big difference.) Indeed, that will imply $\bar\iota_1(f)=\bar\iota_2(f^*)$ and $\bar\iota_2(f)=\bar\iota_1(f^*)$ in case (b), as well as $\bar\varepsilon_1(f)=\bar\varepsilon_2(f^*)$ and $\bar\varepsilon_2(f)=\bar\varepsilon_1(f^*)$ in case (b') so that, regardless of order, the generating functions $I_\mathscr H$ and $X_\mathscr H$ always encode the same information.
As we are about to relate activities with respect to different hypertrees, this last part of the proof is where we will rely most heavily on our assumptions. What we need is essentially Lemma \ref{lem:teglalap} but we chose to spell the argument out without an explicit reference to it. It will be convenient to branch out into several cases again.
\begin{enumerate}[(1)]
\item We will deal with internal activities first. Recall that we are under the assumptions of the case \ref{case3}(b), in particular $f$ and $f^*$ are such that neither $a$ nor $b$ can transfer valence to any element of $E_-$. Let $y$ be a hyperedge which is internally inactive with respect to $f$. Our goal is to show that $y$ is also internally inactive with respect to $f^*$.
\begin{enumerate}[i.]
\item Assume $y\in E_+$. If $f$ is such that $y$ can transfer valence to $a$ or $b$, then Lemma \ref{lem:rombusz} implies that it can definitely transfer to $b$ and then Lemma \ref{lem:nyil} says that $f^*$ is such that $y$ can transfer valence to $a$. Therefore $y$ is inactive with respect to $f^*$.
If $y\in E_+$ but $f$ is such that $y$ cannot transfer valence to $a$ or $b$, then $f^*$ has to have the same property by the usual combination of Lemmas \ref{lem:rombusz} and \ref{lem:nyil}. Because $y$ is inactive with respect to $f$, the hypertree $f$ must be such that $y$ can transfer valence to some $x<y$ with $a\ne x\ne b$. We will show that $f^*$ is also such that $y$ can transfer valence to $x$. Suppose the opposite is true. Then there are sets of hyperedges $U_a\ni a$, $U_b\ni b$, and $U_x\ni x$ so that all three are tight at $f^*$ and neither contains $y$. (Here tightness is in the sense of Definition \ref{def:tight} with regard to the set function $\mu$ of Proposition \ref{pro:submodular}.) By Proposition \ref{pro:szoros}, the union of the three subsets is also tight at $f^*$. Since the union contains both $a$ and $b$, the sum of the $f^*$-values over it agrees with the sum of the $f$-values. Therefore $U_a\cup U_b\cup U_x$ is also tight at $f$, which contradicts our assumption that $f$ is such that $y$ (which is not in the union) can transfer valence to $x$ (which is).
\item Let now $y\in E_-$. Then of course there is a hyperedge $x\in E_-$ so that $f$ is such that $y$ can transfer valence to $x$. We will show that $f^*$ is also such that $y$ can transfer valence to $x$. Assuming the contrary, we find sets of hyperedges $U_y\not\ni y$, $U_a\not\ni a$, and $U_b\not\ni b$ which are tight at $f^*$ and all contain $x$. Their intersection is also tight at $f^*$ by Proposition \ref{pro:szoros}. As $U_a\cap U_b\cap U_y$ contains neither $a$ nor $b$, it is also tight at $f$. But that is a contradiction with the transfer of valence from $y\not\in U_a\cap U_b\cap U_y$ to $x\in U_a\cap U_b\cap U_y$ which is possible at $f$.
\end{enumerate}
\item In the external case, the assumptions of \ref{case3}(b') were that $f$ and $f^*$ are both such that no transfer of valence is possible from elements of $E_-$ to $a$ or to $b$. In such a situation, let $y$ be externally inactive with respect to $f$. We wish to prove that $y$ is also externally inactive with respect to $f^*$.
\begin{enumerate}[i'.]
\item If $y\in E_+$, then it may be that $f$ is such that $a$ or $b$ can transfer valence to $y$. If $b$ can, then so can $a$ by Lemma \ref{lem:rombusz}. Thus in either case we can apply Lemma \ref{lem:nyil} to conclude that $f^*$ is such that $b$ can transfer valence to $y$.
Suppose now that $y\in E_+$ but $f$ (and consequently $f^*$) is such that neither $a$ nor $b$ can transfer valence to $y$. Then some other hyperedge $x<y$ can and we will prove that the same transfer is also possible at $f^*$. If not, then there are sets of hyperedges $U_a\not\ni a$, $U_b\not\ni b$, and $U_x\not\ni x$, all containing $y$, that are tight at $f^*$. Then their intersection has the same property by Proposition \ref{pro:szoros} and furthermore, as it contains neither $a$ nor $b$, it is tight at $f$ as well. That contradicts the assumption that at $f$, a transfer of valence is possible from outside of the set (from $x$) to one of its elements (namely $y$).
\item Assuming $y\in E_-$, its external inactivity implies the existence of another hyperedge $x\in E_-$ so that $f$ is such that $x$ can transfer valence to $y$. If the same transfer is not possible at $f^*$, then there are sets of hyperedges $U_y\ni y$, $U_a\ni a$, and $U_b\ni b$ that are tight at $f^*$, none of which contains $x$. Their union is then also tight at $f^*$ by Proposition \ref{pro:szoros} and, as it contains both $a$ and $b$, it is tight at $f$ as well, which yields the usual contradiction.
\end{enumerate}
\end{enumerate}
This completes the proof.
\end{proof}
\begin{pelda}\label{ex:harom}
We revisit the hypertrees that were discussed in Example \ref{ex:ketto}.
They are shown in Figure \ref{fig:ujfak} with an actual spanning tree realization for each. For the particular orders $a<b<c$ and $p<q<r<s$, respectively, we also indicated the internal and external activity status of each hyperedge with respect to each hypertree, as well as the resulting $(\bar\iota,\bar\varepsilon)$ values. Thus, we find that
\begin{equation}\label{eq:egyik}
I_{\mathscr G_0}(\xi)=1+3\xi+3\xi^2\quad\text{and}\quad X_{\mathscr G_0}(\eta)=1+3\eta+3\eta^2,
\end{equation}
whereas
\begin{equation}\label{eq:masik}
I_{\mathscr G_1}(\xi)=1+3\xi+3\xi^2\quad\text{and}\quad X_{\mathscr G_1}(\eta)=1+2\eta+3\eta^2+\eta^3.
\end{equation}
These values are in line with Propositions \ref{pro:const} and \ref{pro:rang} below, as well as with Conjecture \ref{conj:dual}.
Up to isomorphism there is only one order on the three-element set $V_0$. The four-element set $V_1$, on the other hand, has four essentially different orders depending on the position of $q$. The reader may check that the other three give rise to the same interior and exterior polynomials as in \eqref{eq:masik}.
Note the lack of similarity between our polynomials and the one shown in Example \ref{ex:egy}. Indeed, the author is not aware of any formula relating the Tutte polynomial $T_G(x,y)$ of a bipartite graph $G$ and the interior and exterior polynomials of its induced hypergraphs.
\begin{figure}[htbp]
\labellist
\pinlabel $(1,1)$ at 170 630
\pinlabel $(2,0)$ at 585 630
\pinlabel $(1,2)$ at 1000 630
\pinlabel $(2,2)$ at 1415 630
\pinlabel $(2,1)$ at 1830 630
\pinlabel $(0,2)$ at 2245 630
\pinlabel $(1,1)$ at 2660 630
\pinlabel $(1,2)$ at 170 80
\pinlabel $(0,3)$ at 585 80
\pinlabel $(2,1)$ at 1000 80
\pinlabel $(2,1)$ at 1415 80
\pinlabel $(1,2)$ at 1830 80
\pinlabel $(1,2)$ at 2245 80
\pinlabel $(2,0)$ at 2660 80
\endlabellist
\centering
\includegraphics[width=\linewidth]{ujfak}
\caption{Hypertrees in a pair of abstract dual hypergraphs. Hollow shapes are internally inactive and full ones are internally active; circles are externally inactive while squares are externally active.}
\label{fig:ujfak}
\end{figure}
\end{pelda}
\begin{megj}
Unfortunately, for a hypergraph that is not a graph, the two-variable polynomial $\sum_f \xi^{\bar\iota(f)}\eta^{\bar\varepsilon(f)}$ (cf.\ Definition \ref{def:tuttepoly}) does depend on the choice of order. For instance, for the hypergraph $\mathscr G_1$ of the previous example, the order that we used there gives rise to the two-variable generating function $\eta^3+3\xi\eta^2+2\xi^2\eta+\xi^2$ (this is directly read off of the bottom row of Figure \ref{fig:ujfak}), whereas the order $p<r<s<q$ gives $\xi\eta^3+\xi^2\eta^2+\xi\eta^2+\eta^2+2\xi^2\eta+\xi$. (Note how the two polynomials do coincide after setting $\xi=1$ or $\eta=1$.)
\end{megj}
\subsection{Extension to polymatroids}\label{ssec:polypoly}
The interior and exterior polynomials are constructed from the hypertree polytope. The same procedure can be carried out after replacing $Q_\mathscr H$ with the set of bases $B_\mu$ of an integer extended polymatroid. Because of Lemma \ref{lem:nincstobbbazis} and the fact that integer polymatroids can always be defined by integer-valued non-decreasing submodular set functions, we immediately obtain invariants of integer polymatroids as well. In this subsection we outline the minimal modifications that are needed in our arguments to get the generalization.
Let $S$ be a finite set and $\mu\colon\mathscr P(S)\to\ensuremath{\mathbf R}$ an integer-valued submodular set function, with associated base polytope $B_\mu\subset EP_\mu$. We say that the base $\mathbf x\in B_\mu\cap\ensuremath{\mathbf Z}^S$ is such that a \emph{transfer} is possible from $s_1\in S$ to $s_2\in S$ if by decreasing the $s_1$-component of $\mathbf x$ by $1$ and increasing its $s_2$-component by $1$, we get another base.
Now order $S$ arbitrarily. Call an element $s\in S$ \emph{internally active} with respect to the base $\mathbf x\in B_\mu\cap\ensuremath{\mathbf Z}^S$ if $\mathbf x$ is \emph{not} such that a transfer is possible from $s$ to a smaller element of $S$. We say that $s$ is \emph{externally active} with respect to $\mathbf x$ if $\mathbf x$ is not such that a transfer is possible to $s$ from a smaller element of $S$.
For $\mathbf x\in B_\mu\cap\ensuremath{\mathbf Z}^S$, define $\bar\iota(\mathbf x)$ to be the number of elements of $S$ that are not internally active with respect to $\mathbf x$. Similarly, let $\bar\varepsilon(\mathbf x)$ denote the number of elements of $S$ that are not externally active with respect to $\mathbf x$.
\begin{Def}\label{def:polym}
Let
\[I_\mu(\xi)=\sum_{\mathbf x\in B_\mu\cap\ensuremath{\mathbf Z}^S}\xi^{\bar\iota(\mathbf x)}\quad\text{and}\quad X_\mu(\eta)=\sum_{\mathbf x\in B_\mu\cap\ensuremath{\mathbf Z}^S}\eta^{\bar\varepsilon(\mathbf x)}\]
and call these quantities the \emph{interior polynomial}, respectively the \emph{exterior polynomial} of the submodular set function $\mu$.
\end{Def}
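To make Definition \ref{def:polym} concrete, here is a small brute-force sketch. It is not part of the original text and all function names are ours; it enumerates the integer bases cut out by a submodular set function, computes the interior and exterior polynomials under a chosen order, and checks order-independence on the rank function of the uniform matroid $U_{2,3}$, i.e., $\mu(A)=\min\{|A|,2\}$.

```python
from itertools import combinations, permutations, product

def bases(S, mu):
    """Integer points x >= 0 with x(S) = mu(S) and x(A) <= mu(A) for all A."""
    total = mu(S)
    pts = []
    for x in product(range(total + 1), repeat=len(S)):
        if sum(x) == total and all(
                sum(x[S.index(s)] for s in A) <= mu(list(A))
                for k in range(1, len(S))
                for A in combinations(S, k)):
            pts.append(dict(zip(S, x)))
    return pts

def transfer(x, s1, s2, base_list):
    """Is the result of moving one unit from s1 to s2 again a base?"""
    y = dict(x); y[s1] -= 1; y[s2] += 1
    return y[s1] >= 0 and y in base_list

def polynomials(S, mu, order):
    """Coefficient lists of I_mu and X_mu for the given order on S."""
    B = bases(S, mu)
    pos = {s: i for i, s in enumerate(order)}
    I = [0] * (len(S) + 1)
    X = [0] * (len(S) + 1)
    for x in B:
        # s is internally inactive iff x can transfer from s to a smaller
        # element, and externally inactive iff x can transfer to s from one
        I[sum(1 for s in S
              if any(transfer(x, s, t, B) for t in S if pos[t] < pos[s]))] += 1
        X[sum(1 for s in S
              if any(transfer(x, t, s, B) for t in S if pos[t] < pos[s]))] += 1
    return I, X

# rank function of the uniform matroid U_{2,3}
S = ['a', 'b', 'c']
mu = lambda A: min(len(A), 2)
results = {tuple(map(tuple, polynomials(S, mu, list(p))))
           for p in permutations(S)}
print(sorted(results))  # a single pair of coefficient lists: order-independence
```

For this example the bases are the three $0$--$1$ vectors with two ones, and every order yields $I_\mu(\xi)=1+\xi+\xi^2$ and $X_\mu(\eta)=1+2\eta$.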
The fact that these polynomials do not depend on the order that was used to write them can be shown just as in the proof of Theorem \ref{thm:independent}. Indeed, the first half of that proof depended on the elementary lemmas \ref{lem:rombusz} (rhombus lemma) and \ref{lem:nyil} (staple lemma) where we only used the fact that $Q_\mathscr H$ is cut out by placing an upper bound on each partial sum of the components of its elements. We also used the triviality that lines intersect bounded convex sets in segments. Then in the second half of the argument we relied on Proposition \ref{pro:szoros} and the fact that our upper bounds are values of a tight set function.
To illustrate the geometry underlying our polynomials, we show in Figure \ref{fig:szakoca} the hypothetical base polytope of a polymatroid $P_\mu$ with a ground set $S=\{\,a,b,c,d\,\}$ of four elements. (We reused the diagram from the proof of Lemma \ref{lem:teglalap} but placed it in a different frame of reference.) It is situated in a tetrahedron with vertices $\mathbf a(\mu(S),0,0,0)$, $\mathbf b(0,\mu(S),0,0)$, $\mathbf c(0,0,\mu(S),0)$, and $\mathbf d(0,0,0,\mu(S))$. If we set the order $a<b<c<d$, then the marked vertex represents the only base with $\bar\iota=0$; the other lattice points along the six thickened edges are the ones with $\bar\iota=1$; the rest of the lattice points that are visible (along a total of seven faces) in the view we used are those with $\bar\iota=2$; finally the invisible lattice points have $\bar\iota=3$.
\begin{figure}[htbp]
\labellist
\small
\pinlabel $\mathbf a$ at 610 660
\pinlabel $\mathbf b$ at -30 10
\pinlabel $\mathbf c$ at 1170 320
\pinlabel $\mathbf d$ at 450 980
\endlabellist
\centering
\includegraphics[width=2.5in]{szakoca}
\caption{The geometry of the interior polynomial.}
\label{fig:szakoca}
\end{figure}
A similar picture applies to the exterior polynomial. It would be very interesting to see whether invariants of arbitrary (i.e., not necessarily integer) extended polymatroids can be defined in a way analogous to Definition \ref{def:polym}, but replacing counts of lattice points along certain faces by taking the volumes of those faces.
It may also be worthwhile to investigate whether there is a wider class of set functions (or polytopes) for which interior and exterior polynomials are well defined. Already in the set function case, some assumption is necessary, as the next example shows.
\begin{pelda}
Consider the tetrahedron
\[Q=\conv\{\,(1,1,0,0),(1,0,1,0),(1,0,0,1),(0,0,1,1)\,\}\subset\ensuremath{\mathbf R}^4_{xyzt}.\]
The four given points are exactly the integer solutions of the equation $x+y+z+t=2$ and the linear inequalities
\[\begin{array}{rcl@{\hspace{.5in}}rcl@{\hspace{.5in}}rcl}
x&\le&1;&x+y&\le&2;&y+z+t&\le&2;\\
y&\le&1;&x+z&\le&2;&x+z+t&\le&2;\\
z&\le&1;&x+t&\le&2;&x+y+t&\le&2;\\
t&\le&1;&y+z&\le&1;&x+y+z&\le&2.\\
&&&y+t&\le&1;&&&\\
&&&z+t&\le&2;&&&
\end{array}\]
Furthermore, each of the fourteen inequalities is sharp for at least one of the four points. The right-hand sides
\emph{do not} give a tight set function. For example, the set of the $y$ and $z$ coordinates and the set of the $y$ and $t$ coordinates are both tight at the point $(1,1,0,0)$, yet the union of the two sets is not tight at the same point. We also see that Lemma \ref{lem:teglalap} is violated: the face along which the sum of the $y$ and $z$ (or the $y$ and $t$) coordinates takes its maximum is a triangle instead of a rectangle.
From the order $x<y<z<t$, we obtain the interior and exterior polynomials $1+2\xi+\xi^2$ and $1+2\eta+\eta^2$. If we use $y<z<t<x$ instead, the interior polynomial becomes $2+2\xi^2$. For $x<t<z<y$, the exterior polynomial is $2+2\eta^2$.
\end{pelda}
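The polynomial computations of this example are easy to confirm by brute force. The following sketch is our own and not part of the text; it encodes the four points and the activity rules directly, with orders given as strings of coordinate names.

```python
# The four integer points of the tetrahedron Q, with coordinates (x, y, z, t).
Q = [(1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1), (0, 0, 1, 1)]
COORDS = 'xyzt'

def can_transfer(p, i, j):
    """Is the result of moving one unit from coordinate i to coordinate j in Q?"""
    q = list(p); q[i] -= 1; q[j] += 1
    return tuple(q) in Q

def interior(order):
    """Coefficient list of the interior polynomial for the given order."""
    pos = {c: k for k, c in enumerate(order)}
    coeffs = [0] * 5
    for p in Q:
        # a coordinate is internally inactive iff it can transfer to a smaller one
        inactive = sum(1 for i, ci in enumerate(COORDS)
                       if any(can_transfer(p, i, j)
                              for j, cj in enumerate(COORDS) if pos[cj] < pos[ci]))
        coeffs[inactive] += 1
    return coeffs

def exterior(order):
    """Coefficient list of the exterior polynomial for the given order."""
    pos = {c: k for k, c in enumerate(order)}
    coeffs = [0] * 5
    for p in Q:
        # a coordinate is externally inactive iff a smaller one can transfer to it
        inactive = sum(1 for i, ci in enumerate(COORDS)
                       if any(can_transfer(p, j, i)
                              for j, cj in enumerate(COORDS) if pos[cj] < pos[ci]))
        coeffs[inactive] += 1
    return coeffs

print(interior('xyzt'))  # 1 + 2*xi + xi^2
print(interior('yztx'))  # 2 + 2*xi^2
print(exterior('xtzy'))  # 2 + 2*eta^2
```

The output reproduces the order-dependence claimed in the example.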
\section{Properties}\label{sec:prop}
\subsection{Low-order terms}
We first make an observation on the degrees of the interior and exterior polynomials and then turn our attention to some individual coefficients.
\begin{all}
For a hypergraph $\mathscr H=(V,E)$, the degree of its interior polynomial is at most $\min\{\,|E|,|V|\,\}-1$, while the degree of its exterior polynomial is at most $|E|-1$.
\end{all}
\begin{proof}
That $|E|-1$ is an upper bound for both degrees is obvious from the observation that the smallest hyperedge in an order is both internally and externally active with respect to any hypertree.
As to $\deg I_\mathscr H\le|V|-1$, note that in order for a hyperedge $e$ to be internally inactive with respect to a hypertree $f$, we have to have $f(e)\ge1$. This combined with \eqref{eq:hypertree3} gives the result.
\end{proof}
For the constant term in the exterior polynomial and the two lowest-order coefficients in the interior polynomial, the following two results offer a more direct way of showing their order-independence.
\begin{all}\label{pro:const}
Both the interior and exterior polynomials of the hypergraph $\mathscr H=(V,E)$ have $1$ for constant term.
\end{all}
\begin{proof}
Let us fix an arbitrary order on $E$ and denote the greedy hypertree of Lemma \ref{lem:moho} by $g$. We claim that $g$ is the unique hypertree with respect to which (and the given order) every hyperedge is internally active. Comparing the assertions of Lemma \ref{lem:moho} and Theorem \ref{thm:politop} with the definition of internal activity, it is clear that $g$ is indeed one such hypertree.
The fact that there are no others follows easily either from Lemma \ref{cor:nemeles} or from Lemma \ref{lem:mohofelett}. Yet it may be interesting to see an argument that does not rely on abstract properties of submodular functions.
To this end, let $f\colon E\to\ensuremath{\mathbf N}$ be a hypertree different from $g$. We need to show that there is an internally inactive hyperedge with respect to it. We will make use of notation from subsection \ref{ssec:politop}, in particular the sequence of forests $F_1\subset F_2\subset\cdots\subset F_{|E|}$ that was constructed in the proof of Lemma \ref{lem:moho}. Let $e_k$ be the smallest hyperedge in the order so that $f(e_k)\ne g(e_k)$, which of course implies $f(e_k)<g(e_k)$. By Lemma \ref{lem:erdo} we may choose a realization $\Gamma$ for $f$ so that $\Gamma\cap G_{k-1}=F_{k-1}$.
There is at least one connected component of $G_k-\{e_k\}$ which is not connected to $e_k$ by any edge in $\Gamma$, even though there is an edge $\alpha$ of $\bip\mathscr H$ between $e_k$ and this component (we can take for example the one which was selected into $F_k$). Adding $\alpha$ to $\Gamma$ creates a cycle which has to go through a hyperedge $e_l$ with $l>k$. By removing from $\Gamma$ an edge of the cycle adjacent to $e_l$ and replacing it with $\alpha$, we have created a new spanning tree for $\bip\mathscr H$. It induces a hypertree which only differs from $f$ at $e_l$ (where it is one smaller than $f$) and $e_k$ (where it is one bigger). This shows that the hyperedge $e_l$ is internally inactive with respect to $f$.
In the case of the exterior polynomial, the unique hypertree without an (externally) inactive hyperedge is again as in Lemma \ref{lem:moho}, but constructed using the reverse order. The rest of the proof can be carried out just as above.
\end{proof}
\begin{tetel}\label{pro:rang}
For any hypergraph $\mathscr H=(V,E)$, the coefficient of the linear term in the interior polynomial $I_\mathscr H$ is the nullity (first Betti number) $n(\bip\mathscr H)$ of the bipartite graph $\bip\mathscr H$.
\end{tetel}
\begin{proof}
We will extend the analysis carried out in the proofs of Lemma \ref{lem:moho} and Proposition \ref{pro:const} a little further. After fixing an order on $E$, we are going to construct $n(\bip\mathscr H)$ hypertrees. Namely, if $e$ is a hyperedge, then we will associate to it $nj(e)$ hypertrees as follows.
Recall the greedy hypertree $g$ of Lemma \ref{lem:moho}, with $g(e)=|e|-1-nj(e)$, and fix one of its realizations.
If $e$ is a hyperedge that has $nj(e)>0$ with respect to the order, then add to the realization one more edge adjacent to $e$. This creates a cycle which goes through another hyperedge $e'$. Remove an edge of this cycle adjacent to $e'$ to get a new spanning tree. Its induced hypertree is the result of a transfer of valence from $e'$ to $e$. Then add another edge and make another transfer and so on until all edges adjacent to $e$ are used up. The hyperedges that we transfer valence from are all smaller than $e$ because of Lemma \ref{lem:moho} and Theorem \ref{thm:politop}.
The argument in the previous paragraph shows that for all $1\le i\le nj(e)$, there is an $i$-element multiset of hyperedges smaller than $e$ so that the result of reducing their associated $g$-values by $1$ and increasing $g(e)$ to $g(e)+i$ is a hypertree in $\mathscr H$. Now define $f_{e,i}\colon E\to\ensuremath{\mathbf N}$ ($e\in E$, $1\le i\le nj(e)$) to be the hypertree among these so that its associated multiset $M_{e,i}$ is largest in reverse lexicographical order, i.e., if $e_1<\cdots<e_k$ are all the hyperedges smaller than $e$, then $M_{e,i}$ has the highest possible multiplicity at $e_k$; then among those the highest possible multiplicity at $e_{k-1}$, and so on.
We claim that the $f_{e,i}$ are exactly those hypertrees in $\mathscr H$ that have a unique internally inactive hyperedge in the given order.
It is easy to see that the $f_{e,i}$ do have this property. Namely, $e$ is the unique hyperedge that is not internally active with respect to $f_{e,i}$. Indeed, if $e'>e$ then $e'$ is internally active with respect to $f_{e,i}$ because $\sum_{x<e'}f_{e,i}(x)=\sum_{x<e'}g(x)$ is already at the largest value allowed by \eqref{eq:hypertree2}. If $e'<e$, then it is internally active because a downward transfer of valence from $e'$ would contradict the way $M_{e,i}$ was chosen. Finally, $e$ itself is not active because by Lemma \ref{lem:discreteconvex} applied to $f_1=g$ and $f_2=f_{e,i}$, it can transfer valence to any element of $M_{e,i}$.
Let now $f$ be a hypertree in $\mathscr H$ so that it has a unique internally inactive hyperedge $e$. Our goal is to show that $f$ is one of the $f_{e,i}$.
First we claim that \eqref{eq:hypertree2} is sharp for $f$ and the set $E'=\{\,x\in E\mid x\le e'\,\}$ for all $e'\ge e$: indeed, if it were not, then by Lemma \ref{cor:nemeles} there would have to be a hyperedge larger than $e'$ (and hence different from $e$) that is internally inactive with respect to $f$. Because of the similar property of $g$ stated in Lemma \ref{lem:moho}, we see that $f$ agrees with $g$ (and hence with $f_{e,i}$ for any $i$) at all hyperedges larger than $e$. It also follows that $f(e)\ge g(e)$ and in fact $f(e)>g(e)$, because otherwise $e$ could not be inactive.
Next, we note that if $e'$ is a hyperedge smaller than $e$, then $f(e')\le g(e')$ because if this was not the case then Lemma \ref{lem:mohofelett} would imply that $e'$ is internally inactive. So far we have shown that $f$ is obtained from $g$ by transferring valence to $e$ from a (non-empty) multiset of hyperedges that are smaller than $e$.
Assume now that $f\ne f_{e,i}$, where we set $i=f(e)-g(e)$. Because of the way $f_{e,i}$ was constructed, the largest hyperedge $e'$ where the two hypertrees differ satisfies $e'<e$ and $f(e')>f_{e,i}(e')$. We are going to argue that $e'$ is internally inactive with respect to $f$ by showing that $f$ is such that $e'$ can transfer valence to at least one element of the set $S=\{\,x\in E\mid f(x)<f_{e,i}(x)\,\}$. (Note that $S$ is non-empty because $\sum_{y\in E}f(y)=\sum_{y\in E}f_{e,i}(y)$, and also that its elements are smaller than $e'$.)
Suppose again that the opposite is true. Then by the usual argument based on Proposition \ref{pro:szoros}, there exists a set $U$ of hyperedges that is tight at $f$, contains $S$, and does not contain $e'$. Over $E\setminus U$, the sum of $f$-values is higher than the sum of $f_{e,i}$-values; therefore over $U$, the sum of $f_{e,i}$-values is higher. But this means that $f_{e,i}$ and $U$ contradict the inequality \eqref{eq:hypertree2} because the sum of $f$-values over $U$ is already $\mu(U)$, where $\mu$ is as in equation \eqref{eq:subm}.
Therefore $e'$ is internally inactive, but that contradicts our assumption on $f$. The only possible conclusion, then, is $f=f_{e,i}$.
\end{proof}
\begin{megj}
Suppose that in the proof above, we change the definition of $f_{e,i}$ by requiring that $M_{e,i}$ have the lowest possible multiplicity at $e_1$; then, among those choices, the lowest possible multiplicity at $e_2$; and so on. A similar argument reveals that this, too, is a hypertree in which $e$ is the unique internally inactive hyperedge. Therefore the two descriptions define the same hypertree.
\end{megj}
It is not hard to see (cf.\ \cite[Section 44.6.c]{sch}) that Proposition \ref{pro:const} generalizes to arbitrary integer extended polymatroids, i.e., interior and exterior polynomials always have a constant term of $1$. Theorem \ref{pro:rang}, in turn, offers a way of defining the nullity of an integer extended polymatroid via its interior polynomial.
In the classical case, Proposition \ref{pro:const} translates to a well known property of the Tutte polynomial. Theorem \ref{pro:rang}, however, reveals (to the best of the author's knowledge) previously undiscovered information.
\begin{kov}
Let $G=(V,E)$ be a connected graph. By summing the coefficients of its Tutte polynomial $T_G(x,y)$ in front of terms that contain $x^{|V|-2}$, we obtain the nullity of $G$. In other words, under any order, the number of spanning trees that contain exactly one internally inactive edge is the first Betti number of $G$ as a one-dimensional cell complex.
\end{kov}
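For readers who wish to verify the corollary experimentally, here is a brute-force sketch (our own, not from the text). It uses the equivalent formulation that an edge of a spanning tree is internally inactive if and only if a smaller non-tree edge can replace it in the tree, and checks the count against the nullity $|E|-|V|+1=3$ for the complete graph $K_4$.

```python
from itertools import combinations

def is_spanning_tree(n, edges):
    """Does the edge list connect all n vertices using exactly n - 1 edges?"""
    if len(edges) != n - 1:
        return False
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

def inactive_count(n, all_edges, tree):
    """Edges of the tree that a smaller non-tree edge can replace."""
    cnt = 0
    for i, e in enumerate(all_edges):
        if e not in tree:
            continue
        rest = [t for t in tree if t != e]
        if any(is_spanning_tree(n, rest + [f])
               for f in all_edges[:i] if f not in tree):
            cnt += 1
    return cnt

n = 4
K4 = [(u, v) for u in range(n) for v in range(u + 1, n)]  # the order e_1 < ... < e_6
trees = [list(T) for T in combinations(K4, n - 1) if is_spanning_tree(n, list(T))]
nullity = len(K4) - n + 1
ones = sum(1 for T in trees if inactive_count(n, K4, T) == 1)
print(len(trees), nullity, ones)  # 16 spanning trees; the last two numbers agree
```

This matches $T_{K_4}(x,y)=x^3+3x^2+2x+4xy+2y+3y^2+y^3$, whose only term containing $x^{|V|-2}=x^2$ has coefficient $3$.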
\subsection{Product formulas}
Let us now discuss situations when the interior and exterior polynomials behave multiplicatively. Our first claim is obvious.
\begin{lemma}\label{lem:farkinca}
From $\mathscr H=(V,E)$, construct another hypergraph $\mathscr H'$ by
\begin{enumerate}[(a)]
\item adding a singleton hyperedge $e'=\{v\}$ to $E$ for some $v\in V$;
\item adding a new vertex $v'$ to $V$ and making it part of exactly one hyperedge $e\in E$.
\end{enumerate}
Then $I_{\mathscr H'}=I_{\mathscr H}$ and $X_{\mathscr H'}=X_{\mathscr H}$.
\end{lemma}
\begin{proof}
In both cases, new and old hypertrees are in an obvious bijection. In the first, new hypertrees are exactly the extensions of old ones by assigning $0$ to $e'$. The new hyperedge $e'$ is both internally and externally active with respect to all hypertrees regardless of the order used.
In the second case, increase $f(e)$ by $1$ in each hypertree $f$ of $\mathscr H$ to get the collection of hypertrees in $\mathscr H'$. With respect to any order, this correspondence preserves $\bar\iota$ and $\bar\varepsilon$ values.
\end{proof}
The main result of this subsection examines the picture when two disjoint bipartite graphs are joined by identifying two of their edges.
\begin{tetel}\label{thm:edgesum}
Let $\mathscr H_1=(V_1,E_1)$ and $\mathscr H_2=(V_2,E_2)$ be hypergraphs so that $V_1\cap V_2=\{v\}$. Suppose that $e_1\in E_1$ and $e_2\in E_2$ both contain $v$ and form the hypergraph
\[\mathscr H=(V,E)=\big(V_1\cup V_2,(E_1\setminus\{e_1\})\cup(E_2\setminus\{e_2\})\cup\{e\}\big)\]
by merging $e_1$ and $e_2$
into a single hyperedge $e=e_1\cup e_2$. For hypertrees $f_1$ in $\mathscr H_1$ and $f_2$ in $\mathscr H_2$, define the function $f_1\#f_2\colon E\to\ensuremath{\mathbf N}$ by
\[(f_1\#f_2)(e)=f_1(e_1)+f_2(e_2)\]
and by otherwise letting $(f_1\#f_2)\big|_{E_i\setminus\{e_i\}}=f_i$, $i=1,2$. Then $\#$ defines a bijection
\[(Q_{\mathscr H_1}\cap\ensuremath{\mathbf Z}^{E_1})\times(Q_{\mathscr H_2}\cap\ensuremath{\mathbf Z}^{E_2})\cong Q_{\mathscr H}\cap\ensuremath{\mathbf Z}^{E}.\] Consequently,
\[I_{\mathscr H}=I_{\mathscr H_1}I_{\mathscr H_2}\quad\text{and}\quad X_{\mathscr H}=X_{\mathscr H_1}X_{\mathscr H_2}.\]
\end{tetel}
\begin{proof}
For the claim on the sets of hypertrees, choose arbitrary elements $f_i\in Q_{\mathscr H_i}\cap\ensuremath{\mathbf Z}^{E_i}$ and realize them with spanning trees so that, for $i=1,2$, the tree realizing $f_i$ contains the edge connecting $e_i$ and $v$. This can be done by Lemma \ref{lem:anchor}. After merging $e_1$ and $e_2$, the union of the two trees becomes a spanning tree of $\bip\mathscr H$ realizing $f_1\#f_2$, which is therefore a hypertree.
Conversely, if $f$ is a hypertree in $\mathscr H$ then we may realize it with a spanning tree that contains the edge between $e$ and $v$. This can be separated into spanning trees of $\bip\mathscr H_1$ and of $\bip\mathscr H_2$ which induce hypertrees in $\mathscr H_1$ and $\mathscr H_2$, respectively. It is easy to see that this defines an inverse to the correspondence $(f_1,f_2)\mapsto f_1\#f_2$.
To prove the claim on the polynomials, we make the following observation. If the hypertree $f=f_1\#f_2$ is such that valence can be transferred from $a_1\in E_1\setminus\{e_1\}$ to $a_2\in E_2\setminus\{e_2\}$, then
\begin{enumerate}[(a)]
\item\label{belso} $f_1$ is such that valence can be transferred from $a_1$ to $e_1$ and
\item\label{kulso} $f_2$ is such that valence can be transferred from $e_2$ to $a_2$.
\end{enumerate}
Indeed by the above, the result $f'$ of the assumed transfer of valence has a decomposition $f'=f'_1\#f'_2$ where (since the entries in $f_i$ and $f'_i$ share the same sum) $f'_1$ is a hypertree that shows the truth of \eqref{belso} and the existence of $f'_2$ proves \eqref{kulso}.
Let us now fix an order on $E$ in which $e$ is smallest and use its restrictions to order $E_1$ and $E_2$ (so that $e_i$ becomes the smallest element in $E_i$). Then, we claim that for any pair of hypertrees $f_i$ in $\mathscr H_i$, we have
\begin{equation}\label{eq:addsup}
\bar\iota(f_1\#f_2)=\bar\iota(f_1)+\bar\iota(f_2)\quad\text{and}\quad\bar\varepsilon(f_1\#f_2)=\bar\varepsilon(f_1)+\bar\varepsilon(f_2)
\end{equation}
because, both in the internal and in the external sense, the set of inactive hyperedges for $f_1\#f_2$ is the union of the sets of inactive hyperedges for $f_1$ and for $f_2$. (As $e$, $e_1$, and $e_2$ are smallest, they cannot be inactive with respect to any hypertree.) The backward inclusion is obvious because if, say, $f'_1$ results from $f_1$ by a single transfer of valence, then $f'_1\#f_2$ and $f_1\#f_2$ are related by essentially the same transfer. The forward inclusion follows equally easily with the help of \eqref{belso} and \eqref{kulso} above.
Comparing \eqref{eq:addsup} with the definitions of $I$ and $X$ completes the proof.
\end{proof}
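The theorem can be tested on small examples by enumerating hypertrees directly from spanning trees of the bipartite graph. The sketch below (all names are ours, not the paper's) joins two copies of a double edge along a merged hyperedge $e$ and confirms both the bijection of hypertree sets and, for an order in which $e$ is smallest, the multiplicativity of the coefficient lists.

```python
from itertools import combinations

def hypertrees(H):
    """All hypertrees of H (dict: hyperedge -> vertex set), obtained as the
    functions induced by spanning trees of the bipartite graph bip(H)."""
    enodes = sorted(H)
    vnodes = sorted({v for vs in H.values() for v in vs})
    nodes = [('e', x) for x in enodes] + [('v', x) for x in vnodes]
    edges = [(('e', e), ('v', v)) for e in enodes for v in H[e]]
    found = set()
    for T in combinations(edges, len(nodes) - 1):
        adj = {u: [] for u in nodes}
        for a, b in T:
            adj[a].append(b); adj[b].append(a)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        if len(seen) == len(nodes):  # spanning tree: record its induced hypertree
            found.add(tuple(sum(1 for a, b in T if a == ('e', e)) - 1
                            for e in enodes))
    return [dict(zip(enodes, f)) for f in found]

def interior_exterior(H, order):
    """Coefficient lists of I and X under the given order of the hyperedges."""
    Q = hypertrees(H)
    pos = {e: i for i, e in enumerate(order)}
    def can(f, a, b):  # transfer of valence from a to b
        g = dict(f); g[a] -= 1; g[b] += 1
        return g[a] >= 0 and g in Q
    I = [0] * (len(H) + 1); X = [0] * (len(H) + 1)
    for f in Q:
        I[sum(1 for a in H
              if any(can(f, a, b) for b in H if pos[b] < pos[a]))] += 1
        X[sum(1 for a in H
              if any(can(f, b, a) for b in H if pos[b] < pos[a]))] += 1
    return I, X

# two copies of a double edge, joined along e = e1 cup e2 as in the theorem
H1 = {'e1': {'v', 'w'}, 'a': {'v', 'w'}}
H2 = {'e2': {'v', 'u'}, 'b': {'v', 'u'}}
H  = {'e': {'v', 'w', 'u'}, 'a': {'v', 'w'}, 'b': {'v', 'u'}}
print(len(hypertrees(H)), len(hypertrees(H1)) * len(hypertrees(H2)))  # 4 4
I, X = interior_exterior(H, ['e', 'a', 'b'])
print(I, X)  # both factor as (1 + xi)^2, resp. (1 + eta)^2
```

Here $I_{\mathscr H_i}=1+\xi$ and $X_{\mathscr H_i}=1+\eta$ for $i=1,2$, and the merged hypergraph indeed yields $1+2\xi+\xi^2$ and $1+2\eta+\eta^2$.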
The next two statements describe the situation when two bipartite graphs are joined at a single vertex. They follow from Theorem \ref{thm:edgesum} and Lemma \ref{lem:farkinca} by first slightly enlarging a hypergraph as in the Lemma and then joining it to another hypergraph as in the Theorem.
\begin{kov}\label{cor:eleterint}
Let $\mathscr H_1=(V_1,E_1)$ and $\mathscr H_2=(V_2,E_2)$ be hypergraphs with disjoint vertex sets. Choose $e_1\in E_1$ and $e_2\in E_2$ and form the hypergraph $\mathscr H$ with vertex set $V_1\cup V_2$ and hyperedge set $(E_1\setminus\{e_1\})\cup(E_2\setminus\{e_2\})\cup\{\,e_1\cup e_2\,\}$. Then $I_{\mathscr H}=I_{\mathscr H_1}I_{\mathscr H_2}$ and $X_{\mathscr H}=X_{\mathscr H_1}X_{\mathscr H_2}$.
\end{kov}
\begin{kov}\label{cor:pontoterint}
Let $\mathscr H_1=(V_1,E_1)$ and $\mathscr H_2=(V_2,E_2)$ be hypergraphs with disjoint vertex sets. Choose $v_1\in V_1$ and $v_2\in V_2$ and form the hypergraph $\mathscr H$ by identifying them to a single vertex $v$. I.e., the vertex set of $\mathscr H$ is $(V_1\setminus\{v_1\})\cup(V_2\setminus\{v_2\})\cup\{v\}$ and its hyperedge set is identified with $E_1\cup E_2$ so that $v$ is an element of a `new' hyperedge if and only if either $v_1$ or $v_2$ was an element of the `old' hyperedge. Then $I_{\mathscr H}=I_{\mathscr H_1}I_{\mathscr H_2}$ and $X_{\mathscr H}=X_{\mathscr H_1}X_{\mathscr H_2}$.
\end{kov}
\subsection{Deletion and contraction}\label{ssec:delcontr}
These operations are of special importance in the theory of the Tutte polynomial. However in the hypergraph case, so far they play surprisingly small roles.
\begin{Def}
Let $\mathscr H=(V,E)$ be a hypergraph and $e\in E$ a hyperedge. \emph{Deleting} $e$ from $\mathscr H$ means passing to the hypergraph $\mathscr H-e=(V,E\setminus\{e\})$. The result of \emph{contracting} $e$ is the hypergraph $\mathscr H/e$ which is obtained from $\mathscr H-e$ by identifying the elements of $e$. I.e., the hyperedge set of $\mathscr H/e$ is essentially $E\setminus\{e\}$ and its vertex set is $(V\setminus e)\cup\{\bar e\}$, where the new vertex $\bar e$ belongs to the hyperedge $e'\in E\setminus\{e\}$ if and only if $e'\cap e\ne\varnothing$.
\end{Def}
It would be more consistent with our previous notation to write $\mathscr H\setminus\{e\}$ instead of $\mathscr H-e$, but for this subsection we decided to conform to the existing literature and use the simpler symbol.
Regarding the hypertree polytope $Q_{\mathscr H}$ and the hyperedge $e$, let us define the \emph{top face} of $Q_{\mathscr H}$ with respect to $e$ as $\{\,f\in Q_{\mathscr H}\mid f(e)=|e|-1\,\}$ and let the \emph{bottom face} be $\{\,f\in Q_{\mathscr H}\mid f(e)=0\,\}$. Because $Q_\mathscr H$, being the set of bases in an integer polymatroid, is itself an integer polytope, the top and bottom faces with respect to any hyperedge coincide with the convex hulls of the hypertrees that they contain.
\begin{all}\label{pro:szendvics}
Let $\mathscr H=(V,E)$ be a hypergraph with hypertree polytope $Q_{\mathscr H}$. For any hyperedge $e\in E$, the hypertree polytopes of $\mathscr H-e$ and $\mathscr H/e$ are naturally isomorphic to the bottom and the top face, respectively, of $Q_{\mathscr H}$ with respect to $e$.
\end{all}
\begin{proof}
It suffices to equate the sets of hypertrees within the respective polytopes.
Deletion: If we take any hypertree in $\mathscr H-e$ and an arbitrary realization, then by adding to it a single edge of $\bip\mathscr H$ adjacent to $e$, we realize a hypertree that lies along the bottom face. Conversely, any realization of a hypertree from the bottom face of $Q_\mathscr H$ has a single edge adjacent to $e$ and by removing it we obtain a tree which realizes a hypertree in $\mathscr H-e$. Therefore a bijection is defined by extending hypertrees in $\mathscr H-e$ to $e$ with the value zero. (It is possible for the bottom face of $Q_\mathscr H$ to be empty, namely when $\bip\mathscr H-\{e\}=\bip(\mathscr H-e)$ is disconnected. Of course in such a case $Q_{\mathscr H-e}$ is empty as well.)
Contraction: Let $\bar e$ denote the new vertex in $\mathscr H/e$ that resulted from the contraction of $e$. If $f$ is a hypertree in $\mathscr H/e$, then, using Lemma \ref{lem:anchor}, construct a realization $\Gamma$ for it which contains all edges adjacent to $\bar e$. Based on this tree, we carry out the following construction in $\bip\mathscr H$. Keep all edges of $\Gamma$ that are not adjacent to $\bar e$. If $e'\in E\setminus\{e\}$ is such that $e'\cap e\ne\varnothing$, then connect $e'$ to an arbitrarily chosen element of the intersection. Finally, add all edges that are adjacent to $e$. The result is a spanning tree in $\bip\mathscr H$ so that its induced hypertree is part of the top face of $Q_\mathscr H$ and it agrees with $f$ on $E\setminus\{e\}$.
The inverse of this correspondence is constructed as follows. Let $g$ be a hypertree from the top face of $Q_\mathscr H$. Any of its realizations contains all edges of $\bip\mathscr H$ adjacent to $e$. Take one such tree and (viewing it as a topological space) contract the union of its edges adjacent to $e$ to a single point. The result is another tree which naturally embeds in $\bip(\mathscr H/e)$ as a spanning tree so that it realizes a hypertree in $\mathscr H/e$ that agrees with $g$ over $E\setminus\{e\}$.
\end{proof}
We now turn to generalizations of the classical deletion--contraction formulas \eqref{eq:delcontr}. Lemma \ref{lem:farkinca} can be viewed as a trivial extension of the second one to hypergraphs. (The reader may wish to consult \eqref{eq:regitutte} to see that there is no contradiction.) The first formula also has an easy extension to hypergraphs as follows.
\begin{all}
Let $\mathscr H=(V,E)$ be a hypergraph with a hyperedge $e$ so that $\bip\mathscr H$ is connected but $\bip\mathscr H-\{e\}$ is the union of $|e|$ connected components. Then $I_\mathscr H=I_{\mathscr H/e}$ and $X_\mathscr H=X_{\mathscr H/e}$ (and both are equal to obvious $|e|$-fold products).
\end{all}
\begin{proof}
Immediate from Lemma \ref{lem:farkinca} and Corollaries \ref{cor:eleterint} and \ref{cor:pontoterint}.
\end{proof}
\begin{megj}
For the abstract duals of the hypergraphs in the previous Proposition, the same argument yields the same conclusion: $I_{\overline{\mathscr H}}=I_{\overline{\mathscr H/e}}$ and $X_{\overline{\mathscr H}}=X_{\overline{\mathscr H/e}}$, because both sides of each equation agree with the obvious $|e|$-fold product.
\end{megj}
Finally, let us generalize the third deletion--contraction formula from \eqref{eq:delcontr}.
\begin{all}\label{pro:delcontr}
Let $\mathscr H=(V,E)$ be a hypergraph that contains a two-element hyperedge $e$ such that $\bip\mathscr H-\{e\}$ is connected. Then we have
\begin{equation}\label{eq:elcsusznak}
I_\mathscr H(\xi)=I_{\mathscr H-e}(\xi)+\xi I_{\mathscr H/e}(\xi)\quad\text{and}\quad X_\mathscr H(\eta)=\eta X_{\mathscr H-e}(\eta)+X_{\mathscr H/e}(\eta).
\end{equation}
\end{all}
\begin{proof}
As $|e|=2$, each hypertree in $\mathscr H$ lies either on the top (if its value at $e$ is $1$) or on the bottom (if it is $0$) face of $Q_\mathscr H$ with respect to $e$. Let us choose an order on $E$ in which $e$ is the largest hyperedge.
With respect to hypertrees along the bottom face, $e$ is internally active. With respect to those on the top face, by Lemma \ref{lem:mohofelett}, $e$ is internally inactive: indeed, the Lemma applies because $nj(e)=1$ by our assumption, which means that the value of the greedy hypertree at $e$ is $g(e)=|e|-1-nj(e)=0$. With this, our first equation follows easily from Proposition \ref{pro:szendvics}.
The second is equally easy if we notice that with respect to hypertrees of the top face, $e$ is externally active, whereas with respect to those along the bottom face, it is externally inactive. The latter can be argued, for example, as follows: if $f$ is a hypertree in $\mathscr H$ so that $f(e)=0$, then take any of its realizations and add to it the `other' edge adjacent to $e$ as well. The cycle thus created goes through at least one more hyperedge $e'$, and if we break the cycle by removing one of its edges adjacent to $e'$, we get a realization of a hypertree which is the result of a transfer of valence to $e$ from the smaller hyperedge $e'$.
\end{proof}
It would be highly desirable to formulate a version of the last proposition for hyperedges of arbitrary size. Such formulas, however, are currently lacking.
\section{Abstract duality}\label{sec:sejtes}
The observations on the interior polynomial contained in Propositions \ref{pro:const} and \ref{pro:rang} show more than just the (already proved) order-independence of the coefficients. The values observed also remain the same if we exchange the roles of hyperedges and vertices, that is they are invariant under abstract duality. We conjecture that this is true for the rest of the interior polynomial as well. In other words, the interior polynomial is in fact an invariant of bipartite graphs.
\begin{sejt}\label{conj:dual}
Let $G=(V_0,V_1,E)$ be a connected bipartite graph which induces the hypergraphs $\mathscr G_0$ and $\mathscr G_1$ as in \eqref{eq:absdual}. Then $I_{\mathscr G_0}=I_{\mathscr G_1}$.
\end{sejt}
So far, supporting evidence for this includes Postnikov's Theorem \ref{thm:post}, which says that $I_{\mathscr G_0}(1)=I_{\mathscr G_1}(1)$, and Theorem \ref{pro:rang}. We present some more below.
Conjecture \ref{conj:dual} certainly holds in Example \ref{ex:harom}, which also illustrated that the operation of abstract duality will usually result in a change in the exterior polynomial. (Indeed, the linear terms can already be different.) Let us work out a more general family of examples.
\begin{pelda}
We consider the complete bipartite graph $K_{m,n}=(V_0,V_1,E)$, where $|V_0|=n$ and $|V_1|=m$. For both $\mathscr G_0$ and $\mathscr G_1$ the condition \eqref{eq:hypertree2} in Theorem \ref{thm:politop} becomes vacuous and we see that their hypertree polytopes are the simplices
\[Q_{\mathscr G_0}=(m-1)\Delta_{V_0}\quad\text{and}\quad Q_{\mathscr G_1}=(n-1)\Delta_{V_1},\]
both of which contain ${n+m-2\choose n-1}$ hypertrees. (The number of lattice points in a standard $d$-dimensional simplex of sidelength $k$ is ${k+d\choose d}$.)
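For small values of $n$ and $m$, this hypertree count can be confirmed by brute force. The Python sketch below is an illustrative check only, not part of the formal argument (the function name \texttt{hypertrees} is ours): it enumerates all spanning trees of $K_{n,m}$ and records, for each, the vector $e\mapsto\deg(e)-1$ over $e\in V_0$; the distinct vectors obtained are exactly the hypertrees in $\mathscr G_0$.

```python
from itertools import combinations
from math import comb

def hypertrees(n, m):
    """Distinct vectors f on V_0 with f(e) = deg_T(e) - 1,
    over all spanning trees T of K_{n,m}."""
    # vertices 0..n-1 form V_0, vertices n..n+m-1 form V_1
    edges = [(i, n + j) for i in range(n) for j in range(m)]
    nverts = n + m
    found = set()
    for T in combinations(edges, nverts - 1):
        # union-find cycle check: n+m-1 acyclic edges form a spanning tree
        parent = list(range(nverts))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        is_tree = True
        for a, b in T:
            ra, rb = find(a), find(b)
            if ra == rb:          # adding (a,b) would close a cycle
                is_tree = False
                break
            parent[ra] = rb
        if is_tree:
            deg = [0] * n
            for a, _ in T:        # degree of each V_0-vertex in the tree
                deg[a] += 1
            found.add(tuple(d - 1 for d in deg))
    return found

# the count matches C(n+m-2, n-1) in each small case
for n, m in [(2, 2), (2, 3), (3, 3)]:
    assert len(hypertrees(n, m)) == comb(n + m - 2, n - 1)
```

(The enumeration is exponential in the number of edges, so it is only meant for very small complete bipartite graphs.)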
Fix now some order on $V_0$. If $f\colon V_0\to\ensuremath{\mathbf N}$ is a hypertree in $\mathscr G_0$ and $e\in V_0$, then $e$ is internally inactive with respect to $f$ if and only if $f(e)>0$ and $e$ is not the smallest in the order. Hence to enumerate hypertrees in $\mathscr G_0$ with internal inactivity $k$, we need to
\begin{enumerate}[(a)]
\item choose the set of the $k$ internally inactive hyperedges and
\item partition the sum of hypertree entries, $m-1$, between them and the smallest hyperedge so that the latter may receive $0$ but the others get a positive amount.
\end{enumerate}
This is an easy exercise
whose solution is ${n-1\choose k}{m-1\choose k}$. As this is a symmetric expression in $n$ and $m$, we see that Conjecture \ref{conj:dual} holds for complete bipartite graphs.
We leave it for the reader to verify that the number of hypertrees in $\mathscr G_0$ with external inactivity $k$ is ${m+k-2\choose k}$ for all $0\le k\le n-1$. For this of course it does matter if we interchange $n$ and $m$.
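Both counts are consistent with the total number ${n+m-2\choose n-1}$ of hypertrees: summing ${n-1\choose k}{m-1\choose k}$ over $k$ gives the total by the Vandermonde identity, while summing ${m+k-2\choose k}$ over $0\le k\le n-1$ gives it by the hockey-stick identity. The following sketch (ours, assuming $n,m\ge2$) checks both facts numerically over a small range:

```python
from math import comb

for n in range(2, 8):
    for m in range(2, 8):
        total = comb(n + m - 2, n - 1)  # number of hypertrees in G_0
        # internal inactivity k occurs C(n-1,k) C(m-1,k) times (Vandermonde)
        assert sum(comb(n - 1, k) * comb(m - 1, k)
                   for k in range(min(n, m))) == total
        # external inactivity k occurs C(m+k-2,k) times, 0 <= k <= n-1 (hockey stick)
        assert sum(comb(m + k - 2, k) for k in range(n)) == total
```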
\end{pelda}
Let us temporarily call a bipartite graph \emph{good} if it satisfies Conjecture \ref{conj:dual}. Bipartite graphs with an isomorphism that interchanges their color classes are good. We have seen a number of operations that, when applied to good graphs, result in other good graphs. These include adding on a new valence one point (Lemma \ref{lem:farkinca}), joining two bipartite graphs at a vertex (Corollaries \ref{cor:eleterint} and \ref{cor:pontoterint}) or at an edge (Theorem \ref{thm:edgesum}). Adding a new valence two point is also such an operation. To see why, we have to accompany Proposition \ref{pro:delcontr} with the following result.
\begin{all}\label{pro:delcontr'}
Let $\mathscr H=(V,E)$ be a hypergraph with a vertex $v\in V$ belonging to exactly two hyperedges $e_1$ and $e_2$. Define the deletion of $v$ as $\mathscr H'=\overline{\overline{\mathscr H}-v}$ and the contraction of $v$ as $\mathscr H''=\overline{\overline{\mathscr H}/v}$ (the obvious constructions). Assuming that $\bip\mathscr H-\{v\}$ is connected, we have
\begin{equation}\label{eq:megintcsusznak}
I_\mathscr H(\xi)=I_{\mathscr H'}(\xi)+\xi I_{\mathscr H''}(\xi).
\end{equation}
If we also suppose that $v$ is not the only common element of $e_1$ and $e_2$, we obtain the relation $X_\mathscr H(\eta)=X_{\mathscr H'}(\eta)+\eta X_{\mathscr H''}(\eta)$.
\end{all}
Notice that \eqref{eq:megintcsusznak} matches the first equation in \eqref{eq:elcsusznak} exactly. There is however a discrepancy between the formulas concerning exterior polynomials, not to mention the extra assumption (which, by the way, can not be dropped).
\begin{proof}
Just like in the proof of Proposition \ref{pro:delcontr}, first we are going to split the set of hypertrees in $\mathscr H$ in two so that the two sides are in one-to-one correspondences with hypertrees in $\mathscr H'$ and $\mathscr H''$, respectively, and then study activities. The geometry however will be different this time and therefore the argument more complex.
If $f$ is a hypertree in $\mathscr H'$, define $f'(e_1)=f(e_1)+1$ and $f'(x)=f(x)$ for other hyperedges. This is a hypertree in $\mathscr H$ because a realization of $f$ in $\bip\mathscr H'=\bip\mathscr H-\{v\}$ together with the edge of $\bip\mathscr H$ between $e_1$ and $v$ gives a realization of $f'$. The isometric embedding $f\mapsto f'$ is obviously injective.
Note that $\mathscr H''$ contains a distinguished hyperedge $e$ where $e_1$ and $e_2$ merge. If $g$ is a hypertree in $\mathscr H''$ then $\mathscr H$ has hypertrees which agree with $g$ away from $e_1$ and $e_2$ because the same edges that constitute a realization of $g$ in $\bip\mathscr H''$ form a two-component forest in $\bip\mathscr H$ which can be made a spanning tree by adding to it both edges adjacent to $v$. The set of these,
\[L_g=\left\{\,h\in Q_\mathscr H\cap\ensuremath{\mathbf Z}^E\biggm|h\big|_{E\setminus\{e_1,e_2\}}=g\big|_{E\setminus\{e_1,e_2\}}\,\right\},\]
is analogous to the line segments that we used in the proof of Theorem \ref{thm:independent}. Let us now associate to $g$ the hypertree $g''\in L_g$ which has the highest value at $e_2$.
The sets $L_g$, which we will also refer to as \emph{fibers}, lie on parallel lines. Furthermore, any hypertree $h$ in $\mathscr H$ is part of a set $L_g$. To see this, use Lemma \ref{lem:anchor} to construct a realization of $h$ containing both edges adjacent to $v$ and then contract those two edges to a point to obtain a spanning tree in $\bip\mathscr H''$ that induces the right $g$.
Moreover, if $g_1\ne g_2$, then the lines containing $L_{g_1}$ and $L_{g_2}$ are disjoint (so that the $g$ above is uniquely determined by $h$). This is because $E\setminus\{\,e_1,e_2\,\}$ represents all but one of the hyperedges in $\mathscr H''$ and by \eqref{eq:hypertree3}, the last value (at $e$) is uniquely determined by the rest. In particular, the correspondence $g\mapsto g''$ is also injective.
There do not exist $f$ and $g$ so that $f'=g''$ because
\begin{enumerate}[(a)]
\item\label{atf'} For any $f$, the hypertree $f'$ is such that a transfer of valence is possible from $e_1$ to $e_2$: just repeat the same argument that $f'$ is a hypertree but with the roles of $e_1$ and $e_2$ reversed.
\item\label{atg''} At a hypertree of the form $g''$, the same transfer is never possible.
\end{enumerate}
On the other hand, elements of $L_g\setminus\{g''\}$, i.e., all hypertrees $h$ in $\mathscr H$ that are such that a transfer of valence is possible from $e_1$ to $e_2$, are of the form $f'$ for some $f$. This is because each such $h$ has a realization that does not contain the edge $\gamma$ of $\bip\mathscr H$ connecting $e_2$ and $v$. To show this, take any realization $\Xi$ of $h$. If it does not contain $\gamma$, we are done. Otherwise, by Lemma \ref{lem:anchor}, we may assume that it contains both edges of $\bip\mathscr H$ adjacent to $v$. \label{xipage} By removing those two edges, we separate $\Xi$ to two smaller trees $\Xi_1$ and $\Xi_2$. Let $E_1\ni e_1$ be the set of hyperedges contained by $\Xi_1$ and let $E_2\ni e_2$ be contained by $\Xi_2$ so that $E$ is the disjoint union of $E_1$ and $E_2$.
There have to be elements of $E_2$ that are adjacent in $\bip\mathscr H$ to vertices of $\Xi_1$. This is because otherwise $\Xi\cap\big(\bip\mathscr H\big|_{E_2}\big)=\Xi_2\cup\{\gamma\}$ would be a spanning tree in $\bip\mathscr H\big|_{E_2}$, which by Lemma \ref{lem:erdo} would mean that $E_2$ is tight at $h$ and that would rule out the possibility of the assumed transfer of valence from $e_1$ to $e_2$. Whenever we locate this kind of element $a$ in $E_2$, we `switch it over' to $E_1$ by our usual method: add to $\Xi$ an edge $\alpha$ connecting $a$ to a vertex of $\Xi_1$; this creates a unique cycle $C$ in $\Xi\cup\{\alpha\}$ which necessarily contains $\gamma$; remove the edge of $C$ adjacent to $a$ which is not $\alpha$.
By iterating this procedure, we move through realizations of $h$ with smaller and smaller associated sets $E_2$. Therefore after finitely many steps, the hyperedge $a$ that we operate on will be $e_2$ itself. The edge that we remove in that step is necessarily $\gamma$. This finishes the argument that a hypertree in $\mathscr H$ is always either of the form $f'$ or of the form $g''$ but never both.
In order to study activities, let us now order $E$ so that $e_1$ and $e_2$ are the smallest hyperedges. (Later we will use $e_1<e_2$ in the internal case and $e_2<e_1$ in the external case, but for now we leave this unspecified.) We will use this order in $\mathscr H$ to define $\bar\iota$ and $\bar\varepsilon$,
as well as in $\mathscr H'$ to define $\bar\iota'$ and $\bar\varepsilon'$
values. In $\mathscr H''$, let $e$ be the smallest hyperedge and otherwise use the same order in the definitions of $\bar\iota''$ and $\bar\varepsilon''$.
We start with two easy claims.
\begin{enumerate}[(i)]
\item\label{ftof'} If a hyperedge $a$ is inactive (in either sense) with respect to $f$ in $\mathscr H'$, then $a$ is inactive with respect to $f'$ in $\mathscr H$. This is because if the hypertrees $f_1$ and $f_2$ in $\mathscr H'$ are related by a single transfer of valence then
$f_1'$ and $f_2'$ are related by the same transfer.
\item\label{g''tog} Regarding $\mathscr H''$, an implication of the opposite kind is easy: If the hyperedge $a\in E\setminus\{\,e_1,e_2\,\}$ is inactive with respect to some element of $L_g$ (in $\mathscr H$), then $a$ is inactive with respect to $g$ (in $\mathscr H''$) as well. Just take the hypertree that results from the transfer of valence which `makes' $a$ inactive and notice that it is part of a set $L_{g_0}$; this $g_0$ differs from $g$ by the single transfer of valence which establishes the claim.
\end{enumerate}
The rest of the proof is concerned with the degree to which the converses of \eqref{ftof'} and \eqref{g''tog} above are true. We start with \eqref{g''tog}.
\begin{lemma}\label{lem:sinpar}
If the hyperedge $a\in E\setminus\{\,e_1,e_2\,\}$ is inactive, in either sense, with respect to the hypertree $g$ in $\mathscr H''$, then $a$ is inactive with respect to any element $h\in L_g$ (e.g.\ $h=g''$) in $\mathscr H$.
\end{lemma}
To show this, we separate two cases.
\begin{enumerate}[I.]
\item Let $h\in L_g$. If $g$ is such that a transfer of valence is possible from $a$ to $e$ (resp.\ from $e$ to $a$) resulting in the hypertree $g_0$ in $\mathscr H''$, then we claim that $h$ is such that a transfer of valence is possible from $a$ to $e_1$ or $e_2$ (resp.\ from $e_1$ or $e_2$ to $a$). This follows by fixing a $2$-dimensional plane containing $L_g$ and $L_{g_0}$ and constructing the type of elementary argument that we used to prove Lemmas \ref{lem:rombusz} and \ref{lem:nyil}.
\item If $g$ is such that no transfer of valence is possible from $a$ to $e$ (resp.\ from $e$ to $a$), then by the argument in \eqref{g''tog} it follows that $h$
is such that there is no transfer of valence from $a$ to $e_1$ or to $e_2$ (resp.\ from $e_1$ or from $e_2$ to $a$). As $a$ is inactive, there is another hyperedge $b\in E\setminus\{\,e_1,e_2\,\}$, smaller than $a$, so that $g$ is such that valence can be transferred from $a$ to $b$ (resp.\ from $b$ to $a$), resulting in the hypertree $g_0$. From Lemma \ref{lem:rombusz} and \eqref{g''tog} we obtain that $h$ is such that valence can not be transferred from $b$ to $e_1$ or $e_2$ (resp.\ from $e_1$ or $e_2$ to $b$) either. Thus Lemma \ref{lem:teglalap}, applied to $p=e_1$, $q=e_2$, $r=a$, $s=b$ (resp.\ $p=a$, $q=b$, $r=e_1$, $s=e_2$) implies that $L_g$ and $L_{g_0}$ are located in a rectangular cross-section of $Q_\mathscr H$ so that every hypertree along $L_g$ is such that a transfer of valence is possible from $a$ to $b$ (resp.\ from $b$ to $a$). This completes the proof of the lemma.
\end{enumerate}
Let us assume that $e_1$ and $e_2$ have a second common vertex $v'\ne v$. This is equivalent to saying that for any hypertree $g$ in $\mathscr H''$, the set $L_g$ has at least two elements\footnote{As an alternative to the short argument below, the equivalence can also be deduced from Postnikov's Proposition \ref{pro:felbont}.}, which in turn makes it meaningful for us to let $g'''$ denote the element of $L_g$ adjacent to $g''$. (Note that $g'''$ is of the form $f'$ for some $f$.) Indeed, if $v'$ exists, then a spanning tree of $\bip\mathscr H$ may contain at most three edges of the quadrangle $e_1ve_2v'$ which makes it very easy to realize a transfer of valence between $e_1$ and $e_2$. Conversely, if $e_1\cap e_2=\{v\}$, then there are (greedy) hypertrees in $\mathscr H$ which take the maximal values $|e_1|-1$ and $|e_2|-1$ at $e_1$ and $e_2$, respectively.
In addition to our previous assumptions, we now set $e_2<e_1$. Concerning external inactivities, we prove the following items.
\begin{itemize}
\item $\bar\varepsilon(g'')=\bar\varepsilon''(g)+1$ for all hypertrees $g$ in $\mathscr H''$. It is clear that $e_1$ is externally inactive with respect to $g''$ whereas of course $e$ is active with respect to $g$. The rest follows from Lemma \ref{lem:sinpar} and \eqref{g''tog}.
\item $\bar\varepsilon(f')=\bar\varepsilon'(f)$ for all hypertrees $f$ in $\mathscr H'$. A converse of \eqref{ftof'} is indeed true in this case. If $f'$ is such that $e_2$ can transfer valence to $e_1$, then the same is obviously true for $f$. If $f'$ is such that $e_1$ or $e_2$ can transfer valence to some other hyperedge $a$ then, even if the result of the transfer happens to be of the form $g''$, it is because the transfer came from $e_1$ and it is easy to see that a transfer of valence from $e_2$ to $a$ turns $f'$ into $g'''$. Finally, if $f'$ and $a$ are such that no transfer of valence is possible from $e_1$ or $e_2$ to $a$ but the hyperedge $b$ can transfer valence to $a$, resulting in the hypertree $h$, then the usual combination of Lemmas \ref{lem:rombusz} and \ref{lem:teglalap} shows that the fibers containing $f'$ and $h$ lie in a rectangular cross-section and therefore $h$ is not of the form $g''$ either.
\end{itemize}
The combination of the last two equations settles the claim in the Proposition concerning exterior polynomials.
Finally, we turn to internal activities and establish \eqref{eq:megintcsusznak}. For the remainder of the proof, we will use the order $e_1<e_2$ among the two smallest hyperedges. Under the simplifying assumption that all fibers contain at least two hypertrees, the result follows relatively easily as above in the external case. Without that assumption, both components of the argument break down: some hypertrees $g''$ are such that $e_2$ can not transfer valence to $e_1$, i.e., $g''$ does not pick up an extra internally inactive hyperedge beyond those of $g$; on the other hand there are also hypertrees $f'$ which do pick up extra internally inactive hyperedges beyond those of $f$ because some transfer of valence can turn them into one of the $g''$ without a convenient $g'''$ nearby. Our task is to show that the two problems cancel each other out.
Let us call a hypertree in $\mathscr H$ \emph{lonely} if it is the unique element of its fiber, i.e., if it is such that no transfer of valence is possible between $e_1$ and $e_2$. We claim that if $h$ is a lonely hypertree, then there is a hyperedge $a\in E\setminus\{\,e_1,e_2\,\}$ so that $h$ is such that both $e_1$ and $e_2$ can transfer valence to $a$. After noting that any realization $\Xi$ of $h$ has to contain both edges of $\bip\mathscr H$ adjacent to $v$, we proceed as earlier (on p.\ \pageref{xipage}) to define the subtrees and hyperedge sets $\Xi_i\supset E_i\ni e_i$, $i=1,2$. Now there has to be a hyperedge $a$ in $E_1$ or $E_2$ that is connected by an edge $\alpha$ of $\bip\mathscr H$ to a vertex in $\Xi_2$ or $\Xi_1$, respectively, for otherwise removing $v$ would disconnect the graph. Adding $\alpha$ to $\Xi$ creates a cycle through $v$ so that by removing either edge of $\bip\mathscr H$ adjacent to $v$, we realize the two desired transfers of valence to $a$.
If $h$ is a lonely hypertree then select the smallest hyperedge $a_h$ with the property above and let the hypertree $t(h)$ be obtained from $h$ by a single transfer of valence from $e_2$ to $a_h$. From the construction it is obvious that there is
a hypertree $f$ in $\mathscr H'$ so that $f'=t(h)$. It is also clear from Lemma \ref{lem:rombusz} that $t(h)$ is part of a two-element fiber; in particular it is such that $e_2$ can not transfer valence to $e_1$.
For a hypertree $f$ in $\mathscr H'$, let us introduce
\[T_f=\{\,h\in Q_\mathscr H\cap\ensuremath{\mathbf Z}^E\mid h\text{ is a lonely hypertree with }t(h)=f'\,\}.\]
The proof of \eqref{eq:megintcsusznak} will be immediate from the following items.
\begin{enumerate}[1)]
\item\label{OK} $\bar\iota(g'')=\bar\iota''(g)+1$ for all hypertrees $g$ in $\mathscr H''$ so that $g''$ is not lonely. This follows easily from \eqref{g''tog}, Lemma \ref{lem:sinpar}, and the observation that $e_2$ is internally inactive with respect to such hypertrees.
\item\label{keves} $\bar\iota(g'')=\bar\iota''(g)$ whenever $g''$ is lonely. In these cases $e_2$ is internally active; apart from that, the argument of \ref{OK}) applies.
\item\label{tulsok} $\bar\iota(f')=\bar\iota'(f)+|T_f|$ for all hypertrees $f$ in $\mathscr H'$.
\item\label{sorozat} For any hypertree $f$ in $\mathscr H'$, the values of $\bar\iota$ among the elements of $T_f$ are $\bar\iota'(f),\bar\iota'(f)+1,\ldots,\bar\iota'(f)+|T_f|-1$, each occurring exactly once.
\end{enumerate}
We will prove \ref{tulsok}) and \ref{sorozat}) momentarily but let us first take care of \eqref{eq:megintcsusznak}. For each hypertree $g$ in $\mathscr H''$, we need a hypertree in $\mathscr H$ whose $\bar\iota$ is one higher than $\bar\iota''(g)$. If it is not lonely then $g''$ can play this role by \ref{OK}); otherwise \ref{keves}) says that $g''$ is not good but it belongs to a unique set $T_f$ which, by \ref{sorozat}), either contains an element with the desired $\bar\iota$ or if it does not then, according to \ref{tulsok}), $f'$ will do the job. Also, for each hypertree $f$ in $\mathscr H'$ we need a hypertree in $\mathscr H$ with the same $\bar\iota$. If $T_f=\varnothing$ then, by \ref{tulsok}), $f'$ will do; otherwise \ref{sorozat}) says that the element of $T_f$ with the lowest $\bar\iota$ will suffice. Since we used each hypertree in $\mathscr H$ exactly once, \eqref{eq:megintcsusznak} follows.
Let us now fix a hypertree $f$ in $\mathscr H'$. For each $h\in T_f$, the hyperedge $a_h$ is internally inactive with respect to $f'$ because a transfer of valence from $a_h$ to $e_2$ turns $f'$ into $h$. We claim that the same hyperedges $a_h$ are internally active with respect to $f$, implying via \eqref{ftof'} that $\bar\iota(f')\ge\bar\iota'(f)+|T_f|$. For this it suffices to prove that any downward transfer of valence from $a_h$ will turn $f'$ into a hypertree of the form $g''$ for some $g$. The hypertree $f'$ is such that valence can not be transferred from $a_h$ to $e_1$ because $h$ was such that valence could not be transferred from $e_2$ to $e_1$. If $f'$ is such that a transfer of valence is possible from $a_h$ to some hyperedge $e_2<x<a_h$ (resulting in the hypertree $j$) then, by the way $a_h$ was chosen in the definition of $t(h)$, the hypertree $h$ is such that valence can not be transferred from $e_1$ to $x$. In other words, $j$ is such that valence can not be transferred from $e_1$ to $e_2$ which, by the characterization \eqref{atg''}, establishes our claim.
To see why $\bar\iota(f')\le\bar\iota'(f)+|T_f|$ holds too, we let $y\in E$ be internally inactive with respect to $f'$ and show that it is either internally inactive with respect to $f$ as well or it is one of the $a_h$ for some $h\in T_f$. If $f'$ is such that a downward transfer of valence from $y$ turns it into $f_0'$ for some $f_0$ then the first option holds. For instance, by Lemma \ref{lem:rombusz} and \eqref{atf'} applied to $f'$, this is the case if $f'$ is such that a transfer of valence is possible from $y$ to $e_1$. Assume now that $f'$ is such that valence can be transferred from $y$ to a hyperedge $e_2<x<y$ resulting in the hypertree $i$. By \eqref{atf'}, $f'$ is such that valence can be transferred from $e_1$ to $e_2$ (resulting in the hypertree $j$) but we may assume that the opposite is the case at $i$. That means that there is a set $U\subset E$ which is tight at $i$, contains $e_2$, but does not contain $e_1$. Because the same set $U$ is not tight at $f'$, it must be the case that $U$ contains $x$ and does not contain $y$. Next, apply Lemma \ref{lem:teglalap} with $p=e_2$, $q=x$, $r=e_1$, $s=y$ to find a rectangular cross-section containing $i$ and notice that it also contains $j$. This implies the existence of two more hypertrees in the cross-section, one of which shows that $f'$ is such that a transfer of valence is possible from $y$ to $e_2$. Let the result of that transfer be the hypertree $h$.
If $h$ is of the form $f_0'$ for some $f_0$ or if it is not but it is not lonely either then we see that $y$ is internally inactive with respect to $f$. If $h$ is lonely, then because $h$ is such that both $e_1$ and $e_2$ can transfer valence to $y$, we have $a_h\le y$; but if $a_h\ne y$ then $f'$ is such that $y$ can transfer valence to $a_h$ (the result being $t(h)$), showing that $y$ is internally inactive with respect to $f$. This finishes the proof of \ref{tulsok}).
What remains is to examine the set $T_f\cup\{f'\}$ of lattice points. Note that $f'=t(h)$ and the hyperedge $a_h$ determine the hypertree $h$. Let us label the elements of $T_f$ so that $a_{h_1}>a_{h_2}>\cdots>a_{h_{|T_f|}}$ holds. Other than the ones belonging to $e_2$ and the $a_{h_i}$, $i=1,2,\ldots,|T_f|$, all components of the elements of $T_f\cup\{f'\}$ are identical. The remaining components are almost the same too as each $h_i$ is derived from $f'$ by a transfer of valence from $a_{h_i}$ to $e_2$. From this description it is clear that $T_f\cup\{f'\}$ is the set of vertices of an inverted $|T_f|$-dimensional unit simplex. Based on this, using the same techniques as above, it is not hard to show that with respect to $h_i$, the hyperedges $e_2,a_{h_{|T_f|}},\ldots, a_{h_{i}}$ are internally active and $a_{h_{i-1}},\ldots,a_{h_1}$ are internally inactive, whereas for all other hyperedges
we obtain that they are internally active either with respect to all elements of $T_f\cup\{f'\}$ or with respect to none of those. This completes the proof of \ref{sorozat}) and hence that of the Proposition.
\end{proof}
The list of operations before Proposition \ref{pro:delcontr'} reduces Conjecture \ref{conj:dual} to certain prime graphs, one of which is discussed in the next example.
\begin{pelda}
Consider the bipartite graph shown in Figure \ref{fig:peti}. It is in fact a plane bipartite cubic graph, which is a class that will be important for other reasons in Sections \ref{sec:trinity} and \ref{sec:moretrinity}. For now, we note that both of its induced hypergraphs have the interior polynomial
\[1+10\xi+48\xi^2+146\xi^3+302\xi^4+410\xi^5+277\xi^6+49\xi^7+\xi^8.\]
This is a non-trivial example of Conjecture \ref{conj:dual} because the structure is not symmetric: even though both color classes have nine elements with matching valences, there is no isomorphism that interchanges them. This is demonstrated by the slight difference between the corresponding exterior polynomials. If the full dots play the role of hyperedges, we obtain
\[1+8\eta+36\eta^2+110\eta^3+235\eta^4+344\eta^5+318\eta^6+162\eta^7+30\eta^8,\]
whereas if the hollow dots represent hyperedges, we get
\[1+8\eta+36\eta^2+110\eta^3+235\eta^4+348\eta^5+326\eta^6+159\eta^7+21\eta^8.\]
All three polynomials are outputs of a computer code written by P\'eter Juh\'asz.
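As a quick consistency check of the three polynomials above (ours, independent of that code), recall that an interior or exterior polynomial evaluated at $1$ counts the hypertrees of the hypergraph; hence all three coefficient sums must agree:

```python
# Coefficient lists (constant term first) of the three polynomials above.
I = [1, 10, 48, 146, 302, 410, 277, 49, 1]
X_full = [1, 8, 36, 110, 235, 344, 318, 162, 30]
X_hollow = [1, 8, 36, 110, 235, 348, 326, 159, 21]

# Evaluating at 1 sums the coefficients; each sum counts hypertrees,
# so all three must coincide.
assert sum(I) == sum(X_full) == sum(X_hollow) == 1244
```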
\begin{figure}[htbp]
\centering
\includegraphics[width=2in]{peti}
\caption{A plane bipartite cubic graph.}
\label{fig:peti}
\end{figure}
\end{pelda}
\section{Planar duality}\label{sec:dual}
We call a hypergraph $\mathscr H=(V,E)$ \emph{planar} if the corresponding bipartite graph $\bip\mathscr H$ is planar, i.e., if it admits an embedding into the $2$--sphere $S^2$. We will almost always assume that such an embedding is fixed, but for extra clarity, we will talk about \emph{plane} (hyper)graphs to mean a graph together with a particular embedding. A \emph{region} of a plane (hyper)graph is a connected component of the complement of the image of the embedding. If the graph is connected, each region is homeomorphic to a disk.
For the rest of the paper, it will be convenient to allow multiple edges in (plane) bipartite graphs. Such objects induce the same two hypergraphs (and therefore the same two hypertree polytopes and the same interior and exterior polynomials) as the bipartite graph which results from reducing each set of multiple edges to a single edge. This is all fairly natural given that a spanning tree may contain at most one element from a set of multiple edges.
\begin{Def}\label{def:planedual}
Let $\mathscr H$ be a plane hypergraph so that its associated bipartite graph $\bip\mathscr H$ is connected. We define the \emph{planar dual hypergraph} $\mathscr H^*$ of $\mathscr H$ by keeping the set $E$ of hyperedges and letting the new set of vertices be the set $R$ of regions of $\mathscr H$. We let a region belong to a hyperedge if it is incident with the point representing it on the plane.
\end{Def}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{egyszin_dual}
\caption{Left: the planar dual of a hypergraph $\mathscr H$ of three hyperedges. Right: the planar dual of the corresponding bipartite graph $\bip\mathscr H$.}
\label{fig:dualisok}
\end{figure}
The planar dual hypergraph $\mathscr H^*$ is also planar because the bipartite graph $\bip\mathscr H^*$ can be represented by a planar diagram where we place a new vertex in each element $r\in R$ and connect it to all points around the boundary $\partial r$ that represent elements of $E$. See Figure \ref{fig:dualisok} for an example. In fact we will identify $R$ and the set of points just introduced and think of the planar dual hypergraph as given with the embedding that we have just outlined.
It is easy to see that our notion generalizes the usual duality of plane graphs. Indeed, a plane graph $G=(V,E)$ is also a hypergraph. Passing to $\bip G$ means placing a new vertex at the midpoint of each edge. Now in $G^*$, these same midpoints are connected to the points which represent the two regions on either side of the original edge. Finally, if we forget the midpoint, these two connections merge to form the usual dual edge.
The following is also almost immediate.
\begin{all}\label{pro:dualdual}
For a plane hypergraph $\mathscr H$, the planar dual of the planar dual of $\mathscr H$ is naturally isomorphic to $\mathscr H$.
\end{all}
\begin{proof}
We explained after Definition \ref{def:planedual} how $\bip\mathscr H^*$ is embedded in $S^2$. Each point representing a vertex $v\in V$ is of course contained in some connected component of the complement of $\bip\mathscr H^*$, that is, in a region of $\mathscr H^*$. This correspondence between elements of $V$ and regions of $\mathscr H^*$ is one-to-one and onto.
\end{proof}
The hypertree polytopes $Q_\mathscr H$ and $Q_{\mathscr H^*}$ of a planar dual pair are subsets of the same Euclidean space $\ensuremath{\mathbf R}^E$. Next, let us address the effect of planar duality on the hypertree polytope, as well as the interior and exterior polynomials.
\begin{tetel}\label{thm:sikdualis}
Let the hypergraphs $\mathscr H$ and $\mathscr H^*$ be planar duals. Then $Q_\mathscr H$ and $Q_{\mathscr H^*}$ are centrally symmetric. Furthermore, up to interchanging the indeterminates $\xi$ and $\eta$, we have $I_{\mathscr H^*}=X_{\mathscr H}$ and $X_{\mathscr H^*}=I_{\mathscr H}$.
\end{tetel}
In Figure \ref{fig:dualisok} we see that the hypergraph $\mathscr G_0$ of our earlier examples is planar and in fact planar self-dual. This explains why \eqref{eq:egyik} showed its interior and exterior polynomials to coincide.
Parts of the following proof were simplified from an earlier version by A.\ Bene.
\begin{proof}[Proof of Theorem \ref{thm:sikdualis}]
Proposition \ref{pro:dualdual} implies that for the claim on the polytopes, it suffices to show that if $f$ is a hypertree in $\mathscr H$ then $f^*(e)=|e|-1-f(e)$ defines a hypertree in $\mathscr H^*$. (So the center in which $Q_\mathscr H$ and $Q_{\mathscr H^*}$ are symmetric becomes the point whose $e$-coordinate is $\frac12(|e|-1)$ for all $e\in E$.)
To this end, choose a spanning tree $\Gamma\subset\bip\mathscr H$ that realizes $f$. Form its planar dual $\Gamma^d$ in the classical sense, i.e., put a vertex in each element of $R$ and connect these by edges that bisect each edge of $\bip\mathscr H$ that is not an edge in $\Gamma$. This is well known to be a spanning tree in the dual graph, in particular it is cycle-free, connected, and contains all points of $R$. We are going to use a continuous deformation followed by some discrete steps to turn $\Gamma^d$ into a spanning tree $\Gamma^*\subset\bip\mathscr H^*$ that realizes $f^*$.
Each edge $\gamma^d$ of $\Gamma^d$ bisects an edge $\gamma$ of $\bip\mathscr H$ and $\gamma$ has exactly one end $e$ in $E$. Let us push the midpoint of $\gamma^d$ along $\gamma$ so that it gets very close to $e$. If $r\in R$ and $e\in E$ are adjacent with two edges $\gamma_1,\gamma_2$ of $\bip\mathscr H$, then we merge the halves of $\gamma_1^d$ and $\gamma_2^d$ adjacent to $r$, as shown in the right side of Figure \ref{fig:csillag}. The result of the modifications so far is still a plane tree which we will continue to denote with $\Gamma^d$.
\begin{figure}[htbp]
\labellist
\footnotesize
\pinlabel $e$ at 275 273
\pinlabel $e$ at 1050 273
\pinlabel $r$ at 410 490
\pinlabel $\gamma_2$ at 375 550
\pinlabel $\gamma_2^d$ at 300 550
\pinlabel $\downarrow$ at 300 505
\pinlabel $\gamma_1$ at 500 475
\pinlabel $\gamma_1^d$ at 515 400
\pinlabel $\leftarrow$ at 470 400
\endlabellist
\centering
\includegraphics[width=4in]{csillag}
\caption{Modifying a tree $\Gamma^d$ in $(\bip\mathscr H)^*$ toward a tree $\Gamma^*$ in $\bip\mathscr H^*$. Solid edges represent $\bip\mathscr H$; the thick ones are in $\Gamma$ and the thin ones are not.}
\label{fig:csillag}
\end{figure}
Now let $e\in E$ be arbitrary. Write the edges of $\bip\mathscr H$ emanating from $e$, in a cyclic order determined by the embedding, as $\gamma_1,\ldots,\gamma_{|e|}$. Let the edges among these that do not belong to $\Gamma$ form $k$ maximal subsequences of consecutive elements. Let the lengths of these subsequences be $s_1,\ldots,s_k$. In our construction so far, each of these $k$ groups resulted in a point $e_i$ near $e$ with $s_i+1$ edges connecting it to elements of $R$. See Figure \ref{fig:csillag}, which shows $k=3$ with $s_1=2$, $s_2=1$, and $s_3=4$. We wish to push these points $e_1,\ldots,e_k$ all the way to $e$ so that the edges adjacent to them become edges of $\bip\mathscr H^*$.
The simplest case is that of $k=1$ when we can just identify $e_1$ with $e$. Note that after this, the number of edges of $\Gamma^d$ adjacent to $e$ is $s_1+1=|e|-(f(e)+1)+1=f^*(e)+1$. When $k\ge2$, we still start with pushing $e_1$ to $e$. When we do the same to $e_2$, however, a unique cycle is created in $\Gamma^d$ and we remove one of its two edges adjacent to $e$ in order to have a tree again. Then we push $e_3$ to $e$ and remove another edge, and so forth until the last point $e_k$. In the tree that results at the end, the valence of $e$ is $(s_1+1)+s_2+\cdots+s_k=|e|-(f(e)+1)+1=f^*(e)+1$. After applying this procedure to all hyperedges $e\in E$ with $k\ge1$, we arrive at a tree that is adjacent to all such hyperedges as well as to all elements of $R$.
We intentionally left out the case $k=0$ which of course corresponds to $f^*(e)=0$. To finish the construction of the spanning tree $\Gamma^*\subset\bip\mathscr H^*$ realizing $f^*$, we add one more edge for each such hyperedge so that it is connected to an arbitrarily chosen adjacent element of $R$.
The claim on the polynomials is now easily obtained
by the following observation. The correspondence $f\leftrightarrow f^*$ is a bijection between hypertrees in $\mathscr H$ and $\mathscr H^*$. The hypertree $f$ can be transformed into $g$ by a single transfer of valence if and only if $f^*$ can be transformed into $g^*$ by the opposite transfer. Therefore with respect to any order of the hyperedges, we have $\bar\iota(f)=\bar\varepsilon(f^*)$.
\end{proof}
\begin{Def}\label{def:hyperdual}
If the hypertree $f$ in the plane hypergraph $\mathscr H$ and the hypertree $f^*$ in $\mathscr H^*$ are related as above (namely, $f(e)+f^*(e)=|e|-1$ for all hyperedges $e$), then we will call them \emph{planar dual hypertrees}.
\end{Def}
\section{Trinities}\label{sec:trinity}
\subsection{Basic observations}\label{ssec:basic}
Starting from a plane hypergraph and iterating the constructions of planar and abstract duality, a total of six plane hypergraphs can be built. The corresponding bipartite graphs form a triple; see Figure \ref{fig:trinity} for an example. The picture thus obtained has a rich combinatorics and admits several equivalent definitions (cf.\ Remark \ref{rem:pbcg}). To highlight the perfect symmetry between the constituent parts, we choose to build our discussion around the following notion.
\begin{Def}
A \emph{trinity} is a triangulation of the sphere $S^2$ together with a three-coloring of the $0$-simplices. (I.e., $0$-simplices joined by a $1$-simplex have different colors.) According to dimension, we will refer to the simplices as \emph{points, edges}, and \emph{triangles}.
\end{Def}
Most claims of this subsection can be repeated for three-colored triangulations of other closed, orientable surfaces. The material in the rest of the section, however, is quite specific to the sphere. We hope to return to this point in our forthcoming joint paper with Bene.
We will use the names red, emerald, and violet for the colors in the trinity and denote the respective sets of points with $R$, $E$, and $V$. Let us color each edge in the triangulation with the color that does not occur among its ends. Then $E$ and $V$ together with the red edges form a bipartite graph that we will call the \emph{red graph} and denote with $G_R$. Each region of the red graph contains a unique red point. Likewise, the \emph{emerald graph} $G_E$ has red and violet points, emerald edges, and regions marked with emerald points. Finally, the \emph{violet graph} $G_V$ contains $R$ and $E$ as vertices, violet edges, and a violet point in each of its regions.
\begin{Def}
We will refer to the plane bipartite graphs $G_R$, $G_E$, and $G_V$ above as the \emph{constituent bipartite graphs} of the trinity. The hypergraphs induced by the constituent bipartite graphs are said to be \emph{contained} in the trinity.
\end{Def}
A trinity contains six plane hypergraphs. It can be uniquely reconstructed from each of the six as follows.
\begin{all}
Let $T$ be a trinity with points set $R\cup E\cup V$ as above. Let $\mathscr H=(V,E)$ be the hypergraph with emerald hyperedges and violet vertices so that $\bip\mathscr H=G_R$. Then we also have $(R,E)=\mathscr H^*$, $(E,R)=\overline{\mathscr H^*}$, $(V,R)=\left(\overline{\mathscr H^*}\right)^*=\overline{\left(\overline{\mathscr H}\right)^*}$, $(R,V)=\left(\overline{\mathscr H}\right)^*$, and $(E,V)=\overline{\mathscr H}$.
\end{all}
\begin{proof}
Obvious from an examination of Figure \ref{fig:trinity}.
\end{proof}
\begin{figure}[htbp]
\labellist
\footnotesize
\pinlabel $r_0$ at 706 334
\pinlabel $r_1$ at 318 388
\pinlabel $r_2$ at 102 388
\pinlabel $r_3$ at 210 208
\pinlabel $e_0$ at 56 244
\pinlabel $e_1$ at 382 262
\pinlabel $e_2$ at 219 496
\pinlabel $v_0$ at 38 441
\pinlabel $v_1$ at 201 334
\pinlabel $v_2$ at 218 117
\pinlabel $v_3$ at 417 424
\pinlabel $t_1$ at 110 450
\pinlabel $t_2$ at 110 330
\pinlabel $t_3$ at 160 175
\pinlabel $t_4$ at 260 260
\pinlabel $t_5$ at 250 410
\pinlabel $t_6$ at 490 480
\pinlabel $t_7$ at 390 360
\pinlabel $t_8$ at 470 170
\endlabellist
\centering
\includegraphics[width=4in]{trinity}
\caption{A trinity of plane bipartite graphs.}
\label{fig:trinity}
\end{figure}
The triangles of a trinity can also be colored but not with the original three colors. Notice that each triangle is adjacent with exactly one edge and one point of each color. Compared to the orientation of the sphere, the cyclic order of the colors around each triangle may be positive or negative. If two triangles share an edge, these orientations are opposite.
Hence the triangles have a black and white checkerboard coloring according to orientation, cf.\ Figure \ref{fig:trinity}. In particular, the dual graph of a trinity is a plane bipartite cubic (i.e., uniformly trivalent) graph. It turns out that the converse is also true.
\begin{all}\label{pro:pbcg}
Planar duals of plane bipartite cubic graphs are three-colorable.
\end{all}
\begin{proof}
Let $\theta$ be a plane bipartite cubic graph and let us call its color classes black and white. Direct each edge in $\theta$ from its black to its white endpoint. Now if we assign the modulo $3$ coefficient $1$ to the thus directed edges, the result is a cycle $C$ and hence a homology class in $H_1(S^2,\ensuremath{\mathbf Z}_3)=0$. Therefore the modulo $3$ intersection number of a closed loop with $C$ is zero.
Choose a base region of $\theta$ and assign the remainder class $0\in\ensuremath{\mathbf Z}_3$ to it. Given any other region, connect it to the base region with a path and assign to the region the modulo $3$ intersection number of that path and $C$. By the above remark, this is well defined and it is obviously a three-coloring with the colors $0$, $1$, and $2$.
\end{proof}
\begin{megj}\label{rem:pbcg}
According to Proposition \ref{pro:pbcg} and the paragraph above it, the notion of a trinity is equivalent to that of a plane bipartite cubic graph.
\end{megj}
Finally, notice that the sets of red edges, emerald edges, violet edges, white triangles, and black triangles all have the same cardinality $n$. In particular, adjacency defines natural bijections between white triangles and edges of each color. Now if we apply Euler's formula to the trinity $T$, we get $|R|+|E|+|V|-3n+2n=2$, that is
\begin{equation}\label{eq:euler}
\text{ the total number of points exceeds that of the white triangles by }2.
\end{equation}
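For instance, in the trinity of Figure \ref{fig:trinity} we have $|R|=4$, $|E|=3$, $|V|=4$, and $n=9$ white triangles ($t_1,\ldots,t_8$ and the outer, unlabeled one), so that
```latex
\[
|R|+|E|+|V|-3n+2n \;=\; 11-27+18 \;=\; 2,
\qquad\text{i.e.,}\qquad 11 \;=\; 9+2.
\]
```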
\subsection{The Tree Trinity Theorem}
Next, form the planar dual $G^*_{R}$ of the red graph $G_{R}$. This is now in the classical sense, i.e., the vertex set of $G^*_{R}$ is $R$ and its edges are in a one-to-one correspondence with the red edges of the trinity. This graph has a natural orientation (more precisely, two natural orientations that are opposites of each other), defined as follows. Put a positive spin (as in a small spinning top) to the emerald points and a negative spin to the violet points. If $e\in E$ and $v\in V$ are connected by a (red) edge, then the two spins induce the same orientation on the dual edge. See Figure \ref{fig:dualisok} for an illustration. Notice that at each red point $r$, the edges of $G^*_{R}$ that are adjacent to it are oriented toward and away from $r$ in an alternating fashion. In particular, each vertex of $G^*_{R}$ has the same number of incoming and outgoing edges.
\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{dualitas_uj}
\caption{The trinity of Figure \ref{fig:trinity} and the directed graph $G^*_{R}$.}
\label{fig:dualitas}
\end{figure}
Similar properties hold of course for the graphs $G^*_{E}$ and $G^*_{V}$. It is in fact this triple of directed graphs that Tutte called a trinity. In \cite{tutte1}, he proved a general property of directed graphs which will be stated after the next definition.
\begin{Def}
Let $D$ be a finite
directed graph (possibly with multiple edges) and fix a vertex $r\in D$, called the \emph{root}. A spanning tree in $D$ is called a \emph{spanning arborescence} with respect to $r$ if the unique path in the tree from $r$ to any other vertex is oriented toward that vertex.
\end{Def}
Directed graphs whose in-degree and out-degree coincide at every vertex are called \emph{balanced}.
\begin{tetel}[Tutte]
Let $D$ be a finite, balanced directed graph. The number of its spanning arborescences does not depend on the choice of root. The same holds for spanning trees that are oriented toward the root, and the counts of the two kinds of trees coincide.
\end{tetel}
The invariant of balanced directed graphs introduced in the Theorem above is denoted with $\rho(D)$ and called the \emph{arborescence number}. The following will be a useful characterization.
\begin{lemma}\label{lem:tree}
Spanning arborescences of a finite directed graph with root $r$ are exactly those cycle-free subgraphs which also have the property that for any vertex other than $r$, exactly one edge of the subgraph is directed toward that vertex.
\end{lemma}
\begin{proof}
Let $D=(V,E)$ be a finite directed graph. If it is disconnected then both properties above describe the empty set. If $D$ is connected and $A$ is a spanning arborescence with respect to $r$, then of course it is cycle-free and it has at least one edge directed into any vertex other than $r$, namely the last edge on the path from $r$ to that vertex. But because $A$ contains exactly $|V|-1$ edges, these already exhaust all of them.
Conversely, if a subgraph is cycle-free then it may have at most $|V|-1$ edges. If it has one incoming edge for each point other than $r$, then it has to have exactly $|V|-1$ of them (so it is a spanning tree) and none can be directed into $r$. For any vertex $v\ne r$, the first edge of the unique path from $r$ to $v$ has to be directed toward $v$. The same holds for the second edge, lest it share its terminal point with the first, and so on.
\end{proof}
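The characterization of Lemma \ref{lem:tree} is easy to test by brute force. The following sketch (our own illustration, not part of the argument) compares the two descriptions on a small balanced digraph, namely both orientations of each edge of a triangle:

```python
from itertools import combinations

# A small balanced digraph on {0, 1, 2}: both orientations of every
# edge of a triangle, so in-degree = out-degree = 2 at each vertex.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]
root = 0

def is_arborescence(sub):
    """|V|-1 edges and every vertex reachable from the root, i.e.,
    a spanning tree whose paths from the root point outward."""
    if len(sub) != len(vertices) - 1:
        return False
    reached, frontier = {root}, [root]
    while frontier:
        v = frontier.pop()
        for a, b in sub:
            if a == v and b not in reached:
                reached.add(b)
                frontier.append(b)
    return reached == set(vertices)

def lemma_condition(sub):
    """Cycle-free, with exactly one edge of the subgraph directed
    into each non-root vertex (the condition of the Lemma)."""
    indeg = {v: 0 for v in vertices}
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in sub:
        indeg[b] += 1
        ra, rb = find(a), find(b)
        if ra == rb:          # this edge would close a cycle
            return False
        parent[ra] = rb
    return all(indeg[v] == 1 for v in vertices if v != root)

arbs = {sub for k in range(len(edges) + 1)
        for sub in combinations(edges, k) if is_arborescence(sub)}
chars = {sub for k in range(len(edges) + 1)
         for sub in combinations(edges, k) if lemma_condition(sub)}
assert arbs == chars
print(len(arbs))  # 3 spanning arborescences rooted at 0
```

Both descriptions single out the same three edge subsets; the pair $\{(1,2),(2,1)\}$, for example, is rejected by both since it forms a cycle and misses the root.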
Let us now return to trinities and recall a beautiful result by Tutte.
\begin{tetel}[Tree Trinity Theorem]\label{thm:ttt}
The arborescence numbers $\rho(G^*_{R})$, $\rho(G^*_{E})$, and $\rho(G^*_{V})$ of the directed graphs associated to a trinity $T$ are the same.
\end{tetel}
We will call this common value the \emph{arborescence number of the trinity} and denote it with $\rho(T)$. We quickly sketch a proof, based on \cite{tutte3}, as elements of it will be relevant later.
\begin{proof}
Recall that the triangles of a trinity have a black-and-white color scheme. We may concretize our choice of orientation in the three directed graphs by requiring that the head of each edge be in a white triangle while the tail end of each is in a black triangle. Compare Figures \ref{fig:dualisok} and \ref{fig:dualitas}. Then by Lemma \ref{lem:tree}, a spanning arborescence of $G^*_{R}$, say, is an assignment of an adjacent white triangle to each non-root red point (the unique incoming edge will reach the point through the chosen triangle). We adopt this point of view for the rest of the proof.
Let us now distinguish a white triangle $t_0$ (called the outer triangle) and choose its adjacent points $r_0\in R$, $e_0\in E$, and $v_0\in V$ as the roots in the three directed graphs. Then $t_0$ can never be selected into an arborescence of any color. Notice that by \eqref{eq:euler}, the rest of the triangles and the rest of the points are in a one-to-one correspondence.
Fix a spanning arborescence $A$ in $G^*_{R}$. We are going to associate to it two other spanning arborescences, in $G^*_{E}$ and $G^*_{V}$ respectively, so that
\begin{equation}\label{eq:bijection}
\parbox{3.5in}{the union of the three mappings is a bijection from the set of non-root points to the set of non-outer white triangles.}
\end{equation}
In other words, no white triangle will get assigned to two different points. We also claim that, given $A$, there is a unique way to do this.
The dual $A^*$ of $A$ is a spanning tree in $G_{R}$. This tree $A^*$ consists of those edges whose duals are not in $A$, in other words of those red edges whose adjacent white triangles have not been assigned to a red point. Note that the edge connecting $e_0$ and $v_0$ is part of $A^*$.
As $A^*$ is a tree, it has at least two leaves, i.e., valence-one points. If those are $e_0$ and $v_0$, then $A^*$ consists of a single edge, hence there cannot be any other emerald or violet points and we have no assigning to do. Otherwise, as our leaf is adjacent to a unique white triangle that has not yet been assigned, we make the obvious assignment and delete the leaf (with its single adjacent edge) from $A^*$. We continue this procedure until $A^*$ is reduced to the single edge of $t_0$ between $e_0$ and $v_0$. By that time, all other emerald and violet points have been associated to an adjacent white triangle so that no white triangle was used twice.
We have to check (cf.\ Lemma \ref{lem:tree}) that the subgraphs we constructed in $G^*_{E}$ and $G^*_{V}$ are cycle-free. This is shown using proof by contradiction, arguing that a violet or emerald cycle would prevent some red point from being connected to the red root $r_0$ in $A$. See \cite{tutte3} for details.
As the color red played no special role above, any triple of spanning arborescences satisfying \eqref{eq:bijection} can be uniquely reconstructed from any of its three members. We conclude that spanning arborescences of our three directed graphs are arranged in disjoint triples and so their numbers agree.
\end{proof}
A mapping satisfying \eqref{eq:bijection} will be called a \emph{Tutte matching} (with respect to some fixed outer white triangle $t_0$). Tutte matchings are exactly the nonzero terms in the expansion of a determinant that first appeared in the work of Berman.
\begin{tetel}[Berman]\label{thm:berman}
The common arborescence number of the three directed graphs of a trinity can be expressed as the absolute value of the determinant of the adjacency matrix of non-outer white triangles and non-root points.
\end{tetel}
This is of course a square matrix by the observation \eqref{eq:euler}. To prove the theorem, Berman \cite{berman} checks that any Tutte matching is in fact the union of three spanning arborescences (which boils down to cycle-freeness and another argument by contradiction) and that all non-zero expansion terms come with the same sign.
\begin{pelda}
The adjacency matrix associated with the trinity of Figure \ref{fig:trinity} is
\[
M=
\bordermatrix
{~&\text{\footnotesize $t_1$}&\text{\footnotesize $t_2$}&\text{\footnotesize $t_3$}&\text{\footnotesize $t_4$}&\text{\footnotesize $t_5$}&\text{\footnotesize $t_6$}&\text{\footnotesize $t_7$}&\text{\footnotesize $t_8$}\cr
\text{\footnotesize $r_1$}&0&0&0&0&1&0&1&0\cr
\text{\footnotesize $r_2$}&1&1&0&0&0&0&0&0\cr
\text{\footnotesize $r_3$}&0&0&1&1&0&0&0&0\cr
\text{\footnotesize $e_1$}&0&0&0&1&0&0&1&1\cr
\text{\footnotesize $e_2$}&1&0&0&0&1&1&0&0\cr
\text{\footnotesize $v_1$}&0&1&0&1&1&0&0&0\cr
\text{\footnotesize $v_2$}&0&0&1&0&0&0&0&1\cr
\text{\footnotesize $v_3$}&0&0&0&0&0&1&1&0\cr}.
\]
Its determinant is $7$, which equals the number of spanning arborescences in each of the three dual directed graphs, including the one shown in Figure \ref{fig:dualitas}.
\end{pelda}
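The determinant in the Example is small enough to verify numerically. The sketch below (our own illustration) expands it by the Leibniz formula; the nonzero expansion terms are exactly the seven Tutte matchings:

```python
from itertools import permutations

# The adjacency matrix of the Example: rows r1, r2, r3, e1, e2,
# v1, v2, v3 and columns t1, ..., t8, copied from the text.
M = [
    [0, 0, 0, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 1, 1],
    [1, 0, 0, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 1, 1, 0],
]

def sign(p):
    """Parity of a permutation, via its inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(m):
    """Leibniz expansion; fine for an 8 x 8 matrix."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= m[i][p[i]]
            if term == 0:
                break
        total += term
    return total

print(det(M))  # 7, the arborescence number of the trinity
```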
\section{Hypertrees in a trinity}\label{sec:moretrinity}
\subsection{Hypertrees and arborescences}
The main result of this last section is the following.
\begin{tetel}\label{thm:alexander}
Let $G=(V_0,V_1,E)$ be a plane bipartite graph. The arborescence number of its planar dual $G^*$ agrees with the number of hypertrees in the hypergraphs $\mathscr G_0=(V_1,V_0)$ and $\mathscr G_1=(V_0,V_1)$ induced by $G$. I.e.,
\[\rho(G^*)=|Q_{\mathscr G_0}\cap\ensuremath{\mathbf Z}^{V_0}|=|Q_{\mathscr G_1}\cap\ensuremath{\mathbf Z}^{V_1}|.\]
\end{tetel}
\begin{proof}
Let us denote the vertex set of $G^*$ with $R$ and fix a root $r_0\in R$. Any spanning arborescence $A$ rooted at $r_0$ has a planar dual spanning tree $A^*$ in $G$ and that in turn has a valence distribution which gives rise to a hypertree $f_A\colon V_0\to\ensuremath{\mathbf N}$. We will show that the mapping $A\mapsto f_A$ is one-to-one and onto. (A similar statement holds of course for hypertrees in $\mathscr G_1$ instead of $\mathscr G_0$. Thus what we find as a byproduct is a set of spanning trees in $G$ that simultaneously realize all hypertrees in $\mathscr G_0$ and $\mathscr G_1$.)
For any $v\in V_0$, the edges of $G$ adjacent to it give rise to a directed cycle $C_v$ in $G^*$ and the edge set of $G^*$ is the disjoint union of these cycles. Note that each cycle $C_v$ travels around the corresponding point $v$ following the same orientation.
First we show that if two arborescences $A$ and $B$ contain the same number of edges from each $C_v$ (which is equivalent to $f_{A}=f_{B}$), then $A=B$. Assume this is not so. Then at least one cycle $C_{v_1}$ contains an edge $\alpha_1$ of $A$ that is not an edge in $B$. Let the head of $\alpha_1$ be $r_1$. Obviously $r_1\ne r_0$ because $A$ has an edge pointing to it. The other arborescence $B$ also has to have such an edge $\beta_1$ but that has to be part of a different cycle $C_{v_2}$. The edge $\beta_1$ cannot belong to $A$ because then $A$ would have two edges pointing to $r_1$. Then, because $A$ and $B$ intersect $C_{v_2}$ in the same number of edges, $C_{v_2}$ also contains an edge $\alpha_2$ which belongs to $A$ but does not belong to $B$.
Iterating our argument, we find cycles $C_{v_1},C_{v_2},C_{v_3},\ldots$ in $G^*$ until the first repetition occurs. After discarding some cycles at the beginning and relabeling the rest, we may assume that $C_{v_1}=C_{v_{k+1}}$ was the first such coincidence. Then we find ourselves in the situation depicted in Figure \ref{fig:korbeer}. The interiors of the cycles $C_{v_i}$ are disjoint from $G^*$, hence the non-root points $r_1,\ldots,r_k$ separate $G^*$ into the smaller directed graphs $D_1$ and $D_2$, see Figure \ref{fig:korbeer}, which have only those points in common. Both $D_1$ and $D_2$ have to contain vertices other than $r_1,\ldots,r_k$ because otherwise $A$ or $B$ would not be cycle-free. Only one of them, however, can contain the root $r_0$. Now the contradiction is apparent from the fact that no directed path in $A$ can pass from $D_1$ to $D_2$ and no directed path in $B$ can pass from $D_2$ to $D_1$.
\begin{figure}[htbp]
\labellist
\footnotesize
\pinlabel $D_1$ at -100 300
\pinlabel $D_2$ at 320 260
\pinlabel $C_{v_1}$ at 100 260
\pinlabel $r_1$ at 110 350
\pinlabel $\alpha_1$ at 190 340
\pinlabel $\beta_1$ at 100 430
\pinlabel $C_{v_2}$ at 210 430
\pinlabel $r_2$ at 295 430
\pinlabel $\alpha_2$ at 310 360
\pinlabel $\beta_2$ at 330 500
\pinlabel $C_{v_3}$ at 420 410
\pinlabel $r_3$ at 510 400
\pinlabel $\alpha_3$ at 520 320
\pinlabel $\beta_3$ at 570 450
\pinlabel $\alpha_{k-2}$ at 540 110
\pinlabel $\beta_{k-2}$ at 500 10
\pinlabel $C_{v_{k-1}}$ at 400 100
\pinlabel $r_{k-1}$ at 305 75
\pinlabel $\alpha_{k-1}$ at 270 160
\pinlabel $\beta_{k-1}$ at 255 15
\pinlabel $C_{v_k}$ at 180 100
\pinlabel $r_k$ at 130 160
\pinlabel $\alpha_k$ at 170 200
\pinlabel $\beta_k$ at 70 170
\endlabellist
\centering
\includegraphics[width=2in]{cycles}
\caption{A cycle of cycles in the directed graph $G^*$.}
\label{fig:korbeer}
\end{figure}
To prove that any hypertree $f\colon V_0\to\ensuremath{\mathbf N}$ can be obtained in the form $f=f_A$, we employ the following strategy. Start from an arbitrary spanning tree $\Gamma$ in $G$ that realizes $f$ and take its dual tree $\Gamma^*\subset G^*$. We are going to change $\Gamma$ step-by-step through other realizations of the same hypertree $f$ so that the dual moves closer and closer to being an arborescence. It is sufficient to keep track of the changes made to the dual tree $\Gamma^*$. Then, the condition that we preserve the hypertree translates to requiring that for each $v\in V_0$, the number of edges in $C_v\cap\Gamma^*$ stays invariant throughout the process.
It is of course crucial to describe carefully what we mean by getting closer to an arborescence. In a rooted tree, edges $\gamma$ have a well defined distance $d(\gamma)\in\ensuremath{\mathbf N}$ to the root (those adjacent to the root have $d=0$ etc.). If the tree is directed, then there is also a clear sense for each edge to be directed toward the root or away from the root. The first kind of edge will be called \emph{bad} and the second kind \emph{good}. The tree is an arborescence if and only if all of its edges are good. Let us now associate the following quantities to a finite, directed, rooted tree $(T,r)$.
\begin{itemize}
\item Let $n(T,r)$ denote the smallest value of $d$ among bad edges. (So that within a radius of $n$ from the root, the tree is an arborescence.)
\item For $1\le m\le n(T,r)$, let $\lambda_{T,r}(m)$ be the number of edges $\gamma$ with $d(\gamma)=m-1$. These values are positive. For $m>n(T,r)$, we define $\lambda_{T,r}(m)=0$.
\end{itemize}
Then, for a pair of rooted trees we write $(T_1,r_1)\prec(T_2,r_2)$ if either
\begin{enumerate}
\item\label{lexi} the sequence $\lambda_{T_1,r_1}(1),\lambda_{T_1,r_1}(2),\lambda_{T_1,r_1}(3),\ldots$ is smaller than the sequence $\lambda_{T_2,r_2}(1),\lambda_{T_2,r_2}(2),\lambda_{T_2,r_2}(3),\ldots$ in lexicographic order, or
\item\label{tuske} the two sequences coincide (implying $n(T_1,r_1)=n(T_2,r_2)=n$) but the number of bad edges in $T_1$ with $d=n$ is higher than the same count in $T_2$.
\end{enumerate}
This may sound complicated but the actual idea of the proof is very simple. If $\Gamma^*$ is not a spanning arborescence already, then it contains a bad edge $\gamma$ with $d(\gamma)=n(\Gamma^*,r_0)$. If we remove $\gamma$ from $\Gamma^*$, then the tree falls apart into a root component and a non-root component. Now $\gamma$ is part of a cycle $C_v$ which contains points from both components. Therefore $C_v$ also has an edge $\gamma'$ that goes from a point of the root component to a point of the non-root component. Let ${\Gamma^*}'$ denote the tree obtained from $\Gamma^*$ by replacing $\gamma$ with $\gamma'$.
Note that $\gamma'$ is a good edge of the new tree. The difficulty is that in what used to be the non-root component, distances to the root and relative orientations may have changed. What we do know though is that each of those edges is farther away from the root than $\gamma'$. Let us separate two cases according to whether the non-root component is reattached `close to' or `far from' the root.
\begin{enumerate}[(i)]
\item If $d(\gamma')<n(\Gamma^*,r_0)$ (where $d(\gamma')$ is of course measured in ${\Gamma^*}'$), then $(\Gamma^*,r_0)\prec({\Gamma^*}',r_0)$ by \eqref{lexi}. Indeed, with the addition of $\gamma'$, $\lambda(d(\gamma')+1)$ went up by $1$ whereas the $\lambda$ values before it stayed the same.
\item If $d(\gamma')\ge n(\Gamma^*,r_0)$, then $(\Gamma^*,r_0)\prec({\Gamma^*}',r_0)$ either by \eqref{lexi} (if $\gamma$ was the unique bad edge of $\Gamma^*$ with $d(\gamma)=n(\Gamma^*,r_0)$, in which case $\lambda(n(\Gamma^*,r_0)+1)$ moves from $0$ to a positive value) or by \eqref{tuske}.
\end{enumerate}
We conclude that if the dual of a realization of $f$ is not a spanning arborescence, then it is possible to change it to another tree, dual to another realization, that is larger in the sense of the linear order $\prec$. Because there are altogether finitely many spanning trees in $G^*$, it is now obvious that a spanning arborescence will be reached after finitely many improvements. Thus we find that our map $A\mapsto f_A$ is onto.
\end{proof}
Theorem \ref{thm:alexander} of course verifies Postnikov's Theorem \ref{thm:post} in the case of a plane bipartite graph $G$. Postnikov \cite[Lemma 12.6]{post} describes a sufficient condition on a set of spanning trees of $G$ in order for them to define a triangulation of the root polytope and therefore to simultaneously realize all hypertrees of the two induced hypergraphs. It is interesting to note that our set of spanning trees satisfies his condition, as follows.
\begin{all}
Let $G$ be a plane bipartite graph with (classical) dual $G^*$ as before. Let $A$ and $B$ denote two spanning arborescences of $G^*$ with respect to the same root $r$. Then there is no cycle $\alpha_1,\beta_1,\alpha_2,\beta_2,\ldots,\alpha_k,\beta_k$ in $G$ composed of edges $\alpha_1,\alpha_2,\ldots,\alpha_k$ of $A^*$ and $\beta_1,\beta_2,\ldots,\beta_k$ of $B^*$.
\end{all}
\begin{proof}
It is easy to see that edges of $A$ could only cross such a cycle in one direction and edges of $B$ could only cross it in the opposite direction. That leads to the usual contradiction of some points becoming inaccessible from the root for one of the two arborescences.
\end{proof}
Tutte's Theorem \ref{thm:ttt} says that directed graphs in a trinity have the same number of spanning arborescences, just like the two graphs in a planar dual pair have the same number of spanning trees. We may now add that the six hypergraphs contained in the trinity have the same number of hypertrees. Moreover, the following is immediate from Theorem \ref{thm:alexander}.
\begin{kov}\label{cor:rho}
The arborescence number $\rho$ associated to a trinity is also the number of hypertrees in any of the six hypergraphs contained in the trinity.
\end{kov}
Let us make one more observation. We saw how each Tutte matching contains three spanning arborescences (one for each of $G_R^*$, $G_E^*$, and $G_V^*$) and how the (classical) planar duals of these realize a hypertree in each of the six hypergraphs. This way, each hypertree turns up in relation to exactly one Tutte matching. In fact we also have the following.
\begin{all}
In any trinity, the set of the six hypertrees induced by a Tutte matching consists of three pairs of planar duals (in the sense of Definition \ref{def:hyperdual}).
\end{all}
\begin{proof}
Pick an arbitrary point of the trinity, say an emerald one, and denote it with $e$. It is matched to one adjacent white triangle (or, if the point is the root, substitute the outer white triangle here). Out of its other $|e|-1$ adjacent white triangles, $m$ are matched to red points and $|e|-1-m$ to violet points. Therefore there are $|e|-m$ red edges and $m+1$ violet edges adjacent to $e$ that appear in the spanning trees of $G_R$ and $G_V$, respectively, induced by the Tutte matching. This causes the $e$-coordinates of the corresponding hypertrees to be $|e|-m-1$ and $m$, respectively, the sum of which is indeed $|e|-1$.
\end{proof}
\subsection{Determinant formulas}
We are going to extend Berman's Theorem \ref{thm:berman} to write hypertree polytopes associated with plane hypergraphs in determinant form. Let $M$ denote the adjacency matrix with rows indexed by the non-root points and columns indexed by the non-outer white triangles. If a triangle $t_i$ is adjacent with the emerald point $e_j$ and the non-root violet point $v_k$, then at the intersection of row $v_k$ and column $t_i$, change the entry $1$ to $e_j$. (After it becomes a matrix entry, we will think of $e_j$ as an indeterminate associated with the original point.)
Call this matrix the \emph{enhanced adjacency matrix} and denote it with $M_{e\to v}$. Notice that even though $M$ contains no row indexed by a root, indeterminates belonging to roots do appear in enhanced adjacency matrices.
It would be rather pointless to write the symbol $e_j$ in the row indexed by that point because that would simply multiply the determinant by $e_j$. The little twist that we introduced, on the other hand, turns out to be quite useful.
\begin{tetel}\label{thm:det}
Let the plane bipartite graphs $G_{R}$, $G_{E}$, and $G_{V}$ be the constituents of a trinity. Fix an outer white triangle $t_0$ which is adjacent to the roots $r_0\in R$, $e_0\in E$, and $v_0\in V$ and form the enhanced adjacency matrix $M_{e\to v}$ as above. Then for the hypergraph $\mathscr H=(V,E)$ and its hypertree polytope, we have
\[Q_{\mathscr H}=\det M_{e\to v}\]
in the following sense. The determinant on the right hand side is a sum of monomials in the indeterminates $e\in E$. Either each monomial has coefficient $+1$ or each has $-1$. If we write the exponents in each monomial as a vector, the set we obtain is exactly the left hand side.
\end{tetel}
As a special case, we may use this theorem to write the spanning tree polytope of a plane graph as a determinant, too.
\begin{proof}
The claim on the uniform signs of course follows from Berman's Theorem \ref{thm:berman} on the adjacency matrix $M$. There and in the proof of Theorem \ref{thm:ttt} we saw that the nonzero terms in the expansion of $\det M$, which are the Tutte matchings (of non-root points in the trinity to adjacent non-outer white triangles), are also triples of spanning arborescences in the directed graphs $G^*_{R}$, $G^*_{E}$, and $G^*_{V}$. Furthermore, all spanning arborescences from each of the three directed graphs occur as part of exactly one triple.
Then in the proof of Theorem \ref{thm:alexander} we found that the planar duals $A^*$ of the spanning arborescences $A$ of $G^*_{R}$ realize each hypertree in $Q_{\mathscr H}$ exactly once. Recall that the value of the hypertree $f_A$ at an emerald point $e_j$ is the number of adjacent (to $e_j$) edges of $A^*$ minus one. A red edge is in $A^*$ if and only if its adjacent white triangle is not assigned to its red point, i.e., if the triangle is the outer one or if it is assigned to an emerald or violet point.
If $e_j\ne e_0$ then it is not adjacent to $t_0$ and out of all white triangles adjacent to $e_j$, exactly one is assigned to an emerald point (namely, $e_j$). Thus $f_A(e_j)$ is the number of white triangles that are adjacent to $e_j$ and which are assigned to their violet points. The same holds for $e_0$ where now $t_0$ plays the role of the correction term $-1$.
Finally, we just have to check the effect of enhancing $M$ into $M_{e\to v}$ on the expansion term that corresponds to $A$. In that monomial, by definition, the indeterminate $e_j$ will appear once for each time that a (non-outer) white triangle $t_i$, which is adjacent to the point $e_j$, gets assigned to its adjacent violet point $v_k$. This completes the proof.
\end{proof}
\begin{pelda}
The hypertree polytopes of Example \ref{ex:ketto} can be recomputed as the determinants of the following two matrices $M_{e\to v}$ and $M_{v\to e}$ (warning: the labels of the points have changed, cf.\ Figure \ref{fig:trinity}):
\[
\bordermatrix
{~&\text{\footnotesize $t_1$}&\text{\footnotesize $t_2$}&\text{\footnotesize $t_3$}&\text{\footnotesize $t_4$}&\text{\footnotesize $t_5$}&\text{\footnotesize $t_6$}&\text{\footnotesize $t_7$}&\text{\footnotesize $t_8$}\cr
\text{\footnotesize $r_1$}&0&0&0&0&1&0&1&0\cr
\text{\footnotesize $r_2$}&1&1&0&0&0&0&0&0\cr
\text{\footnotesize $r_3$}&0&0&1&1&0&0&0&0\cr
\text{\footnotesize $e_1$}&0&0&0&1&0&0&1&1\cr
\text{\footnotesize $e_2$}&1&0&0&0&1&1&0&0\cr
\text{\footnotesize $v_1$}&0&e_0&0&e_1&e_2&0&0&0\cr
\text{\footnotesize $v_2$}&0&0&e_0&0&0&0&0&e_1\cr
\text{\footnotesize $v_3$}&0&0&0&0&0&e_2&e_1&0\cr};
\bordermatrix
{~&\text{\footnotesize $t_1$}&\text{\footnotesize $t_2$}&\text{\footnotesize $t_3$}&\text{\footnotesize $t_4$}&\text{\footnotesize $t_5$}&\text{\footnotesize $t_6$}&\text{\footnotesize $t_7$}&\text{\footnotesize $t_8$}\cr
\text{\footnotesize $r_1$}&0&0&0&0&1&0&1&0\cr
\text{\footnotesize $r_2$}&1&1&0&0&0&0&0&0\cr
\text{\footnotesize $r_3$}&0&0&1&1&0&0&0&0\cr
\text{\footnotesize $e_1$}&0&0&0&v_1&0&0&v_3&v_2\cr
\text{\footnotesize $e_2$}&v_0&0&0&0&v_1&v_3&0&0\cr
\text{\footnotesize $v_1$}&0&1&0&1&1&0&0&0\cr
\text{\footnotesize $v_2$}&0&0&1&0&0&0&0&1\cr
\text{\footnotesize $v_3$}&0&0&0&0&0&1&1&0\cr}.
\]
The first is equal to $e_0^2 e_1 + e_0 e_1^2 + e_0^2 e_2 + e_0 e_1 e_2 + e_1^2 e_2 + e_0 e_2^2 + e_1 e_2^2$, and the second one is $v_0 v_1 + v_1^2 + v_0 v_2 + v_1 v_2 + v_0 v_3 + v_1 v_3 + v_2 v_3$. Observe that the sequences of exponents ($(2,1,0)$ etc.\ and $(1,1,0,0)$ etc., respectively) are exactly the hypertrees that we saw earlier.
\end{pelda}
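The first determinant in the example above can be verified by brute force. The following standalone Python sketch (our own illustration, not part of the original text; the encoding of matrix entries as variable-name strings and of monomials as exponent triples is an assumption of the snippet) expands $\det M_{e\to v}$ over all $8!$ permutations:

```python
from itertools import permutations

# Matrix M_{e->v} from the example; entries are 0, 1, or a formal
# variable 'e0', 'e1', 'e2' (rows: r1, r2, r3, e1, e2, v1, v2, v3).
M = [
    [0, 0, 0, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 1, 1],
    [1, 0, 0, 0, 1, 1, 0, 0],
    [0, 'e0', 0, 'e1', 'e2', 0, 0, 0],
    [0, 0, 'e0', 0, 0, 0, 0, 'e1'],
    [0, 0, 0, 0, 0, 'e2', 'e1', 0],
]

VAR = {'e0': 0, 'e1': 1, 'e2': 2}

def determinant(matrix):
    """Leibniz expansion of the determinant; the result is a dict
    mapping exponent triples (a, b, c) of e0^a e1^b e2^c to integers."""
    n = len(matrix)
    poly = {}
    for sigma in permutations(range(n)):
        # Sign of the permutation via its inversion count.
        sign = (-1) ** sum(sigma[i] > sigma[j]
                           for i in range(n) for j in range(i + 1, n))
        expo = [0, 0, 0]
        zero = False
        for i in range(n):
            entry = matrix[i][sigma[i]]
            if entry == 0:
                zero = True
                break
            if entry != 1:
                expo[VAR[entry]] += 1
        if not zero:
            key = tuple(expo)
            poly[key] = poly.get(key, 0) + sign
            if poly[key] == 0:
                del poly[key]
    return poly
```

Running `determinant(M)` yields exactly the seven monomials $e_0^2e_1$, $e_0e_1^2$, $e_0^2e_2$, $e_0e_1e_2$, $e_1^2e_2$, $e_0e_2^2$, $e_1e_2^2$, each with coefficient $1$, matching the polynomial above; the exponent sequences are the hypertrees.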
Of course, as we have already done in the example above, we may use Theorem \ref{thm:det} to compute any of the six hypertree polytopes associated to a trinity. This leads to another proof of Theorem \ref{thm:sikdualis} in terms of manipulating determinants. It is quick, but probably much less instructive than the proof given in Section \ref{sec:dual}.
Indeed, starting from $M_{e\to v}$, pull out $e_j$ from any column that belongs to a white triangle adjacent to $e_j$. Do this for all emerald points $e_j$. Then, for the non-root emerald points, pull out $e_j^{-1}$ from the row indexed by $e_j$. The result is a monomial times the enhanced adjacency matrix $M_{e\to r}$, but with $e_j^{-1}$ written wherever $e_j$ should appear. Therefore the determinant of the matrix is $-Q_{\mathscr H^*}$.
If we examine the monomial factor, we see that the exponent of $e_j$ in it is exactly $|e_j|-1$. Here $|e_j|$ denotes the size of $e_j$ as a hyperedge, which is the same in $\mathscr H$ as in $\mathscr H^*$. It can also be described as the number of violet and red points connected to $e_j$ by red, respectively violet, edges of the trinity, as well as the number of white triangles adjacent to $e_j$. The claim is obvious for the non-root emerald points, and the exponent of $e_0$ is $|e_0|-1$ because $e_0$ is adjacent to the outer white triangle $t_0$, which has no column in the matrix.
\subsection{Summary and two final notes}
Let us summarize our findings on hypertree polytopes and interior and exterior polynomials associated to the six hypergraphs contained in a trinity. See Figure \ref{fig:hetszog}. The six polytopes form three centrally symmetric pairs by Theorem \ref{thm:sikdualis}, i.e., there are only three `shapes' associated to the trinity. Let us denote these, indicating the respective sets of hyperedges, with $Q_R$, $Q_E$, and $Q_V$. These polytopes have different dimensions but by Corollary \ref{cor:rho} each contains the same number of integer lattice points, namely $\rho$, where $\rho$ is the arborescence number of the trinity.
A priori, there are twelve interior and exterior polynomials associated to the six hypergraphs. But again by Theorem \ref{thm:sikdualis}, that number is reduced to six, as indicated in Figure \ref{fig:hetszog}. These polynomials all have non-negative integer coefficients with their sum equal to $\rho$ in each case. Furthermore, if Conjecture \ref{conj:dual} holds, then $I=I'$, $X=X'$, and $Y=Y'$, so there are in fact only three different polynomials.
\begin{figure}[htbp]
\labellist
\footnotesize
\pinlabel hypergraph at 370 580
\pinlabel {interior polynomial} at 370 636
\pinlabel {exterior polynomial} at 370 692
\pinlabel polytope at 370 524
\pinlabel $V,E$ at 526 497
\pinlabel $I$ at 574 533
\pinlabel $X$ at 622 569
\pinlabel $R,E$ at 565 335
\pinlabel $X$ at 617 323
\pinlabel $I$ at 669 311
\pinlabel $E,R$ at 459 202
\pinlabel $X'$ at 484 152
\pinlabel $Y$ at 509 102
\pinlabel $V,R$ at 281 202
\pinlabel $Y$ at 256 152
\pinlabel $X'$ at 231 102
\pinlabel $R,V$ at 175 335
\pinlabel $Y'$ at 123 323
\pinlabel $I'$ at 71 311
\pinlabel $E,V$ at 214 497
\pinlabel $I'$ at 166 533
\pinlabel $Y'$ at 118 569
\pinlabel $Q$ at 370 380
\pinlabel $Q_E$ at 505 410
\pinlabel $Q_V$ at 235 410
\pinlabel $Q_R$ at 370 237
\endlabellist
\centering
\includegraphics[width=3.6in]{hetszog}
\caption{Relations between the hypertree polytopes, interior, and exterior polynomials of the six hypergraphs contained in a trinity. If Conjecture \ref{conj:dual} is true, then $I=I'$, $X=X'$, and $Y=Y'$.}
\label{fig:hetszog}
\end{figure}
Our first `final note' is that the polytopes $Q_{R}$, $Q_{E}$, and $Q_{V}$ are all projections of a single set $Q$ of lattice points. This can be defined by superimposing $M_{e\to v}$, $M_{v\to r}$, and $M_{r\to e}$ to obtain the matrix $M_{e\to v\to r}$ and taking its determinant. In our running example, the result is
\[
M_{e\to v\to r}=
\bordermatrix
{~&\text{\footnotesize $t_1$}&\text{\footnotesize $t_2$}&\text{\footnotesize $t_3$}&\text{\footnotesize $t_4$}&\text{\footnotesize $t_5$}&\text{\footnotesize $t_6$}&\text{\footnotesize $t_7$}&\text{\footnotesize $t_8$}\cr
\text{\footnotesize $r_1$}&0&0&0&0&v_1&0&v_3&0\cr
\text{\footnotesize $r_2$}&v_0&v_1&0&0&0&0&0&0\cr
\text{\footnotesize $r_3$}&0&0&v_2&v_1&0&0&0&0\cr
\text{\footnotesize $e_1$}&0&0&0&r_3&0&0&r_1&r_0\cr
\text{\footnotesize $e_2$}&r_2&0&0&0&r_1&r_0&0&0\cr
\text{\footnotesize $v_1$}&0&e_0&0&e_1&e_2&0&0&0\cr
\text{\footnotesize $v_2$}&0&0&e_0&0&0&0&0&e_1\cr
\text{\footnotesize $v_3$}&0&0&0&0&0&e_2&e_1&0\cr},
\]
with a determinant of $e_0^2 e_1 r_0^2 v_0 v_1^2 + e_0 e_1^2 r_0 r_3 v_0 v_1 v_2 +
e_1^2 e_2 r_1 r_2 v_1^2 v_2 + e_0^2 e_2 r_0 r_1 v_0 v_1 v_3 +
e_0 e_2^2 r_0 r_2 v_1^2 v_3 + e_0 e_1 e_2 r_1 r_3 v_0 v_2 v_3 +
e_1 e_2^2 r_2 r_3 v_1 v_2 v_3$. Obviously, $Q_E=\det M_{e\to v}$, $Q_V=\det M_{v\to r}$, and $Q_R=\det M_{r\to e}$ are all obtained from $\det M_{e\to v\to r}$ by substituting $1$ for the unneeded variables. This corresponds to projections of the associated sets of lattice points, which we indicated with the three arrows in Figure \ref{fig:hetszog}. In particular, all four determinants have the same number of terms, i.e., the associated point sets contain the same number of lattice points, which is the arborescence number of the trinity. It is not yet clear, however, whether $Q=\det M_{e\to v\to r}$ is itself a lattice polytope.
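The projection claim can again be checked by brute force. The following standalone sketch (ours, not from the original text; the string encoding of the variables is an assumption of the snippet) expands $\det M_{e\to v\to r}$ over all permutations and then forgets all but the emerald variables:

```python
from itertools import permutations

# Matrix M_{e->v->r}; entries are 0, 1, or a formal variable name
# (rows: r1, r2, r3, e1, e2, v1, v2, v3).
M = [
    [0, 0, 0, 0, 'v1', 0, 'v3', 0],
    ['v0', 'v1', 0, 0, 0, 0, 0, 0],
    [0, 0, 'v2', 'v1', 0, 0, 0, 0],
    [0, 0, 0, 'r3', 0, 0, 'r1', 'r0'],
    ['r2', 0, 0, 0, 'r1', 'r0', 0, 0],
    [0, 'e0', 0, 'e1', 'e2', 0, 0, 0],
    [0, 0, 'e0', 0, 0, 0, 0, 'e1'],
    [0, 0, 0, 0, 0, 'e2', 'e1', 0],
]

def determinant(matrix):
    """Leibniz expansion; monomials are sorted tuples of variable names."""
    n = len(matrix)
    poly = {}
    for sigma in permutations(range(n)):
        sign = (-1) ** sum(sigma[i] > sigma[j]
                           for i in range(n) for j in range(i + 1, n))
        mono = []
        zero = False
        for i in range(n):
            entry = matrix[i][sigma[i]]
            if entry == 0:
                zero = True
                break
            if entry != 1:
                mono.append(entry)
        if not zero:
            key = tuple(sorted(mono))
            poly[key] = poly.get(key, 0) + sign
            if poly[key] == 0:
                del poly[key]
    return poly

def project(poly, prefix):
    """Substitute 1 for every variable NOT starting with `prefix`."""
    out = {}
    for mono, c in poly.items():
        key = tuple(v for v in mono if v.startswith(prefix))
        out[key] = out.get(key, 0) + c
    return out
```

The full determinant has seven monomials, and `project(poly, 'e')` (i.e., setting the $r$- and $v$-variables to $1$) recovers the seven emerald monomials $e_0^2e_1,\dots,e_1e_2^2$ of $\det M_{e\to v}$, one per arborescence.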
Finally, there exists a curious relationship between our results and an invariant introduced by Jaeger \cite{jaegerpoly}. We noted in subsection \ref{ssec:basic} that the planar dual of a trinity is a plane bipartite cubic graph. Jaeger associated a one-variable polynomial $S(u)$ (with non-negative integer coefficients) to such objects. His definition does not make use of the three-coloring of Proposition \ref{pro:pbcg} and indeed, his polynomial is different from any of the six (three) that we introduced. (For example, the trinity of Figure \ref{fig:trinity} has $S(u)=6u^3+u^5$.) The sum of the coefficients is, however, the same.
\begin{all}
Let $T$ be a trinity. Then $S(T^*,1)=\rho(T)$, the arborescence number of $T$.
\end{all}
\begin{proof}
As the definition of $S$ is based on a recursion, it is natural to prove our claim by induction. Jaeger sets his initial condition $S(u)=1$ at a `free loop,' but an equivalent notion results if we start by setting $S(u)=u$ for the theta graph (which has two vertices and three edges connecting them). The theta graph is dual to the trinity formed by one black and one white triangle with a common boundary.
Each of the three directed graphs associated to the latter has a single vertex and one loop edge and, therefore, only one arborescence.
\begin{figure}[htbp]
\labellist
\small
\pinlabel $T$ at -40 400
\pinlabel $T_0$ at -40 30
\pinlabel $T$ at 720 400
\pinlabel $T_1$ at 670 10
\pinlabel $T_2$ at 1110 10
\pinlabel $e$ at 880 300
\pinlabel $v'$ at 755 260
\pinlabel $v'$ at 1000 30
\pinlabel $v''$ at 1020 440
\pinlabel $v''$ at 1255 205
\pinlabel $v$ at 610 100
\endlabellist
\centering
\includegraphics[width=3in]{collapse}
\caption{Collapsing moves used in the definition of Jaeger's polynomial.}
\label{fig:collapse}
\end{figure}
The invariant $S$ obeys two kinds of recursive `collapsing' rules. Figure \ref{fig:collapse} shows the associated pictures, drawn not for the plane bipartite cubic graphs but for their dual trinities (with an arbitrarily chosen local coloring). The part of the trinity outside of the emerald contour does not change under the collapsing operations. (The red and violet points in the pictures have other neighbors that are not shown.) It is easy to show that if a trinity contains at least four triangles, then at least one of the moves can always be carried out.
The rules themselves are $S(T^*,u)=uS(T_0^*,u)$ for the collapsing move on the left and $S(T^*,u)=S(T_1^*,u)+S(T_2^*,u)$ for the one on the right. In the first case, $\rho(T)=\rho(T_0)$ by an application of Corollary \ref{cor:rho} and either Lemma \ref{lem:farkinca} (if we choose to work with the red or the violet graph) or the observation that a double edge has been removed from the emerald graph.
In the second case, separate the points of $T$ by color into the usual sets $R$, $E$, and $V$. Notice that the set $V_1$ of violet points in $T_1$ is obtained from $V$ by identifying $v'$ and $v''$ with a single point $v$. The set of emerald points in both $T_1$ and $T_2$ is $E\setminus\{e\}$. Furthermore, the hypergraph $(V_1,E\setminus\{e\})$ contained in $T_1$ and the hypergraph $(V,E\setminus\{e\})$ contained in $T_2$ are the contraction and deletion, respectively, of the hyperedge $e$ from the hypergraph $(V,E)$ contained in $T$. Therefore we get the desired conclusion from Corollary \ref{cor:rho} and Proposition \ref{pro:delcontr}.
\end{proof}
\section{Introduction}\label{sec:1}
Superconductors described by multi-component order parameters have drawn a lot of attention over the last few decades\cite{tanaka_multicomponent_2015-1, lin_ground_2014,milosper}. They exhibit many interesting properties that are not possible in the conventional single-component superconductors, such as collective Leggett modes\cite{lin_massless_2012}, fractional vortices\cite{babaev_vortices_2002}, skyrmionic knotted solitons\cite{babaev_hidden_2002}, phase solitons\cite{tanaka_soliton_2001, babaev_hidden_2002, lin_phase_2012}, and hidden criticality \cite{komendova_twoband_2012}, to name a few.
Of particular interest are the spin-triplet chiral $p$-wave superconductors with the order parameter of type $\Delta_{\pm}(p) \sim p_x \pm i p_y$\cite{kallin_chiral_2012, kallin_chiral_2016}. Such symmetries can be realized in the A phase of superfluid $^3$He\cite{volovik_universe_2009} and may be attributed to the layered ruthenate superconductor Sr$_2$RuO$_4$\cite{maeno_superconductivity_1994}. Due to the extra degree of freedom of spin-triplet states, the order parameter is a multi-component one. In addition, such complex structure of the order parameter breaks the time-reversal symmetry (TRS), indicating that Cooper pairs carry internal angular momentum. For this reason, chiral $p$-wave superconductors can support rich topological defect states with exotic physical properties. For example, a vortex exhibits different properties depending on whether its vorticity is parallel or anti-parallel to the internal angular momentum of the Cooper pairs\cite{matsumoto_chiral_1999, sauls_vortices_2009, yokoyama_chirality_2008}. Further, one finds that domains of different chiralities, namely $p_x + i p_y$ and $p_x - i p_y$, are degenerate in energy and therefore can coexist in the ground state, separated by a domain wall - another type of topological defect unique to $p$-wave superconductivity\cite{matsumoto_quasiparticle_1999, serban_domain_2010, becerra_multichiral_2016-1}. Such domain wall is attractive for half-quantum vortices\cite{garaud_skyrmionic_2012}, and when enclosed it can form a so-called coreless vortex, with skyrmionic topological properties\cite{fernandez_becerra_vortical_2016, zhang_electronic_2016, garaud_properties_2015}.
Another important aspect of chiral $p$-wave superconductivity is its non-trivial topological order\cite{kallin_chiral_2016}, analogous to that of the Moore-Read state for quantum Hall systems at $5/2$ filling\cite{read_paired_2000}. A consequence of that topological order is the existence of Majorana zero modes\cite{qi_topological_2011-1}, which obey non-Abelian statistics and hold promise for realization of a topological quantum computer. A well-known example is that the half-quantum vortex in a chiral $p$-wave superconductor supports a single Majorana zero mode at its core.\cite{ivanov_non-abelian_2001} However, the half-quantum vortex is thermodynamically unfavorable because its energy diverges logarithmically with the size of the system due to unscreened spin currents. Therefore, one possible way to stabilize such half-quantum vortices is to employ mesoscopic confinement. The evidences for the existence of half-quantum vortices in a mesoscopic Sr$_2$RuO$_4$ ring\cite{jang_observation_2011}, as well as in trapped superfluid $^3$He\cite{autti_observation_2016}, have recently been reported.
Mesoscopic superconducting systems, whose dimensions are comparable to the penetration depth and the coherence length, often serve as a platform to investigate the fundamental physics of topological defect states. Vortex states in confined conventional superconductors have been well studied over the past few decades\cite{geim_phase_1997, schweigert_vortex_1998, xu_magnetic_2008, berdiyorov_confinement_2009, zhang_unconventional_2012}, with emphasis on their dependence on the size and geometry of the sample. For example, a coalescence of a multi-vortex state into a giant vortex (one vortex but carrying multiple flux quanta) under influence of mesoscopic confinement was predicted and observed experimentally\cite{schweigert_vortex_1998, kanda_experimental_2004, cren_vortex_2011}. In multi-component superconductivity, mesoscopic samples are expected to stabilize fractional vortices\cite{chibotaru_thermodynamically_2007}, which are thermodynamically unfavorable in bulk samples. However, the emergent states in mesoscopic chiral $p$-wave superconductors are still under debate. For example, Ref.~\onlinecite{huo_field-induced_2012} and Ref. \onlinecite{huang_phase_2012} show contradictory results for small mesoscopic $p$-wave disks, even in the absence of external magnetic field. For that reason, in this paper we study possible states in small mesoscopic chiral $p$-wave superconducting disks by solving the microscopic Bogoliubov-de Gennes (BdG) equations self-consistently, in two dimensions, without any unnecessary assumption. We find that, in contrast to the vortex states always being the ground state in ultimately small disks, domain-wall states become the ground state in larger mesoscopic samples. These domain walls appear upon splitting of the composite vortex, which has coinciding vortex (or anti-vortex) cores in each component of the order parameter. 
In terms of symmetry breaking, this phase transition is similar to the one between giant vortex states and multi-vortex states in $s$-wave superconductors. These novel states made of domain walls and their interplay with vortices in the mesoscopic limit are unique to $p$-wave order, and therefore relevant to several recently realized systems with $p$-wave-like topological superconductivity\cite{fu_superconducting_2008, mourik_signatures_2012, sau_generic_2010, bernardo_p-wave_2017}. Arguably, our results are most relevant to the nanoscale Pb/Co/Si(111) system of Ref. \onlinecite{menard_two-dimensional_2016}, in which the chiral edge modes were observed and where the dependence of the ground state on the size of the system and applied magnetic field can be directly explored, with an eye on stabilization and observation of the Majorana bound states within the emergent topological defects.
The paper is organized as follows. In Section \ref{sec:2} we introduce our theoretical (Bogoliubov-de Gennes) formalism for $p$-wave superconductors, allowing for non-cylindrically symmetric states. In Section \ref{sec:3} we present the results of our simulations, reporting the equilibrium phase diagram of emergent topological-defect states as a function of the applied field and the size of the mesoscopic $p$-wave system, with thorough investigation of all found states and transitions between them. Our findings are summarized in Section \ref{sec:4}.
\section{Theoretical formalism}\label{sec:2}
The order parameter with chiral $p$-wave pairing symmetry can be expressed as
\begin{equation}\label{Eq:OPvpm}
\mathbf{\Delta}(\mathbf{r},\mathbf{k})=\Delta_+(\mathbf{r})Y_+(\mathbf{k})+\Delta_-(\mathbf{r})Y_-(\mathbf{k}).
\end{equation}
Here $\Delta_{\pm}(\mathbf{r})$ are the spatial $p_x \pm ip_y$-wave order parameters and $Y_{\pm}(\mathbf{k})=(k_x \pm ik_y)/k_F$ are the pairing functions in relative momentum space. The spinless BdG equations are written as \cite{matsumoto_vortex_2001}:
\begin{equation}\label{Eq:BdG}
\begin{bmatrix}
H_e(\mathbf{r}) & \Pi(\mathbf{r}) \\
-\Pi^*(\mathbf{r}) & -H_e^*(\mathbf{r})
\end{bmatrix}
\begin{bmatrix}
u_n(\mathbf{r}) \\
v_n(\mathbf{r})
\end{bmatrix}
= E_n
\begin{bmatrix}
u_n(\mathbf{r}) \\
v_n(\mathbf{r})
\end{bmatrix},
\end{equation}
where
\begin{equation}\label{Eq:He}
H_e(\mathbf{r})= \frac{1}{2m_e}[\frac{\hbar}{i}\nabla-\frac{e}{c}\mathbf{A}(\mathbf{r})]^2-E_F
\end{equation}
is the single-particle Hamiltonian, with $m_e$ the electron mass, $E_F$ the Fermi energy and $\mathbf{A}(\mathbf{r})$ the vector potential. We use the gauge $\nabla \cdot \mathbf{A} = 0$. For simplicity, we consider a cylindrical two-dimensional Fermi surface. In addition, the contribution of the supercurrent to the total magnetic field can be neglected in thin superconducting samples\cite{zhang_electronic_2016}, resulting in a vector potential of the form $\mathbf{A}(\mathbf{r}) = \frac{1}{2}H_0 r \mathbf{e}_{\theta}$, with the applied magnetic field $\mathbf{H} = H_0 \mathbf{e}_{z}$. The term $\Pi(\mathbf{r})$ is written as
\begin{equation}\label{Eq:BigPi}
\Pi(\mathbf{r}) = -\frac{i}{k_F} \sum_{\pm} [\Delta_{\pm}\square_{\pm} + \frac{1}{2}(\square_{\pm}\Delta_{\pm})],
\end{equation}
with $\square_{\pm}=e^{\pm i \theta} (\partial_r \pm \frac{i}{r}\partial_\theta )$ in cylindrical coordinates. Here $u_n(\mathbf{r})$ and $v_n(\mathbf{r})$ are the electron- and hole-like quasiparticle eigenfunctions, obeying the normalization condition
\begin{equation}\label{Eq:normuv}
\int |u_n(\mathbf{r})|^2+|v_n(\mathbf{r})|^2 d\mathbf{r}=1,
\end{equation}
and $E_n$ are the corresponding quasiparticle eigen-energies. The system is considered to be a disk of radius $R$; therefore, the boundary conditions for the wavefunctions are $u_n(r=R)=0$ and $v_n(r=R)=0$. The order parameters, $\Delta_{\pm}(\mathbf{r})$, satisfy the self-consistent gap equations
\begin{equation}\label{Eq:DxDy}
\begin{split}
\Delta_{\pm}(\mathbf{r}) = &-i\frac{g}{2k_F}\sum_{E_n<\hbar \omega_D} [v_n^*(\mathbf{r})\square_{\mp}u_n(\mathbf{r})- \\ & u_n(\mathbf{r})\square_{\mp}v_n^*(\mathbf{r})]\times [1-2f(E_n)],
\end{split}
\end{equation}
where $k_F=\sqrt{2m_eE_F/\hbar^2}$ is the Fermi wave number, $g$ is the superconducting coupling strength, and $f(E_n)=[1+\exp(E_n/k_B T)]^{-1}$ is the Fermi-Dirac distribution function. The summation in Eq.~(\ref{Eq:DxDy}) runs over all quasiparticle states with energies within the Debye window, $\hbar \omega_D$. Due to the $p_x \pm ip_y$ symmetry, the angular momentum can take the values $\pm1$, and the phase winding numbers $L_{\pm}$ of $\Delta_{\pm}$ always satisfy the relation $L_-=L_++2$.
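For numerical work it is convenient to note (our remark, not part of the original derivation) that the thermal occupation factor in the gap equation can be rewritten using a standard identity:
\begin{equation*}
1-2f(E_n)=1-\frac{2}{1+e^{E_n/k_BT}}=\frac{e^{E_n/k_BT}-1}{e^{E_n/k_BT}+1}=\tanh\!\left(\frac{E_n}{2k_BT}\right),
\end{equation*}
which avoids evaluating the Fermi-Dirac function explicitly and is numerically stable for large $E_n/k_BT$.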
To solve the previous equations, we expand the quasiparticle wavefunctions, $u_n(\mathbf{r})$ and $v_n(\mathbf{r})$, in terms of a complete orthonormal basis set:
\begin{equation}\label{Eq:Bessel1}
\begin{pmatrix}
u_n(\mathbf{r})\\
v_n(\mathbf{r})
\end{pmatrix}
=
\sum_{\mu j}
\begin{pmatrix}
c^n_{\mu j} \\
d^n_{\mu j}
\end{pmatrix}
\varphi_{j\mu}(r,\theta),
\end{equation}
where $c^n_{\mu j}$ and $d^n_{\mu j}$ are coefficients, $\mu \in \mathbb{Z}$ are angular quantum numbers corresponding to the angular momentum operator, and the basis functions
\begin{equation}\label{Eq:Bessel2}
\varphi_{j\mu}(r)=\frac{\sqrt{2}}{RJ_{\mu+1}(\alpha_{j\mu})}J_{\mu}(\alpha_{j\mu}\frac{r}{R}) \frac{e^{i \mu \theta}}{\sqrt{2\pi}},
\end{equation}
with $J_{\mu}$ the $\mu$th Bessel function and $\alpha_{j\mu}$ the $j$th zero of $J_{\mu}$. The BdG equations can now be reduced to a matrix eigenvalue problem. Here, we do not impose cylindrical symmetry on the order parameters; therefore, $\Delta_{\pm}$ have the general form $\Delta_{\pm}(\mathbf{r}) = \sum_m \Delta^{(m)}_{\pm}(r)e^{im\theta}$.
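As a concrete illustration of this basis (a standalone numerical sketch of ours, using only the Python standard library; function names and tolerances are not from the original), the normalization of $\varphi_{j\mu}$ over the disk can be checked directly, using the integral representation $J_\mu(x)=\frac{1}{\pi}\int_0^\pi\cos(\mu\tau-x\sin\tau)\,d\tau$ valid for integer $\mu$:

```python
import math

def bessel_j(mu, x, panels=400):
    """Integer-order Bessel function J_mu(x) via its integral
    representation, evaluated with composite Simpson quadrature."""
    h = math.pi / panels
    s = 0.0
    for k in range(panels + 1):
        t = k * h
        w = 1 if k in (0, panels) else (4 if k % 2 else 2)
        s += w * math.cos(mu * t - x * math.sin(t))
    return s * h / (3.0 * math.pi)

def bessel_zero(mu, lo, hi, iters=80):
    """Bisection for a zero of J_mu bracketed by [lo, hi]."""
    flo = bessel_j(mu, lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = bessel_j(mu, mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

def basis_norm(mu, alpha, R=1.0, steps=2000):
    """Integral of |phi_{j mu}|^2 over the disk of radius R; the angular
    factor e^{i mu theta}/sqrt(2 pi) already integrates to 1, so only
    the radial Simpson quadrature of the Bessel part remains."""
    pref = math.sqrt(2.0) / (R * bessel_j(mu + 1, alpha))
    h = R / steps
    s = 0.0
    for k in range(steps + 1):
        r = k * h
        w = 1 if k in (0, steps) else (4 if k % 2 else 2)
        s += w * (pref * bessel_j(mu, alpha * r / R)) ** 2 * r
    return s * h / 3.0
```

For $\mu=0$ this reproduces the first zero $\alpha_{1,0}\approx2.4048$ and a norm of $1$ to quadrature accuracy, reflecting the identity $\int_0^1 J_\mu(\alpha x)^2\,x\,dx=J_{\mu+1}(\alpha)^2/2$, valid at any zero $\alpha$ of $J_\mu$.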
To construct the ground-state phase diagram, we calculate the free energy at each self-consistent iteration step (see Appendix~\ref{Ap:sec:FE} for more details) as
\begin{equation}\label{Eq:G_all}
\begin{split}
\mathcal{G} =& \sum_{n} \Big \{ 2E_nf_n -2 E_n \int d\mathbf{r} |v_n|^2 \\
&-\int d\mathbf{r} \Big [ u_n^* \tilde{\Pi} v_n f_n + v_n \tilde{\Pi} u_n^* (1-f_n) \Big ] \Big \} -TS.
\end{split}
\end{equation}
Here we define $\tilde{\Pi} = 2\Pi-\Pi'$, where $\Pi$ is calculated using Eq.~\eqref{Eq:BigPi} with the input order parameters $\Delta_{\pm}$, while in $\Pi'$ the order parameters $\Delta_{\pm}$ are recalculated according to Eq.~\eqref{Eq:DxDy}. Once the self-consistency loop converges, we obtain $\tilde{\Pi} = \Pi$. The magnetic energy has been neglected since we consider a thin sample. For convenience, we define the superconducting free energy density $G$ with respect to the corresponding normal-state one, i.e.,
\begin{equation}\label{Eq:G_den}
G = (\mathcal{G} - \mathcal{G}_N)/A,
\end{equation}
where $\mathcal{G}_N$ is the free energy of the normal state (i.e. $\Delta_{\pm} \equiv 0$) and $A=\pi R^2$ is the area of the disk sample. The corresponding bulk superconducting free energy density at zero temperature ($T=0$) in absence of magnetic field ($\phi=0$) is denoted as $G_0$.
Without particular loss of generality, and for numerical convenience, the parameters used in this paper are $E_F=\hbar \omega_D \approx 27 \Delta_0$, resulting in (1) $k_F\xi_0 \approx 18$, where $\xi_0=\hbar v_F/\pi\Delta_0$ is the BCS coherence length at zero temperature, with $v_F$ ($k_F$) the Fermi velocity (wave number) and $\Delta_0$ the bulk order parameter amplitude at zero temperature; and (2) $\Delta_0/k_BT_c = 1.76$ (weak-coupling regime), where $k_B$ is the Boltzmann constant. The considered temperature is $T=0.1T_c$, beyond the applicability range of the Ginzburg-Landau-based models.
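As a quick consistency check of these choices (our own arithmetic, not part of the original), note that the first relation follows directly from the definitions:
\begin{equation*}
k_F\xi_0=\frac{\hbar k_F v_F}{\pi\Delta_0}=\frac{2E_F}{\pi\Delta_0}\approx\frac{2\times27}{\pi}\approx17,
\end{equation*}
using $\hbar k_F v_F=\hbar^2k_F^2/m_e=2E_F$; to the stated level of approximation in $E_F\approx27\Delta_0$, this is consistent with the quoted $k_F\xi_0\approx18$.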
\section{Results}\label{sec:3}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig1.eps}
\caption{(Color online) The (magnetic flux, size) ground-state phase diagram for a mesoscopic $p$-wave disk. Magnetic flux $\phi$ through the sample is in units of the flux quantum $\phi_0=\frac{hc}{2e}$ and the radius $R$ of the sample is in units of the BCS coherence length $\xi_0$. In small disks, the ground states are (giant) composite vortex states with cylindrical symmetry, characterized by the sequentially increasing vorticity $L_+$. With increasing size the system undergoes symmetry-breaking phase transitions from composite vortex states to the chiral domain-wall states (except for the vortex-free state $L_+=0$). The number of chiral domain walls in a given state corresponds to $L_++2$. Dashed lines indicate the second-order phase transitions and solid lines the first-order ones.}
\label{phasediagram}
\end{figure}
In this study, we focus on small mesoscopic $p$-wave superconducting disks exposed to perpendicular magnetic field, to highlight the unique phase transitions in that regime. The ground state of the system is obtained by comparing the free energy density $G$ of all found states. The solutions are obtained by using different initial conditions for $\Delta_{\pm}(\mathbf{r})$, such as spatially randomized values (field-cooled conditions), different winding numbers, and/or different vortex configurations. We put forward our main results in Fig.~\ref{phasediagram}, summarizing the equilibrium phase diagram as a function of the radius of the sample $R$, and the magnetic flux $\phi$ threading the sample.
Contrary to what many would expect, the ground state is not vortex-free in the absence of magnetic field. As seen in Fig.~\ref{phasediagram}, the ground state at zero field is a vortex state with winding numbers $L_\pm = \mp1$ in the two components of the order parameter. The vortex-free state, corresponding to the conventional Meissner state of $s$-wave superconductors, stabilizes as the ground state only at higher magnetic field.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig2.eps}
\caption{(Color online) The superconducting free energy density $G$ of the superconducting states as a function of the sample size ($R$), in absence of magnetic field ($\phi=0$). The $L_+=-1$ domain wall (DW) state replaces the time-reversal-symmetric (TRS) $L_+=-1$ vortex state as the ground state for $R\gtrsim6\xi_0$. Note that the vortex-free state with $L_+=0$ becomes the ground state in much larger disks, while the anti-parallel vortex state with $L_+=-1$ (in which $\Delta_-$ component is nearly completely suppressed) only exists as a metastable state.}
\label{freeenergy}
\end{figure}
We have actually checked that the ground state of a bulk sample with the same parameters is the vortex-free state with $L_+ = 0$ ($L_-=L_++2$ holds in all cases, and will be omitted from here on). Why, then, is the vortex state with $L_+ = -1$ the ground state at zero field in mesoscopic samples? To explain this, we recall that chiral $p$-wave superconductors host edge states carrying an edge current\cite{suzuki_spontaneous_2016, bouhon_current_2014} due to their topological nature. When the size of the sample ($R$) decreases, the edge states overlap and their interaction increases, causing destruction of superconductivity in the vicinity of the sample center, very similar to a normal vortex core. Due to this, as seen from Fig.~\ref{freeenergy}, the Gibbs free energy density of the vortex-free state with $L_+ = 0$ increases strongly as $R$ decreases. In contrast, the vortex state with $L_+ = -1$ in ultra-small disks at zero field exhibits lower energy, and preserves time-reversal symmetry (TRS). In this case, both components of the order parameter not only satisfy cylindrical symmetry, i.e. $\Delta_{\pm} = |\Delta_{\pm}(r)|e^{iL_{\pm}\theta}$, but also have the same spatial distribution, i.e. $|\Delta_{+}(r)| = |\Delta_{-}(r)|$. In other words, there is an anti-vortex at the center of the sample in one component, and a vortex at the very same place in the other component. Their supercurrents ideally cancel each other, making this phase stable at zero field. As a result, the $L_+ = -1$ (TRS) state is fostered by the interaction between the edge states, and its free energy does not increase as fast as that of the vortex-free state as $R$ decreases. As seen from Fig.~\ref{freeenergy}, the $L_+ = -1$ (TRS) state is the ground state over a wide range of small disks ($R\lesssim6\xi_0$).
Note that the so-called anti-parallel vortex state (with $L_+ = -1$, but nearly completely suppressed $\Delta_-$ component of the order parameter) does not exist in such a small disk. Although it stabilizes in larger samples, it remains metastable (i.e. with higher energy, as shown in Fig. \ref{freeenergy}), even for non-zero external magnetic field.
When the size of the sample $R$ is increased, the confinement weakens. In this case, we find that in the $L_+ = -1$ (TRS) state the cores of the anti-vortex in $\Delta_{+}$ and the vortex in $\Delta_{-}$ separate from each other, leading to broken cylindrical symmetry. As an illustrative example, we show the spatial profile of the order parameter after the formation of such a state in Fig.~\ref{Vtx-1}(a,c,e). As seen there, $\Delta_{+}$ and $\Delta_{-}$ segregate, and $|\Delta_{+}|$ becomes mirror symmetric with $|\Delta_{-}|$, as the anti-vortex shifts to the left side of the sample while the vortex shifts to the right side [see the plots of the phase of the respective components of the order parameter in Fig.~\ref{Vtx-1}(b,d)]. This results in a clear chiral domain wall between $\Delta_{+}$ and $\Delta_{-}$, passing through the center of the sample. The total order parameter, $|\Delta|$, is suppressed at the domain wall [shown in Fig.~\ref{Vtx-1}(e)]. In addition, as a topological defect, the domain wall carries low-lying bound states crossing the Fermi energy, leading to enhanced low-energy LDOS at the domain wall\cite{zhang_electronic_2016}. Fig.~\ref{Vtx-1}(f) shows the zero-bias LDOS for the $L_+ = -1$ (DW) state in the sample with $R=13\xi_0$. The domain wall yields a maximum in LDOS near the sample center. Near the sample edge, the domain wall causes suppression of the LDOS due to the interference effect between the domain-wall bound states and the edge bound states. The latter yield enhanced LDOS everywhere else near the sample edges. With increasing external magnetic field, the domain wall shifts either to the left or to the right, depending on the polarity of the applied field.
The $L_+ = -1$ (DW) state replaces the $L_+ = -1$ (TRS) state as the ground state in larger samples, with radius beyond $\approx6\xi_0$ (as seen in Fig.~\ref{freeenergy}, $L_+ = -1$ (DW) state attains lower free energy with respect to $L_+ = -1$ (TRS) state for $R>6\xi_0$). The phase transition between $L_+ = -1$ (DW) and $L_+ = -1$ (TRS) state is of second order and therefore fully reversible, either as a function of size or as a function of applied magnetic field (corresponding to crossing the dashed line in Fig. \ref{phasediagram}).
Thus, to summarize the discussion so far, there are three stable states at zero field in a mesoscopic chiral $p$-wave superconductor, namely i) the $L_+ = -1$ (TRS) vortex state; ii) the one-domain-wall state with $L_+ = -1$; and iii) the vortex-free state with $L_+ = 0$. These states successively replace one another as the ground state of the system as the sample is made progressively larger.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig3.eps}
\caption{(Color online) Characterization of the $L_+=-1$ domain-wall state. For the sample of size $R=13\xi_0$ and zero magnetic field, we show the spatial distribution of the two components of the order parameter $|\Delta_{\pm}|$ (a,c) and their phase $\theta_{\pm}$ (b,d), respectively. Panel (e) shows the spatial distribution of the total order parameter $|\Delta|= \sqrt{|\Delta_+|^2+|\Delta_-|^2}$ and (f) is the corresponding zero-bias LDOS.}
\label{Vtx-1}
\end{figure}
With increasing magnetic field, we find a similar relationship between the vortex states and the domain-wall states for given vorticity. In the limit of very small samples, the composite (giant) vortex of successively increasing vorticity is the cylindrically-symmetric ground state of the system, in agreement with previous Ginzburg-Landau investigations\cite{huang_phase_2012}. Note, however, that this behavior is not captured in Ref.~\onlinecite{huo_field-induced_2012}, which uses a BdG formalism but presents an erroneous phase diagram. Although this is aside from the main message of our present work, we provide more details on the comparison of our results to those of Ref.~\onlinecite{huo_field-induced_2012} in Appendix~\ref{Ap:sec:Disk}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig4.eps}
\caption{(Color online) Characterization of the $L_+=1$ (three) domain-wall state, for the sample with $R=13\xi_0$ and $\phi=4.8\phi_0$. Displayed quantities are the same as in Fig.~\ref{Vtx-1}.}
\label{Vtx1}
\end{figure}
The main focus of our present investigation is the topological phase transition occurring with increasing size of mesoscopic chiral $p$-wave superconductors, during which the two components of the order parameter segregate, and the composite vortex states are replaced by domain wall states as the ground state of the system, for any applied magnetic field. For example, the vortex state with $L_+ = 1$ obeys cylindrical symmetry in ultra small samples ($R\lesssim4.5\xi_0$), but develops into the three-domain-wall state in larger samples, shown in Fig.~\ref{Vtx1}. More precisely, during this transition the giant vortex in $\Delta_-$ with winding number $L_-=3$ splits into a multi-vortex state. Meanwhile, the vortex state in $\Delta_+$ with $L_+ =1$ hosts a vortex-antivortex molecule with net vorticity 1 (in order to best complement the symmetry of $\Delta_-$, instead of a simple vortex, a giant anti-vortex with $L=-2$ is formed at the disk center surrounded by three vortices). At the domain walls, the total order parameter is depleted and LDOS exhibits zero bias peaks.
To generalize our findings to all fields: the vorticity $L_+$ increases sequentially with applied magnetic field, and the number of domain walls for a given vorticity $L_+$ equals $L_-=L_++2$. $\Delta_+$ consists of $L_++2$ spread-out vortices and a giant anti-vortex with $L=-2$ at the center, while $\Delta_-$ consists of $L_++2$ vortices. Spatial distributions of the order parameter (directly verifiable by e.g. STM) for states of higher vorticity are shown in Fig.~\ref{Vtx2t4}, together with the corresponding zero-bias LDOS, all exhibiting low-lying states inside the domain walls. In the present consideration we neglected the self-field of the superconductor, but it is easy to see that the self-field of domain-wall states would be focused at the domain walls, offering a route for direct verification by magnetic-probe microscopy\cite{khotkevych_scanning_2008, curran_search_2014}. The range of sample sizes and applied fields needed for stability of each of the domain-wall states in the ground state is shown in Fig.~\ref{phasediagram}. We note that the domain-wall states dominate over the composite vortex states as magnetic field increases, i.e., the cylindrically-symmetric states are not sufficient to describe the ground state at larger magnetic fields even in very small $p$-wave samples.
As a last remark, we notice that the behavior of the zero-bias LDOS for the $L_+ = -1$ domain-wall state is different from the other domain-wall states at higher field [cf. Fig. \ref{Vtx-1}(e) and lower panels of Fig.~\ref{Vtx2t4}]. In the $L_+ = -1$ state, the domain wall suppresses the edge states in the sample, while the domain walls in higher-vorticity states enhance the LDOS peaks at the sample edge. This is due to the fact that in the $L_+ = -1$ state the domain wall separates a vortex in $\Delta_{-}$ and an antivortex in $\Delta_{+}$, while in the other states the domain wall separates vortices in both $\Delta_{-}$ and $\Delta_{+}$. As a result, we identify two types of domain walls, exhibiting chirality-dependent effects on the edge states\footnote{For a similar polarity-dependent effect on edge states due to nearby (anti)parallel vortex, see Ref. \onlinecite{yokoyama_chirality_2008}.}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig5.eps}
\caption{(Color online) The order parameter distribution $|\Delta|$ (upper panels) and the corresponding zero-bias LDOS (lower panels) of the domain-wall states with $L_+=$2, 3, and 4 (respectively from left to right).}
\label{Vtx2t4}
\end{figure}
\section{Conclusions}\label{sec:4}
In conclusion, motivated by recent experimental efforts, we reported the ground-state phase diagram of small-to-intermediate mesoscopic chiral $p$-wave superconducting samples exposed to a perpendicular magnetic field. For this study, we employed the self-consistent Bogoliubov-de Gennes numerical formalism, where we went beyond the approximations of similar earlier works. In ultra small samples, due to strong confinement, the ground states are the (giant) composite vortex states obeying cylindrical symmetry, in agreement with earlier results in the literature\cite{huang_phase_2012} and conventional wisdom for small superconductors. However, in samples larger than a few coherence lengths, the domain-wall states replace the vortex states as the ground state of the system. These domain walls mark the segregation of the two components of the order parameter and the vortices therein, and their number increases with increasing magnetic field. We also reveal two types of domain walls in a chiral $p$-wave superconductor, distinguished by their chirality-dependent effect on the edge states in LDOS. These domain walls can be directly visualized by scanning tunneling microscopy as locally suppressed order parameter and correspondingly boosted bound states in LDOS, or by magnetic-probe microscopy as locally focused self-field of the sample. These predictions are directly relevant to recent realizations of two-dimensional topological superconductivity, as for example in atomically thin Pb/Co on Si(111), where the size of the $p$-wave system is determined by the size of the underlying Co cluster\cite{menard_two-dimensional_2016}. On theoretical grounds, our results translate to other superconducting systems with a multicomponent order parameter, where similar topological transitions from overlapping to segregated components can be expected and the emergent physics can be significantly richer than in standard single-component superconductors.
\section{Introduction}
The Higgs boson discovered at the LHC~\cite{Aad:2012tfa,Chatrchyan:2012ufa} beautifully fits into the Standard Model (SM) predictions so far~\cite{CMS:yva}. The determination of its mass~\cite{Agashe:2014kda}
\al{
M_H
&= 125.7\pm0.4\ensuremath{\,\text{GeV} }
}
completes the list of the SM parameters, among which the ones in the Higgs potential,
\al{
V &= m^2\ab{H}^2+\lambda\ab{H}^4,
}
have turned out to be $m^2\sim-\paren{90\ensuremath{\,\text{GeV} }}^2$ and $\lambda\simeq0.13$, depending on the precise values of the top and Higgs masses; see e.g.\ Ref.~\cite{Buttazzo:2013uya}.
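As a quick cross-check of these numbers, the tree-level relations $\lambda=M_H^2/2v^2$ and $m^2=-M_H^2/2$, with $v\simeq246\ensuremath{\,\text{GeV} }$, already reproduce the quoted values; the following sketch deliberately ignores the running effects that Ref.~\cite{Buttazzo:2013uya} treats carefully:

```python
import math

# Tree-level Standard Model relations (a rough sketch; running effects,
# which shift lambda at high scales, are ignored here)
v = 246.22    # GeV, Higgs vacuum expectation value
M_H = 125.7   # GeV, measured Higgs mass

lam = M_H**2 / (2 * v**2)   # quartic coupling in V = m^2|H|^2 + lambda|H|^4
m2 = -M_H**2 / 2            # mass-squared parameter of the potential

print(f"lambda ~ {lam:.3f}")                    # -> lambda ~ 0.130
print(f"m^2 ~ -({math.sqrt(-m2):.0f} GeV)^2")   # -> m^2 ~ -(89 GeV)^2
```

Both numbers agree with the values quoted above.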
We have not seen any hint of a new physics beyond the SM at the LHC, and it is important to guess at what scale it appears, as we know for sure that it must be somewhere in order to account for {\color{black} the tiny neutrino masses, dark matter, baryogenesis, inflation, etc.} In this work, we assume that the Higgs sector is not altered up to a very high scale,\footnote{
See e.g.\ Refs.~\cite{Haba:2013lga,Haba:2014zda,Hamada:2014xka,Haba:2014zja,Haba:2014sia,Bhattacharya:2014gva,Kawana:2014zxa} for a possible minimal extension of the SM with {\color{black} the dark matter and right-handed neutrinos}.
}
in accordance with the following indications:
The renormalization group (RG) running of the quartic coupling $\lambda$ revealed that it takes the minimum value at around the Planck scale $\sim10^{18}\ensuremath{\,\text{GeV} }$ and that the minimum value can be zero depending on the precise value of the top quark mass~\cite{Holthausen:2011aa,Bezrukov:2012sa,Degrassi:2012ry,Alekhin:2012py,Masina:2012tz,Hamada:2012bp,Jegerlehner:2013cta,Jegerlehner:2013nna,Hamada:2013cta,Buttazzo:2013uya,Branchina:2013jra,Kobakhidze:2014xda,Spencer-Smith:2014woa,Branchina:2014usa,Branchina:2014rva,Jegerlehner:2014xxa}.
We have also found that the bare Higgs mass can vanish at the Planck scale as well~\cite{Hamada:2012bp,Jegerlehner:2013cta,Jegerlehner:2013nna,Bian:2013xra,Hamada:2013cta,Masina:2013wja,Alsarhi:1991ji,Jones:2013aua}.\footnote{
See also Refs.~\cite{Jegerlehner:2014mua,Cherchiglia:2014gna,Bai:2014lea} for discussion of quadratic divergences.
}
That is, the Veltman condition~\cite{Veltman:1980mj} can be met at the Planck scale. In fact, Veltman speculates, {\it``This mass-relation, implying a certain cancellation between bosonic and fermionic effects, would in this view be due to an underlying supersymmetry.''} To summarize, it turned out that there is a triple coincidence: $\lambda$, its running, and the bare Higgs mass can all be accidentally small at around the Planck scale.
This is a direct hint for Planck scale physics in the context of superstring theory.
The vanishing bare Higgs mass implies that the supersymmetry is restored at the Planck scale and that the Higgs field resides in a massless string state.
{\color{black} The} smallness of {\color{black} both} $\lambda$ and its beta function
{\color{black} is consistent with the Higgs potential being very flat around the string scale; see Fig.~\ref{phenomenological Higgs potential}.}\footnote{
\color{black} This is indeed suggested
by the multiple point criticality principle (MPP)~\cite{Froggatt:1995rt,Froggatt:2001pa,Nielsen:2012pu}, the classical conformality~\cite{Meissner:2007xv,Foot:2007iy,Meissner:2007xv,Iso:2009ss,Iso:2009nw,Hur:2011sv,Iso:2012jn,Chankowski:2014fva,Kobakhidze:2014afa,Gorsky:2014una,Kubo:2014ova,Foot:2014ifa,Kawana:2015tka}, the asymptotic safety~\cite{Shaposhnikov:2009pv}, the hidden duality and symmetry~\cite{Kawamura:2013kua,Kawamura:2013xwa}, and the maximum entropy principle~\cite{Kawai:2011qb,Kawai:2013wwa,Hamada:2014ofa,Kawana:2014vra,Hamada:2014xra}.
}
Such a flat potential opens up the possibility that the Higgs field plays the role of inflaton {\color{black} in the} early universe~\cite{Bezrukov:2007ep,Hamada:2013mya,Hamada:2014iga,Bezrukov:2014bra,Ko:2014eia,Xianyu:2014eba,Hamada:2014wna,Ibanez:2014swa,Bezrukov:2014ipa}.\footnote{
There are different models of the Higgs inflation involving higher dimensional operators~\cite{Germani:2010gm,Kamada:2010qe,Kamada:2012se,Nakayama:2014koa,Lee:2014spa}.
}
\begin{figure}[t]
\begin{center}
\hfill
\includegraphics[width=.5\textwidth]{phenomenological_potential.pdf}
\hfill\mbox{}
\caption{
{\color{black}
The SM Higgs potential $V$ as a function of the Higgs field $\varphi$.
Here we take $M_H=125\ensuremath{\,\text{GeV} }$ and tune the top mass in such a way that the potential becomes flat; see e.g.\ Ref.~\cite{Hamada:2014wna}.
}
}\label{phenomenological Higgs potential}
\end{center}
\end{figure}
{\color{black}
To understand the whole structure of the potential, it is crucial to investigate its behavior beyond the Planck scale.
The calculation based on field theory cannot be trusted in this region.
Although it is hard to reproduce the SM completely as a low energy effective theory of superstring, we can explore the generic trans-Planckian structure of the Higgs field, under the assumption that the SM is close to a non-supersymmetric perturbative vacuum of superstring theory.
}
{\color{black} In four dimensions, string theory has {\color{black} many more} tachyon-free non-super\-sym\-metric vacua than the supersymmetric ones.
The latest LHC results suggest the possibility of the absence of the low energy supersymmetry, and the research based on the non-supersymmetric vacua is becoming more and more important~\cite{Blaszczyk:2014qoa,Angelantonj:2014dia,Hamada:2014eia,Nibbelink:2015ena,Abel:2015oxa}.}
{\color{black}
In such non-supersymmetric vacua,}
{\color{black} almost all the moduli are lifted up perturbatively,
contrary to the supersymmetric ones which typically possess tens or even hundreds of flat directions that cannot be raised perturbatively.}
{\color{black}
However, there remains a problem of instability in the non-supersymmetric models: The perturbative corrections generate tadpoles for the dilaton and other moduli such as the radii of toroidal compactifications.}
{\color{black}
The dilaton can be stabilized within the perturbation series when $g_s\sim 1$~\cite{Dine:1985he}, or else by the balance between the one-loop and the non-perturbative potentials when $g_s$ is small~\cite{Abel:2015oxa}. In this paper, we assume that the dilaton is already stabilized.
}{\color{black}
We will discuss instabilities other than the dilaton direction in Sections~2 and 3.
}
We start from the tachyon-free non-supersymmetric vacua of the heterotic string theory.
We assume that the Higgs comes from a closed string and that its emission vertex at the zero momentum can be decomposed into a product of operators whose conformal dimensions are $(1,0)$ and $(0,1)$.
This is realized in the following cases for example:
\begin{itemize}
\item The Higgs comes from an extra dimensional component of a gauge field~\cite{Manton:1979kb,Fairlie:1979at,Salam:1981xd,RandjbarDaemi:1982hi,Hosotani:1983xw,Hosotani:1983vn,Hosotani:1988bm}.
\item The Higgs is the only one doublet in generic fermionic constructions~\cite{Kawai:1986va,Kawai:1986ah,Lerche:1986cx,Antoniadis:1986rn}.
\item The Higgs comes from an untwisted sector in the orbifold construction~\cite{Dixon:1985jw,Dixon:1986jc}; see e.g.\ Ref.~\cite{Blaszczyk:2014qoa} {\color{black} for a recent model-building example}.\footnote{
In Ref.~\cite{Blaszczyk:2014qoa}, an SM-like one-Higgs-doublet model is constructed,
in which the Higgs is realized as an extra-dimensional gauge field.
For example, the model under the $\mathbb{Z}_{6-I}$ orbifold compactification of $SO(16)\times SO(16)$ heterotic string with the shift vector
\als{
V=\paren{-1/2,\,-1/2,\,1/6,\,1/2,\,-2/3,\,-1/2,\,0,\,1/6}\paren{-2/3,\,-1/2,\,0,\,-1/2,\,-1/2,\,0,\,1/6,\,2/3}
}
and Wilson lines
\als{
A_5
&= \paren{1/2,\,1/2,\,-1/2,\,5/6,\,-1/6,\,1/2,\,1/6,\,-1/2}\paren{1/2,\,-1/6,\,-5/6,\,7/6,\,1/6,\,5/6,\,1/2,\,-1/6} \\
A_6 &= \paren{1/2,\,1/2,\,-1/2,\,5/6,\,-1/6,\,1/2,\,1/6,\,-1/2}\paren{1/2,\,-1/6,\,-5/6,\,7/6,\,1/6,\,5/6,\,1/2,\,-1/6}
}
fits in all the three criteria.
We thank the authors of Ref.~\cite{Blaszczyk:2014qoa} for clarifying this point.
}
\end{itemize}
Then we consider multiple insertions of such emission vertices to evaluate the effective potential.
It is very important to understand the whole shape of the Higgs potential in order to discuss the initial condition of the Higgs inflation, as well as to examine whether the MPP is realized or not.
We will show that, in the large field region, the Higgs potential is connected to a runaway vacuum
with vanishing energy, which corresponds to opening up an extra dimension.
We find that such a potential can realize eternal inflation.
This paper is organized as follows.
In Sec.~\ref{classification}, we show that the potential in the large field limit with fixed radius can be classified into the above three categories.
In Sec.~\ref{SO(16)}, we compute the one-loop partition function as a function of a background field in the $SO(16)\times SO(16)$ non-supersymmetric heterotic string on $\mathbb{R}^{1,8}\times S^1$, as a concrete toy model~\cite{Dixon:1986iz,AlvarezGaume:1986jb,Ginsparg:1986wr,Itoyama:1986ei,Itoyama:1987rc}. We explicitly check that the limiting behavior of the potential fits into the three categories mentioned above. We argue that physically this corresponds to opening up a space of multiple degrees of freedom above the Planck scale and that the runaway vacuum is a direction in this space.
In Sec.~\ref{eternal section}, we point out a possibility that the Higgs inflation is preceded by an eternal inflation, which occurs either in a domain wall or in a false vacuum.
In Sec.~\ref{solution to cosmological constant}, we show a possible explanation for the vanishing cosmological constant in terms of the MPP, and consider a possible mechanism to yield the observed value of the order of $\paren{\text{meV}}^4$.
In Sec.~\ref{summary}, we summarize our results.
In Appendix~\ref{notation}, we summarize our notation for several mathematical functions. In Appendix~\ref{fermionic construction}, we review the fermionic construction that we use for the heterotic superstring theory. The computation of the partition function is also outlined. In Appendix~\ref{T-duality section}, we review the T-duality that we use in this work. In Appendix~\ref{MPP review}, we review the MPP.
\section{Higgs potential in string theory}\label{classification}
In this section, we show how to treat the large constant background of a massless mode in closed string theory. In general, we start from a worldsheet action, say,
\al{
S_0={1\over 2\pi\alpha'}\int \text{d}^2z\,G_{MN}\,\partial X^M \bar{\partial} X^N+\cdots,
}
where $G_{MN}$ is the target space metric, $M,N,\dots$ run from $0$ to $D-1${\color{black} , and $\alpha'$ is the Regge slope parameter, related to the string tension $T$ by $T=1/2\pi\alpha'$}.
A generic massless string state has the emission vertex
\al{
\mathcal{O}\fn{z,\bar{z}}e^{ik\cdot X},
}
where $k^2=0$ and $\mathcal{O}\fn{z,\bar{z}}$ has conformal dimensions $(1,1)$ to preserve the conformal symmetry on the worldsheet.
\begin{figure}[tn]
\begin{center}
\hfill
\includegraphics[width=.9\textwidth]{string_diagram.pdf}
\hfill\mbox{}
\caption{
Partition function under the presence of the background $\phi$. Summing up all the possible insertions of $\phi$, it exponentiates to yield Eq.~\eqref{shifted action}. This picture shows the one-loop case.
}\label{string diagram}
\end{center}
\end{figure}
As said in Introduction, we assume in this paper that the emission vertex at the zero momentum of the physical Higgs can be decomposed into a product of the $(1,0)$ operator $\mathcal O_L\fn{z}$ and the $(0,1)$ operator $\mathcal O_R\fn{\bar z}$:
\al{
\mathcal O\fn{z,\bar z}
&= \mathcal O_L\fn{z}\,\mathcal O_R\fn{\bar z}.
}
An operator of this form is exactly marginal: Insertions of the operator $\phi\,\mathcal O\fn{z,\bar z}$ can be exponentiated without renormalization, and hence the deformation of the worldsheet action
\al{
S &= S_0
+\phi\int \text{d}^2 z\,\mathcal{O} (z,\bar{z})
\label{shifted action}
}
keeps the theory conformally invariant; see Fig.~\ref{string diagram}.
We want to know the effective potential for the background: $V\fn{\phi}$.
At the tree-level, the potential vanishes
\al{
V_\text{tree}(\phi)=0.
}
This is because the one-point function of any emission vertex, especially that of the graviton, vanishes on the sphere as it has a non-zero conformal dimension. At the one-loop level and higher, we have a non-zero effective potential.\footnote{
On the whole plane that is mapped from the sphere, an operator $\mathcal O$ with the scale dimension $d_s$ satisfies $\langle\mathcal O\fn{\lambda z}\rangle=\langle\mathcal O\fn{z}\rangle\lambda^{-d_s}$ and the translational invariance reads $\langle\mathcal O\fn{\lambda z}\rangle=\langle\mathcal O\fn{z}\rangle$. Hence we get $\langle\mathcal O\fn{z}\rangle=0$ for $d_s\neq0$. On the other hand, for torus and surfaces with higher genera, we cannot define the scale transformation, unlike the plane.
}
The $D$-dimensional energy density is given by
\al{
V_\text{$g$-loop}
&= -{Z_{g}\over \mathcal V_D},
\label{energy density}
}
where $\mathcal V_D$ is the volume of $D$-dimensional spacetime and $Z_{g}$ is the partition function on the worldsheet with genus $g$ after moduli integration.
We note that the potential~\eqref{energy density} is given in the Jordan frame that does not yet make the gravitational action canonical; we will come back to this point in Secs.~\ref{graviton background} and \ref{S1 with Wilson}.
We emphasize that in string theory, the partition function $Z_g$ can be obtained even for field values larger than the Planck scale, unlike ordinary quantum field theory where an infinite number of Planck-suppressed operators become relevant and uncontrollable.
Before generalizing to arbitrary compactification, we first analyze two simple examples to build intuition:
In Sec.~\ref{graviton background}, we study the large field limit of the radion, namely an extra dimensional component of the graviton under the toroidal compactification. This limit corresponds to the large radius limit of the compactified dimension.
In Sec.~\ref{momentum boost}, we further turn on the Wilson line and the anti-symmetric tensor field. We can analyze this setup by considering the corresponding boost in the momentum space~\cite{Narain:1985jj,Narain:1986am}.
From the analysis of the spectrum of these modes, we argue that the effective potential in the large field limit can be classified into three categories, namely, runaway, periodic, and chaotic.
(In Sec.~\ref{SO(16)}, we will confirm it by a concrete computation for the toroidal compactification of the $SO(16)\times SO(16)$ heterotic string theory.)
In Sec.~\ref{generalization}, we discuss more general compactifications, and show that the same classification holds.
\subsection{Radion potential}\label{graviton background}
As said above, we start from the toroidal compactification of the $(D-1)$th direction: $X^{D-1}\sim X^{D-1}+2\pi R$.
The emission vertex of the radion, $G_{D-1\,D-1}$, is
\al{
\partial X^{D-1}\bar{\partial} X^{D-1}\,e^{i k\cdot X}.
}
Its constant background is given by setting the momentum $k=0$.
We want the partition function with the radion background $\phi$:
\al{
S_\text{worldsheet}={1\over 2\pi\alpha'}\int \text{d}^2z \paren{1+\phi}\partial X^{D-1} \bar{\partial} X^{D-1}+\cdots.
}
In this case, we can transform the action into the original form with $\phi=0$ by the field redefinition
\al{
X'^{D-1}=\sqrt{1+\phi}\,X^{D-1},
}
which however changes the periodicity as
\al{
X'^{D-1}\sim X'^{D-1}+2\pi\sqrt{1+\phi}\,R.
}
That is, the radion background changes the radius of $S^1$ to
\al{
R' &:= \sqrt{1+\phi}\,R.
}
Therefore if the compactification radius $R'$ is large, the effective action is proportional to it, and the $(D-1)$-dimensional effective action for large $\phi$ becomes
\al{
S_\text{eff}
&\sim
\int \text{d}^{D-1}x \sqrt{-g}\,R'\left(\mathcal{R}-C-{2\over R'^2}\paren{\partial R'}^2\right)\nonumber\\
&= \int \text{d}^{D-1}x \sqrt{-g}\sqrt{1+\phi}\,R\left(\mathcal{R}-C-{1\over2\paren{1+\phi}^2}\paren{\partial\phi}^2\right)
\label{potential in lower dimensions}
}
up to an overall numerical coefficient, where we have taken the $\alpha'=1$ units, $\mathcal R$ is the Ricci scalar in $(D-1)$-dimensions, and $C$ is a $\phi$-independent constant that is generated from loop corrections in the non-supersymmetric string theory.
$C$ can be viewed as the $D$-dimensional cosmological constant.
This can be confirmed at the one-loop level as follows. The radius dependent part of the one-loop partition function before the moduli integration is
\al{\label{radius dependence}
\sum_{n,w=-\infty}^\infty \exp\left[
2\pi i \tau_1 n w-\pi \tau_2 \alpha'\left(\left({n\over R'}\right)^2+\left({R' w\over \alpha'}\right)^2\right)
\right],
}
where $n$ and $w$ are the Kaluza-Klein (KK) and winding numbers, respectively, and $\tau=\tau_1+i\tau_2$ is the moduli of the worldsheet torus.
In the large radius limit $R'\gg\sqrt{\alpha'}$, we can rewrite Eq.~\eqref{radius dependence} by the Poisson resummation formula:
\al{
{R'\over\sqrt{\tau_2\alpha'}}\sum_{m,w}
\exp\left[
-{\pi R'^2\over \alpha'\tau_2}|m-w\tau|^2
\right].
\label{prop to R after Poisson}
}
We see that the partition function indeed becomes proportional to $R'$ in the large $R'$ limit. Note that in this limit only the $w=0$ modes contribute, and hence the winding modes are not important here.
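The linear growth in $R'$ can also be seen numerically; the sketch below evaluates the lattice sum \eqref{radius dependence} in $\alpha'=1$ units at an arbitrary sample modulus $\tau$ (the particular values are illustrative choices) and confirms that doubling $R'$ doubles the sum:

```python
import cmath

def lattice_sum(R, tau, N=40):
    """KK/winding sum of the radius-dependent factor over |n|, |w| <= N,
    in alpha' = 1 units."""
    t1, t2 = tau.real, tau.imag
    total = 0.0 + 0.0j
    for n in range(-N, N + 1):
        for w in range(-N, N + 1):
            total += cmath.exp(2j * cmath.pi * t1 * n * w
                               - cmath.pi * t2 * ((n / R)**2 + (R * w)**2))
    return total.real  # imaginary parts cancel pairwise under n -> -n

tau = 0.3 + 1.0j              # sample worldsheet modulus
Z4 = lattice_sum(4.0, tau)
Z8 = lattice_sum(8.0, tau)
print(Z8 / Z4)                # -> 2.0000..., i.e. the sum grows linearly in R'
```

At these radii the winding ($w\neq0$) terms are exponentially negligible, so only the KK tower survives, in line with the statement above.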
We then rewrite the action~\eqref{potential in lower dimensions} in the Einstein frame.
In $(D-1)$-dimensions, the field redefinition by the Weyl transformation, $g_{\mu\nu}^\text{E}=e^{2\omega} g_{\mu\nu}$, gives us the volume element and the Ricci scalar in the Einstein frame as
\al{
\sqrt{-g^\text{E}}
&= e^{\paren{D-1}\omega}\sqrt{-g}, \\
\mathcal{R}^\text{E}
&= e^{-2\omega}\sqbr{
\mathcal{R}
-2\paren{D-2}\nabla^2\omega
-\paren{D-3}\paren{D-2}g^{\mu\nu}\partial_\mu\omega\partial_\nu\omega
},\label{curvature}
}
respectively.
By choosing $e^{\paren{D-3}\omega}=R'$, we get the Einstein frame action:
\al{
S_\text{eff}
&= \int \text{d}^{D-1}x \sqrt{-g^\text{E}}\paren{
\mathcal{R}^\text{E}
+\paren{D-3}\paren{D-2}g^{\text{E}\mu\nu}\,\partial_\mu\omega\,\partial_\nu\omega
-e^{-2\omega}C
-{2\over R'^2}\paren{\partial R'}^2
}\nonumber\\
&= \int \text{d}^{D-1}x \sqrt{-g^\text{E}}\paren{
\mathcal{R}^\text{E}
-{D-4\over D-3}{g^{\text{E}\mu\nu}\over R'^2}\partial_\mu R'\,\partial_\nu R'
-{C\over R'^{2/\paren{D-3}}}
}\nonumber\\
&= \int \text{d}^{D-1}x \sqrt{-g^\text{E}}\paren{
\mathcal{R}^\text{E}
-{g^{\text{E}\mu\nu}\over2}\partial_\mu \chi\,\partial_\nu \chi
-C\exp\fn{-\sqrt{2}\chi\over\sqrt{\paren{D-3}\paren{D-4}}}
},
\label{to Einstein}
}
where the second term in Eq.~\eqref{curvature} has become a total derivative and we have defined $R'=:\exp\fn{{\chi\over\sqrt{2}}\sqrt{D-3\over D-4}}$. When $D>4$ and $C>0$, we see that the last term, the potential, becomes runaway for large $R'$ or $\chi$.\footnote{
The small radius limit $R'\ll\sqrt{\alpha'}$ is the same {\color{black} as the large radius limit} due to the T-duality: {\color{black} $R'\longleftrightarrow \alpha'/R'$}~\cite{Kikkawa:1984cp,Sakai:1985cs,Maharana:1992my}.
}
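The last change of variables in Eq.~\eqref{to Einstein} can be cross-checked numerically. The following sketch takes the sample values $D=10$ and an arbitrary $\chi$ (both illustrative choices) and verifies that $R'=\exp\fn{{\chi\over\sqrt{2}}\sqrt{D-3\over D-4}}$ makes the kinetic term canonical and turns $C/R'^{2/\paren{D-3}}$ into the quoted exponential:

```python
import math

D = 10       # sample spacetime dimension (illustrative choice)
chi = 1.37   # arbitrary sample value of the canonical field

a = math.sqrt((D - 3) / (D - 4)) / math.sqrt(2)   # so that R' = exp(a * chi)
Rp = math.exp(a * chi)

# kinetic term: (D-4)/(D-3) * (d ln R'/d chi)^2 must equal 1/2
kin = (D - 4) / (D - 3) * a**2
print(kin)              # -> 0.5 (up to rounding)

# potential: R'^{-2/(D-3)} must equal exp(-sqrt(2) chi / sqrt((D-3)(D-4)))
lhs = Rp ** (-2 / (D - 3))
rhs = math.exp(-math.sqrt(2) * chi / math.sqrt((D - 3) * (D - 4)))
print(abs(lhs - rhs))   # -> ~1e-16, i.e. the two forms coincide
```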
To summarize, the large field limit of the radion $\phi$, the extra dimensional component of the graviton, leads to the decompactification of the corresponding dimension.
{\color{black} This decompactified vacuum corresponds to the runaway potential if the cosmological constant is positive~\cite{Appelquist:1982zs,Appelquist:1983vs}.}
{\color{black} Since the large radius limit is equivalent to the weak coupling limit, the runaway vacuum corresponds to a free theory. Therefore this runaway nature is not altered by the higher order corrections. We will see in Section~\ref{summary} that this argument also applies to the dilaton background.}
\subsection{Boost on momentum lattice}\label{momentum boost}
As the second example, we turn on the backgrounds for graviton, gauge, and anti-symmetric tensor fields.
Let $p$ and $q$ be the numbers of the compactified dimensions in the left and right moving sectors of the closed string, other than our four dimensions. We take $p\geq q$ without loss of generality.
The spectrum of $(p+q)$-dimensional momenta $(\vec k_\text{L},\vec k_\text{R})$ of the non-oscillatory mode
is restricted to form an (even self-dual) momentum lattice, due to the modular invariance~\cite{Narain:1985jj,Narain:1986am}; see Appendix~\ref{bosonic part}. Different lattices that are related by the $SO(p,q)$ rotation of $(\vec k_\text{L},\vec k_\text{R})$ correspond to different compactifications, up to the $SO(p)\times SO(q)$ rotation that leaves $\vec k_\text{L}^2$ and $\vec k_\text{R}^2$ invariant. Therefore the compactifications are classified by the coset space
\al{\label{Narain}
{SO(p,q)\over SO(p)\times SO(q)}.
}
This is the moduli space of the theory at the tree level, which is lifted up by the loop corrections in non-supersymmetric string theory.
The boost in the momentum space corresponds to putting constant backgrounds for the degrees of freedom that are massless at the tree-level~\cite{Narain:1985jj,Narain:1986am}:
\al{
C_{ij}\,\partial X^i_\text{L}\,\bar\partial X_\text{R}^{\bar j},
\label{C_ij}
}
where $i$ and $\bar j$ run for $1,\dots,p$ and $1,\dots,q$, respectively.
In terms of $q$-dimensional fields, they can be interpreted as the symmetric tensor (metric), antisymmetric tensor, and ${U(1)}^{p-q}$ gauge fields (Wilson lines), whose total number is
\al{
{q\paren{q+1}\over 2}+{q\paren{q-1}\over2}+q\paren{p-q}=pq.
}
Indeed, this agrees with the number of degrees of freedom of the coset space~\eqref{Narain}:
\al{
{\paren{p+q}\paren{p+q-1}\over 2}-{p\paren{p-1}\over2}-{q\paren{q-1}\over2}=pq.
}
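The agreement of the two countings can be checked mechanically for any $p\geq q$; a minimal sketch:

```python
def field_count(p, q):
    """Metric + antisymmetric tensor + (p-q) Wilson lines in q dimensions."""
    return q * (q + 1) // 2 + q * (q - 1) // 2 + q * (p - q)

def coset_dim(p, q):
    """dim SO(p,q) - dim SO(p) - dim SO(q)."""
    return ((p + q) * (p + q - 1) // 2
            - p * (p - 1) // 2 - q * (q - 1) // 2)

for p in range(1, 30):
    for q in range(1, p + 1):
        assert field_count(p, q) == coset_dim(p, q) == p * q
print("both countings equal p*q for all p >= q tested")
```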
\begin{figure}[tn]
\begin{center}
\hfill
\includegraphics[width=.44\textwidth]{lattice_decomp_deco.pdf}
\hfill
\includegraphics[width=.45\textwidth]{lattice_chaotic_deco.pdf}
\hfill\mbox{}
\caption{Schematic picture of the momentum boost in the $k_\text{R}$ vs $k_\text{L}$ plane. The light cone in the momentum space is depicted by the dashed diagonal lines. The sets of lighter (magenta) and black dots represent the initial momentum lattice and the one after the boost, respectively. Left: There exists a point of the initial lattice on the light cone, and hence an infinite number of its integer multiples lie on the light cone. In the infinite boost limit, they are contracted to form a decompactified dimension, represented by the black dots. Right: There is no initial point on the light cone, and such a decompactification does not occur.
}\label{momentum lattice}
\end{center}
\end{figure}
We are interested in switching on the background of a single field. If the emission vertex of the field is given by $c_{i\bar j}\,\partial X^i_\text{L}\,\bar\partial X_\text{R}^{\bar j}$, this corresponds to adding
\al{
\lambda\,c_{i\bar j}\,\partial X^i_\text{L}\,\bar\partial X_\text{R}^{\bar j}
}
to the worldsheet action, where $\lambda$ represents the strength of the background.
In general, the $SO(p)\times SO(q)$ rotation can make $c_{i\bar j}$ into the diagonal form
\al{
c_{i\bar j} &\to
\bmat{
* & & & \\
& * & & \\
& &\ddots & \\
& & &* \\
& & &\\
& & &
},\label{c_ij}
}
where the blank slots stand for zero.
This background corresponds to the combination of $q$ boosts in the $1$-$\bar 1$, \dots, $q$-$\bar q$ planes.
That is, the $(p+q)$-dimensional vector
\al{
k=\paren{k_\text{L}^1,\dots,k_\text{L}^p;k_\text{R}^{\bar 1},\dots,k_\text{R}^{\bar q}}
}
is transformed by
\al{
\bmat{k_\text{L}'^i\\ k_\text{R}'^{\bar i}}
&= \bmat{\cosh\eta_i&\sinh\eta_i\\ \sinh\eta_i&\cosh\eta_i}
\bmat{k_\text{L}^i\\ k_\text{R}^{\bar i}}, \nonumber\\
k_\text{L}'^j
&= k_\text{L}^j,
}
for $i=1,\dots,q$ and $j=q+1,\,\dots,\,p$.
Let us first consider the effect of a boost in a single plane:
\al{
\bmat{k_\text{L}'\\ k_\text{R}'}
&= \bmat{\cosh\eta&\sinh\eta\\ \sinh\eta&\cosh\eta}
\bmat{k_\text{L}\\ k_\text{R}}.
}
Then one of $k_\text{L}\pm k_\text{R}$ is contracted and the other expanded:
\al{
k'_\text{L}+k'_\text{R}&=e^\eta\paren{k_\text{L}+k_\text{R}},\nonumber\\
k'_\text{L}-k'_\text{R}&=e^{-\eta}\paren{k_\text{L}-k_\text{R}}.
}
The effective potential in the large $\eta$ limit depends on whether or not there exists a lattice point on the light cone in this plane, as is illustrated schematically in Fig.~\ref{momentum lattice}.
There are two possibilities in the infinite boost limit:
\begin{itemize}
\item If a point in the initial momentum lattice sits on the light cone as in the left panel in Fig.~\ref{momentum lattice}, an infinite number of its integer multiples on the light cone are contracted to form a continuous spectrum.
This behavior is the same as that of the KK momenta in the large radius limit discussed in Sec.~\ref{graviton background}.
The resultant partition function becomes proportional to the radius $R$.
The same argument as Sec.~\ref{graviton background} gives us the runaway potential.
\item If no point sits on the light cone in the initial momentum lattice, as in the right panel in Fig.~\ref{momentum lattice}, then the continuum is not formed by the infinite boost.
For a given amount of boost, the closest point to the origin contributes the most to the partition function. Then the potential becomes either periodic or chaotic as the boost increases.
\end{itemize}
The fate of the large field limit depends on whether or not a lattice point sits on the light cone of the boost plane in the momentum space.
In the case of the multiple boosts~\eqref{c_ij}, the boost in each plane is independent of the others. However, if there are several degenerate massless states as in Eq.~\eqref{C_ij}, it is better to consider all of them simultaneously. As we will see in Sec.~\ref{SO(16)} in a concrete model, the asymptotic behavior of the potential remains essentially the same.
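The dichotomy can be illustrated with a toy computation. The sketch below boosts two sample two-dimensional momentum lattices (both illustrative choices, in $\alpha'=1$ units): the circle lattice $(k_\text{L},k_\text{R})=(n+w,\,n-w)$ at the self-dual radius, which has points on the light cone, and the lattice $(n,\sqrt{2}\,w)$, which has none, since $n=\pm\sqrt{2}\,w$ has no non-zero integer solutions. Under an increasing boost, the lightest non-zero state of the first lattice becomes arbitrarily light, signaling a decompactified continuum, while that of the second stays of order one:

```python
import math

def boosted_min_mass2(points, eta):
    """Smallest non-zero k_L'^2 + k_R'^2 after the boost
    k_L' +- k_R' = e^{+-eta} (k_L +- k_R)."""
    ep, em = math.exp(2 * eta), math.exp(-2 * eta)
    best = float("inf")
    for kL, kR in points:
        if kL == 0 and kR == 0:
            continue
        m2 = 0.5 * (ep * (kL + kR) ** 2 + em * (kL - kR) ** 2)
        if m2 < best:
            best = m2
    return best

N = 150
# circle lattice at the self-dual radius: the pure-winding (n = 0) points
# lie on the light cone that the boost contracts
circle = [(n + w, n - w) for n in range(-N, N + 1) for w in range(-N, N + 1)]
# toy lattice with no null points
s2 = math.sqrt(2.0)
skew = [(n, s2 * w) for n in range(-N, N + 1) for w in range(-N, N + 1)]

for eta in range(6):
    print(eta, boosted_min_mass2(circle, eta), boosted_min_mass2(skew, eta))
# the circle column decays like e^{-2 eta} (a continuum forms);
# the skew column stays of order one (no decompactification)
```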
\subsection{General compactifications}\label{generalization}
We discuss the large field limit in a more general setup, including compactification on a curved space, possibly involving orbifolding, or even cases without a geometrical interpretation.
We will show that the classification still holds: runaway, periodic, and chaotic.
\begin{figure}
\begin{center}
\includegraphics[width=.7\textwidth]{graviton.pdf}
\caption{Left: The Higgs emission vertex $\partial Y\,\bar\partial Z$ is single-valued around the graviton emission vertex because $Y$ and $Z$ are independent of the spacetime coordinate $X^\mu$. Right: Exponential mapping around the graviton emission vertex. $\partial Y$ and $\bar\partial Z$ are periodic around the cylinder, e.g.\ around the (red) circle.}\label{graviton}
\end{center}
\end{figure}
As said above, the emission vertex of a massless field must be written in terms of a $(1,1)$ operator, and we assume that this operator separates into the holomorphic and anti-holomorphic parts,
\al{
\mathcal{O}_{(1,1)}=\mathcal{O}_{(1,0)}\times \mathcal{O}_{(0,1)},
}
on the worldsheet.
Then we can write, at least locally,
\al{
&\mathcal{O}_{(1,0)}=\partial Y,
&\mathcal{O}_{(0,1)}=\bar{\partial} Z,
}
where $Y$ and $Z$ are free worldsheet scalars.
If we further assume that the Higgs field is uniquely identified, i.e., that it does not mix with other massless states at the tree level, then it suffices to consider a single background as in Eq.~\eqref{shifted action}. In this case we need not consider the multi-field potential discussed above.
We can show that $\partial Y$ and $\bar\partial Z$ are periodic at least in one sector:
In fact, if we insert the graviton emission vertex $\partial X^\mu\,\bar\partial X^\nu e^{ik\cdot X}$ near the Higgs emission vertex, the latter is single-valued in the neighborhood of the former. This is because $Y$ and $Z$ are independent of the spacetime coordinates $X^\mu$. Therefore, $\partial Y$ and $\bar\partial Z$ are periodic in the graviton sector; see Fig.~\ref{graviton}.
In such a sector, we can mode-expand $\partial Y$ and $\bar\partial Z$.
Let us consider the simultaneous eigenvalues $\paren{p_Y,p_Z}$ of the constant modes of $\partial Y$ and $\bar\partial Z$.
The set of such pairs of eigenvalues, $\Gamma_P=\Set{\paren{p_Y,p_Z}}$, is closed under addition: If there exist states $s_1$ and $s_2$ with momenta $\paren{p_{Y1},p_{Z1}}$ and $\paren{p_{Y2},p_{Z2}}$, respectively, there is a state with the momentum $\paren{p_{Y1}+p_{Y2},\,p_{Z1}+p_{Z2}}$; such a state appears when $s_1$ and $s_2$ merge.
If $\Gamma_P$ contains a non-zero vector, it forms a momentum lattice, and the same argument as in Sec.~\ref{momentum boost} applies.
Putting a constant background for $\mathcal{O}_{(1,1)}$ corresponds to the momentum boost.
If there is a point on the light cone with $p_Y/p_Z$ being a rational number, then a runaway direction emerges in the infinite boost limit. Otherwise, the potential becomes chaotic.
\section{$SO(16)\times SO(16)$ heterotic string}\label{SO(16)}
We verify the argument of the previous section in a concrete model: the $SO(16)\times SO(16)$ heterotic string theory~\cite{Dixon:1986iz,AlvarezGaume:1986jb}. This model breaks supersymmetry at the string scale but, unlike the bosonic string theory in 26 dimensions, has its tachyonic modes projected out, as in the ordinary heterotic superstring theories.
In the fermionic construction, the modular invariance of the partition function restricts the allowed sets of fermion numbers in the Neveu-Schwarz (NS) and Ramond (R) sectors.
The classification of the ten-dimensional string theories was completed in Ref.~\cite{Kawai:1986vd}.
The $SO(16)\times SO(16)$ model~\cite{Dixon:1986iz,AlvarezGaume:1986jb} is the only one that has neither a tachyon nor supersymmetry in ten dimensions.
We denote the uncompactified coordinates by $X^\mu$ ($\mu=0,\dots,9$) and, for the left movers, the compactified ones by $X_L^I$ ($I=1,\dots,16$).
We then compactify this model on $S^1$~\cite{Ginsparg:1986wr}:
\al{
X^9 &\sim X^9+2\pi R.
}
We further turn on a Wilson line for the gauge field $A_{\mu=9}^{I=1}$, and compute the one-loop partition function.
In Appendix~\ref{fermionic construction}, we spell out the construction of the model and the computation of the partition function; the notations for the theta functions are collected in Appendix~\ref{notation}.
In Sec.~\ref{SO(16)xSO(16) partition function}, we review the partition function in the $SO(16)\times SO(16)$ heterotic string theory in 10 dimensions. In Sec.~\ref{S1 with Wilson} we compute the one-loop partition function of this model for the case described above.
\subsection{Partition function of $SO(16)\times SO(16)$ string}
\label{SO(16)xSO(16) partition function}
We first review the computation of the partition function in the $SO(16)\times SO(16)$ non-supersymmetric heterotic string~\cite{Dixon:1986iz,AlvarezGaume:1986jb}. Here we have chosen a non-supersymmetric string as a toy model because, as discussed in the Introduction, the low energy data at the electroweak scale suggests via the Veltman condition that the supersymmetry is broken at the Planck scale. In such a non-supersymmetric theory, the flat direction of the effective potential is lifted perturbatively. The detailed procedure of the fermionic construction of the model is explained in Appendices~\ref{formalism section} and \ref{concrete heterotic models}.
Let us write down the contribution from the momentum lattice after the bosonization in each $\alpha\vec w$ sector:
\al{
\hat Z_{T^2,\alpha\vec w}
&= \Tr_\text{$\alpha\vec w$} e^{2\pi i\tau_1\paren{L_0-\bar L_0}-2\pi\tau_2\paren{L_0+\bar L_0}}\bigg|_\text{momentum lattice}.
}
In our case, they are
\al{
\hat Z_{T^2,\vec{0}}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4-\paren{\bar{\vartheta}_{01}}^4}\paren{\paren{\vartheta_{00}}^8+\paren{\vartheta_{01}}^8}^2,\nonumber\\
\hat Z_{T^2,\vec w_0}&=-{1\over8}\paren{\bar{\vartheta}_{10}}^4\paren{\vartheta_{10}}^{16},\nonumber\\
\hat Z_{T^2,\vec w_1}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4-\paren{\bar{\vartheta}_{01}}^4}\paren{\vartheta_{10}}^{16},\nonumber\\
\hat Z_{T^2,\vec w_2}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4+\paren{\bar{\vartheta}_{01}}^4}\paren{\vartheta_{10}}^8\paren{\paren{\vartheta_{00}}^8-\paren{\vartheta_{01}}^8},\nonumber\\
\hat Z_{T^2,\vec w_0+\vec w_1}&=-{1\over8}\paren{\bar{\vartheta}_{10}}^4\paren{\paren{\vartheta_{00}}^8-\paren{\vartheta_{01}}^8}^2,\nonumber\\
\hat Z_{T^2,\vec w_0+\vec w_2}&=-{1\over8}\paren{\bar{\vartheta}_{10}}^4\paren{\paren{\vartheta_{00}}^8+\paren{\vartheta_{01}}^8}\paren{\vartheta_{10}}^8,\nonumber\\
\hat Z_{T^2,\vec w_1+\vec w_2}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4+\paren{\bar{\vartheta}_{01}}^4}\paren{\vartheta_{10}}^8\paren{\paren{\vartheta_{00}}^8-\paren{\vartheta_{01}}^8},\nonumber\\
\hat Z_{T^2,\vec w_0+\vec w_1+\vec w_2}&=-{1\over8}\paren{\bar{\vartheta}_{10}}^4\paren{\paren{\vartheta_{00}}^8+\paren{\vartheta_{01}}^8}\paren{\vartheta_{10}}^8,
\label{SO(16) partition function}
}
where $\vec 0$ and $\vec{w}_i$ are basis vectors for the boundary conditions on the fermions; see Appendix~\ref{fermion one-loop} for details.
Let us sum up all the above contributions, multiplied by those from the oscillator modes in the bosonization.
Including also the spacetime momentum and oscillator modes from the bosonic $X^m$ ($m=2,\dots,9$), we get the one-loop vacuum amplitude~\cite{Dixon:1986iz,AlvarezGaume:1986jb}:
\al{
Z_{T^2}&={V_{10}\over\alpha'^5}{1\over2\paren{2\pi}^{10}}\int_F {\text{d}\tau_1\,\text{d}\tau_2\over \tau_2^6}{1\over\ab{\eta(\tau)}^{16}{\eta(\tau)}^{16}\,{\bar{\eta}(\bar{\tau})}^4}
\sum_\text{sector $\alpha\vec w$}\hat Z_{T^2,\alpha\vec w}\nonumber\\
&=
{V_{10}\over\alpha'^5}{1\over4 (2\pi)^{10}}\int_F {\text{d}\tau_1\,\text{d}\tau_2\over \tau_2^6}{1\over\ab{\eta(\tau)}^{16}{\eta(\tau)}^{16}\,{\bar{\eta}(\bar{\tau})}^4}\nonumber\\
&\quad
\times
\left[
\paren{\bar{\vartheta}_{01}}^4\paren{\vartheta_{10}}^8\paren{\paren{\vartheta_{00}}^8-\paren{\vartheta_{01}}^8}
+\paren{\bar{\vartheta}_{10}}^4\paren{\vartheta_{01}}^8\paren{\paren{\vartheta_{00}}^8-\paren{\vartheta_{10}}^8}
\right],
\label{partition function for non-SUSY}
}
where $F$ represents the fundamental region,
\al{
F &:= \Set{\paren{\tau_1,\tau_2}|-1/2\leq\tau_1\leq1/2,\ \ab{\tau}=\ab{\tau_1+i\tau_2}\geq 1},
}
and we have used Jacobi's identity:
\al{
\paren{\bar{\vartheta}_{00}}^4-\paren{\bar{\vartheta}_{01}}^4-\paren{\bar{\vartheta}_{10}}^4=0.
}
We can see from this identity that the contributions from the $\vec{w}_0$ and $\vec{w}_1$ sectors cancel each other.
By numerical calculation, we obtain~\cite{Dixon:1986iz,AlvarezGaume:1986jb}
\al{
\rho_{10}
&= -{Z_{T^2}\over V_{10}}\simeq \paren{3.9\times 10^{-6}} {1\over\alpha'^5}.
}
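As a numerical cross-check of Jacobi's identity used above, the following sketch (the helper functions are ours; conventions $\vartheta_{00}=\vartheta_3$, $\vartheta_{01}=\vartheta_4$, $\vartheta_{10}=\vartheta_2$ at zero argument with a real nome $q$ are assumed) sums the standard $q$-series:

```python
# Numerical check of Jacobi's "abstruse" identity
#   theta_00^4 - theta_01^4 - theta_10^4 = 0
# via truncated q-series at zero argument; real nome 0 < q < 1 assumed.

def theta2(q, kmax=40):
    # theta_10(0, q) = 2 sum_{k>=0} q^((k+1/2)^2)
    return 2.0 * sum(q**((k + 0.5)**2) for k in range(kmax))

def theta3(q, kmax=40):
    # theta_00(0, q) = 1 + 2 sum_{k>=1} q^(k^2)
    return 1.0 + 2.0 * sum(q**(k*k) for k in range(1, kmax))

def theta4(q, kmax=40):
    # theta_01(0, q) = 1 + 2 sum_{k>=1} (-1)^k q^(k^2)
    return 1.0 + 2.0 * sum((-1)**k * q**(k*k) for k in range(1, kmax))

for q in (0.1, 0.3, 0.6):
    assert abs(theta3(q)**4 - theta4(q)**4 - theta2(q)**4) < 1e-9
```

The same identity underlies the cancellation between the $\vec w_0$ and $\vec w_1$ sectors noted above.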
\subsection{$S^1$ compactification with Wilson line}
\label{S1 with Wilson}
Now we compactify the $m=9$ direction on $S^1$ with radius $R$: $X^9\sim X^9+2\pi R$~\cite{Ginsparg:1986wr}.
Here we consider the large field limit of the extra-dimensional ($m=9$) component of the gauge field, $A_{m=9}^{I=1}$.
We will find the three possible large-field limits discussed in the previous section.
The emission vertex for the gauge field with the polarization and momentum $\epsilon^m$ and $k$, respectively, is
\al{
\epsilon_m\left(i\bar{\partial}X^m+{\alpha'\over 2} \paren{k\cdot\psi_R} \psi^{m}_R\right)\partial X^I_L\,e^{ik\cdot X},
\label{gauge vertex op}
}
where indices run such that $m=2,\dots,9$ and $I=1,\dots,16$.
We see by putting $k=0$ in Eq.~\eqref{gauge vertex op} that a constant background $A_m^I$ corresponds to adding
\al{
A_m^I\int \text{d}^2z\,\bar{\partial}X^{m}\,\partial X^I_{L},
\label{vertex op of A}
}
to the worldsheet action.\footnote{
In obtaining the constant background by putting $k=0$, it is again important that $A$ is massless at tree level.
}
In particular, we switch on the component of $I=1$ and $m=9$, and write $A:=A_9^1$:
\al{
A\int \text{d}^2z\,\bar{\partial}X^{9}\,\partial X^1_{L}.
\label{vertex op of A comp}
}
Turning on the Wilson line background $A$ does not affect the oscillator modes since Eq.~\eqref{vertex op of A} is a total derivative in the worldsheet action; only the momentum lattice of the center-of-mass mode is changed by $A$.
Let $l_L$ be the momentum of $X^{I=1}_L$. After the bosonization, we have
\al{
l_L&=\sqrt{{2\over\alpha'}}m,
}
where $m\in\mathbb{Z}$ and $\mathbb{Z}+1/2$ for the NS (anti-periodic) and R (periodic) boundary conditions, respectively.
Let $p_L$ and $p_R$ be the spacetime momenta of the $S^1$-compactified direction $X^{m=9}$ for the left and right movers, respectively:
\al{
p_L&={n\over R}+{R w\over\alpha'},\nonumber\\
p_R&={n\over R}-{R w\over\alpha'},
\label{internal momentum}
}
where $n\in\mathbb Z$ and $w\in\mathbb Z$ are the KK and winding numbers, respectively.
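As a quick check of these momenta, a minimal numerical sketch (in units $\alpha'=1$; the helper name is ours) verifies that the level-matching combination $p_L^2-p_R^2=4nw/\alpha'$ depends only on the integers $n$ and $w$, and that the T-duality $R\to\alpha'/R$ with $n\leftrightarrow w$ sends $\paren{p_L,p_R}\to\paren{p_L,-p_R}$:

```python
ALPHA = 1.0  # alpha' in string units

def p_left_right(n, w, R, alpha=ALPHA):
    """Left/right internal momenta of Eq. (internal momentum)."""
    return n/R + R*w/alpha, n/R - R*w/alpha

for (n, w, R) in [(1, 0, 0.7), (2, -3, 1.9), (-1, 5, 0.3)]:
    pL, pR = p_left_right(n, w, R)
    # level matching: p_L^2 - p_R^2 = 4 n w / alpha', independent of R
    assert abs(pL*pL - pR*pR - 4.0*n*w/ALPHA) < 1e-10
    # T-duality R -> alpha'/R with n <-> w flips the sign of p_R only
    qL, qR = p_left_right(w, n, ALPHA/R)
    assert abs(qL - pL) < 1e-10 and abs(qR + pR) < 1e-10
```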
Turning on the background $A$ corresponds to the boost on the momentum lattice~\cite{Narain:1986am}:
\al{
\bmat{
l_L'\\
p_R'
}
=
\bmat{
\cosh\eta&\sinh\eta\\
\sinh\eta&\cosh\eta
}
\bmat{
l_L\\
p_R
},\label{lL pR mixing}
}
since only $l_L$ and $p_R$ appear in Eq.~\eqref{vertex op of A comp}.
This boost necessarily changes the radius of the compactification too.
We will see that the identification
\al{
A &= \sinh\eta,
\label{A-id}
}
gives the correct answer.
Let us define $r$ by
\al{\label{r-id}
r &:= {R\over\cosh\eta},
}
which will turn out to be the compactification radius in the presence of $A$.
Note that in the language of Sec.~\ref{momentum boost}, we have 17 left-moving and 1 right-moving internal dimensions ($p=17$ and $q=1$).
The non-trivial transformations on the compactified space form the coset
\al{
{SO(17,1)\over SO(17)}.
\label{tree moduli space}
}
Among them, we have chosen the boost between the left $I=1$ and right $m=9$ dimensions with the momenta $l_L$ and $p_R$, respectively.
The left momentum of the $m=9$ dimension, $p_L$, is untouched.
We will soon use the rotation between $l_L$ and $p_L$ that belongs to $SO(17)$.
We now show the validity of the identification~\eqref{A-id}.
In terms of $A$ and $r$, we have
\al{\label{pR''}
p_R'
&= p_R\cosh\eta +l_L\sinh\eta &
&= {n\over r}-{rw\over\alpha'}\paren{1+A^2}+\sqrt{{2\over\alpha'}}m A,\\
l_L'
&= l_L\cosh\eta +p_R\sinh\eta &
&= \sqrt{{2\over\alpha'}}m \sqrt{1+A^2}+{n\over r}{A\over\sqrt{1+A^2}}-{rw\over\alpha'}A\sqrt{1+A^2},\\
p_L'
&= p_L &
&= {n\over r}{1\over\sqrt{1+A^2}}+{r w\over\alpha'}\sqrt{1+A^2}.
}
We further rotate by a part of $SO(17)$ in Eq.~\eqref{tree moduli space},
\al{
\bmat{
l_L''\\
p_L''
}
&=
\bmat{
\cos\theta&\sin\theta\\
-\sin\theta&\cos\theta
}
\bmat{
l_L'\\
p_L'
}, \label{lL pL mixing}\\
p_R''
&= p_R', \nonumber
}
with
\al{
\cos\theta &= {1\over\sqrt{1+A^2}}, &
\sin\theta &= -{A\over\sqrt{1+A^2}},
}
to get
\al{
l_L''&=\sqrt{{2\over\alpha'}}m-2{rw\over\alpha'}A,\\
p_L''&={n\over r}+{rw\over\alpha'}\paren{1-A^2}+\sqrt{{2\over\alpha'}}mA.
\label{pL''}
}
The spectrum becomes
\al{
\sum_\text{all modes}\paren{l_L''^2+p_L''^2+p_R'^2}
&= \sum_{m,n,w}\Bigg[
\paren{\sqrt{{2\over\alpha'}}m-2{rw\over\alpha'}A}^2
+\paren{{n\over r}+{rw\over\alpha'}\paren{1-A^2}+\sqrt{{2\over\alpha'}}mA}^2\nonumber\\
&\phantom{\mbox{}= \sum_{m,n,l}\Bigg[}
+\paren{{n\over r}-{rw\over\alpha'}\paren{1+A^2}+\sqrt{{2\over\alpha'}}m A}^2
\Bigg].
\label{spectrum with Wilson line}
}
As promised, this result~\eqref{spectrum with Wilson line} correctly reproduces that of Refs.~\cite{Narain:1986am,Ginsparg:1986wr}, which is obtained from the quantization of the scalar field under constraints.
Furthermore, from Eq.~\eqref{spectrum with Wilson line}, we see
\al{
\left.\paren{l_L''^2+p_L''^2}\right|_{m=w=0}
= \left.p_R''^2\right|_{m=w=0}
= {n^2\over r^2},
\label{large r limit summation}
}
which indicates that $r$ is the physical radius of $S^1$.
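The identification~\eqref{A-id} can also be checked numerically. The following minimal sketch (units $\alpha'=1$; variable names are ours) composes the boost~\eqref{lL pR mixing} and the rotation~\eqref{lL pL mixing} and compares the result with the closed forms in Eqs.~\eqref{pR''} and \eqref{pL''}:

```python
import math

ALPHA = 1.0          # alpha'
r, A = 0.9, 0.45     # sample moduli
eta = math.asinh(A)  # Eq. (A-id): A = sinh(eta)
R = r*math.cosh(eta) # Eq. (r-id): r = R/cosh(eta)
ch, sh = math.cosh(eta), math.sinh(eta)
ct = 1.0/math.sqrt(1.0 + A*A)   # cos(theta)
st = -A/math.sqrt(1.0 + A*A)    # sin(theta)

for (m, n, w) in [(1, 2, -1), (0, 1, 1), (3, -2, 2)]:
    lL = math.sqrt(2.0/ALPHA)*m
    pL = n/R + R*w/ALPHA
    pR = n/R - R*w/ALPHA
    # boost in the (l_L, p_R) plane, then rotation in the (l_L', p_L') plane
    lL1, pR1 = lL*ch + pR*sh, pR*ch + lL*sh
    lL2, pL2 = ct*lL1 + st*pL, -st*lL1 + ct*pL
    # closed forms of Eqs. (pR'') and (pL'')
    lL2c = math.sqrt(2.0/ALPHA)*m - 2.0*r*w/ALPHA*A
    pL2c = n/r + r*w/ALPHA*(1.0 - A*A) + math.sqrt(2.0/ALPHA)*m*A
    pR1c = n/r - r*w/ALPHA*(1.0 + A*A) + math.sqrt(2.0/ALPHA)*m*A
    assert abs(lL2 - lL2c) < 1e-12
    assert abs(pL2 - pL2c) < 1e-12
    assert abs(pR1 - pR1c) < 1e-12
```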
Now let us discuss the T-dual transformations that can be read off from the above result.
\begin{itemize}
\item We can see that the shift
\al{
A &\rightarrow
A+{\sqrt{2\alpha'}\over r}
\label{A shift}
}
leaves the spectrum~\eqref{spectrum with Wilson line} unchanged.\footnote{
After the shift of $A$, redefine the mode numbers by $n'=n+2m-2w$, $w'=w$, and $m'=m-2w$.
}
\item From Eq.~\eqref{internal momentum}, we see that the spectrum is invariant under the T-dual transformation~\cite{Kikkawa:1984cp,Sakai:1985cs,Maharana:1992my}
\al{
R &\to {\alpha'\over R},
\label{T-dual tf basic}
}
or, in terms of $r$ and $A$, $r\to \alpha'/\bigl[\paren{1+A^2}r\bigr]$.
\end{itemize}
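The shift~\eqref{A shift} together with the mode relabeling of the footnote can be verified directly. A minimal sketch in units $\alpha'=1$ (the function name is ours):

```python
import math

ALPHA = 1.0
r, A = 1.3, 0.7   # sample moduli

def momenta(m, n, w, A, r, alpha=ALPHA):
    """(l_L'', p_L'', p_R'') of Eqs. (pR'') and (pL'')."""
    lL = math.sqrt(2.0/alpha)*m - 2.0*r*w/alpha*A
    pL = n/r + r*w/alpha*(1.0 - A*A) + math.sqrt(2.0/alpha)*m*A
    pR = n/r - r*w/alpha*(1.0 + A*A) + math.sqrt(2.0/alpha)*m*A
    return lL, pL, pR

# A -> A + sqrt(2 alpha')/r; the state (m, n, w) after the shift has the
# same momenta as (m - 2w, n + 2m - 2w, w) before the shift
shift = math.sqrt(2.0*ALPHA)/r
for (m, n, w) in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, -3, 1), (-1, 4, -2)]:
    before = momenta(m - 2*w, n + 2*m - 2*w, w, A, r)
    after = momenta(m, n, w, A + shift, r)
    assert all(abs(x - y) < 1e-12 for x, y in zip(before, after))
```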
By defining
\al{\label{tautilde}
\tilde\tau
&= \tilde\tau_1+i\tilde\tau_2
:= {r A\over\sqrt{\alpha'}}+i{r\over\sqrt{\alpha'}},
}
we can write down the enlarged T-dual transformation:\footnote{
The $S$-transformation is the transformation~\eqref{T-dual tf basic} composed with $A\to-A$, while the $T$ is Eq.~\eqref{A shift}.
}
\al{
&S:\quad \tilde\tau\rightarrow-{1\over\tilde\tau}\nonumber\\
&T:\quad \tilde\tau\rightarrow\tilde\tau+\sqrt{2}.
\label{T-duality transformation}
}
The general form of the T-dual transformation is
\al{\label{general transformation}
&\tilde\tau'={a\tilde\tau+b\over c\tilde\tau+d},
}
where $ad-bc=1$ and $a$, $b$, $c$, and $d$ are either
\al{
a &\in \mathbb Z, &
b &\in \sqrt{2}\mathbb Z, &
c &\in \sqrt{2}\mathbb Z, &
d &\in \mathbb Z,
}
or
\al{
a &\in \sqrt{2}\mathbb Z, &
b &\in \mathbb Z, &
c &\in \mathbb Z, &
d &\in \sqrt{2}\mathbb Z.
}
The fundamental region is $-1/\sqrt{2}\leq \tilde\tau_1\leq1/\sqrt{2},\, \ab{\tilde\tau}\geq 1$.
More details can be found in Appendix~\ref{T-duality section}.
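The statement in the footnote that the $S$-transformation is the T-duality~\eqref{T-dual tf basic} composed with $A\to-A$ can be checked explicitly; a small sketch in units $\alpha'=1$ (the helper name is ours) reads $\paren{r,A}$ back from $-1/\tilde\tau$:

```python
import math

ALPHA = 1.0

def tau_of(r, A, alpha=ALPHA):
    """tilde-tau = rA/sqrt(alpha') + i r/sqrt(alpha'), Eq. (tautilde)."""
    s = math.sqrt(alpha)
    return complex(r*A/s, r/s)

for (r, A) in [(0.8, 0.3), (1.7, -1.2)]:
    ts = -1.0/tau_of(r, A)          # S-transformation
    r_new = ts.imag*math.sqrt(ALPHA)
    A_new = ts.real/ts.imag
    # expected: r -> alpha'/[(1 + A^2) r] and A -> -A
    assert abs(r_new - ALPHA/((1.0 + A*A)*r)) < 1e-12
    assert abs(A_new + A) < 1e-12
```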
\subsection{Effective potential under Wilson line}
Let us write down the contribution from the momentum lattice after the bosonization
in each sector $\alpha\vec w$; this time we include the momentum \eqref{internal momentum} of the $S^1$-compactified $X^{m=9}$ which is modified by the Wilson line $A$ as in Eqs.~\eqref{pR''} and \eqref{pL''}:
\al{
\tilde Z_{T^2,\alpha\vec w}
&= \Tr_{\alpha\vec w} e^{2\pi i\tau_1\paren{L_0-\bar L_0}-2\pi\tau_2\paren{L_0+\bar L_0}}\bigg|_{\text{momentum lattice}}.
}
Concretely,
\al{
\tilde Z_{T^2,\vec{0}}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4-\paren{\bar{\vartheta}_{01}}^4}\sum_{m\in\mathbb Z} g_m\fn{\eta,R}
\paren{\paren{\vartheta_{00}}^7+\paren{-1}^m\paren{\vartheta_{01}}^7}\paren{\paren{\vartheta_{00}}^8+\paren{\vartheta_{01}}^8},
\nonumber\\
\tilde Z_{T^2,\vec w_0}&=-{1\over8}
\paren{\bar{\vartheta}_{10}}^4
\sum_{m\in\mathbb Z+1/2}g_m\fn{\eta,R}
\paren{\vartheta_{10}}^{15},
\nonumber\\
\tilde Z_{T^2,\vec w_1}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4-\paren{\bar{\vartheta}_{01}}^4}
\sum_{m\in\mathbb Z+1/2}g_m\fn{\eta,R}
\paren{\vartheta_{10}}^{15},
\nonumber\\
\tilde Z_{T^2,\vec w_2}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4+\paren{\bar{\vartheta}_{01}}^4}
\sum_{m\in\mathbb Z+1/2}g_m\fn{\eta,R}
\paren{\vartheta_{10}}^7\paren{\paren{\vartheta_{00}}^8-\paren{\vartheta_{01}}^8},
\nonumber\\
\tilde Z_{T^2,\vec w_0+\vec w_1}&=-{1\over8}\paren{\bar{\vartheta}_{10}}^4
\sum_{m\in\mathbb Z}g_m\fn{\eta,R}
\paren{\paren{\vartheta_{00}}^7-\paren{-1}^m\paren{\vartheta_{01}}^7}
\paren{\paren{\vartheta_{00}}^8-\paren{\vartheta_{01}}^8},
\nonumber\\
\tilde Z_{T^2,\vec w_0+\vec w_2}&=-{1\over8}\paren{\bar{\vartheta}_{10}}^4
\sum_{m\in\mathbb Z}g_m\fn{\eta,R}
\paren{\paren{\vartheta_{00}}^7+\paren{-1}^m\paren{\vartheta_{01}}^7}
\paren{\vartheta_{10}}^8,
\nonumber\\
\tilde Z_{T^2,\vec w_1+\vec w_2}&=\phantom{-}{1\over8}\paren{\paren{\bar{\vartheta}_{00}}^4+\paren{\bar{\vartheta}_{01}}^4}
\sum_{m\in\mathbb Z}g_m\fn{\eta,R}
\paren{\paren{\vartheta_{00}}^7-\paren{-1}^m\paren{\vartheta_{01}}^7}\paren{\vartheta_{10}}^8,
\nonumber\\
\tilde Z_{T^2,\vec w_0+\vec w_1+\vec w_2}&=-{1\over8}\paren{\bar{\vartheta}_{10}}^4
\sum_{m\in\mathbb Z+1/2}g_m\fn{\eta,R}
\paren{\vartheta_{10}}^7
\paren{\paren{\vartheta_{00}}^8+\paren{\vartheta_{01}}^8},
\label{partition function under A}
}
where
\al{
g_m\fn{\eta,R}
&=
\sum_{n,w=-\infty}^\infty \exp\left[
\pi i \alpha'{\tau_1\over2}\paren{l_L''^2+p_L''^2-p_R''^2}
-{\pi\over 2}\tau_2\alpha'\paren{l_L''^2+p_L''^2+p_R''^2}
\right]
\nonumber\\
&=\sum_{n,w=-\infty}^\infty \exp\left[
\pi i \tau_1\paren{m^2+2 n w}-{\pi\over 4}\tau_2\alpha'
\paren{
e^{2\eta}\paren{l_L+p_R}^2+e^{-2\eta}\paren{l_L-p_R}^2+2p_L^2
}
\right]\label{Wilson-line-dependent part}
}
contains the information of the Wilson line.
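The two forms of the exponent in Eq.~\eqref{Wilson-line-dependent part} can be checked numerically; a sketch in units $\alpha'=1$ (variable names are ours):

```python
import math

ALPHA = 1.0
r, A = 1.1, 0.6
eta = math.asinh(A)     # Eq. (A-id)
R = r*math.cosh(eta)    # Eq. (r-id)

for (m, n, w) in [(1, 1, 0), (2, -1, 3), (0, 2, -1)]:
    # unboosted momenta
    lL = math.sqrt(2.0/ALPHA)*m
    pL = n/R + R*w/ALPHA
    pR = n/R - R*w/ALPHA
    # boosted/rotated momenta, Eqs. (pR'') and (pL'')
    lL2 = math.sqrt(2.0/ALPHA)*m - 2.0*r*w/ALPHA*A
    pL2 = n/r + r*w/ALPHA*(1.0 - A*A) + math.sqrt(2.0/ALPHA)*m*A
    pR2 = n/r - r*w/ALPHA*(1.0 + A*A) + math.sqrt(2.0/ALPHA)*m*A
    # tau_2 exponent: l_L''^2 + p_L''^2 + p_R''^2
    #   = [exp(2 eta)(l_L + p_R)^2 + exp(-2 eta)(l_L - p_R)^2]/2 + p_L^2
    lhs = lL2**2 + pL2**2 + pR2**2
    rhs = 0.5*(math.exp(2.0*eta)*(lL + pR)**2
               + math.exp(-2.0*eta)*(lL - pR)**2) + pL**2
    assert abs(lhs - rhs) < 1e-10
    # tau_1 exponent: alpha'(l_L''^2 + p_L''^2 - p_R''^2)/2 = m^2 + 2 n w
    assert abs(ALPHA*(lL2**2 + pL2**2 - pR2**2)/2.0 - (m*m + 2*n*w)) < 1e-10
```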
We can check that in the $\eta\to 0$ limit Eq.~\eqref{partition function under A} reduces to Eq.~\eqref{SO(16) partition function}, multiplied by the contribution from the compactified dimension shown in Appendix~\ref{bosonic part}.
Including the oscillator modes and the spacetime coordinates $X^m$ ($m=2,\dots,9$), we get
\al{
Z_{T^2}
&={V_{9}\over\alpha'^{9/2}}{1\over2 (2\pi)^9}\int_F {d\tau_1 d\tau_2\over \tau_2^{11/2}}{1\over\ab{\eta(\tau)}^{16}{\eta(\tau)}^{16}{\bar{\eta}(\bar{\tau})}^4}
\sum_\text{sector $\alpha\vec w$}\tilde{Z}_{T^2,\alpha \vec{w}}\nonumber\\
&={V_{9}\over\alpha'^{9/2}}{1\over8 (2\pi)^9}\int_F {d\tau_1 d\tau_2\over \tau_2^{11/2}}{1\over\ab{\eta(\tau)}^{16}{\eta(\tau)}^{16}{\bar{\eta}(\bar{\tau})}^4}\nonumber\\
&\quad\times\Bigg(
\sum_{m\in\mathbb Z+1/2}g_m\fn{\eta,R}\paren{\vartheta_{10}}^7
\paren{
\paren{\bar{\vartheta}_{01}}^4\paren{\vartheta_{00}}^8
-\paren{\bar{\vartheta}_{00}}^4\paren{\vartheta_{01}}^8
}\nonumber\\
&\phantom{\quad\times\Bigg(}
+\sum_{m\in\mathbb Z}g_m\fn{\eta,R}\bigg[
\paren{\vartheta_{00}}^7\paren{\paren{\bar{\vartheta}_{10}}^4\paren{\vartheta_{01}}^8+\paren{\bar{\vartheta}_{01}}^4\paren{\vartheta_{10}}^8}\nonumber\\
&\phantom{\quad\times\Bigg(+\sum_{m\in\mathbb Z}g_m\fn{\eta,R}\bigg[}
+\paren{-1}^m \paren{\vartheta_{01}}^7
\paren{\paren{\bar{\vartheta}_{10}}^4\paren{\vartheta_{00}}^8-\paren{\bar{\vartheta}_{00}}^4\paren{\vartheta_{10}}^8}
\bigg]
\Bigg).\label{Wilsonline partition function}
}
The nine-dimensional energy density in the Jordan frame is given by
\al{
\rho_9
&= -{Z_{T^2}\over V_9};
}
see Eq.~\eqref{energy density}.
\begin{figure}[t]
\begin{center}
\hfill
\includegraphics[width=.4\textwidth]{partition_function_r1.pdf}
\hfill
\includegraphics[width=.4\textwidth]{partition_function_rSqrt2.pdf}
\hfill\mbox{}
\caption{
The potential $\rho_9$ in Jordan frame as a function of $A$ with $r=\sqrt{\alpha'}$ (left) and $\sqrt{2\alpha'}$ (right), all in units of $\alpha'=1$.
We can see the periodicity $A\to A+\sqrt{2\alpha'}/r$, up to the distortions due to numerical errors.
}\label{periodicity}
\end{center}
\end{figure}
In Fig.~\ref{periodicity}, we plot $\rho_9$ as a function of $A$ for $r=\sqrt{\alpha'}$ (left) and $\sqrt{2\alpha'}$ (right), all in units of $\alpha'=1$.
The summations over $n$ and $m$ in Eqs.~\eqref{Wilson-line-dependent part} and \eqref{Wilsonline partition function} are truncated at $\ab{n},\ab{m}\leq10$, and the numerical integration is performed within $\tau_2\leq4$.
We can see the periodicity $A\to A+\sqrt{2\alpha'}/r$.
\begin{figure}[t]
\begin{center}
\hfill
\includegraphics[width=.35\textwidth]{contour.pdf}
\hfill
\includegraphics[width=.6\textwidth]{3Dplot.pdf}
\hfill\mbox{}
\caption{
Contour and 3D plots are shown in the left and right panels, respectively, for the energy density $\rho_9$ in the Jordan frame as a function of $\tilde\tau_1=rA/\sqrt{\alpha'}$ and $\tilde\tau_2=r/\sqrt{\alpha'}$, with all values given in units of $\alpha'=1$. In the left panel, we shade the fundamental region for the T-dual transformation: $\ab{\tilde\tau_1}\leq1/\sqrt{2}$, $\ab{\tilde\tau}\geq 1$. We can see the shift symmetry $\tilde\tau_1\to\tilde\tau_1+\sqrt{2}$, up to distortions due to numerical errors.
}\label{Jordan potential}
\end{center}
\end{figure}
For varying $A$ and $r$, we plot $\rho_9$ as a function of $\tilde\tau_1=rA/\sqrt{\alpha'}$ and $\tilde\tau_2=r/\sqrt{\alpha'}$ in Fig.~\ref{Jordan potential}.
Note that in the large $r$ ($=\sqrt{\alpha'}\tilde\tau_2$) limit, the Jordan frame potential becomes proportional to $r$. This can also be seen analytically from the fact that in the large $r$ limit, the contributing modes are as in Eq.~\eqref{large r limit summation}, which results in the same expression as Eq.~\eqref{prop to R after Poisson}.
To repeat, we have shown both numerically and analytically that the Jordan-frame potential is proportional to $r$ at the one-loop level. In the large $r$ limit,
all the higher-loop corrections behave in the same way, since this linearity simply reflects the fact that the energy is proportional to the volume of the compactified dimension.
\begin{figure}[t]
\begin{center}
\hfill
\includegraphics[width=.35\textwidth]{contour_Einstein.pdf}
\includegraphics[width=.6\textwidth]{Einstein_3Dplot.pdf}
\hfill\mbox{}
\caption{
Contour and 3D plots are shown in the left and right panels, respectively, for the energy density $V_\text{E}$ in the Einstein frame as a function of $\tilde\tau_1=rA/\sqrt{\alpha'}$ and $\tilde\tau_2=r/\sqrt{\alpha'}$, with all their values being given in $\alpha'=1$ units.
The shaded fundamental region and the existence of the $\sqrt{2}$-shift are the same as in Fig.~\ref{Jordan potential}. We see that the potential becomes runaway in the large radius limit $r\gg\sqrt{\alpha'}$.
}\label{Einstein potential}
\end{center}
\end{figure}
Now let us turn to the Einstein frame:
\al{
V_\text{E}&=-{1\over \paren{2\pi r}^{2/7}}{Z_{T^2}\over2\pi r V_9}\nonumber\\
&=-{1\over\alpha'^{9/2}}{1\over2 \paren{2\pi}^{72/7}}{1\over r^{9/7}}\int_F {d\tau_1 d\tau_2\over \tau_2^{11/2}}{1\over\ab{\eta(\tau)}^{16}{\eta(\tau)}^{16}{\bar{\eta}(\bar{\tau})}^4}
\sum_\text{sector $\alpha\vec w$}\tilde Z_{T^2,\alpha\vec w};
}
see Eq.~\eqref{to Einstein}.
We plot this potential in Fig.~\ref{Einstein potential}.
An important fact is that the potential in the Einstein frame becomes runaway in the large radius limit $r\gg\sqrt{\alpha'}$. As discussed above, this behavior should not be altered by the higher-loop corrections.
Note that this effective potential in the Einstein frame is reliable only for large $r\gg\sqrt{\alpha'}$, since the treatment in terms of the effective field theory~\eqref{potential in lower dimensions} is valid, and $r$ can be regarded as the physical radius, only in this limit; see also the argument around Eq.~\eqref{large r limit summation}.
\subsection{Large boost limit}
We want to examine the behavior of the Higgs potential in the large field limit.
However, in this nine-dimensional toy model, there are two flat directions at this level, namely, $A$ and $R$.
If the Higgs arises from a mechanism similar to gauge-Higgs unification,
the Higgs field should be identified with $A$.
Therefore, we check the large $A$ limit for a fixed $R$.
This limit is nothing but the large boost limit as is easily seen from Eq.~\eqref{A-id}: $\eta\rightarrow \infty$.
From Eqs.~\eqref{r-id} and \eqref{tautilde},
the trajectory in the $\tilde\tau_1$-$\tilde\tau_2$ plane is given by
\al{
&\tilde\tau_1={R\over\sqrt{\alpha'}}\tanh\eta,\nonumber\\
&\tilde\tau_2={R\over\sqrt{\alpha'}}{1\over\cosh\eta}.
\label{trajectory of constant R}
}
Since $\tilde\tau_1^2+\tilde\tau_2^2=R^2/\alpha'$, this path starts from $\paren{0,\,R/\sqrt{\alpha'}}$ for $\eta=0$, and moves on the circle toward $\paren{R/\sqrt{\alpha'},\,0}$ as $\eta\to\infty$.
The question is what this trajectory becomes when mapped onto the fundamental region.
The large $\eta$ behavior depends on the value of $R/\sqrt{\alpha'}$:
\begin{itemize}
\item If $R/\sqrt{\alpha'}\in \sqrt{2}\mathbb Q$, then $\tilde\tau_2$ ($=r/\sqrt{\alpha'}$) goes to infinity in the large $\eta$ limit.
This can be seen as follows.
Since $\tilde\tau\to R/\sqrt{\alpha'}$ as $\eta\to\infty$, let us check to what point $R/\sqrt{\alpha'}$ is mapped in the fundamental region.
Let us write $R/\sqrt{\alpha'}=\sqrt{2}p/q$ with $p,q\in\mathbb Z$.
By an appropriate number of $\sqrt{2}$-shifts ($T$-transformations in \eqref{T-duality transformation}), we can always make $\ab{p}<\ab{q}$.
Performing the inversion ($S$-transformation in \eqref{T-duality transformation}) and then again an appropriate number of $\sqrt{2}$-shifts, we can make the numerator smaller and smaller; eventually we get $p/q\to 0$.
This corresponds to the infinity $\tilde\tau_2\to\infty$ in the fundamental region.
This behavior is expected from the discussion of the general momentum boost in Sec.~\ref{momentum boost}.
In fact, if and only if $R/\sqrt{\alpha'}\in \sqrt{2}\mathbb Q$, we can have a lattice point on the light cone in the momentum space, that is, there exist $n,m,w\in\mathbb Z$ such that\footnote{
This can be proved as follows.
First we show that the conditions~\eqref{lL pR condition} and \eqref{pL condition} can be met for an arbitrary $R/\sqrt{\alpha'}\in\sqrt{2}\mathbb Q$ by an appropriate choice of $n,m,w$.
Let us write $R/\sqrt{\alpha'}=\sqrt{2}q_n/q_d$ with $q_n,q_d\in\mathbb Z$.
The condition~\eqref{pL condition} reads $nq_d/q_n+2wq_n/q_d=0$.
We can choose $n$ and $w$ such that $n=n'q_n$ and $w=w'q_d$ with $n',w'\in\mathbb Z$, resulting in the condition $n'q_d+2w'q_n=0$. This can be satisfied by setting $w'=q_d$ and $n'=-2q_n$.
Then the condition~\eqref{lL pR condition} reads
$0 \stackrel{!}{=}
m+{1\over2}\paren{n'q_d-2q_nw'}
= m+n'q_d$, which can be satisfied by choosing $m=-n'q_d$.
Next we show that if $R/\sqrt{\alpha'}\slashed\in\sqrt{2}\mathbb Q$, there is no set of $n,m,w\in\mathbb Z$ that satisfies Eqs.~\eqref{lL pR condition} and \eqref{pL condition}.
By putting Eq.~\eqref{pL condition} into Eq.~\eqref{lL pR condition}, we get the condition
$m\pm \sqrt{2}n{\sqrt{\alpha'}\over R}=0$. Therefore, it is necessary that $R/\sqrt{\alpha'}=\mp\sqrt{2}n/m$ with $n,m\in\mathbb Z$.
}
\al{
l_L^2-p_R^2
&={2\over\alpha'}
\sqbr{m+{1\over\sqrt{2}}\left({n\over R/\sqrt{\alpha'}}-{R\over\sqrt{\alpha'}}w\right)}
\sqbr{m-{1\over\sqrt{2}}\left({n\over R/\sqrt{\alpha'}}-{R\over\sqrt{\alpha'}}w\right)}
=0, \label{lL pR condition}\\
p_L^2&={1\over\alpha'}\left({n\over R/\sqrt{\alpha'}}+{R\over\sqrt{\alpha'}}w\right)^2=0.
\label{pL condition}
}
For $R/\sqrt{\alpha'}\in\sqrt{2}\mathbb Q$, there is a point on the light cone in the momentum space. Following the argument of Sec.~\ref{momentum boost}, the Lorentz boost between $l_L$ and $p_R$ opens up a new dimension.
\item
If $R/\sqrt{\alpha'}\slashed\in \sqrt{2}\mathbb Q$, the potential becomes either periodic or chaotic.
Let us check in what case we get the periodic potential.
\begin{itemize}
\item
The periodic case is realized if, starting from a point $\tilde{\tau}$~\eqref{trajectory of constant R} with the boost $\eta$, we get another point on the trajectory with the boost $\eta+\eta_c$,
\al{
\tilde\tau_1' &= {R\over\sqrt{\alpha'}}\tanh\fn{\eta+\eta_c}, \nonumber\\
\tilde\tau_2' &= {R\over\sqrt{\alpha'}}{1\over\cosh\fn{\eta+\eta_c}},
}
which can be mapped from $\tilde\tau$ by an appropriate T-dual transformation~\eqref{general transformation}.
In general, the transformation of $\tilde\tau_2$ is as shown in Eq.~\eqref{explicit T-dual}, and we get
\al{
\tilde\tau_2'
&={\tilde\tau_2\over\ab{c\tilde\tau+d}^2}={\tilde\tau_2\over c^2{R^2\over\alpha'}+d^2+2cd\tilde\tau_1}
={R\over\sqrt{\alpha'}}{1\over \paren{c^2{R^2\over\alpha'}+d^2}\cosh\eta+2cd{R\over\sqrt{\alpha'}}\sinh\eta}\nonumber\\
&={R\over\sqrt{\alpha'}}{1\over\cosh\fn{\eta-\eta_2}},
}
where we have defined $\eta_2$ by
\al{
\tanh\eta_2:=-{2cd{R\over\sqrt{\alpha'}}\over c^2{R^2\over\alpha'}+d^2}.
}
On the other hand,
the same transformation maps $\tilde{\tau}_1$ to
\al{
\tilde\tau_1'&={ac\ab{\tilde\tau}^2+\paren{ad+bc}\tilde\tau_1+bd\over \ab{c\tilde\tau+d}^2}
={ac{R^2\over\alpha'}+bd+\paren{ad+bc}\tilde\tau_1\over\cosh\fn{\eta-\eta_2}/\cosh\eta}\nonumber\\
&={\paren{ac{R^2\over\alpha'}+bd}\cosh\eta+{R\over\sqrt{\alpha'}}\paren{ad+bc}\sinh\eta\over\cosh\fn{\eta-\eta_2}}\nonumber\\
&=
{R\over\sqrt{\alpha'}}{\sinh\fn{\eta-\eta_1}\over\cosh\fn{\eta-\eta_2}},
}
where we have defined $\eta_1$ by
\al{
\tanh\eta_1=-{ac{R^2\over\alpha'}+bd\over {R\over\sqrt{\alpha'}}\paren{ad+bc}}.
}
The trajectory becomes periodic if and only if $\eta_1=\eta_2$, that is,
\al{
{2cd{R\over\sqrt{\alpha'}}\over c^2{R^2\over\alpha'}+d^2}
&= {ac{R^2\over\alpha'}+bd\over {R\over\sqrt{\alpha'}}\paren{ad+bc}},
}
or
\al{
\paren{d^2-c^2{R^2\over\alpha'}}\paren{bd-ac{R^2\over\alpha'}}
&= 0.
}
A vanishing first factor means $\eta_2=\pm\infty$, and a finite period is obtained if and only if the second factor vanishes:
\al{
bd-ac{R^2\over\alpha'}
&= 0.
}
Therefore, the partition function becomes periodic if and only if $R^2/\alpha'$ can be written as
\al{
&
{R\over \sqrt{2\alpha'}} \slashed\in \mathbb{Q}, &
&{R^2\over \alpha'}={bd\over ac},
\label{periodicity condition}
}
where $ad-bc=1$ and either $a,d\in\sqrt{2}\mathbb Z$, $b,c\in\mathbb Z$ or $a,d\in\mathbb Z$, $b,c\in\sqrt{2}\mathbb Z$.
\item In particular, if $R^2/\alpha'$ is an irrational number, then the condition~\eqref{periodicity condition} cannot be met (unless $ac=0$, which leads to the trivial $\eta_1=0$), and the partition function becomes non-periodic, namely chaotic.
\end{itemize}
\end{itemize}
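The three large-boost behaviors can be illustrated numerically. The sketch below (units $\alpha'=1$; the reduction routine, sample values, and thresholds are our own choices) maps the trajectory point~\eqref{trajectory of constant R} into the fundamental region with the $S$- and $T$-transformations~\eqref{T-duality transformation}, and also checks the explicit null lattice point constructed in the footnote for $R/\sqrt{\alpha'}=\sqrt{2}$:

```python
import math

SQRT2 = math.sqrt(2.0)

def to_fundamental(t1, t2):
    """Reduce tau = t1 + i t2 to |Re tau| <= 1/sqrt(2), |tau| >= 1
    using T: tau -> tau + sqrt(2) and S: tau -> -1/tau."""
    for _ in range(100000):
        t1 -= round(t1/SQRT2)*SQRT2   # T-shifts into the strip
        norm = t1*t1 + t2*t2
        if norm >= 1.0:
            break
        t1, t2 = -t1/norm, t2/norm    # S-inversion
    return t1, t2

def reduced_height(rho, eta):
    """tilde-tau_2 of the trajectory point, Eq. (trajectory of constant R),
    mapped into the fundamental region; rho = R/sqrt(alpha')."""
    return to_fundamental(rho*math.tanh(eta), rho/math.cosh(eta))[1]

# R/sqrt(a') = sqrt(2), in sqrt(2)Q: runaway, the image escapes to the cusp
assert reduced_height(SQRT2, 10.0) > 1.0e2
# R/sqrt(a') = 2, not in sqrt(2)Q but satisfying Eq. (periodicity condition):
# the image stays bounded
assert reduced_height(2.0, 10.0) < 1.0e2

# null lattice point for R/sqrt(a') = sqrt(2): footnote values with
# q_n = q_d = 1 give (n, m, w) = (-2, 2, 1)
R = SQRT2
n, m, w = -2, 2, 1
lL = SQRT2*m             # l_L = sqrt(2/alpha') m
pL = n/R + R*w           # Eq. (internal momentum)
pR = n/R - R*w
assert abs(lL*lL - pR*pR) < 1e-12 and abs(pL) < 1e-12
```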
\begin{figure}[t]
\begin{center}
\hfill
\includegraphics[width=.32\textwidth]{Rsqrt2gray.pdf}
\hfill
\includegraphics[width=.32\textwidth]{R2gray.pdf}
\hfill
\includegraphics[width=.32\textwidth]{R2_13gray.pdf}
\hfill\mbox{}
\caption{The trajectory that starts from $\eta=0$ at $\paren{\tilde\tau_1,\tilde\tau_2}=\paren{0,R/\sqrt{\alpha'}}$ for a fixed value of $R/\sqrt{\alpha'}$ being $\sqrt{2}$, 2, and $2^{1/3}$ in the left, center, and right panels, respectively, showing the runaway, periodic, and chaotic limits.
We have shaded the fundamental region for the T-dual transformations.
}\label{T-duality}
\end{center}
\end{figure}
As a check, we show the numerical results for $R/\sqrt{\alpha'}=\sqrt{2}$, $2$, and $2^{1/3}$ in Fig.~\ref{T-duality}.
We see that they exhibit the runaway, periodic, and chaotic behaviors, respectively.
The result presented in this section provides a concrete example of the general argument presented in Sec.~\ref{classification}.
It is plausible that the large Higgs field limit in string theory falls into one of these three classes.
Note that our computation is based on the one-loop effective potential and that the higher-order corrections are significant around the region $A,R^{-1}\sim M_\text{s}$ ($=1/\sqrt{\alpha'}$). Therefore, the result so far should be interpreted as a first guess at the physical large-field limit along a potential valley after all the higher-order corrections are included.
In Fig.~\ref{T-duality}, we have checked the large $A$ limit for a fixed $R$. Is this a physical limit, and if not, what should it be?
Comparing Figs.~\ref{Jordan potential} and \ref{Einstein potential}, we see that it is a generic feature that there is a runaway vacuum no matter what the structure is around $A,R^{-1}\sim M_\text{s}$. It seems plausible that if the physical large $A$ limit is not the one with fixed $\tilde\tau_2$, then the large $A$ limit goes into the runaway vacuum after all. However, we consider all three limits, runaway, periodic, and chaotic, in order not to lose generality.
\begin{figure}[t]
\begin{center}
\hfill
\includegraphics[width=.5\textwidth]{potential.pdf}
\hfill\mbox{}
\caption{
Schematic figure for the Higgs potential. The low energy side is determined phenomenologically; the high energy side represents a runaway direction in the multi-degrees-of-freedom space.
}\label{potential}
\end{center}
\end{figure}
As said above, the extrapolation from the low energy data has revealed that there is a quasi-flat direction of the Higgs potential in the SM.
We are interested in the potential for the large field values.
Beyond the string or Planck scale, several quasi-flat directions open up in general. Therefore we need to consider a multi-dimensional field space. In the example examined in this section, it corresponds to the $A$-$R$ (or $\tilde\tau_1$-$\tilde\tau_2$) plane. As we have seen in this section, there is generally at least one runaway direction in this space, corresponding to opening up an extra dimension; see Fig.~\ref{potential}. We will discuss its physical implications in the subsequent sections.
\section{Eternal Higgs inflation}\label{eternal section}
As shown in the Introduction, the Higgs potential $V\sim\lambda_\text{eff}\ab{H}^4$ in the SM shows a quite peculiar behavior when extrapolated to very large field values: $\lambda_\text{eff}$, its running, and the bare Higgs mass can all be accidentally small.
In Ref.~\cite{Hamada:2014iga}, we have proposed the possibility that this behavior, the criticality so to speak, is a consequence of Planck-scale physics and is closely related to the cosmic inflation.
\begin{figure}[tn]
\begin{center}
\hfill
\includegraphics[width=.7\textwidth]{domainwall.pdf}
\hfill\mbox{}
\caption{Schematic figure for the maximum that yields the domain wall, which becomes the source for the eternal inflation.
}\label{domainwall}
\end{center}
\end{figure}
We have seen that the large field limit goes down to a runaway direction, which corresponds to opening up an extra dimension, in the multi-degrees-of-freedom space, as shown in Fig.~\ref{potential}.
Therefore, there is at least one maximum of the potential around the Planck scale; see Fig.~\ref{domainwall}.
This maximum can be a source of eternal inflation at the core of the domain wall~\cite{Hamada:2014raa} between the electroweak vacuum and the runaway vacuum, in which the fifth dimension is opened up. In order for this to work, the curvature of the potential at the maximum must be sufficiently small~\cite{Sakai:1995nh}:
\al{
\left.M_P^2{V_{\varphi\varphi}\over V}\right|_\text{maximum}\lesssim 1.4.
\label{DW condition}
}
In our scenario, this can be naturally satisfied as follows.
The potential for the fifth dimension can be seen by putting $D=5$ in Eq.~\eqref{to Einstein}. In stringy language, the action for the fifth dimension $R'\gg M_\text{s}^{-1}$ comes from the one-loop potential: in the Einstein frame, we get
\al{
S_\text{eff}
&\sim {M_\text{s}^2\over g_\text{s}^2}\int \text{d}^4x\sqrt{-g}
\paren{
\mathcal R-{\paren{\partial R'}^2\over R'^2}-g_\text{s}^2M_\text{s}^2{1\over R'}
}.
}
Switching to the canonical field $R'=e^{g_\text{s}\chi/M_\text{s}}$, we get
\al{
S_\text{eff}
&\sim \int \text{d}^4x\sqrt{-g}\paren{
M_\text{P}^2\mathcal R-\paren{\partial\chi}^2-e^{-\chi/M_\text{P}}M_\text{s}^4
},
}
where $M_\text{P}=M_\text{s}/g_\text{s}$.
Therefore, the stringy potential also gives
\al{
V_{\chi\chi}
&\sim {V\over M_\text{P}^2}.
}
It is remarkable that the potential changes by order unity when we vary $\chi$ by $M_\text{P}$, not by $M_\text{s}$, for large $\chi$. On the other hand, at low energies, the SM potential in the Einstein frame exhibits the same behavior if the non-minimal coupling $\xi$ is of order ten~\cite{Hamada:2014wna}.
Therefore, it is natural to conclude that the condition~\eqref{DW condition} is also met around the maximum.
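The order-one curvature can be cross-checked numerically. The following is an illustrative sketch, not part of the derivation, in units where $M_\text{s}=M_\text{P}=1$ (the overall scale drops out of the dimensionless ratio):

```python
import math

# Einstein-frame runaway potential for the canonical field chi,
# V(chi) = Ms^4 * exp(-chi / M_P); we set Ms = M_P = 1 since the
# scales cancel in the ratio M_P^2 V'' / V.
M_P = 1.0
V = lambda chi: math.exp(-chi / M_P)

# Second derivative by central finite differences.
def d2V(chi, h=1e-4):
    return (V(chi + h) - 2.0 * V(chi) + V(chi - h)) / h**2

# The ratio is chi-independent and equals 1, below the domain-wall
# inflation bound of about 1.4 quoted in Eq. (DW condition).
for chi in (0.0, 1.0, 5.0):
    ratio = M_P**2 * d2V(chi) / V(chi)
    assert abs(ratio - 1.0) < 1e-6
    assert ratio < 1.4
```

The pure exponential thus sits safely inside the bound for any field value.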
We note that in the original version of the topological Higgs inflation~\cite{Hamada:2014raa}, $\xi$ is used to make the maximum of the potential, and hence that it cannot account for the observed fluctuation of the cosmic microwave background (CMB). On the other hand, the scenario proposed in this paper allows the Higgs to be the source both for the eternal topological inflation and for the one that accounts for the CMB fluctuation.
\begin{figure}[tn]
\begin{center}
\hfill
\includegraphics[width=.5\textwidth]{runaway.pdf}
\hfill\mbox{}
\caption{
Schematic figure for the Higgs potential smoothly connected to the runaway direction.
}\label{runaway figure}
\end{center}
\end{figure}
\begin{figure}[tn]
\begin{center}
\hfill
\includegraphics[width=.4\textwidth]{tunneling1.pdf}
\hfill
\includegraphics[width=.5\textwidth]{tunneling2.pdf}
\hfill\mbox{}
\caption{
Schematic figure for the Higgs potential. On the left, the false vacuum has higher energy than the quasi-flat potential in the SM, while on the right, it has lower energy.
}\label{false vacuum}
\end{center}
\end{figure}
There are two possibilities for the potential beyond the maximum:
\begin{itemize}
\item The potential smoothly becomes runaway as in Fig.~\ref{runaway figure}.
\item The potential has another local minimum as in Fig.~\ref{false vacuum}.
\end{itemize}
In the latter case, the false vacuum gives another mechanism of eternal inflation.
This situation is similar to the original idea of inflation driven by a first order phase transition~\cite{Sato:1980yn,Guth:1980zm}. In the medium of the false vacuum, which is indicated by the (red) dot in Fig.~\ref{false vacuum}, there appears a bubble of the electroweak vacuum due to the tunneling, which is indicated by the dotted arrow.
This eternal inflation in the false vacuum caused the so-called graceful exit problem in the old inflation scenario~\cite{Linde:1981mu,Hawking:1982ga,Guth:1982pn}.
However, in the left case in Fig.~\ref{false vacuum}, the space inside the bubble experiences the second stage of inflation~\cite{Hamada:2014iga,Hamada:2014wna}, after the dotted arrow in the figure, and hence this problem is ameliorated, as we do not need bubbles to collide.
In the right case in Fig.~\ref{false vacuum}, we need another inflation to account for the observed CMB fluctuation such as the $B-L$ Higgs inflation.
\section{Cosmological constant}\label{solution to cosmological constant}
As is reviewed in detail in Appendix~\ref{MPP review}, the MPP requires degenerate vacua at the field value of the order of the Planck scale~\cite{Froggatt:1995rt,Froggatt:2001pa,Nielsen:2012pu}.
The cosmological constant of the runaway vacuum is exactly zero.
Then the MPP tells us that our electroweak vacuum must have the zero cosmological constant too.
This is a new solution to the cosmological constant problem in terms of the MPP.\footnote{
See also Ref.~\cite{Nielsen:2012pu} in which the cosmological constant problem is discussed in a different perspective.
}
On the other hand,
the current universe is dominated by the cosmological constant~\cite{Ade:2013zuv}
\al{
\rho_\Lambda^\text{obs}
\simeq
\paren{2.2\,\text{meV}}^4,
\label{cosmological constant observed}
}
and is entering the second inflationary stage.
This will eventually lead to the de Sitter space $dS_4$ with the length scale $H^{-1}$, where
\al{
H^2 = {\rho_\Lambda^\text{obs}\over 3M_P^2}.
}
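As a quick numerical cross-check (an illustrative sketch; the reduced Planck mass and the conversion factor below are standard assumed inputs, not taken from the text), the observed vacuum energy indeed corresponds to a de Sitter length of the familiar Hubble size:

```python
import math

# Natural units (GeV): H^2 = rho / (3 M_P^2) with the assumed reduced
# Planck mass M_P ~ 2.435e18 GeV and rho = (2.2 meV)^4.
M_P = 2.435e18                       # GeV (assumed value)
rho = (2.2e-12) ** 4                 # GeV^4
H = math.sqrt(rho / (3.0 * M_P**2))  # GeV
hbar_c = 1.973e-16                   # metres per GeV^{-1} (assumed)
length = hbar_c / H                  # H^{-1} in metres

assert 1.0e-42 < H < 1.3e-42         # H ~ 1.1e-42 GeV
assert 1.5e26 < length < 2.0e26      # ~ 1.7e26 m, the de Sitter radius
```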
We will discuss the possibility that the existence of the finite cosmological constant is understood as a statistical fluctuation.
\begin{figure}[tn]
\begin{center}
\hfill
\includegraphics[width=.4\textwidth]{universe.pdf}
\hfill\mbox{}
\caption{
The universe is divided into parts that will eventually become causally disconnected from each other at the end of their histories.
}\label{universe}
\end{center}
\end{figure}
First we point out that our universe is a part of a large universe
whose cosmological constant is fixed to zero by the MPP.\footnote{
The argument in this section may also apply for the multiverse~\cite{Kawai:2011qb,Kawai:2013wwa,Hamada:2014ofa,Kawana:2014vra,Hamada:2014xra}.
}
The large universe can be divided into parts that will eventually become causally disconnected de Sitter spaces
at the end of their histories, as in Fig.~\ref{universe}.
After the Euclideanization, each de Sitter space becomes $S^4$ with radius $r_U=1/H$.
We consider one of the $S^4$'s and latticize it with a lattice spacing of the order of $l_P=1/M_\text{P}$, and
let $S_i$ be the action on each site labeled by $i$.
The total action for the $S^4$ becomes the sum over positions:
\al{
S &= \sum_{i=1}^{r_U^4/l_P^4} S_i.
}
Assuming that $S_i$ are independent of each other,
the vanishing cosmological constant for the large universe leads to $\Braket{S_i}=0$ for each $i$ and in particular to $\Braket{S}=0$ for this part.
Therefore the value of $S$ fluctuates around zero and its variance can be evaluated as
\al{\label{fluctuation}
\Braket{S^2}\sim
N:={r_U^4\over l_P^4},
}
where we have assumed that the variance of each $S_i$ is of order unity.
We interpret Eq.~\eqref{fluctuation} as the variance of the actions of the $S^4$'s in the large universe.
Then the typical amount of the energy density of one $S^4$ is estimated as
\al{
\rho_\Lambda
\sim {\sqrt{\Braket{S^2}}\over r_U^4}
\sim {1\over l_P^2 r_U^2}\sim\paren{\ensuremath{\,\text{meV} }}^4.
}
Thus, we have obtained the right amount of the cosmological constant as
the fluctuation from zero.
{\color{black} This result has been obtained in Ref.~\cite{Sorkin:2007bd} in the context of causal set theory.}
We note that the value of $H$ is not really a prediction in this argument. We have rather provided a consistent explanation of having a finite amount of the cosmological constant, even though it is fixed to be zero for the large universe.
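The order-of-magnitude estimate $\rho_\Lambda\sim 1/(l_P^2 r_U^2)$ can be checked numerically. The sketch below is illustrative; the reduced Planck mass and the Hubble rate are assumed standard inputs:

```python
import math

# rho_Lambda ~ sqrt(N)/r_U^4 = 1/(l_P^2 r_U^2), with N = r_U^4/l_P^4,
# l_P = 1/M_P and r_U = 1/H. Assumed inputs: M_P = 2.435e18 GeV and
# H0 = 67.4 km/s/Mpc, converted to GeV via hbar = 6.582e-25 GeV s.
M_P = 2.435e18                        # GeV
H0_s = 67.4 / 3.086e19                # s^-1 (1 Mpc = 3.086e19 km)
H0 = H0_s * 6.582e-25                 # GeV

rho = 1.0 / ((1.0 / M_P) ** 2 * (1.0 / H0) ** 2)   # = M_P^2 H0^2, GeV^4
scale_meV = rho ** 0.25 / 1.0e-12     # fourth root in meV

# The statistical fluctuation lands at the meV scale, as in the text.
assert 1.0 < scale_meV < 3.0
```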
\section{Summary and discussions}\label{summary}
We have studied possible large field limits of the SM Higgs, assuming that it originates from a massless state at the tree level in heterotic string theory with its supersymmetry broken at the string scale.
In the toroidal compactification, putting a background for such a massless state corresponds to a boost in the momentum lattice. We have classified the large boost limits with fixed radius into three categories: runaway, periodic, and chaotic.
As a concrete toy model, we have examined the ten-dimensional $SO(16)\times SO(16)$ non-supersymmetric heterotic string, with a dimension being compactified on $S^1$ with the radius $R$.
We have considered the large field limit of a Wilson line on the $S^1$ with fixed $R$, and reproduced these three limits.
We have argued that this behavior is universal if the zero momentum limit of the emission vertex of the Higgs is written as a product of holomorphic (1,0) and anti-holomorphic (0,1) operators, not only in the case of toroidal compactification.
In the known models of fermionic construction and of orbifolding, the emission vertex tends to be written as such a product, and our argument applies to this wide class of models.
Physically, several degrees of freedom appear when the Higgs field value becomes larger than the Planck scale.
We have argued that there exists a runaway direction in this multi degrees of freedom space. This runaway vacuum corresponds to opening up an extra dimension.
It is noteworthy that this potential fits into the criteria of the MPP proposed by Froggatt and Nielsen.
The MPP requires that the electroweak vacuum is degenerate with this runaway vacuum, and hence that the cosmological constant of the electroweak vacuum is tuned to be zero in the large universe.
We have speculated that the observed amount of the cosmological constant can be understood as a fluctuation from zero in the framework of the MPP.
We may get the eternal inflation from this potential.
It is realized either as a topological inflation at the domain wall between the two vacua or as a decay from the false vacuum that traps the Higgs field.
In both cases, the Higgs field, which is rolling down the potential, may cause the succeeding inflation, which accounts for the observed CMB fluctuations, along the quasi-flat potential around the critical point.
It would be interesting to study the limit in more realistic SM-like models with orbifolding, fermionic constructions, etc.; see e.g.\ Ref.~\cite{Blaszczyk:2014qoa}.
{\color{black}
Finally, we comment on the dilaton potential. Though we consider general compactifications, which may not even have a geometric interpretation, let us illustrate the situation starting from a conventional ten dimensional string theory. The low energy effective action in ten dimensions reads
\al{
S &= {M_\text{s}^8\over g_\text{s}^2}\int\text{d}^{10}x\sqrt{-g}\,e^{-2\Phi}
\paren{\mathcal R+4\partial_\mu\Phi\,\partial^\mu\Phi+\cdots}\nonumber\\
&\quad +M_\text{s}^{10}\int\text{d}^{10}x\sqrt{-g}\paren{-C+\cdots}\nonumber\\
&\quad +\mathcal O\fn{g_s^2e^{2\Phi}},
}
where $\Phi$ is the dilaton field and $C$ is the dimensionless cosmological constant induced at the one-loop level. We note that in this string frame, $g_\text{s}$ and $\Phi$ always appear in the combination $g_\text{s}e^\Phi$. After the compactification,
\al{
S &= {M_\text{s}^2\over g_\text{s}^2}\paren{M_\text{s}^6V_6}\int\text{d}^4x\sqrt{-g_4}\,e^{-2\Phi}
\paren{\mathcal R_4+4\partial_\mu\Phi\,\partial^\mu\Phi+\cdots}\nonumber\\
&\quad +M_\text{s}^4\paren{M_\text{s}^6V_6}\int\text{d}^4x\sqrt{-g_4}\paren{-C+\cdots}\nonumber\\
&\quad +\mathcal O\fn{g_s^2e^{2\Phi}},
\label{string frame}
}
where $V_6$ is the compactification volume. Switching to the Einstein frame, we get
\al{
S &= {M_\text{s}^2\over g_\text{s}^2}\paren{M_\text{s}^6V_6}\int\text{d}^4x\sqrt{-g_E}
\paren{\mathcal R_E-2\partial_\mu\Phi\,\partial^\mu\Phi+\cdots}\nonumber\\
&\quad +M_\text{s}^4\paren{M_\text{s}^6V_6}\int\text{d}^4x\sqrt{-g_E}\paren{-C\,e^{4\Phi}+\cdots}\nonumber\\
&\quad +\cdots.
}
We see from the second line that the dilaton has the runaway potential $e^{4\Phi}$ for $\Phi\to-\infty$ if the cosmological constant $C$ is positive. In this limit, the expansion parameter $g_se^\Phi$ in Eq.~\eqref{string frame} becomes small, and the theory is weakly coupled. Since all the higher-loop corrections come with this combination as well, the runaway behavior is not altered by taking them into account. Therefore, this direction $\Phi\to-\infty$ necessarily comprises one of the runaway directions~\cite{Dine:1985he} in Fig.~\ref{potential}, and hence the arguments in Sections~\ref{eternal section} and \ref{solution to cosmological constant} apply quite generally.
}
\subsection*{Acknowledgement}
We thank Michael Blaszczyk, Stefan Groot Nibbelink, Orestis Loukas, S\'aul Ramos-S\'anchez, and Toshifumi Yamashita for useful comments.
This work is in part supported by the Grant-in-Aid for Scientific Research Nos.\ 22540277 (HK), 23104009, 20244028, and 23740192 (KO). The work of Y. H. is supported by a Grant-in-Aid for Japan Society for the Promotion of Science (JSPS) Fellows No.25$\cdot$1107.
\section{Appendix}
\subsection{Calculation of the parameter, $ c $}\label{sec:c_value}
We want to consider initial conditions for which the first state value does not lie on $ SO(3) $. To exactly verify the existence of such $ R(0) $ values, we need to evaluate the value of $ c $ in $ \tilde{V}^{-1}([0, c]) $ of \cref{thm:rigd_body_stab}.
We proceed by utilizing part of the proof of \cite[Lemma 2]{chang2018controller}. Define $ f : GL^{+}(3) \to \mathbb{R}_{\ge 0} $,
\[ f(R) = \frac{k_e}{4} ||R^T R - I||^2 \]
Take a small $ \delta > 0 $ such that every $ A \in \mathbb{R}^{3 \times 3} $ with $ ||A - I|| \le \delta $ is invertible, and let $ c = k_e \delta^2/ 4 $. Then, if $ R \in f^{-1}([0, c]) $, we have $ ||R^T R - I|| \le \delta $, meaning that $ R^T R $ and $ R $ are invertible. Hence $ f^{-1}([0, c]) \subset GL^{+}(3) $. For a value of $ \chi $ close to $ \delta $,
\begin{gather*}
||A - I|| \le \delta < \chi
\Rightarrow \sum_{i=j} (A_{ij} -1)^2 + \sum_{i\neq j} A_{ij}^2 < \chi^2 \\
\Rightarrow (A_{ii} -1)^2 + \sum_{j \neq i, j=1}^{3} A_{ij}^2 < \chi^2
\end{gather*}
for any $ i = 1, 2, 3 $. So,
\begin{gather*}
0 > 2 (A_{ii}^2 - 2 A_{ii} + 1 - \chi^2 + \sum_{j \neq i} A_{ij}^2) \\
\Rightarrow A_{ii}^2 - 2 \sum_{j \neq i} A_{ij}^2 > 3 A_{ii}^2 - 4 A_{ii} + 2 (1 - \chi^2)
\end{gather*}
If $ 4^2 - 4 \times 3 \times 2 (1 - \chi^2) < 0 $, i.e., $ \chi < \sqrt{1/3} $, the RHS of the above inequality is always positive. Hence,
\begin{gather*}
A_{ii}^2 - 2 \sum_{j \neq i} A_{ij}^2 > 0\\
\Rightarrow A_{ii}^2 > \sum_{j \neq i} A_{ij}^2 + 2 \prod_{j \neq i} |A_{ij}|\\
\Rightarrow |A_{ii}| > \sum_{j \neq i} |A_{ij}|
\end{gather*}
which means that the matrix $ A $ is strictly diagonally dominant and hence invertible. In summary, if $ \delta < \chi < \sqrt{1/3} $, then $ A $ is invertible. The remaining part of the proof of \cite[Lemma 2]{chang2018controller} then continues unchanged, arriving at \cite[Theorem 2]{chang2018controller}.
Hence we obtain a sufficient condition that the permitted values of $ R(0) $ should satisfy,
\begin{gather*}
\tilde{V}(R(0)) \le (c = k_e \delta^2 / 4) < k_e/ 12 \\
\Rightarrow ||R(0)^T R(0) - I|| < \sqrt{\frac{1}{3}}
\end{gather*}
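The bound can be spot-checked numerically. The following sketch (helper names are ours, and it is illustrative rather than part of the proof) verifies strict diagonal dominance for random matrices with Frobenius distance $0.99\sqrt{1/3}$ from the identity:

```python
import math, random

# Spot-check of the argument above: every A with Frobenius norm
# ||A - I|| < sqrt(1/3) is strictly diagonally dominant, hence
# invertible by the Levy-Desplanques theorem.
random.seed(0)

def random_near_identity(eps):
    E = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
    n = math.sqrt(sum(e * e for row in E for e in row))
    s = eps / n                       # rescale so that ||A - I|| = eps
    return [[(1.0 if i == j else 0.0) + s * E[i][j] for j in range(3)]
            for i in range(3)]

bound = math.sqrt(1.0 / 3.0)
for _ in range(1000):
    A = random_near_identity(0.99 * bound)
    for i in range(3):
        off = sum(abs(A[i][j]) for j in range(3) if j != i)
        assert abs(A[i][i]) > off     # strict diagonal dominance
```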
\subsection{Calculation of derivative of the height function}\label{sec:deriv_height}
We need to evaluate the derivative of $ W(x) $ along the flow (Lie derivative) on the submanifold $ \mathcal{M} $. A few useful relations are,
\begin{itemize}
\item $ tr(A) = tr(A^T), tr(ABC) = tr(BCA) $; $ A, B, C $ are square matrices
\item $ tr(AB) = 0 $ if $ A $ is a symmetric matrix and $ B $ is a skew-symmetric matrix
\item $ \widehat{(v \times w)} = [\hat{v}, \hat{w}] = 2\ \text{skew}(\hat{v}\hat{w}) $, $ v, w \in \mathbb{R}^3 $
\item $ v^T w = \frac{1}{2} \langle \hat{v}, \hat{w} \rangle $, $ v, w \in \mathbb{R}^3 $; $ ||v||^2 = \frac{1}{2} ||\hat{v}||^2 $
\end{itemize}
Now, the system \eqref{eq:rigid_body_euc} restricted to the submanifold $ \mathcal{M} $ can also be written as,
\begin{equation} \label{eq:rigid_body_Z}
\begin{aligned}
\dot{Z}_s &= \frac{1}{2} (Z_s\hat{\Omega}-\hat{\Omega}Z_s) + Z_k\hat{\Omega} - \frac{1}{2} \widehat{Z_k^\vee \times \Omega} \\
\dot{Z}_k &= \frac{1}{2} (Z_s\hat{\Omega}+\hat{\Omega}Z_s) + \hat{\Omega} + \frac{1}{2} \widehat{Z_k^\vee \times \Omega} \\
\dot{\Omega} &= -k_p Z_k^\vee - k_d \Omega
\end{aligned}
\end{equation}
where $ Z = R_0^T (R - R_0) $. We have chosen the height function as,
\begin{gather*}
W(R, \Omega) = \frac{k_p}{4} (||Z_s||^2 + ||Z_k||^2) + \frac{1}{2}||\Omega||^2 + \epsilon \langle Z_k^\vee, \Omega \rangle
\end{gather*}
Its Lie derivative is
{\small \begin{multline*}
\dot{W}|_\mathcal{M}(R, \Omega) = \frac{k_p}{2}\left( \left\langle Z_s, \frac{1}{2} (Z_s\hat{\Omega}-\hat{\Omega}Z_s) + Z_k\hat{\Omega} - \frac{1}{2} \widehat{Z_k^\vee \times \Omega} \right\rangle \right. \\
\left. + \left\langle Z_k, \frac{1}{2} (Z_s\hat{\Omega}+\hat{\Omega}Z_s) + \hat{\Omega} + \frac{1}{2} \widehat{Z_k^\vee \times \Omega} \right\rangle \right) \\
+ \langle \Omega, -k_p Z_k^\vee - k_d \Omega \rangle + \epsilon \left\langle \frac{1}{2} (Z_s\hat{\Omega}+\hat{\Omega}Z_s)^\vee + \Omega + \frac{1}{2} Z_k^\vee \times \Omega, \Omega \right\rangle\\
+ \epsilon \langle Z_k^\vee, -k_p Z_k^\vee - k_d \Omega \rangle
\end{multline*}}
We know that,
\begin{gather*}
\langle Z_s, Z_s\hat{\Omega} \rangle = tr(Z_s^T Z_s \hat{\Omega}) = tr((Z_s^T Z_s) \hat{\Omega}) = 0 \\
\langle Z_s, \hat{\Omega}Z_s \rangle = tr(Z_s^T \hat{\Omega} Z_s) = tr(\hat{\Omega} (Z_s Z_s^T )) = 0 \\
\langle Z_s, \widehat{Z_k^\vee \times \Omega} \rangle = tr(Z_s^T (\widehat{Z_k^\vee \times \Omega})) = 0 \\
\langle Z_k, \widehat{Z_k^\vee \times \Omega} \rangle = 2 \langle Z_k^\vee, Z_k^\vee \times \Omega \rangle = 2 \langle \Omega, Z_k^\vee \times Z_k^\vee \rangle = 0 \\
\langle \Omega, Z_k^\vee \times \Omega \rangle = \langle \Omega \times \Omega, Z_k^\vee \rangle = 0 \\
\langle Z_k, \hat{\Omega} \rangle = 2 \langle Z_k^\vee, \Omega \rangle
\end{gather*}
{\small \begin{multline*}
\left\langle Z_k, \frac{1}{2} (Z_s\hat{\Omega}+\hat{\Omega}Z_s) \right\rangle = \frac{1}{2} tr(-Z_k Z_s \hat{\Omega} - Z_k \hat{\Omega} Z_s)\\
= \frac{1}{2} tr((-Z_k Z_s \hat{\Omega})^T - Z_k \hat{\Omega} Z_s) = \frac{1}{2} tr(-\hat{\Omega} Z_s Z_k - Z_k \hat{\Omega} Z_s)\\
= \frac{1}{2} tr(-Z_s Z_k \hat{\Omega} - Z_s Z_k \hat{\Omega}) = - tr(Z_s^T Z_k \hat{\Omega}) = - \langle Z_s, Z_k\hat{\Omega} \rangle
\end{multline*}}
{\small \begin{multline*}
\left\langle (Z_s\hat{\Omega}+\hat{\Omega}Z_s)^\vee, \Omega \right\rangle = \frac{1}{2} \left\langle \hat{\Omega}, (Z_s\hat{\Omega}+\hat{\Omega}Z_s) \right\rangle\\
= \frac{1}{2} tr(\hat{\Omega}^T Z_s \hat{\Omega} + \hat{\Omega}^T \hat{\Omega} Z_s) = tr(\hat{\Omega}^T Z_s \hat{\Omega})
\end{multline*}}
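These identities can be spot-checked numerically with random data. The sketch below (helper names are ours; it is illustrative only) verifies a representative subset:

```python
import random

# Numerical spot-check of the trace identities above, with random data:
# Zs symmetric, Zk = hat(w) skew, Omega-hat = hat(v).
random.seed(1)

def hat(v):
    return [[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inner(A, B):  # <A, B> = tr(A^T B)
    return sum(A[i][j] * B[i][j] for i in range(3) for j in range(3))

v = [random.uniform(-1, 1) for _ in range(3)]   # Omega
w = [random.uniform(-1, 1) for _ in range(3)]   # Zk_vee
M = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
Zs = [[0.5 * (M[i][j] + M[j][i]) for j in range(3)] for i in range(3)]
Om, Zk = hat(v), hat(w)
cross = [w[1] * v[2] - w[2] * v[1],             # Zk_vee x Omega
         w[2] * v[0] - w[0] * v[2],
         w[0] * v[1] - w[1] * v[0]]

tol = 1e-12
assert abs(inner(Zs, mul(Zs, Om))) < tol        # <Zs, Zs Om-hat> = 0
assert abs(inner(Zs, mul(Om, Zs))) < tol        # <Zs, Om-hat Zs> = 0
assert abs(inner(Zs, hat(cross))) < tol         # symmetric vs. skew
assert abs(inner(Zk, hat(cross))) < tol         # 2 <w, w x v> = 0
assert abs(inner(Zk, Om) - 2.0 * sum(a * b for a, b in zip(w, v))) < tol
```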
So we have the simplification,
\begin{multline*}
\dot{W}|_\mathcal{M}(R, \Omega) = -(k_d - \epsilon) ||\Omega||^2 - \epsilon k_d \langle Z_k^\vee, \Omega \rangle - \epsilon k_p ||Z_k^\vee||^2\\
+ \frac{\epsilon }{2} tr(\hat{\Omega}^T Z_s \hat{\Omega})
\end{multline*}
Now with $ \tilde{R} = R_0^T R $,
\begin{gather*}
x^T Z_s x = x^T ((R_0^TR - I)_s) x \\
= x^T (\tilde{R}_s - I) x = x^T (\frac{\tilde{R} + \tilde{R}^T}{2} - I) x \\
= \frac{1}{2} (x^T\tilde{R}x + x^T\tilde{R}^T x) - ||x||^2
\le \frac{1}{2} (||x||^2 + ||x||^2) - ||x||^2 \le 0
\end{gather*}
as $ \tilde{R} $ is a rotation matrix and $ ||\tilde{R}x|| = ||x|| $. So $ Z_s $ is negative semi-definite implying,
\begin{gather*}
\quad tr(\hat{\Omega}^T Z_s \hat{\Omega}) = \sum_{i=1}^3 (\Omega \times e_i)^T Z_s (\Omega \times e_i) \le 0
\end{gather*}
with $ (e_i)_j = \delta_{ij} $, $ j = 1, \dots, 3 $. Hence,
\begin{gather*}
\dot{W}|_\mathcal{M}(R, \Omega) \le -(k_d-\epsilon) ||\Omega||^2 - \epsilon k_d \langle Z_k^\vee, \Omega \rangle - \epsilon k_p ||Z_k^\vee||^2 \le 0,
\end{gather*}
where the upper bound is negative definite if $\displaystyle 0 < \epsilon < \frac{4k_pk_d}{4k_p+k_d^2} $.
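The origin of the bound on $\epsilon$ can be checked by treating the estimate as a quadratic form in $(||\Omega||, ||Z_k^\vee||)$; a minimal numerical sketch (illustrative, with the gains used later in the simulations):

```python
import math

# The form -(kd - eps) x^2 - eps*kd*x*y - eps*kp*y^2, with
# x = ||Omega|| and y = ||Zk_vee||, is negative definite iff
# kd - eps > 0 and (eps*kd)^2 < 4 (kd - eps) eps kp, which
# rearranges to eps < 4 kp kd / (4 kp + kd^2).
kp, kd = 4.0, 2.0
eps_max = 4.0 * kp * kd / (4.0 * kp + kd**2)

def neg_definite(eps):
    return kd - eps > 0 and (eps * kd) ** 2 < 4.0 * (kd - eps) * eps * kp

assert math.isclose(eps_max, 1.6)
assert neg_definite(0.99 * eps_max)       # just inside the bound
assert not neg_definite(1.01 * eps_max)   # just outside
```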
\subsection{Rigid body stabilization}
Using the above result, we propose an ambient nonlinear controller for rigid body stabilization. Consider again the feedback integrator form of the rigid body dynamics \eqref{eq:rigid_body_attr},
\begin{equation}\label{eq:rigid_body_attr_copy}
\begin{aligned}
\dot{R} &= R \hat{\Omega} - k_e R (R^T R - I) \\
\dot{\Omega} &= u
\end{aligned}
\end{equation}
For this system, $ \mathcal{M} = SO(3) \times \mathbb{R}^3$ is the invariant manifold being considered which is embedded in the ambient space $ \mathfrak{R} = \mathbb{R}^{3 \times 3} \times \mathbb{R}^3 $. Also, consider again the function $ \tilde{V} $ defined in \eqref{eq:V_tilde}.
\begin{theorem}\label{thm:rigd_body_stab}
The control law given by,
\begin{equation}\label{eq:controller}
u = -k_p Z_k^\vee - k_d \Omega, \quad \mathbb{R} \ni k_p, k_d > 0
\end{equation}
asymptotically stabilizes an equilibrium point $ (R_0, 0) \in \mathcal{M} $ of the system \eqref{eq:rigid_body_attr_copy} for almost all initial conditions starting from $ \tilde{V}^{-1}([0, c]) $ and some $ c > 0 $.
\end{theorem}
\noindent The corresponding closed-loop system employing \eqref{eq:rigid_body_attr_copy} and \eqref{eq:controller} is,
\begin{equation} \label{eq:rigid_body_euc}
\begin{aligned}
\dot{R} &= R \hat{\Omega} - k_e R (R^T R - I) \\
\dot{\Omega} &= -k_p Z_k^\vee - k_d \Omega
\end{aligned}
\end{equation}
with $ (R, \Omega) $ as its states.
\begin{proof}
We first verify the assumptions corresponding to the set-up in \cref{sec:prelim},
\begin{itemize}
\item We have a Riemannian manifold $ (\mathbb{R}^{3\times 3} \times \mathbb{R}^3, \cdot ) $ on which a locally Lipschitz continuous vector field \eqref{eq:rigid_body_euc} is given.
\item It can be directly claimed from \cite[Theorem 2]{chang2018controller} that every trajectory of \eqref{eq:rigid_body_attr_copy} starting from a point in $ \tilde{V}^{-1}([0, c]) $, for some $ c > 0 $, stays in $ \tilde{V}^{-1}([0, c]) $ for all $ t \ge 0 $ and asymptotically converges to the set $ \mathcal{M} = \tilde{V}^{-1}(0) $ as $ t \to \infty $. Since $ \tilde{V}^{-1}([0, c]) $ is compact and positively invariant, the first state $ R $ in \eqref{eq:rigid_body_euc} is bounded if initial states $ (R(0), \Omega(0)) \in \tilde{V}^{-1}([0, c]) $. Now, consider a function $ V_2 = \frac{1}{2} ||\Omega||^2 $ whose derivative is evaluated using \eqref{eq:rigid_body_euc}:
{\small \begin{gather*}
\dot{V}_2 = - k_d ||\Omega||^2 - k_p \Omega^T Z_k^\vee \\
\le - k_d ||\Omega|| \left( ||\Omega|| - \frac{k_p}{k_d} ||Z_k^\vee|| \right)
\le - k_d \epsilon ||\Omega||^2 \le 0
\end{gather*}}
if $ \displaystyle ||\Omega|| \ge \frac{k_p}{k_d (1-\epsilon)} ||Z_k^\vee|| $. We know that $ Z_k^\vee $ is bounded because $ R $ has already been shown to be bounded. Hence, either $ ||\Omega|| $ is bounded by a fraction of $ ||Z_k^\vee|| $ or $ \dot{V}_2 \le 0 $, implying that $ ||\Omega|| $ is non-increasing. So, $ \Omega $ is bounded.
Therefore we have a Cauchy problem for \eqref{eq:rigid_body_euc} with initial value $ (R(0), \Omega(0)) $ such that the solution is bounded.
\item From the previous point, we know that the $ \omega $-limit set $ \omega(R(0), \Omega(0)) $, which is a compact and connected set, is contained in a closed embedded submanifold $ \mathcal{M} = SO(3) \times \mathbb{R}^3 \subset \mathfrak{R} = \mathbb{R}^{3\times 3} \times \mathbb{R}^3 $.
\item Let $ O $ be an open tubular neighborhood of $ \mathcal{M} $ in $ \mathfrak{R} $. This set is used in our context to determine the region of convergence in $ \mathfrak{R} $. There is a real-valued $ C^1 $ function $ W : O \to \mathbb{R} $ such that $ \dot{W} \le 0 $ on $ \mathcal{M} $, defined as below:
\begin{equation}\label{eq:height_func}
W(R, \Omega) = \frac{k_p}{4} (||Z_s||^2 + ||Z_k||^2) + \frac{1}{2}||\Omega||^2 + \epsilon \langle Z_k^\vee, \Omega \rangle
\end{equation}
which serves as the height function with $ Z = R_0^T (R - R_0) $. The derivative of $ W(R, \Omega) $ along the flow on $ \mathcal{M} $ is (\cref{sec:deriv_height}),
{\small \begin{equation}\label{eq:height_deriv}
\dot{W}|_\mathcal{M}(R, \Omega) \le -(k_d-\epsilon) ||\Omega||^2 - \epsilon k_d \langle Z_k^\vee, \Omega \rangle - \epsilon k_p ||Z_k^\vee||^2 \le 0
\end{equation}}
for $\displaystyle 0 < \epsilon < \frac{4k_pk_d}{4k_p+k_d^2} $.
$ E $ is defined as $ \lbrace (R, \Omega) \in \mathcal{M} \mid \dot{W}(R, \Omega) = 0 \rbrace $:
\begin{equation}\label{eq:E_zero}
E = \lbrace (R, \Omega) \in SO(3) \times \mathbb{R}^3 \mid Z_k^\vee = 0, \Omega = 0 \rbrace
\end{equation}
so that $ \dot{W}(R, \Omega) < 0 $ on $ \mathcal{M} \setminus E $.
\end{itemize}
\noindent Thus the main assumptions required for \cref{thm:containment} are satisfied.
We observe that the height function can be re-written as
{\small \[ W(R, \Omega) = \frac{k_p}{4} tr((I - R_0^T R)^T(I - R_0^T R)) + \frac{1}{2}||\Omega||^2 + \epsilon \langle Z_k^\vee, \Omega \rangle \]}
which is different from standard Lyapunov functions used for rigid body stabilization like $ V(R, \Omega) = \frac{k_p}{4} tr(I - R_0^T R) + \frac{1}{2}||\Omega||^2 $. Among other changes, it has an additional cross term $ \langle Z_k^\vee, \Omega \rangle $ which helps us to identify the equilibrium point as one of the connected components of $ E $ and then employ \cref{thm:containment} to prove asymptotic convergence.
\begin{remark}\label{rem:neighborhood}
The maximum value that the real number $ c $ can take is less than $ k_e/ 12 $ (see \cref{sec:c_value}) and this ensures that there exists an open tubular neighborhood $ O $ which is the superset of $ \tilde{V}^{-1}([0, c)) $.
\end{remark}
\noindent On the set $ E $ we know that,
\begin{gather*}
Z_k^\vee = 0 \Rightarrow R_0^T R - R^T R_0 = 0
\end{gather*}
With $ \tilde{R} = R_0^T R $,
\[ \tilde{R} = \tilde{R}^T \Rightarrow \tilde{R}^2 = I \]
Using the axis-angle representation of rotation matrices~\cite{murray2017mathematical}, $ \tilde{R} = \exp(\theta \hat{\xi}) = I + \sin{\theta}\,\hat{\xi} + (1 - \cos{\theta})\hat{\xi}^2, \hat{\xi} \in \mathfrak{so}(3) $,
\begin{gather*}
\tilde{R}^2 = I \Rightarrow e^{2\theta\hat{\xi}} = I = e^{2n\pi\hat{k}} \quad k, \xi \in \mathbb{R}^3, ||k|| = 1 \\
\Rightarrow \theta = n\pi, \hat{\xi} = \hat{k}
\end{gather*}
The set of all such matrices, $ \tilde{R} $, can be divided into two sets as follows,
\begin{gather}
\theta = 2 m \pi, m \in \mathbb{Z} \Rightarrow \tilde{R} = I \Rightarrow tr(\tilde{R}) = 3 \\
\begin{aligned}\label{eq:Zkvee_zero}
\theta &= (2 m + 1) \pi, m \in \mathbb{Z} \\
\Rightarrow \tilde{R} &= \exp(\pi\hat{\xi}) \neq I \Rightarrow tr(\tilde{R}) = 1 + 2\cos{\theta} = -1
\end{aligned}
\end{gather}
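The two branches can be spot-checked with Rodrigues' formula; an illustrative sketch (helper names are ours):

```python
import math

# Rodrigues' formula exp(theta * xi_hat): theta = 2*pi gives R~ = I
# (trace 3); theta = pi gives a symmetric involution with trace -1.
def hat(v):
    return [[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm(theta, xi):          # I + sin(t) K + (1 - cos(t)) K^2
    K = hat(xi)
    K2 = mul(K, K)
    return [[(1.0 if i == j else 0.0) + math.sin(theta) * K[i][j]
             + (1.0 - math.cos(theta)) * K2[i][j] for j in range(3)]
            for i in range(3)]

xi = [1.0 / math.sqrt(3)] * 3                  # any unit axis
R1 = expm(2 * math.pi, xi)
R2 = expm(math.pi, xi)
tr = lambda A: A[0][0] + A[1][1] + A[2][2]

assert abs(tr(R1) - 3.0) < 1e-12               # R~ = I branch
assert abs(tr(R2) + 1.0) < 1e-12               # trace -1 branch
assert all(abs(R2[i][j] - R2[j][i]) < 1e-12
           for i in range(3) for j in range(3))          # symmetric
assert all(abs(mul(R2, R2)[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))          # R~^2 = I
```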
Thus $ E = E_1 \cup E_2 $ is described below,
\begin{itemize}
\item $ E_1 = \lbrace (R_0, 0) \rbrace $. As this subset contains only one point, it is trivially connected. Value of $ W $ in $ E_1 $ evaluated with $ (Z, \Omega) = (0,0) $ gives $ W(R, \Omega) = 0 $.
\item $ E_2 = \lbrace (R, \Omega) \in SO(3) \times \mathbb{R}^3 \mid tr(R_0^T R) = tr(\tilde{R}) = -1, \tilde{R} = \tilde{R}^T, \Omega = 0 \rbrace $. A point in the set $ E_2 $ has the form $ (R_0 \tilde{R}, 0) $ where $ \tilde{R} = e^{\pi \hat{\xi}} $ for a unit vector $ \xi $ as in \eqref{eq:Zkvee_zero}. Consider two points $ x_1 = (R_0 e^{\pi \hat{\xi}_1}, 0)\ \text{and}\ x_2 = (R_0 e^{\pi \hat{\xi}_2}, 0) $ in $ E_2 $ and the corresponding axis vectors $ \xi_1, \xi_2 \in \mathbb{R}^3 $ with unit magnitudes. Define a path variable $\displaystyle \xi(\alpha) = \frac{(1-\alpha) \xi_1 + \alpha \xi_2}{||(1-\alpha) \xi_1 + \alpha \xi_2||}, \alpha \in [0, 1] $ such that $ \xi(0) = \xi_1 $ and $ \xi(1) = \xi_2 $. The corresponding path in $ \mathcal{M} $ connecting $ x_1 $ and $ x_2 $ is $ \lbrace x(\alpha) = (R(\alpha), \Omega(\alpha)) \in SO(3) \times \mathbb{R}^3 \mid R(\alpha) = R_0 \tilde{R}(\alpha) = R_0 e^{\pi \widehat{\xi(\alpha)}}, \Omega=0 \rbrace $. Any point $ x(\alpha) $ in this path connecting $ x_1, x_2 $ also belongs to $ E_2 $, meaning the set $ E_2 $ is path connected and hence connected.
Evaluating $ W $ in $ E_2 $ with $ (Z_k, \Omega) = (0,0) $,
\begin{multline*}
W(R, \Omega) = \frac{k_p}{4}||Z_s||^2 = \frac{k_p}{4} tr((R_0^TR - I ) (R_0^TR - I ))\\
= \frac{k_p}{4} tr(\tilde{R}^2 - 2\tilde{R} + I) = \frac{k_p}{4} tr(2I - 2\tilde{R}) = 2k_p
\end{multline*}
as $ \tilde{R} = R_0^T R = \tilde{R}^T $ and $ tr(R_0^T R) = -1 $ on $ E_2 $.
\end{itemize}
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=0.8]
\begin{scope}[shift={(-2, 0)}]
\draw [->] (0,-1) -- (0,4) node[above] {$ W $};
\end{scope}
\draw (0, -2) -| (6, 5) node[pos=0.4, below]{$ \mathfrak{R} = \mathbb{R}^{3\times 3} \times \mathbb{R}^3 $} -| (0, -2);
\draw (3, 1.5) circle (1.6cm);
\node[left] at (5, 0.8) {{\small $ \mathcal{M} = SO(3) \times \mathbb{R}^3 $}};
\draw (0.8, 0.8) to[out=90, in=200] ++(2.5, 3) node[above] {$ \dot{W} < 0\ \text{on}\ \mathcal{M} \setminus E $} .. controls +(right:1cm) and +(up:0.5cm) .. ++(2.5, -1.5) node[left=0.2cm] {$ O $} to[out=250, in=45] (3, -1) to[out=125, in=-45] (0.8, 0.8);
\draw[very thick] (2.5, 2) to node[pos=0.5, above] {$ E_2 $} (3, 2);
\draw [dashed] (2.1, 2) -- +(-2.1-2, 0) node [left] {$ 2 k_p $};
\filldraw (3.5, 1.5) circle (1pt) node[above] {$ E_1 $};
\draw [dashed] (3.1, 1.5) -- +(-5.1, 0) node [left] {$ 0 $};
\end{tikzpicture}
\caption{Illustration of the components involved for the case of rigid body stabilization. A general version can be found in \cite{arsie_patching}.}
\label{fig:manifold_height}
\end{figure}
\Cref{fig:manifold_height} illustrates the basic components involved in this proof. $ \mathfrak{R} $ is the set of tuples of $ 3 \times 3 $ matrices and angular velocity vectors, while $ \mathcal{M} $ is the set of tuples of valid rotation matrices and angular velocities. Further, $ O $ is the tubular neighborhood of $ \mathcal{M} $ in which the height function $ W $ is defined. On the y-axis, the value of $ W(R, \Omega) $ for any $ (R, \Omega) \in O $ is shown. We have already proved that $ \dot{W}(R, \Omega) < 0 $ on $ \mathcal{M} \setminus (E_1 \cup E_2) $. From \eqref{eq:height_func} and \eqref{eq:E_zero}, any subset of $ E $ which lies in a level set of $ W $ will have the structure,
\[ W^{-1}(c), \quad c = \frac{k_p}{4} ||Z_s||^2 \ge 0. \]
As shown earlier, the connected components of $ E $ lie in level sets of $ W $.
\noindent Finally completing the arguments of the proof,
\begin{itemize}
\item $ E_1 \subset W^{-1}(0), E_2 \subset W^{-1}(2k_p) $ which implies that $ \lbrace W(E_i) \rbrace_{i \in \lbrace 1, 2 \rbrace} = \lbrace 0, 2k_p \rbrace $ has no accumulation point in $ \mathbb{R} $. Hence, we can say that $ \lbrace E_i \rbrace_{i \in \lbrace 1, 2 \rbrace} $ are \emph{contained} in $ W $ using \cref{def:containment}.
\item Employing \cref{thm:containment}, $ \omega(R(0), \Omega(0)) \subset E_i $ for a unique $ i \in \lbrace 1, 2 \rbrace $.
\item From the dynamics \eqref{eq:rigid_body_euc} we know that $ E_2 $ is forward invariant, that is,
\begin{gather*}
Z_k^\vee = 0, \Omega = 0, R^T R = I \\
\Rightarrow \dot{R} = 0, \dot{\Omega} = 0
\end{gather*}
on $ E_2 $. In addition, $ E_2 $ is an unstable set for the system dynamics on $ \mathcal{M} $~\cite{bayadi2014almost}.
\end{itemize}
This proves that $ \omega(R(0), \Omega(0)) \subset E_1 = \lbrace (R_0, 0) \rbrace $ for almost all $ (R(0), \Omega(0)) $ in $ \tilde{V}^{-1}([0, c]) $.
\end{proof}
\section{Simulations}\label{sec:simulations}
In this section we look at a few numerical examples to illustrate the strategies previously presented.
\subsection{Ideal case}\label{sec:sim_normal}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/euc_rigid_normal.pdf}
\caption{Nonlinear rigid body stabilization using height function}
\label{fig:euc_rigid_normal}
\end{figure}
We demonstrate the performance of the rigid body system~\eqref{eq:rigid_body_euc} with initial conditions on the manifold $ \mathcal{M} $. Consider $ (R_0, 0) $ as the desired equilibrium point ($ R_0 = \text{diag}\left\lbrace-1, -1, 1\right\rbrace $) along with the initial conditions,
\begin{gather*}
R(0) = \exp(\frac{2\pi}{3}\hat{e}_2),\quad \Omega(0) = [0, 1, 1]^T, \text{where}\ e_2 = [0,1,0]^T
\end{gather*}
and the parameters being,
\begin{gather*}
k_e = 1, k_p = 4, k_d = 2
\end{gather*}
The chosen value of $ \epsilon $ is $ 0.99 \times \frac{4k_pk_d}{4k_p+k_d^2} = 1.584 $ which satisfies the constraint needed in \eqref{eq:height_deriv}.
\Cref{fig:euc_rigid_normal} depicts the magnitudes of orientation and attitude errors along with the control magnitude. We recover the expected ideal performance in this case.
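For reference, the admissible range of $ \epsilon $ for these gains can be checked directly. The following is an illustrative Python snippet (the gain values are those stated above):

```python
# Gains from the simulation section and the induced bound on epsilon
# (the constraint from the height-function derivative).
k_p, k_d = 4.0, 2.0

eps_max = 4 * k_p * k_d / (4 * k_p + k_d ** 2)   # = 32/20 = 1.6
eps = 0.99 * eps_max                             # ~ 1.584, as in the text
print(eps_max, eps)
```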
\subsection{Numerical Robustness}\label{sec:sim_noise}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/euc_rigid_noise.pdf}
\caption{Demonstration of robustness}
\label{fig:euc_rigid_noise}
\end{figure}
To illustrate the strength of the proposed control, consider an initial state of the body not on $ \mathcal{M} $:
\begin{gather*}
R(0) = 1.1 \times \exp(\frac{2\pi}{3}\hat{e}_2) \in \mathbb{R}^{3 \times 3} \setminus SO(3)
\end{gather*}
with all the other conditions and parameters being identical to \cref{sec:sim_normal}. One can verify that,
\begin{equation*}
||R(0)^T R(0) - I|| = 0.3637 < \sqrt{1/3}
\end{equation*}
implying that the initial condition is in the permitted set (\cref{sec:c_value}).
Since in practical applications randomness can enter the system, we check robustness to measurement noise. To emulate it, white noise of relative magnitude $ 10^{-3} $ is added to both states $ (R, \Omega) $.
In this case too, convergence to the desired equilibrium is observed, up to the range of the measurement noise (\cref{fig:euc_rigid_noise}). We also notice that the state $ R $ is initially outside $ SO(3) $, but soon converges to the manifold (modulo noise).
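The admissibility claim for this initial condition is easy to reproduce numerically; since $ R(0) = 1.1\,Q $ with $ Q \in SO(3) $, the Frobenius norm equals $ \lVert 0.21\, I \rVert = 0.21\sqrt{3} \approx 0.3637 $. An illustrative Python check using Rodrigues' formula for the matrix exponential of a skew-symmetric matrix:

```python
import numpy as np

def hat(w):
    """Hat map R^3 -> so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# R(0) = 1.1 * exp((2 pi / 3) e2^): Rodrigues' formula for the exponential.
theta = 2 * np.pi / 3
K = hat(np.array([0.0, 1.0, 0.0]))
R0 = 1.1 * (np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K))

err = np.linalg.norm(R0.T @ R0 - np.eye(3))   # Frobenius norm = 0.21*sqrt(3)
print(round(err, 4), err < np.sqrt(1 / 3))    # 0.3637 True
```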
\section{Ambient control formulation using Lyapunov-like functions}\label{sec:nonlinear}
The results of the previous section are obtained via linearization and therefore suffer from obvious drawbacks such as the inability to accurately estimate the region of convergence. In this section, we present a novel nonlinear design method based on Lyapunov techniques which is utilized to stabilize the rigid body using feedback integrators.
An important result on locating $ \omega $-limit sets using height functions is employed. Given a bounded solution of an autonomous vector field on a Riemannian manifold, a finer estimate of the location of the $ \omega $-limit set can be obtained using the results of \cite{arsie_patching}. We summarize these below.
\subsection{Preliminaries}\label{sec:prelim}
The set-up in \cite[Section 2]{arsie_patching} is restated while changing the notations from $ \mathcal{M} $ to $ \mathfrak{R} $, $ S $ to $ \mathcal{M} $ and $ \Omega $ to $ \omega $ so that consistency with the rest of our paper is maintained:
\begin{itemize}
\item A Riemannian manifold $ (\mathfrak{R}, g) $ of class $ C^2 $ on which a locally Lipschitz continuous vector field
\begin{equation}\label{eq:vec_field}
\dot{x} = f(x)
\end{equation}
is given.
\item Consider a Cauchy problem for \eqref{eq:vec_field} with initial value $ x(0) $ such that the corresponding solution $ x(t, x(0)) $ is bounded.
\item Assume that the $ \omega $-limit set $ \omega(x(0)) $, which is a compact and connected set, is contained in a closed embedded submanifold $ \mathcal{M} \subset \mathfrak{R} $. Equivalently $ \mathcal{M} $ is attracting for the solution of \eqref{eq:vec_field} starting at $ x(0) $.
\item Let $ O $ be an open tubular neighborhood of $ \mathcal{M} $ in $ \mathfrak{R} $. Assume that there exists a real-valued $ C^1 $ function $ W : O \to \mathbb{R} $ such that $ \dot{W}(x) \le 0 $ on $ \mathcal{M} $, where $ \dot{W}(x) $ is the derivative of $ W(x) $ along the flow (Lie derivative). Moreover, let $ E := \lbrace x \in \mathcal{M} \mid \dot{W}(x) = 0 \rbrace $ so that $ \dot{W}(x) < 0 $ on $ \mathcal{M} \setminus E $.
\end{itemize}
The function $ W $ as described above is called a \textit{height function} for the pair $ (\mathcal{M}, f) $.
\begin{definition}\cite[Definition 5]{arsie_patching}\label{def:containment}
Let $ \lbrace E_i \rbrace_{i \in \mathbb{I}} $ be the connected components of $ E $, where $ \mathbb{I} = \lbrace 1, 2, \dots \rbrace \subset \mathbb{Z}^+ $. Given a function $ W $ as in the assumptions, we say that the components $ \lbrace E_i \rbrace_{i \in \mathbb{I}} $ are \emph{contained} in $ W $ if each $ E_i $ lies in a level set of $ W $, and the subset $ \lbrace W(E_i) \rbrace_{i \in \mathbb{I}} \subset \mathbb{R} $ has at most a finite number of accumulation points in $ \mathbb{R} $.
\end{definition}
The main result is stated below.
\begin{theorem}\cite[Theorem 6]{arsie_patching}\label{thm:containment}
If the components $ \lbrace E_i \rbrace_{i \in \mathbb{I}} $ are \emph{contained} in $ W $ according to \cref{def:containment}, then $ \omega(x(0)) \subset E_i $ for a unique $ i \in \mathbb{I} $.
\end{theorem}
\section{Conclusions}
We initially introduced an existing linearization procedure for attitude control design in Euclidean space. Then, we proved that a single height function defined on the ambient Euclidean space can be used to derive a stabilizing nonlinear control for the attitude of a rigid body with a prescribed region of attraction. This is also illustrated through exemplary simulations. The algorithm is robust to measurement noise and to numerical computation errors arising from digital implementation.
\section{Introduction}
There are many established techniques for attitude control design employing parametrization of the set of rotational matrices~\cite{tsiotras1995new}. A brief summary of the representations involved in description of the kinematics of motion is given in~\cite{shuster1993survey}. Simple control laws in terms of Euler parameters~\cite{mortensen1968globally}, Cayley-Rodrigues parameters~\cite{junkins1991near} have been formulated. However, using such parametrization and hence local charts could cause undesirable unwinding behavior~\cite{bhat2000topological} and require switching between these local coordinate systems for control design.
On the other hand, in the recent past, coordinate-free techniques using geometric ideas have been used to design rigid body attitude controllers~\cite{bloch2003nonholonomic,bullo2004geometric,bayadi2014almost}. However, implementing such feedback laws from geometric control theory~\cite{crouch1984spacecraft,lee2011geometric} requires special variants of numerical integrators (e.g. variational integrator) to preserve the geometric structure of the manifold and yield reliable results.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.8]
\draw (0, 0) -| (4, 4) node[pos=0.4, below]{$ \mathfrak{R} $} -| (0, 0);
\begin{scope}[shift={(2, 2)}]
\draw (0, 0) circle (1.5cm);
\node[left] at (-1.5/2, 0) {$ \mathcal{M} $};
\path (1.5, 0) arc (0:60:1.5cm) node (eqm) {};
\filldraw[fill=white] (eqm) circle (2pt) node[anchor=south west] {$ x_0 $};
\draw[decoration={markings,mark=at position 0 with {\arrow[scale=2]{<}}},postaction={decorate}]
(eqm) to[bend right] +(1.2, -0.4);
\end{scope}
\end{tikzpicture}
\caption{Stabilization in Euclidean space}
\label{fig:patching}
\end{figure}
While simple geometric PD controllers can be used to stabilize a rigid body~\cite{bullo2004geometric}, numerical integration errors quickly creep into the digital implementations of these schemes, thus resulting in the states not lying on the $ SO(3)$ manifold, and being pushed into the ambient space of $ 3 \times 3 $ real-matrices. In such situations can we still guarantee that these numerical schemes will recover and converge to the manifold? Feedback integrators~\cite{chang_feedback} provide a positive answer to this question. \Cref{fig:patching} illustrates this scenario with $ \mathfrak{R} $ being the set of tuples of $ 3 \times 3 $ matrices and angular velocity vectors, while $ \mathcal{M} $ is the set of tuples of valid rotation matrices and angular velocities. In \cite{chang_feedback} the authors have shown that if the rigid body dynamics is seen as the restriction of a special vector field in an ambient Euclidean space, then Euclidean numerical integration schemes also lead to convergence of states to the manifold. Further, for the case when trajectories starting from an ambient space converge to an embedded submanifold, \cite{arsie_patching} shows that the omega limit set lies in a unique connected component of the level sets corresponding to a Lyapunov-like function. Our work builds on these two techniques to design Euclidean controllers which guarantee that the rigid body converges to an equilibrium point $ x_0 $ on $ \mathcal{M} $, even if at some instants the states do not lie on $ \mathcal{M} $.
More recent work \cite{chang_controller} has addressed this problem by linearizing the ambient dynamics, and is therefore valid only in a small neighborhood of the desired set-point. We briefly introduce this approach in \cref{sec:chang}. In this article (primarily in \cref{sec:nonlinear}), we develop a new procedure for nonlinear design using Lyapunov-like functions on the ambient system to guarantee asymptotic convergence to an equilibrium point in $ \mathcal{M} $. Finally, to demonstrate the performance of the controller, numerical simulations are presented in \cref{sec:simulations}.
\subsection*{Notation:}
\begin{itemize}
\item Euclidean inner product is used in this paper:
\[ \langle A, B\rangle = \sum_{i,j} A_{i,j}B_{i,j} = \mathrm{tr}(A^T B) \]
for matrices of identical dimensions. The norm induced by this inner product is used for vectors and matrices.
\item $ SO(3) = \lbrace R \in \mathbb{R}^{3\times3} \mid R^T R = I, \det(R) = 1 \rbrace $ is the Lie group of all rotations and $ \mathfrak{so}(3) = \lbrace A \in \mathbb{R}^{3\times3} \mid A = -A^T \rbrace $ is the corresponding Lie algebra.
\item Hat map $ \wedge : \mathbb{R}^3 \to \mathfrak{so}(3) $,
\[ \hat{\Omega} = \begin{bmatrix}
0 & -\Omega_3 & \Omega_2 \\
\Omega_3 & 0 & -\Omega_1 \\
-\Omega_2 & \Omega_1 & 0
\end{bmatrix} \]
for $ \Omega \in \mathbb{R}^3 $. The inverse map is the vee map, $ \vee $, such that $ (\hat{\Omega})^\vee = \Omega $ for all $ \Omega \in \mathbb{R}^3 $ and $ \widehat{(A^\vee)} = A $ for all $ A \in \mathfrak{so}(3) $.
\item For a square matrix $ A $, $ A_s := (A + A^T)/2 $ is the symmetric part and $ A_k := (A - A^T)/2 $ is the skew-symmetric part.
\end{itemize}
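For concreteness, the maps above can be sketched in a few lines of Python (an illustrative implementation, not part of the controller code; the sign conventions follow the hat-map matrix displayed above):

```python
import numpy as np

def hat(Omega):
    """Hat map: R^3 -> so(3)."""
    return np.array([[0.0, -Omega[2], Omega[1]],
                     [Omega[2], 0.0, -Omega[0]],
                     [-Omega[1], Omega[0], 0.0]])

def vee(A):
    """Vee map: so(3) -> R^3, inverse of the hat map."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def sym(A):
    """Symmetric part A_s = (A + A^T) / 2."""
    return (A + A.T) / 2

def skew(A):
    """Skew-symmetric part A_k = (A - A^T) / 2."""
    return (A - A.T) / 2

Omega = np.array([1.0, -2.0, 3.0])
print(vee(hat(Omega)))   # recovers Omega
```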
\section{Stabilization of a rigid body using linearization}\label{sec:chang}
This section summarizes the linearization procedure introduced in \cite{chang_controller}. Consider a control system $ \Sigma $ on $ \mathbb{R}^n $,
\begin{equation*}
\Sigma : \dot{x} = X(x, u), x \in \mathbb{R}^n, u \in \mathbb{R}^k
\end{equation*}
Assume that there is an $ m $-dimensional submanifold $ \mathcal{M} $ of $ \mathbb{R}^n $ that is invariant under the flow of the system. So we can restrict the system to $ \mathcal{M} $ as,
\begin{equation*}
\Sigma|\mathcal{M} : \dot{x} = X(x, u), x \in \mathcal{M}, u \in \mathbb{R}^k
\end{equation*}
It is convenient to use the ambient control system $ \Sigma $ and the Cartesian coordinates on the ambient space in order to design controllers for the system $ \Sigma|\mathcal{M} $ on the manifold $ \mathcal{M} $.
Let $ \tilde{V} $ be a non-negative function on the Euclidean space such that $ \mathcal{M} = \tilde{V}^{-1}(0) $. Since $ \tilde{V} $ attains its minimum value of $ 0 $ at every point of $ \mathcal{M} $, we have $ \nabla\tilde{V}(x) = 0 $ for all $ x \in \mathcal{M} $. We obtain a new ambient control system by subtracting $ \nabla\tilde{V} $ from the control vector field,
\begin{equation*}
\tilde{\Sigma} : \dot{x} = \tilde{X}(x, u), x \in \mathbb{R}^n, u \in \mathbb{R}^k
\end{equation*}
with $ \tilde{X}(x, u) = X(x, u) - \nabla\tilde{V}(x) $. It is easily verified that $ \tilde{\Sigma}|\mathcal{M} = \Sigma|\mathcal{M} $, meaning that the system dynamics is preserved on $ \mathcal{M} $. The negative gradient of $ \tilde{V} $ helps in making $ \mathcal{M} $ attractive for $ \tilde{\Sigma} $ dynamics \cite{chang_feedback}.
Now, let $ (x_0, u_0) \in \mathcal{M} \times \mathbb{R}^k $ be an equilibrium point of $ \Sigma|\mathcal{M} $ with $ X(x_0, u_0) = 0 $. Jacobian linearization can be carried out on the ambient system $ \tilde{\Sigma} $ around the equilibrium point in the ambient space to come up with stabilizing controllers for the original system on the manifold. The linearization of $ \tilde{\Sigma} $ is given by,
\begin{equation*}
\tilde{\Sigma}_0^l : \dot{x} = \frac{\partial \tilde{X}}{\partial x}(x_0, u_0)(x - x_0) + \frac{\partial \tilde{X}}{\partial u}(x_0, u_0)(u - u_0)
\end{equation*}
where $ (x, u) \in \mathbb{R}^n \times \mathbb{R}^k $.
\begin{theorem}\cite[Theorem II.3]{chang_controller}\label{thm:linear_stab}
If a linear feedback controller $ u : \mathbb{R}^n \to \mathbb{R}^k $ exponentially stabilizes the equilibrium point $ x_0 $ for the linearization $ \tilde{\Sigma}_0^l $ of the ambient system $ \tilde{\Sigma} $, then it also exponentially stabilizes the equilibrium point $ x_0 $ for $ \Sigma|\mathcal{M} $.
\end{theorem}
We are concerned with the application of \cref{thm:linear_stab} to the \textit{rigid body system} with full actuation,
\begin{equation} \label{eq:rigid_body}
\begin{aligned}
\dot{R} &= R \hat{\Omega} \\
\dot{\Omega} &= u
\end{aligned}
\end{equation}
where $ (R, \Omega) \in \mathcal{M} \subset \mathbb{R}^{3 \times 3}\times \mathbb{R}^3 $. $ \mathcal{M} = SO(3) \times \mathbb{R}^3$ is the invariant manifold being considered. It is assumed that the control input is appropriately scaled and shifted to account for nonlinear terms in the dynamics.
Let $ GL^{+}(3) = \lbrace R \in \mathbb{R}^{3 \times 3} \mid \det{R} > 0 \rbrace $ and define a function $ \tilde{V} $ on $ GL^{+}(3) \times \mathbb{R}^3 $ by
\begin{equation}\label{eq:V_tilde}
\tilde{V}(R, \Omega) = \frac{k_e}{4} ||R^T R - I ||^2
\end{equation}
with constant $ k_e > 0 $. One can verify that $ \tilde{V}^{-1}(0) = \mathcal{M} $ and
\[ \nabla_R\tilde{V} = k_e R(R^TR-I), \quad \nabla_\Omega \tilde{V} = 0 \]
So the modified rigid body system ($ \tilde{\Sigma} $) in the ambient space is,
\begin{equation} \label{eq:rigid_body_attr}
\begin{aligned}
\dot{R} &= R \hat{\Omega} - k_e R (R^T R - I) \\
\dot{\Omega} &= u
\end{aligned}
\end{equation}
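The attractivity of $ \mathcal{M} $ under \eqref{eq:rigid_body_attr} can be illustrated with a plain forward-Euler integration in the ambient space $ \mathbb{R}^{3\times 3}\times\mathbb{R}^3 $. The following Python sketch (not the paper's simulation code) sets $ u = 0 $, so only the effect of the $ -\nabla\tilde{V} $ term is visible; the values $ k_e = 1 $, $ h = 10^{-3} $ and the off-manifold initial condition are illustrative choices:

```python
import numpy as np

def hat(w):
    """Hat map R^3 -> so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def manifold_error(R):
    """Frobenius distance of R^T R from the identity."""
    return np.linalg.norm(R.T @ R - np.eye(3))

# Off-manifold initial condition: a scaled rotation matrix
# (Rodrigues' formula for exp(theta * hat(e2))).
theta = 2 * np.pi / 3
K = hat(np.array([0.0, 1.0, 0.0]))
R = 1.1 * (np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K))
Omega = np.array([0.0, 1.0, 1.0])   # u = 0, so Omega stays constant
k_e, h = 1.0, 1e-3

err0 = manifold_error(R)
for _ in range(5000):
    # Forward-Euler step of R' = R Omega^ - k_e R (R^T R - I); the second
    # term is -grad_R V~ and makes the manifold attracting.
    R = R + h * (R @ hat(Omega) - k_e * R @ (R.T @ R - np.eye(3)))
err1 = manifold_error(R)
print(err0, err1)   # the manifold error shrinks by orders of magnitude
```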
To design a controller, the system~\eqref{eq:rigid_body_attr} is linearized to get,
\begin{equation}\label{eq:rigid_body_linear}
\begin{aligned}
\dot{Z}_s &= -2 k_e Z_s \\
\dot{Z}_k^{\vee} &= \Omega \\
\dot{\Omega} &= u
\end{aligned}
\end{equation}
with $ Z = R_0^T \Delta R = R_0^T (R - R_0) $ being a transformation of $ R $.
For $ \mathbb{R} \ni k_p, k_d > 0 $, the linear PD controller
\begin{equation*}
u = -k_p Z_k^\vee - k_d \Omega
\end{equation*}
exponentially stabilizes the equilibrium point $ (R_0, 0) $ for both the linearized system \eqref{eq:rigid_body_linear} and the rigid body system \eqref{eq:rigid_body} on $ \mathcal{M} $.
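This stability claim is easy to double-check numerically: per axis the $ (Z_k^\vee, \Omega) $ closed loop has characteristic polynomial $ s^2 + k_d s + k_p $, while the $ Z_s $ block decays autonomously at rate $ -2k_e $. An illustrative Python check with the gains used later in the simulations ($ k_p = 4 $, $ k_d = 2 $, so all eigenvalues have real part $ -1 $):

```python
import numpy as np

# Gains used in the simulations; Z_s decays autonomously at rate -2 k_e.
k_e, k_p, k_d = 1.0, 4.0, 2.0

# Closed-loop matrix of the (Z_k^vee, Omega) subsystem of the linearization
# under u = -k_p Z_k^vee - k_d Omega.
I3, Z3 = np.eye(3), np.zeros((3, 3))
A = np.block([[Z3, I3],
              [-k_p * I3, -k_d * I3]])

eigs = np.linalg.eigvals(A)           # roots of s^2 + k_d s + k_p, per axis
print(eigs.real.max(), -2 * k_e)      # both strictly negative
```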
\section{Introduction}
Let $ S^p\mathbb{R}^n \subset (\mathbb{R}^n)^{\otimes p}$ denote the space of order $p$ symmetric tensors of format $n\times \ldots \times n$ and let $\mathcal{V}_{n,p}\subset S^p\mathbb{R}^n$ denote the set of symmetric rank-1 tensors, also called the \emph{Veronese variety}. In application oriented areas such as blind source separation, data compression, imaging or genomic data analysis \cite{low-rank-approx, tensor_decomp, lathauwer-rank-one, rank-one-approx2} one is interested in finding the \emph{best symmetric rank-one approximation} to a given $v\in S^p\mathbb{R}^n$; that is, to solve the optimization problem
\begin{equation}\label{optimization} \argmin\limits_{x\in\mathcal{V}_{n,p}} \,\lVert x-v\rVert^2
\end{equation}
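One standard local method for this problem is the (symmetric) higher-order power iteration of \cite{lathauwer-rank-one}: iterate $ x \mapsto vx^{p-1}/\lVert vx^{p-1}\rVert $ and return $ \lambda\, x^{\otimes p} $ with $ \lambda = v(x,\ldots,x) $. A minimal Python sketch for $ p = 3 $ (an illustration, not the methods used in this paper; convergence to a global optimum is not guaranteed, and shifted variants are more robust):

```python
import numpy as np

def shopm(v, max_iter=500, tol=1e-12, seed=0):
    """Symmetric higher-order power iteration for an order-3 symmetric
    tensor v in R^{n x n x n}: iterate x <- v x^2 / ||v x^2||. Returns a
    unit vector x and lam = v(x, x, x); lam * x^{(x)3} is a candidate
    (local) best rank-one approximation."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(v.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        y = np.einsum('ijk,j,k->i', v, x, x)   # v x^{p-1} with p = 3
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    x = y
    lam = np.einsum('ijk,i,j,k->', v, x, x, x)
    return x, lam

# Sanity check on v = a (x) a (x) a: the iteration recovers a/||a|| and
# lam = ||a||^3 (here ||a|| = 3, so lam = 27).
a = np.array([1.0, 2.0, 2.0])
v = np.einsum('i,j,k->ijk', a, a, a)
x, lam = shopm(v)
print(np.round(x, 6), lam)
```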
Draisma and Horobet \cite{draisma-horobet} pointed out that many algorithms to compute best rank-one approximations compute local optima. They conclude that an appropriate measure of complexity of solving the optimization problem \cref{optimization} is the number of critical points (as elements in projective space) of the function
\[d_v:\mathcal{V}_{n,p}\to \mathbb{R}, x\mapsto \lVert x-v\rVert^2 := \sum_{1\leq i_1,\ldots,i_p\leq n} (v_{i_1,\ldots,i_p} - x_{i_1,\ldots,i_p})^2 .\]
Note that $d_v$ is an algebraic function in the entries of $v$. When counting also complex critical points (i.e., points where the gradient of $d_v$ vanishes), the number of critical points of $d_v$ is constant on a dense subset of $ S^p\mathbb{R}^n$. This number is called the \emph{euclidean distance degree} of $\mathcal{V}_{n,p}$ \cite{edd} and it is equal to $D(n,p):=\sum_{i=0}^{n-1} (p-1)^i$ \cite{sturmfels-cartwright}.
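For illustration, $ D(n,p) $ is immediate to evaluate (a trivial Python helper reproducing the count used in the tables below):

```python
def ed_degree(n, p):
    """Euclidean distance degree of the Veronese variety V_{n,p}:
    D(n,p) = sum_{i=0}^{n-1} (p-1)^i (Cartwright-Sturmfels)."""
    return sum((p - 1) ** i for i in range(n))

print([ed_degree(4, p) for p in range(2, 6)])   # [4, 15, 40, 85]
```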
In applications, however, one often is interested in the number of critical points that are real. Henceforth, we call the number of real critical points of $d_v$ the \emph{real euclidean distance degree of $v$ with respect to~$\mathcal{V}_{n,p}$}. What makes things hard is that, in contrast to the complex count, the real euclidean distance degree is not generically constant. For the Veronese variety this can be observed for instance in \cite[Theorem 2.3]{robeva} or in the experimental data from \cref{sec:experiments} below. Nevertheless, this motivated Draisma and Horobet \cite{draisma-horobet} to study the \emph{average} number of real critical points of~$d_v$, when $v$ is random. Their model of randomness is a symmetric tensor $v=(v_{i_1,\ldots,i_p})$, where the $v_{i_1,\ldots,i_p}$ are centered gaussian random variables with variance $\sigma^2=\left(\tfrac{p!}{\alpha_1!\cdots\alpha_n!}\right)^{-1}$ and $\alpha_j$ is the number of~$j$'s appearing in~$(i_1,\ldots,i_p)$. Let us call such a random tensor a \emph{gaussian symmetric tensor}. This distribution is also called standard Gaussian with respect to the Bombieri norm, because the coefficients of a gaussian symmetric tensor $v$ in the Bombieri norm are standard normal gaussian random variables, see \cite[Sec. 4.1]{draisma-horobet}.
The purpose of this article is to compute \emph{expected real euclidean distance degree of $\mathcal{V}_{n,p}$} defined as
\begin{equation}\label{E-def}
E(n,p):=\mean\limits_{v\in S^p\mathbb{R}^n \text{ gaussian symmetric}} \;[\text{real euclidean distance degree of } v \text{ w.r.t. } \mathcal{V}_{n,p}]
\end{equation}
\begin{rem}
In \cite{edd} $E(n,p)$ is denoted $\mathrm{aEDdegree}(\mathcal{V}_{n,p})$.
\end{rem}
The \emph{Gaussian Orthogonal Ensemble} is a parametric family of random symmetric matrices $A\in\mathbb{R}^{n\times n}$ with probability density functions $\sqrt{2}^{-n}\sqrt{\pi}^{\,\frac{-n(n+1)}{2}} \exp(-\tfrac{1}{2\sigma^2}\,\mathrm{Trace}[(A+u I)^2])$, parametrized by $(u,\sigma^2)\in\mathbb{R}\times\mathbb{R}_{>0}$; see \cite[Sec. 2.3]{mehta} or \cite[Sec.~2.2.6]{tao_matrix}. We call such a random $A$ a \emph{GOE matrix} and write $A\sim \mathrm{GOE}(n;u,\sigma^2)$. We call the probability distribution imposed by $u=0$ and $\sigma^2=1$ the \emph{standard Gaussian Orthogonal Ensemble} and abbreviate $\mathrm{GOE}(n)$ in this case.
Note that Draisma and Horobet's model of a random tensor is a direct generalization of the standard Gaussian Orthogonal Ensemble from matrices to tensors.
It is therefore not surprising that random matrix theory of the Gaussian Orthogonal Ensemble will be of use when computing \cref{E-def}. Indeed, in \cite[Theorem 4.3]{draisma-horobet} Draisma and Horobet present a formula for $E(n,p)$ in terms of the \emph{expected modulus of the characteristic polynomial of a GOE-matrix}, which is
\begin{equation}\label{E-eq}
E(n,p)=\frac{\sqrt{\pi}}{\sqrt{2}^{\,n-1}\,\Gamma(\tfrac{n}{2})}\;\mean\limits_{\substack{w\sim N(0,1)\\ A\sim \mathrm{GOE}(n-1)}} \left\lvert \det\left(\sqrt{p}\,w I_{n-1} -\sqrt{2(p-1)}\,A\right)\right\rvert.
\end{equation}
Observe that
\begin{equation}\label{E-eq2}
\mean\limits_{A\sim \mathrm{GOE}(n-1)} \left\lvert \det\big(\sqrt{p}\,w I_{n-1} -\sqrt{2(p-1)}\,A\big)\right\rvert = \sqrt{p-1}^{n-1}\mean\limits_{A\sim \mathrm{GOE}(n-1;u,1)} \left\lvert \det(A)\right\rvert,
\end{equation}
where $ u= \sqrt{\tfrac{p}{2(p-1)}}\, w$. Starting from this, the computation of $E(n,p)$ is divided into two steps: We first compute $\mean_{A\sim \mathrm{GOE}(n-1;u,1)} \lvert \det(A)\rvert$ and then take the expectation over $u$.
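As a sanity check in the smallest case $ n=2 $: there $ A\sim\mathrm{GOE}(1) $ is a single standard normal variable, so the inner expectation is $ \mean\,\lvert\sqrt{p}\,w-\sqrt{2(p-1)}\,A\rvert=\mean\,\lvert N(0,3p-2)\rvert=\sqrt{3p-2}\,\sqrt{2/\pi} $, and the prefactor $ \sqrt{\pi}/\sqrt{2} $ yields $ E(2,p)=\sqrt{3p-2} $, matching the table in \cref{sec:values}. A Monte Carlo sketch in Python (assuming the convention $ \mathrm{GOE}(1)=N(0,1) $):

```python
import numpy as np

# Monte Carlo check of E(2,p) = sqrt(3p-2): for n = 2 the GOE(1) matrix is
# a single N(0,1) variable a, and w ~ N(0,1) independently, so
# E(2,p) = sqrt(pi/2) * E|sqrt(p) w - sqrt(2(p-1)) a| = sqrt(3p-2).
rng = np.random.default_rng(1)
p, N = 3, 400000

w = rng.standard_normal(N)
a = rng.standard_normal(N)
inner = np.abs(np.sqrt(p) * w - np.sqrt(2 * (p - 1)) * a)

E2p = np.sqrt(np.pi / 2) * inner.mean()
print(E2p, np.sqrt(3 * p - 2))   # both approximately 2.646
```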
Thus, this article makes a contribution to random tensor theory by computing $E(n,p)$ and also makes a contribution to random matrix theory by giving a formula for $\mean_{A\sim \mathrm{GOE}(n;u,1)} \lvert \det(A)\rvert$ in~\cref{thm}. To the best of our knowledge, the latter result is new.
In what follows let us denote
\begin{equation}\label{def_I_J}
\mathcal{I}_n(u):=\mean\limits_{A\sim \mathrm{GOE}(n;u,1)} \lvert \det(A)\rvert,\quad\text{and}\quad \mathcal{J}_n(u):=\mean\limits_{A\sim \mathrm{GOE}(n;u,1)}\det(A).
\end{equation}
We remark that $\lvert \mathcal{J}_n(u)\rvert \leq \mathcal{I}_n(u)$ by the triangle inequality. A computation of $\mathcal{J}_n(u)$ can be found in \cite[Sec. 22]{mehta} and the ideas in this paper are very much inspired by this reference. In fact, $\mathcal{I}_n(u)$ can be expressed in terms of~$\mathcal{J}_n(u)$ and a collection of \emph{Hermite polynomials}. The following is our first main contribution.
\begin{thm}[The expected absolute value of the determinant of a GOE matrix]\label{thm}
Let $u\in\mathbb{R}$ be fixed. Define the functions $P_{-1}(x),P_0(x),P_1(x),P_2(x),\ldots$ via
\[P_k(x)=\begin{cases} -e^{\frac{x^2}{2}}\int_{t=-\infty}^x e^{-\tfrac{t^2}{2}}\,\d t,& \text{if } k=-1\\
H_{e_k}(x),&\text{if } k=0,1,2,\ldots.\end{cases}\]
where $H_{e_k}(x)$ is the $k$-th (probabilist's) Hermite polynomial; see \cref{my_polynomials}. The following holds.
\begin{enumerate}
\item If $n=2m$ is even, we have
\[\mathcal{I}_n(u)=\mathcal{J}_n(u)+\frac{\sqrt{2\pi} \,e^{-\tfrac{u^2}{2}}}{\prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)}\,\sum_{1\leq i,j \leq m} \det(\Gamma_1^{i,j}) \; \det\begin{bmatrix} P_{2i-1}(u) & P_{2j}(u) \\ P_{2i-2}(u) & P_{2j-1}(u) \end{bmatrix},\]
where $\Gamma_1^{i,j}:=\left[\Gamma\left(r+s-\tfrac{1}{2}\right)\right]_{\substack{1\leq r\leq m,r\neq i\\1\leq s\leq m, s\neq j}}$.
\item If $n=2m-1$ is odd we have
\[\mathcal{I}_n(u)=\mathcal{J}_n(u) + \frac{\sqrt{2}\,e^{-\tfrac{u^2}{2}}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\,\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j})\, \det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix},\]
where $\Gamma_2^{i,j}=\left[\Gamma\left(r+s+\tfrac{1}{2}\right)\right]_{\substack{0\leq r\leq m-1, r\neq i\\0\leq s\leq m-1,s\neq j}}$.
\end{enumerate}
\end{thm}
~\\
\cref{thm} enables us to compute $E(n,p)$. The proof of the following theorem is given in \cref{sec:proof2}.
\begin{thm}[The expected real euclidean distance degree of $\mathcal{V}_{n,p}$]\label{cor}
We denote by $F(a,b,c,x)$ Gauss' hypergeometric function (see \cref{def_hypergeom}) and define the matrices $\Gamma_1^{i,j}$ and $\Gamma_2^{i,j}$ as in \cref{thm}. For all integers~$p\geq 2$ the following holds.
\begin{enumerate}
\item If $n=2m+1$, we have
\begin{equation*}E(n,p)=1+\frac{\sqrt{\pi}\sqrt{p-1}^{n-2}\sqrt{3p-2}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\sum_{1\leq i,j\leq m} \frac{\det(\Gamma_1^{i,j})\Gamma\left(i+j-\frac{1}{2}\right)}{\tfrac{3-2i-2j}{1-2i+2j}\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j-1}} \, F\left(2-2i,1-2j,\tfrac{5}{2}-i-j,\tfrac{3p-2}{4(p-1)}\right)
\end{equation*}
\item If $n=2m$, we have
\begin{align*}
E(n,p)&= \frac{\sqrt{p-1}^{n-2}\,\sqrt{3p-2}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\sum_{j=0}^{m-1} \Bigg[ \frac{\sqrt{\pi}\det(\Gamma_2^{0,j})(2j+1)!}{(-1)^{j} 2^{2j}\,j!}\,\frac{(p-2)^jp}{(p-1)^{j}(3p-2)} \;F\left(-j,\tfrac{1}{2},\tfrac{3}{2},\tfrac{-p^2}{(3p-2)(p-2)}\right)\\
&\quad - \frac{ \det(\Gamma_2^{0,j}) \, \Gamma\left(j+\frac{1}{2}\right)}{2\,\left(-\frac{3p-2}{4(p-1)}\right)^{j+1}} +\sum_{i=1}^{m-1} \frac{ \det(\Gamma_2^{i,j}) \,\Gamma\left(i+j+\frac{1}{2}\right)}{\tfrac{(1-2i-2j)}{(1-2i+2j)}\,\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j} } F\left(-2j,-2i+1,\tfrac{3}{2}-i-j,\tfrac{3p-2}{4(p-1)}\right)\Bigg].
\end{align*}
\end{enumerate}
\end{thm}
\begin{rem}
\begin{enumerate}
\item From the description \cref{def_hypergeom} of $F(a,b,c,x)$ it is easy to see that, if both of the numeratorial parameters $a,b$ are non-positive integers, then $F(a,b,c,x)$ is a polynomial in $x$ of degree $\min\set{-a,-b}$.
Hence, there exists some $f(x)\in\mathbb{R}[x], \deg(f)= 2m-1$, with
\[E(2m+1,p)=1+ \sqrt{(p-1)(3p-2)}\, (p-1)^{m-1} \,f\left(\tfrac{4(p-1)}{3p-2}\right),\]
and there exist $g(x)\in\mathbb{R}[x]$, $\deg(g)=2m-2$, and $g_j(x)\in\mathbb{R}[x]$, $\deg(g_j)=j$, such that
\[E(2m,p)=\frac{p(p-1)^{m-1}}{\sqrt{3p-2}} \sum_{j=0}^{m-1} \left(\frac{p-2}{p-1}\right)^j \;g_j\left(\tfrac{p^2}{(3p-2)(p-2)}\right) + (p-1)^{m-1}\sqrt{3p-2}\;g\left(\tfrac{4(p-1)}{3p-2}\right).\]
Compare also the table in \cref{sec:values}.
\item The formula in \cref{cor} (2) also holds for $p=2$: Although $F(-j,\frac{1}{2},\frac{3}{2},\tfrac{-p^2}{(3p-2)(p-2)})$ has a pole of order $j$ at $p=2$, multiplication with $(p-2)^j$ removes the singularity.
\item For all $k\in\mathbb{N}$ we have $\Gamma(k+\tfrac{1}{2})=q\sqrt{\pi}$ for some $q\in\mathbb{Q}$; see \cite[43:4:3]{atlas}. A simple count reveals that in both formulas of $E(n,p)$ the number of $\sqrt{\pi}$'s in the numerator equals the number of $\sqrt{\pi}$'s in the denominator. This implies that $E(2m+1,p)\in\mathbb{Q}\big(\sqrt{(p-1)(3p-2)}\,\big)$ and $E(2m,p)\in\mathbb{Q}\big(\sqrt{3p-2}\,\big)$.
\end{enumerate}
\end{rem}
\subsection{Formulas for $E(n,p)$ for small values of $n$}\label{sec:values}
Using the formulas in \cref{cor} we can compute $E(n,p)$ for some small values of $n$. We used \textsc{sage} \cite{sage} to compute it for $2\leq n\leq 9$; the formulas are presented in \cref{table1} below. The source code of the scripts is given in \cref{appendix}.
\vspace{-0.2cm}
\begin{center}
\begin{table}[H]
\begin{tabular}{| l | l |}
\hline
$n$ & $E(n,p)$ \\[0.1cm] \hline
2 & $\sqrt{3 \, p - 2}
$ \\[0.1cm]
3 & $1+\frac{4 \, {\left(p - 1\right)}^{\frac{3}{2}}}{\sqrt{3 \, p - 2}}$ \\[0.1cm]
4 & $\frac{29 \, p^{3} - 63 \, p^{2} + 48 \, p - 12}{2 \, {\left(3 \, p -
2\right)}^{\frac{3}{2}}}$ \\[0.1cm]
5 & $1+\frac{2 \, {\left(5 \, p - 2\right)}^{2} {\left(p - 1\right)}^{\frac{5}{2}}}{{\left(3 \, p - 2\right)}^{\frac{5}{2}}}$ \\
6 & $\frac{1339 \, p^{6} - 5946 \, p^{5} + 11175 \, p^{4} - 11240 \, p^{3} +
6360 \, p^{2} - 1920 \, p + 240}{8 \, {\left(3 \, p -
2\right)}^{\frac{7}{2}}}
$ \\[0.1cm]
7 & $1+\frac{{\left(1099 \, p^{4} - 2296 \, p^{3} + 2184 \, p^{2} - 992 \, p +
176\right)} {\left(p - 1\right)}^{\frac{7}{2}}}{2 \, {\left(3 \, p -
2\right)}^{\frac{9}{2}}}$ \\[0.1cm]
8 & $\frac{28473 \, p^{9} - 191985 \, p^{8} + 579279 \, p^{7} - 1022091 \,
p^{6} + 1160040 \, p^{5} - 877380 \, p^{4} + 441840 \, p^{3} - 142800 \,
p^{2} + 26880 \, p - 2240}{16 \, {\left(3 \, p -
2\right)}^{\frac{11}{2}}}$ \\[0.1cm]
9 & $1+\frac{{\left(22821 \, p^{6} - 77580 \, p^{5} + 118476 \, p^{4} - 95136
\, p^{3} + 41904 \, p^{2} - 9408 \, p + 832\right)} {\left(p -
1\right)}^{\frac{9}{2}}}{4 \, {\left(3 \, p - 2\right)}^{\frac{13}{2}}}$\\
\hline
\end{tabular}
\caption{\label{table1} Formulas for $E(n,p)$ for $2\leq n\leq 9$.}
\end{table}
\end{center}
\vspace{-1cm}
\subsection{Comparison to prior results}
The critical points of $d_v$ for $v\in S^p(\mathbb{R}^n)$ are sometimes called the \emph{eigenvectors} of $v$. The reason for this is as follows. We can interpret $v$ as a multilinear map $(\mathbb{C}^n)^p\to \mathbb{C}$. Let $e_1,\ldots, e_n$ denote the standard basis vectors in~$\mathbb{C}^n$. For a general tensor $v\in(\mathbb{R}^n)^{\otimes p}$ (not necessarily symmetric), eigenpairs of $v$ are pairs $(x,t)\in(\mathbb{C}^n\backslash\set{0})\times \mathbb{C}$ that are defined by the equation
\begin{equation}\label{eigenpairs} vx^{p-1}:= \begin{pmatrix}v(x,\ldots,x,e_1)\\ \vdots \\v(x,\ldots,x,e_n)\end{pmatrix} = tx.\end{equation}
Note that $(x,t)$ is an eigenpair of $v$ if and only if $(ax,a^{p-2}t)$, $a\in\mathbb{C}^\times$, is an eigenpair of $v$. Cartwright and Sturmfels \cite{sturmfels-cartwright} call eigenpairs related by this scaling procedure \emph{equivalent}. The discussion in \cite[Sec. 6]{2011arXiv1110.5689F} reveals that for any $v\in S^p(\mathbb{R}^n)$ and $x$ an eigenvector of $v$, the symmetric rank-one tensor $x^{\otimes p}$ is a critical point of $d_v$. Therefore, the number of real critical points of $v$ equals the number of equivalence classes of real eigenpairs of~$v$.
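For illustration, both the eigenpair equation \cref{eigenpairs} and this scaling equivalence can be checked on a rank-one example: for $ v=a^{\otimes 3} $ one has $ vx^{2}=(a^Tx)^2\,a $, so $ (a,\lVert a\rVert^4) $ is an eigenpair. A Python sketch:

```python
import numpy as np

def tensor_apply(v, x):
    """Compute v x^{p-1} for an order-3 tensor v: the vector whose i-th
    entry is v(x, x, e_i)."""
    return np.einsum('ijk,j,k->i', v, x, x)

# Rank-one example v = a (x) a (x) a with |a| = 3, so (a, |a|^4) = (a, 81)
# is an eigenpair; (c a, c^{p-2} * 81) with p = 3 is an equivalent one.
a = np.array([1.0, 2.0, 2.0])
v = np.einsum('i,j,k->ijk', a, a, a)
t = 81.0

print(np.allclose(tensor_apply(v, a), t * a))                # True
c = 2.0
print(np.allclose(tensor_apply(v, c * a), c * t * (c * a)))  # True
```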
We may compare the results in this article with~\cite{expected_Z_eigenvalues}, where we computed the expected number of equivalence classes of real eigenpairs of a random tensor $v=(v_{i_1,\ldots,i_p})$ whose entries $v_{i_1,\ldots,i_p}$ are centered random variables with variance $\sigma^2=1$. While gaussian symmetric tensors are a generalization of the Gaussian Orthogonal Ensemble, this model of a random tensor is a generalization of the \emph{real Ginibre Ensemble} \cite{ginibre} from matrices to tensors. Let us denote the expected number of real eigenpairs of such a random tensor $v$ in analogy to \cref{E-def} by $E_{\text{non-sym}}(n,p)$. In \cref{table0} we give $D(4,p),E(4,p)$ and~$E_{\text{non-sym}}(4,p)$ for $2\leq p \leq 10$, rounded to $10^{-2}$. Compare \cite[Sec. 5.2]{draisma-horobet}.
~\\
\begin{center}
\begin{table}[h]
{\footnotesize
\begin{tabular}{|c|lllllllll|}
\hline
$p$ & 2 &3 &4&5&6&7&8&9&10\\
\hline
$D(4,p)\approx$ & 4 &15 & 40 & 85& 156 &259& 400 & 585 & 820 \\
$E(4,p)\approx$ & 4 &9.4 &16.26& 24.31& 33.38& 43.38 & 54.22 & 65.84 & 78.19 \\
$E_{\text{non-sym}}(4,p)\approx$ &1.95 & 3.85 & 6.32 & 9.22 &12.49 &16.10& 20.00 &24.19& 28.64\\
\hline
\end{tabular}
\vspace{0.2cm}
}
\caption{\label{table0} Table of values of $D(4,p),E(4,p)$ and $E_{\text{non-sym}}(4,p)$ for $2\leq p \leq 10$.}
\end{table}
\end{center}
\vspace{-1cm}
\begin{center}
\begin{figure}[H]
\includegraphics[scale=0.45]{Rplot02}
\caption{Plot of $p\mapsto f(p)$ for $f(p)\in\set{D(4,p),E(4,p),E_{\text{non-sym}}(4,p)}$ for the values $2\leq p \leq 20$. The $y$-axis is scaled by taking the square root. The plot was created using the~\texttt{ggplot} package in~\textsc{r} \cite{R}.}
\label{fig0}
\end{figure}
\end{center}
\subsection{Experiments}\label{sec:experiments}
To compare the theoretical results of \cref{cor} with actual data we made the following experiment: For $(n,p)\in\set{(4,3),(5,3),(4,4),(5,4)}$ we sampled $2000$ gaussian symmetric tensors in $(\mathbb{R}^n)^{\otimes p}$ using the \texttt{rnorm$()$} command in \textsc{r} \cite{R}. For each of those tensors we computed the real euclidean distance degree by calling \textsc{bertini} \cite{bertini} to compute the number of real eigenpairs of the tensor. The results of the experiments and histograms of the data are given below. We abbreviate 'real euclidean distance degree' by 'r.e.d.d'.
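For reproducibility, sampling a gaussian symmetric tensor amounts to drawing one centered gaussian per index multiset, with variance $ (p!/(\alpha_1!\cdots\alpha_n!))^{-1} $, and copying it to all index permutations. An illustrative Python sketch of this sampler (our experiments used \textsc{r}'s \texttt{rnorm()}):

```python
import numpy as np
from itertools import combinations_with_replacement, permutations
from math import factorial

def gaussian_symmetric_tensor(n, p, rng):
    """Sample v in S^p(R^n): one centered gaussian per index multiset with
    variance (p! / (alpha_1! ... alpha_n!))^{-1}, copied to all index
    permutations (the Draisma-Horobet / Bombieri model)."""
    v = np.zeros((n,) * p)
    for idx in combinations_with_replacement(range(n), p):
        multinom = factorial(p)
        for j in range(n):
            multinom //= factorial(idx.count(j))
        g = rng.standard_normal() / np.sqrt(multinom)
        for perm in set(permutations(idx)):
            v[perm] = g
    return v

rng = np.random.default_rng(0)
v = gaussian_symmetric_tensor(3, 3, rng)
# symmetric: invariant under transposing any two modes
print(np.allclose(v, v.transpose(1, 0, 2)), np.allclose(v, v.transpose(2, 1, 0)))  # True True
```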
{\footnotesize
$\mathbf{n=4,p=3}$\\
\begin{tabular}[H]{|l| l l l l l l l l| l l|}
\hline
r.e.d.d & 1 &3&5&7&9&11&13&15& mean & $E(4,3)$\\
count& 15 & 64 & 143 & 352 & 553 & 553 & 257 & 63 & $\approx 9.37$ & $\approx 9.4$\\
\hline
\end{tabular}
~\\
$\mathbf{n=5,p=3}$\\
\begin{tabular}[H]{|l| l l l l l l l l l l l l l l l | l l|}
\hline
r.e.d.d & 1 &3&5&7&9&11&13&15& 17&19&21&23&25 &27&29 &mean & $E(5,3)$\\
count& 1 & 3 & 21 & 53 & 92 &195 &284 &337& 396& 311 &183& 83& 33& 4& 3 & $\approx 15.82$ & $\approx 15.75$ \\
\hline
\end{tabular}
~\\
$\mathbf{n=4,p=4}$\\
\begin{tabular}[H]{|l| l l l l l l l l l l l l | l l|}
\hline
r.e.d.d & 6 &8&10&12&14&16&18&20& 22&24&26&28 &mean & $E(4,4)$\\
count& 13 & 57 & 128 & 262 &316 &377& 345& 239 &155 & 75 & 26& 7 & $\approx 16.24$ & $\approx 16.25$\\
\hline
\end{tabular}
~\\
$\mathbf{n=5,p=4}$\\
\begin{tabular}[H]{|l| l l l l l l l l l l l l l l | l l|}
\hline
r.e.d.d & 11 &13&15&17&19&21&23&25&27 &29&31&33&35&37 & mean & $E(5,4)$\\
count& 1 & 1 & 13 & 17 & 37 &56 &112 &117 &187 &212& 200 & 209 &197 &149& $\approx 32.81$&$\approx 32.94$\\
r.e.d.d &39&41&43&45&47&49&51&53&55 &57 & 59& & & & & \\
count& 141& 108 &86 & 65 &43 & 30 & 8 & 5 & 4 & 1& 1& & & & & \\
\hline
\end{tabular}
}
\begin{figure}[H]
\includegraphics[scale=0.57]{Rplot01}
\caption{Histograms of the experimental data. The largest value on the $x$-axis is the respective complex count $D(n,p)=\sum_{i=0}^{n-1}(p-1)^i$. The $y$-axis displays the relative frequency of the data. The histograms were created using the \texttt{ggplot} package in \textsc{r} \cite{R}.}
\label{fig1}
\end{figure}
\begin{rem}
The numbers in the table $n=5,p=3$ add up to only $1999$, because \textsc{bertini} failed to compute the correct number of real eigenpairs in one case: it falsely computed a double root. We believe that the reason for this is that the predictor for double roots in \textsc{bertini}, the condition number of the polynomial equation~\cref{eigenpairs}, may not be appropriate for the problem of solving for tensor eigenpairs; compare the discussion in \cite[Sec. 1.2]{homotopy_eigenpairs}. Further, note that the distribution in the histogram for $n=5,p=4$ looks unlike the others and that there is a gap between $E(5,4)$ and the corresponding sample mean. We believe that this, too, is caused by \textsc{bertini} counting the wrong number of real solutions.
\end{rem}
\subsection{Organization}
The rest of this paper is organized as follows. In the next section we will first present some preliminary material that is needed throughout the article. Thereafter, in \cref{sec:integrals} we compute some of the integrals that are needed to prove \cref{thm} and \cref{cor}, which we will then do in \cref{sec:thm} and \cref{sec:proof2}, respectively. Finally, in the appendix we present the \textsc{sage} code that we used in \cref{sec:values}.
\section{Preliminaries}
We first fix notation: In what follows $n\geq 2$ is always a positive integer and $m:=\lceil \tfrac{n}{2}\rceil$. That is, $n=2m$ if $n$ is even, and $n=2m-1$ if $n$ is odd. The small letters $a,b,c,x,y,\lambda$ will denote variables or real numbers. By capital calligraphic letters $\mathcal{A},\mathcal{M},\mathcal{K},\mathcal{L}$ we denote matrices. The symbols $G$ and $P$ are reserved for the functions defined in \cref{my_polynomials} and \cref{G}, and $M$ and $F$ denote the two hypergeometric functions defined in \cref{def_M} and \cref{def_hypergeom} below. The symbol $\langle\,,\,\rangle$ always denotes the inner product defined in \cref{inner_product}.
\subsection{Special functions}
Throughout the article a collection of special functions appears. We present them in this subsection. The \emph{Pochhammer polynomials} \cite[18:3:1]{atlas} are defined by $(x)_n:=x(x+1)\ldots (x+n-1)$, where $n$ is a positive integer, and $(x)_0:=1$. \emph{Kummer's confluent hypergeometric function} \cite[Sec. 47]{atlas} is defined as
\begin{equation}\label{def_M}
M(a,c, x) := \sum_{k=0}^\infty \frac{(a)_k}{(c)_k}\, \frac{x^ k}{k!},
\end{equation}
and \emph{Gauss' hypergeometric function} \cite[Sec. 60]{atlas} is defined as
\begin{equation}\label{def_hypergeom}
F(a,b,c,x) := \sum_{k=0}^\infty \frac{(a)_k\,(b)_k }{(c)_k}\, \frac{x^ k}{k!},
\end{equation}
where $a,b,c\in\mathbb{R}$, $c\neq 0,-1,-2,\ldots$. The series $M(a,c,x)$ converges for all $x$, whereas $F(a,b,c,x)$ in general converges only for $|x|<1$. However, if the numeratorial parameter $a$ (or, for $F$, one of $a,b$) is a non-positive integer, both $M(a,c,x)$ and $F(a,b,c, x)$ reduce to polynomials and hence are defined for all $x\in\mathbb{R}$ (and this is the only case we will meet throughout the paper).
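Since only terminating series occur in this paper, both functions can be evaluated exactly by summing finitely many terms. The following small Python sketch (ours, not part of the \textsc{sage} code in the appendix) does this when the first parameter is a non-positive integer:

```python
import math

def poch(x, k):
    """Pochhammer polynomial (x)_k = x(x+1)...(x+k-1), with (x)_0 = 1."""
    out = 1.0
    for i in range(k):
        out *= x + i
    return out

def M(a, c, x):
    """Kummer's M(a,c,x) for a non-positive integer a: the series
    terminates because (a)_k = 0 for k > -a."""
    assert a == int(a) and a <= 0
    return sum(poch(a, k) / poch(c, k) * x**k / math.factorial(k)
               for k in range(int(-a) + 1))

def F(a, b, c, x):
    """Gauss' F(a,b,c,x) for a non-positive integer a (terminating series)."""
    assert a == int(a) and a <= 0
    return sum(poch(a, k) * poch(b, k) / poch(c, k) * x**k / math.factorial(k)
               for k in range(int(-a) + 1))
```

For instance, $F(-1,b,c,x)=1-\tfrac{bx}{c}$.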
\begin{rem}
Other usual notations are $M(a,c,x)={}_1F_1(a;c;x)$ and $F(a,b,c,x)={}_2F_1(a,b;c;x)$. This is due to the fact that both $M(a,c,x)$ and $F(a,b,c,x)$ are special cases of the \emph{generalized hypergeometric function}, denoted ${}_pF_q(a_1,\ldots,a_p;c_1,\ldots,c_q;x):=\sum_{k=0}^\infty \frac{\prod_{i=1}^p(a_i)_k}{\prod_{i=1}^q(c_i)_k}\, \frac{x^ k}{k!}$.
\end{rem}
The following will be useful.
\begin{lemma}\label{lemma_hypergeom}
Let $a,b$ be non-positive integers and $c\neq 0,-1,-2,\ldots$. Then
\[F(a,b+1,c,x)-F(a+1,b,c,x)= \frac{(a-b)x}{c} F(a+1,b+1,c+1,x)\]
\end{lemma}
\begin{proof}
Since $a$ and $b$ are non-positive integers, $F(a,b+1,c,x)$ and $F(a+1,b,c,x)$ are polynomials whose constant term is equal to $1$. Therefore,
\begin{equation}\label{401}
F(a,b+1,c,x)-F(a+1,b,c,x)= \sum_{k=1}^\infty \frac{(a)_k(b+1)_k - (a+1)_k(b)_k}{(c)_k}\,\frac{x^k}{k!}.
\end{equation}
We have
\begin{equation*}
(a)_k(b+1)_k - (b)_k(a+1)_k \stackrel{\text{\cite[18:5:6]{atlas}}}{=} (a)_k(b)_k(1+\tfrac{k}{b}) - (a)_k(b)_k(1+\tfrac{k}{a})= (a)_k(b)_k k \;\frac{a-b}{ab}.
\end{equation*}
According to \cite[18:5:7]{atlas} the latter is equal to $(a+1)_{k-1}(b+1)_{k-1} k (a-b)$ and, moreover, $(c)_k=c(c+1)_{k-1}$. The claim follows when plugging this into \cref{401}.
\end{proof}
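The relation of \cref{lemma_hypergeom} is also easy to confirm numerically with the terminating series; a sketch (ours):

```python
import math

def poch(x, k):
    # Pochhammer (rising factorial) (x)_k
    out = 1.0
    for i in range(k):
        out *= x + i
    return out

def F(a, b, c, x):
    # terminating Gauss series; a must be a non-positive integer
    return sum(poch(a, k) * poch(b, k) / poch(c, k) * x**k / math.factorial(k)
               for k in range(int(-a) + 1))

a, b, c, x = -2, -3, 1.5, 0.7
lhs = F(a, b + 1, c, x) - F(a + 1, b, c, x)
rhs = (a - b) * x / c * F(a + 1, b + 1, c + 1, x)
```

For these parameters both sides equal $0.728$.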
For $x> 0$ the \emph{gamma function} \cite[Sec. 43]{atlas} is defined as
\begin{equation}\label{def_gamma}
\Gamma(x):=\int_0^\infty t^{x-1} e^{-t} \d t.
\end{equation}
The \emph{cumulative distribution function of the normal distribution} \cite[40:14:2]{atlas} and the \emph{error function} \cite[40:3:2]{atlas} are respectively defined as
\begin{equation}\label{Phi}
\Phi(x):=\frac{1}{\sqrt{2\pi}} \, \int\limits_{-\infty}^x e^{-\frac{t^2}{2}}\, \d t,\quad \text{and}\quad \mathrm{erf}(x):= \frac{2}{\sqrt{\pi}} \; \int\limits_0^x \exp(-t^2) \, \d t.
\end{equation}
The error function and $\Phi(x)$ are related by the following equation \cite[40:14:2]{atlas}
\begin{equation}
2\Phi(x)=1+\mathrm{erf}\left(\frac{x}{\sqrt{2}}\right).\label{error_function_rel}
\end{equation}
The error function and Kummer's confluent hypergeometric function are related by
\begin{equation}
\mathrm{erf}(x)=\frac{2x}{\sqrt{\pi}}\,M\left(\tfrac{1}{2},\tfrac{3}{2},-x^2\right); \label{error_function_rel2}
\end{equation}
see \cite[13.6.19]{abramowitz}.
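Relation \cref{error_function_rel2} can be checked against a library implementation of the error function; in the sketch below (ours) the Kummer series is simply truncated, which is harmless here since it converges for every $x$:

```python
import math

def M(a, c, x, terms=80):
    """Kummer's M(a,c,x) by direct summation; t tracks (a)_k/(c)_k * x^k/k!."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (a + k) / (c + k) * x / (k + 1)
    return s

def erf_via_M(x):
    # eq. (error_function_rel2): erf(x) = 2x/sqrt(pi) * M(1/2, 3/2, -x^2)
    return 2 * x / math.sqrt(math.pi) * M(0.5, 1.5, -x * x)
```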
\subsection{Hermite polynomials}\label{sec:hermite}
Hermite polynomials are a family of integer indexed polynomials $H_0(x),H_1(x),\ldots$ that are defined via \cite[24:3:2]{atlas}
\begin{equation}
\label{hermite0}
H_n(x):=(-1)^n e^{x^2} \frac{\d^n}{\d x^n} e^{-x^2}.
\end{equation}
An alternative family of Hermite polynomials is defined by
\begin{equation}\label{hermite}
H_{e_n}(x):= (-1)^n e^{\tfrac{x^2}{2}} \frac{\d^n}{\d x^n} e^{-\tfrac{x^2}{2}}.
\end{equation}
The two definitions are related by the following equality \cite[24:1:1]{atlas}
\begin{equation}\label{hermite_relation}
H_{e_n}(x)=\frac{1}{\sqrt{2}^{\, n}} H_n\left(\frac{x}{\sqrt{2}}\right).
\end{equation}
By \cite[24:5:1]{atlas} we have that
\begin{equation}\label{negative_hermite}
H_k(-z)=(-1)^k H_k(z)\quad \text{and}\quad H_{e_k}(-z)=(-1)^k H_{e_k}(z)
\end{equation}
\begin{rem}
In the literature the polynomials $H_n(x)$ are sometimes called the \emph{physicists' Hermite polynomials} and the $H_{e_n}(x)$ are sometimes called the \emph{probabilists' Hermite polynomials}. We will refer to both simply as \emph{Hermite polynomials} and distinguish them by use of the respective symbols.
\end{rem}
Hermite polynomials can be expressed in terms of Kummer's confluent hypergeometric function from \cref{def_M}:
\begin{align}\label{hermite_and_kummer}
H_{2k+1}(x)&=(-1)^k \frac{(2k+1)!\,2x}{k!}\, M(-k,\tfrac{3}{2},x^2)\\
H_{2k}(x)&=(-1)^k \frac{(2k)!}{k!}\, M(-k,\tfrac{1}{2},x^2);
\end{align}
see \cite[13.6.17 and 13.6.18]{abramowitz}.
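Both \cref{hermite_relation} and \cref{hermite_and_kummer} are conveniently verified numerically via the standard three-term recurrences $H_{n+1}(x)=2xH_n(x)-2nH_{n-1}(x)$ and $H_{e_{n+1}}(x)=xH_{e_n}(x)-nH_{e_{n-1}}(x)$; the following sketch (ours) does this:

```python
import math

def H(n, x):
    """Physicists' Hermite polynomial via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    if n == 0:
        return 1.0
    h0, h1 = 1.0, 2.0 * x
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def He(n, x):
    """Probabilists' Hermite polynomial via He_{n+1} = x He_n - n He_{n-1}."""
    if n == 0:
        return 1.0
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def M(a, c, x):
    """Terminating Kummer series (a a non-positive integer)."""
    s, t = 0.0, 1.0
    for k in range(int(-a) + 1):
        s += t
        t *= (a + k) / (c + k) * x / (k + 1)
    return s
```

For example, $H_5(0.8)=96\,M(-2,\tfrac{3}{2},0.64)\approx 24.566$.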
\subsection{Orthogonality relations of the Hermite polynomials}\label{sec:orth_rel}
The Hermite polynomials satisfy the following orthogonality relations. Let $\Gamma(x)$ be the Gamma function from \cref{def_gamma}. By \cite[7.374.2]{gradshteyn} we have
\begin{equation}\label{important1}
\int_\mathbb{R} H_{e_m}(x)H_{e_n}(x) e^{-x^2} \d x =
\begin{cases} (-1)^{ \lfloor\tfrac{m}{2}\rfloor+ \lfloor\tfrac{n}{2}\rfloor} \;\Gamma\left(\frac{m+n+1}{2}\right), &\text{if $m+n$ is even}\\
0,&\text{if $m+n$ is odd.} \end{cases},
\end{equation}
and, more generally, if $m+n$ is even, by \cite[p. 289, eq. (12)]{vol2} we have for~$\alpha>0$, $\alpha^2\neq \tfrac{1}{2}$ that
\begin{equation}\label{important2}
\int\limits_{-\infty}^\infty H_{e_m}(x)H_{e_n}(x) e^{-\alpha^2 x^2} \d x
= \frac{(1-2\alpha^2)^{\frac{m+n}{2}}\,\Gamma\left(\frac{m+n+1}{2}\right)}{\alpha^{m+n+1}} \, F\left(-m,-n;\frac{1-m-n}{2};\frac{\alpha^2}{2\alpha^2-1}\right),
\end{equation}
where $F(a,b,c,x)$ is Gauss' hypergeometric function as defined in \cref{def_hypergeom}. Recall from \cref{Phi} the definition of $\Phi(x)$. In the following we abbreviate
\begin{equation}\label{my_polynomials}
P_k(x):=\begin{cases} H_{e_k}(x),& \text{ if } k=0,1,2,\ldots\\
-\sqrt{2\pi}\,e^{\frac{x^2}{2}}\Phi(x), &\text{ if } k=-1.
\end{cases}
\end{equation}
and put
\begin{equation}\label{G}
G_{k}(x):=\int\limits_{-\infty}^x P_k(y)\, e^{-\tfrac{y^2}{2}} \;\d y, \;k=0,1,2,\ldots
\end{equation}
We can express the functions $G_k(x)$ in terms of the $P_k(x)$.
\begin{lemma}\label{lemma2}
We have
\begin{enumerate}
\item For all $k$: $G_k(x) = - e^{-\tfrac{x^2}{2}} P_{k-1}(x)$.
\item $G_k(\infty)=\begin{cases} \sqrt{2\pi}, &\text{ if } k=0\\ 0,&\text{ if } k\geq 1\end{cases}$
\end{enumerate}
\end{lemma}
\begin{proof}
Note that (2) is a direct consequence of (1). For (1), the case $k=0$ is immediate from the definitions \cref{my_polynomials} and \cref{Phi}, so let $k\geq 1$ and write
\[G_k(x)=\int\limits_{y=-\infty}^x P_k(y) e^{-\tfrac{y^2}{2}}\,\d y \stackrel{\text{by \cref{my_polynomials}}}{=} \int\limits_{y=-\infty}^x H_{e_k}(y) e^{-\tfrac{y^2}{2}}\,\d y \stackrel{\text{by \cref{hermite}}}{=} \int\limits_{y=-\infty}^x (-1)^k \frac{\d^k}{\d y^k} e^{-\tfrac{y^2}{2}}\d y.\]
Thus $G_k(x)=(-1)^k \frac{\d^{k-1}}{\d x^{k-1}} e^{-\tfrac{x^2}{2}} = -e^{-\tfrac{x^2}{2}}P_{k-1}(x)$ as desired.
\end{proof}
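Part (1) can be confirmed by crude quadrature; in the sketch below (ours) the lower integration bound $-10$ makes the truncation error negligible:

```python
import math

def He(n, x):
    # probabilists' Hermite polynomial via the three-term recurrence
    if n == 0:
        return 1.0
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def G(k, x, lo=-10.0, steps=100000):
    """Trapezoidal approximation of G_k(x) = int_{-oo}^x He_k(y) e^{-y^2/2} dy."""
    h = (x - lo) / steps
    f = lambda y: He(k, y) * math.exp(-y * y / 2)
    s = 0.5 * (f(lo) + f(x))
    for i in range(1, steps):
        s += f(lo + i * h)
    return s * h
```

For instance, $G_3(0.9)$ agrees with $-e^{-0.9^2/2}H_{e_2}(0.9)$, and $G_0$ tends to $\sqrt{2\pi}$.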
We now fix the following notation: If two functions $f:\mathbb{R}\to \mathbb{R}, g:\mathbb{R} \to \mathbb{R}$ satisfy $\int_\mathbb{R} f(x)^2 e^{-\tfrac{x^2}{2}} \d x <\infty$ and $\int_\mathbb{R} g(x)^2 e^{-\tfrac{x^2}{2}} \d x <\infty$, we define
\begin{equation}\label{inner_product}
\langle f(x),g(x) \rangle := \int_\mathbb{R} f(x) g(x) e^{-\tfrac{x^2}{2}} \d x,
\end{equation}
which is finite due to the Cauchy-Schwarz inequality. The functions $P_k(x)$ and $G_k(x)$ satisfy the following orthogonality relations.
\begin{lemma}\label{orthogonal_rel}
For all $k,\ell\geq 0$ we have
\begin{enumerate}
\item $\langle G_k(x), P_{\ell}(x)\rangle = - \langle G_\ell(x), P_k(x)\rangle$.\\[0.05cm]
\item
$\langle G_k(x), P_\ell(x)\rangle =
\begin{cases}(-1)^{i+j} \Gamma\left(i+j-\tfrac{1}{2}\right), & \text{if } k=2i-1 \text{ and } \ell=2j\\
0, & \text{if } k+\ell \text{ is even}
\end{cases}
$
\end{enumerate}
\end{lemma}
\begin{proof}
For (1) we have
\begin{align*}
\nonumber\langle G_k(x), P_\ell(x)\rangle = \int_\mathbb{R} G_k(x)P_\ell(x) e^{-\tfrac{x^2}{2}} \d x
\nonumber &= \int_\mathbb{R} \left(\int_{-\infty}^xP_k(y) e^{-\tfrac{y^2}{2}} \d y \right ) P_\ell(x) e^{-\tfrac{x^2}{2}} \d x\\
\nonumber &= \int_\mathbb{R} \left(\int_{y}^\infty P_\ell(x)e^{-\tfrac{x^2}{2}} \d x \right )P_k(y) e^{-\tfrac{y^2}{2}} \d y\\
\nonumber &= (-1)^\ell \int_\mathbb{R} \left(\int_{-\infty}^{-y} P_\ell(x)e^{-\tfrac{x^2}{2}} \d x \right )P_k(y) e^{-\tfrac{y^2}{2}} \d y\\
&= (-1)^{k+\ell}\langle G_\ell(x), P_k(x)\rangle,
\end{align*}
where the fourth equality is due to the transformation $x\mapsto -x$ and equation \cref{negative_hermite} and the fifth equality is obtained using the transformation $y\mapsto -y$ and \cref{negative_hermite}. This shows (1) for the case $k+\ell$ odd. Further, for $k+\ell$ even and $k\geq 1$, \cref{lemma2} and \cref{important1} give $\langle G_k(x),P_\ell(x)\rangle=-\int_\mathbb{R} P_{k-1}(x)P_\ell(x)\,e^{-x^2}\,\d x=0$, since $(k-1)+\ell$ is odd (and, by the same argument, $\langle G_\ell(x),P_k(x)\rangle=0$ when $\ell\geq 1$), which proves (1) and (2) for this case.
Now assume that $k=2i-1,\ell=2j$, in particular $k\neq 0$. We use \cref{lemma2} to write
\begin{align}
\label{t1} \langle G_k(x), P_\ell(x)\rangle = - \int_\mathbb{R} P_{k-1}(x)P_\ell(x) e^{-x^2} \d x& = - \int_\mathbb{R} P_{2i-2}(x)P_{2j}(x) e^{-x^2} \d x\\
&= - \int_\mathbb{R} H_{e_{2i-2}}(x)H_{e_{2j}}(x) e^{-x^2} \d x. \nonumber
\end{align}
By \cref{important1} we have
\[\int_\mathbb{R} H_{e_m}(x)H_{e_n}(x) e^{-x^2} \d x =
\begin{cases} (-1)^{ \lfloor\tfrac{m}{2}\rfloor+ \lfloor\tfrac{n}{2}\rfloor} \, \Gamma\left(\frac{m+n+1}{2}\right), &\text{if $m+n$ is even}\\
0,&\text{if $m+n$ is odd.} \end{cases}\]
Applying this to \cref{t1} we see that $\langle G_k(x), P_\ell(x)\rangle=(-1)^{i+j} \Gamma\left(i+j-\tfrac{1}{2}\right)$. This finishes the proof.
\end{proof}
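Via \cref{lemma2}, part (2) reduces to the Gaussian integral \cref{important1}, which can be checked directly; a quadrature sketch (ours):

```python
import math

def He(n, x):
    # probabilists' Hermite polynomial via the three-term recurrence
    if n == 0:
        return 1.0
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def inner(m, n, lo=-10.0, hi=10.0, steps=100000):
    """Trapezoidal approximation of int_R He_m(x) He_n(x) e^{-x^2} dx."""
    h = (hi - lo) / steps
    f = lambda x: He(m, x) * He(n, x) * math.exp(-x * x)
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, steps):
        s += f(lo + i * h)
    return s * h
```

E.g.\ for $k=3$, $\ell=2$ (i.e.\ $i=2$, $j=1$) one gets $\langle G_3(x),P_2(x)\rangle=-\int_\mathbb{R} H_{e_2}(x)^2e^{-x^2}\d x=-\Gamma(\tfrac{5}{2})=(-1)^{2+1}\Gamma(2+1-\tfrac{1}{2})$.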
\subsection{The expectation of Hermite polynomials}
In this section we will compute the expected value of the Hermite polynomials when the argument follows a normal distribution.
\begin{lemma}\label{expectation_hermite1}
For $\sigma^2>0$ we have $\mean\limits_{u\sim N(0,\sigma^2)} H_{{2k}}(u)=\frac{(2k)!}{k!}\, (2\sigma^2 -1 )^k$.
\end{lemma}
\begin{proof}
Write
\begin{equation*}
\mean\limits_{u\sim N(0,\sigma^2)} H_{{2k}}(u) \stackrel{\text{by definition}}{=}
\frac{1}{\sqrt{2\pi\sigma^2}} \int\limits_{u=-\infty}^\infty H_{2k}(u)\, e^{-\tfrac{u^2}{2\sigma^2}} \d u = \frac{1}{\sqrt{\pi}} \int\limits_{w=-\infty}^\infty H_{2k}(\sqrt{2\sigma^2}\, w) \, e^{-w^2} \d w,
\end{equation*}
where the second equality is due to the change of variables $w := \tfrac{u}{\sqrt{2\sigma^2}}$. Applying \cite[7.373.2]{gradshteyn} we get
\[\frac{1}{\sqrt{\pi}} \int_{w=-\infty}^\infty H_{2k}(\sqrt{2\sigma^2}\, w) e^{-w^2} \d w= \frac{(2k)!\, (2\sigma^2 -1 )^k}{k!} .\]
This finishes the proof.
\end{proof}
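A quadrature sketch (ours) of \cref{expectation_hermite1}: for $k=2$ and $\sigma^2=0.64$ the right-hand side is $\frac{4!}{2!}(2\cdot 0.64-1)^2=0.9408$.

```python
import math

def H(n, x):
    """Physicists' Hermite polynomial via the three-term recurrence."""
    if n == 0:
        return 1.0
    h0, h1 = 1.0, 2.0 * x
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def mean_H(n, sigma2, lo=-10.0, hi=10.0, steps=100000):
    """E_{u ~ N(0, sigma2)} H_n(u) by trapezoidal quadrature."""
    h = (hi - lo) / steps
    f = lambda u: H(n, u) * math.exp(-u * u / (2 * sigma2))
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, steps):
        s += f(lo + i * h)
    return s * h / math.sqrt(2 * math.pi * sigma2)
```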
\begin{lemma}\label{expectation_hermite2}
Let $\sigma^2>0$ and recall from \cref{my_polynomials} the definition of $P_k(x)$, $k=-1,0,1,2,\ldots$.
\begin{enumerate}
\item If $k,\ell>0$ and $k+\ell$ is even, we have
\[\mean\limits_{u\sim N(0,\sigma^2)} P_k(u)P_\ell(u) e^{-\tfrac{u^2}{2}} = \frac{(-1)^{\frac{k+\ell}{2}}\; \sqrt{2}^{k+\ell}\, \Gamma\left(\frac{k+\ell+1}{2}\right)}{\, \sqrt{\pi} \;\sqrt{\sigma^2+1}^{\,k+\ell+1}} \, F\left(-k,-\ell;\frac{1-k-\ell}{2};\frac{\sigma^2+1}{2}\right).\]
\item For all $k$ we have
\[\mean\limits_{u\sim N(0,\sigma^2)} P_{-1}(u)P_{2k+1}(u) e^{-\tfrac{u^2}{2}} = \frac{(-1)^{k+1} (2k+1)!}{2^{k}\,k!}\,\frac{(1-\sigma^2)^k\sigma^{2}}{\sqrt{1+\sigma^{2}} }\;F\left(-k,\frac{1}{2},\frac{3}{2},\frac{\sigma^4}{\sigma^4-1}\right).\]
\end{enumerate}
\end{lemma}
\begin{proof}
To prove (1) we write
\[\mean\limits_{u\sim N(0,\sigma^2)} P_k(u)P_\ell(u) e^{-\tfrac{u^2}{2}} =\mean\limits_{u\sim N(0,\sigma^2)} H_{e_{k}}(u) H_{e_\ell}(u) e^{-\tfrac{u^2}{2}} = \frac{1}{\sqrt{2\pi\sigma^2}} \int\limits_{u=-\infty}^\infty H_{e_{k}}(u) H_{e_\ell}(u) e^{-\tfrac{u^2}{2}\left(1+\tfrac{1}{\sigma^2}\right)} \d u.\]
Put $\alpha^2:=\tfrac{1}{2}\left(1+\tfrac{1}{\sigma^2}\right)$ and observe that $\alpha^2\neq 1$. By \cref{important2} we have
\begin{align*}
&\frac{1}{\sqrt{2\pi\sigma^2}} \, \int\limits_{u=-\infty}^\infty \,H_{e_{k}}(u) H_{e_\ell}(u) e^{-\tfrac{u^2}{2}\left(1+\tfrac{1}{\sigma^2}\right)} \, \d u\\
= \quad &\frac{(1-2\alpha^2)^{\frac{k+\ell}{2}}\,\Gamma\left(\frac{k+\ell+1}{2}\right)}{\sqrt{2\pi\sigma^2} \;\alpha^{k+\ell+1}} \, F\left(-k,-\ell;\frac{1-k-\ell}{2};\frac{\alpha^2}{2\alpha^2-1}\right)\\
= \quad &\frac{(-1)^{\frac{k+\ell}{2}}\; \sqrt{2}^{k+\ell}\, \Gamma\left(\frac{k+\ell+1}{2}\right)}{\, \sqrt{\pi} \;\sqrt{\sigma^2+1}^{\,k+\ell+1}} \, F\left(-k,-\ell;\frac{1-k-\ell}{2};\frac{\sigma^2+1}{2}\right)
\end{align*}
This proves (1). For (2) we have
\begin{equation*}
\mean\limits_{u\sim N(0,\sigma^2)} P_{-1}(u)P_{{2k+1}}(u) e^{-\tfrac{u^2}{2}} =-\sqrt{2\pi}\,\mean\limits_{u\sim N(0,\sigma^2)} \Phi(u)H_{e_{2k+1}}(u) = \frac{-1}{\sigma} \, \int\limits_{u=-\infty}^\infty \,\Phi(u) H_{e_{2k+1}}(u) \,e^{-\tfrac{u^2}{2\sigma^2}} \, \d u.
\end{equation*}
Making a change of variables $x:=\tfrac{u}{\sqrt{2}}$ the right-hand integral becomes
\begin{equation*}
\frac{-\sqrt{2}}{\sigma} \, \int\limits_{x=-\infty}^\infty \,\Phi(\sqrt{2}\,x) H_{e_{2k+1}}(\sqrt{2}\,x) \,e^{-\tfrac{x^2}{\sigma^2}} \, \d x=\frac{-1}{2^{k+1}\,\sigma} \, \int\limits_{x=-\infty}^\infty \,(1+\mathrm{erf}(x)) H_{{2k+1}}(x) \,e^{-\tfrac{x^2}{\sigma^2}} \, \d x
\end{equation*}
the equality due to \cref{error_function_rel} and \cref{hermite_relation}. We know from \cref{negative_hermite} that $H_{2k+1}(x)$ is an odd function, which implies that $\int_{x=-\infty}^\infty H_{2k+1}(x) e^{-\tfrac{x^2}{\sigma^2}} \, \d x=0$. Moreover, by \cref{hermite_and_kummer} we have $H_{2k+1}(x)=(-1)^k \frac{(2k+1)!\,2x}{k!}\, M(-k,\tfrac{3}{2},x^2)$ and by \cref{error_function_rel2} we have $\mathrm{erf}(x)=\frac{2x}{\sqrt{\pi}}\,M\left(\tfrac{1}{2},\tfrac{3}{2},-x^2\right)$. All this shows that
\begin{align}
\mean\limits_{u\sim N(0,\sigma^2)} P_{-1}(u)P_{{2k+1}}(u) e^{-\tfrac{u^2}{2}}&=\frac{(-1)^{k+1} (2k+1)!}{2^{k-1}\,\sqrt{\pi}\,\sigma\, k!} \, \int\limits_{x=-\infty}^\infty \,x^2\,M\left(\tfrac{1}{2},\tfrac{3}{2},-x^2\right)\,M(-k,\tfrac{3}{2},x^2) \,e^{-\tfrac{x^2}{\sigma^2}} \, \d x \nonumber\\
&=\frac{(-1)^{k+1} (2k+1)!}{2^{k-2}\,\sqrt{\pi}\,\sigma\, k!} \, \int\limits_{x=0}^\infty \,x^2\,M\left(\tfrac{1}{2},\tfrac{3}{2},-x^2\right)\,M(-k,\tfrac{3}{2},x^2) \,e^{-\tfrac{x^2}{\sigma^2}} \, \d x,\label{105}
\end{align}
where for the second equality we used that the integrand is an even function. Making a change of variables $t:=x^2$ we see that
\begin{equation}\label{106}
\int\limits_{x=0}^\infty \,x^2\,M\left(\tfrac{1}{2},\tfrac{3}{2},-x^2\right)\,M(-k,\tfrac{3}{2},x^2) \,e^{-\tfrac{x^2}{\sigma^2}} \, \d x=\frac{1}{2}\int\limits_{t=0}^\infty \,\sqrt{t}\;M\left(\tfrac{1}{2},\tfrac{3}{2},-t\right)\,M(-k,\tfrac{3}{2},t) \,e^{-\tfrac{t}{\sigma^2}} \, \d t.
\end{equation}
By \cite[7.622.1]{gradshteyn} we have
\begin{equation}\label{107}
\int\limits_{t=0}^\infty \,\sqrt{t}\;M\left(\tfrac{1}{2},\tfrac{3}{2},-t\right)\,M(-k,\tfrac{3}{2},t) \,e^{-\tfrac{t}{\sigma^2}} \, \d t=\Gamma\left(\frac{3}{2}\right)\,\frac{(1-\sigma^2)^k\sigma^{3}}{\sqrt{1+\sigma^{2}} }\;F\left(-k,\tfrac{1}{2},\tfrac{3}{2},\tfrac{\sigma^4}{\sigma^4-1}\right)
\end{equation}
Plugging \cref{107} into \cref{106} and the result into \cref{105} we obtain
\begin{align*}
\mean\limits_{u\sim N(0,\sigma^2)} P_{-1}(u)P_{{2k+1}}(u) e^{-\tfrac{u^2}{2}}&=\frac{(-1)^{k+1} (2k+1)!}{2^{k-1}\,\sqrt{\pi}\,\sigma\, k!}\;\Gamma\left(\tfrac{3}{2}\right)\,\frac{(1-\sigma^2)^k\sigma^{3}}{\sqrt{1+\sigma^{2}} }\;F\left(-k,\tfrac{1}{2},\tfrac{3}{2},\tfrac{\sigma^4}{\sigma^4-1}\right)\\
&=\frac{(-1)^{k+1} (2k+1)!}{2^{k}\,k!}\,\frac{(1-\sigma^2)^k\sigma^{2}}{\sqrt{1+\sigma^{2}} }\;F\left(-k,\tfrac{1}{2},\tfrac{3}{2},\tfrac{\sigma^4}{\sigma^4-1}\right).
\end{align*}
For the second equality we have used that $\Gamma\left(\tfrac{3}{2}\right)=\tfrac{\sqrt{\pi}}{2}$, see \cite[43:4:3]{atlas}. This finishes the proof.
\end{proof}
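For $k=1$, inserting $F(-1,\tfrac{1}{2},\tfrac{3}{2},t)=1-\tfrac{t}{3}$ and simplifying, the right-hand side of (2) becomes $-\sigma^2(2\sigma^4-3)/(1+\sigma^2)^{3/2}$ (our simplification); the sketch below (ours) confirms both this and the case $k=0$ by quadrature:

```python
import math

def He(n, x):
    # probabilists' Hermite polynomial via the three-term recurrence
    if n == 0:
        return 1.0
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def lhs(k, sigma, lo=-10.0, hi=10.0, steps=100000):
    """E_{u~N(0,sigma^2)} P_{-1}(u) P_{2k+1}(u) e^{-u^2/2}
       = -(1/sigma) * int Phi(u) He_{2k+1}(u) e^{-u^2/(2 sigma^2)} du."""
    h = (hi - lo) / steps
    Phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2)))
    f = lambda u: Phi(u) * He(2 * k + 1, u) * math.exp(-u * u / (2 * sigma ** 2))
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, steps):
        s += f(lo + i * h)
    return -s * h / sigma

sigma = 0.8
rhs = -sigma ** 2 * (2 * sigma ** 4 - 3) / (1 + sigma ** 2) ** 1.5  # k = 1 closed form
```

For $k=0$ the formula reduces to $-\sigma^2/\sqrt{1+\sigma^2}$, which is also easy to verify by hand via integration by parts.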
\section{Computation of the integrals that appear}\label{sec:integrals}
This section is dedicated to the computation of the integrals that appear in the subsequent sections. We start with a general lemma that we will later apply to the equations \cref{7} and \cref{602}.
\begin{lemma}\label{auxiliary_lemma}
Let $f:\mathbb{R}^m\times\mathbb{R}^{k}\to \mathbb{R}$, $((x_1,\ldots,x_m),u)\mapsto f(x_1,\ldots,x_m,u)$ be a measurable function, such that $\int_{\mathbb{R}^m\times \mathbb{R}^k} f \, \d x_1 \ldots \d x_m \d u <\infty$. Assume that $f$ is invariant under any permutation of the $x_i$. Then
\[\sum_{j=0}^{m} \binom{m}{j}\,\int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1},\ldots,x_{m}} \; } f(x_1,\ldots,x_m,u)\;\d x_1\ldots \d x_m = \int\limits_{x_1,\ldots, x_m\in \mathbb{R}} f(x_1,\ldots,x_m,u)\;\d x_1\ldots \d x_m\]
for all $u\in \mathbb{R}^k$.
\end{lemma}
\begin{proof}
We prove the statement by induction. For $m=1$ we have
\begin{equation*}
\int_{x_1\leq u} f(x_1,u)\;\d x_1 + \int_{ u \leq x_1} f(x_1,u)\;\d x_1=\int_{x_1\in\mathbb{R}} f(x_1,u)\;\d x_1.
\end{equation*}
For $m>1$ we write
\begin{equation*}
g_j(u):=\int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1},\ldots,x_{m}}} f(x_1,\ldots,x_m,u)\;\d x_1\ldots \d x_m, \quad j=0,\ldots,m.
\end{equation*}
Using $\binom{m}{j}=\binom{m-1}{j}+\binom{m-1}{j-1}$ \cite[6:5:3]{atlas} we have
\begin{equation}\label{501}
\sum_{j=0}^m \binom{m}{j} g_j(u) = \sum_{j=0}^{m-1} \binom{m-1}{j} (g_j(u)+g_{j+1}(u)),
\end{equation}
where
\begin{equation*}
g_j(u)+g_{j+1}(u) = \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+2},\ldots,x_{m}}} \; \int\limits_{x_{j+1}=-\infty}^\infty\, f(x_1,\ldots,x_m,u) \d x_1\ldots \d x_m.
\end{equation*}
By assumption $f$ is invariant under any permutation of the $x_i$. Thus, making a change of variables that interchanges $x_{j+1}$ and $x_m$ we see that
\begin{equation*}
g_j(u)+g_{j+1}(u) = \int\limits_{x_{m}=-\infty}^\infty \left[\; \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1},\ldots,x_{m-1}}} \; f(x_1,\ldots,x_{m-1},x_m,u) \d x_1\ldots \d x_{m-1}\right] \d x_m.
\end{equation*}
Plugging this into \cref{501} and interchanging summation and integration we obtain
\[\sum_{j=0}^{m} \binom{m}{j}\, g_j(u) = \int\limits_{x_{m}=-\infty}^\infty\left[\sum_{j=0}^{m-1} \binom{m-1}{j} \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1},\ldots,x_{m-1}}} \; f(x_1,\ldots,x_{m-1},x_m,u) \d x_1\ldots \d x_{m-1}\right]\d x_m.\]
We can now apply the induction hypothesis to the inner integral and conclude the proof.
\end{proof}
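For $m=2$ the identity can be seen concretely: the regions $\{x_1,x_2\geq u\}$, $\{x_1\leq u\leq x_2\}$ (counted twice, which by symmetry of $f$ accounts for $\{x_2\leq u\leq x_1\}$) and $\{x_1,x_2\leq u\}$ tile the plane. A midpoint-rule sketch (ours, with an arbitrary symmetric $f$; since both sides use the same grid, the identity holds up to floating-point rounding):

```python
import math

def f(x1, x2):
    # an arbitrary measurable function, symmetric in (x1, x2) and coupled
    return math.exp(-x1 * x1 - x2 * x2 - 0.5 * x1 * x2)

# midpoint-rule grid on [-6, 6]^2
n, lo, hi = 200, -6.0, 6.0
h = (hi - lo) / n
pts = [lo + (i + 0.5) * h for i in range(n)]
u = 0.317  # threshold, chosen so that no grid point equals u

full = I0 = I1 = I2 = 0.0
for x1 in pts:
    for x2 in pts:
        v = f(x1, x2) * h * h
        full += v
        if u <= x1 and u <= x2:
            I0 += v          # j = 0: u <= x_1, x_2
        if x1 <= u <= x2:
            I1 += v          # j = 1: x_1 <= u <= x_2
        if x1 <= u and x2 <= u:
            I2 += v          # j = 2: x_1, x_2 <= u

lhs = math.comb(2, 0) * I0 + math.comb(2, 1) * I1 + math.comb(2, 2) * I2
```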
Recall from \cref{my_polynomials} the definition of the polynomials $P_k(x)$ and from \cref{G} the definition of $G_k(x)$. The following lemma is the key to prove \cref{prop1} and \cref{prop2} below.
\begin{lemma}\label{prop0}
Let $\mathcal{A}$ denote the $(2m+1)\times 2(m-1)$ matrix
\begin{equation*}
\mathcal{A} = \begin{bmatrix} G_{i}(x_1) & P_{i}(x_1) & \ldots & G_{i}(x_{m-1}) & P_{i} (x_{m-1}) \end{bmatrix}_{0\leq i \leq 2m}.
\end{equation*}
Moreover, put
\[\Gamma_1^{a,b}:= \begin{bmatrix} \Gamma\left(r+s-\tfrac{1}{2}\right)\end{bmatrix}_{\substack{1\leq r\leq m, r\neq a,\\ 1\leq s\leq m,s\neq b.}},\quad \text{ and } \quad \Gamma_2^{a,b}:= \begin{bmatrix} \Gamma\left(r+s+\tfrac{1}{2}\right)\end{bmatrix}_{\substack{0\leq r\leq m-1, r\neq a,\\ 0\leq s\leq m-1,s\neq b.}};\]
(compare the definitions in \cref{thm}).
\begin{enumerate}
\item
Let $S\subset \set{1,\ldots,2m}$ be a subset with $2$ elements and let $\mathcal{A}^{S}$ be the $2(m-1)\times 2(m-1)$-matrix that is obtained by removing from $\mathcal{A}$ all the rows indexed by $S\cup\set{0}$. Then
\begin{align*}
\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R} } \det(\mathcal{A}^S)\; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\;\d x_1\ldots \d x_{m-1}
=&\begin{cases} (m-1)! \,2^{m-1}\,\det(\Gamma_1^{a,b}), & \text{if } S=\set{2a-1,2b} \text{ and } a\leq b\\
-(m-1)! \,2^{m-1}\,\det(\Gamma_1^{a,b}), & \text{if } S=\set{2a-1,2b} \text{ and } a> b\\
0,&\text{if } S \text{ has any other form.}
\end{cases}
\end{align*}
\item Let $S\subset \set{0,\ldots,2m-1}$ be a subset with $2$ elements and let $\mathcal{A}^{S}$ be the $2(m-1)\times 2(m-1)$-matrix that is obtained by removing from $\mathcal{A}$ all the rows indexed by $S\cup\set{2m}$. Then
\begin{align*}
\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R} } \det(\mathcal{A}^S)\; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\;\d x_1\ldots \d x_{m-1}
=&\begin{cases} (m-1)! \,2^{m-1}\,\det(\Gamma_2^{a,b}), & \text{if } S=\set{2a,2b+1} \text{ and } a\leq b\\
-(m-1)! \,2^{m-1}\det(\Gamma_2^{a,b}), & \text{if } S=\set{2a,2b+1} \text{ and } a> b\\
0,&\text{if } S \text{ has any other form.}
\end{cases}
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove (1). Fix $S\subset \set{1,\ldots,2m}$. Then
\begin{equation}\label{99}
\mathcal{A}^S= \begin{bmatrix} G_{i}(x_1) & P_{i}(x_1) & \ldots & G_{i}(x_{m-1}) & P_{i}(x_{m-1}) \end{bmatrix}_{1\leq i \leq 2m, i\not \in S}.
\end{equation}
Let us denote the quantity that we want to compute by $\Xi$:
\begin{equation}\label{xi}
\Xi:=\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R} } \det(\mathcal{A}^S)\; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\;\d x_1\ldots \d x_{m-1}.
\end{equation}
To ease notation put
\begin{equation}\label{m_dash}
\mu:=m-1.
\end{equation}
Furthermore, let us denote the elements in $\set{1,\ldots,2m}\backslash S$ in ascending order by $s_1<\ldots < s_{2\mu}$ and let $\Sigma_{2\mu}$ denote the group of permutations on $\set{1,\ldots,2\mu}$. Expanding the determinant of $\mathcal{A}^S$ yields
\begin{equation}\label{det}
\det(\mathcal{A}^S) = \sum_{\pi \in \Sigma_{2\mu}} \mathrm{sgn}(\pi) \, \prod_{i=1}^{\mu} G_{s_{\pi(2i-1)}}(x_i) \, P_{s_{\pi(2i)}}(x_i).
\end{equation}
Recall from \cref{inner_product} the definition of $\langle \_,\_\rangle$. Plugging \cref{det} into \cref{xi} and integrating over all the $x_i$ we see that
\begin{equation}\label{a3}
\Xi=\sum_{\pi \in \Sigma_{2\mu}} \mathrm{sgn}(\pi) \, \prod_{i=1}^{\mu} \langle G_{s_{\pi(2i-1)}}(x), P_{s_{\pi(2i)}}(x) \rangle.
\end{equation}
From \cref{orthogonal_rel} we know that $ \langle G_{k}(x), P_{\ell}(x) \rangle=0$ whenever $k+\ell$ is even. This already proves that $\Xi=0$ if $S$ is not of the form $S=\set{2a-1,2b}$, because in this case we cannot partition $\set{1,\ldots,2m}\backslash S$ into pairs of numbers where one number is even and the other is odd. If, on the other hand, $S=\set{2a-1,2b}$ contains one odd and one even element, in \cref{a3} we may as well sum over the subset
\[\Sigma_{2\mu}':= \cset{\pi\in\Sigma_{2\mu}}{\forall i\in \set{1,\ldots,\mu}: s_{\pi(2i-1)} + s_{\pi(2i)} \text{ is odd} }.\]
Let $\mathcal{T}\subset\Sigma_{2\mu}$ be the subgroup generated by the set of transpositions $\set{(1\; 2), (3\; 4), \ldots , ((2\mu-1)\; 2\mu)}$. We define an equivalence relation on $\Sigma_{2\mu}'$ via:
\[\forall \pi,\sigma\in \Sigma_{2\mu}': \quad \pi\sim\sigma \;:\Leftrightarrow \; \exists \tau \in \mathcal{T}: \pi = \sigma \tau.\]
Note that the multiplication with $\tau$ from the right is crucial here. Let $\mathcal{C}:=\Sigma_{2\mu}' /\sim$ denote the set of equivalence classes of $\mathcal{T}$ in $\Sigma_{2\mu}'$. A set of representatives for $\mathcal{C}$ is
\[\mathcal{R}:=\cset{\pi \in \Sigma_{2\mu}}{\forall i \in \set{1,\ldots,\mu}: s_{\pi(2i-1)} \text{ is odd and } s_{\pi(2i)}\text{ is even}}.\]
Making a partition of $\Sigma_{2\mu}'$ into the equivalence classes of $\sim$ in \cref{a3} we get
\begin{equation*}
\Xi=\sum_{\pi\in \mathcal{R}}\sum_{\tau\in \mathcal{T}} \mathrm{sgn}(\pi\circ\tau)\, \prod_{i=1}^\mu \langle G_{s_{\pi\circ\tau(2i-1)}}(x), P_{s_{\pi\circ\tau(2i)}}(x) \rangle.
\end{equation*}
For a fixed $\pi\in\mathcal{R}$ and all $\tau \in \mathcal{T}$, by \cref{orthogonal_rel} (1) we have
\[\prod_{i=1}^\mu \langle G_{s_{\pi\circ\tau(2i-1)}}(x), P_{s_{\pi\circ\tau(2i)}}(x) \rangle = \mathrm{sgn}(\tau) \prod_{i=1}^\mu \langle G_{s_{\pi(2i-1)}}(x), P_{s_{\pi(2i)}}(x) \rangle\]
so that
\begin{equation*}
\Xi=2^\mu \sum_{\pi\in \mathcal{R}}\mathrm{sgn}(\pi)\, \prod_{i=1}^\mu \langle G_{s_{\pi(2i-1)}}(x), P_{s_{\pi(2i)}}(x) \rangle.
\end{equation*}
Let us investigate $\mathcal{R}$ further. We denote the group of permutations on $\set{1,\ldots,\mu}$ by $\Sigma_\mu$. The group $\Sigma_\mu\times\Sigma_\mu$ acts transitively and faithfully on $\mathcal{R}$ via
\[\forall i: \left((\sigma_1,\sigma_2).\pi\right)(2i-1) := \pi(2\sigma_1(i)-1)\quad\text{and}\quad \left((\sigma_1,\sigma_2).\pi\right)(2i) := \pi(2\sigma_2(i)).\]
This shows that we have a bijection $\Sigma_\mu\times\Sigma_\mu\to \mathcal{R},\; (\sigma_1,\sigma_2) \mapsto (\sigma_1,\sigma_2).\pi_\star$, where $\pi_\star \in \mathcal{R}$ is fixed. Moreover, for all $(\sigma_1,\sigma_2)\in\Sigma_\mu\times \Sigma_\mu$ we have $\mathrm{sgn}((\sigma_1,\sigma_2).\pi_\star)=\mathrm{sgn}(\sigma_1)\mathrm{sgn}(\sigma_2)\mathrm{sgn}(\pi_\star)$.
Let us denote $2k_i-1=s_{\pi_\star(2i-1)}$ and $2\ell_i=s_{\pi_\star(2i)}$. We choose $\pi_\star$ uniquely by requiring $k_1<k_2<\ldots <k_\mu$ and $\ell_1<\ell_2<\ldots<\ell_\mu$. By doing so we get
\begin{align}
\Xi=&2^\mu \mathrm{sgn}(\pi_\star)\sum_{(\sigma_1,\sigma_2)\in\Sigma_\mu\times\Sigma_\mu}\mathrm{sgn}(\sigma_1)\mathrm{sgn}(\sigma_2)\, \prod_{i=1}^\mu \langle G_{2k_{\sigma_1(i)}-1}(x), P_{2\ell_{\sigma_2(i)}}(x) \rangle\nonumber\\
=&2^\mu \mu!\, \mathrm{sgn}(\pi_\star)\sum_{\sigma\in\Sigma_\mu}\mathrm{sgn}(\sigma)\, \prod_{i=1}^\mu \langle G_{2k_{\sigma(i)}-1}(x), P_{2\ell_{i}}(x) \rangle\nonumber\\
=&2^\mu \mu! \,\mathrm{sgn}(\pi_\star)\sum_{\sigma\in\Sigma_\mu}\mathrm{sgn}(\sigma)\,\prod_{i=1}^\mu (-1)^{k_{\sigma(i)}+\ell_i} \, \Gamma\left(k_{\sigma(i)}+\ell_i-\tfrac{1}{2}\right) \label{x}
\end{align}
the last line by \cref{orthogonal_rel} (2). By construction we have $\bigcup_{i=1}^\mu\set{2k_i-1,2\ell_i}=\set{1,\ldots,2m}\backslash S$, so that
\[\set{k_1,\ldots, k_\mu} = \set{1,\ldots,m}\backslash \set{a}, \text{ and } \set{\ell_1,\ldots, \ell_\mu} = \set{1,\ldots,m}\backslash \set{b}.\]
Hence, for all $\sigma\in\Sigma_\mu$ we have
\begin{equation}\label{x2}
\prod_{i=1}^\mu (-1)^{k_{\sigma(i)}+\ell_i} =(-1)^{m(m+1)-a-b} = (-1)^{a+b},
\end{equation}
and, furthermore,
\begin{equation}
\label{x4}
\mathrm{sgn}(\pi_\star)=\begin{cases} (-1)^{a+b},& \text{if } a\leq b \\ (-1)^{a+b-1}, &\text{if } a>b\end{cases}.
\end{equation}
Moreover,
\begin{equation}\label{x3}
\sum_{\sigma\in\Sigma_\mu}\mathrm{sgn}(\sigma)\,\prod_{i=1}^\mu\Gamma\left(k_{\sigma(i)}+\ell_i-\tfrac{1}{2}\right) = \det\left(\left[\Gamma\left(k+\ell-\tfrac{1}{2}\right)\right]_{\substack{1\leq k\leq m, k\neq a,\\1\leq \ell\leq m, \ell\neq b.}}\right) = \det(\Gamma_1^{a,b}).
\end{equation}
Putting together \cref{x}, \cref{x4}, \cref{x2} and \cref{x3} proves (1).
\begin{rem}
Because we use the symbols $k,\ell$ frequently we decided to use $\Gamma_1^{a,b}= \left[\Gamma\left(r+s-\tfrac{1}{2}\right)\right]_{\substack{1\leq r\leq m, r\neq a,\\1\leq s\leq m, s\neq b.}}$ as notation.
\end{rem}
We now prove (2). Fix $S\subset\set{0,\ldots,2m-1}$. Similar to \cref{99} we have
\begin{equation*}
\mathcal{A}^S= \begin{bmatrix} G_{i}(x_1) & P_{i}(x_1) & \ldots & G_{i}(x_{m-1}) & P_{i}(x_{m-1}) \end{bmatrix}_{0\leq i \leq 2m-1, i\not\in S}.
\end{equation*}
Put $\widetilde{G}_i(x):=G_{i-1}(x)$ and $\widetilde{P}_i(x):=P_{i-1}(x)$, so that
\begin{equation*}
\mathcal{A}^S= \begin{bmatrix} \widetilde{G}_{i}(x_1) & \widetilde{P}_{i}(x_1) & \ldots & \widetilde{G}_{i}(x_{m-1}) & \widetilde{P}_{i}(x_{m-1}) \end{bmatrix}_{1\leq i \leq 2m, i\not\in \widetilde{S}},
\end{equation*}
where $\widetilde{S}\subset\set{1,\ldots,2m}$ is the set that is obtained from $S$ by adding $1$ to both elements of $S$. Observe that,
\[\langle \widetilde{G}_{k}(x), \widetilde{P}_{\ell}(x)\rangle = \langle G_{k-1}(x), P_{\ell-1}\rangle = -\langle G_{\ell-1}(x), P_{k-1}\rangle,\]
by \cref{orthogonal_rel} (1) and hence, by \cref{orthogonal_rel} (2),
\begin{align*}
\langle \widetilde{G}_k(x), \widetilde{P}_{\ell}(x)\rangle&=\begin{cases}(-1)^{i+j+1} \Gamma\left(i+j-\tfrac{1}{2}\right), & \text{if } k=2j+1, \ell=2i\\
0, & \text{if } k+\ell \text{ is even}\end{cases}\\
&=\begin{cases}(-1)^{i+j} \Gamma\left(i+j-\tfrac{3}{2}\right), & \text{if } k=2j-1, \ell=2i\\
0, & \text{if } k+\ell \text{ is even}
\end{cases}
\end{align*}
We may now proceed as in the proof of (1) until \cref{x}, and conclude that
\[\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R} } \det(\mathcal{A}^S)\; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\;\d x_1\ldots \d x_{m-1}=\begin{cases} (m-1)! \,2^{m-1}\,\det(\,\widetilde{\Gamma}_2^{a',b'}\,), & \text{if } \widetilde{S}=\set{2a'-1,2b'} \text{ and } a'\leq b'\\
-(m-1)! \,2^{m-1}\,\det(\widetilde{\Gamma}_2^{a',b'}), & \text{if } \widetilde{S}=\set{2a'-1,2b'} \text{ and } a'> b'\\
0,&\text{if } \widetilde{S} \text{ has any other form,}
\end{cases}\]
where
\[\widetilde{\Gamma}_2^{a',b'}:=\left[\Gamma\left(k+\ell-\tfrac{3}{2}\right)\right]_{\substack{1\leq k\leq m, k\neq a'\\1\leq \ell\leq m, \ell\neq b'}}.\]
Note that
\[\widetilde{\Gamma}_2^{a',b'} = \left[\Gamma\left(k+\ell+\tfrac{1}{2}\right)\right]_{\substack{0\leq k\leq m-1, k\neq a'-1\\0\leq \ell\leq m-1, \ell\neq b'-1}} = \Gamma_2^{a'-1,b'-1}.\]
If $\widetilde{S}=\set{2a'-1,2b'}$ then, by definition, $S=\set{2a,2b+1}$, where $a=a'-1$ and $b=b'-1$. Hence,
\begin{align*}
&\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R} } \det(\mathcal{A}^S)\; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\;\d x_1\ldots \d x_{m-1}\\
=&\begin{cases} (m-1)! \,2^{m-1}\,\det({\Gamma}_2^{a,b}), & \text{if } S=\set{2a,2b+1} \text{ and } a\leq b\\
-(m-1)! \,2^{m-1}\,\det({\Gamma}_2^{a,b}), & \text{if } S=\set{2a,2b+1} \text{ and } a> b\\
0,&\text{if } S \text{ has any other form.}
\end{cases}\end{align*}
This finishes the proof.
\end{proof}
\cref{prop1} and \cref{prop2} below become important in \cref{sec_n_even} and \cref{sec_n_odd}, respectively.
\begin{prop}\label{prop1}
Recall from \cref{sec:orth_rel} the definition of $P_k(x)$ and~$G_k(x)$. Let $\mathcal{M}$ denote the matrix
\[\mathcal{M}:=\begin{bmatrix} P_{i}(u) & \begin{bmatrix}G_{i}(x_j) & P_{i}(x_j)\end{bmatrix}_{{j=1,\ldots,m-1 }} &G_{i}(u)& G_i(\infty)\end{bmatrix}_{i=0,\ldots,2m}\]
We have
\begin{align*}&\int\limits_{x_1,\ldots,x_{m-1} \in\mathbb{R}} \det(\mathcal{M})\, e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\,\d x_1\ldots \d x_{m-1}\\
= \;&\sqrt{2\pi} (m-1)! \,2^{\,m-1}\,e^{-\tfrac{u^2}{2}}\,\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \; \det\begin{bmatrix} P_{2j}(u)&P_{2i-1}(u) \\ P_{2j-1}(u)&P_{2i-2}(u) \end{bmatrix}.
\end{align*}
where $\Gamma^{i,j}:=\left[\begin{smallmatrix} \Gamma\left(r+s-\tfrac{1}{2}\right)\end{smallmatrix}\right]_{\substack{1\leq s\leq m,s\neq j\\1\leq r\leq m, r\neq i}}$.
\end{prop}
\begin{proof}
Let us denote the quantity that we want to compute by $\Theta$:
\[\Theta:=\int\limits_{x_1,\ldots,x_{m-1} \in\mathbb{R}} \det (\mathcal{M}) e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}} \d x_1\ldots \d x_{m-1}.\]
Permuting the columns of $\mathcal{M}$ by a permutation of negative sign yields
\begin{equation}
\det(\mathcal{M})=- \det \begin{bmatrix} G_i(\infty)&G_{i}(u) & P_{i}(u) & \begin{bmatrix}G_{i}(x_j) & P_{i}(x_j)\end{bmatrix}_{{j=1,\ldots,m-1 }} \end{bmatrix}_{i=0,\ldots,2m}\label{20}
\end{equation}
By \cref{lemma2} we have $G_{i}(\infty)=0$ for $i\geq 1$ and $G_0(\infty)=\sqrt{2\pi}$. Expanding the determinant in \cref{20} with Laplace expansion we get
\[\det(\mathcal{M})=-\sqrt{2\pi} \sum_{1\leq k<\ell \leq 2m} (-1)^{k+\ell-1} (G_k(u)P_\ell(u)-G_\ell(u)P_k(u))\, \det(\mathcal{A}^{k,\ell}),\]
where $\mathcal{A}^{k,\ell}:=\begin{bmatrix}G_{i}(x_j) & P_{i}(x_j)\end{bmatrix}_{\substack{1\leq i\leq 2m, i\not\in\set{k,\ell}\\ 1\leq j\leq m-1\quad\quad\;}}.$
Hence, $\Theta$ is equal to
\begin{equation}\label{a31.2}
\sqrt{2\pi} \sum_{1\leq k<\ell \leq 2m} (-1)^{k+\ell}(G_k(u)P_\ell(u)-G_\ell(u)P_k(u)) \int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}}\det(\mathcal{A}^{k,\ell}) e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\d x_1\ldots \d x_{m-1}
\end{equation}
In the notation of \cref{prop0} we have $\mathcal{A}^{k,\ell}=\mathcal{A}^{\set{k,\ell}}$. Applying \cref{prop0} (1) yields
\begin{equation*}
\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}}\det(\mathcal{A}^{k,\ell}) e^{-\sum\limits_{i=1}^{m-1} \frac{x_i^2}{2}}\d x_1\ldots \d x_{m-1}
=
\begin{cases}
(m-1)! 2^{m-1} \det(\Gamma_1^{i,j}), &\text{ if } \set{k,\ell}=\set{2i-1,2j}, i\leq j.\\
-(m-1)! 2^{m-1} \det(\Gamma_1^{i,j}), &\text{ if } \set{k,\ell}=\set{2i-1,2j}, i> j.\\
0,& \text{ else.}\end{cases}
\end{equation*}
where $\Gamma_1^{i,j}=\left[\begin{smallmatrix} \Gamma\left(r+s-\tfrac{1}{2}\right)\end{smallmatrix}\right]_{\substack{1\leq r\leq m, r\neq i\\1\leq s\leq m,s\neq j}}$. When plugging this into \cref{a31.2} we must take into account that
\[\begin{cases} \text{If } k=2i-1,\ell=2j \text{ and } k<\ell, \text{ then } i\leq j.\\
\text{If } k=2j,\ell=2i-1 \text{ and } k<\ell, \text{ then } i>j.
\end{cases}\]
From this we get
\begin{align*}
\Theta=&(-1)\,\sqrt{2\pi} (m-1)! \,2^{\,m-1}\,\Big[\sum_{1\leq i\leq j \leq m} \det(\Gamma^{i,j}) \; (G_{2i-1}(u)P_{2j}(u)-G_{2j}(u)P_{2i-1}(u))\\
&\hspace{5cm}-\sum_{1\leq j < i \leq m} \det(\Gamma^{i,j}) \; (G_{2j}(u)P_{2i-1}(u)-G_{2i-1}(u)P_{2j}(u))\Big]
\end{align*}
By \cref{lemma2} we have $G_k(u)=-e^{-\tfrac{u^2}{2}} P_{k-1}(u)$, $k\geq 1$, which we can plug into the expression above to obtain
\begin{align*}
\Theta=&\sqrt{2\pi} (m-1)! \,2^{\,m-1}\,e^{-\tfrac{u^2}{2}}\,\Big[\sum_{1\leq i\leq j \leq m} \det(\Gamma^{i,j}) \; (P_{2i-2}(u)P_{2j}(u)-P_{2j-1}(u)P_{2i-1}(u))\\
&\hspace{5cm}-\sum_{1\leq j < i \leq m} \det(\Gamma^{i,j}) \; (P_{2j-1}(u)P_{2i-1}(u)-P_{2i-2}(u)P_{2j}(u))\Big]\\
=& \sqrt{2\pi} (m-1)! \,2^{\,m-1}\,e^{-\tfrac{u^2}{2}}\,\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \; (P_{2i-2}(u)P_{2j}(u)-P_{2j-1}(u)P_{2i-1}(u))\\
=& \sqrt{2\pi} (m-1)! \,2^{\,m-1}\,e^{-\tfrac{u^2}{2}}\,\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \; \det\begin{bmatrix} P_{2j}(u)&P_{2i-1}(u) \\ P_{2j-1}(u)&P_{2i-2}(u) \end{bmatrix}
\end{align*}
This finishes the proof.
\end{proof}
\begin{prop}\label{prop2}
Let $\mathcal{M}$ denote the matrix $\mathcal{M}= \begin{bmatrix} P_{i}(u) & \begin{bmatrix}G_{i}(x_j) & P_{i}(x_j)\end{bmatrix}_{j=1,\ldots, m-1} & G_i(u)\end{bmatrix}_{i=0,\ldots,2m-1}.$ Then
\begin{align*}
&\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}} \det(\mathcal{M}) \, e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\, \d x_1 \ldots \d x_{m-1}\\
= &(m-1)! \,2^{m-1}\,e^{-\tfrac{u^2}{2}}\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j})\, \det\begin{bmatrix} P_{2j+1}(u) & P_{2i}(u) \\ P_{2j}(u) & P_{2i-1}(u) \end{bmatrix},
\end{align*}
where $\Gamma_2^{i,j}=\left[\Gamma\left(r+s+\tfrac{1}{2}\right)\right]_{\substack{0\leq r\leq m-1, r\neq i\\0\leq s\leq m-1,s\neq j}}$.
\end{prop}
\begin{proof}
The proof is similar to the proof of \cref{prop1}. Again, we denote by $\Theta$ the quantity that we want to compute:
\[\Theta:=\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}} \det(\mathcal{M}) \, e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\, \d x_1 \ldots \d x_{m-1}.\]
We have
\[\det(\mathcal{M})=-\det \begin{bmatrix} G_i(u)& P_{i}(u) &\begin{bmatrix}G_{i}(x_j) & P_{i}(x_j)\end{bmatrix}_{j=1,\ldots, m-1} \end{bmatrix}_{i=0,\ldots,2m-1}.\]
Expanding the determinant with Laplace expansion we get
\[\det(\mathcal{M})= -\sum_{0\leq k < \ell \leq 2m-1} (-1)^{k+\ell-1} (G_k(u)P_\ell(u) - G_\ell(u)P_k(u)) \det(\mathcal{A}^{k,\ell}),\]
where
\[\mathcal{A}^{k,\ell}=\begin{bmatrix}G_{i}(x_1) & P_{i}(x_1)& \ldots &G_{i}(x_{m-1}) & P_{i}(x_{m-1})\end{bmatrix}_{i=0,\ldots,2m-1,\; i\not\in\set{k,\ell}}\]
Hence,
\begin{equation}\label{101}
\Theta=\sum_{0\leq k < \ell \leq 2m-1} (-1)^{k+\ell} (G_k(u)P_\ell(u) - G_\ell(u)P_k(u)) \int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}} \det(\mathcal{A}^{k,\ell}) e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}} \d x_1 \ldots \d x_{m-1}.
\end{equation}
By \cref{prop0} (2) we have
\begin{align*}
& \int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}}\det(\mathcal{A}^{k,\ell}) e^{-\sum\limits_{i=1}^{m-1} \frac{x_i^2}{2}}\,\d x_1\ldots \d x_{m-1}\\
=&\begin{cases} (m-1)! \,2^{m-1}\,\det(\Gamma_2^{i,j}), & \text{if } \set{k,\ell}=\set{2i,2j+1} \text{ and } i\leq j\\
-(m-1)! \,2^{m-1}\,\det(\Gamma_2^{i,j}), & \text{if } \set{k,\ell}=\set{2i,2j+1} \text{ and } i> j\\
0,&\text{else.}\end{cases}
\end{align*}
where
\[\Gamma_2^{i,j}=\left[\Gamma\left(r+s+\tfrac{1}{2}\right)\right]_{\substack{0\leq r\leq m-1, r\neq i\\0\leq s\leq m-1,s\neq j}}.\]
When plugging this into \cref{101} we must take into account that
\[\begin{cases} \text{If } k=2i,\ell=2j+1 \text{ and } k<\ell, \text{ then } i\leq j.\\
\text{If } k=2j+1,\ell=2i \text{ and } k<\ell, \text{ then } i> j.
\end{cases}\]
This yields
\begin{align*}
\Theta&=(m-1)! \,2^{m-1}\, \big[\sum_{0\leq j<i\leq m-1} \det(\Gamma_2^{i,j})\, (G_{2j+1}(u)P_{2i}(u) - G_{2i}(u)P_{2j+1}(u) ) \\
&\hspace{2cm}- \sum_{0\leq i\leq j\leq m-1} \det(\Gamma_2^{i,j})\, (G_{2i}(u)P_{2j+1}(u) - G_{2j+1}(u)P_{2i}(u))\big]\\
&=(m-1)! \,2^{m-1}\,\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j})\, (P_{2i}(u)G_{2j+1}(u) - P_{2j+1}(u)G_{2i}(u))
\end{align*}
Using from \cref{lemma2} that $G_k(u)=-e^{-\tfrac{u^2}{2}} P_{k-1}(u)$, we finally obtain
\begin{align*}
\Theta=(m-1)! \,2^{m-1}\,e^{-\tfrac{u^2}{2}}\,\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j})\, \det\begin{bmatrix} P_{2j+1}(u)&P_{2i}(u) \\ P_{2j}(u) & P_{2i-1}(u) \end{bmatrix}.
\end{align*}
This finishes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm}}\label{sec:thm}
This entire section is inspired by the computations in \cite[Sec.~22]{mehta}. Recall from \cref{def_I_J} that we have put
\[\mathcal{I}_n(u)=\mean\limits_{A\sim \mathrm{GOE}(n;u,1)}\, \lvert \det(A) \rvert,\quad \text{ and }\quad\mathcal{J}_n(u)=\mean\limits_{A\sim \mathrm{GOE}(n;u,1)}\, \det(A) \]
The proof of \cref{thm} is based on the idea of decomposing $\mathcal{I}_n(u)=(\mathcal{I}_n(u)+\mathcal{J}_n(u)) - \mathcal{J}_n(u)$ and then computing the two summands $\mathcal{I}_n(u)+\mathcal{J}_n(u)$ and $\mathcal{J}_n(u)$ separately. By definition of the Gaussian Orthogonal Ensemble and since $\mathcal{I}_n(u)=\mean\limits_{A\sim \mathrm{GOE}(n)}\, \lvert \det(A-uI_n) \rvert$ we have
\begin{equation*}
\mathcal{I}_n(u)
=\frac{1}{\sqrt{2}^{\,n} \sqrt{\pi}^{\,n(n+1)/2}}\; \int\limits_{\stackrel{A \in \mathbb{R}^{n\times n}}{ \text{symmetric}}} \lvert\det(A-uI_{n})\rvert \; e^{-\tfrac{1}{2}\,\mathrm{Trace}(A^2)} \,\d A.
\end{equation*}
By \cite[Theorem 3.2.17]{muirhead}, the density of the (ordered) eigenvalues $\lambda_1\leq\ldots\leq \lambda_n$ of $A\sim \mathrm{GOE}(n)$ is given by
\[\frac{1}{\sqrt{2}^{\,n}\, \prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)}\; \Delta(\lambda) \,e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \,\mathbf{1}_{\set{\lambda_1\leq \ldots\leq \lambda_n}},\]
where $\Delta(\lambda):= \prod_{1\leq i < j \leq n} (\lambda_j - \lambda_i)$ and $\mathbf{1}_{\set{\lambda_1\leq \ldots\leq \lambda_n}}$ is the characteristic function of the set $\set{\lambda_1\leq \ldots\leq \lambda_n}$. This implies
\begin{equation}\label{I_n2}
\mathcal{I}_n(u) = \frac{1}{\sqrt{2}^{\,n} \prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)}\; \int\limits_{\lambda_1\leq ...\leq \lambda_{n}} \Delta(\lambda) \,e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \;\prod_{i=1}^n \lvert\lambda_i -u\rvert\;\d \lambda_1 \ldots \d \lambda_{n}.
\end{equation}
Similarly,
\begin{equation}\label{J_n}
\mathcal{J}_n(u)= \frac{1}{\sqrt{2}^{\,n} \; \prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)}\; \int\limits_{\lambda_1\leq ...\leq \lambda_{n}}\, \Delta(\lambda) \,e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \;\prod_{i=1}^n (\lambda_i -u )\;\d \lambda_1 \ldots \d \lambda_{n}.
\end{equation}
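As a quick numerical plausibility check of \cref{I_n2} and its normalizing constant (an illustration only, not part of the proof), one can verify for small $n$ that the eigenvalue density integrates to one, and that for $n=1$ formula \cref{I_n2} reproduces the elementary closed form $\mean\lvert Z-u\rvert=\sqrt{2/\pi}\,e^{-u^2/2}+u\,\mathrm{erf}(u/\sqrt{2})$ for $Z\sim N(0,1)$:

```python
import math
import numpy as np

# Sanity check: c_n = 1/(sqrt(2)^n prod_{i=1}^n Gamma(i/2)) normalizes the
# ordered GOE eigenvalue density for n = 1, 2, and (I_n2) reproduces the
# elementary closed form E|Z - u| in the case n = 1.

def c(n):
    return 1.0 / (math.sqrt(2) ** n * math.prod(math.gamma(i / 2) for i in range(1, n + 1)))

x = np.linspace(-10, 10, 2001)
h = x[1] - x[0]  # Riemann sums; the Gaussian decay makes them very accurate

# n = 1: c_1 e^{-x^2/2} is the standard normal density
mass1 = c(1) * np.exp(-x**2 / 2).sum() * h

# n = 2: the ordered region carries half of the integral over the whole plane
l1, l2 = np.meshgrid(x, x)
mass2 = 0.5 * c(2) * (np.abs(l2 - l1) * np.exp(-(l1**2 + l2**2) / 2)).sum() * h * h

# n = 1: I_1(u) = E|Z - u|
u = 0.7
I1 = c(1) * (np.abs(x - u) * np.exp(-x**2 / 2)).sum() * h
I1_closed = math.sqrt(2 / math.pi) * math.exp(-u**2 / 2) + u * math.erf(u / math.sqrt(2))
```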
For even $n$ the integral in \cref{J_n} can be expressed nicely in terms of the Hermite polynomials $H_k(x)$ from \cref{hermite0}. The following is \cite[Eq. (22.2.38)]{mehta}.
\begin{thm}\label{mehta_thm}
Let $n=2m$ and $H_{k}(x)$ denote the Hermite polynomial from \cref{hermite0}. We have
\begin{equation*}
\int\limits_{\lambda_1\leq ...\leq \lambda_{n}}\,\Delta(\lambda) \,e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \;\prod_{i=1}^n (\lambda_i -u )\;\d \lambda_1 \ldots \d \lambda_{n}= \frac{\sqrt{\pi}^{\, m}}{2^{m^2}} \left[\prod_{i=1}^{m-1} (2i)!\right]\, H_{2m}(u),
\end{equation*}
\end{thm}
\cref{mehta_thm} will be of use later when we compute $E(n,p)$ for even $n$. For odd $n$ a similar but more involved formula can be found in \cite[Eq. (22.2.39)]{mehta}. However, we do not reproduce it here, since it is not needed in our further computations.
In the remainder of this section we put
\begin{equation}\label{C}
C:= \left(\sqrt{2}^{\,n} \; \prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)\right)^{-1}
\end{equation}
and $\lambda_0:=-\infty$. We can write \cref{I_n2} as
\[\mathcal{I}_n(u)= C\,\sum_{j=0}^{n} (-1)^{j} \int\limits_{\substack{\lambda_0\leq \lambda_1\leq ...\leq \lambda_{j} \leq u\\u\leq \lambda_{j+1}\leq \ldots \leq \lambda_n}}\,\Delta(\lambda)\, e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \;\prod_{i=1}^n ( \lambda_i -u )\; \d \lambda_1 \ldots \d \lambda_{n}\]
and \cref{J_n} as
\[\mathcal{J}_n(u)= C\,\sum_{j=0}^{n}\;\,\int\limits_{\substack{\lambda_0\leq \lambda_1\leq ...\leq \lambda_{j} \leq u\\u\leq \lambda_{j+1}\leq \ldots \leq \lambda_n}}\, \Delta(\lambda)\, e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \;\prod_{i=1}^n (\lambda_i -u)\; \d \lambda_1 \ldots \d \lambda_{n}.\]
Hence,
\begin{equation}
\mathcal{I}_n(u)+\mathcal{J}_n(u)= 2C\;\sum_{j=0}^{\lfloor \tfrac{n}{2}\rfloor} \; \int\limits_{\substack{\lambda_0\leq \lambda_1\leq ...\leq \lambda_{2j} \leq u\\u\leq \lambda_{2j+1}\leq \ldots \leq \lambda_n}} \Delta(\lambda) \, e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \;\prod_{i=1}^n (\lambda_i -u)\; \d \lambda_1 \ldots \d \lambda_{n}\label{aaa}
\end{equation}
We write $ \Delta(\lambda)\prod_{i=1}^n (\lambda_i-u)$ as a Vandermonde determinant:
\begin{equation*} \Delta(\lambda)\prod_{i=1}^n (\lambda_i-u) = \prod_{1\leq i<j\leq n} (\lambda_j - \lambda_i) \; \prod_{i=1}^n (\lambda_i-u)= \det \begin{bmatrix}u^k & \lambda_1^{k} & \ldots & \lambda_{n}^{k} \end{bmatrix}_{k=0,\ldots, n}.
\end{equation*}
Since we may add arbitrary multiples of rows to other rows of a matrix without changing its determinant, we have
\begin{equation}\label{b}
\Delta(\lambda)\prod_{i=1}^n (\lambda_i-u) = \det \begin{bmatrix} P_{k}(u) & P_{k}(\lambda_1) & \ldots & P_{k}(\lambda_{n})\end{bmatrix}_{k=0,\ldots, n},
\end{equation}
where the $P_{k}(x), k=0,1,\ldots,n,$ are the Hermite polynomials from \cref{my_polynomials}. Plugging this into \cref{aaa} yields
\begin{align}
&\mathcal{I}_n(u)+\mathcal{J}_n(u)\label{a}
=
2C\;\sum_{j=0}^{\lfloor \tfrac{n}{2}\rfloor} \; \int\limits_{\substack{\lambda_0\leq \lambda_1\leq ...\leq \lambda_{2j} \leq u\\u\leq \lambda_{2j+1}\leq \ldots \leq \lambda_n}}\hspace{-0.7cm}\det \begin{bmatrix} P_{k}(u) & P_{k}(\lambda_1) & \ldots & P_{k}(\lambda_{n})\end{bmatrix}_{k=0,\ldots, n} \hspace{-0.2cm}e^{-\sum\limits_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \, \d \lambda_1 \ldots \d \lambda_{n}
\end{align}
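As an aside, the Vandermonde identity and the subsequent replacement of the monomials by the polynomials $P_k$ are easy to check numerically. All the row-operation argument uses is that the $P_k$ are monic of degree $k$; in the sketch below we take, as an illustrative assumption, the monic (probabilists') Hermite polynomials $\mathrm{He}_k$ as a stand-in for $P_k$.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # evaluates He_k, which are monic

# Check 1: prod_{i<j}(l_j - l_i) * prod_i (l_i - u) equals the Vandermonde
# determinant det[nodes^k]_{k=0..n} of the nodes (u, l_1, ..., l_n).
# Check 2: replacing x^k by any monic polynomial of degree k (here He_k)
# leaves the determinant unchanged.

rng = np.random.default_rng(0)
n = 5
u = 0.3
lam = rng.standard_normal(n)

prod_form = np.prod([lam[j] - lam[i] for i in range(n) for j in range(i + 1, n)]) \
            * np.prod(lam - u)

nodes = np.concatenate(([u], lam))
vander = np.linalg.det(np.array([nodes**k for k in range(n + 1)]))
monic = np.linalg.det(np.array([hermeval(nodes, [0] * k + [1]) for k in range(n + 1)]))
```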
We now distinguish the cases $n$ even and $n$ odd.
\vspace{-0.3cm}
\subsection{The case when $n$ is even} \label{sec_n_even}
Recall that we have put $n=2m$, so that $\lfloor\tfrac{n}{2}\rfloor = m$. Moreover, recall from \cref{G} that we have put
\[G_{k}(x)=\int\limits_{-\infty}^x P_{k}(y)\, e^{-\tfrac{y^2}{2}} \;\d y\]
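The key property of these antiderivatives, used repeatedly via \cref{lemma2}, is that $G_k(x)=-e^{-x^2/2}P_{k-1}(x)$ for $k\geq 1$. Under the illustrative assumption that $P_k$ is the monic Hermite polynomial $\mathrm{He}_k$, this can be confirmed numerically:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Check G_k(x) = int_{-inf}^x He_k(y) e^{-y^2/2} dy = -e^{-x^2/2} He_{k-1}(x)
# for k >= 1; indeed d/dy(-He_{k-1}(y) e^{-y^2/2}) = He_k(y) e^{-y^2/2} by the
# three-term recurrence He_k = y He_{k-1} - (k-1) He_{k-2}.

y = np.linspace(-12.0, 1.3, 200001)
h = y[1] - y[0]
x = y[-1]
errs = []
for k in range(1, 6):
    Gk = (hermeval(y, [0] * k + [1]) * np.exp(-y**2 / 2)).sum() * h  # Riemann sum
    closed = -np.exp(-x**2 / 2) * hermeval(x, [0] * (k - 1) + [1])
    errs.append(abs(Gk - closed))
```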
Observe that each $\lambda_i$ appears in exactly one column on the right hand side of \cref{b}. Integrating over $\lambda_1,\lambda_3,\lambda_5,\ldots$ in \cref{a} therefore yields
\begin{equation}
\mathcal{I}_n(u)+\mathcal{J}_n(u)= 2C\sum_{j=0}^{m} \; \int\limits_{\substack{ \lambda_2\leq \lambda_4\leq ...\leq \lambda_{2j} \leq u\\u\leq \lambda_{2j+2}\leq \ldots \leq \lambda_{2m}}}\det(\mathcal{N}_j) \; e^{-\sum_{i=1}^{m} \frac{\lambda_{2i}^2}{2} }\,\d \lambda_2 \ldots \d \lambda_{2m}
\end{equation}
where $\mathcal{N}_j$ is the matrix
\begin{align*}
\mathcal{N}_j=&\Big[ P_{k}(u)\; \begin{bmatrix}G_{k}(\lambda_{2i}) - G_k(\lambda_{2i-2}) & P_{k}(\lambda_{2i})\end{bmatrix}_{{i=1,\ldots j}}\; \begin{bmatrix}G_{k}(\lambda_{2j+2}) - G_k(u) & P_{k}(\lambda_{2j+2})\end{bmatrix}\,\ldots \\
&\hspace{1cm} \ldots\, \begin{bmatrix}G_{k}(\lambda_{2i}) - G_{k}(\lambda_{2i-2}) & P_{k}(\lambda_{2i})\end{bmatrix}_{{i=j+2,\ldots, m}}\Big]_{k=0,\ldots,n}
\end{align*}
Successively adding each $G$-column of $\mathcal{N}_j$ to the next $G$-column does not change the value of the determinant. Hence, $\det(\mathcal{N}_j)=\det(\mathcal{M}_j)$, where
\begin{equation}\label{6}
\mathcal{M}_j:=\begin{bmatrix} P_{k}(u)& \begin{bmatrix}G_{k}(\lambda_{2i}) & P_{k}(\lambda_{2i})\end{bmatrix}_{{i=1,\ldots j}} & \begin{bmatrix}G_{k}(\lambda_{2i}) - G_{k}(u) & P_{k}(\lambda_{2i})\end{bmatrix}_{{i=j+1,\ldots, m}}\end{bmatrix}_{k=0,\ldots,n}
\end{equation}
Observe that each $\lambda_{2i}$ appears in exactly two columns of $\mathcal{M}_j$. Hence, making a change of variables by interchanging $\lambda_{2i}$ and $\lambda_{2i'}$ for any two $i,i'$ does not change the value of the determinant of $\mathcal{M}_j$. Writing $x_i:=\lambda_{2i}$, for $1\leq i\leq m$, we therefore have
\begin{equation}\label{7}
\mathcal{I}_n(u)+\mathcal{J}_n(u)= \frac{2C}{m!}\sum_{j=0}^{m} \;\binom{m}{j} \; \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1}, \ldots, x_m}}\det(\mathcal{M}_j) \; e^{-\sum_{i=1}^{m} \tfrac{x_i^2}{2} } \, \d x_1 \ldots \d x_{m}.
\end{equation}
Using the multilinearity of the determinant we can write $\det(\mathcal{M}_j)$ as a sum of determinants of matrices, each of whose double columns equals either $\left[\begin{smallmatrix} G_{k}(x_i)& P_{k}(x_i)\end{smallmatrix}\right]$ or $\left[\begin{smallmatrix} -G_{k}(u)& P_{k}(x_i)\end{smallmatrix}\right]$. Observe that whenever the column $\left[\begin{smallmatrix} -G_{k}(u)& P_{k}(x_i)\end{smallmatrix}\right]$ appears twice in a matrix, the corresponding determinant equals zero. Moreover, we may interchange the double columns as we wish without changing the value of the determinant. All this yields
\begin{equation}\label{8}
\det(\mathcal{M}_j) = \det(\mathcal{K}) - (m-j)\det(\mathcal{L}),\end{equation}
where
\begin{align*}\mathcal{K}&= \begin{bmatrix} P_{k}(u) & \begin{bmatrix}G_{k}(x_i) & P_{k}(x_i)\end{bmatrix}_{i=1,\ldots, m} \end{bmatrix}_{k=0,\ldots,2m},\\
\mathcal{L}&= \begin{bmatrix}P_{k}(u) & \begin{bmatrix}G_{k}(x_i) & P_{k}(x_i)\end{bmatrix}_{{i=1,\ldots,m-1 }} & G_{k}(u) & P_{k}(x_{m})\end{bmatrix}_{k=0,\ldots,2m}.
\end{align*}
Note that $\det(\mathcal{K})\; e^{-\sum_{i=1}^{m} \tfrac{x_i^2}{2}}$ is invariant under any permutation of the $x_i$. We may apply \cref{auxiliary_lemma} to conclude that
\begin{align}\label{50}
&\frac{2C}{m!}\sum_{j=0}^{m} \binom{m}{j} \int\limits_{\stackrel{x_1,\ldots, x_j \leq u}{u\leq x_{j+1}, \ldots, x_m}}\det(\mathcal{K}) \; e^{-\sum_{i=1}^{m} \tfrac{x_i^2}{2} } \d x_1 \ldots \d x_{m}\\
\nonumber=\quad& \frac{2C}{m!}\int\limits_{x_1,\ldots,x_m\in\mathbb{R}}\det(\mathcal{K}) \; e^{-\sum_{i=1}^{m} \tfrac{x_i^2}{2} } \d x_1 \ldots \d x_{m}\\
\nonumber =\quad&2C\int\limits_{x_1<\ldots<x_m\in\mathbb{R}}\det(\mathcal{K}) \; e^{-\sum_{i=1}^{m} \tfrac{x_i^2}{2} } \d x_1 \ldots \d x_{m}\\
\nonumber =\quad&2C\int\limits_{\lambda_1<\ldots<\lambda_n\in\mathbb{R}}\det\, \begin{bmatrix} P_{k}(u) & P_{k}(\lambda_1) & \ldots & P_{k}(\lambda_{n})\end{bmatrix}_{k=0,\ldots, n} \; e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \d \lambda_1 \ldots \d \lambda_{n}\\
\nonumber =\quad&2C\int\limits_{\lambda_1<\ldots<\lambda_n\in\mathbb{R}}\big[\prod_{i=1}^n (\lambda_i-u)\big] \Delta(\lambda) \; e^{-\sum_{i=1}^{n} \tfrac{\lambda_i^2}{2} } \d \lambda_1 \ldots \d \lambda_{n}\\
=\quad &2\mathcal{J}_n(u).\nonumber
\end{align}
Here, the fifth line follows from \cref{b} and the last line from \cref{J_n}. Combining this with \cref{7} and \cref{8} we see that
\begin{align*}
\mathcal{I}_n(u)&=(\mathcal{I}_n(u)+\mathcal{J}_n(u)) - \mathcal{J}_n(u)\\
&=\mathcal{J}_n(u)-\frac{2C}{m!}\sum_{j=0}^{m} \;\binom{m}{j} \; \int\limits_{\stackrel{x_1,\ldots, x_j \leq u}{u\leq x_{j+1}, \ldots, x_m}}(m-j)\det(\mathcal{L}) \; e^{-\sum_{i=1}^{m} \tfrac{x_i^2}{2} } \, \d x_1 \ldots \d x_{m}\\
&=\mathcal{J}_n(u)-\frac{2C}{(m-1)!}\sum_{j=0}^{m-1} \;\binom{m-1}{j} \; \int\limits_{\stackrel{x_1,\ldots, x_j \leq u}{u\leq x_{j+1}, \ldots, x_m}}\det(\mathcal{L}) \; e^{-\sum_{i=1}^{m} \tfrac{x_i^2}{2} } \, \d x_1 \ldots \d x_{m}
\end{align*}
\vspace{-0.3cm}
Since $\det(\mathcal{L})e^{-\sum_{i=1}^{m-1} \tfrac{x_i^2}{2}}$ is invariant under permuting $x_1,\ldots,x_{m-1}$ (excluding $x_m$!) we may apply \cref{auxiliary_lemma} to obtain
\begin{equation}\label{y1}
\mathcal{I}_n(u)=\mathcal{J}_n(u)-\frac{2C}{(m-1)!} \,\int\limits_{x_1,\ldots,x_{m-1} \in\mathbb{R}} \left[\;\int\limits_{x_m=u}^\infty \det(\mathcal{L}) e^{-\tfrac{x_m^2}{2} } \d x_m \right] e^{-\sum_{i=1}^{m-1} \tfrac{x_i^2}{2} } \, \d x_1 \ldots \d x_{m-1}.
\end{equation}
Observe that $x_m$ appears in one single column of $\mathcal{L}$. Integrating over $x_m$ therefore shows that
\begin{align*}
&\int\limits_{x_m=u}^\infty \det(\mathcal{L}) e^{-\tfrac{x_m^2}{2} } \d x_m\\
=& \det \begin{bmatrix}P_{k}(u) && \left[G_{k}(x_i)\, P_{k}(x_i)\right]_{{i=1,\ldots,m-1 }} && G_{k}(u)& & G_k(\infty)-G_k(u)\end{bmatrix}_{k=0,\ldots,2m}\\
=&\det \underbrace{\begin{bmatrix}P_{k}(u) && \left[G_{k}(x_i)\, P_{k}(x_i)\right]_{{i=1,\ldots,m-1 }} && G_{k}(u)& & G_k(\infty)\end{bmatrix}_{k=0,\ldots,2m}}_{=:\mathcal{M}}.
\end{align*}
From \cref{prop1} we get
\begin{align*}
&\int\limits_{x_1,\ldots,x_{m-1} \in\mathbb{R}} \det(\mathcal{M}) e^{-\sum_{i=1}^{m-1} \tfrac{x_i^2}{2} } \, \d x_1 \ldots \d x_{m-1}\\
= \;&\sqrt{2\pi} (m-1)! \,2^{\,m-1}\,e^{-\tfrac{u^2}{2}}\,\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \;\det\begin{bmatrix} P_{2j}(u)&P_{2i-1}(u) \\ P_{2j-1}(u)&P_{2i-2}(u) \end{bmatrix}.
\end{align*}
where
$\Gamma^{i,j}:=\left[ \Gamma\left(r+s-\frac{1}{2}\right)\right]_{\substack{1\leq s\leq m,s\neq j\\1\leq r\leq m, r\neq i}}.$
\noindent
Hence, by \cref{y1}:
\begin{equation*}
\mathcal{I}_n(u)=\mathcal{J}_n(u)-C\sqrt{2\pi} \,2^{\,m}\,e^{-\tfrac{u^2}{2}}\,\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \; \det\begin{bmatrix} P_{2j}(u)&P_{2i-1}(u) \\ P_{2j-1}(u)&P_{2i-2}(u) \end{bmatrix}
\end{equation*}
Finally, we substitute $C =\left(\sqrt{2}^{\,n} \prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)\right)^{-1}$ (see \cref{C}) and put the minus into the determinant to obtain
\begin{equation*}
\mathcal{I}_n(u)=\mathcal{J}_n(u)+\frac{\sqrt{2\pi} \,e^{-\tfrac{u^2}{2}}}{\prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)}\,\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \; \det\begin{bmatrix} P_{2i-1}(u) & P_{2j}(u) \\ P_{2i-2}(u) & P_{2j-1}(u) \end{bmatrix}
\end{equation*}
This finishes the proof.
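For $n=2$ (i.e. $m=1$) the result of this subsection can be made completely explicit and checked numerically: the sum collapses to the single term $i=j=1$ with $\det(\Gamma^{1,1})=1$ (determinant of an empty matrix), and with $P_0=1$, $P_1=u$, $P_2=u^2-1$ one obtains $\mathcal{I}_2(u)=u^2-\tfrac12+\sqrt{2}\,e^{-u^2/2}$. The sketch below compares this with direct numerical evaluation of \cref{I_n2}.

```python
import math
import numpy as np

# I_2(u) from the theorem: u^2 - 1/2 + sqrt(2) e^{-u^2/2}
# (here J_2(u) = u^2 - 1/2, by Mehta's formula with m = 1).
u = 0.9
I2_formula = u**2 - 0.5 + math.sqrt(2) * math.exp(-u**2 / 2)

# Direct evaluation of (I_n2) for n = 2: the ordered integral is half of the
# integral over the whole plane, and C = 1/(2 sqrt(pi)).
x = np.linspace(-8, 8, 1601)
h = x[1] - x[0]
l1, l2 = np.meshgrid(x, x)
f = np.abs(l2 - l1) * np.abs(l1 - u) * np.abs(l2 - u) * np.exp(-(l1**2 + l2**2) / 2)
I2_num = f.sum() * h * h / (4 * math.sqrt(math.pi))
```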
\subsection{The case when $n$ is odd} \label{sec_n_odd}
Here we have $n=2m-1$ and hence $\lfloor \tfrac{n}{2}\rfloor = m-1$. We proceed as in the preceding section and can therefore be brief in our explanations. In \cref{a} we integrate over all the $\lambda_i$ with $i$ odd to obtain
\begin{equation}\label{602}
\mathcal{I}_n(u)+\mathcal{J}_n(u)= 2C\sum_{j=0}^{m-1} \; \int\limits_{\substack{ x_1\leq x_2\leq ...\leq x_{j} \leq u\\u\leq x_{j+1}\leq \ldots \leq x_{m-1}}}\det(\mathcal{N}_j) \; e^{-\sum_{i=1}^{m-1}\tfrac{x_i^2}{2}} \, \d x_1 \ldots \d x_{m-1},
\end{equation}
where $x_i:=\lambda_{2i}$, $1\leq i\leq m-1$, and $\mathcal{N}_j$ is the matrix
\begin{align*}
\mathcal{N}_j=&\Big[ P_{k}(u) \hspace{0.6cm} \begin{bmatrix}G_{k}(x_i) - G_k(x_{i-1}) & P_{k}(x_i)\end{bmatrix}_{{i=1,\ldots j}}\; \begin{bmatrix}G_{k}(x_{j+1}) - G_k(u) & P_{k}(x_{j+1})\end{bmatrix}\,\ldots \\
&\hspace{1cm} \ldots\, \begin{bmatrix}G_{k}(x_i) - G_{k}(x_{i-1}) & P_{k}(x_i)\end{bmatrix}_{{i=j+2,\ldots, m-1}} \hspace{0.6cm} G_k(\infty)-G_k(x_{m-1}) \Big]_{k=0,\ldots,n}.
\end{align*}
We have $\det(\mathcal{N}_j)=\det(\mathcal{M}_j)$, where
\[\mathcal{M}_j=\begin{bmatrix} P_{k}(u) & \begin{bmatrix}G_{k}(x_i) & P_{k}(x_i)\end{bmatrix}_{{i=1,\ldots j}} & \begin{bmatrix}G_{k}(x_i) - G_{k}(u) & P_{k}(x_i)\end{bmatrix}_{{i=j+1,\ldots, m-1}} & G_k(\infty)-G_k(u)\end{bmatrix}_{k=0,\ldots,n}\]
Permuting $x_{1},\ldots, x_{j}$ or permuting $x_{j+1},\ldots,x_{m-1}$ does not change the value of $\det(\mathcal{M}_j)$, so that
\begin{equation}\label{f0}
\mathcal{I}_n(u)+\mathcal{J}_n(u)= \frac{2C}{(m-1)!}\sum_{j=0}^{m-1} \;\binom{m-1}{j}\; \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1}, \ldots, x_{m-1}}}\det(\mathcal{M}_j) \; e^{-\sum_{i=1}^{m-1} \tfrac{x_i^2}{2}} \, \d x_1 \ldots \d x_{m-1},
\end{equation}
Using the multilinearity of the determinant we have
\[\det(\mathcal{M}_j) = \det(\mathcal{K}) - \det(\mathcal{M})- (m-1-j)\det(\mathcal{L}),\]
where
\begin{align*}
\mathcal{K}&= \begin{bmatrix} P_{k}(u) & \begin{bmatrix}G_{k}(x_i) & P_{k}(x_i)\end{bmatrix}_{i=1,\ldots, m-1} & G_k(\infty)\end{bmatrix}_{k=0,\ldots,2m-1},\\
\mathcal{M}&= \begin{bmatrix} P_{k}(u) & \begin{bmatrix}G_{k}(x_i) & P_{k}(x_i)\end{bmatrix}_{i=1,\ldots, m-1} & G_k(u)\end{bmatrix}_{k=0,\ldots,2m-1}, \\
\mathcal{L}&=\begin{bmatrix}P_{k}(u) & \begin{bmatrix}G_{k}(x_i) & P_{k}(x_i)\end{bmatrix}_{{i=1,\ldots,m-2}} & G_{k}(u) & P_{k}(x_{m-1})& G_k(\infty)-G_k(u)\end{bmatrix}_{k=0,\ldots,2m-1}.
\end{align*}
Integrating $\int_{x_{m-1}>u } \det(\mathcal{L}) \,e^{-\tfrac{x_{m-1}^2}{2}}\;\d x_{m-1}$ replaces the $P_k(x_{m-1})$ in $\mathcal{L}$ by $G_k(\infty)-G_k(u)$. Hence,
\begin{align*}
&\int\limits_{x_{m-1}=u}^\infty \det(\mathcal{L}) \; e^{-\tfrac{x_{m-1}^2}{2}} \d x_{m-1}\\
=&\det \begin{bmatrix}P_{k}(u) & \begin{bmatrix}G_{k}(x_i) & P_{k}(x_i)\end{bmatrix}_{{i=1,\ldots,m-2 }} & G_{k}(u) & G_k(\infty)-G_k(u)& G_k(\infty)-G_k(u)\end{bmatrix}_{k=0,\ldots,2m-1}\\=&0
\end{align*}
and thus
\begin{align*}
& \frac{2C}{(m-1)!}\sum_{j=0}^{m-1}\binom{m-1}{j}\; \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1}, \ldots, x_{m-1}}}(m-1-j)\det(\mathcal{L}) \; e^{-\sum_{i=1}^{m-1} \tfrac{x_i^2}{2}} \d x_1 \ldots \d x_{m-1} \\
=&\frac{2C}{(m-1)!}\sum_{j=0}^{m-1}\binom{m-1}{j}\; \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1}, \ldots, x_{m-2}}} \left[\int\limits_{x_{m-1}=u}^\infty \det(\mathcal{L}) \; e^{-\tfrac{x_{m-1}^2}{2}} \d x_{m-1}\right] e^{-\sum_{i=1}^{m-2} \tfrac{x_i^2}{2}} \d x_1 \ldots \d x_{m-2}\\
= &0.
\end{align*}
Using this, \cref{f0} becomes
\begin{equation*}
\mathcal{I}_n(u)+\mathcal{J}_n(u)= \frac{2C}{(m-1)!}\sum_{j=0}^{m-1} \;\binom{m-1}{j}\; \int\limits_{\substack{x_1,\ldots, x_j \leq u\\u\leq x_{j+1}, \ldots, x_{m-1}}}(\det(\mathcal{K})-\det(\mathcal{M})) \; e^{-\sum_{i=1}^{m-1} \tfrac{x_i^2}{2}} \d x_1 \ldots \d x_{m-1}
\end{equation*}
By construction, both $\det(\mathcal{K})$ and $\det(\mathcal{M})$ are invariant under any permutation of the $x_i$. We may apply \cref{auxiliary_lemma} to get
\begin{equation*}
\mathcal{I}_n(u)+\mathcal{J}_n(u)= \frac{2C}{(m-1)!} \int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}}(\det(\mathcal{K})-\det(\mathcal{M})) \; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}} \d x_1 \ldots \d x_{m-1}
\end{equation*}
Similar to \cref{50} we deduce that
\begin{equation*}
\frac{2C}{(m-1)!} \int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}}\det(\mathcal{K}) \; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}} \d x_1 \ldots \d x_{m-1} = 2\mathcal{J}_n(u),
\end{equation*}
so that
\begin{equation}\label{102}
\mathcal{I}_n(u)=(\mathcal{I}_n(u)+\mathcal{J}_n(u))-\mathcal{J}_n(u)= \mathcal{J}_n(u)-\frac{2C}{(m-1)!} \int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}} \det(\mathcal{M})\; e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}} \d x_1 \ldots \d x_{m-1}
\end{equation}
By \cref{prop2} we have
\begin{align*}
&\int\limits_{x_1,\ldots,x_{m-1}\in\mathbb{R}} \det(\mathcal{M}) \, e^{-\sum\limits_{i=1}^{m-1} \tfrac{x_i^2}{2}}\, \d x_1 \ldots \d x_{m-1}\\
= &(m-1)! \,2^{m-1}\,e^{-\tfrac{u^2}{2}}\,\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j})\, \det\begin{bmatrix} P_{2j+1}(u)&P_{2i}(u) \\ P_{2j}(u) & P_{2i-1}(u) \end{bmatrix},
\end{align*}
where $\Gamma_2^{i,j}=\left[\Gamma\left(r+s+\tfrac{1}{2}\right)\right]_{\substack{0\leq r\leq m-1, r\neq i\\0\leq s\leq m-1,s\neq j}}$. Combining this with \cref{102}, substituting $C =\left(\sqrt{2}^{\,n} \prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)\right)^{-1}$ (see \cref{C}) and putting the minus into the determinant we get
\[\mathcal{I}_n(u)=\mathcal{J}_n(u) + \frac{\sqrt{2}\,e^{-\tfrac{u^2}{2}}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\,\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j})\, \det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix}.\]
This finishes the proof.
\section{Proof of Theorem \ref{cor}}\label{sec:proof2}
In this section we prove \cref{cor}. Recall from \cref{E-eq} and \cref{E-eq2} that
\begin{align}\label{100}
E(n,p)&=\frac{\sqrt{\pi}\sqrt{p-1}^{\,n-1}}{\Gamma(\tfrac{n}{2})}\;\mean\limits_{u\sim N(0,\sigma^2)}\mean\limits_{A\sim \mathrm{GOE}(n-1;u,1)} \lvert \det(A)\rvert\\
&=\frac{\sqrt{\pi}\sqrt{p-1}^{\,n-1}}{\Gamma(\tfrac{n}{2})}\;\mean\limits_{u\sim N(0,\sigma^2)} \mathcal{I}_{n-1}(u), \hspace{1cm} \text{where } \sigma^2=\frac{p}{2(p-1)}.\nonumber
\end{align}
We now have to distinguish between the cases $n$ even and $n$ odd. The distinction between those cases is due to the nature of \cref{thm}: The formula for $\mathcal{I}_{n-1}(u)$ depends on the parity of $n$.
\subsection{Proof of Theorem \ref{cor} (1)} In this case we have $n=2m+1$ and hence $n-1=2m$. We know from \cref{thm} (1) that
\begin{align}\label{302}
\mathcal{I}_{n-1}(u)=&\mathcal{J}_{2m}(u)+\frac{\sqrt{2\pi} \,e^{-\tfrac{u^2}{2}}}{\prod_{i=1}^{n-1}\Gamma\left(\tfrac{i}{2}\right)}\,\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \; \det\begin{bmatrix} P_{2i-1}(u) & P_{2j}(u) \\ P_{2i-2}(u) & P_{2j-1}(u) \end{bmatrix}
\end{align}
Thus, when taking the expectation of $\mathcal{I}_{n-1}(u)$, we may take the expectation of each of the two summands above separately. Before we compute the expectation of $\mathcal{J}_{2m}(u)$ in \cref{301} below, however, we first have to prove the following lemma.
\begin{lemma}\label{301.1} For all $m\geq 1$ we have
\begin{enumerate}
\item $\sqrt{\pi}\, (2(m-1))! (2m-1)=2^{2m-1} \;\Gamma\left(\tfrac{2m+1}{2}\right)\;\Gamma(m)$.
\item $\sqrt{\pi}^{\, m+1}\,\left[\prod_{i=1}^{m-1} (2i)!\right](2m)!=m!2^{\,m(m+1)} \; \prod_{i=1}^{2m+1}\Gamma\left(\tfrac{i}{2}\right)$.
\end{enumerate}
\end{lemma}
\begin{proof} Throughout the proof we will have to use the identities $\Gamma(\tfrac{1}{2})=\sqrt{\pi}$ and $\Gamma(\tfrac{3}{2})=\tfrac{\sqrt{\pi}}{2}$ \cite[43:4:3]{atlas} and $\Gamma(x+1)=x\Gamma(x)$ for $x>0$ \cite[43:4:3]{atlas}.
We prove both claims using an induction argument. For (1) and $m=1$ we have
\[\frac{\sqrt{\pi}\, (2(m-1))! (2m-1)}{2^{2m-1} \;\Gamma\left(\tfrac{2m+1}{2}\right)\;\Gamma(m)} = \frac{\sqrt{\pi}}{\sqrt{\pi}} = 1.\]
For $m>1$, using the induction hypothesis, we have
\begin{align*}\frac{\sqrt{\pi}\, (2(m-1))! (2m-1)}{2^{2m-1}\;\Gamma\left(\tfrac{2m+1}{2}\right)\;\Gamma(m)}
= &\frac{(2m-2)(2m-3) }{4} \frac{2m-1}{2m-3} \frac{\Gamma\left(\tfrac{2m-1}{2}\right)}{\Gamma\left(\tfrac{2m+1}{2}\right)} \frac{\Gamma(m-1)}{\Gamma(m)}\\
= &\frac{(2m-2)(2m-1) }{4} \frac{2}{2m-1} \frac{1}{m-1} = 1
\end{align*}
For (2) and $m=1$ we have
\[\sqrt{\pi}^{\, m+1}\,\left[\prod_{i=1}^{m-1} (2i)!\right](2m)!=2\pi = m!2^{\,m(m+1)} \; \prod_{i=1}^{2m+1}\Gamma\left(\tfrac{i}{2}\right)\]
For $m>1$, using the induction hypothesis, we have
\begin{align*}\frac{\sqrt{\pi}^{\, m+1}\,\left[\prod_{i=1}^{m-1} (2i)!\right](2m)!}{m!2^{\,m(m+1)} \; \prod_{i=1}^{2m+1}\Gamma\left(\tfrac{i}{2}\right)}
= &\frac{\sqrt{\pi}\,(2(m-1))! 2m (2m-1)}{m2^{2m} \; \Gamma\left(\tfrac{2m+1}{2}\right) \;\Gamma(m)} = \frac{\sqrt{\pi}\,(2(m-1))! (2m-1)}{2^{2m-1} \; \Gamma\left(\tfrac{2m+1}{2}\right) \;\Gamma(m)}=1,
\end{align*}
the last equality because of (1). This finishes the proof.
\end{proof}
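Both identities are also easy to confirm numerically for small $m$, which serves as a safeguard against typos:

```python
import math

# Numerical confirmation of the two identities of the lemma above, m = 1..7.
def lhs1(m): return math.sqrt(math.pi) * math.factorial(2 * (m - 1)) * (2 * m - 1)
def rhs1(m): return 2 ** (2 * m - 1) * math.gamma((2 * m + 1) / 2) * math.gamma(m)

def lhs2(m):
    prod = math.prod(math.factorial(2 * i) for i in range(1, m))
    return math.sqrt(math.pi) ** (m + 1) * prod * math.factorial(2 * m)

def rhs2(m):
    prod = math.prod(math.gamma(i / 2) for i in range(1, 2 * m + 2))
    return math.factorial(m) * 2 ** (m * (m + 1)) * prod

rel_errs = [max(abs(lhs1(m) / rhs1(m) - 1), abs(lhs2(m) / rhs2(m) - 1))
            for m in range(1, 8)]
```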
Using \cref{301.1} we can prove the following.
\begin{lemma}\label{301} We have
\[\frac{\sqrt{\pi}\sqrt{p-1}^{\,n-1}}{\Gamma(\tfrac{n}{2})}\;\mean\limits_{u\sim N(0,\sigma^2)} \mathcal{J}_{2m}(u) = 1\]
\end{lemma}
\begin{proof}
A combination of \cref{mehta_thm} and \cref{J_n} reveals that
\[\mathcal{J}_{2m}(u) = \frac{ \sqrt{\pi}^{\, m}\, \left[\prod_{i=1}^{m-1} (2i)!\right]}{2^{\,m(m+1)} \; \prod_{i=1}^{2m}\Gamma\left(\tfrac{i}{2}\right)} \, H_{2m}(u).\]
By \cref{expectation_hermite1} (1) we have $\mean\limits_{u\sim N(0,\sigma^2)} H_{{2m}}( u)=\frac{(2m)!}{m!} (2\sigma^2-1)^m$.
Plugging in $\sigma^2= \tfrac{p}{2(p-1)}$ yields
\[\mean\limits_{u\sim N(0,\sigma^2)} H_{{2m}}( u)=\frac{(2m)!}{m!(p-1)^m} \]
Thus
\begin{align*}
\frac{\sqrt{\pi}\sqrt{p-1}^{\,n-1}}{\Gamma(\tfrac{n}{2})}\;\mean\limits_{u\sim N(0,\sigma^2)} \mathcal{J}_{2m}(u)
&= \frac{\sqrt{\pi}\sqrt{p-1}^{\,n-1}}{\Gamma(\tfrac{n}{2})}\frac{ \sqrt{\pi}^{\, m}\, \left[\prod_{i=1}^{m-1} (2i)!\right]}{2^{\,m(m+1)} \; \prod_{i=1}^{2m}\Gamma\left(\tfrac{i}{2}\right)}\frac{(2m)!}{m!(p-1)^m}\\
&= \frac{ \sqrt{\pi}^{\, m+1}\, \left[\prod_{i=1}^{m-1} (2i)!\right]}{2^{\,m(m+1)} \; \prod_{i=1}^n\Gamma\left(\tfrac{i}{2}\right)}\frac{(2m)!}{m!} =1
\end{align*}
the last equality by \cref{301.1} (2).
\end{proof}
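The Hermite moment used in the proof can likewise be confirmed by Gauss–Hermite quadrature; for $u\sim N(0,\sigma^2)$ one has $\mean f(u)=\pi^{-1/2}\sum_i w_i f(\sqrt{2}\,\sigma x_i)$. This is only a numerical illustration of \cref{expectation_hermite1} (1), not a proof.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

# Check E_{u ~ N(0, sigma^2)} H_{2m}(u) = (2m)!/(m! (p-1)^m)
# for sigma^2 = p/(2(p-1)), with H_k the physicists' Hermite polynomials.
p = 3.0
sigma = math.sqrt(p / (2 * (p - 1)))
xg, wg = hermgauss(80)  # exact for polynomial integrands up to degree 159

rel_errs = []
for m in range(1, 6):
    coeffs = [0] * (2 * m) + [1]  # coefficient vector selecting H_{2m}
    val = (wg * hermval(math.sqrt(2) * sigma * xg, coeffs)).sum() / math.sqrt(math.pi)
    target = math.factorial(2 * m) / (math.factorial(m) * (p - 1) ** m)
    rel_errs.append(abs(val / target - 1))
```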
\cref{301} in combination with \cref{100} and \cref{302} shows that $E(n,p)$ equals
\begin{equation}\label{303}
1+\frac{\sqrt{2}\pi\sqrt{p-1}^{n-1}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\sum_{1\leq i,j \leq m} \det(\Gamma^{i,j}) \mean\limits_{u\sim N(0,\sigma^2)}e^{-\tfrac{u^2}{2}}\det\begin{bmatrix} P_{2i-1}(u) & P_{2j}(u) \\ P_{2i-2}(u) & P_{2j-1}(u) \end{bmatrix}.
\end{equation}
Applying \cref{lemma10} below to \cref{303} we finally get that $E(n,p)$ equals
\begin{align*}
1+\frac{\sqrt{\pi}\sqrt{p-1}^{n-2}\sqrt{3p-2}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\sum_{1\leq i,j\leq m} \;\frac{\det(\Gamma^{i,j})\Gamma\left(i+j-\frac{1}{2}\right)}{\tfrac{3-2i-2j}{1-2i+2j}\,\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j-1}} \, F\left(2-2i,1-2j,\tfrac{5}{2}-i-j,\frac{3p-2}{4(p-1)}\right),
\end{align*}
which is the statement from \cref{cor} (1).
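In the smallest case $n=3$ (i.e. $m=1$) the formula collapses: $\det(\Gamma^{1,1})=1$ and $F(0,-1,\tfrac12,\cdot)=1$, which gives $E(3,p)=1+4(p-1)^{3/2}/\sqrt{3p-2}$. As a sanity check, this agrees numerically with \cref{100} evaluated using $\mathcal{I}_2(u)=u^2-\tfrac12+\sqrt{2}\,e^{-u^2/2}$ from \cref{thm} (1):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss

# E(3,p) via (100): sqrt(pi)(p-1)/Gamma(3/2) * E_{u ~ N(0,sigma^2)} I_2(u),
# with sigma^2 = p/(2(p-1)); the Gaussian expectation is computed by
# Gauss-Hermite quadrature.
p = 4.0
sigma = math.sqrt(p / (2 * (p - 1)))
xg, wg = hermgauss(60)
u = math.sqrt(2) * sigma * xg
I2 = u**2 - 0.5 + math.sqrt(2) * np.exp(-u**2 / 2)
E_num = math.sqrt(math.pi) * (p - 1) / math.gamma(1.5) \
        * (wg * I2).sum() / math.sqrt(math.pi)

# Collapsed closed form for n = 3
E_formula = 1 + 4 * (p - 1) ** 1.5 / math.sqrt(3 * p - 2)
```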
\begin{lemma}\label{lemma10}
For any $1\leq i,j\leq m$ we have
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^{-\tfrac{u^2}{2}}\det\begin{bmatrix} P_{2i-1}(u) & P_{2j}(u) \\ P_{2i-2}(u) & P_{2j-1}(u) \end{bmatrix}\\
=& \frac{\det(\Gamma^{i,j})\Gamma\left(i+j-\frac{1}{2}\right)}{\sqrt{2\pi}\;\tfrac{3-2i-2j}{1-2i+2j}\,\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j-1}} \, \sqrt{\frac{3p-2}{p-1}}\; F\left(2-2i,1-2j,\tfrac{5}{2}-i-j,\frac{3p-2}{4(p-1)}\right).
\end{align*}
\end{lemma}
\begin{proof}
Write
\begin{align*}
\mean\limits_{u\sim N(0,\sigma^2)}e^{-\tfrac{u^2}{2}}\det\begin{bmatrix} P_{2i-1}(u) & P_{2j}(u) \\ P_{2i-2}(u) & P_{2j-1}(u) \end{bmatrix}
=&\mean\limits_{u\sim N(0,\sigma^2)}e^{-\tfrac{u^2}{2}}(P_{2j-1}(u)P_{2i-1}(u)-P_{2i-2}(u)P_{2j}(u)).
\end{align*}
From \cref{expectation_hermite1} (2) we get for all $1\leq i\leq m$ that
\begin{align}\label{505}
&\mean\limits_{u\sim N(0,\sigma^2)} P_{{2i-1}}(u)P_{{2j-1}}(u)\,e^{-\tfrac{u^2}{2}}\\
=&\frac{(-1)^{i+j-1}\; 2^{i+j-1}\, \Gamma\left(i+j-1+\frac{1}{2}\right)}{\, \sqrt{\pi} \;(\sigma^2+1)^{i+j-1+\tfrac{1}{2}}} F\left(1-2i,1-2j;\frac{1}{2}-i-j+1;\frac{3p-2}{4(p-1)}\right)\nonumber\\
= &\frac{(-1)^{i+j-1} 4^{i+j}\, \Gamma\left(i+j-\frac{1}{2}\right)\,}{2\sqrt{2\pi}} \; \left(\frac{p-1}{3p-2}\right)^{i+j-\tfrac{1}{2}} F\left(1-2i,1-2j;\frac{3}{2}-i-j;\frac{3p-2}{4(p-1)}\right),\nonumber
\end{align}
and
\begin{align}\label{502}
&\mean\limits_{u\sim N(0,\sigma^2)} P_{{2i}}(u)P_{{2j-2}}(u)\,e^{-\tfrac{u^2}{2}}\\
=&\frac{(-1)^{i+j-1}\; 2^{i+j-1}\, \Gamma\left(i+j-1+\frac{1}{2}\right)}{\, \sqrt{\pi} \;(\sigma^2+1)^{i+j-1+\tfrac{1}{2}}} F\left(-2i,2-2j;\frac{1}{2}-i-j+1;\frac{3p-2}{4(p-1)}\right)\nonumber\\
= &\frac{(-1)^{i+j-1} 4^{i+j}\, \Gamma\left(i+j-\frac{1}{2}\right)\,}{2\sqrt{2\pi}} \; \left(\frac{p-1}{3p-2}\right)^{i+j-\tfrac{1}{2}} F\left(-2i,2-2j;\frac{3}{2}-i-j;\frac{3p-2}{4(p-1)}\right).\nonumber
\end{align}
Thus
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^{-\tfrac{u^2}{2}}(P_{2j-1}(u)P_{2i-1}(u)-P_{2i-2}(u)P_{2j}(u))\\
=& \det(\Gamma^{i,j}) \frac{(-1)^{i+j-1} 4^{i+j}\, \Gamma\left(i+j-\frac{1}{2}\right)\,}{2\sqrt{2\pi}} \; \left(\frac{p-1}{3p-2}\right)^{i+j-\tfrac{1}{2}}\\
&\left[ F\left(1-2i,1-2j;\frac{3}{2}-i-j;\frac{3p-2}{4(p-1)}\right)- F\left(2-2i,-2j;\frac{3}{2}-i-j;\frac{3p-2}{4(p-1)}\right)\right].
\end{align*}
By \cref{lemma_hypergeom} we have
\begin{align*}
&F\left(1-2i,1-2j;\tfrac{3}{2}-i-j;x\right)- F\left(2-2i,-2j;\tfrac{3}{2}-i-j;x\right)\\
=&2x\,\frac{1-2i+2j}{3-2i-2j} F\left(2-2i,1-2j,\tfrac{5}{2}-i-j,x\right).
\end{align*}
This shows that
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^{-\tfrac{u^2}{2}}\det\begin{bmatrix} P_{2i-1}(u) & P_{2j}(u) \\ P_{2i-2}(u) & P_{2j-1}(u) \end{bmatrix}\\
=& \frac{\det(\Gamma^{i,j})\Gamma\left(i+j-\frac{1}{2}\right)}{\sqrt{2\pi}\,\tfrac{3-2i-2j}{1-2i+2j}\,\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j-1}} \, \sqrt{\frac{3p-2}{p-1}}\; F\left(2-2i,1-2j,\tfrac{5}{2}-i-j,\frac{3p-2}{4(p-1)}\right),
\end{align*}
which finishes the proof.
\end{proof}
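The contiguous relation supplied by \cref{lemma_hypergeom} can also be checked numerically: every hypergeometric function appearing here terminates (its first parameter is a nonpositive integer), so a direct finite sum suffices. The helper hyp2f1_poly below is illustrative, not part of the text; the lower parameter is always a half-integer, so no division by zero occurs.

```python
from math import isclose

def hyp2f1_poly(a, b, c, x):
    """Gauss 2F1 as a finite sum; assumes a is a nonpositive integer (the
    series terminates) and that c + k never vanishes (true here, c is a
    half-integer)."""
    total, term = 1.0, 1.0
    for k in range(int(-a)):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
        total += term
    return total

# Relation as used in the proof of Lemma (lemma10):
for i in range(1, 5):
    for j in range(1, 5):
        c = 1.5 - i - j
        for x in (0.3, 0.8, 1.7):
            lhs = hyp2f1_poly(1 - 2*i, 1 - 2*j, c, x) - hyp2f1_poly(2 - 2*i, -2*j, c, x)
            rhs = 2*x * (1 - 2*i + 2*j) / (3 - 2*i - 2*j) * hyp2f1_poly(2 - 2*i, 1 - 2*j, c + 1, x)
            assert isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9)

# Variant as used in the proof of Lemma (lemma3):
for i in range(1, 5):
    for j in range(0, 5):
        c = 0.5 - i - j
        for x in (0.3, 0.8, 1.7):
            lhs = hyp2f1_poly(-2*i, -2*j, c, x) - hyp2f1_poly(1 - 2*i, -(2*j + 1), c, x)
            rhs = 2*x * (1 - 2*i + 2*j) / (1 - 2*i - 2*j) * hyp2f1_poly(1 - 2*i, -2*j, c + 1, x)
            assert isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9)
```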
\subsection{The case $n=2m$ is even} In this case we have $n-1=2m-1$, so that \cref{100} becomes
\begin{equation*}E(n,p)=\frac{\sqrt{\pi}\sqrt{p-1}^{n-1}}{\Gamma(\tfrac{n}{2})}\;\mean\limits_{u\sim N(0,\sigma^2)}\mathcal{I}_{2m-1}(u),\quad \text{where } \sigma^2=\frac{p}{2(p-1)}.
\end{equation*}
We apply \cref{thm} (2) to obtain
\[\mathcal{I}_{2m-1}(u)=\mathcal{J}_{2m-1}(u) + \frac{\sqrt{2}\,e^{-\tfrac{u^2}{2}}}{\prod_{i=1}^{n-1}\Gamma\left(\tfrac{i}{2}\right)}\,\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j})\, \det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix}\]
where $\Gamma_2^{i,j}=\left[\Gamma\left(r+s+\tfrac{1}{2}\right)\right]_{\substack{0\leq r\leq m-1, r\neq i\\0\leq s\leq m-1,s\neq j}}$.
Since the normal distribution is symmetric around the origin we have
\begin{align*}
\mean\limits_{u\sim N(0,\sigma^2)} \mathcal{J}_{2m-1}(u)&=\mean\limits_{u\sim N(0,\sigma^2)}\mean\limits_{A\sim \mathrm{GOE}(2m-1)} \det(A-u I_{2m-1})\\
&=(-1)^{2m-1}\mean\limits_{u\sim N(0,\sigma^2)}\mean\limits_{A\sim \mathrm{GOE}(2m-1)} \det((-A)-(-u) I_{2m-1})\\
&= -\mean\limits_{u\sim N(0,\sigma^2)}\mean\limits_{A\sim \mathrm{GOE}(2m-1)} \det(A-u I_{2m-1})\\
&= -\mean\limits_{u\sim N(0,\sigma^2)} \mathcal{J}_{2m-1}(u),
\end{align*}
and hence $\mean\limits_{u\sim N(0,\sigma^2)} \mathcal{J}_{2m-1}(u)=0$. This shows that
\begin{equation}\label{201}
E(n,p)=\frac{\sqrt{2\pi}\sqrt{p-1}^{n-1}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\sum_{0\leq i,j\leq m-1} \det(\Gamma_2^{i,j}) \mean\limits_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2} \det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix}.
\end{equation}
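The vanishing of $\mean_{u}\mathcal{J}_{2m-1}(u)$ used above rests on the deterministic sign identity $\det(A-uI_{2m-1})=(-1)^{2m-1}\det\bigl((-A)-(-u)I_{2m-1}\bigr)$, combined with the invariance of the GOE and $N(0,\sigma^2)$ laws under negation. The identity itself can be checked directly; the matrices below are arbitrary symmetric ones, since the identity does not depend on the GOE normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 * 4 - 1          # odd dimension, n = 2m - 1 with m = 4
for _ in range(5):
    g = rng.standard_normal((n, n))
    a = (g + g.T) / 2  # symmetric matrix; the exact probability law is irrelevant
    u = rng.standard_normal()
    lhs = np.linalg.det(a - u * np.eye(n))
    rhs = (-1) ** n * np.linalg.det((-a) - (-u) * np.eye(n))
    assert np.isclose(lhs, rhs, rtol=1e-6, atol=1e-9)
```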
Applying \cref{lemma3} below to \cref{201} we see that $E(n,p)$ equals
\begin{align*}
& \frac{\sqrt{p-1}^{n-2}\,\sqrt{3p-2}}{\prod_{i=1}^{n}\Gamma\left(\tfrac{i}{2}\right)}\sum_{j=0}^{m-1} \Big[\frac{\sqrt{\pi}\det(\Gamma_2^{0,j}) (2j+1)!}{(-1)^{j} 2^{2j}\,j!}\,\frac{(p-2)^jp}{(p-1)^{j}(3p-2)} \;F\left(-j,\frac{1}{2},\frac{3}{2},\frac{-p^2}{(3p-2)(p-2)}\right)\\
&- \frac{ \det(\Gamma_2^{0,j}) \, \Gamma\left(j+\frac{1}{2}\right)}{2\left(-\frac{3p-2}{4(p-1)}\right)^{j+1}} +\sum_{i=1}^{m-1} \frac{\det(\Gamma_2^{i,j}) \,\Gamma\left(i+j+\frac{1}{2}\right)}{\tfrac{(1-2i-2j)}{(1-2i+2j)}\,\left(-\frac{3p-2}{4(p-1)}\right)^{i+j} } F\left(-2j,-2i+1,\tfrac{3}{2}-i-j,\frac{3p-2}{4(p-1)}\right)\Big],
\end{align*}
which proves \cref{cor} (2).
\begin{lemma}\label{lemma3}
For all $0\leq i\leq m-1$ and $0\leq j\leq m-1$ the following holds.
\begin{enumerate}
\item If $i>0$:
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2}\det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix}\\
=&\frac{\Gamma\left(i+j+\frac{1}{2}\right)}{\sqrt{2\pi} \,\tfrac{(1-2i-2j)}{(1-2i+2j)}\,\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j} }\,\sqrt{\frac{3p-2}{p-1}}\, F\left(-2j,-2i+1,\frac{3}{2}-i-j,\frac{3p-2}{4(p-1)}\right).
\end{align*}
\item If $i=0$:
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2}\det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix}\\
=& \sqrt{\frac{3p-2}{p-1}}\left[\frac{(-1)^{j} (2j+1)!}{2^{2j}\,\sqrt{2}\,j!}\,\frac{(p-2)^jp}{(p-1)^{j}(3p-2)} \;F\left(-j,\frac{1}{2},\frac{3}{2},\frac{-p^2}{(3p-2)(p-2)}\right) - \frac{ \Gamma\left(j+\frac{1}{2}\right)}{2\sqrt{2\pi}\left(-\frac{3p-2}{4(p-1)}\right)^{j+1}} \right].
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
Write
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2}\det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix}
=\mean\limits_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2}\left(P_{2i}(u)P_{2j}(u)-P_{2i-1}(u)P_{2j+1}(u)\right).
\end{align*}
We prove (1). Fix $0<i\leq m-1$ and $0\leq j\leq m-1$. By \cref{expectation_hermite2} (1) and similarly to \cref{502} we have
\begin{align}\label{203}
&\mean\limits_{u\sim N(0,\sigma^2)} P_{2i}(u)P_{2j}(u) e^{-\tfrac{u^2}{2}}\\ = &\frac{(-1)^{i+j} 4^{i+j+1}\, \Gamma\left(i+j+\frac{1}{2}\right)\,}{2\sqrt{2\pi}} \; \left(\frac{p-1}{3p-2}\right)^{i+j+\tfrac{1}{2}} F\left(-2i,-2j;\frac{1}{2}-i-j;\frac{3p-2}{4(p-1)}\right),\nonumber
\end{align}
and, similar to \cref{505},
\begin{align}\label{202}
&\mean\limits_{u\sim N(0,\sigma^2)} P_{2i-1}(u)P_{2j+1}(u) e^{-\tfrac{u^2}{2}}\\ = &\frac{(-1)^{i+j} 4^{i+j+1}\, \Gamma\left(i+j+\frac{1}{2}\right)\,}{2\sqrt{2\pi}} \; \left(\frac{p-1}{3p-2}\right)^{i+j+\tfrac{1}{2}} F\left(-(2i-1),-(2j+1);\frac{1}{2}-i-j;\frac{3p-2}{4(p-1)}\right).\nonumber
\end{align}
This shows that
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2}\left(P_{2i}(u)P_{2j}(u)-P_{2i-1}(u)P_{2j+1}(u)\right)\\
=&\frac{4}{2\sqrt{2\pi}}\;\Gamma\left(i+j+\frac{1}{2}\right) \; \left(-\frac{4(p-1)}{3p-2}\right)^{i+j} \sqrt{\frac{p-1}{3p-2}}\\
&\Big[F\left(-2i,-2j;\tfrac{1}{2}-i-j;\tfrac{3p-2}{4(p-1)}\right)-F\left(-(2i-1),-(2j+1);\tfrac{1}{2}-i-j;\tfrac{3p-2}{4(p-1)}\right)\Big].
\end{align*}
Using \cref{lemma_hypergeom} we get
\begin{align*}
& F\left(-2i,-2j;\tfrac{1}{2}-i-j;x\right)-F\left(-(2i-1),-(2j+1);\tfrac{1}{2}-i-j;x\right)\\
=&2x\frac{(1-2i+2j)}{(1-2i-2j)} F\left(-2j,-2i+1,\tfrac{3}{2}-i-j,x\right),
\end{align*}
so that
\begin{align*}
&\mean\limits_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2}\det\begin{bmatrix} P_{2i}(u) & P_{2j+1}(u) \\ P_{2i-1}(u) & P_{2j}(u) \end{bmatrix}\\
=&\frac{\Gamma\left(i+j+\frac{1}{2}\right)}{\sqrt{2\pi} \,\tfrac{(1-2i-2j)}{(1-2i+2j)}\,\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j} }\,\sqrt{\frac{3p-2}{p-1}}\, F\left(-2j,-2i+1,\tfrac{3}{2}-i-j,\tfrac{3p-2}{4(p-1)}\right).
\end{align*}
This proves (1).
Now we prove (2). Observe that by \cref{203} we have
\begin{equation}
\mean\limits_{u\sim N(0,\sigma^2)} P_{0}(u)P_{2j}(u) e^{-\tfrac{u^2}{2}}=\frac{(-1)^{j} 4^{j+1} \Gamma\left(j+\frac{1}{2}\right)\left(\frac{p-1}{3p-2}\right)^{j+\tfrac{1}{2}}}{2\sqrt{2\pi}} =\frac{ (-1)\,\Gamma\left(j+\frac{1}{2}\right)}{2\sqrt{2\pi}\left(-\frac{3p-2}{4(p-1)}\right)^{j+1}} \, \sqrt{\frac{3p-2}{p-1}}.\label{204}
\end{equation}
Moreover, by \cref{expectation_hermite2} (2) we have
\begin{align}
\nonumber\mean\limits_{u\sim N(0,\sigma^2)} P_{-1}(u)P_{2j+1}(u) e^{-\tfrac{u^2}{2}} &= \frac{(-1)^{j+1} (2j+1)!}{2^{j}\,j!}\,\frac{(1-\sigma^2)^j\sigma^{2}}{\sqrt{1+\sigma^{2}} }\;F\left(-j,\frac{1}{2},\frac{3}{2},\frac{\sigma^4}{\sigma^4-1}\right)\\
\nonumber&= \frac{(-1)^{j+1} (2j+1)!}{2^{j}\,j!}\,\frac{(\tfrac{p-2}{2(p-1)})^j (\tfrac{p}{2(p-1)})}{\sqrt{\tfrac{3p-2}{2(p-1)}} }\;F\left(-j,\frac{1}{2},\frac{3}{2},\frac{-p^2}{(3p-2)(p-2)}\right)\\
&= \frac{(-1)^{j+1} (2j+1)!}{2^{2j}\,\sqrt{2}\,j!}\,\frac{(p-2)^jp}{(p-1)^{j}(3p-2)} \sqrt{\frac{3p-2}{p-1}} \;F\left(-j,\frac{1}{2},\frac{3}{2},\frac{-p^2}{(3p-2)(p-2)}\right).\label{205}
\end{align}
Combining \cref{204} and \cref{205} we see that $\mean_{u\sim N(0,\sigma^2)}e^\frac{-u^2}{2}\det\begin{bmatrix} P_{0}(u) & P_{2j+1}(u) \\ P_{-1}(u) & P_{2j}(u) \end{bmatrix}$ equals
\begin{align*}
\sqrt{\frac{3p-2}{p-1}}\left[\frac{(-1)^{j} (2j+1)!}{2^{2j}\,\sqrt{2}\,j!}\,\frac{(p-2)^jp}{(p-1)^{j}(3p-2)} \;F\left(-j,\frac{1}{2},\frac{3}{2},\frac{-p^2}{(3p-2)(p-2)}\right) - \frac{ \Gamma\left(j+\frac{1}{2}\right)}{2\sqrt{2\pi}\left(-\frac{3p-2}{4(p-1)}\right)^{j+1}} \right],
\end{align*}
which proves (2).
\end{proof}
\section{Introduction}
The Shubnikov-de Haas oscillation (SdHO) in a two-dimensional electron gas (2DEG) is a manifestation of Landau quantization, $E_N = (N+1/2) \hbar \omega_\mathrm{c}$, in the magnetoresistance \cite{Ishihara86}. The oscillation is periodic in $1/B$, and damps with decreasing magnetic fields since the cyclotron energy $\hbar \omega_\mathrm{c}=\hbar eB/m^*$ (with $m^*$ the electron effective mass) diminishes with respect to both the thermal blurring $k_\mathrm{B}T$ of the Fermi energy $E_\mathrm{F}$ and the disorder broadening $\Gamma$ of the Landau levels (LLs). In a unidirectional lateral superlattice (ULSL), a 2DEG subjected to a one-dimensional (1D) periodic modulation, another analogous oscillation periodic in $1/B$ emerges. The oscillation, known as the commensurability oscillation (CO) \cite{Weiss89,Gerhardts89,Winkler89}, results from the commensurability between the cyclotron radius $R_\mathrm{c}=\hbar k_\mathrm{F}/eB$ and the period $a$ of the modulation, and also damps with decreasing $B$. Here $k_\mathrm{F}=\sqrt{2\pi n_e}$ represents the Fermi wave number with $n_e$ the areal density of electrons. In the CO, the energy difference between adjacent flat-band conditions (see below), $(a k_\mathrm{F}/2)\hbar \omega_\mathrm{c}$, takes the place of $\hbar \omega_\mathrm{c}$ in the SdHO. Since $(a k_\mathrm{F}/2)\gg 1$ in most experiments performed so far, the SdHO vanishes and only the CO survives at the low end of magnetic fields. At low enough temperatures where thermal damping is not so severe, however, the two classes of oscillation coexist over a range of magnetic field with an intriguing interplay. The behavior of the SdHO under the influence of the simultaneously present CO is the main subject of the present study.
A unidirectional periodic modulation of the electrostatic potential,
\begin{equation}
V(x) = V_0 \cos(qx)
\label{potmod}
\end{equation}
with $q = 2 \pi/a$, lifts the degeneracy of the LLs. The energy becomes dependent on the position $x_0$ of the guiding center and reads, in the first order perturbation theory valid for a weak modulation $V_0 \ll E_\mathrm{F}$,
\begin{equation}
E_{N,qx_0} = \left(N + \frac{1}{2}\right)\hbar\omega _\mathrm{c} + V_{N,B} \cos(qx_0),
\label{Energy}
\end{equation}
where $V_{N,B}=V_0 e^{ - u/2} L_N(u)$ with $L_N(u)$ the Laguerre polynomial, $u = q^2 l^2 /2$, and $l = \sqrt{\hbar/eB}$ the magnetic length. $V_{N,B}$ at the Fermi energy can be approximated, using an asymptotic expression for $N \gg 1$ appropriate for low-magnetic-field range relevant to the present study, by,
\begin{equation}
V_B =
V_0 \sqrt {\frac{2}{\pi q R_\mathrm{c}}} \cos \left(q R_\mathrm{c} - \frac{\pi}{4} \right).
\label{VBapp}
\end{equation}
$V_B$ oscillates with $B$, and the width of the Landau bands $2|V_B|$ takes maximum at
\begin{equation}
\frac{2 R_\mathrm{c}}{a}=n+\frac{1}{4}\hspace{10mm}(n=1,2,3,...),
\label{bandmax}
\end{equation}
and vanishes at the \textit{flat band conditions}
\begin{equation}
\frac{2 R_\mathrm{c}}{a}=n-\frac{1}{4}\hspace{10mm}(n=1,2,3,...).
\label{flatband}
\end{equation}
The oscillation of the Landau bandwidth is the origin of the CO and, at the same time, is responsible for the modulation of the amplitude and the phase of the SdHO.
The modulation of the SdHO amplitude was first reported by Overend \textit{et al}. \cite{OverendG98} They exploited a periodic modulation of magnetic field instead of electrostatic potential. For a magnetic field modulation, eqs. (\ref{bandmax}) and (\ref{flatband}) interchange their roles, with eq. (\ref{bandmax}) representing the flat band conditions. The underlying physics, however, is basically the same for both types of modulations. The authors reported that the SdHO amplitude remains large at the flat band conditions [eq. (\ref{bandmax})], while it is suppressed at the maximum bandwidth conditions [eq. (\ref{flatband})]. Further study by the same group \cite{Edmonds01,Shi02} showed the phase inversion of the SdHO at the maximum bandwidth conditions when the bandwidth is larger than $\hbar \omega_\mathrm{c}$. They explained their observation as an effect of the modulated density of states (DOS), which affects the conductivity through the collisional contribution. The phase inversion is essentially the same phenomenon as the even-odd filling-factor switching reported for a ULSL with strong electrostatic potential modulation in the quantum Hall regime \cite{Tornow96}, and is attributed to the van Hove singularities in the DOS\@.
Similar suppression of the SdHO amplitude at the maximum bandwidth conditions [eq. (\ref{bandmax})] was also reported for electrostatic ULSLs with a strong modulation amplitude induced by a patterned InGaAs stressor layer \cite{Milton00}. Interestingly, the authors observed that the trend is reversed at higher magnetic fields; the amplitude is \textit{enhanced} at the maximum bandwidth conditions. The origin of this enhancement, as well as of the inversion of the tendency with the magnetic field, still lacks comprehensive explanation.
The purpose of the present paper is to achieve more quantitative understanding of the behavior of the SdHO under periodic modulation. In the previous works quoted above \cite{OverendG98,Edmonds01,Shi02,Tornow96,Milton00}, ULSLs with a large amplitude periodic modulation are employed, which is obviously advantageous in attaining the modulation of the SdHO strong enough to be readily observed. However, the amplitude of the periodic modulation or the width of Landau bands exceeding 10\% of the Fermi energy seems to be rather incompatible with the perturbative treatment as in eq.\ (\ref{Energy}). In the present work, we keep $|V_B|/E_\mathrm{F}$ $\leq$ $V_0/E_\mathrm{F}$ to be less than five percent, which validates the comparison of the experimental data with perturbation theories. We employ low temperatures, which allows us to observe the SdHO down to low magnetic fields ($\sim$ 0.05 T) where $V_0 / \hbar \omega_\mathrm{c}$ becomes large even for our small $V_0$.
\section{Experimental}
We examine two ULSL samples with slightly different characteristics, as tabulated in Table \ref{Samples}. The two samples are fabricated from two GaAs/AlGaAs 2DEG wafers having nominally the same structure \cite{structure} but differing in the carrier mobilities owing to the conditions of the molecular beam epitaxy (MBE) chamber used for the growth. The electrostatic potential modulation is introduced via strain-induced piezoelectric effect \cite{Skuras97} by placing negative electron-beam (EB) resist on the surface \cite{Endo00E}. As depicted in the right inset of Fig.\ \ref{rawVB} (a), each sample is patterned into a Hall bar that has two sets of voltage probes to measure the longitudinal and Hall resistivity of the section with (ULSL) and without (plain 2DEG) the potential modulation at the same time. The values of the mobility $\mu$ and the density $n_e$ tabulated in Table \ref{Samples} are those measured after brief illumination by LED, the condition in which the magnetoresistance traces presented in this paper are taken. No sign of deterioration in $\mu$ or change in $n_e$ arising from the microfabrication process is discerned.
\begin{table}
\caption{Sample parameters}
\begin{center}
\begin{tabular}{lcc}
\hline
& Sample A & Sample B \\
\hline
Period $a$ (nm) & 231 & 184 \\
Amplitude $V_0$ (meV) & 0.39 & 0.31 \\
Mobility $\mu$ (m$^2$/Vs) & 69 & 101 \\
Quantum mobility $\mu_\mathrm{Q}$ (m$^2$/Vs) & 11.9 & 7.2 \\
Density $n_e$ (10$^{15}$ m$^{-2}$) & 2.9 & 2.9 \\
\hline
\end{tabular}
\label{Samples}
\end{center}
\end{table}
\begin{figure}[tb]
\includegraphics[bbllx=45,bblly=220,bburx=510,bbury=800,width=8.5cm]{63050Fig1.eps}
\caption{(a) (Color online) Magnetoresistance traces for the ULSL (solid line) and the plain 2DEG (dotted line) taken at 15 mK for sample A\@. A trace for the ULSL at 4.2 K is also shown with dot-dashed line \cite{NoteDiffCryo}. The upper right inset depicts the schematic configuration of the sample. (b) The absolute value of $V_B$ [eq.\ (\ref{VBapp})], with $V_B$ $>$ 0 ($<$ 0) plotted by solid (dashed) line.}
\label{rawVB}
\end{figure}
For Sample B, the periodic modulation is introduced by an EB-resist grating having a conventional \textit{periodic} line-and-space pattern. In this approach, higher harmonics inevitably mix in the potential profile as the period $a$ becomes large compared to the depth $d$ ($=$ 90 nm for the present samples) of the 2DEG plane from the surface \cite{Endo05HH}. An unconventional strategy of employing a \textit{quasiperiodic} pattern is applied for Sample A with a larger period. Here, slabs of EB-resist are placed on the ``$L$''s of the Fibonacci sequence ``$LSLLSLSL...$'', with $L$=104 nm and $S$=64 nm [$L/S$ is set to $\phi$=$(1+\sqrt{5})/2$=1.618...]. Generally in such Fibonacci ULSLs, the analysis of the CO reveals several frequency components, each corresponding to one of the self-similar generations of a potential profile mutually scaled by the factor $\phi$ \cite{Endo07I,Endo07FCO}. In the particular sample explored in this study (Sample A), a single component with effective period $a$=231 nm, corresponding to the average distance between adjacent ``$S$''s, overwhelmingly dominates the potential profile, allowing the profile to be virtually regarded as periodic. Although somewhat counterintuitive, a simple sinusoidal potential profile described by eq.\ (\ref{potmod}) is realized better in Sample A than in Sample B because of the absence of the higher harmonics.
\begin{figure}[tb]
\includegraphics[bbllx=50,bblly=80,bburx=550,bbury=800,width=8.5cm]{63050Fig2.eps}
\caption{(Color online) Fourier transform of $(d^2/dB^2)[\rho_{xx}(B) / \rho_0]$ versus $1/B$ for the ULSL in sample A (a) and B (b). The Fourier spectrum for the plain 2DEG is also shown in (a) by the dotted line with the shade underneath. Peak positions are given by combinations of integer multiples of $f_\mathrm{SdH}$ and $f_\mathrm{CO}$, where $f_\mathrm{SdH}$ ($f_\mathrm{CO}$) represents the fundamental frequency for the SdHO (CO).}
\label{d2BFFT}
\end{figure}
Cursory characterization of the ULSL samples can be done by performing a Fourier transform of the $(d^2/dB^2)[\rho_{xx}(B) / \rho_0]$ vs. $1/B$ curve, where $\rho_{xx}(B)$ represents longitudinal magnetoresistance and $\rho_0=\rho_{xx}(0)$. The second derivative conveniently eliminates the slowly-varying background from the magnetoresistance traces. The Fourier spectra shown in Fig.\ \ref{d2BFFT} exhibit peaks corresponding to the CO ($f_\mathrm{CO}$), the SdHO ($f_\mathrm{SdH}$), and their combinations ($p f_\mathrm{SdH} \pm q f_\mathrm{CO}$ with $p$, $q$ integers). The CO peaks up to the fourth harmonic ($f_\mathrm{CO}$, $2 f_\mathrm{CO}$, $3 f_\mathrm{CO}$, and $4 f_\mathrm{CO}$) are seen for Sample B, indicating the presence of the corresponding harmonic contents in the potential profile. For Sample A, in contrast, only one CO peak is found, justifying the description by eq.\ (\ref{potmod}) of the potential profile. The presence of the peaks at $p f_\mathrm{SdH} \pm q f_\mathrm{CO}$ reveals the interplay between the two types of oscillation that will be discussed in detail in the following sections. The higher harmonics of the CO in Sample B do affect the interplay as indicated by the presence of a peak $f_\mathrm{SdH} - 2 f_\mathrm{CO}$ \cite{highharm}.
More quantitative account of the actual potential profile requires analyses of the CO amplitude. The values of the fundamental component $V_0$ tabulated in Table \ref{Samples} are obtained by such analyses that take the effect of damping by scattering into account \cite{Endo00E}. Higher harmonic contents can also be evaluated by analyses using Fourier band pass filters \cite{Endo05HH,Endo07FCO}, and the second, the third, and the fourth harmonic components for Sample B are found to be $V_2$=0.10 meV, $V_3$=0.07 meV, and $V_4$=0.05 meV, respectively. Although these higher harmonics can in principle complicate the analysis below for Sample B, we neglect them for simplicity and assume, in the rest of this paper, that both Sample A and Sample B have a potential profile of the form given by eq.\ (\ref{potmod}).
Measurements are performed in a top-loading dilution fridge at the base temperature $T \simeq$ 15 mK. Standard low-frequency (13 Hz) ac lock-in technique with a current $I_\mathrm{rms}$=10 nA is employed for the resistivity measurement. We have checked that the electron heating by the current is negligible in the magnetic-field range of the present interest ($|B|$ $<$ 1 T) by reducing $I_\mathrm{rms}$ down to 0.5 nA, for which the magnetoresistance trace shows no difference except for much worse signal-to-noise ratio. A low sweep rate $dB/dt$=0.01 T/min is employed in order to avoid undesired hysteresis of the superconducting magnet. The remnant hysteresis is further calibrated by using the simultaneously measured Hall resistivity.
\section{Results and Discussion}
\subsection{Experimentally obtained Shubnikov-de Haas oscillation}
Figure \ref{rawVB} (a) shows magnetoresistance traces of Sample A for both the ULSL and the adjacent plain 2DEG\@. Magnetoresistance of the ULSL at 4.2 K is also shown \cite{NoteDiffCryo}, which essentially represents the pure CO for $|B|$ $<$ $\sim$0.5 T where the SdHO has already damped out. In the low magnetic field range ($|B|$ $<$ $\sim$ 0.25 T), the SdHO of the ULSL is suppressed at the maxima of the CO, while remaining almost unaltered from that of the plain 2DEG at the minima of the CO. In contrast, the SdHO amplitude is observed to be enhanced at the CO maxima for higher magnetic fields. This is most clearly seen for the CO peak at 0.64 T\@. In Fig.\ \ref{rawVB} (b), the half width of the Landau bands calculated by eq.\ (\ref{VBapp}) is plotted. It can readily be confirmed that, as is well known \cite{Gerhardts89,Winkler89}, the maxima and the minima of the CO correspond to the maximum bandwidth and the flat band conditions, respectively.
\begin{fullfigure}[tb]
\includegraphics[bbllx=20,bblly=290,bburx=810,bbury=700,width=18cm]{63050Fig3.eps}
\caption{(Color online) (a) Experimentally obtained SdHO, divided by the thermal and the scattering damping factors, for sample A\@. See text for details. (b) Density of states divided by the exponential damping factor, calculated by eq.\ (\ref{D1plain}) or eq.\ (\ref{D1ULSL}) with $V_0$ = 0.49 meV. The thin line with the shade underneath and the thick line represent the plain 2DEG and the ULSL, respectively, plotted against the filling factors (bottom axes) or magnetic fields (top axes). The half width of the Landau bands, $|V_B|$, is also plotted in (b) (right axis).}
\label{Sim}
\end{fullfigure}
To look into the details of the behavior of the SdHO, the rapidly oscillating parts of the magnetoresistance, $\Delta \rho_\mathrm{SdH} / \rho_0$, are extracted and plotted against the Landau level filling factor $\nu$=$n_e h/eB$ in Fig.\ \ref{Sim} (a) for both the ULSL and the plain 2DEG of Sample A\@. The corresponding magnetic field is shown on the top axis. The extraction of the $\Delta \rho_\mathrm{SdH} / \rho_0$ is done by applying a Fourier high pass filter to the $\rho_{xx} / \rho_0$ vs. $1/B$ curve, with the threshold set at a frequency higher than $f_\mathrm{CO}$, and further by subtracting the average of the upper and lower envelopes, as was done in the analysis of the CO \cite{Endo00E}. The SdHO is known to damp with decreasing magnetic field by the thermal damping factor $A(T/T_\mathrm{c})$ and the scattering damping factor $\exp (-\pi / \omega_\mathrm{c} \tau_\mathrm{Q})$, where $A(x)$=$x/\sinh(x)$, $k_\mathrm{B} T_\mathrm{c}$=$\hbar \omega_\mathrm{c} / 2 \pi^2$, and $\tau_\mathrm{Q}$ the single particle (quantum) scattering time. Traces shown in Fig.\ \ref{Sim} are $\Delta \rho_\mathrm{SdH} / \rho_0$ divided by these damping factors, applying to both traces the value of the quantum mobility $\mu_\mathrm{Q} = e \tau_\mathrm{Q} / m^*$=11.9 m$^2$/Vs determined from the actual damping of the SdHO of the plain 2DEG\@. The factor $A(T/T_\mathrm{c})$ turns out to barely deviate from unity, reflecting the fact that the thermal damping is negligibly small at this low temperature. Since spins are unresolved for the magnetic-field range in the present study, the minima and maxima of the SdHO are expected to take place at even and odd filling factors, respectively. This is actually what we observe for the plain 2DEG\@. 
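The Fourier high-pass step of the extraction can be sketched on synthetic data as follows; the grid, frequencies, and threshold below are arbitrary demonstration values, not those used for the measured traces:

```python
import numpy as np

# Synthetic magnetoresistance on a uniform 1/B grid: a slow "CO-like" part
# plus a fast "SdHO-like" part (amplitudes and frequencies are demo values).
inv_b = np.linspace(5.0, 40.0, 4096)     # 1/B axis (1/T)
f_co, f_sdh = 0.5, 6.0                   # oscillation frequencies in 1/B (T)
signal = 0.3 * np.cos(2 * np.pi * f_co * inv_b) + 0.05 * np.cos(2 * np.pi * f_sdh * inv_b)

# High-pass filter: zero every Fourier component below a threshold chosen
# above f_co and below f_sdh.
spec = np.fft.rfft(signal)
freq = np.fft.rfftfreq(inv_b.size, d=inv_b[1] - inv_b[0])
spec[freq < 2.0] = 0.0
fast = np.fft.irfft(spec, n=inv_b.size)

# Away from the edges, the filtered trace tracks the fast component.
target = 0.05 * np.cos(2 * np.pi * f_sdh * inv_b)
mid = slice(1024, -1024)
assert np.max(np.abs(fast[mid] - target[mid])) < 0.02
```

In practice the envelope-averaging step described above would follow this filtering; it is omitted from the sketch.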
For the ULSL, the SdHO amplitudes are suppressed at the maximum bandwidth conditions, $B$=0.239, 0.183, 0.148, 0.125, 0.107, 0.094, 0.084, 0.076, 0.069, 0.064 T [$n$=3, 4,...,12 in eq.\ (\ref{bandmax}), note the $|V_B|$ plotted in Fig.\ \ref{Sim} (b)]. The SdHO amplitude at the suppressed conditions decreases with decreasing magnetic fields until it vanishes at $B$ $\sim$ 0.125 T, and then revives at still lower magnetic fields but with the position of peaks and dips inverted; there, the minima and maxima occur at odd and even filling factors, respectively. This is the even-odd transition reported in refs.\ \citen{Edmonds01,Shi02,Tornow96,Milton00}, which was attributed, by using numerically calculated DOS and the resultant conductivity, to the broadening and the van Hove singularity of individual LLs that result in the peaks of DOS at even filling factors.
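The maximum-bandwidth fields listed above follow directly from eq.\ (\ref{bandmax}); a quick numerical check using the Sample-A parameters of Table \ref{Samples} (the small residual offsets are consistent with the rounding of $n_e$):

```python
import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19   # SI units
a, n_e = 231e-9, 2.9e15                      # Sample-A period and density (Table I)
k_f = np.sqrt(2 * np.pi * n_e)               # Fermi wave number

# Maximum-bandwidth fields from 2 R_c / a = n + 1/4 with R_c = hbar k_F / (e B).
n = np.arange(3, 13)
b_max = 2 * hbar * k_f / (e * a * (n + 0.25))

reported = np.array([0.239, 0.183, 0.148, 0.125, 0.107,
                     0.094, 0.084, 0.076, 0.069, 0.064])
assert np.all(np.abs(b_max - reported) < 0.004)
```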
\subsection{Comparison with calculated density of states \label{cmpDOS}}
It is well established that the $\Delta \rho_\mathrm{SdH} / [\rho_0 A(T/T_\mathrm{c})]$ is proportional to the oscillatory part of the DOS at the Fermi energy, $\Delta D (E_\mathrm{F}) / D_0$, at low temperatures provided the $|\Delta \rho_\mathrm{SdH} / \rho_0 A|$ is not too large \cite{Ishihara86,Coleridge89}. In this subsection, we make a detailed comparison of the experimentally obtained SdHO described in the previous subsection with a calculated DOS\@.
First we deduce an analytic expression for the DOS under a periodic modulation eq.\ (\ref{potmod}), instead of resorting to a numerical calculation as was done in the previous works. We start by recalling the DOS for a plain 2DEG \cite{Ishihara86}. The disorder broadened line shape of each LL peak is approximated here for simplicity by a Lorentzian, $P(E) = (\Gamma/\pi)/(E^2+\Gamma^2)$, with the width $\Gamma$ independent of $B$. The DOS is obtained by summing up the LL peaks, including the factor 2 for spin degeneracy,
\begin{eqnarray}
\displaystyle{ D(E)} & = & \displaystyle{ \frac{2}{2\pi l^2}\sum\limits_{N=0}^\infty {P(E - E_N)}} \hspace{30mm} \nonumber \\
& = & \displaystyle{ D_0 \left\{ 1 + 2\sum\limits_{k = 1}^\infty \cos \left[ 2\pi k\left( \varepsilon - \frac{1}{2} \right) \right]e^{ - 2\pi k\gamma} \right\}}, \nonumber \\
\label{DOSplain}
\end{eqnarray}
where $D_0 = m^* / \pi \hbar^2$ represents the constant DOS of a 2DEG in the absence of magnetic field. We introduced dimensionless parameters, $\varepsilon = E / \hbar \omega _\mathrm{c}$ and $\gamma = \Gamma / \hbar \omega _\mathrm{c}$. In the second line of eq.\ (\ref{DOSplain}), we made use of the Poisson sum formula \cite{approxminus}. Since $\exp(-2 \pi \gamma) \ll 1$ for small magnetic fields, it is usually a good approximation to keep only the $k$=1 term in the summation: $D(E) \simeq D_0 + \Delta D_1(E)$ with
\begin{equation}
\frac{\Delta D_1(E)}{D_0} = -2 \cos(2 \pi \varepsilon) \exp(-2 \pi \gamma).
\label{D1plain}
\end{equation}
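The Poisson-summation identity behind eq.\ (\ref{DOSplain}) can be verified numerically; as in the text \cite{approxminus}, the direct sum over $N$ is extended to negative integers, which is exponentially irrelevant for $\varepsilon \gg 1$:

```python
import numpy as np

def dos_sum(eps, gamma, n_max=20000):
    """Direct sum of Lorentzian Landau-level peaks in units of hbar*omega_c,
    with N extended to negative integers as in the Poisson-summation step."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum((gamma / np.pi) / ((eps - n - 0.5) ** 2 + gamma ** 2))

def dos_series(eps, gamma, k_max=50):
    """Right-hand side of eq. (DOSplain), truncated at k_max harmonics."""
    k = np.arange(1, k_max + 1)
    return 1.0 + 2.0 * np.sum(np.cos(2 * np.pi * k * (eps - 0.5))
                              * np.exp(-2 * np.pi * k * gamma))

for eps in (30.2, 30.5, 47.75):
    for gamma in (0.2, 0.5):
        assert abs(dos_sum(eps, gamma) - dos_series(eps, gamma)) < 1e-4
```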
The proportionality $\Delta \rho_\mathrm{SdH} / \rho_0 A \propto \Delta D_1 / D_0$ implies $\Gamma = \hbar / 2 \tau_\mathrm{Q}$ by the comparison of the exponential factors. Therefore the quantum mobility $\mu_\mathrm{Q} =$ 11.9 m$^2$/Vs corresponds to $\Gamma =$ 0.073 meV\@.
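The quoted value of $\Gamma$ is readily reproduced from $\Gamma = \hbar/2\tau_\mathrm{Q}$ with $\tau_\mathrm{Q} = \mu_\mathrm{Q} m^*/e$; the GaAs effective mass $m^* = 0.067\,m_e$ is the standard value and is assumed here, since it is not stated explicitly in the text:

```python
hbar = 1.054571817e-34    # J s
e = 1.602176634e-19       # C
m_e = 9.1093837015e-31    # kg
m_star = 0.067 * m_e      # GaAs effective mass (standard value, assumed)

mu_q = 11.9               # quantum mobility of Sample A, m^2/Vs
tau_q = mu_q * m_star / e                     # single-particle scattering time
gamma_mev = hbar / (2.0 * tau_q) / e * 1e3    # Gamma = hbar/(2 tau_Q), in meV
assert abs(gamma_mev - 0.073) < 0.002
```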
In Fig.\ \ref{Sim} (b), we plot the oscillatory part of the DOS at $E_\mathrm{F} = n_e / D_0$ \cite{constEF} divided by the damping factor, $\Delta D_1(E_\mathrm{F}) / [D_0 \exp(-2 \pi \gamma)] = -2 \cos(\varepsilon_\mathrm{F})$ with $\varepsilon_\mathrm{F} = E_\mathrm{F} / \hbar \omega_\mathrm{c} = \nu / 2$. Comparison of Figs. \ref{Sim} (a) and (b) confirms that the relation $\Delta \rho_\mathrm{SdH} / [\rho_0 A(T/T_\mathrm{c})] \propto \Delta D_1 (E_\mathrm{F}) / D_0$ actually holds for our plain 2DEG, aside from the deviation at higher magnetic-field range $|B|$ $>$ 0.4 T where the peak height of $\Delta \rho_\mathrm{SdH} / [\rho_0 A(T/T_\mathrm{c})]$ diminishes accompanying the onset of spin-splitting, which was not taken into consideration in the calculation of the DOS\@.
Upon switching on the potential modulation eq.\ (\ref{potmod}), each LL peak further broadens from $P(E)$ to (with $t = q x_0$)
\begin{equation}
P(E,V_B) = \frac{1}{\pi} \int_0^\pi P(E - V_B \cos t)dt.
\end{equation}
This can be formally rewritten as
\begin{eqnarray}
\displaystyle{ \frac{i}{2 \pi} \left( \frac{1}{\sqrt{E-V_B+i\Gamma}\sqrt{E+V_B+i\Gamma}} \right.} \hspace{15mm} \nonumber \\
\displaystyle{ \left. - \frac{1}{\sqrt{E-V_B-i\Gamma}\sqrt{E+V_B-i\Gamma}} \right),} \nonumber
\end{eqnarray}
and displays broadening or, when $V_B$ is large compared to $\Gamma$, splitting (van Hove singularities) of a LL peak. Correspondingly, the DOS becomes
\begin{equation}
\displaystyle{ D(E,V_B) = \frac{2}{2\pi l^2}\sum\limits_{N=0}^\infty {P(E - E_N,V_B)}} \hspace{30mm} \nonumber
\end{equation}
\begin{eqnarray}
\displaystyle{ = D_0 \left\{ 1 + 2\sum\limits_{k = 1}^\infty \frac{1}{\pi} \int_0^\pi \! \! \! \cos \left[ 2\pi k\left( \varepsilon - v_B \cos t - \frac{1}{2} \right) \right] dt \right. } \nonumber \\
\displaystyle{ \Biggl. \times e^{ - 2\pi k\gamma} \Biggr\}}
\nonumber
\end{eqnarray}
\begin{equation}
= D_0\left\{ 1 + 2\sum\limits_{k = 1}^\infty { \cos \left[ 2\pi k\left( \varepsilon - \frac{1}{2} \right) \right]J_0 \left( 2\pi k v_B \right)e^{ -2\pi k \gamma} } \right\},
\label{DOSULSL}
\end{equation}
with $v_B = V_B / \hbar \omega _\mathrm{c}$, and $J_0(x)$ the Bessel function of order zero. $\Delta D_1(E)$ acquires an extra factor $J_0(2 \pi v_B)$:
\begin{equation}
\frac{\Delta D_1(E)}{D_0} = -2 J_0(2 \pi v_B) \cos(2 \pi \varepsilon) \exp(-2 \pi \gamma).
\label{D1ULSL}
\end{equation}
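The step from the angular average to the Bessel-function form of eq.\ (\ref{DOSULSL}) rests on the identity $(1/\pi)\int_0^\pi \cos(a - b\cos t)\,dt = \cos(a)\,J_0(b)$. A minimal pure-Python check of this identity (the values $\varepsilon = 7.3$, $v_B = 0.25$ and $k = 1$ are arbitrary illustrations, not sample parameters; $J_0$ is evaluated from its power series):

```python
import math

def j0_series(x, terms=60):
    # J0(x) = sum_m (-1)^m (x/2)^(2m) / (m!)^2
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= -(x / 2.0) ** 2 / ((m + 1) ** 2)
    return s

def angular_average(a, b, n=4000):
    # (1/pi) * integral_0^pi cos(a - b cos t) dt, midpoint rule
    h = math.pi / n
    return sum(math.cos(a - b * math.cos((i + 0.5) * h)) for i in range(n)) * h / math.pi

eps, v_B, k = 7.3, 0.25, 1
a, b = 2 * math.pi * k * (eps - 0.5), 2 * math.pi * k * v_B
lhs = angular_average(a, b)
rhs = math.cos(a) * j0_series(b)
print(lhs, rhs)  # the two values agree
```

The sine counterpart of the integral vanishes by antisymmetry about $t = \pi/2$, which is why only the cosine term survives in eq.\ (\ref{DOSULSL}).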
\begin{figure}[t]
\includegraphics[bbllx=30,bblly=70,bburx=560,bbury=250,width=8.5cm]{63050Fig4.eps}
\caption{Plots of $J_0(2 \pi x)$ (a) and $J_1(2 \pi x)/(2 \pi x)$ (b).}
\label{Bessels}
\end{figure}
With the decrease of $B$, $v_B$ oscillates periodically in $1/B$ back and forth around $v_B = 0$, increasing its amplitude in proportion to $B^{-1/2}$ [see eq.\ (\ref{VBapp})]. As shown in Fig.\ \ref{Bessels} (a), the function $J_0(2 \pi v_B)$ decreases from 1 with increasing $|v_B|$, becomes zero at $|v_B|$ = 0.3827... $\simeq$ 3/8, and then changes its sign. Therefore the oscillation of $\Delta D_1$ takes its minimum amplitude at the maximum-bandwidth conditions while $|v_B|$ stays below 3/8, disappears when a maximum of $|v_B|$ touches $\sim$3/8, and reappears with inverted sign for $|v_B|$ larger than 3/8 \cite{stilllarge}. Thus the position where the oscillation of $\Delta \rho_\mathrm{SdH} / [\rho_0 A(T/T_\mathrm{c})]$ vanishes marks the condition $|V_B| = 0.3827 \hbar \omega_\mathrm{c}$ \cite{evenodd}, on the assumption that the relation $\Delta \rho_\mathrm{SdH} / [\rho_0 A(T/T_\mathrm{c})] \propto \Delta D_1 (E_\mathrm{F}) / D_0$ holds also for ULSLs. From this position we can determine the modulation amplitude $V_0$ using eq.\ (\ref{VBapp}). The oscillation disappears and a small sign of an inverted peak is observed at $B$ = 0.125 T in Fig.\ \ref{Sim} (a), from which we obtain $V_0$ = 0.49 meV\@. The DOS calculated using $V_0$ = 0.49 meV, plotted in Fig.\ \ref{Sim} (b), reproduces the line shape of the observed $\Delta \rho_\mathrm{SdH} / [\rho_0 A(T/T_\mathrm{c})]$ in Fig.\ \ref{Sim} (a) quite well for $B < \sim$ 0.25 T, confirming the proportionality relation. The value $V_0$ = 0.49 meV is roughly 25\% larger than that deduced from the CO; the reason for this discrepancy remains to be elucidated.
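The value $|v_B| = 0.3827\ldots$ is simply the first zero of $J_0$ ($\simeq 2.40483$) divided by $2\pi$, and is easily recovered by bisection. A short sketch (series-evaluated $J_0$, no sample parameters involved):

```python
import math

def j0_series(x, terms=60):
    # Power series for the Bessel function J0
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= -(x / 2.0) ** 2 / ((m + 1) ** 2)
    return s

# Bisection for the first zero of J0(2*pi*v) as a function of v
f = lambda v: j0_series(2.0 * math.pi * v)
lo, hi = 0.1, 0.5                  # f(0.1) > 0, f(0.5) < 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
print(root)  # ~0.38274, i.e. close to 3/8
```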
For Sample B, the modulation amplitude is not large enough to achieve the collapse or the peak/dip inversion of the SdHO\@. However, the modulated SdHO trace similar to Fig.\ \ref{Sim} (a) is well reproduced for $B < \sim$ 0.2 T by eq.\ (\ref{D1ULSL}) with $V_0$ = 0.38 meV, again some 20\% larger than the value determined from the CO\@.
For higher magnetic fields, the SdHO amplitude is enhanced at the maximum-bandwidth conditions, as mentioned earlier. This is obviously not reproduced in the calculated DOS and requires an alternative explanation. We note in passing that the onset of the spin-splitting for the ULSL shifts to a higher magnetic field due to the detrimental effect of the modulation-induced broadening of LLs on the Zeeman gap, and is outside the magnetic-field range of the present study.
\subsection{Comparison with calculated conductivities}
\subsubsection{Analytic expressions for conductivities at low temperatures}
It has been pointed out by Peeters and Vasilopoulos that a periodic potential modulation $V(x)$ alters the conductivity of a 2DEG via two different routes \cite{Peeters92}. \textit{The collisional (hopping) contribution} corresponds to the effect of the DOS we have just discussed above. Peaks in the DOS boost the conductivity hence the resistivity, through the increase in the scattering rate. The other route, \textit{the diffusion (band) contribution}, results from the drift velocity,
\begin{equation}
v_y = \frac{1}{\hbar}\frac{\partial E_{N,(-q l^2 k_y)}}{\partial k_y } = \frac{q V_B}{eB} \sin(q x_0),
\end{equation}
which enhances $\sigma_{yy}$ and hence $\rho_{xx}$. Here, we made use of the relation $x_0 = -l^2 k_y$. It is this effect that mainly contributes to the CO, with $v_y^2$ being maximum (zero) at the maximum-bandwidth (flat-band) conditions. It is worth pointing out that the diffusion contribution has no counterpart in a plain 2DEG and therefore vanishes as $V_0 \rightarrow 0$, while the collisional contribution is basically an ordinary SdH effect with a due modification introduced by the periodic modulation.
Shi \textit{et al}. calculated, using perturbation theory, the collisional ($\sigma_{xx}^\mathrm{col}=\sigma_{yy}^\mathrm{col}$) and the diffusion ($\sigma_{yy}^\mathrm{dif}$) conductivities for 2DEGs under 1D modulation of both an electrostatic potential and a magnetic field \cite{Shi02}. Taking into account only the electrostatic potential, eq.\ (\ref{potmod}), relevant to the present study, these read
\begin{eqnarray}
\displaystyle{ \sigma_{xx}^{\rm{col}} = \frac{e^2}{h}\Gamma ^2 \int_{ - \infty }^\infty {dE\left( - \frac{\partial f}{\partial E} \right)} \sum\limits_{N = 0}^\infty {(2N + 1)}} \nonumber \\
\displaystyle{\times \int_0^\pi {dtP^2 (E - E_{N,t})}},
\label{sgmcol}
\end{eqnarray}
and
\begin{eqnarray}
\displaystyle{ \sigma_{yy}^{\rm{dif}} = \sigma_0 \frac{V_0 ^2}{\hbar \omega_\mathrm{c} (ak_\mathrm{F}/2)^2} 2\pi \int_{ - \infty }^\infty {dE\left( - \frac{\partial f}{\partial E} \right)}} \hspace{10mm} \nonumber \\
\displaystyle{ \times \sum\limits_{N = 0}^\infty {\left[ e^{ - u/2} L_N (u) \right]^2 } \int_0^\pi {dtP(E - E_{N,t})\sin ^2 t} },
\label{sgmdif}
\end{eqnarray}
respectively, where $f(E) = \{ 1+\exp[(E-E_\mathrm{F})/k_\mathrm{B}T] \}^{-1}$ is the Fermi-Dirac distribution function and $\sigma_0 = n_e e^2 \tau / m^* = (e^2/h) (E_\mathrm{F} / \Gamma_0)$ represents the conductivity at zero magnetic field with $\tau = \hbar/2 \Gamma_0$ the momentum relaxation time \cite{difdef}. In eq.\ (\ref{sgmcol}), contributions from inter-Landau-band hoppings are omitted. The authors of ref.\ \citen{Shi02} made comparison with experimental results by numerically evaluating these equations. Here,
we will take a further step and deduce from eqs. (\ref{sgmcol}) and (\ref{sgmdif}) approximate analytic formulae appropriate for comparison with our experimental data.
Firstly, since we employ a low temperature ($T \simeq$ 15 mK) in our measurement, the derivative of the Fermi-Dirac distribution function can safely be approximated by the delta function $\delta (E-E_\mathrm{F})$, resulting in,
\begin{equation}
\sigma_{xx}^{\rm{col}} = \frac{e^2 }{h}\Gamma ^2 \sum\limits_{N = 0}^\infty {(2N + 1)} \int_0^\pi {dtP^2 (E_\mathrm{F} - E_{N,t})}
\label{sgmcolLT}
\end{equation}
and
\begin{eqnarray}
\displaystyle{ \sigma_{yy}^{\rm{dif}} = \sigma_0 \frac{ 2\pi V_0 ^2}{\hbar \omega_\mathrm{c} (ak_\mathrm{F}/2)^2} \sum\limits_{N = 0}^\infty {\left[ e^{ - u/2} L_N (u) \right]^2 }} \nonumber \\
\displaystyle{ \times \int_0^\pi {dtP(E_\mathrm{F} - E_{N,t})\sin ^2 t}}.
\label{sgmdifLT}
\end{eqnarray}
Equation (\ref{sgmcolLT}) can be rewritten along the same lines as eq.\ (\ref{DOSplain}). Leaving the details of the derivation to the Appendix, the calculation leads, up to the leading term, to
\begin{eqnarray}
\displaystyle{ \frac{\sigma_{xx}^{\rm{col}}}{\sigma_0} = \frac{\Gamma _0}{\Gamma }\gamma ^2 \left\{ 1 + 2\sum\limits_{k = 1}^\infty {\left( 1 + 2\pi k \gamma \right)} \right. } \hspace{20mm} \nonumber \\
\displaystyle{ \Biggl. \times
\cos \left[ 2\pi k\left( \varepsilon _\mathrm{F} - \frac{1}{2} \right) \right] J_0 ( 2\pi kv_B )e^{ - 2\pi k\gamma } \Biggr\} }. \nonumber \\
\label{sgmcolFin}
\end{eqnarray}
In eq.\ (\ref{sgmdifLT}), the term $e^{-u/2} L_N(u)$ may be replaced by its asymptotic expression at the Fermi energy, since $P(E_\mathrm{F} - E_{N,t})$ takes significant values only for $N \sim N_\mathrm{F} = [E_\mathrm{F} / \hbar \omega_\mathrm{c}]$ (the integer part of $E_\mathrm{F} / \hbar \omega_\mathrm{c}$), the index of the LL in which the Fermi level resides ($N_\mathrm{F} \gg 1$ for low magnetic fields), and therefore
\begin{eqnarray}
\displaystyle{ \sigma_{yy}^{\rm{dif}} = \sigma_0 \frac{ 2\pi ^2 V_0 ^2}{\hbar \omega _\mathrm{c} (ak_\mathrm{F}/2)^2} {\left[ \sqrt{\frac{2}{\pi q R_\mathrm{c}}}\cos \left( q R_\mathrm{c} -\frac{\pi}{4} \right) \right]^2} } \nonumber \\
\displaystyle{ \times \frac{1}{\pi }\int_0^\pi {dt \sin ^2 t\sum\limits_{N = 0}^\infty {P(E_\mathrm{F} - E_{N,t})} } }.
\end{eqnarray}
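The asymptotic replacement of $e^{-u/2} L_N(u)$ can be checked numerically. The sketch below assumes the standard identifications $u = q^2 l^2/2$ and $R_\mathrm{c} = l\sqrt{2N+1}$, so that $qR_\mathrm{c} = 2\sqrt{(N+1/2)u}$; the values $N = 50$ and $u = 0.5$ are illustrative. The Laguerre polynomial is evaluated by its three-term recurrence:

```python
import math

def laguerre_exp(n, u):
    # e^{-u/2} L_n(u) via the Laguerre recurrence
    # (k+1) L_{k+1} = (2k+1-u) L_k - k L_{k-1}
    if n == 0:
        return math.exp(-u / 2.0)
    lm, l = 1.0, 1.0 - u            # L_0, L_1
    for k in range(1, n):
        lm, l = l, ((2 * k + 1 - u) * l - k * lm) / (k + 1)
    return math.exp(-u / 2.0) * l

N, u = 50, 0.5                       # illustrative values, N >> 1
x = 2.0 * math.sqrt((N + 0.5) * u)   # plays the role of q R_c
exact = laguerre_exp(N, u)
asym = math.sqrt(2.0 / (math.pi * x)) * math.cos(x - math.pi / 4.0)
print(exact, asym)  # agree to within a few percent for N = 50
```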
As we have done in eq.\ (\ref{DOSULSL}), we employ eq.\ (\ref{DOSplain}) to rewrite the summation $\sum\nolimits_{N = 0}^\infty {P(E_\mathrm{F} - E_{N,t})}$, and use relations $(1/\pi) \int_0^\pi {\sin^2 t} dt = 1/2$, $(1/\pi) \int_0^\pi {\cos (x \cos t) \sin^2 t} dt = J_1(x)/x$, and $(1/\pi) \int_0^\pi {\sin (x \cos t) \sin^2 t} dt = 0$, to finally obtain
\begin{eqnarray}
\displaystyle{ \frac{\sigma_{yy}^{\rm{dif}}}{\sigma_0} = \frac{V_0 ^2}{\hbar \omega _\mathrm{c} E_\mathrm{F} (ak_\mathrm{F}/2)}\cos ^2 \left( qR_\mathrm{c} - \frac{\pi }{4} \right) \Biggl\{ 1 +\Biggr. } \hspace{15mm} \nonumber \\
\displaystyle{ \left. 4\sum\limits_{k = 1}^\infty {\cos \left[ {2\pi k\left( \varepsilon _\mathrm{F} - \frac{1}{2} \right)} \right]} \frac{J_1 (2\pi k v_B)}{2\pi k v_B}e^{ - 2\pi k\gamma } \right\} }, \nonumber \\
\label{sgmdifFin}
\end{eqnarray}
where $J_1(x)$ is the Bessel function of order one. The conductivities are translated to resistivities by the inversion of the conductivity tensor. For not too small magnetic fields ($B > \sim$0.05 T), $\rho_{xx}^\mathrm{col} / \rho_0 \simeq (\omega_\mathrm{c} \tau)^2 \sigma_{xx}^\mathrm{col} / \sigma_0$ and $\rho_{xx}^\mathrm{dif} / \rho_0 \simeq (\omega_\mathrm{c} \tau)^2 \sigma_{yy}^\mathrm{dif} / \sigma_0$ to a good approximation. The resultant resistivities are
\begin{eqnarray}
\displaystyle{ \frac{\rho_{xx}^{\rm{col}}}{\rho_0} = \frac{ \Gamma }{4 \Gamma_0 } \left\{ 1 + 2\sum\limits_{k = 1}^\infty {\left( 1 + 2\pi k \gamma \right)} \right. } \hspace{25mm} \nonumber \\
\displaystyle{ \Biggl. \times
\cos \left[ 2\pi k\left( \varepsilon _\mathrm{F} - \frac{1}{2} \right) \right] J_0 ( 2\pi kv_B )e^{ - 2\pi k\gamma } \Biggr\} }, \nonumber \\
\label{rhocolFin}
\end{eqnarray}
and
\begin{eqnarray}
\displaystyle{ \frac{\rho_{yy}^{\rm{dif}}}{\rho_0} = \frac{V_0 ^2}{ ak_\mathrm{F} E_\mathrm{F} \Gamma_0 } \omega_\mathrm{c} \tau \cos ^2 \left( qR_\mathrm{c} - \frac{\pi }{4} \right) \Biggl\{ 1 +\Biggr. } \hspace{15mm} \nonumber \\
\displaystyle{ \left. 4\sum\limits_{k = 1}^\infty {\cos \left[ {2\pi k\left( \varepsilon _\mathrm{F} - \frac{1}{2} \right)} \right]} \frac{J_1 (2\pi k v_B)}{2\pi k v_B}e^{ - 2\pi k\gamma } \right\} }. \nonumber \\
\label{rhodifFin}
\end{eqnarray}
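The Bessel-function identities used to arrive at eq.\ (\ref{rhodifFin}), namely $(1/\pi) \int_0^\pi \cos(x \cos t)\sin^2 t\, dt = J_1(x)/x$ and the vanishing of the corresponding sine integral, can be verified directly by quadrature ($J_1$ from its power series; the argument $x = 2\pi \times 0.3$ is arbitrary):

```python
import math

def j1_series(x, terms=60):
    # J1(x) = sum_m (-1)^m (x/2)^(2m+1) / (m! (m+1)!)
    s, term = 0.0, x / 2.0
    for m in range(terms):
        s += term
        term *= -(x / 2.0) ** 2 / ((m + 1) * (m + 2))
    return s

def avg(f, n=4000):
    # (1/pi) * integral_0^pi f(t) dt, midpoint rule
    h = math.pi / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h / math.pi

x = 2.0 * math.pi * 0.3
cos_avg = avg(lambda t: math.cos(x * math.cos(t)) * math.sin(t) ** 2)
sin_avg = avg(lambda t: math.sin(x * math.cos(t)) * math.sin(t) ** 2)
print(cos_avg, j1_series(x) / x, sin_avg)  # cos_avg = J1(x)/x, sin_avg = 0
```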
The $B$ dependence of eq.\ (\ref{rhocolFin}) inherits that of eq.\ (\ref{DOSULSL}), as expected, with a minor discrepancy, to be discussed in the following subsection, resulting from the factor $(1+2 \pi k \gamma)$ in the summation. The first term in eq.\ (\ref{rhodifFin}) describes the CO and is exactly the same as the expression given in ref.\ \citen{Peeters92}. The amplitude of the CO increases linearly with $B$, as revealed by the factor $\omega_\mathrm{c} \tau$. (The damping factor due to scattering \cite{Endo00E} that results in additional $B$ dependence is not included here.) The second term represents the diffusion contribution to the SdHO\@. The amplitude of the oscillation is modulated by the CO, and is therefore enhanced (suppressed) at the maximum-bandwidth (flat-band) conditions; the phase of the modulation is at odds with that of the collisional contribution. Owing to the $B$-linear dependence of the CO mentioned above, the diffusion contribution to the SdHO raises its relative importance with $B$ and can outweigh the collisional contribution above a certain $B$. This qualitatively explains the observed transition, with the increase of $B$, from the suppression to the enhancement of the SdHO amplitude at the maximum-bandwidth conditions. The interpretation will be confirmed in the next subsection by comparing the calculated traces with our experimental SdHO\@.
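The conversion from conductivities to resistivities used above follows from inverting the $2\times2$ conductivity tensor, $\rho_{xx} = \sigma_{yy}/(\sigma_{xx}\sigma_{yy} + \sigma_{xy}^2)$, together with the high-field Drude estimate $\sigma_{xy} \simeq \sigma_0/\omega_\mathrm{c}\tau$. A minimal numerical check of the approximation $\rho_{xx}/\rho_0 \simeq (\omega_\mathrm{c}\tau)^2\,\sigma_{yy}/\sigma_0$; all numbers below are illustrative, not sample parameters:

```python
sigma0 = 1.0                  # zero-field conductivity (our unit)
rho0 = 1.0 / sigma0
wct = 20.0                    # omega_c * tau >> 1
sxx = 0.004 * sigma0          # small longitudinal conductivities
syy = 0.006 * sigma0
sxy = sigma0 / wct            # high-field Drude Hall conductivity

# Exact tensor inversion: rho_xx = sigma_yy / (sigma_xx sigma_yy + sigma_xy^2)
rho_xx_exact = syy / (sxx * syy + sxy ** 2)
rho_xx_approx = wct ** 2 * (syy / sigma0) * rho0
print(rho_xx_exact, rho_xx_approx)  # agree to ~1% for these numbers
```

The approximation holds whenever $\sigma_{xx}\sigma_{yy} \ll \sigma_{xy}^2$, which is the case for $B > \sim$0.05 T as stated in the text.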
The numerical calculation of $\rho_{xx}^\mathrm{dif}$ by Shi \textit{et al}. \cite{Shi02} shows only a negligibly small share of the SdHO\@. This can be interpreted in terms of the factor $J_1(2 \pi v_B) / 2 \pi v_B$ in eq.\ (\ref{rhodifFin}), keeping only the $k =$ 1 term in the summation. As shown in Fig.\ \ref{Bessels} (b), the function $J_1(2 \pi v_B) / 2 \pi v_B$ decreases from 0.5 with increasing $|v_B|$, becomes zero at $|v_B| \simeq$ 5/8, and oscillates thereafter with ever decreasing amplitude of less than 0.07, with zeros at $|v_B| \simeq 1/8+n/2$ ($n=2,3,4,...$). The oscillation of $v_B$ with $1/B$ works just to slightly counteract the effect of the CO while $|v_B|$ remains much smaller than 5/8, which is the case in our experiment at higher magnetic fields. When the amplitude of the periodic modulation is large, as is the case in Shi \textit{et al}., the factors $\cos^2 (q R_\mathrm{c} - \pi/4) $ and $|J_1(2 \pi v_B) / 2 \pi v_B|$ conspire to alternatingly become small and keep the diffusion contribution to the SdHO small over the whole range of magnetic fields.
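The first zero quoted as $|v_B| \simeq 5/8$ corresponds to the first zero of $J_1$ ($\simeq 3.8317$) divided by $2\pi$, i.e. $|v_B| \simeq 0.610$. A bisection sketch (pure Python, series-evaluated $J_1$) recovers both the small-$v_B$ value 0.5 and the zero:

```python
import math

def j1_series(x, terms=80):
    # Power series for the Bessel function J1
    s, term = 0.0, x / 2.0
    for m in range(terms):
        s += term
        term *= -(x / 2.0) ** 2 / ((m + 1) * (m + 2))
    return s

g = lambda v: j1_series(2.0 * math.pi * v) / (2.0 * math.pi * v)
lo, hi = 0.3, 0.9                 # g(0.3) > 0, g(0.9) < 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
print(g(0.01), root)  # ~0.5 at small v_B; first zero at ~0.610
```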
\subsubsection{Comparison with experimental data}
\begin{figure}[t]
\includegraphics[bbllx=50,bblly=55,bburx=560,bbury=435,width=8.5cm]{63050Fig5.eps}
\caption{(Color online) Experimentally obtained SdHO versus $1/B$ for sample A (left axis) and the half width of Landau bands (right axis).}
\label{Fexp}
\end{figure}
\begin{figure}[t]
\includegraphics[bbllx=50,bblly=100,bburx=560,bbury=780,width=8.5cm]{63050Fig6.eps}
\caption{(Color online) Calculated SdHO for Sample A. The collisional (hopping) contribution [eq.\ (\ref{rhoxxDOS})], the diffusion (band) contribution [eq.\ (\ref{rhodifA})], and the addition of the two contributions are plotted (left axis), with the former two traces negatively offset for clarity. The half width of the Landau bands is also plotted (right axis).}
\label{Fcalc}
\end{figure}
\begin{figure}[t]
\includegraphics[bbllx=50,bblly=55,bburx=560,bbury=435,width=8.5cm]{63050Fig7.eps}
\caption{(Color online) Experimentally obtained SdHO versus $1/B$ for sample B (left axis) and the half width of Landau bands (right axis).}
\label{Pexp}
\end{figure}
\begin{figure}[t]
\includegraphics[bbllx=50,bblly=100,bburx=560,bbury=780,width=8.5cm]{63050Fig8.eps}
\caption{(Color online) Similar to Fig.\ \ref{Fcalc} for Sample B\@.}
\label{Pcalc}
\end{figure}
In this subsection, we compare our experimentally obtained SdHO with calculated resistivities, in an attempt to gain more quantitative understanding of the magnetic-field dependence of the SdHO amplitude. Main focus is on the behavior at higher magnetic-field side that remains unexplained in \S \ref{cmpDOS}.
Figures \ref{Fexp} and \ref{Pexp} show the experimentally obtained SdHO for Sample A and Sample B, respectively, plotted against $1/B$. The oscillatory parts, $\Delta \rho_\mathrm{SdH} / \rho_0$, are obtained by applying a Fourier high-pass filter to the plot of $\rho_{xx} / \rho_0$ vs.\ $1/B$. The simultaneously plotted $|V_B|$ serves as a guide for following the transition, with increasing $B$, from suppression to enhancement of the SdHO amplitudes at the maxima of the bandwidth.
In Figs.\ \ref{Fcalc} and \ref{Pcalc}, we plot the calculated collisional (hopping) and diffusion (band) contributions and their sum for Sample A and Sample B, respectively, using the corresponding sample parameters. In pursuit of better agreement with the experiment, we used a slightly modified version of the argument presented in the previous subsection.
As mentioned earlier, the collisional contribution given by eq.\ (\ref{rhocolFin}) possesses an additional $B$-dependent factor $(1 + 2 \pi k \gamma)$ compared with the DOS in eq.\ (\ref{DOSULSL}). In principle, eq.\ (\ref{rhocolFin}) should also describe the SdHO of the plain 2DEG upon setting $V_0 =$0 ($v_B \equiv$ 0). This, however, is at variance with the firmly established relation $\Delta \rho_\mathrm{SdH} / [\rho_0 A(T/T_\mathrm{c})] \propto \Delta D_1 / D_0$ \cite{Coleridge89} owing to the extra factor, which may result from the approximations used in the course of deducing eq.\ (\ref{sgmcol}). We therefore discard eq.\ (\ref{rhocolFin}) and assume that the relation
\begin{equation}
\frac{\Delta \rho_{xx}^\mathrm{col}}{\rho_0} = \frac{\rho_{xx}^\mathrm{col} - \rho_0}{\rho_0} \simeq C \frac{\Delta D_1}{D_0}
\label{rhoxxDOS}
\end{equation}
confirmed for plain 2DEGs also represents the oscillatory part of the collisional contribution for the ULSLs. Note, however, that this choice does not have a drastic effect, since $(1 + 2 \pi k \gamma) \simeq 1$ for large enough magnetic fields. In eq.\ (\ref{rhoxxDOS}), we assumed $A(T/T_\mathrm{c}) \simeq$1, appropriate for low temperatures. The constant $C$ has been shown to be equal to 2 for ideally uniform 2DEGs, but to deviate from this ideal value in the presence of a small (typically a few percent) inhomogeneity in the electron density \cite{Coleridge91}. We selected $C =$ 0.92 and 2 for Sample A and Sample B, respectively, the values that quantitatively describe the SdHO of the adjacent plain 2DEGs and also the SdHO of the ULSLs in the low magnetic-field range \cite{prefactor}.
In a previous publication \cite{Endo00E}, we have pointed out that a factor $A(\pi / \omega_\mathrm{c} \tau_\mathrm{w})$ describing the damping of the CO due to scattering needs to be incorporated for quantitative account of the experimental CO traces, with $\tau_\mathrm{w}$ a characteristic scattering time usually identifiable with $\tau_\mathrm{Q}$. This will also affect the second term in eq.\ (\ref{rhodifFin}) that includes the CO as a multiplying factor, resulting in the diffusion contribution to the SdHO (excluding the first term corresponding to the ordinary CO) as
\begin{eqnarray}
\displaystyle{ \frac{\Delta \rho_{yy}^{\rm{dif}}}{\rho_0} = A\left( \frac{\pi}{\omega_\mathrm{c} \tau_\mathrm{w}} \right) \frac{4 V_0 ^2 \omega_\mathrm{c} \tau}{ ak_\mathrm{F} E_\mathrm{F} \Gamma_0 } \cos ^2 \left( qR_\mathrm{c} - \frac{\pi }{4} \right) } \hspace{15mm} \nonumber \\
\displaystyle{ \times \sum\limits_{k = 1}^\infty {\cos \left[ {2\pi k\left( \varepsilon _\mathrm{F} - \frac{1}{2} \right)} \right]} \frac{J_1 (2\pi k v_B)}{2\pi k v_B}e^{ - 2\pi k\gamma } }. \nonumber \\
\label{rhodifA}
\end{eqnarray}
Thermal damping is neglected here again. The exponential factor $\exp (-2 \pi k \gamma)$ still works in favor of smaller $k$. However, owing to the extra $B$-linear dependence mentioned earlier, more weight falls on the higher magnetic-field side for the diffusion contribution, where the exponential factors still remain rather large. Therefore, in Figs.\ \ref{Fcalc} and \ref{Pcalc}, we retained terms up to $k =$ 5 in the summation of eq.\ (\ref{rhodifA}).
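The adequacy of truncating the summation at $k = 5$ can be illustrated numerically. The sketch below evaluates the oscillatory sum of eq.\ (\ref{rhodifA}) for two cutoffs, using illustrative values $\varepsilon_\mathrm{F} = 7.3$, $v_B = 0.3$ and $\gamma = 0.25$ (the actual sample parameters differ):

```python
import math

def j1_series(x, terms=80):
    # Power series for the Bessel function J1
    s, term = 0.0, x / 2.0
    for m in range(terms):
        s += term
        term *= -(x / 2.0) ** 2 / ((m + 1) * (m + 2))
    return s

def dif_sum(eps_F, v_B, gamma, kmax):
    # Oscillatory sum over harmonics k, truncated at k = kmax
    total = 0.0
    for k in range(1, kmax + 1):
        x = 2.0 * math.pi * k * v_B
        total += (math.cos(2.0 * math.pi * k * (eps_F - 0.5))
                  * (j1_series(x) / x) * math.exp(-2.0 * math.pi * k * gamma))
    return total

s5 = dif_sum(7.3, 0.3, 0.25, 5)
s20 = dif_sum(7.3, 0.3, 0.25, 20)
print(s5, s20)  # nearly identical: k > 5 terms are negligible here
```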
By comparing Figs.\ \ref{Fexp}, \ref{Pexp} and Figs.\ \ref{Fcalc}, \ref{Pcalc}, it can be seen that the addition of the two types of contributions qualitatively reproduces the experimental SdHO, notably the transition from the suppression to the enhancement of the SdHO at maximum bandwidth conditions. The transition is attributable to the rapid growth of the diffusion contribution with increasing $B$. The diffusion contribution plays a more important role in Sample B than in Sample A\@. This can be ascribed to the higher mobility $\mu$ of Sample B; as can be seen in eq.\ (\ref{rhodifA}), the diffusion contribution is proportional to $\tau / \Gamma_0 \propto \mu^2$. The effect of the smaller modulation amplitude $V_0$ is twofold: on one hand, the smaller $V_0$ makes the counteracting effect of $J_1(2 \pi v_B)/(2 \pi v_B)$ mentioned at the end of the preceding subsection less effective; on the other hand, the smaller $V_0$ is disadvantageous because of the factor ${V_0}^2$ in eq.\ (\ref{rhodifA}). These two effects partly compensate each other, making the difference in $V_0$ between the two samples less important.
The degree of agreement between the calculated and the experimental SdHO shown here should be assessed with care, since we had to employ slightly larger values of $V_0$ in eq.\ (\ref{rhoxxDOS}) than in eq.\ (\ref{rhodifA}) in order to achieve good agreement with the experimental traces, as mentioned in \S \ref{cmpDOS}. Although this inconsistency should be resolved in future studies, it does not affect the qualitative argument presented here.
\section{Conclusions}
We have shown that the experimentally observed SdHO for a ULSL is basically reproduced by the addition of the collisional contribution $\Delta \rho_{xx}^\mathrm{col}$ [eq.\ (\ref{rhoxxDOS}) with eq.\ (\ref{D1ULSL})] and the diffusion contribution $\Delta \rho_{xx}^\mathrm{dif}$ [eq.\ (\ref{rhodifA})]. The amplitude of the oscillation alternates between the two contributions: $\Delta \rho_{xx}^\mathrm{col}$ is suppressed while $\Delta \rho_{xx}^\mathrm{dif}$ is enhanced at the maximum bandwidth conditions [eq.\ (\ref{bandmax})]. Owing to an extra linear-$B$ factor for $\Delta \rho_{xx}^\mathrm{dif}$ in addition to the common exponential damping factor, $\Delta \rho_{xx}^\mathrm{col}$ dominates the SdHO at low magnetic fields but $\Delta \rho_{xx}^\mathrm{dif}$ outweighs $\Delta \rho_{xx}^\mathrm{col}$ at higher magnetic fields. This accounts for the experimentally observed transition from suppression to enhancement of the SdHO at the maximum bandwidth conditions. The term $J_1(2 \pi v_B)/(2 \pi v_B)$ in eq.\ (\ref{rhodifA}) qualitatively explains why diffusion contribution to the SdHO was not observed in a previous experiment \cite{Edmonds01}.
\section*{Acknowledgment}
This work was supported by Grant-in-Aid for Scientific Research (C) (18540312) and (A) (18204029) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT).
\section{Introduction}
Scattering by black holes (BH) has been studied intensively since the pioneering work of Matzner in 1968 \cite{Matzner1968}, and many different approaches have been developed. One of them uses the complex angular momentum (CAM) theory, the $S$-matrix formalism and the associated Regge pole techniques. In this framework, the authors have shown~\cite{YDAFBJ2003,YDAFBR2010} that the weakly damped quasinormal mode (QNM) complex frequencies of a wide class of static, asymptotically flat and spherically symmetric BHs of arbitrary dimension can be understood as Breit-Wigner-type resonances generated by the interference and damping of a family of ``surface waves'' lying close to their photon sphere, whose existence is of fundamental importance. It should be noted that, in~\cite{YDAF2009}, they have shown that this ``surface wave'' interpretation can also be applied to the BTZ BH, a non-asymptotically flat spacetime, i.e. in a framework where the notion of an $S$-matrix does not exist, by extending a powerful formalism introduced several decades ago by Sommerfeld~\cite{Sommerfeld49}. Finally, by noting that each ``surface wave'' is associated with a Regge pole of the corresponding $S$-matrix, this approach permits one to construct analytically the spectrum of the weakly damped QNM complex frequencies of the corresponding BH, beyond the leading-order term. Physically, it gives a powerful and quite elegant way to obtain a semiclassical interpretation of BH resonance phenomena. However, even if they have computed the absorption cross section (beyond the eikonal order) and the greybody factor in such a scheme, the WKB approximation used to obtain analytically the expression of the Regge poles restricts the study to the ``high-frequency'' regime, i.e. to frequencies of order $\omega \gg (2M)^{-1}$, where $M$ is the mass of the Schwarzschild BH, and thus limits the analysis to a scattering description of weakly damped QNMs, formally associated with high angular momenta.
Thus, highly damped QNMs, Hawking radiation, and more generally the physics near the BH horizon seem to lie outside such a framework. Nevertheless, in this paper, we claim that one can obtain a physical description of the near-horizon physics without using any conformal field theory, by focusing precisely on the scattering of an ingoing $s$-wave by the non-null barrier of the Regge-Wheeler potential of the Schwarzschild BH. In section \ref{generalities}, we introduce some notations, assumptions and well-known results linked to a scattering analysis and the $S$-matrix formalism. In section \ref{NHL}, after having discussed the ``Rindler approximation'', we will recover some results which were assumed or derived by Solodukhin within a conformal field theory framework \cite{Solodukhin2004}. In section \ref{HDQNM}, we will focus on the ``far horizon limit'' and show that one can easily compute the relevant physical quantities needed for a scattering analysis. They will allow us to compute the exact expression of the imaginary part of the highly damped QNM complex frequencies. In section \ref{scattering_statistics}, using the results obtained in section \ref{HDQNM}, we will derive explicitly the reflection coefficient related to the scattering of the ingoing $s$-wave by the non-null barrier of the Regge-Wheeler potential of the BH, which is exactly the Boltzmann weight associated with the Hawking radiation, giving, in a purely scattering picture, another possible explanation of BH thermal effects. Finally, in section \ref{statistics}, extending some reflections due to Motl \cite{Motl2002}, we will argue heuristically that the real part of these highly damped QNM complex frequencies is intimately linked to some ``exotic'' statistics, going beyond the Bose-Einstein and Fermi-Dirac ones, which could be expected near any BH horizon, where a theory of quantum gravity is perhaps unavoidable.
\section{Generalities and notations}\label{generalities}
We consider first a static spherically symmetric four-dimensional spacetime with metric
\begin{equation} \label{metric_BH}
ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\sigma^{2}.
\end{equation}
Here $d\sigma^{2}$ denotes the line element on the unit sphere $S^{2}$. It should be noted that a metric such as (\ref{metric_BH}) does not describe the most general static spherically symmetric spacetime but it will permit us to easily apply the following results to the Schwarzschild case.\\
In Eq.~(\ref{metric_BH}), we shall assume that $f(r)$ is a function of the usual radial Schwarzschild coordinate $r$ satisfying the properties:
\begin{itemize}
\item (i) There exists an interval $I=]r_h,+\infty[ \subset \bf{R}$ with $r_h>0$ such that $f(r)>0$ for $r \in I$.
\item (ii) $r_h$ is a simple root of $f(r)$, i.e.
\begin{equation}\label{Assump_f_1}
f(r_h)=0 \qquad \text{and} \qquad f'(r_h)\neq 0 \qquad (\text{hence, by (i)},\ f'(r_h)>0).
\end{equation}
\end{itemize}
These assumptions indicate that the spacetime considered has a single event horizon at $r_h$, its exterior corresponding to $r \in I$. If we want to work with an asymptotically flat spacetime, in order to define an $S$-matrix, we need to impose the condition
\begin{equation}\label{Assump_f_2}
\underset{r \to +\infty}{\lim}f(r)=1.
\end{equation}
The Klein-Gordon wave equation for a massless scalar field propagating on a general gravitational background is given by
\begin{equation}\label{WaveEq}
\Box \Phi=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\Phi=
\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi\right)=0.
\end{equation}
If the spacetime metric is defined by (\ref{metric_BH}), after separation of variables and introduction of the radial partial wave
functions $\Phi_\ell(r)$ with $\ell=0,1,2, \dots$, this wave equation reduces to the Regge-Wheeler equation
\begin{equation}\label{RW}
\frac{d^2 \Phi_\ell}{dr_*^2} + \left[ \omega^2 - V_\ell(r)\right]\Phi_\ell=0.
\end{equation}
Here we have assumed a harmonic time dependence $\exp(-i\omega t)$ for the massless scalar field. The variable $r_\ast=r_\ast(r)$ is the well-known tortoise coordinate defined, for $r \in I$, by the relation $dr_\ast/dr=1/f(r)$. Moreover, it is worth noting that the previous definition of $r_\ast$ provides a bijection $r_\ast=r_\ast(r)$ from $I$ to $]-\infty,+\infty[$.\\
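For the Schwarzschild case $f(r) = 1 - 2M/r$, the tortoise coordinate reads $r_\ast = r + 2M\ln(r/2M - 1)$ up to an additive constant, and the bijection from $I$ to $]-\infty,+\infty[$ can be inverted numerically, e.g. by bisection. A short sketch (the mass value and the choice of integration constant are illustrative):

```python
import math

M = 1.0
r_h = 2.0 * M

def rstar(r):
    # r* = r + 2M ln(r/2M - 1), from dr*/dr = 1/f(r) with f = 1 - 2M/r
    return r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)

def r_of_rstar(x):
    # Invert the monotonic map r -> r*(r) by bisection
    lo, hi = r_h * (1.0 + 1e-14), 4.0 * r_h
    while rstar(hi) < x:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rstar(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for x in (-10.0, 0.0, 5.0):
    print(x, r_of_rstar(x))   # r -> r_h as r* -> -infinity
```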
In Eq.~(\ref{RW}), $V_\ell(r)$ is the Regge-Wheeler potential associated with the massless scalar field:
\begin{equation}\label{RWPotscalar}
V_\ell(r)=f(r) \left[ \frac{\ell(\ell+1)}{r^2}+\frac{1}{r}f'(r)\right].
\end{equation}
In four dimensions, we recall the well-known cases of nonzero-spin fields propagating on the BH background described by (\ref{metric_BH}). Indeed, one can be interested, for example, in the propagation of an electromagnetic test field $(j = 1)$ or a linearized perturbation of the metric $(j = 2)$. Thus, for more general situations in four dimensions, the Regge-Wheeler potential can be written as
\begin{equation}\label{RWPot}
V_\ell(r)=f(r) \left[ \frac{\ell(\ell+1)}{r^2}+\frac{J}{r}f'(r)\right]
\end{equation}
where $J=1-j^2$, with $j$ the spin of the massless field under consideration. We recall that this result is valid when the BH does not carry any electric or magnetic charge. However, we would like to refer the reader to \cite{YDAFBR2010} for a $d$-dimensional generalization of the scattering of a massless scalar field by a static and spherically symmetric BH through the semi-classical CAM techniques.
\newline
According to (\ref{RWPot}), it should be noted that $\lim_{r \to r_h} V_\ell(r)=0$ and $\lim_{r \to +\infty} V_\ell(r)=0$ and therefore the solutions of the radial equation (\ref{RW}) have a $\exp(\pm i \omega r_{\ast})$ behavior at the horizon and at infinity. In other words, for a given angular momentum index $\ell$, a general solution of the Regge-Wheeler equation (\ref{RW}), satisfies the following asymptotic behaviors:
\begin{equation}\label{bc1}
\Phi_{\omega,\ell} (r_\ast) \underset{r_\ast \to -\infty}{\sim} e^{-i\omega r_\ast }
\end{equation}
which is a purely ingoing wave at the event horizon and which has the following general expression at spatial infinity $r_\ast \to +\infty$
\begin{equation}\label{bc2}
\Phi_{\omega,\ell}(r_\ast)\underset{r_\ast \to +\infty}{\sim} A_{in}e^{-i\omega r_\ast}+A_{out}e^{+i\omega r_\ast}.
\end{equation}
Moreover, by considering the Wronskian of two linearly independent solutions of Eq.~(\ref{RW}) at $r_\ast=\pm\infty$, we obtain
\begin{equation}\label{Wrsk}
1+\left|A_{out}\right|^2=\left|A_{in}\right|^2.
\end{equation}
As independent solutions we have chosen Eq.~(\ref{bc2}) and its complex conjugate. Furthermore, introducing the transmission and reflection amplitudes $T_\ell$ and $R_\ell$ defined by
\begin{subequations}
\begin{eqnarray}
&&T_\ell=\frac{1}{A_{in}}\\
&&R_\ell=\frac{A_{out}}{A_{in}}
\end{eqnarray}
\end{subequations}
we can rewrite Eq.~(\ref{Wrsk}) in the more familiar form
\begin{equation}
\left|T_\ell\right|^2+\left|R_\ell\right|^2=1.
\end{equation}
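This flux conservation can be illustrated by integrating eq.~(\ref{RW}) numerically with the boundary conditions (\ref{bc1}) and (\ref{bc2}). The sketch below uses a Gaussian barrier in $r_\ast$ as a stand-in for the true potential (its height, width, and the frequency $\omega$ are illustrative), propagates the purely ingoing solution from the horizon side with a fourth-order Runge-Kutta scheme, and extracts $A_{in}$ and $A_{out}$ by matching at large $r_\ast$:

```python
import cmath, math

def scatter(omega, V, L=12.0, n=24000):
    # Integrate phi'' = (V(x) - omega^2) phi from x = -L, where
    # phi = exp(-i omega x) (purely ingoing), out to x = +L.
    h = 2.0 * L / n
    x = -L
    phi = cmath.exp(-1j * omega * x)
    dphi = -1j * omega * phi

    def rhs(x, p, dp):
        return dp, (V(x) - omega ** 2) * p

    for _ in range(n):
        k1p, k1d = rhs(x, phi, dphi)
        k2p, k2d = rhs(x + h / 2, phi + h / 2 * k1p, dphi + h / 2 * k1d)
        k3p, k3d = rhs(x + h / 2, phi + h / 2 * k2p, dphi + h / 2 * k2d)
        k4p, k4d = rhs(x + h, phi + h * k3p, dphi + h * k3d)
        phi += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        dphi += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        x += h

    # Match phi = A_in exp(-i w x) + A_out exp(+i w x) at x = +L
    e = cmath.exp(1j * omega * L)
    A_in = (1j * omega * phi - dphi) / (2j * omega) * e
    A_out = (1j * omega * phi + dphi) / (2j * omega) / e
    return A_in, A_out

V = lambda x: 0.11 * math.exp(-x * x / 2.0)   # Gaussian stand-in barrier
A_in, A_out = scatter(omega=0.3, V=V)
T2 = 1.0 / abs(A_in) ** 2
R2 = abs(A_out / A_in) ** 2
print(T2, R2, T2 + R2)  # T2 + R2 = 1 up to numerical error
```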
Thus, from Eq.~(\ref{bc2}), a QNM, which is defined as a purely ingoing wave at the event horizon and a purely outgoing wave at infinity, corresponds to $A_{in}=0$. Therefore, the part of the ingoing wave that is not absorbed by the BH will be reflected back to spatial infinity. It should be noted that these quantities can be deduced from the $S$-matrix elements. Here our problem is spherically symmetric, so the $S$-matrix is diagonal. We recall that the $S$-matrix elements, denoted $S_\ell(\omega)$, are related to the reflection amplitude by
\begin{equation}
S_\ell(\omega)=(-1)^{\ell+1}R_\ell(\omega).
\end{equation}
It has been shown in \cite{YDAFBJ2003,YDAF2009,YDAFBR2010} (and references therein) that the $S$-matrix permits one to analyze the resonant aspects of the considered BH as well as to construct the form factor describing the scattering of a monochromatic scalar wave. In this paper, we are mainly interested in the scattering of an ingoing $s$-wave, i.e. $\ell=0$, for which one has, on the one hand
\begin{equation}
S_0(\omega)=-R_0(\omega),
\end{equation}
and on the other hand,
\begin{equation}
V_0(r)=\frac{J}{r}f'(r)f(r).
\end{equation}
For a Schwarzschild BH of mass $M$, i.e. $f(r)=1-2M/r$, and a massless scalar field ($j=0$), the Regge-Wheeler potential has a maximum $V_{0,\text{max}}$ at $r=(4/3)\,r_h$ \cite{Matzner1968}. As illustrated in Fig.~\ref{0RWPot}, we will show in section \ref{HDQNM} that the non-null potential barrier, with which we will associate an energy $\omega_\text{min}=\sqrt{V_{0,\text{max}}}$, could be at the origin of the Hawking radiation and, more generally, of thermal effects.
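The quoted position and height of the ``$\ell = 0$'' barrier maximum are easy to confirm: with $2M = 1$, $V_0(r) = r^{-3} - r^{-4}$, which is maximized at $r = 4/3$ with $V_{0,\text{max}} = 27/256$, so that $\omega_\text{min} = \sqrt{27/256} \simeq 0.325\,(2M)^{-1}$. A brute-force numerical check:

```python
M = 0.5                          # so that 2M = 1, as in Fig. 1
f = lambda r: 1.0 - 2.0 * M / r
fp = lambda r: 2.0 * M / r ** 2
V0 = lambda r: (1.0 / r) * fp(r) * f(r)   # J = 1 for the massless scalar (j = 0)

# Brute-force scan for the maximum outside the horizon r_h = 2M = 1
rs = [1.0 + i * 1e-4 for i in range(1, 40000)]
r_max = max(rs, key=V0)
V_max = V0(r_max)
w_min = V_max ** 0.5
print(r_max, V_max, w_min)  # ~4/3, ~27/256 = 0.1055, ~0.3248
```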
\section{The near horizon limit}\label{NHL}
\begin{figure}[!t]
\centering
\includegraphics[scale=1]{RWPot.eps}
\caption{``$\ell = 0$'' Regge-Wheeler potential. For the Schwarzschild BH ($2M=1$), the maximum of the Regge-Wheeler potential is evaluated at $V_{0,\text{max}}=(27/256)\, (2M)^{-2}\approx 0.11\, (2M)^{-2}$ and is located near the event horizon, i.e. at $r=(4/3)\, (2M)$.}
\label{0RWPot}
\end{figure}
\subsection{The Rindler approximation}
In order to deal with the spacetime structure in the very vicinity of the BH horizon, i.e. the ``near horizon limit'', we first expand the function $f(r)$ around $r=r_h$:
\begin{equation}\label{taylor_f}
f(r) \sim f'(r_h)(r-r_h)+\underset{r \to r_h}{\cal O}(r-r_h)^2.
\end{equation}
Then inserting (\ref{taylor_f}) into (\ref{metric_BH}), we obtain
\begin{equation}
ds^2=-f'_h(r-r_h)dt^2+\frac{dr^2}{f'_h(r-r_h)}+r_h^2d\Omega^{2}.
\end{equation}
We introduce the new variable $\rho$ defined by
\begin{equation}
d\rho=\frac{dr}{\sqrt{f'_h(r-r_h)}}
\end{equation}
such that
\begin{equation}
\rho=2\sqrt{\frac{r-r_h}{f'_h}} \qquad \text{or} \qquad r-r_h=\frac{f'_h}{4}\rho^2.
\end{equation}
Thus, the metric reads
\begin{equation}\label{rindler_approx}
ds^2=-\kappa^2 \rho^2 dt^2+d\rho^2+r_h^2 d\Omega^{2}
\end{equation}
which is regular at $\rho=0$ and where $\kappa=(1/2)f'_h$ is the well-known surface gravity of the Killing horizon at $r=r_h$.
This ``near horizon limit'' form of the metric is of course a ``Rindler approximation'' with a corresponding constant acceleration $\kappa$. From Eq.~(\ref{taylor_f}), we can obtain the behavior of the tortoise coordinate $r_\ast$ near the event horizon
\begin{equation}
r_\ast=\int \frac{dr}{f(r)} \sim \frac{1}{f'_h} \int \frac{dr}{(r-r_h)}
\end{equation}
which gives
\begin{equation}\label{tort1}
r_\ast(r)=\frac{1}{f'_h}\ln\left(\frac{r}{r_h}-1\right),
\end{equation}
where the constant of integration is chosen such that $r=2r_h$ implies $r_\ast=0$. From (\ref{tort1}), we write the usual radial coordinate $r$ as a function of $r_\ast$
\begin{equation}
r-r_h=r_h \exp\left(f'_h r_\ast\right)
\end{equation}
which allows us to write Eq.~(\ref{taylor_f}) as
\begin{equation}\label{taylor_ftort}
f(r_\ast) \sim r_h\, f'_h \exp\left(f'_h r_\ast\right)+\frac{1}{2}r_h^2\,f^{(2)}_h \exp\left(2f'_h r_\ast\right)+\underset{r_\ast \to 0}{\cal O}\left(\exp\left(3f'_h r_\ast\right)\right),
\end{equation}
where $f_h^{(p)}=(d^pf/dr^p)|_{r_h}$. Then one can express the Regge-Wheeler equation in the ``near horizon limit'', as was done in \cite{Solodukhin2004}, but, as we will see later, this will not be needed in this paper. In terms of the tortoise coordinate, the metric (\ref{metric_BH}) for the Schwarzschild geometry, i.e. $f(r)=1-2M/r$, in the near horizon limit is given by
\begin{equation}\label{metric_tortoise}
ds^2=e^{2\kappa r_\ast}\left(-dt^2+dr_\ast^2\right)+r_h^2d\sigma^2
\end{equation}
where $\kappa=1/4M$ and $r_h=2M$. Of course, if one chooses a hypersurface such that $d\sigma=0$, then in the $(t, r_\ast)$ coordinates, the metric is conformally flat.\\
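The consistency of the near-horizon relation (\ref{tort1}) and its inversion $r-r_h=r_h\exp(f'_h r_\ast)$ can be checked numerically; the sketch below uses Schwarzschild values with $2M=1$ ($r_h=1$, $f'_h=1$), an illustrative choice of ours:

```python
import math

# Near-horizon tortoise coordinate r_*(r) = ln(r/r_h - 1) / f'_h  (Eq. tort1)
# and its inverse r(r_*) = r_h (1 + exp(f'_h r_*)), with r_h = f'_h = 1.
r_h, fp_h = 1.0, 1.0

def r_star(r):
    return math.log(r / r_h - 1.0) / fp_h

def r_of_rstar(rs):
    return r_h * (1.0 + math.exp(fp_h * rs))

assert r_star(2.0 * r_h) == 0.0          # integration constant: r_*(2 r_h) = 0
for r in (1.001, 1.5, 3.0, 10.0):
    assert abs(r_of_rstar(r_star(r)) - r) < 1e-12 * r   # round trip
```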
For the following, we introduce the ``null tortoise coordinates'' defined by
\begin{subequations}\label{uv}
\begin{eqnarray}
&&u=t+r_\ast\\
&&v=t-r_\ast
\end{eqnarray}
\end{subequations}
and the associated Kruskal coordinates
\begin{subequations}\label{UV}
\begin{eqnarray}
&&\kappa U=e^{\kappa u}\\
&&\kappa V=-e^{-\kappa v}.
\end{eqnarray}
\end{subequations}
\subsection{The time dependent ``Doppler-gravitational'' shift effect}
Now, let us consider, near the Schwarzschild event horizon, a ``local inertial frame'' with coordinates $(T,R)$, i.e. a timelike coordinate $T$ and a spacelike coordinate $R$, such that the Kruskal-Szekeres coordinates are defined as ``null coordinates'':
\begin{eqnarray}
&&U=T+R\\
&&V=T-R.
\end{eqnarray}
We recall that the $(u,v)$ coordinates play the role of the Rindler coordinates, while the $(U,V)$ Kruskal and $(T,R)$ coordinate systems are analogous to the Minkowski ones. Moreover, the Kruskal coordinates cover the entire spacetime manifold of the maximally extended Schwarzschild solution, being well-behaved everywhere outside the physical singularity at $r=0$.
\newline
Furthermore, since we are working with an $s$-wave which is spherically symmetric, one could always approximate it locally as a ``plane wave'' propagating in the radial direction. So, in the previously defined ``local inertial frame'' $(T,R)$, we can consider a locally (monochromatic) ``plane wave'' propagating towards the BH. Up to an amplitude coefficient, it could be written as
\begin{equation}
\Phi_\text{inertial}(T,R) \propto \exp\left[-i\omega(T+R)\right].
\end{equation}
Thus, according to eqs.~(\ref{uv}) and (\ref{UV}), the locally (monochromatic) ingoing plane wave $\Phi_\text{inertial}(T,R)$ becomes
\begin{equation}
\Phi(t,r_\ast)=\exp\left[-i\left(\frac{\omega}{\kappa}\right)e^{\kappa(t+r_\ast)}\right]
\end{equation}
in the $(t,r_\ast)$ coordinates where it is obviously not a monochromatic plane wave. It is worth noting that keeping the variable $u=t+r_\ast$, the field reads
\begin{equation}\label{horizon_state}
\Phi(u)=\exp\left[-i\left(\frac{\omega}{\kappa}\right)e^{\kappa u}\right]
\end{equation}
which is analogous to the ``horizon state'' $\phi_H$ introduced by Solodukhin \cite{Solodukhin2004} in the framework of a conformal field theory, if one identifies $\omega/\kappa$ with the parameter $\mu_H$. Physically, the ingoing wave $\Phi(u)$ is not monochromatic simply because of the time dependent Doppler shift effect due to the gravitational field of the Schwarzschild BH.
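The time dependence of this shift can be made explicit: the instantaneous frequency of $\Phi(u)$, i.e. the $u$-derivative of its phase $(\omega/\kappa)e^{\kappa u}$, is $\omega\,e^{\kappa u}$, an exponentially time-dependent Doppler-shifted frequency. A small numerical sketch (the values $\omega=1$ and $\kappa=1/2$, corresponding to $2M=1$, are an illustrative choice of ours):

```python
import cmath, math

# Phi(u) = exp[-i (omega/kappa) e^{kappa u}]: a pure phase whose instantaneous
# frequency d(phase)/du = omega e^{kappa u} grows exponentially with u.
omega, kappa = 1.0, 0.5               # kappa = 1/(4M) = 1/2 for 2M = 1

def phase(u):
    return (omega / kappa) * math.exp(kappa * u)

def Phi(u):
    return cmath.exp(-1j * phase(u))

def inst_freq(u, h=1e-6):             # central finite difference of the phase
    return (phase(u + h) - phase(u - h)) / (2.0 * h)

for u in (-2.0, 0.0, 1.5):
    assert abs(abs(Phi(u)) - 1.0) < 1e-12                    # unit modulus
    assert abs(inst_freq(u) - omega * math.exp(kappa * u)) < 1e-6
```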
\newline
According to the previous relations, one could conclude that an observer looking at the ingoing wave actually sees a superposition of ``plane waves'' (cf. \cite{Alsing_Milonni2004}). Indeed, if we write the inverse Fourier transform, we have
\begin{equation}\label{inverseTF}
\Phi(u)=\int_{-\infty}^{+\infty}d\Omega\,\hat{\Phi}_0(\Omega)\,e^{-i\Omega u}.
\end{equation}
The expression of $\Phi(u)$ can be understood as resulting from a superposition of ingoing and outgoing ``plane waves'' near the Schwarzschild event horizon, where $\hat{\Phi}_0(\Omega)$ is the amplitude of the ``plane wave'' of frequency $\Omega$. In other words, we first focused on a single plane wave, i.e. a single frequency, and found that in the freely falling frame it becomes a superposition of ``plane waves''. Now, if we want to quantize the considered spin $j$ field, we have to identify each plane wave with a single particle state. In \cite{Solodukhin2004}, Solodukhin speaks of a transition between the ``horizon state'' and the outgoing propagating wave. Here, the interpretation from the scattering point of view is analogous and far more natural: there is a transition from a single ``plane wave'', i.e. a one particle state, to a superposition of ``plane waves'', i.e. a superposition of single particle states, during the free-fall into the Schwarzschild BH, and vice versa by time symmetry. Furthermore, it should be noted that, close to the horizon, according to Eq.~(\ref{bc1}), Eq.~(\ref{inverseTF}) is similar to Eq.~(19) in \cite{Solodukhin2004}
\begin{equation}
\Phi(u)= \int_{-\infty}^{+\infty}d\Omega\, \hat{\Phi}_0(\Omega)\, \Phi_{\Omega,0}(u)
\end{equation}
but we can go one step further in this ``freely falling wave'' description. Indeed, in order to know the frequency distribution of the locally accelerated ``wave'' $\Phi(u)$, we can Fourier transform it
\begin{equation}\label{TF}
\hat{\Phi}_0(\Omega)=\int_{-\infty}^{+\infty}du'\, \Phi(u')\, e^{i\Omega u'}.
\end{equation}
We would like to note that what follows is not really ``rigorous'' in a quantum field theory sense, as noted in \cite{Alsing_Milonni2004}, but it gives a physical intuition of what happens near the event horizon. According to Eqs.~(\ref{horizon_state}) and (\ref{TF}), we can write explicitly
\begin{equation}\label{TFexplicit}
\hat{\Phi}_0(\Omega)=\int_{-\infty}^{+\infty}du'\, e^{i\Omega u'}\, \exp\left[-i\left(\frac{\omega}{\kappa}\right)e^{\kappa u'}\right].
\end{equation}
Thus, following \cite{Alsing_Milonni2004}, we change variables to $y=\exp(\kappa u')$ and write
\begin{equation}
\hat{\Phi}_0(\Omega)=\frac{1}{\kappa}\int_{0}^{+\infty}dy\, y^{i\Omega/\kappa-1}\, \exp\left[-i\left(\frac{\omega}{\kappa}\right)y\right].
\end{equation}
The integral evaluates to
\begin{equation}
\hat{\Phi}_0(\Omega)=\frac{1}{\kappa}\Gamma\left(\frac{i\Omega}{\kappa}\right)\,\left(\frac{\omega}{\kappa}\right)^{i\Omega/\kappa}\exp\left(-\pi\Omega/2\kappa\right).
\end{equation}
The modulus squared of the amplitude of each ``plane wave'', i.e. $\left|\hat{\Phi}_0(\Omega)\right|^2$, is naturally interpreted as a probability (density) of measuring the associated frequency $\Omega$. Since we know from the properties of the $\Gamma$ function that
\begin{equation}
\left|\Gamma\left(\frac{i\Omega}{\kappa}\right)\right|^2=\frac{\pi}{\left(\Omega/\kappa\right)\sinh\left(\pi\Omega/\kappa\right)}
\end{equation}
then the probability (density) of measuring the frequency $\Omega$ is given by a Bose-Einstein-like distribution
\begin{equation}
\left|\hat{\Phi}_0(\Omega)\right|^2=\left(\frac{2\pi}{\Omega\kappa}\right)\frac{1}{\exp\left(2\pi\Omega/\kappa\right)-1}.
\end{equation}
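Both the $\Gamma$-function identity and the resulting Bose-Einstein-like form can be verified numerically. The sketch below evaluates the complex $\Gamma$ function with a standard Lanczos approximation (our own implementation; the sample values of $\kappa$ and $\Omega$ are illustrative):

```python
import cmath, math

# Complex Gamma via a standard Lanczos approximation (g = 7, 9 coefficients).
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                  # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1.0 - z))
    z -= 1.0
    x = _C[0] + sum(_C[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2.0 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

kappa = 0.5                           # kappa = 1/(4M) = 1/2 for 2M = 1
for Omega in (0.3, 1.0, 2.5):
    y = Omega / kappa
    g2 = abs(cgamma(1j * y)) ** 2
    # |Gamma(iy)|^2 = pi / (y sinh(pi y))
    assert abs(g2 - math.pi / (y * math.sinh(math.pi * y))) < 1e-8 * g2
    # (1/kappa^2) |Gamma(iy)|^2 e^{-pi y}  ==  (2 pi/(Omega kappa)) / (e^{2 pi y} - 1)
    lhs = g2 * math.exp(-math.pi * y) / kappa ** 2
    rhs = (2.0 * math.pi / (Omega * kappa)) / (math.exp(2.0 * math.pi * y) - 1.0)
    assert abs(lhs - rhs) < 1e-8 * rhs
```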
It is worth noting that this time dependent ``Doppler-gravitational'' shift effect can be adapted to fermions with the formal prescription $i\Omega/\kappa \rightarrow i\Omega/\kappa+1/2$. We refer the reader to \cite{Alsing_Milonni2004} for more details. Of course, the previous quantum point of view with superposition of particle states remains valid if one associates the frequency $\Omega$ with the energy of a quantum particle and interprets the probability of measuring $\Omega$ as the probability of finding particles of energy $\Omega$, as we will show in section \ref{scattering_statistics}.
\section{The far horizon limit and the QNM complex frequencies}\label{HDQNM}
\subsection{The far horizon limit}
We start by using the transformation
\begin{equation}
\Phi_{\omega,\ell}(r)=\left(1-\frac{2M}{r}\right)^{1/2}\phi_{\omega,\ell}(r).
\end{equation}
Then, after a partial fraction decomposition, the Regge-Wheeler equation becomes
\begin{eqnarray}\label{precoulomb}
&&\frac{1}{4M^2}\frac{d^2\phi_{\omega,\ell}}{dr^2}+\left[\omega^2+\frac{J-3/4}{r^2}+\frac{J+\ell(\ell+1)-1/2}{2Mr}\right. \nonumber \\
&&\qquad \qquad \qquad \quad \left.+\frac{8M^2\omega^2-J-\ell(\ell+1)+1/2}{2M(r-2M)}+\frac{4M^2\omega^2+1/4}{(r-2M)^2}\right]\phi_{\omega,\ell}=0. \nonumber \\
&&
\end{eqnarray}
We substitute the variable $r$ by $x=(r-2M)/2M$ such that $x \in [0,+\infty[$ when $r \in [2M,+\infty[$. We then consider the large $x$ region, i.e. $r \gg 2M$, for which we can expand Eq.~(\ref{precoulomb}) in powers of $x^{-1}$
\begin{equation}\label{coulomb3}
\frac{d^2\phi_{\omega,\ell}}{dx^2}+\left[\omega^2+\frac{2\omega^2}{x}+\frac{\omega^2-\frac{\ell(\ell+1)}{4M^2}}{x^2}+\frac{1+\ell(\ell+1)-J}{4M^2x^3}+\ldots\right]\phi_{\omega,\ell}=0.
\end{equation}
It should be noted that in Eq.~(\ref{coulomb3}) the spin, through $J=1-j^2$, only appears in the higher orders of the expansion, starting from the third one. One way to solve such an equation is to keep only the first two terms of the asymptotic expansion (\ref{coulomb3}), but the price to pay is the loss of the information concerning the spin
\begin{equation}\label{coulomb}
\frac{d^2\phi_{\omega,\ell}}{dx^2}+\left[\omega^2+\frac{2\omega^2}{x}+\frac{\omega^2-\frac{\ell(\ell+1)}{4M^2}}{x^2}\right]\phi_{\omega,\ell}=0.
\end{equation}
Thus, the equation becomes analogous to a Coulomb differential equation which, for an $s$-wave, i.e. $\ell=0$, reads
\begin{equation}\label{scoulomb}
\frac{d^2\phi_{\omega,0}}{dx^2}+\left[\omega^2+\frac{2\omega^2}{x}+\frac{\omega^2}{x^2}\right]\phi_{\omega,0}=0.
\end{equation}
\subsection{The $s$-wave scattering and the QNM complex frequencies}
In order to find the QNM complex frequencies, we will make some changes of variables. We introduce $b$ and $z$ such that
\begin{subequations}
\begin{eqnarray}
&&-4M^2\omega^2=b(b-1) \label{bb}\\
&&-4iM\omega x=z. \label{zx}
\end{eqnarray}
\end{subequations}
Finally, we define a new ``$s$-wave function''
\begin{equation}
\psi_{\omega}(z,b)=x^b \exp(2iM\omega x)\phi_{\omega,0}(z).
\end{equation}
Therefore, Eq.~(\ref{scoulomb}) becomes
\begin{equation}\label{hypergeom}
z\frac{d^2 \psi_{\omega}}{dz^2}+(2b-z)\frac{d\psi_{\omega}}{dz}-(b-2iM\omega)\psi_{\omega}=0
\end{equation}
which is the well-known confluent hypergeometric (Kummer) equation \cite{AS65} (for the $\ell \neq 0$ case, we refer the reader to \cite{LiuMashhoon1996}). The solution of Eq.~(\ref{hypergeom}) is a combination of confluent hypergeometric functions of the first kind
\begin{eqnarray}
&&\psi_{\omega}(z,b)=\frac{\Gamma(1-2b)}{\Gamma(1-b-2iM\omega)}F_1\left(b-2iM\omega;2b;z\right)\nonumber \\
&& \qquad \qquad +\frac{\Gamma(2b-1)}{\Gamma(b-2iM\omega)}z^{1-2b}F_1\left(1-b-2iM\omega;2-2b;z\right).
\end{eqnarray}
Then the solution for the ``$s$-field'' $\phi_{\omega,0}$ reads
\begin{eqnarray}
&&\phi_{\omega,0}(x)=x^be^{2iM\omega x}\frac{\Gamma(1-2b)}{\Gamma(1-b-2iM\omega)}F_1\left(b-2iM\omega;2b;z(x)\right)\nonumber \\
&& \qquad \qquad +\left(-4iM\omega x\right)^{1-2b}x^be^{2iM\omega x}\frac{\Gamma(2b-1)}{\Gamma(b-2iM\omega)}z(x)^{1-2b}F_1\left(1-b-2iM\omega;2-2b;z(x)\right)\nonumber \\
&&
\end{eqnarray}
where $z(x)$ is defined by Eq.~(\ref{zx}).\\
For large frequencies, i.e. for $\omega>\omega_\text{min}$ where $\omega_\text{min}$ has been defined in section \ref{generalities}, we can use the following approximation
\begin{equation}
b_\pm \approx 1/2 \pm 2iM\omega \nonumber
\end{equation}
and, from the large $x$ limit, we use the asymptotic expansions of the previous confluent hypergeometric functions in terms of $\Gamma$ functions. Moreover, in order to obtain the amplitudes related to the ingoing wave, i.e. $A_{in}$, and to the outgoing wave, i.e. $A_{out}$, one has to note that $x$ and $r_\ast$ are related by the identity
\begin{equation}
e^{\pm2iM\omega x} x^{\pm2iM\omega}=e^{\pm i\omega r_\ast}. \nonumber
\end{equation}
Thus, up to numerical constants of order unity, the amplitudes of the asymptotic behavior of the $s$-wave, for ``high frequencies'', read
\begin{subequations}\label{AinAout}
\begin{eqnarray}
&&A_{in}(\omega)\approx (-4iM\omega)^{-b-2iM\omega}\frac{\Gamma(1-4iM\omega)\Gamma(4iM\omega)}{\sqrt{\pi}~\Gamma(1/2-4iM\omega)}\left(x^{-1+2b}b^{1-2b}+1\right)\\
&&A_{out}(\omega)\approx (4iM\omega)^{-b+2iM\omega}\frac{\Gamma(1-4iM\omega)\Gamma(4iM\omega)}{\pi}\left(x^{-1+2b}b^{1-2b}+1\right).
\end{eqnarray}
\end{subequations}
such that the ``$s$-field'' has an asymptotic behavior similar to Eq.~(\ref{bc2}). It should be noted that, in these expressions, after some simplifications, the value $b_{+}=1/2+2iM\omega$ is the same for $A_{in}$ and for $A_{out}$. Then, as seen earlier, the QNM complex frequencies, which are defined by $A_{in}=0$, i.e. by the poles of the function $\Gamma(1/2-4iM\omega)$, can be written as
\begin{equation}
\forall n \in \textbf{N},\qquad \frac{1}{2}-4iM\omega_n=-n
\end{equation}
or, in other words
\begin{equation}
\forall n \in \textbf{N},\qquad 8\pi M\omega_n=2\pi i \left(n+\frac{1}{2}\right).
\end{equation}
Here, the highly damped QNM complex frequencies are obtained by considering a ``far region'' limit, $r>2M$, and a ``high frequency'' regime, i.e. $\omega>\omega_\text{min}$. It should be noted that, in such a calculation, $\omega_n$ has no real part, while the latter is known to be non-null and to depend both on the spin $j$ of the field and on the characteristics of the considered BH. More particularly, one could think that not considering the third order term in the large $x$ expansion of Eq.~(\ref{coulomb3}) is equivalent to considering the particular case of a spin $j$ satisfying
\begin{equation}
1-J=1-(1-j^2)=j^2=0.
\end{equation}
In this case, Eq.~(\ref{coulomb3}) would be equivalent to Eq.~(\ref{scoulomb}) if $j=0$. But we know that this ``$j=0$ result'' differs from the results discussed by Motl in \cite{Motl2002}, obtained with the help of the powerful monodromy techniques. In other words, the statement ``Eq.~(\ref{coulomb3}) would be equivalent to Eq.~(\ref{scoulomb}) if $j=0$'' seems to be wrong. Then, as already noticed earlier, the exact expression of the ``spin $j$ dependent'' QNM frequency spectrum obviously cannot be obtained within our second order approximation, i.e. Eq.~(\ref{scoulomb}), from which we only get the exact expression of the imaginary part.
\section{From scattering to statistics}\label{scattering_statistics}
\subsection{Quantum field theory: a very brief survey}
In quantum field theory \cite{ParkerToms2009}, in order to compute the spectrum of outgoing particles from a BH, one usually considers the coefficients of the Bogolubov transformation relating the well-known annihilation (resp. creation) operator $b_\Omega$ (resp. $b_\Omega^{\dagger}$) to $a_\omega$ and $a_\omega^{\dagger}$ such that
\begin{equation}
b_\Omega=\int d\omega \left(\alpha^{\ast}_{\Omega\omega}a_{\omega}-\beta^{\ast}_{\Omega\omega}a_{\omega}^{\dagger}\right)
\end{equation}
with the commutation relations
\begin{subequations}\label{commutation_relations}
\begin{eqnarray}
&&\left[\eta_{\omega_1},\eta_{\omega_2}^{\dagger}\right]=\delta\left(\omega_1-\omega_2\right)\\
&&\left[\eta_{\omega_1},\eta_{\omega_2}\right]=0=\left[\eta_{\omega_1}^{\dagger},\eta_{\omega_2}^{\dagger}\right]
\end{eqnarray}
\end{subequations}
where $\eta=a$ (resp. $b$) and $(\omega_1,\omega_2)=(\omega,\omega')$ (resp. $(\Omega,\Omega')$). The annihilation operators $b_\Omega$ and $a_\omega$ define respectively the Boulware vacuum state (in the $(u,v)$ coordinate system) and the Kruskal vacuum state (in the $(U,V)$ coordinate system) by
\begin{subequations}\label{vacuum_states}
\begin{eqnarray}
&&b_\Omega \left|0_B\right\rangle=0 \qquad \text{Boulware vacuum}\\
&&a_\omega \left|0_K\right\rangle=0 \qquad \text{Kruskal vacuum}.
\end{eqnarray}
\end{subequations}
It should be noted that, using Eq.~(\ref{commutation_relations}), the normalization condition for the Bogolubov coefficients reads
\begin{equation}
\int d\omega \left(\alpha_{\Omega\omega}\alpha^{\ast}_{\Omega'\omega}-\beta_{\Omega\omega}\beta^{\ast}_{\Omega'\omega}\right)=\delta\left(\Omega-\Omega'\right).
\end{equation}
Then, from the standard mode expansions for the considered field operator both in the coordinates $(u,v)$ and $(U,V)$, and after some calculations, it follows that the coefficients of the Bogolubov transformation obey the well-known relation
\begin{equation}\label{exp_alphabeta}
\left|\alpha_{\Omega\omega}\right|^2=\exp\left(8\pi M\Omega\right)\left|\beta_{\Omega\omega}\right|^2.
\end{equation}
Thus, one can easily deduce the expectation value of the ``$b$-particle'' number operator, i.e. $N_\Omega=b_\Omega^{\dagger}b_\Omega$, in the Kruskal vacuum state and, more generally, the physics of the Hawking effect.
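As an illustration of this last step (a schematic single-mode version of the continuum normalization above, with our own sample values), combining $|\alpha|^2-|\beta|^2=1$ with Eq.~(\ref{exp_alphabeta}) gives a Planck spectrum at the temperature $T=1/8\pi M$:

```python
import math

def mean_number(Omega, M=0.5):
    # Single-mode sketch: solve |alpha|^2 - |beta|^2 = 1 together with
    # |alpha|^2 = e^{8 pi M Omega} |beta|^2  =>  <N_Omega> = |beta|^2.
    x = math.exp(8.0 * math.pi * M * Omega)
    beta2 = 1.0 / (x - 1.0)
    alpha2 = x * beta2
    assert abs(alpha2 - beta2 - 1.0) < 1e-9   # normalization holds
    return beta2

M = 0.5                                       # 2M = 1, illustrative
T = 1.0 / (8.0 * math.pi * M)                 # Hawking temperature
for Omega in (0.2, 1.0, 3.0):
    planck = 1.0 / (math.exp(Omega / T) - 1.0)
    assert abs(mean_number(Omega, M) - planck) < 1e-12 * planck
```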
\subsection{The Hawking radiation: a scattering effect}
Let us now focus on the scattering/reflection coefficient of an ingoing $s$-wave, i.e. $S_0(\omega)=-R_0(\omega)$. We recall the main result (\ref{AinAout}) obtained in section \ref{HDQNM}, i.e.
\begin{subequations}
\begin{eqnarray}
&&A_{in}(\omega)\approx (-4iM\omega)^{-b-2iM\omega}\frac{\Gamma(1-4iM\omega)\Gamma(4iM\omega)}{\sqrt{\pi}~\Gamma(1/2-4iM\omega)}\left(x^{-1+2b}b^{1-2b}+1\right)\nonumber \\
&&A_{out}(\omega)\approx (4iM\omega)^{-b+2iM\omega}\frac{\Gamma(1-4iM\omega)\Gamma(4iM\omega)}{\pi}\left(x^{-1+2b}b^{1-2b}+1\right).\nonumber
\end{eqnarray}
\end{subequations}
Using Stirling's approximation, valid here for ``large frequencies'', i.e. $\omega>\omega_\text{min}$, one has
\begin{equation}
\Gamma(1/2-4iM\omega) \approx e^{4iM\omega}(4iM\omega)^{-4iM\omega}e^{-4\pi M\omega}
\end{equation}
then, we can easily deduce that the ``$\ell=0$'' reflection coefficient can be written as
\begin{equation}\label{R0}
R_0(\omega)=\frac{A_{out}(\omega)}{A_{in}(\omega)}\approx e^{4iM\omega}\times e^{-4\pi M\omega}.
\end{equation}
From Eq.~(\ref{R0}), the reflection probability off the ``$\ell=0$'' Regge-Wheeler potential barrier (cf. Fig.~\ref{0RWPot}) reads
\begin{equation}\label{Boltzmann_factor}
\left|R_0(\omega)\right|^2 = e^{-8\pi M\omega}.
\end{equation}
Equation (\ref{Boltzmann_factor}) can be interpreted as a Boltzmann factor characterized by a temperature $T=1/8\pi M$, i.e. the Hawking temperature. With the help of Eq.~(\ref{exp_alphabeta}), we can now stress the link between the quantum field theory approach and the scattering approach, i.e.
\begin{equation}
\left|\alpha_{\Omega\omega}\right|^2=\left|R_0(\Omega)\right|^{-2}\, \left|\beta_{\Omega\omega}\right|^2.
\end{equation}
\subsection{Physics and thermodynamical aspects of the Schwarzschild BH}
In the following, we will associate every ``local plane wave'' with a ``single particle state''. According to the above results, the infalling motion of a quantum particle is analogous to a time dependent boost along the radial direction and, consequently, the proper distance of the infalling quantum particle decreases exponentially with time. Moreover, if the infalling particles lose enough energy, they can be considered, for an external observer, as ``trapped'' between the event horizon $r_h$ and the location of $V_{0,\text{max}}$, which then plays the role of a ``thermal atmosphere'' of the BH. Finally, as a consequence of the time dependent boost, this thermal atmosphere becomes thinner as the particles eternally fall towards the horizon. We claim that this analysis could help clarify the link between conformal field approaches and BH scattering and could be at the origin of BH thermal effects. Indeed, an $s$-wave with an energy at least equal to $\omega_\text{min}$ can escape the BH without tunneling. Reciprocally, such an $s$-wave will be able to penetrate the barrier from the outside and fall to the horizon (cf. Fig.~\ref{0RWPot}). It is worth noting that, in this case, the amplitude of the wave reflected off the ``$\ell=0$'' Regge-Wheeler potential barrier is exponentially small, but not null. Moreover, the mean energy of massless particles in thermal equilibrium at temperature $T=1/8\pi M$ is roughly of order $T$, which is bigger than $\omega_\text{min}$. Therefore, some of the $s$-waves, i.e. ``$s$-state'' particles, will easily escape to infinity. Unless the BH is kept in equilibrium by incoming radiation, it will lose energy to its surroundings. In other words, the BH evaporates. This is one possible explanation of the Hawking radiation in terms of $s$-wave scattering, without tunneling.
It should be noted that this is not the case for fields (or particles) with $\ell \gg 1$, because the Regge-Wheeler potential barrier becomes large enough to significantly reduce this process. Moreover, in this case, the location of $V_{0,\text{max}}$ moves away from the horizon towards the location of the photon sphere, which plays a central role in the analysis of weakly damped QNMs \cite{YDAFBR2010}.
\newline
Furthermore, up to a normalizing constant, the expression (\ref{Boltzmann_factor}) tells us that the probability of finding one particle reflected by the ``$\ell=0$'' Regge-Wheeler potential barrier is the same as the probability of finding one particle of energy $\omega$ in a system, i.e. the thermal atmosphere, in thermodynamical equilibrium at temperature $T=1/8\pi M$. Thus, the probability $P_R(N_k)$ of finding $N_k$ particles of energy $\omega_k$ in the thermal atmosphere of the BH, in thermodynamical equilibrium at the temperature $T=1/8\pi M$, is
\begin{equation}
P_R(N_k)=\frac{1}{Z}\left(\left|R_0(\omega_k)\right|^2\right)^{N_k}=\frac{1}{Z}e^{-8\pi M\omega_k N_k}
\end{equation}
where the normalizing constant $Z$ is the partition function, chosen such that the probabilities sum up to one
\begin{equation}
Z=\sum_{N_k} e^{-8\pi M\omega_k N_k}.
\end{equation}
The mean number of reflected particles depends of course on the nature of the considered particles and is usually given by
\begin{equation}
\left<N^{(F/B)}(\omega_k)\right>=\sum_{N_k=0}^{1\, \text{or} \,\infty}N_k\, P_R(N_k)
\end{equation}
where $F/B$ stands for ``Fermions/Bosons''. More explicitly
\begin{subequations}
\begin{eqnarray}
&&\left<N^{(F)}(\omega_k)\right>=\sum_{N_k=0}^{1}N_k\, P_R(N_k)=\frac{1}{e^{8\pi M\omega_k}+1} \quad \text{for\, fermions}\\
&&\left<N^{(B)}(\omega_k)\right>=\sum_{N_k=0}^{\infty}N_k\, P_R(N_k)=\frac{1}{e^{8\pi M\omega_k}-1} \quad \text{for\, bosons}.
\end{eqnarray}
\end{subequations}
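These two sums can be checked directly (the sample values of $8\pi M$ and $\omega_k$ below are ours, purely illustrative):

```python
import math

beta, w = 1.7, 0.9                    # illustrative values of 8*pi*M and omega_k
q = math.exp(-beta * w)               # Boltzmann weight per particle

# Fermions: N_k in {0, 1}
Zf = 1.0 + q
mean_f = q / Zf
assert abs(mean_f - 1.0 / (math.exp(beta * w) + 1.0)) < 1e-12

# Bosons: N_k in {0, 1, 2, ...}, truncated once the terms are negligible
Zb = sum(q ** N for N in range(200))
mean_b = sum(N * q ** N for N in range(200)) / Zb
assert abs(mean_b - 1.0 / (math.exp(beta * w) - 1.0)) < 1e-10
```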
From the Fourier analysis of section \ref{NHL}, one then has
\begin{equation}
\left<N^{(F/B)}(\Omega)\right> \propto \left|\hat{\Phi}_0^{(F/B)}(\Omega)\right|^2.
\end{equation}
Therefore, the mean number of particles of energy $\Omega$ is proportional to the probability of measuring a frequency $\Omega$ in the spectrum of the field $\Phi(u)$.
\section{Statistical ``heuristics''}\label{statistics}
As we have just seen, in thermodynamical terms, the system in thermal equilibrium at temperature $T=1/8\pi M$ is the thin thermal atmosphere located close to the event horizon. In this sense, it is tempting to think that the physics which has to be considered to describe the spectrum of the Hawking radiation would be a conformal field theory, in the near vicinity of the horizon, associated with the field in interaction with the considered BH. Therefore, due to the accumulation of all the Fourier components (quantum particle states) of the incoming wave spectrum on this thermal layer, it would not be a surprise to describe the near horizon physics in terms of some ``other/new'' degrees of freedom, which would then be expected to obey rather ``exotic'' statistics. In the following, we give a heuristic point of view concerning these new statistics.
\subsection{A naive approach}
Let us introduce the usual variable $\beta=T^{-1}$ to simplify the notations and let us consider ``$\ell=0$'' fermions reflected off the Regge-Wheeler potential barrier. The mean number of fermions with energy $\omega_n$ is given by the well-known Fermi-Dirac statistics
\begin{equation}
\left<N_i(\omega_n)\right>=\frac{1}{e^{\beta \omega_n}+1}=\frac{\left<N_i\right>}{g_i}
\end{equation}
where $g_i$ is the degeneracy of the $i^{th}$ state of energy $\omega_n$.
The QNM complex frequencies can be seen as poles in the complex $\omega$-plane of the Fermi-Dirac distribution, for which one can write
\begin{equation}
e^{\beta \omega_n}+1 \approx 0 \Rightarrow \beta \omega_n \approx 2\pi i\left(n+\frac{1}{2}\right)
\end{equation}
which is the result obtained in the previous section.\\
Even if the behavior is correct for large $n$, this approach obviously does not give the correct answer, with its well-known constant real part, i.e. $\ln(3)/8\pi M$. How can one gain access to the latter? One way to answer this question is to assume that, near the horizon, there may be another, ``exotic'' statistics satisfied by the considered quanta, which would behave neither like fermions nor like bosons.\\
A first very naive approach is to consider a non-null chemical potential $\mu$, or the corresponding quantity $z=e^{-\mu/T}$ (an inverse fugacity), for a thermodynamical system with a conserved number of fermions. Then the Fermi-Dirac distribution reads
\begin{equation}
\left<N_i(\omega_n-\mu)\right>=\frac{1}{z e^{\beta \omega_n}+1}=\frac{z^{-1}}{e^{\beta \omega_n}+z^{-1}}.
\end{equation}
One could consider the quantity $z$ as defining a universal degeneracy $g=z^{-1}$ for each state of an equivalent thermodynamical problem in which the number of particles is no longer conserved and whose statistics would, by definition, be given by
\begin{equation}
\left<\tilde{N}_i(\omega_n)\right> = \frac{1}{g} \left<N_i(\omega_n-\mu)\right> = \frac{1}{e^{\beta \omega_n}+g}.
\end{equation}
Then, the QNM complex frequencies, seen as the poles of the statistical distribution, are given by
\begin{equation}
e^{\beta \omega_n}+g \approx 0 \Rightarrow \beta \omega_n \approx \ln\left|g\right|+ 2\pi i\left(n+\frac{1}{2}\right).
\end{equation}
The exact analytical result suggests that, for a Schwarzschild BH, $g=3$. In other words, each energy state could have 3 possible sub-levels. We are conscious that such an explanation is just a mathematical trick and that the underlying physics remains to be found.
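This can be illustrated numerically (our own sketch, with $2M=1$): the poles of $1/(e^{\beta\omega}+g)$ with $g=3$ indeed carry the real part $\ln 3/(8\pi M)$:

```python
import cmath, math

def qnm_pole(n, M=0.5, g=3.0):
    # Solve e^{beta w} + g = 0 with beta = 8 pi M:
    # beta w_n = ln|g| + 2 pi i (n + 1/2)
    beta = 8.0 * math.pi * M
    return (math.log(g) + 2j * math.pi * (n + 0.5)) / beta

M = 0.5
beta = 8.0 * math.pi * M
for n in range(4):
    w_n = qnm_pole(n, M)
    assert abs(cmath.exp(beta * w_n) + 3.0) < 1e-9         # pole condition
    assert abs(w_n.real - math.log(3.0) / beta) < 1e-15    # Re w_n = ln(3)/(8 pi M)
```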
\subsection{More on statistics: the fractional statistics}
However, one could assume that, near the event horizon of the Schwarzschild BH, particles behave neither like bosons nor like fermions but satisfy a more general statistics, which could be Polychronakos' fractional statistics \cite{Polychronakos1996}, Haldane's fractional statistics \cite{Haldane1991} or even the infinite statistics \cite{AltherrGrandou1993} introduced some years ago, all of which start from a generalization of the Pauli exclusion principle (highly suggested by Motl's ``tripled Pauli statistics'' \cite{Motl2002}). In the first case, one could use the simple Eq.~(22) of \cite{Polychronakos1996}, which reads
\begin{equation}
\left<N_i(\omega_n,\alpha)\right>=\frac{1}{e^{\beta \omega_n}+\alpha}.
\end{equation}
Fermions and bosons correspond respectively to $\alpha=1$ and $\alpha=-1$. If we introduce the variable $g$ such that $\alpha=2g+1$, then
\begin{equation}
\left<N_i(\omega_n,g)\right>=\frac{1}{e^{\beta \omega_n}+1+2g}
\end{equation}
where fermions and bosons now correspond respectively to $g=0$ and $g=-1$. We naturally deduce
\begin{equation}
\beta \omega_n=\ln\left|1+2g\right|+2\pi i \left(n+\frac{1}{2}\right).
\end{equation}
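A quick check of the limits of this interpolating distribution (a sketch of ours; $\beta=1$ is an arbitrary choice):

```python
import math

def n_frac(w, g, beta=1.0):
    # Interpolating distribution 1/(e^{beta w} + alpha) with alpha = 2g + 1.
    return 1.0 / (math.exp(beta * w) + 1.0 + 2.0 * g)

for w in (0.5, 1.0, 2.0):
    assert abs(n_frac(w, 0.0) - 1.0 / (math.exp(w) + 1.0)) < 1e-14   # fermions (alpha = 1)
    assert abs(n_frac(w, -1.0) - 1.0 / (math.exp(w) - 1.0)) < 1e-13  # bosons (alpha = -1)
    assert abs(n_frac(w, 1.0) - 1.0 / (math.exp(w) + 3.0)) < 1e-14   # "tripled Pauli" (alpha = 3)
```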
One can notice that the choice $g=1$ ($\alpha=3$) allows us to recover the intriguing ``tripled Pauli statistics''. Let us note that the different expressions of the highly damped QNM complex frequencies can be seen as statistics obtained for different kinds of BHs, which could be embedded in the ``richness'' of these statistics \cite{Polychronakos1996,Haldane1991,AltherrGrandou1993}. In particular, we think that the fractional statistics could be compatible with the ``general'' expression defining the quasinormal frequencies given in \cite{Skakala2012}.
\section{Conclusion}
In our scattering picture, the Hawking radiation and thermal effects for a Schwarzschild BH are associated with the scattering of $s$-waves by the ``$\ell=0$'' Regge-Wheeler potential barrier. A very interesting result is that the scattering/reflection amplitude is exponentially small but not null and is exactly the Boltzmann factor related to the Hawking effect. We have seen, from an external observer's perspective, that incoming waves (quantum particles) can be ``trapped'' between the location of $V_{0,\text{max}}$ and the horizon. Moreover, this ``thermal atmosphere'' becomes thinner as a consequence of the time dependent boost, i.e. the free-fall of the waves (quantum particles) into the BH. One could then ask: is there a thickness limit for the ``thermal atmosphere''? According to well-known thermodynamical results, the answer would probably be linked to the existence of a cutoff, maybe the Planck length, but this is far beyond the scope of our analysis. Furthermore, we claim that this thin thermal atmosphere could be at the origin of new exotic statistics, due to the change of degrees of freedom during the transition between the incoming field in spacetime and the accumulation of all its Fourier components (quantum particle states) on the ``thermal layer''. It is worth noting that this thermodynamics, and consequently all the associated quantities, is usually associated with the BH, seen as one thermodynamical system. We thus have two different origins for the highly damped QNM complex frequencies. The first is the damping of the QNM due to the scattering/reflection off the ``$\ell=0$'' Regge-Wheeler potential barrier in the ``far region'' limit, directly described by the imaginary part. The second would be the thermodynamics of the ``thermal layer'', associated with the real part. Our scattering analysis seems to be in agreement with Maggiore's interpretation \cite{Maggiore2008}.
It should be noted that the Hawking effect and the highly damped QNM are not linked to a very low frequency limit \cite{Matzner1968} but simply to the interaction of $s$-fields with the considered BH, whatever their energy. Moreover, in addition to the existence of a thermal atmosphere, we have also recovered some results obtained in \cite{Solodukhin2004} within conformal field theory frameworks, naturally suggesting the use of such approaches to describe the physics near the event horizon. Furthermore, one has to bear in mind that the statistics of particles should be thought of as a property of spacetime, particularly near the horizon of a Schwarzschild BH. Therefore, if spacetime is quantum by nature, the commutation or anticommutation relations for the creation and annihilation operators of the considered fields may change \cite{Swain2009}. Thus, exotic statistics, and thereby a modification of the spin-statistics theorem, could be deeply related to the very quantum nature of spacetime and could emerge from an underlying theory of quantum gravity in the near vicinity of a BH horizon. Finally, it is worth noting that a generalization to the $d$-dimensional Schwarzschild-Tangherlini BH in interaction with a scalar ($j=0$) field is trivial, at least for the imaginary part of the QNM complex frequencies. Indeed, in this case, the ``dimension parameter'' in the Regge-Wheeler potential is multiplied by the angular momentum $\ell$ (for more details, see \cite{YDAFBR2010}). Thus, for the scattering of an $s$-wave, the results will be identical. Concerning the real part, there is no ``simple'' argument, except that the ``$\ell=0$'' Regge-Wheeler potential is not modified, so that one could perform the same calculations and recover the same scattering/reflection amplitude as in section \ref{HDQNM}. Actually, the real part does not seem to be affected by the dimension of spacetime either, as noticed by Motl in \cite{Motl2002} (and references therein).
\acknowledgments
I would like to thank Professors Antoine Folacci, Yves Decanini, Jean-Pierre Provost, Christian Bracco, Thierry Grandou and Sergey Solodukhin for very stimulating discussions and support.
\section{Introduction \label{introduction}}
Superthermal particles in laboratory, space and astrophysical
plasmas are often modeled by a $\kappa$-type distribution function
(df) \cite{Vasyliunas1968,Sultana2010}. The superthermality
parameter $\kappa$ measures the deviation from a Maxwellian
distribution (the latter is recovered for infinite $\kappa$). Our
twofold aim here is to investigate the effect of superthermality on
electrostatic solitary waves, and also on self-modulated
wavepackets.
Electron-acoustic waves (EAW) occur in plasmas containing two
distinct temperature electron populations (here referred to as
``cold'' and ``hot'' electrons) \cite{Watanabe1977,Mace1990}. These
are high frequency electrostatic electron oscillations, where the
restoring force comes from the hot electrons pressure and the cold
electrons provide the inertia \cite{Watanabe1977,Verheest2007},
while ions plainly provide a neutralizing background. The phase
speed $v_{ph}$ of the EAW is much larger than the thermal speeds of
both the cold electrons and the ions, but much smaller than that of the hot
electrons (i.e., $v_{ph,c}$, $v_{ph,i}\ll v_{ph} \ll v_{ph,h}$).
EAWs survive Landau damping in the region $T_{h}/T_{c} \geq10$ and
$0.25 \leq n_{c0}/n_{h0} \leq4$ \cite{Watanabe1977,Mace1990}, where
we have defined the temperature ($T_{c}$, $T_{h}$) and density
($n_{c}$, $n_{h}$) of the electron constituents (`c' for cold, `h'
for hot).
\section{Electron fluid model \label{electronfluidmodel}}
We consider a three-component plasma consisting of inertial (``cold'') electrons, $\kappa$-distributed (``hot'') electrons and stationary background ions. In a 1D geometry, the dynamics of the cold electrons is governed by the following normalized equations:
\[
\begin{array}[c]{cc}
\dfrac{\partial n}{\partial t}+\dfrac{\partial(nu)}{\partial x}=0, &
\text{\ \ \ }\dfrac{\partial u}{\partial t}+u\dfrac{\partial u}{\partial x}=\dfrac{\partial\phi}{\partial x}-\dfrac{\sigma}{n}\dfrac{\partial P}{\partial x},
\end{array}
\]
\[
\begin{array}[c]{cc}
\dfrac{\partial P}{\partial t}+u\dfrac{\partial P}{\partial x}+3P\dfrac{\partial u}{\partial x}=0, &
\text{\ \ \ }\dfrac{\partial^{2}\phi}{\partial x^{2}}=-(\beta+1)+n+\beta\left( 1-\dfrac{\phi}{\kappa-\tfrac{3}{2}}\right)^{-\kappa+1/2}.
\end{array}
\]
We have scaled all relevant physical quantities as: cold electron density $n=n_{c}/n_{c0}$; fluid speed $u=u_{c}/v_{0}$; electric potential $\phi=\Phi/\Phi_{0}$; time $t=t\omega_{pc}$; space $x=x/\lambda_{0}$; pressure $P=P/n_{c0}k_{B}T_{c}$. We have defined $v_{0}=(k_{B}T_{h}/m_{e})^{1/2}$, $\lambda_{0}=(k_{B}T_{h}/4\pi n_{c0}e^{2})^{1/2}$ and $\omega_{pc}^{-1}=(4\pi n_{c0}e^{2}/m_{e})^{-1/2}$, as well as the density and temperature ratios $\beta=n_{h,0}/n_{c,0}$ and $\sigma=T_{c}/T_{h}$.
\begin{figure}[ptb]
\begin{center}
\includegraphics[height=2.1534in,width=5.3705in]{figures/fig1.eps}
\caption{Variation of the pseudopotential $\Psi(\phi)$ with $\phi$
(left); the electric potential $\phi$ vs. $\xi$ (right). We have
considered various values of $\kappa$, and $\sigma=0.01$, $\beta=1$, and $M=1$.}
\label{fig1}
\end{center}
\end{figure}
\section{Arbitrary amplitude solitary excitations}
Anticipating stationary-profile localized excitations, we shift from the
variables $\{x,t\}$ to the single variable $\xi=x-Mt$, where $M$ is the solitary wave
speed, scaled by $v_{0}$ (defined above). We obtain $u=M\left( 1-\dfrac{1}{n}\right)$,
$u=M-\left( M^{2}+2\phi-3n^{2}\sigma+3\sigma\right)^{1/2}$, and $P=n^{3}$. Poisson's equation
thus leads to a pseudo-energy balance equation
\begin{equation}
\frac{1}{2}\left( \frac{d\phi}{d\xi}\right)^{2}+\Psi(\phi)=0,
\label{eq2_37}
\end{equation}
where the ``Sagdeev'' pseudopotential function $\Psi(\phi)$ reads
\cite{Danehkar2011}
\begin{align}
\Psi(\phi)= & (1+\beta)\phi+\beta\left[ 1-\left( 1-\frac{\phi}{\kappa-\tfrac{3}{2}}\right)^{-\kappa+3/2}\right] +\frac{1}{6\sqrt{3\sigma}}\left[ \left( M+\sqrt{3\sigma}\right)^{3}-\left( M-\sqrt{3\sigma}\right)^{3}\right. \nonumber\\
& -\left( 2\phi+\left[ M+\sqrt{3\sigma}\right]^{2}\right)^{3/2}\left. +\left( 2\phi+\left[ M-\sqrt{3\sigma}\right]^{2}\right)^{3/2}\right] \,. \label{eq2_38}
\end{align}
\subsection{Soliton Existence}
In order for solitons to exist, we need to impose \cite{Verheest2007}
$\Psi^{\prime}(\phi=0)=0$ and $\Psi^{\prime\prime}(\phi=0)<0$ (where the prime
denotes differentiation with respect to $\phi$), leading to the (true sound speed)
threshold $M_{1}=\left[ \frac{\kappa-3/2}{\beta(\kappa-1/2)}+3\sigma\right]^{1/2}$.
An upper limit $M_{2}$ for $M$ is obtained by imposing the reality
requirement \cite{Danehkar2011} $F_{2}(M)=\Psi(\phi)|_{\phi=\phi_{\max}}>0$
(where $\phi_{\max}$ is a limit on the electrostatic potential value;
$\Psi(\phi)$ is real for $\phi_{\max}<\phi$; see Fig.~\ref{fig1}a). The region
thus obtained ($M_{1}<M<M_{2}$) is depicted in Figure \ref{fig2}.
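As an illustrative cross-check of these expressions (a sketch only; the parameter values are chosen arbitrarily and are not taken from the figures), the pseudopotential (\ref{eq2_38}) and the lower threshold $M_{1}$ can be evaluated numerically:

```python
import math

def sagdeev_potential(phi, M, beta=1.0, sigma=0.01, kappa=3.0):
    # Pseudopotential Psi(phi): kappa-distributed hot electrons + cold electron fluid
    c = kappa - 1.5
    hot = (1.0 + beta) * phi + beta * (1.0 - (1.0 - phi / c) ** (-kappa + 1.5))
    a = M + math.sqrt(3.0 * sigma)
    b = M - math.sqrt(3.0 * sigma)
    cold = (a**3 - b**3
            - (2.0 * phi + a**2) ** 1.5
            + (2.0 * phi + b**2) ** 1.5) / (6.0 * math.sqrt(3.0 * sigma))
    return hot + cold

def mach_threshold_M1(beta=1.0, sigma=0.01, kappa=3.0):
    # Lower (true sound speed) limit M_1 for soliton existence
    return math.sqrt((kappa - 1.5) / (beta * (kappa - 0.5)) + 3.0 * sigma)
```

For $M$ slightly above $M_{1}$ one finds $\Psi(\phi)<0$ for small $|\phi|$ (the soliton condition $\Psi^{\prime\prime}(0)<0$), while below $M_{1}$ the sign reverses.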
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=2.8in]{figures/fig2a.eps}
\ \ \ \ \
\includegraphics[width=2.8in]{figures/fig2b.eps}
\caption{Soliton existence region ($M_{1}<M<M_{2}$) for different temperature
ratio $\sigma$ values, versus $\beta$ for $\kappa=3$ (left panel);
versus $\kappa$ for $\beta=1$ (right panel).}
\label{fig2}
\end{center}
\end{figure}
\section{Modulated electron-acoustic wavepackets}
We consider small ($\varepsilon\ll1$) deviations of all state
variables, say $S$ ($=n,u,\phi$), from the equilibrium state, viz.
$S=S^{(0)}+\sum_{n=1}^{\infty}\varepsilon^{n}\,\sum_{l=-n}^{n}S^{(nl)}e^{il(kx-\omega t)}$,
and allow for a weak space-/time-dependence of the $l$-th
harmonic amplitudes $S^{(nl)}$. In what follows, we ignore the
pressure term (\textit{cold electron model}) for simplicity, and set
$\alpha={n_{c,0}}/{n_{h,0}}$ ($=\beta^{-1}$).
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=2.5in]{figures/fig3a.eps}
\ \ \ \ \
\includegraphics[width=2.5in]{figures/fig3b.eps}
\caption{Envelope type solitary excitations:
bright type (left panel) and dark type (right panel).}
\label{fig3}
\end{center}
\end{figure}
The 1st order ($\sim\varepsilon^{1}$) expressions provide the EAW
\textit{dispersion relation}
$\omega^{2}=\frac{k^{2}\alpha}{k^{2}+c_{1}}$, along with the
amplitudes of the first harmonics. The 2nd and 0th harmonics are
obtained at order $\varepsilon^{2}$. Annihilation of secular terms
at 3rd order yields a nonlinear Schr\"{o}dinger (NLS) type equation:
\begin{equation}
i\, \frac{\partial\psi}{\partial\tau} + P \, \frac{\partial^{2} \psi}{\partial\zeta^{2}} + Q \, |\psi|^{2}\,\psi= 0 \,, \label{nlse}
\end{equation}
where the amplitude $\psi\equiv\phi_{1}^{(1)}(\zeta, \tau)$ depends
on the slow variables $\zeta= \varepsilon(x - v_{g} t)$ and $\tau= \varepsilon^{2} t$,
while $v_{g}= \frac{d \omega }{d k} = \frac{\omega^{3}c_{1}}{k^{3}\alpha}$
is the group velocity and $P$ and $Q$ are the dispersion and nonlinearity coefficients,
respectively.
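The dispersion relation and the group velocity expression above can be checked against each other numerically; in the sketch below $c_{1}$ is treated as a free positive parameter (its explicit $\kappa$-dependence is not reproduced here):

```python
import math

def omega_eaw(k, alpha, c1):
    # EAW dispersion relation: omega^2 = k^2 * alpha / (k^2 + c1)
    return k * math.sqrt(alpha / (k**2 + c1))

def v_group(k, alpha, c1):
    # Closed-form group velocity: v_g = d(omega)/dk = omega^3 * c1 / (k^3 * alpha)
    return omega_eaw(k, alpha, c1)**3 * c1 / (k**3 * alpha)
```

A finite-difference derivative of $\omega(k)$ reproduces the closed-form $v_{g}$, confirming the expression quoted in the text; note also that $\omega\to\sqrt{\alpha}$ as $k\to\infty$.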
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=2.24in]{figures/fig4a.eps}
\ \ \ \ \
\includegraphics[width=2.3in]{figures/fig4b.eps}
\caption{$PQ = 0$ (or $k = k_{cr}$) contours \textit{vs} carrier
wavenumber $k$ and superthermality parameter $\kappa$, or density
ratio $\alpha$. Left panel: $\alpha=0.25$ for the green curve; $1$
for magenta; $2.5$ for red; and $4$ for blue. Right panel: $\kappa=3$ for green; $4$ for magenta; $8$ for red; and $100$ for blue.}
\label{fig4}
\end{center}
\end{figure}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=2.5in]{figures/fig5a.eps}
\ \ \ \ \
\includegraphics[width=2.5in]{figures/fig5b.eps}
\caption{Modulational instability growth rate (normalized by its
value for infinite $\kappa$) versus perturbation wavenumber. Left
panel: $\kappa = 100$, $7$, $5$, $3.5$ (top to bottom) for $\alpha =
0.5$, $k = 3.2$. Right panel: $\alpha = 0.5$, $1$, $2$, $4$ (top to
bottom) for $k = 4.5$, $\kappa = 7$.}
\label{fig5}
\end{center}
\end{figure}
\subsection{Modulational instability}
Adopting the standard procedure \cite{Sultana2010}, we investigate the
occurrence of modulational instability by considering a harmonic
solution of (\ref{nlse}) and then a harmonic amplitude perturbation
with (wavenumber, frequency) $=(\tilde{k},\tilde{\omega})$. A
nonlinear dispersion relation is thus obtained:
$\tilde{\omega}^{2}=P^{2}\tilde{k}^{2}\left(\tilde{k}^{2}-2\frac{Q}{P}|\tilde{\psi}_{1,0}|^{2}\right)$.
Provided that $PQ>0$, wavenumbers below
$\tilde{k}_{cr}=\left( \frac{2Q}{P}\right)^{1/2}|\tilde{\psi}_{1,0}|$
lead to (amplitude) modulational instability (MI). For
$PQ<0$, wavepackets are modulationally stable.
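From the nonlinear dispersion relation above, the MI growth rate $\Gamma=|\mathrm{Im}\,\tilde{\omega}|=|P|\tilde{k}(\tilde{k}_{cr}^{2}-\tilde{k}^{2})^{1/2}$ follows directly. A minimal numerical sketch (with illustrative values of $P$, $Q$ and $|\tilde{\psi}_{1,0}|$, which are not computed from the plasma parameters here):

```python
import math

def mi_growth_rate(kt, P, Q, psi0):
    # Gamma = |Im(omega_t)| from omega_t^2 = P^2 kt^2 (kt^2 - 2(Q/P)|psi0|^2);
    # instability requires P*Q > 0 and kt below kt_cr = sqrt(2Q/P)*|psi0|
    if P * Q <= 0.0:
        return 0.0
    kcr2 = 2.0 * (Q / P) * abs(psi0)**2
    if kt**2 >= kcr2:
        return 0.0
    return abs(P) * kt * math.sqrt(kcr2 - kt**2)
```

The rate vanishes for $PQ<0$ and above $\tilde{k}_{cr}$, and is maximal at $\tilde{k}=\tilde{k}_{cr}/\sqrt{2}$, where $\Gamma_{\max}=|Q|\,|\tilde{\psi}_{1,0}|^{2}$.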
\subsection{Envelope solitons}
In the modulationally stable region ($PQ < 0$, in fact essentially
for large wavelengths) ``\textit{dark}'' solitons may occur, i.e.
exact solutions in the form \cite{Kourakis2004}: $\psi = \psi_0 \{1
- d^2 {\rm sech}^2[(\zeta -V\tau)/L]\}$. On the other hand, for $PQ
> 0$, ``\textit{bright}'' envelope solitons occur in the form \cite{Kourakis2004}: $\psi =
\psi_0 {\rm sech}[(\zeta -V\tau)/L]$. In the above, $\psi_0$ is the
asymptotic electric potential amplitude value, $V$ is the
propagation speed and $L$ is the soliton width, while the positive
constant $d$ regulates the depth of the void ($d = 1$ for black
solitons or $d < 1$ for grey ones).
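The two envelope profiles can be written down directly; the sketch below simply encodes the expressions above (with arbitrary illustrative parameters):

```python
import math

def bright_envelope(zeta, tau, psi0, V, L):
    # Bright envelope soliton: psi = psi0 * sech((zeta - V*tau)/L), for P*Q > 0
    return psi0 / math.cosh((zeta - V * tau) / L)

def dark_envelope(zeta, tau, psi0, V, L, d=1.0):
    # Dark/grey envelope soliton: psi = psi0 * (1 - d^2 * sech^2((zeta - V*tau)/L)),
    # for P*Q < 0; d = 1 gives a black soliton, d < 1 a grey one
    return psi0 * (1.0 - d**2 / math.cosh((zeta - V * tau) / L)**2)
```

At the centre of the structure the bright profile reaches $\psi_0$, the black soliton ($d=1$) reaches zero, and both tend to the appropriate asymptotic value far from the centre.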
\section{Summary}
Stronger superthermality leads to higher amplitude solitary
excitations (as suggested by Fig. \ref{fig1}). Both the cold
electron temperature and concentration significantly affect the
soliton existence domain, as the upper Mach number limit $M_{2}$
increases for higher ``cold'' electron temperature, while the sonic
threshold $M_{1}$ decreases for higher $n_{c0}$ (see Fig.
\ref{fig2}). The modulational instability growth rate may be reduced
due to stronger superthermality (see Fig. \ref{fig5}a), while
(somewhat counter-intuitively) the presence of more excess hot
(superthermal) electrons increases the instability growth rate (see
Fig. \ref{fig5}b).
\section*{Acknowledgements}
This work was supported by a UK EPSRC Science and Innovation award
to Queen's University Belfast Centre for Plasma Physics (Grant No.
EP/D06337X/1). The work of AD was supported via postgraduate
scholarship at Queen's University Belfast from the Department for
Employment and Learning (DEL) Northern Ireland.
\section{Introduction}
State transfer between different units of a quantum computer or
entanglement distribution between two parties require quantum communication
channels~\cite{kn:nielsen-chuang,kn:benenti-casati-strini}. They are
quantum systems transferring quantum information: the proper quantity
to characterize the channel performance is
the \textit{quantum capacity}, defined as the maximum number of
qubits that can be reliably transmitted per channel use~\cite{BennetShor98}.
Quantum channels are often thought of as memoryless, implying that
the effect of the channel on each information carrier
is always described by the same map ${\cal E}$. In other terms,
there is no memory in the interaction between the carriers and
the environmental degrees of freedom physically describing the channel.
In this case the quantum operation for $N$ channel uses is given by
${\cal E}_N={\cal E}^{\otimes N}$.
However, in several physically relevant situations this is not a
realistic assumption.
Memory effects appear when the characteristic
time scales for the environment dynamics are comparable or longer than the
time between
consecutive channel uses.
For instance, solid state implementations, which are the most promising
for their scalability and integrability, suffer from low frequency
noise~\cite{kn:solid_state_environment_noise}. In optical fibers,
memory effects may appear due to
slow birefringence fluctuations~\cite{Banaszek04}.
This introduces correlations among uses, so that
${\cal E}_N\neq {\cal E}^{\otimes N}$;
channels of this kind are referred to in the literature as {\it memory
channels}~\cite{BowenMancini04,MemoryChannel,KretschmannWerner05}.
A very interesting question, raised for the first time in
Ref.~\cite{MacchiavelloPalma02}, is whether memory can {\it enhance}
the transmission capacity of a quantum channel.
Recently we have considered a channel subject to dephasing noise
described by a Markov chain, showing that the quantum capacity increases
with respect to the memoryless limit~\cite{DBF_NJP07}
(see also~\cite{Hamada,PlenioVirmani07}). Furthermore, based on theoretical
arguments and numerical simulations, we have conjectured that the enhancement
of the quantum capacity also takes place for a dephasing quantum environment
modelled by a bosonic bath~\cite{DBF_NJP07}.
This issue is also relevant for the performance
of Quantum Error-Correcting Codes (QECCs).
Since quantum capacity is the maximum rate of reliable
quantum information transmission, it puts
an upper bound to the asymptotic rate achievable by any QECC.
On the other hand, realistic
QECCs necessarily work on a finite number of channel uses.
Moreover, present-day experimental implementations~\cite{Cory98,Leung99}
are based on very few channel uses.
Previous studies have investigated the impact of correlations on the
performance of QECCs~\cite{QECCcorrelations,Clemens}. Depending on the
chosen model, correlations may have positive or negative impact on QECCs.
In a previous paper~\cite{DDBF_IJQI08} we have shown, for a Markovian dephasing
channel, that even low values of memory, for which the quantum
capacity does not change appreciably, can have a detrimental impact on
the three-qubit code performance.
In this paper we describe a dephasing channel by a Hamiltonian in which
the system-environment interaction is modelled by a
stochastic process. We then discuss the three-qubit code
error performance in the presence of channel correlations.
\section{Channel Model}
We suppose that information is carried by qubits that transit across
a communication channel, modelled as an environment determining pure
dephasing of the qubits. The environment acts as a stochastic drive $\xi(t)$ on the
system and the Hamiltonian describing the
transmission of $N$ qubits through the channel reads
\begin{equation}
{\cal H}(t) = -\,\frac{\lambda}{2}\, \xi(t) \sum_{k=1}^{N}\sigma_z^{(k)}\, f_k(t).
\label{Hamiltonian}
\end{equation}
The $k-$th qubit is coupled to the environment via
its Pauli operator $\sigma_z^{(k)}$, with coupling strength $\lambda$.
The functions $f_k(t)=u(t-t_k)-u(t-t_k-\tau_p)$, where $u(t)$ is
the unit step function~\cite{Abramowitz}, switch the coupling on and off.
Here $\tau_p$ is the time each carrier takes to cross the channel;
$\tau \equiv t_{k+1}-t_k$ is the time interval that separates
two consecutive qubits entering the channel.
The function $f_k$ equals 1 only while the $k$-th qubit is inside the channel.
We assume $\xi(t)$ is a stationary and Gaussian stochastic process~\cite{Papoulis} with
zero average value, characterized by its autocorrelation function $C(\tau)$.
To deal with this problem, we first consider the time evolution of the system
for a given realization $\xi(t)$
of the stochastic process, and then we perform an average over all possible realizations.
The $N$-qubit time evolution operator for a given realization is
\begin{eqnarray}
U_{\xi}(t) \,=\, \bigotimes_{k=1}^{N} \exp(-\rmi \sigma_z^{(k)} \phi_k),
\label{time_evolution}
\end{eqnarray}
where $\phi_k$ is the phase acquired by the $k$-th qubit
coherences after the qubit crossed the channel:
\begin{eqnarray}
\phi_k\,=\, \frac{\lambda}{2} \int_{t_k}^{t_{k+1}} \hskip-3pt
\xi(t^\prime)\, \rmd t^\prime.
\label{phase_k}
\end{eqnarray}
Time evolution is conveniently described in the
factorized basis $\{|j\rangle \equiv |j_1,....,j_N\rangle,
\,j_1,...,j_N=0,1\}$, where $\{|j_k\rangle\}$ are eigenvectors of
$\sigma^{(k)}_z$.
Let $\rho^{\cal Q}=\sum_{j,l} a_{jl} |j\rangle\langle l|$ be the initial state of the $N$-qubit system; the final state $\rho^{\cal Q^\prime}$ after all $N$ qubits crossed the channel
is given by
\begin{eqnarray}
\rho_{\xi}^{\cal Q^\prime} \,=\, U_{\xi}(t)\, \rho^{\cal Q} U^{\dag}_{\xi}(t)\, \,=\,
\sum_{j,l} a_{jl}\, \exp\Bigg( 2\rmi\, \sum_{k=1}^{N} s_k\phi_k\Bigg) \,|j\rangle\langle l|,
\label{rho_out-xi}
\end{eqnarray}
where $s_k\equiv l_k-j_k=\frac{1}{2}[(-1)^{j_k}-(-1)^{l_k}]$.
By averaging over the stochastic process we finally obtain
\begin{eqnarray}
\rho^{\cal Q^\prime} = \langle \rho_{\xi}^{\cal Q^\prime} \rangle \,=\,
\sum_{j,l} a_{jl}\, \Bigg\langle \exp\Bigg( 2\rmi\, \sum_{k=1}^{N} s_k\phi_k\Bigg)
\Bigg\rangle \,|j\rangle\langle l|,
\label{rho_out}
\end{eqnarray}
which is a quantum operation for the N-qubits system: $\rho^{\cal Q^\prime}={\cal E}_{N}\big(\rho^{\cal Q}\big)$.
It is possible to show that the quantity $\sum_{k=1}^{N} s_k\phi_k$ is itself
a Gaussian variable\footnote{Given two Gaussian variables $x$ and $y$ with arbitrary
variances and some degree of correlation, the two variables $z_{\pm}=x\pm y$ are again
Gaussian, as one can find by deriving the $(z_+,z_-)$ mixed density function from the
one of $(x,y)$~\cite{Papoulis}. Each phase $\phi_k$ is a time integral of a Gaussian
stochastic process, so it can be viewed as the limit of a sum of Gaussian variables,
and hence it is itself Gaussian.},
therefore
\begin{eqnarray}
\Bigg\langle \exp\Bigg(2\rmi\, \sum_{k=1}^{N} s_k\phi_k \Bigg)\Bigg\rangle \, =\,
\exp\Bigg(-2 \sum_{k,k^\prime=1}^{N} s_k s_{k^\prime}\langle \phi_k\phi_{k^\prime} \rangle \Bigg).
\label{average}
\end{eqnarray}
We call this quantity the $(j,l)$-coherence decay factor,
since it is just the damping experienced
by the $(j,l)$ system coherence:
\begin{eqnarray}
D_{jl} \equiv \frac{\langle j | \rho^{\cal Q^\prime} | l \rangle}
{\langle j | \rho^{\cal Q} | l \rangle}
= \exp\Big(-2 \sum_{k,k^\prime=1}^{N} s_k s_{k^\prime}\langle \phi_k\phi_{k^\prime} \rangle \Big).
\label{Dmn_def}
\end{eqnarray}
Next, by using the stationarity of $\xi(t)$, we calculate
\begin{eqnarray}
\langle \phi_k\phi_{k^\prime} \rangle = &&
\Big\langle \frac{\lambda^2}{4} \int_{t_k}^{t_k+\tau_p}\hskip-3pt \rmd t_1 \, \xi(t_1)
\int_{t_{k^\prime}}^{t_{k^\prime}+\tau_p} \hskip-3pt \rmd t_2 \,\xi(t_2)
\Big\rangle \,=\, \nonumber\\
&& \frac{\lambda^2}{4} \int_{t_k}^{t_k+\tau_p} \hskip-3pt \rmd t_1
\int_{t_{k^\prime}}^{t_{k^\prime}+\tau_p} \hskip-3pt \rmd t_2\,\,
C(t_1-t_2).
\label{phase-correlation-1}
\end{eqnarray}
Since the autocorrelation function can be expressed in terms of the
power spectral density $S(\omega)=\int \rmd \tau\, \rme^{i \omega \tau} C(\tau)$ of $\xi(t)$, we obtain
\begin{eqnarray}
\langle \phi_k\phi_{k^\prime} \rangle =
\lambda^2 \int_0^{\infty} \frac{\rmd \omega}{2 \pi} S(\omega)
\frac{1-\cos(\omega \tau_p)}{\omega^2}
\cos[\omega(k-k^\prime)\tau],
\label{phase-correlation-2}
\end{eqnarray}
and the final form of the coherence decay factor follows:
\begin{eqnarray}
D_{jl} &&= \exp\Bigg(
-\lambda^2 \int_0^{\infty} \frac{\rmd \omega}{\pi} S(\omega)
\frac{1-\cos(\omega \tau_p)}{\omega^2} \cdot \nonumber \\
&& \hspace{3.5cm}
\sum_{k,k^\prime=1}^{N} (l_k-j_k) (l_{k^\prime}-j_{k^\prime}) \cos[\omega(k-k^\prime)\tau]
\Bigg)
\label{Dmn}
\end{eqnarray}
This result is identical to the coherence decay
due to a dephasing channel modelled by a set
of quantum harmonic oscillators~\cite{DBF_NJP07}.
The expression \eref{Dmn} can be put in a nice and useful form. To this end we define
$\mu_{kk^\prime}$ as the correlation coefficient~\cite{Papoulis} between the phases $\phi_k$ and
$\phi_{k^\prime}$:
\begin{eqnarray}
\mu_{kk^\prime}\,=\,\frac{\langle \phi_k \phi_{k^\prime}\rangle}
{\sqrt{\langle\phi^2_k\rangle \langle\phi^2_{k^\prime}\rangle}}
\,=\, \frac{\langle \phi_k \phi_{k^\prime}\rangle}{\eta^2},
\label{phase-correlation-factor}
\end{eqnarray}
where we set $\eta^2 = \langle\phi^2_k\rangle$; in fact, thanks to stationarity of
the process, the quantity $\langle\phi^2_k\rangle$ does not depend on $k$
(see equation \eref{phase-correlation-2}).
The coherence decay factor \eref{Dmn_def} can be rewritten as
\begin{eqnarray}
D_{jl} = \exp\Big(-2\eta^2 \sum_{k,k^\prime=1}^{N} s_k s_{k^\prime} \mu_{kk^\prime} \Big).
\label{Dmn_def2}
\end{eqnarray}
Now we observe that $g \equiv \exp(-2\eta^2)$ is just the damping experienced by
single qubit coherences for one channel use ($N=1$). We finally write
\begin{eqnarray}
D_{jl} = g^{\sum_{k,k^\prime=1}^{N} s_k s_{k^\prime} \mu_{kk^\prime} } =
g^{\big(\sum_{k=1}^{N} s^2_k \,+ \,2\sum_{k^\prime=1, k>k^\prime}^{N} s_k s_{k^\prime} \mu_{k-k^\prime} \big)}.
\label{Dmn_def3}
\end{eqnarray}
where we have defined $\mu_{k-k^\prime} = \mu_{kk^\prime}$.
In fact the stationarity of $\xi(t)$
implies that $\mu_{kk^\prime}$ depends only
on $|k-k^\prime|$. The quantity $\mu_{k-k^\prime}$ is a measure of the
degree of correlation between the channel uses $k$ and $k^\prime$.
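Equation \eref{Dmn_def3} is straightforward to evaluate numerically. The following sketch (illustrative only) computes $D_{jl}$ for given $g$ and correlation coefficients $\mu_{d}$:

```python
def coherence_decay(j_bits, l_bits, g, mu):
    # Evaluates D_{jl} = g ** (sum_k s_k^2 + 2 sum_{k>k'} s_k s_{k'} mu_{k-k'}),
    # the decay factor derived above; mu[d] is the correlation coefficient
    # between channel uses separated by d uses
    s = [lk - jk for jk, lk in zip(j_bits, l_bits)]
    N = len(s)
    exponent = sum(sk * sk for sk in s)
    exponent += 2 * sum(s[k] * s[kp] * mu[k - kp]
                        for k in range(N) for kp in range(k))
    return g ** exponent
```

Note that for perfectly correlated uses ($\mu_d=1$) the coherence between $|01\rangle$ and $|10\rangle$ is undamped ($D_{jl}=1$): this is the decoherence-free subspace exploited by the two-qubit code mentioned later in the text.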
\section{Three-Qubit Code performance}
As a measure of the quantum information transmission reliability
we use the \textit{entanglement fidelity}~\cite{Schumacher96}.
To define this quantity we look at the system ${\cal Q}$
as a part of a larger quantum system ${\cal RQ }$,
initially in a pure entangled state $|\psi^{\cal RQ } \rangle$.
The initial density operator of the system ${\cal Q }$
is then obtained from that of ${\cal RQ}$ by a partial trace over
the reference system ${\cal R}$:
$\rho^{\cal Q}={\rm Tr}_{\cal R} [|\psi^{\cal RQ }
\rangle \langle \psi^{\cal RQ } | ]$.
The system $\cal Q$ is sent through the channel, while $\cal R$ remains ideally isolated
from any environment, being $\rho^{\cal RQ^\prime}$ the final state of $\cal RQ$ after the
transmission.
Entanglement fidelity is just the fidelity between the initial and
the final state of $\cal RQ$:
\begin{equation}
F_e=\langle\psi^{\cal RQ}| \rho^{\cal RQ^\prime} |\psi^{\cal RQ}\rangle.
\label{Fe}
\end{equation}
First we consider a single use of the channel described by
the Hamiltonian~\eref{Hamiltonian}. We assume that the
channel is fed with a quantum source~\cite{Barnum98} described by the density operator
$\rho^{\cal Q}=\frac{1}{2} \openone$.
The entanglement fidelity is~\cite{DBF_EPJ08}:
\begin{equation}
F_e\,=\,\langle\psi^{\cal RQ}|
{\mathbb I}^{\cal R}\otimes {\cal E}^{\cal Q}
\big(|\psi^{\cal RQ}\rangle\langle\psi^{\cal RQ}|\big)
|\psi^{\cal RQ}\rangle \,=\, \frac{1+g}{2},
\label{oneuse-Fe}
\end{equation}
where ${\mathbb I}^{\cal R}$ is the identity operator and
${\cal E}^{\cal Q}={\cal E}_1$.
This case is relevant in the quantum information field, as it takes place when two
communication parties try to share a Bell state: the party that initially possesses
the pair sends one half of it through the quantum channel $\cal E$.
$F_e$ is the fidelity between the actually shared pair and the original one; it
means that a Bell measurement on $\rho^{\cal RQ^\prime}$ able to distinguish the
ideally shared state from the other states of the Bell basis fails with
\textit{error probability}
$P_{\rme} = 1-F_e$. From \eref{oneuse-Fe} it follows that the error probability
for a single channel use is $\frac{1-g}{2}$; in what follows we identify this quantity
by $\epsilon$.
Now we use the Three-Qubit
Code (TQC)~\cite{kn:nielsen-chuang,kn:benenti-casati-strini} to send $\rho^{\cal Q}$.
The system's state is encoded by using
two ancillary systems $\cal A$ and $\cal B$.
The system and the ancillary qubits are encoded by means of a set of quantum operations
that we collectively denote as ${\cal C}^{\cal QAB}$
(stages a, b, c in \fref{fig:TQC}) and then transmitted in $N=3$ uses
of the channel~\eref{Hamiltonian}.
Then the receiver performs the decoding ${\cal D}^{\cal QAB}$ (stages e, f, g,
h in \fref{fig:TQC}) on the system $\cal QAB$. After
tracing out $\cal AB$, he obtains the final, generally mixed
state of system ${\cal RQ}$:
\begin{equation}
\rho^{\cal RQ^\prime}_{TQC}=\textrm{tr}_{\cal AB}\big[\mathbb{I}^{\cal R} \otimes
{\cal D}^{\cal QAB} \circ {\cal E}^{\cal QAB} \circ {\cal C}^{\cal QAB}
\big(|{\psi}^{\cal RQAB}\rangle \langle{\psi}^{\cal RQAB}|\big)\big]
\end{equation}
where $|{\psi}^{\cal RQAB}\rangle =|{\psi}^{\cal RQ}\rangle\otimes |00^{\cal AB}\rangle $.
Entanglement fidelity
$F_e^{(TQC)}=\langle\psi^{\cal RQ}| \rho_{TQC}^{\cal RQ^\prime} |\psi^{\cal RQ}\rangle$
just gives the probability that the code is successful.
The merit of this code is that it drastically reduces - in the absence of use correlations,
i.e. for ${\cal E}^{\cal QAB}={\cal E}_1^{\otimes 3} $ -
the transmission error probability
from $\epsilon$ to $P_{\rme}^{(TQC)}=1-F_e^{(TQC)} \simeq 3\epsilon^2$.
\begin{figure}[!ht]
\centering
\includegraphics[width=101mm,height=37mm]{figure/ThreeQC.eps}
\caption{Scheme of a three qubit code~\cite{Cory98}. This quantum error correcting code
was initially designed for a bit flip channel, for this reason each channel
use (stage d) is embedded between two Hadamard
gates~\cite{kn:nielsen-chuang,kn:benenti-casati-strini} (stages c and e).
The coding is performed by means of CNOT gates (stages a, b, f and g), the
decoding also requiring a Toffoli
gate~\cite{kn:nielsen-chuang,kn:benenti-casati-strini} (stage h).}
\label{fig:TQC}
\end{figure}
Now we investigate the effects of channel correlations on the performance of a TQC.
After some involved calculations it turns out that
\begin{eqnarray}
F_e^{(TQC,m)} &=& \frac{1}{2} \,+\, \frac{3}{4}g \,-\, \frac{1}{16}g^3
\big[
g^{2\mu_{\cal QA}-2\mu_{\cal QB}-2\mu_{\cal AB}} \,+\,
g^{-2\mu_{\cal QA}+2\mu_{\cal QB}-2\mu_{\cal AB}} \,+\,
\nonumber\\
&&\hspace{3.1cm}
g^{-2\mu_{\cal QA}-2\mu_{\cal QB}+2\mu_{\cal AB}} \,+\,
g^{2\mu_{\cal QA}+2\mu_{\cal QB}+2\mu_{\cal AB}}
\big].
\label{FE-TQC-m-1}
\end{eqnarray}
By observing that $\mu_{\cal QA}=\mu_{\cal AB} = \mu_1$ and
$\mu_{\cal QB}= \mu_2$ we can rewrite equation \eref{FE-TQC-m-1} as:
\begin{eqnarray}
F_e^{(TQC,m)} &=& \frac{1}{2} \,+\, \frac{3}{4}g \,-\, \frac{1}{16}g^3
\big[2 g^{-2\mu_2} + g^{2\mu_2-4\mu_1} + g^{2\mu_2+4\mu_1} \big].
\label{FE-TQC-m-2}
\end{eqnarray}
An interesting result can be obtained by considering the
case of a small error probability $\epsilon \ll 1$.
In this regime we can take the series expansion of \eref{FE-TQC-m-2}
near $\epsilon = 0$:
\begin{eqnarray}
F_e^{(TQC,m)} \simeq 1\,-\,(3+4\mu_1^2+2\mu_2^2)\epsilon^2.
\label{FE-TQC-approx}
\end{eqnarray}
This expression tells us that even though memory lowers the fidelity,
this worsening is always slight and absolutely negligible when
$\mu_1,\mu_2 \ll 1$.
Moreover, it highlights that channel correlations - inside the Hamiltonian model
\eref{Hamiltonian} - permit the TQC to maintain its error probability
$P_{\rme}^{(TQC,m)}=1- F_e^{(TQC,m)}$ of the order of $\epsilon^2$.
However, one has to keep in mind that in the case of perfect memory the code error
probability triples, from $3\epsilon^2$ to $9\epsilon^2$.
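As a numerical cross-check (a sketch; we only use $g=1-2\epsilon$ together with equations \eref{FE-TQC-m-2} and \eref{FE-TQC-approx} as written), the exact and approximate fidelities can be compared directly:

```python
def fe_tqc_exact(eps, mu1, mu2):
    # Exact entanglement fidelity of the TQC with correlations, with g = 1 - 2*eps
    g = 1.0 - 2.0 * eps
    return (0.5 + 0.75 * g
            - (g**3 / 16.0) * (2.0 * g**(-2.0 * mu2)
                               + g**(2.0 * mu2 - 4.0 * mu1)
                               + g**(2.0 * mu2 + 4.0 * mu1)))

def fe_tqc_approx(eps, mu1, mu2):
    # Small-eps expansion: F ~ 1 - (3 + 4*mu1^2 + 2*mu2^2) * eps^2
    return 1.0 - (3.0 + 4.0 * mu1**2 + 2.0 * mu2**2) * eps**2
```

For $\epsilon=10^{-3}$ the two expressions agree to within $O(\epsilon^{3})$, and the perfect-memory error $1-F_e$ is indeed $\simeq 9\epsilon^{2}$, three times the memoryless value $3\epsilon^{2}$.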
These results are very similar to the ones by Clemens {\it et al.}~\cite{Clemens}. However, we discuss time rather than space correlations and average with respect to stochastic processes.
Rather than choosing a particular autocorrelation function $C(\tau)$ for $\xi(t)$
and then trying to carry out a specific relation between it and the TQC error probability,
we make some general considerations about the impact of correlations on the code error probability.
For the sake of simplicity we assume $\mu_1$ and $\mu_2$ having positive values
(we do not consider anti-correlation cases).
While the range of $\mu_1$ is $[0,1]$, we can argue that $\mu_2 \leq \mu_1$
since one expects that the phase correlation does not increase when increasing
the channel uses distance;
furthermore it can be proved that it must be $\mu_2\geq \tilde{\mu}_2\equiv 2\mu_1^2-1$\footnote{One can
see it by considering the average of $[\phi_2+a(\phi_1+\phi_3)]^2$ where $a$ is a
real variable: it is a quadratic form in $a$ and by imposing that it must always be
positive we obtain the desired condition on $\mu_2$~\cite{Papoulis}.}.
Studying the first derivatives of $P_\rme^{(TQC,m)}$ as a function of $\mu_1$ and $\mu_2$,
it turns out that $P_\rme^{(TQC,m)}$ is monotonic with respect to $\mu_1$
(the error grows with $\mu_1$), but not with respect to $\mu_2$.
It can indeed display a minimum at
$\mu_{2_{opt}} \equiv -0.25\log_g[(g^{4\mu_1}+g^{-4\mu_1})/2]$, but its presence is
substantially irrelevant, and one can reasonably say that the code error probability
also increases with respect to $\mu_2$.
Thus, to characterize the behaviour of $P_\rme^{(TQC,m)}$, we plot it as a function
of $\mu_1$, using $\mu_2\in \{\max(0,\tilde{\mu}_2),\,\mu_1\}$ as a parameter.
As shown in \fref{fig:Pe_mu}, where we set $\epsilon=10^{-3}$, the TQC error
probability depends only weakly on $\mu_1$, and the allowable values of $\mu_2$
affect it very little. In the same figure we also plot the error probability
for a two-qubit code~\cite{DBF_EPJ08} encoding a qubit into the subspace spanned by
$\{|01\rangle,|10\rangle\}$:
the performance of this latter code is always worse than that of the TQC, except in
the limit $\mu_1 \to 1$, for which the coding subspace becomes decoherence-free.
\begin{figure}[!tbp]
\centering
\includegraphics[width=75mm,height=58mm]{figure/Pe_mu.eps}
\caption{Plot of code error probability as a function of $\mu_1$. The dotted and the dashed
grey lines represent respectively the single channel use error probability ($\epsilon=10^{-3}$)
and TQC error probability for the memoryless channel ($P_\rme^{(TQC)}$).
The error probabilities for the three qubit code in presence of correlations
($P_\rme^{TQC, m}$) are represented by black curves: solid curve refers to
$\mu_2=\max(0,\,\tilde{\mu}_2)$ and the dotted one to $\mu_2=\mu_1$.
There is also displayed (solid gray curve) the error probability for a simple
two-qubit code encoding a qubit into the subspace spanned by $\{|01\rangle,|10\rangle\}$.}
\label{fig:Pe_mu}
\end{figure}
The TQC exhibits the same kind of behaviour shown in \fref{fig:Pe_mu}
as $\epsilon$ changes. In \fref{fig:Pe_e} we plot $P_\rme^{(TQC, m)}$ as a function
of $\epsilon$ for $\mu_2=\mu_1=1$, the case in which the code shows the worst
performance; we do not plot the cases of low correlations $(\mu_1\leq 0.1)$,
since the corresponding curves are practically indistinguishable from the memoryless ones.
We also compare $P_\rme^{(TQC, m)}$ with the error
probability of the two-qubit code~\cite{DBF_EPJ08}: to produce good results
this latter code requires very high degrees of correlation between successive channel uses.
In conclusion, we find within a Hamiltonian formulation of a memory dephasing channel
that the three-qubit code is robust against channel correlations.
Significantly different results
emerge if we describe channel correlations within a Markovian model~\cite{DDBF_IJQI08}:
in that case memory restores the $\epsilon$-dependence in the code error probability,
thus drastically reducing the code performance.
\begin{figure}[!ht]
\centering
\includegraphics[width=75mm,height=58mm]{figure/Pe_e.eps}
\caption{Plot of code error probability as a function of the single channel use
error probability $\epsilon$. The dashed
grey line represents the TQC error probability in the memoryless case ($P_\rme^{(TQC)}$).
For the error probabilities of the three qubit code in presence of correlations
($P_\rme^{(TQC, m)}$) we plot the worst case $(\mu_2=\mu_1=1)$ by a dotted black curve.
There is also displayed (solid gray curve) the error probability for a simple
two-qubit code encoding a qubit into the subspace spanned by
$\{|01\rangle,|10\rangle\}$ for $\mu_1=0.99$
(triangles down).}
\label{fig:Pe_e}
\end{figure}
\ack{We acknowledge Andrea Mastellone and Elisabetta Paladino for
invaluable help.
A.D. and G.F. acknowledge support from the EU-EuroSQIP (IST-3-015708-IP) and
MIUR-PRIN2005 (2005022977).}
\section*{References}
\section{Introduction}
The determination of the equation of state (EoS) of compact star (CS) interiors is a subject of active research by means of both laboratory measurements and astronomical observations.
One of the most urgent questions concerns the possible existence of deconfined quark matter in CS cores, where matter densities can exceed by several times the nuclear saturation value, $n_0=0.15$ fm$^{-3}$, the typical density in large atomic nuclei.
Hybrid compact stars have an inner core composed of quark matter surrounded by an outer core of nuclear matter.
Of particular interest is the CS mass twin (MT) phenomenon \cite{Glendenning:1998ag}: when a pair of stars has the same
gravitational mass but different radii, with the larger star being composed only of hadronic matter and the smaller one being a hybrid star with a quark matter core.
The presence of MTs is synonymous with the existence of a third family \cite{Gerlach:1968zz} of CS in the mass-radius ($M-R$) diagram.
It requires the CS EoS to feature a strong first order phase transition from hadron to quark matter~\cite{Alford:2013aca,Alvarez-Castillo:2013cxa,Benic:2014jia,Blaschke:2015uva,Alvarez-Castillo:2017qki,Paschalidis:2017qmb}, where the latent heat of the transition fulfils the Seidov criterion \cite{1971SvA....15..347S}.
Should the mass twin phenomenon be discovered in CS observations, this would give indirect evidence for the existence of a critical endpoint in the QCD phase diagram \cite{Alvarez-Castillo:2013cxa,Blaschke:2013ana,Benic:2014jia},
which is sought in relativistic heavy-ion collisions, so far inconclusively.
On the other hand, astronomical observations may provide measurements of masses and/or radii which are relevant for constraining the nuclear EoS via the one-to-one relationship provided by the Tolman-Oppenheimer-Volkoff (TOV) equations~\cite{Tolman:1939jz,Oppenheimer:1939ne} which govern the general relativistic hydrodynamic stability of CS matter under its own gravity.
Neutron star masses are accurately measured by monitoring binary pulsar dynamics, whereas radii have so far been determined only with large uncertainties.
The situation has changed with the first observation of the gravitational wave signal from the inspiral phase of the binary CS merger
GW170817~\cite{TheLIGOScientific:2017qsa}, which made it possible to constrain the tidal deformability of both merging stars and thus their masses and
radii~\cite{Annala:2017llu,Bauswein:2017vtn,Rezzolla:2017aly,De:2018uhw}.
Consequently, theoretical approaches are required to quantitatively assess the most probable EoS out of a whole class obtained by varying intrinsic model parameters.
One of the most powerful methods is Bayesian analysis (BA), the Bayesian interpretation of probability, which allows for the estimation of model parameters based on prior knowledge, in this case the already collected measurements. Several works in this direction include a BA based on observations of X-ray bursters~\cite{Steiner:2010fz}, which unfortunately suffers from uncertainties in the modeling of the stellar atmosphere composition required to interpret the observed signal, and the more general approach of~\cite{Raithel:2017ity}, which employs a generic multipolytrope EoS with priors that include experimental nuclear matter pressure values, supports the most massive CS of about 2M$_{\odot}$, and reports an accuracy of about $30\%$ in pressure values.
The recent approach of~\cite{Salmi:2018gsn} employs X-ray timing observations of accretion-powered millisecond pulsars, resulting in radius estimates accurate to about $5\%$ if the CS mass is known. Moreover, a more stringent analysis~\cite{Margueron:2017lup} that incorporates the hypothesis of the direct Urca cooling constraint~\cite{Blaschke:2016lyx} in addition to the aforementioned measurements concludes that the neutron star radius has a value of $12.7\pm0.4$~km for masses ranging from 1 up to $2~M_{\odot}$.
The recent detection of gravitational radiation from the inspiral phase of the binary CS merger GW170817
has led to constraints on the tidal deformability of CS matter and to the discussion of the possibility that one or even both of the CS in the binary could be hybrid stars with quark matter interiors, which would open the possibility to discover the mass twin phenomenon and thus a strong
first order phase transition in CS matter \cite{Alvarez-Castillo:2018pve,Most:2018hfd,Christian:2018jyd,Montana:2018bkb,Sieniawska:2018zzj}. However, these data have not yet been included in BA studies such as the ones listed above.
It is therefore of great interest to update such BA studies by incorporating this new information to constrain the CS EoS.
In this work we consider the mass twin EoS and modifications to it in order to perform new Bayesian analysis calculations that, in addition to the previously used CS measurements in~\cite{Alvarez-Castillo:2016oln}, will include the GW170817 data as priors to assess the probability of model parameters.
Our model choice is the KVOR EoS for hadronic matter together with the String-Flip approach for the deconfined quark regime.
\section{Hybrid EoS}
The hybrid EoS has been produced with the one-parameter replacement interpolation method (for a second order polynomial ansatz for $P(\mu)$)~\cite{Ayriyan:2017nby,Ayriyan:2017tvl,Abgaryan:2018gqp} which mimics the thermodynamic behavior of pasta phases in the coexistence region between the pure hadronic phase and the pure quark one.
The hadronic phase in this work is described by the well known KVOR equation of state~\cite{Kolomeitsev:2004ff} with a modification of stiffness as introduced in~\cite{Maslov:2015wba} and denoted as KVORcut2.
This particular version of the KVOR EoS model is the stiffest one presented in that work and fulfills the condition \cite{1971SvA....15..347S} for the latent heat of the phase transition to quark matter, so that a disconnected hybrid star family at higher densities is possible.
The quark phase is based on the String-Flip Model with the so called density functional~\cite{Kaltenborn:2017hus} including the available volume fraction $\Phi$,
\begin{equation}
\label{eq:exvol}
\Phi(n_\mathrm B) = {\mathrm e}^{- \alpha\ n_\mathrm{B}^2\ {\rm fm}^6},
\end{equation}
where the parameter $\alpha$ describes the effective density-dependence of the confining interaction, and it is varied here in the range $\left[0.1\dots 0.3\right]$.
The value of this parameter is correlated with the critical density of the phase transition, as it is shown in the figures below.
The larger $\alpha$, the lower the critical density and therefore the critical mass for the onset of the phase transition in the hybrid star.
This systematic behavior makes it possible to calibrate the range of the model parameters with the observational data without contradicting the known properties of nuclear matter at saturation density.
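Equation~\eqref{eq:exvol} is straightforward to tabulate. A minimal Python sketch (the density grid below is an illustrative choice, not a fitted model):

```python
import math

def available_volume_fraction(n_B, alpha):
    """Phi(n_B) = exp(-alpha * n_B^2 * fm^6), with n_B in fm^-3."""
    return math.exp(-alpha * n_B ** 2)

n0 = 0.15  # nuclear saturation density in fm^-3
for alpha in (0.1, 0.2, 0.3):
    # Larger alpha suppresses the available volume more strongly,
    # which correlates with a lower critical density for deconfinement.
    print(alpha, available_volume_fraction(2.0 * n0, alpha))
```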
For the construction of the hybrid star EoS from the hadronic and quark matter EoS we employ here the mixed phase construction that is
described in detail in Refs.~\cite{Ayriyan:2017tvl,Ayriyan:2017nby,Abgaryan:2018gqp}.
It introduces the free parameter $\Delta_P=P(\mu_c)/P_c - 1$,
the pressure increment at the critical chemical potential $\mu_c$ relative to the critical pressure
$P_c$ of the corresponding Maxwell construction, which is then obtained as limiting case for $\Delta_P=0$.
The resulting family of hybrid EoS corresponds to the one described in \cite{Ayriyan:2017nby}, see Fig.~\ref{fig:heos}.
This way of modelling the phase transition mimics the possibility of so-called pasta phases characterized by different geometric structures
in the coexistence region of the transition sufficiently well, as has been shown in \cite{Maslov:2018ghi}.
In that reference it has also been demonstrated that the parameter $\Delta_P$ of the transition construction can be related to the value of the surface tension at the hadron
to quark matter interface.
\begin{figure}[ht!]
\vspace{-8mm}
\includegraphics[width=0.55\textwidth]{HybridEoS_KVORcut02_Nucleon_NUB_EoS_Grigorian_P_over_Eps_with_n} \hspace{-8mm}
\includegraphics[width=0.55\textwidth]{GrigorianConstructed_KVORcut02_Nucleon_NUBalpha_TOV_with_n}
\caption{\label{fig:heos}
The set of hybrid EoS obtained by the mixed phase construction for different values of $\Delta_P$ and $\alpha$ are shown in the left
panel, while in the right panel the corresponding set of compact star sequences in the mass-radius diagram is shown.
The blue numbers show selected values of critical densities for the onset of the phase transitions obtained by a Maxwell construction.
}
\end{figure}
\section{Neutron star configurations}
\subsection{Mass and Radius}
The structure and global properties of compact stars are obtained by solving the TOV equations
\begin{eqnarray}
\label{eq:tov1}
\dfrac{dP(r)}{dr} &=& - \dfrac{G M( r)\varepsilon( r)}{r^2}\dfrac{\left(1+\dfrac{P( r)}{\varepsilon( r)}\right)\left(1+ \dfrac{4\pi r^3 P( r)}{M( r)}\right)}{\left(1-\dfrac{2GM( r)}{r}\right)}~,\\
\displaystyle \dfrac{dM( r)}{dr} &=& 4\pi r^2 \varepsilon( r)~,
\label{eq:tov2}
\end{eqnarray}
where $P(r)$, $\varepsilon(r)$ and $M(r)$ are the profiles of pressure, energy density and enclosed mass as a function of the distance $r$
from the center of the star. The radius $R$ of the star is obtained from the condition of vanishing pressure at the surface $P(R)=0$
and the gravitational mass of the star is $M=M(R)$.
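For any barotropic EoS $P(\varepsilon)$, Eqs.~\eqref{eq:tov1}--\eqref{eq:tov2} can be integrated outward from the centre until the pressure vanishes. The Python sketch below does this with a simple Euler step for a toy polytrope in geometrized units ($G=c=1$); the polytropic EoS and its parameters are illustrative stand-ins, not the KVORcut2/String-Flip hybrid EoS used in this work:

```python
import math

def tov_solve(eps_c, K=100.0, Gamma=2.0, dr=1e-3):
    """Integrate the TOV equations (G = c = 1) for a toy polytrope
    P = K * eps**Gamma, starting from central energy density eps_c.
    Returns (R, M): the radius where the pressure (effectively)
    vanishes and the enclosed gravitational mass there."""
    def eps_of_P(P):
        return (P / K) ** (1.0 / Gamma)

    r = 1e-6                                   # start just off the centre
    P = K * eps_c ** Gamma                     # central pressure
    m = 4.0 / 3.0 * math.pi * r ** 3 * eps_c   # tiny enclosed mass
    P_surface = 1e-10 * P
    while P > P_surface:
        eps = eps_of_P(P)
        dP = -(eps + P) * (m + 4.0 * math.pi * r ** 3 * P) / (r * (r - 2.0 * m))
        dm = 4.0 * math.pi * r ** 2 * eps
        P += dP * dr                           # simple Euler step
        m += dm * dr
        r += dr
    return r, m

R, M = tov_solve(eps_c=1.0e-3)
```

A production solver would use an adaptive higher-order integrator and the tabulated hybrid EoS, but the structure of the calculation is the same.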
\subsection{Tidal deformability}
The recent observation of gravitational waves from the inspiral phase of the binary CS merger GW170817 gives an estimation of the tidal deformability of these stars, therefore we include also this information in the analyses.
The tidal deformability of a star can be calculated from the Einstein equation for small elliptic deformation as described in~\cite{Hinderer:2009ca}.
For the results of the calculation see Fig.~\ref{fig:lambda}.
\begin{figure*}[ht!]
\includegraphics[width=0.55\textwidth]{ML_newer.pdf}\hspace{-8mm}
\includegraphics[width=0.55\textwidth]{L1L2_newer.pdf}
\vspace{-5mm}
\caption{\label{fig:lambda}
The dependence of the tidal deformability $\Lambda$ on the compact star mass (left panel) and the $\Lambda_1$--$\Lambda_2$
diagram for selected values of $\Delta_P$ and $\alpha$ (right panel).
Comparison with the 90\% confidence line of the LIGO-Virgo Collaboration for the low-spin prior of GW170817
\cite{TheLIGOScientific:2017qsa} shows that if the hadronic EoS is as stiff as DD2$\underline{ }$p40 at least one of the
stars in the binary has to be a hybrid star in order to avoid a violation of the $\Lambda_1$--$\Lambda_2$ constraint.
As can be seen from Fig.~17 of Ref.~\cite{Alvarez-Castillo:2018pve} the merger of two hybrid stars which is admissible when
the onset mass for the deconfinement transition is lowered, e.g., by increasing $\alpha$, would lead to the appearance of a new
branch in this diagram mimicking the pattern of a merger of two neutron stars with a rather soft nuclear EoS (like APR or SLy4).
}
\end{figure*}
\section{Bayesian inference for the EoS models}
\subsection{Vector of Parameters}
The model parameters can be collected into a vector in parameter space; each vector corresponds to one fixed model composed of the hadronic phase, the quark phase, and the transition construction described above:
\begin{equation}
\label{pi_vec}
\overrightarrow{\pi}_i = \left\{\alpha_{(j)},{\Delta_P}_{(k)}\right\},
\end{equation}
where $i = 0,\dots,N-1$ with $i = N_2\, j + k$, $j = 0,\dots,N_1-1$, $k = 0,\dots,N_2-1$, and $N_1$ and $N_2$ are the numbers of values of the model parameters $\alpha$ and $\Delta_P$, respectively ($N = N_1 N_2$).
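For concreteness, this enumeration of parameter vectors can be sketched as follows (the grid values are placeholders, not the actual grid used in this study):

```python
# Placeholder grids for the two model parameters:
alphas = [0.1, 0.2, 0.3]            # N1 values of alpha
deltas = [0.00, 0.02, 0.04, 0.06]   # N2 values of Delta_P
N1, N2 = len(alphas), len(deltas)

# pi[i] = (alpha_j, Delta_P_k) with the flat index i = N2*j + k:
pi = [(alphas[j], deltas[k]) for j in range(N1) for k in range(N2)]

def index(j, k):
    """Map the grid indices (j, k) to the flat index i."""
    return N2 * j + k
```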
\subsection{Likelihood of a model for the $\Lambda_1-\Lambda_2$ constraint from GW170817 }
In order to implement the tidal deformability constraint on the compact star EoS, reflected in the $\Lambda_1$--$\Lambda_2$ diagram that includes probability regions from the GW170817 event~\cite{TheLIGOScientific:2017qsa,Abbott:2018exr}, we define the probability as an integral over the probability distribution function (PDF)
$\beta(\Lambda_1, \Lambda_2)$ by
\begin{equation}
\label{eq:lhoodLL}
P\left(E_{GW}\left|\pi_i\right.\right) = \int_{l_{22}} \beta(\Lambda_1(\tau), \Lambda_2(\tau))d\tau,
\end{equation}
when both stars in the binary merger belong to the second family branch of neutron stars (and $l_{22}$ is the corresponding path in the
$\Lambda_1-\Lambda_2$ plane), or by
\begin{equation}
\label{eq:lhoodLLtwins}
P\left(E_{GW}\left|\pi_i\right.\right) = \int_{l_{22}} \beta(\Lambda_1(\tau), \Lambda_2(\tau))d\tau + \int_{l_{23}} \beta(\Lambda_1(\tau), \Lambda_2(\tau))d\tau,
\end{equation}
in case the parameter vector $\pi_i$ corresponds to a hybrid star equation of state with a third family of compact stars in the mass range relevant for the merger.
Then $l_{23}$ is the path in the $\Lambda_1-\Lambda_2$ plane corresponding to a binary merger composed of a neutron star and a hybrid star.
The parameter $\tau$ defines the position of a point on the paths $l_{22}$ and $l_{23}$ in equations \eqref{eq:lhoodLL}--\eqref{eq:lhoodLLtwins}.
Note that the PDF has been reconstructed by Gaussian kernel density estimation from the $\Lambda_1$--$\Lambda_2$ data provided on the LIGO web page \cite{LIGO}, see Fig.~\ref{fig:L1L2_PDF}.
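The two ingredients of Eqs.~\eqref{eq:lhoodLL}--\eqref{eq:lhoodLLtwins} -- a kernel density estimate of $\beta(\Lambda_1,\Lambda_2)$ and a line integral along a parametrized path -- can be sketched as below. The Gaussian samples stand in for the publicly released GW170817 posterior samples, and the straight-line paths stand in for the $\Lambda_1(\tau),\Lambda_2(\tau)$ curves of an EoS; all numbers are illustrative:

```python
import math
import random

def make_kde(samples, h):
    """Hand-rolled 2D Gaussian kernel density estimate with bandwidth h."""
    n = len(samples)
    norm = 1.0 / (n * 2.0 * math.pi * h * h)
    def beta(l1, l2):
        return norm * sum(
            math.exp(-((l1 - s1) ** 2 + (l2 - s2) ** 2) / (2.0 * h * h))
            for s1, s2 in samples)
    return beta

def path_likelihood(beta, path, n_tau=200):
    """Trapezoidal approximation of int_0^1 beta(L1(tau), L2(tau)) dtau,
    where path(tau) returns a point (Lambda_1, Lambda_2)."""
    taus = [i / (n_tau - 1) for i in range(n_tau)]
    vals = [beta(*path(t)) for t in taus]
    dt = taus[1] - taus[0]
    return sum(0.5 * (a + b) * dt for a, b in zip(vals, vals[1:]))

random.seed(0)
# Synthetic stand-in for the released GW170817 posterior samples:
samples = [(random.gauss(300.0, 80.0), random.gauss(400.0, 80.0))
           for _ in range(500)]
beta = make_kde(samples, h=40.0)

# A path l_22 crossing the bulk of the samples ...
L22 = path_likelihood(beta, lambda t: (200.0 + 200.0 * t, 500.0 - 200.0 * t))
# ... versus a path far away from all samples:
L_far = path_likelihood(beta, lambda t: (2000.0 + 200.0 * t, 2000.0 - 200.0 * t))
```

A path passing through the high-probability region of the PDF accumulates a much larger likelihood than one that misses it, which is exactly how the constraint discriminates between EoS models.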
\begin{figure*}[ht!]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=0.49\textwidth]{L1L2_pdf.pdf} &
\includegraphics[width=0.46\textwidth]{L1L2_cont.pdf}
\end{array}$
\end{center}
\caption{The PDF reconstructed from the $\Lambda_1$--$\Lambda_2$ data for GW170817 from the LIGO website \cite{LIGO}, shown as a 3D plot (left panel) and as a contour plot (right panel).}
\label{fig:L1L2_PDF}
\end{figure*}
\subsection{Likelihood of a model for the mass constraint}
The likelihood of a model under the mass constraint is the conditional probability that the maximum mass of the configuration for the given parameter vector is compatible with the measurement:
\begin{equation}
\label{eq:lhoodMass}
P\left(E_{A}\left|\pi_i\right.\right) = \Phi(M_i, \mu_A, \sigma_A),
\end{equation}
where $M_i$ is the maximum mass of the configuration given by $\pi_i$, $\Phi$ denotes the cumulative normal distribution function, and $\mu_A = 2.01~\mathrm{M_{\odot}}$ and $\sigma_A = 0.04~\mathrm{M_{\odot}}$ are the mean and uncertainty of the mass measurement~\cite{Antoniadis:2013pzd}.
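Assuming $\Phi$ in Eq.~\eqref{eq:lhoodMass} is the cumulative normal distribution (an assumption here, but the common choice in such analyses), it can be evaluated with the error function from the standard library:

```python
import math

def mass_likelihood(M_max, mu=2.01, sigma=0.04):
    """Cumulative normal Phi(M_max; mu, sigma): the probability that a
    star of measured mass ~ N(mu, sigma^2) is supported by a model
    whose maximum mass is M_max (all masses in solar masses)."""
    return 0.5 * (1.0 + math.erf((M_max - mu) / (sigma * math.sqrt(2.0))))
```

A model whose maximum mass falls well below the measured $2.01~M_\odot$ is thus strongly penalized, while one comfortably above it receives a likelihood close to unity.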
\subsection{Posterior distribution}
The full likelihood for a given $\pi_i$ can be calculated as the product of the individual likelihoods, since the considered constraints are independent of
each other:
\begin{equation}
\label{eq:p_event}
P\left(E\left|\overrightarrow{\pi}_{i}\right.\right)= \prod_{m} P\left(E_{m}\left|\overrightarrow{\pi}_{i}\right.\right).
\end{equation}
The posterior distribution of models on parameter diagram is given by Bayes' theorem
\begin{equation}
\label{eq:bayes}
P\left(\overrightarrow{\pi}_{i}\left|E\right.\right)=\frac{P\left(E\left|\overrightarrow{\pi}_{i}\right.\right)P\left(\overrightarrow{\pi}_{i}\right)}{\sum\limits _{j=0}^{N-1}P\left(E\left|\overrightarrow{\pi}_{j}\right.\right)P\left(\overrightarrow{\pi}_{j}\right)},
\end{equation}
where $P\left(\overrightarrow{\pi}_{j}\right)$ is the prior distribution of models, taken to be uniform: $P\left(\overrightarrow{\pi}_{j}\right)=1/N$.
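Since the uniform prior cancels in the ratio of Eq.~\eqref{eq:bayes}, the posterior reduces to a normalized element-wise product of likelihood tables. A sketch with placeholder likelihood values:

```python
def posterior(likelihood_tables):
    """Bayes' theorem with a uniform prior P(pi_j) = 1/N: element-wise
    product of the per-constraint likelihood tables P(E_m | pi_i),
    normalized over all N parameter vectors (the prior cancels)."""
    N = len(likelihood_tables[0])
    full = [1.0] * N
    for table in likelihood_tables:
        full = [f * l for f, l in zip(full, table)]  # product over constraints
    Z = sum(full)
    return [f / Z for f in full]

# Placeholder likelihoods for N = 3 parameter vectors and two constraints:
post = posterior([[0.2, 0.5, 0.9],    # e.g. the maximum-mass constraint
                  [0.8, 0.1, 0.7]])   # e.g. the tidal-deformability constraint
```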
\begin{figure*}[!ht]
\includegraphics[height=0.43\textwidth]{BAPlot_MKVORcut02_SFM_2019.pdf} \hspace{-1mm}
\includegraphics[height=0.5\textwidth]{GrigorianConstructed_KVORcut02_Nucleon_NUBalpha_TOV_BA_with_n_colored}
\caption{Left panel: The posterior distribution of models on the parameter space spanned by $\Delta_P$ and $\alpha$.
Right panel: The compact star sequences in the mass-radius diagram labeled into four probability classes according to the results of the BA
for the posterior distribution of the left panel. The grey, brown, orange and black lines show sequences for which the value of the posterior
probability exceeds the thresholds of 0.0, 0.02, 0.04 and 0.06, respectively.
Due to the restriction to the model and parameter range used in Ref.~\cite{Ayriyan:2017nby}, the sequences with a lower onset mass for the
deconfinement transition are not accessed for which an interpretation of GW170817 as a merger of two hybrid stars would be possible. }
\label{fig:results}
\end{figure*}
\section{Results}
The results of the Bayesian analysis are given in Fig.~\ref{fig:results}.
The most likely EoS are those with a strong mixed phase effect described by $\Delta_P > 0.04$ and with a large screening parameter $\alpha > 0.28$,
at the limit of the parameter range considered here. For these parameter sets the phase transition and therefore the compactification of the
hybrid star configuration occurs within the mass range that is relevant for GW170817.
These results indicate that it should be worthwhile to repeat this exploratory calculation with a wider set of parameters, in particular a larger
screening parameter $\alpha$.
\section{Conclusions}
We have developed a Bayesian analysis method for selecting the most probable equation of state under a set of constraints from compact star physics,
which now include the tidal deformability from GW170817.
We have applied this method to a case study that employed the class of hybrid EoS introduced in \cite{Ayriyan:2017nby} that allow for the existence of a third family
of compact stars.
We have investigated the requirements on future measurements to find the equivalent phenomenon of mass twins, i.e. two objects
with the same mass but different, distinguishable radii.
A mass--radius relation with two branches is also mapped onto the $\Lambda_1$--$\Lambda_2$ diagram, as shown here in Fig.~\ref{fig:L1L2_PDF}
and in other recent publications \cite{Alvarez-Castillo:2018pve,Most:2018hfd,Christian:2018jyd,Montana:2018bkb,Sieniawska:2018zzj}.
Since binary compact star mergers are expected to be observed by the LIGO-Virgo Collaboration at a rate of 1-10 events per year, we expect that with the observation
of further merger events a bimodal structure of the PDF in the $\Lambda_1-\Lambda_2$ plane could become apparent as a manifestation of the low-mass twin case with
an onset of the third family branch below $\sim 1.3~M_\odot$, if such a branch exists in nature.
A similar suggestion has already been made by Christian et al. \cite{Christian:2018jyd}.
Such an observation would support the existence of an early phase transition, around $2n_0$ for strong in-medium screening of the string tension.
\section{Acknowledgements}
We acknowledge discussions with K. Maslov on the hybrid star EoS.
A.A., D.B., and H.G. acknowledge support from the Russian Science Foundation under grant No. 17-12-01427 for the work described in
Sections 2--6. D. A.-C. is grateful for partial support from the Ter-Antonian--Smorodinsky program for collaboration between JINR and Armenian
scientific institutions and from the Bogoliubov--Infeld program for collaboration between JINR and Polish institutions.
\reftitle{References}
\section{Introduction}
\IEEEPARstart{T}{he} relationship between conditional entropy (equivocation) or mutual information, and best possible quality of decoding is an important concept in information theory. The best possible quality of a decoding scheme, when quantified by the minimal probability of error $\epsilon$, does not uniquely determine the value of equivocation or mutual information, but various upper and lower bounds have been proved, see Sec. \ref{sec_existing_theory}.
Here we discuss a scenario when not only $\epsilon$, but the complete joint probability distribution $p(x,\hat{x})$ of signals $x$ and maximum a posteriori
decodes $\hat{x}$ is available. We refer to $p(x,\hat{x})$ as the confusion matrix. To our knowledge, such a scenario has not been extensively studied in the literature, despite having practical relevance for estimation of mutual information, as we point out in Sec. \ref{sec_motivation}. In this article, we derive an upper bound on mutual information (and a corresponding lower bound on equivocation) that is based on the confusion matrix and is tighter than the known similar bound by Kovalevsky and others \cite{Kovalevsky1968,Tebbe1968,Feder1994} based on probability of error alone. The inequality in our bound can be proved quickly using the bound by Kovalevsky, as we show in Sec. \ref{sec_quick_proof}. However, we also include a self-contained derivation in Sec. \ref{sec_our_proof}, where we construct the distribution of channel outputs that minimizes equivocation $H(X|Y)$ under our constraints.
\subsection{Equivocation, mutual information and the minimal probability of error}
\label{sec_existing_theory}
We consider a signal variable (message) $X$ that is communicated through a channel with output $Y$ and then decoded, obtaining a ``decode'' $\hat{X}$ -- forming a Markov chain $X \leftrightarrow Y \leftrightarrow \hat{X}$. The equivocation $H(X|Y)$ quantifies the uncertainty in $X$ if the value of $Y$ is given. Conversely, the mutual information $I(X;Y)$ measures how much information about $X$ is contained in $Y$. It is not surprising that both $H(X|Y)$ and $I(X;Y)$ can be related to the minimal probability of error while decoding, $\epsilon = \Pr(X \neq \hat{X})$.
Accurate decoding, i.e., low $\epsilon$, requires sufficiently low equivocation $H(X|Y)$. This is quantified by Fano's inequality \cite{Cover2006}. The mutual information between the true signal and the channel output, $I(X;Y) = H(X)-H(X|Y)$, needs to be sufficiently high, and this is described by rate-distortion theory \cite{Shannon1959}.
Here we focus on the opposite bounds. If the minimal probability of error $\epsilon$ is specified, there is also a minimal possible equivocation. The following lower bound was derived for discrete $X$ with finite support by Kovalevsky \cite{Kovalevsky1968} and later Tebbe and Dwyer \cite{Tebbe1968} and Feder and Merhav \cite{Feder1994}. It reads
\begin{equation}
H(X|Y) \geq \phi^\ast (\epsilon), \label{eq_feder_merhav}
\end{equation}
where $\phi^\ast (\epsilon)$ is a piecewise linear function that coincides with $-\log{(1-\epsilon)}$ at points $\epsilon=0$, $1/2$, $2/3$, $\dots$, $(|\mathcal{X}|-1)/|\mathcal{X}|$ (we use $\log = \log_2$ throughout the paper, and $\mathcal{X}$ is the support of $X$), and it can be written using the floor and ceiling functions,
\begin{align}
\phi^\ast (\epsilon) &= \alpha(\epsilon) \log{ \left\lfloor \frac{1}{1- \epsilon} \right\rfloor} + \left(1-\alpha(\epsilon) \right) \log{ \left\lceil \frac{1}{1- \epsilon} \right\rceil}, \label{eq_phi} \\
\alpha(\epsilon) &= \left\lfloor \frac{1}{1- \epsilon} \right\rfloor \left( (1-\epsilon) \left\lceil \frac{1}{1- \epsilon} \right\rceil -1 \right). \label{eq_alpha}
\end{align}
The function $\phi^\ast (\epsilon)$ is plotted in Fig. \ref{fig_phi}.
\begin{figure}[!t]
\centering
\includegraphics[width=2.7in]{phi-eps-converted-to.pdf}
\caption{Plot of the functions $\phi^\ast(\epsilon)$ and $-\log{(1-\epsilon)}$. The two functions intersect at $\epsilon=0$, $1/2$, $2/3$, $\dots$, $(|\mathcal{X}|-1)/|\mathcal{X}|$ (black dots), and in between $\phi^\ast(\epsilon)$ is piecewise linear.}
\label{fig_phi}
\end{figure}
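Equations \eqref{eq_phi}--\eqref{eq_alpha} translate directly into code; the following minimal Python sketch (an illustration, not part of the original papers) evaluates $\phi^\ast$:

```python
import math

def phi_star(eps):
    """The piecewise linear function phi*(eps) that coincides with
    -log2(1 - eps) at eps = 0, 1/2, 2/3, ..., (|X|-1)/|X|."""
    u = 1.0 / (1.0 - eps)
    lo, hi = math.floor(u), math.ceil(u)
    alpha = lo * ((1.0 - eps) * hi - 1.0)
    return alpha * math.log2(lo) + (1.0 - alpha) * math.log2(hi)
```

On the interval $[0, 1/2]$ this reduces to $\phi^\ast(\epsilon) = 2\epsilon$.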
The bound \eqref{eq_feder_merhav} has been generalized to countably infinite support of $X$ by Ho and Verd\'u \cite{Ho2010}. Sason and Verd\'u \cite{Sason2018} proved a generalisation of \eqref{eq_feder_merhav} for Arimoto-R\'enyi conditional entropy of arbitrary order.
The bound \eqref{eq_feder_merhav} is tight when only the overall probability of error $\epsilon$ is available. However, when more constraints on the joint distribution of $X$ and $Y$ are given, tighter bounds can be obtained. Prasad \cite{Prasad2015} introduced two series of lower bounds on $H(X|Y)$ based on partial knowledge of the posterior distribution $p(x|y)$. The first is in terms of the $k$ largest posterior probabilities $p(x|y)$ for each $y$, which we could label $p_1(y), p_2(y), \dots, p_k(y)$ in descending order (where $1 \leq k \leq |\mathcal{X}|$). The second series of bounds by Prasad is in terms of the averages of $p_1(y), p_2(y), \dots, p_k(y)$ across all $y$.
Hu and Xing \cite{Hu2016} focused on binary signal $X$ and derived a bound tighter than \eqref{eq_feder_merhav} by taking into account the prior distribution of signals $p(x)$. Hu and Xing also discuss suboptimal (other than maximum a posteriori) decoding, which is otherwise rare in the related literature.
\subsection{Motivation: estimation of mutual information} \label{sec_motivation}
Here we extend the bound \eqref{eq_feder_merhav} to account for the situation when the complete confusion matrix -- the joint distribution $p(x,\hat{x})$ is known. We are motivated by the following scenario: suppose that the goal is to estimate the mutual information $I(X;Y)$ from a finite set of $(x,y)$ samples. Moreover, assume that the space of possible channel outputs $\mathcal{Y}$ is large (much larger than the space of signals, $|\mathcal{Y}|\gg|\mathcal{X}|$), making a direct calculation of $I(X;Y)$ by means of their joint distribution $p(x,y)$ infeasible due to insufficient sampling. In such a case, one approach (used e.g. in neuroscience \cite{Borst1999}) is to construct a decoder, map each $y$ into a decode $\hat{x}$ and estimate the confusion matrix $p(x,\hat{x})$. Then the post-decoding mutual information $I(X;\hat{X})$ can be calculated and used as a lower bound on $I(X;Y)$ due to the data processing inequality \cite{Cover2006}. However, the gap between $I(X;\hat{X})$ and $I(X;Y)$ is not known (but see a discussion of this gap in \cite{Samengo2002}), and an upper bound on $I(X;Y)$ based on $p(x,\hat{x})$ is desirable. Our result is such a bound, for the specific case of maximum a posteriori decoder.
While mutual information $I(X;Y)$ has this practical importance, we formulate our result as an equivalent lower bound on equivocation $H(X|Y) = H(X)-I(X;Y)$ first. This is simpler to state and prove.
\section{Statement of the bound}
Given the joint distribution $p(X,\hat{X})$ of signals $X$ (discrete with finite support) and maximum a posteriori decodes $\hat{X}$ based on the channel output $Y$, the equivocation $H(X|Y)$ is bounded from below by
\begin{equation}
H(X|Y) \geq \sum_{\hat{x}} p(\hat{x}) \, \phi^\ast (\epsilon_{\hat{x}} ), \label{eq_bound_equivocation}
\end{equation}
where $\epsilon_{\hat{x}} = p(X \neq \hat{X} | \hat{x}) = 1 - p(X = \hat{x} | \hat{X} = \hat{x})$ is the probability of error for the decode $\hat{x}$ and the function $\phi^\ast$ is defined in \eqref{eq_phi}, \eqref{eq_alpha}.
Equivalently, we can bound the mutual information $I(X;Y)$ from above:
\begin{align}
I(X;Y) &= H(X) - H(X|Y) \nonumber \\
&\leq H(X) - \sum_{\hat{x}} p(\hat{x}) \, \phi^\ast (\epsilon_{\hat{x}} ).
\end{align}
These bounds are tight, and we construct the distributions $p(y|\hat{x})$ and $p(x|y)$ that achieve equality in Sec. \ref{sec_our_proof}.
\subsection{Comments on the bound}
We note that since the function $\phi^\ast (\epsilon_{\hat{x}})$ is convex, we can apply Jensen's inequality to the right hand side of \eqref{eq_bound_equivocation} and recover the bound \eqref{eq_feder_merhav} by Kovalevsky \cite{Kovalevsky1968},
\begin{equation}
H(X|Y) \geq \phi^\ast \left( \sum_{\hat{x}} p(\hat{x}) \, \epsilon_{\hat{x}} \right)
= \phi^\ast (\epsilon).
\end{equation}
Both bounds coincide in case of binary signal $|\mathcal{X}| = 2$, or any other case when the probability of error is less than $1/2$, $\epsilon_{\hat{x}} < 1/2$ for all $\hat{x}$. On this range, $\phi^\ast (\epsilon_{\hat{x}}) = 2 \epsilon_{\hat{x}}$ and the bound simplifies to
\begin{equation}
H(X|Y) \geq 2 \sum_{\hat{x}} p(\hat{x}) \, \epsilon_{\hat{x}} = 2 \epsilon,
\end{equation}
as has been noted in \cite{Feder1994} and before.
\subsection{Example calculation}
As an illustration, we apply our bound \eqref{eq_bound_equivocation} to an example confusion matrix and compare it to the bound \eqref{eq_feder_merhav} that is in terms of error probability $\epsilon$ only.
The confusion matrix considered is depicted in Fig. \ref{fig_simpleCalc} (A) for the case $|\mathcal{X}|=5$. We vary the size $|\mathcal{X}|$ of the space of signals $\mathcal{X}=\{ 1,2,\dots,|\mathcal{X}|\}$, and the confusion matrix always takes the form
\begin{equation}
p(x,\hat{x}) =
\begin{cases}
\frac{1}{2|\mathcal{X}|}; \qquad & x=\hat{x}<|\mathcal{X}|, \\
\frac{1}{2|\mathcal{X}|}; \qquad & x<|\mathcal{X}|,\, \hat{x}=|\mathcal{X}|, \\
\frac{1}{|\mathcal{X}|}; \qquad & x=\hat{x}=|\mathcal{X}|, \\
0; \qquad & x \neq \hat{x},\, \hat{x}<|\mathcal{X}|.
\end{cases} \label{eq_example_calculation}
\end{equation}
This distribution has the property that while most of the decodes have zero probability of being incorrect ($\epsilon_{\hat{x}}=0$ for $\hat{x}<|\mathcal{X}|$), the last one has a high probability of being incorrect, $\epsilon_{\hat{x}}=(|\mathcal{X}|-1)/(|\mathcal{X}|+1)$ for $\hat{x}=|\mathcal{X}|$. Our bound \eqref{eq_bound_equivocation} takes this into account -- which makes it substantially tighter than the bound \eqref{eq_feder_merhav} based only on the overall probability of error $\epsilon$. This can be seen in Fig. \ref{fig_simpleCalc} (B), where both lower bounds are plotted. We also plot the post-decoding conditional entropy $H(X|\hat{X})$ which serves as the upper bound on the true value of $H(X|Y)$.
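The numbers behind Fig.~\ref{fig_simpleCalc}~(B) can be reproduced in a few lines. The sketch below re-implements $\phi^\ast$ from Eqs.~\eqref{eq_phi}--\eqref{eq_alpha} and evaluates the right-hand side of \eqref{eq_bound_equivocation} for the confusion matrix \eqref{eq_example_calculation}:

```python
import math

def phi_star(eps):
    """Piecewise linear lower envelope of -log2(1 - eps)."""
    u = 1.0 / (1.0 - eps)
    lo, hi = math.floor(u), math.ceil(u)
    a = lo * ((1.0 - eps) * hi - 1.0)
    return a * math.log2(lo) + (1.0 - a) * math.log2(hi)

def equivocation_bound(p_joint):
    """Lower bound on H(X|Y): sum over xhat of p(xhat) * phi*(eps_xhat),
    where p_joint[x][xhat] is the confusion matrix p(x, xhat)."""
    n = len(p_joint)
    bound = 0.0
    for xh in range(n):
        p_xh = sum(p_joint[x][xh] for x in range(n))
        if p_xh > 0.0:
            eps_xh = 1.0 - p_joint[xh][xh] / p_xh
            bound += p_xh * phi_star(eps_xh)
    return bound

def example_matrix(n):
    """The confusion matrix of the example calculation."""
    p = [[0.0] * n for _ in range(n)]
    for x in range(n - 1):
        p[x][x] = 1.0 / (2 * n)          # correct decodes with xhat < n
        p[x][n - 1] = 1.0 / (2 * n)      # all errors land on the last decode
    p[n - 1][n - 1] = 1.0 / n
    return p
```

For $|\mathcal{X}|=5$ the new bound evaluates to $0.6\log_2 3 \approx 0.95$ bits, compared with $\phi^\ast(\epsilon) = \phi^\ast(0.4) = 0.8$ bits from \eqref{eq_feder_merhav}.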
\begin{figure}[!t]
\centering
\begin{tabular}{m{1.3 in} m{1.8 in}}
\includegraphics[width=\linewidth]{simple_calc_conf_matrix-eps-converted-to.pdf} &
\includegraphics[width=\linewidth]{simple_calc-eps-converted-to.pdf}
\end{tabular}
\caption{Example application of the bound. (A) The joint distribution of signals and decodes $p(x,\hat{x})$ for which we compute the bound, defined in Eq. \eqref{eq_example_calculation}. Here for the case $|\mathcal{X}|=5$. (B) Bounds on conditional entropy (equivocation) $H(X|Y)$ plotted for different sizes of signal space $|\mathcal{X}|$. $H(X|Y)$ is bounded from above by $H(X|\hat{X})$ (blue points). Our novel lower bound (Eq. \eqref{eq_bound_equivocation}) is in orange and the bound by Kovalevsky (Eq. \eqref{eq_feder_merhav}) in green. Our bound \eqref{eq_bound_equivocation} is the tightest possible given the confusion matrix.}
\label{fig_simpleCalc}
\end{figure}
\section{Proof of the bound}
We offer two alternative proofs of the bound here. The first proves it as a simple consequence of the bound \eqref{eq_feder_merhav} by Kovalevsky. It is short, but it leaves open the question of tightness. We therefore focus on the second proof, which is self-contained, implies tightness and perhaps offers additional insights, since it includes a derivation of the distribution of channel outputs $p(y|\hat{x})$, $p(x|y)$ that minimizes $H(X|Y)$.
Throughout the proofs, the spaces of possible values of $X$ and $Y$ are written as $\mathcal{X}$ and $\mathcal{Y}$ respectively. The decoding function is denoted $g : \mathcal{Y} \rightarrow \mathcal{X}$ and is based on the maximum a posteriori rule, $g(y) \in \underset{x}{\operatorname{argmax}} \, p(x|y)$. Finally, $\mathcal{Y}_{\hat{x}} = \{ y \in \mathcal{Y} \, | \, g(y) = \hat{x} \}$ is the set of all $y$ that decode into $\hat{x}$.
\subsection{A quick proof of inequality following Kovalevsky's bound} \label{sec_quick_proof}
The left hand side of \eqref{eq_bound_equivocation}, the equivocation $H(X|Y)$ can be written as
\begin{equation}
H(X|Y) = \sum_{\hat{x}} p(\hat{x}) \int_{\mathcal{Y}_{\hat{x}}} H(X|Y=y) \,dp(y|\hat{x}),
\end{equation}
where the term $\int_{\mathcal{Y}_{\hat{x}}} H(X|Y=y) \,dp(y|\hat{x})$ is the entropy of $X$ conditional on $Y$, but with the values of $Y$ only limited to $\mathcal{Y}_{\hat{x}}$. Since it has the form of conditional entropy, we can use the Kovalevsky bound \eqref{eq_feder_merhav} and obtain our result \eqref{eq_bound_equivocation}.
This establishes the inequality in our bound, but it does not tell us if equality can be achieved -- and if it can, for what distribution of $Y$ does it happen. We address this in the following section.
\subsection{Proof by minimization of equivocation}
\label{sec_our_proof}
For simplicity, we formulate the derivation for discrete $Y$. However, as we comment in Sec. \ref{sec_discussion}, the derivation applies to continuous $Y$ with only minor modifications.
For clarity, let us state the minimization problem we are solving. We minimize
\begin{align}
H(X|Y) = \sum_{\hat{x}} p(\hat{x}) \sum_{y \in \mathcal{Y}_{\hat{x}}} p(y|\hat{x}) H(X|Y=y) \label{eq_objective_function_full}
\end{align}
with respect to $p(y|\hat{x})$ and $p(x|y)$, with the constraints given by the confusion matrix and maximum a posteriori decoding:
\begin{align}
\forall x, \hat{x}: \qquad & \sum_y p(x|y) p(y|\hat{x}) = p(x|\hat{x}), \label{eq_constraints_conf} \\
\forall \hat{x}, \forall y \in \mathcal{Y}_{\hat{x}}: \qquad & \hat{x} \in \underset{x}{\operatorname{argmax}} \, p(x|y). \label{eq_constraints_dec}
\end{align}
Note in \eqref{eq_objective_function_full} that the minimization can be done separately for each $\hat{x}$, since the corresponding $\mathcal{Y}_{\hat{x}}$ are disjoint. Hence we have $|\mathcal{X}|$ independent minimization problems with the objective function
\begin{equation}
\sum_{y \in \mathcal{Y}_{\hat{x}}} p(y|\hat{x}) H(X|Y=y). \label{eq_objective_function}
\end{equation}
Note also that we do not have any constraint on $|\mathcal{Y}|$, the number of elements of $\mathcal{Y}$. We actually exploit this flexibility in the proof. However, it turns out (see Propositions 1 and 2) that when the minimum is achieved, there can be only a limited number of $y$ values with different distribution $p(x|y)$.
Our approach is based on update rules for $p(y|\hat{x})$ and $p(x|y)$ that decrease the objective function \eqref{eq_objective_function} while respecting the constraints \eqref{eq_constraints_conf}, \eqref{eq_constraints_dec}. In fact, the updates also change $|\mathcal{Y}|$. The minimum of $H(X|Y)$ is achieved when the update rules can no longer be used to decrease it -- such situations can be characterized and the corresponding $H(X|Y)$ can be calculated.
It is instructive to have in mind the following visualization of our minimization problem, which we use to illustrate the update rules in Fig. \ref{fig_updateRules}. The distribution $p(x,y|\hat{x})$ for some $\hat{x}$, with $y$ restricted to $y \in \mathcal{Y}_{\hat{x}}$ can be represented as a matrix, with a row for each $x$ and a column for each $y$. Normalized columns correspond to $p(x|y)$ and the sum of each column is $p(y|\hat{x})$. The constraint \eqref{eq_constraints_conf} means that each row has a fixed sum, $p(x|\hat{x})$, and the constraint \eqref{eq_constraints_dec} means that one row (e.g. the first) contains the dominant elements of all columns. The objective function \eqref{eq_objective_function} is a weighted sum of entropies of all columns. Our minimization will consist of adding and removing columns, and moving probability mass within rows.
In the following, a probability distribution is called \emph{flat} if all non-zero elements are equal, i.e. there are $n$ non-zero elements and all have probabilities $1/n$. The number $n$ is called its \emph{length}.
\subsection*{Proposition 1: equivocation minimized by flat $p(x|y)$}
\label{subsec_proposition1}
The minimum of the objective function \eqref{eq_objective_function}, given constraints \eqref{eq_constraints_conf}, \eqref{eq_constraints_dec} can only be achieved when the distributions $p(x|y)$ are flat for all $y$.
\begin{IEEEproof}
Suppose that there is a channel output $y'$ with a non-flat distribution $p(x|y')$. Then, the following update rule, illustrated in Fig. \ref{fig_updateRules} (A), will decrease the objective function \eqref{eq_objective_function}.
We label the elements of $\mathcal{X}$ as $x_1, x_2, \dots, x_{|\mathcal{X}|}$ such that
\begin{equation}
p(x_1|y') \geq p(x_2|y') \geq \dots \geq p(x_{|\mathcal{X}|}|y') \geq 0,
\end{equation}
where at least two of the inequalities are sharp (otherwise $p(x|y')$ would be flat). Note that $x_1$ must be the decode of $y'$, i.e. $g(y') = \hat{x} = x_1$. The proposed update is to replace $y'$ by $y'_1, y'_2, \dots, y'_{|\mathcal{X}|}$ with flat distributions $p(x|y'_i)$,
\begin{align}
p(x_j|y'_i) &= \begin{cases}
1/i; & j \leq i \\
0; & j > i,
\end{cases} \label{eq_prop1_xy} \\
p(y'_i|\hat{x}) &= \begin{cases}
i p(y'|\hat{x}) \left( p(x_i|y') - p(x_{i+1}|y') \right); & i < |\mathcal{X}| \\
i p(y'|\hat{x}) \, p(x_i|y'); & i = |\mathcal{X}|.
\end{cases} \label{eq_prop1_yx}
\end{align}
Intuitively, this replaces $y'$ by multiple elements $y'_i$ with flat distributions $p(x|y'_i)$ covering the first $1, 2, \dots, |\mathcal{X}|$ elements of the ordered $x_1, x_2, \dots, x_{|\mathcal{X}|}$. It can be confirmed that this replacement respects the constraints \eqref{eq_constraints_conf}. All $y'_i$ still decode into $\hat{x} = x_1$, and the probability associated with $y'$ is merely divided among the elements $y'_i$,
\begin{align}
\sum_i p(y'_i|\hat{x}) &= p(y'|\hat{x}), \label{eq_prop1_argument1} \\
\sum_i p(x_j|y'_i) p(y'_i|\hat{x}) &= p(x_j|y') p(y'|\hat{x}). \label{eq_prop1_argument2}
\end{align}
See Fig. \ref{fig_updateRules} for an example.
Now we look at the change in the objective function \eqref{eq_objective_function} induced by this replacement. Before the replacement, $y'$ contributes the amount
\begin{equation}
p(y'|\hat{x}) H(X|Y=y'), \label{eq_prop1_oldH}
\end{equation}
where $H(X|Y=y')$ is the entropy of a single random variable with distribution $p(x|y')$. After the replacement, the total contribution of all $y'_1, y'_2, \dots, y'_{|\mathcal{X}|}$ is
\begin{multline}
\sum_i p(y'_i|\hat{x}) H(X|Y=y'_i) = \\
= p(y'|\hat{x}) \sum_i \frac{p(y'_i|\hat{x})}{p(y'|\hat{x})} H(X|Y=y'_i), \label{eq_prop1_newH}
\end{multline}
where the latter sum has the form of a conditional entropy of a variable with a marginal distribution $p(x|y')$, conditioned on the value of $y'_i$ distributed according to $p(y'_i|\hat{x})/p(y'|\hat{x})$ (this follows from Eq. \eqref{eq_prop1_argument1}, \eqref{eq_prop1_argument2}). Since conditioning decreases entropy, our replacement decreases the objective function \eqref{eq_objective_function}.
The only case in which the proposed replacement cannot be used to decrease the objective function is when $p(x|y)$ is flat for all $y$. Therefore flat $p(x|y)$ must be a characteristic of any solution to our minimization problem.
\end{IEEEproof}
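The replacement \eqref{eq_prop1_xy}--\eqref{eq_prop1_yx} is easy to verify numerically. The sketch below (the function names are ours, and the posterior is an arbitrary illustrative example) checks that the split preserves the joint probabilities, Eqs. \eqref{eq_prop1_argument1}--\eqref{eq_prop1_argument2}, and strictly lowers the contribution to the objective:

```python
import numpy as np

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

def split_into_flat(p_x_given_y, p_y):
    """Replace output y' (posterior p_x_given_y, weight p_y = p(y'|xhat))
    by outputs y'_1..y'_n with flat posteriors of lengths 1..n."""
    p = np.sort(p_x_given_y)[::-1]           # p(x_1|y') >= ... >= p(x_n|y')
    n = len(p)
    weights = np.empty(n)                    # p(y'_i|xhat)
    posts = np.zeros((n, n))                 # row i-1: flat posterior, length i
    for i in range(1, n + 1):
        gap = p[i - 1] - (p[i] if i < n else 0.0)
        weights[i - 1] = i * p_y * gap
        posts[i - 1, :i] = 1.0 / i
    return posts, weights

p_post, w = np.array([0.5, 0.3, 0.2]), 0.4   # non-flat posterior, weight 0.4
posts, weights = split_into_flat(p_post, w)

assert np.isclose(weights.sum(), w)          # probability of y' is preserved
assert np.allclose(weights @ posts, w * p_post)  # joint p(x, y'|xhat) preserved
new_H = sum(wi * entropy(pi) for wi, pi in zip(weights, posts))
assert new_H < w * entropy(p_post)           # objective strictly decreased
```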
Note that there are only $2^{|\mathcal{X}|-1}$ different possible flat distributions $p(x|y)$ with nonzero $p(X=\hat{x}|y)$, which means that we need at most $2^{|\mathcal{X}|-1}$ elements in $\mathcal{Y}_{\hat{x}}$ to achieve the minimum equivocation. However, as the following proposition will show, there are further restrictions on $p(x|y)$ at the minimum.
Reflecting that only flat $p(x|y)$ are of further interest in the minimization, we say that the channel output $y$ has length $l$ if $p(x|y)$ has length $l$.
\begin{figure}[!t]
\centering
\includegraphics[width=2.8in]{updateRule_1-eps-converted-to.pdf}\\
\includegraphics[width=2.8in]{updateRule_2v2-eps-converted-to.pdf}
\caption{Illustrations of the update rules used to prove (A) Proposition 1 and (B) Proposition 2. Displayed is the joint distribution $p(x,y|\hat{x})$.
(A) A channel output $y'$ with a non-flat distribution $p(x|y')$ is replaced by $y'_1, y'_2, \dots, y'_{4}$ with flat distributions $p(x|y'_i)$, such that $y'_1, y'_2, \dots, y'_{4}$ still decode into $x_1$ and the confusion matrix is not affected. This replacement decreases $H(X|Y)$, our objective function. The elements of $\mathcal{X}$ are labeled in decreasing order of $p(x,y|\hat{x})$.
(B) Two channel outputs, $y_1$ and $y_2$, have flat distributions $p(x|y_{1,2})$ with $3$ and $1$ nonzero elements respectively. We replace $y_1$ by $\bar{y}_1$ and $\bar{\bar{y}}_1$, and then transfer probability $p(x_2,\bar{y}_1|\hat{x})$ to $p(x_2,y_2|\hat{x})$ (dotted red arrow). The distributions $p(x|\bar{y}_1)$, $p(x|\bar{\bar{y}}_1)$ and $p(x|y_{2})$ remain flat, and the objective function $H(X|Y)$ is decreased.
}
\label{fig_updateRules}
\end{figure}
\subsection*{Proposition 2: minimization restricts lengths of $p(x|y)$ }
\label{subsec_proposition2}
Building on Proposition 1, we further claim that if equivocation is minimized, no two channel outputs $y_1,y_2 \in \mathcal{Y}_{\hat{x}}$ can have lengths differing by more than $1$.
\begin{IEEEproof}
As before, we introduce an update rule. Recalling the visualization with a column for each $y$, this update rule will move a nonzero element from a longer column to a shorter column, as shown in Fig. \ref{fig_updateRules} (B).
Take two elements $y_1,y_2 \in \mathcal{Y}_{\hat{x}}$ that have flat distributions $p(x|y_1)$ and $p(x|y_2)$ with lengths $a$ and $b$ respectively, and assume that the lengths differ by more than one, $a - b > 1$. This means that we can choose an element $x' \in \mathcal{X}$ such that $p(x'|y_1) = 1/a$ and $p(x'|y_2) = 0$. Assume momentarily that $p(y_1|\hat{x})/a = p(y_2|\hat{x})/b$ (we will relax this assumption later). Then we can replace $y_1,y_2$ by $y'_1$ and $y'_2$, such that
\begin{itemize}
\item $p(x|y'_1)$ is flat with length $a-1$. It is nonzero for the same $x$ as $p(x|y_1)$, except for $x'$ where it is zero.
\item $p(x|y'_2)$ is flat with length $b+1$. It is nonzero for the same $x$ as $p(x|y_2)$, and also for $x'$.
\end{itemize}
Given that $p(y_1|\hat{x})/a = p(y_2|\hat{x})/b$, we can also choose the probabilities $p(y'_1|\hat{x})$ and $p(y'_2|\hat{x})$ such that $y'_1$, $y'_2$ contribute the same amount to $p(x|\hat{x}) = \sum_{y} p(x|y) p(y|\hat{x})$ as $y_1$ and $y_2$ did, ensuring that constraints \eqref{eq_constraints_conf} are respected:
\begin{align}
p(y'_1|\hat{x}) &= \frac{a-1}{a} p(y_1|\hat{x}), \\
p(y'_2|\hat{x}) &= \frac{b+1}{b} p(y_2|\hat{x}).
\end{align}
Now we show that the proposed replacement reduces the objective function. Before the replacement, the contribution to the objective function \eqref{eq_objective_function} by $y_1$ and $y_2$ was
\begin{equation}
p(y_1|\hat{x}) \log{a} + p(y_2|\hat{x}) \log{b}.
\end{equation}
After the replacement, $y'_1$ and $y'_2$ contribute by
\begin{equation}
\frac{a-1}{a} p(y_1|\hat{x}) \log{(a-1)} + \frac{b+1}{b} p(y_2|\hat{x}) \log{(b+1)}.
\end{equation}
The difference $\Delta$ of these contributions, after minus before the replacement, has the form
\begin{equation}
\Delta = \frac{p(y_1|\hat{x})} {a} \left( f(b+1) - f(a) \right),
\end{equation}
where $f(t) = t \log{t} - (t-1) \log{(t-1)}$ is an increasing function for $t\geq 1$. Since $b+1 < a$, we have $\Delta < 0$, meaning that the objective function is reduced.
This update rule is applicable to any $y_1,y_2 \in \mathcal{Y}_{\hat{x}}$ with lengths $a$ and $b$ such that $a-b>1$ respectively. We have, however, further required that $p(y_1|\hat{x})/a = p(y_2|\hat{x})/b$. This requirement can be avoided. If $p(y_1|\hat{x})/a > p(y_2|\hat{x})/b$, we first split $y_1$ into $\bar{y}_1$ and $\bar{\bar{y}}_1$ with
\begin{align}
p(\bar{y}_1|\hat{x}) &= a \, p(y_2|\hat{x})/b, \\
p(\bar{\bar{y}}_1|\hat{x}) &= p(y_1|\hat{x}) - a \, p(y_2|\hat{x})/b, \\
p(x|\bar{y}_1) &= p(x|\bar{\bar{y}}_1) = p(x|y_1),
\end{align}
such that the above mentioned update rule can be applied to $\bar{y}_1$ and $y_2$ while $\bar{\bar{y}}_1$ is left unchanged, see Fig. \ref{fig_updateRules} (B). If $p(y_1|\hat{x})/a < p(y_2|\hat{x})/b$, we can proceed analogously by splitting $y_2$.
We can decrease the objective function by repeatedly applying this generalized update rule. Therefore, the minimum can only be achieved when the lengths of $p(x|y)$ for $y \in \mathcal{Y}_{\hat{x}}$ vary by no more than 1.
\end{IEEEproof}
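Both the closed form of $\Delta$ and the monotonicity of $f$ can be verified directly. In the sketch below the lengths $a$, $b$ and the weights are illustrative, chosen so that $p(y_1|\hat{x})/a = p(y_2|\hat{x})/b$ as the rule requires:

```python
import math

def f(t):
    # f(t) = t*log2(t) - (t-1)*log2(t-1), increasing for t >= 1
    return 0.0 if t == 1 else t * math.log2(t) - (t - 1) * math.log2(t - 1)

a, b = 4, 1                      # lengths with a - b > 1
p_y1, p_y2 = 0.4, 0.1            # p(y1|xhat)/a == p(y2|xhat)/b == 0.1

before = p_y1 * math.log2(a) + p_y2 * math.log2(b)
after = ((a - 1) / a) * p_y1 * math.log2(a - 1) \
      + ((b + 1) / b) * p_y2 * math.log2(b + 1)
delta = after - before

assert math.isclose(delta, (p_y1 / a) * (f(b + 1) - f(a)))   # closed form
assert delta < 0                                             # objective reduced
assert all(f(t) < f(t + 1) for t in range(1, 100))           # f is increasing
```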
Note that by repeated application of this update rule, in a finite number of steps we reach a state with only up to two lengths (per $\hat{x}$) that differ by at most 1. As shown in the next section, such a state implies a specific value of $H(X|Y)$. Together with the update rule in the proof of Proposition 1, this gives us an algorithm to find the distributions $p(y|\hat{x})$ and $p(x|y)$ that achieve the minimum $H(X|Y)$. The algorithm can start from an arbitrary initialization of $p(y|\hat{x})$ and $p(x|y)$ that satisfies the constraints \eqref{eq_constraints_conf}, \eqref{eq_constraints_dec}, and it finishes in a finite number of steps.
It remains to determine the (at most two) allowed lengths of $y \in \mathcal{Y}_{\hat{x}}$ and how the elements $y$ with these lengths contribute to the equivocation $H(X|Y)$.
\subsection*{Admissible lengths of $p(x|y)$}
Let us call the two admissible lengths $l_{\hat{x}}$ and $l_{\hat{x}}+1$. Given $\hat{x}$, the total probability of all $y \in \mathcal{Y}_{\hat{x}}$ with length $l_{\hat{x}}$ is $\alpha_{\hat{x}}$, and those of length $l_{\hat{x}}+1$ have total probability $1-\alpha_{\hat{x}}$. Then from the constraint \eqref{eq_constraints_conf}, we can write the probability that $\hat{x}$ is the correct decode as
\begin{equation}
1-\epsilon_{\hat{x}} = \frac{\alpha_{\hat{x}}}{ l_{\hat{x}} }+ \frac{1-\alpha_{\hat{x}}} {l_{\hat{x}}+1}, \label{eq_alpha_length}
\end{equation}
from which we can deduce that $\frac{1} {l_{\hat{x}}+1} \leq 1-\epsilon_{\hat{x}} \leq \frac{1}{ l_{\hat{x}} }$, and that the two admissible lengths must be
\begin{equation}
l_{\hat{x}} = \left\lfloor \frac{1}{1- \epsilon_{\hat{x}}} \right\rfloor
\text{ and } l_{\hat{x}} + 1 = \left\lceil \frac{1}{1- \epsilon_{\hat{x}}} \right\rceil, \label{eq_result_lengths}
\end{equation}
unless $\frac{1}{1- \epsilon_{\hat{x}}}$ is an integer -- in that case the floor and ceiling coincide into a single admissible length.
Now, from equations \eqref{eq_alpha_length} and \eqref{eq_result_lengths} we can determine that
\begin{equation}
\alpha_{\hat{x}} = \left\lfloor \frac{1}{1- \epsilon_{\hat{x}}} \right\rfloor \left( (1-\epsilon_{\hat{x}}) \left\lceil \frac{1}{1- \epsilon_{\hat{x}}} \right\rceil -1 \right) = \alpha(\epsilon_{\hat{x}}) \label{eq_result_alpha}
\end{equation}
is the total probability (given $\hat{x}$) of $y \in \mathcal{Y}_{\hat{x}}$ with length $\lfloor \frac{1}{1- \epsilon_{\hat{x}}} \rfloor$.
Finally, the minimal value of equivocation is simply
\begin{equation}
H(X|Y) \geq \sum_{\hat{x}} p(\hat{x}) \left( \alpha_{\hat{x}} \log{l_{\hat{x}}} + (1-\alpha_{\hat{x}}) \log{(l_{\hat{x}} + 1)} \right),
\end{equation}
which together with equations \eqref{eq_result_lengths} and \eqref{eq_result_alpha} constitutes our main bound, as stated in \eqref{eq_bound_equivocation}.
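For concreteness, the bound can be evaluated numerically from the diagonal of the confusion matrix. The following sketch (the function name is ours) implements Eqs. \eqref{eq_result_lengths}, \eqref{eq_result_alpha} and the final bound, treating the case of integer $1/(1-\epsilon_{\hat{x}})$ separately as noted above:

```python
import math

def min_equivocation_bits(p_xhat, eps):
    """Lower bound on H(X|Y): p_xhat[i] = p(xhat_i), eps[i] = per-decode
    error probability epsilon for that xhat."""
    total = 0.0
    for p, e in zip(p_xhat, eps):
        r = 1.0 / (1.0 - e)
        lo, hi = math.floor(r), math.ceil(r)
        if lo == hi:                      # 1/(1-eps) integer: single length
            total += p * math.log2(lo)
        else:
            alpha = lo * ((1.0 - e) * hi - 1.0)   # total weight of length lo
            total += p * (alpha * math.log2(lo) + (1 - alpha) * math.log2(hi))
    return total

# two equiprobable decodes, each wrong 25% of the time:
# 1/(1-eps) = 4/3, lengths 1 and 2, alpha = 1*(0.75*2 - 1) = 1/2
h = min_equivocation_bits([0.5, 0.5], [0.25, 0.25])
assert math.isclose(h, 0.5)               # bound: half a bit of equivocation
```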
\section{Discussion} \label{sec_discussion}
We have introduced a tight lower bound on equivocation in terms of the maximum a posteriori confusion matrix, and proved it in two ways. The first is a proof of the inequality, starting from a similar bound by Kovalevsky \cite{Kovalevsky1968}, but it does not prove that the bound is tight. Therefore, we developed a second proof, in which we construct the distribution of channel outputs that minimizes the equivocation and achieves equality in our bound.
Central to the latter approach are two update rules for the distribution of the channel outputs. These update rules exploit the fact that equivocation can be, under our constraints, minimized by (1) making the posterior distributions $p(x|y)$ flat and (2) making sure that these flat distributions contain similar numbers of nonzero elements.
We formulated the proof for discrete random variables $X$ and $Y$, but it can be extended. If $X$ is discrete but $Y$ continuous, application of a modified version of the first update rule would result in $2^{|\mathcal{X}|}$ regions in the $\mathcal{Y}_{\hat{x}}$ space corresponding to each of the $2^{|\mathcal{X}|}$ possible flat distributions $p(x|y')$. For example, the region associated with a flat distribution of length $|\mathcal{X}|$, that is $p(x|y') = 1/|\mathcal{X}|$, would have a total probability $\int_{\mathcal{Y}_{\hat{x}}} |\mathcal{X}| \min_x{p(x|y)} dp(y|\hat{x})$. These subsets of $\mathcal{Y}$ where $p(x|y)$ is constant can then be treated like discrete values, and the rest of our derivation applies.
Bounds on equivocation (or mutual information) in terms of the confusion matrix are, to our knowledge, not common -- despite their relevance for estimation of mutual information. We hope that our result can be useful for these purposes, and that it sheds some light on the gap between mutual information before and after decoding. However, its applicability is restricted by the assumption of maximum a posteriori decoding, and relaxing this assumption remains an interesting challenge.
\section*{Acknowledgment}
The authors would like to thank Sarah A. Cepeda-Humerez for helpful discussions.
\section{Introduction}
A standard model of cosmology has been established, in which the universe
consists of $4\%$ ordinary baryonic matter, $\sim 23\%$ dark matter
(DM), $\sim 73\%$ dark energy, and a tiny abundance of relic neutrinos
\cite{2011ApJS..192...18K}. The nature of the DM particle remains a mystery.
One of the leading candidates is the weakly interacting massive particle
(WIMP), which is predicted in several models, such as the neutralino in
supersymmetric models (see the reviews \cite{1988ARNPS..38..751P,
1996PhR...267..195J,2005PhR...405..279B}). In this kind of model, the
mass and interaction strength of the DM particles can produce the
correct relic density of DM if the WIMPs thermally ``freeze out'',
which is called the ``WIMP miracle''.
If DM particles annihilate or decay into standard model particles,
they can be detected indirectly through the cosmic ray (CR) radiation.
Among many kinds of CR particles, $\gamma$-rays are the best probe
due to their simple propagation. {\it Fermi} gamma-ray telescope,
which was launched in 2008, has surveyed the $\gamma$-ray sky with
very high resolution and sensitivity for more than three years.
Nearly $2000$ sources as well as the diffuse $\gamma$-ray emission
were detected by {\it Fermi} Large Area Telescope ({\it Fermi}-LAT)
\cite{2011arXiv1108.1435T,2009ApJ...703.1249A,2009PhRvL.103y1101A,
2010PhRvL.104j1101A}. The analysis of the {\it Fermi}-LAT data in
the Galactic center region did see some excesses with respect to
the background model \cite{2010ApJ...717..825D,2010ApJ...724.1044S},
however, there is no strong indication of signals from DM annihilation
or decay\footnote{See also the argument of possible DM explanation
of the $\gamma$-ray haze/bubble \cite{2011ApJ...741...25D}.}.
The constraints on DM model parameters can be derived according to
the non-detection of DM signals from e.g., dwarf galaxies
\cite{2010ApJ...712..147A,2010PhRvD..82l3503E,2011PhRvL.107x1303G,
2011PhRvL.107x1302A,2011arXiv1111.2604C}, galaxy clusters
\cite{2010JCAP...05..025A,2010PhRvD..82b3506Y,2010JCAP...12..015D,
2011PhLB..698...44K,2011arXiv1105.3240P,2011arXiv1110.1529H} and the
diffuse $\gamma$-rays
\cite{2010JCAP...04..014A,2010NuPhB.840..284C,2010JCAP...03..014P,
2010JCAP...06..027Z,2010JCAP...07..008H,2010JCAP...10..023Y,
2010JCAP...11..041A,2010arXiv1011.5090A,2011arXiv1105.4230C,
2011PhRvD..83l3513Z,2011JCAP...04..020Y}.
Due to the very weak interactions of DM particles, it is important
to investigate the sites with high DM density when searching for DM
annihilation signals. The proposed good candidates include the Galactic
center, dwarf galaxies, Galactic subhalos and cluster of galaxies.
The Milky Way globular clusters (GCs), defined as spherical
ensembles of stars that orbit the Galaxy as satellites, are also
potentially good targets for indirect detection of DM. The formation
of GCs remains a poorly understood problem. There are generally two
scenarios to describe the formation of GCs. The primordial formation
scenario suggests that GCs were formed in cosmological DM minihalos
before the formation of galaxies \cite{1984ApJ...277..470P}. The other
way to form GCs might be through star-forming events such as galaxy
mergers. There is evidence that metal-poor GCs might have
a cosmological origin while metal-rich GCs might form within galaxies
\cite{2006ARA&A..44..193B}. If the GCs were formed in cosmological
DM minihalos, they would experience the adiabatic contraction (AC) due
to the infall of baryons during the evolution of GCs and leave a high
density spike of DM. GCs are not usually discussed for DM detection
due to the poor knowledge about their origin and the observational fact
that there is in general no significant amount of DM in vicinity of GCs
\cite{2009MNRAS.396.2051B,2010MNRAS.406.2732L,2010arXiv1010.5783C}.
However, there is possibility that the high density spike of DM due to the
AC process may still play an important role for the annihilation signals.
The previous works to search for or constrain DM models with $\gamma$-rays
from GCs include \cite{2008PhRvD..78b7301Z,2008ApJ...678..594W}.
Recently the atmospheric Cherenkov telescope array High Energy Stereoscopic
System (H.E.S.S.) investigated two GCs, NGC 6388 and M 15, to search
for possible DM signals \cite{2011ApJ...735...12A}.
No $\gamma$-ray signal was detected by H.E.S.S. and strong constraints on
the DM model parameters were given. In this work, we use the three-year data
of {\it Fermi}-LAT to study the $\gamma$-ray emission from DM annihilation
in these two GCs. Detections of $\gamma$-ray emission from some GCs with
{\it Fermi}-LAT were reported \cite{2009Sci...325..845A,2010A&A...524A..75A,
2010ApJ...712L..36K,2011ApJ...729...90T}, including NGC 6388 studied here.
For M 15 there is no detection yet. In this work we will focus on the
possible DM component of the $\gamma$-ray emission, if any, from the GCs.
The upper limits of DM contribution will be derived and the constraints
on DM model parameters will be presented.
\section{Gamma-rays from DM annihilation in globular clusters}
M 15 is a metal-poor GC, which favors a cosmological origin for it
\cite{1996AJ....112.1487H}. For NGC 6388, there is strong evidence to
show the existence of an intermediate mass black hole (IMBH) with mass
$\sim 6\times10^3$ M$_{\odot}$ \cite{2007ApJ...668L.139L}, which also
suggests a cosmological origin even though the metallicity is relatively
high \cite{1996AJ....112.1487H}. Therefore we have good motivation to
search for the possible DM annihilation signal from these two GCs. The
estimated stellar masses of NGC 6388 and M 15 are $10^6$ and $5\times10^5$
M$_{\odot}$, with distances $11.5$ and $10.0$ kpc respectively
\cite{2007ApJ...668L.139L,2008ApJ...678..594W}. Other parameters of
them can be found in Table 2 of Ref. \cite{2011ApJ...735...12A}.
\subsection{DM density distribution}
For the purpose of this work, these two GCs are assumed to have formed in
the cosmological context, i.e., they were DM dominated in the primordial stage,
before reionization and the galaxy formation \cite{1984ApJ...277..470P}.
The AC process of baryons to form the GC is expected to pull DM into
the center and results in a high density core of DM
\cite{1986ApJ...301...27B}. After the AC process the heating effect of
DM due to scattering with baryons will tend to sweep out the high density
DM core, leaving a constant density \cite{2008PhRvD..77d3515B}.
The IMBH, if exists, may further modify the density profile through
adiabatic accretion \cite{2005PhRvL..95a1301Z}.
The modelling of the GC DM halo can be divided into three steps. The
first step is the AC process of the dark halo during the collapse of the
core of GC. Supposing that the DM particles travel on circular orbits,
the enclosed mass distribution of DM, $M(r)$, can be calculated with the
following equation \cite{1986ApJ...301...27B}
\begin{equation}
[M_{{\rm DM},i}(r_i)+ M_{b,i}(r_i)]r_i = [M_{{\rm
DM},f}(r_f)+M_{b,f}(r_f)]r_f,
\end{equation}
where the subscript $i(f)$ denotes the initial (final) mass distribution
of baryon or DM. The initial mass of the minihalo is assumed to be $10^7$
M$_{\odot}$, with Navarro-Frenk-White (NFW, \cite{1997ApJ...490..493N})
density profile for both the DM and baryon distributions\footnote{Note
that for such minihalos the density profile might be smoother
\cite{2009MNRAS.397.1169D,2011arXiv1111.1165S}, however, as shown in
\cite{2008PhRvL.100e1101S} the initial density profile does not
significantly affect the final DM profile after the AC process.}. The
mass fraction of baryons is adopted to be $20\%$. For the
convenience of comparison, these adoptions are the same as that in
Ref. \cite{2011ApJ...735...12A}. We should keep in mind that these
parameters may have large uncertainties and the quantitative results
of this work may also suffer from uncertainties.
Given the final baryon distribution, which can be derived according to
the observed surface density distribution of the GC\footnote{See, e.g.,
http://www.physics.mcmaster.ca/~harris/mwgc.dat}, one can get the DM
density profile after AC \cite{1986ApJ...301...27B}. The final baryon
density for NGC 6388 is taken from \cite{2011ApJ...735...12A}, which
was computed using the surface density profile given in
\cite{2007ApJ...668L.139L}. For M 15 the final baryon density is
taken from \cite{1997AJ....113.1026G}.
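Equation (2.1) can be solved shell by shell with a one-dimensional root find, using the fact that for non-crossing circular orbits the enclosed DM mass is conserved, $M_{{\rm DM},f}(r_f)=M_{{\rm DM},i}(r_i)$. The sketch below uses simple toy mass profiles -- the profiles, the $20\%$ baryon fraction and the function name are illustrative, not the fitted NFW and surface-density models used here:

```python
def ac_final_radius(r_i, M_dm_i, M_b_i, M_b_f, r_max=1e3):
    """Solve [M_dm,i(r_i)+M_b,i(r_i)] r_i = [M_dm,i(r_i)+M_b,f(r_f)] r_f
    for r_f by bisection; M_* are callables giving enclosed mass."""
    lhs = (M_dm_i(r_i) + M_b_i(r_i)) * r_i
    g = lambda r_f: (M_dm_i(r_i) + M_b_f(r_f)) * r_f - lhs
    lo, hi = 1e-9, r_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# toy profiles: baryons (20% of the mass) initially trace the DM,
# then condense into a more centrally concentrated distribution
M_dm_i = lambda r: 0.8 * r / (1.0 + r)
M_b_i  = lambda r: 0.2 * r / (1.0 + r)
M_b_f  = lambda r: 0.2 * r**2 / (0.01 + r**2)

r_f = ac_final_radius(1.0, M_dm_i, M_b_i, M_b_f)
assert r_f < 1.0        # baryonic infall drags the DM shell inward
```

Repeating this for a grid of initial shells $r_i$ gives the contracted profile $\rho_{\rm AC}(r)$ used below.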
The second step is to take into account the smoothing effect due to
baryon heating after AC process. For the convenience of discussion, we
employ the relaxation time $T_r$ defined as \cite{1987degc.book.....S}
\begin{equation}
T_{r} = \frac{3.4 \times 10^{9}}{ \ln{\Lambda}}\left(\frac{v_{\rm rms}}
{\rm km\,s^{-1}}\right)^3 \left(\frac{m}{\rm M_{\odot}}\right)^{-2}
\left(\frac{n}{\rm pc^{-3}}\right)^{-1}{\rm yr} \; ,
\label{eq:relaxationtime}
\end{equation}
where $v_{\rm rms}$ is the velocity dispersion of stars, $m$ is the
typical stellar mass in the GC, $n$ is the stellar number density, and
$\ln{\Lambda}$ is the Coulomb logarithm. $T_r$ is estimated to be
$\sim 7\times 10^4$ yr in the central region of M 15, and $\sim 8\times
10^6$ yr for central NGC 6388 \cite{2011ApJ...735...12A}. The relaxation
time is an increasing function of the distance to the center.
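Equation (2.2) is straightforward to evaluate. The sketch below assumes $\ln\Lambda \approx 10$ and illustrative central parameter values (not the exact inputs of Ref. \cite{2011ApJ...735...12A}), which reproduce the quoted order of magnitude for central M 15:

```python
def relaxation_time_yr(v_rms_kms, m_msun, n_pc3, ln_lambda=10.0):
    """Two-body relaxation time of Eq. (2.2); ln_lambda is an assumed
    typical value of the Coulomb logarithm."""
    return 3.4e9 / ln_lambda * v_rms_kms**3 / (m_msun**2 * n_pc3)

# illustrative central-M15-like values: v_rms ~ 12 km/s, solar-mass stars,
# n ~ 1e7 pc^-3 (assumed for illustration, not taken from the references)
t_r = relaxation_time_yr(12.0, 1.0, 1e7)
assert 1e4 < t_r < 1e5                    # order of the ~7e4 yr quoted above
# T_r increases outward as the stellar density drops
assert relaxation_time_yr(12.0, 1.0, 1e4) > t_r
```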
The DM will be heated up due to the scattering with the stars, which
will lead to the dissipation of the DM core \cite{2004PhRvL..92t1304M}.
The scattering time scale is comparable to $T_r$. The heating radius,
$r_{\rm heat}$, is then defined by $T_r(r_{\rm heat})=t_{\rm age}$,
where $t_{\rm age}$ is the age of the Universe. Therefore at small
radius $r<r_{\rm heat}$, the relaxation time is shorter than the age of
the Universe and the heating effect on DM is important. At large radius
$r>r_{\rm heat}$ the DM distribution is unaffected by heating. The
heating radii are estimated to be about $5$ pc and $4$ pc for M 15 and
NGC 6388 respectively. Roughly speaking we have the DM density distribution
with baryonic heating
\begin{equation}
\rho(r)=\left\{
\begin{array}{ll}
\rho_0, & r<r_{\rm heat},\\
\rho_0\times\frac{\rho_{\rm AC}(r)}{\rho_{\rm AC}(r_{\rm heat})},
& r>r_{\rm heat},
\end{array}\right.
\end{equation}
where $\rho_{\rm AC}(r)$ is the DM density profile after AC, which can be
solved with Eq. (2.1), and $\rho_0$ is the density at $r=r_{\rm heat}$.
The third step is to consider the effect of the IMBH, if it exists.
The AC profile of DM will not be significantly affected by the IMBH due
to its small mass compared with the total baryon mass. However, the
subsequent adiabatic accretion onto the IMBH will modify the DM density
profile after dynamical heating. The radius within which the IMBH is
gravitationally dominant is defined by $M(< r_h)=\int^{r_h}_0\rho(r){\rm d}^3r =
2M_{\rm IMBH}$. Then the DM distribution can be expressed in three regions
\begin{equation}
\rho(r)=\left\{
\begin{array}{ll}
\rho_0(r/r_h)^{-3/2}, & r<r_h,\\
\rho_0, & r_h<r<r_{\rm heat},\\
\rho_0\times\frac{\rho_{\rm AC}(r)}{\rho_{\rm AC}(r_{\rm heat})},
& r>r_{\rm heat}.
\end{array}\right.
\end{equation}
The innermost density profile ($\propto r^{-3/2}$) corresponds to
the collisionally regenerated structures (``crest'') of DM due to
the joint evolution of baryons and DM in the environment of a central
black hole \cite{2007PhRvD..75d3517M}. For NGC 6388, with $M_{\rm IMBH}\sim
6\times 10^3$ M$_{\odot}$, $r_h\sim 0.4$ pc is found from the
observed baryon density. For M 15 there is no consensus on the
existence of an IMBH \cite{1996A&A...315..396D,2002AJ....124.3270G}, and
we employ Eq. (2.3) to describe the DM density profile of M 15 without
considering the possible IMBH.
\subsection{Astrophysical $J$-factor}
For Majorana fermion DM particles, the $\gamma$-ray flux from DM
annihilation can be written as
\begin{equation}
\label{eqn:phi}
\Phi(\Delta\Omega,E_{\gamma})=\frac{1}{4\pi} \times \frac{\langle
\sigma v\rangle}{2m^2_{\chi}}\,\frac{\mathrm{d}N_{\gamma}}{\mathrm{d}
E_{\gamma}} \times \bar{J}(\Delta\Omega)\Delta\Omega,
\end{equation}
where $m_{\chi}$ and $\langle\sigma v\rangle$ are the mass and velocity
weighted thermal average annihilation cross section of DM particles,
${{\rm d}N_{\gamma}/{\rm d}E_{\gamma}}$ is the $\gamma$-ray
spectrum for one annihilation. The astrophysical factor ($\bar{J}$) is
the integral of the density square along the line of sight (LOS) averaged
over the solid angle $\Delta\Omega$
\begin{equation}
\label{eqn:jbar}
\bar{J}(\Delta\Omega) = \frac{1}{\Delta\Omega}
\int_{\Delta\Omega}{\rm d}\Omega\int_{\rm LOS}{\rm d}l\,\rho^2(r(l)).
\end{equation}
For H.E.S.S. observations, the integral solid angle is $\Delta\Omega=
5\times 10^{-6}$ sr, which corresponds to a cone with half angle
$0.07^{\circ}$ \cite{2011ApJ...735...12A}. Since the resolution angle
of {\it Fermi}-LAT in GeV range is much larger ($>0.5^{\circ}$,
\cite{2009ApJ...697.1071A}), we need to enlarge the integral solid
angle. The tidal radii of both GCs are about $30$ pc, and the distances
are about $10$ kpc \cite{2011ApJ...735...12A}. The opening angles of
these two GCs are $\sim 0.17^{\circ}$. Therefore they can be regarded
as point sources for {\it Fermi}-LAT and we integrate all the DM
contribution to the tidal radius to calculate the $J$-factor.
It is found that the final $J\times\Delta\Omega$ is about $7.8\times
10^{19}$ ($3.4\times10^{20}$) GeV$^2$ cm$^{-5}$ for M 15 (NGC 6388),
which is larger by $\sim10\%$ ($0.1\%$) compared with that within the
$0.07^{\circ}$ cone as adopted by H.E.S.S.. The $J$-factor of NGC 6388 is
larger than that of M 15 mainly due to the difference of the density profiles
after AC, which depends on the final baryon density profiles of the GCs,
and due to the heating effect. Compared with the dwarf galaxies as given in
\cite{2011PhRvL.107x1302A}, the $J$-factors of GCs are generally larger,
which is also due to the AC process.
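Since the GCs are point-like for {\it Fermi}-LAT, $\bar{J}\Delta\Omega$ reduces to a volume integral of $\rho^2$ divided by the distance squared. The sketch below evaluates this for a toy three-zone profile of the form of Eq. (2.4); the normalization $\rho_0$, the radii and the outer slope are placeholders rather than the fitted values, so only the order of magnitude is meaningful:

```python
import numpy as np

PC_CM, KPC_CM = 3.0857e18, 3.0857e21

def rho(r, rho0=1e3, r_h=0.4, r_heat=4.0, outer_slope=2.0):
    """Toy three-zone profile of Eq. (2.4); rho0 in GeV cm^-3, radii in pc,
    with the post-AC tail approximated as a power law r^-outer_slope."""
    r = np.asarray(r, dtype=float)
    return np.where(r < r_h, rho0 * (r / r_h) ** -1.5,
           np.where(r < r_heat, rho0,
                    rho0 * (r / r_heat) ** -outer_slope))

def j_times_domega(d_kpc=11.5, r_tidal=30.0):
    """Point-source approximation: J*dOmega ~ (1/d^2) * int rho^2 dV,
    integrated to the tidal radius; result in GeV^2 cm^-5."""
    r = np.logspace(-3, np.log10(r_tidal), 4000)        # pc
    integrand = 4.0 * np.pi * r**2 * rho(r) ** 2        # GeV^2 cm^-6 pc^2
    vol = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    return vol * PC_CM**3 / (d_kpc * KPC_CM) ** 2

j = j_times_domega()
assert 1e17 < j < 1e22    # same broad range as the values quoted above
```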
\section{{\it Fermi}-LAT data analysis}
The {\it Fermi}-LAT data\footnote{http://fermi.gsfc.nasa.gov/ssc/data}
used in this analysis are the new ``Pass 7'' data recorded between 4
August 2008 and 2 September 2011. Photons with Event Class ``Source''
(evclass=2) and zenith angle less than $100^{\circ}$
are selected. The energy range of events is cut from $200$ MeV to
$300$ GeV, and the radius of region-of-interest (ROI) is adopted
to be $6^{\circ}$. We use the LAT Scientific Tools v9r23p1 to do this
analysis. The unbinned likelihood analysis method is adopted. The
instrument response function used is ``{\tt P7SOURCE\_V6}''.
For the diffuse background, we use the Galactic diffuse model
{\tt gal\_2yearp7v6\_v0.fits} and the isotropic background spectrum
{\tt iso\_p7v6source.txt} provided by the {\it Fermi} Science Support
Center\footnote{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}.
\subsection{NGC 6388}
The detection of $\gamma$-ray emission from NGC 6388 was reported
in \cite{2010A&A...524A..75A}, with {\it Test Statistic}
\cite{1996ApJ...461..396M} value TS$=86.6$. The spectral energy
distribution can be fitted using a power-law function with an exponential
cutoff, $E^{-\Gamma}\exp(-E/E_c)$, which is expected for the emission from
a population of millisecond pulsars (MSP). The best-fitting parameters
are $\Gamma=1.1^{+0.7}_{-0.5}$ and $E_c=1.8^{+1.2}_{-0.7}$ GeV
\cite{2010A&A...524A..75A}. Using the likelihood tool {\tt gtlike} we
re-do the spectral analysis with more data. The source model XML file
is generated using the user contributed tool {\tt
make2FGLxml.py}\footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/user/}
based on the 2FGL source catalog \cite{2011arXiv1108.1435T}. The spectrum
of NGC 6388 is also modelled with $E^{-\Gamma}\exp(-E/E_c)$. By setting
all the source parameters within the ROI free, the best-fitting parameters
for NGC 6388 are $\Gamma=1.21\pm0.17$ and $E_c=1.82\pm0.35$ GeV, with
a TS value $596$. The fitting parameters are consistent with that given
in \cite{2010A&A...524A..75A}.
To derive the spectral energy distribution (SED) of NGC 6388, we divide
the data into different energy bins, and use the {\tt gtlike} tool to fit
the parameters for each bin. Two methods are adopted in the fit. We
first fix the parameters of all other sources and the normalizations
of diffuse backgrounds derived above in the global fit, leaving only
the normalization parameters of NGC 6388 and the very bright pulsar
PSR J1709-4429 free. The spectral parameters of NGC 6388 and PSR
J1709-4429 are also fixed to the best-fitting values. Because each
energy bin is relatively narrow, the precise values of the spectral
parameters have little effect on the final results
\cite{2010A&A...524A..75A}. The results are shown by the filled
circles in Figure \ref{fig:spectrum}. The solid line shows the
fitting curve with spectrum $E^{-\Gamma}\exp(-E/E_c)$, representative
of MSP-type emission. There is good agreement between the global fit
and the individual fits for the different energy bins. Then we relax the
normalization parameters of all the sources and the diffuse backgrounds,
and re-do the fit. The results are shown by the empty circles in Figure
\ref{fig:spectrum}. We can see that there are some differences between
the results of these two methods, which might originate from the
complexity of the diffuse background models.
\FIGURE{
\includegraphics[width=0.6\columnwidth]{spectrum.eps}
\caption{SED of NGC 6388.}
\label{fig:spectrum}
}
We then add a DM component at the position of NGC 6388 to search for
possible DM contribution to the emission. Different annihilation final
states are investigated, including $b\bar{b}$, $W^+W^-$, $\mu^+\mu^-$,
$\tau^+\tau^-$ and a $\gamma$-ray line. We use the PYTHIA simulation tool to
calculate the $\gamma$-ray yield spectrum for each final state
\cite{2006JHEP...05..026S}. A series of DM masses from $10$ GeV to
$10$ TeV is considered. For the monochromatic $\gamma$-ray line we use a
Gaussian function to model its spectral shape. Photon energies
from $300$ MeV to $200$ GeV are searched. The width of the Gaussian
as a function of photon energy is taken to be the energy resolution of
{\it Fermi}-LAT, which is about $8\%-13\%$ ($\Delta E/E$) in this energy
range \cite{2009ApJ...697.1071A}. Given the spectrum shape of DM
contribution, we use the python likelihood
tool\footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/python\_tutorial.html}
to fit the normalization parameter and derive the flux upper limits.
It is found that the detected $\gamma$-ray spectrum of NGC 6388 can also
be fitted with a DM component, with mass $\sim 25$ GeV and annihilation
final state $b\bar{b}$, as shown by the dashed line in Figure
\ref{fig:spectrum}. Note that the recent analysis of the {\it Fermi}-LAT
data in the Galactic center region also showed possible additional
emission compatible with DM contribution with $m_{\chi}\sim30$ GeV
for $b\bar{b}$ annihilation final state \cite{2011arXiv1110.0006H}.
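The line spectral model described above can be sketched as follows. This is an illustrative Python fragment only: a fixed 10\% fractional width is used as a stand-in for the quoted $8\%-13\%$ energy-dependent LAT resolution.

```python
import math

def line_model(E, E0, frac_res=0.10):
    """Unit-normalized Gaussian line at energy E0. The fixed 10% fractional
    width is a stand-in for the 8%-13% energy-dependent LAT resolution."""
    sigma = frac_res * E0
    return math.exp(-0.5 * ((E - E0) / sigma)**2) / (sigma * math.sqrt(2.0 * math.pi))

# The shape integrates to ~1 over a +/- 4 sigma window around the line
E0, dE = 50.0, 0.01                       # a test line energy [GeV]
total = sum(line_model(E0 + k * dE, E0) * dE for k in range(-2000, 2001))
print(round(total, 3))                    # ~1.0
```

In the actual search the normalization of this shape is the single free parameter fitted by the likelihood tool at each trial line energy.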
It is well motivated that the $\gamma$-ray emission from GCs may come
from MSPs; therefore we do not claim a DM origin of the $\gamma$-rays
of NGC 6388. In any case we can instead set upper limits on the
contribution from DM annihilation. Here the upper limits are derived
individually for each final state and each DM particle mass.
We use two ways to fit the data. The first one (Fit 1) is to fix
all the parameters of sources derived in the above global fit. The free
parameters are the normalizations of the diffuse backgrounds and the DM
component. The other way (Fit 2) is to leave the normalizations of sources
in the ROI and the diffuse backgrounds as well as the normalization
of the DM component free. The spectral parameters of the sources are
fixed to be the best fitting values in the global fit. Fit 1 corresponds
to a more stringent constraint, based on the assumption that the observed
$\gamma$-rays come from astrophysical sources. Fit 2 gives a weaker but
more conservative constraint. The $95\%$ confidence level (C.L.) upper
limits of the $>100$ MeV fluxes from DM annihilation for NGC 6388
are shown in the left panel of Figure \ref{fig:ul}. The thick and
thin lines correspond to Fit 1 and Fit 2 respectively. The same procedure is
applied to the line analysis. The upper limits on line emission are
shown in Figure \ref{fig:ulline}.
\FIGURE{
\includegraphics[width=0.45\columnwidth]{UL_NGC6388.eps}
\includegraphics[width=0.45\columnwidth]{UL_M15.eps}
\caption{Derived 95\% C.L. upper limits of the WIMP annihilation
contribution to the $>100$ MeV $\gamma$-ray fluxes as functions of WIMP
mass $m_{\chi}$, for GCs NGC 6388 (left) and M 15 (right).}
\label{fig:ul}
}
\FIGURE{
\includegraphics[width=0.6\columnwidth]{UL_line.eps}
\caption{95\% C.L. upper limits of the monochromatic $\gamma$-ray
emission as functions of $\gamma$-ray energy.}
\label{fig:ulline}
}
\subsection{M 15}
There was no firm detection of emission from M 15 in the previous
analysis. In \cite{2010A&A...524A..75A} M 15 was reported to have
a very weak signal with TS$=5.4$. In our analysis with more data, we
find a somewhat higher TS value of ${\sim}12$, for both the power-law
and power-law + exponential cutoff models. We can derive the upper
limits on DM annihilation to $\gamma$-rays, similar to the
analysis of NGC 6388. Because there is no detection of $\gamma$-rays,
only the method Fit 2 is adopted for M 15.
There are three other sources in the ROI of M 15, 2FGL J2115.4+1213,
2FGL J2112.5+0818 and 2FGL J2147.3+0930. The free parameters include
the spectral parameters of M 15 (power-law model for a possible MSP
contribution) and these three sources, the normalizations of the
diffuse backgrounds and the DM component of M 15. The results are
shown in Figures \ref{fig:ul} and \ref{fig:ulline}.
\section{Constraints on DM models}
Integrating Eq.~(\ref{eqn:phi}) above $100$ MeV, we can translate the
flux upper limits into upper limits on the DM annihilation cross
section. The results are presented in Figures \ref{fig:svbw}-\ref{fig:svline}.
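This conversion is simple enough to sketch in code. The fragment below assumes the standard annihilation flux formula $\Phi=\langle\sigma v\rangle N_\gamma J/(8\pi m_\chi^2)$ (presumably the content of Eq.~(\ref{eqn:phi}), which is defined earlier in the paper); the numerical inputs are placeholder values, not those used in this work.

```python
import math

def sigmav_upper_limit(phi_ul, m_chi, n_gamma, j_factor):
    """Invert Phi = <sigma v> N_gamma J / (8 pi m_chi^2) for <sigma v>.
    Units must be mutually consistent; the inputs below are placeholders."""
    return 8.0 * math.pi * m_chi**2 * phi_ul / (n_gamma * j_factor)

# Doubling the J-factor halves the cross-section limit, as expected:
ul1 = sigmav_upper_limit(1e-9, 100.0, 30.0, 1e20)
ul2 = sigmav_upper_limit(1e-9, 100.0, 30.0, 2e20)
print(ul1 / ul2)                          # 2.0
```

The inverse scaling with the $J$-factor is why the centrally concentrated DM profiles of GCs yield stronger cross-section limits than dwarf galaxies for the same flux sensitivity.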
Figure \ref{fig:svbw} shows the constraints on $m_{\chi}-
\langle\sigma v\rangle$ for $b\bar{b}$ (left) and $W^+W^-$ (right)
final states. For comparison we also show the
results from the combined analysis of {\it Fermi} observations of $10$
dwarf galaxies \cite{2011PhRvL.107x1302A} and those given by H.E.S.S.
observations of these two GCs ($W^+W^-$, \cite{2011ApJ...735...12A}).
It is shown that the constraints given in this work are generally
stronger than those for dwarf galaxies, at least for DM masses $\gtrsim50$
GeV. Because of the high-density spike from the AC process, the
distribution of DM is more concentrated than in the initial NFW profile.
Although the heating effect from stars will smooth out the central density
spike, the $J$-factors of GCs are still higher than those of dwarf galaxies
(e.g., estimated with NFW profiles), and the constraints on DM models are
stronger accordingly.
For the $b\bar{b}$ final state, the DM-annihilation-induced $\gamma$-ray
spectrum is similar to the observed spectrum of NGC 6388, so the
constraint is a bit weaker when $m_{\chi}<50$ GeV. However, for the
method Fit 1 the constraint is always stronger than that for dwarf
galaxies. The H.E.S.S. constraints are more effective for massive DM
($m_{\chi}\gtrsim2$ TeV).
Also shown in Figure \ref{fig:svbw} are the theoretically expected
neutralino annihilation cross sections (multiplied with the branching
ratios to $b\bar{b}$ and $WW+ZZ$) in the Minimum Supersymmetric
Standard Model (MSSM). In the right panel we sum the model predicted
cross sections to the $W^+W^-$ and $ZZ$ channels together due to the similarity
of $\gamma$-ray spectra from these two channels. We utilize the numerical code
micrOMEGAs \cite{2007CoPhC.176..367B,2009CoPhC.180..747B} to perform a
random scan in the 7-dimensional parameter space at the electroweak scale.
These parameters include the CP-odd Higgs mass $m_A$, the Higgs mixing
mass parameter $\mu$, the wino mass parameter $M_2$, the sfermion mass
parameter $m_{\tilde{f}}$, the ratio of two Higgs vacuum expectation
values $\tan \beta$, the trilinear parameters of the third family
squark $A_{\tilde{b}}$ and $A_{\tilde{t}}$. The other trilinear
parameters are set to zero. We also impose the assumptions that the
gaugino mass parameters are related by $M_1:M_2:M_3=\alpha_1:\alpha_2:
\alpha_3$ for grand unification, where the $\alpha_i$ are the coupling
constants of the three standard model gauge groups, so that $M_1/M_2=
\frac{5}{3} \tan^2 \theta_{_W}$. The ranges of the parameters are taken as
follows: 50 GeV$< |\mu|, M_2 <$10 TeV, 100 GeV $<m_A, m_{\tilde{f}}<$1 TeV,
1$<\tan \beta<$60, $-5m_{\tilde{f}} <A_t, A_b<
5m_{\tilde{f}}$ and ${\rm sign}(\mu)=\pm 1$. Several constraints from
accelerator experiments and DM detection are implemented in our
numerical scan. We set the limit on the $\rho$ parameter as $\rho-1<2.2
\times 10^{-3}$ \cite{2006JHEP...33..0603}. Some important flavor physics
constraints include: $Br(B \to X_s \gamma) = (3.55 \pm 0.24) \times
10^{-4}$ \cite{bsgamma}, $Br(B_s \to \mu^+ \mu^- ) = (0 \pm 1.4)
\times 10^{-8}$ \cite{2010PhLB..693..539D,2011PhLB..699..330L},
$Br(B_u \to \tau \nu )/Br(B_u \to \tau \nu )_{\rm SM} = 1.28 \pm 0.38$
\cite{bsgamma}. Here we only require the supersymmetric contributions
to satisfy these constraints at the $3\sigma$ level, and adopt a very
conservative bound on the muon anomalous magnetic moment
\cite{2011EPJC...71.1515D} as $-11.4 \times 10^{-10} < \delta \alpha_\mu
< 9.4 \times 10^{-9}$ \cite{2006JHEP...33..0603}. The mass bound on the
standard-model-like Higgs, $m_h> 114$ GeV, the LEP limits on the masses of
light charged sparticles (for details, see \cite{2007CoPhC.176..367B,
2009CoPhC.180..747B}), and the DM direct detection constraints from XENON100
\cite{2011PhRvL.107m1302A} are also taken into account.
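The structure of the scan can be illustrated schematically as follows. The parameter ranges follow the text, but in practice all observables (spectrum, relic density, flavor observables, direct detection) are computed by micrOMEGAs, so only the random draw and a generic $3\sigma$ acceptance window are shown; everything else here is a hypothetical placeholder.

```python
import random

def draw_point(rng):
    """Draw one random point in the 7-dimensional electroweak-scale MSSM
    parameter space, with the ranges quoted in the text (all masses in GeV)."""
    m_sf = rng.uniform(100.0, 1000.0)                  # sfermion mass
    return {
        "mu":   rng.choice([-1.0, 1.0]) * rng.uniform(50.0, 10000.0),
        "M2":   rng.uniform(50.0, 10000.0),
        "mA":   rng.uniform(100.0, 1000.0),            # CP-odd Higgs mass
        "m_sf": m_sf,
        "tanb": rng.uniform(1.0, 60.0),
        "At":   rng.uniform(-5.0 * m_sf, 5.0 * m_sf),
        "Ab":   rng.uniform(-5.0 * m_sf, 5.0 * m_sf),
    }

def within_3sigma(value, central, sigma):
    """Generic 3-sigma acceptance window used for the flavor observables."""
    return abs(value - central) <= 3.0 * sigma

rng = random.Random(0)
points = [draw_point(rng) for _ in range(1000)]
# e.g. a model prediction of Br(B -> Xs gamma) would be tested against
# the measured (3.55 +/- 0.24) x 10^-4:
print(within_3sigma(3.0e-4, 3.55e-4, 0.24e-4))   # True
```

Points passing all such windows, plus the relic-density and direct-detection checks, populate the scatter of models shown in Figures \ref{fig:svbw}-\ref{fig:svline}.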
In Figure \ref{fig:svbw} the squares are for DM models which can give
the right relic density \cite{2011ApJS..192...18K} if DM is thermally
produced in the early Universe (labelled as ``MSSM-thermal''). The
triangles are the cases with thermal relic density not higher than the
measured value; the correct DM abundance in these models could be
produced via some non-thermal mechanism. We can see that for the $b\bar{b}$
final state the current constraint can reach the natural scale for
thermally produced DM with mass of $O(10)$ GeV. In the MSSM model,
large neutralino annihilation cross section to $b\bar{b}$ final states
could arise from the resonance effect with $m_{A}\sim 2m_{DM}$. Some
non-thermal models with this feature have been excluded by our
constraints. For the $W^+W^-$ channel the constraint is a bit weaker but also
close to the natural scale. In the MSSM scenario, large higgsino or wino
component (depending on the relations between the three parameters $M_1$,
$M_2$ and $\mu$) in the neutralino would enhance DM annihilation cross
section to gauge bosons significantly. If the neutralino mass lies in the
range $(80,\,300)$ GeV, many non-thermal models would also be stringently
constrained by our results.
\FIGURE{
\includegraphics[width=0.45\columnwidth]{bb.eps}
\includegraphics[width=0.45\columnwidth]{ww.eps}
\caption{Constraints on DM mass vs. annihilation cross section, for
$b\bar{b}$ (left) and $W^+W^-$ (right) final states. Points are a random
scan of the MSSM parameter space taking into account the current
constraints from accelerator data. Magenta dotted lines in the right
panel are the constraints obtained from H.E.S.S. observations of NGC 6388
(lower) and M 15 (upper).}
\label{fig:svbw}
}
\FIGURE{
\includegraphics[width=0.45\columnwidth]{mu.eps}
\includegraphics[width=0.45\columnwidth]{tau.eps}
\caption{Same as Fig. \ref{fig:svbw} but for leptonic final states
$\mu^+\mu^-$ (left) and $\tau^+\tau^-$ (right) respectively.}
\label{fig:svmt}
}
\FIGURE{
\includegraphics[width=0.45\columnwidth]{gg.eps}
\includegraphics[width=0.45\columnwidth]{gz.eps}
\caption{Same as Fig. \ref{fig:svbw} but for DM annihilation into
$\gamma\gamma$ (left) and $\gamma Z^0$ (right) final states.}
\label{fig:svline}
}
Motivated by the recent observations of the CR positron/electron excesses
at PAMELA, ATIC and Fermi-LAT
\cite{2009Natur.458..607A,2008Natur.456..362C,2008PhRvL.101z1104A,
2009A&A...508..561A,2009PhRvL.102r1101A}, and the non-excess of antiprotons
\cite{2009PhRvL.102e1101A,2010PhRvL.105l1101A}, leptonic DM models
have been proposed to explain the data (e.g., \cite{2009NuPhB.813....1C,
2009PhRvD..79b3512Y,2009PhRvL.103c1103B,2010NuPhB.831..178M,
2010PhRvD..81b3516L,2012PhRvD..85d3507L}). We also study the constraints
on the leptonic final states $\mu^+\mu^-$ and $\tau^+\tau^-$ by DM
annihilation which might
be responsible for the $e^{\pm}$ excesses. Here the inverse Compton
scattering $\gamma$-rays generated by the $e^{\pm}$ decay products
of muons or taus are not taken into account. As illustrated in
\cite{2010ApJ...712..147A}, including the inverse Compton $\gamma$-rays
would strengthen the constraints on DM model parameters, by an amount
depending on the uncertainties of the $e^{\pm}$ diffusion process. Thus
the results given here should be conservative. Constraints on the $m_{\chi}-
\langle\sigma v\rangle$ parameter space are shown in Figure \ref{fig:svmt}.
The contours show the favored parameter region to fit the CR $e^{\pm}$
data \cite{2010NuPhB.831..178M}. It is shown that the models invoked to
explain the $e^{\pm}$ excesses are excluded by the {\it Fermi}-LAT data
on these GCs.
Finally we study the constraints on possible monochromatic $\gamma$-ray
line emission from DM annihilation, e.g., $\chi\chi\to\gamma\gamma$ or
$\chi\chi\to\gamma Z$. No significant line emission is found in the
data. The upper limits of line emission are derived. The constraints on
cross sections to $\gamma\gamma$ and $\gamma Z$ are given in Figure
\ref{fig:svline}. Note we have $E_{\gamma}=m_{\chi}$ for
$\chi\chi\to\gamma\gamma$, and $E_{\gamma}=m_{\chi}(1-m_Z^2/4m_{\chi}^2)$
for $\chi\chi\to\gamma Z$. The results derived with {\it Fermi}-LAT data
including the Galactic center region by the {\it Fermi} collaboration
(NFW profile, \cite{2010PhRvL.104i1302A}) and by Vertongen \& Weniger
\cite{2011JCAP...05..027V} are also shown for comparison. Our
constraints are a bit weaker than the results of these two works.
This is reasonable because their analysis regions are much
larger and include the Galactic center region, which gives a higher
$J$-factor of DM annihilation.
\section{Conclusions and Discussions}
The GCs are thought to form in the cosmological context, with an early AC
process that pulls DM into the halo center and results in a very high
annihilation luminosity. Thus the search for $\gamma$-rays from GCs may be
an effective probe of the particle nature of DM. In this work we analyze
the {\it Fermi}-LAT three-year data (Pass 7) of the GCs NGC 6388 and M 15
and constrain DM annihilation models. A clear detection of $\gamma$-ray
emission from NGC 6388 is found, with TS value ${\sim}600$. The spectrum
of NGC 6388 can be well fitted with a power-law + exponential cutoff
function, which is expected for the emission of a population of MSPs. We
find that a DM scenario with $m_{\chi}\sim25$ GeV and $b\bar{b}$ final
state can also fit the SED. For M 15 no significant $\gamma$-ray emission
is found (the spectral fit indicates a potential source with
TS $\approx12$).
Assuming there is an additional spectral component from DM annihilation
in these two GCs, we derive the upper limits of the DM component for
different DM masses ($10\,{\rm GeV}-10\,{\rm TeV}$) and annihilation
final states ($b\bar{b},\,W^+W^-,\,\mu^+\mu^-,\,\tau^+\tau^-,\,
\gamma\gamma,\,\gamma Z$) (Figures \ref{fig:ul} and \ref{fig:ulline}).
The corresponding constraints on the DM annihilation cross section are
given in Figures \ref{fig:svbw}-\ref{fig:svline}. Except for the line
emission, the constraints are stronger than those derived
from the {\it Fermi}-LAT observations of dwarf galaxies. For
DM masses below ${\sim}$TeV our constraints are also stronger than those
given by H.E.S.S. observations of the same GCs. For the $b\bar{b}$ and
$W^+W^-$ final states, which are generally expected from supersymmetric
DM models, the constraints can reach the natural scale at which DM
is thermally produced. In particular, the leptonic annihilation models
invoked to explain the CR $e^{\pm}$ excesses can be excluded by the
current analysis.
However, the present analysis carries significant uncertainties: for
example, the properties of the hypothetical DM halo and the origin and
evolution of the GCs are far from clear. The GCs were assumed to be formed in
the cosmological context, and the DM density profiles in the GCs are
modelled taking into account the most probable astrophysical processes,
e.g., the AC by baryons, adiabatic growth of an IMBH and the scattering
by stars. Future studies on the observations and modelings of the DM
distribution in the GCs are necessary to improve the current work.
\acknowledgments
We thank Yi-Zhong Fan, Rui-Zhi Yang and Xiao-Yuan Huang for discussion.
This work is supported by the Natural Science Foundation of China under
the grant Nos. 11075169, 11075074, 11065004, 11105155, 11105157 and
11175251, the 973 project under grant No. 2010CB833000 and the Chinese
Academy of Science under Grant No. KJCX2-EW-W01. L. Feng is supported
by the Research Fund for the Doctoral Program of Higher Education under
grant No. 200802840009.
\bibliographystyle{JHEP}
\section{Introduction}\label{sec:introduction}
Magnetic reconnection is the process by which magnetic energy is converted to plasma energy via a rapid topological rearrangement of magnetic-field lines \citep{zy09,ykj10,lu16}. It is usually preceded by a slow phase in which magnetic flux is accumulated in an increasingly thin current sheet (CS). Recently, it has been conjectured that this preparatory phase of CS formation, along with the material properties of the host plasma, determine the characteristics of the tearing modes that ultimately disrupt the sheet and thereby set the maximum aspect ratio above which CSs cannot survive \citep{pv14,tenerani15,lu16,ul16,comisso17,huang17}. This maximum aspect ratio is important for (at least) two reasons. First, the large aspect ratio of the Sweet--Parker CS \citep{parker57,sweet58} in high-Lundquist-number plasmas, being violently unstable to the plasmoid instability \citep{loureiro07,bhattacharjee09}, may not be realizable during CS formation. Second, the maximum aspect ratio may define a disruption scale in critically balanced Alfv\'{e}nic turbulence, below which the intense, sheet-like structures become tearing unstable and break up \citep{bl17,lb17a,lb17b,mallet17a,mallet17b}.
All of the work thus far on CS formation and tearing-mediated disruption was either couched within a collisional magnetohydrodynamic (MHD) framework or focused on collisionless plasmas with $\beta\doteq{8}\upi{nT}/B^2\lesssim{1}$ ($n$ is the plasma density, $T$ the temperature, and $B$ the magnetic-field strength). The latter restriction precludes application of those results to many dilute, weakly collisional astrophysical plasmas, whose large temperatures and relatively weak magnetic fields imply $\beta\gg{1}$. For example, $n\sim{10}^{-3}~\mrm{cm}^{-3}$, $T\sim{5}~\mrm{keV}$, and $B\sim{1}~\mu\mrm{G}$ in the hot intracluster medium (ICM) of galaxy clusters imply $\beta\sim{10}^2$ \citep{ct02,sc06}; $n\sim{100}~\mrm{cm}^{-3}$, $T\sim{2}~\mrm{keV}$, and $B\sim{1}~\mrm{mG}$ near the accretion radius of Sgr A$^\ast$ at the Galactic center imply $\beta\sim{10}$ \citep{quataert03,marrone07}. The hallmark of such plasmas is that the embedded magnetic field, while energetically subdominant, nevertheless has a strength tens of orders of magnitude above that required to magnetize the plasma (i.e.~$\Omega_i\tau\ggg{1}$ and $\rho_i\lll{L}$, where $\Omega_i\doteq{eB/m_ic}$ is the ion Larmor frequency, $m_i$ is the ion mass, $\rho_i\doteq{v}_{\mrm{th}i}/\Omega_i$ is the ion Larmor radius, $v_{\mrm{th}i}\doteq(2T/m_i)^{1/2}$ is the ion thermal speed, and $\tau$ and $L$ are representative macroscopic time and length scales, respectively). This hierarchy of scales, particularly in weakly collisional plasmas with collision frequencies $\nu$ satisfying $\nu\tau\ll{1}$, biases the plasma properties with respect to the magnetic-field direction \citep{braginskii65}. Notably, the thermal pressure becomes {\em anisotropic}.
There is a relatively large body of work on the impact of pressure anisotropy on tearing modes \citep{cd81,coppi83,cp84,cl85,ambrosiano86,shi87,karimabadi05,haijima08,quest10,matteini13,gingell15}, as well as on the production and impact of pressure anisotropy during the reconnection process itself \citep{drake06,le09,schoeffler11,egedal13,cassak15,le16}. Here we focus instead on the pressure anisotropy adiabatically produced during the CS formation, prior to the reconnection event. Namely, as the CS thins, the magnetic-field strength in the in-flowing fluid elements increases. An increase in field strength in a weakly collisional, magnetized plasma leads, by adiabatic invariance, to an increase (decrease) in the thermal pressure perpendicular (parallel) to the field lines \citep{cgl56}. Above an $\mc{O}(1/\beta)$ threshold, this pressure anisotropy drives the mirror instability \citep{barnes66,hasegawa69,sk93}, which produces strong distortions in the field lines and traps particles on ion-Larmor scales \citep{kunz14,riquelme15}. In what follows, we ask how the production of pressure anisotropy during CS formation and the consequent triggering of ion-Larmor-scale mirror instabilities in a $\beta\gg{1}$ plasma impacts the onset of tearing-mediated reconnection.
\section{Prerequisites}
\subsection{CS formation and pressure anisotropy}
We first establish that pressure anisotropy is produced during CS formation. For that, we adopt a simple local model for CS formation based on a one-dimensional generalization of the Chapman--Kendall solution \citep[][\S 2]{ck63,tolman18}. A sheared magnetic field $\bb{B}(x,t)=B_\mrm{r}[x/a(t)]\hat{\bb{y}}+B_\mrm{g}\hat{\bb{z}}$ is frozen into an incompressible, time-independent fluid velocity $\bb{u}(x,y)=-(x\hat{\bb{x}}-y\hat{\bb{y}})/2\tau_\mrm{cs}$, where $B_\mrm{r}$ and $B_\mrm{g}\doteq \theta B_\mrm{r}$ are constants describing the strengths of the reconnecting and guide components of $\bb{B}$, respectively, and $\tau_\mrm{cs}$ is the characteristic CS-formation timescale. These expressions satisfy the reduced MHD equations provided that the CS half-thickness $a(t)$ and length $L(t)$ satisfy $a(t)/a_0=L_0/L(t)=\exp(-t/\tau_\mrm{cs})$, where the ``$0$'' subscript denotes an initial value. This model may be regarded as a Taylor expansion about the neutral line ($x=0$) of a more complicated (e.g.~Harris) CS profile, and so we restrict its validity to $|y|\ll{L(t)}$ and $|x|\lesssim{a(t)}$, beyond which $\bb{B}$ is taken to be spatio-temporally constant. (Indeed, this simple model is only meant to illustrate that $\Delta_p>0$ can be driven during CS formation.) We assume $\sqrt{\rho_{i,\mrm{r}}/a}\ll\theta\lesssim{1}$ and $\Omega_i\tau_\mrm{cs}\gg{1}$, where $\rho_{i,\mrm{r}}$ is the ion-Larmor radius computed using $B_\mrm{r}$, so that the entire CS is well magnetized (even near $x=0$).\footnote{This guarantees that any particle whose guiding center lies near $x=0$ executes Larmor motion about $B_\mrm{g}$ rather than a betatron orbit with turning points at ${\sim}\sqrt{\rho_{i,\mrm{r}}a}$ \citep[as in][]{dobrowolny68}.}
Using these fields, it is straightforward to show that the magnetic-field strength in a fluid element starting at $x=\xi_0$ (with $|\xi_0|\le{a_0}$) and moving towards $x=0$ is
\begin{equation}
B(\xi(t),t)=B_\mrm{r}\bigl[\theta^2+\exp(t/\tau_\mrm{cs})(\xi_0/a_0)^2\bigr]^{1/2} ,
\end{equation}
where $\xi(t)=\xi_0\exp(-t/2\tau_\mrm{cs})$ is a Lagrangian coordinate co-moving with the fluid element. This change in $B$ drives field-aligned pressure anisotropy, $\Delta_p\doteq{p}_\perp/p_\parallel-1$, adiabatically in the fluid frame. Using $\mu$ conservation in the form $p_\perp\propto{B}$ and assuming $\Delta_p(x,t=0)=0$,
\begin{equation}
\Delta_p(\xi(t),t) = \biggl[\frac{\theta^2+\exp(t/\tau_\mrm{cs})(\xi_0/a_0)^2}{\theta^2+(\xi_0/a_0)^2}\biggr]^{1/2}-1 \approx \frac{t}{2\tau_\mrm{cs}} \frac{(\xi_0/a_0)^2}{\theta^2+(\xi_0/a_0)^2}\doteq\frac{t}{\tau_\mrm{pa}}
\label{eqn:Deltaapprox}
\end{equation}
for $t/\tau_\mrm{cs}\ll{1}$.\footnote{If the second adiabatic invariant, $J$, were also conserved -- unlikely in a $\beta\gg{1}$ plasma with Alfv\'{e}nic, incompressible flows -- the exponent $1/2$ in \eqref{eqn:Deltaapprox} becomes $3/2$ and $\tau_\mrm{pa}$ changes by an inconsequential factor of $3$.} Thus, pressure anisotropy increases in all fluid elements.
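As a check, the linearization in \eqref{eqn:Deltaapprox} is easily verified numerically. The following Python fragment is purely illustrative, with arbitrary parameter values $\theta=0.5$, $\xi_0/a_0=1$, and $\tau_\mrm{cs}=1$; it compares the exact expression with $\Delta_p\approx t/\tau_\mrm{pa}$.

```python
import math

# Illustrative parameters (not tied to any particular plasma):
TAU_CS = 1.0          # CS-formation time tau_cs
THETA = 0.5           # guide-field ratio theta = B_g / B_r
X0 = 1.0              # initial position xi_0 / a_0

def delta_p_exact(t):
    """Pressure anisotropy from mu-conservation (p_perp ~ B), eq. (2.2)."""
    x2 = X0**2
    return math.sqrt((THETA**2 + math.exp(t / TAU_CS) * x2)
                     / (THETA**2 + x2)) - 1.0

def tau_pa():
    """Anisotropy-production time: tau_pa = 2 tau_cs (theta^2 + x^2)/x^2."""
    x2 = X0**2
    return 2.0 * TAU_CS * (THETA**2 + x2) / x2

# Early times: the exact expression reduces to Delta_p ~ t / tau_pa
for t in (1e-3, 1e-2, 1e-1):
    print(t, delta_p_exact(t), t / tau_pa())
```

For $t/\tau_\mrm{cs}\lesssim10^{-2}$ the exact and linearized anisotropies agree to better than a percent, as expected.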
If nothing interferes with the adiabatic increase in pressure anisotropy, the plasma in a fluid element will eventually become mirror unstable when $\Delta_p\gtrsim{1}/\beta_\perp$, where
\begin{equation}
\beta_\perp(\xi(t),t) = \beta_0(\xi_0)\biggl[\frac{\theta^2+(\xi_0/a_0)^2}{\theta^2+\exp(t/\tau_\mrm{cs})(\xi_0/a_0)^2}\biggr]^{1/2} \approx \beta_0 \biggl(1-\frac{t}{3\tau_\mrm{pa}}\biggr)
\label{eqn:betaapprox}
\end{equation}
is the adiabatically evolving perpendicular plasma beta in the fluid frame ($\beta_0$ is its initial value). Comparing (\ref{eqn:Deltaapprox}) and (\ref{eqn:betaapprox}), this occurs at $t_\mrm{m}\sim\tau_\mrm{pa}/\beta_0$ for $\beta_0\gg{1}$. If the guide field is small compared to the local reconnecting field ($\theta\ll\xi_0/a_0$), this time is a small fraction of the CS-formation time scale, $t_\mrm{m}\sim\tau_\mrm{cs}/\beta_0$, and so the CS becomes mirror-unstable early in its evolution. With a larger guide field ($\theta\gg\xi_0/a_0$), $t_\mrm{m}\sim\tau_\mrm{cs}(a^2_0/\xi^2_0)(\theta^2/\beta_0)$. This time is also early in the CS evolution for $\xi_0\lesssim{a}_0$, since $\theta\ll\beta^{1/2}$ is required in this model for the plasma to reliably exceed the mirror-instability threshold.\footnote{\label{fn:thetaLim} If the asymptotic value of the reconnecting field, $B_\mrm{r}$, is constant, then the maximum change of $B$ in a fluid element is bounded, $B(t)/B(0)< (1+\theta^{-2})^{1/2}$, and so $\Delta_p<(1+\theta^{-2})^{1/2}-1$. Therefore, $\theta\lesssim\beta^{1/2}$ is required to reach the mirror threshold. In other models where $B_\mrm{r}$ increases in time \citep[e.g.][]{tolman18}, no such limit on $\theta$ exists.}
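The onset estimate $t_\mrm{m}\sim\tau_\mrm{pa}/\beta_0$ can likewise be checked by solving $\Lambda_\mrm{m}=\Delta_p-1/\beta_\perp=0$ with the exact expressions behind \eqref{eqn:Deltaapprox} and \eqref{eqn:betaapprox}. The sketch below (same illustrative parameters as before: $\theta=0.5$, $\xi_0/a_0=1$, $\tau_\mrm{cs}=1$) does this by bisection.

```python
import math

# Same illustrative parameters as in eqs. (2.2)-(2.3):
TAU_CS, THETA, X0 = 1.0, 0.5, 1.0
TAU_PA = 2.0 * TAU_CS * (THETA**2 + X0**2) / X0**2    # = 2.5 tau_cs here

def mirror_onset_time(beta0):
    """First time at which Lambda_m = Delta_p - 1/beta_perp turns positive,
    using the exact adiabatic expressions; solved by bisection."""
    x2 = X0**2

    def lam(t):
        r = math.sqrt((THETA**2 + math.exp(t / TAU_CS) * x2) / (THETA**2 + x2))
        delta_p = r - 1.0        # eq. (2.2) before linearization
        beta_perp = beta0 / r    # eq. (2.3) before linearization
        return delta_p - 1.0 / beta_perp

    lo, hi = 0.0, 10.0 * TAU_CS  # lam(lo) < 0 < lam(hi); lam is monotonic
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if lam(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

# For beta0 >> 1 the onset time approaches tau_pa / beta0:
for beta0 in (1e2, 1e3, 1e4):
    print(beta0, mirror_onset_time(beta0) * beta0 / TAU_PA)   # ratio -> 1
```

The ratio $t_\mrm{m}\beta_0/\tau_\mrm{pa}$ indeed tends to unity as $\beta_0\to\infty$, confirming that a high-$\beta$ CS crosses the mirror threshold very early in its formation.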
These times must be compared to the characteristic time scales for tearing modes that facilitate magnetic reconnection in the forming CS. Before doing so, we review the basic properties of the mirror instability.
\subsection{Mirror instability}
As $B$ increases, adiabatic invariance drives $\Delta_p>0$, with plasma becoming mirror-unstable when $\Lambda_\mrm{m}\doteq\Delta_p-1/\beta_\perp>0$. Just beyond this threshold ($0<\Lambda_{\rm m}\ll{1}$), oblique modes with wavenumbers $k_{\parallel,\mrm{m}}\rho_i\sim (k_{\perp,\mrm{m}}\rho_i)^2\sim\Lambda_\mrm{m}$ and polarization $\delta{B}_\perp/\delta{B}_\parallel\sim\Lambda_\mrm{m}^{1/2}$ grow exponentially at a maximum rate $\gamma_\mrm{m}\sim\Omega_i\Lambda_\mrm{m}^2$ \citep{hellinger07}. Once this growth rate becomes larger than the rate at which $\Delta_p$ is produced ($\gamma_\mrm{m}\tau_\mrm{pa}\gtrsim{1}$), the growth of $\Delta_p$ stops. This yields a maximum mirror-instability parameter, $\Lambda_\mrm{m}\gtrsim(\Omega_i\tau_\mrm{pa})^{-1/2}\doteq\Lambda_\mrm{m,max}$. Kinetic simulations show that, once $\Lambda_\mrm{m}(t)\sim\Lambda_\mrm{m,max}$, mirrors rapidly drain $\Lambda_\mrm{m}(t)\rightarrow{0}^+$ and attain amplitudes $\delta{B}_\parallel/B\sim\Lambda^{1/2}_\mrm{m,max}$ \citep{kunz14}. This is the end of the linear stage; for $\beta_0\gg{1}$, this occurs at $t/\tau_\mrm{pa}\sim{1}/\beta_0+\Lambda_\mrm{m,max}$.
As the CS continues to thin, $\Delta_p>0$ is continuously driven. Mirror modes then maintain marginal stability ($\Lambda_\mrm{m}\simeq{0}^+$) by growing secularly, $\delta{B}^2_\parallel\propto{t}^{4/3}$, and trapping an increasing fraction of particles \citep{schekochihin08,kunz14,rincon15}. Independent of $\Lambda_\mrm{m,max}$, saturation occurs at $t\sim\tau_\mrm{pa}$ and $\delta{B}/B\sim{1}$, when these particles pitch-angle scatter off sharp bends in the magnetic field occurring at the mirror boundaries at a rate $\nu_\mrm{m}\sim\beta/\tau_\mrm{pa}$; this maintains marginal stability by severing the adiabatic link between $\Delta_p$ and changes in $B$ \citep{kunz14,riquelme15}. Thereafter, $\Delta_p\simeq{1}/\beta_\perp$, even as $B$ changes.
This evolution was found for situations in which $\tau_\mrm{pa}$ is comparable to the dynamical time in the system (e.g., linear shear flows). However, for locations $\xi_0\ll\theta{a}_0$ deep inside the CS, $\tau_\mrm{pa}\gg\tau_\mrm{cs}$. In this case, local mirror growth cannot outpace CS formation, and any potential mirrors are advected and distorted faster than they can grow. When $\theta\gg{1}$, $\tau_\mrm{pa}\gg\tau_\mrm{cs}$ in the entire CS. We thus focus only on cases with $\theta\lesssim{1}$ and locations $\xi_0\gtrsim\theta{a}_0$.
\subsection{Collisionless tearing instability}
Next we review the theory of collisionless tearing modes, applicable when the inner-layer thickness of the tearing CS satisfies $\delta_\mrm{in}\lesssim\rho_e$. To determine under what condition this criterion is satisfied, we use standard MHD tearing theory \citep{fkr63} to estimate
\begin{equation}\label{eqn:din}
\delta_\mrm{in}^\mrm{MHD}=\bigl[\gamma_\mrm{t}(k_\mrm{t}v_\mrm{A,r})^{-2}a^2\eta\bigr]^{1/4}=a\bigl[\gamma_\mrm{t}\tau_\mrm{A,r}(k_\mrm{t}{a})^{-2}S^{-1}_a\bigr]^{1/4},
\end{equation}
where $v_\mrm{A,r}\doteq B_\mrm{r}/(4\upi{m}_in_i)^{1/2}$ is the Alfv\'{e}n speed of the reconnecting field, $\tau_\mrm{A,r}\doteq{a}/v_\mrm{A,r}$ is the Alfv\'{e}n crossing time of the CS, $\eta$ is the (collisional) resistivity, and $S_a\doteq{a}v_\mrm{A,r}/\eta$ is the Lundquist number. Using an estimate for the growth rate $\gamma_\mrm{t}$ of the fastest-growing collisional tearing mode with wavenumber $k_\mrm{t}$ oriented along the CS \citep{fkr63,coppi76,ul16}, the validity condition for collisionless tearing theory to hold becomes
\begin{equation}\label{eqn:Scollisionless}
S_a\gtrsim(a/\rho_e)^4.
\end{equation}
This gives $a\lesssim{10}^{-6}~\mrm{pc}$ for the ICM parameters listed in \S\ref{sec:introduction}, a satisfiable constraint given that $\rho_i\sim{10}^{-9}~\mrm{pc}$ and the outer scale of ICM magnetic-field fluctuations is observationally inferred to be ${\sim}10~\mrm{kpc}$ \citep{ev06,guidetti08,bonafede10,vacca12,govoni17}, comparable to the collisional mean free path. At the accretion radius of Sgr A$^\ast$, this constraint is $a\lesssim{10}^{-10}~\mrm{pc}$, which is ${\sim}10^2$ larger than $\rho_i$ and ${\sim}10^8$ times smaller than the collisional mean free path. As long as \eqref{eqn:Scollisionless} is satisfied (which becomes easier as $a$ shrinks), $\gamma_\mrm{t}$ and $k_\mrm{t}$ are estimated as follows.
In a $\beta\gtrsim{1}$ plasma when the tearing-mode instability parameter $\Delta'(k_\mrm{t})$ \citep{fkr63} is small, satisfying $\Delta' \delta_\mrm{in} \sim (\Delta' d_e)^2 \ll 1$ (``FKR-like''; \citet{karimabadi05}),
\begin{equation}\label{eqn:gammat_smallDelta}
\gamma_\mrm{t}^{\rm FKR}\tau_\mrm{A,r}\sim\biggl(\frac{m_e}{m_i}\biggr)^{1/2}\biggl(\frac{d_i}{a}\biggr)^2 \,k_\mrm{t}{a}\,\Delta'a ,
\end{equation}
where $d_e$ and $d_i\doteq\rho_i/\beta^{1/2}_{i}=d_e(m_i/m_e)^{1/2}$ are, respectively, the electron and ion skin depths \citep{fitzpatrick04,fitzpatrick07}. (Our CS formation model leaves $d_e,d_i$ constant.) This growth rate is approximately independent of $k_\mrm{t}$ in a Harris sheet, for which $\Delta'a=2(1/k_\mrm{t}{a}-k_\mrm{t}{a})\sim(k_\mrm{t}{a})^{-1}$ at $k_\mrm{t}{a}\ll{1}$. The large-$\Delta'$ (``Coppi-like'') growth rate satisfies
\begin{equation}\label{eqn:gammat_largeDelta}
\gamma_\mrm{t}^{\rm Coppi}\tau_\mrm{A,r}\sim\biggl(\frac{m_e}{m_i}\biggr)^{1/5}\biggl(\frac{d_i}{a}\biggr)\,k_\mrm{t}{a},
\end{equation}
independent of $\Delta'$ \citep{fitzpatrick07}. An estimate for $\gamma_\mrm{t}$ and $k_\mrm{t}$ of the fastest-growing Coppi-like mode in a Harris sheet can be obtained by balancing \eqref{eqn:gammat_smallDelta} and \eqref{eqn:gammat_largeDelta}:
\begin{subequations}\label{eqn:tearingn}
\begin{align}
\gamma_\mrm{t}^\mrm{max}\tau_{\rm A,r} &\sim \biggl(\frac{m_e}{m_i}\biggr)^{1/2}\biggl(\frac{d_i}{a}\biggr)^{2},\label{eqn:tearingn:g}\\*
k_\mrm{t}^\mrm{max} a &\sim \biggl(\frac{m_e}{m_i}\biggr)^{3/10}\biggl(\frac{d_i}{a}\biggr) .\label{eqn:tearingn:k}
\end{align}
\end{subequations}
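The exponent balance leading to \eqref{eqn:tearingn} is mechanical, so it can be verified symbolically. The following sketch (ours; variable names are illustrative) tracks each scaling as a vector of rational exponents over $m_e/m_i$, $d_i/a$ and $k_\mrm{t}a$:

```python
from fractions import Fraction as F

# Exponents of each power-law scaling over m = m_e/m_i, D = d_i/a, K = k_t*a.
# Harris sheet: Delta'*a ~ 1/K at K << 1, so the FKR-like rate loses its
# explicit K-dependence, while the Coppi-like rate is linear in K.
gamma_fkr   = {'m': F(1, 2), 'D': F(2), 'K': F(0)}
gamma_coppi = {'m': F(1, 5), 'D': F(1), 'K': F(1)}

# Balancing the two rates fixes the exponents of K = k_t^max * a ...
k_max = {v: gamma_fkr[v] - gamma_coppi[v] for v in 'mD'}
# ... and substituting back into the Coppi-like rate gives the peak growth rate.
g_max = {v: gamma_coppi[v] + gamma_coppi['K']*k_max[v] for v in 'mD'}

assert k_max == {'m': F(3, 10), 'D': F(1)}  # k_t^max a ~ (m_e/m_i)^{3/10}(d_i/a)
assert g_max == {'m': F(1, 2),  'D': F(2)}  # gamma_t^max tau_Ar ~ (m_e/m_i)^{1/2}(d_i/a)^2
```

The same exponent bookkeeping underlies the order-of-magnitude estimates that follow.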
These modes are the fastest growing provided they fit into the length $L$ of the CS, i.e.~$k_\mrm{t}^\mrm{max} L>1$. Otherwise, the fastest-growing mode is FKR-like.\footnote{\citet{fitzpatrick04,fitzpatrick07} obtained \eqref{eqn:gammat_smallDelta} and \eqref{eqn:gammat_largeDelta} using a two-fluid model assuming cold ions and that the compressional Alfv\'{e}n wave propagates much faster than any other wave in the system (as it would in a high-$\beta$ plasma), thus guaranteeing pressure balance along field lines and nearly incompressible flow. The former (small-$\Delta'$) growth rate agrees with the corresponding kinetic expression in \citet[][their equation (16)]{dl77a} up to a factor of $1/\sqrt{1+\beta_\mrm{g}}$, which is ${\sim}1$ given those authors' assumption of small $\beta$ and large guide field. Both results assumed a Maxwellian background. Alternatively, \citet{cp84} allowed for a spatially uniform $\Delta_p\ne{0}$ in their linear kinetic tearing calculation, but assumed $B_\mrm{g}=0$ and thus obtained different scalings after accounting for axis-crossing particle orbits (see also \citet{cl85} and \citet{quest10}). While we have opted to use the \citet{fitzpatrick04,fitzpatrick07} expressions for $\gamma_{\rm t}$, our analysis can be generalized for any alternative scalings without a significant change in the main qualitative conclusions summarized in \S\ref{sec:summary}. The ``FKR-like'' and ``Coppi-like'' designations are adaptations of those introduced by \citet{ul16}.}
In what follows, we {\em assume} that pressure anisotropy does not appreciably modify these growth rates. This is because saturated mirrors maintain $\Delta_p\simeq{1}/\beta_\perp\ll{1}$, and so the resulting viscous stress effectively enhances the magnetic tension responsible for driving the tearing by a factor of only ${\simeq}3/2$. Other works that postulate an initial $\Delta_p$ (customarily taken to be uniform and thus non-zero even at $x=0$) do not consider its rapid regulation by the mirror instability prior to the onset of tearing, and the enhanced $\gamma_\mrm{t}$ often found in linear calculations when $\Delta_p > 0$ is largely because the assumption $B_\mrm{g}=0$ permits axis-crossing particle orbits in the inner regions of the CS and allows threshold-less instabilities such as the Weibel instability \citep[e.g.][]{cp84}.
\section{Reconnection onset when $\Delta_p=0$}
Before determining how mirror-unstable pressure anisotropy affects a gradually forming CS, we recapitulate the theory of CS disruption by tearing modes \citep{pv14,tenerani15,lu16,ul16}, specialized to the case of collisionless tearing in a high-$\beta$ plasma. That is, we ignore the production of pressure anisotropy during CS formation and instead determine when $L/a$ has increased enough for tearing modes to prompt reconnection.
As the CS's aspect ratio $L/a$ increases in time, modes with progressively larger mode number $N\doteq k_\mrm{t}(t)L(t)={\rm const}$ become unstable and undergo linear evolution with $\gamma_\mrm{t}(N,t)$ increasing (see figure \ref{fig:gamma-t0}). \citet{ul16} argued that the first tearing mode $N$ to reach the end of its linear stage at the critical time $t_\mrm{cr}(N)$ (when $\gamma_\mrm{t}\tau_\mrm{cs}\gtrsim{1}$, neglecting logarithmic corrections \citep{comisso17}) will also be the first to undergo X-point collapse (defined as when the island width $w\sim{1}/\Delta'$) and, soon thereafter, disrupt the CS ($w\sim{a}$). We adopt this argument and estimate the CS disruption time $t_\mrm{disrupt}$ for a collisionless Harris sheet with $L(t)a(t)={\rm const}$. (The same procedure can be used to investigate alternative CS profiles and evolution.) Note that, for the Harris-sheet profile, $\gamma_\mrm{t}^{\rm FKR}\approx\gamma_\mrm{t}^\mrm{max}$ for $k_\mrm{t}{a}\ll{1}$ (see \eqref{eqn:gammat_smallDelta} and \eqref{eqn:tearingn:g}), so the only difference between these modes is their wavenumbers and, thus, their $\Delta'\sim 1/(k_\mrm{t}a^2)$.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig1.eps}
\caption{Qualitative plot of tearing growth rate $\gamma_\mrm{t}$ vs.~mode number $N$ (see \eqref{eqn:gammat_smallDelta} and \eqref{eqn:gammat_largeDelta}) shortly after mirror production at $k^\mrm{max}_{y,\mrm{m}}a>{1}$. Arrows indicate evolution as the CS aspect ratio ($L/a$) increases, with $\gamma_\mrm{t}$ approaching $\tau_\mrm{cs}^{-1}$ (blue dashed line), $k_\mrm{t}$ approaching the large-$\Delta'$ regime ($k_\mrm{t}\lesssim k_\mrm{t}^\mrm{max}$), and mirrors affecting an increasing number of tearing modes (those with $k_\mrm{t}\gtrsim{k}^\mrm{max}_{y,\mrm{m}}$).}
\label{fig:gamma-t0}
\end{figure}
Each unstable mode $N$ starts in the small-$\Delta'$ (``FKR-like") regime ($N>N_\mrm{max}(t)$), with $\gamma_\mrm{t}$ roughly independent of $k_\mrm{t}$ for $k_\mrm{t}{a}\ll{1}$. However, because $N_\mrm{max}\propto(L/a)(d_i/a)\propto{a}^{-3}$ increases in time, these FKR-like modes approach the large-$\Delta'$ (``Coppi-like'') regime, making the transition at $t=t_\mrm{tr}(N)$ when
\begin{equation}
\frac{a(t_\mrm{tr}(N))}{a_0}\sim\biggl(\frac{m_e}{m_i}\biggr)^{1/10}\biggl(\frac{L_0d_i}{a^2_0}\biggr)^{1/3}N^{-1/3}.
\end{equation}
Larger $N$ corresponds to larger $t_\mrm{tr}(N)$, and so the first mode to make this transition is $N=1$; i.e.~at $t=t_\mrm{tr}(1)$, the fastest Coppi-like mode (see \eqref{eqn:tearingn:k}) just fits inside the CS. All modes satisfying $k_\mrm{t}^\mrm{max} a\lesssim k_\mrm{t}{a}\ll{1}$ obtain growth rates $\gamma_\mrm{t}\tau_\mrm{cs}\gtrsim{1}$ at roughly the same time, $t=t_\mrm{cr}$, when (using \eqref{eqn:tearingn:g})
\begin{equation}\label{eqn:tearingDisrupts}
\frac{a(t_\mrm{cr})}{a_0}\lesssim\biggl(\frac{m_e}{m_i}\biggr)^{1/6}\biggl(\frac{d_i}{a_0}\biggr)^{2/3}M_\mrm{A,0}^{-1/3} ,
\end{equation}
where $M_\mrm{A,0}\doteq\tau_\mrm{A,r}(t=0)/\tau_\mrm{cs}$ is the initial Alfv\'{e}nic Mach number of the CS formation. These modes have
\begin{equation}\label{eqn:tearingDisrupts_N}
\frac{L(t_\mrm{cr})}{a(t_\mrm{cr})}\gg{N}\ge{N}_\mrm{cr}\doteq\biggl(\frac{m_e}{m_i}\biggr)^{-1/5}\biggl(\frac{L_0}{d_i}\biggr)M_\mrm{A,0}.
\end{equation}
This is an important distinction from the collisional MHD case, in which larger $N>N_\mrm{cr}$ corresponds to larger $t_\mrm{cr}(N)$ (since $\gamma_\mrm{t}^{\rm FKR}\propto k_\mrm{t}^{-2/5}$ at $k_\mrm{t}{a}\ll{1}$ instead of $k_\mrm{t}^{0}$).
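Both $a(t_\mrm{tr}(N))$ and \eqref{eqn:tearingDisrupts} follow from \eqref{eqn:tearingn} together with $L(t)a(t)={\rm const}$ and $\tau_\mrm{A,r}\propto a(t)$; a short symbolic check (our sketch, not part of the original derivation):

```python
import sympy as sp

m, di, a0, L0, a, N, M = sp.symbols('m d_i a_0 L_0 a N M', positive=True)

# CS formation at L(t)*a(t) = L_0*a_0; mode N has k_t = N/L(t).
kN = N/(L0*a0/a)
k_max = m**sp.Rational(3, 10)*di/a**2            # k_t^max from eqn:tearingn:k

# FKR -> Coppi transition of mode N: k_N ~ k_t^max, solved for a(t_tr).
a_tr = sp.solve(sp.Eq(kN, k_max), a)[0]
tr_target = a0*m**sp.Rational(1, 10)*(L0*di/a0**2)**sp.Rational(1, 3)/N**sp.Rational(1, 3)
assert sp.simplify(a_tr**3 - tr_target**3) == 0

# Onset: gamma_t^max * tau_cs ~ 1, with tau_Ar(t)/tau_cs = M_A0 * a(t)/a_0,
# reproduces eqn:tearingDisrupts.
gam_tcs = m**sp.Rational(1, 2)*(di/a)**2/(M*a/a0)
a_cr = sp.solve(sp.Eq(gam_tcs, 1), a)[0]
cr_target = a0*m**sp.Rational(1, 6)*(di/a0)**sp.Rational(2, 3)/M**sp.Rational(1, 3)
assert sp.simplify(a_cr**3 - cr_target**3) == 0
```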
Another important distinction from the MHD case lies in the nonlinear evolution, during which the MHD FKR modes behave differently than the MHD Coppi modes. While the latter are expected to rapidly evolve towards X-point collapse soon after $t=t_\mrm{cr}$ due to their large $\Delta'$, the former undergo secular ``Rutherford'' evolution that increases $\Delta'(k_N)w_N$ for a given mode $N$ until $w_N\sim{1}/\Delta'$ \citep{rutherford73,waelbroeck89,waelbroeck93,loureiro05,arcis09}. However, in the collisionless case, the FKR-like modes reach $\gamma_\mrm{t}\tau_\mrm{cs}\sim{1}$ at the same time as the fastest Coppi-like mode. If the latter is accessible, then the fastest-growing mode $N_\mrm{max}$ already has $\Delta'd_e\sim{1}$ at $t_\mrm{cr}(N_\mrm{max})$ and so X-point collapse likely occurs soon after \eqref{eqn:tearingDisrupts} is satisfied. The CS is then said to be ``disrupted'' at $t_\mrm{disrupt}\sim{t}_\mrm{cr}(N_\mrm{max})$. For there to be no Coppi-like modes when \eqref{eqn:tearingDisrupts} is satisfied (i.e.~$N_\mrm{cr}<{1}$), $M_\mrm{A,0}\lesssim(m_e/m_i)^{1/5}(d_i/L_0)$, a rather stringent condition that is difficult to satisfy when $\beta_0\gg{1}$ and $\rho_{i0}/L_0\ll{1}$.
That being said, given the uncertainties in the nonlinear evolution of collisionless tearing modes in a high-$\beta$, magnetized plasma -- especially regarding the existence (or nonexistence) of a secular ``Rutherford'' phase and the production of pressure anisotropy during X-point collapse -- we focus primarily on the critical time for reconnection onset (when $\gamma_\mrm{t}\tau_\mrm{cs}\gtrsim{1}$) rather than the CS disruption time (when $w\sim{a}$).\footnote{Another reason for prudence is Drake \& Lee's (1977{\it b})\nocite{dl77b} argument that single-mode tearing with a guide field saturates via trapped-electron effects with an amplitude comparable to the inner-layer thickness, $w\sim\delta_\mrm{in}$. This argument was confirmed, and refined by incorporating finite-Larmor-radius effects, by \citet{karimabadi05}.}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{fig2.eps}
\caption{Qualitative illustration of magnetic-field lines in an evolving, mirror-infested Harris CS with $\theta \ll 1$.}
\label{fig:CS_with_mirrors}
\end{figure}
\section{Reconnection onset when $\Delta_p \ne 0$}
We now consider the effects of mirrors on an evolving CS subject to tearing modes. Because different portions of the CS have different $\rho_i$ and $\tau_\mrm{pa}$, there will be a range of mirror wavenumbers, $k_{y,\mrm{m}}(x)$, along the CS (see figure \ref{fig:CS_with_mirrors}).
The smallest $k_{y,\mrm{m}}$ will be located at the positions closest to $x=0$ where mirrors can form, since these regions have the largest values of $\rho_i$ and $\tau_\mrm{pa}$. We argue that, since tearing modes with wavenumbers $k_\mrm{t}$ much smaller than this $k_{y,\mrm{m}}^\mrm{min}$ will see a rapidly $y$-varying magnetic field that averages to its unperturbed value, these modes are likely unaffected by the mirrors (or at least less affected than other modes). The largest $k_{y,\mrm{m}}$ will be located near $|x|\sim{a}$, where $\rho_i$ and $\tau_\mrm{pa}$ are at their smallest values. All tearing modes with $k_\mrm{t}\gg k_{y\mrm{,m}}^\mrm{max}$ will see an approximately uniform-in-$y$ magnetic field, but will have their $\Delta'(k_\mrm{t})$ enhanced by the mirrors' effect on the $x$-variation of the CS profile. If the CS is able to stretch to the point where $k_{y\mrm{,m}}^\mrm{max}\lesssim k_\mrm{t}^\mrm{max}$ before the onset of tearing, then all of the modes that are unaffected by the mirrors will have smaller growth rates and thus be unimportant for CS reconnection. The condition $k_{y\mrm{,m}}^\mrm{max}\lesssim k_\mrm{t}^\mrm{max}$ is thus a sufficient (but not necessary) condition for mirrors to matter.
We now follow the evolution of $k_{y,\mrm{m}}^\mrm{max}$ as the CS evolves, and investigate the evolution of tearing modes with $k_\mrm{t}\gg{k}_{y,\mrm{m}}^\mrm{max}$. We treat two cases, depending upon the size of the guide field and thus the component of the mirrors' wavevector along the CS at $|x|\sim{a}$,
\begin{equation}\label{eqn:ky}
k_{y,\mrm{m}}\sim k_{\parallel,\mrm{m}}\frac{B_{\rm r}}{B}+k_{\perp,\mrm{m}}\frac{B_{\rm g}}{B}=k_{\parallel,\mrm{m}} \frac{B_{\rm r}}{B}\biggl(1+\theta\frac{k_{\perp,\mrm{m}}}{k_{\parallel,\mrm{m}}}\biggr) .
\end{equation}
With $k_{\perp,\mrm{m}}/k_{\parallel,\mrm{m}}\sim\Lambda^{-1/2}_\mrm{m,max}$ for the fastest-growing mirror mode, we have $k_{y,\mrm{m}}\sim{k}_{\parallel,\mrm{m}}$ for $\theta\ll\Lambda^{1/2}_\mrm{m,max}$ and $k_{y,\mrm{m}}\sim\theta{k}_{\perp,\mrm{m}}$ for $\Lambda^{1/2}_\mrm{m,max}\ll\theta\lesssim{1}$. (In both cases, $\Lambda_\mrm{m,max}\sim(d_i/a_0)^{1/2}M_\mrm{A,0}^{1/2}$.)
\subsection{When mirrors affect tearing if $\theta\ll\Lambda^{1/2}_\mrm{m,max}$}\label{sec:smalltheta}
At $x\sim{a}$, the local reconnecting field is near its asymptotic value and $\tau_\mrm{pa}\sim\tau_\mrm{cs}$. Starting at time $t_\mrm{m}\sim\tau_\mrm{cs}/\beta_0\ll\tau_\mrm{cs}$, unstable mirror modes grow rapidly at this location ($a$ and $\tau_\mrm{A,r}$ hardly change from their initial values in a time $t_\mrm{m}$.) Unless tearing modes disrupt the CS within $t_\mrm{disrupt}\lesssim\tau_\mrm{cs}$ -- which is extremely unlikely, requiring \eqref{eqn:tearingDisrupts} to be satisfied within $\tau_\mrm{cs}$ -- these mirrors will saturate with $\delta{B}\simB_\mrm{r}$ and
\begin{equation}\label{eqn:kpar}
k_{y,\mrm{m}}^\mrm{max}(t)\rho_i\sim\frac{L_0}{L(t)}(\Omega_i\tau_\mrm{cs})^{-1/2}\sim\frac{a(t)}{a_0}\biggl(\frac{d_i}{a_0}\biggr)^{1/2}M_\mrm{A,0}^{1/2},
\end{equation}
where we have accounted for the Lagrangian stretching of the perturbations during CS formation.
To determine the effect of these mirrors on tearing, it is useful (as argued above) to first establish when $k^\mrm{max}_{y,\mrm{m}}(t)$ enters the large-$\Delta'$ regime in which $\gamma_\mrm{t}\propto{k}$ (the leftmost portion of figure \ref{fig:gamma-t0}), i.e.~when the mirrors influence the fastest-growing tearing modes. Combining \eqref{eqn:tearingn:k} and \eqref{eqn:kpar}, we find that $a(t)$ must satisfy
\begin{equation}\label{eqn:mirrorsCoppi1}
\frac{a(t)}{d_i}\lesssim\biggl(\frac{m_e}{m_i}\biggr)^{1/10}\biggl(\frac{d_i}{a_0}\biggr)^{-1/2}\beta_0^{1/6}M_\mrm{A,0}^{-1/6}
\end{equation}
for $k^\mrm{max}_{y,\mrm{m}}(t)\lesssim k_\mrm{t}^\mrm{max}(t)$. Condition \eqref{eqn:mirrorsCoppi1} is satisfied before the sheet would be disrupted in the absence of mirrors (see \eqref{eqn:tearingDisrupts}) if
\begin{equation}\label{eqn:CoppiWins1}
\frac{a_0}{d_i}\gtrsim\biggl(\frac{m_e}{m_i}\biggr)^{2/5}\beta_0^{-1}M_\mrm{A,0}^{-1},
\end{equation}
which is easily satisfied under the conditions of interest. Thus, there will be a time at which all tearing modes with $k_\mrm{t}\gtrsim k_\mrm{t}^\mrm{max}$ are affected by mirrors. How the tearing progresses after \eqref{eqn:mirrorsCoppi1} is satisfied will be discussed once the corresponding conditions for the other $\theta$-regime are derived.
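Equations \eqref{eqn:mirrorsCoppi1} and \eqref{eqn:CoppiWins1} follow from combining \eqref{eqn:tearingn:k} with \eqref{eqn:kpar} (using $\rho_i=d_i\beta^{1/2}$) and with \eqref{eqn:tearingDisrupts}; a symbolic spot-check (our sketch; $\beta$ stands for $\beta_0$):

```python
import sympy as sp

m, di, a0, a, M, beta = sp.symbols('m d_i a_0 a M beta', positive=True)

rho_i = di*sp.sqrt(beta)                        # d_i = rho_i/beta^{1/2}
k_ym = (a/a0)*sp.sqrt(di/a0)*sp.sqrt(M)/rho_i   # k_{y,m}^max from eqn:kpar
k_t = m**sp.Rational(3, 10)*di/a**2             # k_t^max from eqn:tearingn:k

# Largest a(t) for which k_{y,m}^max <= k_t^max, cf. eqn:mirrorsCoppi1:
a_mir = sp.solve(sp.Eq(k_ym, k_t), a)[0]
target = di*m**sp.Rational(1, 10)*sp.sqrt(a0/di)*beta**sp.Rational(1, 6)/M**sp.Rational(1, 6)
assert sp.simplify(a_mir**6 - target**6) == 0

# This precedes the mirror-free disruption threshold (eqn:tearingDisrupts)
# exactly when (a_mir/a_cr)^6 = (a_0/d_i)*beta*M*m^{-2/5} >= 1, i.e. eqn:CoppiWins1:
a_cr = a0*m**sp.Rational(1, 6)*(di/a0)**sp.Rational(2, 3)/M**sp.Rational(1, 3)
assert sp.simplify((a_mir/a_cr)**6 - (a0/di)*beta*M*m**sp.Rational(-2, 5)) == 0
```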
\subsection{When mirrors affect tearing if $\Lambda^{1/2}_\mrm{m,max}\ll\theta\lesssim{1}$}
As $B_\mrm{g}$ is increased, things will continue in much the same way as in \S\ref{sec:smalltheta} except that the initial $k^\mrm{max}_{y,\mrm{m}}\sim\theta{k}_{\perp,\mrm{m}}$. That is, equation \eqref{eqn:kpar} is replaced by
\begin{equation}\label{eqn:kperp}
k_{y,\mrm{m}}^\mrm{max}(t)\rho_i\sim\frac{L_0}{L(t)}\theta(\Omega_i\tau_\mrm{cs})^{-1/4}\sim\frac{a(t)}{a_0}\biggl(\frac{d_i}{a_0}\biggr)^{1/4}\theta M_\mrm{A,0}^{1/4}.
\end{equation}
This means that the condition on $a(t)$ for $k^\mrm{max}_{y,\mrm{m}}(t)\lesssim k_\mrm{t}^\mrm{max}(t)$ (cf.~\eqref{eqn:mirrorsCoppi1}) becomes
\begin{equation}\label{eqn:mirrorsCoppi2}
\frac{a(t)}{d_i}\lesssim\biggl(\frac{m_e}{m_i}\biggr)^{1/10}\biggl(\frac{d_i}{a_0}\biggr)^{-5/12}\theta^{-1/3}\beta_0^{1/6}M_\mrm{A,0}^{-1/12}.
\end{equation}
If the initial state satisfies
\begin{equation}\label{eqn:CoppiWins2}
\frac{a_0}{d_i}\gtrsim\biggl(\frac{m_e}{m_i}\biggr)^{4/5}\theta^4\beta_0^{-2}M_\mrm{A,0}^{-3},
\end{equation}
then \eqref{eqn:mirrorsCoppi2} is satisfied before \eqref{eqn:tearingDisrupts}, i.e.~before the sheet would be disrupted without the mirrors.
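The same bookkeeping yields \eqref{eqn:mirrorsCoppi2} and \eqref{eqn:CoppiWins2} from \eqref{eqn:kperp}; a symbolic spot-check (our sketch; $\beta$ stands for $\beta_0$):

```python
import sympy as sp

R = sp.Rational
m, di, a0, a, M, beta, th = sp.symbols('m d_i a_0 a M beta theta', positive=True)

rho_i = di*sp.sqrt(beta)
k_ym = (a/a0)*(di/a0)**R(1, 4)*th*M**R(1, 4)/rho_i   # eqn:kperp
k_t = m**R(3, 10)*di/a**2                            # eqn:tearingn:k

# Largest a(t) for which k_{y,m}^max <= k_t^max, cf. eqn:mirrorsCoppi2:
a_mir = sp.solve(sp.Eq(k_ym, k_t), a)[0]
target = di*m**R(1, 10)*(di/a0)**R(-5, 12)*beta**R(1, 6)/(th**R(1, 3)*M**R(1, 12))
assert sp.simplify(a_mir**12 - target**12) == 0

# This precedes eqn:tearingDisrupts exactly when
# (a_mir/a_cr)^12 = (a_0/d_i)*beta^2*M^3/(theta^4*m^{4/5}) >= 1, i.e. eqn:CoppiWins2:
a_cr = a0*m**R(1, 6)*(di/a0)**R(2, 3)/M**R(1, 3)
assert sp.simplify((a_mir/a_cr)**12 - (a0/di)*beta**2*M**3/(th**4*m**R(4, 5))) == 0
```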
\subsection{Mirror-stimulated onset of reconnection}
If either \eqref{eqn:CoppiWins1} or \eqref{eqn:CoppiWins2} is satisfied, then mirrors influence all tearing modes before they could otherwise disrupt the CS in the absence of mirrors. We now quantify that influence, focusing on those tearing modes with $k_\mrm{t}\gg{k}^\mrm{max}_{y,\mrm{m}}$ (see \eqref{eqn:kpar} and \eqref{eqn:kperp}). As argued previously, these modes see a magnetic field that is roughly uniform in $y$ but is rapidly varying in $x$ due to the mirrors, with an initial $k_{x,\mrm{m}}\sim{k}_{\perp,\mrm{m}}$ that is then compressed by the CS formation with $k_{x,\mrm{m}}(t)a(t)\sim{\rm const}$. This rapid variation enhances $\gamma_\mrm{t}(k_\mrm{t})$ for these modes due to the smaller effective sheet thickness (estimated below), which affects both $\Delta'(k_\mrm{t})$ and the Alfv\'{e}n-crossing time $\tau_\mrm{A,r}$ (see \eqref{eqn:gammat_smallDelta} and \eqref{eqn:gammat_largeDelta}).
\subsubsection{Model for a mirror-infested CS}
We argue that $\tau_\mrm{A,r}$ changes by a small amount, since mirrors modify $\mrm{d}B_y/\mrm{d}x|_{x=0}$ by only a factor of order unity. To determine how $\Delta'(k)$ is modified, we adopt the following simple model for the magnetic-field profile of a mirror-infested Harris CS:
\begin{equation}\label{eqn:Bymodel}
B_y(x)=B_\mrm{r}\tanh\Bigl(\frac{x}{a}\Bigr)\Bigl[1+\varepsilon\sin\Bigl(2k_\mrm{max}a\sech\Bigl(\frac{x}{a}\Bigr)\Bigr)\Bigr] ,
\end{equation}
where $k_\mrm{max}\gg{a}^{-1}$ is a parameter characterizing the peak $k_{x,\mrm{m}}$ occurring at the edge of the CS. This is a WKB approximation describing saturated mirrors with amplitude $\varepsilon\sim\mathcal{O}(1)$ times the local reconnecting field and wavenumber in the $x$-direction given by $k_x(x)=2k_\mrm{max}\sech(x/a)\tanh(x/a)$. This model was chosen because $k_x(x=0)=0$, $k_x(x\to\infty)\to{0}$, and $k_x(x)$ is maximal near the edge of the CS, as anticipated. (What follows is not particularly sensitive to this choice of $k_x(x)$.)
The resulting $\Delta'(k_\mrm{t})$ is obtained by numerically integrating the outer differential equation for the flux function, $\psi$ \citep{fkr63}:
\begin{equation}\label{eqn:psi}
\DD{x}{\psi}-\biggl(k^2+\frac{B_y''}{B_y}\biggr)\psi=0,
\end{equation}
with $B_y(x)$ given by \eqref{eqn:Bymodel}. Then $\Delta'\doteq\mrm{d}\ln\psi/\mrm{d}x|_{x=0}$ for the solution that decays as $|x|\to\infty$; an example result is shown in figure \ref{fig:Deltap-vs-k}({\it a}). (Its shape does not change significantly as $\varepsilon$ and $k_\mrm{max}$ vary.) Generally, $\Delta'>0$ for $k_\mrm{t}$ smaller than the inverse of the effective sheet thickness, $a_\mrm{eff}$, which we identify with the location $x_\mrm{m}$ of the peak in $B_y(x)$ closest to $x=0$ (i.e.~the location of the innermost mirror). As $k_\mrm{t}$ decreases from this value, $\Delta'(k_\mrm{t})$ rises sharply to saturate at $k_\mrm{t}=k_\mrm{sat}$ with value $\Delta'_\mrm{sat} \sim 1/a_\mrm{eff} \sim 1/x_\mrm{m}$, at which it is approximately constant until it nears the Harris-sheet $\Delta'(k_\mrm{t})\sim 1/(k_\mrm{t}a^2)$, which it then follows.
The corresponding $\gamma_\mrm{t}(k_\mrm{t})$ shown in figure \ref{fig:Deltap-vs-k}({\it b}) depends on whether or not $\Delta'_\mrm{sat}d_e\ll{1}$. However, the maximum growth rate always occurs at $k_\mrm{sat}\sim{1}/x_\mrm{m}$, because of the $k_\mrm{t}$-dependence of \eqref{eqn:gammat_smallDelta} and \eqref{eqn:gammat_largeDelta}. Thus, to determine the new $t_\mrm{cr}$, we must calculate $x_\mrm{m}$. This yields two cases based on the size of $\theta$.
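The $\Delta'$ computation just described is straightforward to reproduce numerically. The sketch below (ours) integrates \eqref{eqn:psi} inward from large $x$ with the decaying boundary condition $\psi\sim\mathrm{e}^{-kx}$ and validates the shooting against the plain Harris sheet, whose $\Delta'a=2(1/k_\mrm{t}a-k_\mrm{t}a)$ is known analytically; the mirror-infested profile \eqref{eqn:Bymodel} can be swapped in by supplying its $B_y''/B_y$ (e.g.~by numerical differentiation):

```python
import numpy as np
from scipy.integrate import solve_ivp

def delta_prime(k, V, x_inf=12.0):
    """Tearing stability index Delta' for an odd equilibrium B_y(x).

    Integrates psi'' = (k^2 + B_y''/B_y) psi inward from x = x_inf with the
    decaying boundary condition psi ~ exp(-k x); V(x) must return B_y''/B_y.
    For the (even) outer solution, Delta' = 2 psi'(0+)/psi(0).
    """
    rhs = lambda x, y: [y[1], (k**2 + V(x))*y[0]]
    y0 = [np.exp(-k*x_inf), -k*np.exp(-k*x_inf)]
    sol = solve_ivp(rhs, [x_inf, 0.0], y0, rtol=1e-10, atol=1e-12)
    return 2*sol.y[1, -1]/sol.y[0, -1]

# Validation on a Harris sheet (a = 1): B_y = tanh(x) gives B_y''/B_y = -2 sech^2(x),
# with the analytic result Delta'*a = 2 (1/(k a) - k a).
harris = lambda x: -2.0/np.cosh(x)**2
for k in (0.25, 0.5, 0.75):
    assert abs(delta_prime(k, harris) - 2*(1/k - k)) < 1e-3
```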
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{fig3.eps}%
\caption{({\it a}) $\Delta'(k_\mrm{t})$ and ({\it b}) $\gamma_\mrm{t}(k_\mrm{t})$ for a Harris CS (red dashed line) and its mirror-infested counterpart (blue solid line), using $k_\mrm{max}a=200\upi$ and $\varepsilon=1/2$ in Eq.~\eqref{eqn:Bymodel}. $\Delta'$ rises rapidly at $k_\mrm{t} x_\mrm{m}\lesssim{1}$ and plateaus for ${k}_\mrm{sat}\gtrsim k_\mrm{t}\gtrsim{1}/(\Delta'_\mrm{sat}a^2)$. Mirror-stimulated tearing thus peaks at $k_\mrm{t}\sim{k}_\mrm{sat}$, regardless of whether $\Delta'_\mrm{sat}d_e\ll{1}$ (blue solid line) or $\Delta'_\mrm{sat}d_e\gtrsim1$ (orange dotted line).}
\label{fig:Deltap-vs-k}
\end{figure}
\subsubsection{Mirror-stimulated tearing for $\theta\ll{x}_\mrm{m}/a$}
When the reconnecting field is the dominant field on the scale of the innermost mirrors, the total ion-cyclotron frequency is $\Omega_i\sim(x_\mrm{m}/a)\Omega_{i,\mrm{r}}$ and $\tau_\mrm{pa}\sim\tau_\mrm{cs}$. The $x$-wavenumber of the mirrors at that location is then
\begin{equation}\label{eqn:kxm}
k_{x,\mrm{m}}(t,x_\mrm{m}(t))\rho_{i,\mrm{r}}\sim\biggl(\frac{x_\mrm{m}}{a(t)}\biggr)^{3/4}\biggl(\frac{d_i}{a_0}\biggr)^{1/4}\frac{a_0}{a(t)}M_\mrm{A,0}^{1/4},
\end{equation}
where we have accounted for the Lagrangian compression due to CS formation. The innermost mirror is located at $x_\mrm{m}\sim{k}^{-1}_{x,\mrm{m}}$, an $x$-wavelength away from the center. Substituting this into \eqref{eqn:kxm} yields
\begin{equation}\label{eqn:xm}
\frac{x_\mrm{m}}{a(t)}\sim\biggl(\frac{d_i}{a_0}\biggr)^{3/7}\beta_\mrm{r}^{2/7}M_\mrm{A,0}^{-1/7}.
\end{equation}
For this estimate to be self-consistent, we require $\theta\ll{x}_{\rm m}/a$ or, using \eqref{eqn:xm},
\begin{equation}\label{eqn:smallThReq}
\theta\ll\biggl(\frac{d_i}{a_0}\biggr)^{3/7}\beta_\mrm{r}^{2/7}M_\mrm{A,0}^{-1/7}.
\end{equation}
Provided this is satisfied, the fastest-growing tearing mode, having $\gamma_\mrm{t}(k_{\rm sat})$, is either FKR-like, if $d_e/x_\mrm{m}\ll{1}$, or Coppi-like, if $d_e/x_\mrm{m}\gtrsim{1}$.
In the former case, the maximum tearing growth rate is (using \eqref{eqn:gammat_smallDelta} with $k_\mrm{t}\sim{1}/x_\mrm{m}$ and $\Delta'\sim{1}/x_\mrm{m}$)
\begin{equation}\label{eqn:gammaFKRsmallTh}
\gamma_\mrm{t,m}^\mrm{FKR}\tau_\mrm{A,r}\sim\biggl(\frac{m_e}{m_i}\biggr)^{1/2}\Biggl(\frac{d_i^{8/7}a_0^{6/7}}{a^2}\Biggr)\beta_\mrm{r}^{-4/7}M_\mrm{A,0}^{2/7} .
\end{equation}
The critical time for onset, $t^\mrm{FKR}_\mrm{cr}$, occurs when $\gamma_\mrm{t,m}^\mrm{FKR}\tau_\mrm{cs}\sim{1}$, or
\begin{equation}\label{eqn:FKRdisrupts}
\frac{a(t^\mrm{FKR}_\mrm{cr})}{a_0}\lesssim\biggl(\frac{m_e}{m_i}\biggr)^{1/6}\biggl(\frac{d_i}{a_0}\biggr)^{8/21}\beta_\mrm{r}^{-4/21}M_\mrm{A,0}^{-5/21}.
\end{equation}
In the latter (Coppi-like) case, which happens when
\begin{equation}\label{eqn:CoppiSmallTh}
\frac{a(t_\mrm{tr})}{a_0}\lesssim\biggl(\frac{m_e}{m_i}\biggr)^{1/2}\biggl(\frac{d_i}{a_0}\biggr)^{4/7}\beta_\mrm{r}^{-2/7}M_\mrm{A,0}^{1/7},
\end{equation}
the maximum growth rate is
\begin{equation}\label{eqn:gammaCoppiSmallTh}
\gamma_\mrm{t,m}^\mrm{Coppi}\tau_\mrm{A,r}\sim\biggl(\frac{m_e}{m_i}\biggr)^{1/5}\beta_\mrm{r}^{-2/7}M_\mrm{A,0}^{1/7}\Biggl(\frac{d_i^{4/7}a_0^{3/7}}{a}\Biggr) ,
\end{equation}
and so the critical time $t^\mrm{Coppi}_\mrm{cr}$ occurs when $\gamma_\mrm{t,m}^\mrm{Coppi}\tau_\mrm{cs}\sim{1}$, or
\begin{equation}\label{eqn:Coppidisrupts}
\frac{a(t^\mrm{Coppi}_\mrm{cr})}{a_0}\lesssim \biggl(\frac{m_e}{m_i}\biggr)^{1/10}\biggl(\frac{d_i}{a_0}\biggr)^{2/7}\beta_\mrm{r}^{-1/7}M_\mrm{A,0}^{-3/7}.
\end{equation}
If the smallest parameter in the problem is $d_i/a_0$, so that (\ref{eqn:FKRdisrupts}) occurs before (\ref{eqn:CoppiSmallTh}) (i.e.~$t^\mrm{FKR}_\mrm{cr} < t_\mrm{tr}$), then the CS will go unstable to mirror-stimulated FKR-like modes before the fastest-growing mode enters the large-$\Delta'$ regime. In this case, the critical CS thickness, $a_\mrm{cr}$, is given by \eqref{eqn:FKRdisrupts}. Comparing this to the expression for $a_\mrm{cr}$ when pressure anisotropy is {\em not} considered, equation \eqref{eqn:tearingDisrupts}, we see that mirrors increase $a_\mrm{cr}$ by a factor of ${\sim}(d_i/a_0)^{-2/7}\beta_\mrm{r}^{-4/21}M_\mrm{A,0}^{2/21}$. If, instead, $t^\mrm{FKR}_\mrm{cr} > t_{\rm tr}$, then the fastest-growing mirror-stimulated tearing mode becomes Coppi-like before tearing onsets, and $a_\mrm{cr}$ is effectively increased by a factor of ${\sim}(m_e/m_i)^{-1/15}(d_i/a_0)^{-8/21}\beta_\mrm{r}^{-1/7}M_\mrm{A,0}^{-2/21}$.
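The chain of estimates above can be spot-checked symbolically; the sketch below (ours; $\beta$ denotes $\beta_\mrm{r}$) verifies that $x_\mrm{m}/a$ in \eqref{eqn:xm} is independent of $a(t)$ and reproduces the Coppi-like results \eqref{eqn:gammaCoppiSmallTh} and \eqref{eqn:Coppidisrupts}:

```python
import sympy as sp

R = sp.Rational
m, di, a0, a, M, beta = sp.symbols('m d_i a_0 a M beta', positive=True)
D = di/a0

# eqn:xm -- proposed x_m/a, checked against eqn:kxm evaluated at x_m = 1/k_xm,
# with rho_{i,r} = d_i*beta_r^{1/2}:
X = D**R(3, 7)*beta**R(2, 7)/M**R(1, 7)
lhs = di*sp.sqrt(beta)/(X*a)                    # k_xm * rho_{i,r} at x_m = X*a
rhs = X**R(3, 4)*D**R(1, 4)*(a0/a)*M**R(1, 4)   # eqn:kxm
assert sp.simplify((lhs/rhs)**28) == 1          # a(t) cancels: x_m/a is a-independent

# eqn:gammaCoppiSmallTh -- Coppi-like rate with k_t*a ~ 1/X:
gam_tA = m**R(1, 5)*(di/a)/X
assert sp.simplify(gam_tA*a/(m**R(1, 5)*beta**R(-2, 7)*M**R(1, 7)*di**R(4, 7)*a0**R(3, 7))) == 1

# eqn:Coppidisrupts -- onset at gamma*tau_cs ~ 1 with tau_Ar/tau_cs = M_A0*a/a_0:
a_cr = sp.solve(sp.Eq(gam_tA/(M*a/a0), 1), a)[0]
target = a0*m**R(1, 10)*D**R(2, 7)/(beta**R(1, 7)*M**R(3, 7))
assert sp.simplify((a_cr/target)**14) == 1
```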
\subsubsection{Mirror-stimulated tearing for $\theta\sim{x}_\mrm{m}/a$}
If \eqref{eqn:smallThReq} is not satisfied, then the innermost mirror does not reach the center of the CS (i.e.~$k_{x,\mrm{m}}x_\mrm{m}\gg{1}$). Instead, the mirrors closest to the center with growth rate comparable to $\tau_\mrm{cs}^{-1}$ are most important, i.e.~those located at $x_\mrm{m}\sim\theta{a}$ (see \eqref{eqn:Deltaapprox}). Then the scaling laws in the previous section are modified; equations \eqref{eqn:gammaFKRsmallTh}--\eqref{eqn:Coppidisrupts} become, respectively,
\begin{gather}
\gamma_\mrm{t,m}^\mrm{FKR}\tau_\mrm{A,r}\sim\biggl(\frac{m_e}{m_i}\biggr)^{1/2}\biggl(\frac{d_i}{\theta a}\biggr)^2, \\
\label{eqn:FKRdisruptsTha}
\frac{a(t^\mrm{FKR}_\mrm{cr})}{a_0}\lesssim \biggl(\frac{m_e}{m_i}\biggr)^{1/6}\biggl(\frac{d_i}{\theta a_0}\biggr)^{2/3}M_\mrm{A,0}^{-1/3}, \\
\label{eqn:CoppiTha}
\frac{a(t_\mrm{tr})}{a_0}\lesssim\biggl(\frac{m_e}{m_i}\biggr)^{1/2}\biggl(\frac{d_i}{\theta a_0}\biggr), \\
\gamma_\mrm{t,m}^\mrm{Coppi}\tau_\mrm{A,r}\sim\biggl(\frac{m_e}{m_i}\biggr)^{1/5}\biggl(\frac{d_i}{\theta a}\biggr), \\
\label{eqn:CoppiDisruptsTha}
\frac{a(t^\mrm{Coppi}_\mrm{cr})}{a_0}\lesssim\biggl(\frac{m_e}{m_i}\biggr)^{1/10}\biggl(\frac{d_i}{\theta a_0}\biggr)^{1/2}M_\mrm{A,0}^{-1/2}.
\end{gather}
Comparing \eqref{eqn:FKRdisruptsTha} and \eqref{eqn:CoppiTha}, we see that, if $d_i/(\theta a_0)\lesssim(m_e/m_i)^{-1}M_\mrm{A,0}^{-1}$, tearing will onset before the fastest-growing mode can enter the large-$\Delta'$ regime (i.e.~$t^\mrm{FKR}_\mrm{cr}<t_\mrm{tr}$). In this case, $a_\mrm{cr}$ is given by \eqref{eqn:FKRdisruptsTha}, which is larger by a factor of ${\sim}\theta^{-2/3}$ than $a_\mrm{cr}$ derived without consideration of the mirrors, equation \eqref{eqn:tearingDisrupts}. Therefore, tearing will onset much sooner if $\theta\ll{1}$, whereas $t_\mrm{cr}$ is largely unaffected when $\theta\sim{1}$. If, instead, $t^\mrm{FKR}_\mrm{cr}>t_\mrm{tr}$, the mirror-stimulated tearing is Coppi-like, and $a_\mrm{cr}$ is given by \eqref{eqn:CoppiDisruptsTha}. However, the condition \eqref{eqn:smallThReq} still must be satisfied, allowing only a narrow range of validity for $\theta$. Moreover, this range only exists if $d_i/a_0\gg(m_e/m_i)^{-7/4}\beta_\mrm{r}^{1/2}M_\mrm{A,0}^{-2}$, a constraint not likely to be satisfied in the regime of interest. We therefore choose \eqref{eqn:FKRdisruptsTha} as the relevant condition for the onset of mirror-stimulated tearing for $\theta\sim x_\mrm{m}/a$.
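The $\theta\sim x_\mrm{m}/a$ comparisons can be checked the same way (our sketch): with $x_\mrm{m}\sim\theta a$, the onset condition \eqref{eqn:FKRdisruptsTha} and the criterion for $t^\mrm{FKR}_\mrm{cr}<t_\mrm{tr}$ follow directly:

```python
import sympy as sp

R = sp.Rational
m, di, a0, a, M, th = sp.symbols('m d_i a_0 a M theta', positive=True)

# FKR-like mode with k_t ~ Delta' ~ 1/x_m and x_m ~ theta*a; onset at
# gamma*tau_cs ~ 1 with tau_Ar/tau_cs = M_A0*a/a_0 reproduces eqn:FKRdisruptsTha:
gam_tcs = m**R(1, 2)*(di/(th*a))**2/(M*a/a0)
a_cr = sp.solve(sp.Eq(gam_tcs, 1), a)[0]
target = a0*m**R(1, 6)*(di/(th*a0))**R(2, 3)/M**R(1, 3)
assert sp.simplify((a_cr/target)**3) == 1

# Onset precedes the Coppi transition a(t_tr)/a_0 ~ m^{1/2} d_i/(theta*a_0)
# exactly when (a_cr/a_tr)^3 = [m*(d_i/(theta*a_0))*M]^{-1} >= 1:
a_tr = m**R(1, 2)*di/th
assert sp.simplify((a_cr/a_tr)**3*m*(di/(th*a0))*M) == 1
```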
\section{Discussion}\label{sec:summary}
While the specific quantitative model of CS evolution and mirror-stimulated tearing formulated herein is perhaps debatable, it nevertheless demonstrates an important, {\em qualitative} point: a gradually forming CS in a high-$\beta$, collisionless plasma easily produces enough pressure anisotropy to trigger the mirror instability, and the effect of this instability on the magnetic-field-line topology, and thus the tearing modes that instigate CS disruption via reconnection, ought to be considered.\footnote{In this respect, it is worth re-documenting the following prescient quote from the scarcely cited \citet{coppi83}: ``Thus we may consider the anisotropy-driven modes as a precursor of the spontaneous [tearing] ones and regard their effect as that of creating a region of macroscopic magnetic field turbulence near the neutral plane.''} For reasonable parameters, our theory predicts that the onset of reconnection in an evolving CS, driven by mirror-stimulated tearing modes, likely occurs earlier and at smaller scales than it would have without the mirrors, thereby placing a tighter upper limit on the aspect ratio of any forming CS (e.g.~compare \eqref{eqn:FKRdisrupts}, \eqref{eqn:Coppidisrupts}, and \eqref{eqn:FKRdisruptsTha} for the critical CS thickness at which mirror-stimulated tearing onsets to their $\Delta_p=0$ counterpart, equation \eqref{eqn:tearingDisrupts}). Whether or not these mirror-stimulated tearing modes ultimately grow to amplitudes $w\sim{a}_\mrm{eff}$, and perhaps beyond to ${\sim}a$ via island coalescence, to disrupt the CS awaits further work.
An immediate practical implication of this result is that numerical simulations of collisionless reconnection in high-$\beta$ plasmas should not initialize with a Maxwellian plasma embedded in an equilibrium CS. Instead, the CS should be allowed to evolve, and the particle distribution function self-consistently with it. A natural testing ground for this theory is the kinetic magnetorotational instability (MRI) \citep{quataert02,hq14}, thought to be the main driver of turbulence and enhanced transport in collisionless accretion flows, such as that onto the supermassive black hole at the Galactic center \citep{sharma06}. Historically, the linear MRI, at least in its MHD guise \citep{bh91}, was quickly shown to be a nonlinear ``channel'' solution in a differentially rotating disk \citep{gx94}, and various studies followed that employed Kelvin-Helmholtz and tearing ``parasitic'' modes to disrupt the otherwise resilient channels. In some theories, this disruption is credited for setting the steady-state level of magnetorotational turbulence as a function of the dissipative properties of the underlying magnetized fluid \citep[e.g.][]{pg09}. Given that the kinetic MRI both linearly and nonlinearly drives pressure anisotropy \citep{squire17}, it is worthwhile to contemplate a similar sequence of events, in which the kinetic MRI breaks down due to tearing modes stimulated by ion-Larmor-scale mirrors. Kinetic simulations of the MRI \citep[e.g.][]{riquelme12,hoshino13,hoshino15,kunz16,inchingolo18} may already be capable of testing this idea.
\begin{acknowledgments}
Support for A.A.~and M.W.K.~was provided by U.S.~DOE contract DE-AC02-09CH11466. This work benefited greatly from conversations with Nuno Loureiro, Alexander Schekochihin, and Dmitri Uzdensky, and from comments by the anonymous referees.
\end{acknowledgments}
\def\section{\@startsection {section}{1}{\z@}{3.25ex plus 1ex minus
.2ex}{1.5ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{3.25ex plus 1ex minus
.2ex}{1.5ex plus .2ex}{\normalsize\bf}}
\@addtoreset{equation}{section}
\makeatother
\title{\empty}
\author{\empty}
\date{\empty}
\numberwithin{equation}{section}
\begin{document}
\title{Nash estimates and upper bounds for non-homogeneous Kolmogorov equations}
\author{Alberto Lanconelli\thanks{Dipartimento di Matematica, Universit\`a di Bari Aldo Moro, Bari, Italy. \textbf{e-mail}: [email protected]} \and Andrea Pascucci\thanks{Dipartimento di Matematica,
Universit\`a di Bologna, Bologna, Italy. \textbf{e-mail}: [email protected]}}
\date{This version: \today}
\maketitle
\begin{abstract}
We prove a Gaussian upper bound for the fundamental solutions of a class of ultra-parabolic
equations in divergence form. The bound is independent of the smoothness of the coefficients and
generalizes some classical results by Nash, Aronson and Davies. The class considered has relevant
applications in the theory of stochastic processes, in physics and in mathematical finance.
\end{abstract}
\noindent \textbf{Keywords}: Nash estimates, Kolmogorov equations, ultra-parabolic equations,
fundamental solution, linear stochastic equations
\section{Introduction}
We consider the Kolmogorov-type equation with measurable coefficients
\begin{equation}\label{PDE}
Lu:=\sum_{i,j=1}^{m_0}\partial_{x_i}(a_{ij}\partial_{x_j}u)+\sum_{i=1}^{m_0}\partial_{x_i}(a_{i} u)+
c u+\sum_{i,j=1}^{d}b_{ij}x_j\partial_{x_i}u+\partial_tu=0,\qquad (t,x)\in\R\times\R^{d},
\end{equation}
where $m_0\leq d$ and $L$ verifies the following two standing assumptions:
\begin{assumption}\label{assA}
The coefficients $a_{ij}=a_{ji},a_i,c$, for $1\le i,j\le m_0$, are bounded, measurable functions
such that
\begin{equation}\label{ellipticity}
\m^{-1}|\x|^{2}\le \sum_{i,j=1}^{m_{0}}a_{ij}(t,x)\x_{i}\x_{j}\le \m|\x|^{2},\qquad
\x\in\R^{m_{0}},\ (t,x)\in\R^{d+1},
\end{equation}
for some positive constant $\m$.
\end{assumption}
\begin{assumption}\label{assB}
The matrix $B:=\left(b_{ij}\right)_{1\leq i,j\leq d}$ has constant real entries and takes the
block-form
\begin{equation}\label{e65b}
B=\begin{pmatrix}
\ast & \ast & \cdots & \ast & \ast \\ B_1 & \ast &\cdots& \ast & \ast \\ 0 & B_2 &\cdots& \ast& \ast \\ \vdots & \vdots
&\ddots& \vdots&\vdots \\ 0 & 0 &\cdots& B_{\nu}& \ast
\end{pmatrix}
\end{equation}
where each $B_i$ is a $\left(m_{i}\times m_{i-1}\right)$-matrix of rank $m_{i}$ with
\begin{equation}
m_0\geq m_1\geq \cdots \geq m_{\nu}\geq 1, \qquad \sum_{i=0}^{\nu} m_i = d,
\end{equation}
and the blocks denoted by {\rm ``$\ast$''} are arbitrary.
\end{assumption}
Degenerate equations of the form \eqref{PDE} naturally arise in the theory of stochastic
processes, in physics and in mathematical finance. For instance, if $W$ denotes a real Brownian
motion, then the simplest non-trivial Kolmogorov operator
$$\frac{1}{2}\p_{vv}+v\p_{x}+\p_{t},\qquad t\ge 0,\,(v,x)\in\R^{2},$$
is the infinitesimal generator of the classical Langevin's
stochastic equation
$$
\begin{cases}
dV_{t}=dW_{t}, \\
dX_{t}=V_{t}dt,
\end{cases}
$$
that describes the position $X$ and velocity $V$ of a particle in the phase space (cf.
\cite{Langevin}). Notice that in this case we have $1=m_{0}<d=2$.
Linear Fokker-Planck equations (cf. \cite{Desvillettes} and \cite{Risken}), non-linear
Boltzmann-Landau equations (cf. \cite{Lions1} and \cite{Cercignani}) and non-linear equations for
Lagrangian stochastic models commonly used in the simulation of turbulent flows (cf. \cite{Talay})
can be written in the form
\begin{equation}\label{PDE1}
\sum_{i,j=1}^{n}\partial_{v_i}(a_{ij}\partial_{v_j}f)+\sum_{j=1}^{n}v_{j}\p_{x_{j}}f+\p_{t}f=0,\qquad
t\ge 0,\, v\in\R^{n},\, x\in\R^{n},
\end{equation}
with the coefficients $a_{ij}=a_{ij}(t,v,x,f)$ that may depend on the solution $f$ through some
integral expressions. Clearly \eqref{PDE1} is a particular case of \eqref{PDE} with $n=m_{0}<d=2n$
and
$$B=\begin{pmatrix}
0 & 0 \\
I_{n} & 0 \
\end{pmatrix}$$
where $I_{n}$ denotes the $\left(n\times n\right)$-identity matrix.
In mathematical finance, equations of the form \eqref{PDE} appear in various models for the
pricing of path-dependent derivatives such as Asian options (cf., for instance,
\cite{pascuccibook}, \cite{BarucciPolidoroVespri}), stochastic volatility models (cf.
\cite{HobsonRogers}, \cite{Peszek}) and in the theory of stochastic utility (cf.
\cite{AntonelliBarucciMancino}, \cite{AntonelliPascucci}). In interest rate modeling, equations of
the type \eqref{PDE} were used in the study of the possible realization of Heath-Jarrow-Morton
models in terms of a finite dimensional Markov diffusion (cf. \cite{Ritchken},
\cite{ChiarellaKwon}).
A systematic study of Kolmogorov operators has been carried out by several authors. In the case of
constant coefficients, Kupcov \cite{Kupcov2}, Lanconelli and Polidoro \cite{LanconelliPolidoro}
studied the geometrical properties of the operator, giving necessary and sufficient conditions for
the existence of the fundamental solution. In the case of {\it H\"older continuous coefficients}
and assuming invariance properties with respect to a suitable homogeneous Lie group, existence of
a fundamental solution has been proved by Weber \cite{Weber}, Il'in \cite{Il'in}, Eidelman
\cite{Eidelman} and Polidoro \cite{Polidoro2}; pointwise upper and lower bounds for the
fundamental solution, mean value formulas and Harnack inequalities are given in \cite{Polidoro2}
and \cite{Polidoro1}. Schauder type estimates have been proved by Satyro \cite{Satyro}, Lunardi
\cite{Lunardi}, Manfredini \cite{Manfredini}. In the more general case of non-homogeneous
Kolmogorov equations with H\"older continuous coefficients, the existence of a fundamental
solution has been proved by Morbidelli \cite{Morbidelli} and Di Francesco and Pascucci
\cite{DiFrancescoPascucci2}; Harnack inequalities and Schauder estimates were proved by Di
Francesco and Polidoro \cite{DiFrancescoPolidoro}. The first results for Kolmogorov operators with
{\it measurable coefficients} were proved by Cinti, Pascucci and Polidoro in \cite{PP2003},
\cite{CPP2008}.
\medskip
The main result of this paper is a Gaussian upper bound, independent of the smoothness of the
coefficients, for the transition density/fundamental solution $\G=\G(t,x;T,y)$ of \eqref{PDE}.
Before stating our result, we introduce the following
\begin{notation}\label{notation1}
Let $\cost>0$ and let $B:=\left(b_{ij}\right)_{1\leq i,j\leq d}$ be a matrix that satisfies Assumption
\ref{assB}. We denote by $\Kol$ the class of Kolmogorov operators $L$ of the form \eqref{PDE},
that satisfy Assumption \ref{assA} with the non-degeneracy constant $\m$ in \eqref{ellipticity}
and the norms
$\|a_{i}\|_{\infty}$, $\|c\|_{\infty}$ smaller than $\cost$.
\end{notation}
The following theorem generalizes the classical results by Nash
\cite{Nash,Fabes1993}, Aronson \cite{Aronson} and Davies \cite{Davies} for uniformly
parabolic equations and provides a step forward for the study of non-linear Kolmogorov equations.
\begin{theorem}\label{t1}
Let $L\in \Kol$ and $T_{0}>0$. There exists a positive constant $C$, depending only on $\cost$, $B$ and $T_{0}$,
such that
\begin{align}\label{thes}
\G(t,x;T,y)\le \frac{C}{(T-t)^{\frac{Q}{2}}}\exp\left(-\frac{1}{C}\left|\Dil\left(\left(T-t\right)^{-\frac{1}{2}}\right)
\left(x-e^{-(T-t)B}y\right)\right|^2\right),
\end{align}
for $0<T-t\le T_{0}$ and $x,y\in\R^{d}$, with
\begin{equation}\label{e15}
{\Dil(r) :=\text{\rm diag}(r I_{m_{0}},r^{3} I_{m_{1}},\dots,r^{2{\nu}+1}I_{m_{{\nu}}}),\qquad r>0,}
\end{equation}
where $I_{m_{i}}$ denotes the $\left(m_{i}\times m_{i}\right)$-identity matrix, and
\begin{equation}\label{e77}
Q:=m_{0}+3m_{1}+\cdots+(2\nu+1)m_{\nu}.
\end{equation}
\end{theorem}
The exponent $\frac{Q}{2}$ appearing in estimate \eqref{thes} is optimal, as can easily be seen
in the case of constant-coefficient Kolmogorov operators whose fundamental solution is explicit
(see \eqref{e22and}). Notice the difference with respect to the uniformly parabolic case: for
instance in $\R^{3}$, for the heat operator $\p_{xx}+\p_{yy}+\p_{t}$ we have $Q=2$, while for the
prototype Kolmogorov operator $\p_{xx}+x\p_{y}+\p_{t}$ we have $Q=4$. To explain the specific form
of the Gaussian exponent in \eqref{thes} and the role of the constant $Q$ in \eqref{e77} (that is
typically greater than the Euclidean dimension $d$), we recall some basic facts about Kolmogorov
operators and the Lie group structures on $\R^{d+1}$ naturally associated with them.
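Before doing so, let us check the two values of $Q$ just mentioned against \eqref{e77}: for the heat operator we have $m_{0}=2$ and $\nu=0$, whence $Q=m_{0}=2$, while for the prototype Kolmogorov operator we have $m_{0}=m_{1}=1$ and $\nu=1$, whence
$$Q=m_{0}+3m_{1}=1+3=4.$$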
Let us first rewrite \eqref{PDE} in the compact form
\begin{equation}\label{L}
Lu=\div(AD u)+\div(au)+cu+Yu=0,
\end{equation}
{where $D=(\p_{x_{1}},\dots,\p_{x_{d}})$ denotes the gradient in $\R^{d}$,}
$A:=\left(a_{ij}\right)_{1\leq i,j\leq d}$, $a:=\left(a_i\right)_{1\leq i\leq d}$ with
$a_{ij}=a_{i}\equiv 0$ for $i>m_0$ or $j>m_0$, and
\begin{equation}\label{e14c}
Y:=\langle B x, D \rangle + \partial_t,\qquad (t,x)\in\R\times\R^{d}.
\end{equation}
The {\it constant-coefficient Kolmogorov operator}
\begin{equation}\label{e14b}
\LL:= \frac{1}{2}\sum_{i=1}^{m_{0}}\p_{x_{i}x_{i}}+ Y
\end{equation}
will be referred to as the \emph{principal part of $L$}. It is known that Assumption \ref{assB} is
equivalent to the hypoellipticity of $\LL$ and, in fact, to the well-known
H\"ormander condition, which in our setting reads:
\begin{align}
{\rm rank\ Lie}\left(\p_{x_{1}},\dots,\p_{x_{m_{0}}},Y\right)(t,x)=d+1,\qquad\text{for all }(t,x)\in \R^{d+1},
\end{align}
where Lie$\left(\p_{x_{1}},\dots,\p_{x_{m_{0}}},Y\right)$ denotes the Lie algebra generated by the
vector fields $\p_{x_{1}},\dots,\p_{x_{m_{0}}}$ and $Y$ (see Proposition 2.1 in
\cite{LanconelliPolidoro}). Thus the operator $L$ can be regarded as a perturbation of its principal
part $\LL$: roughly speaking, Assumption \ref{assA} ensures that the sub-elliptic structure of
$\LL$ is preserved under perturbation.
Constant-coefficient Kolmogorov operators are naturally associated with {\it linear} stochastic
differential equations: indeed, $\LL$ is the infinitesimal generator of the $d$-dimensional SDE
\begin{equation}\label{SDE}
dX_{t}=B X_{t}dt+\s dW_{t},
\end{equation}
where $W$ is a standard $m_{0}$-dimensional Brownian motion and $\s$ is the
$(d\times m_{0})$-matrix
\begin{equation}\label{sigm}
\s=\begin{pmatrix}
I_{m_{0}} \\
0 \
\end{pmatrix}.
\end{equation}
The solution $X$ of \eqref{SDE} is a Gaussian process with transition density
\begin{align} \label{e22and}
\Gg(t,x;T,y)
&= \frac{1}{ \sqrt{(2\pi)^{d}\det \mathcal{C}(T-t)} }
\exp\left(-\frac{1}{2}\big\langle\mathcal{C}(T-t)^{-1} \big(y - e^{(T-t)B}x\big),
\big(y -e^{(T-t)B}x\big)\big\rangle\right)
\end{align}
for $t<T$ and $x,y\in\R^{d}$, where
\begin{align}
\mathcal{C}(t)=\int\limits_{0}^{t}\left(e^{s B}\s\right)\left(e^{s B}\s\right)^{\ast}ds
\end{align}
is the covariance matrix of $X_{t}$. Assumption \ref{assB} ensures, and is in fact equivalent to,
the fact that $\mathcal{C}(t)$ is positive definite for every positive $t$. Moreover, $\Gg$ in
\eqref{e22and} is the fundamental solution of $\LL$ and the function
\begin{align}
u(t,x):=E\left[\phi\left(X_{T}\right)\mid X_{t}=x\right]=\int_{\R^{d}}\Gg(t,x;T,y)\phi(y)dy,\qquad t<T,\ x\in\R^{d},
\end{align}
solves the backward Cauchy problem
\begin{equation}\label{PC}
\begin{cases}
\LL u(t,x)=0,\qquad & t<T,\ x\in\R^{d}, \\
u(T,x)=\phi(x) & x\in\R^{d},
\end{cases}
\end{equation}
for any bounded and continuous function $\phi$.
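As an illustration, consider the operator $\frac{1}{2}\p_{vv}+v\p_{x}+\p_{t}$ of the Langevin example, for which
$$B=\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\qquad \s=\begin{pmatrix} 1 \\ 0 \end{pmatrix},\qquad e^{sB}\s=\begin{pmatrix} 1 \\ s \end{pmatrix},$$
so that a direct computation gives
$$\mathcal{C}(t)=\int_{0}^{t}\begin{pmatrix} 1 & s \\ s & s^{2} \end{pmatrix}ds
=\begin{pmatrix} t & \frac{t^{2}}{2} \\ \frac{t^{2}}{2} & \frac{t^{3}}{3} \end{pmatrix},\qquad
\det\mathcal{C}(t)=\frac{t^{4}}{12}.$$
Although the diffusion matrix $\s\s^{\ast}$ is singular, $\mathcal{C}(t)$ is positive definite for every $t>0$ and \eqref{e22and} is a well-defined Gaussian density.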
Operator $\LL$ has some remarkable invariance properties that were first studied in
\cite{LanconelliPolidoro}. Denote by $\ell_{(\t,\x)}$, for $(\t,\x)\in\R^{d+1}$, the
left-translations in $\R^{d+1}$ defined as
\begin{equation}\label{eq:translation}
\ell_{(\t,\x)}(t,x):=(\t,\x)\circ (t,x):=\left(t+\t,x+{e^{t B}}\x\right).
\end{equation}
Then, $\LL$ is invariant with respect to $\ell_{\z}$ in the sense that
$$\LL\left(u\circ \ell_{\z}\right)=\left(\LL u\right)\circ \ell_{\z},\qquad \z\in\R^{d+1}.$$
Moreover, let $\Dil(r)$ be as in \eqref{e15}: then $\LL$ is homogeneous with respect to the
dilations in $\R^{d+1}$ defined as
\begin{equation}\label{e066}
\d_{r}(t,x):=\left(r^2t,\Dil(r)x\right), \qquad r>0
\end{equation}
{\it if and only if} all the $\ast$-blocks of $B$ in \eqref{e65b} are null
(\cite{LanconelliPolidoro}, Proposition 2.2): in that case, we have
\begin{equation}\label{e200}
\LL(u\circ\delta_r)=r^{2}(\LL u)\circ\delta_r,\qquad r>0.
\end{equation}
Since the Jacobian determinant $\det\Dil(r)$ equals $r^{Q}$, the natural number $Q$ in \eqref{e77} is usually
called the {\it homogeneous dimension} of $\mathbb{R}^{d}$ with respect to $(\Dil(r))_{r>0}$.
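For instance, for the prototype operator $\p_{xx}+x\p_{y}+\p_{t}$ in $\R^{3}$ we have $\Dil(r)=\text{\rm diag}(r,r^{3})$ and $\d_{r}(t,x,y)=(r^{2}t,rx,r^{3}y)$. Setting $v:=u\circ\d_{r}$, the chain rule gives
$$\left(\p_{xx}+x\p_{y}+\p_{t}\right)v\,(t,x,y)=\left(r^{2}\p_{xx}u+r^{3}x\,\p_{y}u+r^{2}\p_{t}u\right)(r^{2}t,rx,r^{3}y)
=r^{2}\left(\left(\p_{xx}+x\p_{y}+\p_{t}\right)u\right)(r^{2}t,rx,r^{3}y),$$
where the last equality holds because the first spatial coordinate of the dilated point is $rx$; this is exactly \eqref{e200}, and indeed $\det\Dil(r)=r\cdot r^{3}=r^{4}=r^{Q}$.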
\medskip Now let us consider the case of a Kolmogorov operator $L\in\Kol$ with variable coefficients.
It turns out that the invariance properties of the principal part $L_{0}$ are inherited by $L$ in
terms of ``invariance within the class $\Kol$''. More explicitly, for the left-translations we
have
\begin{remark}\label{rtrasl}
Let $\z\in\R^{d+1}$ and $L\in\Kol$. If $u$ is a solution of
$Lu=0$
then $v:=u\circ \ell_{\z}$ solves
$L^{(\z)}v=0$
where $L^{(\z)}$ is obtained from $L$ by left-translating its coefficients, that is
$L^{(\z)}=L\circ \ell_{\z}$. Moreover, operator $L^{(\z)}$ still belongs to $\Kol$.
\end{remark}
As for dilations, we have to distinguish between {\it homogeneous Kolmogorov operators} (i.e.
operators with null $\ast$-blocks in \eqref{e65b}) and general Kolmogorov operators.
\begin{remark}\label{rdil}
Let $\l>0$ and let $L\in\Kol$ be a homogeneous Kolmogorov operator. If $u$ is a solution of
$Lu=0$
then $v:=u\circ\d_{\l}$ solves
$L^{\l}v=0$
where $L^{\l}$ is obtained from $L$ by dilating its coefficients, that is $L^{\l}=L\circ \d_{\l}$.
Moreover, operator $L^{\l}$ still belongs to $\Kol$.
\end{remark}
By virtue of the previous remarks, it turns out that
the crucial step to achieve estimate
\eqref{thes} is to prove it for $t=0$, $T=1$ and $y=0$, that is
\begin{align}\label{thes1}
\G(0,x;1,0)\le C\exp\left(-\frac{|x|^2}{C}\right),\qquad \ x\in\R^{d},
\end{align}
with {\it $C$ dependent only on $\cost$ and $B$}. Indeed, once \eqref{thes1} is proved, then the
general estimate for $L\in\Kol$ follows from the invariance of the class $\Kol$ with
respect to the left-translations $\ell$ and the intrinsic dilations $\d$.
It is then clear that the factor $(T-t)^{-\frac{Q}{2}}$ and the term $\left(x-e^{-(T-t)B}y\right)$
in the exponential of \eqref{thes} come directly from the use of translations and dilations in
the intrinsic Lie group structure.
Estimate \eqref{thes} is consistent with the following Gaussian upper bound for Kolmogorov
operators with {\it H\"older continuous coefficients}, proved in \cite{Polidoro2} and
\cite{DiFrancescoPascucci2} (see also \cite{Menozzi}, \cite{BallyKohatsu}) by means of the
classical parametrix method:
\begin{align}\label{theshom}
\G(t,x;T,y)\le C\G_{0}(t,x;T,y),\qquad t<T,\, x\in\R^{d},
\end{align}
with $C=C(M)$ and $\G_{0}$ as in \eqref{e22and} with $\s=\begin{pmatrix}
\sqrt{2M} I_{m_{0}} \\
0 \
\end{pmatrix}.
$ Notice that for {\it homogeneous} Kolmogorov operators, the constant $C$ in estimate
\eqref{thes} is independent of $T-t$.
\medskip In the case of {\it non-homogeneous} Kolmogorov operators, which is the main focus of this
paper, the lack of homogeneity makes the proof of \eqref{thes1} rather involved. The invariance
property of Remark \ref{rtrasl} remains unchanged, while the scaling argument cannot be used
anymore. However, we have the following result (see Remark 3.2 in \cite{LanconelliPolidoro}).
\begin{lemma}\label{l3}
Let $\l>0$ and $L\in\Kol$. If $u$ is a solution of
$Lu=0$
then $v:=u\circ\d_{\l}$ solves
$L^{\l}v=0$
where
\begin{equation}\label{Lr}
L^{\l}u:=\div(A^{(\l)}D u)+\langle B^{(\l)}x,D u\rangle+\partial_t
u+\div(a^{(\l)}u)+c^{(\l)}u(t,x),
\end{equation}
with
$$A^{(\l)}(t,x)=A\left(\d_{\l}(t,x)\right),\qquad a^{(\l)}(t,x)=\l a\left(\d_{\l}(t,x)\right),\qquad c^{(\l)}(t,x)=\l^{2}
c\left(\d_{\l}(t,x)\right),$$
and $B^{(\l)}=\l^{2}\,\Dil\!\left(\tfrac{1}{\l}\right) B\,\Dil(\l)$, that is
\begin{equation}\label{e65bl}
B^{(\l)}=\begin{pmatrix}
\l^{2} B_{1,1} & \l^{4} B_{1,2} & \cdots & \l^{2\nu} B_{1,\nu} & \l^{2\nu+2} B_{1,\nu+1} \\
B_1 & \l^{2} B_{2,2} &\cdots& \l^{2\nu-2} B_{2,\nu} & \l^{2\nu} B_{2,\nu+1} \\
0 & B_2 &\cdots& \l^{2\nu-4} B_{3,\nu}& \l^{2\nu-2} B_{3,\nu+1} \\ \vdots & \vdots
&\ddots& \vdots&\vdots \\ 0 & 0 &\cdots& B_{\nu}& \l^{2} B_{\nu+1,\nu+1}
\end{pmatrix},
\end{equation}
where $B_{i,j}$ denotes the $\ast$-block in the $(i,j)$-th position in \eqref{e65b}.
\end{lemma}
More importantly, {\it we will show that if $L\in\Kol$ then the fundamental solution $\G^{\l}$ of
$L^{\l}$ in \eqref{Lr} satisfies estimate \eqref{thes1} uniformly with respect to $\l\in[0,1]$,
that is with the constant $C$ dependent only on $\cost$ and $B$:} intuitively, this is due to the
fact that, on the one hand, the dilations $\d_{\l}$ do not affect the blocks $B_{1},\dots,B_{\n}$
in \eqref{e65bl} (this guarantees the hypoellipticity of the operator, uniformly with respect to
$\l$); on the other hand, the ``new'' $\ast$-blocks in \eqref{e65bl} are bounded functions of
$\l\in[0,1]$.
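In the simplest non-homogeneous case, $d=2$ and $m_{0}=m_{1}=1$, this can be seen explicitly: writing
$$B=\begin{pmatrix} b_{11} & b_{12} \\ 1 & b_{22} \end{pmatrix},\qquad\text{formula \eqref{e65bl} gives}\qquad
B^{(\l)}=\begin{pmatrix} \l^{2}b_{11} & \l^{4}b_{12} \\ 1 & \l^{2}b_{22} \end{pmatrix}:$$
the subdiagonal block $B_{1}=1$ is not affected by the dilation, while the $\ast$-entries remain bounded for $\l\in[0,1]$ and vanish as $\l\to0^{+}$, so that $L^{\l}$ approaches a homogeneous Kolmogorov operator.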
The first step in the proof of \eqref{thes1}
consists in proving the local boundedness of non-negative weak solutions of equation \eqref{PDE}
(cf. Theorem \ref{moser}): this result slightly extends the Moser estimates obtained in Cinti et
al. \cite{CPP2008}, where the lower order terms were not included. From the Moser estimates we
get a Nash upper bound for the fundamental solution in Theorem \ref{t4}. Next we employ a method
by Aronson \cite{Aronson} which provides the crucial estimates in Theorem \ref{t2} and the desired
Gaussian upper bound: here we extend the previous scaling argument to non-homogeneous
Kolmogorov operators and then prove Theorem \ref{t1} for the general class $\Kol$.
\section{Moser's iterative method}
In this section we adapt Moser's iterative method to prove the local boundedness of weak
solutions of $Lu=0$. In the classical setting, Moser's approach combines Caccioppoli type estimates
with the Sobolev embedding inequality: for Kolmogorov operators that are not uniformly parabolic,
Caccioppoli estimates provide $L^{2}_{\text{\rm loc}}$-bounds only for the first $m_{0}$
derivatives (cf. Assumption \ref{assA}) and therefore a naive extension of the classical approach
is not possible. Nevertheless, this problem can be overcome by using the original argument
proposed in \cite{PP2004}, which is based on some ad hoc Sobolev type inequalities for local
solutions to \eqref{PDE}.
To introduce the main result of this section, Theorem \ref{moser} below, we recall the definition
of weak solution.
\begin{definition}
We say that $u$ is a \emph{weak sub-solution} of \eqref{PDE} in a domain $\O$ of $\R^{d+1}$ if
$$u,\p_{x_{1}}u,\dots, \p_{x_{m_{0}}}u, Yu\in L^{2}_{\text{\rm loc}}(\Omega)$$ and for any
non-negative $\varphi\in C_0^{\infty}(\Omega)$ we have
\begin{equation}
\int_{\Omega}\left(-\langle AD u,D \varphi\rangle-\langle a,D \varphi\rangle u+\varphi cu+\varphi Yu\right)\geq0.
\end{equation}
A function $u$ is a \emph{weak super-solution} if $-u$ is a weak sub-solution. If $u$ is a weak
sub and super-solution, then we say that $u$ is a \emph{weak solution}.
\end{definition}
In the following statement, $R_{r}(z_{0})$ denotes the cylinder
\begin{equation}\label{e76}
R_{r}(z_{0}):= z_0 \circ \d_{r} \left(R_{1}\right)=\{z\in\R^{d+1}\mid z= z_{0} \circ \d_{r}(\z), \, \z \in
R_{1}\},\qquad z_{0}\in\R^{d+1},\ r>0,
\end{equation}
where
\begin{align}
R_{1} = \{(t,x)\in\R\times\R^{d}\mid |t|<1,\, |x|<1\}.
\end{align}
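For instance, for the prototype operator $\p_{xx}+x\p_{y}+\p_{t}$ in $\R^{3}$ we have
$$R_{r}(0)=\left\{(t,x,y)\in\R^{3}\mid |t|<r^{2},\ \left|\left(\tfrac{x}{r},\tfrac{y}{r^{3}}\right)\right|<1\right\},$$
so that $|R_{r}(0)|=r^{Q+2}\,|R_{1}|$: the exponent $Q+2$ appearing in \eqref{e50} is the homogeneous dimension of $\R^{d+1}$ with respect to the dilations $\d_{r}$.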
\begin{theorem}\label{moser}
Let $L\in\Kol$, $\l\in[0,1]$ and $L^{\l}$ as in \eqref{Lr}. Let $u$ be a non-negative weak
solution of $L^{\l}u=0$ in a domain $\O$. Let $z_{0}\in \O$ and $0<\r<r\le r_{0}$ be such that
$r-\r<1$ and $\overline{R_{r}(z_{0})}\subseteq\O$. Then, for every $p>0$ there exists a positive
constant $C=C(\cost,r_{0},p)$ such that
\begin{align}\label{e50}
\sup_{R_{\r}(z_{0})}u^{p}\le\frac{C}{(r-\r)^{Q+2}}\int\limits_{R_{r}(z_{0})}u^{p}.
\end{align}
Estimate \eqref{e50} also holds for every $p<0$ such that $u^{p}\in L^{1}(R_{r}(z_{0}))$.
\end{theorem}
The proof of Theorem \ref{moser} requires two auxiliary results of independent interest. In the
following statement, we use the notation $D_{m_0}=\left(\p_{x_{1}},\dots,\p_{x_{m_{0}}}\right)$
and {$\|B\|$ for the norm of $B$ as a linear operator}.
\begin{theorem}[Caccioppoli type inequality]\label{Caccioppoli}
Let $L\in\Kol$ and $u$ be a non-negative weak sub-solution of \eqref{PDE} in $R_r(z_0)$, with
$0<\r<r\le r_{0}$ such that $r-\r<1$. If $u^q\in L^{2}(R_r(z_0))$ for some $q>\frac{1}{2}$, then
$D _{m_0}u^q\in L^{2}(R_{\rho}(z_0))$ and there exists a constant $C=C(M,\|B\|)$ such that
\begin{equation}\label{caccioppoli}
\int_{R_{\rho}(z_0)}|D _{m_0}u^q|^2\leq C\left(\frac{q}{2q-1}\right)^2\frac{q}{(r-\rho)^2}\int_{R_r(z_0)}|u^q|^2.
\end{equation}
If $u$ is a non-negative weak super-solution, then the previous inequality holds for
$q<\frac{1}{2}$.
\end{theorem}
\begin{remark}\label{rlambda1}
Since $C$ in \eqref{caccioppoli} depends only on $M$ and $\|B\|$, the estimate also holds for
$L^{\l}$ in \eqref{Lr}, uniformly with respect to $\l\in[0,1]$.
\end{remark}
\begin{proof}
Let $u$ be a non-negative weak sub-solution of \eqref{PDE} in $R_r(z_0)$: this means that
\begin{equation}\label{subsolution}
\int_{R_r(z_0)}-\langle AD u,D \varphi\rangle-\langle a,D \varphi\rangle
u+\varphi cu+\varphi Yu\geq 0
\end{equation}
for any non-negative $\varphi\in H_0^1(R_r(z_0))$. We choose $q>\frac{1}{2}$ and assume that
$u^q\in L^{2}(R_r(z_0))$. Let $\varphi=2qu^{2q-1}\psi^2$ in \eqref{subsolution}, where $\psi\in
C_0^{\infty}(R_r(z_0))$; then we have
\begin{align}
\langle AD u,D \varphi\rangle&=\langle AD u,D (2qu^{2q-1}\psi^2)\rangle\\
&=2q(2q-1)\psi^2u^{2q-2}\langle AD u,D u\rangle+4q\psi u^{2q-1}\langle AD
u,D \psi\rangle\\ &=\frac{2(2q-1)}{q}\psi^2\langle AD u^q,D u^q\rangle+4\psi
u^q\langle AD u^q,D \psi\rangle
\intertext{and}
\int_{R_r(z_0)}\varphi Yu&=\int_{R_r(z_0)}2qu^{2q-1}\psi^2Yu\\ &=\int_{R_r(z_0)}\psi^2Y(u^{2q})\\
&=\int_{R_r(z_0)}Y(u^{2q}\psi^2)-2\psi u^{2q}Y\psi\\ &=\int_{R_r(z_0)}-\text{\rm tr}(B)u^{2q}\psi^2-2\psi
u^{2q}Y\psi,
\end{align}
where in the last equality we used the divergence theorem. Moreover,
\begin{align}
\langle a,D \varphi\rangle u&=\langle a,D (2qu^{2q-1}\psi^2)\rangle u\\ &=
2q(2q-1)\psi^2u^{2q-1}\langle a,D u\rangle+4qu^{2q}\psi\langle a,D \psi\rangle\\ &=
2(2q-1)\psi^2u^q\langle a,D u^q\rangle+4qu^{2q}\psi\langle a,D \psi\rangle.
\end{align}
Therefore, inequality \eqref{subsolution} can be rewritten as
\begin{align}
\int_{R_r(z_0)}\frac{2(2q-1)}{q}\psi^2\langle AD u^q,D u^q\rangle \leq&\ \int_{R_r(z_0)}
4u^q|\psi||\langle AD u^q,D \psi\rangle|+2(2q-1)\psi^2u^q|\langle a,D u^q\rangle|\\
&+\int_{R_r(z_0)}u^{2q}h(\psi),
\end{align}
where
\begin{equation}\label{def h}
h(\psi):=|\text{\rm tr}B|\psi^2+2|\psi||Y\psi|+4q|\psi||\langle a,D \psi\rangle|+2q\psi^2|c|.
\end{equation}
Observe that for any $\e,\delta>0$ we have
\begin{align}
4u^q|\psi| |\langle AD u^q,D \psi\rangle|\leq 2\e\psi^2\langle AD
u^q,D u^q\rangle+\frac{2}{\e}u^{2q}\langle AD \psi,D\psi\rangle
\end{align}
and
\begin{align}
2(2q-1)\psi^2u^q|\langle a,D u^q\rangle|&\leq2(2q-1)\psi^2u^q|a| |D _{m_0}u^q|\leq\frac{2q-1}{\delta}\psi^2u^{2q}|a|^2+\delta(2q-1)\psi^2|D _{m_0}u^q|^2.
\end{align}
These estimates give
\begin{align}
\int_{R_r(z_0)}2\left(\frac{2q-1}{q}-\e\right)\psi^2\langle AD u^q,D
u^q\rangle \leq&\ \int_{R_r(z_0)} \delta(2q-1)\psi^2|D _{m_0}u^q|^2\\
&+\int_{R_r(z_0)}u^{2q}\left(\frac{2}{\e}\langle
AD \psi,D \psi\rangle+\frac{2q-1}{\delta}\psi^2|a|^2+h(\psi)\right).
\end{align}
Now, choose $\e=\frac{2q-1}{2q}$, which is positive since $q>\frac{1}{2}$, and use Assumption
\ref{assA} on the matrix $A$ to obtain
\begin{align}
\frac{2q-1}{q\mu}\int_{R_r(z_0)}\psi^2|D _{m_0}u^q|^2\leq&\ \int_{R_r(z_0)}
\delta(2q-1)\psi^2|D _{m_0}u^q|^2\\ &+\int_{R_r(z_0)}u^{2q}\left(\frac{4q}{2q-1}\langle
AD \psi,D \psi\rangle+\frac{2q-1}{\delta}\psi^2|a|^2+h(\psi)\right),
\intertext{or equivalently}
\left(\frac{2q-1}{q\mu}-\delta(2q-1)\right)\int_{R_r(z_0)}\psi^2|D _{m_0}u^q|^2\leq&\ \int_{R_r(z_0)}u^{2q}\left(\frac{4q}{2q-1}\langle AD \psi,D \psi\rangle+\frac{2q-1}{\delta}\psi^2|a|^2+h(\psi)\right).
\end{align}
The choice $\delta=\frac{1}{2q\mu}$ yields
\begin{equation}\label{last caccioppoli}
\frac{2q-1}{2q\mu}\int_{R_r(z_0)}\psi^2|D _{m_0}u^q|^2\leq\int_{R_r(z_0)}u^{2q}\Big(\frac{4q}{2q-1}\langle
AD \psi,D \psi\rangle+2q\mu(2q-1)\psi^2|a|^2+h(\psi)\Big).
\end{equation}
The thesis now follows by making a suitable choice of the function $\psi$. More precisely, if
$z_0=(t_0,x_0)$ we set
\begin{equation}\label{e13}
\psi(t,x)=\chi\left(\sqrt{|t-t_0|}\right)\chi\left(\|z_0^{-1}\circ(x,0)\|\right),
\end{equation}
where $\chi\in C^{\infty}(\mathbb{R},[0,1])$ is such that
\begin{align}
\chi(s)=
\begin{cases}
1 & \text{ if }\quad s\leq\rho, \\
0 & \text{ if }\quad s\geq r,
\end{cases}
\qquad\text{ and }\quad|\chi '|\leq \frac{2}{r-\rho}.
\end{align}
Note that
\begin{equation}\label{e20}
\left|\partial_{t}\psi\right|, \left|\partial_{x_{j}}\psi\right|\leq
\frac{C_{1}}{r-\rho},\qquad j=1,\dots,d,
\end{equation}
where $C_{1}$ is a dimensional constant. With such a choice for $\psi$, we can bound $h(\psi)$ in
\eqref{def h} as
\begin{align}
|h(\psi)|\leq C\frac{q}{r-\rho}
\end{align}
with $C=C(\cost,\|B\|)$ and inequality \eqref{last caccioppoli} becomes
\begin{align}
\frac{2q-1}{2q\mu}\int_{R_{\rho}(z_0)}|D _{m_0}u^q|^2&\leq\int_{R_r(z_0)}u^{2q}\left(\frac{4q\mu}{2q-1}|D _{m_0}\psi|^2+
2q\mu(2q-1)\| a\| _{\infty}^2+|h(\psi)|\right)\\
&\leq\frac{Cq^2}{(2q-1)(r-\rho)^2}\int_{R_r(z_0)}u^{2q},
\end{align}
which corresponds to \eqref{caccioppoli}. The statement concerning super-solutions is proved
similarly. By a standard approximation argument, we can suppose that $u$ is positive; the test
function to be used is $\varphi=2|q|u^{2q-1}\psi^2$ and all the previous inequalities are reversed
due to the negativity of $2q-1$.
\end{proof}
We now state and prove the following Sobolev type inequality.
\begin{theorem}[Sobolev type inequality]\label{Sobolev}
Let $L\in\Kol$, $\l\in[0,1]$ and $L^{\l}$ as in \eqref{Lr}. If $u$ is a non-negative weak
sub-solution of $L^{\l}u=0$ in $R_r(z_0)$, then $u\in L^{2\k}_{\text{\rm loc}}(R_{r}(z_0))$ with
$\k=1+\frac{2}{Q}$ and we have
\begin{equation}\label{sobolev}
\|u\| _{L^{2\k}(R_{\rho}(z_0))}\leq \frac{C}{r-\rho}\left(\|u\|_{L^{2}(R_{r}(z_0))}+\| D_{m_0}u\| _{L^{2}(R_{r}(z_0))}\right),
\end{equation}
for every $0<\rho<r\le r_{0}$, satisfying $r-\r<1$, with $C$ dependent only on $\cost,B$ and
$r_{0}$. The same statement holds for non-negative super-solutions.
\end{theorem}
\begin{proof}
We only sketch the proof since it follows closely the idea of the proof of Theorem 3.3 in
\cite{PP2004}. First we consider the case $\l=1$: if $u$ is a non-negative sub-solution of
\eqref{PDE} in $R_r(z_0)$, we represent it in terms of the fundamental solution $\Gg$ in
\eqref{e22and}. To this end, we set $A_0=\frac{1}{2}\s
\s^{*}$ with $\s$ as in \eqref{sigm}
and consider the cut-off function $\psi$ introduced in \eqref{e13}. Then, for every $z\in
R_{\rho}(z_0)$ we write
\begin{align}
u(z)&=(u\psi)(z)\\ &= \int\limits_{R_{r}(z_0)} \left(\langle A_0 D (u\psi),D
\Gg(z;\cdot)\rangle-\Gg(z;\cdot)Y(u\psi)\right)(\zeta)d\zeta\\
&=I_{1}(z)+I_{2}(z)+I_{3}(z)+I_4(z),
\end{align}
where
\begin{align*}
I_{1}(z) = & \int\limits_{R_{r}(z_0)} \left( \langle A_0 D \psi,D
\Gg(z;\cdot) \rangle u \right) (\zeta) d\zeta - \int\limits_{R_{r}(z_0)}
\left( \Gg(z;\cdot)u Y\psi \right)(\zeta)d\zeta,\\
I_{2}(z)= & \int\limits_{R_{r}(z_0)} \left( \langle \left(A_0-A\right) D u,D
\Gg(z;\cdot) \rangle \psi \right)(\zeta)d\zeta
- \int\limits_{R_{r}(z_0)} \left(\Gg(z;\cdot)\langle A D u,D \psi
\rangle\right)(\zeta)d\zeta, \\
I_{3}(z) = & \int\limits_{R_{r}(z_0)} \left(\langle A D u,D (
\Gg(z;\cdot)\psi) \rangle-\Gg(z;\cdot)\psi Y u+\langle a,D (\Gg(z;\cdot)\psi)\rangle u-\Gg(z;\cdot)\psi c u \right)(\zeta)d\zeta,\\
I_{4}(z) = & \int\limits_{R_{r}(z_0)} \left(-\langle a,D (\Gg(z;\cdot)\psi)\rangle u+\Gg(z;\cdot)\psi c u \right)(\zeta)d\zeta.
\end{align*}
Since $u$ is a weak sub-solution of \eqref{PDE}, it follows that $I_{3}\leq 0$ and therefore
\begin{align}
0\leq u\leq I_{1} +I_{2}+ I_4 \qquad \text{a.e. in } R_{\rho}(z_0).
\end{align}
The integrals $I_1$ and $I_2$ can be estimated as in the proof of Theorem 3.3 in \cite{PP2004}.
For the last term we have
\begin{align}
I_4(z)&= \int\limits_{R_{r}(z_0)} \left(-\langle a,D (\Gg(z;\cdot)\psi)\rangle
u+\Gg(z;\cdot)\psi c u \right)(\zeta)d\zeta\\ &=\int\limits_{R_{r}(z_0)}
\left(-\psi\langle a,D \Gg(z;\cdot)\rangle u-\Gg(z;\cdot)\langle
a,D \psi\rangle u+\Gg(z;\cdot)\psi c u \right)(\zeta)d\zeta\\
&=\int\limits_{R_{r}(z_0)} \left(-\psi\langle a,D \Gg(z;\cdot)\rangle
u-\Gg(z;\cdot)u(\langle a,D \psi\rangle -\psi c) \right)(\zeta)d\zeta\\
&=\int\limits_{R_{r}(z_0)} \left(-\psi\langle a,D \Gg(z;\cdot)\rangle
u\right)(\zeta)d\zeta-\int\limits_{R_{r}(z_0)} \left(\Gg(z;\cdot)u(\langle
a,D \psi\rangle -\psi c) \right)(\zeta)d\zeta\\ &=:I'_4(z)+I''_4(z).
\end{align}
By the potential estimates in \cite{CPP2008}, Theorem 2, and the H\"older inequality, we get
\begin{align}
\|I_4\|_{L^{2\k}(R_{\rho}(z_0))}&\leq\| I'_4\|_{L^{2\k}(R_{\rho}(z_0))}+\|
I''_4\|_{L^{2\k}(R_{\rho}(z_0))}\\ &\leq\| I'_4\|_{L^{2\k}(R_{\rho}(z_0))}+C_2\|
I''_4\|_{L^{2\tilde{\kappa}}(R_{\rho}(z_0))}\\ &\leq C_1 \|
u\|_{L^{2}(R_{r}(z_0))}+\frac{C_2}{r-\rho}\| u\|_{L^{2}(R_{r}(z_0))},
\end{align}
where $\tilde{\kappa}=1+\frac{4}{Q-2}$, $C_1=C_1(r_{0},B,\| a \| _{\infty})$ and
$C_2=C_2(r_{0},B,\| a \| _{\infty},\| c \| _{\infty})$. For the general case $\l\in[0,1]$,
the proof is completely analogous and can be carried out using the fact that the potential
estimates in \cite{CPP2008} are uniform in $\l$.
A similar argument proves the thesis when $u$ is a super-solution. In this case, we introduce the
following auxiliary operator
\begin{align}
\hat{L}_{0}=\div(A_0 D )+\hat{Y}\quad\text{ where }\quad \hat{Y}:= -\langle B x,
D \rangle+\partial_{t}.
\end{align}
For any $z=(t,x)$, we set $\hat{z}=(-t,x)$, $v(z)=u(\hat{z})$ and remark that
\begin{equation*}
D v(z)= D u(\hat{z})\quad\text{ and }\quad \hat{Y} v(z)=-Y u(\hat{z})
\end{equation*}
almost everywhere.
Then, if $R$ is a domain which is symmetric with respect to the time variable $t$, since $u$ is a
super-solution, we have
\begin{align}
&\int\limits_{R}\left(- \langle A (\hat{z}) D v,D \phi \rangle -\langle
a(\hat{z}),D \phi\rangle v+c(\hat{z})\phi v- \phi \hat{Y} v\right)(z)d z\\
&= \int\limits_{R} \left(-\langle
A(\hat{z}) D u(\hat{z}),D \phi(z)\rangle-\langle a(\hat{z}),D \phi(z)\rangle u(\hat{z})+c(\hat{z})\phi(z) u(\hat{z})
+\phi(z) Y u(\hat{z})\right) d z \leq 0,
\end{align}
for every non-negative $\phi\in C_{0}^{\infty}(R)$. Then, we represent $v$ in terms of the
fundamental solution $\hat{\Gamma}_{0}$ of $\hat{L}_{0}$ and the proof proceeds as before.
\end{proof}
We are now ready to prove Theorem \ref{moser}.
\begin{proof}[Proof of Theorem \ref{moser}]
As in the proof of Theorem 1 in \cite{CPP2008}, the argument is based on Moser's iteration method.
The inequality to be iterated is obtained through a combination of Theorem \ref{Caccioppoli} and
Theorem \ref{Sobolev}, and reads
\begin{equation}\label{e55}
\|u^{q}\|_{L^{2\kappa}(R_{\rho}(z_0))}\leq \frac{C(\cost,r_{0},q)\sqrt{|q|}}{(r-\r)^{2}}\|u^{q}\|_{L^{2}(R_{r}(z_0))},
\end{equation}
where $0<\rho<r\le r_{0}$ with $r-\rho<1$, $q\neq\frac{1}{2}$ and $u$ is a non-negative weak
solution of $L^{\l}u=0$. From Theorem \ref{Caccioppoli} we see that $C(\cost,r_{0},q)$, as a
function of $q$, is bounded at infinity and diverges at $q=\frac{1}{2}$: this feature is shared
with the equation studied in \cite{CPP2008}. However, the presence of the new factor $\sqrt{|q|}$
in the right hand side of \eqref{e55} requires additional care in the application of Moser's
iterative procedure. First of all, we fix a sequence of radii
$\rho_n=\Big(1-\frac{1}{2^n}\Big)\rho+\frac{1}{2^n}r$, a sequence of exponents
$q_n=\frac{p}{2}\kappa^n$ and a safety distance, say $\delta$, from $\frac{1}{2}$. The exponent
$p$ is chosen to guarantee that the distance of the resulting exponent $q_n$ from $\frac{1}{2}$ is
at least $\delta$, for each $n\geq 1$. We then iterate inequality \eqref{e55} to obtain
\begin{align}
\|u^{\frac{p}{2}}\|_{L^{\infty}(R_{\rho}(z_0))}\leq f(r-\r)\|u^{\frac{p}{2}}\|_{L^{2}(R_{r}(z_0))}
\end{align}
where, for some $\tilde{C}=\tilde{C}(\cost,r_{0},\d)$,
\begin{align}
f(r-\r)&=\prod_{j=0}^{\infty}
\left(\frac{\tilde{C}\sqrt{|p|}\kappa^{\frac{j}{2}}}{(\r_{j}-\r_{j+1})^{2}}\right)^{\frac{1}{\kappa^{j}}}=\frac{C_1(\cost,r_{0},p)}{(r-\r)^{\frac{Q+2}{2}}}.
\end{align}
This proves inequality \eqref{e50} for $p$ satisfying $\left|\frac{p}{2}\kappa^n-\frac{1}{2}\right|\geq\delta$.
The previous restriction is easily relaxed using the monotonicity of the $L^p$-means (see
\cite{CPP2008} for details).
\end{proof}
\begin{remark} The previous proof can be slightly modified to see that Moser's estimate
\eqref{e50} still holds true on the cylinder $R^+_r(z_0):= z_0 \circ \d_{r}
\left(R^{+}_{1}\right)$ with
$ R^{+}_{1} = \{(t,x)\in\R\times\R^{d}\mid 0<t<1,\, |x|<1\}.$
\end{remark}
\section{Gaussian upper bound}
In this section we prove a Gaussian upper bound for the fundamental solution $\G$ of $L\in\Kol$.
We begin with an important implication of the Moser estimate \eqref{e50}.
\begin{theorem}[Nash upper bound]\label{t4}
Let $\G$ be the fundamental solution of $L\in\Kol$. Then, there exists a positive constant
$C=C(\cost,T_{0})$ such that
\begin{equation}\label{e30}
\G(t,x;T,y)\le \frac{C}{(T-t)^{\frac{Q}{2}}},\qquad 0<T-t\le T_{0},\ x,y\in\R^{d}.
\end{equation}
\end{theorem}
\begin{proof}
By Theorem \ref{moser}, with $\r=\frac{1}{2}\sqrt{\frac{T-t}{\max\{T_{0},1\}}}$ and
$r=\sqrt{2}\r$, we have
\begin{align}
\G(t,x;T,y)&\le \sup_{R_{\r}(t,x)}\G(\cdot,\cdot;T,y)\\
&\le \frac{C}{(T-t)^{\frac{Q+2}{2}}}\iint\limits_{R_{r}(t,x)}\G(s,\x;T,y)d \x d s\\
&\le \frac{C}{(T-t)^{\frac{Q+2}{2}}}\int\limits_{t-\frac{T-t}{2\max\{T_{0},1\}}}^{t+\frac{T-t}{2\max\{T_{0},1\}}}\int\limits_{\R^{d}}\G(s,\x;T,y)d \x d s
\intertext{{(since $\int\limits_{\R^{d}}\G(s,\x;T,y)\,d \x\le e^{(T-s)\|c\|_{\infty}}$)}}
&\le \frac{C}{(T-t)^{\frac{Q}{2}}}.
\end{align}
\end{proof}
An immediate consequence of Theorem \ref{t4} is the following
\begin{corollary}\label{c1bis} \
There exists a positive constant $C=C(\cost,T_{0})$ such that
\begin{align}
\int\limits_{\R^d}\G^{2}(t,x;T,y) dy \le \frac{C}{(T-t)^{\frac{Q}{2}}},\qquad 0<T-t\le T_{0},\ x\in\R^{d},
\end{align}
and
\begin{align}
\int\limits_{\R^d}\G^{2}(t,x;T,y) d x \le \frac{C}{(T-t)^{\frac{Q}{2}}},\qquad 0<T-t\le T_{0},\ y\in\R^{d}.
\end{align}
\end{corollary}
Our proof of a Gaussian upper bound for the fundamental solution is adapted from Aronson's method
\cite{Aronson}. The next theorem is a crucial step in this direction.
\begin{theorem}\label{t2}
Fix $y\in \R^{d}$, $\s>0$ and let $u_{0}\in L^{2}(\R^{d})$ be such that $u_{0}(x)=0$ for
$|x-y|<\s$. Let $L\in\Kol$ and suppose that $u$ is a bounded solution to \eqref{PDE} in
$[\y-\s^2,\y[\,\times \R^{d}$ with terminal value $u(\eta,x)=u_{0}(x)$. Then, there exist positive
constants $k$ and $C$ such that for any $\t$ which satisfies
$\eta-\frac{1\wedge\sigma^2}{k}\leq\t\leq\eta$ we have
\begin{equation}\label{e16}
|u((0,e^{-\y B}y)\circ(\t,0))|\le C(\eta-\t)^{-\frac{Q}{4}}
\exp\left(-\frac{\s^{2}}{C(\eta-\t)}\right)\|u_{0}\|_{L^{2}(\R^{d})}.
\end{equation}
The constants $k$ and $C$ depend only on $\cost$.
\end{theorem}
\begin{proof}
We first prove the thesis for $y=0$. We fix $s$ such that $0\leq\eta-s\leq1\wedge\sigma^2$ and we
define
\begin{align}
h(t,x)=-\frac{|x|^2}{2(\eta-s)-k(\eta-t)}+\alpha(\eta-t),\qquad \eta-\frac{\eta-s}{k}\leq t\leq \eta,\ x\in\mathbb{R}^d,
\end{align}
with $\alpha$ and $k$ being positive constants to be fixed later on. Moreover, for $R\ge 2$, we
consider a function $\g_{R}\in C_{0}^{\infty}(\R^{d},[0,1])$ such that $\g_{R}(x)\equiv 1$ for
$|x|\le R-1$, $\g_{R}(x)\equiv 0$ for $|x|\ge R$ with $|D \g_{R}|$ bounded by a constant
independent of $R$. Then, we multiply both sides of \eqref{PDE} by $\g^2_{R}e^{2h}u$ and we
integrate over $[\tau,\eta]\times \R^{d}$, with $\eta-\frac{\eta-s}{k}\leq\tau\le\eta$, to get
\begin{equation}\label{e19}
\begin{split}
&\int\limits_{\R^{d}}\g_{R}^{2}e^{2h}u^{2}\vert_{t=\t}\,d x
-2\iint\limits_{[\tau,\eta]\times\R^{d}} e^{2h}u^{2}
\left(3\langle A D _{m_{0}}h,D _{m_{0}}h\rangle-Y
h-2\langle a,D _{m_{0}} h\rangle+\Lambda\right)d x d t\le \\
&\int\limits_{\R^{d}}\g_{R}^{2}e^{2h}u^{2}\vert_{t=\eta}\,d x+
2\iint\limits_{[\tau,\eta]\times\R^{d}}e^{2h}u^{2}\left(3\mu
\left|D _{m_{0}}\g_{R}\right|^{2}+\left|Y\g_{R}^{2}\right|-{2\langle a,D _{m_{0}}\g_{R}\rangle\g_{R}}
\right)d x d t,
\end{split}
\end{equation}
where $\Lambda$ is a positive constant depending on $\cost$.
The proof of \eqref{e19} is tedious but routine: all the details are reported in
Appendix \ref{aronapp}.
Next we let $R$ go to infinity in \eqref{e19}: since $u$ is bounded by assumption and
$e^{2h(t,x)}\le e^{-\frac{|x|^{2}}{\eta-s}+2\alpha(\eta-s)}$, the last integral tends to zero and
we get
\begin{equation}\label{e195}
\int\limits_{\R^{d}} e^{2h}u^{2}\vert_{t=\t}\,d x
-2\iint\limits_{[\tau,\eta]\times\R^{d}} e^{2h}u^{2}
\left(3\langle A D _{m_{0}}h,D _{m_{0}}h\rangle-Y
h-2\langle a,D _{m_{0}} h\rangle+\Lambda\right)d x d t\\
\le \int\limits_{\R^{d}} e^{2h}u^{2} \vert_{t=\eta}\,d x.
\end{equation}
\noindent We now claim that, by a suitable choice of $k$ and $\alpha$, only dependent on
$\cost,B$,
we have
\begin{equation}\label{e17}
3\langle A D _{m_{0}}h,D _{m_{0}}h\rangle-Y
h-2\langle a,D _{m_{0}} h\rangle+\Lambda\le 0,\qquad \eta-\frac{\eta-s}{k}\le t\le\eta,\ x\in\R^{d}.
\end{equation}
Indeed, letting $\d=2(\eta-s)-k(\eta-t)$ to ease the notation, we have
\begin{align}
3\langle A D _{m_{0}}h,D _{m_{0}}h\rangle-Yh-2\langle a,D _{m_{0}}
h\rangle+\Lambda
&\leq \frac{12\mu|x|^2}{\d^2}+\frac{2\left\|B\right\||x|^2}{\d}-\frac{k|x|^2}{\d^2}-\alpha+\frac{4\langle a,x\rangle}{\d}+\Lambda\\
&{\le \frac{|x|^2}{\d^2}(12\mu+2\d\left\|B\right\|-k+2)-\alpha+2\left\|a\right\|^{2}_{\infty}+\Lambda}\\
&{\leq \frac{|x|^2}{\d^2}(12\mu+4\left\| B\right\|-k+2)-\alpha+2\left\|a\right\|^{2}_{\infty}+\Lambda,}
\end{align}
and, with $\alpha=2\left\|a\right\|^{2}_{\infty}+\Lambda$ and $k$ big enough, the last term can be
made negative.
From \eqref{e17} and \eqref{e195}, we derive the inequalities
\begin{align}
\max_{t\in\, ]\eta-\frac{\eta-s}{k},\eta[}\ \int\limits_{\left|D\left(\frac{2\sqrt{k}}{\sqrt{\eta-s}}\right)
x\right|\le 1}e^{2h(t,x)}u^{2}(t,x)d x
&\le \max_{t\in\, ]\eta-\frac{\eta-s}{k},\eta[}\ \int\limits_{\R^{d}} e^{2h(t,x)}u^{2}(t,x)d x\\
\label{est3}
&\le \int\limits_{|x|\ge\s}e^{2h(\eta,x)}u_{0}^{2}(x)d x.
\end{align}
Now we notice that, by definition, for every $t\in\,]\eta-\frac{\eta-s}{k},\eta]$ we have
\begin{align}
2h(t,x)&\ge -\frac{2|x|^{2}}{\y-s}=
\intertext{(setting $\d=\frac{\sqrt{\eta-s}}{2\sqrt{k}}$)}
&= -\frac{2\left|\Dil(\d)\Dil(\d^{-1})x\right|^{2}}{\y-s}\ge
\intertext{(if
$\left|D\left(\frac{2\sqrt{k}}{\sqrt{\eta-s}}\right)x\right|=\left|\Dil\left(\d^{-1}\right)x\right|\le
1$)}
&\ge -\frac{2\left\|\Dil(\d)\right\|^{2}}{\y-s}\ge
\intertext{(since $\d<1$ by assumption)}\label{est1}
&\ge -\frac{2\d^{2}}{\y-s}= -\frac{1}{2k}.
\end{align}
On the other hand, if $|x|\ge\s$, we have
\begin{align}\label{est2}
-2h(\eta,x)=\frac{2|x|^2}{2(\eta-s)}\geq\frac{\sigma^2}{\eta-s}.
\end{align}
Plugging estimates \eqref{est1} and \eqref{est2} into \eqref{est3}, we get
\begin{equation}\label{e23}
\max_{t\in ]\eta-\frac{\eta-s}{k},\eta[}\ \int\limits_{\left|D\left(\frac{2\sqrt{k}}{\sqrt{\eta-s}}
\right)x\right|\le 1}u^{2}(t,x)d x \le e^{\frac{1}{2k}}
\exp\left(-\frac{\s^{2}}{\eta-s}\right)\|u_{0}\|^{2}_{L^{2}(\R^{d})}.
\end{equation}
Finally, we rely on Theorem \ref{moser} in order to get the desired estimate \eqref{e16}. We let
$\tau=\eta-\frac{\eta-s}{k}$ and we observe that $\tau\in [\eta-\frac{1}{k},\eta]$ and
$\eta-s=k(\eta-\t)$: thus we have
\begin{align}
|u(\t,0)|^{2}&\le \sup_{R^{+}_{\frac{\sqrt{\eta-s}}{4\sqrt{k}}}(\t,0)}|u|^{2}\le
\intertext{(by \eqref{e50})}
&\le \frac{C}{(\eta-s)^{\frac{Q+2}{2}}}\iint\limits_{R^{+}_{\frac{\sqrt{\eta-s}}{2\sqrt{k}}}(\t,0)}u^{2}(t,x)d x d t\\
&= \frac{C}{(\eta-s)^{\frac{Q+2}{2}}} \int\limits^{\t+\frac{\eta-s}{4k}}_{\t}
\int\limits_{\left|D\left(\frac{2\sqrt{k}}{\sqrt{\eta-s}}\right)x\right|\le 1} u^{2}(t,x)d x d t
\intertext{(by \eqref{e23})}
&\le \frac{C}{(\eta-s)^{\frac{Q}{2}}}\exp\left(-\frac{\s^{2}}{C(\eta-s)}\right) \|u_{0}\|^{2}_{L^{2}(\R^{d})}\\
&{= \frac{C}{k^{\frac{Q}{2}}(\eta-\t)^{\frac{Q}{2}}}\exp\left(-\frac{\s^{2}}{Ck(\eta-\t)}\right)
\|u_{0}\|^{2}_{L^{2}(\R^{d})},}
\end{align}
where the constant $C=C(\cost,k)$. This yields \eqref{e16} in the case $y=0$.
For the general case, for any fixed $u$, $u_{0}$ and $(\y,y)$ as in the statement, we set
\begin{align}
v(\t,x)= u\left((0,e^{-\y B}y)\circ(\t,x)\right)\qquad \t<\eta, \ x\in\R^{d},
\end{align}
{and observe that $v(\eta,x)=u(\eta,y+x)=u_{0}(y+x)=0$ for $|x|\le\s$. Moreover, by the invariance
property of the vector field $Y$ with respect to the left translation $\ell_z$, we have
$L^{(z)} v=0,$
where {$L^{(z)}:=L\circ \ell_{z}\in\Kol$} and $z=(0,e^{-\y B}y)$.} Thus, we get as before
\begin{align}
|u(z\circ(\t,0))|&= |v(\t,0)|\leq\frac{C}{(\eta-\t)^{\frac{Q}{4}}}\exp\left(-\frac{\s^{2}}{C(\eta-\t)}\right)
\|u_{0}\|_{L^{2}(\R^{d})}
\end{align}
completing the proof.
\end{proof}
\noindent The following corollary is a simple consequence of Theorem \ref{t2}.
\begin{corollary}\label{c1}
There exist two positive constants $k$ and $C$, depending only on $\cost$, such that for every
$\sigma>0$ and $\y\in\R$, we have
\begin{equation}\label{e24}
\int\limits_{\left|\x-e^{(\y-t)B}x\right|\ge\s}\G^{2}(t,x;\eta,\xi)d\x\le
\frac{Ce^{-\frac{\s^{2}}{C(\eta-t)}}}{(\eta-t)^{\frac{Q}{2}}},\qquad (t,x)\in\Big[\eta-\frac{1\wedge\sigma^2}{k},\eta\Big[\times\R^{d}
\end{equation}
and
\begin{equation}\label{e24dual}
\int\limits_{\left|x-e^{(t-\y)B}\x\right|\ge\s}\G^{2}(t,x;\eta,\xi)dx\le
\frac{Ce^{-\frac{\s^{2}}{C(\eta-t)}}}{(\eta-t)^{\frac{Q}{2}}},\qquad (t,x)\in\Big[\eta-\frac{1\wedge\sigma^2}{k},\eta\Big[\times\R^{d}.
\end{equation}
\end{corollary}
\begin{proof}
First of all we observe that
\begin{align}
\int\limits_{\left|\x-e^{(\y-t)B}x\right|\ge\s}\G^{2}(t,x;\eta,\xi)d\x&=
\int\limits_{\left|\x-y\right|\ge \s} \G^{2}\left(t,e^{(t-\eta)B}y;\eta,\xi\right)d\x\\
&=\int\limits_{\left|\x-y\right|\ge \s} \G^{2}\left((0,e^{-\y B}y)\circ(t,0);\eta,\xi\right)d\x.
\end{align}
Now the function
\begin{align}
u(s,w): =\int\limits_{\left|\x-y\right|\ge \s} \G(s,w;\eta,\xi)\G((0,e^{-\y
B}y)\circ(t,0);\eta,\xi)d\x,
\end{align}
is a non-negative solution to \eqref{PDE} for $s<\y$, with terminal condition
$$u(\eta,w)=
\begin{cases}
0 & \text{ if } |w-y|<\s, \\
\G((0,e^{-\y B}y)\circ(t,0);\eta,w) & \text{ if } |w-y|\ge \s.
\end{cases}
$$
Setting $(s,w)=(0,e^{-\y B}y)\circ(t,0)$, from Theorem \ref{t2} we infer
\begin{align}
\int\limits_{\left|\x-y\right|\ge \s}\G^{2}\left((0,e^{-\y B}y)\circ(t,0);\eta,\xi\right)d\x&= u((0,e^{-\y B}y)\circ(t,0))\\ &\le
\frac{Ce^{-\frac{\s^{2}}{C(\eta-t)}}}{(\eta-t)^{\frac{Q}{4}}}\|\G\left((0,e^{-\y
B}y)\circ(t,0),\eta,\cdot\right)\|_{L^{2}(\R^{d})}.
\end{align}
Then the thesis follows directly from Corollary \ref{c1bis}. Inequality \eqref{e24dual} is proved
similarly.
\end{proof}
We are now in position to prove our main result.
\begin{proof}[Proof of Theorem \ref{t1}]
The proof proceeds in a series of steps.
\noindent {\bf Step 1.} We first prove the thesis for $y=0$ and $T-t=\frac{1}{k}$, with $k$ as in
Theorem \ref{t2}. We fix $x\in\R^{d}$ and set
\begin{equation}\label{ee1}
\s(x)=\frac{|x|}{2 \|e^{\frac{T-t}{2}B}\|}.
\end{equation}
If $\s(x)\le 1$, that is $|x|\le 2\|e^{\frac{T-t}{2}B}\|$, then the thesis is a direct consequence
of Theorem \ref{t4} and the fact that, by assumption, $T-t=\frac{1}{k}$ is fixed with $k$
dependent only upon $\cost$.
On the other hand, if $\s(x)\ge 1$, by the Chapman-Kolmogorov identity and putting
$\eta=T-\frac{T-t}{2}$, we have
\begin{align}
\G(t,x;T,0)= \int\limits_{\R^{d}}\G(t,x;\eta,\xi)\G(\eta,\xi;T,0)d\x=J_1+J_2,
\end{align}
where
\begin{align}
J_{1}&:=\ \int\limits_{\big|\x-e^{\frac{T-t}{2}B}x\big|\ge\s(x)}\G(t,x;\eta,\xi)\G(\eta,\xi;T,0)d\x,\\
J_{2}&:=\ \int\limits_{\big|\x-e^{\frac{T-t}{2}B}x\big|<\s(x)}\G(t,x;\eta,\xi)\G(\eta,\xi;T,0)d\x.
\end{align}
By the Cauchy-Schwarz inequality, we have
\begin{align}
\left(J_{1}\right)^{2}&\le\int\limits_{\big|\x-e^{\frac{T-t}{2}B}x\big|\ge\s(x)}\G^2(t,x;\eta,\xi)d\x
\int\limits_{\big|\x-e^{\frac{T-t}{2}B}x\big|\ge\s(x)} \G^2(\eta,\xi;T,0)d\x
\intertext{(by \eqref{e24} and Corollary \ref{c1bis})}
&\leq \frac{Ce^{-\frac{\s^{2}(x)}{C(T-t)}}}{(T-t)^{Q}}\\
&= C k^{Q}\exp\left(-\frac{k|x|^2}{4C\|e^{\frac{1}{2k}B}\|^{2}}\right).
\end{align}
In order to estimate $J_{2}$, we first note that if $\big|\x-e^{\frac{T-t}{2}B}x\big|<\s(x)$ then,
recalling also the definition \eqref{ee1} of $\s(x)$, we have
\begin{align}\label{ee2}
|\x|\ge\big|e^{\frac{T-t}{2}B}x\big|-\big|\x-e^{\frac{T-t}{2}B}x\big|\ge \frac{|x|}{\|e^{-\frac{T-t}{2}B}\|}-\s(x)\ge\s(x).
\end{align}
Thus, by \eqref{ee2} and using again the Cauchy-Schwarz inequality, we have
\begin{align}
\left(J_{2}\right)^{2}&\le
\int\limits_{\left|\x\right|\geq\s(x)} \G^2(\eta,\xi;T,0)d\x \int\limits_{\left|\x\right|\geq\s(x)}\G^2(t,x;\eta,\xi)d\x
\intertext{(by \eqref{e24dual})}
&\leq \frac{Ce^{-\frac{\s^{2}(x)}{C(T-t)}}}{(T-t)^{\frac{Q}{2}}} \int\limits_{\mathbb{R}^d}\G^2(t,x;\eta,\xi)d\x
\intertext{(by Corollary \ref{c1bis})}
&\leq \frac{C}{(T-t)^{Q}}e^{-\frac{\s^{2}(x)}{C(T-t)}}\\
&= C k^{Q}\exp\left(-\frac{k|x|^2}{4C\|e^{\frac{1}{2k}B}\|^{2}}\right).
\end{align}
This completes the proof of the case $\s(x)\geq 1$. In conclusion we have proved estimate
\eqref{thes} for $T-t=\frac{1}{k}$, that is
\begin{equation}\label{last}
\Gamma(t,x;T,0)\leq Ce^{-\frac{|x|^2}{C}},\qquad T-t=\frac{1}{k}, \ x\in\R^{d},
\end{equation}
with the constant $C$ dependent only on $\cost$ and $B$.
Actually, the same estimate holds also for the fundamental solution $\G^{\l}$ of $L^{\l}$ in
\eqref{Lr}, {\it with $C$ independent of $\l\in[0,1]$:} in fact, all the results of this section
derive from Moser's estimate, Theorem \ref{moser}, which is uniform in $\l\in[0,1]$.
\medskip\noindent {\bf Step 2.} We use a scaling argument to generalize estimate \eqref{last} to the case $0<
T-t\leq\frac{1}{k}$; precisely, we prove that
\begin{equation}\label{last1}
\Gamma(t,x;T,0)\leq\frac{C}{(T-t)^{\frac{Q}{2}}}e^{-\frac{|x|^2}{C(T-t)}},\qquad 0< T-t\leq\frac{1}{k},\
x\in\R^{d}.
\end{equation}
For $\l\in[0,1]$, we set
$$\Gamma^{\l}(t,x;T,0)=\l^{Q}\Gamma(\d_{\l}(t,x);\d_{\l}(T,0))$$
and observe that, since the Jacobian $J \Dil(\l)$ equals $\l^{Q}$, we have that $\Gamma^{\l}$ is a
fundamental solution of the operator $L^{(\l)}$ in \eqref{Lr}.
Now, fix $t$ such that $0<T-t\leq\frac{1}{k}$ and set $\l=k(T-t)$. Then we have
\begin{align}
\Gamma(t,x;T,0)&=\l^{-\frac{Q}{2}}\Gamma^{(\sqrt{\l})}\left(\frac{t}{\l},\Dil\left(\frac{1}{\sqrt{\l}}\right)x;\frac{T}{\l},0\right)\le
\intertext{(by \eqref{last})}
&\le C\l^{-\frac{Q}{2}}e^{-\frac{1}{C}\left|\Dil\left(\frac{1}{\sqrt{\l}}\right)x\right|^{2}}
\end{align}
which proves \eqref{last1}.
\medskip\noindent {\bf Step 3.} We now remove the condition $y=0$. Let
$z=(0,e^{-TB}y)$ and $\Gamma^{(z)}$ be the fundamental solution of the operator $L^{(z)}:=L\circ
\ell_z$.
{Since $L^{(z)}\in\Kol$, we have that $\Gamma^{(z)}$ satisfies the estimate \eqref{last1} and
hence we} obtain
\begin{align}
\Gamma(t,x;T,y)&=
\Gamma^{(z)}(z^{-1}\circ (t,x);T,0)\\ &= \Gamma^{(z)}(t,x-e^{-(T-t)B}y;T,0)\\
&\leq \frac{C}{(T-t)^{\frac{Q}{2}}}\exp\left(-\frac{1}{C}\left|\Dil\left(\frac{1}{\sqrt{T-t}}\right)\left(x-e^{-(T-t)B}y\right)\right|^{2}\right),\qquad 0< T-t\leq\frac{1}{k},\ x,y\in\R^{d}.
\end{align}
\medskip\noindent {\bf Step 4.} In the last step we relax the restriction on the length of the time interval. We first
suppose that $0< T-t\leq\frac{2}{k}$ and set $\tau=\frac{T-t}{2}$. By the Chapman-Kolmogorov
identity we have
\begin{align}
\Gamma(t,x;T,y)&= \int_{\mathbb{R}^d}\Gamma(t,x;t+\tau,\xi)\Gamma(t+\tau,\xi;T,y)d\xi\\
&\leq \frac{C}{\tau^{Q}}\int_{\mathbb{R}^d}e^{-\frac{1}{C}\left|\Dil\left(\frac{1}{\sqrt{\tau}}\right)\left(x-e^{-\t B}\xi\right)\right|^2}
e^{-\frac{1}{C}\left|\Dil\left(\frac{1}{\sqrt{\tau}}\right)\left(\x-e^{-\t B}y\right)\right|^2}d\xi\\
&\leq \frac{C}{\tau^{Q}}\int_{\mathbb{R}^d}e^{-\frac{1}{C}\left|\Dil\left(\frac{1}{\sqrt{\tau}}\right)\left(x-e^{-\t B}\xi\right)\right|^2}
e^{-\frac{1}{C}\left|\Dil\left(\frac{1}{\sqrt{\tau}}\right)\left(e^{-\t B}\xi-e^{-(T-t)B}y\right)\right|^2}d\x
\intertext{(by the Chapman-Kolmogorov identity for a standard Gaussian kernel)}
&\leq \frac{C}{(T-t)^{\frac{Q}{2}}}e^{-\frac{1}{C}\left|\Dil\left(\frac{1}{\sqrt{T-t}}\right)\left(x-e^{-(T-t)B}y\right)\right|^2}.
\end{align}
Iterating this procedure we can extend the estimate to any bounded time interval and this
concludes the proof.
\end{proof}
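\begin{remark}
For the reader's convenience, we record the elementary identity behind the ``Chapman-Kolmogorov
identity for a standard Gaussian kernel'' invoked in Step 4: for every $c>0$,
\begin{equation*}
\int\limits_{\R^{d}} e^{-\frac{|w-\zeta|^{2}}{2c}}\, e^{-\frac{|\zeta-v|^{2}}{2c}}\, d\zeta
= (\pi c)^{\frac{d}{2}}\, e^{-\frac{|w-v|^{2}}{4c}},
\end{equation*}
which follows from $|w-\zeta|^{2}+|\zeta-v|^{2}=2\big|\zeta-\frac{w+v}{2}\big|^{2}+\frac{|w-v|^{2}}{2}$
and the explicit value of the resulting Gaussian integral; the anisotropic version used above is
then obtained through the linear change of variables induced by $\Dil$.
\end{remark}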
\section{Introduction}
In a recent article \cite{Frydel18b} we studied a one-dimensional lattice gas model of penetrable
particles and demonstrated that a two-component system (where particles of the same species repel
and those of opposite species attract each other) becomes thermodynamically unstable. The
collapsed state is manifested by the presence of scattered and extremely dense clusters, in which
the occupation number of a site that is part of a cluster is $n\gg 1$.
This behavior is not unique to lattice models and has been previously observed in more realistic systems
of penetrable particles such as a penetrable sphere model \cite{Frydel16,Frydel17,Frydel18a}.
Prior to these examples, the possibility of thermodynamic collapse in a multicomponent
system of soft particles
has been considered as early as 1966 by Ruelle and Fisher \cite{Ruelle66a,Ruelle66b,Heyes07},
who also explored mathematical
criteria for the conditions in which such a collapse becomes plausible.
The renewed interest in penetrable particles has been triggered by a growing number of synthesized
and naturally occurring nanoparticles whose pair interactions lack the usual hard-core repulsion, resulting
in ultrasoft particles that interpenetrate and, in principle, can occupy the same space \cite{Likos01a}.
Penetrability gives rise to different behaviors than those encountered in systems with hard-core repulsion.
The type of soft interactions, furthermore, plays a decisive role in determining a particular
behavior of the system \cite{Likos01}.
In a one-component system, thermodynamic collapse becomes possible for systems with
pair interactions comprised of a short-range attractive tail and a repulsive soft-core.
More recent examples where such systems are studied in connection with thermodynamic
collapse include Refs. \cite{Malescio15,Malescio16,Malescio18}, among others.
The most famous example of thermodynamic collapse, however, is that of a gravitational system
\cite{Yan14}, whose pair interaction consists of only an attractive long-range part.
When it comes to two-component systems, considerably less work has been done
to understand the mechanism of thermodynamic collapse.
Thermodynamic collapse in a two-component system is not self-evident, since attractive interactions
occur only between particles of opposite species. This implies that a collapsed configuration, or a
group of such configurations, involves a very specific arrangement of particles, whose structure
has been investigated in Ref. \cite{Frydel18b} for a one-dimensional lattice-gas model.
Because one-dimensional models, as a general rule, preclude the possibility of a phase transition
\cite{Cuesta04} (interestingly enough, this rule does not apply to thermodynamic collapse), the
investigation in Ref. \cite{Frydel18b} is not entirely satisfactory. In the present article we consider
a binary system on a square-lattice substrate with nearest-neighbor interactions, as it is the most
standard model in two dimensions. Because the occupation number is unlimited, the system
is closely related to the discrete Gaussian model originally designed to capture the structure and
behavior of interfaces and the roughening transition \cite{Chui76,Weeks80,Binder95}.
Our results are organized as follows. In Sec. \ref{sec:model} we introduce
the model and write down the corresponding grand partition function. In this section
we introduce two distinct ways of counting particles, depending on whether particles are
considered as distinguishable or indistinguishable. This distinction
does not arise in single-occupation lattice-gas models and is a consequence of
multiple occupancy.
In Sec. \ref{sec:trans} we transform the original particle system into a spin ensemble.
In the transformed ensemble
spins can take on any integer value as a consequence of particle
penetrability.
In Sec. \ref{sec:infty} we analyze thermodynamic collapse
in the infinite density limit.
This limit is the
consequence of penetrability
and implies that the average occupation of a site is $n\to\infty$.
In this limit the Hamiltonian reduces to
a harmonic function and the corresponding partition function transforms into a discrete
Gaussian model (DG).
A similar model was used to study roughening transition of interfaces.
In Sec. \ref{sec:rho} we analyze the system at finite density. Due to a non-harmonic
term, the resulting partition function is no longer Gaussian and we analyze the system
using a Gaussian variational method. Both approximate and exact models indicate
the presence of a metastable region, so that even though the global minimum
corresponds to a collapsed state, the system remains in
metastable equilibrium.
\section{The model}
\label{sec:model}
The model consists of two types of particles on a two-dimensional square-lattice substrate.
As hard-core interactions are not included, there is no restriction on the number of particles
that can occupy a single site. If the occupation numbers for a given site are $n_{i}^+$ and
$n_{i}^-$, where the superscripts ``+'' and ``-'' designate the different species, then the
Hamiltonian of the system is
\begin{eqnarray}
H&=& K \sum_{i} \bigg[\frac{1}{2} n_{i}^+(n_{i}^+ - 1) + \frac{1}{2} n_{i}^-(n_{i}^- - 1) - n_{i}^+n_{i}^-\bigg]
\nonumber\\
&+& \alpha K \sum_{nn} \bigg[n_{i}^{+}n_{j}^{+} + n_{i}^{-}n_{j}^{-} - n_{i}^+n_{j}^- - n_{i}^-n_{j}^+\bigg],
\nonumber\\
\label{eq:H1}
\end{eqnarray}
where the first line is for the interaction between particles on the same site, and the second
line is for the interaction between particles on neighboring sites (the subscript $nn$ indicates
the nearest-neighbor interaction). The dimensionless coupling parameter $\alpha$ for interactions
between neighbors is positive in our model. This implies that particles of opposite species
attract and those of the same species repel each other.
The fact that each lattice site can be occupied by multiple particles at one time results in two
types of statistics. If particles are distinguishable as in classical fluids, then the grand
canonical partition function is
\begin{equation}
\Xi_{a} =
\sum_{n_1^+=0}^{\infty} \sum_{n_1^-=0}^{\infty}\dots \sum_{n_N^+=0}^{\infty} \sum_{n_N^-=0}^{\infty}
e^{-\beta H_{\rm int}} \prod_{i=1}^N \frac{e^{\beta \mu' (n_i^++n_i^-)}}{n^+_i!n^-_i!},
\label{eq:X_dist}
\end{equation}
where
\begin{equation}
H_{\rm int} = \frac{K}{2} \sum_{i} \big(n_{i}^{+}-n_{i}^-\big)^2
+ \alpha K \sum_{nn} (n_{i}^+-n_{i}^-) (n_{j}^+ - n_{j}^-)
+ \alpha K \sum_{nn} (n_{i}^+-n_{i}^-) (n_{j}^+ - n_{j}^-)
\end{equation}
is the interaction Hamiltonian,
\begin{equation}
\mu' = \mu + \frac{K}{2}
\end{equation}
is the effective chemical potential, and $N=L^2$ is the number of lattice sites, where $L$ is the
size of the system. The factor $1/(n^+_i!\,n^-_i!)$, also referred to as the Gibbs correction, is a feature
of distinguishable particles, and indicates that the statistics at a single site follows a Poisson
rather than an exponential distribution. A more detailed analysis of distinguishability versus
indistinguishability is provided in Ref. \cite{Frydel18b}.
On the other hand, particles may be regarded as indistinguishable. In classical
systems this situation arises, for example, in growth models, where particles do not change their
location on the lattice substrate but rather are added to or removed from it at each Monte Carlo
step, so that the particles of a given site carry no labels. In this case the grand partition function is
\begin{equation}
\Xi_{b} =
\sum_{n_1^+=0}^{\infty} \sum_{n_1^-=0}^{\infty}\dots \sum_{n_N^+=0}^{\infty} \sum_{n_N^-=0}^{\infty}
e^{-\beta H_{\rm int}} \prod_{i=1}^N e^{\beta \mu' (n_i^++n_i^-)}.
\label{eq:X_indist}
\end{equation}
Based on the above discussion, even if two systems obey the same Hamiltonian, they can be subject to
different rules of statistical mechanics which, in turn, can lead to different behaviors.
This difference can be particularly relevant in characterizing thermodynamic collapse. As this issue
does not arise in a standard lattice-gas model with occupation limited to one, it is important to
emphasize it as well as to take it into account in the overall analysis.
\section{Transformation into a spin-ensemble}
\label{sec:trans}
The system described above can be simplified by transforming it into a spin ensemble
with spins corresponding to $s_i=n_i^+-n_i^-$.
Because a single configuration in the spin-ensemble corresponds to infinitely many
configurations in
the particle-ensemble, these degeneracies need to be correctly accounted for. The resulting
transformed partition functions are \cite{Frydel18b}
\begin{equation}
\Xi_{a} =
\!\!\!\! \sum_{s_1=-\infty}^{\infty} \!\!\! \dots \!\!\! \sum_{s_N=-\infty}^{\infty}
\!\!\! e^{- \beta K \alpha \sum_{nn}s_is_{j}} \prod_{i=1}^N
\bigg[ e^{-\frac{\beta K}{2} s_i^2} {\rm I}_{s_i} \big(2e^{\beta \mu'}\big)\bigg],
\label{eq:Xs_dist}
\end{equation}
and
\begin{equation}
\Xi_{b} =
\!\!\!\! \sum_{s_1=-\infty}^{\infty} \!\!\! \dots \!\!\! \sum_{s_N=-\infty}^{\infty}
\!\!\! e^{- \beta K \alpha \sum_{nn}s_is_{j}} \prod_{i=1}^N
\bigg[ e^{-\frac{\beta K}{2} s_i^2} \frac{e^{\beta \mu'|s_i|}}{1-e^{2\beta\mu'}}\bigg],
\label{eq:Xs_indist}
\end{equation}
for distinguishable and indistinguishable particles, respectively. The terms inside the square
brackets can be regarded as an effective external field. Furthermore, as these terms are even
functions of $s_i$, the spin symmetry is never broken, so that $\langle s_i\rangle=0$
under all conditions. The
function ${\rm I}_s(x)$ in Eq. (\ref{eq:Xs_dist}) is the modified Bessel function of the first
kind.
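The degeneracy counting behind this transformation is easy to verify numerically on a single
site, where the sums decouple. The sketch below (plain Python; the parameter values are
illustrative only) evaluates the indistinguishable-particle partition sum once directly over
$(n^+,n^-)$ and once in the spin representation of Eq. (\ref{eq:Xs_indist}):

```python
import math

def xi_particle(bK, bmu, nmax=60):
    """Single-site grand partition sum taken directly over (n+, n-), indistinguishable case."""
    return sum(math.exp(-0.5 * bK * (np - nm) ** 2 + bmu * (np + nm))
               for np in range(nmax) for nm in range(nmax))

def xi_spin(bK, bmu, smax=60):
    """The same sum after the transformation to spins s = n+ - n-, Eq. (eq:Xs_indist)."""
    pref = 1.0 / (1.0 - math.exp(2.0 * bmu))
    return pref * sum(math.exp(-0.5 * bK * s * s + bmu * abs(s))
                      for s in range(-smax, smax + 1))
```

The two truncated sums agree to within the truncation error, reflecting the identity
$\sum_{n^+-n^-=s}e^{\beta\mu'(n^++n^-)}=e^{\beta\mu'|s|}/(1-e^{2\beta\mu'})$ that underlies
the degeneracy factor.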
Any quantity defined in the original ensemble can be calculated as another quantity
in the spin-ensemble. For example, the average number of particles at a single site $i$,
defined in the original ensemble as
\begin{equation}
\rho_i = \langle n_i^+\rangle +\langle n_i^-\rangle = \frac{1}{N}\frac{\partial\ln \Xi}{\partial\beta\mu},
\label{eq:rho}
\end{equation}
in the spin-ensemble becomes
\begin{equation}
\rho_i = e^{\beta \mu'}
\Bigg\langle \frac{{\rm I}_{s_i+1} \big(2e^{\beta\mu'}\big)+{\rm I}_{s_i-1}
\big(2e^{\beta \mu'}\big)}{{\rm I}_{s_i} \big(2e^{\beta \mu'}\big)} \Bigg\rangle_s,
\label{eq:rho2a}
\end{equation}
for distinguishable particles, where the subscript $s$ indicates the average calculated
in the spin ensemble, and
\begin{equation}
\rho_i = \langle |s_i| \rangle_s + \frac{2e^{2\beta \mu'}}{1-e^{2\beta \mu'}},
\label{eq:rho2b}
\end{equation}
for indistinguishable particles. For distinguishable particles,
the limit $\rho_i\to\infty$ is attained if $\mu'\to\infty$, and for indistinguishable particles if $\mu'\to 0^-$.
In the rest of the paper, we use $\rho\equiv \rho_i$ to indicate the average number of particles on any
lattice site and refer to $\rho$ as density.
The limit $\rho\to\infty$ is a consequence of the fact that no bound is placed on the occupation
number. This is quite different from the standard lattice-gas model where the maximum
density is $\rho=1$.
The spin-ensembles in Eq. (\ref{eq:Xs_dist}) and Eq. (\ref{eq:Xs_indist}) can more generally be
written as
\begin{equation}
\Xi = B^N\sum_{s_1=-\infty}^{\infty} \!\! \dots \!\! \sum_{s_N=-\infty}^{\infty}e^{-\beta H},
\end{equation}
with the pre-factors
\begin{equation}
B(\mu') =
\begin{cases}
{\rm I}_{0} (2e^{\beta\mu'}),~~\text{distinguishable} \\
\frac{1}{1-e^{2\beta\mu'}},~~~~\text{indistinguishable},
\end{cases}
\label{eq:B}
\end{equation}
and the Hamiltonian is given by
\begin{equation}
H = \alpha K \sum_{nn} s_i s_j + \frac{K}{2}\sum_{i} s_i^2 + \sum_{i} h(s_i),
\label{eq:H_s}
\end{equation}
where the one-body potentials $h(s_i)$ are
\begin{equation}
\beta h(s_i) =
\begin{cases}
-\ln\Big[\frac{{\rm I}_{s_i} (2e^{\beta\mu'})}{{\rm I}_{0} (2e^{\beta\mu'})}\Big],~~\text{distinguishable} \\
-\beta \mu' |s_i|,~~~~~~~~~\text{indistinguishable}.
\end{cases}
\label{eq:hs}
\end{equation}
Note that in the limit $\rho\to \infty$, $h(s_i)\to 0$ and both Hamiltonians
reduce to a simple harmonic function. The difference between distinguishable and
indistinguishable particles, therefore, becomes relevant only at finite densities.
For illustration and to see how these differences might be manifested,
in Fig. (\ref{fig:h}) we plot $h(s)$ for distinguishable and indistinguishable particles for
the parameters $\beta K=5$ and $\alpha=1/4$. Based on the figure,
one may expect larger fluctuations for indistinguishable particles due to the
shape of the function $h(s_i)$.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.18\textwidth,width=0.22\textwidth]{h1.pdf}&
\includegraphics[height=0.18\textwidth,width=0.22\textwidth]{h10.pdf}\\
\end{tabular}
\end{center}
\caption{The functions $h(s)$ for distinguishable and indistinguishable particles for
$\beta K=5$ and $\alpha=1/4$ and for two different densities, see
Eq. (\ref{eq:hs}). For $\rho=1$ in (a), $\beta\mu'=-0.27$ and $\beta \mu'=-0.57$, and
for $\rho=10$ in (b), $\beta \mu'=1.66$ and $\beta \mu'=-0.09$ for
distinguishable and indistinguishable particles, respectively. }
\label{fig:h}
\end{figure}
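The curves in Fig. (\ref{fig:h}) are straightforward to reproduce. A minimal script (plain
Python, with ${\rm I}_s$ summed from its power series; the value of $\beta\mu'$ used in the
check is the one quoted in the caption) evaluates both potentials of Eq. (\ref{eq:hs}) and
confirms that they vanish at $s=0$, are even in $s$, and that the indistinguishable potential
grows only linearly:

```python
import math

def bessel_I(n, x, terms=60):
    """Modified Bessel function of the first kind, integer order (I_{-n} = I_n)."""
    n = abs(n)
    return sum((x / 2.0) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def h_dist(s, bmu):
    """beta h(s) for distinguishable particles, Eq. (eq:hs)."""
    x = 2.0 * math.exp(bmu)
    return -math.log(bessel_I(s, x) / bessel_I(0, x))

def h_indist(s, bmu):
    """beta h(s) for indistinguishable particles, Eq. (eq:hs)."""
    return -bmu * abs(s)
```

The faster-than-linear growth of $h(s)$ in the distinguishable case is what suppresses
large-spin fluctuations relative to the indistinguishable case.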
\subsection{connection with other spin models}
It might be of interest to place our spin model in the context of other related models.
The first difference to be noted is that,
unlike the standard Ising model, our model permits the spin value $s_i=0$, which can be
regarded as an empty site. The class of Ising models that permit empty sites is referred to
as site-diluted Ising models, with the Hamiltonian
$H = -J \sum_{nn} p_ip_j s_i s_j$, where $s_i=\pm 1$ and $p_i=0,1$ are random (correlation-free)
occupation numbers such that $\langle p_i\rangle=\rho$ \cite{Parisi97,Rosinberg99}.
These models assume the presence of defects in the lattice structure of a magnetic material
and represent quenched dilution. Models describing
annealed dilution are possible and have been studied in the past \cite{Romano07}.
Our model can be regarded as a version of an annealed site-diluted model, which would be interesting to
study in its own right by limiting the spins to $s_i=-1,0,1$, with the frequency of empty sites
determined by the function $h(s_i)$.
Our model bears the closest analogy to the discrete Gaussian model (DG) \cite{Chui76,Sly16},
dubbed so by Chui and Weeks in 1976. The DG model belongs to a family of random surface
models, and its Hamiltonian is given by $H = \frac{1}{2}J \sum_{nn} (s_i-s_j)^2 + 4hJ \sum_{i} s_i^2$.
In the limit $\rho\to\infty$, where $h(s)=0$, our model corresponds to the DG model. For
the parameter $h=0$, the DG model can be mapped onto a lattice Coulomb system,
and like the lattice Coulomb model, it exhibits the Kosterlitz-Thouless transition. This corresponds
to our parameter $\alpha=1/4$. This connection is discussed further in Appendix \ref{sec:A3}.
\subsection{simulation details}
In addition to analytical results, we study the transformed spin ensemble using Monte Carlo
simulation.
The simulated system consists of spins on a square-lattice substrate.
The simulation box is a square of size $L=128$ with periodic boundary conditions.
A Monte Carlo move consists of a random selection
of a lattice site followed by the trial change of the spin by either $1$ or $-1$ with
equal probability. The move is accepted if it lowers the energy, otherwise it is accepted
with the probability $e^{-\beta (H_{new}-H_{old})}$. Before calculating average quantities,
the system is equilibrated for half a million steps. The average quantities are subsequently
computed during another $2$ million steps.
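For concreteness, the update rule just described can be sketched in a few lines of Python. This
is a minimal illustration rather than the production code: it implements the
indistinguishable-particle case, $h(s)=-\mu'|s|$, of the Hamiltonian in Eq. (\ref{eq:H_s}), and
the couplings appearing in the functions below are placeholder arguments.

```python
import math
import random

def total_energy(spins, L, bK, alpha, bmu):
    """beta H of Eq. (eq:H_s) with h(s) = -beta mu' |s| (indistinguishable case)."""
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            # count each nearest-neighbor bond once (right and down neighbors, periodic BC)
            E += alpha * bK * s * (spins[i][(j + 1) % L] + spins[(i + 1) % L][j])
            E += 0.5 * bK * s * s - bmu * abs(s)
    return E

def local_dE(spins, L, i, j, s_new, bK, alpha, bmu):
    """beta (H_new - H_old) for the trial change spins[i][j] -> s_new."""
    s = spins[i][j]
    nn = (spins[i][(j + 1) % L] + spins[i][(j - 1) % L]
          + spins[(i + 1) % L][j] + spins[(i - 1) % L][j])
    return (alpha * bK * (s_new - s) * nn
            + 0.5 * bK * (s_new * s_new - s * s)
            - bmu * (abs(s_new) - abs(s)))

def sweep(spins, L, bK, alpha, bmu, rng):
    """One Monte Carlo sweep: L*L Metropolis trial moves s -> s +/- 1."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        s_new = spins[i][j] + rng.choice((-1, 1))
        dE = local_dE(spins, L, i, j, s_new, bK, alpha, bmu)
        if dE <= 0.0 or rng.random() < math.exp(-dE):
            spins[i][j] = s_new
```

A convenient correctness check is that the local energy difference driving the acceptance step
equals the difference of total energies recomputed from scratch.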
\section{The limit $\rho\to \infty$}
\label{sec:infty}
In the limit $\rho\to\infty$, $h(s_i)$ as defined in Eq. (\ref{eq:hs}) vanishes
and the Hamiltonian in Eq. (\ref{eq:H_s}) for both distinguishable and indistinguishable
particles attains a simple quadratic form
\begin{equation}
H_{\infty} = \alpha K \sum_{nn} s_i s_j + \frac{K}{2}\sum_{i} s_i^2,
\label{eq:H_infty}
\end{equation}
whose Boltzmann factor is a Gaussian function and, as the spins are restricted to integers, the
resulting system is a discrete Gaussian model (DG). In the past, the DG model has been used
to model an interface
\cite{Chui76,Weeks80,Binder95}.
Although the interpretation and the parametrization of the DG model for interfaces are different
from ours (in the interface model spins represent the height of an interface and, as the heights
of neighboring sites tend to be the same, $\alpha<0$), the same general analysis applies to both.
The analogy between the interface model and the present binary lattice-gas system of penetrable
particles is also interesting.
Even though the partition function of the DG model has a Gaussian form, it cannot be solved exactly.
However, if we neglect the spin discreteness, it may be possible to approximate the DG model
with the continuous Gaussian model (CG), which can be solved
exactly \cite{Moshe14,Mattis06}.
A systematic way to carry this out is to write the partition function of the DG model
in a form in which the partition function of the CG model appears as a separate factor. Any additional
factor then represents the contributions due to spin discreteness.
To see if this can be done, we first reformulate the Hamiltonian in Eq. (\ref{eq:H_infty}) using matrix
notation,
\begin{equation}
H_{\infty} = \frac{K}{2}\sum_{i,j} A_{ij}s_is_j = \frac{K}{2} {s}^T{A}{s},
\label{eq:H_infty_B}
\end{equation}
where ${s}=(s_1,\dots,s_N)$ is an $N$-dimensional vector of spins and ${A}$ is an $N\times N$ matrix
with elements
\begin{equation}
A_{ij} = \delta_{ij} + \alpha \epsilon_{ij},
\end{equation}
where $\delta_{ij}$ is the Kronecker delta, and $\epsilon_{ij}=1$ if the two spins
are nearest neighbors and zero otherwise. ${A}$ for an arbitrary dimension $d$ is
given in Appendix (\ref{sec:A1}). The corresponding partition function is
\begin{equation}
\Xi_{\infty} = \sum_{s_1=-\infty}^{\infty} \!\! \dots \!\! \sum_{s_N=-\infty}^{\infty}
e^{-\frac{\beta K}{2} {s}^T{A}{s}}.
\label{eq:Xi_infty}
\end{equation}
Note that we ignore the pre-factor $B$ defined in Eq. (\ref{eq:B}), which diverges in the limit
$\rho\to\infty$; regardless of its value, however, it does not affect the configurations.
If we rewrite the partition function in Eq. (\ref{eq:Xi_infty}) as
\begin{equation}
\Xi_{\infty} =
\prod_{i=1}^N \int_{-\infty}^{\infty} ds_i \sum_{n_i=-\infty}^{\infty} \delta(s_i-n_i)\,\,
e^{-\frac{\beta K}{2} {s}^T{A}{s}},
\end{equation}
and express the Dirac comb function as a Fourier series,
\begin{equation}
\sum_{n=-\infty}^{\infty} \delta(s-n) = \sum_{k=-\infty}^{\infty} e^{i2\pi k s},
\end{equation}
we arrive at
\begin{eqnarray}
\Xi_{\infty} &=& \sum_{k_1=-\infty}^{\infty}\dots\sum_{k_N=-\infty}^{\infty} \nonumber\\
&\times& \bigg[
\int_{-\infty}^{\infty} d {s_1}\dots\int_{-\infty}^{\infty} d {s_N}\,e^{i2\pi {\bf k}\cdot{\bf s}}
e^{-\frac{\beta K}{2} {s}^T{A}{s}}\bigg],
\end{eqnarray}
where the integral term in square brackets is a Gaussian integral with a linear term that
can be evaluated exactly using the identity
\begin{equation}
\int d{\bf x}\, e^{i {\bf k}\cdot{\bf s}}
e^{-\frac{1}{2} {s}^T{A}{s}} =
e^{-\frac{1}{2} {k}^T {A}^{-1} { k}} \sqrt{\frac{(2\pi)^N}{\det { A}}},
\end{equation}
where $A^{-1}$ is the inverse of the matrix $A$.
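As a quick numerical sanity check (not part of the analysis itself), the identity above can be verified by brute-force quadrature for a small case; a minimal sketch, assuming NumPy, with an arbitrary $2\times 2$ test matrix:

```python
import numpy as np

# Arbitrary 2x2 symmetric positive-definite test matrix and wave vector
A = np.array([[1.0, 0.3],
              [0.3, 1.0]])
k = np.array([1.0, -0.5])

# Closed form: exp(-k^T A^{-1} k / 2) * sqrt((2 pi)^N / det A)
closed = np.exp(-0.5 * k @ np.linalg.inv(A) @ k) \
         * np.sqrt((2*np.pi)**2 / np.linalg.det(A))

# Brute-force quadrature; the integrand decays fast, so [-8, 8]^2 suffices.
# Only the cosine part of exp(i k.s) contributes (the sine part vanishes by symmetry).
x = np.linspace(-8.0, 8.0, 801)
w = np.full_like(x, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5                      # trapezoid weights
X, Y = np.meshgrid(x, x, indexing='ij')
quad_form = A[0, 0]*X**2 + 2*A[0, 1]*X*Y + A[1, 1]*Y**2
integrand = np.cos(k[0]*X + k[1]*Y) * np.exp(-0.5 * quad_form)
numeric = w @ integrand @ w
print(closed, numeric)            # the two values agree
```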
The resulting partition function is comprised of two subsystems,
\begin{equation}
\Xi_{\infty} = \Xi_G\, \Xi_L,
\label{eq:Xi_gen}
\end{equation}
where $\Xi_G$ is the partition function of the CG model,
\begin{equation}
\Xi_G = \bigg(\frac{2\pi}{\beta K}\bigg)^{N/2} \sqrt{\frac{1}{\det {A}}},
\label{eq:Xi_G}
\end{equation}
and $\Xi_L$ represents all the contributions due to spin discreteness and is given by
\begin{equation}
\Xi_L = \sum_{s_1=-\infty}^{\infty}\dots\sum_{s_N=-\infty}^{\infty}
e^{-\frac{1}{2} \frac{1}{\beta K/(4\pi^2)} ~ {s}^T {A}^{-1} {s}}.
\label{eq:Xi_L}
\end{equation}
The dimensionless temperature of $\Xi_L$ is $k_BT' = \beta K/(4\pi^2)$.
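For a single decoupled site ($N=1$, $A=A^{-1}=1$), the factorization $\Xi_{\infty}=\Xi_G\,\Xi_L$ reduces to the Jacobi theta-function identity obtained from Poisson summation, which is easy to verify numerically; a minimal sketch ($\beta K$ below is an arbitrary test value):

```python
import numpy as np

bK = 2.0                     # beta*K, arbitrary test value
n = np.arange(-50, 51)       # truncated sums; terms decay like exp(-c n^2)

# Xi_inf for one decoupled site: sum_s exp(-beta K s^2 / 2)
lhs = np.sum(np.exp(-0.5 * bK * n**2))

# Xi_G * Xi_L with A = A^{-1} = 1
rhs = np.sqrt(2*np.pi/bK) * np.sum(np.exp(-2*np.pi**2 * n**2 / bK))
print(lhs, rhs)              # equal, by Poisson summation
```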
\subsection{Continuous Gaussian model}
From Eq. (\ref{eq:Xi_gen}) it is seen that approximating the DG model as
$$
\Xi_{\infty} \approx \Xi_G
$$
amounts to discarding the contributions due to spin discreteness contained in the term $\Xi_L$.
In this section we verify how accurate this approximation is. To do this, we
need to evaluate $\Xi_G$.
The determinant in Eq. (\ref{eq:Xi_G}) is evaluated using the identity
\begin{equation}
\det{A} = \prod_{k=1}^N\lambda_k,
\end{equation}
where $\lambda_k$ are the eigenvalues of $A$. $A$ is a block circulant matrix with
circulant blocks \cite{Davis79,Chen87,Kaveh11}, whose eigenvalues are given by
Fourier modes. For the matrix $A$ in $d=2$ the eigenvalues are
\begin{equation}
\lambda(q_1,q_2) = 1 + 2\alpha \cos q_1 + 2\alpha \cos q_2,
\label{eq:lambda}
\end{equation}
where
\begin{equation}
q_i = \frac{2\pi n_i}{L}, ~~~ n_i=0,1,\dots,L-1
\end{equation}
so that in total there are $N=L^2$ eigenvalues. The determinant of $A$ now becomes
\begin{equation}
\det{A} = e^{\sum_{n_1=0}^{L-1}\sum_{n_2=0}^{L-1}\ln\big[1 + 2\alpha \cos(\frac{2\pi}{L}n_1) + 2\alpha \cos(\frac{2\pi}{L}n_2)\big]},
\end{equation}
which in the thermodynamic limit $L\to\infty$ becomes
\begin{equation}
\det{A} = e^{(\frac{L}{2\pi})^2 \int_0^{2\pi} dq_1 \int_0^{2\pi} dq_2\, \ln[1 + 2\alpha \cos q_1 + 2\alpha \cos q_2]}.
\end{equation}
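The Fourier-mode eigenvalues in Eq. (\ref{eq:lambda}), and hence the determinant, can be checked against a direct numerical evaluation for a small periodic lattice; a sketch, assuming NumPy (the lattice size and $\alpha$ are arbitrary test values):

```python
import numpy as np

Lside, alpha = 4, 0.1              # arbitrary small test lattice
N = Lside * Lside

# Build A = I + alpha * eps for a periodic square lattice
A = np.eye(N)
for x in range(Lside):
    for y in range(Lside):
        i = x * Lside + y
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = ((x + dx) % Lside) * Lside + (y + dy) % Lside
            A[i, j] += alpha

# Fourier-mode eigenvalues 1 + 2 alpha cos(q1) + 2 alpha cos(q2)
q = 2 * np.pi * np.arange(Lside) / Lside
lam = 1 + 2*alpha*np.cos(q)[:, None] + 2*alpha*np.cos(q)[None, :]

print(np.linalg.det(A), np.prod(lam))   # the two agree
```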
To complete the expression, it remains to evaluate the integral
\begin{equation}
I = \bigg(\frac{1}{2\pi}\bigg)^2 \!\! \int_0^{2\pi} \!\! dq_1 \int_0^{2\pi} \!\! dq_2\, \ln\big[1 + 2\alpha \cos q_1 + 2\alpha \cos q_2\big].
\label{eq:I}
\end{equation}
When evaluated, it corresponds to a hypergeometric function which can also be expressed as
a power series in $\alpha$,
\begin{equation}
I = -\sum_{k=1}^{\infty} \frac{\alpha^{2k}}{2k} \frac{(2k)!^2}{k!^4}.
\end{equation}
The interval of convergence of the above series is $|\alpha|\le 1/4$. At $\alpha=1/4$, $I$
remains finite with a value $I\approx -0.220$. For any value of $\alpha$ outside the interval of convergence,
the series diverges, which in the present model implies thermodynamic instability. We designate
this boundary value, $\alpha=1/4$, as $\alpha_c$.
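The quoted value of $I$ at $\alpha_c$ is easy to reproduce by summing the series in log space (to avoid factorial overflow); a sketch, assuming SciPy:

```python
import numpy as np
from scipy.special import gammaln

def I_series(alpha, kmax=200000):
    # I = -sum_{k>=1} alpha^(2k)/(2k) * (2k)!^2/k!^4, summed in log space
    k = np.arange(1, kmax + 1, dtype=float)
    log_term = (2*gammaln(2*k + 1) - 4*gammaln(k + 1)
                + 2*k*np.log(alpha) - np.log(2*k))
    return -np.sum(np.exp(log_term))

print(I_series(0.25))   # close to -0.220 at alpha_c = 1/4
```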
Given the above results, the partition function in Eq. (\ref{eq:Xi_G}) becomes
\begin{equation}
\Xi_G = \bigg(\frac{2\pi}{\beta K}\bigg)^{N/2} \exp\bigg[\frac{N}{4}\sum_{k=1}^{\infty} \frac{\alpha^{2k}}{k} \frac{(2k)!^2}{k!^4}\bigg].
\label{eq:Xi_G2}
\end{equation}
It is interesting to consider at this point the partition function of the Ising model
that can be expressed as (see appendix \ref{sec:A2})
\begin{equation}
\Xi_{IS} = [2\cosh(2\beta J)]^N \exp\bigg[-\frac{N}{4}\sum_{k=1}^{\infty} \frac{\alpha^{2k}}{k} \frac{(2k)!^2}{k!^4}\bigg]
\end{equation}
where $\alpha$ is a function of $\beta J$ according to
\begin{equation}
\alpha = \frac{1}{2}\frac{\sinh(2\beta J)}{\cosh^2(2\beta J)},
\label{eq:alpha_is}
\end{equation}
and $J$ is the interaction strength between nearest neighbor sites.
In both the DG and the Ising model the value $\alpha=1/4$ has physical significance. In the Ising
model it indicates a critical point of a continuous phase transition and in the Gaussian model it is
the last point before thermodynamic instability. The Ising model, however, is prevented from
leaving the convergence region as a result of the parametrization in Eq. (\ref{eq:alpha_is}),
and thermodynamic instability never precipitates.
Going back to the partition function $\Xi_G$, we point out that even if $\Xi_G$ is finite at $\alpha_c$
other quantities may diverge. The internal energy defined as
\begin{equation}
\beta u = -\frac{\alpha}{N} \frac{\partial \ln \Xi_G}{\partial \alpha} = 2\alpha \beta K \langle s_is_j\rangle,
\label{eq:u0}
\end{equation}
where $\langle s_is_j\rangle$ are spin correlations between two nearest neighbors,
can be calculated exactly using Eq. (\ref{eq:Xi_G2}), leading to
\begin{equation}
\beta u(\alpha) = \frac{1}{2} - \frac{1}{\pi} {\rm K}\big(16\alpha^2\big)
\label{eq:u}
\end{equation}
where $ {\rm K}(x)$ is the complete elliptic integral of the first kind, which has a
logarithmic singularity at $\alpha_c$,
\begin{equation}
\beta u(\alpha) \approx \frac{1}{2} + \frac{1}{2\pi}\ln\bigg(\frac{1-4|\alpha|}{8}\bigg).
\end{equation}
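The series form of Eq. (\ref{eq:Xi_G2}) and the elliptic-integral form of Eq. (\ref{eq:u}) can be cross-checked numerically; SciPy's ellipk uses the parameter convention $m=16\alpha^2$, matching the convention here. A sketch:

```python
import numpy as np
from scipy.special import ellipk, gammaln

def u_series(alpha, kmax=2000):
    # beta*u from the series for ln Xi_G: -(1/2) sum_k [(2k)!^2/k!^4] alpha^(2k)
    k = np.arange(1, kmax + 1, dtype=float)
    log_c = 2*gammaln(2*k + 1) - 4*gammaln(k + 1)
    return -0.5 * np.sum(np.exp(log_c + 2*k*np.log(alpha)))

alpha = 0.2
u_closed = 0.5 - ellipk(16*alpha**2)/np.pi   # Eq. (u); ellipk takes the parameter m
print(u_series(alpha), u_closed)             # the two agree
```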
In Fig. (\ref{fig:u12}) we plot $\beta u$. The data points are from the Monte Carlo
simulation of the system
$\Xi_{\infty}$ and the dashed line
corresponds to the expression in Eq. (\ref{eq:u}).
For $\beta K=1$, the data points closely follow the continuous Gaussian model.
For larger $\beta K$, the two results deviate; despite this, the point of
thermodynamic instability is the same for both models.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.18\textwidth,width=0.22\textwidth]{u_k1.pdf}&
\includegraphics[height=0.18\textwidth,width=0.22\textwidth]{u_k5.pdf}\\
\end{tabular}
\end{center}
\caption{The internal energy $u$ as a function of $\alpha$ for
(a) $\beta K=1$ and (b) $\beta K=5$. The data points of a discrete Gaussian model are obtained
from Monte Carlo simulation with $L=128$, and the dashed lines correspond to Eq. (\ref{eq:u}). }
\label{fig:u12}
\end{figure}
In Fig. (\ref{fig:conf_k1}) we show configuration snapshots close to the thermodynamic
collapse (at $\alpha=0.2499$) for different values of $\beta K$. The spin $s_i=0$ is regarded
as an empty site, and the colored squares are for $s_i\ne 0$.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\hspace{-0.0cm} \includegraphics[height=0.20\textwidth,width=0.27\textwidth]{conf_k1_inftyA.pdf}&
\hspace{-0.5cm} \includegraphics[height=0.20\textwidth,width=0.27\textwidth]{conf_k5_inftyA.pdf}\\
\end{tabular}
\end{center}
\caption{Configuration snapshot for $\alpha=0.2499$, for (a) $\beta K=1$ and (b) $\beta K=5$.
The zero spins are regarded as empty sites and are represented by unfilled squares. Red is
for spins $s_i=\pm 1$, purple for $s_i=\pm 2$, and black for $s_i=\pm 3,\pm4,\dots$. }
\label{fig:conf_k1}
\end{figure}
The same configurations are shown in Fig. (\ref{fig:conf_k5}) but in a way that
emphasizes their antiferromagnetic order: red squares are for positive and
black squares for negative spins.
In both cases, configurations appear as islands of antiferromagnetic material
immersed in a disordered low-density phase. For $\beta K=1$, the islands are much
larger and appear interconnected, while for $\beta K=5$ the islands
are separated, reminiscent of liquid-gas coexistence.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.20\textwidth,width=0.27\textwidth]{conf_k1_inftyC.pdf}&
\hspace{-0.5cm}\includegraphics[height=0.20\textwidth,width=0.27\textwidth]{conf_k5_inftyC.pdf}\\
\end{tabular}
\end{center}
\caption{Configuration snapshot as in Fig. (\ref{fig:conf_k1}) plotted to
emphasize ``antiferromagnetic'' order of the configurations.
The empty sites appear as unfilled squares. The remaining spins appear as red squares
if $s_i>0$ and as black squares if $s_i<0$. }
\label{fig:conf_k5}
\end{figure}
Another revealing quantity is the distribution of spins at a single site $p(s)$. For the continuous
Gaussian model such a distribution is expected to be Gaussian (see Appendix (\ref{sec:A1})
for details),
\begin{equation}
p(s) = \frac{e^{-s^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}.
\label{eq:pG}
\end{equation}
The variance can be obtained from the fact that the total energy per site
of a harmonic system is $\beta u_{tot}=1/2$. The two contributions to the total energy are
$\beta u_{tot}=\beta u + \beta u_{ext}$, where $\beta u_{ext} = \int_{-\infty}^{\infty} ds\,p(s)\, \beta Ks^2/2$
and $\beta u$ is given in Eq. (\ref{eq:u}). This leads to the following result
\begin{equation}
\sigma^2 = \frac{2{\rm K}\big(16\alpha^2\big)}{\pi \beta K},
\label{eq:sigma2}
\end{equation}
and in the limit $\alpha \to \alpha_c$ we have
\begin{equation}
\sigma^2 = \langle s^2\rangle \approx - \frac{1}{\pi \beta K} \ln\bigg(\frac{1-4|\alpha|}{8}\bigg).
\label{eq:sigma2A}
\end{equation}
In Fig. (\ref{fig:ps}) we plot the distributions $p(s)$ for $\alpha=0.2499$, for different values
of $\beta K$, and compare the results with the distribution in Eq. (\ref{eq:pG}).
For $\beta K=1$, the discrete data points coincide with the continuous
results.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.17\textwidth,width=0.22\textwidth]{ps_k1.pdf}&
\includegraphics[height=0.17\textwidth,width=0.22\textwidth]{ps_k5.pdf}\\
\end{tabular}
\end{center}
\caption{Distributions $p(s)$ for $\alpha=0.2499$, and $\beta K=1$ in (a) and $\beta K=5$
in (b).
The discrete points are from a simulation and the continuous
lines correspond to Eq. (\ref{eq:pG}). }
\label{fig:ps}
\end{figure}
\subsection{Discrete subsystem $\Xi_L$}
In the previous section we approximated the $\Xi_{\infty}$ system by neglecting its spin
discreteness, and the comparison with the simulation showed that such an approximation
is generally accurate for $\beta K< 5$; even where it is not accurate at every point, the CG model
correctly predicts the point of thermodynamic collapse, suggesting that discreteness has
no effect on the collapse. The explanation for this is that close to the instability
the variance of the distribution $p(s)$ diverges, and for large spin variations the spin
discreteness becomes irrelevant.
In this section we look more carefully into the neglected contributions of spin
discreteness by looking into the behavior of $\Xi_L$.
According to Ref. \cite{Chui76}, the DG model at $\alpha=\alpha_c$ is isomorphic
to the lattice Coulomb model, which exhibits the Kosterlitz-Thouless (KT) transition.
This means that at precisely the point where our system is about to collapse, the
system also undergoes the KT transition along the parameter $\beta K$ \cite{Gupta97}.
This by itself cannot affect the collapse transition; it can, however, modify the manner
of that collapse.
\subsubsection{$\Xi_L$ in one dimension}
To establish the procedure in a clear manner,
we consider first a simpler case of a system in $d=1$, for which the matrix $A$ is given in
Eq. (\ref{eq:A1}) and the matrix $A^{-1}$ is
\begin{equation}
A^{-1}_{ij} = \frac{1}{L} \sum_{k=0}^{L-1} \frac{\cos\big[2\pi k (i-j) / L\big]}{1+2\alpha\cos(2\pi k/L)}.
\label{eq:AI_1d}
\end{equation}
Because the value of $\alpha_c$ depends on dimensionality according to $\alpha_c = 1/(2d)$,
in $d=1$ thermodynamic collapse occurs for $\alpha_c=1/2$.
In the limit $L\to\infty$ the summation in Eq. (\ref{eq:AI_1d}) becomes an integral,
\begin{equation}
A^{-1}_{ij} = \frac{1}{2\pi} \int_0^{2\pi} dq\, \frac{\cos\big[q(i-j)\big]}{1+2\alpha\cos(q)},
\label{eq:AD1}
\end{equation}
which evaluates to
\begin{equation}
A^{-1}_{ij} = \frac{(-1)^{|i-j|}} {\sqrt{1-4\alpha^2}} \bigg(\frac{1-\sqrt{1-4\alpha^2}}{2\alpha}\bigg)^{|i-j|}.
\label{eq:AD2}
\end{equation}
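The closed form in Eq. (\ref{eq:AD2}) can be checked against direct numerical integration of Eq. (\ref{eq:AD1}); a sketch, assuming SciPy ($\alpha=0.3$ is an arbitrary test value):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.3   # arbitrary test value, |alpha| < 1/2

def Ainv_quad(n):
    # Eq. (AD1): (1/2pi) * int_0^{2pi} cos(q n) / (1 + 2 alpha cos q) dq
    val, _ = quad(lambda q: np.cos(q*n)/(1 + 2*alpha*np.cos(q)), 0, 2*np.pi)
    return val / (2*np.pi)

def Ainv_closed(n):
    # Eq. (AD2)
    r = np.sqrt(1 - 4*alpha**2)
    return (-1)**n / r * ((1 - r)/(2*alpha))**n

print([round(Ainv_quad(n), 6) for n in range(5)])
print([round(Ainv_closed(n), 6) for n in range(5)])
```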
At $\alpha_c$, $A_{ij}^{-1}$ diverges, but the divergence can be subtracted and the system
can be analyzed in terms of non-divergent interactions.
To do this, we introduce an alternating sign matrix,
\begin{equation}
C^{}_{ij} = (-1)^{|i-j|},
\end{equation}
then subtract from each element $A_{ij}^{-1}$ the divergent term $C_{ij}/\sqrt{1-4\alpha^2}$.
The remaining elements constitute an interaction matrix
$U_{ij} = A_{ij}^{-1} - C_{ij}/\sqrt{1-4\alpha^2}$,
which at $\alpha=\alpha_c$ reduces to
\begin{equation}
U^{}_{ij} = -(-1)^{|i-j|} |i-j|.
\end{equation}
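This limiting form can be verified by evaluating $A^{-1}_{ij}-C_{ij}/\sqrt{1-4\alpha^2}$ just below $\alpha_c=1/2$, where it should approach $-(-1)^{|i-j|}|i-j|$; a sketch:

```python
import numpy as np

alpha = 0.49999                  # just below alpha_c = 1/2
r = np.sqrt(1 - 4*alpha**2)

diffs = []
for n in range(5):
    Ainv = (-1)**n / r * ((1 - r)/(2*alpha))**n   # Eq. (AD2), n = |i-j|
    U = Ainv - (-1)**n / r                        # subtract the divergent part
    diffs.append(abs(U - (-(-1)**n * n)))         # compare with -(-1)^n |i-j|
print(diffs)                                      # all small
```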
The Hamiltonian of the system $\Xi_L$ can now be written as
\begin{equation}
\beta H_L = \frac{2\pi^2}{\beta K} \bigg[{s}^T U {s} + \frac{{s}^T C {s}}{\sqrt{1-4\alpha^2}}\bigg].
\end{equation}
Clearly, only configurations which suppress the divergence are allowed.
Such configurations satisfy ${s}^T C=0$, which is the same as
\begin{equation}
\sum_{odd} s_i = \sum_{even} s_i,
\label{eq:res}
\end{equation}
where the subscripts ``odd'' and ``even'' refer to odd and even numbered lattice sites.
Taking this restriction into account, the Hamiltonian can now be written as
\begin{equation}
\beta H'_L = \frac{2\pi^2}{\beta K} {s}^T U {s},
\end{equation}
where the prime implies the restriction in Eq. (\ref{eq:res}).
Although not immediately obvious, $\Xi_L$ is an even function of $\alpha$: flipping
the sign of $\alpha$ does not change the partition function.
(The sign change modifies Eq. (\ref{eq:AD2}), but as the summations in $\Xi_L$
are over $s_i\in(-\infty,\infty)$, this does not affect the value of $\Xi_L$.)
Calculations then can equally be done for $\alpha=-1/2$. In such a case,
the interaction potential becomes
\begin{equation}
U^{}_{ij} = -|i-j|,
\end{equation}
which is the Coulomb interaction in 1D. There are, however, two differences between the present
system and the more usual Coulomb model. First, the valence number of
particles on a lattice site is unlimited. Second, the periodic boundary conditions
involve only particles in the simulation box and do not include contributions due to images
outside the original simulation box.
\subsubsection{$\Xi_L$ in two dimensions}
Based on the results of the previous section for $d=1$,
we expect that in $d=2$ the interactions between
lattice sites are logarithmic at $\alpha_c$, since this is the functional form of
Coulomb interactions in this dimension.
It is more convenient to represent interactions between spins on a square lattice not
in terms of the matrix $A^{-1}$ but in terms of a pair potential between sites on the
$(x,y)$-grid; such a potential has the following form \cite{Chui76}
\begin{equation}
U_{tot} = \bigg(\frac{1}{2\pi}\bigg)^2 \!\! \int_{-\pi}^{\pi} \!\! dq_1\int_{-\pi}^{\pi} \!\! dq_2\, \frac{\cos(q_1n + q_2m) }{1 + 2\alpha\cos(q_1) + 2\alpha\cos(q_2)},
\end{equation}
where $n=|x_1-x_2|$ and $m=|y_1-y_2|$ indicate a separation between two
lattice sites on the discrete Cartesian grid, where $x_i,y_i=0,1,\dots$.
The expression is analogous to that in Eq. (\ref{eq:AD1}) for $d=1$ in the limit $L\to\infty$.
If we expand the integrand in powers of $\alpha$ and then evaluate each term, we find the following
series expansion
\begin{equation}
U_{tot}(m,n) = \sum_{k=0}^{\infty} \frac{ (-1)^{m+n} \alpha^{2k+m+n} (2k+m+n)!^2}{k!(k+m)!(k+n)!(k+m+n)!},
\label{eq:U_tot}
\end{equation}
which constitutes a hypergeometric function. $U_{tot}(m,n)$ diverges at $\alpha_c=1/4$, and the
divergent term is identified as
\begin{equation}
U_{tot}(0,0) = \frac{2{\rm K} \big(16\alpha^2\big)}{\pi},
\label{eq:A00}
\end{equation}
where ${\rm K}(x)$ is the complete elliptic integral of the first kind. Subtracting the divergence from
$U_{tot}$, the non-divergent pair potential is
\begin{equation}
U(m,n) = U_{tot}(m,n) - (-1)^{m+n}\frac{2{\rm K} \big(16\alpha^2\big)}{\pi},
\label{eq:U_2D}
\end{equation}
where an accurate approximation to $U(m,n)$ at $\alpha_c$ is \cite{Spitzer}
\begin{equation}
U(m,n) \approx -(-1)^{m+n} \frac{2}{\pi} \bigg(\ln\sqrt{n^2 + m^2} + \gamma + \frac{1}{2}\ln 8\bigg),
\label{eq:U_2D_approx}
\end{equation}
that is valid for $\sqrt{n^2 + m^2}\ge 1$. For $n=m=0$ we use $U=0$.
The approximate functional form in Eq. (\ref{eq:U_2D_approx}) is compared with the exact
form in Eq. (\ref{eq:U_2D}) in Fig. (\ref{fig:U}).
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.17\textwidth,width=0.22\textwidth]{U.pdf}&
\end{tabular}
\end{center}
\caption{Approximate pair potential in Eq. (\ref{eq:U_2D_approx}) compared to the
exact results in Eq. (\ref{eq:U_2D}) at discrete locations on a square-lattice. }
\label{fig:U}
\end{figure}
Because the constant terms in $U(m,n)$, together with the divergent term, are irrelevant,
the pair interaction can simply be written as
\begin{equation}
U = -(-1)^{m+n} \frac{2}{\pi} \ln\sqrt{n^2 + m^2}.
\end{equation}
The spin configurations are subject to the same
restriction as that in Eq. (\ref{eq:res}). In the square-lattice setting, this means that the
lattice is decomposed into two interpenetrating sub-lattices and the restriction amounts to
$\sum_{sub_1} s_i = \sum_{sub_2} s_i$.
The Hamiltonian at $\alpha_c$ can be written as
\begin{eqnarray}
\beta H'_L &=& -\frac{4\pi}{\beta K} \sum_{x_1,y_1=1}^{L} \sum_{x_2,y_2=1}^{L}(-1)^{m+n}
s_{x_1,y_1}s_{x_2,y_2} \nonumber\\
&\times& \ln\sqrt{n^2 + m^2},
\label{eq:HL_2D}
\end{eqnarray}
with $x_i$ and $y_i$ indicating discrete locations on a lattice grid.
In Fig. (\ref{fig:confL}) we show several configuration snapshots of $\Xi_L$ for decreasing
values of $\beta K$. One observes a gradual decrease of the spin
density with decreasing $\beta K$, and for $\beta K=6$ the configuration consists of
sparse isolated spins or spin pairs of the same sign. This means that for $\beta K<6$,
$\Xi_L\approx 1$, since the most likely value of a spin is $s_i=0$.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.20\textwidth,width=0.26\textwidth]{confL_k12.pdf}&
\includegraphics[height=0.20\textwidth,width=0.26\textwidth]{confL_k10.pdf}\\
\includegraphics[height=0.20\textwidth,width=0.26\textwidth]{confL_k08.pdf}&
\includegraphics[height=0.20\textwidth,width=0.26\textwidth]{confL_k06.pdf}\\
\end{tabular}
\end{center}
\caption{Configuration snapshots for $\Xi_L$ at $\alpha_c$ for different values of
$\beta K$, $\beta K=12,10,8,6$. Red squares are for $s=1$, black squares for $s=-1$,
and the white squares represent empty sites. Spins larger than $1$ are
negligible for those values of $\beta K$ that are plotted.
The system size is $L=64$.}
\label{fig:confL}
\end{figure}
The distribution of spins $p(s)$ is accurately represented
by the continuous Gaussian approximation, see Appendix (\ref{sec:A0}), given by
\begin{equation}
p(s) = e^{-2\pi^2 s^2/\beta K} \sqrt{\frac{2\pi}{\beta K}},
\label{eq:pL}
\end{equation}
where the spin variance is given by $\langle s^2\rangle = \frac{\beta K}{4\pi^2}$.
Fig. (\ref{fig:psL}) compares the above Gaussian distribution with the discrete distributions
obtained from simulation, showing a general good agreement between the two.
The Gaussian distribution in Eq. (\ref{eq:pL}), however, cannot be a reliable approximation
of the discrete system if
$p(0)>1$, since this implies that the probability that a spin is zero is greater than one.
The Gaussian approximation in Eq. (\ref{eq:pL}), therefore, breaks down for $\beta K < 2\pi$.
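The breakdown criterion is easy to check numerically: $p(0)=\sqrt{2\pi/\beta K}$ exceeds unity exactly when $\beta K<2\pi$, while for $\beta K$ well above $2\pi$ the distribution sums to one over the integers to good accuracy; a sketch:

```python
import numpy as np

def p(s, bK):
    # Eq. (pL): Gaussian approximation for the single-site spin distribution
    return np.exp(-2*np.pi**2 * s**2 / bK) * np.sqrt(2*np.pi/bK)

# p(0) exceeds 1 exactly when beta*K < 2*pi, signalling the breakdown
print(p(0, 5.0), p(0, 10.0))

# For beta*K well above 2*pi the distribution is nearly normalized on the integers
n = np.arange(-50, 51)
print(np.sum(p(n, 20.0)))
```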
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.17\textwidth,width=0.22\textwidth]{psL_k10.pdf}&
\includegraphics[height=0.17\textwidth,width=0.22\textwidth]{psL_k20.pdf}\\
\end{tabular}
\end{center}
\caption{Distributions $p(s)$ for a system $\Xi_L$ at $\alpha=\alpha_c$, for (a)
$\beta K=10$ and (b) $\beta K=20$.
The discrete points are from a simulation and the continuous
lines correspond to Eq. (\ref{eq:pL}). }
\label{fig:psL}
\end{figure}
In Fig. (\ref{fig:p0L}) we plot $p(0)$ as a function of $\beta K$ obtained from
simulation for a discrete system and compare it to $p(0)$ calculated using Eq. (\ref{eq:pL}).
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.17\textwidth,width=0.22\textwidth]{p0L.pdf}\\
\end{tabular}
\end{center}
\caption{The probability that a lattice site is empty, $p(0)$, as a function of $\beta K$.
The data points are from simulation and the continuous line corresponds to $p(0)$
in Eq. (\ref{eq:pL}). }
\label{fig:p0L}
\end{figure}
Given the reliable performance of the approximation in the range $\beta K>2 \pi$, it is
safe to conclude that there is no phase transition in this range. The distribution
$p(s)$ is unimodal and its variance diverges only in the limit $\beta K\to \infty$.
If there is any KT-type transition, it must occur in the range $5 < \beta K < 6$
and can be associated with the emergence of the lone pairs in Fig. (\ref{fig:confL}),
which could be interpreted as the emergence of defects.
Because the MC simulations of the spin ensemble become impossible for
$\beta K<5$, since the only sampled spin value is $s_i=0$, the KT transition
along the $\beta K$ parameter in the context of the two-component model could imply a
different mechanism of the collapse transition.
\section{Finite $\rho$ and the emergence of a metastable region}
\label{sec:rho}
In this section we consider a more realistic situation where the average occupation
number of a lattice site $\rho$ is finite. This also means that the quadratic Hamiltonian
$H_{\infty}$ in Eq. (\ref{eq:H_infty}) is modified by an additional non-quadratic term $h(s)$.
A technical difficulty is that the system is no longer Gaussian and additional methods are
needed to analyze it.
The simulations show that the thermodynamic collapse for finite
$\rho$ does not occur at $\alpha=1/4$, as it does for $\rho\to\infty$,
but is shifted to larger values of $\alpha$; the collapse, therefore, depends on density.
This may be somewhat surprising, since one expects the global minimum of a system
for $\alpha>1/4$ to be a collapsed state, and it points to the presence of a metastable
equilibrium.
In a two-component system, a collapsed configuration, as it emerges in a simulation, is comprised of
numerous clusters, each of which can, in principle, accommodate an infinite number of particles.
A sequence of such clusters for a one-dimensional lattice model has been analyzed before
\cite{Frydel18b}. Within a single cluster, a single site is occupied by particles of one type. (Similar
clusters have been observed in a two-component system of penetrable spheres \cite{Frydel17,Frydel18a}).
The energy of each cluster scales like $E \propto - n^2$, where $n$ is the number
of particles in a cluster.
If a collapsed configuration consists of a single cluster comprising
all the particles in a system, then the energy of the collapsed state scales like
$E \propto - n^2$, where $n$ is now the total
number of particles. The competing entropy of non-collapsed configurations, on
the other hand,
scales like $-TS \propto k_BT\, n\ln n$. This means that as soon as a configuration with
energy that scales like $E \propto - n^2$ appears (which for the present model occurs when
$\alpha>1/4$), the global minimum will always be a collapsed state. The fact that the system
does not collapse spontaneously when $\alpha>1/4$ suggests
that there is a local minimum that produces a metastable equilibrium.
For a better grasp of the collapse mechanism, we consider a simple situation:
a finite system that roughly corresponds to the size of a cluster that emerges in a collapsed state.
The system is in contact with a reservoir, so that the number of particles in the system, $n$, fluctuates.
The particles in the reservoir do not interact with each other, while
the energy of the system itself is assumed to be $\beta E=-an^2$, so that the system can
achieve a collapsed configuration only if $a>0$.
The grand potential of the system is
\begin{equation}
\beta \Omega(n) = n\ln\frac{n}{2} - n - a n^2 - \beta\mu n,
\label{eq:omega}
\end{equation}
where $n=n_+ + n_-$ is the total number of particles and $\frac{n}{2}\ln\frac{n}{2}-\frac{n}{2}$ is the entropy
contribution $-TS$ of each species.
If $a>0$, the global minimum of $\beta \Omega(n)$ is for $n=\infty$.
However, there is also a local minimum $\frac{d\beta\Omega}{dn}=0$ corresponding to
\begin{equation}
n_0 = -\frac{W\big(-4 a e^{\beta\mu}\big)}{2a} = 2 e^{\beta\mu} + 8a e^{2\beta\mu} + \dots
\end{equation}
and that corresponds to a metastable equilibrium. The local minimum vanishes for
$a > \frac{1}{4}e^{-(1+\beta \mu)}$, at which point the argument of the Lambert function
drops below $-1/e$ and $W$ ceases to be real.
Since the reservoir density is given by $\rho=e^{\beta \mu}$, the thermodynamic collapse
can be estimated to depend on the density as $a_c = \frac{1}{4e\rho}$. We observe a similar
qualitative behavior in our simulations for a lattice-gas model of binary penetrable particles.
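The stationary point of Eq. (\ref{eq:omega}) and its series expansion can be checked with the Lambert function available in SciPy; a sketch ($a$ and $\beta\mu$ below are arbitrary test values chosen so that the argument of $W$ stays above the branch point $-1/e$):

```python
import numpy as np
from scipy.special import lambertw

# Arbitrary test values; a real solution requires 4*a*exp(bmu) <= 1/e
a, bmu = 0.05, 0.0

# Local minimum of the grand potential: n0 = -W(-4 a e^{beta mu}) / (2a)
n0 = float(np.real(-lambertw(-4*a*np.exp(bmu)) / (2*a)))

# n0 is a stationary point: d(beta Omega)/dn = ln(n/2) - 2 a n - beta mu = 0
residual = np.log(n0/2) - 2*a*n0 - bmu
print(n0, residual)

# Leading terms of the series: n0 ~ 2 e^{beta mu} + 8 a e^{2 beta mu}
print(2*np.exp(bmu) + 8*a*np.exp(2*bmu))
```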
To analyze the metastable region more rigorously, we start with a perturbation
approximation. For a finite $\rho$, the system Hamiltonian is
\begin{equation}
H = H_{\infty} + \sum h(s_i).
\end{equation}
The partition function of this system can be written in terms of the $H_{\infty}$ ensemble as
\cite{Frydel15}
\begin{equation}
\Xi = \Xi_{\infty}\big\langle e^{-\beta \sum h(s_i)}\big\rangle_{\infty}.
\end{equation}
If we expand the quantity $\ln \Xi$, assuming $h$ to be small, and keep only the first order term,
a perturbative expression is
\begin{equation}
\ln \Xi \approx \ln \Xi_{\infty} - \beta N \big\langle h(s)\big\rangle_{\infty}.
\end{equation}
Finally, if we use the separation $\Xi_{\infty}=\Xi_{G}\Xi_{L}$ and ignore discrete contributions,
$\Xi_{\infty}\approx \Xi_{G}$, we have
\begin{equation}
\ln \Xi \approx \ln \Xi_{G} - \beta N \big\langle h(s)\big\rangle_{G},
\end{equation}
where the subscript $G$ denotes the continuous Gaussian system analyzed earlier.
For indistinguishable particles $h(s_i) = -\mu' |s_i|$. The
average value of $|s_i|$ is related to $A^{-1}_{ii}$, see Eq. (\ref{eq:A3_3}),
whose value is given in Eq. (\ref{eq:A00}). With this, we get
\begin{equation}
\ln \Xi \approx \ln \Xi_G + \beta \mu' \frac{2N}{\pi} \sqrt{\frac{{\rm K} (16\alpha^2)}{\beta K}}.
\label{eq:Xi_p1}
\end{equation}
The internal energy per particle can now be obtained using the definition in Eq. (\ref{eq:u0}).
For $\beta \mu'<0$, the expression in Eq. (\ref{eq:u})
is corrected as $u\to u + \Delta u$, where the correction due to the perturbation theory is given by
\begin{equation}
\beta \Delta u = \frac{\beta \mu' }{\pi} \sqrt{\frac{{\rm K} (16\alpha^2)}{\beta K} }
\bigg[ 1 - \frac{1}{(1-16\alpha^2)} \frac{{\rm E} (16\alpha^2)}{{\rm K} (16\alpha^2)} \bigg],
\label{eq:du}
\end{equation}
where ${\rm E}(x)$ is the complete elliptic integral of the second kind.
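Eq. (\ref{eq:du}) can be cross-checked by differentiating the correction term of Eq. (\ref{eq:Xi_p1}) numerically; a sketch, assuming SciPy:

```python
import numpy as np
from scipy.special import ellipk, ellipe

bK, bmu = 1.0, -0.2    # test values

# First-order correction to ln(Xi)/N from Eq. (Xi_p1)
corr = lambda a: bmu * (2/np.pi) * np.sqrt(ellipk(16*a**2)/bK)

# beta*Delta_u = -alpha * d(corr)/d(alpha), via central differences
a, h = 0.15, 1e-6
du_numeric = -a * (corr(a + h) - corr(a - h)) / (2*h)

# Closed form, Eq. (du)
m = 16*a**2
du_closed = (bmu/np.pi) * np.sqrt(ellipk(m)/bK) * (1 - ellipe(m)/((1 - m)*ellipk(m)))
print(du_numeric, du_closed)   # the two agree
```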
Fig. (\ref{fig:du}) plots the data points for $\beta u$, for $\beta K=1$ and two values of the
chemical potential, $\beta \mu'=0$ and $\beta \mu' =-0.2$, the former corresponding to
an infinite and the latter to a finite density. The data points indicate that the reduced density
leads to a higher internal energy. The perturbative correction in Eq. (\ref{eq:du})
for the case $\beta \mu' =-0.2$ is shown as a dotted line. It accurately represents the simulated
results for $\alpha<0.15$; for $\alpha>0.15$ it becomes increasingly less accurate and eventually
diverges in the wrong direction as $\alpha\to 1/4$. Because the perturbation approach
breaks down, it cannot tell us anything about the value of $\beta u$ in the metastable region.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.18\textwidth,width=0.22\textwidth]{u_k1_a02_indist.pdf}&
\end{tabular}
\end{center}
\caption{Internal energy (for indistinguishable particles) as a function of $\alpha$ for $\beta K=1$,
and $\beta \mu'=0$ and $\beta \mu'=-0.2$. The data points are from
Monte Carlo simulation. The solid line is for $\beta u$ in Eq. (\ref{eq:u}). The dotted line incorporates
the perturbative correction in Eq. (\ref{eq:du}). The dashed line is for the variational approach.}
\label{fig:du}
\end{figure}
We next turn to a variational method. We start by postulating a quadratic auxiliary Hamiltonian
$$
H_{\Gamma} = \frac{K}{2} s^T \Gamma s,
$$
where $\Gamma$ is an $N\times N$ matrix. To keep things simple, it is assumed that $\Gamma$
has the same structure as the matrix $A$; the only difference is that the coupling constant does
not correspond to the physical value $\alpha$ but is used as a variational parameter designated by
$\alpha'$. The partition function written in terms of the auxiliary ensemble is
\begin{equation}
\Xi = \Big\langle e^{-\frac{\beta K}{2} s^T (A-\Gamma) s - \beta \sum h(s_i)} \Big \rangle_{\! \Gamma} \Xi_{\Gamma}.
\end{equation}
Then, using the Gibbs-Bogoliubov-Feynman inequality (GBF) \cite{Frydel15}, we get
\begin{equation}
\Xi \ge e^{-\langle\frac{\beta K}{2} s^T (A-\Gamma) s - \beta \sum h(s_i)\rangle_{\Gamma} } \Xi_{\Gamma},
\end{equation}
and the quantity $\ln \Xi$ becomes
\begin{equation}
\ln \Xi \ge \ln \Xi_{\Gamma} - \bigg\langle \frac{\beta K}{2} s^T (A-\Gamma) s + N\beta h(s)\bigg\rangle_{\Gamma}.
\end{equation}
As the auxiliary system is Gaussian, the term in angular brackets can be evaluated, leading to
\begin{eqnarray}
\ln \Xi &\ge& \ln \Xi_{eff} =
\ln \Xi_{\Gamma}+ \beta\mu' \frac{2N}{\pi}\sqrt{\frac{{\rm K}(16\alpha'^2)}{\beta K} } \nonumber\\
&+& N\bigg(\frac{1}{2}-\frac{1}{\pi}{\rm K}(16\alpha'^2) \bigg) \bigg(1-\frac{\alpha}{\alpha'} \bigg).
\label{eq:lnXi}
\end{eqnarray}
Fig. (\ref{fig:Xi_eff}) plots $-\ln \Xi_{eff}/N$, where $\ln \Xi_{eff}$ is given in Eq. (\ref{eq:lnXi}),
as a function of the variational parameter $\alpha'$. Because the plots are for $\alpha>1/4$,
the local minima in those plots correspond to metastable equilibria. The minimum disappears
at around $\alpha\approx 0.42$, at which point the system spontaneously collapses.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.18\textwidth,width=0.22\textwidth]{Feff.pdf}&
\end{tabular}
\end{center}
\caption{$-\ln\Xi$ as a function of a variational parameter $\alpha'$
for $\beta K=1$ and $\beta \mu'=-1$ (for indistinguishable particles), for three different
values of $\alpha$. }
\label{fig:Xi_eff}
\end{figure}
The free energy of a metastable equilibrium corresponds to the function $-\ln\Xi_{eff}$ at a local
minimum. The internal energy is subsequently obtained from the definition in Eq. (\ref{eq:u0}).
$\beta u$ obtained in this way is shown in Fig. (\ref{fig:du}) for the parameters $\beta K=1$
and $\beta\mu'=-0.2$ as a dashed line. Comparison with the exact results indicates a high
degree of accuracy of the variational approach.
If we take the value of $\alpha$ where the local minimum of the function $-\ln\Xi_{eff}$
disappears, see Fig. (\ref{fig:Xi_eff}),
to indicate the end of the stability region, we can use the variational method to
obtain precise contours of the stability region.
Fig. (\ref{fig:alphac2}) plots such a boundary of the metastable region. To make contact with
the original particle system, we plot the results as a function of the particle density. The density
has been obtained from Eq. (\ref{eq:rho2b}) and within the variational framework is given by
\begin{equation}
\rho = \sqrt{\frac{4{\rm K}(16{\alpha'}_0^2)} {\beta K \pi^2} } + \frac{2e^{2\beta \mu'}}{1-e^{2\beta \mu'}},
\end{equation}
where ${\alpha'}_0$ corresponds to $\alpha'$ at a local minimum just as it is about to disappear.
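As a numerical illustration, the density formula above can be evaluated directly. The sketch below is a minimal, hypothetical example: it assumes the convention $K(m)$ for the complete elliptic integral of the first kind, computed through the arithmetic-geometric-mean identity $K(m)=\pi/[2\,\mathrm{AGM}(1,\sqrt{1-m})]$, and the input value of ${\alpha'}_0$ is illustrative rather than obtained from an actual minimization of $-\ln\Xi_{eff}$.

```python
import math

def ellipk(m):
    # Complete elliptic integral of the first kind K(m) via the
    # arithmetic-geometric mean: K(m) = pi / (2 * AGM(1, sqrt(1 - m))),
    # valid for 0 <= m < 1.
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def density(alpha0, beta_K=1.0, beta_mu=-0.2):
    # rho = sqrt(4 K(16 alpha0^2) / (beta_K pi^2))
    #       + 2 exp(2 beta mu') / (1 - exp(2 beta mu')).
    # alpha0 stands for alpha'_0 and must satisfy alpha0 < 1/4 so that m < 1.
    m = 16.0 * alpha0 ** 2
    ideal_term = 2.0 * math.exp(2.0 * beta_mu) / (1.0 - math.exp(2.0 * beta_mu))
    return math.sqrt(4.0 * ellipk(m) / (beta_K * math.pi ** 2)) + ideal_term

print(density(0.20))
```

Because $K$ is increasing in its argument, the Gaussian contribution to $\rho$ grows with ${\alpha'}_0$ and diverges as ${\alpha'}_0 \to 1/4$.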
The results show a drastic broadening of the metastable region for $\rho<1$. This effect
is even stronger for smaller $\beta K$.
\graphicspath{{figures/}}
\begin{figure}[h]
\begin{center}
\begin{tabular}{rrrr}
\includegraphics[height=0.18\textwidth,width=0.22\textwidth]{alpha_c2_indist.pdf}&
\end{tabular}
\end{center}
\caption{Boundaries of the metastable region as a function of the particle density for indistinguishable particles.
The global minimum corresponds to $\alpha=1/4$ regardless of density. The metastable region
extends above this value and strongly depends on density. }
\label{fig:alphac2}
\end{figure}
For distinguishable particles we see the same type of general behavior and the emergence of the
metastable region. However, the application of the variational
procedure is more complex as the function $h(s)$ is more difficult to handle.
\section{Conclusion}
\label{sec:con}
This work investigates thermodynamic collapse in a two-component lattice-gas system
of penetrable particles on a square-lattice substrate. Because particles are penetrable,
there is no limit on how many particles occupy the same site, and the multiple occupation
of a single site gives rise to different statistical mechanics, depending on whether particles
are regarded as distinguishable or indistinguishable. To facilitate analysis of the system,
we transform the relevant partition function into a spin model with spins
$s_i=0,\pm1,\pm2,\dots$. In the limit $\rho\to\infty$, the system Hamiltonian recovers a
simple quadratic form, so that the partition function corresponds to a discrete Gaussian
model analyzed in the past in connection with interfaces and the roughening
transition. The difference between the Gaussian model used to study interfaces and
the Gaussian model of penetrable particles lies in the sign of the interactions between spins.
Because the Gaussian model at the point of collapse becomes isomorphic with
the lattice Coulomb system, we check for the existence of a KT phase transition along
the line of the thermodynamic instability. The presence of the KT transition itself
does not affect the collapse transition; it might, however, affect its mechanism.
To analyze the system for finite $\rho$ we employ a variational approximation
since for this situation the Hamiltonian is no longer harmonic.
Both simulations and the approximation indicate the presence of a metastable
equilibrium corresponding to a local minimum in the free energy. The extent of the
metastable region, furthermore, strongly depends on density: the metastable
region vanishes at infinite density and diverges as the density goes to zero.
\section{Introduction}\label{1}
\medskip
Let $B$ be a continuous body undergoing a thermomechanical transformation, whose evolution in spacetime is ruled by the system of balance laws
\begin{equation}
\label{6}
U_{\beta,t} + U_{\beta,j}v_{j}+\Phi^\beta_{k,k}=r_\beta,\,\,\,\,\, \beta= 1\dots \omega,
\end{equation}
with $v_{j}$ the components of the velocity field on $B$ entering the total time derivative, $\Phi^\beta_k$ the components of the flux of $U_\beta$, and $r_\beta$ the production of $U_\beta$ (for the sake of simplicity we assume that the supplies are zero). Moreover, the symbols $f_{,t}$ and $f_{,j}$ denote the partial derivatives of a function $f$ with respect to time and to the spatial coordinate $x_{j},\,\,j=1,2,3$, respectively.
We suppose that the fields $U_{\beta}$, the fluxes $\Phi^\beta_k$, and the productions $r_\beta$ depend on $\omega$ unknown fields $z_\alpha(x_{j},\,t)$ and on their spatial derivatives $z_{\alpha,j}(x_{j},\,t)$. Then, suitable constitutive equations must be assigned for them.
In classical rational thermodynamics \cite{TRU,ColNol} the equations above are the balances of mass, linear momentum, angular momentum and energy, while in the extended non-equilibrium thermodynamic theories taking the fluxes as independent variables, the set of field equations includes the balance laws for the independent fluxes, \cite{Gra,JCL,MR, SELCIMJOU,SzuKovSim}.
The solutions of the system \eqref{6} must obey the second law of thermodynamics, which imposes that the local entropy production
\begin{equation}\label{5}
\sigma^{\left(s\right)}=\rho s_{,t}+\rho s_{,j}v_{j}+J_{k,k} - \rho({{r}}/{\vartheta}),
\end{equation}
where $s$ is the specific entropy, $J_k$ are the components of the entropy flux, and $\vartheta$ the absolute temperature, is nonnegative whatever the thermodynamic process is \cite{ColNol,TRU}.
In continuum physics the entropy (or dissipation) principle \cite{CimJouRugVan} constitutes a valuable tool in modeling material properties. Coleman and Noll were the first to formulate it as follows \cite{ColNol}:
\textit{The constitutive equations, which characterize the material properties of continuous media, must be assigned in such a way that second law of thermodynamics is satisfied along arbitrary thermodynamic processes}.
\medskip
These authors also proposed a rigorous mathematical procedure to exploit the requirement above, currently referred to as Coleman-Noll procedure \cite{ColNol,TriPapCimMus}.
It is worth observing that the entropy principle, as formulated by Coleman and Noll, is just an operative assumption and not a consequence of a general physical law. Thus, in principle, nothing prevents us from assuming that the second law of thermodynamics restricts the thermodynamic processes instead of the constitutive equations, by selecting those which actually can occur in nature and those which cannot.
In order to decide what is the correct approach, Muschik and Ehrentraut \cite{MusEhr} proposed the following amendment to the second law:
\medskip
\textit{Except in equilibria, reversible process directions in state space do not exist}.
\medskip
From the physical point of view the amendment expresses, in the form of a postulate, the physically evident but rather hidden assumption that at any point of a continuous body the entropy production is zero if, and only if, this point is in thermodynamic equilibrium.
Muschik and Ehrentraut proved that, under the validity of the amendment, the second law of thermodynamics necessarily restricts the constitutive equations and not the thermodynamic processes. In this way, the classical Coleman--Noll approach follows from a rigorous proof.
The present paper is motivated by the observation that the important result illustrated above can be put in a more general and accessible form within a geometric framework.
To achieve that task, we use the results in Refs. \cite{DolFraRog,DolFraRog1}, where a geometric perspective on nonequilibrium thermodynamics has been given. The chosen state space is different from that considered in Ref. \cite{MusEhr}, because we do not include in it the time derivatives. In this way, the constitutive equations we are dealing with also comply with the principle of material indifference \cite{TRU}. After defining the space of the higher derivatives, we introduce the definitions of real, ideal, and over-ideal vectors of the higher derivatives. For thermodynamic processes, we give the definitions of irreversible, reversible, and over-reversible processes, by analyzing the properties of their representative curves in the fibre bundle of the configuration spaces.
Once the geometric framework is complete, we reformulate the second law of thermodynamics, both locally and globally in time, in order to encompass the amendment. In this way, we are able to prove a new formulation of the Muschik and Ehrentraut theorem.
The paper runs as follows.
In Sec. \ref{2}, we construct a new thermodynamic framework for nonequilibrium processes. In Sec. \ref{3}, we give a new formulation, both local and global, of the second law of thermodynamics. In Sec. \ref{4}, we prove the Muschik and Ehrentraut theorem. In Sec. \ref{5}, we summarize our results and discuss some open problems which will be considered in future research.
\section{The thermodynamic framework} \label{2}
\medskip
In this section we aim at constructing the geometric framework where our main result can be formulated. To this end, we start by giving some basic definitions.
\begin{definition} \label{d1}
The space $C_t$ of the configurations at the instant $t$ is represented by
an $\omega$-dimensional vector space spanned by the solutions $z_\alpha(x_{j},\,t)$ of Eqs. \eqref{6}, with the structure of a finite-dimensional manifold.
\end{definition}
We assume that the total configuration space is given by the disjoint union
\begin{equation}
\label{7}
\mathcal{C} = \bigcup_{t\in [0,\,\infty]}\{t\} \times C_t,
\end{equation}
with a given natural structure of a fibre bundle over the real line $\mathbb{R}$ where time flows \cite{DolFraRog,DolFraRog1}.
\begin{definition} \label{d2}
$\mathcal{C}$ is called configuration bundle.
\end{definition}
Under the natural assumption that $C_t$
does not vary in time, namely $C_t=C\ \forall t$, $\mathcal{C}$ has the topology of the
Cartesian product
\begin{equation}
\label{8}
\mathcal{C} = \mathbb{R} \times C.
\end{equation}
\begin{definition} \label{d3}
A vector valued function $\pi: t\in [\tau_0,\,\tau_0+\tau] \subseteq\mathbb{R}\rightarrow z_\alpha(x_{j},\,t) \in\mathcal{C}$ is said to be a thermodynamic process $\pi$ of duration $\tau$. Moreover, $\pi=\pi(t)$ is the parametric equation of the curve $\Gamma$ representative of $\pi$ in $\mathcal{C}$.
\end{definition}
\begin{definition} \label{d31}
For $t_0 \in [\tau_0,\,\tau_0+\tau] $, a vector valued function $p: t\in [t_0,\,\tau_0+\tau] \subseteq\mathbb{R}\rightarrow z_\alpha(x_{j},\,t) \in\mathcal{C}$ is said to be a restricted thermodynamic process $p$ of initial point $t_0$ and duration $\tau_0+\tau-t_0$ \cite{DolFraRog}. Moreover, $p=p(t)$ is the parametric equation of the curve $\gamma$ representative of $p$ in $\mathcal{C}$.
\end{definition}
\begin{remark} \label{r0}
For $t_0= \tau_0$ we get $p(t)= \pi(t)$; for $t_0= \tau_0+\tau$, $p(t)$ is the process of duration $0$, i.e., the null process.
\end{remark}
As said in Sec. \ref{1}, in order to find the fields $z_\alpha(x_{j},\,t)$, i.e., to solve the system \eqref{6},
constitutive equations must be assigned for the quantities $U_\beta$, $\Phi^\beta_k$ and $r_\beta$ on a suitable state space.
\begin{definition} \label{d4}
The $4\omega$-dimensional vector space with the structure of a finite-dimensional manifold
\begin{equation}
\label{9}
\Sigma_t = \left\{z_{\alpha}(x_{j},\,t),\,z_{\alpha,j}(x_{j},\,t)\right\},
\end{equation}
for any value of the time variable $t$, represents a local in time state space and is called the state space at the instant $t$.
\end{definition}
\begin{definition} \label{d5}
The disjoint union
\begin{equation}
\label{10}
\mathcal{S} = \bigcup_{t\in [0,\,\infty]}\{t\} \times \Sigma_t,
\end{equation}
with a given natural structure of a fibre bundle over the real line $\mathbb{R}$ where time flows, represents
the total state space and is called the thermodynamic bundle.
\end{definition}
Again, under the natural assumption that $\Sigma_t$
does not vary in time, namely $\Sigma_t=\Sigma\ \forall t$, $\mathcal{S}$ has the topology of the
Cartesian product
\begin{equation}
\label{11}
\mathcal{S} = \mathbb{R} \times \Sigma.
\end{equation}
Of course,
\begin{equation}
\label{12}
{C_t} \subset \Sigma_t,\,\,\,\,\,\,\mathcal{C}\subset\mathcal{S}.
\end{equation}
The balance equations \eqref{6} on the local in time state space $\Sigma_t$ read
\begin{equation}
\label{15}
\frac{\partial U_{\beta}
}{\partial z_\alpha} z_{\alpha,t}+\frac{\partial U_{\beta}
}{\partial z_{\alpha,j}} z_{\alpha,jt} + \frac{\partial U_{\beta}
}{\partial z_\alpha} z_{\alpha,j}v_j + \frac{\partial U_{\beta}
}{\partial z_{\alpha,k}} z_{\alpha,kj}v_j + \frac{\partial \Phi^\beta _k}{\partial z_\alpha}z_{\alpha,k} + \frac{\partial \Phi^\beta _k}{\partial z_{\alpha,j}}z_{\alpha,jk}
= r_\beta.
\end{equation}
In Eqs. \eqref{15} we may identify the $10\omega$ higher derivatives $\left\{z_{\alpha,t},\, z_{\alpha,jt},\, z_{\alpha,jk} \right\}$, which are the space and time derivatives of the elements of $\Sigma_t$.
\begin{definition} \label{d6}
The local in time $10\omega$-dimensional vector space
\begin{equation}
\label{17}
H_t = \left\{z_{\alpha,t}(x_{j},\,t),\, z_{\alpha,jt}(x_{j},\,t),\,z_{\alpha,jk}(x_{j},\,t)\right\},
\end{equation}
and the fibre bundle
\begin{equation}
\label{18}
\mathcal{H} = \bigcup_{t\in [0,\,\infty]}\{t\} \times H_t,
\end{equation}
represent the space of the higher derivatives at time $t$ and its
fibre bundle, respectively.
Moreover, the equilibrium subspace of $H_t$ and its fibre bundle are given by
\begin{equation}
\label{17a}
\hat E_t = \left\{z_{\alpha,jk}(x_{j},\,t)\right\},
\end{equation}
and
\begin{equation}
\label{18a}
\mathcal{\hat E} = \bigcup_{t\in [0,\,\infty]}\{t\} \times \hat E_t.
\end{equation}
\end{definition}
Analogously, the entropy inequality on the state space reads
\begin{equation}
\label{16}
\rho\frac{\partial s}{\partial z_\alpha} z_{\alpha,t}+\rho\frac{\partial s}{\partial z_{\alpha,j}} z_{\alpha,jt} + \rho\frac{\partial s}{\partial z_\alpha} z_{\alpha,j}v_j + \rho\frac{\partial s}{\partial z_{\alpha,k}} z_{\alpha,kj}v_j
+\frac{\partial J_k}{\partial z_\alpha}z_{\alpha,k} +\frac{\partial J_k}{\partial z_{\alpha,j}}z_{\alpha,jk} \geq0.
\end{equation}
\begin{definition} \label{d7}
The local in time $10\omega$-dimensional vector space at time $t$
\begin{equation}
\label{17b}
W_t = \left\{z_{\alpha,t}(x_{j},\,t),\, z_{\alpha,jt}(x_{j},\,t),\,z_{\alpha,jk}(x_{j},\,t)\right\},
\end{equation}
and the fibre bundle
\begin{equation}
\label{18b}
\mathcal{W} = \bigcup_{t\in [0,\,\infty]}\{t\} \times W_t,
\end{equation}
define the vector space and the fibre bundle of the higher derivatives, respectively, whose state vectors satisfy the entropy inequality. Moreover,
the equilibrium subspace of $W_t$ and its fibre bundle are given by
\begin{equation}
\label{17c}
E_t = \left\{z_{\alpha,jk}(x_{j},\,t)\right\},
\end{equation}
and
\begin{equation}
\label{18c}
\mathcal{E} = \bigcup_{t\in [0,\,\infty]}\{t\} \times E_t.
\end{equation}
\end{definition}
\medskip
\begin{remark} \label{r1}
The reason why we defined two different spaces of the higher derivatives, one for the balance equations and another for the entropy inequality, is related to the fundamental focus of the present investigation, namely, to determine the conditions, if any, under which all the solutions of the balance laws are also solutions of the entropy inequality. This will be discussed in detail in the next section.
\end{remark}
The relations in Eqs. \eqref{15} and \eqref{16} can be arranged as follows
\begin{equation}
\label{19}
\frac{\partial U_{\beta}
}{\partial z_\alpha} z_{\alpha,t}+\frac{\partial U_{\beta}
}{\partial z_{\alpha,j}} z_{\alpha,jt} + \left(\frac{\partial U_{\beta}
}{\partial z_{\alpha,k}} v_j + \frac{\partial \Phi^\beta _j}{\partial z_{\alpha,k}}\right)z_{\alpha,kj} = r_\beta - \frac{\partial U_{\beta}
}{\partial z_\alpha} z_{\alpha,j}v_j -\frac{\partial \Phi^\beta _j}{\partial z_\alpha}z_{\alpha,j}.
\end{equation}
\begin{equation}
\label{20}
\rho\frac{\partial s}{\partial z_\alpha} z_{\alpha,t}+\rho\frac{\partial s}{\partial z_{\alpha,j}} z_{\alpha,jt} + \left(\rho\frac{\partial s}{\partial z_{\alpha,k}}v_i
+\frac{\partial J_i}{\partial z_{\alpha,k}} \right)z_{\alpha,ki} \geq - \rho\frac{\partial s}{\partial z_\alpha} z_{\alpha,j}v_j -\frac{\partial J_i}{\partial z_\alpha}z_{\alpha,i}.
\end{equation}
Let us now define the $10\omega \times 1$ column vector function
\begin{equation}
\label{21}
y_{\alpha} \equiv \left( z_{\alpha,t},\,\, z_{\alpha,jt},\,\,z_{\alpha,kj}\right)^T,
\end{equation}
%
the $\omega \times 1$ column vector
%
\begin{equation}
\label{22}
C_\beta \equiv r_\beta - \frac{\partial U_{\beta}
}{\partial z_\alpha} z_{\alpha,j}v_j -\frac{\partial \Phi^\beta _j}{\partial z_\alpha}z_{\alpha,j},\,\,\,\, \beta =1 \dots \omega,
\end{equation}
%
and the $\omega \times 10\omega$ matrix
\begin{equation}
\label{23}
A_{\beta\alpha} \equiv \left[ \frac{\partial U_{\beta}
}{\partial z_\alpha},\,\,\,\,\frac{\partial U_{\beta}}{\partial z_{\alpha,j}},\,\,\,\,\left(\frac{\partial U_{\beta}
}{\partial z_{\alpha,k}} v_j + \frac{\partial \Phi^\beta _j}{\partial z_{\alpha,k}}\right)\right]\,\,\,\,\,(j,\,k=1,2,3),
\end{equation}
with $C_\beta$ and $A_{\beta\alpha}$ defined on $\mathcal{S}$.
In this way, the balance equations \eqref{19} can be rearranged as
\begin{equation}
\label{24}
A_{\beta\alpha}(\mathcal{S})y_{\alpha} = C_\beta(\mathcal{S}).
\end{equation}
%
Analogously, after defining the $10\omega \times 1$ column vector function
\begin{equation}
\label{25}
B_{\alpha} (\mathcal{S}) \equiv \left(\rho\frac{\partial s}{\partial z_{\alpha}},\,\,\,\, \rho\frac{\partial s}{\partial z_{\alpha,j}},\,\,\,\, \left(\rho\frac{\partial s}{\partial z_{\alpha,k}}v_i
+\frac{\partial J_i}{\partial z_{\alpha,k}} \right)\right)^T,
\end{equation}
and the scalar function
\begin{equation}
\label{26}
D(\mathcal{S}) \equiv - \rho\frac{\partial s}{\partial z_\alpha} z_{\alpha,j}v_j -\frac{\partial J_i}{\partial z_\alpha}z_{\alpha,i},
\end{equation}
we can write the inequality \eqref{20} as
\begin{equation}
\label{27}
B_{\alpha}(\mathcal{S})y_{\alpha} \ge D(\mathcal{S}).
\end{equation}
%
\begin{remark} \label{r2}
It is worth observing that the higher derivatives entering the system \eqref{24} are elements of $H_t$, while those entering the inequality \eqref{27} are elements of $W_t$.
\end{remark}
From now on we pursue our analysis under the hypothesis that $B$ occupies the whole space. Then, for arbitrary $t_0 \in [\tau_0, \tau_0+\tau]$ we consider the restricted process $p$ of initial instant $t_0$ and duration $\tau_0+\tau-t_0$, and suppose that it corresponds to the solution of the Cauchy problem for the system \eqref{24} with initial conditions
%
\begin{equation}
\label{28}
z_{\alpha}(x_{j},\,t_0) = z_{\alpha\,0}({x}_{j}),\,\,\,\, \forall {P} \in C.
\end{equation}
%
If $A_{\beta\alpha}$ and $C_\beta$ are regular, and $A_{\beta\alpha}$ is invertible, the Cauchy-Kovalevskaya theorem ensures that the Cauchy problem \eqref{24} and \eqref{28} has a unique solution continuously depending on the initial data \eqref{28} \cite{CH}. However, such a solution does not necessarily correspond to a thermodynamic process which is physically realizable, since the physically admissible solutions of
\eqref{24} and \eqref{28} are only those which additionally satisfy the unilateral differential constraint \eqref{27}. On the other hand, the problem \eqref{24} and \eqref{28} is in general very difficult to solve, so that finding a solution and verifying ex post whether it also satisfies \eqref{27} does not seem to be a convenient procedure. For that reason, Coleman and Noll \cite{ColNol} in 1963 postulated the constitutive principle recalled in Sec. \ref{1} \cite{CimJouRugVan}. It is then important to investigate whether the Coleman and Noll postulate is a consequence of a general physical law or an arbitrary, although very useful, assumption, as observed by Muschik and Ehrentraut \cite{MusEhr}. Such a study will be carried out in the next sections.
\section{Local and global formulation of second law of thermodynamics} \label{3}
\medskip
Let us now consider a fixed point $P_0 \in B$, whose position vector will be indicated by $\mathbf{x}_0$, and a fixed instant of time $t_0 \in [\tau_0, \, \tau_0+\tau]$. We note that, whatever $t_0$ is, it can always be considered as the initial time of a restricted process of duration $\tau_0+\tau-t_0$. Moreover, let $\Sigma_0$, $H_0$, and $E_0$ be the vector spaces
$\Sigma_t(P_0,\,t_0)$, $H_t(P_0,\,t_0)$, and $E_t(P_0,\,t_0)$. When evaluated in $(P_0,\,t_0)$, the balance equations \eqref{24} and the entropy inequality \eqref{27} transform into the algebraic relations
\begin{equation}
\label{29}
A_{\beta\alpha}({\Sigma_0})y_{\alpha} = C_\beta({\Sigma_0}),
\end{equation}
\begin{equation}
\label{30}
B_{\alpha}({\Sigma_0})y_{\alpha} \ge D({\Sigma_0}).
\end{equation}
%
In this way we can regard the $\omega \times 10\omega$ matrix $A_{\beta\alpha}({\Sigma_0})$ as a linear morphism from $H_0$ to the $\omega$-dimensional Euclidean vector space defined on $\Sigma_0$. Analogously, the vector $B_{\alpha}({\Sigma_0})$ can be regarded as a linear application from $H_0$ into $\mathbb{R}$, so that $B_{\alpha}({\Sigma_0})$ belongs to the dual space $H^{*}_0$ of $H_0$. It is worth observing that, since $A_{\beta\alpha}$ has been supposed to be invertible (otherwise the Cauchy problem \eqref{24} and \eqref{28} would not admit a unique solution), the algebraic relations \eqref{29} allow us to determine $\omega$ of the $10\omega$ components of $y_{\alpha}$. Moreover, by spatial derivation of the initial conditions \eqref{28} we get
%
\begin{equation}
\label{31}
z_{\alpha,jk}(x_{j},\,t_0) = z_{\alpha\,0,jk}({x_j}),
\end{equation}
%
which, once evaluated in $P_0$, allow us to determine $6\omega$ components of $y_{\alpha}$. It is worth observing that, since the initial conditions can be assigned arbitrarily, these $6\omega$ quantities can assume arbitrary values. Moreover, there are further $3\omega$ components of the vector $y_{\alpha}$ which remain completely arbitrary, since the system \eqref{29} and the initial relations \eqref{31} allow us to determine only $7\omega$ of the $10\omega$ components of $y_{\alpha}$. Then, it is not guaranteed that the inequality \eqref{30} is satisfied whatever $y_{\alpha} \in H_0$ is. Thus, we define the space $W_0 \subseteq H_0$ constituted by the vectors of $H_0$ which satisfy both Eq. \eqref{29} and the inequality \eqref{30}.
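The counting argument above can be mimicked with a small linear-algebra sketch. The numbers below are purely illustrative (a random full-row-rank matrix stands in for $A_{\beta\alpha}({\Sigma_0})$): for $\omega$ independent balance equations in $10\omega$ unknown higher derivatives, the solution set of $A\,y=C$ is an affine subspace of dimension $9\omega$, of which $6\omega$ directions are fixed by the (arbitrary) initial data \eqref{31} and $3\omega$ remain free.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 2                                      # illustrative number of unknown fields
A = rng.standard_normal((omega, 10 * omega))   # stand-in for A_{beta alpha}(Sigma_0)

rank = np.linalg.matrix_rank(A)   # generically full row rank: omega
free = 10 * omega - rank          # dimension of the affine solution set of A y = C
print(rank, free)                 # omega constraints leave 9*omega components free
```

The balance equations alone therefore never pin down the full vector of higher derivatives, which is why the entropy inequality acts as a genuine extra restriction.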
\begin{remark} \label{r3}
It is worth observing that, although it is not guaranteed that the inequality \eqref{30} is satisfied whatever $y_{\alpha} \in H_0$ is, at this stage we do not have elements to exclude such a possibility. In other words, we do not have elements to decide whether $W_0$ is actually a proper subspace of $H_0$ or coincides with $H_0$.
\end{remark}
In order to decide whether $W_0 \subset H_0$ or $W_0=H_0$, we follow the way paved by Muschik and Ehrentraut \cite{MusEhr}, who observed that such a decision cannot follow from the second law of thermodynamics alone, because that law contains no information regarding either Eqs. \eqref{29} or the initial conditions \eqref{31}. In order to fill this gap, Muschik and Ehrentraut completed the information contained in the inequality \eqref{30} with an amendment which clarifies how reversible transformations can be realized from the operative point of view. Here we follow their strategy, but propose a more general approach which includes the amendment in a new formulation of the second law. To achieve that task, we need some preliminary definitions. To this end, we observe that in the real world reversible thermodynamic transformations do not exist, but they are approximated by very slow (quasi-static) transformations in which at any point $P_0 \in B$ the system is very close to thermodynamic equilibrium. From an ideal point of view, a quasi-static transformation requires an infinite time to occur, and at any point of the system the values of the state variables are constant in time.
\begin{remark} \label{r4}
Alternative formulations of the thermodynamic laws which consider realistic transformations occurring in a finite time have been proposed within the framework of finite time thermodynamics \cite{AndBerNitSal,AndSalBer,Hof}.
\end{remark}
As far as the thermodynamic framework developed so far is concerned, if $B$ undergoes a quasi-static transformation, following Muschik and Ehrentraut \cite{MusEhr} we say that at any point $(P_0,\,t_0)$ the vectors of the higher derivatives are elements of $E_0$. Such an observation suggests the following definitions.
\begin{definition} \label{d8}
A vector $y_{\alpha} \in H_0$ is said:
\medskip
\begin{itemize}
\item real, if it satisfies the relation $ B_{\alpha}({\Sigma_0})y_{\alpha} > D({\Sigma_0})$;
\smallskip
\item ideal, if it satisfies the relation $ B_{\alpha}({\Sigma_0})y_{\alpha} = D({\Sigma_0})$;
\smallskip
\item over-ideal, if it satisfies the relation $ B_{\alpha}({\Sigma_0})y_{\alpha} < D({\Sigma_0})$.
\end{itemize}
\end{definition}
\medskip
Owing to the definitions above we can establish the following
\begin{postulate}{\bf Local formulation of second law of thermodynamics}. \label{p1}
Let $B$ be a body, and let the couple $(P_0,\,t_0)$ represent an arbitrary point of $B$ at an arbitrary instant $t_0 \in [\tau_0,\, \tau_0+\tau]$. Suppose $B$ is undergoing an arbitrary thermodynamic process of initial instant $t_0$ and duration $\tau_0+\tau - t_0$. Then, the local space of the higher derivatives $W_0$ does not contain over-ideal vectors. Moreover, a vector $y_{\alpha} \in W_0$ is ideal if, and only if, $y_{\alpha} \in E_0$.
\end{postulate}
The postulate above expresses the experimental evidence that in a thermodynamic process the entropy production cannot be negative at any point $P_0$ of $B$ at any instant $t_0$. Moreover, it also expresses the further experimental fact, often tacit in formulations of the second law of thermodynamics, that the entropy production can be zero only at the points of $B$ which are in equilibrium. In particular, we say that the point $P_0 \in B$ at the instant $t_0$ is in thermodynamic equilibrium if, and only if, $W_0= E_0$.
\medskip
\begin{remark} \label{r5}
We note that the local formulation of the second law of thermodynamics prohibits over-ideal vectors from being in $W_0$ but does not prevent them from being in $H_0$. Whether or not $H_0$ contains over-ideal vectors is precisely the focus of the present investigation.
\end{remark}
\begin{definition} \label{d9}
Let $B$ be a body undergoing an arbitrary thermodynamic process $p$ of initial instant $t_0$ and duration $\tau_0+\tau - t_0$, and let $\gamma$ be the curve representative of the process in $\mathcal{C}$.
The process $p$ is said:
\medskip
\begin{itemize}
\item irreversible, if there exists at least a point $z_{\alpha}(P,\,t)$ of $\gamma$ in which the vector of the higher derivatives $y_{\alpha}(P,\,t)$ is real;
\smallskip
\item reversible, if in any point $z_{\alpha}(P,\,t)$ of $\gamma$ the vector of the higher derivatives $y_{\alpha}(P,\,t)$ is ideal;
\smallskip
\item over-reversible, if there exists at least a point $z_{\alpha}(P,\,t)$ of $\gamma$ in which the vector of the higher derivatives $y_{\alpha}(P,\,t)$ is over-ideal.
\end{itemize}
\end{definition}
The definitions above allow us to state the following
\begin{postulate} \label{p2}
{\bf Global formulation of second law of thermodynamics}:
Over-reversible processes do not occur in nature. Moreover, a thermodynamic process is reversible if, and only if, its representative curve $\gamma$ lies into the equilibrium bundle $\mathcal{E}$.
\end{postulate}
The previous formulations (local and global) of the second law of thermodynamics include the information, not present in the classical ones, that
reversible transformations are necessarily quasi-static and hence need an infinite time to occur. So, they represent ideal processes, which in nature are approximated by very slow transformations. Here we take such a situation into account by admitting that at any point of a reversible curve the vector of the higher derivatives is ideal.
\section{The Muschik and Ehrentraut theorem revisited} \label{4}
\medskip
In this section we present a novel formulation of the Muschik and Ehrentraut theorem proved in Ref. \cite{MusEhr}. To this end, we use the thermodynamic framework and the generalized formulations of second law established above.
\begin{theorem} \label{t1}
Let $B$ be a body, and let the couple $(P_0,\,t_0)$ represent an arbitrary point of $B$ at an arbitrary instant $t_0 \in [\tau_0,\, \tau_0+\tau]$. Then, $H_0 = W_0$.
\end{theorem}
\begin{proof}
To prove the theorem it is enough to demonstrate that the vectors of $H_0$ are all and only the vectors of $W_0$.
To this end, we observe that, at the generic point $(P_0,\,t_0)$, to fixed values of $A_{\beta\alpha}({\Sigma_0})$, $C_\beta({\Sigma_0})$, $B_{\alpha}({\Sigma_0})$, and $D({\Sigma_0})$ there correspond infinitely many vectors $y_{\alpha}(P_0,\,t_0)$, because only $\omega$ components of $y_{\alpha}(P_0,\,t_0)$ are determined by the balance equations while the remaining $9\omega$ are completely arbitrary (see the discussion in Sec. \ref{3}). Moreover, if all the $y_{\alpha}$ in $H_0$ were over-ideal, the vector space $W_0$ would be empty, because the second law of thermodynamics prohibits it from containing over-ideal vectors. As a consequence, in $(P_0,\,t_0)$ no process would be possible. On the other hand, since $(P_0,\,t_0)$ is arbitrary, no thermodynamic transformation could occur in $B$ in the interval of time $[\tau_0,\,\tau_0+\tau]$. So, in $(P_0,\,t_0)$ the space $H_0$ contains, in principle, both real/ideal vectors and over-ideal ones.
Let us suppose that in $(P_0,\,t_0)$ the space $H_0$ contains an ideal vector $y^1_{\alpha}$ and an over-ideal vector $y^2_{\alpha}$. Since the existence of $y^1_{\alpha}$ is possible if, and only if, $(P_0,\,t_0)$ is in thermodynamic equilibrium, while $y^2_{\alpha}$ exists if, and only if, $(P_0,\,t_0)$ is not in thermodynamic equilibrium, such a situation cannot be realized.
Analogously, let us suppose that $y^1_{\alpha}$ is ideal and $y^2_{\alpha}$ is real. Again, such a situation is impossible, because it would require $(P_0,\,t_0)$ to be simultaneously in equilibrium and not in equilibrium.
Finally, let $y^1_{\alpha}$ be a real vector, and $y^2_{\alpha}$ an over-ideal one. Such a situation is possible, in principle, provided
$(P_0,\,t_0)$ is not in equilibrium.
In such a case, due to the local formulation of second law, neither $y^1_{\alpha}$ nor $y^2_{\alpha}$ are elements of $E_0$.
Let us now consider the linear combination $y^3_{\alpha}= \lambda y^1_{\alpha} + (1-\lambda)y^2_{\alpha}$, with $\lambda \in ]0,\,1[$. Since $y^1_{\alpha}$ and $y^2_{\alpha}$ are in $H_0$, they satisfy the following equations
\begin{equation}
\label{32}
A_{\beta\alpha}({\Sigma_0})y^1_{\alpha} = C_\beta({\Sigma_0}),
\end{equation}
%
\begin{equation}
\label{33}
A_{\beta\alpha}({\Sigma_0})y^2_{\alpha} = C_\beta({\Sigma_0}).
\end{equation}
%
The combination of Eqs. \eqref{32} multiplied by $\lambda$ and Eqs. \eqref{33} multiplied by $(1-\lambda)$ leads to
\begin{equation}
\label{34}
A_{\beta\alpha}({\Sigma_0})y^3_{\alpha} = C_\beta({\Sigma_0}),
\end{equation}
%
namely, $y^3_{\alpha}$ is also a solution of Eq. \eqref{29}, i.e., it is in $H_0$. On the other hand, the local entropy production corresponding to $y^3_{\alpha}$ can be written as
\begin{equation}
\label{35}
\sigma^3=\lambda\left[B_{\alpha}({\Sigma_0})y^1_{\alpha} - D({\Sigma_0})\right] + (1-\lambda)\left[B_{\alpha}({\Sigma_0})y^2_{\alpha} - D({\Sigma_0})\right]=
\end{equation}
\begin{equation}
\nonumber
=B_{\alpha}({\Sigma_0})\left[\lambda y^1_{\alpha} + (1-\lambda)y^2_{\alpha}\right ]- D({\Sigma_0}).
\end{equation}
Since $\lambda$ is arbitrary in $]0,\,1[$, nothing prevents us from choosing it as
\begin{equation}
\label{36}
\lambda = \frac{D({\Sigma_0})-B_{\alpha}({\Sigma_0})y^2_{\alpha}}{B_{\alpha}({\Sigma_0})\left[y^1_{\alpha}-y^2_{\alpha}\right]},
\end{equation}
%
because, as is easily seen, the right-hand side of Eq. \eqref{36} lies in the interval $]0,\,1[$. In fact, since $y^2_{\alpha}$ is over-ideal, we get $D({\Sigma_0})-B_{\alpha}({\Sigma_0})y^2_{\alpha}>0$. Moreover, since $y^1_{\alpha}$ is real, we have $B_{\alpha}({\Sigma_0})y^1_{\alpha}> D({\Sigma_0})$, so that $B_{\alpha}({\Sigma_0})\left[y^1_{\alpha} -y^2_{\alpha}\right ]> 0$ and hence $\lambda>0$. Finally, since $y^1_{\alpha}$ is real, we also get $B_{\alpha}({\Sigma_0})\left[y^1_{\alpha}-y^2_{\alpha}\right]> D({\Sigma_0})-B_{\alpha}({\Sigma_0})y^2_{\alpha}$, and hence
$\lambda <1$.
Consequently, the right-hand side of Eq. \eqref{35} vanishes, so that $y^3_{\alpha}$ is in $E_0$. However, this is impossible, since otherwise $(P_0,\,t_0)$ would be in thermodynamic equilibrium. Thus, $H_0$ cannot contain both real and over-ideal vectors which are solutions of the local balance laws \eqref{29}.
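The convexity argument above is elementary to verify numerically. In the following sketch, all numerical values of $B_\alpha$, $D$ and of the vectors are invented purely for illustration; we pick a real vector $y^1_\alpha$ and an over-ideal vector $y^2_\alpha$, and check that the $\lambda$ given by Eq. \eqref{36} indeed lies in $]0,\,1[$ and makes the entropy production of the combination vanish.

```python
# Toy check of the convexity argument; all numerical values are illustrative.
B = [0.7, -1.2, 0.4]      # coefficients B_alpha(Sigma_0)
D = 0.5                   # coefficient D(Sigma_0)

y1 = [2.0, -1.0, 1.5]     # "real" vector:       B.y1 - D > 0
y2 = [0.5, 0.3, 0.2]      # "over-ideal" vector: B.y2 - D < 0

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
sigma1 = dot(B, y1) - D   # positive entropy production
sigma2 = dot(B, y2) - D   # negative entropy production

# lambda of Eq. (36)
lam = (D - dot(B, y2)) / dot(B, [a - b for a, b in zip(y1, y2)])

# combination y3 = lam*y1 + (1-lam)*y2 and its entropy production
y3 = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
sigma3 = dot(B, y3) - D   # vanishes by construction of lam
```

Since the balance laws \eqref{32}-\eqref{33} are linear, $y^3_\alpha$ automatically stays in $H_0$, so only the sign conditions need checking.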
Furthermore, suppose that both $y^1_{\alpha}$ and $y^2_{\alpha}$ are real. Then, it is easy to verify by direct calculation that $\lambda$ can be taken such that $\sigma^3>0$.
Finally, if $(P_0,\,t_0)$ is a point of equilibrium, then the entropy production related to $y^1_{\alpha}$ and $y^2_{\alpha}$ vanishes, so that, by Eq. \eqref{35}, it follows that also $\sigma^3$ is zero.
The considerations above show that, at a point $P_0$ of $B$ and at a given instant $t_0$, the solutions of Eqs. \eqref{29} cannot be of different types. Moreover, they cannot be over-ideal only, because this would contradict the local form of the second law of thermodynamics. Thus, $H_0$ may contain either only real vectors, in which case $(P_0,\,t_0)$ is a point of non-equilibrium, or only ideal vectors, in which case $(P_0,\,t_0)$ is a point of equilibrium. This proves the theorem.
\end{proof}
\begin{corollary} \label{c1}
$\mathcal{H}$ = $\mathcal{W}$.
\end{corollary}
\begin{proof} This corollary is an immediate consequence of the arbitrariness of the initial instant $t_0$ and of the point $P_0$. In particular, whatever $t_0$ is, we can always regard it as the initial instant of the restricted process of duration $\tau_0+\tau-t_0$, so that $H_0 \equiv H_t(P_0,t_0)$ has dimension $10\omega$. Moreover, only $7\omega$ of the components of the vectors of $H_0$ are determined by the algebraic relations \eqref{29} and \eqref{31}, while the remaining $3\omega$ components are completely arbitrary. Thus, the conclusions established in Theorem \ref{1} apply to $H_0$. This is enough to prove that, for any $t \in [\tau_0,\tau_0+\tau]$, the space of the higher derivatives $H_t$ contains only real or only ideal vectors.
\end{proof}
\begin{remark} \label{r6}
Corollary \ref{c1} also implies $\mathcal{\hat E}$ = $\mathcal{E}$.
\end{remark}
\begin{corollary} \label{c2}
The unilateral differential constraint \eqref{27} is a restriction on the constitutive quantities $U_{\beta}$, $r_{\beta}$, $s$ and $J_k$ and not on the thermodynamic processes $p$.
\end{corollary}
\begin{proof}
In fact, any process $p: t\in [\tau_0,\,\tau_0+\tau] \subseteq\mathbb{R}\rightarrow z_\alpha(x_{j},\,t) \in\mathcal{C}$, where $z_\alpha(x_{j},\,t)$ is a solution of the balance laws \eqref{24}, can only be either irreversible or reversible, but not over-reversible, because otherwise its representative curve $\gamma$ would contain at least one over-ideal point, against Corollary \ref{c1}. On the other hand, such a property of the solutions of the system of balance laws is not guaranteed for arbitrary $A_{\beta\alpha}$, $C_\beta$, $s$ and $J_k$ because, once the state space is given, only particular forms of the functions defined on it lead to a nonnegative entropy production. The role of the unilateral differential constraint in Eq. \eqref{27} is precisely to select such forms.
\end{proof}
\section{Discussion} \label{5}
Exploitation of the second law of thermodynamics is based on the assumption that it restricts the
constitutive equations and not the thermodynamic processes. The constitutive equations must then be assigned in such a way that
all solutions of the field equations satisfy the entropy inequality.
An alternative interpretation of the restrictions imposed by the second law is that one must exclude from the set of solutions
of the balance equations those which do not guarantee a nonnegative entropy production. The
problem of choosing between these two interpretations was solved in 1996 by Muschik and Ehrentraut \cite{MusEhr},
by postulating an amendment to the second law which assumes that, at a fixed instant of time and at any point of the body, the entropy production is zero if, and only if, this point is in thermodynamic equilibrium. Muschik and Ehrentraut proved that, presupposing the amendment,
the second law of thermodynamics necessarily restricts the constitutive equations and not the processes. This
result justifies, from the theoretical point of view, the approach to the exploitation of the second law proposed in
1963 by Coleman and Noll in their celebrated paper \cite{ColNol}.
In the present paper we have revisited their proof, bringing to light some geometric aspects which were hidden in Ref. \cite{MusEhr}. Moreover, we have proposed a generalized formulation of the second law of thermodynamics which incorporates the amendment.
In future research we aim to extend the present results to more complex situations.
In the case of shock wave propagation, among the solutions of the Rankine-Hugoniot equations, the physical
shocks are selected by the celebrated Lax conditions, which force the shock speed $U_s$ to satisfy the
inequality $U_{b}
> U_{s} > U_{a}$, with $U_{b}$ the characteristic speed behind the shock and $U_{a}$ the
characteristic speed ahead of the shock \cite{DAF}. Since for a fluid the Lax conditions imply the growth of the
entropy across the shock, they are often called ``entropy growth conditions'' in the literature.
The common interpretation of this result is that, for non-regular (weak) solutions of the balance equations,
the second law of thermodynamics restricts the processes instead of the constitutive equations. However, in Ref. \cite{TriCim} it is proved that the amendment can be generalized in order to show that the second law of thermodynamics necessarily
restricts the constitutive equations on both sides of the shock. Under this hypothesis, the classical
interpretation of the Lax conditions should be revisited in the light of the new mathematical framework formulated in the present paper.
In Ref. \cite{TriCim1}, the results of Refs. \cite{MusEhr,TriCim} on the interpretation of the second law of thermodynamics have been extended to encompass the more general situation in which the gradients of the basic laws are also considered
as constraints for the entropy inequality \cite{CimSelTri}. This result too should be reanalyzed within the mathematical framework presented here.
In our opinion, the investigations mentioned above are necessary, since thermodynamic processes involving discontinuous
solutions are very frequent in physics.
\section*{Acknowledgements}
P.~R. thanks the University of Messina and the Italian National Group of Mathematical Physics (GNFM-INdAM) for financial support.
V.~A.~C. thanks the University of Basilicata and the Italian National Group of Mathematical Physics (GNFM-INdAM) for financial support.
\section{Physical density vs. density of composite fermions}
\label{Supplementary1}
In this section we write down explicit expressions for the physical densities $\rho_{\eta}$ in terms of the densities of composite fermions.
The decoupling of the left- and right-movers in the quadratic Hamiltonian $H^{(2)}$ (see Eq.(6) of the main text) is achieved via the Bogolubov transformation
\begin{eqnarray}&&
\label{Supp:rho_R}
\rho_{R, q}=U_2^+R_qU_2=\cosh \kappa_q R_q-\sinh\kappa_q L_q\,, \\&&
\rho_{L, q}=U_2^+L_qU_2=-\sinh \kappa_q R_q +\cosh\kappa_q L_q\,.
\end{eqnarray}
To decouple the cubic terms one needs to perform the non-linear rotation
\begin{equation}
R=U^+_3\tilde{\rho}_RU_3\,, \qquad L=U^+_3\tilde{\rho}_LU_3\,,
\end{equation}
with
\begin{eqnarray}
U_3=\exp\left(\sum_{{\bf q}}f_{\bf q} R_1 R_2 L_3 - (L\leftrightarrow R)\right)\,,
\\ f_{\bf q}=\frac{2\pi^2}{mL^2}
\frac{\Gamma^{'}_{\bf q}}{u_1 q_1+u_2 q_2-u_3 q_3}\,.
\end{eqnarray}
To third order in densities we obtain
\begin{eqnarray}
\label{Supp:R_rho_tilde}
R_q=\tilde{\rho}_{Rq}+\frac{q L}{\pi}\sum_{2+3-q=0}f_{(-q, 2, 3)}\tilde{\rho}_{R2}\tilde{\rho}_{L3}-
\frac{q L}{2\pi}\sum_{1+2-q=0}f_{(1, 2, -q)}\tilde{\rho}_{L1}\tilde{\rho}_{L2}\,,\\
\label{Supp:L_rho_tilde}
L_q=\tilde{\rho}_{Lq}+\frac{q L}{\pi}\sum_{2+3-q=0}f_{(-q, 2, 3)}\tilde{\rho}_{L2}\tilde{\rho}_{R3}-
\frac{q L}{2\pi}\sum_{1+2-q=0}f_{(1, 2, -q)}\tilde{\rho}_{R1}\tilde{\rho}_{R2}\,.
\end{eqnarray}
The connection of $\rho_\eta$ and $\tilde{\rho}_\eta$ can be now read off from (\ref{Supp:rho_R}), (\ref{Supp:R_rho_tilde}) and (\ref{Supp:L_rho_tilde}).
The consideration above simplifies considerably when the relevant spatial scale of the density variation is large compared to the interaction radius.
In this case the transformations $U_2$ and $U_3$ act locally in space leading to
\begin{eqnarray}
\rho_R(x)=\frac{\sqrt{K_0}}{2}(R(x)+L(x))+\frac{1}{2\sqrt{K_0}}(R(x)-L(x))\,,\\
\rho_L(x)=\frac{\sqrt{K_0}}{2}(R(x)+L(x))-\frac{1}{2\sqrt{K_0}}(R(x)-L(x))\,,
\end{eqnarray}
and
\begin{eqnarray}
R(x)=
\tilde{\rho}_R(x)+\frac{\pi}{m}\frac{1-K_0^2}{8u_0\sqrt{K_0}}
\left[-\frac{1}{\pi}\partial_x(\tilde{\rho}_R(x)\tilde{\varphi}_L(x))+\tilde{\rho}_L^2(x)\right]\,,\\
L(x)=
\tilde{\rho}_L(x)+\frac{\pi}{m}\frac{1-K_0^2}{8u_0\sqrt{K_0}}
\left[\frac{1}{\pi}\partial_x(\tilde{\rho}_L(x)\tilde{\varphi}_R(x))+\tilde{\rho}_R^2(x)\right]\,.
\end{eqnarray}
Here $\tilde{\varphi}_{\eta}(x)$ is defined by the usual relation
\begin{equation}
\tilde{\rho}_\eta(x)=\frac{\eta}{2\pi} \partial_x\tilde{\varphi}_\eta(x)\,.
\end{equation}
To leading order in $\rho/mv_F$, the physical density is $\rho(x)=\rho_L+\rho_R \simeq \sqrt{K_0}(\tilde{\rho}_R+\tilde{\rho}_L)$.
\section{Kinetic equation and chiral hydrodynamics}
\label{Supplementary2}
We now discuss the relation between the kinetic approach developed above and
the hydrodynamic description of 1D fermions with a generic finite-range interaction
developed in Ref. \cite{PGM2012}.
In terms of the bosonic densities the Hamiltonian of the system can be written as
[see main text, Eq. (13)]
\begin{equation}
\label{Sup:Hboson}
H=\sum_{\eta} \int dx \left[\pi u_0\tilde{\rho}_\eta^2+\frac{2\pi^2}{3m^*} \tilde{\rho}_\eta^3\right]+
\frac12\int dx dx' \tilde{\rho}_\eta(x)V(x-x')\tilde{\rho}_\eta(x')\,,
\end{equation}
where we approximate the interaction vertex $\Gamma_{\bf q} \simeq \Gamma_{{\bf q}=0}$
and use real space representation.
The operators of the chiral density components satisfy the Heisenberg equation
\begin{equation}
\label{Supp:HydrodynamicEquation}
\partial_t \hat{\tilde{\rho}}_\eta+\eta\left(u_0+\frac{2\pi}{m^*}\hat{\tilde{\rho}}_\eta\right)\partial_x \hat{\tilde{\rho}}_\eta
+\frac{\eta}{2\pi}\int dx' V(x-x')\partial_{x'}\hat{\tilde{\rho}}_\eta(x')
=0\,.
\end{equation}
In the classical limit, the operators in Eq. (\ref{Supp:HydrodynamicEquation})
are replaced by the real density field.
By ignoring the difference between density operators and their expectation values one
neglects the quantum loop corrections to the classical equations of motion.
Such corrections play an important role in the evolution of the density field \cite{PGM2012},
in particular in the region where the hydrodynamic equations develop instabilities
(and the quasi-particle phase space acquires an inverted population).
A sufficiently strong electron interaction prevents such instabilities from emerging in the hydrodynamic theory,
which allows one to neglect the loop corrections in a controlled way.
For the case of finite range interaction
\begin{equation}
g(q)=\frac{1}{l_0m}e^{-q^2l_{\rm int}^2}
\end{equation}
the hydrodynamics is justified, provided that
\begin{equation}
\sqrt{\frac{l_{\rm int}^2 \Delta\rho}{l_0}}\gg1 \quad {\rm and}\quad l_0\Delta\rho\ll 1.
\end{equation}
Here $\Delta\rho$ is the amplitude of the density perturbation in the initial state.
The classical hydrodynamic theory can be straightforwardly derived from
the kinetic description of the main text.
For the right-moving particles (from now on we focus on this case and omit the chirality index $\eta$)
the kinetic equation reads
\begin{eqnarray}
\label{Supp:Vlasov}
\partial_t \tilde{f}(p,x,t) +\left(u_0+\frac{p}{m^*}\right)\partial _x \tilde{f}(p,x,t)
+\int \frac{dp}{2\pi} e^{-ipy} \tilde{f}(x,y,t)
\left[\tilde{\phi}\left(x+\frac{y}{2}\right)-\tilde{\phi}\left(x-\frac{y}{2}\right)\right]=0\,,\\
\phi(x,t)=\int dx' V(x-x')\tilde{\rho}(x',t)\,.
\end{eqnarray}
Equation (\ref{Supp:Vlasov}) should be supplemented with the initial condition $\tilde{f}_0(x, p)$, which needs to be calculated separately. As in the main text, we assume that the perturbation of the electronic density is created by applying a smooth external potential $U(x)$ to the uniform Fermi sea.
In this case the curvature of electronic spectrum has little effect on the initial Wigner function, and
the standard bosonization technique
enables us to find $\tilde{f}_0(x,p)$. In the vicinity of the right Fermi point (cf. discussion of the Wigner function for non-interacting fermions in Ref.\cite{PGM2012}) the Wigner function can be written as
\begin{equation}
\label{Supp:f0}
\widetilde{f}_0(x, p)=\int \frac{dy}{2\pi i (y-i0)}\exp\left[-ip y+2\pi i\int_{x_-}^{x_+}\tilde{\rho}_0(x')dx'\right]\,,
\qquad x_\pm \equiv x\pm\frac{y}{2},
\end{equation}
where $\tilde{\rho}_0(x)$ is the expectation value of fermionic density in the external potential $U(x)$.
We note that the details of the interaction enter
the static Wigner function only through $\tilde{\rho}_0$.
Several simple facts about equation (\ref{Supp:Vlasov}) help to clarify its connection to hydrodynamics.
In the limit $m^*\to\infty$, Eq. (\ref{Supp:Vlasov}) yields
\begin{equation}
\label{Supp:fLL}
\widetilde{f}(x, p, t)=\int \frac{dy}{2\pi i (y-i0)}\exp\left[-ip y+2\pi i\int_{x_-}^{x_+}\tilde{\rho}(x', t)dx'\right]\,.
\end{equation}
This corresponds to density evolution
\begin{equation}
\partial_t \tilde{\rho}+u_0\partial_x \tilde{\rho}
+\frac{1}{2\pi}\int dx' V(x-x')\partial_{x'}\tilde{\rho}(x')
=0\,
\end{equation}
in accordance with harmonic LL model.
As expected, Eq.(\ref{Supp:Vlasov}) is exact in the limit $m^*\rightarrow\infty$.
Performing the gradient expansion in Eq.(\ref{Supp:Vlasov}) one obtains the standard Boltzmann equation
\begin{equation}
\label{Supp:VlasovReduced}
\partial_t \tilde{f}(p,x,t) +\left(u_0+\frac{p}{m^*}\right)\partial _x \tilde{f}(p,x,t)
-\partial_x\phi(x)\partial_p\tilde{f}(x, p, t)=0\,.
\end{equation}
Approximating the initial condition (\ref{Supp:f0}) by
\begin{equation}
\label{Supp:f0Hyd}
\tilde{f}_0(x, p)=\Theta(2\pi\tilde{\rho}_0(x)-p)\,,
\end{equation}
one finds the formal solution of Eq. (\ref{Supp:VlasovReduced})
\begin{equation}
\tilde{f}(x, p, t)=\Theta(2\pi\rho(x, t)-p)\,,
\end{equation}
where the density $\rho(x, t)$ satisfies the hydrodynamic equation (\ref{Supp:HydrodynamicEquation}).
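As a direct consistency check (a short derivation using only the equations above), substituting the step-function Ansatz into Eq.~(\ref{Supp:VlasovReduced}) gives

```latex
\begin{align}
\partial_t \tilde f = 2\pi\,\delta(2\pi\rho - p)\,\partial_t\rho\,,\qquad
\partial_x \tilde f = 2\pi\,\delta(2\pi\rho - p)\,\partial_x\rho\,,\qquad
\partial_p \tilde f = -\,\delta(2\pi\rho - p)\,,
\end{align}
so that the left-hand side of Eq.~(\ref{Supp:VlasovReduced}) becomes
\begin{align}
2\pi\,\delta(2\pi\rho - p)\left[\partial_t\rho
+\left(u_0+\frac{p}{m^*}\right)\partial_x\rho
+\frac{1}{2\pi}\,\partial_x\phi\right],
\end{align}
which vanishes: on the support of the delta function $p=2\pi\rho$, and the bracket
reduces to the left-hand side of Eq.~(\ref{Supp:HydrodynamicEquation}) for $\eta=+1$.
```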
\section{Numerical solution of the kinetic equation}
\label{Supplementary3}
In this section we briefly discuss the algorithm used for numeric simulation of Eq. (\ref{Supp:Vlasov}).
We use the model of fermions on a ring of circumference $L_x$.
This induces periodic boundary conditions for the Wigner function $f(x, y)$,
with period $L_x$ in $x$ and period $L_y=2L_x$ in $y$.
The fermionic momentum $p$ in $f(x, p)$ is quantized in units of $2\pi/L_y$, while the momentum $q$ conjugate to $x$ is quantized in units of $2\pi/L_x$. To perform numerical simulations we impose the cut-offs $2\pi N_x/L_x$ and $2\pi N_y/L_y$ on the momenta $q$ and $p$,
respectively.
In our calculations, the values of the parameters $L_x=4000$ (in units where $\lambda_F\equiv mV_F=2\pi$), $N_x\sim 2500$ and $N_y\sim500$ were used.
We checked that the final results are stable with respect to the variation of these parameters.
We model the initial density bump by a Gaussian with dispersion $\sigma=200$ that contains
$N\approx 5$ particles.
Periodic boundary conditions enable the use of the fast Fourier transform algorithm for the calculation
of $\partial_t \tilde{f}$, given by (\ref{Supp:Vlasov}).
Combined with the standard fourth-order Runge-Kutta time stepper, this provides a fast and accurate algorithm for the numerical solution of Eq. (\ref{Supp:Vlasov}).
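The core of this scheme, spectral evaluation of the right-hand side combined with an RK4 stepper, can be illustrated on the simpler harmonic-LL density equation of the previous section, for which every Fourier mode evolves exactly with the phase $e^{-iq(u_0+V_q/2\pi)t}$. The minimal sketch below propagates a Gaussian bump and compares with this exact solution; all parameter values are invented for illustration and much smaller than those of the production runs.

```python
import numpy as np

# Illustrative parameters only; much smaller than in the actual simulations.
Nx, Lx = 256, 400.0
x = np.arange(Nx) * Lx / Nx
q = 2 * np.pi * np.fft.fftfreq(Nx, d=Lx / Nx)   # momenta conjugate to x

u0, V0, l_int = 1.0, 0.5, 5.0                   # invented model values
Vq = V0 * np.exp(-q**2 * l_int**2)              # finite-range interaction (q-space)

rho0 = np.exp(-((x - Lx / 2) / 20.0) ** 2)      # initial Gaussian density bump

def rhs(rho):
    """d_t rho = -u0 d_x rho - (1/2pi) int dx' V(x-x') d_x' rho(x'),
    evaluated spectrally: every Fourier mode picks up -i q (u0 + Vq/2pi)."""
    return np.fft.ifft(-1j * q * (u0 + Vq / (2 * np.pi)) * np.fft.fft(rho)).real

def rk4_evolve(rho, dt, steps):
    """Standard fourth-order Runge-Kutta time stepper."""
    for _ in range(steps):
        k1 = rhs(rho)
        k2 = rhs(rho + 0.5 * dt * k1)
        k3 = rhs(rho + 0.5 * dt * k2)
        k4 = rhs(rho + dt * k3)
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

t_final, steps = 50.0, 500
rho_num = rk4_evolve(rho0, t_final / steps, steps)

# Benchmark: exact evolution of each Fourier mode.
rho_exact = np.fft.ifft(np.fft.fft(rho0)
                        * np.exp(-1j * q * (u0 + Vq / (2 * np.pi)) * t_final)).real
err = np.max(np.abs(rho_num - rho_exact))
```

The same pattern, FFTs for spatial derivatives and convolutions plus RK4 in time, carries over to the full Wigner-function dynamics of Eq. (\ref{Supp:Vlasov}).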
\section{Introduction}
One-dimensional quantum systems are generically more strongly correlated than their higher-dimensional counterparts, due to the inevitability of collisions when particles cross each other. The one-dimensional Bose gas with contact interactions, known as the Lieb-Liniger model, is the paradigm of such systems \cite{Lieb}. This model well describes experiments with ultracold atoms in tight waveguides and some of its correlation functions have been experimentally probed in all interaction regimes \cite{Paredes, Kinoshita, Kinoshitabis, Haller, Hallerbis, Bouchoule, Clement, Meinert}. From a theoretical point of view, since the model is integrable, its $k$-body correlations can in principle be obtained explicitly at all orders $k$, since they are linked to the (infinite) set of integrals of motion. In practice, however, the exact analytical calculation at arbitrary interaction strength is tremendously difficult for two reasons: the coefficients of their Taylor expansion at small distance are related to each other in a non-trivial way, and the defining system of equations, in turn derived using Bethe Ansatz,
is technically very challenging. In the thermodynamic limit, finding an explicit expression for the various spatially
non-local, equal-time correlations from Bethe Ansatz techniques actually requires linking them to the moments of the density of quasi-momenta, and thus solving a type II homogeneous Fredholm integral equation with Lorentzian kernel, whose exact analytic solution is still unknown.
In this work, we focus on the link between non-local correlation functions of the Lieb-Liniger model and its integrals of motion, thus elucidating a special structure of the ground state for this integrable model. In particular, we derive a relation, first proposed in \cite{Dunjko}, that links the fourth coefficient of the Taylor expansion of the one-body correlation function at short distances with various moments of the quasi-momentum distribution and their derivatives with respect to the coupling constant. Then, we use a recently-developed method \cite{Ristivojevic, Lang2016}, and generalize recent conjectures \cite{Widom, Lang2016, Prolhac}, to evaluate these quantities with excellent accuracy in a wide range of interaction strengths.
The paper is organized as follows: in Section \ref{Hamilton} we introduce the Hamiltonian of the system and the relevant notations. We also define the notion of connection, which is the key concept in this work, and illustrate it on simple cases. Then, in Sec.~\ref{Connect} we derive a connection between the fourth coefficient of the Taylor expansion of the one-body correlation function at short distances, $c_4$, the local three-body correlation function and the fourth moment of the density of pseudo-momenta, which is one of the main results of this work. In Section \ref{Conjecturesmoments}, we provide new conjectures about the moments of the density of pseudo-momenta, and illustrate them in Sec.~\ref{Illus} where we find, in particular, that $c_4$ changes sign at interaction strength $\gamma\simeq 3.8$. In Section \ref{Outlook}, we summarize our main results and give an outlook to our work.
\section{Hamiltonian and definitions}
\label{Hamilton}
In this work, we consider a one-dimensional system of $N$ indistinguishable point-like bosons of mass $m$ subject to contact interactions, known as the Lieb-Liniger model. We choose periodic boundary conditions, possibly realized by using a ring geometry.
The Hamiltonian of the system reads
\begin{align}
\hat{\cal H} =\frac{\hbar^2}{2m} \left[
\sum_{i=1}^{N} -\frac{\partial^2}{\partial x_{i}^2}
+ 2 c \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \delta(x_{i}-x_{j})\right]
\,\,,
\label{H}
\end {align}
where the first term of the right-hand side stands for the kinetic energy, $\{x_i\}$ label the positions of the atoms, $\delta$ is the Dirac delta function, $c\!=\!2/a$ is related to the coupling constant, with $a\!=\!-a_{\mbox{\scriptsize 1D}}\!>\!0$ for repulsive interactions, and $a_{\mbox{\scriptsize 1D}}$ is the one-dimensional
scattering length related to the many-body wavefunction by $\Psi(\ldots,\,x_{i},\,\ldots ,x_{j},\,\ldots) \propto |x_{i}\!-\!x_{j}|-a_{\mbox{\scriptsize 1D}}+\ldots$. The usual one-dimensional coupling constant of the model is $g_{\mbox{\scriptsize 1D}}\!=\!-2\hbar^2/(m a_{\mbox{\scriptsize 1D}})\!=\!(\hbar^2/m)c$ \cite{Olshaniibis}.
The action of the Hamiltonian on the Bethe ground state $|\chi_N\rangle$ of the many-body system reads
\begin{align}
\hat{\cal H}|\chi_N \rangle = \sum_{i=1}^{N} \lambda_{i}^2 |\chi_N \rangle
\,\,,
\label{H_2}
\end {align}
where the eigenvalue is the sum of the squared Bethe rapidities $\lambda_{i}$, and the ground state in coordinate representation reads \cite{Gaudin, Korepin}
\begin{widetext}
\begin{align}
\chi_{N}(x_{1},\,\ldots,x_{N}) = \mbox{const} \times\!\!
\sum_{\sigma \in S_N} (-1)^{{\cal P}[\sigma]}\prod_{i=1}^{N-1}\prod_{j=i+1}^{N}
\left[\lambda_{\sigma_{i}} - \lambda_{\sigma_{j}} - \frac{2 i}{a} \mbox{sign}(x_{i}-x_{j})\right] \exp\left[i \sum_{k=1}^{N} \lambda_{\sigma_{k}} x_{k}\right].
\label{chi}
\end {align}
\end{widetext}
In Eq.~(\ref{chi}), $\sigma$ are elements of the symmetry group $S_N$, i.e. permutations of $N$ elements, ${\cal P}[\sigma]$ their parity, and $\sigma_i$ denotes the image of $i$ by $\sigma$. These eigenstates are also eigenfunctions of all conservation laws, as required by the integrability of the model.
The correlation functions we are interested in are the $k$-body density matrices normalized to unity by choice of the constant in Eq.~(\ref{chi}), and such that
\begin{align}
&\rho_k(x_1,\dots x_k;x_1',\dots, x_k')\equiv \int\!dx_{k+1}\dots dx_N \nonumber\\
&\chi_N^*(x_1',\dots, x_k',x_{k+1},\dots,x_N)\chi_N(x_1,\dots,x_N)\nonumber\\
&=\rho_k(x_1-x_1',\dots, x_k-x_k';0,\dots, 0)
\end{align}
due to Galilean invariance. In particular, in this work we consider the one-body density matrix, whose series expansion at short distance can be written as
\begin{align}
&
\rho_{1}(x;\,x') =\frac{1}{L}\sum_{l=0}^{+\infty}c_l(n|x\!-\!x'|)^l.
\end{align}
In what follows we will also make use of the notation
\begin{eqnarray}
g_k\equiv \frac{N!}{(N-k)!}\frac{\rho_k(0, \dots, 0 ; 0 ,\dots, 0)}{n^k},
\end{eqnarray}
where $n=N/L$ is the mean linear density, the system being of size $L$. The correlations $g_k$ will be referred to as $k$-body local correlations; they represent the probability of finding $k$ atoms at the same place and time. It is quite intuitive that the combined effect of geometry and interactions enforces $g_{k+1}<g_k$ at finite interaction strength, and that the $g_k$ are decreasing functions of the interaction strength.
The first aim of this work is to illustrate the fact that, due to the integrability of the model, those correlations are related to each other via the moments $e_{2k}$ of the dimensionless density of quasi-momenta $g(z;\alpha)$, defined as the solutions of the set of Bethe equations derived by Lieb and Liniger \cite{Lieb} in the thermodynamic limit $N\to \infty$, $L\to \infty$, at fixed $N/L$ and at zero temperature:
\begin{equation}
\label{Fredholm}
g(z;\alpha)-\frac{1}{2\pi}\int_{-1}^{1}dy\frac{2\alpha g(y;\alpha)}{\alpha^2+(y-z)^2}=\frac{1}{2\pi},
\end{equation}
\begin{equation}
\label{alphagamma}
\gamma \int_{-1}^1dy g(y;\alpha)=\alpha,
\end{equation}
and
\begin{equation}
\label{moments}
e_{2k}(\gamma)=\frac{\int_{-1}^1dy g(y;\alpha(\gamma))y^{2k}}{[\int_{-1}^1 dy g(y;\alpha(\gamma))]^{2k+1}},
\end{equation}
where $\alpha$ is a positive coefficient and $\gamma\!\equiv\!2/(na)$ is the Lieb parameter, representing the natural dimensionless coupling constant of the model.
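Although no closed-form solution of Eqs. \eqref{Fredholm}-\eqref{moments} is known, they can be solved numerically to high accuracy. The sketch below (the grid size and the value of $\alpha$ are arbitrary choices for illustration, not taken from any reference) discretizes the Fredholm equation with a Gauss-Legendre rule and compares the resulting $e_2(\gamma)$ with the well-known strong-coupling approximation $e_2\simeq(\pi^2/3)\left[\gamma/(\gamma+2)\right]^2$.

```python
import numpy as np

def lieb_solve(alpha, n=200):
    """Solve the Lieb integral equation for g(z; alpha) on [-1, 1] by a
    Gauss-Legendre discretization; return (gamma, e2, e4)."""
    z, w = np.polynomial.legendre.leggauss(n)
    # kernel (1/2pi) * 2*alpha / (alpha^2 + (y - z)^2)
    K = (alpha / np.pi) / (alpha**2 + (z[:, None] - z[None, :])**2)
    g = np.linalg.solve(np.eye(n) - K * w[None, :],
                        np.full(n, 1.0 / (2.0 * np.pi)))
    norm = np.dot(w, g)                      # \int_{-1}^{1} g(y) dy
    gamma = alpha / norm                     # from gamma * \int g = alpha
    e2 = np.dot(w, g * z**2) / norm**3       # second moment, Eq. (moments)
    e4 = np.dot(w, g * z**4) / norm**5       # fourth moment
    return gamma, e2, e4

gamma, e2, e4 = lieb_solve(alpha=10.0)       # deep in the strong-coupling regime
e2_strong = (np.pi**2 / 3) * (gamma / (gamma + 2))**2
```

For $\alpha=10$ one finds $\gamma\approx\pi\alpha-2$, and the computed $e_2$ agrees with the strong-coupling approximation to well below the percent level.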
The quantities $e_{2k}$ are integrals of motion of the model: for example, $e_2$ corresponds to the thermodynamic limit of the ground-state energy $E_2=\sum_i \lambda_i^2$, through $E_2=Nn^2e_2$. More generally, we define $E_{2k}= Nn^{2k}e_{2k}$, and $e_{2k+1}=0$ by parity arguments.
Finally, we define connections as functionals $\mathcal{F}$ such that
\begin{eqnarray}
\mathcal{F}\left(c_l(\gamma),g_k(\gamma),\{e_{n}(\gamma),e_n'(\gamma),\dots\},\gamma\right)=0,
\end{eqnarray}
where $'$ denotes differentiation with respect to $\gamma$. We denote each connection by a pair of indices $(l,k)$, where by convention an index is $0$ if the corresponding quantity in the notation above does not appear in the functional. This compact notation helps classify the connections. To illustrate this concept, we derive the first few connections from conservation laws.
The first conserved quantity is
\begin{eqnarray}
\hat{H}_0=\sum_{i=1}^N\frac{\partial^0}{\partial x_i^0}=N,
\end{eqnarray}
the number of atoms.
Trivially,
\begin{eqnarray}
\langle \chi_N|\hat{H}_0|\chi_N\rangle=N\langle \chi_N|\chi_N\rangle=N
\end{eqnarray}
since the Bethe eigenstate is normalized to unity. On the other hand,
\begin{eqnarray}
&&\langle \chi_N|\hat{H}_0|\chi_N\rangle\nonumber\\
&&=\!N\!\int\!dx_1\dots dx_N\chi_N^*(x_1,\dots,x_N)\chi_N(x_1,\dots,x_N)\nonumber\\
&&=N\int dx_1\int dx_1' \delta(x_1-x_1')\int dx_2\dots dx_N\nonumber\\
&&\chi_N^*(x_1',\dots,x_N)\chi_N(x_1,\dots,x_N)\nonumber\\
&&=NL\rho_1(x;0)|_{x=0}=Nc_0=Ng_1,
\end{eqnarray}
hence
\begin{eqnarray}
c_0=g_1=e_0=1,
\end{eqnarray}
yielding the connections (0,0) and (0,1).
The second conserved quantity is
\begin{eqnarray}
\hat{H}_1=\sum_{i=1}^N\frac{\partial}{\partial x_i}
\end{eqnarray}
and proceeding as before, we find
\begin{eqnarray}
c_1=e_1=0,
\end{eqnarray}
the connection of type (1,0), in agreement with \cite{Olshanii}.
Then, from the Hamiltonian we obtain
\begin{eqnarray}
\langle\chi_N|\hat{\cal H}|\chi_N\rangle=Nn^2e_2.
\end{eqnarray}
We also evaluate
\begin{eqnarray}
&&\langle \chi_N|\sum_{i=1}^N-\frac{\partial^2}{\partial x_i^2}|\chi_N\rangle=NL\frac{\partial^2}{\partial x^2}\rho_1(x;0)|_{x=0}\nonumber\\
&&=2Nn^2c_2,
\end{eqnarray}
and
\begin{eqnarray}
\langle \chi_N|\sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \delta(x_{i}-x_{j})|\chi_N\rangle=\frac{N}{2}ng_2
\end{eqnarray}
hence the connection of order (2,2),
\begin{eqnarray}
-2c_2+\gamma g_2=e_2.
\end{eqnarray}
The connection of type (0,2) is obtained by applying the Hellmann-Feynman theorem to the Hamiltonian and reads \cite{Ganshlya}
\begin{eqnarray}
\label{Eq02}
g_2=e_2'.
\end{eqnarray}
Combining the connections of orders (2,2) and (0,2) yields the connection of order (2,0), i.e. \cite{Olshanii}
\begin{eqnarray}
c_2=\frac{1}{2}(\gamma e_2'-e_2).
\end{eqnarray}
\section{Derivation of the connection of order $(4,3)$}
\label{Connect}
In this Section, we derive a new connection, namely
\begin{align}
24 c_{4} - 2 \gamma^2 g_{3} = e_{4}-\gamma e_{4}'.
\label{Maineq}
\end{align}
It is the connection of type (4,3) according to our nomenclature.
First, we introduce an operator $\hat{H}_{4}$ that yields, when applied to an eigenstate (\ref{chi}), the fourth integral of motion $E_4$,
\begin{align}
\hat{H}_{4} | \chi_N \rangle = E_{4} | \chi_N \rangle
\,\,,
\end {align}
with
\begin{align}
E_{4}= \sum_{i=1}^{N} \lambda_{i}^4.
\,\,
\end {align}
From these definitions, by construction the higher Hamiltonian $\hat{H}_4$ can be written explicitly as \cite{Gutkin, KorepinDavies, Davies}
\begin{align}
&\hat{H}_{4}= \sum_{i=1}^{N} \frac{\partial^4}{\partial x_{i}^4}
+
\frac{48}{a^2} \sum_{i=1}^{N-2} \sum_{j=i+1}^{N-1}\sum_{k=j+1}^{N} \delta(x_{i}-x_{j}) \delta(x_{j}-x_{k})\nonumber
\\
&
\quad
- \frac{4}{a} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N}
\left\{
\left(
\frac{\partial^2}{\partial x_{i}^2} + \frac{\partial^2}{\partial x_{j}^2} + \frac{\partial^2}{\partial x_{i} \partial x_{j}}
\right)\delta(x_{i}-x_{j})
\right.\nonumber
\\
&
\qquad\quad
\left.
+
\delta(x_{i}-x_{j})
\left(
\frac{\partial^2}{\partial x_{i}^2} + \frac{\partial^2}{\partial x_{j}^2} + \frac{\partial^2}{\partial x_{i} \partial x_{j}}
\right)
\right\}\nonumber
\\
&
\qquad\qquad
+\frac{8}{a^2} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \delta^2(x_{i}-x_{j})
\nonumber\\
&= \hat{h}_{4}^{(1)} + 48\kappa^2 \hat{h}_{4}^{(2)} - 4\kappa \hat{h}_{4}^{(3)} + 8\kappa^2 \hat{h}_{4}^{(4)}
\,\,,
\label{H_4}
\end{align}
where
$\kappa=\frac{1}{a}$.
For convenience, we introduce the auxiliary operator
\begin{align}
\hat{Q}_{4} =\frac{1}{\kappa} \hat{H}_{4}
\,\,,
\end {align}
and apply the Hellmann-Feynman theorem to it:
\begin{align}
\langle \chi_N | \left(\frac{d}{d\kappa} \hat{Q}_{4}(\kappa) \right) | \chi_N \rangle
=
\frac{d}{d\kappa} \left(\frac{1}{\kappa} E_{4}(\kappa)\right)
\,\,.
\label{HF}
\end {align}
The left-hand side is related to operator $\hat{H}_{4}$ introduced above by
\begin{widetext}
\begin{align}
\langle \chi_N | \left(\frac{d}{d\kappa} \hat{Q}_{4}(\kappa) \right) | \chi_N \rangle
=
-\frac{1}{\kappa^2} \langle \chi_N | \hat{h}_{4}^{(1)} | \chi_N \rangle
+ 48 \langle \chi_N | \hat{h}_{4}^{(2)} | \chi_N \rangle
+ 8 \langle \chi_N | \hat{h}_{4}^{(4)} | \chi_N \rangle
\,\,.
\label{HF_2}
\end {align}
\end{widetext}
We evaluate the terms of the right-hand side separately. First, we find
\begin{align}
\begin{split}
&
\langle \chi_N | \hat{h}_{4}^{(1)} | \chi_N \rangle
=N L \frac{\partial^4}{\partial x^4} \rho_{1}(x;\,0)\Big|_{x=0}
\\
&
= 12 L n^4 c_{3} \delta(0) + 24 L n^5 c_{4}
\,\,.
\end{split}
\label{h_1_result}
\end {align}
Actually, the infinity in the form of $\delta(0)$ is canceled by the analogous divergence produced by
\begin{align}
\langle \chi_N | \hat{h}_{4}^{(4)} | \chi_N \rangle=
\frac{1}{2} L n^2 g_2 \delta(0)
\,\,
\label{h_4_result}
\end{align}
as can be shown using the connection of order (3,2),
\begin{align}
\label{conn23}
c_{3}=\frac{1}{3} \frac{1}{(n a)^2}g_2
\,\,.
\end {align}
The latter is deduced from (0,2) above, Eq.(\ref{Eq02}), and (3,0) that reads \cite{Olshanii}
\begin{eqnarray}
c_3=\frac{\gamma^2}{12}e_2'.
\end{eqnarray}
We remark that conversely, starting from the sole requirement that $\hat{H}_4$ is divergence-free, Eq.~(\ref{conn23}) naturally follows from our derivation, which can thus be seen as a new and independent proof of this connection. Then, $(3,0)$ is derived by combination with $(0,2)$.
Another way to derive $(3,2)$ is as follows: due to the contact condition, one can write
\begin{eqnarray}
&&\rho_k(x_1,\dots,x_k;x_1',\dots,x_k')\nonumber\\
&&\!\!\!\!\!\!=\!\!\!\sum_{m=0}^{+\infty}\!\rho_k^{(m)}\!\!\left(\!\frac{x_1\!+\!x_1'}{2},x_2,\dots,x_k;x_2',\dots,x_k'\!\right)\!|x_1\!-\!x_1'|^m\!\!\!.
\end{eqnarray}
Since
\begin{eqnarray}
\rho_1=\int dx_2\rho_2(x_1,x_2;x_1',x_2)
\end{eqnarray}
and
\begin{eqnarray}
&&\int dx_2|x_1\!-\!x_2||x_1'\!-\!x_2|=_{x_1\to x_1'}\frac{1}{3}|x_1\!-\!x_1'|^3+\dots
\end{eqnarray}
where the dots represent a regular function, one finds the general result
\begin{eqnarray}
\rho_k^{(3)}(0,\dots;0,\dots)=\frac{N-k}{3a^2}\rho_{k+1}(0,\dots;0,\dots)
\end{eqnarray}
or, written another way,
\begin{eqnarray}
c_3^{(k)}=\frac{\gamma^2}{12}g_{k+1},
\end{eqnarray}
a higher-order connection from which $(3,2)$ follows as a corollary.
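The short-distance expansion used in this argument is easy to check explicitly. Restricting the $x_2$ integration to a finite interval $[-A,A]$ (the cut-off $A$ is introduced here only for the check), an elementary computation with $x_1=-x_1'=s/2$ gives $\int_{-A}^{A}dx_2\,|x_1-x_2||x_1'-x_2| = 2A^3/3 - A s^2/2 + |s|^3/3$, whose non-analytic part reproduces the coefficient $1/3$ above. A quick numerical verification:

```python
import numpy as np

A = 1.0                                # integration cut-off, for this check only
n = 200_000
h = 2 * A / n
x2 = -A + (np.arange(n) + 0.5) * h     # midpoint grid on [-A, A]

def F(s):
    """int_{-A}^{A} dx2 |x1 - x2||x1' - x2| with x1 = s/2, x1' = -s/2."""
    return np.sum(np.abs(x2 - s / 2) * np.abs(x2 + s / 2)) * h

def F_closed(s):
    """Closed form: analytic part plus the non-analytic |s|^3 / 3 term."""
    return 2 * A**3 / 3 - A * s**2 / 2 + abs(s)**3 / 3

max_dev = max(abs(F(s) - F_closed(s)) for s in (0.05, 0.1, 0.2))
```

The midpoint quadrature reproduces the closed form, including the cubic cusp, to well below $10^{-7}$.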
Finally, we evaluate
\begin{align}
\begin{split}
\langle \chi_N | \hat{h}_{4}^{(2)} | \chi_N \rangle
&
=
\frac{1}{6} L n^3 g_{3}
\,\,.
\end{split}
\label{h_2_result}
\end {align}
Inserting (\ref{h_1_result}), (\ref{h_4_result}) and (\ref{h_2_result})
into (\ref{HF}) and (\ref{HF_2}) ends the derivation. We now comment on the physical meaning of Eq.~(\ref{Maineq}). The fact that $g_3$ appears stems from $\hat{h}_4^{(2)}$ in Eq.~(\ref{H_4}), which involves three-body processes provided $N\geq 3$. The coefficient $c_4$, which stems from $\hat{h}_4^{(1)}$, is related to the higher kinetic energy, in that the momentum operator applied to the density matrix generates the coefficients of its Taylor expansion when evaluated at zero distance.
One can even go further, combining the connection (0,3) \cite{Cheianov, Smith},
\begin{align}
\label{g3}
g_3(\gamma)\!=\!\frac{3}{2}\frac{e_4'}{\gamma}\!-\!5\frac{e_4}{\gamma^2}\!+\!\left(1\!+\!\frac{\gamma}{2}\right)\!e_2'\!-\!2\frac{e_2}{\gamma}\!-\!3\frac{e_2e_2'}{\gamma}\!+\!9\frac{e_2^2}{\gamma^2},
\end{align}
with the connection (4,3), Eq.~(\ref{Maineq}), to obtain the connection (4,0),
\begin{align}
\label{c4}
c_4(\gamma)\!=\!\frac{\gamma e_4'}{12}\!-\!\frac{3}{8}e_4\!+\!\frac{2\gamma^2\!+\!\gamma^3}{24}e_2'\!-\!\frac{\gamma e_2}{6}\!-\!\frac{\gamma e_2e_2'}{4}\!+\!\frac{3}{4}e_2^2,
\end{align}
given in \cite{Dunjko} without proof. The right-hand sides of these last two equalities involve moments of the pseudo-momentum distribution only, and it is a general fact that all correlations of the model are defined through connections of type $(l,0)$ and $(0,k)$, as a consequence of integrability.
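As an elementary consistency check of Eq.~(\ref{g3}) (our own sketch, not part of the original derivation), one can substitute the leading weak-coupling expansions $e_2\simeq\gamma-\tfrac{4}{3}\gamma^{3/2}/\pi$ and $e_4\simeq 2\gamma^2-\tfrac{88}{15}\gamma^{5/2}/\pi$, whose exact coefficients are recalled in the next Section; this reproduces the known weak-coupling behavior $g_3=1-6\sqrt{\gamma}/\pi+O(\gamma)$, and in particular the ideal-gas value $g_3(0)=1$:

```python
import sympy as sp

g = sp.symbols('gamma', positive=True)
pi = sp.pi

# leading weak-coupling expansions with the exact known coefficients
e2 = g - sp.Rational(4, 3) * g**sp.Rational(3, 2) / pi
e4 = 2 * g**2 - sp.Rational(88, 15) * g**sp.Rational(5, 2) / pi
e2p, e4p = sp.diff(e2, g), sp.diff(e4, g)

# right-hand side of the connection (0,3)
g3 = (sp.Rational(3, 2) * e4p / g - 5 * e4 / g**2 + (1 + g / 2) * e2p
      - 2 * e2 / g - 3 * e2 * e2p / g + 9 * e2**2 / g**2)

# expand in powers of s = sqrt(gamma): g3 = 1 - 6 s / pi + O(s^2)
s = sp.symbols('s', positive=True)
series = sp.expand(g3.subs(g, s**2))
assert series.coeff(s, 0) == 1
assert sp.simplify(series.coeff(s, 1) + 6 / pi) == 0
```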
\section{Conjectures about the exact moments of the density of pseudo-momenta}
\label{Conjecturesmoments}
As illustrated above, local correlation functions are linked to the even-order moments $e_{2k}$ of the density of pseudo-momenta (we recall that the odd ones vanish trivially by parity). Their exact and explicit analytical expression is not known to date; only the first few terms of their asymptotic expansions in the weakly- and strongly-interacting regimes have been computed exactly. To go further and cover the full range of repulsive interaction strengths $\gamma \in [0,+\infty[$, we generalize to arbitrary moments recent conjectures about the ground-state energy.
\subsection{Conjecture in the weakly-interacting regime}
The first conjecture concerns the Taylor expansion of $e_{2k}$ in the weakly-interacting regime, and reads
\begin{align}
\label{conjweak}
e_{2k}(\gamma)=\sum_{i=0}^{+\infty}\frac{a_{2k,i}}{\pi^i}\gamma^{k+i/2},
\end{align}
where $\{a_{2k,i}\}$ are real coefficients. This is a generalization to arbitrary $k$ of the conjecture proposed in \cite{Widom} for $e_2$.
According to Eq.~(\ref{moments}), a trivial necessary condition is $a_{0,i}=\delta_{i,0}$, where $\delta_{.,.}$ is the Kronecker symbol. The exact nontrivial coefficients unambiguously found so far are $a_{2,0}\!=\!1$, $a_{2,1}\!=\!-4/3$ \cite{Lieb}, $a_{2,2}\!=\!\zeta(2)\!-\!1$ \cite{Widom}, where $\zeta$ is the Riemann zeta function, $a_{4,0}\!=\!2$, and $a_{4,1}\!=\!-88/15$ \cite{Cheianov}. Based on accurate numerics by S. Prolhac, G. Lang conjectured $a_{2,3}\!=\!3\zeta(3)/8-1/2$. Prolhac also proposed $a_{2,4}=a_{2,3}/3$ and $a_{2,5}=-45\zeta(5)/1024+15\zeta(3)/256-1/32$ \cite{Prolhac}. Higher-order coefficients $a_{2,i}$ are numerically known with high accuracy up to order $i\!=\!10$ \cite{Prolhac}.
Using the conjecture (\ref{conjweak}) together with the known expansion at low $\alpha$, heuristically introduced in \cite{Hutson} and proven in \cite{Wadati} (we refer to Appendix \ref{refereesugg} for a derivation of the first part),
\begin{align}
&g(z;\alpha)\simeq_{\alpha \ll 1}\frac{\sqrt{1-z^2}}{2\pi\alpha}\nonumber\\
&+\frac{1}{4\pi^2\sqrt{1-z^2}}\left[z\ln\left(\frac{1-z}{1+z}\right)+\ln\left(\frac{16\pi}{\alpha}\right)+1\right],
\end{align}
combined with Eqs.~(\ref{Fredholm}), (\ref{alphagamma}) and (\ref{moments}), we find the general form of the first two coefficients at fixed order $k$,
\begin{align}
a_{2k,0}=\frac{1}{k+1}\binom{2k}{k}= C_k,
\end{align}
where $\{C_k\}$ denote the Catalan numbers, and
\begin{align}
a_{2k,1}\!=\!\binom{2k}{k}\!
-
\frac{2^{4k}}{\binom{2k+1}{k}}\frac{1}{k+1}\sum_{i=0}^k\left[\frac{1}{2^{2i}}\binom{2i}{i}\right]^2
.
\end{align}
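These two closed forms can be checked against the exact coefficients quoted above; a minimal exact-arithmetic sketch (ours; function names are ours):

```python
from fractions import Fraction
from math import comb

def a_first(k):
    # a_{2k,0} = binom(2k, k)/(k+1), the k-th Catalan number
    return Fraction(comb(2 * k, k), k + 1)

def a_second(k):
    # a_{2k,1} as given in the closed form above
    s = sum(Fraction(comb(2 * i, i), 2**(2 * i))**2 for i in range(k + 1))
    return (comb(2 * k, k)
            - Fraction(2**(4 * k), comb(2 * k + 1, k)) * Fraction(1, k + 1) * s)

# exact values quoted in the text: a_{2,0}=1, a_{2,1}=-4/3, a_{4,0}=2, a_{4,1}=-88/15
assert a_first(1) == 1 and a_second(1) == Fraction(-4, 3)
assert a_first(2) == 2 and a_second(2) == Fraction(-88, 15)
```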
\subsection{Conjecture in the strongly-interacting regime}
In the strongly-interacting regime, we generalize a recent conjecture on $e_2$ \cite{Lang2016}, by stating that the asymptotic expansion in $1/\gamma$ is partially resummed in a natural way as
\begin{align}
\label{conjstrong}
e_{2k}(\gamma)=\left(\frac{\gamma}{2+\gamma}\right)^{\!2k}\sum_{i=0}^{+\infty}\frac{\pi^{2(k+i)}}{(2+\gamma)^{3i}}\mathcal{L}_{2k,i}(\gamma)
\end{align}
where $\mathcal{L}_{2k,i}$ are polynomials with rational coefficients, such that $\mathcal{L}_{2k,0}=1/(2k+1)$ and $\mathcal{L}_{2k,i\geq 1}$ is of degree $i-1$. Using a basis of orthogonal polynomials to systematically find a $1/\gamma$ expansion of the moments as explained in \cite{Ristivojevic} and \cite{Lang2016}, together with the conjecture Eq.~(\ref{conjstrong}), we find by identification:
\begin{align}
\label{e4conj}
&\mathcal{L}_{4,1}(X)=\frac{32}{35},\nonumber\\
&\mathcal{L}_{4,2}(X)=-\frac{1984}{1575}X+\frac{3424}{1575},\nonumber\\
&\mathcal{L}_{4,3}(X)=\frac{8192}{3465}X^2-\frac{37376}{5775}X+\frac{169728}{45045},\nonumber\\
&\mathcal{L}_{4,4}(X)=-\frac{47104}{9009}X^3+\frac{59337728}{3378375}X^2\nonumber\\
&-\frac{61582336}{3378375}X+\frac{137573632}{23648625},\nonumber\\
&\mathcal{L}_{4,5}(X)=\frac{192512}{15015}X^4-\frac{765952}{15925}X^3+\frac{80326709248}{1206079875}X^2\nonumber\\
&-\frac{594448384}{14189175}X+\frac{295196160000}{38192529375},\nonumber\\
&\mathcal{L}_{4,6}(X)=-\frac{335872}{9945}X^5+\frac{132872192}{984555}X^4\nonumber\\
&-\frac{2316542492672}{10416144375}X^3+\frac{3689660465152}{18091198125}X^2\nonumber\\
&-\frac{184095784026112}{2406129350625}X+\frac{12238234443776}{1260353469375}.
\end{align}
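As a quick numerical sanity check of Eqs.~(\ref{conjstrong}) and (\ref{e4conj}) (our own sketch), the truncated series converges rapidly at intermediate coupling and approaches the Tonks-Girardeau value $e_4\to\pi^4/5$ at large $\gamma$:

```python
from math import pi

# polynomials L_{4,i}(X) from the conjecture, coefficients in decreasing degree
L4 = [
    [1 / 5],
    [32 / 35],
    [-1984 / 1575, 3424 / 1575],
    [8192 / 3465, -37376 / 5775, 169728 / 45045],
    [-47104 / 9009, 59337728 / 3378375, -61582336 / 3378375,
     137573632 / 23648625],
    [192512 / 15015, -765952 / 15925, 80326709248 / 1206079875,
     -594448384 / 14189175, 295196160000 / 38192529375],
    [-335872 / 9945, 132872192 / 984555, -2316542492672 / 10416144375,
     3689660465152 / 18091198125, -184095784026112 / 2406129350625,
     12238234443776 / 1260353469375],
]

def horner(coeffs, x):
    v = 0.0
    for c in coeffs:
        v = v * x + c
    return v

def e4_strong(g, imax=6):
    pref = (g / (2 + g))**4
    return pref * sum(pi**(2 * (2 + i)) / (2 + g)**(3 * i) * horner(L4[i], g)
                      for i in range(imax + 1))

# Tonks-Girardeau limit e_4 -> pi^4 / 5
assert abs(e4_strong(1e5) - pi**4 / 5) / (pi**4 / 5) < 1e-3
# the i = 6 term is already a tiny correction at gamma = 10
assert abs(e4_strong(10, 6) - e4_strong(10, 5)) < 1e-4
```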
\section{Illustrations}
\label{Illus}
In this Section, we illustrate the various results obtained above. Within our approach, in order to evaluate $g_3$ and $c_4$ with good accuracy from Eqs.~(\ref{g3}) and (\ref{c4}), it is crucial to correctly evaluate not only $e_2$ and $e_4$, but also their derivatives. For $e_2$, we refer to previous studies \cite{Prolhac, Lang2016}, where it was found that the combination of the weakly-interacting expansion of \cite{Prolhac} and the conjectural partial resummation in the strongly-interacting regime of \cite{Lang2016} yields excellent agreement with accurate numerics. In this work, we checked that $e_2'$ obtained from the conjectures in their respective ranges is also numerically exact for all interaction strengths, as shown in Appendix \ref{e2prime}.
More importantly, we benchmark the conjectures on the fourth moment. In Fig.~\ref{Fig1} we plot $e_4$ from the conjectures Eq.~(\ref{conjweak}) and Eq.~(\ref{conjstrong}) over the experimentally relevant range $\gamma\in [0,10]$ and find excellent agreement with the numerical integration of the Bethe Ansatz equations (6)-(8). The coefficients used in the weakly-interacting regime are found by fitting data at $\gamma \ll 1$, and the whole curve thereby obtained closely follows the numerical solution of the Bethe equations for interaction strengths $\gamma>1$, validating the conjecture. However, our numerical data is not accurate enough to infer the exact analytical value of the unknown coefficients $a_{4,i}$ for $i\geq 2$.
\begin{figure}
\includegraphics[width=8cm, keepaspectratio, angle=0]{g3c4.png}
\caption{(Color online) Dimensionless fourth moment of the distribution of quasi-momenta $e_4$ as a function of the dimensionless interaction strength $\gamma$. Analytical result from the conjecture (\ref{e4conj}) (solid, blue) is in excellent agreement with independent accurate numerics from the authors (red and black dots) for all interaction strengths. The conjecture in the weakly-interacting regime Eq.~(\ref{conjweak}) with appropriate coefficients (black, dashed) reproduces numerical calculations with excellent accuracy up to intermediate interactions.}
\label{Fig1}
\end{figure}
While no discrepancy between the conjecture from the strongly-interacting regime Eq.~(\ref{conjstrong}) and the numerical solution of the Bethe Ansatz equations is visible in this graph, inspection of $e_4'$, shown in Fig.~\ref{Fig2}, reveals that the large-$\gamma$ expansion displays spurious oscillations at intermediate interactions; hence, an appropriate combination of both conjectures at weak and strong coupling is needed to recover agreement with numerical calculations over the whole range of interaction strengths.
\begin{figure}
\includegraphics[width=8cm, keepaspectratio, angle=0]{g3c42.png}
\caption{(Color online) Derivative of the dimensionless fourth moment of the distribution of quasi-momenta $e_4$ with respect to the dimensionless interaction strength $\gamma$ as a function of the latter. Using the analytical results either from the conjectures (\ref{e4conj}) (solid, blue) at strong interactions or Eq.~(\ref{conjweak}) (dashed, black) at weak interactions, one finds an excellent agreement with accurate numerics (black dots) for all interaction strengths.}
\label{Fig2}
\end{figure}
We have also checked that our numerical data for $e_2$ and $e_4$, when used in Eq.~(\ref{g3}), yield $g_3(\gamma)$ in close agreement with the accurate approximate expressions obtained in \cite{Cheianov} by fitting the numerical solution of Eqs.~(\ref{Fredholm}), (\ref{alphagamma}), (\ref{moments}) and (\ref{g3}), as illustrated in Appendix \ref{g3gamma}. Having performed all these verifications, we plot in Fig.~\ref{Fig3} the coefficient $c_4$ as a function of $\gamma$ from numerical calculations and the conjectures on $e_2$ and $e_4$. We find that $c_4$ changes sign at $\gamma=\gamma_c\simeq 3.8$. We evaluated this value more accurately as $\gamma_c=3.8160616255908\dots$ by comparing two independent numerical solutions of the Bethe Ansatz equations, which agree on all digits up to this order. This change of sign had already been predicted from the numerical analysis of \cite{Caux}, which suggested $1<\gamma_c<8$. In Fig.~\ref{Fig4} we plot the known coefficients $c_2$, $c_3$ and $c_4$ as functions of $\gamma$. They are also known by direct calculation in the Tonks-Girardeau regime of infinite interaction strength, where their values are $c_2\!=\!-\pi^2/6$, $c_3\!=\!\pi^2/9$ and $c_4\!=\!\pi^4/120$ respectively, in agreement with our results in the limit $\gamma \to +\infty$; higher-order terms are known as well in this regime, such as $c_5=-11\pi^4/1350 \dots$ \cite{Tracy, Gangardt, Forrester}. Note that very high interaction strengths are needed to approach the Tonks-Girardeau values to within a few percent, so that this regime is quite difficult to reach experimentally for the observable $\rho_1$.
\begin{figure}
\includegraphics[width=8cm, keepaspectratio, angle=0]{g3c43.png}
\caption{(Color online) Dimensionless coefficient $c_4$ as a function of the dimensionless interaction strength $\gamma$, as predicted from the conjectures (solid, blue) and (dashed, black), compared to accurate numerics (red dots). A sign inversion occurs around $\gamma=3.8$.}
\label{Fig3}
\end{figure}
\begin{figure}
\includegraphics[width=8cm, keepaspectratio, angle=0]{g3c44.png}
\caption{(Color online) Dimensionless coefficients $c_2$, $c_4$ and $c_3$ (resp. black, blue, red and from bottom to top) as predicted from conjectures, as functions of the dimensionless interaction strength $\gamma$.}
\label{Fig4}
\end{figure}
\section{Conclusions and outlook}
\label{Outlook}
In conclusion, we have derived an exact relation linking the fourth coefficient of the Taylor expansion of the one-body correlation function at short distances to the local three-body correlation function of the Lieb-Liniger model. This connection can be recast in a form where $c_4$ is expressed in terms of moments of the density of pseudo-momenta. We have investigated the fourth moment $e_4$ in detail and provided new conjectural expressions that are extremely accurate over the whole range of interaction strengths. Both analytically and numerically, we find that $c_4$ changes sign around $\gamma=3.8$.
In outlook, it would be interesting to investigate the link between $c_5$ and the coefficient of the first subleading, high-momentum $1/p^6$ term of the momentum distribution $n(p)$ of the gas, beyond the well-known Tan contact (i.e.\ the coefficient of the leading $1/p^4$ term). Knowing more terms of the Taylor expansion of $g_1$ and $n(p)$ also allows one to probe more finely the validity of the Renormalization Group-Luttinger liquid approach by comparing its predictions at large distances or short momenta \cite{Dunjko}. In the perspective of taking a harmonic trapping into account, it also allows one to discuss the validity of the Local Density Approximation \cite{Olshanii} by comparison with exact numerics \cite{Decamp}. To this aim, the equation of state from the $1/\gamma$ expansion is accurate in the strongly-interacting regime, while the conjecture thereby deduced is also accurate at intermediate $\gamma$ \cite{Minguzzi}. The attractive regime of the super-Tonks-Girardeau gas may also provide new insights \cite{Trombettonibis, Piroli}. Finally, the equivalence between Eq.~(\ref{g3}) and another relation derived in \cite{Kormos, Poszgay}, which does not involve the moments $e_{2k}$ but requires solving other Fredholm integral equations instead, has been checked numerically so far but still awaits a rigorous proof. It should provide an interesting alternative way to tackle connections, especially in out-of-equilibrium situations. Another approach from field theory, based on an appropriate non-relativistic limit of the sinh-Gordon model, has already shown remarkable efficiency \cite{Mussardo, Trombettoni} as compared with previous Bethe Ansatz results \cite{Kheruntsyan, Shlyapnikov, Ganshlya}. The full characterization of the local correlations $g_k$ seems, however, especially challenging and insightful. Even the connection (0,3) has not yet been derived based on Bethe Ansatz techniques alone.
\acknowledgments
We acknowledge financial support from the ANR SuperRing (ANR-15-CE30-0012-02), and National Science Foundation grants PHY-1402249 and PHY-1607221. We thank the referee for suggesting reference \cite{Fateev}.
\section{Introduction}
Throughout this paper, let $\mathbb{Z}$ denote the set of all integers, $\mathbb{C}$ the set of all complex numbers, $i:=\sqrt{-1}$ the imaginary unit, $\tau\in\mathbb{C}$ a complex number with $\mathrm{Im}(\tau)>0$, and $q:=e^{2\pi i\tau}$.
We define the $q$-shifted factorials and Jacobi theta functions as follows:
\begin{align*}
(x)_{\infty }
&=
(x;q)_{\infty }
:=
\prod_{j=0}^{\infty }(1-xq^{j}), \quad
(x)_{n }=(x;q)_{n}:=\frac{(x;q)_{\infty }}{(q^{n}x;q)_{\infty }} \quad (n \in \mathbb{Z}), \\
\vartheta_{11}(u,\tau)
&=
\vartheta_{11}(u)
:=
\sum_{n\in\mathbb{Z}}e^{2\pi i(n+\frac{1}{2})(u-\frac{1}{2})+\pi i(n+\frac{1}{2})^2\tau} \\
&=iq^{\frac{1}{8}}e^{-\pi iu}(q,e^{2\pi iu}, qe^{-2\pi iu};q)_{\infty }, \\
\theta_q(x)
&:=(q,-x,-q/x)_\infty
=\sum_{n \in \mathbb{Z}} x^nq^\frac{n(n-1)}{2},
\end{align*}
and for appropriate complex numbers $a_1,\ldots, a_r,b_1,\ldots,b_s,x$, we define the $q$-hypergeometric series as follows:
\begin{align*}
{}_r\phi_{s}\left( \begin{matrix} a_1,\ldots ,a_r \\ b_1,\ldots ,b_s \end{matrix};q,x\right)
&:=
\sum_{n=0}^{\infty }\frac{(a_1,\ldots, a_r)_n}{(b_1,\ldots ,b_s,q)_n}\left((-1)^{n }q^\frac{n(n-1)}{2}\right)^{s-r+1}x^{n }, \\
{}_r\psi_{s}\left( \begin{matrix} a_1,\ldots, a_r\\b_1,\ldots, b_s\end{matrix};q,x\right)
&:=
\sum_{n \in \mathbb{Z}}\frac{(a_1,\ldots, a_r)_n}{(b_1,\ldots, b_s)_n}\left((-1)^{n }q^\frac{n(n-1)}{2}\right)^{s-r}x^{n },
\end{align*}
where
\begin{align*}
(a_1,\ldots,a_r)_{n }
=
(a_1,\ldots,a_r;q)_{n }
:=(a_1;q)_{n}\cdots (a_r;q)_{n} \quad (n \in \mathbb{Z}\cup \{\infty \}).
\end{align*}
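The equality between the product and series expressions of $\theta_q$ above (the Jacobi triple product) is easy to confirm numerically; a minimal sketch (ours; function names are ours), with truncated sums and products:

```python
def qpoch(a, q, n):
    # truncated q-shifted factorial (a; q)_n
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def theta_product(x, q, N=150):
    # theta_q(x) = (q, -x, -q/x; q)_infinity
    return qpoch(q, q, N) * qpoch(-x, q, N) * qpoch(-q / x, q, N)

def theta_sum(x, q, N=40):
    # theta_q(x) = sum_{n in Z} x^n q^{n(n-1)/2}
    return sum(x**n * q**(n * (n - 1) // 2) for n in range(-N, N + 1))

q, x = 0.4, 0.7
assert abs(theta_product(x, q) - theta_sum(x, q)) < 1e-10
```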
Mock theta functions first appeared in Ramanujan's last letter to Hardy in 1920.
In this letter, Ramanujan told Hardy that he had discovered a new class of functions which he called mock theta functions.
Mock theta functions are functions whose asymptotic behavior at roots of unity is similar to that of theta functions, but which are not theta functions; Ramanujan gave 17 examples of mock theta functions.
Some typical examples are as follows:
\begin{align}
\label{eq:Ramanujan's examples}
f_0(q)=\sum_{n=0}^{\infty } \frac{q^{n^2}}{(-q;q)_n}, \quad
\phi(q)=\sum_{n=0}^{\infty } \frac{q^{n^2}}{(-q^2;q^2)_n}, \quad
\psi(q)=\sum_{n=1}^{\infty } \frac{q^{n^2}}{(q;q^2)_n}.
\end{align}
Later, Andrews and Hickerson gave a detailed definition of the mock theta function \cite{AH}. For more background on mock theta functions, see, for example, \cite{AB}.
Ramanujan's mock theta functions are essentially represented as a linear combination of specializations of the universal mock theta function:
$$
g_3(x;q):=\sum_{n=1}^{\infty }\frac{q^{n(n-1)}}{(x)_{n }(x^{-1}q)_{n }}
$$
and certain infinite $q$-products; such identities have come to be known as the mock theta conjectures.
For example \cite[Appendix A]{BFOR},
\begin{align*}
f_0(q)
&=
-2q^{2}g_{3}(q^{2};q^{10})+\frac{(q^{5};q^{5})_{\infty }(q^{5};q^{10})_{\infty }}{(q;q^{5})_{\infty }(q^{4};q^{5})_{\infty }}.
\end{align*}
Such identities were proved by Hickerson \cite{H}.
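This identity can also be confirmed numerically to high precision; a minimal sketch (ours; function names are ours), with truncated series and products at $q=0.3$:

```python
def qpoch(a, q, n):
    # truncated q-shifted factorial (a; q)_n; large n approximates (a; q)_infinity
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def f0(q, N=40):
    # fifth-order mock theta function f_0(q)
    return sum(q**(n * n) / qpoch(-q, q, n) for n in range(N))

def g3_univ(x, q, N=40):
    # universal mock theta function g_3(x; q)
    return sum(q**(n * (n - 1)) / (qpoch(x, q, n) * qpoch(q / x, q, n))
               for n in range(1, N))

q = 0.3
lhs = f0(q)
rhs = (-2 * q**2 * g3_univ(q**2, q**10)
       + qpoch(q**5, q**5, 200) * qpoch(q**5, q**10, 200)
       / (qpoch(q, q**5, 200) * qpoch(q**4, q**5, 200)))
assert abs(lhs - rhs) < 1e-10
```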
More fundamentally, the universal mock theta function $g_{3}(x;q)$ can be written in the form:
\begin{align}
\label{eq:mu universal mock}
q^{-\frac{1}{24}}x^\frac{3}{2}g_3(x;q)
&=
\frac{q^{\frac{1}{3}}(q^{3};q^{3})_{\infty }^{3}}{(q;q)_{\infty }\vartheta_{11}(3u,3\tau)} \nonumber \\
& \quad +q^{-\frac{1}{6}}x\mu(3u,\tau,3\tau)+q^{-\frac{2}{3}}x^2\mu(3u,2\tau,3\tau),
\end{align}
where $\mu$ is the $\mu$-function defined by Zwegers as follows \cite{Zw1}:
\begin{align}
\label{eq:def of mu}
\mu(u,v;\tau)
:=&\frac{e^{\pi i u}}{\vartheta_{11}(v)}\sum_{n\in\mathbb{Z}}\frac{(-1)^{n}e^{2\pi inv}q^\frac{n(n+1)}{2}}{1-e^{2\pi iu}q^n}.
\end{align}
For convenience, we also use the following multiplicative notation of the $\mu$-function:
\begin{align}
\mu(x,y;q)=\mu(x,y):=-\frac{iq^{-\frac{1}{8}}\sqrt{xy}}{\theta_q(-y)}\sum_{n\in\mathbb{Z}}\frac{(-1)^ny^nq^\frac{n(n+1)}{2}}{1-xq^n}.
\end{align}
If we substitute $x=e^{2\pi iu}$ and $y=e^{2\pi iv}$, then $\mu(x,y;q )=\mu(u,v;\tau )$.
Zwegers showed that the $\mu$-function satisfies transformation laws analogous to those of Jacobi forms once an appropriate non-holomorphic function is added to the $\mu$-function \cite{Zw1}:
\begin{align}
\label{eq:tmu +1}
\tilde{\mu}(u,v;\tau+1)
&=
e^{-\frac{\pi i}{4}}\tilde{\mu}(u,v;\tau), \\
\label{eq:tmu +tau}
\tilde{\mu}\left(\frac{u}{\tau},\frac{v}{\tau};-\frac{1}{\tau}\right)
&=
-i\sqrt{-i\tau}e^{\pi i\frac{(u-v)^2}{\tau}}\tilde{\mu}(u,v;\tau),
\end{align}
where
\begin{align*}
\tilde{\mu}(u,v;\tau)
&:=
\mu(u,v;\tau)+\frac{i}{2}R(u-v;\tau), \nonumber \\
E(x)
&:=
2\int_0^x e^{-\pi z^2}dz, \nonumber \\
R(u;\tau)
&:=
\sum_{\nu\in\mathbb{Z}+\frac{1}{2}}\left\{{\rm sgn}(\nu)-E((\nu+a)\sqrt{2t})\right\}(-1)^{\nu-\frac{1}{2}}e^{-\pi i\nu^2\tau-2\pi i\nu u}, \nonumber
\end{align*}
and $t={\rm Im}(\tau), a=\frac{{\rm Im}(u)}{{\rm Im}(\tau)}$.
This result was a pioneering work in the study of mock modular forms (see \cite{BFOR}).
Thus, the $\mu$-function is very important for the study of mock theta functions.
Further, Zwegers gave the following formulas;
\begin{align}
\label{eq:mu periodicity}
\mu(u+1,v)
&=
\mu(u,v+1)=-\mu(u,v), \\
\label{eq:mu pseudo periodicity}
\mu(u+\tau,v)
&=
-e^{2\pi i(u-v)}q^\frac{1}{2}\mu(u,v)-ie^{\pi i(u-v)}q^\frac{3}{8}, \\
\label{eq:mu translation}
\mu(u+z,v+z)
&=
\mu(u,v)+\frac{iq^{\frac{1}{8}}(q)_{\infty }^{3}\vartheta_{11}(z)\vartheta_{11}(u+v+z)}{\vartheta_{11}(u)\vartheta_{11}(v)\vartheta_{11}(u+z)\vartheta_{11}(v+z)}, \\
\label{eq:mu symmetry}
\mu(u,v)
&=
\mu(u+\tau,v+\tau) \\
\label{eq:mu symmetry2}
&=
\mu(v,u) \\
\label{eq:mu symmetry3}
&=
\mu(-u,-v).
\end{align}
On the other hand, the $\mu$-function is also a very interesting object from the viewpoint of $q$-hypergeometric functions.
For example, let us recall the well-known Kronecker formula \cite{W}:
\begin{align}
\label{eq:Kronecker formula}
k(x,y)
:=
\frac{1}{1-x}{_{1}\psi _1}\left(\begin{matrix} x \\ qx \end{matrix};q,y\right)
=
\frac{(q,q,xy,q/xy)_{\infty}}{(x,q/x,y,q/y)_{\infty}}
=
\frac{(q)_{\infty }^{3}\theta _{q}(-xy)}{\theta _{q}(-x)\theta _{q}(-y)}.
\end{align}
The second equality is the case $a=x$, $b=xq$, $z=y$ of Ramanujan's summation formula:
\begin{align}
\label{eq:Kronecker symmetry}
{_{1}\psi _1}\left(\begin{matrix} a \\ b \end{matrix};q,z\right)
=
\sum_{n \in \mathbb{Z}}
\frac{(a)_{n}}{(b)_{n}}z^{n}
=
\frac{(az,q/az,q,b/a)_{\infty}}{(z,b/az,b,q/a)_{\infty}}.
\end{align}
Note that from the right-most expression in (\ref{eq:Kronecker formula}), one recognizes the symmetry $k(x,y)=k(y,x)$ explicitly.
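The identity (\ref{eq:Kronecker formula}) can be verified numerically; a minimal sketch (ours; function names are ours), with parameters in the convergence region $|q|<|y|<1$:

```python
def qpoch(a, q, n):
    # truncated q-shifted factorial (a; q)_n
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def k_sum(x, y, q, N=100):
    # bilateral series (1/(1-x)) 1psi1(x; qx; q, y)
    return sum(y**n / (1.0 - x * q**n) for n in range(-N, N + 1))

q, x, y = 0.2, 0.35, 0.6
lhs = k_sum(x, y, q)
rhs = (qpoch(q, q, 150)**2 * qpoch(x * y, q, 150) * qpoch(q / (x * y), q, 150)
       / (qpoch(x, q, 150) * qpoch(q / x, q, 150)
          * qpoch(y, q, 150) * qpoch(q / y, q, 150)))
assert abs(lhs - rhs) < 1e-9
```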
Obviously, the $\mu$-function is an analogue of the Kronecker summation:
$$
k(x,y)
=
\sum_{n\in\mathbb{Z}}\frac{y^{n}}{1-xq^{n}}.
$$
Hence the symmetric property $\mu(x,y)=\mu(y,x)$ holds in the same way as $k(x,y)=k(y,x)$; that is, the $\mu$-function has the following expressions;
\begin{align}
\label{eq:sym mu func 1}
\mu(x,y)
&=
-\frac{iq^{-\frac{1}{8}}\sqrt{xy}}{(q,x,q/x)_{\infty}}\frac{1}{1-y}
{}_1\psi_2\left(\begin{matrix} y\\0,qy\end{matrix};q,qx\right) \\
\label{eq:sym mu func 2}
&=
-\frac{iq^{-\frac{1}{8}}\sqrt{xy}}{(qx,qy)_{\infty}}
{}_2\psi_2\left(\begin{matrix}x,y\\0,0\end{matrix};q,q\right) \\
\label{eq:sym mu func 3}
&=
-\frac{iq^{-\frac{1}{8}}\sqrt{xy}}{(q/x,q/y)_{\infty}}
{}_0\psi_2\left(\begin{matrix}-\\qx,qy\end{matrix};q,qxy\right).
\end{align}
These expressions follow from some Bailey transformations for ${_{2}\psi _2}$ (see \cite{GR} Chapter 5, Exercise 5.20);
\begin{align}
{_{2}\psi _2}\left(\begin{matrix}a,b \\ c,d \end{matrix};q,z\right)
\label{eq:Bailey trans0}
&=
\frac{(az,c/a,d/b,qc/abz)_{\infty}}{(z,c,q/b,cd/abz)_{\infty}}
{_{2}\psi _2}\left(\begin{matrix}a,abz/c \\ az,d \end{matrix};q,\frac{c}{a}\right) \\
\label{eq:Bailey trans1}
&=
\frac{(bz,d/b,c/a,qd/abz)_{\infty}}{(z,d,q/a,cd/abz)_{\infty}}
{_{2}\psi _2}\left(\begin{matrix}b,abz/d \\ bz,c \end{matrix};q,\frac{d}{b}\right) \\
\label{eq:Bailey trans2}
&=
\frac{(az,d/a,c/b,qd/abz)_{\infty}}{(z,d,q/b,cd/abz)_{\infty}}
{_{2}\psi _2}\left(\begin{matrix}a,abz/d \\ az,c \end{matrix};q,\frac{d}{a}\right) \\
\label{eq:Bailey trans3}
&=
\frac{(bz,c/b,d/a,qc/abz)_{\infty}}{(z,c,q/a,cd/abz)_{\infty}}
{_{2}\psi _2}\left(\begin{matrix}b,abz/c \\ bz,d \end{matrix};q,\frac{c}{b}\right),
\end{align}
which is a bilateral version of Heine's transformation formula;
\begin{align*}
{_{2}\phi _1}\left(\begin{matrix}a,b \\ c \end{matrix};q,z\right)
&=
\frac{(az,c/a)_{\infty}}{(z,c)_{\infty}}
{_{2}\phi _1}\left(\begin{matrix}a,abz/c \\ az \end{matrix};q,\frac{c}{a}\right).
\end{align*}
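Heine's transformation can likewise be checked numerically; a minimal sketch (ours; function names are ours), with parameters chosen so that both series converge ($|z|<1$ and $|c/a|<1$):

```python
def qpoch(a, q, n):
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def phi21(a, b, c, q, z, N=80):
    # Heine's series 2phi1(a, b; c; q, z), convergent for |z| < 1
    return sum(qpoch(a, q, n) * qpoch(b, q, n)
               / (qpoch(c, q, n) * qpoch(q, q, n)) * z**n
               for n in range(N))

a, b, c, q, z = 0.7, 0.5, 0.3, 0.4, 0.2
lhs = phi21(a, b, c, q, z)
rhs = (qpoch(a * z, q, 150) * qpoch(c / a, q, 150)
       / (qpoch(z, q, 150) * qpoch(c, q, 150))
       * phi21(a, a * b * z / c, a * z, q, c / a))
assert abs(lhs - rhs) < 1e-10
```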
In fact, by taking the limit $z \rightarrow z/b$, $b\rightarrow \infty$, $c=0$ in (\ref{eq:Bailey trans0})-(\ref{eq:Bailey trans3}), we derive
\begin{align}
\label{eq:Bailey trans special1}
{}_1\psi_2\left(\begin{matrix}a\\0,d\end{matrix};q,z\right)
&=
\frac{(z,dq/az)_\infty}{(d,q/a)_\infty}{}_1\psi_2\left(\begin{matrix}az/d\\0,z\end{matrix};q;d\right) \\
\label{eq:Bailey trans special2}
&=
\frac{(d/a,dq/az)_\infty}{(d)_\infty}{}_2\psi_2\left(\begin{matrix}a,az/d\\0,0\end{matrix};q,\frac{d}{a}\right) \\
\label{eq:Bailey trans special3}
&=
\frac{(z,d/a)_{\infty}}{(q/a)_{\infty}}{}_0\psi_2\left(\begin{matrix}- \\z,d\end{matrix};q,az\right),
\end{align}
and we obtain (\ref{eq:sym mu func 1}), (\ref{eq:sym mu func 2}) and (\ref{eq:sym mu func 3}).
In particular, the expression (\ref{eq:sym mu func 3}) was pointed out by Choi \cite{C} (see also \cite{B}).
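The symmetry $\mu(x,y)=\mu(y,x)$ itself can be confirmed numerically from the bilateral series defining $\mu$; a minimal sketch (ours; function names are ours), restricted to $0<x,y,q<1$ so that the square root is unambiguous:

```python
import cmath

def qpoch(a, q, n):
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def theta(x, q, N=150):
    # theta_q(x) = (q, -x, -q/x; q)_infinity
    return qpoch(q, q, N) * qpoch(-x, q, N) * qpoch(-q / x, q, N)

def mu(x, y, q, N=60):
    # multiplicative form of Zwegers' mu-function
    s = sum((-1)**n * y**n * q**(n * (n + 1) // 2) / (1.0 - x * q**n)
            for n in range(-N, N + 1))
    return -1j * q**(-0.125) * cmath.sqrt(x * y) / theta(-y, q) * s

q = 0.25
assert abs(mu(0.3, 0.55, q) - mu(0.55, 0.3, q)) < 1e-10
```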
Thus the $\mu$-function is an interesting object not only from the viewpoint of mock theta functions but also from the viewpoint of $q$-special functions.
However, it seems that many basic problems about the $\mu$-function remain unanswered.
First, Zwegers' original definition looks artificial.
For example, it does not seem to explain why the $\mu$-function is a two-variable function, whereas the universal mock theta function $g_{3}(x,q)$ is a one-variable function (i.e.\ what is the origin of the extra variable of the $\mu$-function?).
Also, there seems to be no interpretation of the numerator $e^{\pi i u}$ and the denominator $\vartheta_{11}(v)$ in the definition (\ref{eq:def of mu}).
Further, the proof and interpretation of the translation formula (\ref{eq:mu translation}) look mysterious.
On the other hand, by erasing the non-homogeneous term of pseudo-periodicity equation (\ref{eq:mu pseudo periodicity}), we obtain the following second-order $q$-difference equation for the $\mu$-function:
\begin{align}
\label{eq:mu q-diff}
\left[T_{x}^2-q^{\frac{1}{2}}\left(1-\frac{x}{y}q\right)T_{x}-\frac{x}{y}q\right]\mu(x,y)=0, \quad T_{x}f(x):=f(qx)
\end{align}
This $q$-difference equation (\ref{eq:mu q-diff}) essentially coincides with the specialization $a=q$ of the following equation;
\begin{align}
\label{eq:q-Hermite}
[T_x^2-(1-xq)\sqrt{a}T_x-xq]f(x)=0,
\end{align}
which is a gauge transformation of the $q$-Hermite-Weber equation:
\begin{align*}
&[axT_{x}^{2}+(1-x)T_{x}-1]u(x)=0.
\end{align*}
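One can also confirm numerically that $\mu$ satisfies the homogeneous $q$-difference equation (\ref{eq:mu q-diff}); a minimal sketch (ours; function names are ours), using the bilateral series for $\mu$ with $0<x,y,q<1$:

```python
import cmath

def qpoch(a, q, n):
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def theta(x, q, N=150):
    return qpoch(q, q, N) * qpoch(-x, q, N) * qpoch(-q / x, q, N)

def mu(x, y, q, N=60):
    s = sum((-1)**n * y**n * q**(n * (n + 1) // 2) / (1.0 - x * q**n)
            for n in range(-N, N + 1))
    return -1j * q**(-0.125) * cmath.sqrt(x * y) / theta(-y, q) * s

# residual of [T_x^2 - q^{1/2}(1 - (x/y) q) T_x - (x/y) q] mu(x, y) = 0,
# where (T_x f)(x) = f(qx)
q, x, y = 0.3, 0.2, 0.7
res = (mu(q * q * x, y, q)
       - q**0.5 * (1 - (x / y) * q) * mu(q * x, y, q)
       - (x / y) * q * mu(x, y, q))
assert abs(res) < 1e-10
```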
The equation (\ref{eq:q-Hermite}), which we also call the ``$q$-Hermite-Weber equation'', is a typical example of a second-order $q$-difference equation of the Laplace type:
\begin{align}
\label{eq:q-Laplace diff}
[(a_{0}+b_{0}x)T_{x}^{2}+(a_{1}+b_{1}x)T_{x}+(a_{2}+b_{2}x)]u(x)=0.
\end{align}
The $q$-difference equations of the Laplace type have long been studied.
For example, the $q$-difference equation satisfied by Heine's hypergeometric function ${}_2\phi_1$:
\begin{align}
\label{eq:q-Heine diff}
[(c-abqx)T_{x}^{2}-(c+q-(a+b)qx)T_x+q(1-x)]f(x)=0,
\end{align}
which is the most popular case and sits at the top of the following hierarchy of second-order $q$-difference equations, was discovered in the 19th century.
The $q$-Hermite-Weber equation (\ref{eq:q-Hermite}) is one of the equations in the hierarchy (\ref{eq;diagram}), obtained by taking suitable limits of the $q$-difference equation (\ref{eq:q-Heine diff}) (see, for example, \cite{O1}).
\begin{align}
\label{eq;diagram}
\xymatrix@R=8pt{
& & J_\nu^{(3)} \ar[r] & q\text{-}{\rm Airy} \\
{}_2\phi_1\hyper{a,b}{c}{q}{x} \ar[r] & {}_1\phi_1\hyper{a}{c}{q}{x} \ar[ru] \ar [r] \ar[rd] & J_\nu^{(1)} \ar[r] & {\rm Ramanujan}\\
& & {}_1\phi_1\hyper{a}{0}{q}{x} \ar[ru] &
}
\end{align}
However, a systematic study of the global analysis of the degenerate Laplace-type equations had to wait until the work of J.-P. Ramis, J. Sauloy and C. Zhang \cite{RSZ} in the 21st century.
They introduced some $q$-Borel and $q$-Laplace transformations:
\begin{align}
&\mathcal{B}^{+}(f)(\xi):=\sum_{n\geq 0}a_{n}q^{\frac{n(n-1)}{2}}\xi^{n}, \\
\label{eq:q-Laplace}
&\mathcal{L}^{+}(f)(x,\lambda):=\sum_{n\in\mathbb{Z}}\frac{f(\lambda q^n)}{\theta_q(\lambda q^{n}/x)},
\end{align}
for the formal series
$$
f(x)=\sum_{n\geq 0}a_{n}x^{n} \in \mathbb{C}\llbracket x \rrbracket,
$$
which play a fundamental role in the study of the Laplace type $q$-difference equations.
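As a sanity check of these definitions (our own sketch, not taken from \cite{RSZ}; function names are ours): on an already convergent series, the composition $\mathcal{L}^{+}\circ\mathcal{B}^{+}$ acts as the identity, independently of the auxiliary parameter $\lambda$. A numerical illustration for $f(x)=\sum_{n\geq 0}x^{n}=1/(1-x)$:

```python
def theta(t, q, N=200):
    # theta_q(t) = sum_{n in Z} t^n q^{n(n-1)/2}, summed by stable recurrences
    s, term = 1.0, 1.0
    for n in range(N):            # a_{n+1} / a_n = t q^n
        term *= t * q**n
        s += term
    term = 1.0
    for k in range(N):            # a_{-k-1} / a_{-k} = q^{k+1} / t
        term *= q**(k + 1) / t
        s += term
    return s

def borel_laplace_geom(x, lam, q, M=22, N=120):
    # B+ maps f(x) = sum_n x^n to g(xi) = sum_n q^{n(n-1)/2} xi^n (entire),
    # and L+ maps g back: sum_m g(lam q^m) / theta_q(lam q^m / x)
    def g(xi):
        s, term = 1.0, 1.0
        for n in range(N):
            term *= xi * q**n
            s += term
        return s
    return sum(g(lam * q**m) / theta(lam * q**m / x, q)
               for m in range(-M, M + 1))

q, x = 0.3, 0.3
assert abs(borel_laplace_geom(x, 0.6, q) - 1 / (1 - x)) < 1e-8
assert abs(borel_laplace_geom(x, 0.45, q) - 1 / (1 - x)) < 1e-8
```

Changing $\lambda$ (while avoiding the zeros of $\theta_q$, which lie on the negative real axis here) leaves the result unchanged.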
In particular, C. Zhang \cite{Zh} obtained some connection formulas for the divergent basic hypergeometric series ${}_2\phi_0(a,b;q,x)$ by applying these transformations.
By considering the degeneration of Zhang's result, C. Zhang and Y. Ohyama \cite{O2} gave some connection formulas for the $q$-Hermite series ${}_1\phi_1(a;0;q,x)$.
In view of the discussion above, we introduce the following generalization of the $\mu$-function.
\begin{dfn}
\label{def:mua}
Let $\alpha $, $a$ be complex parameters such that $u-\alpha\tau,v\in\mathbb{C}\backslash\Lambda_\tau$, and such that $xa^{-1},y \in \mathbb{C} \backslash \{q^{n}\}_{n \in \mathbb{Z}}$.
We define $\mu(u,v;\alpha)$ or $\mu(x,y;a)$ as the following series:
\begin{align}
\mu(u,v;\alpha,\tau)
=&
\mu(u,v;\alpha) \nonumber \\
:=&
\frac{e^{\pi i\alpha(u-v)}}{\vartheta_{11}(v)}
\sum_{n\in\mathbb{Z}}(-1)^ne^{2\pi i(n+\frac{1}{2})v}q^\frac{n(n+1)}{2}
\frac{(e^{2\pi iu}q^{n+1})_{\infty }}{(e^{2\pi iu}q^{n-\alpha +1})_{\infty }}, \\
\mu(x,y;a,q)
=&
\mu(x,y;a) \nonumber \\
:=&
-iq^{-\frac{1}{8}}\frac{e^{\pi i\alpha(u-v)}}{\theta_q(-y)}\frac{(x)_\infty}{(x/a)_\infty}
{}_1\psi_2\left(\begin{matrix}x/a\\0,x\end{matrix};q,y\right).
\end{align}
\end{dfn}
If we substitute $x=e^{2\pi iu}$, $y=e^{2\pi iv}$ and $a=e^{2\pi i\alpha \tau }=q^{\alpha }$, then $\mu(x,y;a,q)=\mu(u,v;\alpha ,\tau )$.
The function $\mu(u,v;\alpha)$ essentially coincides with the composition of the $q$-Borel and $q$-Laplace transformations applied to the formal solution
$$
\widetilde{f}_{0}(x)
=
{}_2\phi_0\left(\begin{matrix}a,0 \\ - \end{matrix};q, {\frac{x}{a}}\right)
$$
for the $q$-Hermite-Weber equation (\ref{eq:q-Hermite}) around $x=0$ (see Section 2).
\begin{thm}
\label{thm:mu and q-Hermite-Weber}
Let $f_{0}(x,\lambda )$ be the image of the $q$-Borel and $q$-Laplace transformations of the fundamental solution $\widetilde{f}_{0}$ of the $q$-Hermite-Weber equation at $x=0$:
$$
f_{0}(x,\lambda)
:=
x^{\frac{\alpha }{2}}\mathcal{L}^{+}\circ\mathcal{B}^{+}\left(\widetilde{f}_{0}\right)(x,\lambda).
$$
Then we have
\begin{align}
\label{eq:mu and q-Hermite-Weber}
f_{0}(e^{2\pi i(v-u)},-e^{2\pi iv})
=iq^\frac{1}{8}\mu(v,u;\alpha).
\end{align}
\end{thm}
As a corollary, we obtain an interpretation of $\mu(u,v)=\mu(u,v;1)$: the original $\mu$-function is regarded as the image under the $q$-Borel and $q$-Laplace transformations of the fundamental solution $\widetilde{f}_{0}$ of the $q$-Hermite-Weber equation
(very recently, S. Garoufalidis and C. Wheeler \cite{GW} pointed out this fact independently of us).
This interpretation answers some of the questions raised above.
For example, the $\mu$-function is a two-variable function in $x=e^{2\pi i u}$ and $y=e^{2\pi i v}$ because of the extra parameter $\lambda$, which arises from the $q$-Laplace transformation (\ref{eq:q-Laplace}).
The denominator $\vartheta_{11}(v)$ of the $\mu$-function arises from the definition of the $q$-Laplace transformation.
The numerator $e^{\pi i u}$ in $\mu(u,v)$ corresponds to the characteristic index of the solution of (\ref{eq:q-Hermite}) around the origin.
For this function $\mu(u,v;\alpha)$, we give the following formulas similar to those of the original $\mu$-function.
\begin{thm}
\label{thm:Thm1}
\begin{align}
\label{eq:main results0}
\mu(u+2\tau,v;\alpha)
&=
(1-e^{2\pi i (u-v)}q)q^\frac{\alpha}{2}\mu(u+\tau,v;\alpha) \nonumber \\
& \quad +e^{2\pi i(u-v)}q\mu(u,v;\alpha), \\
\label{eq:main results1}
\mu(u,v;\alpha)
&=
e^{-\pi i \alpha}\mu(u+1,v;\alpha)
=
e^{\pi i\alpha}\mu(u,v+1;\alpha), \\
\label{eq:main results2}
\mu(u+\tau,v;\alpha)
&=
-e^{2\pi i(u-v)}q^\frac{\alpha}{2}\mu(u,v;\alpha)
+e^{\pi i(u-v)}q^\frac{\alpha}{2}\mu(u,v;\alpha-1), \\
\label{eq:main results2-2}
\mu(u-\tau,v;\alpha)
&=
q^\frac{\alpha}{2}\mu(u,v;\alpha)
-2ie^{-\pi i(u-v)}\sin(\pi\alpha\tau)\mu(u,v;\alpha+1), \\
\label{eq:main results3}
\mu(u+z,v+z;\alpha)
&=
\frac{\vartheta_{11}(u+z)\vartheta_{11}(v+z-\alpha\tau)}{\vartheta_{11}(u+z-\alpha\tau)\vartheta_{11}(v+z)}e^{2\pi i\alpha(u-v)}\mu(v,u;\alpha) \nonumber \\
& \quad -
\frac{i(q^\alpha)_\infty(q)_\infty^2q^\frac{1-4\alpha}{8}\vartheta_{11}(z)\vartheta_{11}(u+v+z-\alpha\tau)}{\vartheta_{11}(u)\vartheta_{11}(v-\alpha\tau)\vartheta_{11}(u+z-\alpha\tau)\vartheta_{11}(v+z)} \nonumber \\
&\quad\cdot e^{\pi i(\alpha-1)(u-v)}{}_1\phi_1\hyper{q^{1-\alpha}}{0}{q}{e^{-2\pi i(u-v)}q}, \\
\label{eq:main results4}
\mu(u,v;\alpha)
&=
\mu(u+\tau,v+\tau;\alpha) \\
\label{eq:main results4-2}
&=
\frac{\vartheta_{11}(v-\alpha\tau)\vartheta_{11}(u)}{\vartheta_{11}(u-\alpha\tau)\vartheta_{11}(v)}e^{2\pi i\alpha(u-v)}\mu(v,u;\alpha) \\
&=
\frac{\vartheta_{11}(v-\alpha\tau)\vartheta_{11}(u)}{\vartheta_{11}(u-\alpha\tau)\vartheta_{11}(v)}e^{2\pi i\alpha(u-v)} \nonumber \\
\label{eq:main results4-3}
& \quad \cdot \mu(-u+\alpha\tau,-v+\alpha\tau;\alpha), \\
\label{eq:main results5}
2\cos\pi(u-v)\mu(u,v;\alpha)
&=
(1-q^{-\alpha})\mu(u,v;\alpha+1)+\mu(u,v;\alpha-1).
\end{align}
\end{thm}
We see that these formulas correspond to those of the original $\mu$-function as follows: periodicity: $(\ref{eq:main results1})\leftrightarrow(\ref{eq:mu periodicity})$,
forward shift: $(\ref{eq:main results2})\leftrightarrow(\ref{eq:mu pseudo periodicity})$,
translation: $(\ref{eq:main results3})\leftrightarrow(\ref{eq:mu translation})$,
$\tau$-periodicity: $(\ref{eq:main results4})\leftrightarrow(\ref{eq:mu symmetry})$,
symmetry: $(\ref{eq:main results4-2})\leftrightarrow(\ref{eq:mu symmetry2})$,
pseudo periodicity: $(\ref{eq:main results4-3})\leftrightarrow(\ref{eq:mu symmetry3})$.
The equation (\ref{eq:main results0}) is a rewriting that $\mu(u,v;\alpha)$ satisfies the $q$-Hermite-Weber equation (\ref{eq:q-Hermite}).
Also, the property (\ref{eq:main results5}) is specific to $\mu(u,v;\alpha)$; it essentially coincides with the $q$-Bessel equation:
\[
\left[T_x-(q^\frac{\nu}{2}+q^{-\frac{\nu}{2}})T_x^\frac{1}{2}+\left(1+\frac{x^2}{4}\right)\right]f(x)=0, \quad T_{x}^{\frac{1}{2}}f(x):=f(q^{\frac{1}{2}}x).
\]
Further, we prove the translation formula (\ref{eq:main results3}) as a connection formula for the $q$-Hermite-Weber equation, which means that the mysterious translation (\ref{eq:main results3}) can be regarded as a variation of a connection formula (see Theorem \ref{thm:tuchimi connection} and (\ref{eq:tuchimi connection})).
We also immediately obtain the $q$-hypergeometric expressions for $\mu(u,v;\alpha)$ corresponding to (\ref{eq:sym mu func 1}), (\ref{eq:sym mu func 2}) and (\ref{eq:sym mu func 3}) from (\ref{eq:Bailey trans special1}), (\ref{eq:Bailey trans special2}) and (\ref{eq:Bailey trans special3}).
\begin{thm}\label{Theorem 3}
With $x=e^{2\pi iu}$, $y=e^{2\pi iv}$ and $a=q^{\alpha}$, we have
\begin{align}
\mu(u,v;\alpha)
&=
-iq^{-\frac{1}{8}}\frac{e^{\pi i\alpha(u-v)}}{\theta_q(-x/a)}\frac{(aq/y)_{\infty }}{(a/y)_{\infty }}{}_1\psi_2\left(\begin{matrix}yq/a\\0,qy\end{matrix};q,xq\right) \\
\label{2psi2}
&=
-iq^{-\frac{1}{8}}e^{\pi i\alpha(u-v)}\frac{(a,q,aq/x,aq/y)_{\infty }}{\theta_{q}(-y)\theta_{q}(-x/a)}{}_2\psi_2\left(\begin{matrix}x/a,y/a\\0,0\end{matrix};q,a\right) \\
&=
\label{0psi2}
-iq^{-\frac{1}{8}}e^{\pi i\alpha(u-v)}\frac{(a,q,x,y)_{\infty }}{\theta_{q}(-y)\theta_{q}(-x/a)}{}_0\psi_2\left(\begin{matrix}-\\x,y\end{matrix};q,\frac{xy}{a}\right).
\end{align}
\end{thm}
Note that the symmetry (\ref{eq:main results4-2}) is proved by a specialization of the translation (\ref{eq:main results3}), but these $q$-hypergeometric expressions also provide another proof.
The above results hold for a general complex parameter $\alpha$; by restricting $\alpha$ to an integer $k$, we obtain the following simplified formulation.
\begin{cor}
\label{cor:Thm1 k}
Let $k$ be an integer. Then we have
\begin{align}
\label{eq:mu k 1}
\mu(u+1,v;k)
&=
\mu(u,v+1;k)
=
-\mu(u,v;k), \\
\label{eq:mu k 2}
\mu(u+\tau,v;k)
&=
-e^{2\pi i(u-v)}q^\frac{k}{2}\mu(u,v;k)+e^{\pi i(u-v)}q^\frac{k}{2}\mu(u,v;k-1), \\
\label{eq:mu k 2-2}
\mu(u-\tau,v;k)
&=
q^\frac{k}{2}\mu(u,v;k)
-2ie^{-\pi i(u-v)}\sin(\pi k \tau)\mu(u,v;k+1), \\
\mu(u+z,v+z;k+1)
&=\mu(u,v;k+1) \nonumber \\
& \quad +\frac{i q^{\frac{1}{8}}(q)_{\infty }^{3}\vartheta_{11}(z)\vartheta_{11}(u+v+z)}{\vartheta_{11}(u)\vartheta_{11}(v)\vartheta_{11}(u+z)\vartheta_{11}(v+z)} \nonumber \\
\label{eq:mu k 3}
& \quad \cdot \frac{e^{-\pi ik(u-v)}}{(q^{-k})_k}{}_1\phi_1\left(\begin{matrix}q^{-k}\\0\end{matrix};q,e^{2\pi i(u-v)}q\right), \\
\label{eq:mu k 4}
\mu(u,v;k)
&=
\mu(u+\tau,v+\tau;k) \\
\label{eq:mu k 4-2}
&=
\mu(v,u;k) \\
\label{eq:mu k 4-3}
&=
\mu(-u,-v;k), \\
\label{eq:mu k 5}
2\cos\pi(u-v)\mu(u,v;k)
&=
(1-q^{-k})\mu(u,v;k+1)+\mu(u,v;k-1).
\end{align}
In particular, for a positive integer $k$, we have
\begin{align}
\label{eq:mu k def}
\mu(u,v;k)
&=
\frac{e^{\pi ik(u-v)}}{\vartheta_{11}(v)}\sum_{n\in\mathbb{Z}}(-1)^ne^{2\pi i\left(n+\frac{1}{2}\right)v}q^\frac{n(n+1)}{2}\prod_{l=0}^{k-1}\frac{1}{1-e^{2\pi iu}q^{n-l}} \\
\label{eq:mu k 7}
&=
e^{\pi i(k-1)(u-v+\tau)}\sum_{j=0}^{k-1}\frac{(-1)^{k-1-j}}{(q)_j(q)_{k-1-j}}q^\frac{(k-1-j)^2}{2}\mu(u-j\tau,v).
\end{align}
\end{cor}
Also, from (\ref{eq:mu k 5}) we obtain the following result, which is not obvious from the definition of $\mu(u,v;\alpha )$.
\begin{thm}
\label{thm:mu and CqH}
Let $k$ be a non-negative integer. Then we have
\begin{align}
\label{eq:mu and CqH}
\mu(u,v;-k)
=
-iq^{-\frac{1}{8}}H_{k}(\cos{\pi (u-v)}\mid q),
\end{align}
where $H_{k}(\cos{\pi (u-v)}\mid q)$ denotes the continuous $q$-Hermite polynomial of degree $k$:
$$
H_{k}(\cos{\pi (u-v)} \mid q)
:=
\sum_{l=0}^{k}
\frac{(q)_{k}}{(q)_{l}(q)_{k-l}}
e^{\pi i(k-2l)(u-v)}.
$$
\end{thm}
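As a quick numerical sanity check (not part of the paper), the finite sum above can be tested against the three-term recurrence $2xH_{n}=H_{n+1}+(1-q^{n})H_{n-1}$ used later in Section 3; all helper names in this sketch are ours.

```python
import cmath, math

def qpoch(q, n):
    """Finite q-Pochhammer symbol (q; q)_n = prod_{j=1}^{n} (1 - q^j)."""
    p = 1.0
    for j in range(1, n + 1):
        p *= 1.0 - q ** j
    return p

def H(k, theta, q):
    """H_k(cos(theta) | q) via the finite sum of the theorem (theta = pi(u-v))."""
    return sum(qpoch(q, k) / (qpoch(q, l) * qpoch(q, k - l))
               * cmath.exp(1j * (k - 2 * l) * theta)
               for l in range(k + 1)).real

q, theta = 0.3, 0.7
x = math.cos(theta)
assert abs(H(0, theta, q) - 1.0) < 1e-12       # H_0 = 1
assert abs(H(1, theta, q) - 2 * x) < 1e-12     # H_1 = 2x
for n in range(1, 8):                          # 2x H_n = H_{n+1} + (1-q^n) H_{n-1}
    assert abs(2 * x * H(n, theta, q)
               - H(n + 1, theta, q) - (1 - q ** n) * H(n - 1, theta, q)) < 1e-10
```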
By this theorem, $\{\mu(u,v;k+1)\}_{k\geq 0}$ is a family of ``negative-degree'' continuous $q$-Hermite polynomials; in particular, the original $\mu$-function can be regarded as a continuous $q$-Hermite polynomial of ``degree $-1$''.
To give some relations among $\{\mu(u,v;k)\}_{k \in \mathbb{Z}}$, we introduce the following generating function of $\{\mu(u,v;k+1)\}_{k\geq 0}$:
\begin{align}
S(u,v,r)=S(r):=\sum_{k=0}^{\infty }\mu(u,v;k+1)r^{k}
\end{align}
which is a variation (i.e. minus degree version) of the well-known generating function of the continuous $q$-Hermite polynomials (for example, see \cite[p.542]{KLS}):
\begin{align}
\label{eq:gen func of CqH}
\sum_{n\geq0}\frac{H_n(\cos\theta \mid q)}{(q)_n}r^n=\frac{1}{(re^{i\theta},re^{-i\theta})_\infty}.
\end{align}
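The generating function identity (\ref{eq:gen func of CqH}) is easy to check numerically by truncating both sides; the following sketch (with our own helper names, not the paper's) compares the series against the truncated infinite products.

```python
import cmath

def qpoch(a, q, n):
    """(a; q)_n = prod_{j=0}^{n-1} (1 - a q^j); large n approximates (a; q)_inf."""
    p = complex(1.0)
    for j in range(n):
        p *= 1 - a * q ** j
    return p

def H(n, theta, q):
    """Continuous q-Hermite polynomial via its finite sum."""
    return sum((qpoch(q, q, n) / (qpoch(q, q, l) * qpoch(q, q, n - l))
                * cmath.exp(1j * (n - 2 * l) * theta)).real
               for l in range(n + 1))

q, r, theta = 0.3, 0.4, 0.7
lhs = sum(H(n, theta, q) / qpoch(q, q, n).real * r ** n for n in range(80))
rhs = 1 / (qpoch(r * cmath.exp(1j * theta), q, 300)
           * qpoch(r * cmath.exp(-1j * theta), q, 300))
assert abs(lhs - rhs.real) < 1e-9 and abs(rhs.imag) < 1e-10
```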
For the generating function $S(r)$, we obtain the following $q$-difference relations and expressions.
\begin{thm}
\label{thm:Sr}
{\rm{(1)}} The generating function $S(r)$ satisfies the following $q$-difference equations:
\begin{align}
\label{eq:Sr rec1}
S(r)
&=
(1-re^{\pi i(u-v)}q)(1-re^{-\pi i(u-v)}q)S(rq)-irq^\frac{7}{8}, \\
& [(1-re^{\pi i(u-v)}q^{2})(1-re^{-\pi i(u-v)}q^{2})T_r^2 \nonumber \\
\label{eq:Sr rec2}
& \quad -
(1+q(1-re^{\pi i(u-v)}q)(1-re^{-\pi i(u-v)}q))T_r+q]S(r)=0.
\end{align}
{\rm{(2)}} The generating function $S(r)$ has the following closed forms:
\begin{align}
\label{eq:Sr expression1}
S(r)
&=
(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_{\infty }
\mu(u,v) \nonumber \\
& \quad -irq^{\frac{7}{8}}
{_{3}\phi _2}\left(\begin{matrix}q, re^{\pi i(u-v)}q, re^{-\pi i(u-v)}q \\ 0,0 \end{matrix};q,q\right) \\
\label{eq:Sr expression2}
&=
(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_{\infty } \nonumber \\
& \quad \cdot \left\{\mu(u,v)-\frac{irq^{\frac{7}{8}}}{1-q}\Phi^{(1)}\left(\begin{matrix}q;0,0 \\ q^{2} \end{matrix};q;re^{\pi i(u-v)}q, re^{-\pi i(u-v)}q\right)\right\} \\
\label{eq:Sr expression3}
&=
(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_{\infty }
\sum_{m\geq 0}\frac{\mu(u,v;1-m )}{(q)_{m}}q^{m}r^{m},
\end{align}
where $\Phi^{(1)}$ is the $q$-Appell function:
$$
\Phi^{(1)}\left(\begin{matrix}a;b_{1},b_{2} \\ c \end{matrix};q;x, y\right)
:=
\sum_{m,n\geq0}
\frac{(a)_{m+n}(b_1)_m(b_2)_n}{(c)_{m+n}(q)_m(q)_n}x^{m}y^{n}.
$$
\end{thm}
In particular, note that the $q$-difference equation (\ref{eq:Sr rec2}) is a degeneration of the $q$-Heun equation:
$$
[(a_{0}+b_{0}x+c_{0}x^{2})T_{x}^{2}+(a_{1}+b_{1}x+c_{1}x^{2})T_{x}+(a_{2}+b_{2}x+c_{2}x^{2})]u(x)=0,
$$
and $S(r)$ is a solution of it.
Furthermore, by comparing coefficients in the expansion of the relation (\ref{eq:Sr expression3}) at $r=0$, we also obtain some relations for $\mu(u,v;k+1)$ (Corollary \ref{cor:Sr expressions}).
This paper is organized as follows.
First, in Section 2, we present some preliminary results on the basic solutions and connection formulas for some $q$-difference equations that are degenerations of the $q$-difference equation (\ref{eq:q-Heine diff}). In particular, for the $q$-Hermite-Weber equation (\ref{eq:q-Hermite}), we give a proof of its connection formula according to Ohyama's private note \cite{O2}.
Furthermore, we show that the image of the composition of the $q$-Borel and $q$-Laplace transformations of the divergent solution $\widetilde{f}_{0}(x)$ is essentially equivalent to the generalized $\mu$-function $\mu(u,v;\alpha)$.
Then, we prove the connection formula for the case where the parameters $\lambda $ and $\lambda^{\prime}$ are different, and show that it is equivalent to the translation formula (\ref{eq:main results3}) of $\mu(u,v;\alpha)$.
Next, in Section 3, we prove the above main results concerning $\mu(u,v;\alpha)$.
In Section 4, we discuss the modular transformations of $\mu(u,v;k,\tau)$, which behaves like a real-analytic Jacobi form.
Finally, in Section 5, we discuss possible directions for future work.
During the preparation of this paper, we were informed of a recent very interesting paper \cite{GW} by Hikami and Matsusaka.
Some of the examples in \cite{GW} overlap with our results on the $\mu$-function, though the one-parameter generalization $\mu(u,v;\alpha)$ is not considered there (e.g. see equation (\ref{eq:mu q-diff}) and \cite[Section 4.2]{GW}).
\section{Fundamental solutions and connection formulas of some $q$-difference equations}
In this section, based on \cite{RSZ}, \cite{Zh}, \cite{O1}, we present some preliminary results on the basic solutions and connection formulas for some $q$-difference equations in the diagram (\ref{eq;diagram}).
\begin{lem}[\cite{Zh} p.18, Theorem 2.2.1]
\label{lem:lemma1}
We have the fundamental solutions of the $q$-difference equation
\begin{align}
\label{eq:(26)}
[(1-abxq)T_x^2-(1-(a+b)xq)T_x-xq]f(x)=0
\end{align}
around $x=0$ and $x= \infty$:
\begin{align*}
& \mathcal{L}^{+}\circ\mathcal{B}^{+}(F_1)(x,\lambda), \quad F_2(x)=\frac{(abx)_\infty}{\theta_q(-xq)}{}_2\phi_1\left(\begin{matrix}q/a,q/b\\0 \end{matrix};q,abx\right), \\
& G_1(x)
=
\frac{\theta_q(-axq)}{\theta_q(-xq)}{}_2\phi_1\left(\begin{matrix}a,0\\ aq/b\end{matrix};q,\frac{q}{abx}\right), \quad G_2(x)=\frac{\theta_q(-bxq)}{\theta_q(-xq)}{}_2\phi_1\left(\begin{matrix}b,0\\ bq/a\end{matrix};q,\frac{q}{abx}\right),
\end{align*}
where $F_{1}(x)$ is the formal solution around $x=0$:
$$
F_1(x)
:=
{}_2\phi_0\left(\begin{matrix}a,b \\ -\end{matrix};q,x\right).
$$
In this case, the connection formulas for $\mathcal{L}^{+}\circ\mathcal{B}^{+}(F_1)(x,\lambda)$, $F_2(x)$ and $G_1(x)$, $G_2(x)$ are as follows:
\begin{align*}
\begin{split}
\mathcal{L}^{+}\circ\mathcal{B}^{+}(F_1)(x,\lambda)
&=
\frac{(b)_\infty\theta_q(a\lambda,axq/\lambda,-xq)}{(b/a)_\infty\theta_q(\lambda,xq/\lambda,-axq)}G_1(x)\\
&\qquad +\frac{(a)_\infty\theta_q(b\lambda,bxq/\lambda,-xq)}{(a/b)_\infty\theta_q(\lambda,xq/\lambda,-bxq)}G_2(x),
\end{split}\\
F_2(x)
&=
\frac{(q/a)_\infty}{(b/a,q)_\infty}G_1(x)+\frac{(q/b)_\infty}{(a/b,q)_\infty}G_2(x).
\end{align*}
\end{lem}
Putting $x\mapsto a^{-1}x$, $b=0$ in the $q$-difference equation $(\ref{eq:(26)})$, we have
\[
\left[T_x^2-(1-xq)T_x-\frac{x}{a}q\right]f(a^{-1}x)=0.
\]
Furthermore, putting $a=q^{\alpha }$ and $x^\frac{\alpha}{2}f(a^{-1}x)=g(x)$, we see that $g(x)$ is a solution of the $q$-Hermite-Weber equation (\ref{eq:q-Hermite}).
\begin{lem}
\label{lem:lemma2}
The fundamental solutions of the $q$-difference equation $(\ref{eq:q-Hermite})$ around $x=0$ are
\begin{align}
f_{0}(x)
:=
x^{\frac{\alpha }{2}}\mathcal{L}^{+}\circ\mathcal{B}^{+}\left(\widetilde{f}_{0}\right)(x,\lambda), \quad
g_0(x)
:=
\frac{x^{1-\frac{\alpha}{2}}}{\theta_q(-x)}{}_1\phi_1\left(\begin{matrix}q/a\\0 \end{matrix};q,xq\right),
\end{align}
where $\widetilde{f}_{0}(x)$ is the formal solution around $x=0$:
$$
\widetilde{f}_{0}(x)
=
{}_2\phi_0\left(\begin{matrix}a,0 \\ - \end{matrix};q, {\frac{x}{a}}\right).
$$
Those around $x=\infty$ are
\begin{align}
f_{\infty }(x)
:=
f_{0 }(x^{-1}), \quad
g_{\infty }(x)
=g_0(x^{-1}).
\end{align}
In this case, the connection formulas for $f_0(x,\lambda)$, $f_{\infty }(x,\lambda)$ and $g_0(x)$, $g_\infty(x)$ are as follows:
\begin{align}
\label{amuconnect}
\begin{pmatrix}f_0(x,\lambda)\\ f_{\infty }(x,\lambda) \end{pmatrix}=-\frac{(q)_\infty}{(q/a)_\infty}\begin{pmatrix}\frac{\theta_q(\lambda,ax/\lambda)x^\alpha}{\theta_q(\lambda/a,x/\lambda)a}&1\\ 1&\frac{\theta_q(\lambda,x\lambda/a)}{\theta_q(\lambda/a,x\lambda)}x^{-\alpha}\end{pmatrix}\begin{pmatrix}g_0(x)\\g_\infty(x)\end{pmatrix}.
\end{align}
\end{lem}
For this Lemma\,\ref{lem:lemma2}, we give a proof by following Ohyama \cite{O2}.
\begin{proof}
It is sufficient to prove the case of $x=0$, since the $q$-difference equation (\ref{eq:q-Hermite}) is symmetric under the transformation $x\leftrightarrow x^{-1}$.
The image of $\mathcal{L}^{+}\circ\mathcal{B}^{+}$ applied to $\widetilde{f}_{0}(x)$ gives the convergent series $f_{0}(x)$, which is a fundamental solution of (\ref{eq:q-Hermite}).
This fact follows from the property of the $q$-Borel and $q$-Laplace transformations:
\begin{align}
\label{eq:borel}
&\mathcal{B}^{+}(x^mT_x^nf(x))(\xi)=q^\frac{m(m-1)}{2}\xi^mT_\xi^{m+n}\mathcal{B}^+(f)(\xi), \\
\label{eq:laplace}
&\mathcal{L}^{+}(\xi^mT_\xi^nf(\xi))(x,\lambda)=q^{-\frac{m(m-1)}{2}}x^mT_x^{n-m}\mathcal{L}^+(f)(x,\lambda),
\end{align}
or
\begin{align}
\mathcal{L}^{+}\circ\mathcal{B}^{+}(x^mT_x^nf)(x,\lambda)=x^mT_x^n\mathcal{L}^+\circ\mathcal{B}^+(f)(x,\lambda).
\end{align}
Since $F_{2}(x)$ degenerates to $g_{0}(x)$ under the substitution $x\mapsto a^{-1}x$ and the limit $b\to 0$, this shows that $g_{0}(x)$ is another fundamental solution of (\ref{eq:q-Hermite}).
The proof of the connection formula is obtained from the degeneration limit of the connection formula in Lemma\,\ref{lem:lemma1}.
We take the limit of Heine's transformation formula
$$
{}_2\phi_1\left(\begin{matrix}a,b\\c\end{matrix};q,x\right)
=
\frac{(abx/c)_\infty}{(x)_\infty}{}_2\phi_1\left(\begin{matrix}c/b,c/a\\ c\end{matrix};q,\frac{abx}{c}\right),
$$
as $a\to 0$; then
$$
{}_2\phi_1\left(\begin{matrix}0,b\\c\end{matrix};q,x\right)=\frac{1}{(x)_\infty}{}_1\phi_1\left(\begin{matrix}q/b\\c\end{matrix};q,bx\right).
$$
Then we rewrite the solution $G_2(x)$ in Lemma \ref{lem:lemma1} as
\begin{align*}
G_2(x)
&=
\frac{\theta_q(-bxq)}{\theta_q(-xq)}{}_2\phi_1\left(\begin{matrix}b,0\\ bq/a\end{matrix};q,\frac{q}{abx}\right) \\
&=
\frac{\theta_q(-bxq)}{(q/abx)_\infty \theta_q(-xq)}{}_1\phi_1\left(\begin{matrix}q/a\\ bq/a\end{matrix};q,\frac{q}{ax}\right) \\
&=
(q,abx)_\infty \frac{\theta_q(-bxq)}{\theta_q(-abx,-xq)}{}_1\phi_1\left(\begin{matrix}q/a\\ bq/a\end{matrix};q,\frac{q}{ax}\right).
\end{align*}
Therefore the connection formula of $\mathcal{L}^+\circ\mathcal{B}^+(F_1)(x,\lambda)$ is given by
\begin{align}
\label{eq:prot connec}
\mathcal{L}^+\circ\mathcal{B}^+(F_1)(x,\lambda)=C_1(x)F_2(x)+C_2(x)\frac{(abx)_\infty}{\theta_q(-ax)}{}_1\phi_1\left(\begin{matrix}q/a\\ bq/a\end{matrix};q,\frac{q}{ax}\right),
\end{align}
where
\begin{align*}
C_1(x)
&=
\frac{(b,q)_\infty}{(q/a)_\infty}\frac{\theta_q(a\lambda,axq/\lambda,-xq)}{\theta_q(\lambda,xq/\lambda,-axq)}, \\
\begin{split}
C_2(x)
&=
\frac{(q)_\infty\theta_q(-ax)}{\theta_q(\lambda,\frac{xq}{\lambda})}\left\{ (a,bq/a,q)_\infty\frac{\theta_q(b\lambda,bxq/\lambda)}{\theta_q(-bq/a,-abx)}\right.\\
&\left.\qquad\qquad\qquad\qquad\qquad-\frac{(bq/a)_\infty}{(q/a)_\infty}\frac{\theta_q(-b,a\lambda,axq/\lambda,-bxq)}{\theta_q(-bq/a,-axq,-abx)}\right\}.
\end{split}
\end{align*}
By replacing $x\mapsto a^{-1}x,\lambda\mapsto a^{-1}\lambda,b\mapsto-\lambda^{-1}q^n$ in (\ref{eq:prot connec}) and taking the limit of each term of (\ref{eq:prot connec}) as $n\to\infty$, we have
\begin{align*}
& \frac{(abx)_\infty}{\theta_q(-ax)}{}_1\phi_1\left(\begin{matrix}q/a\\ bq/a\end{matrix};q,\frac{q}{ax}\right)\to-x^{-\frac{\alpha}{2}}g_{\infty }(x) \\
& F_2(x)\to\frac{\theta_q(-x)}{\theta_q(-xq/a)}x^{\frac{\alpha}{2}-1}g_0(x), \quad
\mathcal{L}^{+}\circ\mathcal{B}^{+}(F_1)(x,\lambda)\to \mathcal{L}^{+}\circ\mathcal{B}^{+}\left(\widetilde{f}_{0}\right)(x,\lambda).
\end{align*}
Hence, we obtain the conclusion.
\end{proof}
Next, we prove Theorem \ref{thm:mu and q-Hermite-Weber}, which states that the image of the composition of the $q$-Borel and $q$-Laplace transformations of the divergent series $\widetilde{f}_{0}(x)$ is essentially equivalent to our $\mu (u,v;\alpha)$.\\
\noindent
{\bf{Proof of Theorem \ref{thm:mu and q-Hermite-Weber}.}}
From the definition of the $q$-Borel transformation and the $q$-binomial theorem,
\begin{align*}
\mathcal{B}^{+}\left(\widetilde{f}_{0}\right)(\xi )
&=
\sum_{n\geq 0}\frac{(a)_{n}}{(q)_{n}}(-1)^{n}\left(\frac{\xi }{a}\right)^{n}
=
\frac{(-\xi )_{\infty }}{(-\xi /a)_{\infty }}.
\end{align*}
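The $q$-binomial theorem step above can be checked numerically for generic real parameters; in the sketch below (our names, not the paper's), $a$ stands for $q^{\alpha}$.

```python
def qpoch(a, q, n):
    """(a; q)_n = prod_{j=0}^{n-1} (1 - a q^j); large n approximates (a; q)_inf."""
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q ** j
    return p

# q-binomial theorem: sum_{n>=0} (a)_n/(q)_n z^n = (az)_inf/(z)_inf for |z| < 1;
# with z = -xi/a this gives (-xi)_inf / (-xi/a)_inf, as in the computation above.
q, a, xi = 0.3, 0.5, 0.2
z = -xi / a
lhs = sum(qpoch(a, q, n) / qpoch(q, q, n) * z ** n for n in range(200))
rhs = qpoch(-xi, q, 400) / qpoch(-xi / a, q, 400)
assert abs(lhs - rhs) < 1e-12
```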
By simple calculation, we have
\begin{align*}
\mathcal{L}^{+}\circ\mathcal{B}^{+}\left(\widetilde{f}_{0}\right)(x,-\lambda)
&=
\sum_{n\in\mathbb{Z}}\frac{1}{\theta_q(-\lambda q^n/x)}\frac{(\lambda q^n)_\infty}{(\lambda q^n/a)_\infty} \\
&=
\frac{1}{\theta_q(-\lambda/x)}\sum_{n\in\mathbb{Z}}\left(-\frac{\lambda}{x}\right)^{n+1}q^\frac{n(n+1)}{2}
\frac{(\lambda q^{n+1})_{\infty }}{(\lambda q^{n-\alpha +1})_{\infty }}.
\end{align*}
The second equality follows from the well-known $q$-difference relation of the theta function:
$$
\theta _{q}(q^{n}x)=q^{-\frac{n(n-1)}{2}}x^{-n}\theta _{q}(x) \quad (n \in \mathbb{Z}).
$$
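This quasi-periodicity of the theta function is straightforward to verify numerically, assuming the standard convention $\theta_q(x)=\sum_{n\in\mathbb{Z}}q^{\frac{n(n-1)}{2}}x^n$ (equivalently $(q,-x,-q/x;q)_{\infty}$ by the Jacobi triple product); the truncation below is ours.

```python
def theta_q(x, q, N=60):
    """theta_q(x) = sum_{n in Z} q^{n(n-1)/2} x^n, truncated to |n| <= N."""
    return sum(q ** (n * (n - 1) / 2) * x ** n for n in range(-N, N + 1))

q, x = 0.3, 0.7
# theta_q(q^n x) = q^{-n(n-1)/2} x^{-n} theta_q(x) for integer n
for n in range(1, 5):
    lhs = theta_q(q ** n * x, q)
    rhs = q ** (-n * (n - 1) / 2) * x ** (-n) * theta_q(x, q)
    assert abs(lhs - rhs) < 1e-10 * abs(rhs)
```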
By substituting $x=e^{2\pi i(u-v)}$ and $\lambda=e^{2\pi iu}$, we have
\begin{align*}
f_0(e^{2\pi i(u-v)},-e^{2\pi iu})
&=
e^{\pi i\alpha(u-v)}\mathcal{L}^{+}\circ\mathcal{B}^{+}\left(\widetilde{f}_{0}\right)(e^{2\pi i(u-v)},-e^{2\pi iu}) \\
&=
-\frac{e^{\pi i\alpha(u-v)}}{\theta_q(-e^{2\pi iv})}\sum_{n\in\mathbb{Z}}(-1)^ne^{2\pi i(n+1)v}q^\frac{n(n+1)}{2}\frac{(e^{2\pi iu}q^{n+1})_\infty}{(e^{2\pi iu}q^{n-\alpha+1})_\infty} \\
&=
\frac{ie^{\pi i\alpha(u-v)}}{\vartheta_{11}(v)}\sum_{n\in\mathbb{Z}}(-1)^ne^{2\pi i\left(n+\frac{1}{2}\right)v}q^\frac{\left(n+\frac{1}{2}\right)^2}{2}\frac{(e^{2\pi iu}q^{n+1})_\infty}{(e^{2\pi iu}q^{n-\alpha+1})_\infty} \\
&=
iq^\frac{1}{8}\mu(u,v;\alpha).
\end{align*}
\qed
Finally, we give a connection formula from which the translation formula (\ref{eq:main results3}) is derived.
\begin{thm}
The following equation holds:
\label{thm:tuchimi connection}
\begin{align}
f_{0}(x,\lambda^{\prime})
&=
\frac{\theta_q(\lambda^{\prime},ax/\lambda^{\prime})x^\alpha}{\theta_q(\lambda^{\prime}/a,x/\lambda^{\prime})a}f_{\infty}(x,\lambda) \nonumber \\
\label{eq:tuchimi connection}
& \quad -
(a)_\infty(q)_\infty^2\frac{\theta_q(-\lambda^{\prime}/x\lambda,-x,-\lambda\lambda^{\prime}/a)}{\theta_q(x\lambda,\lambda^{\prime}/x,\lambda^{\prime}/a,a/\lambda)}g_{\infty }(x).
\end{align}
\end{thm}
\begin{proof}
We replace the variable $\lambda$ by $\lambda'$ in the row of the connection formula (\ref{amuconnect}) corresponding to $f_0(x,\lambda)$:
\begin{align}
\label{eq:different variable}
\begin{pmatrix}f_0(x,\lambda')\\ f_{\infty }(x,\lambda) \end{pmatrix}=-\frac{(q)_\infty}{(q/a)_\infty}\begin{pmatrix}\frac{\theta_q(\lambda',ax/\lambda')x^\alpha}{\theta_q(\lambda'/a,x/\lambda')a}&1\\ 1&\frac{\theta_q(\lambda,x\lambda/a)}{\theta_q(\lambda/a,x\lambda)}x^{-\alpha}\end{pmatrix}\begin{pmatrix}g_0(x)\\g_\infty(x)\end{pmatrix}.
\end{align}
By eliminating the function $g_0(x)$ from (\ref{eq:different variable}), we have
\begin{align*}
&f_0(x,\lambda^{\prime})-\frac{\theta_q(\lambda^{\prime},ax/\lambda^{\prime})x^\alpha}{\theta_q(\lambda^{\prime}/a,x/\lambda^{\prime})a}f_{\infty}(x,\lambda)\\
&=\frac{(q)_\infty}{(q/a)_\infty}\left\{\frac{\theta_q(\lambda,\lambda^{\prime},x\lambda/a,ax/\lambda^{\prime})}{a\theta_q(\lambda/a,\lambda^{\prime}/a,x\lambda,x/\lambda^{\prime})}-1\right\}g_\infty(x)\\
&=\frac{(a)_\infty(q)_\infty^2}{\theta_q(-a)}\frac{\theta_q(-\lambda^{\prime}/x\lambda,-x)}{\theta_q(x\lambda,\lambda^{\prime}/x)}C(x)g_\infty(x),
\end{align*}
where
\[
C(x)=\frac{\lambda^{\prime}}{ax}\frac{\theta_q(\lambda,\lambda^{\prime},x\lambda/a,ax/\lambda^{\prime})}{\theta_q(-\lambda^{\prime}/x\lambda,-x,\lambda/a,\lambda^{\prime}/a)}-\frac{\theta_q(x\lambda,\lambda^{\prime}/x)}{\theta_q(-\lambda^{\prime}/x\lambda,-x)}.
\]
The function $C(x)$ has simple poles at the points $x=q^j$ and $x=\frac{\lambda^{\prime}}{\lambda}q^j$ for $j\in\mathbb{Z}$, and satisfies $C(xq)=C(x)$. We calculate the residue at the pole $x=1$,
\begin{align*}
\lim_{x\to1}\theta_q(-x)C(x)&=\frac{\lambda^{\prime}}{a}\frac{\theta_q(\lambda,\lambda^{\prime},\lambda/a,a/\lambda^{\prime})}{\theta_q(-\lambda^{\prime}/\lambda,\lambda/a,\lambda^{\prime}/a)}-\frac{\theta_q(\lambda,\lambda^{\prime})}{\theta_q(-\lambda^{\prime}/\lambda)}\\
&=\frac{\theta_q(\lambda,\lambda^{\prime})}{\theta_q(-\lambda^{\prime}/\lambda)}\left\{\frac{\lambda^{\prime}}{a}\frac{\theta_q(a/\lambda^{\prime})}{\theta_q(\lambda^{\prime}/a)}-1\right\}=0.
\end{align*}
It follows that the doubly periodic function $C(x)$ is constant in $x$.
Hence, we have
\begin{align*}
C(x)=C(-a/\lambda)=-\frac{\theta_q(-a,-\lambda\lambda^{\prime}/a)}{\theta_q(\lambda^{\prime}/a,a/\lambda)}.
\end{align*}
\end{proof}
\section{Proofs of the main results}
In this section, we prove the main theorems and corollaries.
First, we prove Theorem \ref{thm:Thm1}.
\noindent
{\bf{Proof of Theorem \ref{thm:Thm1}.}}
The equation (\ref{eq:main results0}) is obvious from Theorem \ref{thm:mu and q-Hermite-Weber} and Lemma \ref{lem:lemma2}.
The periodicity (\ref{eq:main results1}) follows directly from the definition of $\mu(u,v;\alpha)$.
The forward shift (\ref{eq:main results2}) is proved as follows:
\begin{align*}
\mu(u+\tau,v;\alpha)
&=
\frac{e^{\pi i\alpha(u-v)}}{\vartheta_{11}(v)}q^\frac{\alpha}{2} \\
& \quad \cdot
\sum_{n\in\mathbb{Z}}
(-1)^ne^{2\pi i(n+\frac{1}{2})v}q^\frac{n(n+1)}{2} \\
& \quad \cdot
\frac{(e^{2\pi iu}q^{n+2})_{\infty }}{(e^{2\pi iu}q^{n-\alpha+2})_{\infty }}(e^{2\pi iu}q^{n+1}+1-e^{2\pi iu}q^{n+1}) \\
&=
\frac{e^{\pi i\alpha(u-v)}}{\vartheta_{11}(v)}q^\frac{\alpha}{2} \\
& \quad \left\{
\sum_{n\in\mathbb{Z}}e^{2\pi i(u-v)}(-1)^{n-1}e^{2\pi i(n+\frac{1}{2})v}q^\frac{n(n+1)}{2}
\frac{(e^{2\pi iu}q^{n+1})_{\infty }}{(e^{2\pi iu}q^{n-\alpha+1})_{\infty }} \right. \\
& \quad \left. +
\sum_{n\in\mathbb{Z}}(-1)^ne^{2\pi i(n+\frac{1}{2})v}q^\frac{n(n+1)}{2}
\frac{(e^{2\pi iu}q^{n+1})_{\infty }}{(e^{2\pi iu}q^{n-(\alpha -1)+1})_{\infty }}\right\} \\
&=
-e^{2\pi i(u-v)}q^\frac{\alpha}{2}\mu(u,v;\alpha)+e^{\pi i(u-v)}q^\frac{\alpha}{2}\mu(u,v;\alpha-1).
\end{align*}
The translation formula (\ref{eq:main results3}) is proved by putting $x=e^{2\pi i(u-v)}$, $\lambda=-e^{2\pi iv}$, and $\lambda^{\prime}=-e^{2\pi i(u+z)}$ in the connection formula (\ref{eq:tuchimi connection}) and using (\ref{eq:mu and q-Hermite-Weber}).
The symmetry (\ref{eq:main results4-2}), $\tau$-periodicity (\ref{eq:main results4}) and pseudo-periodicity (\ref{eq:main results4-3}) are the cases $z=0$, $\tau$ and $-u-v+\alpha\tau$ of the translation formula (\ref{eq:main results3}), respectively:
\begin{align*}
\mu(u,v;\alpha)
&=
\frac{\vartheta_{11}(v-\alpha\tau)\vartheta_{11}(u)}{\vartheta_{11}(u-\alpha\tau)\vartheta_{11}(v)}e^{2\pi i\alpha(u-v)}\mu(v,u;\alpha), \\
\mu(u,v;\alpha)
&=
\mu(u+\tau,v+\tau;\alpha), \\
\mu(u,v;\alpha)
&=
\frac{\vartheta_{11}(v-\alpha\tau)\vartheta_{11}(u)}{\vartheta_{11}(u-\alpha\tau)\vartheta_{11}(v)}e^{2\pi i\alpha(u-v)}\mu(-u+\alpha\tau,-v+\alpha\tau;\alpha).
\end{align*}
Finally, to prove equation (\ref{eq:main results5}), let us put
$$
\Phi(u,v;\alpha):=\frac{\vartheta_{11}(v-\alpha\tau)\vartheta_{11}(u)}{\vartheta_{11}(u-\alpha\tau)\vartheta_{11}(v)}e^{2\pi i\alpha(u-v)},
$$
which satisfies
$$
\Phi(u,v;\alpha)=\Phi(u,v;\alpha-1)=\Phi(u,v+\tau;\alpha)=\Phi(v,u;\alpha)^{-1}.
$$
Then, from (\ref{eq:main results4-2}) and (\ref{eq:main results2}), we have the desired result
\begin{align}
\mu(u,v+\tau;\alpha)
&=
\Phi(u,v+\tau;\alpha)\mu(v+\tau,u;\alpha) \nonumber \\
&=
\Phi(u,v;\alpha)(-e^{-2\pi i(u-v)}q^\frac{\alpha}{2}\mu(v,u;\alpha)+e^{-\pi i(u-v)}q^\frac{\alpha}{2}\mu(v,u;\alpha-1)) \nonumber \\
\label{eq:v+tau}
&=
-e^{-2\pi i(u-v)}q^\frac{\alpha}{2}\mu(u,v;\alpha)+e^{-\pi i(u-v)}q^\frac{\alpha}{2}\mu(u,v;\alpha-1).
\end{align}
Indeed, combining the $\tau$-periodicity (\ref{eq:main results4}) with the forward shift (\ref{eq:main results2}) applied to $\mu(u+\tau,v+\tau;\alpha)$ and with (\ref{eq:v+tau}), we get
$\mu(u,v;\alpha)=q^{\alpha-1}\left\{\mu(u,v;\alpha)-2\cos\pi(u-v)\mu(u,v;\alpha-1)+\mu(u,v;\alpha-2)\right\}$,
which is (\ref{eq:main results5}) with $\alpha$ replaced by $\alpha+1$.
\qed
Next, we prove Corollary \ref{cor:Thm1 k}.
Since all the equations except (\ref{eq:mu k def}) and (\ref{eq:mu k 7}) are obtained immediately from Theorem \ref{thm:Thm1}, we prove only these two.
\noindent
{\bf{Proof of Corollary \ref{cor:Thm1 k}}.}
The first equation (\ref{eq:mu k def}) follows immediately from Definition \ref{def:mua}.
Next, using the partial fraction decomposition
$$
\prod_{l=0}^{k-1}\frac{1}{1-e^{2\pi iu}q^{n-l}}
=
\sum_{j=0}^{k-1}\frac{(-1)^{k-j-1}}{(q)_j(q)_{k-1-j}}\frac{q^{\frac{(k-j)(k-j-1)}{2}}}{1-e^{2\pi iu}q^{n-j}},
$$
we have
\begin{align*}
\mu(u,v;k)
&=
\frac{e^{\pi ik(u-v)}}{\vartheta_{11}(v)}\sum_{j=0}^{k-1}\sum_{n\in\mathbb{Z}}\frac{(-1)^ne^{2\pi i(n+\frac{1}{2})v}q^\frac{n(n+1)}{2}}{1-e^{2\pi iu}q^{n-j}}\frac{(-1)^{k-1-j}q^\frac{(k-j)(k-j-1)}{2}}{(q)_j(q)_{k-1-j}} \\
&=
e^{\pi i(k-1)(u-v+\tau)}\sum_{j=0}^{k-1}\frac{(-1)^{k-1-j}}{(q)_j(q)_{k-1-j}}q^\frac{(k-1-j)^2}{2}\mu(u-j\tau,v).
\end{align*}
\qed
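The partial fraction decomposition used in this proof can be verified numerically: writing $t=e^{2\pi iu}q^{n}$, it reads $\prod_{l=0}^{k-1}(1-tq^{-l})^{-1}=\sum_{j=0}^{k-1}\frac{(-1)^{k-j-1}q^{\frac{(k-j)(k-j-1)}{2}}}{(q)_j(q)_{k-1-j}(1-tq^{-j})}$. The sketch below (our names, a generic real value of $t$) checks it for small $k$.

```python
def qpoch(q, n):
    """(q; q)_n = prod_{j=1}^{n} (1 - q^j)."""
    p = 1.0
    for j in range(1, n + 1):
        p *= 1.0 - q ** j
    return p

q, t = 0.3, 0.37   # t plays the role of e^{2 pi i u} q^n
for k in range(1, 7):
    lhs = 1.0
    for l in range(k):
        lhs /= 1.0 - t * q ** (-l)
    rhs = sum((-1) ** (k - j - 1)
              * q ** ((k - j) * (k - j - 1) / 2)
              / (qpoch(q, j) * qpoch(q, k - 1 - j) * (1.0 - t * q ** (-j)))
              for j in range(k))
    assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```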
As an important application of the recursion formula (\ref{eq:main results5}), we also obtain the following reduction formula for $\mu(u,v;\alpha)$.
\begin{cor}
\label{cor:mu a reduction}
For $\alpha\in\mathbb{C}$, we have
\begin{align}
\label{eq:mu a reduction}
\mu\left(u,u+\frac{1}{2};\alpha-1\right)=(q^{-\alpha}-1)\mu\left(u,u+\frac{1}{2};\alpha+1\right).
\end{align}
In particular, for any non-negative integer $k$,
\begin{align}
\label{eq:mu k reduction}
\mu\left(u,u+\frac{1}{2};k\right)
=
\begin{cases}
-iq^{-\frac{1}{8}}\frac{q^\frac{k^2}{4}}{(q;q^2)_\frac{k}{2}} & k:\text{even}\\
\frac{q^\frac{k^2-1}{4}}{(q^2;q^2)_\frac{k-1}{2}}\mu\left(u,u+\frac{1}{2}\right) & k:\text{odd}
\end{cases}.
\end{align}
\end{cor}
\begin{rmk}
Corollary \ref{cor:mu a reduction} is a generalization of a classical evaluation due to Gauss \cite{G}.
In fact, Gauss proved the case of negative integers of our formula (\ref{eq:mu a reduction}), namely the special value of the continuous $q$-Hermite polynomial $H_{n}(x \mid q)$ at $x=0$:
\begin{align}
\label{eq:Gauss evaluation}
i^{-n}H_{n}(0 \mid q)
=
\frac{(q^{n-1};q^{-2})_{\infty}}{(q^{-1};q^{-2})_{\infty}}
=\begin{cases}
0 & (n=2N-1) \\
(q;q^{2})_{N} & (n=2N)
\end{cases}.
\end{align}
This formula (\ref{eq:Gauss evaluation}) is a key lemma in deriving the product formula for the quadratic Gauss sum:
\begin{align}
\label{eq:Gauss sum product}
\sum_{k=0}^{2N}e^{\frac{2\pi i k^{2}}{2N+1}}
=
\prod_{j=1}^{N}
(e^{\frac{2\pi i}{2N+1}(2j-1)}-e^{-\frac{2\pi i}{2N+1}(2j-1)}).
\end{align}
In fact, by using this product formula (\ref{eq:Gauss sum product}), Gauss determined the sign of the quadratic Gauss sum and thereby gave his fourth proof of the quadratic reciprocity law.
\end{rmk}
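The product formula is easy to confirm numerically for small $N$; note that the quadratic Gauss sum here is taken over a complete residue system modulo $2N+1$. The sketch below is ours.

```python
import cmath, math

for N in range(1, 8):
    n = 2 * N + 1
    # Quadratic Gauss sum over a complete residue system mod n
    gauss = sum(cmath.exp(2 * math.pi * 1j * k * k / n) for k in range(n))
    prod = 1 + 0j
    for j in range(1, N + 1):
        z = cmath.exp(2 * math.pi * 1j * (2 * j - 1) / n)
        prod *= z - 1 / z        # e^{2 pi i (2j-1)/n} - e^{-2 pi i (2j-1)/n}
    assert abs(gauss - prod) < 1e-9
```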
Next, we prove Theorem \ref{thm:mu and CqH} on the relation between $\mu(u,v;k)$ and the continuous $q$-Hermite polynomials.
\noindent
{\bf{Proof of Theorem \ref{thm:mu and CqH}.}}
For $n=0,1,2,\ldots$, the continuous $q$-Hermite polynomials satisfy the following recurrence relation and initial conditions:
\begin{align}
\label{q-Her Rec}
2xH_n(x\mid q)&=H_{n+1}(x\mid q)+(1-q^n)H_{n-1}(x\mid q), \\
\label{q-Her initial}
H_{0}(x\mid q)&=1, \quad H_1(x\mid q)=2x.
\end{align}
Let us now set
$$
\widehat{H}_{k}(x\mid q):=iq^\frac{1}{8}\mu(u,v;-k), \quad x=\cos\pi(u-v).
$$
To prove $\widehat{H}_{k}(x\mid q)={H}_{k}(x\mid q)$, it is enough to show that $\widehat{H}_{k}(x\mid q)$ satisfies the recurrence relation (\ref{q-Her Rec}) and the initial conditions (\ref{q-Her initial}).
The recurrence relation (\ref{q-Her Rec}) is exactly the formula (\ref{eq:mu k 5}) with $k$ replaced by $-n$.
For the initial conditions (\ref{q-Her initial}), by the definition and the triple product identity, we have
\begin{align}
\label{eq:Hhat initial0}
\widehat{H}_{0}(x\mid q)
&=
iq^\frac{1}{8}\mu(u,v;0)
=
\frac{1}{\vartheta_{11}(v)}
\sum_{n\in\mathbb{Z}}e^{2\pi i(n+\frac{1}{2})(v+\frac{1}{2})+\pi i(n+\frac{1}{2})^{2}\tau }
=
1.
\end{align}
The equation $\widehat{H}_{1}(x\mid q)=2x$ follows from (\ref{eq:Hhat initial0}) and the case $k=0$ of (\ref{eq:mu k 5}).
\qed
Next, we prove Theorem \ref{thm:Sr} for the generating function $S(r)$.
\noindent
{\bf{Proof of Theorem \ref{thm:Sr}.}}
(1) Since the equation (\ref{eq:Sr rec2}) follows immediately from (\ref{eq:Sr rec1}), it is enough to show (\ref{eq:Sr rec1}).
We find by (\ref{eq:mu k 5}) that
\begin{align}
S(r)-S\left(\frac{r}{q}\right)
&=
\sum_{k=0}^\infty(1-q^{-k})\mu(u,v;k+1)r^k \nonumber\\
&=
\sum_{k=0}^\infty(2\cos\pi(u-v)\mu(u,v;k)-\mu(u,v;k-1))r^k \nonumber\\
&=
2r\cos\pi(u-v)\sum_{k=-1}^\infty\mu(u,v;k+1)r^k-r^2\sum_{k=-2}^\infty\mu(u,v;k+1)r^k \nonumber\\
&=
(2r\cos\pi(u-v)-r^2)S(r)-r\mu(u,v;0) \nonumber \\
& \quad +2\cos\pi(u-v)\mu(u,v;0)-\mu(u,v;-1). \nonumber
\end{align}
By the case $k=0$ of the recursion formula (\ref{eq:mu k 5}):
\begin{align*}
2\cos\pi(u-v)\mu(u,v;0)-\mu(u,v;-1)=0
\end{align*}
and $\mu(u,v;0)=-iq^{-\frac{1}{8}}$, we have the conclusion.\\
\noindent
(2) Let $N=0,1,2,\ldots$. Using the equation (\ref{eq:mu k 5}), we find that
\begin{align}
S(r)
&=
(1-re^{\pi i(u-v)}q)(1-re^{-\pi i(u-v)}q)S(rq)-irq^\frac{7}{8} \nonumber \\
&=
(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_NS(rq^N) \nonumber \\
\label{eq:N expression Sr}
& \quad -
irq^\frac{7}{8}\sum_{n=0}^{N-1}q^n(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_n.
\end{align}
Taking the limit $N \to \infty$ in (\ref{eq:N expression Sr}), we have
\begin{align}
S(r)
&=
(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_\infty\mu(u,v) \nonumber \\
& \quad -
irq^\frac{7}{8}\sum_{n=0}^\infty q^n(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_n \nonumber \\
&=
(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_{\infty }
\mu(u,v) \nonumber \\
\label{eq:gen Sr expression}
& \quad -
irq^{\frac{7}{8}}{_{3}\phi _2}\left(\begin{matrix}q, re^{\pi i(u-v)}q, re^{-\pi i(u-v)}q \\ 0,0 \end{matrix};q,q\right).
\end{align}
The conclusion follows from (\ref{eq:gen Sr expression}) and Andrews' formula \cite[p298, Exercise 10.8]{GR}:
\begin{align}
\label{eq:Andrews formula}
{_{3}\phi _2}\left(\begin{matrix}a,b,c \\ d,e \end{matrix};q,x\right)
=
\frac{(ax,b,c)_{\infty }}{(x,d,e)_{\infty }}
\Phi^{(1)}\left(\begin{matrix}x;d/b,e/c \\ ax \end{matrix};q;b,c\right).
\end{align}
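Andrews' formula (\ref{eq:Andrews formula}) can be sanity-checked numerically for generic small parameters by truncating all series; the function names in the sketch below are ours and follow the $\Phi^{(1)}$ notation above.

```python
def qpoch(a, q, n):
    """(a; q)_n; large n approximates (a; q)_infinity."""
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q ** j
    return p

def phi32(a, b, c, d, e, q, x, N=100):
    """Truncated 3phi2(a,b,c; d,e; q, x)."""
    return sum(qpoch(a, q, n) * qpoch(b, q, n) * qpoch(c, q, n)
               / (qpoch(q, q, n) * qpoch(d, q, n) * qpoch(e, q, n)) * x ** n
               for n in range(N))

def qappell1(a, b1, b2, c, q, x, y, N=80):
    """Truncated q-Appell Phi^{(1)} as defined in the text."""
    return sum(qpoch(a, q, m + n) * qpoch(b1, q, m) * qpoch(b2, q, n)
               / (qpoch(c, q, m + n) * qpoch(q, q, m) * qpoch(q, q, n))
               * x ** m * y ** n
               for m in range(N) for n in range(N))

q, a, b, c, d, e, x = 0.3, 0.2, 0.25, 0.15, 0.1, 0.05, 0.35
lhs = phi32(a, b, c, d, e, q, x)
rhs = (qpoch(a * x, q, 400) * qpoch(b, q, 400) * qpoch(c, q, 400)
       / (qpoch(x, q, 400) * qpoch(d, q, 400) * qpoch(e, q, 400))
       * qappell1(x, d / b, e / c, a * x, q, b, c))
assert abs(lhs - rhs) < 1e-8 * abs(lhs)
```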
Finally, we prove (\ref{eq:Sr expression3}).
From (\ref{eq:Sr expression2}), we have
\begin{align*}
S(r)
&=
(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_{\infty }\\
&\quad \cdot\left\{
\mu(u,v)-\frac{irq^{\frac{7}{8}}}{1-q}\Phi ^{(1)}\left(\begin{matrix}q;0,0 \\ q^2 \end{matrix};q;re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q\right)\right\} \\
&=
(re^{\pi i(u-v)}q, re^{-\pi i(u-v)}q)_{\infty } \\
& \quad \cdot
\left\{
\mu(u,v)
-\frac{iq^{\frac{7}{8}}}{1-q}\sum_{n\geq 0}\sum_{k=0}^{n}
\frac{(q)_{n}}{(q)_{k}(q)_{n-k}(q^{2})_{n}}e^{\pi i(2k-n)(u-v)}q^{n}r^{n+1}\right\} \\
&=
(re^{\pi i(u-v)}q, re^{-\pi i(u-v)}q)_{\infty }\\
&\quad\cdot\left\{
\mu(u,v)
-\sum_{n\geq 0}\frac{iq^{\frac{7}{8}}H_{n}(\cos{\pi (u-v)}\mid q)}{(q)_{n+1}}q^{n}r^{n+1}\right\}.
\end{align*}
From Theorem \ref{thm:mu and CqH}, $H_{n}(x\mid q)=iq^\frac{1}{8}\mu(u,v;-n)$, so (\ref{eq:Sr expression3}) holds.
\qed
By re-expanding the generating function $S(r)$, we obtain the following relationships between the function $\mu(u,v;k+1)$ and the continuous $q$-Hermite polynomials.
\begin{cor}
\label{cor:Sr expressions}
For non-negative integers $k$, we have
\begin{align}
\label{eq:Sr expressions}
\mu(u,v;k+1)
=
\sum_{l=0}^{k}
\frac{q^{l}}{(q)_{l}}F_{k-l+1}(\cos{\pi(u-v)}\mid q)\mu (u,v;1-l),
\end{align}
where
$$
F_{n+1}(\cos{\theta }\mid q)
:=
e^{- ni\theta}{}_1\phi_1\hyper{q^{-n}}{0}{q}{e^{2 i\theta}q}\frac{(-1)^nq^\frac{n(n+1)}{2}}{(q)_n}.
$$
For a positive integer $m$, we have
\begin{align}
\label{eq:Sr expressions 2}
\sum_{k=0}^{m}\mu{(u,v;k+1)}\frac{H_{m-k}(\cos{\pi (u-v)}\mid q)}{(q)_{m-k}}q^{m-k}
=
-\frac{iq^\frac{7}{8}H_{m-1}(\cos{\pi (u-v)}\mid q)}{(q)_{m}}.
\end{align}
\end{cor}
\begin{proof}
From the Taylor expansion of the $q$-exponential function \cite{GR} (II.2)
$$
(x)_{\infty }=\sum_{n\geq 0}\frac{1}{(q)_{n}}(-1)^nq^\frac{n(n-1)}{2}x^{n},
$$
we have, writing $\theta =\pi (u-v)$,
\begin{align*}
(e^{i\theta }rq,e^{-i\theta }rq)_{\infty }
&=
\sum_{n\geq 0}(-qr)^{n}\sum_{k=0}^{n}\frac{q^{\binom{k}{2}}q^{\binom{n-k}{2}}}{(q)_{k}(q)_{n-k}}e^{i(n-2k)\theta }\\
&=
\sum_{n\geq 0}{}_1\phi_1\hyper{q^{-n}}{0}{q}{e^{2i \theta}q}\frac{(-1)^nq^\frac{n(n-1)}{2}}{(q)_n}(e^{-i\theta}rq)^n\\
&=
\sum_{n\geq0}F_{n+1}(\cos\pi(u-v)\mid q)r^n.
\end{align*}
By expanding (\ref{eq:Sr expression3}) and comparing the coefficient in (\ref{eq:Sr expression3}), we obtain (\ref{eq:Sr expressions}).
To prove (\ref{eq:Sr expressions 2}), we divide both sides of (\ref{eq:Sr expression3}) by $(e^{i\theta }rq,e^{-i\theta }rq)_{\infty }$:
\begin{align}
\label{eq:Sr expression3 divid}
\frac{1}{(re^{\pi i(u-v)}q,re^{-\pi i(u-v)}q)_{\infty }}S(r)
=
\sum_{m\geq 0}\frac{\mu(u,v;1-m )}{(q)_{m}}q^{m}r^{m}.
\end{align}
By the generating function of the continuous $q$-Hermite polynomials (\ref{eq:gen func of CqH}), the left hand side of (\ref{eq:Sr expression3 divid}) is equal to
$$
\sum_{m\geq 0}\sum_{k=0}^{m}\mu{(u,v;k+1)}\frac{H_{m-k}(\cos{\pi (u-v)}\mid q)}{(q)_{m-k}}q^{m-k}r^{m}.
$$
Comparing the coefficients of $r^m$ on both sides, we obtain the conclusion (\ref{eq:Sr expressions 2}).
\end{proof}
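The continuous $q$-Hermite polynomials appearing throughout can be generated from their standard three-term recurrence $H_{n+1}(x\mid q)=2xH_n(x\mid q)-(1-q^n)H_{n-1}(x\mid q)$, and the generating function (\ref{eq:gen func of CqH}) used in the proof above can then be checked numerically. The following sketch assumes this standard recurrence and generating-function normalization from the literature:

```python
import numpy as np

def qpoch(x, q, n):
    """Finite q-Pochhammer (x; q)_n."""
    return np.prod([1.0 - x * q**j for j in range(n)])

def cont_q_hermite(N, x, q):
    """H_0,...,H_N via the recurrence H_{n+1} = 2x H_n - (1 - q^n) H_{n-1}."""
    H = [1.0, 2.0 * x]
    for n in range(1, N):
        H.append(2.0 * x * H[n] - (1.0 - q**n) * H[n - 1])
    return H

q, theta, r = 0.35, 1.1, 0.3
x = np.cos(theta)
H = cont_q_hermite(80, x, q)
# Generating function: sum_n H_n(cos(theta)|q) r^n / (q)_n
#                      = 1 / ((r e^{i theta}; q)_inf (r e^{-i theta}; q)_inf).
lhs = sum(H[n] * r**n / qpoch(q, q, n) for n in range(80))
rhs = 1.0 / (qpoch(r * np.exp(1j * theta), q, 200)
             * qpoch(r * np.exp(-1j * theta), q, 200))
print(abs(lhs - rhs))  # ~ machine precision
```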
\section{Modular transformation of $\mu(u,v;k,\tau)$}
In this section, again let $k$ be an integer.
We show that $\mu(u,v;k,\tau)$ is essentially equivalent to the original $\mu$-function with respect to its transformation properties, which are like those of a real-analytic Jacobi form.
\begin{prop}
We define a modified $\mu(u,v;k,\tau)$ by
\begin{align}
\tilde{\nu} (u,v;k,\tau)
:=
\frac{\mu(u,v;k+1,\tau)}{F_{k+1}(\cos\pi(u-v)\mid q)}+\frac{1}{2i}R_{k+1}(u-v,\tau),
\end{align}
where
$$
R_{k+1}(u,\tau):=R(u,\tau)-2q^{-\frac{1}{8}}\sum_{l=1}^k\frac{q^l}{(q)_l}\frac{F_{k-l+1}(\cos\pi u\mid q)}{F_{k+1}(\cos\pi u\mid q)}H_{l-1}(\cos\pi u \mid q).
$$
We have
\begin{align}
\tilde{\mu}(u,v,\tau)=\tilde{\nu} (u,v;k,\tau).
\end{align}
Therefore, the following transformations hold:
\begin{align}
\label{eq:tnu +1}
\tilde{\nu} (u,v;k,\tau+1)
&=
e^{-\frac{\pi i}{4}}\tilde{\nu}(u,v;k,\tau ), \\
\label{eq:tnu +tau}
\tilde{\nu} \left(\frac{u}{\tau},\frac{v}{\tau};k,-\frac{1}{\tau}\right)
&=
-i\sqrt{-i\tau}e^{\pi i\frac{(u-v)^2}{\tau}}\tilde{\nu} (u,v;k,\tau ).
\end{align}
\end{prop}
\begin{proof}
From Corollary \ref{cor:Sr expressions} (\ref{eq:Sr expressions}),
\begin{align*}
& \mu(u,v;\tau)-\frac{\mu(u,v;k+1,\tau)}{F_{k+1}(\cos\pi(u-v)\mid q)} \\
&=
iq^{-\frac{1}{8}}\sum_{l=1}^k\frac{q^l}{(q)_l}\frac{F_{k-l+1}(\cos\pi(u-v)\mid q)}{F_{k+1}(\cos\pi(u-v)\mid q)}H_{l-1}(\cos\pi(u-v)\mid q).
\end{align*}
Therefore
\begin{align*}
\tilde{\mu}(u,v,\tau)
&=
\mu(u,v,\tau)+\frac{1}{2i}R(u-v,\tau) \\
&=\frac{\mu(u,v;k+1,\tau)}{F_{k+1}(\cos\pi(u-v)\mid q)}+\frac{1}{2i}R(u-v,\tau) \\
& \quad
+iq^{-\frac{1}{8}}\sum_{l=1}^k\frac{q^l}{(q)_l}\frac{F_{k-l+1}(\cos\pi(u-v)\mid q)}{F_{k+1}(\cos\pi(u-v)\mid q)}H_{l-1}(\cos\pi(u-v)\mid q) \\
&=
\frac{\mu(u,v;k+1,\tau)}{F_{k+1}(\cos\pi(u-v)\mid q)}+\frac{1}{2i}R_{k+1}(u-v,\tau).
\end{align*}
\end{proof}
\begin{rmk}
T. Matsusaka also mentions this proposition \cite{M}.
\end{rmk}
\section{Remarks for further studies}
In this paper, from the point of view of the analysis of one-variable linear $q$-difference equations of the Laplace type, we have studied a generalization of Zwegers' $\mu$-function $\mu(x,y;a)$ which satisfies the $q$-Hermite-Weber equation (\ref{eq:q-Hermite}).
On the other hand, the generalized $\mu$-function $\mu(x,y;a)$ is originally a two-variable function.
More precisely, $\mu(x,y;a)$ is closely related to the $q$-Appell hypergeometric function $\Phi^{(1)}$ and its $q$-difference system, which we call the $q$-Appell difference system:
\begin{align}
[(1-T_{x})(1-c/q T_{x}T_{y})-x(1-aT_{x}T_{y})(1-b_{1}T_{x})]\Phi (x,y)&=0, \\
\label{eq:Asystem2}
[(1-T_{y})(1-c/q T_{x}T_{y})-y(1-aT_{x}T_{y})(1-b_{2}T_{y})]\Phi (x,y)&=0, \\
\label{eq:Asystem3}
[x(1-T_{y})(1-b_{1}T_{x})-y(1-T_{x})(1-b_{2}T_{y})]\Phi (x,y)&=0.
\end{align}
The first and second equations are essentially equivalent to Gasper-Rahman \cite{GR}, Exercises 10.12 (i) and (ii) (unfortunately, these formulas in \cite{GR} are incorrect).
The third equation (\ref{eq:Asystem3}) is a reducible factor of the following $q$-difference equation:
\begin{align}
& (1-T_{x})[(1-T_{y})(1-c/q T_{x}T_{y})-y(1-aT_{x}T_{y})(1-b_{2}T_{y})]\Phi (x,y) \nonumber \\
& \quad -
(1-T_{y})[(1-T_{x})(1-c/q T_{x}T_{y})-x(1-aT_{x}T_{y})(1-b_{1}T_{x})]\Phi (x,y) \nonumber \\
&=
(1-aT_{x}T_{y})[x(1-T_{y})(1-b_{1}T_{x})-y(1-T_{x})(1-b_{2}T_{y})]\Phi (x,y)
=0
\end{align}
and one can easily show that $\Phi^{(1)}$ satisfies (\ref{eq:Asystem3}).
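The claim that $\Phi^{(1)}$ satisfies (\ref{eq:Asystem3}) can be checked coefficient-wise. Writing $\Phi^{(1)}(a;b_1,b_2;c;q;x,y)=\sum_{m,n\geq 0}c_{m,n}x^my^n$ with the standard series coefficients (assumed from \cite{GR}), the equation reduces to $c_{M-1,N}(1-b_1q^{M-1})(1-q^N)=c_{M,N-1}(1-b_2q^{N-1})(1-q^M)$, which the following sketch verifies numerically at arbitrary test parameters:

```python
# Coefficient-wise check of (eq:Asystem3) for the q-Appell series Phi^(1);
# the parameter values are arbitrary test points.
q, a, b1, b2, c = 0.4, 0.15, 0.25, 0.6, 0.35

def qpoch(x, n):
    """Finite q-Pochhammer (x; q)_n."""
    p = 1.0
    for j in range(n):
        p *= 1.0 - x * q**j
    return p

def coeff(m, n):
    """Taylor coefficient of x^m y^n in Phi^(1)(a; b1, b2; c; q; x, y)."""
    return (qpoch(a, m + n) * qpoch(b1, m) * qpoch(b2, n)
            / (qpoch(q, m) * qpoch(q, n) * qpoch(c, m + n)))

for M in range(1, 8):
    for N in range(1, 8):
        lhs = coeff(M - 1, N) * (1 - b1 * q**(M - 1)) * (1 - q**N)
        rhs = coeff(M, N - 1) * (1 - b2 * q**(N - 1)) * (1 - q**M)
        assert abs(lhs - rhs) < 1e-12
print("coefficient identity for (eq:Asystem3) verified")
```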
First, $\Phi^{(1)}$ appears in the expression of $\mu(x,y;a)$.
In Theorem \ref{Theorem 3}, by dividing the bilateral sums of the $q$-hypergeometric series ${}_2\psi_2$ and ${}_0\psi_2$ into two parts, over non-negative and negative indices:
\begin{align*}
{}_2\psi_2\left(\begin{matrix}x/a,y/a\\0,0\end{matrix};q,a\right)
&=
\frac{xy}{a}\left(1-\frac{a}{x}\right)\left(1-\frac{a}{y}\right){}_3\phi_2\left(\begin{matrix}xq/a,yq/a,q \\ 0,0 \end{matrix};q,a\right) \\
& \qquad\quad
+{}_2\phi_2\left(\begin{matrix}0,q \\ aq/x,aq/y \end{matrix};q,-\frac{aq^2}{xy}\right), \\
{}_0\psi_2\left(\begin{matrix}-\\x,y\end{matrix};q,\frac{xy}{a}\right)
&=
\frac{1}{(1-x)(1-y)}{}_1\phi_2\left(\begin{matrix}q \\ xq,yq \end{matrix};q,\frac{xyq^2}{a}\right) \\
& \qquad\quad
+{}_3\phi_2\left(\begin{matrix}q/x,q/y,q \\ 0,0 \end{matrix};q,a\right),
\end{align*}
we rewrite Theorem \ref{Theorem 3} as follows:
\begin{align}
iq^{\frac{1}{8}}\mu(x,y;a)
&=
e^{\pi i\alpha(u-v)}\left\{\frac{xy}{a}\frac{(a,q,a/x,a/y)_\infty}{\theta_q(-y)\theta_q(-x/a)}
{}_3\phi_2\left(\begin{matrix}xq/a,yq/a,q \\ 0,0 \end{matrix};q,a\right) \right. \nonumber\\
\label{eq:mu Appell1}
&\qquad\quad\left.
+\frac{(a,q,aq/x,aq/y)_\infty}{\theta_q(-y)\theta_q(-x/a)}{}_2\phi_2\left(\begin{matrix}0,q \\ aq/x,aq/y \end{matrix};q,-\frac{aq^2}{xy}\right)\right\} \\
&=
e^{\pi i\alpha(u-v)}\left\{\frac{(a,q,xq,yq)_\infty}{\theta_q(-y)\theta_q(-x/a)}{}_1\phi_2\left(\begin{matrix}q \\ xq,yq \end{matrix};q,\frac{xyq^2}{a}\right) \right. \nonumber\\
\label{eq:mu Appell2}
&\qquad\quad\qquad\quad\left.
+\frac{(a,q,x,y)_\infty}{\theta_q(-y)\theta_q(-x/a)}{}_3\phi_2\left(\begin{matrix}q/x,q/y,q \\ 0,0 \end{matrix};q,a\right)\right\}.
\end{align}
By Andrews' formula (\ref{eq:Andrews formula}), we obtain
\begin{align}
iq^{\frac{1}{8}}\mu(x,y;a)
&=
e^{\pi i\alpha(u-v)}\left\{-\frac{(aq)_\infty\theta_q(-yq/a)}{(q)_\infty\theta_q(-yq)}\Phi^{(1)}\left(\begin{matrix}a;0,0 \\ aq \end{matrix};q;\frac{xq}{a},\frac{yq}{a}\right) \right. \nonumber\\
&\qquad\quad\left.+\frac{(a,q,aq/x,aq/y;q)_\infty}{\theta_q(-y)\theta_q(-x/a)}{}_2\phi_2\left(\begin{matrix}0,q \\ aq/x,aq/y \end{matrix};q,-\frac{aq^2}{xy}\right)\right\} \\
&=
e^{\pi i\alpha(u-v)}\left\{\frac{(a,q,xq,yq)_\infty}{\theta_q(-y)\theta_q(-x/a)}{}_1\phi_2\left(\begin{matrix}q \\ xq,yq \end{matrix};q,\frac{xyq^2}{a}\right) \right. \nonumber\\
&\qquad\qquad\qquad+\left.\frac{(aq)_\infty\theta_q(-x)}{(q)_\infty\theta_q(-x/a)}\Phi^{(1)}\left(\begin{matrix}a;0,0 \\ aq \end{matrix};q;\frac{q}{x},\frac{q}{y}\right)\right\}.
\end{align}
Namely, $\mu(x,y;a)$ is regarded as a bilateral version of $q$-Appell hypergeometric functions
$$
\Phi^{(1)}\left(\begin{matrix}a;0,0 \\ aq \end{matrix};q;\frac{xq}{a},\frac{yq}{a}\right)
$$
or
$$
\Phi^{(1)}\left(\begin{matrix}a;0,0 \\ aq \end{matrix};q;\frac{q}{x},\frac{q}{y}\right).
$$
Further, $\mu(x,y;a)$ essentially satisfies the $q$-Appell difference system in the case $b_1=b_2=0$, $c=aq$:
\begin{align}
\label{eq:q-Appell 2}
\begin{cases}
[(1-x-T_x)(1-aT_xT_y)]\Phi (x,y)=0 \\
[(1-y-T_y)(1-aT_xT_y)]\Phi (x,y)=0 \\
[x(1-T_y)-y(1-T_x)]\Phi (x,y)=0
\end{cases}.
\end{align}
\begin{thm}The function
$$
\nu(x,y;a):=e^{-\pi i\alpha(u-v)}\frac{\theta_q(-ay)}{\theta_q(-y)}\mu(ax,ay;a)
$$
satisfies the multivariate $q$-difference equation $(\ref{eq:q-Appell 2})$.
More precisely, we have
\begin{align}
\label{eq:more reduce}
(1-aT_xT_y)\nu(x,y;a)&=0, \\
\label{eq:more reduce 2}
[x(1-T_y)-y(1-T_x)]\nu (x,y;a)&=0.
\end{align}
\end{thm}
\begin{proof}
To prove the first and second equations of (\ref{eq:q-Appell 2}), it is enough to show (\ref{eq:more reduce}).
By simple calculation, we have
\begin{align*}
aT_xT_y\nu(x,y;a)
&=
a\frac{\theta_q(-ayq)}{\theta_q(-yq)}\mu(axq,ayq;a) \\
&=
a\frac{-a^{-1}y^{-1}}{-y^{-1}}\frac{\theta_q(-ay)}{\theta_q(-y)}\mu(x,y;a) \\
&=
\nu(x,y;a).
\end{align*}
Here, the second equality follows from the $\tau$-periodicity $(\ref{eq:main results4})$.
The other equation (\ref{eq:more reduce 2}) is proved as follows:
\begin{align*}
(xT_y-yT_x)\nu(x,y;a)
&=
xe^{-\pi i\alpha(u-v-\tau)}\frac{\theta_q(-ayq)}{\theta_q(-yq)}\mu(ax,ayq;a)\\
&\quad-ye^{-\pi i\alpha(u-v+\tau)}\frac{\theta_q(-ay)}{\theta_q(-y)}\mu(axq,ay;a)\\
&=
e^{-\pi i\alpha(u-v)}\frac{\theta_q(-ay)}{\theta_q(-y)}\left\{\left(-y\mu(ax,ay;a)+\sqrt{xy}\mu(ax,ay;a/q)\right)\right.\\
&\quad\left.-\left(-x\mu(ax,ay;a)+\sqrt{xy}\mu(ax,ay;a/q)\right)\right\}\\
&=(x-y)\nu(x,y;a).
\end{align*}
The first and second equalities above follow from the forward shift $(\ref{eq:main results2})$ and $(\ref{eq:v+tau})$, respectively.
\end{proof}
In particular, Zwegers' $\mu$-function $\nu(x,y;q)=-e^{-\pi i(u+v)}\mu(x,y)$ is also a solution, in the case $a=q$, of the two-variable $q$-difference system (\ref{eq:q-Appell 2}):
\begin{align}
\label{eq:q-Appell 3}
\begin{cases}
[1-qT_xT_y]\nu (x,y;q)=0 \\
[x(1-T_y)-y(1-T_x)]\nu (x,y;q)=0
\end{cases}.
\end{align}
Based on this fact, it would be desirable to study other generalizations of the $\mu$-function $\mu(u,v)$ and their global analysis from the viewpoint of the $q$-Appell difference system (\ref{eq:q-Appell 3}).
\section*{Acknowledgments}
We are grateful to Professor Yasuhiko Yamada (Kobe University) for his helpful advice on our paper.
We also wish to thank Professor Yosuke Ohyama (Tokushima University) for his valuable suggestions on $q$-special functions, including the unpublished proof of Lemma\,\ref{lem:lemma2} \cite{O2}.
We are also indebted to Professor Kazuhiro Hikami (Kyushu University) for informing us about \cite{GW} and quantum invariants.
Professor Toshiki Matsusaka (Kyushu University) also provided information on \cite{GW} and his note \cite{M}.
Some information on the $q$-Appell hypergeometric function $\Phi ^{(1)}$ and its $q$-difference equations was provided by Dr. T. Nobukawa.
This work was supported by JSPS KAKENHI Grant Number 21K13808.
\section{Introduction}
\begin{figure}[ht]
\centering
\mbox{\subfloat { \includegraphics[clip,trim=0cm 3.5cm 0cm 3.5cm ,width=8cm]{./figures/Overview_new.pdf}}}\hspace{0.6cm}%
\caption{Schematic diagram of the proposed method.}
\label{fig:overall}
\end{figure}
\vspace{-0.2cm}
High Dynamic Range Imaging (HDRI) is a photography technique that helps to capture better-looking photos in difficult lighting conditions. It stores the entire range of light (or brightness) perceivable by human eyes, instead of the limited range captured by cameras. Due to this property, all objects in the scene look clear in an HDR image, without being saturated (too dark or too bright).
The popular approach for HDR image generation is called Multiple Exposure Fusion (MEF), in which a set of static LDR images (further referred to as an exposure stack) with varying exposure is fused into a single HDR image. The proposed method falls under this category. Most MEF algorithms work better when the exposure bias difference between the LDR images in the exposure stack is minimal\footnote{The exposure bias value indicates the amount of exposure offset from the auto exposure setting of a camera. For example, EV 1 is equal to doubling the auto exposure time (EV 0).}. Thus they require more LDR images (typically more than 2) in the exposure stack to capture the whole dynamic range of the scene. This leads to higher storage, processing time and power requirements. In principle, the long-exposure image (captured with a high exposure time) has better colour and structure information in dark regions, and the short-exposure image (captured with a low exposure time) has better colour and structure information in bright regions. Though fusing extreme exposure images is practically more appealing, it is quite challenging (existing approaches fail to maintain uniform luminance across the image). Additionally, it should be noted that taking more pictures increases the power, capture time and computation requirements. Thus, we propose to work with exposure-bracketed image pairs as input to our algorithm.
In this work, we present a data-driven learning method for fusing exposure-bracketed static image pairs. To our knowledge, this is the first work that uses a deep CNN architecture for exposure fusion. The initial layers consist of a set of filters to extract common low-level features from each image of the input pair. These low-level features of the input image pair are fused for reconstructing the final result. The entire network is trained end-to-end using a no-reference image quality loss function.
We train and test our model with a huge set of exposure stacks captured with diverse settings (indoor/outdoor, day/night, side-lighting/back-lighting, and so on). Furthermore, our model does not require parameter fine-tuning for varying input conditions. Through extensive experimental evaluations we demonstrate that the proposed architecture performs better than state-of-the-art approaches for a wide range of input scenarios.
\par The contributions of this work are as follows:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt]
\item A CNN based unsupervised image fusion algorithm for fusing exposure stacked static image pairs.
\item A new benchmark dataset that can be used for comparing various MEF methods.
\item An extensive experimental evaluation and comparison study against 7 state-of-the-art algorithms for variety of natural images.
\end{itemize}
The paper is organized as follows. In Section 2, we briefly review related work from the literature. In Section 3, we present our CNN based exposure fusion algorithm and discuss the details of the experiments. In Section 4, we provide fusion examples, and we then conclude the paper with an insightful discussion in Section 5.
\vspace{-0.1cm}
\section{Related Works}
\label{sec:related_works}
\vspace{-0.1cm}
Many algorithms have been proposed over the years for exposure fusion; however, the main idea remains the same in all of them. The algorithms compute weights for each image, either locally or pixel-wise. The fused image is then the weighted sum of the images in the input sequence.
\par Burt \emph{et al.} \cite{burt93} performed a Laplacian pyramid decomposition of the image, computing the weights using local energy and correlation between the pyramids. The use of Laplacian pyramids reduces the chance of unnecessary artifacts. Goshtasby \emph{et al.} \cite{Gosh05} take the non-overlapping blocks with the highest information from each image to obtain the fused result; this approach is prone to block artifacts. Mertens \emph{et al.} \cite{mertens2007exposure} perform exposure fusion using simple quality metrics such as contrast and saturation. However, their method suffers from hallucinated edges and mismatched color artifacts.
\par Algorithms that make use of edge-preserving filters, like bilateral filters, are proposed in \cite{raman2009bilateral}. As this does not account for the luminance of the images, the fused image has dark regions, leading to poor results. A gradient-based approach to assign the weights was put forward by Zhang \emph{et al.} \cite{zhang2012reference}. In a series of papers by Li \emph{et al.} (\cite{shutao12}, \cite{shutao13}), different approaches to exposure fusion have been reported. In their early works, they solve a quadratic optimization problem to extract finer details and fuse them. In one of their later works \cite{shutao13}, they propose a Guided Filter based approach.
\begin{figure*}[ht]
\centering
\mbox{\subfloat { \includegraphics[clip,trim=0cm 5cm 0cm 6cm , height=5cm]{./figures/tied_weights_architecture_modified_new2.pdf}}}
\caption{Architecture of the proposed image fusion CNN, illustrated for an input exposure stack with images of size $h\times w$. The pre-fusion layers C1 and C2, which share the same weights, extract low-level features from the input images. The feature pairs of the input images are fused into a single feature by the merge layer. The fused features are input to the reconstruction layers to generate the fused image $Y_{fused}$.}
\label{fig:arch_types}
\end{figure*}
Shen \emph{et al.} \cite{shen2014exposure} proposed a fusion technique using quality metrics such as local contrast and color consistency. Their random walk approach, set in a probabilistic fashion, gives a globally optimal solution to the fusion problem.
\par All of the above works rely on hand-crafted features for image fusion. These methods are not robust in the sense that the parameters need to be varied for different input conditions (say, linear and non-linear exposures), and the filter size depends on the image size. To circumvent this parameter tuning, we propose a feature learning based approach using a CNN. In this work, we learn suitable features for fusing exposure-bracketed images. Recently, Convolutional Neural Networks (CNNs) have shown impressive performance across various computer vision tasks \cite{lecun2015deep}. While CNNs have produced state-of-the-art results in many high-level computer vision tasks like recognition (\cite{he2016deep}, \cite{sarvadevabhatla2016enabling}), object detection \cite{li2016r}, segmentation \cite{he2017mask}, semantic labelling \cite{pinheiro2013recurrent}, visual question answering \cite{antol2015vqa} and much more, their performance on low-level image processing problems such as filtering \cite{nithish2017} and fusion \cite{prabhakar2016ghosting} has not been studied extensively. In this work, we explore the effectiveness of CNNs for the task of multi-exposure image fusion.
\par To our knowledge, the use of CNNs for multi-exposure fusion has not been reported in the literature. The other machine learning approach is based on a regression method called Extreme Learning Machine (ELM) \cite{wang2012extreme}, which feeds the saturation level, exposedness and contrast into the regressor to estimate the importance of each pixel. Instead of using hand-crafted features, we use the data to learn a representation directly from the raw pixels.
\vspace{-0.1cm}
\section{Proposed Method}
\label{sec:proposed_method}
\vspace{-0.1cm}
In this work, we propose an image fusion framework using CNNs. Within the span of a couple of years, Convolutional Neural Networks have shown significant success in high-end computer vision tasks. They are shown to learn complex mappings between input and output with the help of sufficient training data. The CNN learns the model parameters by optimizing a loss function in order to predict a result as close as possible to the ground truth. For example, let us assume that input \textbf{x} is mapped to output \textbf{y} by some complex transformation \emph{f}. The CNN can be trained to estimate the function \emph{f} by minimizing the difference between the expected output \textbf{y} and the obtained output \( \hat{\textbf{y}} \). The distance between \textbf{y} and \( \hat{\textbf{y}} \) is calculated using a loss function, such as the mean squared error. Minimizing this loss function leads to a better estimate of the required mapping function.
Let us denote the input exposure sequence and the fusion operator as $I$ and $O(I)$. The input images are assumed to be registered and aligned using existing registration algorithms, thus avoiding camera and object motion. We model $O(I)$ with a feed-forward process $F_W(I)$. Here, $F$ denotes the network architecture and $W$ denotes the weights learned by minimizing the loss function. As the expected output $O(I)$ is unavailable for the MEF problem, the squared error loss or any other full-reference error metric cannot be used. Instead, we make use of the no-reference image quality metric MEF SSIM proposed by Ma \textit{et al.} \cite{ma2015perceptual} as the loss function. MEF SSIM is based on the structural similarity index metric (SSIM) framework \cite{wang2004image}. It makes use of the statistics of a patch around each pixel from the input image sequence to compare with the result. It measures the loss of structural integrity as well as luminance consistency at multiple scales (see section \ref{subsec:loss_fun} for more details).
An overall scheme of the proposed method is shown in Fig. \ref{fig:overall}. The input exposure stack is converted into the YCbCr color space. The CNN is used to fuse the luminance channels of the input images. This is due to the fact that the structural details of an image are present in the luminance channel, and the brightness variation is more prominent in the luminance channel than in the chrominance channels. The obtained luminance channel is combined with the chroma (Cb and Cr) channels generated using the method described in section \ref{subsec:chrom}. The following subsection details the network architecture, loss function and the training procedure.
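The pipeline can be sketched as follows; the JPEG-style BT.601 color conversion and the stand-in averaging fusion rules are illustrative assumptions only (the actual luminance fusion is performed by the trained CNN, and the chroma rule of section \ref{subsec:chrom} replaces the average):

```python
import numpy as np

def rgb_to_ycbcr(img):
    """JPEG-style BT.601 conversion; img is a float HxWx3 array in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.stack([r, g, b], axis=-1)

def fuse_pair(rgb1, rgb2, fuse_luma, fuse_chroma):
    """Fuse an exposure pair: a luma rule (CNN stand-in) on Y, a chroma rule on Cb/Cr."""
    y1, cb1, cr1 = rgb_to_ycbcr(rgb1)
    y2, cb2, cr2 = rgb_to_ycbcr(rgb2)
    y_f = fuse_luma(y1, y2)                       # the DeepFuse CNN would go here
    cb_f, cr_f = fuse_chroma(cb1, cb2), fuse_chroma(cr1, cr2)
    return ycbcr_to_rgb(y_f, cb_f, cr_f)

# Stand-in fusion rule (averaging), for illustration only.
avg = lambda p, q: 0.5 * (p + q)
img = np.random.rand(4, 4, 3) * 255.0
out = fuse_pair(img, img, avg, avg)               # fusing an image with itself
print(np.max(np.abs(out - img)))                  # ~0: the round trip is near-lossless
```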
\vspace{-0.2cm}
\subsection{DeepFuse CNN}
\vspace{-0.2cm}
\par The learning ability of a CNN is heavily influenced by the right choice of architecture and loss function. A simple and naive architecture would be a series of convolutional layers connected sequentially, with the exposure image pair stacked along the third dimension as input. Since the fusion then happens in the pixel domain itself, this type of architecture does not make much use of the feature learning ability of CNNs.
\par The proposed network architecture for image fusion is illustrated in Fig. \ref{fig:arch_types}. It has three components: feature extraction layers, a fusion layer and reconstruction layers. As shown in Fig. \ref{fig:arch_types}, the under-exposed and the over-exposed images ($Y_1$ and $Y_2$) are input to separate channels (channel 1 consists of C11 and C21, and channel 2 consists of C12 and C22). The first layer (C11 and C12) contains 5 $\times $ 5 filters to extract low-level features such as edges and corners. The weights of the pre-fusion channels are \emph{tied}: C11 and C12 (likewise C21 and C22) share the same weights. The advantage of this architecture is three-fold. First, we force the network to learn the same features for the input pair; that is, F11 and F21 are of the same feature type. Hence, we can simply combine the respective feature maps via the fusion layer: the first feature map of image 1 (F11) and the first feature map of image 2 (F21) are added, and this process is applied to the remaining feature maps as well. Moreover, adding the features resulted in better performance than other choices for combining features (see Table \ref{tab2}). In feature addition, similar feature types from both images are fused together. Optionally, one can choose to concatenate features; by doing so, the network has to figure out the weights to merge them. In our experiments, we observed that feature concatenation can also achieve similar results by increasing the number of training iterations and the number of filters and layers after C3. This is understandable, as the network needs more iterations to figure out appropriate fusion weights. In this tied-weights setting, we are enforcing the network to learn filters that are invariant to brightness changes. This is observed by visualizing the learned filters (see Fig. \ref{fig_layerweights}).
With tied weights, a few high-activation filters have center-surround receptive fields (typically observed in the retina). These filters have learned to remove the mean from the neighbourhood, thus effectively making the features brightness invariant. Second, the number of learnable filters is reduced by half. Third, as the network has a low number of parameters, it converges quickly. The features obtained from C21 and C22 are fused by the merge layer. The result of the fuse layer is then passed through another set of convolutional layers (C3, C4 and C5) to reconstruct the final result ($Y_{fused}$) from the fused features.
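A minimal numerical sketch of the tied-weight design follows. The channel widths, the filter sizes beyond the first layer and the ReLU activations are illustrative assumptions, not the exact configuration of the trained network; the sketch only demonstrates the weight sharing and the additive merge:

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded convolution: x (Cin,H,W), w (Cout,Cin,k,k) -> (Cout,H,W)."""
    co, ci, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1:]
    out = np.zeros((co, H, W))
    for o in range(co):
        for c in range(ci):
            for i in range(k):
                for j in range(k):
                    out[o] += w[o, c, i, j] * xp[c, i:i + H, j:j + W]
    return out

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(0)
# Pre-fusion weights are TIED: the same C1/C2 filters process both exposures.
W1 = rng.normal(0, 0.1, (8, 1, 5, 5))    # C1: 5x5 low-level filters
W2 = rng.normal(0, 0.1, (16, 8, 5, 5))   # C2 (size and width assumed)
W3 = rng.normal(0, 0.1, (16, 16, 5, 5))  # reconstruction layer C3
W4 = rng.normal(0, 0.1, (8, 16, 5, 5))   # C4
W5 = rng.normal(0, 0.1, (1, 8, 5, 5))    # C5 -> fused Y channel

def deepfuse_forward(y1, y2):
    f1 = relu(conv2d(relu(conv2d(y1[None], W1)), W2))  # features of exposure 1
    f2 = relu(conv2d(relu(conv2d(y2[None], W1)), W2))  # same weights -> same feature types
    fused = f1 + f2                                    # merge layer: feature addition
    h = relu(conv2d(relu(conv2d(fused, W3)), W4))
    return conv2d(h, W5)[0]                            # Y_fused of shape (H, W)

a, b = rng.random((12, 12)), rng.random((12, 12))
out = deepfuse_forward(a, b)
# Tied weights plus the additive merge make the fusion symmetric in its inputs:
print(out.shape, np.allclose(out, deepfuse_forward(b, a)))
```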
\vspace{-0.5cm}
\subsubsection{MEF SSIM loss function}
\label{subsec:loss_fun}
\vspace{-0.2cm}
\par In this section, we discuss how the loss is computed without a reference image, using the MEF SSIM image quality measure \cite{ma2015perceptual}. Let $\{{\mathbf{y}}_k\}$=$\{{\mathbf{y}}_k|k$=1,2$\}$ denote the image patches extracted at a pixel location $p$ from the input image pair and ${\mathbf{y}}_{f}$ denote the patch extracted from the CNN output (fused image) at the same location $p$. The objective is to compute a score that quantifies the fusion performance, given the input patches ${\mathbf{y}}_k$ and the fused image patch ${\mathbf{y}}_f$.
In SSIM \cite{wang2004image} framework, any patch can be modelled using three components: structure (${\mathbf{s}}$), luminance ($l$) and contrast ($c$). The given patch is decomposed into these three components as:
\begin{align}
{\mathbf{y}}_{k}=&\|{\mathbf{y}}_{k} - \mu _{{\mathbf{y}}_{k}}\| \cdot \frac {{\mathbf{y}}_{k} - \mu _{{\mathbf{y}}_{k}}}{\|{\mathbf{y}}_{k} - \mu _{{\mathbf{y}}_{k}}\|} + \mu _{{\mathbf{y}}_{k}} \notag \\
=&\|\tilde {\mathbf{y}}_{k}\| \cdot \frac {\tilde {\mathbf{y}}_{k}}{\|\tilde {\mathbf{y}}_{k}\|} + \mu _{{\mathbf{y}}_{k}} \notag \\
=&c_{k} \cdot {\mathbf{s}}_{k} + l_{k},
\end{align}
where $\parallel\cdot\parallel$ is the $\ell_2$ norm of the patch, $\mu _{{\mathbf{y}}_{k}}$ is the mean value of ${\mathbf{y}}_{k}$ and $\tilde {\mathbf{y}}_{k}$ is the mean-subtracted patch. As a higher contrast value means a better image, the desired contrast value ($\hat{c}$) of the result is taken as the highest contrast value among $\{c_k\}$, i.e., \[\hat{c} = \underset{\{k=1,2\}}{\max } c_{k}\]
The structure of the desired result ($\hat{{\mathbf{s}}}$) is obtained by weighted sum of structures of input patches as follows,
\begin{equation}
\bar {\mathbf{s}} = \frac {\sum _{k = 1}^{2}w\left ({\tilde {\mathbf{y}}_{k}}\right ){\mathbf{s}}_{k}}{\sum _{k = 1}^{2}w\left ({\tilde {\mathbf{y}}_{k}}\right )} \quad {\rm and} \quad \hat {\mathbf{s}} = \frac {\bar {\mathbf{s}}}{\|\bar {\mathbf{s}}\|},
\end{equation}
where the weighting function assigns a weight based on the structural consistency between the input patches. The weighting function assigns equal weights to the patches when they have dissimilar structural components. In the other case, when all input patches have similar structures, the patch with the highest contrast is given more weight, as it is more robust to distortions. The estimated $\hat{s}$ and $\hat{c}$ are combined to produce the desired result patch as,
\begin{equation}
\hat{{\mathbf{y}}} = \hat{c} \cdot \hat{s}
\end{equation}
As the luminance comparison in the local patches is insignificant, the luminance component is discarded from the above equation. Comparing luminance at a lower spatial resolution does not reflect the global brightness consistency. Instead, performing this operation at multiple scales effectively captures global luminance consistency at coarser scales and local structural changes at finer scales. The final image quality score for pixel $p$ is calculated using the SSIM framework,
\begin{equation}
Score(p) = \frac {2\sigma _{\hat {\mathbf{y}}{\mathbf{y}}_f} + C}{{\sigma ^{2}_{\hat {\mathbf{y}}}} + \sigma^{2}_{{\mathbf{y}}_f} + C},
\end{equation}
where, $\sigma^2_{\hat{{\mathbf{y}}}}$ is variance and $\sigma _{\hat {\mathbf{y}}{\mathbf{y}}_f}$ is covariance between $\hat{{\mathbf{y}}}$ and ${\mathbf{y}}_f$. The total loss is calculated as,
\begin{equation}
Loss = 1 - \frac{1}{N}\sum_{p\in P} Score(p)
\end{equation}
where $N$ is the total number of pixels in the image and $P$ is the set of all pixels in the input image. The computed loss is backpropagated to train the network. The better performance of MEF SSIM is attributed to its objective function, which maximizes the structural consistency between the fused image and each of the input images.
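The patch-level computation above can be sketched in a few lines of code. Note that the weighting function $w(\cdot)$ below simply uses the patch norm (contrast) as the weight, which is a simplification of the structural-consistency weighting of \cite{ma2015perceptual}:

```python
import numpy as np

def desired_patch(patches):
    """Mean-free 'desired' patch: highest contrast, weighted structure."""
    tiles = [p - p.mean() for p in patches]              # remove luminance l_k
    cs = [np.linalg.norm(t) for t in tiles]              # contrasts c_k = ||y~_k||
    ss = [t / (ck + 1e-12) for t, ck in zip(tiles, cs)]  # unit structures s_k
    c_hat = max(cs)                                      # desired contrast
    s_bar = sum(w * s for w, s in zip(cs, ss)) / (sum(cs) + 1e-12)  # weights w = c_k
    s_hat = s_bar / (np.linalg.norm(s_bar) + 1e-12)
    return c_hat * s_hat

def patch_score(y_hat, fused, C=1e-4):
    """SSIM-style score between the desired patch and the fused patch."""
    a, b = y_hat - y_hat.mean(), fused - fused.mean()
    cov = np.mean(a * b)
    return (2 * cov + C) / (a.var() + b.var() + C)

rng = np.random.default_rng(1)
p1, p2 = rng.random(49), rng.random(49)          # two flattened 7x7 input patches
y_hat = desired_patch([p1, p2])
s1 = patch_score(y_hat, y_hat)                   # ideal fusion attains the maximum, 1
s2 = patch_score(y_hat, rng.random(49))          # an unrelated patch scores lower
print(s1, s2 < s1)
```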
\begin{table}[t]
\centering
\caption{\textbf{Choice of blending operators}: Average MEF SSIM scores over 23 test images generated by CNNs trained with different feature blending operations. The maximum score is highlighted in bold. The results illustrate that adding the feature tensors yields the best performance. The results of the addition and mean methods are similar, as the two operations differ only by a scaling factor. Refer to the text for more details.}
\label{tab2}
\begin{tabular}{@{}ccccc@{}}
\toprule
Product & Concatenation & Max & Mean & Addition \\ \midrule
0.8210 & 0.9430 & 0.9638 & 0.9750 & \textbf{0.9782} \\ \bottomrule
\end{tabular}
\end{table}
\vspace{-0.2cm}
\subsection{Training}
\begin{figure*}[ht]
\centering
\mbox{\subfloat[Underexposed image] { \includegraphics[width =2.75cm]{./figures/input_images/Seq6/Img1.png}}}\hspace{0.001em}%
\mbox{\subfloat[Overexposed image] { \includegraphics[width =2.75cm]{./figures/input_images/Seq6/Img2.png}}}\hspace{0.001em}%
\mbox{\subfloat[Li \emph{et al.} \cite{shutao12}] { \includegraphics[width =2.75cm]{./figures/new_db/to_win/fmmr/House_Tom_Mertens09.png}}}\hspace{0.001em}%
\mbox{\subfloat[Li \emph{et al.} \cite{shutao13}] { \includegraphics[width =2.75cm]{./figures/new_db/to_win/gff/House_Tom_Mertens09.png}}}\hspace{0.001em}%
\mbox{\subfloat[Mertens \emph{et al.} \cite{mertens2007exposure}] { \includegraphics[width =2.75cm]{./figures/results/Seq6/Seq6_mertens.jpg}}}\hspace{0.001em}%
\mbox{\subfloat[Raman \emph{et al.} \cite{Shanmuga2011}] { \includegraphics[width =2.75cm]{./figures/new_db/to_win/raman/House_Tom_Mertens09.png}}}\hspace{0.002em}%
\mbox{\subfloat[Shen \emph{et al.} \cite{shen2011generalized}] { \includegraphics[width =2.75cm]{./figures/new_db/to_win/probabilistic/House_Tom_Mertens09.png}}}\hspace{0.002em}%
\mbox{\subfloat[Ma \emph{et al.} \cite{ma2015multi}] { \includegraphics[width =2.75cm]{./figures/new_db/to_win/ma/House_Tom_Mertens09.png}}}\hspace{0.002em}%
\mbox{\subfloat[Guo \emph{et al.} \cite{zhengguo2017detail}] { \includegraphics[width =2.75cm]{./figures/new_results/guo17/House_Tom_Mertens09_I2R_Enhanced.png}}}\hspace{0.002em
\mbox{\subfloat[DF-Baseline] { \includegraphics[width =2.75cm]{./figures/results/Seq6/Seq6_supervised.png}}}\hspace{0.002em}%
\mbox{\subfloat[DF-Unsupervised] { \includegraphics[width =2.75cm]{./figures/results/Seq6/Seq6_unsupervised.png}}}\hspace{0.002em
\caption{Results for House image sequence. Image courtesy of Kede ma. Best viewed in color.}
\label{fig:lighthouse}
\end{figure*}
\vspace{-0.2cm}
We have collected 25 exposure stacks that are publicly available \cite{EMPADataset}. In addition, we have curated 50 exposure stacks with different scene characteristics. The images were taken with a standard camera setup and a tripod. Each scene consists of 2 low dynamic range images with a $ \pm2 $ EV difference. The input sequences are resized to 1200 $ \times $ 800 pixels. We give priority to covering both indoor and outdoor scenes. From these input sequences, 30000 patches of size 64 $ \times$ 64 were cropped for training. We set the learning rate to $10^{-4} $ and train the network for 100 epochs, with all the training patches being processed in each epoch.
\vspace{-0.2cm}
\subsection{Testing}
\label{subsec:chrom}
\vspace{-0.2cm}
We follow the standard cross-validation procedure to train our model and test the final model on a disjoint test set to avoid over-fitting. At test time, the trained CNN takes the test image sequence and generates the luminance channel ($Y_{fused}$) of the fused image. The chrominance components of the fused image, $Cb_{fused}$ and $Cr_{fused}$, are obtained as a weighted sum of the input chrominance channel values.
The crucial structural details of the image tend to be present mainly in the $Y$ channel. Thus, different fusion strategies are followed in the literature for $Y$ and $Cb$/$Cr$ fusion (\cite{prabhakar2016ghosting}, \cite{tico2009image}, \cite{wang2009exposure}). Moreover, the MEF SSIM loss is formulated to compute a score between two gray-scale ($Y$) images; thus, measuring MEF SSIM for the $Cb$ and $Cr$ channels may not be meaningful. Alternatively, one could choose to fuse the RGB channels separately using different networks. However, there is typically a large correlation between the RGB channels; fusing them independently fails to capture this correlation and introduces noticeable color differences. Also, MEF-SSIM is not designed for RGB channels. Another alternative is to regress RGB values in a single network, then convert them to a $Y$ image and compute the MEF SSIM loss. Here, the network can focus more on improving the $Y$ channel, giving less importance to color. However, we observed spurious colors in the output that were not present in the input.
We follow the procedure used by Prabhakar \textit{et al.} \cite{prabhakar2016ghosting} for chrominance channel fusion. If $x_1$ and $x_2$ denote the $Cb$ (or $Cr$) channel values at any pixel location for the image pair, then the fused chrominance value $x$ is obtained as follows,
\begin{equation}
x = \dfrac{x_1 (|x_1 - \tau|) + x_2 (|x_2 - \tau|)}{ |x_1 - \tau| + |x_2 - \tau|}
\label{eqn:chrom}
\end{equation}
The fused chrominance value is thus obtained by weighting each of the two chrominance values by its absolute deviation from $\tau$, where $\tau$ is chosen as 128. The intuition behind this approach is to give more weight to good color components and less to saturated color values. The final result is obtained by converting the \{$Y_{fused}$, $Cb_{fused}$, $Cr_{fused}$\} channels into an RGB image.
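Eq. \ref{eqn:chrom} can be sketched in a few lines of NumPy. This is an illustrative sketch: the function name is ours, and the fallback for the degenerate case $x_1 = x_2 = \tau$ (where both weights vanish and the equation is undefined) is our own assumption:

```python
import numpy as np

def fuse_chrominance(x1, x2, tau=128.0):
    """Fuse two chrominance (Cb or Cr) channels per Eq. (1).

    Each value is weighted by its absolute deviation from the neutral
    value tau, so values close to tau (typical of saturated/clipped
    regions, which carry little color information) receive less weight.
    """
    x1 = np.asarray(x1, dtype=np.float64)
    x2 = np.asarray(x2, dtype=np.float64)
    w1 = np.abs(x1 - tau)
    w2 = np.abs(x2 - tau)
    denom = w1 + w2
    # Guard the x1 == x2 == tau case, where both weights are zero.
    safe = np.where(denom > 0, denom, 1.0)
    return np.where(denom > 0, (x1 * w1 + x2 * w2) / safe, tau)
```

For example, if one input's chrominance equals the neutral value 128 it contributes nothing, and the fused value comes entirely from the other input.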
\begin{figure*}[!ht]
\centering
\subfloat[Underexposed input]{\includegraphics[width=1.1in]{./figures/new_db/to_win/2_images/Hostel1/img1.JPG}%
\label{fig1_ue_case}}
\hspace{0.01cm}
\subfloat[Overexposed input]{\includegraphics[width=1.1in]{./figures/new_db/to_win/2_images/Hostel1/img3.JPG}%
\label{fig1_ue_case}}
\hspace{0.01cm}
\subfloat[Mertens \emph{et al.} \cite{mertens2007exposure}]{\includegraphics[width=1.1in]{./figures/new_db/to_win/mertens/Hostel1.png}%
\label{fig1_s24_mer}}
\hspace{0.01cm}
\subfloat[Zoomed result of (c)]{\includegraphics[width=1.15in]{./figures/zoomed/Hostel1_mertens_zoomed.png}%
\label{fig1_s24_mer}}
\hspace{0.01cm}
\subfloat[DF - Unsupervised]{\includegraphics[width=1.1in]{./figures/new_db/to_win/color_results/Hostel1.png}%
\label{fig1_s24_mer}}
\hspace{0.01cm}
\subfloat[Zoomed result of (e)]{\includegraphics[width=1.1in]{./figures/zoomed/Hostel1_df_zoomed.png}%
\label{fig1_s24_mer}}
\\
\subfloat[Underexposed input]{\includegraphics[width=1.25in]{./figures/new_db/to_win/2_images/Seq_24/Img1.jpg}%
\label{fig1_ue_case}}
\hspace{0.01cm}
\subfloat[Overexposed input]{\includegraphics[width=1.25in]{./figures/new_db/to_win/2_images/Seq_24/Img3.jpg}%
\label{fig1_ue_case}}
\hspace{0.01cm}
\subfloat[Mertens \emph{et al.} \cite{mertens2007exposure}]{\includegraphics[width=1.25in]{./figures/new_db/to_win/mertens/Seq_24.png}%
\label{fig1_s24_mer}}
\hspace{0.01cm}
\subfloat[Zoomed result of (i)]{\includegraphics[width=0.82in]{./figures/zoomed/Seq_24_mertens_zoomed.png}%
\label{fig1_s24_mer}}
\hspace{0.01cm}
\subfloat[DF - Unsupervised]{\includegraphics[width=1.25in]{./figures/new_db/to_win/color_results/Seq_24.png}%
\label{fig1_s24_mer}}
\hspace{0.01cm}
\subfloat[Zoomed result of (k)]{\includegraphics[width=0.82in]{./figures/zoomed/Seq_24_df_zoomed.png}%
\label{fig1_s24_mer}}
\hspace{0.01cm}
\caption{Comparison of the proposed method with Mertens \emph{et al.} \cite{mertens2007exposure}. The zoomed region of the result by Mertens \emph{et al.} in (d) shows that some highlight regions are not completely retained from the input. The zoomed region of the result by Mertens \emph{et al.} in (j) shows that fine details of the lamp are missing.}
\label{fig_hostel1}
\end{figure*}
\begin{table*}[th]
\tiny{
\centering
\caption{MEF SSIM scores of different methods against DeepFuse (DF) for the test images. Bold values indicate the highest score among all algorithms for that row's image sequence.}
\label{tab:mef_ssim}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
\textit{} & \textbf{{ \textit{Mertens09}}} & \textbf{{ \textit{Raman11}}} & \textbf{{ \textit{Li12}}} & \textbf{ \textit{Li13}} & \textbf{ \textit{Shen11}} & \textbf{ \textit{Ma15}} & \textbf{ \textit{Guo17}} & \textbf{ \textit{DF-Baseline}} & \textbf{ \textit{DF-UnSupervised}} \\ \hline \hline
\textit{AgiaGalini} & 0.9721 & 0.9343 & 0.9438 & 0.9409 & 0.8932 & 0.9465 & 0.9492 & 0.9477 & \textbf{0.9813} \\ \hline
\textit{Balloons} & 0.9601 & 0.897 & 0.9464 & 0.9366 & 0.9252 & 0.9608 & 0.9348& 0.9717 & \textbf{0.9766} \\ \hline
\textit{Belgium house} & 0.9655 & 0.8924 & 0.9637 & 0.9673 & 0.9442 & 0.9643 & 0.9706&0.9677 & \textbf{0.9727} \\ \hline
\textit{Building} & 0.9801 & 0.953 & 0.9702 & 0.9685 & 0.9513 & 0.9774 & 0.9666&0.965 & \textbf{0.9826} \\ \hline
\textit{Cadik lamp} & 0.9658 & 0.8696 & 0.9472 & 0.9434 & 0.9152 & 0.9464 &0.9484& \textbf{0.9683} & 0.9638 \\ \hline
\textit{Candle} & 0.9681 & 0.9391 & 0.9479 & 0.9017 & 0.9441 & 0.9519 & 0.9451&0.9704 & \textbf{0.9893} \\ \hline
\textit{Chinese garden} & \textbf{0.990} & 0.8887 & 0.9814 & 0.9887 & 0.9667 & \textbf{0.990} & 0.9860&0.9673 & 0.9838 \\ \hline
\textit{Corridor} & 0.9616 & 0.898 & 0.9709 & 0.9708 & 0.9452 & 0.9592 &0.9715& \textbf{0.9740} & \textbf{0.9740} \\ \hline
\textit{Garden} & 0.9715 & 0.9538 & 0.9431 & 0.932 & 0.9136 & 0.9667 &0.9481& 0.9385 & \textbf{0.9872} \\ \hline
\textit{Hostel} & 0.9678 & 0.9321 & 0.9745 & 0.9742 & 0.9649 & 0.9712 &0.9757& 0.9715 & \textbf{0.985} \\ \hline
\textit{House} & \textbf{0.9748} & 0.8319 & 0.9575 & 0.9556 & 0.9356 & 0.9365 &0.9623& 0.9601 & 0.9607 \\ \hline
\textit{Kluki Bartlomiej} & \textbf{0.9811} & 0.9042 & 0.9659 & 0.9645 & 0.9216 & 0.9622 &0.9680& 0.9723 & 0.9742 \\ \hline
\textit{Landscape} & 0.9778 & 0.9902 & 0.9577 & 0.943 & 0.9385 & 0.9817 &0.9467& 0.9522 & \textbf{0.9913} \\ \hline
\textit{Lighthouse} & 0.9783 & 0.9654 & 0.9658 & 0.9545 & 0.938 & 0.9702 &0.9657& 0.9728 & \textbf{0.9875} \\ \hline
\textit{Madison capitol} & 0.9731 & 0.8702 & 0.9516 & 0.9668 & 0.9414 & 0.9745 &0.9711& 0.9459 & \textbf{0.9749} \\ \hline
\textit{Memorial} & 0.9676 & 0.7728 & 0.9644 & \textbf{0.9771} & 0.9547 & 0.9754 &0.9739& 0.9727 & 0.9715 \\ \hline
\textit{Office} & \textbf{0.9749} & 0.922 & 0.9367 & 0.9495 & 0.922 & 0.9746 &0.9624& 0.9277 & \textbf{0.9749} \\ \hline
\textit{Room} & 0.9645 & 0.8819 & 0.9708 & \textbf{0.9775} & 0.9543 & 0.9641 &0.9725& 0.9767 & 0.9724 \\ \hline
\textit{SwissSunset} & 0.9623 & 0.9168 & 0.9407 & 0.9137 & 0.8155 & 0.9512 &0.9274& 0.9736 & \textbf{0.9753} \\ \hline
\textit{Table} & 0.9803 & 0.9396 & 0.968 & 0.9501 & 0.9641 & 0.9735 &0.9750& 0.9468 & \textbf{0.9853} \\ \hline
\textit{TestChart1} & 0.9769 & 0.9281 & 0.9649 & 0.942 & 0.9462 & 0.9529 &0.9617& 0.9802 & \textbf{0.9831} \\ \hline
\textit{Tower} & \textbf{0.9786} & 0.9128 & 0.9733 & 0.9779 & 0.9458 & 0.9704 &0.9772& 0.9734 & 0.9738 \\ \hline
\textit{Venice} & \textbf{0.9833} & 0.9581 & 0.961 & 0.9608 & 0.9307 & 0.9836 &0.9632& 0.9562 & 0.9787 \\ \hline
\end{tabular}%
}}
\end{table*}
\begin{figure*}[ht]
\centering
\subfloat{\includegraphics[width=1.2in]{./figures/new_db/to_win/2_images/Balloons_Erik_Reinhard/img11.png}%
\label{fig1_ue_case}}
\hspace{0.05cm}
\subfloat{\includegraphics[width=1.05in]{./figures/new_db/to_win/2_images/Balloons_Erik_Reinhard/img3.png}%
\label{fig1_ue_case}}
\hspace{0.05cm}
\subfloat{\includegraphics[width=1.05in]{./figures/new_db/to_win/fmmr/Balloons_Erik_Reinhard.png}%
\label{fig1_s24_mer}}
\hspace{0.05cm}
\subfloat{\includegraphics[width=1.05in]{./figures/new_db/to_win/gff/Balloons_Erik_Reinhard.png}%
\label{fig1_s24_mer}}
\hspace{0.05cm}
\subfloat{\includegraphics[width=1.05in]{./figures/new_db/to_win/probabilistic/Balloons_Erik_Reinhard.png}%
\label{fig1_s24_mer}}
\hspace{0.05cm}
\subfloat{\includegraphics[width=1.05in]{./figures/new_db/to_win/color_results/Balloons_Erik_Reinhard.png}%
\label{fig1_s24_mer}}
\\\vspace{-1.5mm}
\setcounter{sub\@captype}{0}
\subfloat[UE input]{\includegraphics[width=1.2in]{./figures/new_db/to_win/2_images/Office_Matlab/img11.png}%
\label{fig1_ue_case}}
\hspace{0.05cm}
\subfloat[OE input]{\includegraphics[width=1.05in]{./figures/new_db/to_win/2_images/Office_Matlab/img3.png}%
\label{fig1_ue_case}}
\hspace{0.05cm}
\subfloat[Li \emph{et al.} \cite{shutao12}]{\includegraphics[width=1.05in]{./figures/new_db/to_win/fmmr/Office_Matlab.png}%
\label{fig1_s24_mer}}
\hspace{0.05cm}
\subfloat[Li \emph{et al.} \cite{shutao13}]{\includegraphics[width=1.05in]{./figures/new_db/to_win/gff/Office_Matlab.png}%
\label{fig1_s24_mer}}
\hspace{0.05cm}
\subfloat[Shen \emph{et al.} \cite{shen2011generalized}]{\includegraphics[width=1.05in]{./figures/new_db/to_win/probabilistic/Office_Matlab.png}%
\label{fig1_s24_mer}}
\hspace{0.05cm}
\subfloat[DeepFuse]{\includegraphics[width=1.05in]{./figures/new_db/to_win/color_results/Office_Matlab.png}%
\label{fig1_s24_mer}}
\caption{Comparison of the proposed method with Li \emph{et al.} \cite{shutao12}, Li \emph{et al.} \cite{shutao13} and Shen \emph{et al.} \cite{shen2011generalized} for \textit{Balloons} and \textit{Office}. Image courtesy of Kede Ma.}
\label{fig_shen11}
\end{figure*}
\vspace{-0.2cm}
\section{Experiments and Results}
\label{sec:exp_results}
\vspace{-0.2cm}
We have conducted an extensive evaluation and comparison study against state-of-the-art algorithms for a variety of natural images. For evaluation, we have chosen standard image sequences that cover different image characteristics, including indoor and outdoor, day and night, natural and artificial lighting, and linear and non-linear exposure. The proposed algorithm is compared against seven of the best-performing MEF algorithms: (1) Mertens09 \cite{mertens2007exposure}, (2) Li13 \cite{shutao13}, (3) Li12 \cite{shutao12}, (4) Ma15 \cite{ma2015multi}, (5) Raman11 \cite{Shanmuga2011}, (6) Shen11 \cite{shen2011generalized}, and (7) Guo17 \cite{zhengguo2017detail}. In order to evaluate the performance of the algorithms objectively, we adopt MEF SSIM. Although a number of other IQA models for general image fusion have been reported, none of them makes adequate quality predictions of subjective opinions \cite{ma2015perceptual}.
\vspace{-0.2cm}
\subsection{DeepFuse - Baseline}
\vspace{-0.2cm}
So far, we have discussed training the CNN model in an unsupervised manner. One interesting variant would be to train the CNN model with the results of other state-of-the-art methods as ground truth. This experiment tests the capability of the CNN to learn complex fusion rules from data itself, without the help of the MEF SSIM loss function. The ground truth is selected as the better of the Mertens \cite{mertens2007exposure} and GFF \cite{shutao13} results based on the MEF SSIM score\footnote{In a user survey conducted by Ma \textit{et al.} \cite{ma2015perceptual}, Mertens and GFF results are ranked better than those of other MEF algorithms}. The choice of loss function to calculate the error between ground truth and estimated output is crucial for training a CNN in a supervised fashion. The mean squared error, or $\ell_2$ loss, is generally chosen as the default cost function for training CNNs and is desirable for its smooth optimization properties. While the $\ell_2$ loss function is well suited for classification tasks, it may not be the correct choice for image processing tasks \cite{zhao2015loss}. It is also a well-known phenomenon that MSE does not correlate well with the human perception of image quality \cite{wang2004image}. In order to obtain a visually pleasing result, the loss function should be well correlated with the human visual system (HVS), like the Structural Similarity Index (SSIM) \cite{wang2004image}. We have experimented with different loss functions such as $\ell_1$, $\ell_2$ and SSIM.
\begin{figure}[th]
\centering
\subfloat[Underexposed image]{\includegraphics[width=1in]{./figures/new_db/to_win/2_images/Table/img1.jpg}%
\label{fig1_ue_case}}
\hspace{0.05cm}
\subfloat[Ma \emph{et al.} \cite{ma2015multi}]{\includegraphics[width=1in]{./figures/new_db/to_win/ma/Table.png}%
\label{fig1_ue_case}}
\hspace{0.05cm}
\subfloat[Zoomed result of (b)]{\includegraphics[width=0.72in]{./figures/zoomed/table_ma.png}%
\label{fig1_s27_ma}}
\\
\subfloat[Overexposed image]{\includegraphics[width=1in]{./figures/new_db/to_win/2_images/Table/img3.jpg}%
\label{fig1_oe_case}}
\hspace{0.05cm}
\subfloat[DF - Unsupervised]{\includegraphics[width=1in]{./figures/new_db/to_win/color_results/Table.png}%
\label{fig1_oe_case}}
\hspace{0.05cm}
\subfloat[Zoomed result of (e)]{\includegraphics[width=0.7in]{./figures/zoomed/table_df.png}%
\label{fig1_s27_df}}
\caption{Comparison of the proposed method with Ma \emph{et al.} \cite{ma2015multi} for the \textit{Table} sequence. The zoomed region of the result by Ma \emph{et al.} \cite{ma2015multi} shows the artificial halo artifact around the edges of the lamp. Image courtesy of Kede Ma.}
\label{fig_seq27}
\end{figure}
The fused image appears blurred when the CNN is trained with the $\ell_2$ loss function. This effect, termed \textit{regression to the mean}, is due to the fact that the $\ell_2$ loss compares the result and the ground truth in a pixel-by-pixel manner. The $\ell_1$ loss gives a sharper result than $\ell_2$, but it introduces a halo effect along the edges. Unlike $\ell_1$ and $\ell_2$, the results of a CNN trained with the SSIM loss function are both sharp and artifact-free. Therefore, SSIM is used as the loss function between the generated output and the ground truth in this experiment.
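As a minimal sketch of the loss comparison described above, a simplified single-window SSIM can be written as follows. The standard (MEF-)SSIM used for training operates on sliding local windows; this global variant, with function names of our own choosing, only illustrates the luminance, contrast and structure terms the loss is built from:

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two grayscale images.

    c1 and c2 are the usual stabilizing constants; L is the dynamic
    range of the pixel values (255 for 8-bit images).
    """
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_loss(pred, target):
    """Supervised training loss in this sketch: 1 - SSIM, so that
    a perfect reconstruction gives zero loss."""
    return 1.0 - ssim_global(pred, target)
```

Identical images yield an SSIM of 1 (zero loss), while blurring or noise reduces the structure term and increases the loss, which is why SSIM penalizes the over-smoothed outputs that $\ell_2$ tolerates.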
The quantitative comparison between the DeepFuse baseline and the unsupervised method is shown in Table \ref{tab:mef_ssim}. The MEF SSIM scores in Table \ref{tab:mef_ssim} show the superior performance of unsupervised DeepFuse over the baseline method on almost all test sequences. The reason is that, for the baseline method, the amount of learning is upper-bounded by the other algorithms, as its ground truth comes from Mertens \emph{et al.} \cite{mertens2007exposure} or Li \emph{et al.} \cite{shutao13}. We see from Table \ref{tab:mef_ssim} that the baseline method does not exceed either of them.
The idea behind this experiment is to combine the advantages of all previous methods while avoiding the shortcomings of each. From Fig. \ref{fig:lighthouse}, we can observe that even though DF-baseline is trained on the results of other methods, it can produce results that do not exhibit any of the artifacts observed in those results.
\begin{figure}[h]
\centering
\subfloat[Ma \emph{et al.} \cite{ma2015multi}]{\includegraphics[width=1in]{./figures/new_db/to_win/ma/Lighthouse_HDRsoft.png}%
\label{fig1_ue_case}}
\hspace{0.005cm}
\subfloat[Zoomed result of (a)]{\includegraphics[width=1.45in]{./figures/zoomed/lighthouse_ma_zoomed.png}%
\label{fig1_lgt_ma}}
\\
\subfloat[DF - Unsupervised]{\includegraphics[width=1in]{./figures/new_db/to_win/color_results/Lighthouse_HDRsoft.png}%
\label{fig1_oe_case}}
\hspace{0.005cm}
\subfloat[Zoomed result of (c)]{\includegraphics[width=1.43in]{./figures/zoomed/lighthouse_df_zoomed.png}%
\label{fig1_lgt_df}}
\caption{Comparison of the proposed method with Ma \emph{et al.} \cite{ma2015multi}. A close-up look on the results for \textit{Lighthouse} sequence. The results by Ma \emph{et al.} \cite{ma2015multi} show a halo effect along the roof and lighthouse. Image courtesy of Kede Ma.}
\label{fig_lighthouse}
\end{figure}
\vspace{-0.2cm}
\subsection{Comparison with State-of-the-art}
\vspace{-0.2cm}
\textit{Comparison with Mertens} \emph{et al.}: Mertens \emph{et al.} \cite{mertens2007exposure} is a simple and effective weighting-based image fusion technique with multi-resolution blending to produce smooth results. However, it suffers from the following shortcomings: (a) it picks the ``best'' parts of each image for fusion using hand-crafted features such as saturation and well-exposedness. This approach works well for image stacks with many exposures, but for exposure image pairs it fails to maintain uniform brightness across the whole image. Compared to Mertens \emph{et al.}, DeepFuse produces images with consistent and uniform brightness across the whole image. (b) Mertens \emph{et al.} does not preserve complete image details from the underexposed image. In Fig. \ref{fig_hostel1}(d), the details of the tile area are missing in the result of Mertens \emph{et al.} The same is the case in Fig. \ref{fig_hostel1}(j): the fine details of the lamp are not present in the result of Mertens \emph{et al.} In contrast, DeepFuse has learned filters that extract features such as edges and textures in C1 and C2, and preserves the finer structural details of the scene.
\textit{Comparison with Li} \emph{et al.} \cite{shutao12} \cite{shutao13}: It can be noted that, similar to Mertens \emph{et al.} \cite{mertens2007exposure}, Li \emph{et al.} \cite{shutao12} \cite{shutao13} also suffer from the non-uniform brightness artifact (Fig. \ref{fig_shen11}). In contrast, our algorithm provides a more pleasing image with clear texture details.
\textit{Comparison with Shen} \emph{et al.} \cite{shen2011generalized}: The results generated by Shen \emph{et al.} show contrast loss and non-uniform brightness distortions (Fig. \ref{fig_shen11}). In Fig. \ref{fig_shen11}(e1), the brightness distortion is present in the cloud region: the cloud regions between the balloons appear darker compared to other regions. This distortion can be observed in other test images as well (Fig. \ref{fig_shen11}(e2)). In contrast, DeepFuse (Fig. \ref{fig_shen11}(f1) and (f2)) has learned to produce results without any of these artifacts.
\begin{figure}[t]
\centering
\includegraphics[width=3in]{./figures/tiled_filters_2rows.png}
\caption{\textbf{Filter Visualization.} Some of the filters learnt in first layer resemble Gaussian, Difference of Gaussian and Laplacian of Gaussian filters. Best viewed electronically, zoomed in.}
\label{fig_layerweights}
\end{figure}
\textit{Comparison with Ma} \emph{et al.} \cite{ma2015multi}: Figs. \ref{fig_seq27} and \ref{fig_lighthouse} show comparisons between the results of Ma \emph{et al.} and DeepFuse for the Table and Lighthouse sequences. Ma \emph{et al.} proposed a patch-based fusion algorithm that fuses patches from the input images based on their patch strength, which is calculated using a power weighting function on each patch. This method of weighting introduces an unpleasant halo effect along edges (see Figs. \ref{fig_seq27} and \ref{fig_lighthouse}).
\textit{Comparison with Raman} \emph{et al.} \cite{Shanmuga2011}: Fig. \ref{fig:lighthouse}(f) shows the fused result by Raman \emph{et al.} for the House sequence. The result exhibits color distortion and contrast loss. In contrast, the proposed method produces a result with vivid colors and better contrast.
After examining the results through both subjective and objective evaluations, we observed that our method is able to faithfully reproduce all the features in the input pair. We also notice that the results obtained by DeepFuse are free of artifacts such as darker regions and mismatched colors. Our approach preserves the finer image details along with higher contrast and vivid colors. The quantitative comparison between the proposed method and existing approaches in Table \ref{tab:mef_ssim} also shows that the proposed method outperforms the others in most of the test sequences. From the execution times shown in Table \ref{tab:comp_time}, we can observe that our method is roughly 3-4$\times$ faster than Mertens \emph{et al}. DeepFuse can easily be extended to more input images by adding additional streams before the merge layer. We have trained DeepFuse for sequences with 3 and 4 images. For sequences with 3 images, the average MEF SSIM score is 0.987 for DF and 0.979 for Mertens \textit{et al}. For sequences with 4 images, the average MEF SSIM score is 0.972 for DF and 0.978 for Mertens \textit{et al}. We attribute the dip in performance on 4-image sequences to insufficient training data; with more training data, DF can be trained to perform better in such cases as well.
\begin{figure}[t]
\centering
\subfloat{\includegraphics[width=1.05in]{./figures/multimodal/input/source09_1.png}%
\label{fig1_ue_case}}
\hspace{0.02cm}
\subfloat{\includegraphics[width=1.05in]{./figures/multimodal/input/source09_2.png}%
\label{fig1_ue_case}}
\hspace{0.02cm}
\subfloat{\includegraphics[width=1.05in]{./figures/multimodal/result/source09_imad.png}%
\label{fig1_ue_case}}
\hspace{0.02cm}
\setcounter{sub\@captype}{0}
\\\vspace{-1.5mm}
\subfloat[Near focused image]{\includegraphics[width=1.05in]{./figures/multimodal/input/source13_1.png}%
\label{fig1_s24_mer}}
\hspace{0.02cm}
\subfloat[Far focused image]{\includegraphics[width=1.05in]{./figures/multimodal/input/source13_2.png}%
\label{fig1_s24_mer}}
\hspace{0.02cm}
\subfloat[DF result]{\includegraphics[width=1.05in]{./figures/multimodal/result/source13_imad.png}%
\label{fig1_fmmr_case}}
\hspace{0.02cm}
\caption{Application of DeepFuse CNN to multi-focus fusion. The first two columns show the input images with varying focus. The all-in-focus result by DeepFuse is shown in the third column. Images courtesy of Liu \emph{et al.} \cite{liu2015multi} and Slavica Savic.}
\label{fig_focus}
\end{figure}
\subsection{Application to Multi-Focus Fusion}
In this section, we discuss the possibility of applying our DeepFuse model to other image fusion problems. Due to the limited depth-of-field of present-day cameras, only objects within a limited range of depth are in focus while the remaining regions appear blurry. In such scenarios, Multi-Focus Fusion (MFF) techniques are used to fuse images taken with varying focus in order to generate a single all-in-focus image. The MFF problem is very similar to MEF, except that the input images have varying focus instead of the varying exposure of MEF. To test the generalizability of the CNN, we used the already trained DeepFuse CNN to fuse multi-focus images without any fine-tuning for the MFF problem. The DeepFuse results on a publicly available multi-focus dataset (Fig. \ref{fig_focus}) show that the CNN filters have learned to identify the proper regions in each input image and fuse them together successfully. This also suggests that the learned CNN filters are generic and can be applied to general image fusion.
\begin{table}[t]
\centering
\caption{\textbf{Computation time}: Running time in seconds of different algorithms on a pair of images. The numbers in bold denote the least amount of time taken to fuse. $\ddagger$: tested with NVIDIA Tesla K20c GPU, $\dagger$: tested with Intel\textsuperscript{\tiny\textregistered} Xeon @ 3.50 GHz CPU}
\label{tab:comp_time}
\begin{tabular}{@{}rcccc@{}}
\toprule
\multicolumn{1}{c}{Image size} & Ma$15^\dagger$ & Li$13^\dagger$ & Mertens$07^\dagger$ & $DF^\ddagger$ \\ \midrule
512*384 & 2.62 & 0.58 & 0.28 & \textbf{0.07} \\
1024*768 & 9.57 & 2.30 & 0.96 & \textbf{0.28} \\
1280*1024 & 14.72 & 3.67 & 1.60 & \textbf{0.46} \\
1920*1200 & 27.32 & 6.60 & 2.76 & \textbf{0.82} \\ \bottomrule
\end{tabular}
\end{table}
\section{Conclusion and Future work}
\label{sec:typestyle}
In this paper, we have proposed a method to efficiently fuse a pair of images with varied exposure levels to produce an output that is artifact-free and perceptually pleasing. DeepFuse is the first unsupervised deep learning method to perform static MEF. The proposed model extracts a set of common low-level features from each input image. The feature pairs of all input images are fused into a single feature map by the merge layer. Finally, the fused features are fed to the reconstruction layers to obtain the final fused image. We train and test our model on a large set of exposure stacks captured with diverse settings. Furthermore, our model requires no parameter fine-tuning for varying input conditions. Finally, through extensive quantitative and qualitative evaluation, we demonstrate that the proposed architecture performs better than state-of-the-art approaches for a wide range of input scenarios.
\par In summary, the advantages offered by DF are as follows: 1) Better fusion quality: DF produces better fusion results even for extreme exposure image pairs. 2) SSIM over $\ell_1$: In \cite{zhao2015loss}, the authors report that the $\ell_1$ loss outperforms the SSIM loss function. In their work, the authors implemented an approximate version of SSIM and found it to perform sub-par compared to $\ell_1$. We have implemented the exact SSIM formulation and observed that the SSIM loss function performs much better than MSE and $\ell_1$. Further, we have shown that a complex perceptual loss such as MEF SSIM can be successfully incorporated with CNNs in the absence of ground truth data. These results encourage the research community to examine other perceptual quality metrics and use them as loss functions to train neural networks. 3) Generalizability to other fusion tasks: The proposed fusion is generic in nature and can easily be adapted to other fusion problems as well. In our current work, DF is trained to fuse static images. For future research, we aim to generalize DeepFuse to fuse images with object motion as well.
{\small
\bibliographystyle{ieee}
\section{Introduction}
\label{intro}
Designing an intelligent dialog system that not only matches or surpasses a human's level at carrying out an interactive conversation, but also answers questions on a variety of topics, ranging from recent news about NASA to the biography of a famous political leader, has been one of the outstanding goals in the field of artificial intelligence (AI) \cite{DBLP:journals/ftir/GaoGL19}.
A rapidly increasing number of research papers demonstrates the promising potential of conversational AI and the growing interest of researchers from both academia and industry.
Conversational AI constitutes an integral part of Natural User Interfaces \cite{DBLP:journals/ftir/GaoGL19} and is attracting significant attention from researchers in the Information Retrieval (IR), Natural Language Processing (NLP), and Deep Learning (DL) communities. For example, AAAI 2020 introduced a special workshop on ``Reasoning for Complex Question Answering'' that featured a special focus on machine intelligence and common-sense reasoning. Similarly, SIGIR 2018 introduced a new track entitled ``Artificial Intelligence, Semantics and Dialog'' to bridge the gap between IR and AI, with a particular focus on QA, conversational dialog agents, and deep learning for IR and agents. EMNLP, one of the top conferences in NLP, has had a track called ``Information Retrieval and Question Answering'' for years, and from 2019 onwards it has invited papers for ``Question Answering'' as a separate track owing to the increasing research interest of the community and the field's fast-paced growth.
The field of conversational AI can be divided into three groups, namely: i) \textit{task-oriented dialog systems}, which perform tasks on the users' behalf such as making a reservation in a restaurant or scheduling an event; ii) \textit{chat-oriented dialog systems}, which carry out a natural and interactive conversation with the users; and iii) \textit{QA dialog systems}, which are responsible for providing clear and concise answers to the users' questions based on information deduced from different data sources such as text documents or knowledge bases. Examples of each of the aforementioned categories are given in Fig.~\ref{fig:types}. The conversation shown in Fig.~\ref{fig:types} comprises multiple turns, and each turn consists of a question and an answer \cite{DBLP:journals/tacl/ReddyCM19}.
\begin{figure}[htb!]
\center
\includegraphics[width=0.745\textwidth]{conversation.pdf}
\caption{Categorizations of conversational AI. Turns 1-3 depict a chat-oriented dialog system, turn 4 portrays a QA dialog system, and turns 5-7 reflect a task-oriented conversation.}
\label{fig:types}
\end{figure}
Chat-oriented and task-oriented dialog systems have been well-researched topics, resulting in a number of successful dialog agents such as Amazon Alexa\footnote{https://www.amazon.com.au/b?node=5425666051}, Apple Siri\footnote{https://www.apple.com/au/siri/}, and Microsoft Cortana\footnote{https://www.microsoft.com/en-us/cortana}. However, QA dialog systems are fairly new and still require extensive research.
Many QA challenges have been identified and initial solutions have been proposed \cite{DBLP:conf/emnlp/RajpurkarZLL16,DBLP:journals/tacl/KociskySBDHMG18,DBLP:conf/acl/JoshiCWZ17,DBLP:conf/naacl/SusterD18,DBLP:conf/emnlp/ZellersBSC18,DBLP:journals/pvldb/CuiXWSHW17,DBLP:conf/coling/BaoDYZZ16,DBLP:conf/semweb/TrivediMDL17}, giving rise to \textit{Conversational Question Answering} (CQA).
CQA techniques form the building blocks of QA dialog systems. The idea behind CQA is to ask the machine to answer a question based on a provided passage, and this, in turn, has the potential to revolutionize the way humans interact with machines. However, this interaction can turn into a multi-turn conversation if a user requires more detailed information about the question.
The notion of CQA can be thought of as a simplified but concrete conversational search setting \cite{DBLP:conf/sigir/Qu0QCZI19}, wherein the system returns one correct answer to a user's question instead of a list of relevant documents or links as is the case with traditional search engines. The top search engine companies such as Microsoft and Google have incorporated CQA into their mobile-based search engines (also known as \textit{digital assistants}) to improve the users' experience when interacting with them.
CQA is an effective way for humans to gather information and is considered a benchmark task to evaluate a machine's capability to understand and comprehend input provided in written natural language \cite{DBLP:journals/corr/abs-1812-03593}. Such CQA systems have significant applications in areas like customer service support \cite{DBLP:conf/acl/CuiHWTDZ17} and QA dialog systems \cite{DBLP:journals/tacl/ReddyCM19, DBLP:conf/emnlp/HewlettJLG17}. The task of CQA poses several challenges to researchers, which has resulted in considerable interesting and innovative research over the past few years.
\vspace{2mm}
\noindent\textbf{Papers' Selection:} The research papers reviewed in this survey are high-quality papers selected from the top NLP and AI conferences, including but not limited to, ACL\footnote{https://www.aclweb.org/}, SIGIR\footnote{https://sigir.org/}, NeurIPS\footnote{https://nips.cc/}, NAACL\footnote{https://naacl.org/}, EMNLP\footnote{https://sigdat.org/}, ICLR\footnote{https://iclr.cc/}, AAAI\footnote{https://www.aaai.org/}, IJCAI\footnote{https://www.ijcai.org/}, CIKM\footnote{http://www.cikmconference.org/}, SIGKDD\footnote{https://www.kdd.org/}, and WSDM\footnote{http://www.wsdm-conference.org/}.
Beyond papers published at the aforementioned conferences, we have also considered good papers from the e-Print archive\footnote{https://arxiv.org/} as they manifest the latest research outputs. We selected papers from the archive using three metrics: paper quality, method novelty, and the number of citations (optional).
Fig.~\ref{fig:yearwise} depicts the year-wise distribution of the papers reviewed in our survey. Our survey encompasses over 80 top-notch conference and journal papers. The number of papers pertinent to CQA increases steadily from 2016 onwards, with the highest count in 2019. Coincidentally, 2019 also marks the year when the fields of natural language generation and natural language understanding were revolutionized by the introduction of pre-trained language models. These pre-trained language models have the potential to address the issue of data scarcity and bring considerable advantages by generating contextualized word embeddings \cite{45663f8dbad442a7b649001bfeb2be72}.
This rise of interest depicts the gradual shift in focus of researchers in both academia and industry towards utilizing pre-trained language models in the design of CQA systems. Fig.~\ref{fig:con} portrays the venue-wise distribution of the research works we have reviewed, with ACL and EMNLP being the top venues for natural-language-related progress. We note, though, that more than 25\% of the papers come from a variety of conferences and journals outside of the typical venues, further attesting to the fact that this is an interdisciplinary topic spanning different areas such as knowledge management, knowledge discovery, information retrieval, and artificial intelligence.
\begin{figure}[!tb]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.95\linewidth]{pieconf.pdf}
\caption{Year-wise distribution.}
\label{fig:yearwise}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.95\linewidth]{yearpub.pdf}
\caption{Venue distribution.}
\label{fig:con}
\end{subfigure}
\caption{(a) Year-wise statistics of the selected survey papers between 2016 and 2021 inclusive. The figure depicts that the field of CQA saw its rise recently. (b) Venue-wise distribution of the reviewed research works.}
\label{fig:conf}
\end{figure}
\vspace{2mm}
\noindent{\bf What Makes This Survey Different?}
There have been several published literature reviews on QA systems, both in the context of machine reading comprehension and in that of KB-QA. In \cite{DBLP:journals/ftir/GaoGL19}, the authors provide
an overview of Conversational AI with a detailed discussion of neural methods and deep learning techniques being used in designing efficient conversational agents. These conversational agents include task-oriented dialog systems, chat-oriented dialog systems, and QA dialog systems. Although the paper sheds some light on several research works and datasets pertinent to CQA,
it does not cover the recent trends and methods on CQA.
The authors in \cite{fu2020survey} recently published a literature review that primarily highlights complex QA over knowledge bases. The paper covers the datasets and the different approaches employed in complex KB-QA systems, along with a discussion of complex QA; CQA itself is only mentioned as a ``future trend'' with minimal discussion.
A summary of the techniques and methods of single-turn QA is presented in \cite{liu2019neural} along with proposing a general modular architecture needed for it. The paper further discusses the techniques that could be used in each module. Again, CQA is discussed briefly as a newly emerging trend along with the different challenges.
Another recent effort, \cite{gupta-etal-2020-conversational}, delineates the latest trends and methods for the successful implementation of multi-turn machine reading comprehension. However, it lacks a discussion of other forms of multi-turn QA.
The key aspect that makes this survey stand out among its predecessors is its focus on CQA, encompassing both sequential KB-QA and conversational machine reading comprehension.
Multi-turn QA has been discussed only nominally in previous surveys, yet it is an essential aspect to consider when discussing the process of carrying out a natural conversation with a machine. Based on our review, we thoroughly discuss the research works on CQA and the techniques they employ, and highlight the merits and demerits of the different techniques. Finally, we discuss existing challenges in the field of CQA and suggest some application areas.
The field of CQA is witnessing its golden era in terms of research publications, which calls for a strong background work that discusses its challenges and trends as a field separate from single-turn QA. Thus, this survey is an effort to establish a strong foundation for CQA that would benefit the research communities as well. This work provides detailed insights into important ideas pertinent to CQA systems that are needed to design interactive and engaging conversational systems.
To the best of our knowledge, this is the first work to
investigate the field of CQA in detail.
We hope that this paper would turn out to be a valuable resource for researchers who are interested in this area.
\vspace{2mm}
\noindent{\bf The Survey Structure}.
The rest of the paper is organized as follows.
%
Section~\ref{section2} provides a brief background on single-turn QA and leads into the discussion of CQA. This section further highlights the categorization of CQA systems based on the source they utilize to answer questions.
Section~\ref{seq kb-qa}
describes the task of sequential knowledge-based question answering (KB-QA) system and the general architecture it employs. The section further highlights the techniques used in each module of the system to effectively carry out the task of sequential QA.
Section~\ref{cmrc} describes the task of Conversational Machine Reading Comprehension (CMRC) and how it differs from typical machine reading comprehension. The section also describes how the general architecture of machine reading comprehension can be adapted for conversational machine reading comprehension. It further describes the decomposition of the architecture in different modules and techniques employed in each
of them.
Section~\ref{dataset} describes the datasets introduced to further improve the work in the field of CQA along with a qualitative comparison of each of them.
Section~\ref{challenges} highlights the potential applications of the CQA systems in commercial areas along with the research trends that should be explored to leverage the strength of these systems more effectively.
Finally,
Section~\ref{conclusion}
offers some concluding remarks.
\section{Conversational Question Answering}
\label{section2}
Question answering in general
involves accessing different data sources to find the correct answer for
an asked question, as depicted in Fig.~\ref{highlevel}.
It dates back to the 1960s \cite{monz2011machine}, when early QA systems, owing to rule-based methods and the small size of available datasets, did not perform well, making them difficult to use in practical applications. These systems saw their rise in 2015, which was largely associated with two driving factors:
\begin{itemize}
\item [(a)] The use of deep learning methods, which outperform traditional rule-based models, to capture the critical information in QA tasks, and
\item [(b)] The availability of several large-scale datasets,
i.e., SQuAD \cite{DBLP:conf/emnlp/RajpurkarZLL16}, Freebase \cite{bollacker2008freebase}, MS MARCO \cite{DBLP:conf/nips/NguyenRSGTMD16}, DBpedia \cite{lehmann2015dbpedia}, and CNN \& DAILY MAIL \cite{DBLP:conf/conll/NallapatiZSGX16}, which make it possible to deal with the task of QA on neural architectures more efficiently and further provide a test bed for evaluating the performance of these models.
\end{itemize}
\begin{figure}[tb!]
\center
\includegraphics[width=\textwidth]{Untitled_Diagram.pdf}
\caption{The high-level, or generic, architecture of QA systems, where the search system corresponds to the different data sources. The specific architecture of a QA system depends on the underlying data source.}
\label{highlevel}
\end{figure}
To bring QA tasks closer to real-world scenarios, several advanced research directions have emerged recently.
One such
direction is CQA \cite{liu2019neural},
which introduces a new dimension of dialog systems that combines the elements of both chit-chat and QA. CQA is a \textit{system ask, user respond} kind of setting where the system can ask a user multiple questions to understand the user's information need \cite{zhang2018towards}. Usually, a user starts the conversation with a particular question in mind and the system searches its database to find an appropriate solution to that query. This can turn into a multi-turn conversation if the user needs more detailed information about the topic.
\subsection{Categorization of CQA Systems}
There are several ways of structuring the different aspects of a QA system. Since CQA is a sub-category of QA,
the same categorization can be used for CQA systems as well. A CQA model can be categorized on the basis of the data domain, the types of questions, the types of data sources, and the types of systems being built for the questions at hand \cite{mishra2016survey}. Fig.~\ref{categorization} shows the possible options for structuring a CQA system. The details of each category are given in the rest of this section.
\begin{figure}[tb!]
\center
\includegraphics[width=\textwidth]{classification.pdf}
\caption{Categorization of
CQA on the basis of: i) data domains,
ii) types of questions, iii) types of data sources, and iv) types of systems \cite{mishra2016survey}.}
\label{categorization}
\end{figure}
\subsubsection{Data Domains}
Questions asked by users are either open domain \cite{jiang-etal-2019-freebaseqa, yang-etal-2015-wikiqa}, in which questions are domain-free and cover a broad range of topics, or restricted to specific application domains (i.e., closed domain) such as Travel \cite{beaver2020towards}, Restaurants \cite{budzianowski2018multiwoz}, Movies \cite{DBLP:journals/corr/abs-1908-03180}, and Hospitals \cite{budzianowski2018multiwoz}.
The question repository of closed-domain question answering is smaller than that of open-domain question answering. This makes the models designed for closed-domain QA less transferable than the models for open-domain QA. It should be noted that open-domain QA and closed-domain QA correspond to generic and task-specific datasets, respectively.
\subsubsection{Types of Questions}
Questions can be easily classified into various categories primarily depending upon their complexity, the nature of the response, or the techniques that should be utilized to answer them \cite{mishra2016survey}. The classification based on the questions commonly asked by the users is delineated as follows:
\paragraph{Factoid Questions:} Questions which expect the system to find a simple and fact-based answer in a short sentence, e.g., ``\textit{who acted as Chandler in FRIENDS?}''. Factoid questions typically begin with a \textit{wh}-word. Different extraction techniques can be employed to find the answers to the factoid questions. The techniques first recover latent or hidden information in the given question, and then look for the answer in the given text using either structure matching \cite{DBLP:conf/acl/ShenK06} or reasoning \cite{DBLP:conf/emnlp/IyyerBCSD14}. FreebaseQA \cite{jiang-etal-2019-freebaseqa} is one of the examples of factoid QA dataset.
\paragraph{Confirmation Questions:}
Questions which require the answer in a binary format, i.e., yes or no, e.g., ``\textit{Is Sydney the capital of Australia?}''. As the answers are not simple extractive text spans from the given source,
a strong inference mechanism is needed to deduce the answers of confirmation type questions \cite{mishra2016survey}. While there may be a lot of information given about a topic, analyzing if the original statement is true or not is still a challenging task.
\paragraph{Simple Questions:}
Simple questions require only a small piece of text to find an answer and, thus, are easier to comprehend. For instance, for a question like ``\textit{What is the magnitude of the earthquake in Pakistan?}'', it can easily be deduced that the answer would be a simple numeric value. The process of finding an answer to a simple question consists of three basic steps: i) question analysis, ii) relevant document/knowledge graph retrieval, and iii) answer extraction \cite{bouziane2015question}. MS MARCO \cite{DBLP:conf/nips/NguyenRSGTMD16}, SQuAD \cite{DBLP:conf/emnlp/RajpurkarZLL16}, and FreebaseQA \cite{jiang-etal-2019-freebaseqa} are some examples of datasets based on simple questions.
\paragraph{Complex Questions:}
Complex questions are questions that require different types of knowledge or several steps to answer.
They are difficult to answer and require access to multiple documents or multiple interactions with the system \cite{bhutani-etal-2020-answering}. A complex question like ``\textit{how many cities in China have more population than New Delhi?}'' requires the system to first figure out the population of New Delhi and then compare it with the population of different cities in China. Thus, answering complex questions requires complex techniques such as iterative query generation \cite{DBLP:conf/emnlp/QiLMWM19}, multi-hop reasoning \cite{xiong2021answering}, decomposition into sub-questions \cite{DBLP:conf/acl/IyyerYC17}, and combining cues from multiple documents \cite{DBLP:conf/sigir/LuPRAWW19}.
LC-QuAD \cite{DBLP:conf/semweb/TrivediMDL17} and CSQA \cite{DBLP:conf/aaai/SahaPKSC18} are some of the examples of complex QA datasets.
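The decomposition strategy mentioned above can be illustrated with a toy example: a comparative question is split into sub-questions whose intermediate results are then aggregated. The fact table below (cities and population figures) is invented purely for illustration and does not come from any cited dataset or system.

```python
# Toy illustration of decomposing a comparative question into
# sub-questions over a small fact table; all figures are hypothetical.
POPULATION = {
    "New Delhi": 32_000_000,
    "Shanghai": 26_000_000,
    "Beijing": 21_000_000,
    "Chongqing": 32_100_000,
    "Guangzhou": 18_700_000,
}
CHINESE_CITIES = {"Shanghai", "Beijing", "Chongqing", "Guangzhou"}

def answer_comparative(reference_city: str) -> int:
    """'How many cities in China have more population than <city>?'"""
    # Sub-question 1: what is the population of the reference city?
    ref_pop = POPULATION[reference_city]
    # Sub-question 2: which Chinese cities exceed that population?
    larger = [c for c in CHINESE_CITIES if POPULATION[c] > ref_pop]
    # Aggregation step: count them.
    return len(larger)

print(answer_comparative("New Delhi"))  # -> 1 (only Chongqing in this toy table)
```

Real systems replace each hand-written step with a learned sub-question generator and executor, but the control flow is the same.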
\paragraph{Causal Questions:}
Causal questions require a detailed explanation pertinent to the entity and usually start with words like \textit{why} or \textit{how}.
The answers generated for causal questions are not straightforward or concise. Generating such detailed answers calls for advanced natural language processing techniques that can understand the question on different levels, such as semantics and syntax \cite{higashinaka-isozaki-2008-corpus}. An example of such a question is ``\textit{why do earthquakes occur?}''.
\paragraph{Listing Questions:} These are the
questions which require a list of entities or facts as an answer, e.g., ``\textit{list the names of all the former presidents of America}''. The techniques utilized to answer factoid questions work well for listing questions,
because QA systems treat such questions as a sequence of factoid questions asked iteratively \cite{mishra2016survey}.
\paragraph{Unanswerable Questions:} These are the
questions whose answers cannot be found or deduced via the source text.
Unanswerable questions could be any type of the aforementioned questions. For these questions, the correct result of the QA system is to indicate that it is unanswerable. SQuADRUn \cite{rajpurkar-etal-2018-know} is an extension of the SQuAD dataset \cite{DBLP:conf/emnlp/RajpurkarZLL16} with over 50,000 unanswerable questions that was introduced to further improve the task of QA.
\subsubsection{Types of Data Sources}
CQA systems can be classified on the basis of the underlying data sources they utilize to find an answer. These underlying data sources could be:
\paragraph{Structured Data Source:}
In a structured document, data is stored in the form of entities. These entities form a separate table. An entity in a table can have multiple attributes associated with it. The definition of these attributes is referred to as the metadata and is stored in a schema. A query language is used to access the data and retrieve relevant information from the schema. Examples of structured data sources are databases and RDF graphs. QALD\footnote{http://qald.aksw.org/} and LC-QuAD \cite{DBLP:conf/semweb/TrivediMDL17} utilize structured data source (i.e., RDF graphs) to answer the questions.
\paragraph{Semi-structured Data Source:}
There is no clearly defined boundary between the stored data and its schema in semi-structured data sources, which makes them quite labor-intensive to build.
An example of a semi-structured data source is XML. The datasets that are designed using semi-structured data sources include TabMCQ \cite{wang-etal-2018-neural-question} and QuaSM \cite{pinto2002quasm}.
\paragraph{Unstructured Data Source:}
There are no pre-defined rules for storing the data in this particular arrangement. The data stored in the unstructured data sources could be of any type and require the use of advanced natural language processing techniques and information retrieval methods to find out the relevant answer. However, the reliability of finding the correct answers is low as compared to the structured data sources. Examples of unstructured datasets are SQuAD \cite{DBLP:conf/emnlp/RajpurkarZLL16}, QuAC \cite{DBLP:conf/emnlp/ChoiHIYYCLZ18}, and CNN \& Daily Mail \cite{DBLP:conf/nips/HermannKGEKSB15}.
\subsubsection{Types of CQA Systems}
Over the past few years, the demand for CQA systems, from both research and commercial perspectives, has increased, in turn enabling users to search a large-scale knowledge base (KB) or text-based corpora written in natural language. This categorizes CQA systems into sequential KB-QA agents and conversational machine reading comprehension systems:
\paragraph{Sequential KB-QA:}
KB-QA systems are extremely flexible and easy-to-use in contrast to the traditional SQL-based systems that require users to formulate complex SQL queries \cite{DBLP:journals/pvldb/CuiXWSHW17}. In a real-world scenario, users do not always ask simple questions \cite{DBLP:conf/aaai/SahaPKSC18}. Usually, the questions asked are complex in nature, and therefore, require multi-turn interaction with the KB. Also, once a question has been answered, the user tends to put forward another question that is linked to the previous question-answer pair. This forms the task of sequential QA using knowledge graphs.
\paragraph{Conversational Machine Reading Comprehension:}
The practical use of text-based QA agents, also referred to as CMRC agents, is more common in mobile phones than in search engines (like Google, Bing, and Baidu), wherein concise and direct answers are provided to users rather than presenting them with a list of possible answers. For instance, if a user intends to look for a popular restaurant in a particular geographical area, the search engine would provide
her with a search result encompassing options spread over multiple pages, whereas a CMRC-based dialog agent would ask a few follow-up questions to figure out the preference(s) of the user and subsequently narrow down the search result to one, i.e., possibly the best, answer.
With the emergence of CMRC, many researchers \cite{DBLP:journals/tacl/ReddyCM19, DBLP:conf/emnlp/ChoiHIYYCLZ18, DBLP:conf/aaai/SahaPKSC18, DBLP:conf/acl/IyyerYC17} have tried inducing a conversational aspect to meet the requirements for the task of CQA by introducing a background context and a series of inter-related questions.
\subsection{What Makes CQA Different from QA?}
\subsubsection{Task-based Differences}
The task of CQA differs from the traditional QA in a number of ways. In traditional QA systems, questions are independent of each other and are based on the given passage. In contrast, questions in CQA are related to each other which poses an entirely different set of challenges including but not limited to:
\begin{scriptsize}
\begin{table}[!tb]
\centering
\begin{tabular}{p{0.2\linewidth} p{0.6\linewidth}}
\toprule
\multicolumn{2}{c}{Topic: Staten Island} \\ \hline
\textbf{Passage:} &
Staten Island is one of the five boroughs of New York City in the U.S. state of New York. In the southwest of the city, Staten Island is the southernmost part of both the city and state of New York, with Conference House Park at the southern tip of the island and the state. The borough is separated from New Jersey by the Arthur Kill and the Kill Van Kull, and from the rest of New York by New York Bay. With a 2016 Census-estimated population of 476,015, Staten Island is the least populated of the boroughs but is the third-largest in area at. Staten Island is the only borough of New York with a non-Hispanic White majority. The borough is coextensive with Richmond County, and until 1975 was the Borough of Richmond. Its flag was later changed to reflect this. Staten Island has been sometimes called ``the forgotten borough" by inhabitants who feel neglected by the city government. \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Question 1:}\\ \textbf{Answer 1:}\end{tabular} & \begin{tabular}[c]{@{}l@{}}How many burroughs are there?\\ Five.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Question 2:}\\ \textbf{Answer 2:}\end{tabular} & \begin{tabular}[c]{@{}l@{}}In what city?\\ New York City.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Question 3:}\\ \textbf{Answer 3:}\end{tabular} & \begin{tabular}[c]{@{}l@{}}And state?\\ New York.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Question 4:}\\ \textbf{Answer 4:}\end{tabular} & \begin{tabular}[c]{@{}l@{}}Is Staten island one?\\ Yes. \end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Question 5:}\\ \textbf{Answer 5:}\end{tabular} & \begin{tabular}[c]{@{}l@{}} Where is it? \\ In the southwest of the city\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Question 6:}\\ \textbf{Answer 6:}\end{tabular} & \begin{tabular}[c]{@{}l@{}}What is it sometimes called?\\ The forgotten borough.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Question 7:}\\ \textbf{Answer 7:}\end{tabular} & \begin{tabular}[c]{@{}l@{}}Why?\\ Because the inhabitants feel neglected by the city \\ government.\end{tabular} \\ \bottomrule
\end{tabular}
\caption{A chunk of a dialog from the
CoQA dataset \cite{DBLP:journals/tacl/ReddyCM19}.}
\label{coqadataset}
\end{table}
\end{scriptsize}
\begin{itemize}
\item In order to find the correct answer for the question at hand, the model needs to encode not only the current question and source paragraph, but also the previous history turns. More specifically, as shown in Table~\ref{coqadataset}, Question 2 and Question 3 are related to Question 1.
\item The turns in CQA are of different nature. Some questions require more detailed information (i.e., \textit{drilling down}), some may require information about some topic previously discussed (i.e., \textit{topic shift}), some may ask about a topic again after it had been discussed (i.e., \textit{topic return}), and some questions may ask for the clarification of topic (i.e., \textit{clarification question}) \cite{DBLP:conf/naacl/Yatskar19}. All of these characteristics are incremental in nature and present challenges that most of the top-performing QA models fail to address directly, such as pragmatic reasoning and referring back to the previous context applying co-reference resolution. In Table~\ref{coqadataset}, Question 2 is an example of a drill down question, Question 7 is a clarification question and ``\textit{it}'' in Question 5 ``\textit{where is it?}'' requires co-reference resolution.
\end{itemize}
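The history-modeling requirement above can be sketched in a few lines: the model input is assembled from a bounded window of previous question-answer turns plus the current question. The turn markers (\texttt{<q>}, \texttt{<a>}) and the two-turn window are illustrative choices, not the scheme of any particular CQA model.

```python
def build_input(history, question, max_turns=2):
    """Concatenate the last `max_turns` question-answer pairs with the
    current question; the <q>/<a> markers are illustrative only."""
    turns = [f"<q> {q} <a> {a}" for q, a in history[-max_turns:]]
    turns.append(f"<q> {question}")
    return " ".join(turns)

history = [("How many boroughs are there?", "Five."),
           ("In what city?", "New York City.")]
# Resolving the ellipsis in "And state?" requires the earlier turns:
print(build_input(history, "And state?"))
```

Without the history window, a question such as ``And state?'' is unanswerable; with it, the encoder sees the antecedents needed for co-reference and ellipsis resolution.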
\subsubsection{Architectural Differences}
The architecture of a CQA model is similar to that of a QA system at the base level. However, to introduce a conversational touch to the system, a CQA model extends the traditional QA system with a few modules:
\begin{itemize}
\item A traditional single-turn KB-QA system encompasses a semantic parser and a knowledge base reasoning (KBR) engine. In addition to these, a sequential KB-QA system includes a dialog manager, which is responsible for tracking the previous dialog states and determining what question to ask next to help a user search the KB effectively for an answer.
\item A CMRC system differs from a traditional MRC system in two aspects. First, the encoder is embedded with a sub module referred to as history modeling module, which is responsible for not only encoding the current question and the given passage, but also the history turns of the conversation. Second, a reasoning module is extended to generate an answer, that might not be directly given in the passage, using pragmatic reasoning \cite{DBLP:journals/tacl/ReddyCM19}.
\end{itemize}
It is worth noting here that the paradigm of CQA is an emerging one and is not as well studied as traditional QA; therefore, fewer research papers are available.
The architecture of, and research carried out on, sequential KB-QA systems and CMRC systems will be discussed in detail in Section~\ref{seq kb-qa} and Section~\ref{cmrc}, respectively.
\section{Sequential KB-QA Systems}
\label{seq kb-qa}
A knowledge base (KB) is a structured information repository used for knowledge
sharing and management purposes \cite{martinez2015automated}. Freebase \cite{bollacker2008freebase}, NELL \cite{mitchell2018never}, DBpedia \cite{lehmann2015dbpedia}, and Wikidata\footnote{https://www.wikidata.org/wiki/Wikidata:Main\_Page} are
well-known examples of large-scale graph-structured knowledge bases, also termed Knowledge Graphs (KGs), and have become significant resources for dealing with open-domain questions. A KG is a graphical representation of a KB; a typical KG comprises subject-predicate-object triples \textit{(s,r,t)}, wherein \textit{r} is a relation or predicate between the entities \textit{s} and \textit{t} \cite{DBLP:journals/ftir/GaoGL19}. KGs play an important role in bridging the lexical gap by providing additional information about relations, which in turn helps in gaining more detailed information about the context. Knowledge graphs have seen successful applications in various NLP tasks such as text entailment, information retrieval, and QA \cite{Zou_2020}.
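As a minimal illustration of the triple view of a KG described above, the following sketch stores a handful of invented \textit{(s,r,t)} triples and answers a single-fact question by pattern matching. Real KGs such as Freebase or Wikidata are instead queried through dedicated query languages and indexes.

```python
# A knowledge graph reduced to a list of (subject, relation, object)
# triples; the entities and relations below are made up for illustration.
TRIPLES = [
    ("Staten_Island", "borough_of", "New_York_City"),
    ("New_York_City", "located_in", "New_York_State"),
    ("Staten_Island", "separated_from", "New_Jersey"),
]

def query(s=None, r=None, t=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [tr for tr in TRIPLES
            if (s is None or tr[0] == s)
            and (r is None or tr[1] == r)
            and (t is None or tr[2] == t)]

# Single-fact question: "Which city is Staten Island a borough of?"
print(query(s="Staten_Island", r="borough_of"))
```

Multi-hop questions chain such lookups: the object of one match becomes the subject of the next.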
\begin{figure} [tb!]
\center
\includegraphics[scale = 0.35]{align.pdf}
\caption{Aligning knowledge and conversation in sequential KB-QA.}
\label{movie}
\end{figure}
The task of QA over large-scale KBs has progressed from simple single-fact tasks to complex queries requiring multi-hop interaction and traversal of knowledge graphs. These come under the category of single-turn QA, where a user puts forward a question and the system finds the best possible answer for it. Though KB-QA agents have improved the flexibility of the QA process to a considerable extent, it is unrealistic to expect users to formulate complex queries without complete knowledge of the organizational structure of the KB being questioned \cite{DBLP:conf/nips/GuoTDZY18}. Thus, a sequential KB-QA system is a more practical option, as it lets users query the KB interactively.
The interactive sequential KB-QA system is useful in many commercial areas such as making a restaurant reservation \cite{sun2020multi}, finding a hotel in a new city, finding a movie-on-demand \cite{DBLP:conf/acl/DhingraLLGCAD17}, or asking for relevant information based on certain attributes. Fig.~\ref{movie} illustrates how a sequential KB-QA system finds a movie based on attributes specified by a user. With a traditional KB-QA system, the conversation would have ended after the first turn with a number of results; under the sequential KB-QA setting, however, the system asks follow-up questions about the specific details of the current request and presents the user with the most appropriate answer.
\begin{figure} [tb!]
\center
\includegraphics[width=10cm, height=5cm]{kbqagen.pdf}
\caption{A high level diagram of sequential KB-QA.}
\label{kbqagen}
\end{figure}
The core architecture of a sequential KB-QA system comprises a semantic parser and an inference engine, along with a dialog manager that keeps track of the previous turns and decides which questions to ask to help the user query the KB effectively. The high-level
architecture of a sequential KB-QA system is depicted in Fig.~\ref{kbqagen}, which
consists of:
i) \textit{Semantic Parser}, ii) \textit{Dialog Manager}, and iii) \textit{Response Generator}. The semantic parser is responsible for mapping input along with the previous context into a semantic representation (logical form) to query the KB. The dialog manager keeps track of the dialog history (i.e., QA pairs and DB state) and updates it accordingly \cite{suhr-etal-2018-learning}. It is also responsible for selecting the system's next action (i.e., to provide an answer or to ask a clarification question) based on the current question using dialog policy. The process of dialog policy can be either trained on dialogs \cite{wen-etal-2017-network, DBLP:conf/acl/DhingraLLGCAD17} or programmed \cite{wu2015probabilistic}.
Finally, the response generator converts the system's action into a natural language response. However, certain new approaches \cite{DBLP:conf/emnlp/MullerPSNA19,DBLP:conf/cikm/ChristmannRASW19} work towards eliminating the semantic parser module, as it requires extensive and expensive labeling of data.
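The three-module pipeline just described can be caricatured in a few lines of code. Every component here (keyword-based parsing, a dictionary as the KB, a two-action policy) is a toy stand-in for the neural modules discussed in the remainder of this section, not an implementation of any cited system.

```python
# Caricature of the semantic parser -> dialog manager -> response
# generator pipeline of a sequential KB-QA system.
def semantic_parse(question, history):
    # Real parsers emit a logical form; here we keep a lowercase query
    # plus a count of prior turns as a stand-in.
    return {"query": question.lower(), "context_turns": len(history)}

def dialog_manager(logical_form, kb):
    # Policy: answer if the KB matches, otherwise ask for clarification.
    hits = [ans for key, ans in kb.items() if key in logical_form["query"]]
    if hits:
        return ("answer", hits[0])
    return ("clarify", "Could you rephrase or add more details?")

def response_generator(action):
    # Convert the chosen action into a natural language response.
    _, payload = action
    return payload

kb = {"capital of australia": "Canberra"}  # toy knowledge base
action = dialog_manager(semantic_parse("What is the capital of Australia?", []), kb)
print(response_generator(action))  # -> Canberra
```

The value of the dialog manager is visible even in this toy: when the parse matches nothing, the system asks a clarification question instead of failing silently.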
\subsection{Semantic Parser}
The notion of semantic parsing can be thought of as a process of mapping natural language text into meaningful logical forms and has emerged as a significant technical component for designing KB-QA systems \cite{cheng-etal-2019-learning}. Once a correct logical form has been obtained, it can be executed on the knowledge source in the form of a query to obtain answer denotations.
Iyyer et al. \cite{DBLP:conf/acl/IyyerYC17} introduced the task of semantic parsing for sequential QA by creating a dataset of simple inter-related questions out of a complicated WikiTableQuestions dataset \cite{pasupat-liang-2015-compositional}. The proposed model, called Dynamic Neural Semantic Parsing (DynSP), is a weakly supervised structured-output learning approach based on the reward-guided search. Given a question along with a table, the model forms a semantic parsing problem as a state-action search problem wherein each state denotes a partial or complete parse and each action can be considered as an operation to extend the parse. Unlike traditional parsers, DynSP explores and constructs different neural network structures for different questions.
The aforementioned approach maps only the current utterance into its logical form which makes it difficult for the system to interpret the meaning of the utterance especially where co-reference resolution is required. To address this shortcoming, the authors in \cite{DBLP:conf/nips/GuoTDZY18} proposed Dialog-to-Action (D2A) to facilitate the use of previous utterances (both questions and answers) concatenated with the current question. The task of generating logical form can be regarded as the prediction of a series of actions, and each of
them corresponds to simple grammar rules.
However, the model of D2A suffers from the problem of error propagation as it learns to reproduce previously generated actions, which might be incorrect. To overcome the issue of error propagation and ambiguous entity-linking, the step-wise framework is improved by multi-task learning for sequential KB-QA systems \cite{DBLP:conf/emnlp/ShenGQGTDLJ19}. This model, Multi-task Semantic Parsing (MaSP), learns pointer-based semantic parsing and entity-detection simultaneously as they are closely related.
This joint learning can enhance the performance of the CQA task. Specifically, the input, consisting of the current question and the historical interactions, is passed through a Transformer-based encoder \cite{NIPS2017_3f5ee243} to generate context-aware embeddings. The model employs a pointer network \cite{NIPS2015_29921001} to locate the targeted entity and number in the given question. The use of the pointer network comes with two advantages: i) it handles co-reference resolution by learning the context of the entity, and ii) it reduces the size of the decoding vocabulary significantly, from several million to several dozen.
The model also incorporates a type-aware entity detection module in which the prediction is fulfilled in joint space of IOB (inside, outside, beginning) tagging and corresponding entity type for disambiguation. In the end, grammar guided decoder is used to infer logical forms that can be executed on the KB.
%
The model of MaSP suffers from the issue of producing ambiguous results, because the jointly learned predicate and entity classification tasks share no common information except for the supervision signals propagated to the classifiers. This issue was overcome by another recently introduced model called mu\textbf{L}ti-task sem\textbf{A}ntic par\textbf{S}ing with tr\textbf{A}nsformer and \textbf{G}raph atte\textbf{N}tion n\textbf{E}tworks (LASAGNE) \cite{kacupaj-etal-2021-conversational}. The model performs multi-task learning by utilizing a Transformer \cite{NIPS2017_3f5ee243} supplemented with a Graph Attention Network (GAT) \cite{DBLP:conf/iclr/VelickovicCCRLB18}. The Transformer generates the logical forms of a natural language question, while the GAT model is utilized to exploit the correlations between predicate and entity types thanks to its message-passing ability between nodes. The authors also proposed an entity recognition module that contributes to detecting, linking, filtering, and permuting all the relevant entities in the generated logical forms. Unlike MaSP, LASAGNE uses both sources of information, the encoder and the entity recognition module, to perform these operations, which avoids re-learning entity information from the context of the current question.
\subsection{Dialog Manager}
Conversational history plays a significant role when generating the logical forms of natural language utterances. Once a logical form is obtained, the system is in a better state to decide its next action, i.e., to ask a clarification question or provide an answer to a question.
Dialog-to-action \cite{DBLP:conf/nips/GuoTDZY18} incorporates a dialog memory to store the historical interactions of a user. The model consists of a bidirectional RNN with a Gated Recurrent Unit (GRU) \cite{69e088c8129341ac89810907fe6b1bfe} as an encoder to convert the input (previous question-answer pairs concatenated with the current question) into a sequence of context vectors. A grammar-guided decoder (a GRU with an attention mechanism \cite{luong-etal-2015-effective}) generates an action sequence based on the context vectors. The dialog memory encompasses entities, predicates, and action sub-sequences, which can be replicated selectively as decoding proceeds.
LASAGNE \cite{kacupaj-etal-2021-conversational} incorporates the dialog history of previous interactions as an additional input to the model for handling ellipsis and co-reference. The final input consists of the previous question-answer pair and the current question. The utterances are separated by a \textit{[SEP]} token, and a context token \textit{[CTX]} is appended after the last utterance. The conversation is tokenized using WordPiece tokenization \cite{DBLP:journals/corr/WuSCLNMKCGMKSJL16}, and a pre-trained GloVe model \cite{pennington-etal-2014-glove} is then used to embed the words into vector representations.
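The input layout described above can be sketched as plain string assembly; the token names follow the paper, while the exact placement of separators is our reading of the description (tokenization and embedding are omitted):

```python
def build_input(prev_question, prev_answer, current_question):
    """Concatenate the previous QA pair and the current question,
    separated by [SEP] tokens, with a [CTX] token appended at the end."""
    return " [SEP] ".join([prev_question, prev_answer, current_question]) + " [CTX]"

text = build_input("Who founded Tesla?", "Elon Musk", "Where was he born?")
print(text)
# Who founded Tesla? [SEP] Elon Musk [SEP] Where was he born? [CTX]
```

The \textit{[CTX]} token gives the encoder a fixed position whose final hidden state summarizes the whole conversation turn.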
\subsection{Response Generator}
In NLP tasks, response generation is the last and vital step that produces system utterances for the user, and the introduction of pre-trained language models has been a game-changing factor for language generation over the past few years. Peng et al. \cite{DBLP:conf/emnlp/PengZLLLZG20} introduced a model based on OpenAI's Generative Pre-training (GPT) \cite{radford2018improving} called Semantically-Conditioned Generative Pre-training (SC-GPT). The paper introduces a dataset called FewshotWOZ\footnote{https://github.com/pengbaolin/SC-GPT} to simulate the process of few-shot learning with limited data labels. SC-GPT generates semantically controlled responses and is trained in three steps: i) it is first pre-trained on massive plain-text corpora so that it can better generalize to new domains, ii) further pre-training is conducted on large dialog-act-specific corpora to gain the capability of controllable generation, and iii) finally, a limited amount of domain labels is used to fine-tune the model for adaptation to the target domain.
Another framework, NLG-LM \cite{zhu2019multi}, employs multi-task learning to not only generate semantically correct responses but also maintain the naturalness of the conversation. The model utilizes a sequence-to-sequence architecture to simultaneously train the Natural Language Generation (NLG) and Language Modeling (LM) tasks. The language modeling task, carried out in the decoder, is applied to human-generated utterances to bring out more language-related elements. In addition, since the language modeling task is unsupervised, it does not require additional labelled data for training.
\subsection{Sequential KB-QA Approaches without Semantic Parser}
There exists extensive research in semantic parsing wherein deep neural networks are trained in a supervised learning setup over manually generated logical forms. However, generating labeled data for this task can be exhausting and expensive \cite{DBLP:conf/emnlp/MullerPSNA19}. To address this issue, a new research direction has recently been investigated that relies on weak supervision for semantic parsing, where the training data consists of questions and answers, and the structured resources are used to recover the logical representations that would result in the right answer.
In \cite{DBLP:conf/aaai/SahaPKSC18}, the authors proposed a model which is an amalgamation between Hierarchical Recurrent Encoder-Decoder (HRED) \cite{serban2016building} model and key-value memory network \cite{miller-etal-2016-key} to present the fusion of dialog and QA process.
HRED is responsible for generating high-level and low-level representations of an utterance and the context. Candidate tuples in which the entity appears as subject or object are selected. These candidate tuples are stored in a key-value memory network as key-value pairs, where the key contains the relation-subject pair and the value contains the embedding of the object. The model makes multiple passes (turns) to attend to different aspects of the question, especially in the case of complex questions. A decoder is then used to generate answer sequences.
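The key-value addressing step at the heart of such a memory network can be sketched as follows. The embeddings are toy values; the actual model builds keys from (subject, relation) pairs and updates the query between passes.

```python
import numpy as np

def kv_memory_read(query, keys, values):
    """One memory pass: address the keys with the query, softmax the
    scores, and return the attention-weighted sum of the values."""
    scores = keys @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ values

# toy memory: 3 (subject, relation) keys with object-embedding values
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
values = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.0])
out = kv_memory_read(query, keys, values)
print(out.shape)  # (3,)
```

Separating the key (what is matched against the question) from the value (what is read out) is what makes this structure a natural fit for KB triples.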
\afterpage{
\begin{landscape}
\scriptsize
\RaggedLeft
\begin{longtable}{|p{1.5cm}|p{3.5cm}|p{3.0cm}|p{4.0cm}|p{4.0cm}|}
\caption{Recent studies on sequential KB-QA (2016-2021).} \label{tab:long} \\
\hline \multicolumn{1}{|c|}{\textbf{Ref.}} & \multicolumn{1}{|c|}{\textbf{Contribution(s)}} & \multicolumn{1}{|c|}{\textbf{Techniques Used}} & \multicolumn{1}{|c} {\textbf {Merits}} & \multicolumn{1}{|c|}{\textbf{Demerits}} \\ \hline
\endfirsthead
\multicolumn{5}{c}%
{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\
\hline \multicolumn{1}{|c|}{\textbf{Ref.}} & \multicolumn{1}{|c|}{\textbf{Contribution(s)}} & \multicolumn{1}{|c|}{\textbf{Techniques Used}} & \multicolumn{1}{|c} {\textbf {Merits}} & \multicolumn{1}{|c|}{\textbf{Demerits}} \\ \hline
\endhead
\hline \multicolumn{5}{|r|}{{Continued on next page}} \\ \hline
\endfoot
\hline \hline
\endlastfoot
DynSP \color{blue}$\textbf{\cite{DBLP:conf/acl/IyyerYC17}}$ & Introduced the task of semantic parsing for sequential KB-QA.
& Dynamic Neural Network structure.&
\begin{tableitems}[nosep,after=\strut]
\item Reward-guided search reduces the number of queries to be labelled.
\end{tableitems}
&\begin{tableitems}[nosep,after=\strut]
\item The parse language is not comprehensive enough to represent the semantic parses of the sentences in the dataset.
\item The table-based search-space approach cannot be scaled up to cater to the needs of large-scale curated KGs.
\end{tableitems} \\ \hline
HRED + KVmem \color{blue}$\textbf{\cite{DBLP:conf/aaai/SahaPKSC18}}$ &
Introduced sequential KB-QA dataset consisting of complex questions.
& End-to-end model based on HRED and KV memnet.
& \begin{tableitems}[nosep,after=\strut]
\item Incorporates dialog history with the current utterance.
\item Works well with simple and direct questions.
\end{tableitems} &\begin{tableitems}[nosep,after=\strut]
\item Performs poorly with complex questions.
\item Doesn't work well with indirect or incomplete questions.
\item KV memnet has a flat organization of the story, which makes it unsuitable for complex questions.
\end{tableitems} \\ \hline
D2A \color{blue}$\textbf{\cite{DBLP:conf/nips/GuoTDZY18}}$ &
Introduced history interaction as a part of input to deal with enormous ellipsis phenomena.
& A bidirectional RNN with a GRU is used as an encoder. A grammar guided decoder along with a dialog memory component is used to generate action sequences.
& \begin{tableitems}[nosep,after=\strut]
\item The model can effectively handle the contextual references
\item The parser introduced is capable of parsing various types of question.
\end{tableitems} & \begin{tableitems}[nosep,after=\strut]
\item Error propagation may occur because the model replicates previously generated action-sequences which might be incorrect.
\item The supervision signals cannot be shared among the model for mutual benefits as they are learned independently for the subtasks.
\item Ambiguous entity linking.
\end{tableitems}\\
\hline
MaSP \color{blue}$\textbf{\cite{DBLP:conf/emnlp/ShenGQGTDLJ19}}$ &
Multi-task learning for sequential KB-QA.
& Utilizes Transformer as a contextual encoder and a pointer-equipped decoder.
& \begin{tableitems}[nosep,after=\strut]
\item Reduces the risk of error propagation by jointly learning semantic parsing and entity detection.
\item Works well with co-reference resolution.
\item Addresses ambiguous entity-linking by leveraging contextual features of the input.
\end{tableitems} &\begin{tableitems}[nosep,after=\strut]
\item May result in spurious logical form.
\end{tableitems}\\
\hline
LASAGNE \color{blue}$\textbf{\cite{kacupaj-etal-2021-conversational}}$ &
Improved multi-task semantic parsing for sequential KB-QA.
&
\begin{tableitems}[nosep,after=\strut]
\item Utilizes Transformer model to generate logical forms, while the graph attention model is used to exploit correlations between entity type and predicates.
\item Introduced an entity detection module which detects, links, and permutes all the relevant entities.
\end{tableitems}
& \begin{tableitems}[nosep,after=\strut]
\item Eliminates the risk of producing ambiguous results by sharing signals between entity and predicate nodes.
\item Works well with co-reference resolution.
\item Improves the process of entity detection and linking by utilizing information from both entity detection module and encoder.
\end{tableitems} &\begin{tableitems}[nosep,after=\strut]
\item May result in spurious logical forms which affects the model's performance in answering clarification and ellipsis-based questions.
\end{tableitems}\\ \hline
GNN + PointerNet \color{blue}$\textbf{\cite{DBLP:conf/emnlp/MullerPSNA19}}$ & Conversation processing around structured data.
& Neural approach based on GNN and pointer network.
& \begin{tableitems}[nosep,after=\strut]
\item Eliminates the need of semantic parsing.
\item Handles conversational context stored in tables, effectively.
\end{tableitems} & \begin{tableitems}[nosep,after=\strut]
\item Table-search methods cannot scale to large real-world KGs involving qualitative or logical comparison.
\end{tableitems} \\ \hline
CONVEX \color{blue}$\textbf{\cite{DBLP:conf/cikm/ChristmannRASW19}}$ &
Completion of incomplete follow-up questions.
& Symbolic approach.
& \begin{tableitems}[nosep,after=\strut]
\item Automatically infers missing or ambiguous pieces for follow-up questions.
\item Eliminates the need for an intermediary representation of the context and given question.
\end{tableitems} & \begin{tableitems}[nosep,after=\strut]
\item May result in combinatorial explosion if sub-graphs are not expanded carefully.
\end{tableitems} \\
\hline
\end{longtable}
\end{landscape}
}
Another approach \cite{DBLP:conf/emnlp/MullerPSNA19} presents a table-centered sequential KB-QA model which, instead of learning intermediate logical forms, encodes the structured resources (i.e., tables) along with the questions and answers from the conversational context. The approach encodes tables as graphs by representing cells, columns, and rows. The columns represent the main features of the questions, and the cells contain the relevant values. To handle follow-up questions, the model adds previous answers by marking all the relevant columns, rows, and cells with nominal features. It uses a Graph Neural Network (GNN) \cite{scarselli2008graph} based encoder to encode the graph by generating vector representations of the edge labels between nodes. A copy mechanism based on the pointer network then predicts the sequences of answer rows and columns from the given input, instead of selecting symbols from an output vocabulary.
CONVEX (CONVersational KB-QA with context EXpansion) \cite{DBLP:conf/cikm/ChristmannRASW19} employs an unsupervised method to answer sequential (follow-up) questions by keeping track of the conversational context using the predicates and entities that have appeared so far. The initial question is used to initialize and select a small sub-graph of the knowledge graph. The essence of this approach is a graph exploration algorithm that judiciously expands a frontier to find possible candidate answers for the follow-up questions. The right answer is selected from the candidates by calculating weighted proximity, and the top-scoring answer (with scores in the range of 0 to 1) is returned as the answer to the current question.
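A toy sketch of proximity-based candidate scoring in the spirit of CONVEX follows. The actual algorithm combines several weighted proximity signals; here a single hop-distance signal over an invented sub-graph illustrates the idea.

```python
from collections import deque

def bfs_distance(graph, src, dst):
    """Shortest hop count between two nodes of an undirected graph."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return float("inf")

def score_candidate(graph, candidate, context_nodes):
    """Score in (0, 1]: average closeness of a candidate answer to the
    entities/predicates seen so far in the conversation."""
    prox = [1.0 / (1.0 + bfs_distance(graph, candidate, c)) for c in context_nodes]
    return sum(prox) / len(prox)

# toy KG sub-graph around the conversation so far
graph = {"Columbus": ["Genoa", "explorer"],
         "Genoa": ["Columbus", "Italy"],
         "Italy": ["Genoa"],
         "explorer": ["Columbus"]}
print(score_candidate(graph, "Genoa", ["Columbus"]))  # 0.5 (one hop away)
```

Keeping scores in $(0, 1]$ matches the paper's 0-to-1 answer score range, and the frontier is only ever expanded around nodes that remain close to the conversational context, which is what keeps the sub-graph small.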
Table~\ref{tab:long}
summarizes the major contributions, the techniques exploited, and the merits and demerits of the aforementioned approaches.
\section{Conversational Machine Reading Comprehension}
\label{cmrc}
Most of the work carried out in the field of machine reading comprehension is based on single-turn QA, which is unlikely in real-world scenarios since humans tend to seek information in a conversational context \cite{DBLP:conf/emnlp/RenXCY18}. For instance, a user might ask, ``\textit{Who is Christopher Columbus?}'' and, based on the answer received, might further investigate, ``\textit{Where was he born?}'' and ``\textit{What was he famous for?}''. It is easy for a human to decipher that ``\textit{he}'' in the follow-up questions refers to ``\textit{Christopher Columbus}'' from the first question. But for a machine comprehending the context, this poses a set of challenges, such as co-reference resolution and conversational history \cite{liu2019neural}, which most state-of-the-art QA systems do not address directly.
A typical MRC model consists of three main functions, namely: i) encoding the given context and question into a set of symbolic representations called embeddings in a neural space, ii) reasoning through the embeddings to find the answer vector in the neural space, and iii) decoding the answer vector to produce a natural language output \cite{DBLP:journals/ftir/GaoGL19}. In \cite{DBLP:conf/sigir/Qu0QCZI19}, the authors proposed a modification by introducing two modules, i.e., a history selection module and a history modeling module, to address the aforementioned challenges and incorporate the conversational aspect, thus introducing the task of CMRC.
Formally, given a context $C$, the conversation history in the form of question-answer pairs $Q_1, A_1, Q_2, A_2,...,Q_{i-1}, A_{i-1}$, and a question $Q_i$, the CMRC model needs to predict the answer $A_i$. The answer $A_i$ can either be a free-form text with evidence \cite{DBLP:journals/tacl/ReddyCM19} or a text span \cite{DBLP:conf/emnlp/ChoiHIYYCLZ18}.
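The formal setup above can be written down as a simple data structure (the field names are our own, chosen for illustration):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CMRCInstance:
    """One prediction step: given context C, the conversation history
    (Q_1, A_1), ..., (Q_{i-1}, A_{i-1}), and the current question Q_i,
    the model must predict the answer A_i."""
    context: str
    history: List[Tuple[str, str]] = field(default_factory=list)
    current_question: str = ""

inst = CMRCInstance(
    context="Christopher Columbus was an explorer born in Genoa.",
    history=[("Who is Christopher Columbus?", "An explorer")],
    current_question="Where was he born?")
print(len(inst.history))  # 1
```

Whether $A_i$ is free-form text or a span of the context changes only the output head of the model, not this input structure.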
The flow of a general CMRC model is depicted in Fig.~\ref{architecture}.
\begin{figure} [tb!]
\center
\includegraphics[width=8cm, height=5cm]{cqa.pdf}
\caption{Generic framework of a CMRC model which consists of i) history selection module that selects ${H_i}'$ history turns from the conversational history context $H_i$, ii) encoder that transforms the tokens of ${H_i}', C, Q_i$ into input embeddings, iii) reasoning module is responsible for performing contextual integration of input embeddings into contextualized embeddings to perform reasoning, and iv) output predictor predicts the answer $A_i$ on the basis of context-query interaction.}
\label{architecture}
\end{figure}
We will discuss these modules separately in the
rest of this section, along with the techniques and trends utilized in each of them for the successful design and implementation of a CMRC model.
\subsection{History Selection Module}
To enable the CMRC model to predict the answer span more accurately, it is necessary to introduce the previous context along with the source passage and the current question. However, only the context utterances that are relevant to the query are useful; the irrelevant ones may introduce more noise \cite{tian-etal-2017-make, 10.1007/978-981-16-0010-4_5}. Thus, careful selection of conversational history turns is quite critical for the model. The history selection process can be categorized as follows:
\paragraph{Selecting \textit{K} turns.} SDNet \cite{DBLP:journals/corr/abs-1812-03593}, BiDAF++ \cite{DBLP:conf/emnlp/ChoiHIYYCLZ18}, ORConvQA \cite{qu2020open}, and WS-OR-ConvQA \cite{DBLP:conf/ecir/QuYCCKI21} utilize conversation history by incorporating \textit{K} rounds of history turns.
\paragraph{Immediate History Turns.} BERT with 2-ctx \cite{ohsugi-etal-2019-simple} suggests that incorporating the immediate two turns is helpful in predicting the right answer span, whereas BERT-HAE \cite{DBLP:conf/sigir/Qu0QCZI19} claims that incorporating 5-6 conversational history turns contributes more to finding the correct answer span. However, both models demonstrate a dramatic degradation in performance as the number of history turns increases further.
\paragraph{Dynamic History Selection.}
In \cite{DBLP:conf/naacl/Yatskar19}, the authors pointed out that dialog features like topic return or topic shift may not align with the concept of selecting immediate dialog turns. To address this shortcoming, History Answer Modeling (HAM) \cite{DBLP:conf/cikm/QuYQZCCI19} was introduced as a dynamic policy that weighs the previous dialog turns on the basis of their contribution to answering the current question. The model assigns weights by attending over the previous history turns at the token or sentence level and combining them with the current turn's representation.
Another approach, referred to as Env-ConvQA \cite{qiu2021reinforced}, proposed a dynamic \textit{k}-history-turn selection process based on a reward-driven reinforced backtracking policy. The model treats the extraction of relevant history turns as a sequential decision-making process: it backtracks through the provided history turns one by one to decide whether each turn is relevant to the current question.
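The two families of selection policies can be sketched side by side. The toy relevance judge below is a hand-written lambda; in a real system it would be a learned policy network.

```python
def select_last_k(history, k):
    """Fixed policy: keep only the k most recent QA turns."""
    return history[-k:]

def select_relevant(history, question, is_relevant):
    """Dynamic policy (in the spirit of reinforced backtracking):
    keep a turn only if a judge marks it relevant to the question."""
    return [turn for turn in history if is_relevant(turn, question)]

history = [("Who is Columbus?", "An explorer"),
           ("Where was he born?", "Genoa"),
           ("What is the capital of France?", "Paris")]

print(select_last_k(history, 2))
kept = select_relevant(history, "When did he die?",
                       lambda turn, q: "Columbus" in turn[0] or "he" in turn[0].split())
print(len(kept))  # 2 -- the off-topic France turn is dropped
```

The fixed policy is simple but blind to topic shifts, while the dynamic policy can keep an early turn alive when the conversation returns to it.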
\subsection{Encoder}
This component is responsible for converting the tokens of the source passage, the current question, and the selected history turns into fixed-length vectors, which are subsequently provided as input to the reasoning module. Although the internals of an encoder may vary from approach to approach depending on the input required by the reasoning module, the high-level encoding generally involves transforming and combining different context-dependent word embeddings, including but not limited to ELMo \cite{peters-etal-2018-deep}, GloVe \cite{pennington-etal-2014-glove}, and BERT \cite{DBLP:conf/naacl/DevlinCLT19}. To improve the impact of these embeddings, additional features such as
Parts of Speech (POS) tags and History Answer Embeddings (HAE) have also been incorporated as a part of the input. These embeddings can be categorized into conventional word embeddings and contextualized word embeddings.
\paragraph{Conventional Word Embeddings.}
This technique encodes words into low-dimensional vectors such that inter-related tokens are placed in close proximity to each other in the vector space, making it easy to identify the correlations between them. Several methods for generating distributed word representations have been proposed in the literature, with the most popular and efficient being GloVe \cite{pennington-etal-2014-glove} and Word2Vec \cite{DBLP:journals/corr/abs-1301-3781}. However, these methods fail to determine the accurate meaning of a word with respect to its given context.
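Proximity in the embedding space is typically measured with cosine similarity; a minimal sketch with invented 3-dimensional vectors (real GloVe or Word2Vec vectors have 100-300 dimensions):

```python
import numpy as np

def cosine(u, v):
    """Similarity of two word vectors; related words score near 1."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy embeddings: 'king' and 'queen' point in similar directions, 'apple' does not
king  = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.75, 0.2])
apple = np.array([0.1, 0.2, 0.9])
print(cosine(king, queen) > cosine(king, apple))  # True
```

Because every word has exactly one such vector, "bank" (river) and "bank" (finance) collapse to the same point, which is precisely the limitation contextualized embeddings address.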
\paragraph{Pre-trained Contextualized Word Embeddings.}
Though conventional word embeddings yield good results in identifying and establishing correlations between words encoded in low-dimensional vectors, they still fail to capture contextual representations sufficiently: the distributed representation generated for a word is the same in varying contexts. To overcome this issue, researchers put forward the idea of contextualized embeddings. These embeddings are pre-trained on large text corpora and are then either utilized as distributed word embeddings or fine-tuned for the specific downstream task. This falls under the category of transfer learning and has obtained astonishing results in various NLP tasks \cite{DBLP:conf/naacl/DevlinCLT19, peters-etal-2018-deep, NEURIPS2019_dc6a7e65}.
The most successful application of these embeddings has been in machine comprehension. One of the first in the series is Context Vectors (CoVe) \cite{NIPS2017_7209}, which utilizes Seq2Seq models \cite{NIPS2014_a14ac55a} to train Long Short-Term Memory (LSTM) \cite{hochreiter1997long} encoders on a large-scale dataset; the encoder is then reused for other downstream NLP tasks. Proposed in \cite{peters-etal-2018-deep}, Embeddings from Language Models (ELMo) is a successor of CoVe whose embeddings are obtained by training a bi-directional Language Model (biLM). These embeddings yield more accurate representations of words because, instead of using only the results of the topmost layer of the biLM, ELMo combines the outcomes of all biLM layers into one vector with task-specific learned weights. Another popular model for language understanding is the Transformer \cite{NIPS2017_3f5ee243}, a sequence transduction model based on multi-headed attention that entirely eliminates the recurrent layers found in most encoder-decoder architectures. This self-attention mechanism makes the Transformer more efficient and parallelizable in learning the context of input sequences. The most recent and top-trending model in the series is BERT \cite{DBLP:conf/naacl/DevlinCLT19}, which addresses the unidirectionality used in training language models such as Generative Pre-training (GPT) \cite{radford2018improving} and GPT-2 \cite{radford2019language}. Due to its bi-directional property and the powerful Transformer \cite{NIPS2017_3f5ee243} architecture, BERT's performance exceeds top-performing models on many NLP downstream tasks \cite{DBLP:conf/naacl/DevlinCLT19}.
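ELMo's task-specific combination of biLM layers can be sketched as a softmax-weighted sum of per-layer representations, scaled by a task scalar. The layer outputs and scalars below are invented for the example.

```python
import numpy as np

def elmo_combine(layer_outputs, s, gamma):
    """ELMo-style mixing: softmax-normalised scalars `s` over the biLM
    layers, scaled by a task-specific scalar `gamma`."""
    w = np.exp(s - s.max())
    w /= w.sum()
    # weighted sum over the layer axis -> one vector per token
    return gamma * np.tensordot(w, layer_outputs, axes=1)

# toy biLM: 3 layers, 4 tokens, 5-dim representations
layers = np.random.rand(3, 4, 5)
s = np.array([0.1, 0.5, 0.4])     # learned scalars (invented values)
emb = elmo_combine(layers, s, gamma=1.0)
print(emb.shape)  # (4, 5)
```

Because the mixing weights are learned per downstream task, a syntax-heavy task can lean on lower layers while a semantics-heavy task leans on upper ones.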
\subsection{History Modeling}
The process of history modeling is generally carried out in the encoder module, where the conversation history is integrated with the context and the current question to form the complete input.
We describe it as a separate module for easier understanding and better readability.
Different models employ different techniques, or combinations thereof, to introduce conversational history turns as part of the input. A brief description of each technique follows:
\paragraph{Appending the Conversation History.}
One of the most common ways to include the selected history turns (previous question-answer pairs) in the input is to append them to the current question \cite{qu2020open, DBLP:journals/corr/abs-1812-03593, qiu2021reinforced}. Some approaches \cite{DBLP:conf/ijcai/0022WZ20, ohsugi-etal-2019-simple} modify this by appending only the history questions, each encoded with its turn number. In \cite{DBLP:conf/emnlp/ChoiHIYYCLZ18}, the authors reported that adding the dialog turn to the input yields better results in practice.
\paragraph{Introducing History Answer Markers in the Given Context.}
Another recent trend in modeling conversation history is encoding the context tokens with history answer embedding markers \cite{DBLP:conf/sigir/Qu0QCZI19}. These markers indicate whether a context token is part of a history answer or not. A variation of HAE is Positional HAE (Pos-HAE) \cite{DBLP:conf/cikm/QuYQZCCI19}, wherein the position of the dialog turn relative to the current question is also encoded, enabling the model to capture the spatial patterns of history answers in the context.
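A minimal token-level sketch of history answer markers follows. The published model marks span positions within the tokenized context and learns an embedding for each marker id; simple string matching is used here purely for illustration.

```python
def history_answer_markers(context_tokens, history_answers):
    """Assign marker id 1 to context tokens that appear inside any
    previous answer, 0 otherwise (the two learned HAE embeddings)."""
    answer_tokens = {tok for ans in history_answers for tok in ans.split()}
    return [1 if tok in answer_tokens else 0 for tok in context_tokens]

context = "Columbus was born in Genoa in 1451".split()
markers = history_answer_markers(context, ["born in Genoa"])
print(markers)  # [0, 0, 1, 1, 1, 1, 0]
```

The marker sequence is embedded and added to the token embeddings, so the encoder sees which parts of the passage the conversation has already touched.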
\paragraph{Generating Latent Representations using Context Tokens.}
One attribute of successful CMRC models is the ability to grasp the flow of the conversation. Since the flow of the conversation is based on the given context, it can be captured by generating latent or intermediate representations of the context tokens rather than using the raw inputs. Such approaches \cite{DBLP:conf/iclr/HuangCY19, DBLP:conf/acl-mrqa/YehC19} fall under the category of flow-based methods.
\subsection{Reasoning Module}
CMRC models can be grouped based on how they perform reasoning. In \textit{single-step reasoning},
the model passes the contextualized input (context, question, and history turns) through only one layer and generates the answer. In contrast, in \textit{multi-step reasoning},
the contextualized input is fused across multiple layers to produce history-aware contextualized output embeddings. Generally, the input to this module consists of multiple sequence sets, which are fused in multiple layers, usually intertwined with an attention mechanism, to generate accurate output embeddings. On the basis of the underlying techniques, the reasoning process can be categorized into
\textit{conventional methods}, \textit{pre-trained language models}, \textit{flow-based models}, and \textit{open-retrieval based models}.
\subsubsection{Conventional Methods}
Several sequence models employing different mechanisms like self-attention and bidirectional attention are a common choice for carrying out the task of conversational machine comprehension.
Famous as CoQA's baseline, DrQA+PGNet \cite{DBLP:journals/tacl/ReddyCM19} leverages the strengths of two powerful models, i.e., Pointer-Generator Network (PGNet) \cite{see-etal-2017-get} and Document Reader (DrQA) \cite{chen-etal-2017-reading}. DrQA, based on bi-directional LSTM (biLSTM), first provides cues from the answer evidence in the given context. PGNet, which utilizes an attention-based Seq2Seq model \cite{DBLP:journals/corr/BahdanauCB14}, decodes the found evidence to predict the final answer.
BiDAF++ \cite{DBLP:conf/emnlp/ChoiHIYYCLZ18} uses the Bi-directional Attention Flow (BiDAF) \cite{DBLP:conf/iclr/SeoKFH17} model augmenting the bi-directional attention flow along with contextualized embeddings and self-attention. The modeling performs reasoning via a multi-layered bidirectional attention flow layer followed by a multi-layered biLSTM to identify the correct answer span.
SDNet \cite{DBLP:journals/corr/abs-1812-03593} utilizes two bidirectional Recurrent Neural Networks (RNNs) \cite{rumelhart1986learning} to apply both self-attention and inter-attention between different layers in order to form the contextualized understanding of question and context.
\subsubsection{Pre-trained Language Models}
Large-scale pre-trained language models such as BERT \cite{DBLP:conf/naacl/DevlinCLT19}, RoBERTa \cite{liu2019roberta}, and GPT \cite{radford2018improving} have become popular to achieve the state-of-the-art results on NLP tasks. While GPT is known for its language generation capabilities, BERT is famous for language understanding and has provided great results in machine comprehension tasks.
One of the advantages of employing pre-trained language models is their capability to fuse the encoding and reasoning modules together. This results in a ready-to-tune architecture that hides the complex interaction between the given context and the current question. However, incorporating the previous context is challenging in pre-trained language models (particularly BERT), since the input allows only two segments and the sequence length is limited to 512 tokens. The more turns we try to append, the more the context paragraph or the history turns need to be truncated to fit the model. Accurate modeling of the history results in better reasoning over the context.
The history integration challenge can be addressed using the following approaches:
\begin{itemize}
\item Highlighting conversational history by embedding history answer embeddings in the contextual tokens as suggested in BERT-HAE \cite{DBLP:conf/cikm/QuYQZCCI19}. The embeddings are only added for those tokens that are present in the previous conversational history.
\item Using separate models for all the history turns to attend to the interaction between each turn and the given context, as suggested by Ohsugi et al. \cite{ohsugi-etal-2019-simple}. The contextualized embeddings are then merged to form aggregated history-aware embeddings, which are passed through a BiGRU to capture inter-turn interactions before any prediction is made.
\item Introducing a reinforced backtracker in the model to filter out unnecessary or irrelevant history turns instead of evaluating them as a whole, as proposed by Qiu et al. \cite{qiu2021reinforced}. The selected turns, along with the given passage, form the input provided to the BERT model.
\end{itemize}
Once the history turns have been integrated, BERT-based models calculate the probability of each word being the start word by taking the dot product between each final embedding and a start vector, followed by a softmax over all the words \cite{DBLP:conf/sigir/Qu0QCZI19}. The word with the highest probability is selected, and a similar process is employed to locate the final word of the answer span in the given context. In \cite{qiu2021reinforced}, after predicting the answer span, the model generates a reward to evaluate the utility of the history selection for the answer prediction process. The computed reward is then utilized to update the policy network to maximize the accuracy of the model for the next prediction cycle.
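The start/end prediction described above can be sketched with toy numbers. The embeddings and the start/end vectors are invented; real models also constrain the end position to follow the start.

```python
import numpy as np

def predict_span(token_embeddings, start_vector, end_vector):
    """Dot each final token embedding with the start/end vectors,
    softmax over positions, and return the argmax start and end."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p_start = softmax(token_embeddings @ start_vector)
    p_end = softmax(token_embeddings @ end_vector)
    return int(p_start.argmax()), int(p_end.argmax())

# toy: 5 tokens, 4-dim final embeddings
emb = np.array([[0.1, 0.0, 0.0, 0.0],
                [0.9, 0.1, 0.0, 0.0],   # likely answer start
                [0.2, 0.8, 0.0, 0.0],   # likely answer end
                [0.0, 0.1, 0.0, 0.0],
                [0.0, 0.0, 0.1, 0.0]])
start, end = predict_span(emb, np.array([1.0, 0, 0, 0]), np.array([0, 1.0, 0, 0]))
print(start, end)  # 1 2
```

Only the two extra vectors are learned on top of the pre-trained encoder, which is why span-prediction fine-tuning is so cheap relative to pre-training.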
\subsubsection{Flow-based Models}
Another recent trend that has caught attention is the use of flow-based approaches in machine comprehension. A well-designed CMRC model should be able to grasp the flow of the conversation, i.e., know what topic is under discussion as well as the facts and events relevant to it. The flow of the conversation can thus be considered
as a sequence of latent representations generated based on the tokens of the source passage. These latent representations, generated during the reasoning over previous conversation turns, aid the contextual reasoning of the current question. The main models based on the flow architecture are described below.
FlowQA \cite{DBLP:conf/iclr/HuangCY19} utilizes the contextualized embeddings as the latent representations, a process referred to as Integration Flow (IF). The process involves processing the context tokens in parallel for each question turn (context integration), interleaved with processing the question turns sequentially along the context tokens (flow). The model stacks multiple flow layers interweaved with attention, first over the context and then over the question itself, to derive the reasoning for the answer span.
FlowDelta \cite{DBLP:conf/acl-mrqa/YehC19} was introduced as an improved version in the flow series; it uses the same architecture as FlowQA but achieves better accuracy. Instead of passing the intermediate latent representations through the reasoning process, the model passes the information gain, i.e., the difference between the latent representations of the previous two layers. By modeling this difference, the model can better focus on the information hints present in the context.
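To make the flow mechanism concrete, the sketch below (a simplified NumPy illustration, not the authors' implementation; the recurrence form, the weight matrix `W`, and all dimensions are assumptions for exposition) carries a per-position flow state across question turns in the FlowQA style, and then forms FlowDelta's information gain as the difference of the last two latent states:

```python
import numpy as np

def integration_flow(per_turn_reps, W):
    """Toy Integration-Flow: for each context position, a simple
    recurrence across question turns lets reasoning from earlier
    turns flow into the current one.
    per_turn_reps: (turns, ctx_len, dim); W: (dim, dim)."""
    flow = np.zeros(per_turn_reps.shape[1:])
    latents = []
    for turn_rep in per_turn_reps:
        flow = np.tanh(turn_rep @ W + flow)  # carry state over turns
        latents.append(flow)
    return np.stack(latents)

rng = np.random.default_rng(0)
reps = rng.normal(size=(3, 4, 8))           # 3 turns, 4 tokens, dim 8
latents = integration_flow(reps, rng.normal(size=(8, 8)))
# FlowDelta's "information gain": difference of the last two latents
delta = latents[-1] - latents[-2]
```

In the real models the recurrence is a GRU/LSTM over the turn dimension rather than this single tanh step, but the shape of the computation is the same.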
The flow approaches discussed so far follow the concept of IF, which does not really mimic a human's style of reasoning: they first perform reasoning in parallel for each question and then refine and enhance the reasoning across different turns. GraphFlow \cite{DBLP:conf/ijcai/0022WZ20}, on the other hand, constructs a dynamic context graph encoding not only the passage itself but also the question and the conversation history. The model processes the flow by applying a GNN to the sequence of context graphs, and the output is utilized when processing the next graph. To capture the contextual relationships between words, a BiLSTM is applied before the words are provided as input to the GNN. The GraphFlow architecture alternates this mechanism with co-attention over the question and the GNN output.
\subsubsection{Open-retrieval Based Models}
Another recently introduced trend in the field of CMRC is the use of open-retrieval methods. The methods discussed above rely heavily on the given passage to extract or generate an answer. However, this is impractical in real-world scenarios, since a gold passage is not always available. The model should instead be able to retrieve the relevant passages from a collection. The main models employing the open-retrieval architecture are discussed below:
Open-retrieval CQA (ORConvQA) \cite{qu2020open} is the first in the series of open-retrieval models for CMRC. It consists of three main modules: i) a passage retriever, ii) a passage reranker, and iii) a passage reader, all based on Transformers \cite{NIPS2017_3f5ee243}. Given the current question and the previous history, the passage retriever first extracts the top-K relevant passages from a collection. The retriever is based on a dual-encoder architecture that uses two separate ALBERT \cite{DBLP:conf/iclr/LanCGGSS20} encoders for passages and questions. The reranker and reader share the same BERT encoder, which transforms the input sequence consisting of question, history, and relevant passages into contextualized representations. The reranker module conducts a list-wise reranking of the retrieved passages, which serves as a supervision signal to fine-tune the encoder. Finally, the reader predicts the answer span by computing the probability of each token being a start/end token.
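The retriever's dot-product scoring step can be sketched as follows (a minimal illustration; the toy vectors stand in for ALBERT question/passage embeddings, an assumption made purely for exposition):

```python
import numpy as np

def retrieve_top_k(question_vec, passage_vecs, k=2):
    """Dual-encoder retrieval sketch: score each passage by the dot
    product of its embedding with the (history-aware) question
    embedding, and return the indices of the top-k passages."""
    scores = passage_vecs @ question_vec
    order = np.argsort(-scores)          # descending by score
    return order[:k], scores

q = np.array([1.0, 0.0, 1.0])            # toy question embedding
P = np.array([[1.0, 0.0, 1.0],           # strong match  -> score 2.0
              [0.0, 1.0, 0.0],           # unrelated     -> score 0.0
              [1.0, 1.0, 0.0]])          # partial match -> score 1.0
top, scores = retrieve_top_k(q, P, k=2)  # -> passages 0 and 2
```

The retrieved passages would then be handed to the reranker and reader modules described above.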
ORConvQA focuses on identifying and extracting short span-based answers. In information-seeking dialogs, however, answers are relatively free-form and long, which makes them difficult to extract. Weakly-supervised open-retrieval CQA (WS-ORConvQA) \cite{DBLP:conf/ecir/QuYCCKI21} extends ORConvQA with a learned weak supervision approach that can find and extract both span-based and free-form answers. Given a question and its conversation history, the passage retriever first extracts the relevant passages from a collection, scoring each passage by the dot product of the representations of the question and the passage. The reader then reads the top passages and produces an answer. Training is weakly supervised: given one of the retrieved passages and the gold answer, a weak supervisor predicts a span in the passage as a weak answer, providing weak supervision signals for training the reader; if no exact match is found, it selects the span in the retrieved passages with the maximum overlap with the gold answer. The reader is a standard BERT-based machine comprehension model \cite{DBLP:conf/naacl/DevlinCLT19} that calculates the probability of each token being a start or an end token. The final answer is selected by computing the sum of its retriever score and reader score.
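The weak supervisor's span-matching step can be sketched as below (a simplified token-overlap version; the whitespace tokenization and the token-F1 matching criterion are assumptions made for illustration):

```python
def token_f1(pred, gold):
    """Token-level F1 between a predicted span and the gold answer."""
    gold_counts = {}
    for t in gold:
        gold_counts[t] = gold_counts.get(t, 0) + 1
    common = 0
    for t in pred:
        if gold_counts.get(t, 0) > 0:
            gold_counts[t] -= 1
            common += 1
    if common == 0:
        return 0.0
    p, r = common / len(pred), common / len(gold)
    return 2 * p * r / (p + r)

def weak_answer_span(passage_tokens, gold_tokens, max_len=10):
    """Weak supervision sketch: when no exact match exists, pick the
    passage span with maximum token-F1 overlap with the gold answer."""
    best, best_f1 = (0, 1), 0.0
    n = len(passage_tokens)
    for i in range(n):
        for j in range(i + 1, min(n, i + max_len) + 1):
            f1 = token_f1(passage_tokens[i:j], gold_tokens)
            if f1 > best_f1:
                best, best_f1 = (i, j), f1
    return passage_tokens[best[0]:best[1]], best_f1

passage = "the answer was first given in 1998 by researchers".split()
gold = "given in 1998".split()
span, f1 = weak_answer_span(passage, gold)   # exact-overlap span
```

The selected span then serves as the training target for the reader in place of the (possibly non-extractive) gold answer.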
\afterpage{
\begin{landscape}
\centering
\scriptsize
\RaggedLeft
\begin{longtable}{|p{1.5cm}|p{2.5cm}|p{2.5cm}|p{3.8cm}|p{3.8cm}|p{2.3cm}|}
\caption{Recent studies on conversational machine reading comprehension (2016-2021).}
\label{tab:cmrct}
\\
\hline \multicolumn{1}{|c|}{\textbf{Ref.}} & \textbf{History Selection} & \multicolumn{1}{|c|}{\textbf{Encoder}} & \multicolumn{1}{|c|} {\textbf {History Modeling}} & \multicolumn{1}{|c|}{\textbf{Reasoning}} & \multicolumn{1}{|c|}{\textbf{Output Prediction}} \\ \hline
\endfirsthead
\multicolumn{6}{c}%
{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\
\hline
\multicolumn{1}{|c|}{\textbf{Ref.}} & \textbf{History Selection} & \multicolumn{1}{|c|}{\textbf{Encoder}} & \multicolumn{1}{|c|}{ \textbf {History Modeling}} & \multicolumn{1}{|c|}{\textbf{Reasoning}} & \multicolumn{1}{|c|}{\textbf{Output Prediction}} \\ \hline
\endhead
\hline \multicolumn{6}{|r|}{{Continued on next page}} \\ \hline
\endfoot
\hline
\endlastfoot
BIDAF++ w/k-ctx \color{blue}$\textbf{\cite{DBLP:conf/emnlp/ChoiHIYYCLZ18}}$ &
\textit{k} history turns.
& \begin{tableitems}[nosep,after=\strut]
\item GloVE for word embeddings.
\item BiDirectional LSTM for contextual embeddings.
\end{tableitems}
& \begin{tableitems}[nosep,after=\strut]
\item Encodes context tokens with history answer markers before passing on for reasoning.
\item Encodes the dialog turn number within the question embeddings.
\end{tableitems} &Performs reasoning via multi-layered bidirectional attention flow layer followed by multi-layered biLSTM.& Span prediction.\\ \hline
DrQA +PGNet \color{blue}$\textbf{\cite{DBLP:journals/tacl/ReddyCM19}}$ &
\textit {k} history turns.
& Bidirectional LSTM
& \begin{tableitems}[nosep,after=\strut]
\item Appends the selected history turns to the source passage and current question.
\end{tableitems} & The DrQA model first points to the evidence in the given text; PGNet then transforms the evidence into the answer. & Free-form answers\\ \hline
SDNet \color{blue}$\textbf{\cite{DBLP:journals/corr/abs-1812-03593}}$ &
\textit{k} history turns
& \begin{tableitems}[nosep,after=\strut]
\item Word embeddings using GloVe.
\item Contextualized embeddings using BERT.
\end{tableitems}
& \begin{tableitems}[nosep,after=\strut]
\item Appends the selected history turns to the source passage and current question.
\end{tableitems} &Utilizes both self-attention and inter attention in multiple layers using biDirectional LSTM to reason across the given context.& Span prediction\\ \hline
\hline
BERT-HAE \color{blue}$\textbf{\cite{DBLP:conf/sigir/Qu0QCZI19}}$ &
\textit{k} history turns (optimal performance found with 5--6 history turns)
& BERT-generated embeddings
& \begin{tableitems}[nosep,after=\strut]
\item Introduces a history answer marker layer indicating whether a context token is present in any conversation-history answer.
\end{tableitems} & \begin{tableitems}[nosep,after=\strut]
\item BERT generates a representation for each token based on the embeddings for position, segment, and tokens.
\item The model then computes the probability of tokens in a given paragraph of being a start and end token of the answer span.
\end{tableitems} & Span prediction\\ \hline
BERT-HAM \color{blue}$\textbf{\cite{DBLP:conf/cikm/QuYQZCCI19}}$ & Dynamic history selection policy
& Bert-based embeddings on both word and sequence level
& \begin{tableitems}[nosep,after=\strut]
\item Encodes context tokens with a dialog-turn-encoded variant of HAE called \textit{Positional-HAE}.
\end{tableitems} & The history attention module assigns a weight to each token-level and sequence-level representation; aggregated representations of both are then obtained and used for answer prediction. & \begin{tableitems}[nosep,after=\strut]
\item Span prediction.
\item Dialog-act prediction.
\end{tableitems}\\ \hline
BERT w/k-ctx \color{blue}$\textbf{\cite{ohsugi-etal-2019-simple}}$ &
\textit{k} history turns
& Contextualized paragraph representations
independently conditioned with each question and
each answer generated using BERT.
& \begin{tableitems}[nosep,after=\strut]
\item Appends history QA pair to the current question with each QA pair conditioned on the source paragraph.
\item The model then concatenates the resulting sequences to form a uniform representation.
\end{tableitems} &The concatenated result is then passed through the BiGRU for span prediction. & \begin{tableitems}[nosep,after=\strut]
\item Span prediction
\item Answer type prediction (Yes, no, unanswerable).
\end{tableitems} \\
\hline
Env-ConvQA \color{blue}$\textbf{\cite{qiu2021reinforced}}$ &
dynamic \textit{k} history turns
& BERT-generated embeddings.
& \begin{tableitems}[nosep,after=\strut]
\item Prepends selected subset of history QA pair and passage to the current question.
\item The model then concatenates the resulting sequences to form a uniform representation.
\end{tableitems} & \begin{tableitems}[nosep,after=\strut]
\item BERT generates a representation for each token based on the embeddings for position, segment, and tokens.
\item The model then computes the probability of tokens in a given paragraph of being a start and end token of the answer span.
\item After answer prediction, the model generates a reward to evaluate the role of selected history turns and update the policy network accordingly.
\end{tableitems} &
Span prediction \\
\hline \hline
FlowQA \color{blue}$\textbf{\cite{DBLP:conf/iclr/HuangCY19}}$ & \textit{k} history turns.
& Uses ELMo to generate contextual embeddings before passing it to IF layer.
& Integrates both QA pairs and the intermediate context representation from conversation history called \textbf{FLOW.}
& Employ multiple integration flow layers with alternating cross and self-attention to perform reasoning. & Span prediction\\ \hline
Graph Flow \color{blue}$\textbf{\cite{DBLP:conf/ijcai/0022WZ20}}$ &
prepends \textit{N} question answer pairs to the current question.
& GloVE and 1024-dim BERT embeddings
& Encodes history QA pairs into contextual graphs.
&BiLSTM is utilized for the context integration and the GNNs are used to capture the contextual interaction. & Span prediction\\ \hline
FlowDelta \color{blue}$\textbf{\cite{DBLP:conf/acl-mrqa/YehC19}}$ &
\textit{k} history turns
& Uses ELMo to generate contextual embeddings before passing it to IF layer.
& Integrates both QA pairs and the intermediate context representation from conversation history called \textbf{FLOW.}
& Model passes the information gain (the difference between the latent representations of last two layers) to let the model focus more precisely on the context. & Span prediction\\
\hline
\hline
ORConvQA \color{blue}$\textbf{\cite{qu2020open}}$ &
\textit{k} history turns
& Uses ALBERT to generate contextual embeddings before passing it to reader and reranker modules.
& \begin{tableitems}[nosep,after=\strut]
\item Appends history questions to the current question.
\item The model uses two encoders, one for encoding current question with its history and other for encoding relevant passages.
\end{tableitems}
& \begin{tableitems}[nosep,after=\strut]
\item Employs fully-supervised setting for the training of the reader.
\item The top-retrieved passages are then fed to the reranker and reader for a concurrent learning of all model components.
\item The reader predicts an answer by computing scores of each token being the start token and the end token.
\end{tableitems} & Span prediction\\
\hline
WS-ORConvQA \color{blue}$\textbf{\cite{DBLP:conf/ecir/QuYCCKI21}}$ &
\textit{k} history turns
& Uses ALBERT to generate contextual embeddings before passing it to reader module.
& \begin{tableitems}[nosep,after=\strut]
\item Appends history questions to the current question.
\item The model uses two encoders, one for encoding current question with its history and other for encoding relevant passages.
\item The retriever generates a score based on the dot product of the representations of the question and the passage.
\end{tableitems}
& \begin{tableitems}[nosep,after=\strut]
\item Employs weakly-supervised setting for the training of the reader.
\item The top-retrieved passages are then fed to the reader.
\item The reader computes the probabilities of the true start and end tokens among all the tokens from the top passages.
\item The answer span is selected on the basis of sum of retriever score and reader score.
\end{tableitems} & \begin{tableitems}[nosep,after=\strut]
\item Span prediction.
\item Free-form answer.
\end{tableitems}\\
\hline
\end{longtable}
\vfill
\end{landscape}
}
\subsection{Output Prediction}
Common trends observed for the answer prediction module include span prediction, free-form answer prediction, and dialog-act prediction. For span prediction, the probabilities of tokens being the start and end token are calculated. For unanswerable questions, a token, UNANSWERED, is appended at the end of each passage in QuAC; the model learns to predict this token if it finds the question unanswerable. A sequence-level aggregated representation is used for dialog-act prediction, and modeling of history dialog acts is not required for this task.
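The span-prediction step described above can be sketched as follows (the logits are toy values; real readers score every token with a trained start/end head and typically bound the span length):

```python
import numpy as np

def predict_span(start_logits, end_logits, max_len=5):
    """Span-prediction sketch: softmax the start/end logits and pick
    the valid span (start <= end, bounded length) whose combined
    probability is highest."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p_start, p_end = softmax(start_logits), softmax(end_logits)
    best, best_p = (0, 0), 0.0
    for i in range(len(p_start)):
        for j in range(i, min(i + max_len, len(p_end))):
            p = p_start[i] * p_end[j]
            if p > best_p:
                best, best_p = (i, j), p
    return best

# 4-token toy passage: start head favors token 1, end head token 2
span = predict_span(np.array([0.1, 3.0, 0.2, 0.1]),
                    np.array([0.1, 0.2, 2.5, 0.1]))
```

Unanswerability can be handled in the same framework by letting the best span point at the appended UNANSWERED token.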
The categorization of the architecture based on the techniques used in each module is
summarized in Table~\ref{tab:cmrct}.
\section{Datasets for Conversational Question Answering}
\label{dataset}
One driver for the rapid growth in the field of CQA is the emergence of large-scale conversational datasets for both knowledge-base and machine comprehension.
Constructing a high-quality dataset is equally significant as optimizing CQA-based architectures.
In this section, we collect and compare the major datasets in the area of CQA.
\subsection{Datasets for Sequential KB-QA}
Most of the datasets for sequential KB-QA deal with simple questions, each of which can be answered using a single tuple in the knowledge graph. In practice, however, a system can encounter more complicated questions that require logical and comparative reasoning to arrive at an accurate answer. Unlike simple questions, these complicated questions require access to a larger subgraph of the KG. For example, to answer the question ``\textit{Which country has the highest peak, Nepal or India?}'', one needs to find i) the highest peak in Nepal, ii) the highest peak in India, and finally iii) compare the two peaks to arrive at the right answer.
Similar to the field of CMRC, sequential KB-QA saw its rise after the introduction of the sequential QA datasets \textit{Sequential Question Answering (SQA)} \cite{DBLP:conf/acl/IyyerYC17}, \textit{Complex Sequential Question Answering (CSQA)} \cite{DBLP:conf/aaai/SahaPKSC18}, and \textit{ConvQuestions} \cite{DBLP:conf/cikm/ChristmannRASW19}. These datasets have facilitated the process of answering complex questions, thus supporting a number of research efforts. A high-level comparison based on their common characteristics is presented in Table~\ref{tab:characteristicsSQA}.
\subsubsection{SQA}
The main idea behind the creation of SQA is to decompose complex questions into a series of inter-linked sequential questions, giving them the feel of a natural conversation.
\paragraph{Dataset Collection:}
As described in \cite{DBLP:conf/acl/IyyerYC17},
the SQA dataset has been collected via crowdsourcing by leveraging WikiTable Questions (WTQ)\footnote{https://github.com/ppasupat/WikiTableQuestions}, which contains highly compositional questions associated with HTML tables from Wikipedia. Each crowdsourcing task contains a long and complex question originally from WTQ as the question intent. The workers are asked to compose a sequence of simpler but inter-related questions that lead to the final intent. The answers to the simple questions are subsets of the cells in the table.
\paragraph{Dataset Analysis:}
SQA consists of 6,066 unique question sequences containing 17,553 question-answer pairs, an average of 2.9 questions per sequence. The questions are classified into three classes: i) \textit{column selection} questions, where the answer is an entire column of the table (23\% of the questions in SQA), ii) \textit{subset selection} questions, where the answer is a subset of the previous question's answer (27\% of the dataset), and iii) \textit{row selection} questions, where the answers appear in the same rows but in different columns (19\% of the dataset).
\paragraph{Evaluation:}
For the evaluation of a system, the overall accuracy, sequence accuracy (the percentage of sequences for which every question is answered correctly), and positional accuracy (the accuracy at each position in a sequence) are calculated. Notably, all systems struggle to correctly answer every question within a sequence, despite each question being simpler on average than those in WTQ.
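These three metrics can be sketched as follows (a toy implementation; encoding the system output as per-sequence correctness flags is an assumption made for illustration):

```python
def sqa_metrics(correct_flags):
    """Sketch of SQA's metrics. correct_flags holds one list of
    booleans per question sequence, True when the question at that
    position was answered correctly."""
    total = sum(len(s) for s in correct_flags)
    overall = sum(sum(s) for s in correct_flags) / total
    # sequence accuracy: every question in the sequence is correct
    seq_acc = sum(all(s) for s in correct_flags) / len(correct_flags)
    # positional accuracy: accuracy at each position in a sequence
    max_pos = max(len(s) for s in correct_flags)
    pos_acc = []
    for p in range(max_pos):
        at_p = [s[p] for s in correct_flags if len(s) > p]
        pos_acc.append(sum(at_p) / len(at_p))
    return overall, seq_acc, pos_acc

overall, seq_acc, pos_acc = sqa_metrics(
    [[True, True, False], [True, True], [False, True, True]])
```

On this toy input, 6 of 8 questions are correct overall, only one of the three sequences is fully correct, and accuracy varies by position, mirroring the observation that later questions in a sequence are harder to get uniformly right.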
\begin{scriptsize}
\RaggedLeft
\begin{table}[tbp!]
\caption{A comparison of the sequential KB-QA datasets SQA\cite{DBLP:conf/acl/IyyerYC17}, CSQA\cite{DBLP:conf/aaai/SahaPKSC18}, and ConvQuestions\cite{DBLP:conf/cikm/ChristmannRASW19} based on different characteristics as defined in their respective papers.}
\label{tab:characteristicsSQA}
\begin{tabular}{|p{2.5cm}|p{2.5cm}|p{2.5cm}|p{3.0cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Characteristics}} & \multicolumn{1}{|c|}{\textbf{SQA}} & \multicolumn{1}{|c|}{\textbf{CSQA}} &
\multicolumn{1}{|c|}{\textbf{ConvQuestions}}\\
\hline
\textbf{Data Source}& WikiTableQuestions & WikiData & WikiData (consisting of 5 domains, i.e., books, movies, soccer, music, and TV series) \\
\hline
\textbf{Conversational Setup} & Three workers who were asked to decompose complex questions into a sequence of simpler sequential questions & Pairs of in-house annotators where one annotator acts as a \textit{user} and the other as a \textit{system} to provide answers or ask clarification questions & Master workers from AMT paired together and asked to provide answers via web search \\
\hline
\textbf{Nature of QAs} & Simple & Complex inter-related as well as simple & Complex \\
\hline
\textbf{Question Types} & Factoid & Factoid & Factoid and non-opinionated \\
\hline
\textbf{Requires Reasoning?} & No & Yes & Yes\\ \hline
\textbf{Max turns per dialog} & N/A & 8.5 & 5\\ \hline
\textbf{Total Number of Questions} & 17,553 & 1.6 M & N/A \\ \hline
\textbf{Total Number of Dialogs} & 6,066 & 200K & 11,200\\
\hline
\end{tabular}
\end{table}
\end{scriptsize}
\subsubsection{CSQA}
The CSQA dataset \cite{DBLP:conf/aaai/SahaPKSC18} consists of 200K QA dialogs for the task of complex sequential question answering.
CSQA combines two sub-tasks: i) answering factoid questions through complex reasoning over a large-scale KB, and ii) learning to converse through a sequence of coherent QA pairs. CSQA calls for a sequential KB-QA agent that combines many technologies, including i) parsing complex natural language queries, ii) using conversation context to resolve co-references and ellipsis in user utterances, as a belief tracker does, iii) asking clarification questions for ambiguous queries, as a dialog manager does, and iv) retrieving relevant paths in the KB to answer questions.
\paragraph{Dataset Collection:}
Each dialog is prepared in a two-in-house-annotator setting, with one annotator acting as the \textit{user} and the other as the \textit{system}. The user's role is to ask questions, and the system's job is to answer them or to ask for clarification if required. The idea is to establish an understanding of the simple and complex questions annotators can ask over a knowledge graph; these can then be abstracted into templates and used to instantiate more queries involving different subjects, objects, and relations. Apart from asking and answering simple questions (which require only a single tuple to generate an answer), the annotators come up with questions involving logical and comparative operators like AND, OR, NOT, ==, and $>=$, resulting in more complex questions with which to judge a model's performance. Examples of such questions are ``\textit{Which country has more population than India?}'' and ``\textit{Which cities of India and Pakistan have the River Indus passing through them?}''.
After collecting both simple and complex questions, the next step is to create coherent conversations involving these QA pairs. The resulting conversation should have i) linked subsequent QA pairs, and ii) the conversation should contain the necessary elements of a conversation such as confirmation, clarification, and co-references.
\paragraph{Dataset Analysis:}
The dataset consists of 200K dialogs with a total of 1.6 million turns. On average, a user's question is 9.7 words long and a system's response is 4.74 words long.
\paragraph{Evaluation:}
Different evaluation metrics are used for the different question types. To measure the accuracy of simple questions (including indirect questions, co-references, and ellipsis), logical reasoning, and comparative reasoning, both precision and recall are used. For quantitative reasoning and verification (boolean) questions, the F1 score is used, and for clarification questions, the BLEU-4 score.
\subsubsection{ConvQuestions}
ConvQuestions \cite{DBLP:conf/cikm/ChristmannRASW19} has been published recently to further aid the field of sequential KB-QA. It consists of 11,200 distinct conversations from five different domains, i.e., books, movies, soccer, music, and TV series. The questions are asked with minimal syntactic guidelines to maintain their natural character. The questions in ConvQuestions are sourced from WikiData and the answers are provided via web search. The questions pose several challenges that need to be addressed, including incomplete cues, anaphora, indirection, temporal reasoning, comparison, and existentials.
\paragraph{Dataset Collection:}
Each dialog is prepared as a conversation generation task by AMT workers, who were asked to base their conversation on five \textit{sequential} questions from a domain of their choice. To ensure the conversations are as natural as possible, the Turkers were asked neither to interleave the questions nor to permute the order of follow-up questions merely to generate a larger volume. Furthermore, paraphrases of the questions were also collected to provide two versions of each question; this augments the data with interesting variations which, in turn, improve the robustness of a system. To bring the dataset closer to real-world challenges, participants were encouraged to ask complex questions.
\paragraph{Dataset Analysis:}
The dataset consists of 11,200 conversations, each comprising 5 turns. The average lengths of the first and follow-up questions are 9.07 and 6.20 words, respectively. Question entities and expected answers have a balanced distribution between non-human types (books, stadiums, TV series) and humans (actors, artists, authors). Context expansion is key to finding the correct answer in ConvQuestions, as the average KG distance from the original seed to the answer is 2.30. The questions exhibit characteristics such as comparisons, temporal reasoning, and anaphora, making them more closely related to real-world challenges.
\paragraph{Evaluation:}
Since each question in the dataset has exactly one or at most three correct answers, the standard metric of precision at the top rank (P@1) is used. The other metrics include Mean Reciprocal Rank (MRR) and Hit@5, which measures the fraction of times a correct answer appears within the top-5 positions.
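Under the dataset's near-single-gold-answer setting, these metrics can be sketched as below (the ranked lists are toy values; the assumption that the gold answer appears at most once per list is made for simplicity):

```python
def rank_metrics(ranked_lists, gold_answers):
    """Sketch of P@1, MRR, and Hit@5 over per-question ranked
    answer lists."""
    p_at_1 = mrr = hit5 = 0.0
    n = len(ranked_lists)
    for ranked, gold in zip(ranked_lists, gold_answers):
        if ranked and ranked[0] == gold:
            p_at_1 += 1
        if gold in ranked:
            rank = ranked.index(gold) + 1   # 1-based rank
            mrr += 1.0 / rank
            if rank <= 5:
                hit5 += 1
    return p_at_1 / n, mrr / n, hit5 / n

p1, mrr, h5 = rank_metrics(
    [["a", "b", "c"], ["x", "gold", "y"], ["m", "n", "o"]],
    ["a", "gold", "z"])
```

Here the first question is answered at rank 1, the second at rank 2, and the third not at all, giving P@1 of 1/3, MRR of 0.5, and Hit@5 of 2/3.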
\subsection{Datasets for Conversational Machine Comprehension}
Generally, the datasets for machine reading comprehension fall into three categories based on the type of answer they provide:
\begin{itemize}
\item \textit{Multiple-choice option} datasets provide text-based multiple-choice questions and expect the model to identify the right answer from the available options. Examples of such datasets include RACE \cite{DBLP:conf/emnlp/LaiXLYH17}, MCTest \cite{DBLP:conf/emnlp/RichardsonBR13}, and MCScript \cite{DBLP:conf/lrec/0002MRTP18},
\item \textit{Descriptive answer} datasets allow answers in any free-form text. Such datasets are useful when the questions are implicit and may require the use of common sense or world knowledge. Examples include MS MARCO \cite{DBLP:conf/nips/NguyenRSGTMD16} and NarrativeQA \cite{DBLP:journals/tacl/KociskySBDHMG18}, and
\item \textit{Span prediction} or \textit{extractive} datasets require the model to extract the correct answer span from the given source passage. Such datasets offer good natural language understandability and easy evaluation of the task. SQuAD \cite{DBLP:conf/emnlp/RajpurkarZLL16}, TriviaQA \cite{DBLP:conf/acl/JoshiCWZ17}, and NewsQA \cite{DBLP:conf/rep4nlp/TrischlerWYHSBS17} are popular examples of extractive datasets.
\end{itemize}
CoQA \cite{DBLP:journals/tacl/ReddyCM19} and QuAC \cite{DBLP:conf/emnlp/ChoiHIYYCLZ18}, the two main datasets for CMRC, come under the category of span-prediction datasets. Apart from these two, there is another CMRC dataset, ShARC \cite{DBLP:conf/emnlp/SaeidiBL0RSB018}, which requires understanding a rule text to answer a few inter-linked and co-referenced questions; these questions must be answered by reasoning over background knowledge. However, this dataset does not really follow the definition of CMRC and is hence not covered here. A summarized comparison of the significant characteristics of CoQA and QuAC is presented in Table~\ref{cmrc}.
\begin{scriptsize}
\RaggedLeft
\begin{table}[tbp!]
\caption{A comparison of the multi-turn conversational
datasets-CoQA \cite{DBLP:journals/tacl/ReddyCM19} and QuAC \cite{DBLP:conf/emnlp/ChoiHIYYCLZ18} based on different characteristics as defined in their respective papers.}
\label{cmrc}
\begin{tabular}{|p{3.5cm}|p{4.0cm}|p{3cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Characteristics}} & \multicolumn{1}{|c|}{\textbf{CoQA}} & \multicolumn{1}{|c|}{\textbf{QuAC}} \\
\hline
\textbf{Data Source} & Passages collected
from 7 diverse domains e.g. children
stories from MCTest,
news articles from
CNN, Wikipedia articles etc. & Sections from Wikipedia articles filtered in the “people” category associated with subcategories like culture, animal, geography, etc. \\ \hline
\textbf{Conversational Setup } & Questioner-answerer setting where both have access to the entire context. & Teacher-Student setting where the teacher has access to the full context for answering, while the student has only the title and summary of the article\\ \hline
\textbf{Requires External Knowledge?} & Yes & No\\
\hline
\textbf{Question Type} & Factoid & Open-ended, highly contextual\\ \hline
\textbf{Answer Type} & Free-form with an extractive rationale. & Extractive span which can be
yes/no or ‘No Answer’.\\ \hline
\textbf{Dialog Acts} & No & Yes\\ \hline
\textbf{Max Turns per Dialog }& 15 & 11 \\ \hline
\textbf{Unanswerable Questions} & Yes & Yes \\ \hline
\textbf{Total Number of Questions} & 126K & 100K \\ \hline
\textbf{Total Number of Dialogs} & 8K & 14K \\ \hline
\end{tabular}
\end{table}
\end{scriptsize}
\subsubsection{CoQA}
CoQA was introduced by Reddy et al. \cite{DBLP:journals/tacl/ReddyCM19} to measure a machine's ability to participate in a QA-style conversation. The dataset was developed with three objectives in mind. The first concerns the nature of questions in human conversations: every question except the first depends on the conversation history, making the setting more similar to a real-life human conversation.
The second goal of CoQA is to maintain the naturalness of answers in a conversation. Many existing datasets restrict answers to spans found in the given source passage; however, such a setting does not always yield natural answers. CoQA addresses this issue by allowing free-form answers while providing a text span from the given passage as a rationale for each answer.
The third goal of CoQA is to facilitate the development of CQA systems across multiple domains. Existing QA datasets mainly focus on a single domain, which complicates testing the generalization capabilities of existing systems. Thus, CoQA spans seven domains, each with its own data source: literature extracted from Project Gutenberg\footnote{https://www.gutenberg.org/}, children's stories taken from MCTest \cite{DBLP:conf/emnlp/RichardsonBR13}, Wikipedia articles\footnote{https://www.wikipedia.org/}, Reddit articles from Writing Prompts \cite{DBLP:conf/acl/LewisDF18}, middle and high school English exams taken from \cite{DBLP:conf/emnlp/LaiXLYH17}, science articles derived from AI2 science questions \cite{DBLP:conf/aclnut/WelblLG17}, and news articles taken from CNN \cite{DBLP:conf/nips/HermannKGEKSB15}. Science and Reddit are used for out-of-domain evaluation only.
\paragraph{Data Collection:}
Each conversation is prepared in a two annotator setting, i.e., one being a questioner and the other being an answerer. The platform of Amazon Mechanical Turk (AMT)\footnote{https://www.mturk.com/} is used to pair workers on a passage through the ParlAI MTurk API \cite{DBLP:conf/emnlp/MillerFBBFLPW17} and both the annotators have full access to the passage.
\paragraph{Dataset Analysis:}
The dataset consists of 127K conversation turns gathered from 8K conversations over text passages. The average conversation length is 15 turns, each turn consisting of a question and an answer. CoQA is spread across multiple question types; prefixes like \textit{did}, \textit{where}, \textit{was}, \textit{is}, and \textit{does} are very frequent. Moreover, almost every section of CoQA contains co-references, which shows that it is highly conversational. What makes conversations in CoQA even more human-like is that they sometimes feature one-word questions like ``who?'', ``where?'', or even ``why?''. Such questions are context-dependent, and in order to answer correctly, the system needs to go through the previous history turns to understand the question.
\paragraph{Evaluation:}
The main evaluation metric for the dataset is the macro-average F1 score of word overlap, computed separately for the in-domain and out-of-domain portions.
\subsubsection{QuAC}
In an information-seeking dialog, students keep asking their teacher questions for clarification about a particular topic. This idea forms the basis for the Question Answering in Context (QuAC) dataset. Modeling such inter-related questions can be complex, as the questions may be elliptical, highly context-dependent, and sometimes even unanswerable. To promote learning in such a challenging setting, QuAC presents a rich set of 14K crowd-sourced QA dialogs (comprising 100K QA pairs).
\paragraph{Dataset Collection:}
The interaction in QuAC is of a student-teacher nature, where the teacher has access to the source paragraph. The student, provided only with the heading of the paragraph, aims to gain as much knowledge about its content as possible by asking multiple questions. The teacher answers the questions by extracting correct answer spans from the source passage. The teacher also gives dialog acts as feedback to the student (e.g., whether or not to ask a follow-up question), which results in more productive dialogs.
\paragraph{Dataset Analysis:}
The dataset has long answers of up to 15 tokens, which is an improvement over SQuAD and CoQA. Another factor worth noting is that frequent question types in QuAC are based on \textit{Wh} words, which makes the questions more open-ended, in contrast to other QA datasets where questions are more factoid. Furthermore, 86\% of the questions are highly contextual, i.e., they require the model to re-read the context to resolve the co-references. Of these questions, 44\% refer to entities or events in the dialog history, whereas 61\% refer to the subject of the article.
\paragraph{Evaluation:}
Besides evaluating accuracy using the F1 score, QuAC also utilizes the human equivalence score (HEQ), which measures a system's performance as the percentage of instances on which it matches or exceeds an average human's performance. HEQ-Q and HEQ-D are HEQ scores computed with questions and dialogs as the instances, respectively.
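A minimal sketch of how HEQ-Q and HEQ-D could be computed, assuming per-question system and human F1 scores are already available (names and list layouts are our own illustration):

```python
def heq_q(system_f1, human_f1):
    """HEQ-Q: fraction of questions on which the system's F1 matches
    or exceeds the average human F1."""
    hits = sum(s >= h for s, h in zip(system_f1, human_f1))
    return hits / len(system_f1)

def heq_d(system_f1_by_dialog, human_f1_by_dialog):
    """HEQ-D: fraction of dialogs on which the system matches or
    exceeds human F1 for *every* question in the dialog."""
    hits = sum(all(s >= h for s, h in zip(sd, hd))
               for sd, hd in zip(system_f1_by_dialog, human_f1_by_dialog))
    return hits / len(system_f1_by_dialog)
```

HEQ-D is the stricter metric: a single below-human answer disqualifies the whole dialog.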
\section{Research Trends and Open Challenges}
\label{challenges}
CQA is a rapidly evolving field. This paper surveys neural approaches that have recently been introduced to address the challenges of CQA.
These CQA systems can be successfully utilized in many practical applications:
\begin{itemize}
\item The KB-QA based systems allow users to access a wealth of information via conversation without composing complex SQL queries. From a commercial perspective, these KB-QA based systems can be employed either in open-domain QA (pertaining to worldly knowledge) or in closed-domain QA (such as the medical field). A user does not have to access multiple sources of information; rather, a single agent can satisfy all of their information needs.
\item CQA systems provide a simplified conversational search (ConvSearch) setting \cite{DBLP:conf/sigir/Qu0QCZI19}, which has strong potential to become more popular than traditional search engines such as Google or Bing, which, contrary to a user's expectation of a concise answer, return a list of probable answers/solutions. These conversational systems can potentially be used for learning about a topic, planning an activity, seeking advice or guidance, and making decisions.
\item Conversational agents play a significant role in facilitating smooth interaction with users. One conceivable application is customer support systems, where a user does not have to go through an entire website looking for the desired information.
\end{itemize}
As an emerging research area with many promising applications, CQA techniques are not yet mature and many open issues remain. In this section, we discuss several prominent ones:
\begin{itemize}
\item The selection of context plays a significant role in providing accurate answers in CQA. With richer conversational scenarios, a number of contextual features need to be considered, including personal context, social context, and task context. General research questions regarding contextual information in CQA include: ``\textit{What are the effective strategies and models to collect and integrate contextual information?}'', ``\textit{Are knowledge graphs sufficient to capture and represent this information?}'', and ``\textit{Do we need to incorporate the entire context, or would a relevant chunk be enough to find the correct answer?}''.
Different models attempt to incorporate context in different ways. Of all the history selection methods, the dynamic history selection mechanism proposed by Qu et al. \cite{DBLP:conf/cikm/QuYQZCCI19} is the most compelling and intuitive.
As far as the flow-based methods are concerned, they consider latent representations of the entire context to deal with the varying conversational aspects. Similarly, for sequential KB-QA, the authors in \cite{DBLP:conf/nips/GuoTDZY18} proposed the use of a dialog manager to collect and maintain the previous utterances.
\item Information-seeking behaviors need to be modeled in the CQA setting, as they provide users with the opportunity to obtain more information about the topics of their interest. The research questions related to information-seeking behavior that need to be explored include: ``\textit{What optimal structure for clarification questions can be used to better understand the users' information need?}'' and ``\textit{What effective strategies can be employed to design such clarification questions?}''.
\item Interpretability plays a significant role when finding an answer to a question.
In the existing CQA systems, models are expected to provide answers without explaining why or how they deduced them, making it difficult to understand the source and reasoning behind an answer. CoQA is the only CQA dataset that provides reasoning for the provided answer. Another model, Cos-E \cite{DBLP:conf/acl/RajaniMXS19}, generates commonsense reasoning explanations for the deduced answer. Regardless of whether complete interpretability of CQA models is required, we can safely say that understanding the internal workings of a model, even to a certain extent, can greatly help improve the design of neural network systems in the future.
\item Commonsense reasoning is a long-standing challenge in Conversational AI, i.e., whether it is incorporating the commonsense in dialog systems or QA systems. Commonsense reasoning refers to the ability of an individual to make day-to-day inferences by using or assuming basic knowledge about the real-world. However, the CQA systems proposed so far work on pragmatic reasoning, i.e., finding the intended meaning(s) from the provided context because commonsense knowledge is often not explicitly explained in the data sources (i.e., KB-QA or CMRC dataset). Despite single-turn QA systems almost achieving human-level performance, the implementation of commonsense reasoning is still not very common. There are only a few research works that take commonsense reasoning into consideration when performing single-turn QA \cite{DBLP:conf/lrec/0002MRTP18, DBLP:conf/emnlp/HuangBBC19}.
There has been an increasing trend to incorporate commonsense reasoning into single-turn MRC over the past few years. However, when it comes to utilizing commonsense reasoning in CMRC, no successful attempt has been made. This is probably because commonsense reasoning requires questions that need some prior knowledge or background, which the current CMRC datasets do not provide.
%
When it comes to single-turn KB-QA, a number of prominent works utilize commonsense in the QA process \cite{DBLP:conf/emnlp/LinCCR19,DBLP:conf/aaai/SharmaG19, DBLP:conf/aaai/LvGXTDGSJCH20}. Another effort is COMET \cite{bosselut-etal-2019-comet}, which employs a Transformer to generate commonsense knowledge graphs. Knowledge graphs like ConceptNet \cite{speer2017conceptnet} and ATOMIC \cite{sap2019atomic} have been designed to facilitate the implementation of commonsense in KB-QA systems. The field of sequential KB-QA remains untouched, primarily because the majority of existing methods lack connections between concepts \cite{DBLP:conf/nlpcc/ZhongTDZWY19}.
\item Lack of inference capability is one of the reasons why QA systems struggle to generate correct answers. Most of the existing CQA systems are based on semantic relevance between the question and the given context, which limits a model's capability to reason. An example discussed by Liu et al. \cite{8651505} shows that, given the context ``\textit{five people on board and two people on the ground died}'', the system was not able to provide the correct answer ``\textit{seven}'' to the question ``\textit{how many people died?}''. Thus, how to design systems with strong inference ability is still an open issue and calls for further research.
\end{itemize}
\section{Conclusion}
\label{conclusion}
Conversational Question Answering (CQA) systems are emerging as a key technology for closing the interactional gap between machines and humans, owing to advances in pre-trained language modeling and the introduction of conversational datasets. This progress simplifies the development of application areas such as online customer support, interactions with IoT devices in smart spaces, and search engines, thus enabling CQA to realize its social and economic impact. The effective incorporation of contextual information, the ability to interpret questions, and the ability to ask efficient clarification questions are the main challenges in the field of CQA.
Our survey of over 80 academic works in the field of CQA, spanning 2016 to 2021, confirms the thriving expansion of this exciting field. In this survey, we have comprehensively discussed the field of CQA, which is subdivided into i) sequential KB-QA and ii) Conversational Machine Reading Comprehension (CMRC). The general architecture of each category is decomposed into modules, and the prominent techniques employed in each module have been discussed. We subsequently introduced and discussed representative datasets based on their characteristics. Finally, we discussed the potential applications of CQA and identified future research directions that need to be explored for realizing natural conversations.
We anticipate that this literature survey will serve as a useful reference for researchers and pave the way for streamlining research in this important area.
\begin{acknowledgements}
Munazza Zaib sincerely acknowledges the generous support of the Macquarie University, Sydney, Australia for funding this research work via its International Macquarie University Research Excellence Scholarship (Allocation No. 20201589).
Wei Emma Zhang and Quan Z. Sheng have been partially supported by Australian Research Council (ARC) Discovery Grant DP200102298.
\end{acknowledgements}
\bibliographystyle{spbasic}
\subsection{Background and Motivation}
\begin{figure*}[th]
\centering
\includegraphics[width=1\linewidth]{hypercskg}
\caption{\textbf{A motivating example of how DrFact works for OpenCSR.} We model the knowledge corpus as a hypergraph consisting of \textit{concepts} in $\mathcal{V}$ as \textit{nodes} and \textit{facts} in $\mathcal{F}$ as \textit{hyperedges}. Then, we develop a differentiable reasoning method, DrFact, to perform \textit{multi-hop reasoning} via ~\textit{fact-following} operations (e.g., $f_1 \rightarrow f_2$). }
\label{fig:hypercskg}
\end{figure*}
\smallskip
\noindent
\textbf{Task Formulation.}
We denote a \textbf{corpus} of knowledge facts as $\mathcal{F}$, and use $\mathcal{V}$ to denote a {vocabulary} of \textbf{concepts}; both are sets consisting of unique elements.
A \textbf{fact} $f_i\in \mathcal{F}$ is a sentence that describes generic commonsense knowledge, such as ``{\textit{trees} remove \textit{carbon dioxide} from the \textit{atmosphere} through \textit{photosynthesis}}.''
A \textbf{concept} $c_j\in \mathcal{V}$ is a noun or base noun phrase mentioned frequently in these facts (e.g., `tree' and `carbon dioxide'). Concepts are considered identical if their surface forms are the same (after lemmatization).
Given only a \textbf{question} $q$ (e.g., ``\textit{what can help alleviate global warming?}''),
an open-ended commonsense reasoner is supposed to \textbf{answer} it by returning {a weighted set of concepts}, such as \{($a_1$=\textit{`renewable energy'}, $w_1$), ($a_2$=\textit{`tree'}, $w_2$),
\dots \}, where $w_i\in \mathbb{R}$ is the weight of the predicted concept $a_i\in \mathcal{V}$.
To learn interpretable, trustworthy reasoning models,
it is expected that models can output intermediate results that justify the reasoning process
--- i.e., the supporting facts from $\mathcal{F}$.
E.g., an \textbf{explanation} for `tree' to be an answer to the question above can be the combination of two facts: $f_1$ = ``{carbon dioxide} is the major ...'' and $f_2$ = ``{trees} remove ...'', as shown in Figure~\ref{fig:opencsr}.
\smallskip
\noindent
\textbf{Implicit Multi-Hop Structures.}
Commonsense questions
(i.e., questions that need commonsense knowledge to reason)
contrast with better-studied multi-hop factoid QA datasets, e.g., HotpotQA~\cite{yang2018hotpotqa},
which primarily focus on querying about \textit{evident relations between named entities}.
For example, an example multi-hop factoid question can be ``which team does the player named 2015 Diamond Head Classic's MVP play for?''
Its query structure is relatively clear and \textit{self-evident} from the question itself: in this case the reasoning process can be decomposed into $q_1$ = ``the player named 2015 DHC's MVP'' and $q_2$ = ``which team does $q_1.\operatorname{answer}$ play for''.
The reasoning required to answer commonsense questions is usually more \textit{implicit} and relatively unclear.
Consider the previous example in Fig.~\ref{fig:opencsr},
$q$ = `what can help alleviate global warming?' can be decomposed by $q_1$ = ``what contributes to global warming'' and $q_2$ = ``what removes $q_1.\operatorname{answer}$ from the atmosphere'' --- but many other decompositions are also plausible.
In addition, unlike HotpotQA, we assume that we have \textit{no ground-truth justifications} for training, which makes OpenCSR even more challenging.
\subsection{Overview}
In \textsc{DrFact}, we propose to model reasoning as traversing a \textit{hypergraph}, where each \textit{hyperedge} corresponds to a fact in $\mathcal{F}$, and connects the concepts in $\mathcal{V}$ that are mentioned in that fact. This is shown in Figure~\ref{fig:hypercskg}.
Notice that a fact, as a hyperedge, connects multiple concepts that are mentioned, while the textual form of the fact maintains the contextual information of the original natural language statement, and hence we do not assume a \textit{fixed} set of relations.
Given such a hypergraph,
our open-ended reasoning model will traverse the hypergraph starting from the question (concepts) and finally arrive at a set of concept nodes by following multiple hyperedges (facts).
A probabilistic view of this process over $T$ hops is:
{
\begin{equation*}
\resizebox{0.97\hsize}{!}{$P(c \mid q) =
P (c \mid q, F_T ) \prod^T_{t=1} P (F_t \mid q, F_{t-1}) P \left(F_0 \mid q\right)$}
\end{equation*}
}%
Intuitively, we want to model the distribution of a concept $c\in \mathcal{V}$ being an answer to a question $q$ as $P(c\mid q)$.
This answering process can be seen as a process of multiple iterations of ``fact-following,'' or moving from one fact to another based on shared concepts, and finally moving from facts to concepts.
We use $F_t$ to represent a weighted set of retrieved facts at hop $t$, and $F_0$ for the initial facts (defined below).
Then, given the question and the current retrieved facts, we iteratively retrieve the facts for the next hop. Finally, we score a concept using retrieved facts.
\begin{figure*}[th]
\centering
\includegraphics[width=1\linewidth]{drfact_overall.pdf}
\caption{\textbf{The overall workflow of \textsc{DrFact}.} We encode the hypergraph (Fig.~\ref{fig:hypercskg}) with a concept-to-fact sparse matrix $E$ and a fact-to-fact sparse matrix $S$. The dense fact index $D$ is pre-computed with a pre-trained bi-encoder.
A weighted set of facts is represented as a sparse vector $F$.
The workflow (left) of \textsc{DrFact} starts mapping a question to a set of initial facts that have common concepts with it. Then, it recursively performs \texttt{Fact-Follow} operations (right) for computing $F_t$ and $A_t$. Finally, it uses \textit{learnable} hop-weights $\alpha_t$ to aggregate the answers.}
\label{fig:overview}
\end{figure*}
\subsection{Pre-computed Indices}
\smallskip
\noindent
\textbf{Dense Neural Fact Index $D$.}
We pre-train a bi-encoder architecture over BERT~\cite{Devlin2019}, which learns to maximize the score of facts that contain correct answers to a given question, following the steps of~\citeauthor{dpr} (2020) (i.e., dense passage retrieval), so that we can use MIPS to do dense retrieval over the facts.
After pre-training, we embed each fact in $\mathcal{F}$ with a dense vector (using the \texttt{[CLS]} token representation). Hence $D$ is a $|\mathcal{F}|\times d$ dense matrix.
\smallskip
\noindent
\textbf{Sparse Fact-to-Fact Index $S$.}
We pre-compute the sparse links between facts by a set of connection rules, such as $f_i \rightarrow f_j$ when $f_i$ and $f_j$ have at least one common concept and $f_j$ introduces at least two more new concepts that are not in $f_i$ (see Appendix~\ref{sec:impl} (2) for more).
Hence $S$ is a binary sparse tensor with the dense shape $|\mathcal{F}| \times |\mathcal{F}|$.
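The linking rule described above can be sketched as follows, treating each fact as the set of concepts it mentions (only the two stated thresholds are shown; the additional pruning rules in the appendix are omitted):

```python
def link_facts(concepts_i, concepts_j):
    """Directed link f_i -> f_j: the two facts share at least one
    concept, and f_j introduces at least two concepts absent from f_i
    (further appendix pruning rules omitted)."""
    return (len(concepts_i & concepts_j) >= 1
            and len(concepts_j - concepts_i) >= 2)
```

Requiring new concepts in the target fact discourages hops that merely restate the source fact.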
\smallskip
\noindent
\textbf{Sparse Index of Concept-to-Fact Links $E$.}
As shown in Figure~\ref{fig:hypercskg}, a concept can appear in multiple facts and a fact also usually mentions multiple concepts.
We encode these co-occurrences between each fact and its mentioned concepts into a sparse matrix with the dense shape $|\mathcal{V}| \times |\mathcal{F}|$ --- i.e., the \textit{concept-to-fact index}.
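A simple stand-in for building this index, using a plain Python dict of fact-id lists in place of the ragged sparse tensor used in the actual implementation (`fact_concepts` is an assumed preprocessing output giving the concept ids mentioned by each fact):

```python
def build_concept_to_fact(fact_concepts, num_concepts):
    """Concept-to-fact index: maps each concept id to the list of fact
    ids mentioning it (a dict-of-lists stand-in for the sparse
    |V| x |F| matrix E)."""
    index = {c: [] for c in range(num_concepts)}
    for f, concepts in enumerate(fact_concepts):
        for c in concepts:
            index[c].append(f)
    return index
```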
\subsection{Differentiable Fact-Following Operation}
The most important part of our framework is how to model the fact-following step in our formulation, i.e., $P\left(F_{t} \mid F_{t-1}, q\right)$.
For modeling the translation from a fact to another fact under the context of a question $q$, we propose an efficient approach with a differentiable operation that uses both \textit{neural} embeddings of the facts and their \textit{symbolic} connections in the hypergraph.
The symbolic connections between facts are represented by the very sparse fact-to-fact matrix $S$, which in our model is efficiently implemented with the \texttt{tf.RaggedTensor}
construct of TensorFlow \cite{drkit}.
$S$ stores a pre-computed dependency between pairs of facts, $S_{ij}$.
Intuitively, if we can traverse from $f_i$ to $f_j$, the two facts should mention some common concepts and have related semantics; $S_{ij}$ reflects this intuition.
The fact embeddings computed by a pre-trained bi-encoder are in the dense index of fact vectors $D$, which contains rich semantic information about each fact, and helps measure the plausibility of a fact in the context of a given question.
The proposed fact-follow operation has two parallel sub-steps: 1) sparse retrieval and 2) dense retrieval.
The sparse retrieval uses a fact-to-fact sparse matrix to obtain possible next-hop facts.
We can compute
$F_{t}^{s} = F_{t-1} S$
efficiently thanks to the ragged representation of sparse matrices.
For the neural dense retrieval, we use a maximum inner product search (MIPS)~\cite{johnson2019billion,scann} over the dense fact embedding index $D$:
\begin{align*}
\mathbf{z_{t-1}} &= F_{t-1}D\\
\mathbf{h_{t-1}} &= g(\mathbf{z_{t-1}}, \mathbf{q_t}) \\
F_{t}^{d} &= \operatorname{MIPS}_K(\mathbf{h_{t-1}}, D)
\end{align*}
We first aggregate the dense vectors of the facts in $F_{t-1}$ into the dense vector $\mathbf{z_{t-1}}$, which is fed into a neural layer with the query embedding at the current step, $\mathbf{q_t}$ (encoded by BERT), to create a query vector $\mathbf{h_{t-1}}$.
Here $g(\cdot)$ is an MLP that maps the concatenation of the two input vectors to a dense output with the same dimensionality as the fact vectors, which we call the fact-translating function.
Finally, we retrieve the next-hop top-K facts $F_{t}^d$ with the $\operatorname{MIPS}_K$ operator.
To get the best of both symbolic and neural world, we use element-wise multiplication to combine the sparse and dense retrieved results:
$F_{t} = F_{t}^s \odot F_{t}^d$.
We summarize the fact-following operation with these differentiable steps:
\begin{eqnarray} \label{eq:follow}
F_{t} & = & \operatorname{Fact-Follow}(F_{t-1}, q) \\
& = & F_{t-1} S \odot \operatorname{MIPS}_K(g(F_{t-1}D, \mathbf{q_t}), D) \nonumber
\end{eqnarray}
After each hop, we multiply $F_t$ with a pre-computed fact-to-concept matrix $E$, thus generating $A_t$, a set of concept predictions.
To aggregate the concept scores, we take the maximum score among the facts that mention a concept $c$. Finally,
we take the weighted sum of the concept predictions at all hops as the final weighted concept set $A=\sum_{t=1}^T \alpha_t A_t,$
where $\alpha_t$ is a \textit{learnable} parameter.
{Please read Appendix~\ref{sec:impl} for more details.}
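To make the operation concrete, here is a small dense NumPy sketch of the fact-following step and the max-aggregation over concepts. The real system uses ragged sparse tensors and a learned MLP; the `g` argument below is a placeholder for that fact-translating function, and all shapes are toy-sized.

```python
import numpy as np

def mips_topk(query, D, K):
    """Dense retrieval: keep only the top-K inner-product scores."""
    scores = D @ query
    out = np.zeros_like(scores)
    top = np.argsort(-scores)[:K]
    out[top] = scores[top]
    return out

def fact_follow(F_prev, q_t, S, D, g, K):
    """One hop: F_t = (F_{t-1} S) * MIPS_K(g(F_{t-1} D, q_t), D)."""
    sparse_scores = F_prev @ S            # symbolic neighbourhood
    h = g(F_prev @ D, q_t)                # translated query vector
    dense_scores = mips_topk(h, D, K)     # neural retrieval
    return sparse_scores * dense_scores   # combine both worlds

def aggregate_concepts(F_t, E):
    """A_t: score a concept by the max weight of the facts mentioning
    it; E is the binary |V| x |F| concept-to-fact matrix."""
    return (E * F_t[np.newaxis, :]).max(axis=1)
```

The final answer scores would then be the learnable-weighted sum over hops, $A = \sum_t \alpha_t A_t$.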
Equation~\ref{eq:follow} defines a random-walk process on the hypergraph associated with the corpus. We found that performance was improved by making this a ``lazy'' random walk---in particular by augmenting $F_t$ with the facts in $F_{t-1}$ which have a weight higher than a threshold $\tau$:
$$F_t= \operatorname{Fact-Follow}(F_{t-1}, q) + \operatorname{Filter}(F_{t-1}, \tau).$$
We call this \textbf{self-following}: $F_t$ then contains highly-relevant facts for all distances $t'<t$, which improves the model when different questions require different numbers of ``hops''.
\textbf{Initial Facts.}
Note that the set of \textit{initial facts} $F_0$ is computed differently, as they are produced using the input question $q$, instead of a previous-hop $F_{t-1}$.
We first query our pre-trained bi-encoder and the associated index $D$ via MIPS to find facts related to $q$, and then select from the retrieved set those facts that contain question concepts (i.e., concepts matched in the question text), using the concept-to-fact index $E$.
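A toy sketch of this two-stage $F_0$ construction, with the dense retrieval scores given as a plain dict (a stand-in for the MIPS query over $D$):

```python
def initial_facts(question_concepts, dense_scores, fact_concepts, K):
    """F_0: take the top-K facts by dense (MIPS) score, then keep only
    those mentioning at least one question concept (via the index E)."""
    top = sorted(dense_scores, key=dense_scores.get, reverse=True)[:K]
    return {f: dense_scores[f] for f in top
            if fact_concepts[f] & question_concepts}
```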
\subsection{Auxiliary Learning with Distant Evidence}
\label{ssec:aux}
Intermediate evidence, i.e., supporting facts, is significant for guiding multi-hop reasoning models during training.
In a weakly supervised setting, however, we usually do not have ground-truth annotations as they are expensive to obtain.
To get some noisy yet still helpful supporting facts, we use dense retrieval over the training questions as distant supervision. Specifically, we concatenate the question and the best candidate answer to build a query to our pre-trained index $D$, and then we divide the results into four groups depending on whether they contain question/answer concepts: 1) question-answer facts, 2) question-only facts, 3) answer-only facts, and 4) none-facts.
Then, to get a 2-hop evidence chain, we first check if a question-only fact can be linked to an answer-only fact through the sparse fact-to-fact matrix $S$.
Similarly, we can also get 3-hop distant evidence.
In this manner,
we can collect
the set of supporting facts at each hop position, denoted as $\{F_1^*, F_2^*, \dots, F_T^*\}$.
The final learning objective is thus to optimize the sum of the cross-entropy loss $l$ between the final weighted set of concepts $A$ and the answer set $A^*$, as well as the auxiliary loss from distant evidence, i.e., the mean of the hop-wise loss between the predicted facts $F_t$ and the distant supporting facts at that hop, $F_t^*$, defined as follows:
$$\mathcal{L} = {l}(A, A^*) + \frac{1}{T} {\sum_{t=1}^T {l}(F_t, F_t^*)}$$
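A toy rendering of this objective, using binary cross-entropy over the weighted sets as a stand-in for the loss $l$ (the actual implementation details are in the paper's appendix):

```python
import numpy as np

def bce(pred, target, eps=1e-9):
    """Binary cross-entropy over a weighted set; stands in for l."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(pred)
                   + (1.0 - target) * np.log(1.0 - pred)).mean())

def drfact_loss(A, A_star, F_list, F_star_list):
    """L = l(A, A*) + (1/T) * sum_t l(F_t, F_t*)."""
    T = len(F_list)
    aux = sum(bce(F, Fs) for F, Fs in zip(F_list, F_star_list)) / T
    return bce(A, A_star) + aux
```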
\subsection{Experimental Setup}
\subsubsection*{Fact corpus and concept vocabulary }
We use the {GenericsKB-Best} corpus as the main knowledge source\footnote{It was constructed from multiple commonsense knowledge corpora and only kept naturally occurring generic statements, which makes it a perfect fit for OpenCSR.}.
In total, we have {1,025,413} unique facts as our $\mathcal{F}$.
We use the spaCy toolkit to preprocess all sentences in the corpus and then extract frequent noun chunks within them as our concepts.
The vocabulary $\mathcal{V}$ has {80,524} concepts, and every concept is mentioned at least 3 times.
\subsubsection*{Datasets for OpenCSR}
To facilitate the research on open-ended commonsense reasoning (OpenCSR),
we reformatted three existing multi-choice question answering datasets to allow evaluating OpenCSR methods.
We choose three datasets:
QASC, OBQA, and ARC, as their questions require commonsense knowledge about science and everyday objects and are presented in natural language.
By applying a set of filters and rephrasing rules,
we selected those open-ended commonsense questions that query concepts in our vocabulary $\mathcal{V}$.
As we know that there can be multiple correct answers for a question in OpenCSR,
we employed crowd-workers
to collect more answers for each \textit{test} question based on a carefully designed annotation protocol.
{In total, we collect {15,691} answers for {2,138} rephrased questions for evaluation, which results in 7.5 answers per question on average.}
Please find more details about crowd-sourcing and analysis in Appendix~\ref{sec:opencsrdata}.
We show some statistics of the OpenCSR datasets and our new annotations in Table~\ref{tab:stat}.
To understand the multi-hop nature and the difficulty of each dataset, we use a heuristic
to estimate the percentage of ``single-hop questions'', for which we can find a fact (from top-1k facts retrieved by BM25) containing both a question concept and an answer concept.
The ARC dataset has about 67\% one-hop questions and thus is the easiest, while OBQA has only 50\%.
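The heuristic can be sketched as follows, treating each retrieved fact as the set of concepts it mentions:

```python
def is_single_hop(question_concepts, answer_concepts, retrieved_facts):
    """True if any retrieved fact (top-1k BM25 hits in the paper)
    mentions both a question concept and an answer concept."""
    return any(f & question_concepts and f & answer_concepts
               for f in retrieved_facts)
```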
\begin{table}[t]
\centering
\scalebox{0.8}{
\begin{tabular}{@{}r|ccc||c}
\toprule
\textbf{Stat. $\backslash$ Data } & {ARC} & {QASC} & {OBQA} & \textbf{Overall} \\ \midrule
\# \textbf{All Examples} & 6,600 & 8,443 & 5,288 & 20,331 \\ \midrule
\# {Training Set} & 5,355 & 6,883 & 4,199 & 16,437 \\
\# {Validation Set} & 562 & 731 & 463 & 1,756 \\
\# {Test Set} & 683 & 829 & 626 & 2,138 \\
\midrule
{{Avg.\#Answers}} & 6.8 & 7.6 & 7.7 & 7.5 \\
\midrule
{{Single-hop}}~\% & 66.91\% & 59.35\% & 50.80\% & 59.02\% \\
\bottomrule
\end{tabular}
}
\caption{Statistics of datasets for OpenCSR (v1.0).}
\label{tab:stat}
\end{table}
\subsubsection*{Evaluation metrics.}
Recall that, given a question $q$, the final output of every method is a weighted set of concepts $A=\{(a_1, w_1), \dots \}$.
We denote the set of \textit{true answer concepts}, as defined above, as $A^*=\{a_1^*, a_2^*, \dots \}$.
We define \textbf{Hit@K} accuracy to be the fraction of questions for which we can find \textit{at least one} correct answer concept $a_i^*\in A^*$ in the top-$K$ concepts of $A$ (sorted in descending order of weight).
As questions have multiple correct answers, recall is also an important aspect for evaluating OpenCSR, so we also
use \textbf{Rec@K} to evaluate the average recall of the top-K proposed answers.
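Both metrics are straightforward to compute from a ranked prediction list; a minimal sketch (function names are ours):

```python
def hit_at_k(ranked, gold, k):
    """1 if at least one gold answer concept is in the top-k predictions."""
    return int(any(a in gold for a in ranked[:k]))

def rec_at_k(ranked, gold, k):
    """Fraction of gold answer concepts recovered in the top-k predictions."""
    return len(set(ranked[:k]) & gold) / len(gold)

def evaluate(all_ranked, all_gold, k):
    """Average Hit@K and Rec@K over a set of questions."""
    n = len(all_ranked)
    hit = sum(hit_at_k(r, g, k) for r, g in zip(all_ranked, all_gold)) / n
    rec = sum(rec_at_k(r, g, k) for r, g in zip(all_ranked, all_gold)) / n
    return hit, rec
```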
\begin{table*}[t]
\centering
\scalebox{0.9}{
\begin{tabular}{c||cc|cc|cc||cc}
\toprule
& \multicolumn{2}{c}{ARC} & \multicolumn{2}{c}{QASC} & \multicolumn{2}{c}{OBQA} & \multicolumn{2}{c}{\textbf{\underline{Overall}}} \\ \midrule
\rowcolor{Gray} Metric = \textbf{Hit@K (\%)} & H@50 & H@100 & H@50 & H@100 & H@50 & H@100 & H@50 & H@100 \\ \midrule
\cellcolor{cyan!10} BM25 (off-the-shelf) & 56.95 & 67.35 & 58.50 & 66.71 & 53.99 & 66.29 & 56.48 & 66.78 \\
\cellcolor{cyan!10}DPR~\cite{dpr} & 68.67 & 78.62 & 69.36 & 78.89 & 62.30 & 73.80 & 66.78 & 77.10 \\
\cellcolor{cyan!10}DrKIT~\cite{drkit} & 67.63 & 77.89 & 67.49 & 81.63 & 61.74 & 75.92 & 65.62 & 78.48 \\
\cellcolor{cyan!10} \textsc{DrFact} (\textbf{Ours}) & \textbf{71.60} & \textbf{80.38} & \textbf{72.01} & \textbf{84.56} & \textbf{69.01} & \textbf{80.03} & \textbf{70.87 }& \textbf{81.66 } \\
\midrule
\cellcolor{blue!15} BM25 + MCQA Reranker & 76.87 & 80.38 & 75.75 & 80.22 & 79.23 & 84.03 & 77.28 & 81.54 \\
\cellcolor{blue!15}DPR + MCQA Reranker & 76.72 & 83.16 & 81.66 & 87.45 & 77.16 & 83.39 & 78.51 & 84.67 \\
\cellcolor{blue!15}DrKIT + MCQA Reranker & 78.44 & 83.37 & 84.00 & 86.83 & 79.25 & 84.03 & 80.56 & 84.74 \\
\cellcolor{blue!15}\textsc{DrFact} + MCQA Reranker & \textbf{84.19} & \textbf{89.90} & \textbf{89.87} & \textbf{93.00} & \textbf{85.78 }& \textbf{90.10} &\textbf{86.61 }& \textbf{91.00 } \\\midrule \midrule
\rowcolor{Gray} Metric = \textbf{Rec@K (\%)} & R@50 & R@100 & R@50 & R@100 & R@50 & R@100 & R@50 & R@100 \\ \midrule
\cellcolor{cyan!10} BM25 (off-the-shelf) & 21.12 & 28.08 & 16.33 & 20.13 & 14.27 & 20.21 & 17.24 & 22.81 \\
\cellcolor{cyan!10}DPR~\cite{dpr} & 28.93 & 38.63 & 23.19 & 32.12 & 18.11 & 26.83 & 23.41 & 32.53 \\
\cellcolor{cyan!10}DrKIT~\cite{drkit} & 27.57 & 37.29 & 21.25 & 30.93 & 18.18 & 27.10 & 22.33 & 31.77 \\
\cellcolor{cyan!10} \textsc{DrFact} (\textbf{Ours}) & \textbf{31.48} & \textbf{40.93} & \textbf{23.29} & \textbf{33.60} & \textbf{21.27} & \textbf{30.32} & \textbf{25.35} & \textbf{34.95 } \\
\midrule
\cellcolor{blue!15} BM25 + MCQA Reranker & 39.11 & 42.96 & 29.03 & 32.11 & 36.38 & 39.46 & 34.84 & 38.18 \\
\cellcolor{blue!15}DPR + MCQA Reranker & 43.78 & 51.56 & 40.72 & 48.25 & 36.18 & 43.61 & 40.23 & 47.81 \\
\cellcolor{blue!15}DrKIT + MCQA Reranker & 43.14 & 49.17 & 39.20 & 44.37 & 35.12 & 39.85 & 39.15 & 44.46 \\
\cellcolor{blue!15}\textsc{DrFact} + MCQA Reranker & \textbf{47.73 }& \textbf{55.20 }& \textbf{44.30} & \textbf{50.30} & \textbf{39.60} & \textbf{45.24} & \textbf{43.88} &\textbf{50.25} \\\bottomrule
\end{tabular}
}
\caption{{Results of the \textbf{Hit@K} and \textbf{Rec@K} ($K$=50/100) on OpenCSR (v1.0).
We present two groups of methods with different inference speed levels. The {\MyColorBox[cyan!10]{upper group}} is retrieval-only methods that are efficient (\MyColorBox[cyan!10]{$<0.5$ sec/q}), while the {\MyColorBox[blue!15]{bottom group}} are augmented with a computationally expensive answer reranker (\MyColorBox[blue!15]{$\ge 14$ sec/q}).
}}
\label{tab:main}
\end{table*}
\begin{figure}
\centering
\vspace{-0.3em}
\includegraphics[width=1\linewidth]{compare}
\captionof{table}{Comparisons of the four retrieval methods.}
\label{tab:compare}
\end{figure}
\subsection{Baseline Methods}
\label{sec:baseline}
We present baseline methods and an optional re-ranker component for boosting the performance on OpenCSR.
Table~\ref{tab:compare} summarizes the comparisons between the three baseline methods and our \textsc{DrFact}.
\smallskip
\noindent
\textbf{Direct Retrieval Methods.}
The most straightforward approach to the OpenCSR task is to directly retrieve relevant facts, and then use the concepts mentioned in the top-ranked facts as answer predictions.
{BM25} is one of the most popular \textit{unsupervised} methods for retrieval, while
the \textit{Dense Passage Retrieval} ({DPR}) model is a state-of-the-art trainable, neural retriever~\cite{dpr}.
Following prior work with DPR,
we used BM25-retrieved facts to create positive and (hard-)negative examples as supervision.
For both methods, we score a concept by the \textit{max}\footnote{We also tried \textit{mean} and \textit{sum}, but \textit{max} performs the best.} of the relevance scores of retrieved facts that mention it.
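The max-pooling concept scoring shared by both retrieval baselines can be sketched as:

```python
def score_concepts(retrieved, fact_concepts):
    """Score each concept by the max relevance score among retrieved
    facts that mention it; `retrieved` maps fact id -> relevance."""
    scores = {}
    for f, s in retrieved.items():
        for c in fact_concepts[f]:
            scores[c] = max(scores.get(c, float("-inf")), s)
    return scores
```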
\smallskip
\noindent
\textbf{DrKIT.}
Following~\citeauthor{drkit} (2020), we use DrKIT for OpenCSR, treating concepts as entities.
DrKIT is also an efficient multi-hop reasoning model that reasons over a pre-computed indexed corpus, which, as noted above (Sec.~\ref{sec:rel_work}),
differs from our work in that
DrKIT traverses a graph of entities and entity mentions, while \textsc{DrFact} traverses a hypergraph of facts.
\smallskip
\noindent
\textbf{Multiple-choice style re-ranking (MCQA).}
A conventional approach to multiple-choice QA (MCQA) is to fine-tune a pre-trained language model such as BERT, by combining a question and a particular concept as a single input sequence in the form of ``\texttt{[CLS]}question\texttt{[SEP]}choice'' and using \texttt{[CLS]} vectors for learning to score choices.
We follow this schema and train\footnote{Specifically, we fine-tune BERT-Large to score truth answers over 9 sampled distractors, and use it to rank the top-500 concepts produced by each above retrieval method.} such a multiple-choice QA model on top of BERT-Large, and use this to re-rank the top-$K$ concept predictions.
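The re-ranking interface can be sketched as follows; the scoring function here is a toy stand-in for the fine-tuned BERT-Large cross-encoder, and the question and candidates are invented:

```python
# Minimal sketch of the MCQA re-ranking schema described above: each candidate
# concept is paired with the question in the "[CLS] question [SEP] choice"
# format and scored by a cross-encoder. stub_scorer is a hypothetical stand-in
# for the fine-tuned BERT-Large model.
def build_input(question, choice):
    return f"[CLS] {question} [SEP] {choice}"

def rerank(question, candidates, scorer, top_k=500):
    scored = [(scorer(build_input(question, c)), c) for c in candidates[:top_k]]
    return [c for _, c in sorted(scored, key=lambda x: -x[0])]

def stub_scorer(text):
    # toy heuristic: count choice words that also occur in the question
    question, choice = text.split(" [SEP] ")
    return sum(w in question.split() for w in choice.split())

print(rerank("what can separate iron from sand ?", ["magnet", "iron filter"], stub_scorer))
```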
\subsection{Results and Analysis}
\noindent
\textbf{Main results.}
For a comprehensive understanding, we report the Hit@K and Rec@K of all methods, at K=50 and K=100, in Table~\ref{tab:main}.
The \textit{overall} results are the average over the three datasets.
We can see that \textsc{DrFact} outperforms all baseline methods for all datasets and metrics.
Compared with the state-of-the-art text retriever DPR, \textsc{DrFact} improves by about 4.1\% absolute points in Hit@50 accuracy overall.
With the expensive yet powerful MCQA re-ranker module, \textsc{DrFact} shows an even larger gap ($\sim$8\% gain in H@50 accuracy).
The performance gains on the QASC and OBQA datasets are larger than those on ARC. This observation correlates with the statistics that the former two have more multi-hop questions, on which \textsc{DrFact} has a greater advantage.
As shown in Figure~\ref{fig:hkcurve}, we can see that \textsc{DrFact} consistently outperforms other retrieval methods at different $K$ by a considerable margin.
Interestingly, we find that with the MCQA re-ranker, DrKIT does not yield a large improvement over DPR, and it usually scores lower than the other methods.
We conjecture this is because the entity-centric reasoning schema produces too many candidate concepts and is thus more likely to place irrelevant concepts at the top positions.
The results on \textbf{Rec@K} in the bottom section of Table~\ref{tab:main} show that
even our \textsc{DrFact}+MCQA model only recalls about 50\% of the correct answers in the top-100 results on average.
This suggests that OpenCSR is still a very challenging problem and that future work should focus on ranking \textit{more} correct answers higher.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{hkacc}
\caption{The curves of Hit@K accuracy on the \textit{overall} data. Please find the curves of Rec@K in Figure~\ref{fig:rkcurve}.}
\label{fig:hkcurve}
\end{figure}
\smallskip
\noindent
\textbf{Run-time efficiency analysis.}
We use Table~\ref{tab:eff} to summarize the online inference speed of each OpenCSR method.
At inference time, DPR will make one call to BERT-base for encoding a question and do one MIPS search.
Similarly, DrKIT and \textsc{DrFact} with $T$ hops will make one call to BERT-base for query encoding and do $T$ MIPS searches.
However, since the entity-to-mention matrix ($\operatorname{sp}_{e2m}$) of DrKIT is much larger than the fact-to-fact matrix ($\operatorname{sp}_{f2f}$) of \textsc{DrFact}, DrKIT is about twice as slow as \textsc{DrFact}.
The MCQA is much more computationally expensive, as it makes $K$ calls to BERT-Large for each combination of question and choice.
Note that in these experiments we use $T$=2 for DrKIT, $T$=3 for \textsc{DrFact} and $K$=500 for the MCQA re-rankers.\footnote{We note the MCQA re-ranker could be sped up by scoring more choices in parallel. All run-time tests were performed on an NVIDIA V100 (16GB), but MCQA with batch-size of 1 requires only $\sim$5GB. This suggests more parallel inference on a V100 could obtain 4.5 sec/q for MCQA.}
\begin{table}[]
\centering
\scalebox{0.75
}{
\begin{tabular}{c|c|c}
\toprule
\textbf{Methods} & \textbf{Major Computations } & \textbf{Speed} (sec/q) \\ \midrule
\cellcolor{cyan!10}BM25 & Sparse Retrieval & 0.14 \\
\cellcolor{cyan!10}DPR & BERT-base + MIPS & 0.08 \\
\cellcolor{cyan!10}DrKIT & BERT-base + $T$*(MIPS+ $\operatorname{sp}_{e2m}$) & 0.47 \\
\cellcolor{cyan!10}\textsc{DrFact} & BERT-base + $T$*(MIPS+ $\operatorname{sp}_{f2f}$) & 0.23\\
\midrule
\MyColorBox[cyan!10]{X}+ \MyColorBox[blue!15]{MCQA} & X + $K$ * BERT-Large & + \textit{14.12} \\ \bottomrule
\end{tabular}
}
\caption{{The major computations of each method and their online (batch-size=1) inference speed in \textit{sec/q}. }}
\label{tab:eff}
\end{table}
\begin{table}[t]
\centering
\scalebox{0.83
}{
\begin{tabular}{c|ccc||c}
\toprule
& ARC & QASC & OBQA & Overall \\ \midrule
$T$=1 ~~~ & 69.3\% & 70.1\% & 65.0\% & 68.1\% \\
$T$=2 ~~~ & 71.1\% & 72.2\% & 68.3\% & 70.5\% \\
\rowcolor{Gray} $T$=3 \cmark \quad & 71.6\% & 72.0\% & 69.0\% & 70.9\% \\
\midrule
w/o. Self-follow & 70.9\% & 70.4\% & 68.4\% & 69.9\% \\
w/o. Aux. loss & 70.6\% & 70.1\% & 68.0\% & 69.6\% \\ \bottomrule
\end{tabular}
}
\caption{Ablation study of \textsc{DrFact} (H@50 test acc). }
\label{tab:ablation}
\end{table}
\smallskip
\noindent
\textbf{Ablation study.}
Varying the maximum hops (T=\{1,2,3\}) --- i.e., the number of calls to \texttt{Fact-Follow} --- indicates that overall performance is the best when T=3 as shown in Table~\ref{tab:ablation}.
The performance with T=2 drops by 0.7 percentage points on OBQA.
We conjecture this is due to the nature of the datasets, in particular the percentage of hard questions.
We also test the model (with T=3) without the \textit{auxiliary learning loss} (Sec.~\ref{ssec:aux}) or the \textit{self-following} trick.
Both are seen to be important to \textsc{DrFact}.
Self-following is especially helpful for QASC and OBQA, where there are more multi-hop questions.
It also makes learning and inference faster than the alternative of ensembling multiple models with different maximum hops, as done in some prior works.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{case}
\caption{A case study to compare DPR and \textsc{DrFact}.}
\label{fig:case}
\end{figure}
\smallskip
\noindent
\textbf{Qualitative analysis.}
We show a concrete example in Fig.~\ref{fig:case} to compare the behaviour of DPR and \textsc{DrFact} in reasoning.
DPR uses purely dense retrieval without any regularization, yielding irrelevant facts.
The fact $f_2$ matches the phrase ``separating...from sand,'' but does not help reason about the question.
Fact $f_3$ is retrieved due to the semantic relatedness of ``steel'' and ``iron,'' while ``filling'' here is not related to the question concepts.
Our \textsc{DrFact}, however, can faithfully reason about the question via fact-following over the hypergraph, and use neural fact embeddings to cumulatively reason about a concept, e.g., \textit{magnet}.
By backtracking with our hypergraph, we can use retrieved facts as explanations for a particular prediction.
\subsection*{Commonsense Reasoning}
\smallskip
\noindent
\textbf{QA over KGs or Text.}
A conventional source of commonsense knowledge is triple-based symbolic commonsense knowledge graphs (CSKGs) such as ConceptNet~\cite{Speer2017ConceptNet5A}.
However, the binary relations in CSKGs greatly limit the types of knowledge that can be encoded.
Here, instead of a KB, we use a corpus of generic sentences about commonsense facts, in particular GenericsKB~\cite{bhakthavatsalam2020genericskb}. The advantage of this approach is that text can represent more complex commonsense knowledge, including facts that relate three or more concepts.
Formalized in this way,
OpenCSR is a question answering task requiring (possibly) iterative retrieval, similar to other open-domain QA tasks~\cite{chen2017reading} such as HotpotQA~\cite{yang2018hotpotqa} and Natural Questions~\cite{kwiatkowski2019natural}.
As noted above, however, the surface forms of commonsense questions in OpenCSR give fewer hints about the kind of multi-hop reasoning required to answer them than the factoid questions in open-domain QA do, resulting in a particularly challenging reasoning problem (see Sec.~\ref{sec:problem}).
\smallskip
\noindent
\textbf{Multi-Hop Reasoning.}
Many recent models for open-domain QA tackle multi-hop reasoning through iterative retrieval, e.g., GRAFT-Net~\cite{sun2018open}, MUPPET~\cite{feldman-el-yaniv-2019-multi}, PullNet~\cite{sun2019pullnet}, and GoldEn~\cite{qi2019answering}.
These models, however, are \textit{not} end-to-end differentiable and thus tend to have slower inference speed,
which is a limitation shared by many other works using reading comprehension for multi-step QA~\cite{das2019multi, lee2019latent}. As another approach,
Neural Query Language~\cite{Cohen2020Scalable} designs
differentiable multi-hop entity-following templates for reasoning over a compactly stored symbolic KG, but this KG
is limited to {binary} relations between entities from an {explicitly} enumerated set.
\smallskip
\noindent
\textbf{DrKIT}~\cite{drkit} is the most similar work to our \textsc{DrFact}, as it also supports multi-hop reasoning over a corpus. Unlike \textsc{DrFact}, DrKIT is designed for entity-centric reasoning. DrKIT begins with an entity-linked corpus, and computes both sparse and dense indices of \textit{entity mentions} (i.e., linked named-entity spans).
DrKIT's fundamental reasoning operation is to ``hop'' from one weighted set of $X$ entities to another, by 1) finding mentions of new entities $x'$ that are related to some entity in $X$, guided by the indices, and then 2) aggregating these mentions to produce a new weighted set of entities.
DrKIT's operations are differentiable, and by learning to construct appropriate queries to the indices, it can be trained to answer multi-hop entity-related questions.
Prior to our work, DrKIT had been applied only to \textit{factoid} questions about named entities.
In CSR, the concepts that drive reasoning are generally less precise than entities, harder to disambiguate in context, and are also much more densely connected, so it is unclear to what extent DrKIT would be effective. We present here novel results using DrKIT on OpenCSR tasks, and show experimentally that our new approach, \textsc{DrFact}, improves over DrKIT. \textsc{DrFact} mainly differs from DrKIT in that its reasoning process learns to ``hop'' from one fact to another, rather than from one entity to another, thus effectively using the full information from a fact for multi-hop reasoning.
\section{Constructing OpenCSR Datasets}
\label{sec:opencsrdata}
\subsection{Reformatting Questions and Answers}
In this section,
we describe how we reformat the three existing datasets and crowd-source annotations of multiple answers for evaluating OpenCSR.
To convert a multiple-choice question to an open-ended question,
we first
remove questions where the correct answer does not contain any concept in $\mathcal{V}$
and the few questions that require comparisons between original choices, as they are designed only for multiple-choice QA, e.g., ``\textit{which} of the following is the \textit{most} \dots''
Then, we rephrase questions with long answers into open-ended questions querying a single concept.
For example, an original question-answer pair such as (Q:``The Earth revolving around the sun can cause \_\_\_'', A:``{constellation} to appear in one place in spring and another in fall'') is now rephrased to (Q*=``The Earth revolving around the sun can cause \underline{what} to appear in one place in spring and another in fall?'', A*=``constellation'').
Specifically, we combine the original question (Q) and original correct choice (A) to form a long statement and rephrase it to be a new question (Q*) querying a single concept (A*) in the original answer, where we use the least frequent concept as the target.
This question rephrasing greatly increases the number of answerable questions, particularly for the OBQA dataset.
All data are in English.
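The target-selection rule just described (use the least frequent concept of the original answer as the queried concept) can be sketched as follows; the concept frequencies here are invented for illustration:

```python
# Sketch of target-concept selection: among the concepts mentioned in the
# original answer, the least frequent one (by corpus frequency) becomes the
# queried answer concept. Frequencies below are made-up toy values.
def pick_target(answer_concepts, freq):
    return min(answer_concepts, key=lambda c: freq.get(c, 0))

freq = {"constellation": 120, "spring": 9800, "fall": 15000}
print(pick_target(["constellation", "spring", "fall"], freq))  # constellation
```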
\subsection{Crowd-sourcing More Answers}
Note that there can be multiple correct answers to an open-ended question in OpenCSR while the original datasets only provide a single answer.
Thus, we use Amazon Mechanical Turk\footnote{\url{https://www.mturk.com/}} (AMT) to collect more answers for the test questions to have a more precise OpenCSR evaluation.
We design a three-stage annotation protocol as follows:
\begin{itemize}
\item S1) {\textbf{Multiple-Choice Sanity Check}}. We provide a question and 4 choices, where only one choice is correct and the other 3 are randomly sampled. Only annotations from workers who pass this task are considered in the following stages. This stage is mainly designed to filter out noise from random workers.
\item S2) \textbf{Selection from Candidates}. To improve the efficiency of annotation, we take the union of the top 20 predictions from BM25, DPR, DrKIT, and DrFact and randomly shuffle the order of these concepts (usually about 60$\sim$70 candidates). Workers can simply input the ids of the concepts that they think are good answers to the question (i.e., a list of integers separated by commas). Each question is assigned to three different workers, and we keep the candidates which are selected by at least two workers. Note that we also put the correct answer we already have among the candidates and use it as another sanity check to filter out noisy workers.
\item S3) \textbf{Web-based Answer Collection}.
We generate a URL link to a Google Search of the input question to help workers use the Web to find more correct answers to the question (the input here is a string listing concepts separated by commas). We also provide our concept vocabulary as a web page so one can quickly check whether a concept is valid.
\end{itemize}
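The agreement rule in stage S2 can be sketched as follows (worker selections here are invented; the rule itself is the one stated above, keep a candidate iff at least two of the three workers selected it):

```python
# Sketch of the S2 aggregation rule: a candidate concept is kept as a gold
# answer only if at least two of the three workers selected it.
from collections import Counter

def aggregate_answers(worker_selections, min_votes=2):
    votes = Counter(c for sel in worker_selections for c in set(sel))
    return {c for c, n in votes.items() if n >= min_votes}

workers = [["magnet", "iron"], ["magnet"], ["magnet", "compass"]]
print(aggregate_answers(workers))  # {'magnet'}
```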
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{answer_dist.pdf}
\caption{Distribution of \# answers of test questions. \vspace{-1em}}
\label{fig:answer_dist}
\end{figure}
After careful post-processing and multiple rounds of re-assignment, we have in total 15k answers for 2k questions; the distribution of the number of answers is shown in Figure~\ref{fig:answer_dist} and Table~\ref{tab:stat}.
\section{Details of Implementation and Our Experiments}
\label{sec:impl}
\subsection{DrFact Implementation}
We present some concrete design choices within our DrFact implementation which are abstractly illustrated in the main content of the paper.
\smallskip
\noindent
\textbf{(1) Pre-training Dense Fact Index $D$.}
As we mentioned in Sec.~\ref{sec:method},
we follow the steps of~\citeauthor{dpr} (2020) to pre-train a bi-encoder question answering model on top of BERT~\cite{Devlin2019}.
To create negative examples, we use the BM25 results which do not contain any answer concept.
We use BERT-base (\texttt{uncased\_L-12\_H-768\_A-12}) in our implementation and thus $d=768$ in our experiments.
\smallskip
\noindent
\textbf{(2) Sparse Fact-to-Fact Index $S$.}
We use a set of rules to decide if we can create a link $f_i \rightarrow f_j$ (i.e., $S_{ij}=1$) as follows:
\begin{itemize}
\item $i\neq j$. We do not allow self-link here but use \textit{self-following} as we described in Sec.~\ref{sec:method}.
\item $|I|\ge1$ where $I$ is the set of concepts that are mentioned in both $f_i$ and $f_j$. Note that we remove the most frequent 100 concepts (e.g., human) from $I$.
\item $|I| < |f_i|$. We do not create links when all concepts in $f_i$ are mentioned in $f_j$, which are usually redundant.
\item $|f_j| - |I| \ge 2$. We create links only when there are at least two unseen concepts in $f_j$ that are not in $f_i$, such that the fact-to-fact links create effective reasoning chains.
\end{itemize}
We also limit each fact to be followed by at most 1k different facts. Additionally, we append the links from our distant supervision of justifications as well if they were filtered out before.
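A hypothetical transcription of the linking rules above (including the out-degree cap) is sketched below; facts are represented as sets of concept ids, the name \texttt{stopwords} stands in for the 100 most frequent concepts, and, as a simplification, every overlap test is applied after stopword removal:

```python
# Sketch of the fact-to-fact linking rules: for fact i, collect the facts j
# with S_ij = 1. "stopwords" is a stand-in for the 100 most frequent concepts.
def links_from(i, facts, stopwords, max_out=1000):
    fi = facts[i]
    out = []
    for j, fj in enumerate(facts):
        shared = (fi & fj) - stopwords          # overlap set I, minus stopwords
        if (j != i                               # no self-links (self-following is separate)
                and len(shared) >= 1             # at least one shared non-trivial concept
                and len(shared) < len(fi)        # f_j must not subsume f_i
                and len(fj) - len(shared) >= 2): # f_j brings at least 2 new concepts
            out.append(j)
        if len(out) == max_out:                  # out-degree cap (1k in the paper)
            break
    return out

facts = [{"magnet", "iron"}, {"iron", "ore", "rock"}, {"magnet", "iron", "human"}]
print(links_from(0, facts, stopwords={"human"}))  # [1]
```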
\smallskip
\noindent
\textbf{(3) Hop-wise Question Encoding $\mathbf{q_t}$.}
We encode the question $q$ with BERT-base and then use its \texttt{[CLS]} token vector as the dense representation for $\mathbf{q}$.
For each hop, we append a hop-specific layer to model how the question context changes over the reasoning process ---
$\mathbf{q_t} = \operatorname{MLP}_{\theta_t}(\mathbf{q})$.
\smallskip
\noindent
\textbf{(4) Fact Translating Function $g$.}
The translating function accepts both the vector representation of previous-hop facts $\mathbf{F_{t-1}}$ and the hop-wise question vector $\mathbf{q_t}$ and uses an MLP to map the concatenation of them to a vector used for a MIPS query:
$\mathbf{h_{t-1}}=\operatorname{MLP}_{\theta_g}([\mathbf{F_{t-1}};\mathbf{q_{t}}])$.
Thus, $\mathbf{h_{t-1}}$ has the same dimension as a fact vector in $U$.
\smallskip
\noindent
\textbf{(5) Hop-wise Answer Weights $\alpha_t$.}
We use the shared query vector to learn how to aggregate predictions at different hops.
For a $T$-hop DrFact model, we learn to transform the $\mathbf{q}$ to a $T$-dim vector where $\alpha_t$ is the $t$-th component.
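The three components (3)-(5) above can be sketched together as follows; this is a pure-Python toy with random weights standing in for the learned MLPs, single linear-plus-tanh layers standing in for each MLP, and dimensions reduced from the paper's $d=768$:

```python
# Toy sketch of (3) hop-wise question encoding q_t, (4) the translating
# function g producing the MIPS query h_{t-1}, and (5) hop-wise answer
# weights alpha_t. Random weights stand in for the learned parameters.
import math, random

random.seed(0)
d, T = 8, 3                      # toy sizes; the paper uses d = 768 and T = 3

def mat(nrows, ncols):
    return [[random.gauss(0, 1) for _ in range(ncols)] for _ in range(nrows)]

def mlp(W, x):                   # one linear layer + tanh, standing in for an MLP
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

q = [random.gauss(0, 1) for _ in range(d)]        # [CLS] vector of the question
F_prev = [random.gauss(0, 1) for _ in range(d)]   # fact embedding from hop t-1

q_t = mlp(mat(d, d), q)                 # (3) hop-specific question encoding
h = mlp(mat(d, 2 * d), F_prev + q_t)    # (4) MIPS query vector h_{t-1}
alpha = softmax(mlp(mat(T, d), q))      # (5) weights over the T hops
print(len(h), [round(a, 3) for a in alpha])
```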
\begin{figure}
\centering
\hspace{-2em}
\includegraphics[width=1\linewidth]{rkacc}
\caption{The curve of Rec@K in \textit{overall} data.}
\label{fig:rkcurve}
\end{figure}
\subsection{Hyper-parameters and Training Details}
\label{sec:hp}
We now present the details and final hyper-parameters that we used in our experiments.
For all methods, we tune their hyper-parameters on the validation set and then use the same configurations to train them on the combination of the training and validation sets for the same number of steps.
\noindent
\textbf{BM25.} We use the off-the-shelf implementation by elasticsearch\footnote{\url{https://github.com/elastic/elasticsearch}}, which is open-source and unsupervised.
For the run-time analysis, we use Intel(R) Xeon(R) CPU @ 2.00GHz and the localhost webserver for data transfer.
\smallskip
\noindent
\textbf{DPR.} We use the source code\footnote{\url{https://github.com/facebookresearch/DPR}} released by the original authors. Negative contexts are created in the same way as when we pre-train our dense fact index $D$: they are sampled from BM25 results.
\smallskip
\noindent
\textbf{DrKIT.}
We use the official source code\footnote{\url{https://github.com/google-research/language/tree/master/language/labs/drkit}} for our experiments.
We made minimal modifications to their code to adapt DrKIT to build a dense index of mentions for the OpenCSR corpus and datasets. For fair comparisons among DPR, DrKIT and DrFact, we use BERT-base as the question and mention/fact encoder in all of them. We use 200 as the dimension of mention embeddings and T=2 as the maximum hops. We found that using T=3 causes too much memory usage (due to the denser entity-to-mention matrix) and also results in a very slow training speed. Non-default hyper-parameters are: \textit{train\_batch\_size}=8 due to the limit of our GPU memory, \textit{entity\_score\_threshold}=5e-3 (out of \{5e-2, 5e-3, 5e-4, 1e-4\}) to filter numerous long-tail intermediate concepts for speeding up training and inference.
\smallskip
\noindent
\textbf{DrFact.}
Similar to DrKIT, we also implement DrFact in TensorFlow for its efficient implementation of \texttt{tf.RaggedTensor}, which is essential for computing over large sparse tensors.
We record the default hyper-parameters in our submitted code.
We use a single V100 GPU (16GB) for training with batch size of 24 (using 15GB memory) and learning rate as 3e-5, selected from \{1e-5, 2e-5, 3e-5, 4e-5, 5e-5\}.
The \textit{entity\_score\_threshold}=1e-4, and \textit{fact\_score\_threshold}=1e-5, which are all selected from \{1e-3, 1e-4, 1e-5\} based on the dev set.
\smallskip
\noindent
\textbf{Model Parameters.}
DPR, DrKIT and DrFact are all based on BERT-base, which has about 110 million parameters (after pre-training the index).
DrKIT and DrFact additionally have several MLP layers on top of `[CLS]' token vectors, which are all less than 1 million parameters.
The MCQA-reranker model is based on BERT-Large, and thus has 345 million parameters.
\section{Discussion on Other Related Work}
\label{sec:morel}
\smallskip
\noindent
\textbf{Other Open-Domain QA models.}
Recent open-domain QA models such as REALM~\cite{guu2020realm}, Path-Retriever~\cite{asai2020learning}, ORQA~\cite{lee2019latent}, and RAG~\cite{lewis2020retrieval},
mainly focus on QA over the full Wikipedia corpus like DrKIT~\cite{drkit} does.
Some of them explicitly use the links between pages to form reasoning chains, while a few of them rely on expensive QA-oriented pre-training.
Moreover, since DPR~\cite{dpr} already shows better performance (see their Table 4) than most prior works with a simpler method, we use DPR as the major baseline for evaluation in this work.
\section{Introduction}\label{sec:intro}
\input{1_intro.tex}
\section{Related Work}\label{sec:rel_work}
\input{5_related_work.tex}
\section{Open-Ended Commonsense Reasoning}
\label{sec:problem}
\input{2_problem.tex}
\section{{DrFact}: An Efficient Approach for Differentiable Reasoning over Facts}
\label{sec:method}
\input{3_method.tex}
\section{Experiments}\label{sec:exp}
\input{4_exp.tex}
\section{Conclusion}\label{sec:conclusion}
\input{6_conclusion.tex}
\section*{Acknowledgments}
Xiang Ren is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research, the Defense Advanced Research Projects Agency with award W911NF-19-20271, and NSF SMA 18-29268.
We thank all reviewers for their constructive feedback and comments.
\section*{*~Ethical Considerations}
\input{8_ethics.tex}
\section{Introduction}
\label{intro}
Let us consider functions $f:R\mapsto\mathbb R^+$ where $R$ is an
interval of $\mathbb R$. In what follows, $R$ is the real line
$\mathbb R$ or the semi-axis $\mathbb R^+=[0,\infty)$. We also
assume that the function $f$ is locally summable on $R$, i.e. it is
summable on each bounded subinterval $I$ of $R$.
The mean value of the function $f$ on a bounded interval $I$ is
defined by
\begin{equation*}
f_I=\frac1{|I|}\int_If(x)\,dx,
\end{equation*}
and the mean oscillation of this function is
\begin{equation*}
\Omega(f;I)=\frac1{|I|}\int_I\left|f(x)-f_I\right|\,dx,
\end{equation*}
where $|\,\cdot \,|$ denotes the Lebesgue measure.
For a given $\varepsilon\in(0,2]$ the Gurov--Reshetnyak class
$\mathcal{GR}=\mathcal{GR}(\varepsilon)=\mathcal{GR}_R(\varepsilon)$
is defined as the set of all non-negative functions $f$ which are
locally summable on $R$ and such that the Gurov--Reshetnyak
condition
$$
\Omega(f;I)\le\varepsilon\,f_I
$$
is satisfied on all bounded intervals $I\subset R$~(see~\cite{GR}).
Note that since any non-negative function $f$ on any
interval $I$ satisfies the inequality $\Omega(f;I)\le2f_I$, the
class $\mathcal{GR}_R(2)$ is trivial and it coincides with the class
of all functions locally summable on $R$. However, if
$\varepsilon\in(0,2)$ then $\mathcal{GR}_R(\varepsilon)$ is a
non-trivial class (see \cite[P. 112]{K07}, \cite{KLS}). If $I$ is a
subinterval of $R$, then the expression $\left<\,f\,\right>_{I}=
{\Omega(f;I)}/{f_I}$ is called the relative oscillation of the
function $f$ on the interval $I$. Further, the term $\llll f\rrrr_R=
\sup\limits_{I\subset R}\left<\,f\,\right>_{I}$ is called the "norm"
of function $f$ in the Gurov-Reshetnyak class $\mathcal{GR}_R$.
One of the main properties of functions from the Gurov--Reshetnyak
class consists in the possibility to improve their summability
exponents. This property lays the foundation for numerous
applications of this class of functions. More precisely, for any
$\varepsilon\in (0,2)$ there are $p^+_{R}= p^+_{R}(\varepsilon)>1$
and $p^-_{R}= p^-_{R}(\varepsilon)<0$, such that the condition
$f\in\mathcal{GR}_R(\varepsilon)$ implies the local summability of
the function $f^p$ for any $p\in(p^-_{R},p^+_{R})$ (see
\cite{B}, \cite{F90}, \cite{F94}, \cite{GR}, \cite{F85}, \cite{I}, \cite{KLS},
\cite{W}). For $R=\mathbb R^+$, the exact limiting value
$p^+_{\mathbb R_+}=p^+_{\mathbb R_+}(\varepsilon)>1$ of the positive
summability exponent $p$ is the root of the equation
\begin{equation*}
\frac{p^p}{\left(p-1\right)^{p-1}}= \frac2\varepsilon,
\end{equation*}
and $p^-_{\mathbb R_+}=1-p^+_{\mathbb R_+}<0$. The sharpness of the
values $p^-_{\mathbb R_+}$ and $p^+_{\mathbb R_+}$ can be verified
by the use of the power functions $g(x)=x^{1/(p-1)}$ and
$h(x)=x^{-1/p}$ $(x\in\mathbb R_+,\,p>1)$, respectively. Thus
\begin{equation}\label{eq11}
\varepsilon_{\mathbb R_+}(p)\equiv \llll g\rrrr_{\mathbb R_+}= \llll
h\rrrr_{{\mathbb R_+}}= 2\frac{(p-1)^{p-1}}{p^p},
\end{equation}
\cite[pp. 131, 144]{K07}, \cite{K15}, \cite{K90}, \cite{K03a},
\cite{K04}. These examples also show that for functions
$f\in\mathcal{GR}_{\mathbb R_+}(\varepsilon)$, the function $f^p$ is
not necessarily locally summable in the limiting cases $p=p^-_{\mathbb
R_+}(\varepsilon)<0$ or $p=p^+_{\mathbb R_+}(\varepsilon)>1$.
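As a quick numeric sanity check (our own, not part of the original argument), the following sketch approximates the relative oscillation of the extremal power functions on $(0,1)$ by a midpoint Riemann sum and compares it with the right-hand side of \eqref{eq11} for $p=3$:

```python
# Numeric check of formula (1.1): for g(x) = x**(1/(p-1)) and h(x) = x**(-1/p)
# on (0,1), the relative oscillation equals 2 (p-1)**(p-1) / p**p.
def rel_osc(alpha, n=100000):
    # relative oscillation <f>_(0,1) of f(x) = x**alpha via a midpoint Riemann sum
    xs = [(k + 0.5) / n for k in range(n)]
    mean = sum(x ** alpha for x in xs) / n
    osc = sum(abs(x ** alpha - mean) for x in xs) / n
    return osc / mean

p = 3.0
target = 2 * (p - 1) ** (p - 1) / p ** p   # right-hand side of (1.1): 8/27 for p = 3
print(rel_osc(1 / (p - 1)), rel_osc(-1 / p), target)  # all three values are close
```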
On the other hand, for $R=\mathbb R$ the sharp limiting summability
exponents $p^-_{\mathbb R}(\varepsilon)<0$ and $p^+_{\mathbb
R}(\varepsilon)>1$ of functions $f\in\mathcal{GR}_{\mathbb
R}(\varepsilon)$ are not known. It is clear that $p^-_{\mathbb
R}(\varepsilon)\le p^-_{\mathbb R_+}(\varepsilon)$, $ p^+_{\mathbb
R}(\varepsilon)\ge p^+_{\mathbb R_+}(\varepsilon)$. Similarly to
$R=\mathbb R_+$, it is only natural to assume that for $R=\mathbb R$
the power functions $f_\alpha(x)=|x|^{\alpha}$ $(x\in\mathbb R,\
\alpha>-1)$ with $\alpha=1/(p-1)$ and $\alpha=-1/p$ $(p>1)$ are also
extremal ones. However, the computation of the corresponding
Gurov-Reshetnyak "norms" $\varepsilon^-_{\mathbb R}(p)\equiv \llll
f_{1/(p-1)}\rrrr_{\mathbb R}$ and $\varepsilon^+_{\mathbb
R}(p)\equiv\llll f_{-1/p}\rrrr_{\mathbb R}$ in this case is not as
simple as for $R=\mathbb R_+$. Nevertheless, it is shown in
\cite{K15} that $\varepsilon^-_{\mathbb R}(p)>\varepsilon_{\mathbb
R_+}(p)$ and $\varepsilon^+_{\mathbb R}(p)>\varepsilon_{\mathbb
R_+}(p)$.
One of the main results of the present work is the computation of
the "norm" $\llll f_\alpha\rrrr_{\mathbb R}$ of the function
$f_\alpha$ in the Gurov-Reshetnyak class on the real line $\mathbb
R$ (see Theorem~\ref{theo21} below). In particular, this theorem
implies the equation $\varepsilon^-_{\mathbb
R}(p)=\varepsilon^+_{\mathbb R}(p)\equiv\varepsilon_{\mathbb R}(p)$
$(p>1)$ (cf. Corollary~\ref{cor23}).
The above problem can be reformulated as follows: If a monotone
function $f$ belongs to the class $\mathcal{GR}_{\mathbb
R_+}(\varepsilon)$ for an $\varepsilon\in(0,2)$, then its even
extension to $\mathbb R$, which is also denoted by $f$, belongs to
the Gurov-Reshetnyak class $\mathcal{GR}_{\mathbb R}(\varepsilon')$
with an $\varepsilon'\in[\varepsilon,2)$ (see Lemma \ref{lem21}).
Therefore, one can also ask a question about the
norms\begin{equation*} \left\|\,{\bf
T}\,\right\|_{\mathcal{GR}}^{(\varepsilon)}\equiv\frac1\varepsilon
\sup\left\{\llll f\rrrr_{\mathbb R}:\
\llll f\rrrr_{\mathbb R_+} =\varepsilon
\ (0<\varepsilon<2)\right\}, \ \left\|\,{\bf
T}\,\right\|_{\mathcal{GR}}\equiv
\sup_{0<\varepsilon<2}\left\|\,{\bf
T}\,\right\|_{\mathcal{GR}}^{(\varepsilon)}
\end{equation*}
of the operator ${\bf T}$ of the even extension of monotone functions
$f\in\mathcal{GR}_{\mathbb R_+}(\varepsilon)$ to the real line
$\mathbb R$.
In Table~\ref{tab:rezults} below we report the results of
numerical calculations of the values of $\varepsilon_{\mathbb R}(p)$
for various $p>1$. Comparing these results with known values of
$\varepsilon_{{\mathbb R}_+}(p)$, one obtains lower bounds for the
norms $\left\|\,{\bf T}\,\right\|_{\mathcal{GR}}^{(\varepsilon)}$
and $\left\|\,{\bf T}\,\right\|_{\mathcal{GR}}$.
An analogous problem, concerning the norm of the operator of the
even extension for the class $BMO$ of monotone functions with
bounded mean oscillation, has been considered in \cite{Kl}. Some
estimates for the norm of such an extension have been obtained in
\cite{S}. It is remarkable that the lower estimate presented in
Remark~\ref{rem31} for the norm of the operator of the even
extension $\left\|\,{\bf T}\,\right\|_{\mathcal{GR}}$ of the present
work coincides with the lower estimate $\left\|\,{\bf
T}\,\right\|_{BMO}$ obtained in \cite{DKT} for the corresponding
operator of the even extension of monotone functions $f\in BMO$ (see
Remark \ref{rem32}).
\section{Gurov-Reshetnyak inequality for power functions on the real line}
Let us recall that the mean value $f_I=\gamma$ of a function $f$ on a
subinterval $I$ is uniquely defined by the condition\footnote{By
$E(P)$ we denote the set of all points
$x\in E$, satisfying the condition
$P=P(x)$.}
\begin{equation*}
\int_{I\left(f\ge \gamma\right)}\left(f(x)-\gamma\right)\,dx=
\int_{I\left(f\le\gamma\right)}\left(\gamma-f(x)\right)\,dx.
\end{equation*}
It is easily seen that
\begin{equation*}
\Omega(f;I)= \frac2{|I|}\int_{I\left(f\ge
f_I\right)}\left(f(x)-f_I\right)\,dx= \frac2{|I|}\int_{I\left(f\le
f_I\right)}\left(f_I-f(x)\right)\,dx.
\end{equation*}
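This identity is easy to confirm numerically; the sketch below (ours, not part of the paper) computes both expressions on $I=(0,1)$ for a sample function via a midpoint Riemann sum:

```python
# Numeric illustration of the identity above: the mean oscillation equals
# twice the mean of the one-sided deviation (f - f_I) taken over {f >= f_I}.
def osc_two_ways(f, n=100000):
    xs = [(k + 0.5) / n for k in range(n)]
    fI = sum(f(x) for x in xs) / n                       # mean value f_I
    full = sum(abs(f(x) - fI) for x in xs) / n           # Omega(f; I)
    upper = 2 * sum(max(f(x) - fI, 0.0) for x in xs) / n # one-sided version
    return full, upper

full, upper = osc_two_ways(lambda x: x * x)
print(full, upper)  # the two values coincide
```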
According to \cite{K15}, the Gurov-Reshetnyak "norm" of any monotone
function $f$ on $\mathbb R_+$ can be computed by the formula
\begin{equation*}
\llll f\rrrr_{{\mathbb R}_+}= \sup_{b>0}\left<\, f\,\right>_{(0,b)},
\end{equation*}
and this fact will be used in what follows.
Let us show that the even extension of a monotone function from a
non-trivial Gurov-Reshetnyak class $\mathcal{GR}_{\mathbb R_+}$
belongs to a non-trivial class $\mathcal{GR}_{\mathbb R}$.
\begin{lem}\label{lem21}
For any $\varepsilon\in(0,2)$ there is an $\varepsilon'\in(0,2)$
such that if a monotone function $f$ belongs to
$\mathcal{GR}_{\mathbb R_+}(\varepsilon)$, then the even extension
of $f$ from ${\mathbb R_+}$ to $\mathbb R$ belongs to
$\mathcal{GR}_{\mathbb R}(\varepsilon')$.
\end{lem}
\textbf{Proof.}
The proof of this lemma can be split into three steps.
{\bf Step 1.} Consider an even function $f\in\mathcal{GR}_{\mathbb
R_+}(\varepsilon)$, $0<\varepsilon<2$. Then for any interval
$I\subset\mathbb R_+$ the Gehring inequality holds\footnote{Recall
that the Gehring inequality first appeared in \cite{G}.}
\begin{equation}\label{eq22}
\left(\frac1{|I|}\int_If^q(x)\,dx\right)^{1/q}\le
B\cdot\frac1{|I|}\int_If(x)\,dx,
\end{equation}
where $q>1$ and $B>1$ depend on the parameter $\varepsilon$ only
\cite[P. 131]{K07}, \cite{K90}.
{\bf Step 2.} Let us show that the function $f$ satisfies the
Gehring inequality on $\mathbb R$. It suffices to consider only the
intervals of the form $I=(-a,b)$, where $0<a<b$. Since $f$ is an
even function, one can apply the inequality \eqref{eq22} on the
interval $(0,b)$ and obtain
\begin{multline*}
\left(\frac1{|I|}\int_If^q(x)\,dx\right)^{1/q}\le
\left(\frac2{a+b}\int_{0}^bf^q(x)\,dx\right)^{1/q}\le
\\
\le \left(\frac{2b}{a+b}\right)^{1/q}B\frac1b\int_{0}^bf(x)\,dx\le
2^{1/q}\left(\frac{b}{a+b}\right)^{1/q-1}B\frac1{a+b}\int_{-a}^bf(x)\,dx\le
\\
\le2B\frac1{|I|}\int_If(x)\,dx.
\end{multline*}
{\bf Step 3.} Now it remains to use the fact that the Gehring
inequality implies the Gurov-Reshetnyak inequality with an
$\varepsilon'\in(0,2)$ (see \cite[P. 114]{K07}, \cite{K03}), which
completes the proof.
\rbx
\begin{rem}\label{rem21}
The above obtained value of
$\varepsilon'\equiv\varepsilon'(\varepsilon)$ is not exact since at
each step of the proof of Lemma~\ref{lem21}, the parameters used are
overestimated.
\end{rem}
\begin{rem}\label{rem22}
For $\varepsilon<1$ there is a simpler proof of Lemma~\ref{lem21}.
In this case one can employ the simple inequality~\cite{DKT}
\begin{equation*}
\Omega(f;(-\delta b,b))\le\frac2{1+\delta}
\Omega(f;(0,b))\quad(0\le\delta\le1,\ b>0).
\end{equation*}
Indeed, since
\begin{equation*}
f_{(-\delta b,b)}=\frac{1}{(1+\delta)b}\int_{-\delta b}^bf(x)\,dx\ge
\frac1{1+\delta}\frac1b\int_0^bf(x)\,dx=
\frac1{1+\delta}f_{(0,b)},
\end{equation*}
then
\begin{equation*}
\frac{\Omega(f;(-\delta b,b))}{f_{(-\delta b,b)}}\le
(1+\delta)\frac{\Omega(f;(-\delta b,b))}{f_{(0,b)}}\le
(1+\delta)\frac{\frac2{1+\delta}\Omega(f;(0,b))}{f_{(0,b)}}\le
2\llll f\rrrr_{{\mathbb R}_+}.
\end{equation*}
Other details of the proof are left to the reader.
\end{rem}
\begin{rem}\label{rem23}
It follows from the proof in Remark~\ref{rem22} that if
$\varepsilon\in(0,1)$ then $\left\|\,{\bf
T}\,\right\|_{\mathcal{GR}}^{(\varepsilon)}\le2$. However, since
$\varepsilon'$ in Lemma~\ref{lem21} satisfies the relation
$\varepsilon'\le2$ the inequality $\left\|\,{\bf
T}\,\right\|_{\mathcal{GR}}^{(\varepsilon)}\le2$ remains valid for
$\varepsilon\in[1,2)$. Therefore, one also has $\left\|\,{\bf
T}\,\right\|_{\mathcal{GR}}\le2$. On the other hand, a lower bound
for $\left\|\,{\bf T}\,\right\|_{\mathcal{GR}}$, obtained in
numerical experiments, is presented in Remark~\ref{rem31} below.
\end{rem}
Next, we compute the ``norm'' of a power function in the
Gurov-Reshetnyak class. Using a linear transformation, one can check
that for the function $f_\alpha(x)=|x|^\alpha$ $(x\in\mathbb
R,\ \alpha>-1)$ the following relations
\begin{equation*}
\llll f_\alpha\rrrr_{{\mathbb R}}=
\sup_{0\le\eta\le1}\left<\,f_\alpha\,\right>_{(-\eta,1)},\quad
\llll f_\alpha\rrrr_{{\mathbb R}_+}=
\left<\,f_\alpha\,\right>_{(0,1)}=
\frac{2|\alpha|}{(\alpha+1)^{(\alpha+1)/\alpha}}
\end{equation*}
hold.
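These closed-form values are easy to confirm numerically. The sketch below (ours; the grid size is an illustrative choice) compares a midpoint Riemann sum for $\left<\,f_\alpha\,\right>_{(0,1)}$ with the displayed formula for several exponents:

```python
def ratio(alpha, n=100000):
    # <f_alpha>_{(0,1)} = Omega(f_alpha;(0,1)) / (f_alpha)_{(0,1)}
    # for f_alpha(x) = x^alpha, via a midpoint Riemann sum on (0, 1)
    xs = [(k + 0.5) / n for k in range(n)]
    mean = sum(x ** alpha for x in xs) / n
    osc = sum(abs(x ** alpha - mean) for x in xs) / n
    return osc / mean

def closed_form(alpha):
    # 2|alpha| / (alpha + 1)^((alpha + 1)/alpha)
    return 2 * abs(alpha) / (alpha + 1) ** ((alpha + 1) / alpha)

for alpha in (1.0, 0.5, -0.5):
    assert abs(ratio(alpha) - closed_form(alpha)) < 1e-3
```

For $\alpha=1$ and $\alpha=1/2$ the closed form gives $1/2$ and $8/27\approx0.296$, respectively.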
\begin{thm}\label{theo21}
If $\alpha>-1$, $\alpha\ne0$, then
\begin{equation*}
\llll f_\alpha\rrrr_{{\mathbb R}}=
\llll f_\alpha\rrrr_{{\mathbb R_+}}\cdot
\max_{0\le\eta\le1}\psi(\alpha,\eta),
\end{equation*}
where
\begin{align*}
&\psi(\alpha,\eta)= \\[1ex]
&=\left\{\begin{array}{ll}
\D\frac{\left(1+\eta^{\alpha+1}\right)^{1/\alpha}}{(1+\eta)^{(\alpha+1)/\alpha}}+
\frac{(\alpha+1)^{(\alpha+1)/\alpha}}\alpha
\left[\frac1{1+\eta^{\alpha+1}}-\frac1{1+\eta}\right],& \text{ if }\; 0\le\eta\le\eta_1,\\[10pt]
\D 2\frac{\left(1+\eta^{\alpha+1}\right)^{1/\alpha}}{(1+\eta)^{(\alpha+1)/\alpha}},& \text{ if }\; \eta_1\le\eta\le1,\\
\end{array}\right.
\end{align*}
and $\eta_1=\eta_1(\alpha)\in(0,1)$ is the root of the equation
\begin{equation}\label{eq23}
\eta^\alpha=\frac1{1+\alpha(\eta+1)}.
\end{equation}
\end{thm}
\textbf{Proof.}
For any fixed $\eta\in[0,1]$ we set $I= I(\eta)=(-\eta,1)$. Then
\begin{equation*}
\left(f_\alpha\right)_I=\frac1{1+\eta}\int_{-\eta}^1|x|^\alpha\,dx=
\frac{1}{\alpha+1}\cdot\frac{1+\eta^{\alpha+1}}{1+\eta}.
\end{equation*}
Let $\eta_1\in(0,1)$ be the root of the equation
$\eta=\left(\left(f_\alpha\right)_I\right)^{1/\alpha}$, cf.
\eqref{eq23}. It is easily seen that this equation is solvable and
the root is unique.
{\bf (a).} If $\eta\le\eta_1$, then
\begin{multline*}
\Omega(f_\alpha;I)=\frac{2\cdot\sign\alpha}{1+\eta}\int\limits_{\left(\left(f_\alpha\right)_I\right)^{1/\alpha}}^1\left(x^\alpha-\left(f_\alpha\right)_I\right)\,dx=
\\
=\frac{2\cdot\sign\alpha}{1+\eta}\left[\frac\alpha{\alpha+1}\left(\left(f_\alpha\right)_I\right)^{(\alpha+1)/\alpha}-\left(f_\alpha\right)_I+\frac1{\alpha+1}\right].
\end{multline*}
Set
\begin{multline*}
\varphi_0(\alpha,\eta)\equiv\left<\,f_\alpha\,\right>_I=
\frac{\Omega(f_\alpha;I)}{\left(f_\alpha\right)_I}=
\\
=\frac2{1+\eta}\frac{|\alpha|}{(\alpha+1)^{(\alpha+1)/\alpha}}\left(\frac{1+\eta^{\alpha+1}}{1+\eta}\right)^{1/\alpha}-
\frac{2\cdot\sign\alpha}{1+\eta}+\frac{2\cdot\sign\alpha}{1+\eta^{\alpha+1}}.
\end{multline*}
Taking into account that
\begin{equation*}
\varphi_0(\alpha,0)= \left<\,f_\alpha\,\right>_{(0,1)}=
\frac{2|\alpha|}{(\alpha+1)^{(\alpha+1)/\alpha}},
\end{equation*}
one obtains
\begin{multline*}
\psi_0(\alpha,\eta)\equiv
\frac{\left<\,f_\alpha\,\right>_{I}}{\left<\,f_\alpha\,\right>_{(0,1)}}=
\\
=
\frac{\left(1+\eta^{\alpha+1}\right)^{1/\alpha}}{(1+\eta)^{(\alpha+1)/\alpha}}+
\frac{(\alpha+1)^{(\alpha+1)/\alpha}}\alpha
\left[\frac1{1+\eta^{\alpha+1}}-\frac1{1+\eta}\right].
\end{multline*}
{\bf (b).} On the other hand, if $\eta\ge\eta_1$, then
\begin{multline*}
\Omega(f_\alpha;I)=\frac{2\cdot\sign\alpha}{1+\eta}\cdot2\int\limits_0^{\left(\left(f_\alpha\right)_I\right)^{1/\alpha}}\left(\left(f_\alpha\right)_I-x^\alpha\right)\,dx=
\\
=\frac4{1+\eta}\frac{|\alpha|}{\alpha+1}\left(\left(f_\alpha\right)_I\right)^{(\alpha+1)/\alpha}.
\end{multline*}
Consider now the expression
\begin{equation*}
\varphi_1(\alpha,\eta)\equiv\left<\,f_\alpha\,\right>_I=
\frac{\Omega(f_\alpha;I)}{\left(f_\alpha\right)_I}=
\frac{4|\alpha|}{(\alpha+1)^{(\alpha+1)/\alpha}}\frac{\left(1+\eta^{\alpha+1}\right)^{1/\alpha}}{(1+\eta)^{(\alpha+1)/\alpha}}.
\end{equation*}
Since
\begin{equation*}
\varphi_1(\alpha,1)=
\frac{2|\alpha|}{(\alpha+1)^{(\alpha+1)/\alpha}}=
\varphi_0(\alpha,0)= \left<\,f_\alpha\,\right>_{(0,1)},
\end{equation*}
it follows that
\begin{equation*}
\psi_1(\alpha,\eta)\equiv
\frac{\left<\,f_\alpha\,\right>_{I}}{\left<\,f_\alpha\,\right>_{(0,1)}}=
\frac{\varphi_1(\alpha,\eta)}{\varphi_1(\alpha,1)}=
2\frac{\left(1+\eta^{\alpha+1}\right)^{1/\alpha}}{(1+\eta)^{(\alpha+1)/\alpha}}.
\end{equation*}
Let $\psi$ be the function defined by
\begin{equation*}
\psi(\alpha,\eta)=\left\{\begin{array}{ll}
\psi_0(\alpha,\eta),\quad0\le\eta\le\eta_1,\\[1ex]
\psi_1(\alpha,\eta),\quad\eta_1\le\eta\le1.\\
\end{array}\right.
\end{equation*}
Then
\begin{equation*}
\frac{\llll f_\alpha\rrrr_{{\mathbb
R}}}{\llll f_\alpha\rrrr_{{\mathbb R_+}}}=
\max_{0\le\eta\le1}\psi(\alpha,\eta),
\end{equation*}
which completes the proof.
\rbx
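Equation~\eqref{eq23} is easy to solve numerically. The following bisection sketch (our illustration, not part of the proof) recovers the closed-form roots $\eta_1(1)=\sqrt2-1$ and $\eta_1(-1/2)=3-2\sqrt2$ quoted in Example~\ref{ex31} below:

```python
import math

def eta1(alpha):
    # root in (0, 1) of eta^alpha = 1 / (1 + alpha*(eta + 1));
    # g changes sign between 0+ and 1 for any alpha in (-1, 0) or (0, inf)
    g = lambda e: e ** alpha * (1 + alpha * (e + 1)) - 1
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

# closed-form roots quoted below for the case p = 2
assert abs(eta1(1.0) - (math.sqrt(2) - 1)) < 1e-9
assert abs(eta1(-0.5) - (3 - 2 * math.sqrt(2))) < 1e-9
```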
\begin{cor}\label{cor21}
Let $\psi$ be the function defined in Theorem~\ref{theo21}. Then
\begin{equation*}
\psi\left(-\frac\alpha{\alpha+1},\eta^{\alpha+1}\right)=
\psi\left(\alpha,\eta\right)\quad
(\alpha>-1,\ 0\le\eta\le1).
\end{equation*}
\end{cor}
\textbf{Proof.}
Straightforward calculations.
\rbx
For $p>1$ we set $\alpha=1/(p-1)$. Then
$$\alpha+1= p/(p-1),\quad
-\alpha/(\alpha+1)=-1/p,
$$
and Corollary~\ref{cor21} can be rewritten in the following form.
\begin{cor}\label{cor22}
Let $p>1$ and let $\psi$ be the function defined in
Theorem~\ref{theo21}. Then
\begin{equation*}
\psi\left(-\frac1p,\eta^{p/(p-1)}\right)=\psi\left(\frac1{p-1},\eta\right)\quad
(0\le\eta\le1).
\end{equation*}
\end{cor}
Recall that the ``norms'' on the Gurov-Reshetnyak class are denoted
by $\varepsilon^-_{\mathbb R}(p)\equiv \llll
f_{1/(p-1)}\rrrr_{\mathbb R}$ and $\varepsilon^+_{\mathbb
R}(p)\equiv \llll f_{-1/p}\rrrr_{{\mathbb R}}$, $p>1$. At the same
time, one has
\begin{equation*}
\llll f_{1/(p-1)}\rrrr_{{\mathbb R_+}}=
\llll f_{-1/p}\rrrr_{\mathbb R_+}\equiv
\varepsilon_{{\mathbb R}_+}(p),
\end{equation*}
and Theorem~\ref{theo21} and Corollary~\ref{cor22} lead to the
following result.
\begin{cor}\label{cor23}
If $p>1$, then
\begin{equation*}
\llll f_{1/(p-1)}\rrrr_{\mathbb R}=
\llll f_{-1/p}\rrrr_{\mathbb R}\equiv
\varepsilon_{{\mathbb R}}(p)=
\varepsilon_{{\mathbb R}_+}(p)\cdot
\max_{0\le\eta\le1}\psi\left(\frac1{p-1},\eta\right).
\end{equation*}
\end{cor}
The next corollary refines Theorem~\ref{theo21} by giving a better
description of the set on which the function $\psi$ can attain its
maximum.
\begin{cor}\label{cor24}
Let $\alpha>-1$ and $\alpha\ne0$. Then
\begin{equation*}
\max_{0\le\eta\le1}\psi(\alpha,\eta)=\max_{0\le\eta\le\eta_1}\psi_0(\alpha,\eta),
\end{equation*}
where the function $\psi_0(\alpha,\eta)$ is defined in the proof of
Theorem~\ref{theo21}, and the number $\eta_1=\eta_1(\alpha)\in(0,1)$
is derived from the equation~\eqref{eq23}.
\end{cor}
\textbf{Proof.}
Since the function $\psi(\alpha,\eta)$ is continuous on the interval
$[0,1]$ in the variable $\eta$, it suffices to show that for any
fixed $\alpha$ the function $\psi_1(\alpha,\eta)$, defined in the
proof of Theorem~\ref{theo21}, is decreasing on the interval
$\left[\eta_1,1\right]$. For this we compute the derivative
\begin{equation*}
\frac\partial{\partial\eta}\psi_1(\alpha,\eta)=
2\frac{\alpha+1}\alpha
\frac{\left(1+\eta^{\alpha+1}\right)^{1/\alpha-1}}{(1+\eta)^{2+1/\alpha}}\left[\eta^\alpha
-1\right],
\end{equation*}
and observe that if $0<\eta<1$, then the derivative is negative.
This completes the proof.
\rbx
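By Corollary~\ref{cor24}, the maximum of $\psi$ reduces to the maximum of $\psi_0$ on $\left[0,\eta_1\right]$. The following sketch (ours; a grid search with step $10^{-5}$) checks this for $\alpha=1$ and reproduces the ratio $\approx1.244737$ reported for $p=2$ in Table~\ref{tab:rezults} below:

```python
import math

def psi(alpha, eta, eta1):
    # psi_0 on [0, eta1], psi_1 on [eta1, 1], in the notation of the proof above
    a1 = alpha + 1
    r = (1 + eta ** a1) ** (1 / alpha) / (1 + eta) ** (a1 / alpha)
    if eta >= eta1:
        return 2 * r
    return r + a1 ** (a1 / alpha) / alpha * (1 / (1 + eta ** a1) - 1 / (1 + eta))

eta1 = math.sqrt(2) - 1            # root of the eta_1 equation for alpha = 1
grid = [k / 100000 for k in range(100001)]
best = max(psi(1.0, e, eta1) for e in grid)
best0 = max(psi(1.0, e, eta1) for e in grid if e <= eta1)
assert abs(best - best0) < 1e-9    # the maximum is attained on [0, eta_1]
assert abs(best - 1.244737) < 1e-4
```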
Corollaries~\ref{cor22} and~\ref{cor24} imply the following result:
\begin{cor}\label{cor25}
If $p>1$, then
\begin{equation*}
\max_{0\le\eta\le\eta_1\left(1/(p-1)\right)}\psi_0\left(\frac1{p-1},\eta\right)=
\max_{0\le\eta\le\eta_1\left(-1/p\right)}\psi_0\left(-\frac1{p},\eta\right).
\end{equation*}
\end{cor}
Let us compute the derivative of the function $\psi_0(\alpha,\eta)$.
We have
\begin{align*}
\frac\partial{\partial\eta}\psi_0(\alpha,\eta)&=
\frac{(\alpha+1)^{(\alpha+1)/\alpha}}{\alpha}
\frac{\alpha+1}{1+\eta^{\alpha+1}}\frac1{1+\eta}
\\
&\quad\times
\left\{\left[1-\left(\frac1{\alpha+1}\frac{1+\eta^{\alpha+1}}{1+\eta}\right)^{1/\alpha}\right]
\left[\frac1{\alpha+1}\frac{1+\eta^{\alpha+1}}{1+\eta}-\eta^\alpha\right]
\right.
\\
&\quad-
\left[\frac\alpha{\alpha+1}\left(\frac1{\alpha+1}\frac{1+\eta^{\alpha+1}}{1+\eta}\right)^{(\alpha+1)/\alpha}\!-\!
\frac1{\alpha+1}\frac{1+\eta^{\alpha+1}}{1+\eta}+\frac1{\alpha+1}\right]
\\
&\phantom{\qquad-
\frac\alpha{\alpha+1}\left(\frac1{\alpha+1}\frac{1+\eta^{\alpha+1}}{1+\eta}\right)^{(\alpha+1)/\alpha}}
\left.
\times\;\eta^\alpha\frac{(\alpha+1)(1+\eta)}{1+\eta^{\alpha+1}}
\right\}.
\end{align*}
Note that for $\eta=\eta_1$ the second factor in the second line of
this formula vanishes. Moreover, the expression in the third line is
nothing but $(1+\eta)\Omega(f_\alpha;I)/(2\sign\alpha)$, and
elementary computations lead to the following representation for the
derivative of the function $\psi_0(\alpha,\eta)$:
\begin{multline*}
\frac\partial{\partial\eta}\psi_0(\alpha,\eta)=\frac{\alpha+1}\alpha\frac1{(1+\eta)^{2+1/\alpha}\left(1+\eta^{\alpha+1}\right)^2}
\times
\\
\times \left[
\left(1+\eta^{\alpha+1}\right)^{(\alpha+1)/\alpha}\left(\eta^\alpha-1\right)+
(\alpha+1)^{1/\alpha}\left(1+\eta^{\alpha+1}\right)^2(1+\eta)^{1/\alpha}-
\right.
\\
\left.
-
(\alpha+1)^{(\alpha+1)/\alpha}\eta^\alpha(1+\eta)^{2+1/\alpha}\right].
\end{multline*}
It is easily seen that
\begin{align*}
&\frac\partial{\partial\eta}\psi_0(\alpha,0)=
\frac{\alpha+1}\alpha\left[(\alpha+1)^{1/\alpha}-1\right]>0\quad
\text{if
}\alpha>0,\\[1ex]
&\frac\partial{\partial\eta}\psi_0(\alpha,0+)=+\infty\quad \text{if
} -1<\alpha<0.
\end{align*}
On the other hand,
\begin{equation*}
\frac\partial{\partial\eta}\psi_0\left(\alpha,\eta_1\right)=
-\frac{(\alpha+1)^{(3\alpha+1)/\alpha}}{2|\alpha|}
\frac{1+\eta_1}{\left(1+\eta_1^{\alpha+1}\right)^2}\eta_1^\alpha
\Omega\left(f_\alpha;I\left(\eta_1\right)\right)<0.
\end{equation*}
This leads to the following result.
\begin{cor}\label{cor26}
For each fixed $\alpha$ the function $\psi_0(\alpha,\eta)$ attains
its maximal value on the interval $\left[0,\eta_1\right]$ at an
interior point
$\eta_{max}=\eta_{max}(\alpha)\in\left(0,\eta_1\right)$, i.e. at a
point where
\begin{equation*}
\frac\partial{\partial\eta}\psi_0(\alpha,\eta)=0.
\end{equation*}
\end{cor}
Corollary~\ref{cor26} means that $\eta_{max}$ is the root of the
equation
\begin{multline}\label{eq35}
\left(1+\eta^{\alpha+1}\right)^{(\alpha+1)/\alpha}\left(\eta^\alpha-1\right)+
(\alpha+1)^{1/\alpha}\left(1+\eta^{\alpha+1}\right)^2(1+\eta)^{1/\alpha}-
\\
-
(\alpha+1)^{(\alpha+1)/\alpha}\eta^\alpha(1+\eta)^{2+1/\alpha}=0.
\end{multline}
However, even in the simplest case $\alpha=1$ the authors do not
know an analytic solution of this equation (see Example~\ref{ex31}
below).
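Although an analytic solution is unavailable, $\eta_{max}$ is easily found numerically. The sketch below (ours) applies bisection to the left-hand side of~\eqref{eq35}, using the sign change between $0$ and $\eta_1$ established above:

```python
import math

def F(alpha, eta):
    # left-hand side of the equation for eta_max
    # (the bracketed factor in d psi_0 / d eta)
    a1 = alpha + 1
    return ((1 + eta ** a1) ** (a1 / alpha) * (eta ** alpha - 1)
            + a1 ** (1 / alpha) * (1 + eta ** a1) ** 2 * (1 + eta) ** (1 / alpha)
            - a1 ** (a1 / alpha) * eta ** alpha * (1 + eta) ** (2 + 1 / alpha))

def eta_max(alpha, eta1):
    # F > 0 near 0 and F < 0 at eta_1, so bisection applies
    lo, hi = 1e-9, eta1
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if F(alpha, lo) * F(alpha, mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

# alpha = 1 (p = 2): the tabulated maximum point is eta_max ~ 0.2531
assert abs(eta_max(1.0, math.sqrt(2) - 1) - 0.2531) < 5e-4
```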
\begin{rem}\label{rem24}
The numerical study of the behaviour of the function
$\psi_0(\alpha,\eta)$ for different values of $\alpha$ shows that
the derivative $\frac\partial{\partial\eta}\psi_0(\alpha,\eta)$ has
a unique root $\eta_{max}=\eta_{max}(\alpha)$ in the interval
$(0,\eta_1)$. However, the authors are not aware of a rigorous
proof of this fact.
\end{rem}
\section{Numerical experiments, examples, and
comments}
Fix an $\varepsilon\in(0,2)$. Set
$p=p(\varepsilon)=p^+_{R}(\varepsilon)>1$,
$\alpha=\alpha(\varepsilon)=1/(p(\varepsilon)-1)$ and define
$\eta_1=\eta_1(\varepsilon)$ by~\eqref{eq23}. According to
Theorem~\ref{theo21} and Corollary~\ref{cor24}, one has
\begin{equation}\label{eq34}
\left\|\,{\bf T}\,\right\|_{\mathcal{GR}}^{(\varepsilon)}
\ge
\max_{0\le\eta\le\eta_1}\psi_0(\alpha,\eta)
\equiv C_\varepsilon,
\quad
\left\|\,{\bf T}\,\right\|_{\mathcal{GR}}
\ge
\sup_{0<\varepsilon<2}C_\varepsilon
\equiv C.
\end{equation}
Table~\ref{tab:rezults} presents some values of the above
parameters obtained in numerical experiments; columns $6$ and $8$
contain the maximum points of the function
$\psi_0\left(\alpha,\eta\right)$ for $\alpha=1/(p-1)$ and $\alpha
=-1/p$, respectively.
\begin{table}[h]
\caption{The Gurov-Reshetnyak ``norms'' and extremal points.
Numerical results.\label{tab:rezults}}
\begin{center}
\begin{tabular}{|r|l|l|l|l@{\hspace{-1mm}}|l|l|l@{\hspace{-1mm}}|}
\hline
$p\quad$ &
$\varepsilon=\varepsilon_{\mathbb R_+}$ &
$\varepsilon_{\mathbb R}\quad$ &
$C_\varepsilon=\D\frac{\varepsilon_{\mathbb R}}{\varepsilon_{\mathbb R_+}}$ &
$\D\alpha=\frac1{p-1}$ &
$\quad \eta_{max}^+\ $ &
$\D \alpha=-\frac1p $ &
$\quad \eta_{max}^-\ $\\
\hline
\scriptsize{$1\hskip 15pt$}&
\scriptsize{$2\hskip 15pt$}&
\scriptsize{$3\hskip 15pt$}&
\scriptsize{$4\hskip 25pt$}&
\scriptsize{$5\hskip 15pt$}&
\scriptsize{$6\hskip 15pt$}&
\scriptsize{$7\hskip 19pt$}&
\scriptsize{$8\hskip 15pt$}\\[-2pt]
\hline\hline
1.15 & 1.2813 & 1.4647 & 1.143133 & 6.6667 & 0.5484 & -0.8696 & 0.0100\\
\hline
1.20 & 1.1647 & 1.3542 & 1.162679 & 5.0000 & 0.4936 & -0.8333 & 0.0145\\
\hline
1.33 & 0.9493 & 1.1346 & 1.195193 & 3.0303 & 0.4030 & -0.7519 & 0.0257\\
\hline
1.50 & 0.7698 & 0.9378 & 1.218204 & 2.0000 & 0.3372 & -0.6667 & 0.0383\\
\hline
1.67 & 0.6513 & 0.8018 & 1.231116 & 1.4993 & 0.2982 & -0.5999 & 0.0486\\
\hline
{\bf 2.00} & {\bf 0.5000} & {\bf 0.6224} & {\bf 1.244737} & {\bf 1.0000} & {\bf 0.2531} & {\bf -0.5000} & {\bf 0.0640}\\
\hline
{\bf 3.00} & {\bf 0.2963} & {\bf 0.3726} & {\bf 1.257683} & {\bf 0.5000} & {\bf 0.2001} & -0.3333 & 0.0895\\
\hline
6.00 & 0.1340 & 0.1692 & 1.263337 & 0.2000 & 0.1638 & -0.1667 & 0.1141\\
\hline
11.00 & 0.0701 & 0.0886 & 1.264397 & 0.1000 & 0.1508 & -0.0909 & 0.1248\\
\hline
21.00 & 0.0359 & 0.0454 & 1.264692 & 0.0500 & 0.1442 & -0.0476 & 0.1309\\
\hline
101.00 & 0.0073 & 0.0093 & 1.264793 & 0.0100 & 0.1388 & -0.0099 & 0.1361\\
\hline
1001.00 & 0.0007 & 0.0009 & 1.264797 & 0.0010 & 0.1376 & -0.0010 & 0.1373\\
\hline
9999.00 & 0.0001 & 0.0001 & {\bf 1.264797} & 0.0001 & 0.1375 & -0.0001 & 0.1374\\
\hline
\end{tabular}
\end{center}
\end{table}
In addition, the results of numerical experiments are reflected in
Figures~\ref{ris:pic-1} and~\ref{ris:pic-2}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
\pgfplotsset{width=6cm}
\begin{semilogxaxis} [
title = {The Gurov--Reshetnyak ``norms''},
xlabel = {$p$},
legend pos = north east,
ymin = 0,
grid = major,
] \legend{
$\varepsilon_{R_+}(p)$,
$\varepsilon_{R}(p)$
};
\addplot[mark = otimes,
mark options = {
scale = 1.0,
fill = pink,
draw = black
}] coordinates {
(1.15,1.2813)
(1.20,1.1647)
(1.33,0.9493)
(1.50,0.7698)
(1.67,0.6513)
(2.00,0.5000)
(3.00,0.2963)
(6.00,0.1340)
(11.00,0.0701)
(21.00,0.0359)
};
\addplot[mark = diamond,
mark options = {
scale = 1.3,
fill = pink,
draw = black
}] coordinates {
(1.15,1.4647)
(1.20,1.3542)
(1.33,1.1346)
(1.50,0.9378)
(1.67,0.8018)
(2.00,0.6224)
(3.00,0.3726)
(6.00,0.1692)
(11.00,0.0886)
(21.00,0.0454)
};
\end{semilogxaxis}
\end{tikzpicture}
\begin{tikzpicture}
\pgfplotsset{width=6cm}
\begin{semilogyaxis} [
title = The limiting exponent,
xlabel = {$\varepsilon$},
legend pos = north east,
ymin = 1.14,
grid = major
] \legend{
$p\left(\varepsilon_{R_+}\right)$,
$p\left(\varepsilon_{R}\right)$
};
\addplot[mark = otimes,
mark options = {
scale = 1.0,
fill = pink,
draw = black
}] coordinates {
(1.2813,1.15)
(1.1647,1.20)
(0.9493,1.33)
(0.7698,1.50)
(0.6513,1.67)
(0.5000,2.00)
(0.2963,3.00)
(0.1340,6.00)
(0.0701,11.00)
(0.0359,21.00)
};
\addplot[mark = diamond,
mark options = {
scale = 1.3,
fill = pink,
draw = black
}] coordinates {
(1.4647,1.15)
(1.3542,1.20)
(1.1346,1.33)
(0.9378,1.50)
(0.8018,1.67)
(0.6224,2.00)
(0.3726,3.00)
(0.1692,6.00)
(0.0886,11.00)
(0.0454,21.00)
};
\end{semilogyaxis}
\end{tikzpicture}
\end{center}
\centering
\vskip -0.5cm \caption{Relations between $p$ and
$\varepsilon$.} \label{ris:pic-1}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
\pgfplotsset{width=6cm}
\begin{semilogxaxis} [
title = {The growth of the ``norms''},
xlabel = {$p$},
legend pos = south east,
ymin = 1.14,
grid = major
] \legend{
$\left(\frac{\varepsilon_{R}}{\varepsilon_{R_+}}\right)(p)$
};
\addplot [solid, draw = black, line width=0.7pt]coordinates {
(1.15,1.143133)
(1.20,1.162679)
(1.33,1.195193)
(1.50,1.218204)
(1.67,1.231116)
(2.00,1.244737)
(3.00,1.257683)
(6.00,1.263337)
(11.00,1.264397)
(21.00,1.264692)
(101.00,1.264793)
(1001.00,1.264797) (9999.00,1.264797)
};
\end{semilogxaxis}
\end{tikzpicture}
\begin{tikzpicture}
\pgfplotsset{width=6cm}
\begin{semilogxaxis} [
title = The lower estimate of $\|{\bf T}\|_{{\mathcal GR}}^{(\varepsilon)}$,
xlabel = {$\varepsilon$},
legend pos = south west,
ymin = 1.14,
grid = major
] \legend{
$C_\varepsilon$
};~\addplot [solid, draw = black, line width=0.7pt]coordinates {
(1.2813,1.143133)
(1.1647,1.162679)
(0.9493,1.195193)
(0.7698,1.218204)
(0.6513,1.231116)
(0.5000,1.244737)
(0.2963,1.257683)
(0.1340,1.263337)
(0.0701,1.264397)
(0.0359,1.264692)
(0.0073,1.264793)
(0.0007,1.264797)
(0.0001,1.264797)
};
\end{semilogxaxis}
\end{tikzpicture}
\end{center}
\vskip -0.5cm \caption{The growth of the norms during the extension
from $\mathbb R_+$ to $\mathbb R$.} \label{ris:pic-2}
\end{figure}
{\bf Comments on the graphs.}\nopagebreak
{\bf Figure \ref{ris:pic-1}.} The lower graph in the left part of
Figure~\ref{ris:pic-1} shows the dependence of the Gurov-Reshetnyak
``norms'' $\varepsilon_{\mathbb R_+}(p)$ on the parameter $p$. These
results are obtained from formula~\eqref{eq11}. Note that the data
are plotted on a logarithmic scale and do not include all results
from Column $2$. The upper graph shows the dependence of the
exponents $\varepsilon_{\mathbb R}(p)$ on $p$, presented in
Corollary~\ref{cor23}.
In the right part of Figure~\ref{ris:pic-1} we show the graphs of
the inverse relations, i.e. the values of $p$ for which the function
$f_{1/(p-1)}$ belongs to the class $\mathcal{GR}_{\mathbb
R_+}(\varepsilon)$ (the lower line) or to the class
$\mathcal{GR}_{\mathbb R}(\varepsilon)$ (the upper line) for a given
$\varepsilon$.
{\bf Figure \ref{ris:pic-2}.} The left graph shows the quotient of
the ``norms'' of the function $f_{1/(p-1)}$ in the classes
$\mathcal{GR}_{\mathbb R}$ and $\mathcal{GR}_{\mathbb R_+}$ as a
function of the parameter~$p$. The right graph reflects the
behaviour of the parameter $C_\varepsilon$ (see~\eqref{eq34}) as a
function of $\varepsilon$. In both cases the variables $p$ and
$\varepsilon$ are plotted on a logarithmic scale.
\begin{rem}\label{rem31}
As is seen from Table~\ref{tab:rezults}, the ``norm'' of the
operator $\mathbf{T}$ can be estimated as follows:
$$
\left\|\,{\bf T}\,\right\|_{\mathcal{GR}}\ge
C=\lim_{\varepsilon\to0+}C_\varepsilon\approx 1.264797,
$$
where the constant $C$ is defined in~\eqref{eq34}. The corresponding
numerical value $C$ is shown in boldface in the last row of
Table~\ref{tab:rezults}.
\end{rem}
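This limiting value can be reproduced directly: evaluating $\max_\eta\psi_0(\alpha,\eta)$ for a very small $\alpha$ (i.e. a very large $p$, a very small $\varepsilon$) yields $C$ to the stated accuracy. A sketch (ours; the choice $\alpha=10^{-4}$ and the grid are illustrative):

```python
import math

def psi0(alpha, eta):
    a1 = alpha + 1
    # first term via logarithms: the exponent 1/alpha is large for small alpha
    t1 = math.exp((math.log(1 + eta ** a1) - a1 * math.log(1 + eta)) / alpha)
    t2 = a1 ** (a1 / alpha) / alpha * (1 / (1 + eta ** a1) - 1 / (1 + eta))
    return t1 + t2

alpha = 1e-4                       # corresponds to p ~ 10^4
best = max(psi0(alpha, k / 20000) for k in range(1, 20001))
assert abs(best - 1.264797) < 1e-3
```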
\begin{rem}\label{rem32}
For functions $f\in BMO$, i.e. functions of bounded mean
oscillation, the question about the sharp value of the norm
$\|\,f\,\|_{BMO,{\mathbb R}}=\sup\limits_{I\subset {\mathbb
R}}\Omega(f;I)$ of the even extension from ${\mathbb R_+}$ to
${\mathbb R}$ of a monotone function on the semi-axis $\mathbb R_+$
was posed in \cite{Kl} and, to the best of our knowledge, it is
still open. In \cite{DKT} the $BMO$-norm of the function
$f_0(x)=\ln(1/|x|)$, a typical representative of this class, was
found, namely
\begin{equation*}
\left\|\,f_0\,\right\|_{BMO,\mathbb R}=
\frac2{\rm e}\cdot\frac1{t+1}\left[\exp\left(\frac{t\,\ln t}{t+1}\right)+ {\rm
e}\frac{t\,\ln t}{t+1}\right],
\end{equation*}
where $t>1$ is the root of the equation
\begin{equation}\label{eq36}
\exp\left(\frac{t\,\ln t}{t+1}\right)= {\rm
e}\left(t-1-\frac{t+1}{\ln t}\right).
\end{equation}
Since $\left\|\,f_0\,\right\|_{BMO,\mathbb R_+}=2/{\rm e}$, an
approximate solution of equation \eqref{eq36} leads to the
following estimate (see \cite{DKT})
\begin{equation*}
\left\|\,{\bf T}\,\right\|_{BMO}=\!\!\sup_{\scriptsize
\begin{array}{cc}f\,\text{ is even on }\mathbb R,\\ \text{and monotone on }\mathbb
R_+\end{array}}\!\!\! \frac{\|\,f\,\|_{BMO,\mathbb
R}}{\|\,f\,\|_{BMO,\mathbb R_+}} \ge\frac{\|\,f_0\,\|_{BMO,\mathbb
R}}{\|\,f_0\,\|_{BMO,\mathbb R_+}}\equiv C_0 \approx 1.264797.
\end{equation*}
It is remarkable that $C$ and $C_0$ coincide to six decimal places,
i.e. the lower bounds of the norms $\left\|\,{\bf
T}\,\right\|_{\mathcal{GR}}$ and $\left\|\,{\bf
T}\,\right\|_{BMO}$ of the even extension operator ${\bf T}$ turn
out to be the same within the accuracy of the computation.
\end{rem}
\begin{exm}\label{ex31}
In the simplest case $\varepsilon=1/2$, $p=2$ and $\alpha=1$ or
$\alpha=-1/2$, one can find the roots of the equation~\eqref{eq23},
which are $\eta_1(1)=\sqrt2-1\approx0.414$,
$\eta_1\left(-1/2\right)=3-2\sqrt2\approx0.172$.
\begin{figure}[h]
\centering
\includegraphics[width=11cm]{graph11}
\caption{The graphs of the functions
$\psi(1,\eta)$, $\psi\left(-1/2,\eta\right)$ $(p=2)$ and
$\psi\left(1/2,\eta\right)$ $(p=3)$.} \label{ris:graph1}
\end{figure}
The corresponding row of Table~\ref{tab:rezults} is set in
boldface. In Figure~\ref{ris:graph1} the graphs of the functions
$\psi\left(-1/2,\eta\right)$ and $\psi(1,\eta)$ are represented by
solid lines. Note that $\psi_0(1,\eta_1(1))=
\psi_0\left(-1/2,\eta_1\left(-1/2\right)\right)=
2\sqrt2\left(\sqrt2-1\right)\approx1.172$. For these values of the
parameters, the equation~\eqref{eq35}, which is used to find
$\eta_{max}=\eta_{max}(1)\in\left(0,\eta_1\right)$, takes the form
\begin{equation*}
3\eta^5-3\eta^4-6\eta^3-10\eta^2-\eta+1=0.
\end{equation*}
Finding an analytic solution of this equation appears to be
difficult. However, one can show that this equation has a unique
solution in the interval $\left(0,\eta_1\right)$ (see
Remark~\ref{rem24}). Indeed, since the second derivative of the
function $\psi_0(1,\eta)$ is
\begin{equation*}
\frac{\partial^2}{\partial\eta^2}\psi_0(1,\eta)=
-4\left(\frac{3\eta}{(1+\eta)^4}+2\frac{1-3\eta^2}{\left(1+\eta^2\right)^3}\right)
\end{equation*}
and $1-3\eta^2>0$ for $0<\eta<\sqrt2-1$, one obtains that
$\frac{\partial^2}{\partial\eta^2}\psi_0(1,\eta)<0$ on the interval
$\left(0,\eta_1\right)$. This means that
$\frac{\partial}{\partial\eta}\psi_0(1,\eta)$ is strictly decreasing
in the interval $\left(0,\eta_1\right)$, hence it has a unique root
in this interval. By Corollary~\ref{cor22}, the derivative
$\frac{\partial}{\partial\eta}\psi_0(-1/2,\eta)$ also has a unique
root in the interval $\left(0,\eta_1(-1/2)\right)$.
Notice that the values $\eta_{max}(1)\approx0.253$,
$\eta_{max}\left(-1/2\right)\approx0.064$ are approximations of the
corresponding roots and $C_{1/2}\approx1.245$.
\end{exm}
\begin{exm}\label{ex32}
Let $\alpha=1/2$, $p=3$, and $\varepsilon\approx0.296$. In
Table~\ref{tab:rezults} the numerical results corresponding to this
case are shown in the row partially set in boldface. In
Figure~\ref{ris:graph1} the graph of the corresponding function
$\psi(1/2,\eta)$ is represented by the dashed line.
In this case, the equation~\eqref{eq23} takes the form
\begin{equation*}
\sqrt\eta=\frac2{3+\eta}.
\end{equation*}
The solution of this equation
\begin{equation*}
\eta_1\equiv\eta_1\left(\frac12\right)=\sqrt[3]{3+2\sqrt2}+\sqrt[3]{3-2\sqrt2}-2
\approx0.355
\end{equation*}
is obtained by Cardano's formula. We also have
\begin{equation*}
\psi_0\left(\frac12,\eta\right)=
\frac{\left(1+\eta^{3/2}\right)^2}{(1+\eta)^3}+
\frac{27}4\left[\frac1{1+\eta^{3/2}}-\frac1{1+\eta}\right],
\end{equation*}
$\psi\left(1/2,\eta_1\right)\approx1.180$. In addition,
equation~\eqref{eq35} takes the form
\begin{equation*}
\left(1+\eta^{3/2}\right)^3\left(\eta^{1/2}-1\right)+
\frac94\left(1+\eta^{3/2}\right)^2(1+\eta)^2-
\frac{27}8\eta^{1/2}(1+\eta)^4=0,
\end{equation*}
and its approximate solution is $\eta_{max}(1/2)\approx0.200$.
Finally, we obtain an approximate value of $C_{0.296}$, namely
$C_{0.296}\approx1.258$.
\end{exm}
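The Cardano expression for $\eta_1(1/2)$ can be cross-checked against a direct numerical solution of $\sqrt\eta=2/(3+\eta)$; the following sketch (ours) does so by bisection:

```python
def cardano_eta1():
    # closed form: cbrt(3 + 2*sqrt(2)) + cbrt(3 - 2*sqrt(2)) - 2
    s = 2 ** 0.5
    return (3 + 2 * s) ** (1 / 3) + (3 - 2 * s) ** (1 / 3) - 2

# numerical root of sqrt(eta) = 2/(3 + eta) on (0, 1)
g = lambda e: e ** 0.5 - 2 / (3 + e)
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)

assert abs(cardano_eta1() - (lo + hi) / 2) < 1e-9
assert abs(cardano_eta1() - 0.355) < 5e-4
```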
\section{Acknowledgements}
This research was supported by the Universiti Brunei Darussalam
under Grant UBD/GSR/S\&T/19.
\section{Introduction}
The topological superconductor (TSC), which hosts Majorana fermions \cite{Wilczek2009NP,Service2011Science,Alicea2012RPP,Beenakker2013ARCMP,Elliott2015RMP} and provides a platform for topological quantum computation \cite{Kitaev2003,Nayak2008RMP,Akhmerov2010PRB,Sarma2015Majorana,Zhang2018TQC}, is one of the most important and intensively explored topics in recent condensed matter physics \cite{Read2000PRB,Kitaev2001,Qi2011RMP,Leijnse2012SST,Stanescu2013JPCM,Sato2017RPP,He2018CSB}. A TSC, being a fermionic system described by a fully gapped bulk Hamiltonian, can be classified into the tenfold Altland-Zirnbauer (AZ) symmetry classes according to the three fundamental symmetries: particle-hole symmetry (PHS), time-reversal symmetry (TRS), and chiral symmetry \cite{AZ1997PRB,Schnyder2009AIP,Kitaev2009AIP,Ryu2010NJP,Chiu2016RMP}. For instance, two-dimensional (2D) TSCs fall into two categories, chiral TSCs and time-reversal invariant (TRI) TSCs, according to these symmetries. The chiral TSC \cite{Read2000PRB,PhysRevB.82.184516}, which possesses only PHS, belongs to class D of the AZ tenfold classification table and is characterized by a $\mathbb{Z}$ topological index (such as the integer Chern number). The chiral TSC, a superconductor analog of the quantum anomalous Hall insulator, hosts chiral Majorana edge modes (MEMs) at its boundary, whose existence is guaranteed by the bulk topological invariant. This is a manifestation of the bulk-boundary correspondence, a guiding principle in topological phases of matter. Another well-known example is the TRI TSC \cite{PhysRevLett.102.187001} in class DIII of the AZ tenfold classification table, where the system possesses PHS, TRS, and chiral symmetry. The TRI TSC, a superconductor analog of the quantum spin Hall insulator, is characterized by a $\mathbb{Z}_2$ topological index (such as the spin Chern number).
At the boundary of a TRI TSC, helical MEMs emerge, which form a Kramers pair of time-reversal-related chiral MEMs.
The search for MEMs in TSCs has recently received much attention. In fact, the realization of intrinsic TSCs remains a decade-old outstanding problem. Fortunately, the discovery of topological insulators provides a good platform for engineering TSCs. In 2008, Fu and Kane proposed that a strong topological insulator proximity-coupled to an $s$-wave superconductor can be used to realize a TSC \cite{PhysRevLett.100.096407}. Accordingly, one implementation scheme for chiral MEMs is a hybrid system consisting of a quantum anomalous Hall insulator and an $s$-wave superconductor \cite{PhysRevB.82.184516}, and a collection of theoretical and experimental studies of this scheme has been reported \cite{PhysRevB.83.100512,PhysRevB.92.064520,PhysRevB.93.161401,PhysRevLett.121.256801,He294Science,PhysRevLett.120.107002,Kayyalha64Science}. Moreover, helical MEMs in TSCs have also been extensively studied in theory and experiment \cite{PhysRevLett.105.097001,PhysRevLett.111.056402,PhysRevB.83.220510,PhysRevLett.108.147003,PhysRevLett.126.137001,Wang104science}, typical examples being the theoretical proposal of a heterostructure composed of a quantum spin Hall insulator sandwiched between two $s$-wave superconductors with a $\pi$ phase difference \cite{PhysRevB.83.220510} and the experimental signature of helical MEMs revealed in the domain walls of the iron-based superconductor FeSe$_{0.45}$Te$_{0.55}$ \cite{Wang104science}.
Until now, the great majority of research on TSCs has been carried out in crystalline systems, which can be treated by topological band theory. It is interesting to note that TSCs have recently also been investigated in quasicrystalline systems \cite{PhysRevLett.116.257002,PhysRevLett.123.196401,PhysRevLett.125.017002,ghadimi2020topological,PhysRevLett.110.176403,PhysRevB.94.125408,JPSJ2017Fab,PhysRevResearch.3.013148,zeng2020topological,cheng2021fate,PhysRevB.103.104203}, which lack translational symmetry and cannot be described by topological band theory. For example, Fulga \emph{et~al}. \cite{PhysRevLett.116.257002} proposed that a chiral TSC can be realized on an eightfold Ammann-Beenker (AB) tiling quasicrystalline lattice (QL), in which the chiral MEMs are characterized by a nonzero pseudospectrum $\mathbb{Z}$ index \cite{LORING2015383} and a quantized thermal conductance. Furthermore, Ghadimi \emph{et~al.} \cite{ghadimi2020topological} showed that both the fivefold Penrose tiling and the eightfold AB tiling QLs can serve as platforms for realizing a TSC with chiral MEMs, where the bulk topology is characterized by a unit Bott index. To the best of our knowledge, however, an investigation of TRI TSCs with helical MEMs in quasicrystalline systems is still lacking. Besides TSCs, multiple topological phases of matter have also been proposed in quasicrystalline systems in recent years \cite{PhysRevLett.111.226401,PhysRevB.91.085125,PhysRevB.100.214109,
PhysRevLett.121.126401,PhysRevB.98.125130,PhysRevB.100.115311,PhysRevB.103.085307,PhysRevB.100.085119,
PhysRevLett.124.036803,PhysRevB.102.241102,PhysRevResearch.2.033071,
PhysRevB.101.041103,PhysRevLett.109.106402,PhysRevLett.109.116404,PhysRevB.88.125118,PhysRevX.6.011016,
PhysRevLett.119.215304,PhysRevX.9.021054,PhysRevLett.123.150601,PhysRevB.101.115413,Huang_2020,
PhysRevLett.122.237601,PhysRevB.100.054301,PhysRevB.102.024205,PhysRevB.101.125418,PhysRevB.101.020201,PhysRevA.103.033325}, such as the quantum Hall insulators
\cite{PhysRevLett.111.226401,PhysRevB.91.085125,PhysRevB.100.214109}, the quantum spin Hall insulators \cite{PhysRevLett.121.126401,PhysRevB.98.125130,PhysRevB.100.115311,PhysRevB.103.085307,PhysRevB.100.085119}, and the higher-order topological insulators \cite{PhysRevLett.124.036803,PhysRevB.102.241102,PhysRevResearch.2.033071}. Experimentally, the photonic quasicrystals \cite{PhysRevLett.110.076403} and the quasiperiodic acoustic waveguides \cite{PhysRevLett.122.095501} can be employed as the platforms to realize the topological phase of matter in QLs.
In addition, one of the most significant properties of topological phases of matter is the robustness of the edge states against weak disorder, which is protected by the bulk topology. When the energy gap is closed by strong disorder, a topological phase transition occurs and the topology disappears. A more intriguing finding is that disorder can induce a topologically nontrivial phase in an initially clean and trivial system \cite{PhysRevB.100.115311,PhysRevB.103.085307,PhysRevLett.102.136806,
PhysRevLett.105.115501,Zhang_2013CPB,PhysRevB.92.085410,PhysRevLett.116.066401,PhysRevB.100.054108,
PhysRevB.80.165316,PhysRevLett.103.196805,PhysRevLett.105.216601,
PhysRevB.84.035110,PhysRevLett.113.046802,PhysRevB.91.214202,PhysRevB.96.205304,Wu_2016CPB,
PhysRevB.93.125133,Qin2016SR,PhysRevB.103.115430,Habibi2018PRB,Lieu2018PRB,PhysRevB.100.205302,
PhysRevLett.125.166801,PhysRevB.103.085408,
PhysRevLett.115.246603,PhysRevB.95.245305,PhysRevB.97.235109,
Zhang_2020SCPMA,luo2019nonhermitian,PhysRevA.101.063612,Liu_2020CPB,liu2021real,
PhysRevB.93.214206,PhysRevLett.114.056801,PhysRevLett.125.217202,PhysRevB.103.224207,hu2021bulkboundary}. And the disorder-induced topological phases have achieved in various experiment plotforms \cite{Meier2018Science,Stutzer2018Nature,PhysRevLett.125.133603,PhysRevLett.126.146802}, such as the one-dimensional disordered atomic wires \cite{Meier2018Science} and the photonic lattices \cite{Stutzer2018Nature}.
A pioneering work was the proposal by Li \emph{et~al.} in 2009 of the topological Anderson insulator in HgTe quantum wells \cite{PhysRevLett.102.136806}. Subsequently, this phenomenon of disorder-induced topological phases has been extensively studied,
including, but not limited to, in Chern insulators \cite{PhysRevLett.105.115501,Zhang_2013CPB,PhysRevB.92.085410,PhysRevLett.116.066401,PhysRevB.100.054108},
topological insulators \cite{PhysRevB.80.165316,PhysRevLett.103.196805,PhysRevLett.105.216601,
PhysRevB.84.035110,PhysRevLett.113.046802,PhysRevB.91.214202,PhysRevB.96.205304,Wu_2016CPB},
topological superconductors \cite{PhysRevB.93.125133,Qin2016SR,PhysRevB.103.115430,Habibi2018PRB,Lieu2018PRB,PhysRevB.100.205302},
and higher-order topological insulators \cite{PhysRevLett.125.166801,PhysRevB.103.085408,PhysRevLett.126.146802}. For instance, in crystalline systems, disorder-induced chiral MEMs in 2D TSCs \cite{PhysRevB.93.125133,Qin2016SR,PhysRevB.103.115430} and disorder-induced MEMs in 1D Kitaev superconductor chains \cite{Habibi2018PRB,Lieu2018PRB,PhysRevB.100.205302} have been reported in previous works. Meanwhile, it is important to note that topological Anderson insulators can also be implemented in quasicrystalline systems, such as the disorder-induced 2D quantum spin Hall insulators in the Penrose tiling \cite{PhysRevB.100.115311} and AB tiling \cite{PhysRevB.103.085307} QLs, in which the topologically nontrivial phase is characterized by a nonzero spin Bott index and a quantized two-terminal conductance. In view of the recent research on TSCs in QLs and the significant progress on disorder-induced topological phases in various systems, an intriguing question is whether disorder-induced MEMs can also emerge in 2D quasicrystalline TSCs.
In this work, we systematically investigate the effects of Anderson-type disorder on 2D AB tiling quasicrystalline TSCs, covering a chiral TSC in class D and a TRI TSC in class DIII. The AB tiling quasicrystal \cite{grunbaum1987tilings,c641bbfabe714fefbd7c37e571cb29fa,Duneau_1989} is tiled by squares and rhombuses with a small angle of $45^{\circ}$ (see Fig.~\ref{fig1}), and its construction through the inflation method is described in Ref.~\cite{PhysRevLett.116.257002}. The Bott index (a real-space $\mathbb{Z}$ index for the class D system) and the spin Bott index (a real-space $\mathbb{Z}_2$ index for the class DIII system) are employed to characterize the topologically nontrivial phases with MEMs in the class D chiral and class DIII TRI TSC systems. Meanwhile, to verify the results of the topological invariants, we employ the recursive Green's function method to calculate the thermal conductance and test for the existence of the MEMs in the two quasicrystalline TSC systems. We reveal rich phase diagrams of the two TSC systems when disorder is turned on, finding that the chiral and helical MEMs are stable under weak disorder, while strong disorder destroys them. We also show that a disorder-induced topologically nontrivial phase appears at certain parameter values in the class D chiral TSC system, accompanied by disorder-induced chiral MEMs located at the square edge of the finite QL sample. Similarly, disorder-induced helical MEMs are found in the class DIII quasicrystalline TSC.
The rest of the paper is organized as follows. In Sec.~\ref{Model}, we introduce two lattice tight-binding models with disorder on the AB tiling QL. We give the details of the numerical methods in Sec.~\ref{Methods}, and reveal the chiral and helical MEMs of the two TSC systems in Sec.~\ref{without_Disorder}. Subsequently, in Sec.~\ref{Disorder}, we present numerical results on the disorder-driven topological phase transitions of the two TSC systems, and we end with a summary in Sec.~\ref{Conclusion}.
\section{Models}
\label{Model}
\begin{figure}[t]
\includegraphics[width=5cm]{fig1.pdf} \caption{Schematic illustration of the Ammann-Beenker tiling quasicrystal. The quasicrystal consists of two types of primitive tiles: a square tile and a rhombus tile with a small angle of $45^{\circ}$. The black vertices represent the quasicrystal lattice sites. The short diagonal of a rhombus defines a nearest-neighbor bond, and the sides of the two primitive tiles (red lines) define the next-nearest-neighbor bonds, etc.}%
\label{fig1}
\end{figure}
We start with two lattice tight-binding models, describing a chiral TSC and a TRI TSC, respectively, with Anderson-type disorder on the AB tiling QL; a diagrammatic sketch of the QL is shown in Fig.~\ref{fig1}. Here we keep only the hopping and pairing terms on the first two neighbor bonds (i.e., only the nearest-neighbor and next-nearest-neighbor bonds of the QL are considered) and ignore longer-range terms. In addition, we denote the number of lattice sites by $N$, and use the lattice site distance of the next-nearest-neighbor bond as the length unit. The class D Hamiltonian of the chiral TSC is given by
\begin{align}
H_{\text{D}}=& \sum_{j}\mu_{j} \psi_{j}^{\dag } \tau _{z}\psi_{j}+\sum_{j\not=k}\frac{u (d_{jk})}{2}\psi_{j}^{\dag } \left\{-t\tau _{z} \right. \nonumber \\
& \left.+i\Delta [\cos(\theta_{jk}) \tau _{x}+\sin( \theta_{jk})\tau _{y}] \right\}\psi_{k},
\label{HD}
\end{align}
where the basis is $\psi_{j}^{\dag }=(\varphi_{j}^{\dag }, \varphi_{j})$, and $j$ and $k$ denote lattice sites running from $1$ to $N$. $\tau_{x,y,z}$ are the Pauli matrices acting on the particle-hole degree of freedom. $t$ is the hopping amplitude, and $\Delta$ is the strength of $p$-wave superconducting pairing. $\theta_{jk}$ is the polar angle of bond connecting sites $j$ and $k$ with respect to the horizontal direction. $u(d_{jk})=e^{-(d_{jk}-d_0)/\xi }$ is the spatial decay factor of the hopping and pairing terms, where $\xi$ is the decay length, $d_{jk}$ is the lattice site distance, and $d_0$ is the lattice site distance of the next-nearest-neighbor bond. The lattice site distance is $d_{jk}=\left\vert \mathbf{d}_{j}-\mathbf{d}_{k}\right\vert$, where $\mathbf{d}_{j}$ and $\mathbf{d}_{k}$ are the coordinates of lattice sites. The Anderson-type disorder term is $\mu _{j}=\mu +W\omega _{j}$, where $\mu$ is the chemical potential, $\omega_{j}$ is the uniform random variable chosen from $\left[ -0.5,0.5\right]$, and $W$ is the disorder strength. The class D Hamiltonian (\ref{HD}) only obeys the PHS and satisfies the relation $P_{1}H_{\text{D}}P_{1}^{-1}=-H_{\text{D}}$. $P_{1}=\tau _{x}\mathcal{I}K$ is the PHS operator, where $K$ is the complex conjugate operator and $\mathcal{I}$ is the $N\times N$ identity matrix.
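As an illustration of how this BdG matrix can be assembled numerically, the following sketch (an addition for illustration, not part of the original text) builds the class D Hamiltonian for an arbitrary 2D point set; its Hermiticity and particle-hole symmetry can then be checked directly. For simplicity, it keeps every bond within a hypothetical distance cutoff rather than only the first two neighbor shells of the AB tiling.

```python
import numpy as np

# Pauli matrices acting on the particle-hole degree of freedom
tau_x = np.array([[0, 1], [1, 0]], dtype=complex)
tau_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
tau_z = np.array([[1, 0], [0, -1]], dtype=complex)

def build_class_d(points, t=1.0, Delta=1.0, mu=1.6, W=0.0,
                  xi=1.0, d0=1.0, cutoff=1.5, rng=None):
    """BdG matrix of the class D Hamiltonian on an arbitrary point set.

    For simplicity, every pair of sites within `cutoff` is kept (the paper
    keeps only the first two neighbor shells of the AB tiling), and the
    decay factor u(d) = exp(-(d - d0)/xi) weights each bond.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    N = len(points)
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    omega = rng.uniform(-0.5, 0.5, N)        # Anderson-type onsite disorder
    for j in range(N):
        H[2*j:2*j+2, 2*j:2*j+2] += (mu + W * omega[j]) * tau_z
        for k in range(N):
            if j == k:
                continue
            d = points[k] - points[j]
            djk = np.hypot(d[0], d[1])
            if djk > cutoff:
                continue
            theta = np.arctan2(d[1], d[0])   # polar angle of the bond j -> k
            u = np.exp(-(djk - d0) / xi)
            H[2*j:2*j+2, 2*k:2*k+2] += 0.5 * u * (
                -t * tau_z
                + 1j * Delta * (np.cos(theta) * tau_x + np.sin(theta) * tau_y))
    return H
```

For a random point set, one can verify numerically that $H$ is Hermitian and that $P_1 H^{*} P_1^{-1} = -H$ with $P_1=\tau_x \mathcal{I} K$.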
The class DIII Hamiltonian of the TRI TSC is written as
\begin{align}
H_{\text{DIII}}=& \sum_{j}\mu_{j} \psi_{j}^{\dag } \tau _{z}\sigma _{0}\psi_{j}+\sum_{j\not=k}\frac{u (d_{jk})}{2}\psi_{j}^{\dag } \left\{-t\tau _{z} \sigma _{0} \right. \nonumber \\
& \left.+i\Delta \left[\cos(\theta_{jk}) \tau _{x} \sigma _{z}+\sin( \theta_{jk})\tau _{y}\sigma _{0} \right] \right\}\psi_{k},
\label{HDIII}
\end{align}
where the basis is $\psi_{j}^{\dag }=(\varphi_{j,\uparrow}^{\dag }, \varphi_{j,\uparrow}, \varphi_{j,\downarrow}^{\dag }, \varphi_{j,\downarrow})$. $\sigma_{0}$ and $\tau_{0}$ are the $2\times2$ identity matrices. $\sigma_{x,y,z}$ and $\tau_{x,y,z}$ are the Pauli matrices acting on the spin and particle-hole degrees of freedom, respectively. The other quantities have the same physical meaning as in the class D Hamiltonian $H_{\text{D}}$. The class DIII Hamiltonian (\ref{HDIII}) satisfies
\begin{align}
P_{2}H_{\text{DIII}}P_{2}^{-1}&=-H_{\text{DIII}}, \nonumber \\
TH_{\text{DIII}}T^{-1}&=H_{\text{DIII}}, \\
CH_{\text{DIII}}C^{-1}&=-H_{\text{DIII}}. \nonumber
\end{align}
Here $P_{2}$, $T$, and $C$ are the PHS, TRS and chiral symmetry operators, respectively, and they are expressed by
\begin{equation}
P_{2}=\tau _{x}\sigma _{0}\mathcal{I}K, T=\tau _{0}\sigma _{y}\mathcal{I}K, C=P_{2}\cdot T.
\end{equation}
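These three symmetry relations can be checked numerically on a minimal two-site sketch (an illustrative addition, assuming the basis ordering spin $\otimes$ particle-hole; not part of the original text):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
t0 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tx = np.array([[0, 1], [1, 0]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]], dtype=complex)
tz = np.array([[1, 0], [0, -1]], dtype=complex)

def diii_bond(theta, t=1.0, Delta=1.0):
    """4x4 bond block of the class DIII Hamiltonian (basis: spin (x) particle-hole)."""
    return (-t * np.kron(s0, tz)
            + 1j * Delta * (np.cos(theta) * np.kron(sz, tx)
                            + np.sin(theta) * np.kron(s0, ty)))

def toy_hdiii(theta=0.3, mu=1.6, u=1.0):
    """Two-site class DIII Hamiltonian: onsite terms plus a single bond."""
    H = np.zeros((8, 8), dtype=complex)
    H[:4, :4] = H[4:, 4:] = mu * np.kron(s0, tz)
    H[:4, 4:] = 0.5 * u * diii_bond(theta)
    H[4:, :4] = 0.5 * u * diii_bond(theta + np.pi)   # theta_kj = theta_jk + pi
    return H
```

With $P_2=\tau_x\sigma_0 K$ and $T=\tau_0\sigma_y K$ represented sitewise, the PHS, TRS, and chiral relations above all hold for this toy Hamiltonian.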
Hereafter, we take $t$ as the energy unit, and set the spatial decay length $\xi$ and the lattice site distance $d_0$ to $1$.
\section{Numerical Methods}
\label{Methods}
\subsection{Bott index and spin Bott index}
The topologically nontrivial phase with edge states is characterized by a bulk topological invariant. Here we briefly introduce the two real-space topological invariants employed to characterize the topological phases of the two TSCs in the AB tiling QL, since the QL lacks translational symmetry and cannot be handled by momentum-space topological invariants. The two real-space topological invariants are the Bott index \cite{Loring_2010EPL,HASTINGS20111699,LORING2015383,PhysRevX.6.011016,PhysRevB.98.125130,ghadimi2020topological}, a $\mathbb{Z}$ index characterizing the 2D chiral TSC system with chiral MEMs in class D, and the spin Bott index \cite{PhysRevLett.121.126401,PhysRevB.98.125130,PhysRevB.100.115311,PhysRevB.103.085307}, a $\mathbb{Z}_2$ index characterizing the 2D TRI TSC system with helical MEMs in class DIII. Note that both indices are numerically calculated in real space with an approximate periodic boundary condition, in which a square-shaped AB tiling QL is wrapped onto a torus geometry.
We first outline the concrete numerical calculation of the Bott index \cite{Loring_2010EPL,HASTINGS20111699,LORING2015383,PhysRevX.6.011016,PhysRevB.98.125130,ghadimi2020topological}. To begin, we construct the occupation projector operator as
\begin{align}
Q=\sum_{j}^{L}|\Psi _{j}\rangle \langle \Psi _{j}|,
\label{P}
\end{align}
where $\Psi _{j}$ is the $j$th eigenvector of the Hamiltonian, with $j$ running from $1$ to $L$, the total number of negative eigenvalues; the negative-energy states are the occupied states owing to the PHS of the TSC systems. Then, we define the projected position operators as
\begin{align}
&U_{X}=Q e^{i2\pi X}Q+(I-Q),\\
&V_{Y}=Qe^{i2\pi Y}Q+(I-Q),
\end{align}
where $I$ is the identity matrix of the same dimension as the Hamiltonian. $X$ and $Y$ are two diagonal matrices whose diagonal elements are $X_{jj}=x_{j}$ and $Y_{jj}=y_{j}$, with $(x_{j},y_{j})$ the coordinates of the $j$th lattice site rescaled to the interval $[0,1)$. By measuring the commutativity of the projected position operators \cite{PhysRevB.98.125130}, the Bott index is defined as
\begin{align}
B=\frac{1}{2\pi }{\rm Im}\{{\rm Tr}[\log (V_{Y}U_{X}V_{Y}^{\dag}U_{X}^{\dag })]\}.
\label{Bott}
\end{align}
The case with $B=0$ corresponds to the topologically trivial phase with no MEMs, and $B=1$ corresponds to the topologically nontrivial phase with chiral MEMs.
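The procedure above can be transcribed directly into a dense-matrix sketch (an illustrative addition, assuming the site coordinates have already been rescaled to $[0,1)^2$; the trace of the matrix logarithm is evaluated through the eigenvalue angles):

```python
import numpy as np

def bott_index(H, coords):
    """Bott index from the projected position operators. `coords` holds the
    site positions rescaled to [0, 1)^2; each site carries
    H.shape[0] // len(coords) internal orbitals."""
    N = len(coords)
    dof = H.shape[0] // N                    # internal degrees of freedom per site
    evals, evecs = np.linalg.eigh(H)
    occ = evecs[:, evals < 0]
    Q = occ @ occ.conj().T                   # occupied-state projector
    X = np.repeat(coords[:, 0], dof)
    Y = np.repeat(coords[:, 1], dof)
    I = np.eye(H.shape[0])
    Ux = Q @ np.diag(np.exp(2j * np.pi * X)) @ Q + (I - Q)
    Vy = Q @ np.diag(np.exp(2j * np.pi * Y)) @ Q + (I - Q)
    # Im Tr log M equals the sum of the angles of the eigenvalues of M
    lam = np.linalg.eigvals(Vy @ Ux @ Vy.conj().T @ Ux.conj().T)
    return float(np.sum(np.angle(lam)) / (2 * np.pi))
```

As a sanity check, any purely onsite (atomic-limit) Hamiltonian gives commuting projected position operators and hence $B=0$.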
Having explained the numerical calculation of the Bott index, we now outline the concrete calculation of the spin Bott index \cite{PhysRevLett.121.126401,PhysRevB.98.125130,PhysRevB.100.115311,PhysRevB.103.085307}. First, we formulate the projected spin operator as
\begin{equation}
Q_{z}=Q\hat{\eta}_{z}Q,
\end{equation}
where $Q$ is the occupation projector operator, and $\hat{\eta}_{z}=\frac{\hbar }{2}\sigma _{z}$ is the spin operator with the Pauli matrix $\sigma _{z}$. The eigenvalues of $Q_z$ remain separated into two isolated parts by zero. Then, we define new projector operators as
\begin{align}
Q_{\pm }=\sum_{j}^{L/2}\left\vert \Phi _{j}^{\pm }\right\rangle \left\langle \Phi _{j}^{\pm }\right\vert,
\label{P1}
\end{align}
where $\Phi _{j}^{+}$ ($\Phi _{j}^{-}$) is the eigenvector corresponding to the $j$th positive (negative) eigenvalue of $Q_z$. Subsequently, we formulate
the new projected position operators as
\begin{align}
&U_{\pm }=Q_{\pm }e^{i2\pi X}Q_{\pm }+(I-Q_{\pm }),\\
&V_{\pm }=Q_{\pm }e^{i2\pi Y}Q_{\pm }+(I-Q_{\pm }).
\label{UV}
\end{align}
In numerical calculations, we adopt the singular value decomposition method to compute the projected position operators $U_{\pm }$ and $V_{\pm }$, which improves the stability of the numerical results for the spin Bott index. The singular value decomposition of a matrix can be expressed as $M=Z\Lambda \Pi ^{\dag }$, where $Z$ and $\Pi$ are unitary and $\Lambda$ is real and diagonal. We take the ``unitary part'' $\tilde{M}=Z\Pi ^{\dag }$ as the new projected position operator in place of the initial matrix $M$ \cite{PhysRevB.98.125130}. Finally, the spin Bott index is defined as
\begin{align}
B_{s}=\frac{1}{2}(B_{+}-B_{-}),
\label{sBott}
\end{align}
with
\begin{align}
B_{\pm }=\frac{1}{2\pi }{\rm Im}\{{\rm Tr}[\log (\tilde{V}_{\pm }\tilde{U}_{\pm }\tilde{V}_{\pm }^{\dag
}\tilde{U}_{\pm }^{\dag })]\},
\label{Bottpm}
\end{align}
where $B_{\pm }$ are the Bott indices of the two spin sectors. The case with $B_{s}=0$ corresponds to the topologically trivial phase with no MEMs, and $B_{s}=1$ corresponds to the topologically nontrivial phase with helical MEMs.
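The SVD ``unitary part'' step and the resulting index can be sketched with two small helper functions (an illustrative addition, not the authors' code):

```python
import numpy as np

def unitarize(M):
    """'Unitary part' of the singular value decomposition M = Z Lambda Pi^dag,
    i.e. M~ = Z Pi^dag, used to stabilize the spin Bott index numerically."""
    Z, _, Pidag = np.linalg.svd(M)
    return Z @ Pidag

def bott_of(U, V):
    """Bott index of a pair of projected position operators after unitarization."""
    Ut, Vt = unitarize(U), unitarize(V)
    lam = np.linalg.eigvals(Vt @ Ut @ Vt.conj().T @ Ut.conj().T)
    return float(np.sum(np.angle(lam)) / (2 * np.pi))
```

The unitarized matrix is exactly unitary even when the projected operator is not, and a commuting pair of operators yields a vanishing index, as expected.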
\subsection{Thermal conductance}
Meanwhile, we test the topological nature of the MEMs of the chiral and TRI TSCs in the AB tiling QL by studying the thermal transport properties of the systems. The setup, a two-terminal normal metal-superconductor-normal metal (NSN) junction, is constructed by attaching two semi-infinite normal-metal leads to the left and right ends of the superconductor device. The lead Hamiltonian is the superconductor device Hamiltonian on a square lattice with $W$ and $\Delta$ set to zero. The coupling Hamiltonian, which connects a normal-metal lead to the superconductor device, is obtained from the device Hamiltonian by setting $\mu$, $W$, and $\Delta$ to zero. The hopping amplitudes of the device, lead, and coupling Hamiltonians are all set equal.
In order to compute the two-terminal thermal conductance, we first calculate the Fermi level ($E=0$) scattering matrix $S$ of the NSN junction with
\begin{align}
S=\left(
\begin{array}{cc}
S_{\rm LL} & S_{\rm LR} \\
S_{\rm RL} & S_{\rm RR}
\end{array}
\right),
\end{align}
where the block matrices $S_{\rm LL}$ and $S_{\rm RR}$ represent the reflection amplitudes, the block matrices $S_{\rm LR}$ and $S_{\rm RL}$ represent the transmission amplitudes, and the subscripts ``L'' and ``R'' represent left and right leads, respectively. Each block of the scattering matrix is
\begin{align}
S_{mn}=\left(
\begin{array}{cc}
S_{mn}^{ee} & S_{mn}^{eh} \\
S_{mn}^{he} & S_{mn}^{hh}%
\end{array}
\right),
\end{align}
where $m,n=\rm L, \rm R$. The element $S_{mn}^{\alpha\beta}$ denotes the scattering amplitude of an outgoing $\alpha$ particle in lead $m$ arising from an incoming $\beta$ particle in lead $n$, where $\alpha$ and $\beta$ label the electron ($e$) or hole ($h$). The scattering matrix is numerically calculated using the recursive Green's function method \cite{MacKinnon_1985,PhysRevB.72.235304,PhysRevB.86.174512,Lewenkopf2013JCE,
PhysRevB.88.064509,PhysRevB.100.205302}. The scattering matrix $S$ is related to the Green's function by \cite{Lee1981PRL,Fisher1981PRB}
\begin{equation}
S_{mn}^{\alpha\beta}=-\delta_{m,n}\delta_{\alpha,\beta}+i\left[\Gamma_{m}^{\alpha}\right]^{1/2}G^{r}\left[\Gamma_{n}^{\beta }\right]^{1/2}.
\end{equation}
$\Gamma_{m}^{\alpha }$ is the linewidth function of $\alpha $ particle with $\Gamma_{m}^{\alpha }=i[(\Sigma_{m}^{\alpha})^{r}-(\Sigma_{m}^{\alpha})^{a}]$, where $(\Sigma_{m}^{\alpha})^{r/a}$ is the retarded (advanced) self-energy of $\alpha $ particle for the $m$-lead. $G^{r}$ is the retarded Green's function of the superconductor device, and can be expressed as
\begin{equation}
G^{r}=\left[E+i0^{+}-H-\sum_{m,\alpha} ( \Sigma_{m}^{\alpha})^{r}\right] ^{-1},
\end{equation}
where $H$ is the superconductor device Hamiltonian and $E$ is the Fermi energy, which is set to $0$.
Therefore, in the low-temperature linear response regime, the thermal conductance is formulated as \cite{RevModPhys.87.1037}
\begin{equation}
G=G_{0}\,{\rm Tr}(S_{\rm LR}^{\dagger} S_{\rm LR}),
\end{equation}
with the quantum of thermal conductance $G_{0}=\pi^2 k_{B}^{2} T_0 /6h$. The quantized thermal conductance $G/G_{0}=1$ is the signature of a chiral MEM located at the edge of the superconductor device, while $G/G_{0}=0$ if there is no MEM. A thermal conductance $G/G_{0}=2$ indicates a pair of MEMs, the helical MEMs, located at the edge of the superconductor device.
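A minimal stand-in for this transport calculation is sketched below (an illustrative addition: wide-band-limit self-energies on two contact sites replace the recursive Green's function lead self-energies, and the toy chain Hamiltonian and `gamma` are hypothetical). It illustrates the Fisher-Lee relation and the conductance formula with one channel per lead.

```python
import numpy as np

def fisher_lee_s_matrix(H, left, right, gamma=0.5, E=0.0):
    """Two-terminal S matrix from the Fisher-Lee relation, with wide-band-limit
    self-energies Sigma^r = -i*gamma/2 on the contact sites `left` and `right`
    (a simple stand-in for the recursive Green's function lead self-energies)."""
    n = H.shape[0]
    Wmat = np.zeros((n, 2), dtype=complex)
    Wmat[left, 0] = Wmat[right, 1] = np.sqrt(gamma)
    Gamma = Wmat @ Wmat.conj().T                          # level-width matrix
    Gr = np.linalg.inv(E * np.eye(n) - H + 0.5j * Gamma)  # retarded Green's fn
    return -np.eye(2) + 1j * Wmat.conj().T @ Gr @ Wmat    # S = -1 + i W^dag Gr W

def conductance(S):
    """G/G0 = Tr(S_LR^dag S_LR); one channel per lead in this toy setup."""
    s_lr = S[0:1, 1:2]
    return float(np.real(np.trace(s_lr.conj().T @ s_lr)))
```

In the wide-band limit the resulting $S$ matrix is exactly unitary, so the transmission stays between $0$ and $1$ per channel.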
\section{Chiral and helical MEMs in the clean limit}
\label{without_Disorder}
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig2.pdf} \caption{(a) Energy spectrum of the class D Hamiltonian $H_{\text{D}}$ on the AB tiling QL with square shape under OBC (black circles) and PBC (cyan dots) versus the eigenvalue index $n$. The inset shows the enlarged section of eigenstates near zero energy for the system under OBC. The gray region shows the midgap states. (b) The probability density of the in-gap eigenstate near zero energy marked by the red arrow in (a). The color map shows the values of the probability density. (c) Energy spectrum of the class DIII Hamiltonian $H_{\text{DIII}}$ on the AB tiling QL with square shape under OBC (black circles) and PBC (cyan dots) versus the eigenvalue index $n$. (d) The probability density of the doubly degenerate in-gap eigenstates near zero energy marked by the red arrow in (c). The red dots (blue circles) represent an edge state with spin up (down). We take the model parameters $\Delta/t=1$, $\mu/t=1.6$, $W/t=0$ and lattice site number $N=1452$.}%
\label{fig2}
\end{figure}
In this section, to reveal the topologically nontrivial phases with MEMs in the clean limit, we directly diagonalize the class D Hamiltonian~(\ref{HD}) and the class DIII Hamiltonian~(\ref{HDIII}) on the AB tiling QL with square geometry under an open boundary condition (OBC) and a periodic boundary condition (PBC), respectively. Here we set the model parameters $\Delta/t=1$, $\mu/t=1.6$, $W/t=0$ and the lattice site number $N=1452$.
Figure~\ref{fig2}(a) shows the energy spectrum of the class D Hamiltonian~(\ref{HD}) under OBC (black circles) and PBC (cyan dots) versus the eigenvalue index $n$. It is found that an energy gap emerges in the spectrum of the PBC system, while gapless in-gap states of the OBC system fill the bulk energy gap of the PBC system. In Fig.~\ref{fig2}(b), we plot the probability density of an in-gap eigenstate near zero energy [marked by the red arrow in Fig.~\ref{fig2}(a)] for a finite QL sample with square boundary geometry under the OBC. Interestingly, we find that the in-gap state is located at the square edge of the finite QL sample. We further calculate the Bott index to identify the topological origin of the edge modes. For the same parameters as in Fig.~\ref{fig2}(a), the numerical result of the Bott index is $B=1$, which indicates that this phase is topologically nontrivial with chiral MEMs. Meanwhile, the calculated two-terminal thermal conductance $G/G_{0}=1$ confirms the topological nature of the edge modes.
Similarly, we show the energy spectrum of the class DIII Hamiltonian~(\ref{HDIII}) under OBC (black circles) and PBC (cyan dots) versus the eigenvalue index $n$ in Fig.~\ref{fig2}(c). We also find that the PBC system possesses an energy gap, while the OBC system has gapless in-gap states occupying the bulk energy gap of the PBC system. Note that all the in-gap states are doubly degenerate due to the TRS. Figure~\ref{fig2}(d) shows the probability density of an in-gap eigenstate near zero energy [marked by the red arrow in Fig.~\ref{fig2}(c)] for a finite QL sample with square boundary geometry under the OBC. The red dots (blue circles) represent an edge state with spin up (down). It is found that the in-gap states are located at the square edge of the finite QL sample. The topologically nontrivial phase of a TRI TSC with helical MEMs is confirmed by the numerical results of the spin Bott index $B_{s}=1$ and the two-terminal thermal conductance $G/G_{0}=2$.
\section{The effects of disorder}
\label{Disorder}
In this section, we numerically investigate the effects of the Anderson-type disorder on the topological phase transitions of the chiral and TRI quasicrystalline TSC systems. Based on the calculation of the real-space topological invariants and the two-terminal thermal conductance, the topological phase diagrams with different parameters will be presented.
\subsection{Class D}
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig3.pdf} \caption{The Bott index ($B$) and the thermal conductance ($G/G_{0}$) for the class D TSC system as a function of the disorder strength ($W/t$) for (a) $\mu/t=2.5$ and (b) $\mu/t=3$. We take the parameter $\Delta/t=2$. In calculating the Bott index (the thermal conductance), the lattice site number of the QL is taken as $N=1452$ ($8260$), and the error bar indicates a standard deviation of $500$ ($1000$) samples.}%
\label{fig3}
\end{figure}
We first reveal the disorder-induced topological phase transitions in the class D chiral TSC system. Based on the computation of the Bott index ($B$) and the two-terminal thermal conductance ($G/G_0$), we study the effects of disorder on the topological phase transitions for two sets of system parameters. For the case of $(\Delta/t,\mu/t)=(2,2.5)$, the phase is topologically nontrivial with the nonzero Bott index $B=1$ in the clean limit. Figure~\ref{fig3}(a) shows the Bott index ($B$) and the thermal conductance ($G/G_0$) in this case as a function of the disorder strength ($W/t$). We find that the topologically nontrivial phase remains stable when the disorder strength is small, characterized by the nonzero Bott index $B=1$ and the quantized thermal conductance $G/G_{0}=1$ in a certain range of disorder strength ($0\leq W/t\leq11$). However, with increasing disorder strength ($W/t$), a topological phase transition occurs at $W/t=11$, beyond which both the Bott index ($B$) and the thermal conductance ($G/G_0$) decay to zero, and the class D chiral TSC system is converted to a topologically trivial phase.
For another case of $(\Delta/t,\mu/t)=(2,3)$, the phase is topologically trivial with zero Bott index $B=0$ in the clean limit. The Bott index ($B$) and the thermal conductance ($G/G_0$) in this case are plotted as a function of the disorder strength ($W/t$) in Fig.~\ref{fig3}(b). With the increase of $W/t$, two topological phase transitions arise, accompanied by the Bott index changing from $B=0$ to $B=1$ at $W/t=5$ and returning to $B=0$ at $W/t=11$. Here, there exists a plateau of the nonzero Bott index $B=1$ in a certain range of disorder strength ($5\leq W/t\leq11$), which indicates a topologically nontrivial phase induced by disorder. Meanwhile, the numerical result of the thermal conductance matches well with that of the Bott index: the thermal conductance jumps from $G/G_0=0$ to $G/G_0=1$ at $W/t=5.5$, and goes back to $G/G_0=0$ at $W/t=11$. This means that chiral MEMs can be induced by disorder when the disorder strength is in the region $5.5\leq W/t\leq11$ in the class D chiral TSC system (with model parameters $\Delta/t=2$ and $\mu/t=3$).
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig4.pdf} \caption{Phase diagram in ($W/t$, $\mu/t$) space for the class D TSC system with disorder obtained by calculating the Bott index ($B$). The red region denotes the topologically nontrivial phase ($B=1$), and the white region denotes the topologically trivial phase ($B=0$). We take the parameter $\Delta/t=2$ and $N=1452$.}%
\label{fig4}
\end{figure}
Additionally, the topological phase diagram for the class D system with disorder in the ($W/t$, $\mu/t$) space is plotted in Fig.~\ref{fig4}, where $\Delta/t=2$. The color map shows the values of the Bott index $B$. Each point in Fig.~\ref{fig4} represents a single disorder realization of the Bott index, which is used to delimit the region of the topologically nontrivial phase. The red region denotes the topologically nontrivial phase with $B=1$, and the white region denotes the topologically trivial phase with $B=0$. It is found that the maximum disorder strength below which the topologically nontrivial phase remains stable increases with the chemical potential $\mu/t$. The largest such disorder strength is about $W/t\approx13.5$, beyond which the topologically nontrivial phase vanishes. A disorder-induced topologically nontrivial region is also clearly visible over a range of the ($W/t$, $\mu/t$) parameter space in the phase diagram of Fig.~\ref{fig4}.
\subsection{Class DIII}
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig5.pdf} \caption{The spin Bott index ($B_s$) and the thermal conductance ($G/G_{0}$) for the class DIII TSC system as a function of the disorder strength ($W/t$) for (a) $\mu/t=2.5$ and (b) $\mu/t=3$. We take the parameter $\Delta/t=2$. In calculating the spin Bott index (the thermal conductance), the lattice site number of the QL is taken as $N=264$ ($8260$), and the error bar indicates a standard deviation of $500$ ($1000$) samples.}%
\label{fig5}
\end{figure}
Next, we reveal the disorder-induced topological phase transitions in the class DIII TRI TSC system. Analogous to the class D case, we first study the effects of disorder on the topological phase transitions at two sets of system parameters, based on the computation of the spin Bott index ($B_s$) and the two-terminal thermal conductance ($G/G_0$). Figure~\ref{fig5} shows the spin Bott index ($B_s$) and the thermal conductance ($G/G_0$) as a function of the disorder strength ($W/t$).
For the case of $(\Delta/t,\mu/t)=(2,2.5)$, the phase is topologically nontrivial with the nonzero spin Bott index $B_s=1$ in the clean limit. In Fig.~\ref{fig5}(a), it is found that the topologically nontrivial phase remains stable when the disorder strength is small, characterized by the nonzero spin Bott index $B_s=1$ and the quantized thermal conductance $G/G_{0}=2$ in a certain range of disorder strength ($0\leq W/t\leq9.5$). Then, a topological phase transition occurs at $W/t=9.5$ with further increasing $W/t$, beyond which both the spin Bott index ($B_s$) and the thermal conductance ($G/G_0$) decay to zero, and the class DIII TRI TSC system is converted to a topologically trivial phase. For another case of $(\Delta/t,\mu/t)=(2,3)$, the phase is topologically trivial with zero spin Bott index $B_s=0$ in the clean limit. In Fig.~\ref{fig5}(b), it is found that two topological phase transitions arise with increasing $W/t$, accompanied by the spin Bott index changing from $B_s=0$ to $1$ at $W/t=6$ and returning to $0$ at $W/t=10$, and the thermal conductance jumping from $G/G_0=0$ to $2$ at $W/t=6$ and returning to $0$ at $W/t=10$. The plateaus of the nonzero spin Bott index $B_s=1$ and the quantized thermal conductance $G/G_0=2$ exist in a certain range of disorder strength ($6\leq W/t\leq10$), which indicates that a topologically nontrivial phase is induced by disorder. Thus, helical MEMs can be induced by disorder when the disorder strength is in the region $6\leq W/t\leq10$ in the class DIII TRI TSC system (with model parameters $\Delta/t=2$ and $\mu/t=3$).
Additionally, the topological phase diagram for the class DIII TRI TSC system with disorder in the ($W/t$, $\mu/t$) space is plotted in Fig.~\ref{fig6}, where $\Delta/t=2$. The color map shows the values of the spin Bott index $B_s$. Each point in Fig.~\ref{fig6} represents a single disorder realization of the spin Bott index, which is used to delimit the region of the topologically nontrivial phase. The red region denotes the topologically nontrivial phase with $B_s=1$, and the white region denotes the topologically trivial phase with $B_s=0$. It is found that the largest maximum disorder strength is about $W/t\approx15$, beyond which the topologically nontrivial phase vanishes. A disorder-induced topologically nontrivial region is also clearly visible over a range of the ($W/t$, $\mu/t$) parameter space in the phase diagram of Fig.~\ref{fig6}.
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig6.pdf} \caption{Phase diagram in ($W/t$, $\mu/t$) space for the class DIII TSC system with disorder obtained by calculating the spin Bott index ($B_s$). The red region denotes the topologically nontrivial phase ($B_s=1$), and the white region denotes the topologically trivial phase ($B_s=0$). We take the parameter $\Delta/t=2$ and $N=1452$.}%
\label{fig6}
\end{figure}
\section{Conclusion and discussion}
\label{Conclusion}
In this work, we investigate the topological phase transitions of a class D chiral TSC and a class DIII TRI TSC with Anderson-type disorder in an AB tiling QL. We employ the real-space topological invariants, including the Bott index (a $\mathbb{Z}$ index for the class D system) and the spin Bott index (a $\mathbb{Z}_2$ index for the class DIII system), together with the two-terminal thermal conductance, to determine the topological phases of the two quasicrystalline TSC systems. The class D chiral TSC in the topologically nontrivial phase exhibits chiral MEMs located at the square boundary of a finite QL sample, and the class DIII TRI TSC hosts helical MEMs. Both the chiral MEMs in the class D system and the helical MEMs in the class DIII system are robust against weak disorder, while they are destroyed when the disorder is strong. More strikingly, we discover a topological phase transition from a topologically trivial phase to a topologically nontrivial phase hosting chiral MEMs on the edge of the class D quasicrystalline TSC at a finite disorder strength. Similarly, disorder-induced helical MEMs are also found in the class DIII quasicrystalline TSC. We also present phase diagrams, based on numerical calculations of the Bott index and the spin Bott index as functions of the disorder strength and the chemical potential, showing that the interplay between the model parameters and disorder has an interesting influence on the existence of the topological phases in the quasicrystalline TSC systems.
The theoretical interpretation of the disorder-induced topological phase is that the model parameters are renormalized by disorder, as obtained from the effective-medium theory (the self-consistent Born approximation) in crystalline systems \cite{PhysRevLett.103.196805,PhysRevB.93.125133,Qin2016SR,PhysRevB.103.115430,PhysRevB.100.205302}. However, because translational symmetry is lacking in QLs, the self-consistent Born approximation is invalid, and the disorder-induced chiral and helical MEMs in the AB tiling QL cannot be properly explained by the effective-medium theory. Considering the similarities of the disorder-induced topological phases in crystalline and quasicrystalline lattices, we conjecture that the origin of the disorder-induced MEMs in the AB tiling QL is a disorder-induced renormalization of the model parameters.
Furthermore, the subgap Yu-Shiba-Rusinov bound states, which are induced by magnetic impurity atoms in a superconductor, can be employed to form a TSC \cite{Li_2020YSR}, such as the 1D TSC chain \cite{Nadj-Perge602,Jeon772} and the 2D amorphous TSC \cite{P_yh_nen_2018}. Therefore, we propose that the chiral TSC in the AB tiling QL could be realized by placing magnetic atoms, located at the vertices of the QL, on a superconductor surface. The TRI TSC in the AB tiling QL could be formed by a heterostructure in which a layer of atoms is sandwiched between two superconductors with a $\pi$ phase difference \cite{PhysRevLett.100.096407,PhysRevB.79.161408}.
\section*{Acknowledgments}
B.Z. was supported by the NSFC (under Grant No. 12074107) and the program of outstanding young and middle-aged scientific and technological innovation team of colleges and universities in Hubei Province (under Grant No. T2020001). D.-H.X. was supported by the NSFC (under Grant Nos. 12074108). D.-H.X. also acknowledges the financial support of the Chutian Scholars Program in Hubei Province.
|
\section{Introduction} The stochastic order for random variables $X,Y$ from a
probability measure space $(M,P)$ to $\R$ is defined by $X\leq Y$ if $P(X>t)\leq P(Y>t)$ for all $t\in \R$. This notion
extends directly to random variables into $\R^n$ equipped with the coordinatewise order. Alternatively one can define
a stochastic order on the Borel probability measures on $\R$ or $\R^n$ by $\mu\leq \nu$ if for each $s\in\R$,
$\mu(s<t)\leq \nu(s<t)$, where $(s<t):=\{t\in\R: s<t\}$. One then has for random variables $X,Y$, $X\leq Y$ in the
stochastic order if and only if $P_X\leq P_Y$, where $P_X,P_Y$ are the push-forward probability measures with respect to
$X,Y$ respectively.
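As a computational aside (not part of the formal development): for finitely supported probability measures on $\R$, the defining condition $\mu(s<t)\leq\nu(s<t)$ for all $s$ can be checked by brute force, since both tail functions are constant between consecutive atoms and equal $1$ below all atoms, so it suffices to test at the atoms themselves. A minimal Python sketch:

```python
from fractions import Fraction

def tail(measure, s):
    """Mass that a finitely supported measure (a dict {atom: weight})
    assigns to the open ray (s, infinity)."""
    return sum(w for x, w in measure.items() if x > s)

def stochastically_leq(mu, nu):
    """mu <= nu iff tail(mu, s) <= tail(nu, s) for all real s.  Both
    tails are constant between consecutive atoms and equal 1 for s
    below every atom, so testing s at each atom is enough."""
    atoms = sorted(set(mu) | set(nu))
    return all(tail(mu, s) <= tail(nu, s) for s in atoms)

mu = {0: Fraction(1, 2), 2: Fraction(1, 2)}
nu = {1: Fraction(1, 2), 3: Fraction(1, 2)}   # each atom of mu pushed up
print(stochastically_leq(mu, nu), stochastically_leq(nu, mu))  # True False
```

Here exact rational weights are used so the comparisons are free of rounding; the example simply shifts each atom of $\mu$ upward, which dominates $\mu$ as expected.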
There are important metric spaces which are equipped with a naturally defined partial order, for example the open cone $\mathbb{P}_n$ of positive definite
matrices of some fixed dimension, where the order is the Loewner order. One can use the Loewner order to define an order on
$\mathcal{P}(\mathbb{P}_n)$, the space of Borel probability measures, an order that we call the stochastic order, as it generalizes the case of $\mathbb{R}$
or $\R^n$.
In this paper we broadly generalize the stochastic order to an order on the set of Borel probability measures on a partially ordered metric space.
We develop basic properties of this order and specialize to the setting of normal cones in Banach spaces to show that the stochastic order
in that setting is indeed a partial order.
In Section 3 we give the general definition of the stochastic order on $\mathcal{P}(X)$ for a partially ordered metric space $X$ and derive several useful
alternative formulations. In Section 4 we show for normal cones with interior that the stochastic order on $\mathcal{P}(X)$ is indeed a partial order (the
antisymmetry being the nontrivial property to establish). In Section 5 we show in the normal cone setting that the stochastic partial order is a
closed order with respect to the weak topology, and hence with respect to the Wasserstein topology. In Section 6 we consider the order-completeness
of $\mathcal{P}(X)$, and in Section 7 derive a version of the arithmetic-geometric-harmonic means inequality in the setting of the probability space
$\mathcal{P}(\mathbb{P})$ on the cone $\bP$ of positive invertible operators on a
Hilbert space.
In what follows $\RP=[0,\infty)$.
\section{Borel measures}
In this section we recall some basic results about Borel measures on metric spaces that will be needed in what follows. As usual the Borel
algebra on a metric space $(X,d)$ is the smallest $\sigma$-algebra containing the open sets
and a finite positive Borel measure is a countably additive measure $\mu$ defined on the Borel sets such that $\mu(X)<\infty$. We work exclusively with finite positive
Borel measures, primarily those that are probability measures.
Recall that a Borel measure $\mu$ is $\tau$-\emph{additive} if
$\mu(U)=\sup_\alpha \mu(U_\alpha)$ for any \emph{directed} union
$U=\bigcup_\alpha U_\alpha$ of open sets. The measure $\mu$ is said
to be \emph{inner regular} or \emph{tight} if for any Borel set $A$
and $\ve>0$ there exists a compact set $K\subseteq A$ such that
$\mu(A)-\ve<\mu(K)$. A
tight finite Borel measure is also called a Radon measure.
The theory of probability measures on metric spaces has been developed primarily for separable metric spaces, although results exist for the non-separable setting.
We recall the following result, which can be more-or-less cobbled together from results in the literature; see \cite{La17} for more details.
\begin{proposition}\label{P:La}
A finite Borel measure $\mu$ on a metric space $(X,d)$ has separable
support. The following three conditions are equivalent$:$
\begin{itemize}
\item[(1)] The support of $\mu$ has measure $\mu(X)$.
\item[(2)] The measure $\mu$ is $\tau$-additive.
\item[(3)] The measure $\mu$ is the weak limit of a sequence of finitely supported measures.
\end{itemize}
If in addition $X$ is complete, these are also equivalent to:
\begin{itemize}
\item[(4)] The measure $\mu$ is inner regular.
\end{itemize}
\end{proposition}
\begin{proof} For a proof of separability and the equivalence of the first three conditions, see \cite{La17}. Suppose (1)--(3) hold and
$X$ is complete.
Then the support $S$ of $\mu$ is closed, separable, and has measure
$\mu(X)$. Let $A$ be any Borel measurable set. Then $\mu(A\cap
(X\setminus S))=0$ since $\mu(X\setminus S)=0$, so $\mu(A)=\mu(A\cap
S)$. Since the metric space $S$ is a separable complete metric
space, it is a standard result that $\mu\vert_S$ is an inner regular
measure. Thus for $\ve>0$ there exists a compact set $K\subseteq
S\cap A\subseteq A$ such that $\mu(A)=\mu(A\cap S)<\mu(K)+\ve$.
Conversely suppose $\mu$ is inner regular. If $\mu(S)<\mu(X)$ for the support $S$ of $\mu$, then for $U=X\setminus S$, $\mu(U)>0$.
By inner regularity there exists a compact set $K\subseteq U$ such that $\mu(K)>0$. Since $K$ misses the support of $\mu$, for each
$x\in K$, there exists an open set $U_x$ containing $x$ such that $\mu(U_x)=0$. Finitely many of the $\{U_x\}$ cover $K$, the finite union has
measure $0$, so the subset $K$ has measure $0$, a contradiction. So the support of $\mu$ has measure $\mu(X)$.
\end{proof}
\begin{remark} Finite Borel measures on separable metric spaces are easily shown to be $\tau$-additive and hence satisfy the other equivalent conditions of Proposition \ref{P:La}.
Finite Borel measures that fail to satisfy the previous conclusions are rare. Indeed it is a theorem that in a complete metric space $X$ there exists
a finite Borel measure that fails to be inner regular if and only if the minimal cardinality $w(X)$ for a basis of open sets of $X$ is a measurable
cardinal; see volume 4, page 244 of \cite{Fr}. The existence of measurable cardinals is an axiom independent of the basic Zermelo-Fraenkel axioms of set theory
and thus if its negation is assumed, all finite Borel measures on complete metric spaces satisfy the four conditions of Proposition \ref{P:La}.
\end{remark}
\section{The stochastic order}
\emph{We henceforth restrict our attention to the set of Borel
probability measures on a metric space $X$ satisfying the four
conditions of Proposition \ref{P:La} and denote this set} $\Pro(X)$.
For complete separable metric spaces the set $\Pro(X)$ consists of
all Borel probability measures, which are automatically $\tau$-additive in this case.
\begin{definition} A \emph{partially ordered topological space} is a space equipped with a closed partial order $\leq$, one for which $\{(x,y):x\leq y\}$ is closed
in $X\times X$.
\end{definition}
For a nonempty subset $A$ of a partially ordered set $P$, let $\ua A:=\{y\in P:\exists x\in A,\, x\leq y\}$. The set $\da A$ is defined in an order-dual fashion.
A set $A$ is an \emph{upper set} if $\ua A=A$ and a \emph{lower set} if $\da A=A$. We abbreviate $\ua\{x\}$ by $\ua x$ and $\da\{x\}$ by $\da x$.
\begin{lemma}\label{L:SO1} A partially ordered topological space is Hausdorff. If $K$ is a nonempty compact subset, then $\ua K$ and $\da K$ are closed.
\end{lemma}
\begin{proof} See Section VI-1 of \cite{GS}. \end{proof}
The following definition captures in the setting of ordered topological spaces the notion that higher values should have higher probability.
\begin{definition} For a topological space $X$ equipped with a closed partial order, the \emph{stochastic order} on $\Pro(X)$ is defined by
$\mu\leq \nu$ if $\mu(U)\leq \nu(U)$ for each open upper set $U$.
\end{definition}
\begin{proposition}\label{P:SO2}
Let $X$ be a metric space equipped with a closed partial order. Then
the following are equivalent for $\mu,\nu\in\Pro(X):$
\begin{itemize}
\item[(1)] $\mu\leq \nu;$
\item[(2)] $\mu(A)\leq \nu(A)$ for each closed upper set $A;$
\item[(3)] $\mu(B)\leq \nu(B)$ for each upper Borel set $B$.
\end{itemize}
\end{proposition}
\begin{proof} Clearly (3) implies both (1) and (2).
(1)$\Rightarrow$(3): Let $B=\ua B$ be a Borel set. Then $A=X\setminus B$ is also a Borel set. Let $\ve>0$. By inner regularity there exists
a compact set $K\subseteq A$ such that $\nu(K)> \nu(A)-\ve$. By Lemma \ref{L:SO1} $\da K$ is closed, and $K\subseteq \da K\subseteq \da A=A$. Thus $\nu(\da K)>\nu(A)-\ve$. The complement $U$ of $\da K$ is an open upper set.
Taking complements we obtain
\begin{align*}
\mu(B)&=1-\mu(A) \leq 1-\mu(\da K)=\mu(U) \leq \nu(U) =1-\nu(\da K) \\
&<1-\nu(A)+\ve=\nu(B)+\ve.
\end{align*}
Since $\mu(B)<\nu(B)+\ve$ for all $\ve>0$, we conclude $\mu(B)\leq \nu(B)$.
(2)$\Rightarrow$ (3): We can approximate any Borel upper set $B$ arbitrarily closely from the inside with compact subsets $K$ and their upper sets $\ua K$ will
be closed sets that are at least as good approximations. The Borel measure $\nu$ dominates $\mu$ on these closed upper sets and hence also
in the limiting case of $B$.
\end{proof}
\begin{remark}\label{R:SO3}
By taking complements one determines that each of the preceding equivalences has an equivalent version for lower sets with the
inequalities in (2) and (3) reversed.
\end{remark}
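To illustrate Proposition \ref{P:SO2} beyond the real line (an illustration only, not part of the development): for finitely supported measures on $\R^2$ with the coordinatewise order, only the trace of an upper Borel set on the combined support matters, and every up-closed subset $S$ of the atoms arises as the trace of the upper set $\ua S$. Condition (3) can therefore be checked by brute force over the up-closed subsets of the atoms, as in this Python sketch (exponential in the number of atoms, so suitable only for tiny examples):

```python
from itertools import chain, combinations

def leq2(a, b):
    """Coordinatewise order on R^2."""
    return a[0] <= b[0] and a[1] <= b[1]

def stochastically_leq(mu, nu):
    """mu <= nu iff mu(B) <= nu(B) for every upper Borel set B; for
    finitely supported measures (dicts {atom: weight}) it suffices to
    run over the up-closed subsets of the combined support."""
    atoms = sorted(set(mu) | set(nu))
    for S in chain.from_iterable(combinations(atoms, r)
                                 for r in range(len(atoms) + 1)):
        S = set(S)
        if any(leq2(a, b) and b not in S for a in S for b in atoms):
            continue  # trace is not up-closed: not the trace of an upper set
        if sum(mu.get(a, 0) for a in S) > sum(nu.get(a, 0) for a in S):
            return False
    return True

mu = {(0, 0): 0.5, (1, 1): 0.5}
nu = {(1, 0): 0.5, (1, 2): 0.5}   # each atom of mu moved up in the order
print(stochastically_leq(mu, nu), stochastically_leq(nu, mu))  # True False
```

In the example each atom of $\mu$ can be matched with a larger atom of $\nu$, so $\mu\leq\nu$ but not conversely.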
We turn now to functional characterizations of the stochastic order on $\Pro(X)$ for $X$ a metric space equipped with
a closed partial order.
In the next proposition, we write $\int_X f(x)\, d\mu(x)$ or simply $\int_X f\,d\mu$ for any Borel function $f:X\to \R^+$ and $\mu\in\Pro(X)$,
where the integral is possibly infinite.
We say that $f$ is \emph{monotone} if $x\leq y$ in $X$ implies $f(x) \leq f(y)$.
\begin{proposition}\label{P:SO3}
Let $X$ be a metric space equipped with a closed partial order. Then
the following are equivalent for $\mu,\nu\in\Pro(X):$
\begin{itemize}
\item[(1)] $\mu\leq \nu;$
\item[(2)] for every monotone $($bounded$)$ Borel function $f:X\to \RP$, $\int_X f\,d\mu\leq \int_X
f\,d\nu;$
\item[(3)] for every monotone $($bounded$)$ lower semicontinuous $f:X\to \RP$, $\int_X f\,d\mu\leq \int_X f\,d\nu$.
\end{itemize}
\end{proposition}
\begin{proof} The implications that the general case implies the bounded case in items (2) and (3) are trivial.
(1)$\Rightarrow$(2): Assume $\mu\leq \nu$ and let $f$ be a non-negative monotone Borel measurable
function on $X$. For each $n$, define $\delta_n:\RP\to\RP$ by $\delta_n(0)=0$,
$\delta_n(t)=(i-1)/2^n$ if $(i-1)/2^n<t\leq i/2^n$ for some
integer $i$, $1\leq i\leq n2^n$, and $\delta_n(t)=n$ for $n<t$.
Note that the ascending step function $\delta_n$
has finite image contained in
$\R^+$ and that the sequence $\delta_n$ monotonically increases
to the identity map on $\RP$. Hence $f_n:=\delta_nf$, the composition of $\delta_n$ and $f$,
monotonically increases to $f$. One verifies directly that the
step function $f_n$ has an alternative description given by
\[ f_n =\sum_{i=1}^{n2^n} \frac{1}{2^n}\chi_{f^{-1}(]i/2^n,\infty))},\]
where $\chi_A$ is the characteristic function of $A$. Since the sequence $\{f_n\}$
converges pointwise and monotonically to $f$, we conclude that $\int_X f\, d\mu=
\lim_n\int_X f_n\, d\mu$, and similarly for $\nu$. Since $f^{-1}(]i/2^n,\infty))$ is an upper
Borel set, by Proposition \ref{P:SO2} $\mu(f^{-1}(]i/2^n,\infty)))\leq \nu(f^{-1}(]i/2^n,\infty)))$
for each $i$, so $\int_X f_n\, d\mu\leq\int_X f_n\,d\nu$ for each $n$, and thus in the limit
$\int_X f\,d\mu\leq \int_X f\,d\nu$.
(2)$\Rightarrow$(3): Since a lower semicontinuous function is a Borel measurable function, (3) follows immediately from (2).
(3)$\Rightarrow$(1): The characteristic function $\chi_U$ is bounded, lower semicontinuous, and monotone for $U$ an open upper set
and hence $\mu(U)=\int_X \chi_U\,d\mu\leq \int_X \chi_U\ d\nu=\nu(U)$.
\end{proof}
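The dyadic step functions $\delta_n$ used in the proof above are easy to realize concretely; the following Python sketch (an illustration, not part of the argument) implements $\delta_n$ and verifies at a sample point that the sequence increases monotonically to the identity from below:

```python
import math

def delta(n, t):
    """delta_n from the proof: delta_n(0) = 0; delta_n(t) = (i-1)/2^n
    for t in ((i-1)/2^n, i/2^n], 1 <= i <= n*2^n; delta_n(t) = n for t > n."""
    if t <= 0:
        return 0.0
    if t > n:
        return float(n)
    i = math.ceil(t * 2**n)          # t lies in ((i-1)/2^n, i/2^n]
    return (i - 1) / 2**n

t = 0.73
vals = [delta(n, t) for n in range(1, 12)]
assert all(a <= b for a, b in zip(vals, vals[1:]))          # increases with n
assert all(v < t for v in vals) and t - vals[-1] <= 2**-11  # from below, to t
print(vals[:4])  # [0.5, 0.5, 0.625, 0.6875]
```

Composing `delta(n, .)` with a monotone $f$ gives exactly the step functions $f_n$ of the proof.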
Call a real function $f$ on a partially ordered set $X$ \emph{antitone} if it is order
reversing, i.e., $x\leq y$ implies $f(x)\geq f(y)$.
\begin{corollary}\label{C:SO4}
Let $X$ be a metric space equipped with a closed partial order. Then
the following are equivalent for $\mu,\nu\in\Pro(X):$
\begin{itemize}
\item[(1)] $\nu\leq \mu;$
\item[(2)] for every antitone $($bounded$)$ Borel function $f:X\to \RP$, $\int_X f\,d\mu\leq \int_X
f\,d\nu;$
\item[(3)] for every antitone $($bounded$)$ lower semicontinuous $f:X\to \RP$, $\int_X f\,d\mu\leq \int_X f\,d\nu$.
\end{itemize}
\end{corollary}
\begin{proof} Every partially ordered set has a dual order, namely the converse $\geq$ of $\leq$ is taken for the partial order.
Let $X^{od}$ denote the order dual of $X$. Note that a subset $A$ of $X$ is an upper set in $(X,\leq)$ if and only if it is a lower
set in $X^{od}$. Using Remark \ref{R:SO3}, one sees that $\mu\leq \nu $ with respect to $(X,\leq)$ if and only if
$\nu\leq \mu$ with respect to $X^{od}$. Since antitone functions convert to monotone functions in the order dual of $X$, the corollary
follows from applying the previous proposition to the order dual.
\end{proof}
Finally we consider sufficient conditions for one to define the stochastic order in terms of continuous monotone functions.
\begin{proposition}\label{P:SO5}
Suppose that $(X,d)$ is a metric space equipped with a closed
partial order satisfying the property that given $x\leq y$ and
$x_1\in X$, there exists $y_1\geq x_1$ such that $d(y,y_1)\leq
d(x,x_1)$. Then for $\mu,\nu\in\Pro(X)$ the following are
equivalent$:$
\begin{itemize}
\item[(1)] $\mu\leq \nu;$
\item[(2)] For every continuous $($bounded$)$ monotone $f:X\to\R^+$, $\int_X f\,d\mu\leq \int_X f\,d\nu;$
\item[(3)] For every continuous $($bounded$)$ antitone $f:X\to\R^+$, $\int_X f\,d\nu\leq \int_X f\,d\mu$.
\end{itemize}
\end{proposition}
\begin{proof}
That (1) implies (2) follows from Proposition \ref{P:SO3} and (1)
implies (3) by Corollary \ref{C:SO4}.
(3)$\Rightarrow$(1): Let $V$ be an open lower set with complement $A$, a closed
upper set. For each $n\in\N$, define $f_n:X\to[0,1]$ by $f_n(x)=\min\{nd(x,A),1\}$
and note that $f_n$ is a continuous function into $[0,1]$. To show $f_n$ is antitone, we
note for any $x\leq y$ and $x_1\in A$, there exists $y_1\geq x_1$
such that $d(y,y_1)\leq d(x,x_1)$. It follows from $x_1\leq y_1$
that $y_1\in A$, hence $d(y,A)\leq d(x,x_1)$, and thus $d(y,A)\leq
d(x,A)$ since $x_1$ was an arbitrary point of $A$. Hence
$$ f_n(y)=\min\{nd(y,A),1\}\leq\min\{nd(x,A),1\}=f_n(x).$$
It follows directly from the definition of $f_n$ that the sequence $\{f_n\}$ is a monotonically increasing sequence
with supremum $\chi_V$. Thus
$$\nu(V)=\int_X\chi_V\,d\nu=\lim_n \int_X f_n\,d\nu \leq \lim_n \int_X f_n\,d\mu=\int_X \chi_V\,d\mu=\mu(V).$$
Since $V$ was an arbitrary open lower set, $\mu\leq \nu$ by Remark \ref{R:SO3}.
(2)$\Rightarrow$(1): Property (2) implies that $\int_X f\,d\mu\leq \int_X f\,d\nu$ for every continuous antitone function
$f:X^{od}\to \R^+$. By the preceding paragraph $\nu\leq \mu$ with respect to $X^{od}$, i.e., $\mu\leq \nu$ with respect
to $(X,\leq)$.
\end{proof}
\begin{definition}
A topological space equipped with a closed order is called \emph{monotone normal} if given a closed upper set $A$ and a closed
lower set $B$ such that $A\cap B=\emptyset$, there exist an open upper set $U\supseteq A$ and an open lower set $V\supseteq B$ such that $U\cap V=\emptyset$.
\end{definition}
\begin{remark}\label{R-3.10}
Assume that $(X,d)$ satisfies the property stated in Proposition \ref{P:SO5} and also its dual
version that given $x\le y$ and $y_1\in X$, there is an $x_1\in X$ such that $x_1\le y_1$ and
$d(x,x_1)\le d(y,y_1)$. Then $(X,\le)$ is monotone normal as in the above definition.
Indeed, for any closed upper set $A$ and any closed lower set $B$ with $A\cap B=\emptyset$,
one can easily verify that the open sets
$$
U:=\{x\in X:d(x,A)<d(x,B)\}\quad\mbox{and}\quad V:=\{x\in X:d(x,A)>d(x,B)\}
$$
satisfy $U\supseteq A$, $V\supseteq B$ and $U\cap V=\emptyset$.
One deduces that $U$ is an upper set and $V$ a lower set from the hypothesized property and its dual.
Also, we remark that an open cone in a Banach space as considered in Section 5 satisfies the above
two properties (see Remark \ref{R-5.4}).
\end{remark}
\begin{proposition}\label{P:SO6}
Suppose that $(X,d)$ is a metric space equipped with a closed
partial order for which the space is monotone normal. Then for
$\mu,\nu\in\Pro(X)$ the following are equivalent$:$
\begin{itemize}
\item[(1)] $\mu\leq \nu;$
\item[(2)] For every continuous $($bounded$)$ monotone $f:X\to\R^+$, $\int_X f\,d\mu\leq \int_X f\,d\nu$.
\end{itemize}
\end{proposition}
\begin{proof}
In light of Proposition \ref{P:SO3} we need only show condition (2) implies condition (1).
Suppose there exists some open upper set $U$ such that $\nu(U)<\mu(U)$. By inner regularity there exists a
compact set $K\subseteq U$ such that $\nu(U)<\mu(K)\leq \mu(U)$. The closed upper set $A=\ua K\subseteq U$ also
satisfies $\nu(U)<\mu(A)\leq \mu(U)$. Since $X$ is monotone normal, a modification of the usual proof of Urysohn's Lemma
yields a continuous monotone function $f:X\to [0,1]$ such that $f(A)=1$ and $f(X\setminus U)=0$;
see for example \cite[Exercise VI-1.16]{GS}. We then have
$$\mu(A)=\int_X\chi_A\,d\mu\leq \int_X f\,d\mu\leq \int_X f\,d\nu\leq \int_X \chi_U\, d\nu =\nu(U),$$
a contradiction to our choice of $A$.
\end{proof}
\section{Normal cones}
Let $E$ be a Banach space containing an open cone $C$ such that its closure $\overline C$ is
a proper cone, i.e., $\overline C\cap(-\overline C)=\{0\}$. The cone $\overline C$ defines a
closed partial order on $E$ by $x\leq y$ if $y-x\in\overline C$. The cone $\overline C$ is
called \emph{normal} if there is a constant $K$ such that $0\leq x\leq y$ implies
$\Vert x\Vert\leq K\Vert y\Vert$.
For $x\leq y$ in $E$, the \emph{order interval} $[x,y]$ is given by
$$[x,y]:=\{w\in E: x\leq w\leq y\}=(x+\overline C)\cap (y-\overline C).$$
Note that $(x+C)\cap(y-C)$ is an open subset contained in $[x,y]$. A subset $B$ is
\emph{order convex} if $[x,y]\subseteq B$, wherever $x,y\in B$ and $x\leq y$. An alternative
formulation of normality postulates the existence of a basis of order convex neighborhoods at
$0$ and hence by translation at all points (see Section 19.1 of \cite{De}); here neighborhood
of $x$ means a subset containing $x$ in its interior.
\begin{proposition}\label{P:NC1}
Let $E$ be a separable Banach space with an open cone $C$ such that $\overline C$ is normal.
Then restricted to $C$, the $\sigma$-algebra generated by all its open upper sets is the Borel
algebra of $C$.
\end{proposition}
\begin{proof}
Let $\A$ denote the $\sigma$-algebra of subsets of $C$ generated by the collection of all open
upper sets contained in $C$. Fix some point $u\in C$. Let $x\in C$. Set $r_n=1/n$ for
$n\in\N$. Then for each $n$, $x\in (x-r_nu)+C$, an open upper set,
and $\ua x=\bigcap_n[(x-r_nu)+C]$. In fact, for any $y$ in the intersection
$y-(x-r_nu)=(y-x)+r_nu\in C$ for every $n$, and hence the limit $y-x$ is in $\overline C$, i.e.,
$y\in x+\overline C=\ua x$. The converse inclusion is obvious since
$r_nu+\overline C\subseteq C$ so that $x+\overline C\subseteq(x-r_nu)+C$. Thus $\ua x$ is a
countable intersection of open upper sets, hence in $\A$.
Since $\da x$ is closed in $E$, $C\cap(E\setminus \da x)$ is an open upper set. Thus its
complement in $C$, which is $C\cap \da x$, is in $\A$. Hence for $x\leq y$ in $C$, we note that
$[x,y]=\ua x\cap\da y=\ua x\cap C\cap \da y\in\A$.
Now let $U$ be a nonempty open subset of $C$. Using the alternative characterization of normality, we may pick for each $x\in U$
an order convex neighborhood $N_x$ of $x$ that is contained in $U$. For some $\ve$ small enough $x-\ve u, x+\ve u\in N_x$, and
hence the order interval $[x-\ve u,x+\ve u]\subseteq N_x$. Let $B_x:=(x-\ve u+C)\cap (x+\ve u-C)$, an open subset contained in
$[x-\ve u,x+\ve u]$. The collection $\{B_x:x\in U\}$ is an open cover of $U$, which by the separability of $E$ (and hence $U$) has
a countable subcover $\{B_{x_n}\}$. The corresponding $[x_n-\ve_n u,x_n+\ve_n u]$ then also form a countable cover of $U$, and
since from the preceding paragraph each order interval is in $\A$, it follows that $U\in\A$. Thus $\A$ contains all open sets of $C$, and hence
must be the Borel algebra.
\end{proof}
We next recall E.~Dynkin's $\pi-\lambda$ theorem. Let $X$ be a set. A $\pi$-system is a collection of subsets of $X$ closed under finite
intersection. A $\lambda$-system is a collection with $X$ as a member that is closed under complementation and under countable unions
of pairwise disjoint members of the system. An important observation is that a $\lambda$-system that is also a $\pi$-system is a $\sigma$-algebra.
\begin{theorem} $($Dynkin's $\pi-\lambda$ Theorem$)$
If a $\pi$-system is contained in a $\lambda$-system, then the $\sigma$-algebra generated by
the $\pi$-system is contained in the $\lambda$-system.
\end{theorem}
The stochastic order on $\Pro(X)$ for a metric space $X$ equipped with a closed order is easily seen to be reflexive and transitive, but anti-symmetry is much more difficult to derive.
We now have available the tools we need to show for the open cone $C$ that the stochastic order on $\Pro(C)$ is a partial order.
\begin{theorem}\label{T:NC2}
Let $E$ be a Banach space containing an open cone $C$ such that $\overline C$ is a normal cone. Then the stochastic order on $\Pro(C)$ is a partial order.
\end{theorem}
\begin{proof} We first consider the case that $E$ is separable. Let $\mu,\nu\in\Pro(C)$ be
such that $\mu\leq \nu$ and $\nu\leq\mu$. We consider the set $\A$ of all Borel sets $B$ such
that $\mu(B)=\nu(B)$. By definition of the stochastic order, $U\in \A$ for each open upper set
$U$, and the collection of open upper sets is closed under finite intersection, i.e., is a
$\pi$-system. Since $\mu$ and $\nu$ are $\sigma$-additive measures, it follows that the collection $\A$ is closed under complementation and union of pairwise disjoint countable
families, so $\A$ is a $\lambda$-system. By Dynkin's $\pi-\lambda$ theorem the $\sigma$-algebra
generated by the open upper sets is contained in $\A$, but by Proposition \ref{P:NC1} this is the Borel algebra. Hence $\mu=\nu$ on the Borel algebra, that is to say $\mu=\nu$.
We turn now to the general case in which $E$ may not be separable. In this case, however,
both $S_\mu$, the support of $\mu$, and $S_\nu$, the support of $\nu$, are separable
(Proposition \ref{P:La}). Then also the smallest closed Banach subspace $F$ containing
$S_\mu\cup S_\nu$ will be separable, and the restrictions $\mu\vert_F$,
$\nu\vert_F\in\Pro(C\cap F)$. Since $C\cap F$ is an open cone in $F$ with closure a normal cone, by the first part
of the proof $\mu(B)=\nu(B)$ for all Borel subsets contained in $C\cap F$. Since
$S_\mu\cup S_\nu\subseteq C\cap F$, for any Borel set $B\subseteq C$,
$$\mu(B)=\mu(B\cap S_\mu)=\mu(B\cap (S_\mu\cup S_\nu))=\nu(B\cap(S_\mu\cup S_\nu))=\nu(B\cap S_\nu)=\nu(B).$$
Thus $\mu=\nu$.
\end{proof}
\begin{remark} The techniques of the proof readily extend to any open upper set of $E$, in
particular to $E$ itself. Indeed, Proposition \ref{P:NC1} and Theorem \ref{T:NC2} hold when
restricted to any open upper set in place of $C$. So the stochastic order on $\Pro(E)$ arising
from the conic order of $E$ is also a partial order.
\end{remark}
\section{The Thompson Metric}
We continue in the setting that $E$ is a Banach space and $C$ is an open cone with its closure
$\overline C$ a normal cone.
A. C. Thompson \cite{Thomp} has proved
that $C$ is a complete metric space with respect to the
\emph{Thompson part metric} defined by
$$d_T(x,y)={\mathrm{max}}\{\log M(x/y), \log M(y/x)\}$$ where
$M(x/y):={\mathrm{inf}}\{\lambda>0: x\leq \lambda y\}=|x|_{y}$.
Furthermore, the metric topology on $C$ arising from the Thompson metric agrees with the relative topology inherited from $E$.
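In the basic special case of the coordinatewise cone $C=(0,\infty)^n$ in $\R^n$, one has $M(x/y)=\max_i x_i/y_i$, so $d_T(x,y)=\max_i|\log x_i-\log y_i|$. The following Python sketch (a standard instance, used here only as an illustration) computes the metric from the definition:

```python
import math

def M(x, y):
    """M(x/y) = inf{lam > 0 : x <= lam*y} for the coordinatewise order
    on the open positive orthant: simply max_i x_i / y_i."""
    return max(xi / yi for xi, yi in zip(x, y))

def d_T(x, y):
    """Thompson part metric on the positive orthant."""
    return max(math.log(M(x, y)), math.log(M(y, x)))

x, y = (1.0, 4.0), (2.0, 1.0)
# M(x/y) = max(1/2, 4) = 4 and M(y/x) = max(2, 1/4) = 2, so d_T = log 4
print(abs(d_T(x, y) - math.log(4.0)) < 1e-12)  # True
```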
The contractivity of addition in $C$ with respect to the Thompson metric has been observed in various settings
and studied in some detail in \cite{LL12}. We need only the basic formulation.
\begin{lemma}\label{L:TM1}
Addition is contractive on $C$ with respect to the Thompson metric in the sense that for all $x,y,z\in C$, $d_T(x+z,y+z)\leq d_T(x,y)$.
\end{lemma}
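The contractivity of Lemma \ref{L:TM1} can be spot-checked numerically in the coordinatewise cone (again only an illustration; the general proof is in \cite{LL12}):

```python
import math, random

def M(x, y):
    return max(xi / yi for xi, yi in zip(x, y))

def d_T(x, y):
    return max(math.log(M(x, y)), math.log(M(y, x)))

random.seed(0)
for _ in range(1000):
    x, y, z = ([random.uniform(0.1, 10.0) for _ in range(3)] for _ in range(3))
    x_z = [xi + zi for xi, zi in zip(x, z)]
    y_z = [yi + zi for yi, zi in zip(y, z)]
    assert d_T(x_z, y_z) <= d_T(x, y) + 1e-12
print("d_T(x+z, y+z) <= d_T(x, y) on 1000 random triples")
```

The underlying reason in this cone is the mediant inequality: each ratio $(x_i+z_i)/(y_i+z_i)$ lies between $x_i/y_i$ and $1$.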
\begin{remark}\label{R:TM2}
The fact that the Thompson metric is complete allows us to deduce from Proposition \ref{P:La} for $E$ separable that $\Pro(C)$ consists of
all Borel probability measures and for $E$ an arbitrary Banach space that $\Pro(C)$ consists of the $\tau$-additive probability measures.
\end{remark}
\begin{proposition}\label{P:TM3}
The cone $C$ equipped with the Thompson metric satisfies the property that given $x\leq y$ and
$x_1\in C$, there exists $y_1\geq x_1$ such that $d_T(y,y_1)\leq d_T(x,x_1)$. Hence for $\mu,\nu\in \Pro(C)$,
$\mu\leq \nu$ in the stochastic order if and only if for every continuous $($bounded$)$ monotone
$f:X\to\R^+$, $\int_X f\,d\mu\leq \int_X f\,d\nu$.
\end{proposition}
\begin{proof}
Suppose $x\leq y$ and $x_1\in C$. The contractivity of the Thompson metric (Lemma \ref{L:TM1})
implies for $y_1=x_1+(y-x)$ that
$$d_T(y,y_1)=d_T(x+(y-x),x_1+(y-x))\leq d_T(x,x_1).$$
The last assertion of the proposition now follows from Proposition \ref{P:SO5}.
\end{proof}
\begin{remark}\label{R-5.4}
Here is a second proof of Proposition \ref{P:TM3}. Assume that $x\le y$ in $C$. For every
$x_1\in C$ let $\alpha:=d_T(x,x_1)$ so that $e^{-\alpha}x\le x_1\le e^\alpha x$. Set
$y_1:=e^\alpha y$; then $y_1\ge e^\alpha x\ge x_1$ and $y\le y_1=e^\alpha y$, so
$d_T(y,y_1)\le\alpha=d_T(x,x_1)$. Similarly one can show the dual version mentioned in
Remark \ref{R-3.10}. For every $y_1\in C$ let $\beta:=d_T(y,y_1)$ and $x_1:=e^{-\beta}x$; then
$x_1\le e^{-\beta}y\le y_1$ and $e^{-\beta}x=x_1\le x$ so that $d_T(x,x_1)\le\beta= d_T(y,y_1)$.
\end{remark}
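The explicit witness $y_1=e^\alpha y$ of the remark can also be verified numerically in the coordinatewise cone (illustration only):

```python
import math, random

def M(x, y):
    """M(x/y) for the coordinatewise order on the positive orthant."""
    return max(xi / yi for xi, yi in zip(x, y))

def d_T(x, y):
    return max(math.log(M(x, y)), math.log(M(y, x)))

random.seed(1)
for _ in range(500):
    x = [random.uniform(0.5, 2.0) for _ in range(3)]
    y = [xi + random.uniform(0.0, 1.0) for xi in x]   # x <= y in the cone order
    x1 = [random.uniform(0.1, 10.0) for _ in range(3)]
    alpha = d_T(x, x1)
    y1 = [math.exp(alpha) * yi for yi in y]           # the witness of the remark
    assert all(a <= b * (1 + 1e-9) for a, b in zip(x1, y1))  # x1 <= y1
    assert d_T(y, y1) <= alpha + 1e-9                        # d_T(y,y1) <= d_T(x,x1)
print("witness construction verified on 500 samples")
```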
Recall that one of the characterizations of the weak topology on $\Pro(X)$ for a metric space $X$, in particular on $\Pro(C)$,
is that a net $\mu_\alpha \to \mu$ weakly if and only if $\int_C f\, d\mu_\alpha\to \int_C f\,d\mu$ for all
continuous bounded functions into $\R$ (or $\R^+)$; see \cite{Bi}.
\begin{proposition}\label{P:TM4}
The stochastic partial order is a closed subset of $\Pro(C)\times \Pro(C)$ endowed with the product weak topology.
\end{proposition}
\begin{proof}
Let $\mu_\alpha\to \mu$ and $\nu_\alpha\to\nu$ weakly in $\Pro(C)$, where $\mu_\alpha\leq \nu_\alpha$ for each $\alpha$.
From Proposition \ref{P:TM3} for $f:C\to \R^+$ continuous bounded and monotone
$$\int_C f\,d\mu=\lim_\alpha \int_C f\,d\mu_\alpha\leq \lim_\alpha\int_Cf\,d\nu_\alpha=\int_C f\, d\nu.$$
Thus again from Proposition \ref{P:TM3}, $\mu\leq \nu$.
\end{proof}
Let $(X,\mathcal{M})$ be a \emph{measurable space}, a set $X$ equipped
with a $\sigma$-algebra $\mathcal{M}$, and $(Y,d)$ a metric space. A
function $f:X\to Y$ is \emph{measurable} if
$f^{-1}(A)\in\mathcal{M}$ whenever $A\in\mathcal{B}(Y)$. For $f$ to
be measurable, it suffices that $f^{-1}(U)\in\mathcal{M}$ for each
open subset $U$ of $Y$. Hence continuous functions are measurable
in the case $X$ is a metrizable space and
$\mathcal{M}=\mathcal{B}(X)$, the Borel algebra. A measurable map
$f:X\to Y$ between metric spaces induces the \emph{push-forward} map
$f_*:\Pro(X)\to\Pro(Y)$ defined by $f_*(\mu)(B)=\mu(f^{-1}(B))$ for
$\mu\in\Pro(X)$ and $B\in\mathcal{B}(Y)$. Note for $f$ continuous
that $\mathrm{supp}(f_*(\mu))=f(\mathrm{supp}(\mu))^-$, the closure
of the image of the support of $\mu$.
Let $(X,d)$ be a complete metric space, and for $p\in[1,\infty)$ let
$\Pro^p(X):=\{ \mu\in\Pro(X): \int_Xd(x,y)^p\,d\mu(y)<\infty\}$, the set of $\tau$-additive Borel probability measures on $X$ with finite $p$th moment (defined independently of the choice of $x\in X$).
The {\em $p$-Wasserstein metric} $d_p^W$ on $\Pro^p(X)$ is defined by
\begin{align}\label{F-5.1}
d_p^W(\mu,\nu):=\biggl[\inf_{\pi\in\Pi(\mu,\nu)}\int_{X\times X}d(x,y)^p
\,d\pi(x,y)\biggr]^{1/p},\qquad\mu,\nu\in\Pro^p(X),
\end{align}
where $\Pi(\mu,\nu)$ is the set of all couplings for $\mu,\nu$, i.e.,
$\pi\in\Pro(X\times X)$ whose marginals are $\mu$ and $\nu$.
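In the simplest instance $C=(0,\infty)$, the Thompson metric is $d_T(x,y)=|\log x-\log y|$, and for uniform measures on finitely many points the infimum in \eqref{F-5.1} with $p=1$ is attained by the monotone (sorted) coupling, a standard fact for measures on the line used here purely as an illustration:

```python
import math

def w1_thompson_line(xs, ys):
    """d_1^W between the uniform measures on the samples xs and ys
    (equal length) for C = (0, infinity) with the Thompson metric
    d_T(x, y) = |log x - log y|.  On the line the infimum over
    couplings is attained by the monotone (sorted) matching."""
    assert len(xs) == len(ys)
    pairs = zip(sorted(xs), sorted(ys))
    return sum(abs(math.log(a) - math.log(b)) for a, b in pairs) / len(xs)

mu = [1.0, 2.0, 4.0]
nu = [2.0, 4.0, 8.0]   # mu scaled by 2: every atom moves by exactly log 2
print(abs(w1_thompson_line(mu, nu) - math.log(2.0)) < 1e-12)  # True
```

Scaling every atom by the same factor $\lambda$ translates the whole measure by $\log\lambda$ in the $d_T$-geometry, so the distance is exactly $\log 2$ here.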
Recall (see, e.g.,
\cite{St}) that $\Pro^p(X)$ is a complete metric space with the metric $d_p^W$, and that
the Wasserstein convergence implies weak convergence. Hence we have the following corollary
of the preceding proposition.
\begin{corollary}
The stochastic partial order is a closed subset of $\Pro^1(C)\times \Pro^1(C)$ endowed with the product Wasserstein topology $($induced by $d_1^W)$.
\end{corollary}
We recall the notion of a contractive barycentric map.
\begin{definition}
Let $(X,d)$ be a complete metric space.
A map $\beta:\mathcal{P}^1(X)\to X$ is called a
\emph{contractive barycentric map} if
\begin{itemize}
\item[(i)] $\beta(\delta_x)=x$ for all $x\in X$;
\item[(ii)] $d(\beta(\mu),\beta(\nu))\leq d_1^W(\mu,\nu)$ for all $\mu,\nu\in\mathcal{P}^1(X)$.
\end{itemize}
For a closed partial order $\leq $ on $X,$
$\beta:\mathcal{P}^1(X)\to X$ is said to be \emph{monotonic} if
$\beta(\mu)\leq \beta(\nu),$ whenever $\mu\leq \nu.$
\end{definition}
A complete partially ordered metric space equipped with a monotonic contractive
barycenter has become an important object of study in recent years.
We consider the semigroup of mappings $\psi:C\to C$ satisfying
$x\leq \psi(x)$ for all $x\in C.$ For instance, every translation
$\tau_{a}(x)=a+x$, $a\in \overline C$, satisfies this condition and also is
non-expansive for the Thompson metric.
\begin{corollary}\label{L-4}
Let $\psi:C\to C$ be a Lipschitzian map with respect to the
Thompson metric $d_T$ such that $x\le\psi(x)$ for all $x\in C$.
Then for every $\mu\in {\mathcal P}^{1}(C)$, we have
$\psi_*\mu\in{\mathcal P}^{1}(C)$ and $\mu\le\psi_*\mu$. If further,
$\beta:{\mathcal P}^{1}(C)\to C$ is a monotonic barycentric map,
then $\beta(\mu)\leq \beta(\psi_*\mu)$ for any $\mu\in {\mathcal
P}^1(C).$
\end{corollary}
\begin{proof}
Let $\mu=\frac{1}{n}\sum_{j=1}^{n}\delta_{x_{j}}$ be a finitely
supported uniform measure on $C$. From $x_{j}\leq \psi(x_{j})$ for
all $j$, we have
$$\mu=\frac{1}{n}\sum_{j=1}^{n}\delta_{x_{j}}\leq
\frac{1}{n}\sum_{j=1}^{n}\delta_{\psi(x_{j})}=\psi_*\mu.$$
Now for $\mu \in{\mathcal P}^{1}(C)$, pick a sequence
$\mu_{n}$ of finitely supported uniform measures converging to $\mu$
from below for the Wasserstein metric associated to the Thompson
metric (\cite[Theorem 4.7]{La17}). Then
$$\mu_{n}\leq \psi_*\mu_{n},\qquad \mu_{n}\to\mu,\qquad \psi_*\mu_{n}\to \psi_*\mu$$
as $n\to \infty$, where the latter convergence holds since $\psi$ is
Lipschitzian for $d_T$, and hence $\mu\leq \psi_*\mu$ by the preceding
corollary.
\end{proof}
\section{Order-completeness}
In this section we always assume that the Banach space $E$ is {\em finite-dimensional} (hence
separable) and, as in Section 4, $C$ is an open cone in $E$ whose closure $\overline C$ is a
proper cone. Note (see Section 19.1 of \cite{De}) that the finite dimensionality assumption
automatically implies that $\overline C$ is a normal cone. We consider $C$ as a complete metric
space equipped with the Thompson part metric $d_T$ and the $p$-Wasserstein metric $d_p^W$ on
$\Pro^p(C)$, $1\le p<\infty$, given in \eqref{F-5.1} with $d=d_T$.
The next elementary lemma is given just for completeness.
\begin{lemma}\label{L-6.1}
\begin{itemize}
\item[(1)] For each $x,y\in C$, the order interval $[x,y]=(x+\overline C)\cap(y-\overline C)$
is a compact subset of $C$.
\item[(2)] For any $u\in C$, $\bigcup_{k=1}^\infty[k^{-1}u,ku]=C$.
\end{itemize}
\end{lemma}
\begin{proof}
(1):
Since $x+\overline C\subset C+\overline C\subset C$, $[x,y]\subset C$. It is also clear that
$[x,y]$ is a closed subset of $E$. Since $\overline C$ is a normal cone, we see that if
$z\in[x,y]$ then $\|z\|\le K\|y\|$. Hence, $[x,y]$ is a bounded closed subset of $E$. Since $E$
is finite-dimensional, $[x,y]$ is compact in $E$, and hence also in $(C,d_T)$, since the two topologies agree on $C$.
(2):
Let $u,x\in C$. For $k\in\N$ sufficiently large, $x-k^{-1}u\in C$ and $u-k^{-1}x\in C$ so that
$x\in(k^{-1}u+C)\cap(ku-C)$. Therefore, $x\in[k^{-1}u,ku]$, which implies the assertion.
\end{proof}
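Lemma \ref{L-6.1}\,(2) can be made effective in the coordinatewise cone $C=(0,\infty)^n$: since $e^{d_T(x,u)}u\ge x\ge e^{-d_T(x,u)}u$, the smallest admissible index is $k=\lceil e^{d_T(x,u)}\rceil$. A Python sketch of this computation (illustration only, for the coordinatewise cone):

```python
import math

def order_interval_index(x, u):
    """Smallest integer k with k^{-1} u <= x <= k u, coordinatewise,
    on the open positive orthant: k = ceil(e^{d_T(x, u)}), since
    e^{d_T(x,u)} = max_i max(x_i/u_i, u_i/x_i)."""
    r = max(max(xi / ui, ui / xi) for xi, ui in zip(x, u))
    return math.ceil(r)

u = (1.0, 1.0, 1.0)
x = (0.3, 5.0, 1.7)
k = order_interval_index(x, u)
assert all(ui / k <= xi <= k * ui for xi, ui in zip(x, u))
print(k)  # 5
```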
Before showing order-completeness, it is convenient to derive the compactness of order
intervals in $\Pro(C)$ as well as in $\Pro^p(C)$.
\begin{proposition}\label{P-6.2}
Let $\nu_1,\nu_2\in\Pro(C)$ with $\nu_1\le\nu_2$.
\begin{itemize}
\item[(1)] The order interval $[\nu_1,\nu_2]:=\{\mu\in\Pro(C):\nu_1\le\mu\le\nu_2\}$ is compact in the
weak topology.
\item[(2)] Let $1\le p<\infty$. If $\nu_1,\nu_2\in\Pro^p(C)$, then $[\nu_1,\nu_2]\subset\Pro^p(C)$ and
it is compact in the $d_p^W$-topology.
\end{itemize}
\end{proposition}
\begin{proof}
(1):
Choose any $u\in C$. For every $\epsilon>0$ Lemma \ref{L-6.1}\,(2) implies that there exists $k\in\N$
such that $(\nu_1+\nu_2)(C\setminus[k^{-1}u,ku])<\epsilon$. We write
$C\setminus[k^{-1}u,ku]=U_k\cup V_k$, where $U_k:=\{x\in C:x\not\le ku\}$ and
$V_k:=\{x\in C:x\not\ge k^{-1}u\}$. It is clear that $U_k$ is an upper open set while $V_k$ is a lower
open set. Hence, if $\mu\in[\nu_1,\nu_2]$, then we have
\begin{align*}
\mu(C\setminus[k^{-1}u,ku])
&\le\mu(U_k)+\mu(V_k)\le\nu_2(U_k)+\nu_1(V_k) \\
&\le(\nu_1+\nu_2)(C\setminus[k^{-1}u,ku])<\epsilon
\end{align*}
(for $\mu(V_k)\le\nu_1(V_k)$, see Remark \ref{R:SO3}). By Lemma \ref{L-6.1}\,(1), this says
that $[\nu_1,\nu_2]$ is tight, and so it is relatively compact in $\Pro(C)$ in the weak topology
due to Prohorov's theorem (see \cite{Bi}). Since $[\nu_1,\nu_2]$ is closed in the weak topology
by Proposition \ref{P:TM4}, $[\nu_1,\nu_2]$ is compact in the weak topology.
(2):
Next, assume that $\nu_1,\nu_2\in\Pro^p(C)$ for $p\in[1,\infty)$. First we prove the following
``tightness'' condition:
\begin{align}\label{F-1}
\lim_{R\to\infty}\sup_{\mu\in[\nu_1,\nu_2]}\int_{d(x,u)>R}d(x,u)^p\,d\mu(x)=0
\end{align}
for some $u\in C$. Choose any $u\in C$. For every $R\ge0$ set
\begin{align*}
U_R&:=\{x\in C:M(x/u)>e^R,\,M(x/u)\ge M(u/x)\}, \\
V_R&:=\{x\in C:M(u/x)>e^R,\,M(x/u)<M(u/x)\}.
\end{align*}
Then it is immediate to see that
$$
\{x\in C:d(x,u)>R\}=U_R\cup V_R\ \ \mbox{(disjoint union)}.
$$
Hence, for any $\mu\in\Pro(C)$ we have
\begin{align}\label{F-2}
\int_{d(x,u)>R}d(x,u)^p\,d\mu(x)
=\int_C1_{U_R}(x)d(x,u)^p\,d\mu(x)+\int_C1_{V_R}(x)d(x,u)^p\,d\mu(x).
\end{align}
When $x\in U_R$ and $x\le y\in C$, since $M(x/u)\le M(y/u)$ and $M(u/x)\ge M(u/y)$, we find that
$M(y/u)\ge M(x/u)>e^R$ and $M(y/u)\ge M(u/y)$ so that $y\in U_R$. Therefore, $U_R$ is an upper Borel
set. Moreover,
$$
d(x,u)=\log M(x/u)\le\log M(y/u)=d(y,u).
$$
Hence it follows that $x\in C\mapsto1_{U_R}(x)d(x,u)^p$ is a monotone Borel function. When $x\in V_R$
and $x\ge y\in C$, since $M(x/u)\ge M(y/u)$ and $M(u/x)\le M(u/y)$, $M(u/y)\ge M(u/x)>e^R$ and
$M(y/u)<M(u/y)$ so that $y\in V_R$. Therefore, $V_R$ is a lower open set and
$$
d(x,u)=\log M(u/x)\le\log M(u/y)=d(y,u).
$$
Hence we see that $x\in C\mapsto1_{V_R}(x)d(x,u)^p$ is an antitone Borel function. If
$\mu\in[\nu_1,\nu_2]$, then by Proposition \ref{P:SO3} and Corollary \ref{C:SO4} applied to
the right-hand side of \eqref{F-2} we obtain
\begin{align*}
\int_{d(x,u)>R}d(x,u)^p\,d\mu(x)
&\le\int_C1_{U_R}(x)d(x,u)^p\,d\nu_2(x)+\int_C1_{V_R}(x)d(x,u)^p\,d\nu_1(x) \\
&\le\int_{d(x,u)>R}d(x,u)^p\,d(\nu_1+\nu_2)(x)\longrightarrow0
\end{align*}
as $R\to\infty$, since $\int_Cd(x,u)^p\,d(\nu_1+\nu_2)(x)<\infty$. Hence \eqref{F-1} has been proved,
which in particular implies that $[\nu_1,\nu_2]\subseteq\Pro^p(C)$. Moreover, from a basic fact on the
convergence in Wasserstein spaces \cite[Theorem 7.12]{Vi}, we see that $[\nu_1,\nu_2]$ is compact in
the $d_p^W$-topology. Indeed, for every sequence $\{\mu_n\}$ in $[\nu_1,\nu_2]$, from the assertion (1)
one can choose a subsequence $\{\mu_{n(m)}\}$ such that $\mu_{n(m)}\to\mu$ weakly for some
$\mu\in\Pro(C)$. Hence, it follows from \cite[Theorem 7.12]{Vi} that $\mu\in\Pro^p(C)$ and
$d_p^W(\mu_{n(m)},\mu)\to0$. Note that the limit $\mu$ is in $[\nu_1,\nu_2]$, since the
$d_p^W$-convergence implies the weak convergence. Thus, $[\nu_1,\nu_2]$ is $d_p^W$-compact.
\end{proof}
The next proposition gives the order-completeness (or a monotone convergence property) of the
stochastic order on $\Pro(C)$ in the weak topology.
\begin{proposition}\label{P-6.3}
Let $\mu_n,\nu\in\Pro(C)$ for $n\in\N$.
\begin{itemize}
\item[(1)] If $\mu_1\le\mu_2\le\dots\le\nu$, then there exists a $\mu\in\Pro(C)$ such that
$\mu_n\le\mu\le\nu$
for all $n$ and $\mu_n\to\mu$ weakly.
\item[(2)] If $\mu_1\ge\mu_2\ge\dots\ge\nu$, then there exists a $\mu\in\Pro(C)$ such that
$\mu_n\ge\mu\ge\nu$ for all $n$ and $\mu_n\to\mu$ weakly.
\end{itemize}
\end{proposition}
\begin{proof}
(1):
Since $\{\mu_n\}\subset[\mu_1,\nu]$ and Proposition \ref{P-6.2}\,(1) says that $[\mu_1,\nu]$
is compact in the weak topology, to see that $\mu_n\to\mu$ weakly for some $\mu\in\Pro(C)$,
it suffices to prove that a weak limit point of $\{\mu_n\}$ is unique. Now, let
$\mu,\mu'\in\Pro(C)$ be weak limit points of $\{\mu_n\}$, so there are subsequences
$\{\mu_{n(l)}\}$ and $\{\mu_{n(m)}\}$ such that $\mu_{n(l)}\to\mu$ and $\mu_{n(m)}\to\mu'$
weakly. Let $f:C\to[0,\infty)$ be any continuous bounded and monotone function. Since
$\int_Cf\,d\mu_n$ is increasing in $n$ by Proposition \ref{P:TM3}, we have
$$
\int_Cf\,d\mu=\lim_l\int_Cf\,d\mu_{n(l)}=\lim_m\int_Cf\,d\mu_{n(m)}=\int_Cf\,d\mu'.
$$
This implies by Proposition \ref{P:TM3} again that $\mu\le\mu'$ and $\mu'\le\mu$ so that
$\mu=\mu'$ by Theorem \ref{T:NC2}. Therefore $\mu_n\to\mu\in\Pro(C)$ weakly. Moreover, since
$\int_Cf\,d\mu_n\le\int_Cf\,d\mu\le\int_Cf\,d\nu$ for every continuous bounded and monotone
function $f\ge0$ on $C$, we have $\mu_n\le\mu\le\nu$ for all $n$.
(2):
The proof is similar to the above with a slight modification.
\end{proof}
The next proposition gives the order-completeness of the stochastic order restricted to
$\Pro^p(C)$ with respect to $d_p^W$-convergence.
\begin{proposition}\label{P-6.4}
Let $1\le p<\infty$ and $\mu_n,\nu\in\Pro^p(C)$ for $n\in\N$.
\begin{itemize}
\item[(1)] If $\mu_1\le\mu_2\le\dots\le\nu$, then there exists a $\mu\in\Pro^p(C)$ such that
$\mu_n\le\mu\le\nu$ for all $n$ and $d_p^W(\mu_n,\mu)\to0$.
\item[(2)] If $\mu_1\ge\mu_2\ge\dots\ge\nu$, then there exists a $\mu\in\Pro^p(C)$ such that
$\mu_n\ge\mu\ge\nu$ for all $n$ and $d_p^W(\mu_n,\mu)\to0$.
\end{itemize}
\end{proposition}
\begin{proof}
For both assertions (1) and (2), by Proposition \ref{P-6.2}\,(2) it suffices to prove that a
$d_p^W$-limit point of $\{\mu_n\}$ is unique. Since the $d_p^W$-convergence implies the weak
convergence, this is immediate from the proof of Proposition \ref{P-6.3}.
\end{proof}
\begin{corollary}\label{C-6.5}
Let $\mu,\mu_n\in\Pro(C)$, $n\in\N$. Then $\mu_n$ weakly converges to $\mu$ increasingly
$($resp.\ decreasingly$)$ in the stochastic order if and only if $\int_Cf\,d\mu_n$
increases $($resp.\ decreases$)$ to $\int_Cf\,d\mu$ for every continuous bounded and monotone
$f:C\to\R^+$. Moreover, if $\mu,\mu_n\in\Pro^p(C)$ where $1\le p<\infty$, then the above
conditions are also equivalent to the convergence of $\mu_n$ to $\mu$ in the metric $d_p^W$
increasingly $($resp.\ decreasingly$)$ in the stochastic order.
\end{corollary}
\begin{proof}
Assume that for any $f:C\to\R^+$ as stated above, $\int_Cf\,d\mu_n$ increases (resp.\ decreases)
to $\int_Cf\,d\mu$. Then by Proposition \ref{P:TM3}, $\mu_1\le\mu_2\le\dots\le\mu$
(resp.\ $\mu_1\ge\mu_2\ge\dots\ge\mu$). By Proposition \ref{P-6.3} there exists a $\mu_0\in\Pro(C)$
such that $\mu_n\to\mu_0$ weakly. By assumption, $\int_Cf\,d\mu=\int_Cf\,d\mu_0$ for any $f$
as above, which implies that $\mu=\mu_0$ by Theorem \ref{T:NC2} and Proposition \ref{P:TM3}.
Hence $\mu_n\to\mu$ weakly. Since the converse implication is obvious, the first assertion has
been shown. The second follows from Proposition \ref{P-6.4}.
\end{proof}
\begin{remark}\label{R-6.6}\rm
It is straightforward to see that $x\mapsto\delta_x$ is a homeomorphism from $(C,d_T)$ into
$\Pro(C)$ with the weak topology and also an isometry from $(C,d_T)$ into $(\Pro^1(C),d_1^W)$.
Hence each conclusion of (1) and (2) of Proposition \ref{P-6.2} implies that the interval
$[x_1,x_2]$ in $C$ is compact for any $x_1,x_2\in C$ with $x_1\le x_2$. Since
$(2^{-1}u+C)\cap(2u-C)$ is a non-empty open subset of $[2^{-1}u,2u]$ for any $u\in C$, this
forces $E$ to be finite-dimensional. Thus, the finite dimensionality of $E$ is essential in
Proposition \ref{P-6.2}. However, Propositions \ref{P-6.3} and \ref{P-6.4} might still hold
true beyond the finite-dimensional case.
\end{remark}
\section{AGH mean inequalities}
In this section we consider the Banach space $E=\mathcal{B}(H)$ of
bounded operators on a (general) Hilbert space $H$ with the operator
norm, and the open cone $C=\bP$ consisting of positive invertible
operators on $H$. Note that $\bP$ is a complete metric space with
the Thompson metric $d_T$. Let $\Lambda$ be the {\em Karcher
barycenter} on ${\mathcal P}^1(\bP)$; in particular, for a finitely
and uniformly supported measure $\mu={1\over
n}\sum_{j=1}^n\delta_{A_j}$,
$$\Lambda_n(A_{1},\dots,A_{n}):=\Lambda\left(\frac{1}{n}\sum_{j=1}^{n}\delta_{A_{j}}\right)$$
is the {\em Karcher} or {\em least squares mean} of $(A_1,\dots,A_n)\in\bP^n$, which is
uniquely determined by the {\em Karcher equation}
$$\sum_{j=1}^{n}\log (X^{-1/2}A_{j}X^{-1/2})=0.$$
Moreover, $\Lambda:{\mathcal P}^1(\bP)\to \bP$ is contractive:
$$
d_T(\Lambda(\mu),\Lambda(\nu))\leq d_1^W(\mu,\nu),\qquad\mu,\nu\in{\mathcal P}^1(\bP).
$$
See, e.g., \cite{LL13,LL14,LL17} for the Karcher equation and Karcher (or Cartan) barycenter.
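For positive scalars the Karcher equation reads $\sum_j\log(A_j/X)=0$, so $\Lambda_n$ reduces to the geometric mean. The sketch below (illustrative only; the helper name is hypothetical) verifies this closed form together with the scalar AGH inequalities:

```python
import math

def karcher_scalar(A):
    """Solve sum_j log(A_j / X) = 0 for positive scalars:
    X = (prod_j A_j)^(1/n), the geometric mean."""
    return math.exp(sum(math.log(a) for a in A) / len(A))

A = [1.0, 4.0, 16.0]
lam = karcher_scalar(A)
assert abs(lam - 4.0) < 1e-12                 # geometric mean of 1, 4, 16
H = len(A) / sum(1.0 / a for a in A)          # harmonic mean
Ar = sum(A) / len(A)                          # arithmetic mean
assert H <= lam <= Ar                         # scalar AGH inequalities
```

The commuting (scalar) case is, of course, only a sanity check; for noncommuting operators the Karcher equation has no closed form in general.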
We consider the complete metric $d_n$ on the product space $\bP^n$
\begin{align}\label{F-7.4}
d_{n}((A_{1},\dots,A_{n}),
(B_{1},\dots,B_{n})):=\frac{1}{n}\sum_{j=1}^{n}d_T(A_{j},B_{j}).
\end{align}
The contraction property of the Karcher barycenter implies that the map
$$\Lambda_{n}:\bP^{n}\to \bP,\quad (A_{1},\dots,A_{n})\mapsto
\Lambda_n(A_{1},\dots,A_{n})$$
is a Lipschitz map with Lipschitz constant $1$.
The arithmetic and harmonic means
$${\mathcal A}_{n}(A_{1},\dots,A_{n})=\frac{1}{n}\sum_{j=1}^{n}A_{j},\qquad {\mathcal
H}_{n}(A_{1},\dots,A_{n})=\left[\frac{1}{n}\sum_{j=1}^{n}A_{j}^{-1}\right]^{-1}$$
are continuous from $\bP^n$ to $\bP$ and are also Lipschitz with
Lipschitz constant $1$ for the sup-metric on $\bP^n$
\begin{align}\label{E:supm}
d_n^\infty((A_1,\dots,A_n),(B_1,\dots,B_n)):=\max_{1\le j\le n}d_T(A_j,B_j).
\end{align}
\begin{definition}
For each $n\in\N$ and $\mu_{1},\dots,\mu_{n}\in\Pro(\bP)$, note that the product measure
$\mu_1\times\dots\times\mu_n$ is in $\Pro(\bP^n)$. This is easily verified since the support
of the product measure, being the product of the supports of the $\mu_i$'s, has measure $1$.
As seen from Proposition \ref{P:La}, note also that the push-forward of a $\tau$-additive
measure by a continuous map is $\tau$-additive. Hence one can define the following three
measures in $\Pro(\bP)$, regarded as the geometric, arithmetic and harmonic means of
$\mu_1,\dots,\mu_n$:
\begin{align}
\Lambda(\mu_{1},\dots,\mu_{n})
&:=(\Lambda_{n})_{*}(\mu_{1}\times\cdots\times\mu_{n}), \label{F-7.6}\\
{\mathcal A}(\mu_{1},\dots,\mu_{n})
&:=({\mathcal A}_{n})_{*}(\mu_{1}\times\dots\times\mu_{n}), \label{F-7.7}\\
{\mathcal H}(\mu_{1},\dots,\mu_{n})
&:=({\mathcal H}_{n})_{*}(\mu_{1}\times\dots\times\mu_{n}). \label{F-7.8}
\end{align}
\end{definition}
\begin{example}
For $\mu=\frac{1}{n}\sum_{j=1}^{n}\delta_{A_{j}}$ and $X\in\bP$,
\begin{align*}
\Lambda(\delta_{X},\mu)&=\frac{1}{n}\sum_{j=1}^{n}\delta_{X\#A_{j}},\\
{\mathcal A}(\delta_{X},\mu)&=\frac{1}{n}\sum_{j=1}^{n}\delta_{(X+A_{j})/2},\\
{\mathcal H}(\delta_{X},\mu)&=\frac{1}{n}\sum_{j=1}^{n}\delta_{2(X^{-1}+A_{j}^{-1})^{-1}}.
\end{align*}
\end{example}
\begin{proposition}\label{P-7.3}
For every $\mu_1,\dots,\mu_n\in\Pro(\bP)$,
$$
\mathcal{H}(\mu_1,\dots,\mu_n)=\bigl[\mathcal{A}(\mu_1^{-1},\dots,\mu_n^{-1})\bigr]^{-1},
$$ where $\mu^{-1}$ is the push-forward of $\mu$ by operator
inversion $A\mapsto A^{-1}.$
\end{proposition}
\begin{proof}
For every bounded continuous function $f:\bP\to\R$ we have
\begin{align*}
&\int_\bP f(A)\,d\bigl[\mathcal{H}(\mu_1,\dots,\mu_n)\bigr]^{-1}(A)\\
&\qquad=\int_\bP f(A^{-1})\,d\mathcal{H}(\mu_1,\dots,\mu_n)(A) \\
&\qquad=\int_{\bP^n}f\biggl({1\over n}\sum_{j=1}^nA_j^{-1}\biggr)
\,d(\mu_1\times\dots\times\mu_n)(A_1,\dots,A_n) \\
&\qquad=\int_{\bP^n}f\biggl({1\over n}\sum_{j=1}^nA_j\biggr)
\,d(\mu_1^{-1}\times\dots\times\mu_n^{-1})(A_1,\dots,A_n) \\
&\qquad=\int_\bP f(A)\,d\mathcal{A}(\mu_1^{-1},\dots,\mu_n^{-1})(A),
\end{align*}
which shows that $\bigl[\mathcal{H}(\mu_1,\dots,\mu_n)\bigr]^{-1}
=\mathcal{A}(\mu_1^{-1},\dots,\mu_n^{-1})$.
\end{proof}
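Proposition \ref{P-7.3} can be checked by hand for uniformly supported atomic measures on positive scalars. In the sketch below (illustrative only; the function names are ours) a list of atoms stands in for a uniform measure, and the push-forward means act atomwise over the product:

```python
def arithmetic_atoms(mu1, mu2):
    """Atoms of A(mu1, mu2): arithmetic means over the product of atoms."""
    return sorted((a + b) / 2 for a in mu1 for b in mu2)

def harmonic_atoms(mu1, mu2):
    """Atoms of H(mu1, mu2): harmonic means over the product of atoms."""
    return sorted(2.0 / (1.0 / a + 1.0 / b) for a in mu1 for b in mu2)

mu1, mu2 = [1.0, 2.0], [3.0, 4.0]
# H(mu1, mu2) = [A(mu1^{-1}, mu2^{-1})]^{-1}: compare the atom lists.
lhs = harmonic_atoms(mu1, mu2)
rhs = sorted(1.0 / x for x in arithmetic_atoms(
    [1.0 / a for a in mu1], [1.0 / b for b in mu2]))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```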
For a complete metric space $(X,d)$, in addition to $\Pro^p(X)$ with the $p$-Wasserstein
metric $d_p^W$ in \eqref{F-5.1} for $1\le p<\infty$, we also consider the set $\Pro^\infty(X)$
of $\mu\in\Pro(X)$ whose support is a bounded set of $X$, equipped with the
$\infty$-Wasserstein metric
\begin{equation}\label{E:winf}
d_\infty^W(\mu,\nu)= \inf_{\pi\in\Pi(\mu,\nu)}
\sup\{d(x,y):(x,y)\in\mathrm{supp}(\pi)\},
\end{equation}
where $\Pi(\mu,\nu)$ is the set of all couplings for $\mu,\nu$.
\begin{proposition}
For every $p\in[1,\infty]$ and $M=\Lambda,\mathcal{A},\mathcal{H}$ in
\eqref{F-7.6}--\eqref{F-7.8}, if $\mu_1,\dots,\mu_n\in\Pro^p(\bP)$ then
$M(\mu_1,\dots,\mu_n)\in\Pro^p(\bP)$. Moreover,
$$
(\mu_1,\dots,\mu_n)\in(\Pro^p(\bP))^n\mapsto
M(\mu_1,\dots,\mu_n)\in\Pro^p(\bP)
$$
is Lipschitz continuous with respect to the Wasserstein metric $d_p^W$.
\end{proposition}
\begin{proof}
Since $\Lambda_n:\bP^n\to\bP$ is a Lipschitz map with Lipschitz constant $1$ with respect to
$d_n$ in \eqref{F-7.4}, we can use \cite[Lemma 1.3]{LL17} to see that for each $p\in[1,\infty]$
the push-forward map $(\Lambda_n)_*:\Pro^p(\bP^n)\to\Pro^p(\bP)$ is Lipschitz with Lipschitz
constant $1$ with respect to the metric $d_p^W$, where $d_p^W$ on $\Pro^p(\bP^n)$ is defined
in terms of $d_n$. Let $\mu_1,\dots,\mu_n;\nu_1,\dots,\nu_n\in\Pro^p(\bP)$. Then it is clear
that $\mu_1\times\dots\times\mu_n\in\Pro^p(\bP^n)$ and hence $\Lambda(\mu_1,\dots,\mu_n)=
(\Lambda_n)_*(\mu_1\times\dots\times\mu_n)$ is in $\Pro^p(\bP)$. To show the Lipschitz
continuity, we may prove more precisely that
\begin{align*}
d_p^W(\Lambda(\mu_1,\dots,\mu_n),\Lambda(\nu_1,\dots,\nu_n))
&\le\Biggl[{1\over n}\sum_{j=1}^n\Bigl(d_p^W(\mu_j,\nu_j)\Bigr)^p\Biggr]^{1/p}
\quad\mbox{when $1\le p<\infty$}, \\
d_\infty^W(\Lambda(\mu_1,\dots,\mu_n),\Lambda(\nu_1,\dots,\nu_n))
&\le\max_{1\le j\le n}d_\infty^W(\mu_j,\nu_j)
\hskip2.1cm\mbox{when $p=\infty$}.
\end{align*}
To prove this, let $\pi_j\in\Pi(\mu_j,\nu_j)$, $1\le j\le n$. Since
$\pi_1\times\dots\times\pi_n\in\Pi(\mu_1\times\dots\times\mu_n,\nu_1\times\dots\times\nu_n)$,
we have, for the case $1\le p<\infty$,
\begin{align*}
&d_p^W((\Lambda_n)_*(\mu_1\times\dots\times\mu_n),
(\Lambda_n)_*(\nu_1\times\dots\times\nu_n)) \\
&\quad\le d_p^W(\mu_1\times\dots\times\mu_n,\nu_1\times\dots\times\nu_n) \\
&\quad\le\biggl[\int_{\bP^n\times\bP^n}d_n^p((A_1,\dots,A_n),(B_1,\dots,B_n))
\,d(\pi_1\times\dots\times\pi_n)\biggr]^{1/p} \\
&\quad=\Biggl[\int_{\bP^n\times\bP^n}\Biggl({1\over n}\sum_{j=1}^nd_T(A_j,B_j)\Biggr)^p
\,d(\pi_1\times\dots\times\pi_n)\Biggr]^{1/p} \\
&\quad\le\Biggl[\int_{\bP^n\times\bP^n}{1\over n}\sum_{j=1}^nd_T^p(A_j,B_j)
\,d\pi_1\times\dots\times\pi_n\Biggr]^{1/p} \\
&\quad=\Biggl[{1\over n}\sum_{j=1}^n\int_{\bP\times\bP}d_T^p(A_j,B_j)
\,d\pi_j(A_j,B_j)\Biggr]^{1/p}.
\end{align*}
By taking the infima over $\pi_j$, $1\le j\le n$, in the last
expression, we have the desired $d_p^W$-inequality when $1\le
p<\infty$. The proof when $p=\infty$ is similar, so we omit the
details.
Since $\mathcal{A}_n,\mathcal{H}_n:\bP^n\to\bP$ are Lipschitz with
Lipschitz constant $1$ with respect to $d_n^\infty$ in
(\ref{E:supm}), we can use \cite[Lemma 1.3]{LL17} again with the
metric $d_p^W$ in terms of $d_n^\infty$ (in place of $d_n$ in the
above). For the Lipschitz continuity of
$\mathcal{A}(\mu_1,\dots,\mu_n)$ we have, for $1\le p<\infty$,
\begin{align*}
&d_p^W((\mathcal{A}_n)_*(\mu_1\times\dots\times\mu_n),
(\mathcal{A}_n)_*(\nu_1\times\dots\times\nu_n)) \\
&\quad\le d_p^W(\mu_1\times\dots\times\mu_n,\nu_1\times\dots\times\nu_n) \\
&\quad\le\biggl[\int_{\bP^n\times\bP^n}\max_{1\le j\le n}d_T^p(A_j,B_j)
\,d(\pi_1\times\dots\times\pi_n)\biggr]^{1/p} \\
&\quad\le\Biggl[\sum_{j=1}^n\int_{\bP\times\bP}d_T^p(A_j,B_j)\,d\pi_j(A_j,B_j)\Biggr]^{1/p},
\end{align*}
which implies that
$$
d_p^W(\mathcal{A}(\mu_1,\dots,\mu_n),\mathcal{A}(\nu_1,\dots,\nu_n))
\le\Biggl[\sum_{j=1}^n\Bigl(d_p^W(\mu_j,\nu_j)\Bigr)^p\Biggr]^{1/p}.
$$
For $p=\infty$, we similarly have
$$
d_\infty^W(\mathcal{A}(\mu_1,\dots,\mu_n),\mathcal{A}(\nu_1,\dots,\nu_n))
\le\max_{1\le j\le n}d_\infty^W(\mu_j,\nu_j).
$$
The proof for $\mathcal{H}(\mu_1,\dots,\mu_n)$ is analogous, or we may use
Proposition \ref{P-7.3}.
\end{proof}
The next theorem is the AGH mean inequalities in the stochastic order for probability measures.
\begin{theorem}\label{T-7.5}
For any $\mu_{1},\dots,\mu_{n}\in\Pro(\bP)$,
$${\mathcal H}(\mu_{1},\dots,\mu_{n})\leq
\Lambda(\mu_{1},\dots,\mu_{n})\leq {\mathcal A}(\mu_{1},\dots,\mu_{n}).$$
\end{theorem}
\begin{proof} Let
$f:\bP\to\R^+$ be continuous, bounded and monotone. Then by the AGH mean inequalities for operators,
\begin{align*}
\int_\bP f\,d\Lambda(\mu_{1},\dots,\mu_{n})
&=\int_{\bP^n}(f\circ\Lambda_n)(A_{1},\dots,A_{n})
\,d(\mu_{1}\times\cdots\times\mu_{n})(A_{1},\dots,A_{n})\\
&\leq\int_{\bP^n} (f\circ {\mathcal A}_n)(A_{1},\dots,A_{n})
\,d(\mu_{1}\times\cdots\times\mu_{n})(A_{1},\dots,A_{n})\\
&=\int_\bP f\,d{\mathcal A}(\mu_{1},\dots,\mu_{n}),
\end{align*}
which implies by Proposition \ref{P:TM3} that
$\Lambda(\mu_{1},\dots,\mu_{n})\leq {\mathcal A}(\mu_{1},\dots,\mu_{n})$. The proof of
${\mathcal H}(\mu_{1},\dots,\mu_{n})\leq\Lambda(\mu_{1},\dots,\mu_{n})$ is similar.
\end{proof}
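For $n=2$, where $\Lambda_2(A,B)=A\#B$, Theorem \ref{T-7.5} rests on the operator AGH inequalities $\mathcal{H}_2\le\Lambda_2\le\mathcal{A}_2$ in the Loewner order. The following numerical sketch (illustrative only; the matrices are arbitrary test data) checks them for a pair of positive definite $2\times2$ matrices:

```python
import numpy as np

def sqrtm_pd(M):
    """Square root of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Two-variable Karcher mean Lambda_2(A, B) = A # B
    = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah = sqrtm_pd(A)
    Ahi = np.linalg.inv(Ah)
    return Ah @ sqrtm_pd(Ahi @ B @ Ahi) @ Ah

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[3.0, 0.0], [0.0, 1.0]])
H = np.linalg.inv((np.linalg.inv(A) + np.linalg.inv(B)) / 2)
G = geometric_mean(A, B)
Ar = (A + B) / 2
# Loewner order: H <= G <= Ar (up to rounding).
assert np.linalg.eigvalsh(G - H).min() >= -1e-10
assert np.linalg.eigvalsh(Ar - G).min() >= -1e-10
```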
\begin{theorem}\label{T-7.6}
The maps $\Lambda,\mathcal{A},\mathcal{H}:(\Pro(\bP))^n\to\Pro(\bP)$ are monotonically
increasing in the sense that if $\mu_j,\nu_j\in\Pro(\bP)$ and $\mu_j\le\nu_j$ for
$1\le j\le n$, then $M(\mu_1,\dots,\mu_n)\le M(\nu_1,\dots,\nu_n)$ for
$M=\Lambda,\mathcal{A},\mathcal{H}$.
\end{theorem}
\begin{proof}
Let $f:\bP\to\R^+$ be a monotone bounded Borel function. We write
$$
\int_\bP f\,d\Lambda(\mu_1,\dots,\mu_n)=\int_\bP g(A_1)\,d\mu_1(A_1),
$$
where
$$
g(A_1):=\int_{\bP^{n-1}}(f\circ\Lambda_n)(A_1,A_2,\dots,A_n)
\,d(\mu_2\times\dots\times\mu_n)(A_2,\dots,A_n).
$$
From the monotonicity property of $\Lambda_n$, it is immediate to see that $A_1\mapsto g(A_1)$
is a monotone bounded Borel function on $\bP$. Hence by Proposition \ref{P:SO3} we have
$$
\int_\bP f\,d\Lambda(\mu_1,\dots,\mu_n)
\le\int_\bP g(A_1)\,d\nu_1(A_1)=\int_\bP f\,d\Lambda(\nu_1,\mu_2,\dots,\mu_n).
$$
This implies that $\Lambda(\mu_1,\mu_2,\dots,\mu_n)\le\Lambda(\nu_1,\mu_2,\dots,\mu_n)$.
Repeating the argument shows that $\Lambda(\nu_1,\mu_2,\dots,\mu_n)\le
\Lambda(\nu_1,\nu_2,\mu_3,\dots,\mu_n)$ and so on. Hence $\Lambda(\mu_1,\dots,\mu_n)\le
\Lambda(\nu_1,\dots,\nu_n)$ follows. The proof is similar for $\mathcal{A}$ and $\mathcal{H}$.
\end{proof}
\begin{remark}
One can apply the arguments in this section to other multivariate
operator means of $(A_1,\dots,A_n)\in\bP^n$ having the monotonicity
property. For instance, let $P_t(A_1,\dots,A_n)$ for $t\in[-1,1]$ be
the one-parameter family of multivariate power means interpolating
$\mathcal{H}_n$, $\Lambda_n$, $\mathcal{A}_n$ as
$P_{-1}=\mathcal{H}_n$, $P_0=\Lambda_n$ and $P_1=\mathcal{A}_n$. The
power mean $P_{t}(A_{1},\dots,A_{n})$ for $t\in (0,1]$ is defined by
the unique positive definite solution of
$X=\frac{1}{n}\sum_{j=1}^{n} X\#_{t}A_{j},$ where
$A\#_{t}B=A^{1/2}(A^{-1/2}BA^{-1/2})^{t}A^{1/2}$ denotes the
$t$-weighted geometric mean of $A$ and $B.$ It is monotone and
Lipschitz:
$$d_T(P_{t}(A_{1},\dots,A_{n}), P_{t}(B_1,\dots,B_n))\leq \max_{1\leq
j\leq n}d_T(A_{j},B_{j}).$$ Moreover, $P_t(A_1,\dots,A_n)$ is
monotonically increasing in $t\in[-1,1]$ and
\begin{align}\label{F-7.10}
\lim_{t\to0}P_t(A_1,\dots,A_n)=\Lambda_n(A_1,\dots,A_n).
\end{align}
For power means, see \cite{LP} for positive definite matrices
and \cite{LL13,LL14} for positive operators on an infinite-dimensional Hilbert space.
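In the scalar case the defining equation $X=\frac1n\sum_{j=1}^nX\#_tA_j$ becomes $X=\frac1n\sum_jX^{1-t}A_j^t$, with the closed-form solution $P_t=\bigl(\frac1n\sum_jA_j^t\bigr)^{1/t}$. The sketch below (illustrative only; helper names are ours) checks the naive fixed-point iteration against this formula, the monotonicity in $t$, and the limit \eqref{F-7.10}:

```python
def power_mean_iter(A, t, iters=200):
    """Fixed-point iteration X <- (1/n) sum_j X #_t A_j for positive scalars,
    where X #_t A = X**(1 - t) * A**t."""
    X = sum(A) / len(A)
    for _ in range(iters):
        X = sum(X ** (1 - t) * a ** t for a in A) / len(A)
    return X

def power_mean_closed(A, t):
    """Scalar closed form P_t = ((1/n) sum_j A_j^t)^(1/t)."""
    return (sum(a ** t for a in A) / len(A)) ** (1.0 / t)

A = [1.0, 4.0, 16.0]
assert abs(power_mean_iter(A, 0.5) - power_mean_closed(A, 0.5)) < 1e-8
# P_t is increasing in t, and P_t -> Lambda_n (here the geometric mean 4) as t -> 0.
assert power_mean_closed(A, 0.25) <= power_mean_closed(A, 0.5) <= power_mean_closed(A, 1.0)
assert abs(power_mean_closed(A, 1e-6) - 4.0) < 1e-3
```

In the scalar case the iteration error contracts with factor $1-t$, which is why a direct iteration becomes slow as $t\to0$; the closed form is used for the limit check instead.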
Then one obtains the one-parameter family $P_t(\mu_1,\dots,\mu_n)$ for
$\mu_1,\dots,\mu_n\in\Pro(\bP)$, where each
$P_t(\mu_1,\dots,\mu_n)$ is monotonically increasing in
$\mu_1,\dots,\mu_n$ as in Theorem \ref{T-7.6}, and
$P_t(\mu_1,\dots,\mu_n)$ is monotonically increasing in $t$,
extending the AGH mean inequalities in Theorem \ref{T-7.5}. Moreover,
$$
P_s(\mu_1,\dots,\mu_n)\leq\Lambda(\mu_{1},\dots,\mu_{n})\leq P_t(\mu_1,\dots,\mu_n)
$$
for $-1\le s<0<t\leq 1$ as in Theorem \ref{T-7.5}.
Now assume that $\bP$ is the cone of positive definite matrices of some fixed dimension, and
let $\mu_j\in\Pro^1(\bP)$, $1\le j\le n$. For any continuous bounded and monotone
$f:\bP\to\R^+$ we see by \eqref{F-7.10} that
$$
\int_\bP f\,dP_t(\mu_1,\dots,\mu_n)
=\int_{\bP^n}(f\circ P_t)(A_1,\dots,A_n)\,d(\mu_1\times\dots\times\mu_n)(A_1,\dots,A_n)
$$
increases as $t\nearrow0$ and decreases as $t\searrow0$ to
$$
\int_{\bP^n}(f\circ\Lambda_n)(A_1,\dots,A_n)\,d(\mu_1\times\dots\times\mu_n)(A_1,\dots,A_n)
=\int_\bP f\,d\Lambda(\mu_1,\dots,\mu_n).
$$
Hence by Corollary \ref{C-6.5},
$$
\lim_{t\to0}d_1^W(P_t(\mu_1,\dots,\mu_n),\Lambda(\mu_1,\dots,\mu_n))=0.
$$
It would be interesting to know whether this convergence holds true in the infinite-dimensional
case as well.
\end{remark}
\begin{remark}
Several issues arise related to $\Lambda(\mu_1,\dots,\mu_n)$. For
example, it is interesting to consider existence and uniqueness for
the least squares mean on $\Pro^1(\bP)$:
$$\underset{\mu \in \Pro^1(\bP)}{\argmin}\sum_{j=1}^{n}d_1^W(\mu, \mu_{j})^2$$
and a connection with the probability measure
$\Lambda(\mu_{1},\dots,\mu_{n}).$ Moreover, the probability Borel
measure equation
$$x={\mathcal A}(x\#_{t}\mu_{1},\dots,x\#_{t}\mu_{n}), \ \ \ \mu_{j}\in {\mathcal P}_{cp}(\bP),\ t\in (0,1],$$
where $\mu\#_{t}\nu=f_{*}(\mu\times\nu)$ is the
push-forward by the $t$-weighted geometric mean map
$f(A,B)=A\#_{t}B,$ seems to have a unique solution in ${\mathcal
P}_{cp}(\bP),$ the set of probability measures with compact
support.
\end{remark}
\section{Acknowledgements}
The work of F.~Hiai was supported in part by
Grant-in-Aid for Scientific Research (C)17K05266.
The work of Y.~Lim was supported by the
National Research Foundation of Korea (NRF) grant funded by the
Korea government (MEST) No.~NRF-2015R1A3A2031159.
We study the construction of robust preconditioners for the
high-contrast biharmonic plate equation (also referred as the
biharmonic equation). The aim is to achieve robustness with respect
to the contrast size and the mesh size simultaneously, which we call
as $m$- and $h$-robustness, respectively. In the case of a
high-contrast diffusion equation, we studied the family of
preconditioners $B_{AGKS}$ by proving and numerically demonstrating
that the same family used for finite element discretization
\cite{AGKS:2007} can also be used for conservative finite volume
discretizations with minimal modification \cite{AkYe2009}. In this
article, we extend the applicability of $B_{AGKS}$ even further and
show that the very same preconditioner can be used for a wider family
of elliptic PDEs. The broadness of the applicability of $B_{AGKS}$ has
been achieved by singular perturbation analysis (SPA) as it provides
valuable insight into qualitative nature of the underlying PDE and its
discretizations. In order to study the robustness of $B_{AGKS}$, we
use an SPA that is similar to the one devised on the matrix entries by
Aksoylu et al.~\cite{AGKS:2007}. SPA turned out to be an effective
tool in analyzing certain behaviors of the discretization matrix
$K(m)$ such as the asymptotic rank, decoupling, and low-rank perturbations
(LRP) of the resulting submatrices. LRPs are exploited to accomplish
dramatic computational savings, and this is the main numerical linear
algebra implication.
The devised SPA is utilized to explain the properties of the
submatrices related to $K(m)$. In particular, the SPA of the highly-bending
block $K_{HH}(m)$, as the modulus of bending $m \to \infty$, has important
implications for the behavior of the Schur complement $S(m)$ of
$K_{HH}(m)$ in $K(m)$. Namely,
\begin{equation} \label{limitingS}
S(m) := K_{LL} - K_{LH} K_{HH}^{-1}(m) K_{HL} = S_\infty + \mathcal{O}(m^{-1}) \ ,
\end{equation}
where $S_\infty$ is a LRP of $K_{LL}$. The rank of the perturbation
depends on the number of disconnected components comprising the
highly-bending region. This special limiting form of $S(m)$ allows
us to build a robust approximation of $S(m)^{-1}$ by merely using
solvers for $K_{LL}$ by the help of the
Sherman-Morrison-Woodbury formula.
Preconditioning for the biharmonic equation was extensively studied in
the domain decomposition setting
\cite{Mihajlovic.M;Silvester.D2004,Zhang.X1994} and multigrid, BPX,
and hierarchical basis settings
\cite{Braess.D;Peisker.P1987,Hanisch.M1993,Mihajlovic.M;Silvester.D2004a,Maes.J;Bultheel.A2006,Oswald.P1992,Oswald.P1995}.
Other solution strategies were also developed such as fast Poisson
solvers \cite{mayo1984,mayoGreenbaum1992} and iterative methods
\cite{dang2006}. However, there is only limited preconditioning
literature available for discontinuous coefficients. Marcinkowski
\cite{marcinkowski2007} studied domain decomposition preconditioners
for the mortar type discretization of the biharmonic equation with
large jumps in the coefficients.
High contrast in material properties is ubiquitous in composite
materials. Hence, the modeling of composite materials is an immediate
application of the biharmonic plate equation with high-contrast coefficients.
Since the usage of composite materials is steadily increasing, the
simulation and modeling of composites has become essential. We witness
that the utilization of composites has become an industry standard.
For instance, light weight composite materials are now being used in
modern aircrafts by Airbus and Boeing. There is imminent need for
robust preconditioning technology in the computational material
science community as the modeling and simulation capability of
composites evolve.
In \cite{wang2005_masterThesis}, the
Euler-Bernoulli equation with discontinuous coefficients was studied
for the kinematics of composite beams. In the beam setting, the
physical meaning of the PDE coefficient corresponds to the product of
Young's modulus and moment of inertia
\cite[p.~103]{pozrikidis2005_book}, \cite{wang2005_masterThesis}. In
the biharmonic plate equation setting, the PDE coefficient represents the plate
modulus of bending \cite[p.~406]{pozrikidis2005_book}.
Nonhomogeneous elastic plates have been considered in
\cite{manolisRangelovShaw2003} with varying modulus of elasticity.
Our model problem is limited to the biharmonic equation which captures
only the \emph{isotropic} materials. The extension of our analysis to
a more generalized 4-th order PDE is widely open. Such PDEs have an
important role in structural mechanics as they are used in modeling
\emph{anisotropic} materials. Plane deformations of anisotropic
materials were studied in \cite{millerHorgan1995}, but extension to
simultaneously heterogeneous and anisotropic case needs to be further
explored. Grossi \cite{grossi2001} has studied the existence of the
weak solutions of anisotropic plates. The coercivity of the bilinear
forms has also been established which may lay the foundations for our
future work related to LRPs.
The remainder of the article is structured as follows. In
\S\ref{sec:underlyingPDE}, we present the underlying high-contrast
biharmonic plate equation and the associated bilinear forms. Subsequently,
the effects of high-contrast on the spectrum of stiffness matrix and
its subblocks are also discussed. Since the proposed preconditioner is
based on LRP, in \S\ref{sec:LRP}, we study the LRP of the limiting
Schur complement as in \eqref{limitingS}. In \S\ref{sec:SPA}, we
present the aforementioned SPA and reveal the asymptotic qualitative
nature of the solution. In particular, the solution over the
highly-bending region converges to a linear polynomial as $m
\rightarrow \infty$. In \S\ref{sec:AGKS}, we introduce the proposed
preconditioner and prove its effectiveness by establishing a spectral bound for
the preconditioned system. In \S\ref{sec:generalPDE}, a strategy is presented
on how to generalize the proposed preconditioner to cover
high-contrast elliptic PDEs of order $2k,~k>2$. In
\S\ref{sec:numerics}, the $m$- and $h$-robustness of the
preconditioner are demonstrated by numerical experiments.
\section{THE UNDERLYING PDE AND THE LINEAR SYSTEM} \label{sec:underlyingPDE}
\begin{figure}[htbp]
\centering{
\includegraphics[width=1in]{domain1}}
\caption{$\Omega = \overline{\Omega}_H \cup \Omega_L$ where $\Omega_H$
and $\Omega_L$ are highly- and lowly-bending regions,
respectively.\label{fig:domain1}}
\end{figure}
We study the following high-contrast biharmonic equation for the clamped
plate problem:
\begin{equation} \label{mainProblem}
\begin{array}{rcll}
\nabla^2 \, (\alpha \, \nabla^2 u) & = &
f \quad & \text{in} \quad \Omega \subset \mathbb{R}^2, \\
u = \partial_n u & = & 0 \quad & \text{on} \quad \partial \Omega.
\end{array}
\end{equation}
We restrict the plate bending process to a \emph{binary regime} (see
Figure \ref{fig:domain1})
in which the coefficient $\alpha$ is a
piecewise constant function with the following values:
\begin{equation*}
\alpha(x) =
\begin{cases}
m \gg 1, & x \in \Omega_H, \\
1, & x \in \Omega_L.
\end{cases}
\end{equation*}
It is quite common to idealize the discontinuous PDE coefficient
$\alpha$ by a piecewise constant
function~\cite{BaKn1990,KnWi:2003}. In the case of the high-contrast
diffusion equation, Aksoylu and Beyer~\cite{AkBe2008} showed that the
idealization of diffusivity by piecewise constant coefficients is
meaningful by proving continuous dependence of the solutions on the
diffusivity; see also~\cite{AkBe2009}. A similar justification can be
extended to the high-contrast biharmonic plate equation.
\subsection{Bilinear forms for the biharmonic equation}
In the theory of elasticity, potential energy is defined
by using \emph{rotationally invariant} functions.
For plates, the potential energy is given by~\cite[p. 30]{ciarlet2002_book}:
\begin{equation} \label{plateEnergyHessian}
J(v) := \frac{1}{2} \int_{\Omega} \alpha \,
\left[ \{ \textrm{trace} \, \mathrm{Hess} \}^2 +
2(\sigma-1) \det \mathrm{Hess} \right]~dx - \int_{\Omega} fv~dx,
\end{equation}
where $\mathrm{Hess}$ is the Hessian matrix,
\begin{equation*}
\mathrm{Hess} = \left[ \begin{matrix}
\partial_{11}v & \partial_{12}v\\
\partial_{21}v & \partial_{22}v
\end{matrix}
\right].
\end{equation*}
The bilinear form corresponding to energy
minimization in \eqref{plateEnergyHessian} is given by:
\begin{equation} \label{ciarlet1}
a(u,v) := \int_{\Omega} \alpha \, \left [\nabla^2 u \, \nabla^2 v +
(1 - \sigma) \{ 2 \partial_{12}u \, \partial_{12}v -
\partial_{11} u \, \partial_{22} v -
\partial_{22} u \, \partial_{11} v \} \right]~dx,
\end{equation}
where $0< \sigma < 1/2$ is Poisson's ratio. Note that the
straightforward bilinear form associated with \eqref{mainProblem} is
obtained by using Green's formula:
\begin{equation} \label{straightforwardBilinearForm}
\int_\Omega \nabla^2 \, (\alpha \, \nabla^2 u) \, v~dx =
\int_\Omega \alpha \, \nabla^2 u \, \nabla^2 v~dx +
\int_{\partial \Omega} \alpha \, \partial_n \nabla^2 u \, v~d\gamma -
\int_{\partial \Omega} \alpha \, \nabla^2 u \, \partial_n v~d\gamma.
\end{equation}
We see that both \eqref{ciarlet1} and \eqref{straightforwardBilinearForm}
contain the so-called \emph{canonical} bilinear form, $\tilde{a}(u,v)$,
associated to the biharmonic equation \eqref{mainProblem}:
\begin{equation} \label{canonical}
\tilde{a}(u,v) :=
\int_{\Omega} \alpha \, \nabla^2 u \, \nabla^2 v ~dx.
\end{equation}
When $u,v \in H_0^2(\Omega)$, both bilinear forms $a(u,v)$ and
$\tilde{a}(u,v)$ correspond to the strong
formulation \eqref{mainProblem} due to Green's second formula
and the vanishing of the following term:
\begin{equation} \label{zeroContributionTerm}
\int_{\Omega} (1 - \sigma) \{ 2 \partial_{12}u \, \partial_{12}v -
\partial_{11} u \, \partial_{22} v -
\partial_{22} u \, \partial_{11} v \}~dx.
\end{equation}
\subsection{Effects of high-contrast on the spectrum}
\begin{figure}[th!]
\centering{
\includegraphics[width=6in]{spectra_HCT_level3}}
\caption{ The HCT discretization of the biharmonic equation with
$m=10^{10}$. (Left) The spectrum of the stiffness matrix $K$.
(Right) The spectrum of the diagonally scaled stiffness matrix. Notice
the 3 small eigenvalues of order $\mathcal{O}(m^{-1})$ corresponding
to the kernel of the Neumann matrix, $\textrm{span} \{
\underline{1}_H, \underline{x}_H, \underline{y}_H \}.$ The plots of
the two smallest eigenvalues overlap because they are roughly of
the same magnitude. \label{fig:spectra}}
\end{figure}
Roughness of PDE coefficients causes loss of robustness of
preconditioners. This is mainly due to clusters of eigenvalues with
varying magnitude. Although diagonal scaling has no effect on the
asymptotic behaviour of the condition number, it leads to an improved
clustering in the spectrum. The spectrum of the diagonally scaled
stiffness matrix, $A$, is bounded from above and below except for three
eigenvalues in the case of a single isolated highly-bending island.
On the other hand, the spectrum of $K$ contains eigenvalues
approaching infinity, with cardinality depending on the number of DOF
contained within the highly-bending island. For the case of the HCT
discretization with $m=10^{10}$, we depict the spectra of $K$ and $A$
and their subblocks in Figure \ref{fig:spectra}. Clustering provided
by diagonal scaling can be advantageous for faster convergence of
Krylov subspace solvers especially when deflation methods designed for
small eigenvalues are used; for further discussion see \cite{AkKl:2007}.
Utilizing the matrix entry based analysis by Graham and
Hagger~\cite{GrHa:99} for linear FE, the authors in \cite{AkYe2009}
extended the spectral analysis to cell-centered FV discretization and
obtained an identical spectral result for $A$. Namely, the number of
small eigenvalues of $A$ depends on the number of isolated islands
comprising the highly-bending region. We observe a similar behaviour
for the biharmonic plate equation where the only difference is that for each
island we observe three small eigenvalues rather than one. The three
dimensional kernel of the Neumann matrix is responsible for that
difference; see \S\ref{sec:LRP}. A similar matrix entry based
analysis can be applied to discretizations of the plate equation, but
this analysis is more involved for HCT and Morley discretizations than
that for linear FE. Hence, we exclude it from the scope of this article.
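The eigenvalue picture described above can be mimicked on a small synthetic model. The following Python sketch (our own stand-in with random matrices, not an actual HCT stiffness matrix) forms $K(m)$ as a well-conditioned SPD matrix plus $m$ times a semidefinite block with a three-dimensional kernel, and counts the unbounded eigenvalues of $K$ and the small eigenvalues of the diagonally scaled matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_H, n_L = 9, 6
n = n_H + n_L

# SPSD stand-in for the Neumann block: 3-dimensional kernel, as in (ker_Nhh).
Z, _ = np.linalg.qr(rng.standard_normal((n_H, n_H)))
N = Z[:, :n_H - 3] @ np.diag(rng.uniform(1.0, 2.0, n_H - 3)) @ Z[:, :n_H - 3].T

# Well-conditioned SPD stand-in for the m-independent part of K.
G = rng.standard_normal((n, n))
R = G @ G.T + n * np.eye(n)

m = 1e12
K = R.copy()
K[:n_H, :n_H] += m * N                     # K_HH(m) = m*N_HH + coupling

D = np.diag(K)
A = K / np.sqrt(np.outer(D, D))            # diagonal scaling D^{-1/2} K D^{-1/2}

eig_K = np.linalg.eigvalsh(K)
eig_A = np.linalg.eigvalsh(A)

# K: n_H - 3 eigenvalues blow up like m; A: exactly 3 small eigenvalues remain.
n_large_K = int(np.sum(eig_K > 1e-2 * m))
n_small_A = int(np.sum(eig_A < 1e-6))
```

In this toy model, $K$ has $n_H - 3$ eigenvalues of order $m$, while the diagonally scaled matrix retains exactly three small eigenvalues, mirroring the behaviour shown in Figure \ref{fig:spectra}.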
\section{DISCRETIZATIONS AND LOW-RANK PERTURBATIONS}
\label{sec:LRP}
We consider an $H^2$-conforming and an $H^2$-nonconforming
Galerkin finite element discretization: the Hsieh-Clough-Tocher (HCT)
\cite{HCTorigPaper} and Morley \cite{morleyOrigPaper} elements, respectively.
Let the linear system arising from the discretization be denoted by:
\begin{equation} \label{mainLinearSys}
K(m)~x = b.
\end{equation}
$\Omega$ is decomposed with respect to the magnitude of the coefficient value as
\begin{equation} \label{subregionDecomp}
\Omega = \overline{\Omega}_H \cup \Omega_L,
\end{equation}
where $\Omega_H$ and $\Omega_L$ denote the highly- and lowly-bending
regions, respectively. DOF that lie on the interface, $\Gamma :=
\overline{\Omega}_H \cap \overline{\Omega}_L$, between the two regions
are included in $\Omega_H$. When $m$-dependence is explicitly stated and
the discretization system \eqref{mainLinearSys} is decomposed with
respect to \eqref{subregionDecomp}, i.e., the magnitude of the
coefficient values, we arrive at the following $2 \times 2$ block
system:
\begin{equation} \label{2x2blockSys}
\left[
\begin{array}{ll}
K_{HH}(m) & K_{HL} \\ K_{LH} & K_{LL}
\end{array}
\right]
\left[ \begin{array}{c} x_H \\ x_L \end{array} \right]
= \left[ \begin{array}{c} b_H \\ b_L \end{array} \right].
\end{equation}
There are two important properties associated with the $K_{HH}$ block in
\eqref{2x2blockSys}: it is the only block that depends on $m$, and,
furthermore, a matrix with a low-dimensional kernel can be extracted from it.
Our preconditioner construction is based on the LRPs arising from this extraction.
Next, we explain how to extract the so-called \emph{Neumann matrix}
and why $a(u,v)$ is the suitable bilinear form for that purpose.
Rewriting \eqref{ciarlet1} as
\begin{equation} \label{ciarlet2}
a(u,v) = \int_{\Omega} \alpha \, \left [ \sigma \, \nabla^2 u \, \nabla^2 v +
(1 - \sigma) \{ \partial_{11}u \, \partial_{11}v +
\partial_{22} u \, \partial_{22} v +
2 \, \partial_{12} u \, \partial_{12} v \} \right]~dx,
\end{equation}
we see that
\begin{eqnarray}
a(v,v) & = & \alpha \, \sigma~\|\nabla^2 v\|^2_{L_2(\Omega)} +
\alpha \, (1-\sigma) |v|^2_{H^2(\Omega)} \nonumber \\
& \geq & \alpha \, (1-\sigma) |v|^2_{H^2(\Omega)} \label{gateway}.
\end{eqnarray}
The inequality \eqref{gateway} has important implications. Namely,
$a(v, v)$ is $V_{\mathcal{P}_1}(\Omega)$-coercive, where
$V_{\mathcal{P}_1}(\Omega) \subset H^2(\Omega)$ is a closed subspace
such that $V_{\mathcal{P}_1}(\Omega) \cap \mathcal{P}_1 = \{0\}$
and $\mathcal{P}_1$ denotes the set of polynomials of degree at most
$1$. Furthermore, \eqref{gateway} immediately implies that
$a(v,v)$ is $H_0^2(\Omega)$-coercive.
Let $\mathcal{T}^h$ be the triangulation of $\Omega$. Based on
$\mathcal{T}^h$, we define the associated discrete space
$V_{\mathcal{P}_1}^h(\Omega)$ such that $V_{\mathcal{P}_1}^h \cap
\mathcal{P}_1^h = \{0\}$. A precise definition of the $K_{HH}$
block in the stiffness matrix in \eqref{mainLinearSys} is given by:
\begin{equation*}
\langle K_{HH} \underline{\phi}_H^h, \underline{\psi}_H^h \rangle :=
a(\phi_H^h,\psi_H^h),
\end{equation*}
where $\phi_H^h, \psi_H^h \in V^h(\Omega_H) \subset H_0^2(\Omega_H)$
are the basis functions. We define the \emph{Neumann matrix}
$\mathcal{N}_{HH}$ as follows:
\begin{equation*}
\langle \mathcal{N}_{HH} \underline{\phi}_H, \underline{\psi}_H \rangle :=
a(\phi_H^h,\psi_H^h),
\end{equation*}
where $\phi_H^h, \psi_H^h \in V_{\mathcal{P}_1}^h (\Omega_H)$.
Since $a(\cdot, \cdot)$ is
$V_{\mathcal{P}_1}(\Omega)$-coercive by \eqref{gateway}, we have
\begin{equation} \label{ker_Nhh}
\ker \mathcal{N}_{HH} = \mathcal{P}_1^h|_{\overline{\Omega}_H} =~
\textrm{span} \{ \underline{1}_H, \underline{x}_H, \underline{y}_H \}.
\end{equation}
Hence, $K_{HH}(m)$ has the following decomposition:
\begin{equation} \label{K_HH_decomp}
K_{HH}(m) = m \, \mathcal{N}_{HH} + R,
\end{equation}
where $R$ is the coupling matrix corresponding to DOF on the interface
$\Gamma$. Now, we are in a position to state the main
numerical linear algebra implication. As $m \rightarrow \infty$, the
limiting Schur complement $S_\infty$ in \eqref{limitingS} becomes a
rank-3 perturbation of $K_{LL}$. This result relies on the fact that the
limit of $K_{HH}(m)^{-1}$ is of rank 3; see \eqref{lemma:part_i}.
This, in turn, is due to the fact that $\mathcal{N}_{HH}$ has a
three-dimensional kernel whose (normalized) discretization is given by:
\begin{equation} \label{defn:e_H}
e_H := [\underline{1}_H, \underline{x}_H, \underline{y}_H].
\end{equation}
\section{MAIN SINGULAR PERTURBATION ANALYSIS RESULTS} \label{sec:SPA}
\begin{lemma} \label{lemma:Main}
The asymptotic behaviour of the submatrices in \eqref{eq:exact}
is given by the following:
\begin{eqnarray}
K_{HH}(m)^{-1} & = & e_{H}\eta^{-1} e_{H}^{t} + \mathcal{O}(m^{-1}),
\label{lemma:part_i}\\
S(m) & = & K_{LL} -
(K_{LH} e_{H}) \eta^{-1} (e_{H}^{t} K_{HL}) +
\mathcal{O}(m^{-1}), \label{lemma:part_ii}\\
K_{LH} K_{HH}(m)^{-1} & = & (K_{LH} e_H)
\eta^{-1} e_{H}^{t} + \mathcal{O}(m^{-1}), \label{lemma:part_iii}
\end{eqnarray}
where
\begin{equation} \label{defn:eta}
\eta := e_H^t \, K_{HH} \, e_H.
\end{equation}
\end{lemma}
\begin{proof}
Since $\mathcal{N}_{HH}$ is symmetric positive semidefinite, using \eqref{ker_Nhh}
we have the following spectral decomposition where
$n_H$ denotes the cardinality of DOF in $\overline{\Omega}_H$:
\begin{equation} \label{spectralDecomp1}
Z^t \mathcal{N}_{HH} Z =
\text{diag}(\lambda_1, \ldots, \lambda_{n_H-3}, 0, 0, 0),
\end{equation}
where $\{ \lambda_i :\ i = 1, \ldots, n_H \}$ is a non-increasing
sequence of eigenvalues of $\mathcal{N}_{HH}$ and $Z$ is orthogonal. Since
the eigenvectors corresponding to the zero eigenvalues are
discretizations of the polynomials $1, x$, and $y$, we can write
$Z = \left[\tilde{Z} \ | \ e_H \right]$, where $e_H$ is defined
in \eqref{defn:e_H}. Using \eqref{K_HH_decomp}, we have:
\begin{eqnarray}
Z^{t}K_{HH}(m)Z & = & \left[
\begin{matrix}
m~\textrm{diag} (\lambda_{1}, \ldots, \lambda_{n_H - 3}) +
\tilde{Z}^{t}R \tilde{Z} & ~\tilde{Z}^{t}R e_{H}
\\ e_{H}^{t} R \tilde{Z} & e_{H}^{t} R e_{H} \\
\end{matrix}
\right] \nonumber \\
& =: & \left[
\begin{matrix}
\tilde\Lambda(m) & \tilde{\delta} \\ \tilde{\delta}^{t} & \eta \label{spectralDecomp1b} \\
\end{matrix}
\right].
\end{eqnarray}
To find the limiting form of $K_{HH}(m)^{-1}$ note that
\begin{eqnarray*}
\tilde{\Lambda}(m) & = &
m~\textrm{diag}(\lambda_{1}, \ldots, \lambda_{n_H - 3}) +
\tilde{Z}^{t} R \tilde{Z} \\
&=&
m~\textrm{diag}(\lambda_{1}, \ldots, \lambda_{n_H - 3}) \left( \tilde{I} +
m^{-1}~\textrm{diag}(\lambda_{1}^{-1}, \ldots,
\lambda_{n_H - 3}^{-1})\tilde{Z}^{t} R \tilde{Z} \right).
\end{eqnarray*}
Then,
\begin{equation*}
\|\tilde{\Lambda}(m)^{-1}\|_2 \leq
\frac{m^{-1} \, \max_{i \leq n_H-3} \ \lambda_i^{-1}}
{1 - m^{-1} \, \max_{i \leq n_H-3} \ \lambda_i^{-1} \,
\|\tilde{Z}^t R \tilde{Z}\|_2 },
\end{equation*}
for sufficiently large $m$. Hence, we can conclude the following:
\begin{equation} \label{result1_lemma}
\tilde{\Lambda}(m)^{-1} = \mathcal{O}(m^{-1}).
\end{equation}
We proceed with the following inversion:
\begin{equation*} \label{inversion1} \left[ \begin{array}{cc}
\tilde{\Lambda}(m) & \tilde{\delta} \\
\tilde{\delta}^t & \eta
\end{array} \right]^{-1} = U(m)~V(m)~U(m)^t,
\end{equation*}
where
\begin{eqnarray*}
U(m) & := &
\left[ \begin{array}{cc}
\tilde{I} & -\tilde{\Lambda}(m)^{-1} \tilde{\delta} \\
0^t & 1
\end{array} \right],\\
V(m) & := &
\left[ \begin{array}{cc}
\tilde{\Lambda}(m)^{-1} & 0 \\
0^t & \left(\eta - \tilde{\delta}^t \tilde{\Lambda}(m)^{-1} \tilde{\delta}
\right)^{-1}
\end{array} \right].
\end{eqnarray*}
Then, \eqref{result1_lemma} implies that
\begin{eqnarray*} \label{result3_lemma}
U(m) & = & I + \mathcal{O}(m^{-1}), \\
V(m) & = & \left[ \begin{array}{cc} O & 0 \\ 0^t & \eta^{-1}
\end{array} \right] + \mathcal{O}(m^{-1}).
\end{eqnarray*}
Combining the above results, we arrive at
\begin{equation*}
\label{inversion_limit}
\left[ \begin{array}{cc}
\tilde{\Lambda}(m) & \tilde{\delta} \\
\tilde{\delta}^t & \eta
\end{array} \right]^{-1}
\ = \
\left[ \begin{array}{cc}
O & 0 \\ 0^t & \eta^{-1}
\end{array} \right]
\ + \ \mathcal{O}(m^{-1})\ ,
\end{equation*}
and, by \eqref{spectralDecomp1b}, we have
\begin{eqnarray}
\label{eq:largest}
K_{HH}(m)^{-1} \ & = &
\ Z \left[ \begin{array}{cc}
O & 0 \\ 0^t & \eta^{-1}
\end{array} \right] Z^t \ + \ \mathcal{O}(m^{-1}) \ \\
& =: & \ e_H \eta^{-1} e_H^t \ + \ \mathcal{O}(m^{-1}) \ ,\nonumber
\end{eqnarray}
which proves \eqref{lemma:part_i} of the Lemma.
Parts \eqref{lemma:part_ii} and \eqref{lemma:part_iii} follow by direct
substitution using \eqref{SchurComplement1}. \hfill \qed \\
\end{proof}
\begin{remark}
If we further decompose DOF associated with $\overline{\Omega}_H$
into a set of interior DOF associated with index $I$ and
interface DOF with index $\Gamma$, we obtain the following block
representation of $K_{HH}$:
\begin{equation} \label{A_HH_blockwise}
K_{HH}(m) \ = \ \left[ \begin{array}{cc}
K_{II}(m) & K_{I \Gamma}(m) \\
K_{\Gamma I}(m) & K_{\Gamma \Gamma}(m)
\end{array} \right].
\end{equation}
The entries in the block $K_{\Gamma \Gamma}(m)$ are
assembled from contributions both from finite elements in $\Omega_H$
and $\Omega_L$, i.e. $K_{\Gamma \Gamma}(m) = A^{(H)}_{\Gamma
\Gamma}(m) + A^{(L)}_{\Gamma \Gamma}$.
We further write $e_H$ in block form;
$e_H = ( e_I^t \ , \ e_{\Gamma}^t)^t$.
Finally we note that the off-diagonal blocks have the decomposition:
\begin{equation}
\label{A_HL_blockwise}
K_{LH} \ = \ \left[ \begin{array}{cc}
0 & K_{L\Gamma}
\end{array} \right] \ = \ K_{HL}^t.
\end{equation}
Therefore, the results of Lemma \ref{lemma:Main} can be rewritten as
the following:
\begin{eqnarray*}
K_{HH}(m)^{-1} & = & e_{H}
\left( e_\Gamma^t K_{\Gamma \Gamma}^{(L)} e_\Gamma \right)^{-1} e_{H}^t +
\mathcal{O}(m^{-1}),\\
S(m) & = & K_{LL} - (K_{L\Gamma} e_{\Gamma})
\left( e_\Gamma^t K_{\Gamma \Gamma}^{(L)} e_\Gamma \right)^{-1}
(e_{\Gamma}^{t} K_{\Gamma L}) + \mathcal{O}(m^{-1}),\\
K_{LH} K_{HH}(m)^{-1} & = & (K_{L \Gamma} e_{\Gamma})
\left( e_\Gamma^t K_{\Gamma \Gamma}^{(L)} e_\Gamma \right)^{-1}
e_{H}^{t} + \mathcal{O}(m^{-1}).
\end{eqnarray*}
\end{remark}
\subsection{Qualitative nature of the solution}
\label{sec:qualitativeNature}
We advocate the use of SPA because it is a very effective tool for
gaining qualitative insight into the asymptotic behavior of the
solution of the underlying PDE. Through SPA, in
Lemma~\ref{lemma:Main}, we were able to fully reveal the asymptotic
behaviour of the submatrices of $K$ in \eqref{eq:exact}. This
information leads to a characterization of the limit of the underlying
discretized inverse operator. We now prove that \emph{the solution
over the highly-bending island converges to a linear polynomial}. In
other words, $x_H^\infty \in \text{span}~e_H$. This is probably the
most fundamental qualitative feature of the solution of the
high-contrast biharmonic plate equation.
\begin{lemma}
Let $e_H$ be as in \eqref{defn:e_H}. Then,
\begin{equation} \label{x_HConst}
x_H(m) = e_H~c_H \ + \ \mathcal{O}(m^{-1}),
\end{equation}
where $c_H$ is a $3 \times 1$ vector determined by the solution in the
lowly-bending region.
\end{lemma}
\begin{proof}
We prove the result by providing an explicit quantification of
the limiting process based on Lemma~\ref{lemma:Main}:
\begin{equation*}
\begin{array}{lllll}
x_L(m) & = & S^{-1}(m)~
\{b_L - K_{LH} \, K_{HH}^{-1}(m) b_H \} &&\\
& = & S_\infty^{-1} \{b_L -
K_{LH} \left( e_H \eta^{-1} e_H^t \right) b_H \} + \mathcal{O}(m^{-1})\\
& =: & x_L^\infty + \mathcal{O}(m^{-1}),\\
x_H(m) & = &
K_{HH}^{-1}(m)~ \{b_H - K_{HL} \, x_L(m) \} &&\\
& = & e_H \eta^{-1} e_H^t
\{ b_H - K_{HL} \, x_L^\infty \} +
\mathcal{O}(m^{-1})\\
& =: & e_H~c_H \ + \ \mathcal{O}(m^{-1}).
\end{array}
\end{equation*}
\hfill \qed
\end{proof}
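The convergence $x_H(m) \rightarrow \mathrm{span}\, e_H$ in \eqref{x_HConst} can be observed on a synthetic block system (again with random stand-in matrices and an orthonormal $e_H$, not an actual plate discretization): the component of $x_H$ orthogonal to $\mathrm{span}\, e_H$ decays like $\mathcal{O}(m^{-1})$.

```python
import numpy as np

rng = np.random.default_rng(2)
n_H, n_L = 8, 5
n = n_H + n_L

Z, _ = np.linalg.qr(rng.standard_normal((n_H, n_H)))
e_H = Z[:, -3:]                       # stand-in for [1_H, x_H, y_H]
N_HH = Z[:, :-3] @ np.diag(rng.uniform(1.0, 2.0, n_H - 3)) @ Z[:, :-3].T

G = rng.standard_normal((n, n))
K0 = G @ G.T + n * np.eye(n)          # m-independent SPD part
b = rng.standard_normal(n)

def distance_to_span(m):
    K = K0.copy()
    K[:n_H, :n_H] += m * N_HH
    x_H = np.linalg.solve(K, b)[:n_H]
    # component of x_H orthogonal to span(e_H); should vanish as m -> infinity
    return np.linalg.norm(x_H - e_H @ (e_H.T @ x_H))

d2, d6 = distance_to_span(1e2), distance_to_span(1e6)
```

In the continuous setting, this is precisely the statement that the solution over the highly-bending island flattens to a linear polynomial as $m \rightarrow \infty$.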
\section{CONSTRUCTION OF THE PRECONDITIONER} \label{sec:AGKS}
The exact inverse of $K$ can be written as:
\begin{eqnarray}
K^{-1} & = &
{ \left[ \begin{array}{cc} I_{HH} & ~~- K_{HH}^{-1} K_{HL}\\
0 & I_{LL}\end{array}\right]}~
{ \left[ \begin{array}{cc} K_{HH}^{-1}& 0 \\
0& ~~S^{-1} \end{array}\right]}~
{ \left[ \begin{array}{cc} I_{HH} & 0 \\ -K_{LH}K_{HH}^{-1}& ~~I_{LL}
\end{array}\right]}, \label{eq:exact}
\end{eqnarray}
where $I_{HH}$ and $I_{LL}$ denote the identity matrices of the
appropriate dimension and the Schur complement $S$ is explicitly given by:
\begin{equation} \label{SchurComplement1}
S(m) = K_{LL} - K_{LH}K_{HH}^{-1}(m) K_{HL}.
\end{equation}
Let the limit in \eqref{lemma:part_i} be denoted by
$K_{HH}^{\infty^{\dagger}}:=e_{H}\eta^{-1} e_{H}^{t}$. Based on
the above perturbation analysis, our proposed preconditioner is
defined as follows:
\begin{equation} \label{MainPrec1}
B_{AGKS}(m):=
\left[ \begin{array}{cc} I_{HH} & - K_{HH}^{\infty^\dagger} K_{HL} \\
0 & I_{LL}\end{array}\right]
\left[ \begin{array}{cc} K_{HH}(m)^{-1}& 0 \\
0& S_\infty^{-1} \end{array}\right]
\left[ \begin{array}{cc} I_{HH} & 0 \\ - K_{LH}K_{HH}^{\infty^\dagger} & I_{LL}
\end{array}\right].
\end{equation}
We need the following auxiliary result in the proof of
Theorem \ref{thm:prec_robust}, which characterizes the spectral
behaviour of the preconditioned system.
\begin{lemma}
For sufficiently large $m$, we have
\begin{equation}
\label{A_HH0.5}
K_{HH}^{-1/2} = e_H \eta^{-1/2} e_H^t + \mathcal{O}(m^{-1/2}),
\end{equation}
where $\eta$ is the $3 \times 3$ SPD matrix independent of $m$ defined
in \eqref{defn:eta}.
\end{lemma}
\begin{proof}
We start by writing down the spectral decomposition of $K_{HH}(m)$
\begin{equation*}
\label{eq:eigdecomp}
Q(m)^t K_{HH}(m) Q(m) \ = \
\ \mathrm{diag}(\mu_1(m), \ldots , \mu_{n_H-3}(m), \mu_{n_H-2}(m) ,
\mu_{n_H-1}(m), \mu_{n_H}(m)),
\end{equation*}
where $\{\mu_i(m): \ i = 1, \ldots , n_H\}$ denotes a non-increasing
ordering of the eigenvalues of $K_{HH}(m) $. Since $K_{HH}(m)$ is SPD,
we have $\mu_i(m) > 0 $ for all $i\leq n_H$. We use the main fact
that eigenvalues and eigenvectors of a symmetric matrix are Lipschitz
continuous functions of the matrix
entries~\cite{kato1982_book,watkins2002_book}.
By \eqref{spectralDecomp1} and \eqref{eq:largest} in Lemma \ref{lemma:Main},
we obtain the following spectral decomposition:
\begin{equation} \label{interStep2}
K_{HH}^{-1}(m) = z_1 \, 0 \, z_1^t + \ldots + z_{n_{H}-3} \, 0 \, z_{n_{H}-3}^t
+ e_H \, \eta^{-1} \, e_H^t + \mathcal{O}(m^{-1}).
\end{equation}
Note that $\eta$ in \eqref{spectralDecomp1b} is a $3 \times 3$ symmetric,
and hence diagonalizable, matrix.
We proceed towards a fully diagonalized form of the limiting $K_{HH}^{-1}(m)$.
For that, we use the diagonalization of $\eta^{-1}$:
\begin{equation*} \label{interStep3}
\eta^{-1} = \hat{z}_{H_1} \, \mu_{H_1}^{-1} \, \hat{z}_{H_1}^t +
\hat{z}_{H_x} \, \mu_{H_x}^{-1} \, \hat{z}_{H_x}^t +
\hat{z}_{H_y} \, \mu_{H_y}^{-1} \, \hat{z}_{H_y}^t.
\end{equation*}
Therefore, we have the following expression for the last term in
\eqref{interStep2}:
\begin{equation} \label{lastTermDiag}
e_H \eta^{-1} e_H^t = [z_{H_1} \, z_{H_x} \, z_{H_y}] \,
\text{diag}(\mu_{H_1}^{-1}, \mu_{H_x}^{-1}, \mu_{H_y}^{-1}) \,
[z_{H_1} \, z_{H_x} \, z_{H_y}]^t,
\end{equation}
where
\begin{eqnarray*}
\left[z_{H_1} \, z_{H_x} \, z_{H_y}\right] & := &
\left[e_{H_1} \, e_{H_x} \, e_{H_y}\right]
\left[\hat{z}_{H_1} \, \hat{z}_{H_x} \, \hat{z}_{H_y}\right]\\
\left[e_{H_1}, e_{H_x}, e_{H_y}\right] & := & e_H.
\end{eqnarray*}
Now by substituting \eqref{lastTermDiag} in \eqref{interStep2},
we have the following spectral decomposition which corresponds to the fully
diagonalized version:
\begin{eqnarray}
K_{HH}^{-1}(m) & = & z_1 \, 0 \, z_1^t + \ldots +
z_{n_{H}-3} \, 0 \, z_{n_{H}-3}^t +
z_{H_1} \, \mu_{H_1}^{-1} \, z_{H_1}^t + z_{H_x} \, \mu_{H_x}^{-1} \, z_{H_x}^t +
z_{H_y} \, \mu_{H_y}^{-1} \, z_{H_y}^t + \mathcal{O}(m^{-1}) \nonumber \\
& =: & Z_\infty \,
\text{diag}(0, \ldots, 0, \mu_{H_1}^{-1}, \mu_{H_x}^{-1}, \mu_{H_y}^{-1}) \, Z_\infty^t
+ \mathcal{O}(m^{-1}) \label{fullDiagVersion}.
\end{eqnarray}
The expression in \eqref{fullDiagVersion} also implies the convergence
of the eigenvectors of $K_{HH}(m)$:
\begin{equation} \label{eigvecConv}
Q(m) = Z_\infty + \mathcal{O}(m^{-1}).
\end{equation}
Note that $Z_\infty$ differs from $Z$ in \eqref{spectralDecomp1} only
in the last three columns due to diagonalization of $\eta$.
From \eqref{fullDiagVersion}, we obtain a characterization of the
largest three eigenvalues of $K_{HH}(m)^{-1}$:
\begin{subequations} \label{etaSpectralDecomp}
\begin{eqnarray}
\mu_{n_H - 2}(m)^{-1} & = & \mu_{H_1}^{-1} + \mathcal{O}(m^{-1})\\
\mu_{n_H - 1}(m)^{-1} & = & \mu_{H_x}^{-1} + \mathcal{O}(m^{-1})\\
\mu_{n_H}(m)^{-1} & = & \mu_{H_y}^{-1} + \mathcal{O}(m^{-1})\ .
\end{eqnarray}
\end{subequations}
Using \eqref{fullDiagVersion} and \eqref{etaSpectralDecomp}, we arrive at the
following:
\begin{eqnarray}
& & \text{diag} (\mu_{1}(m)^{-1/2}, \ldots, \mu_{n_H - 3}(m)^{-1/2},
\mu_{n_H - 2}(m)^{-1/2}, \mu_{n_H - 1}(m)^{-1/2}, \mu_{n_H}(m)^{-1/2}) \nonumber \\
& & =
\text{diag} (0, \ldots, 0, \mu_{H_1}^{-1/2}, \mu_{H_x}^{-1/2}, \mu_{H_y}^{-1/2})
+ \mathcal{O}(m^{-1/2}). \label{interStep4}
\end{eqnarray}
By using \eqref{interStep4} and \eqref{eigvecConv}, we arrive at the
desired result:
\begin{eqnarray*}
K_{HH}(m)^{-1/2} & = & Q(m) \,
\text{diag} (\mu_{1}(m)^{-1/2}, \ldots, \mu_{n_H}(m)^{-1/2})
Q(m)^t \\
& = & Z_\infty \, \text{diag} (0, \ldots, 0, \mu_{H_1}^{-1/2}, \mu_{H_x}^{-1/2}, \mu_{H_y}^{-1/2}) \, Z_\infty^t
+ \mathcal{O}(m^{-1/2})\\
& = & [z_{H_1} \, z_{H_x} \, z_{H_y}] \,
\text{diag} ( \mu_{H_1}^{-1/2}, \mu_{H_x}^{-1/2}, \mu_{H_y}^{-1/2}) \,
[z_{H_1} \, z_{H_x} \, z_{H_y}]^t + \mathcal{O}(m^{-1/2}) \\
& = & e_H \, \eta^{-1/2} \, e_H^t + \mathcal{O}(m^{-1/2}).
\end{eqnarray*}
\hfill \qed \\
\end{proof}
The following theorem shows that $B_{AGKS}$ is an effective preconditioner
for $m \gg 1$.
\begin{theorem} \label{thm:prec_robust}
For sufficiently large $m$, we have
\[
\sigma(B_{AGKS}(m)~K(m)) \ \subset \ [1-cm^{-1/2},1+cm^{-1/2}]
\]
for some constant $c$ independent of $m$, and therefore
\[
\kappa(B_{AGKS}(m)~K(m)) \ = \ 1 \ + \ \mathcal{O}(m^{-1/2}).
\]
\end{theorem}
\begin{proof}
Let us factorize the preconditioner as $B_{AGKS}=L^tL$ with
\begin{equation*}
L:= \left[ \begin{matrix}
K_{HH}(m)^{-1/2} & 0 \\
-S_\infty ^{-1/2} \, P_{LH}^\infty & S_\infty ^{-1/2}
\end{matrix} \right],
\end{equation*}
where $S_\infty$ and $P_{LH}^\infty$ denote the limits of the Schur
complement $S(m)$ and of $K_{LH} K_{HH}(m)^{-1}$, respectively.
We can easily show that
\begin{equation} \label{BK}
\sigma(B_{AGKS} K) = \sigma(LKL^t) = \sigma(I + E).
\end{equation}
Note that
\begin{equation} \label{interStep1}
P_{LH}^\infty K_{HH} P_{LH}^{\infty^t} - P_{LH}^{\infty} K_{HL} =
K_{LH}(e_H\eta^{-1}e_H^tK_{HH}e_H\eta^{-1}e_H^t -
e_H\eta^{-1}e_H^t)K_{HL} = 0.
\end{equation}
We illustrate one step of the computation leading to \eqref{BK}.
Using \eqref{interStep1}, the $(2,2)$ block entry of $LKL^t$ reads:
\begin{equation*}
S_\infty ^{-1/2} [P_{LH}^{\infty} K_{HH} P_{LH}^{\infty^t} -
P_{LH}^\infty K_{HL} - K_{LH}P_{LH}^{\infty^t} + K_{LL}] S_\infty ^{-1/2} = I.
\end{equation*}
The other entries of $LKL^t$ can be computed in a similar way.
Using \eqref{A_HH0.5}, we have
\begin{equation*}
E_{LH} \ = \
S_\infty ^{-1/2} K_{LH} (I_{HH} - e_H \eta^{-1} e_H^t K_{HH})
e_H \eta^{-1/2}
e_H^t + \mathcal{O}(m^{-1/2}) \ = \ \mathcal{O}(m^{-1/2}). \label{E_LH}
\end{equation*}
Hence $\rho(E)$, the spectral radius of $E$, is $\mathcal{O}(m^{-1/2})$,
which together with (\ref{BK}) completes the proof.
\hfill \qed \\
\end{proof}
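As a numerical sanity check of Theorem \ref{thm:prec_robust} on synthetic data, the sketch below assembles $B_{AGKS}$ exactly as in \eqref{MainPrec1} for a random stand-in system $K(m) = K_0 + m\,\mathcal{N}$ (not a finite element matrix), verifies the exact algebraic identity \eqref{interStep1}, and observes the spectrum of $B_{AGKS}K$ contracting toward $1$ as $m$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)
n_H, n_L = 8, 5
n = n_H + n_L

# e_H spans the 3-dimensional kernel of the Neumann stand-in N_HH.
Z, _ = np.linalg.qr(rng.standard_normal((n_H, n_H)))
e_H = Z[:, -3:]
N_HH = Z[:, :-3] @ np.diag(rng.uniform(1.0, 2.0, n_H - 3)) @ Z[:, :-3].T

G = rng.standard_normal((n, n))
K0 = G @ G.T + n * np.eye(n)                     # m-independent SPD part

def preconditioned_spread(m):
    K = K0.copy()
    K[:n_H, :n_H] += m * N_HH
    K_HH, K_HL = K[:n_H, :n_H], K[:n_H, n_H:]
    K_LH, K_LL = K[n_H:, :n_H], K[n_H:, n_H:]
    eta = e_H.T @ K_HH @ e_H
    E = e_H @ np.linalg.solve(eta, e_H.T)        # K_HH^{infty,dagger}
    S_inf = K_LL - (K_LH @ e_H) @ np.linalg.solve(eta, e_H.T @ K_HL)
    I_H, I_L = np.eye(n_H), np.eye(n_L)
    Zhl, Zlh = np.zeros((n_H, n_L)), np.zeros((n_L, n_H))
    # B_AGKS assembled from the three factors of (MainPrec1)
    B = (np.block([[I_H, -E @ K_HL], [Zlh, I_L]])
         @ np.block([[np.linalg.inv(K_HH), Zhl], [Zlh, np.linalg.inv(S_inf)]])
         @ np.block([[I_H, Zhl], [-K_LH @ E, I_L]]))
    # exact identity (interStep1): P K_HH P^t - P K_HL = 0 with P = K_LH E
    P = K_LH @ E
    identity_resid = np.linalg.norm(P @ K_HH @ P.T - P @ K_HL)
    spread = np.max(np.abs(np.linalg.eigvals(B @ K) - 1.0))
    return spread, identity_resid

spread2, _ = preconditioned_spread(1e2)
spread6, resid6 = preconditioned_spread(1e6)
```

In this toy setting the identity residual vanishes up to floating-point roundoff, and the maximal deviation of $\sigma(B_{AGKS}K)$ from $1$ shrinks as $m$ increases, as the theorem predicts.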
\section{GENERALIZATION TO ELLIPTIC PDES OF ORDER $2k$} \label{sec:generalPDE}
In essence, the biharmonic plate equation preconditioner is an extension of the
construction for the diffusion equation. It is possible to generalize
this construction to a family of elliptic PDEs of order $2k, k>2$.
We show how to obtain LRPs from the associated bilinear forms.
We choose a different perspective than the one in \S\ref{sec:LRP}:
we start with a canonical bilinear form and show the modification
it needs to undergo in order to construct LRPs.
Let the generalized problem be stated as follows:
Find $u \in H_0^k(\Omega)$ such that
\begin{equation} \label{eq:superPlateEq}
T_k u := (-1)^k \, \nabla^k \left( \alpha_k \, \nabla^k u \right) =
f \qquad \text{in}~ \Omega.
\end{equation}
The straightforward bilinear form associated with
\eqref{eq:superPlateEq} is obtained by applying Green's
formula $k$ times:
\begin{equation} \label{straightforwardBilinearForm_k}
\int_\Omega \nabla^k \, (\alpha_k \, \nabla^k u) \, v~dx =
\int_\Omega \alpha_k \, \nabla^k u \, \nabla^k v~dx +
\quad \text{boundary terms}.
\end{equation}
Then, we define a bilinear form corresponding to
\eqref{eq:superPlateEq} which can be seen as a \emph{generalization}
of the \emph{canonical} bilinear form in \eqref{canonical}:
\begin{equation} \label{canonical_k}
\tilde{a}_k(u,v) :=
\int_{\Omega} \alpha_k \, \nabla^k u \, \nabla^k v ~dx.
\end{equation}
Without modification, $\tilde{a}_k(\cdot, \cdot)$ cannot lead to LRPs
because $\tilde{a}_k(v,v)$ is not $H_0^k(\Omega)$-coercive. This is
due to the fact that $\tilde{a}_k(v,v)=0$ for $v \in \mathcal{P}_{k-1}
\cap H_0^k(\Omega)$. Hence, the stiffness matrix induced by
\eqref{canonical_k} has a large kernel involving elements from
$\mathcal{P}_{k-1}^h \cap V^h$ which indicates that extraction
of a Neumann matrix with a low-dimensional kernel is impossible.
In order to overcome this complication, we utilize a modified bilinear form:
\begin{equation*}
a_k(u,v) = \tilde{a}_k(u,v) + (1 - \sigma_k) \, \hat{a}_k(u,v).
\end{equation*}
The modified bilinear form should maintain the following essential properties:
\begin{enumerate}
\item It is $H_0^k(\Omega)$-coercive.
\item It is $V_{\mathcal{P}_{k-1}}(\Omega)$-coercive.
\item It corresponds to a strong formulation giving $T_k u$ in
\eqref{eq:superPlateEq} precisely,
\end{enumerate}
where $V_{\mathcal{P}_{k-1}}(\Omega)$ is a closed subspace
such that $V_{\mathcal{P}_{k-1}}(\Omega) \cap \mathcal{P}_{k-1} = \{0\}$
and $\mathcal{P}_{k-1}$ denotes the set of polynomials of degree at most
$k-1$.
The above properties (1) and (2) will be immediately satisfied if the
generalization of \eqref{gateway} holds for the modified bilinear
form:
\begin{equation}
a_k(v,v) \geq c_k \, |v|^2_{H^k(\Omega)} \label{gateway_k}.
\end{equation}
The construction of the \emph{Neumann} matrix generalizes immediately:
\begin{equation*}
\langle \mathcal{N}^{(k)}_{HH} \underline{\phi}_H, \underline{\psi}_H \rangle :=
a_k(\phi_H^h,\psi_H^h).
\end{equation*}
The low-rank perturbations arise from the following decomposition of
$K_{HH}^{(k)}(m)$:
\begin{equation*}
K_{HH}^{(k)}(m) = m \, \mathcal{N}_{HH}^{(k)} + R^{(k)}, \quad
\left( K_{HH}^{(k)}(m) \right)^{-1} = e_H^{(k)} \eta^{(k)^{-1}} e_H^{(k)^t} +
\mathcal{O}(m^{-1}),
\end{equation*}
where $\eta^{(k)} := e_H^{(k)^t} K_{HH}^{(k)} e_H^{(k)}$.
LRP is produced by $e_H^{(k)} \in \mathcal{P}_{k-1}^h$ because the rank is equal
to the cardinality of the basis polynomials in $\mathcal{P}_{k-1}^h$.
\begin{equation*}
\ker \mathcal{N}_{HH}^{(k)} = \mathcal{P}_{k-1}^h|_{\overline{\Omega}_H}.
\end{equation*}
Due to \eqref{zeroContributionTerm}, $a_2(\cdot,\cdot)$ in
\eqref{ciarlet1} corresponds to the strong formulation $T_2$ exactly.
Let us denote the strong formulation to which $a_k(\cdot, \cdot)$
corresponds by $\hat{T}_k$. We have $\hat{T}_k = T_k,~k=1,2$ for the
high-contrast diffusion and biharmonic plate equations, respectively:
\begin{eqnarray*}
a_1(v,v) & := & (\nabla v, \alpha_1 \, \nabla v)\\
a_2(v,v) & := & \sigma_2 \, (\nabla^2 v, \alpha_2 \, \nabla^2 v) +
\alpha_2 \, (1 - \sigma_2) |v|_{H^2(\Omega)}^2.
\end{eqnarray*}
However, for general $k$, $a_k(\cdot, \cdot)$ may not correspond to
$T_k$. In addition, one may need more general boundary conditions if
similar zero contributions in \eqref{zeroContributionTerm} can be
obtained for general $k$. Further research is needed to see if
such boundary conditions are physical. Currently, it is also unclear
for which applications such general PDEs can be used. However, there
are interesting invariance theory implications when one employs
bilinear forms corresponding to rotationally invariant functions
compatible to energy definition in \eqref{plateEnergyHessian}. This
allows a generalization of the energy notion and may be the subject
for future research. For further information, we list the relevant
bilinear forms that are composed of rotationally invariant functions
derived by the utilization of invariance theory.
\begin{eqnarray*}
a_3(v,v) & := & \sigma_3 \, (\nabla^3 v, \alpha_3 \,\nabla^3 v) +
\alpha_3 \, (1 - \sigma_3) |v|_{H^3(\Omega)}^2\\
a_4(v,v) & := & \sigma_4 \, (\nabla^4 v, \alpha_4 \,\nabla^4 v) +
\alpha_4 \, (1 - \sigma_4) |v|_{H^4(\Omega)}^2 +
\alpha_4 \, \gamma_4 |\nabla^2 v|_{H^2(\Omega)}^2.
\end{eqnarray*}
Note that the above bilinear forms satisfy \eqref{gateway_k}.
\section{NUMERICAL EXPERIMENTS} \label{sec:numerics}
The goal of the numerical experiments is to compare the performance of
the two preconditioners: AGKS and MG. The domain is a unit square
whose coarsest level triangulation consists of $32$ triangles. We
consider the case of a single highly-bending island located at the
region $[1/4,2/4] \times [1/4,2/4]$ consisting of 2 coarsest level
triangles. For an extension to the case of multiple disconnected
islands, one can refer to~\cite[Sections 3 and 4]{AGKS:2007}. The
implementations of the HCT and Morley discretizations are based on
Pozrikidis' software provided in \cite{pozrikidis2005_book}. The
problem sizes of the HCT and Morley discretizations are $131$, $451$,
$1667$, $6403$ and $81$, $289$, $1089$, $4225$ for levels $1, 2, 3$,
and $4$, respectively.
We denote the norm of the relative residual at iteration $i$ by $rr^{(i)}$:
\begin{equation*}
rr^{(i)} := \frac{\|r^{(i)}\|_2}{\|r^{(0)}\|_2},
\end{equation*}
where $r^{(i)}$ denotes the residual at iteration $i$ with a stopping
criterion of $rr^{(i)} \leq 10^{-7}.$ In Tables \ref{ahs}--\ref{mmg},
the preconditioned conjugate gradient iteration counts and the average
reduction factors are reported for combinations of preconditioner,
smoother type, and number of smoothing iterations. The average
reduction factor of the residual is defined as:
\begin{equation*}
\left( rr^{(i)}\right )^{1/i}.
\end{equation*}
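For concreteness, the stopping test and the average reduction factor can be computed from a residual history as follows (the residual norms below are made-up illustrative numbers, not data from our experiments):

```python
import numpy as np

# residual norms ||r^(i)||_2 from a hypothetical PCG run (illustrative numbers)
res = np.array([4.0e2, 1.0e2, 2.4e1, 6.0e0, 1.5e0, 3.6e-1, 2.8e-5])

rr = res / res[0]                     # relative residuals rr^(i)
i = len(res) - 1                      # last iteration index
avg_reduction = rr[i] ** (1.0 / i)    # average reduction factor (rr^(i))^(1/i)
converged = bool(rr[i] <= 1e-7)
```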
We enforce an iteration bound of $60$. If the method
seems to converge slightly beyond this bound, we denote
it by $60^+$, whereas stalling is denoted by $\infty$.
We use the Galerkin variational approach to construct the coarser level
algebraic systems. The multigrid preconditioner MG is derived from the
implementation by Aksoylu, Bond, and Holst~\cite{AkBoHo03}. We
employ a V(1,1)-cycle with point symmetric Gauss-Seidel (sGS) and
point Gauss-Seidel (GS) smoothers. A direct solver is used for the
coarsest level.
By exploiting the fact that $S_\infty$ in \eqref{limitingS} is
only an LRP of $K_{LL}$, we can build robust
preconditioners for $S_\infty$ in \eqref{MainPrec1} via standard
multigrid preconditioners. Equation \eqref{limitingS} implies that
\begin{equation*}
S_\infty = K_{LL} - v \eta^{-1}v^T,
\end{equation*}
where $v := K_{LH} e_H$. If $M_{LL}$ denotes a
standard multigrid V-cycle for $K_{LL}$, we can construct an
efficient and robust preconditioner $\tilde{S}^{-1}$ for $S_\infty$
using the Sherman-Morrison-Woodbury formula, i.e.
\begin{equation} \label{Sherman-Morrison}
\tilde{S}^{-1} \ := \ M_{LL} \ + \
M_{LL} v ~(\eta - v^T M_{LL} v)^{-1} \, v^T M_{LL}.
\end{equation}
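The action of \eqref{Sherman-Morrison} can be sketched as follows, with the multigrid V-cycle $M_{LL}$ abstracted as a callable; the names are illustrative, and $M_{LL}v$ is precomputed once in the setup phase:

```python
def make_smw_preconditioner(apply_M, v, eta):
    """Return r -> Stilde^{-1} r, where
    Stilde^{-1} = M + M v (eta - v^T M v)^{-1} v^T M.

    apply_M : callable approximating K_LL^{-1} (e.g., one multigrid V-cycle)
    v, eta  : data of the rank-one (low-rank) perturbation
    """
    Mv = apply_M(v)  # precomputed and stored during the setup phase
    denom = eta - sum(vi * mi for vi, mi in zip(v, Mv))

    def apply_Stilde(r):
        y = apply_M(r)  # the single V-cycle needed per iteration
        coef = sum(vi * yi for vi, yi in zip(v, y)) / denom
        return [yi + coef * mi for yi, mi in zip(y, Mv)]

    return apply_Stilde
```

If $M_{LL}$ were the exact inverse of $K_{LL}$, the Sherman-Morrison-Woodbury identity makes this preconditioner the exact inverse of $S_\infty = K_{LL} - v\eta^{-1}v^T$, which gives a convenient correctness check.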
Note also that we can precompute and store $M_{LL} v$ during the setup
phase. This means that we only need to apply the multigrid V-cycle
$M_{LL}$ once per iteration. Therefore, the following practical
version of preconditioner \eqref{MainPrec1} is used in the
implementation:
\begin{eqnarray}
\tilde{B}_{AGKS} & := &
{ \left[ \begin{array}{cc} I_{HH} & - K_{HH}^{\infty^\dagger} K_{HL} \\
0 & I_{LL}\end{array}\right]}
{ \left[ \begin{array}{cc} M_{HH} & 0 \\
0& \tilde{S}^{-1} \end{array}\right]}
{ \left[ \begin{array}{cc} I_{HH} & 0 \\
- K_{LH}K_{HH}^{\infty^\dagger} & I_{LL}
\end{array}\right]} \label{practicalAGKS}.
\end{eqnarray}
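In practice, applying \eqref{practicalAGKS} amounts to three successive block operations. The following sketch (all blocks abstracted as callables, with hypothetical names) illustrates the order of application:

```python
def apply_agks(rH, rL, apply_MHH, apply_Stilde,
               apply_KHHdag_KHL, apply_KLH_KHHdag):
    """Apply the factored preconditioner B~_AGKS to a residual split as
    (rH, rL) over the DOF of Omega_H and Omega_L."""
    # lower block factor: [I, 0; -K_LH K_HH^dag, I]
    sL = [a - b for a, b in zip(rL, apply_KLH_KHHdag(rH))]
    # block-diagonal factor: [M_HH, 0; 0, Stilde^{-1}]
    tH = apply_MHH(rH)
    tL = apply_Stilde(sL)
    # upper block factor: [I, -K_HH^dag K_HL; 0, I]
    uH = [a - b for a, b in zip(tH, apply_KHHdag_KHL(tL))]
    return uH, tL
```

With the coupling blocks set to zero and identity diagonal solvers, the preconditioner reduces to the identity, which is a minimal sanity check of the block bookkeeping.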
We construct two different multilevel hierarchies for
multigrid preconditioners $M_{HH}$ in \eqref{practicalAGKS} and
$M_{LL}$ in \eqref{Sherman-Morrison} for DOF corresponding to
$\Omega_H$ and $\Omega_L$, respectively. For prolongation, linear
interpolation is used as in \cite{Braess.D;Peisker.P1987}.
The prolongation matrices $P_{HH}$ and $P_{LL}$ are extracted from the
prolongation matrix for the whole domain $\Omega$ following the block
structure of \eqref{2x2blockSys}:
\begin{equation*} \label{2x2blockSysProlongation}
P =
\left[
\begin{array}{ll}
P_{HH} & P_{HL} \\ P_{LH} & P_{LL}
\end{array}
\right].
\end{equation*}
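The extraction of $P_{HH}$ and $P_{LL}$ amounts to restricting $P$ to the row and column index sets associated with $\Omega_H$ and $\Omega_L$; schematically (dense list-of-lists matrices and hypothetical index sets, for illustration only):

```python
def extract_block(P, rows, cols):
    """Restrict a dense (list-of-lists) matrix to given row/column index sets."""
    return [[P[i][j] for j in cols] for i in rows]
```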
\input{tablesForPaper2_final}
As emphasized in our preceding paper~\cite{AGKS:2007}, AGKS can be
used purely as an algebraic preconditioner. Therefore, the standard
multigrid preconditioner constraint that the coarsest level mesh
resolves the boundary of the island is automatically
eliminated. However, for a fair comparison, we enforce the coarsest
level mesh to have that property.
We do not observe convergence improvement when a subdomain deflation
strategy based on the smallest eigenvalues is used as in the diffusion
equation case~\cite{AkYe2009}. The eigenvectors of the Neumann matrix,
$e_H$ in \eqref{defn:e_H}, cannot approximate the eigenvectors
corresponding to the smallest eigenvalues of $K_{HH}$ which are of
$\mathcal{O}(1)$ (see Figure \ref{fig:spectra}) since the remainder
matrix $R$ in \eqref{K_HH_decomp} is of $\mathcal{O}(10^4)$.
Therefore, a deflation strategy utilizing $e_H$ will not necessarily
guarantee deflation of the smallest eigenvalues of $K_{HH}$ in the
biharmonic case.
We first observe that the Morley discretization provides faster
convergence for both preconditioners. Then, we have the following
results regarding the effect of number of smoothing iterations on the
convergence behaviour. The convergence of MG depends heavily on the
number of smoothing iterations: the more smoothing iterations, the
faster the convergence. For the HCT discretization,
AGKS requires more than a single smoothing iteration for convergence;
see Tables \ref{ahs} and \ref{ahg}. However, for the Morley
discretization, AGKS converges even with a single smoothing
iteration; see Tables \ref{ams} and
\ref{amg}. The choice of 5 smoothing iterations is sufficient for
AGKS to reach $h$-robustness and its peak performance. Hence, we can
conclude that AGKS clearly enjoys $h$-robustness. In contrast, MG is
not $h$-robust regardless of the $m$ value and the smoothing number;
see Tables \ref{mhs}, \ref{mhg}, \ref{mms}, and \ref{mmg}. MG is
totally ineffective as the problem size increases for both
discretizations, and more obviously for HCT.
Finally, we report the $m$-robustness results. The loss of
$m$-robustness of MG can be observed consistently for all $m$ values;
see Tables \ref{mhs}, \ref{mhg}, \ref{mms}, and \ref{mmg}. The AGKS
preconditioner becomes more effective with increasing $m$ and reaches
its peak performance by maintaining an optimal iteration count for all
$m \geq 10^5$. This indicates that $m \geq 10^5$ corresponds to the
asymptotic regime. Even increasing the $m$ value from $10^2$ to
$10^3$ reduces the iteration count significantly, a clear sign of
close proximity to the asymptotic regime. In addition, the AGKS
outperforms MG even for $m=1$. Consequently, for both discretizations,
we infer that AGKS is $m$-robust.
\section{Introduction}
White dwarfs are well-known compact objects with typical masses around a solar mass and sizes comparable to that of the Earth. Composed mainly of carbon, they counteract the gravitational pull with the pressure of the degenerate electron gas, while the carbon nuclei provide the principal contribution to the mass.
Observations estimate the surface magnetic fields of magnetized white dwarfs (MWDs) to lie in the range of $10^6\,$G to $10^9\,$G\cite{Terada:2007br,Schmidt:1995eh}, whereas internal magnetic fields are determined indirectly using theoretical models based on macroscopic and microscopic analyses. Moreover, there are observations of superluminous thermonuclear supernovae, whose progenitors could be
super-Chandrasekhar WDs \mbox{($M_{\textrm{\scriptsize{WD}}}>1.44M_{\odot}$)} \cite{Howell:2006vn}. Consequently, in Refs.~\cite{Das:2012ai,Das:2013gd,Das:2014ssa} it was proposed to explain their existence by the presence of strong magnetic fields above $10^{13}\,$G.
A magnetic field acting on a fermions system breaks the SO(3) symmetry, giving rise to an anisotropy in the equations of state (EoS) \cite{2000PhRvL..84.5261C}.
Furthermore, the anisotropy of the energy momentum tensor caused by the magnetic field can be included by considering an axi-symmetric and poloidal strong magnetic field\cite{PhysRevD.92.083006}, which allows one to model rotating magnetized white dwarfs in a self-consistent way by solving the Einstein-Maxwell equations\cite{Bonazzola:1993zz,Bocquet:1995je}.
The presence of anisotropic pressures suggests that introducing an axially symmetric metric to solve the Einstein equations is crucial. Previously, we used a cylindrical metric and obtained a maximum bound of $1.5\!\times\!10^{13}$ G for the magnetic field of stable MWDs \cite{1674-4527-15-10-1735}, ruling out the possibility of super-Chandrasekhar WDs with strong magnetic fields.
Rotation is another plausible cause for an increase in the mass of white dwarfs.
In order to investigate this issue, we solve the rotating structure equations emerging from spherical symmetry with Hartle's method (even though a spherical metric is no longer strictly adequate for anisotropic EoS). This allows us to determine whether the deformation of the rotating stars accounts for stable RMWDs with mass above $1.44M_{\odot}$.
With that aim, we first describe the equilibrium of RMWDs by solving the Einstein equations in section \ref{sec:srecs}, considering the two pressures (one parallel and the other perpendicular to the magnetic field) independently. Then, in section \ref{sec:res} we present numerical results, and finally, in section \ref{sec:concl}, our conclusions.
\section{Slowly rotating structure equations for RMWDs} \label{sec:srecs}
When discussing the structure of compact objects, both local and global properties of the matter involved must be analyzed. The former are described by an equation of state (EoS), while the latter comprise the dynamical response of matter at large scales to, for instance, gravity and rotation. In this paper, we consider the magnetized equations of state obtained in Refs.~\cite{ASNA:ASNA201512236,2016arXiv160100832A} for carbon/oxygen WDs whose matter is composed of particles under the action of a constant magnetic field, which leads to a splitting of the pressure into a component parallel to the magnetic field and a perpendicular one. The values of the magnetic field are chosen below the $10^{13}$ G threshold mentioned before.
Regarding the structure equations for a slowly rotating compact object, we take the angular velocity $\Omega$ of the star to be uniform and sufficiently slow that $R^3\Omega^2 \ll M$, where $M$ and $R$ are the mass and radius of the non-rotating WD, respectively. The rotation then induces small changes in the pressure $P$, the energy density $E = \rho c^2$ and the gravitational field with respect to the corresponding quantities of the static configuration. These changes can be treated as perturbations of the non-rotating solution.
Hence, treating a star as rigidly and slowly rotating amounts to calculating its equilibrium properties as small perturbations of the static configuration. Introducing new coordinates $(r,\theta,\phi)$, where $r(R,\theta) = R + \xi (R, \theta)$ takes into account deviations from spherical symmetry, the metric of the rotating configuration becomes \cite{1967ApJ...150.1005H,1968ApJ...153..807H}
\begin{eqnarray} \nonumber
ds^2 & = & -\, e^{\nu}\left\{1+2\left[h_0 + h_2 P_2(\cos \theta)\right]\right\} dt^2
+ \frac{1+2\left[m_0+m_2 P_2(\cos \theta) \right]\left[r-2M \right]^{-1}}{1- 2M/r} \, dr^2 \\[1pt]
&& +\, r^2\left[1+2(v_2-h_2)P_2(\cos \theta)\right] \left[d\theta^2 + \sin^2\theta \left(d\phi - \omega dt\right)^2\right] + O(\Omega^3).
\label{eq:metric}
\end{eqnarray}
Here $P_2(\cos \theta)$ is the Legendre polynomial of second order, $e^{\nu}$ and \mbox{$e^{\lambda} = \left[1 - 2M(r)/r\right]^{-1}$} are the static metric functions, and \mbox{$\omega (r) = \bar \omega (r) + \Omega$} is the angular velocity of the local inertial frame, where $\bar \omega (r)$ is the fluid's angular velocity relative to the local inertial frame. The functions $h_0\!=\! h_0(r)$, $h_2\!=\! h_2(r)$, $v_2\!=\! v_2(r)$, and mass perturbation factors $m_0\!=\! m_0(r)$ and $m_2\!=\! m_2(r)$ are all proportional to $\Omega^2$. Besides, we must define the pressure perturbation factors $p_0^\ast$ and $p_2^\ast$ on the order of $\Omega^2$, which modify the energy-momentum tensor \cite{1968ApJ...153..807H}.
Once the Einstein equations are computed with the metric (\ref{eq:metric}), keeping perturbations up to $O(\Omega^2)$, the structure of the perturbed rotating stars is described by the static Tolman-Oppenheimer-Volkoff equations for the pressure $P$, the mass and $\nu$, in addition to the equations for $\bar{\omega}$, $m_0$, $p_0^\ast$, $h_2$, $v_2$, $m_2$ and $p_2^\ast$. The system to integrate outward is
\begin{eqnarray}
\frac{dP}{dr} &=& -\frac{(E+P)(M+4\pi r^3 P)}{r(r-2M)} , \label{TOV1} \\[1pt]
\frac{dM}{dr} &=& 4\pi r^2 E , \label{TOV2} \\[1pt]
\frac{d\nu}{dr} &=& -\frac{2}{E+P}\frac{dP}{dr} , \label{TOV3} \\[1pt]
\frac{d\bar{\omega}}{dr} & =& \kappa, \\[1pt]
%
\frac{d \kappa }{dr}& =& \frac{4\pi r (E+P)(r \kappa+4\,\bar \omega)}{r-2M} - 4 \frac{\kappa}{r} , \\[1pt]
\frac{dm_0}{dr} & = & 4\pi r^2 (E+P)\frac{dE}{dP} p_0^\ast+ \frac{r^3e^{-\nu}}{3}\left(r-2M\right) \left[ \frac{\kappa^2}{4} + \frac{8\pi r(E+P) \,\bar\omega^2 }{r-2M}\right], \\[1pt]
\frac{dp_0^\ast}{dr}& =& \!-\frac{m_0(1+8\pi r^2P)}{(r-2M)^2} \!-\! \frac{4\pi r^2 (E+P)}{r-2M}p_0^\ast \!+ \!\frac{r^3e^{-\nu}}{3} \left[ \frac{\kappa^2}{4} +\! \frac{\bar \omega^2}{r}\! \left(\!\frac{2}{r}-\frac{d\nu}{dr}\!\right)\!+\!\frac{2\,\kappa\, \bar \omega}{r}\!\right]\!, \;\\[1pt]
\frac{dv_2}{dr} & = & -h_2 \frac{d\nu}{dr}+\frac{r^3e^{-\nu}}{3}\left(r-2M\right) \left[\frac{2}{r}+\frac{d\nu}{dr}\right]\left[ \frac{\kappa^2}{4} + \frac{4\pi r(E+P) \,\bar\omega^2 }{r-2M}\right], \\[1pt] \nonumber
\frac{dh_2}{dr} & = & h_2 \! \left[-\frac{d\nu}{dr} \!+ \! \frac{8\pi r^3(E+P)-4M}{r^2(r-2M)}\left(\!\frac{d\nu}{dr}\!\right)^{\!\!-1} \right] - \frac{4 v_2}{r(r-2M)}\left(\!\frac{d\nu}{dr}\!\right)^{\!\!-1} \\[1pt] \nonumber
&& + \, \frac{r^3 e^{-\nu}}{3} \left(\!\frac{d\nu}{dr}\!\right)^{\!\!-1}\! \left\{
\frac{\kappa^2}{4} \left[\left(r-2M\right)\left(\!\frac{d\nu}{dr}\!\right)^{\!2}-\frac{2}{r}\right] \right.\\[1pt]
&& \, \left. + \,\frac{4\pi r(E+P)\,\bar\omega^2}{r-2M} \left[(r-2M)\left(\!\frac{d\nu}{dr}\!\right)^{\!2} +\frac{2}{r}\right] \right\},
\end{eqnarray}
alongside with expressions
\begin{eqnarray}
m_2 &=& (r-2 M) \left\{\frac{1}{6}r^3e^{-\nu}\left[(r-2M)\,\kappa^2 + 16 \pi r (E+P)\, \bar \omega^2 \right]-h_2\right\} , \\
p_2^\ast &=& - \frac{1}{3}r^2e^{-\nu} - h_2\,.
\end{eqnarray}
These equations must be solved with the proper boundary conditions. This means choosing central energy densities within the typical range for WDs. The pressure is maximum at the center of the star and must go to zero at the surface. Hence, the integration is carried out until $P$ vanishes. The value of $\bar \omega$ at the center is arbitrary,
and the rest of the variables are initially set to zero.
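As an illustration of the outward integration, the following self-contained sketch integrates the static subsystem \eqref{TOV1}--\eqref{TOV2} with a classical RK4 scheme for a toy polytropic EoS $P=K\rho^\Gamma$ in geometrized units ($G=c=1$); the parameter values are purely illustrative and do not correspond to a magnetized WD EoS:

```python
import math

K_POLY, GAMMA = 1.0, 2.0  # toy polytrope, geometrized units (G = c = 1)

def eos_pressure(rho):
    return K_POLY * rho ** GAMMA

def eos_density(P):
    return (P / K_POLY) ** (1.0 / GAMMA) if P > 0.0 else 0.0

def tov_rhs(r, P, M):
    """Right-hand sides of dP/dr and dM/dr; E = rho (rest mass only)."""
    if P <= 0.0:
        return 0.0, 0.0
    E = eos_density(P)
    dP = -(E + P) * (M + 4.0 * math.pi * r ** 3 * P) / (r * (r - 2.0 * M))
    dM = 4.0 * math.pi * r ** 2 * E
    return dP, dM

def integrate_tov(rho_c, dr=1e-3):
    """Integrate outward from the center until the pressure vanishes."""
    r, P, M = dr, eos_pressure(rho_c), 0.0
    while P > 1e-10 * eos_pressure(rho_c):
        k1p, k1m = tov_rhs(r, P, M)
        k2p, k2m = tov_rhs(r + dr / 2, P + dr * k1p / 2, M + dr * k1m / 2)
        k3p, k3m = tov_rhs(r + dr / 2, P + dr * k2p / 2, M + dr * k2m / 2)
        k4p, k4m = tov_rhs(r + dr, P + dr * k3p, M + dr * k3m)
        P += dr * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
        M += dr * (k1m + 2 * k2m + 2 * k3m + k4m) / 6.0
        r += dr
    return r, M  # stellar radius R and mass M(R)
```

The perturbation functions of Hartle's method would be integrated alongside this static subsystem with the boundary conditions described above.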
The total angular momentum is $J=R^4\kappa(R)/6$, the angular velocity of the rotating WD is \mbox{$\Omega = \bar \omega (R) + 2J/R^{3}$}, the moment of inertia is $I =J/\Omega$, and the total mass is
\begin{small}
\begin{equation}
M_T = M(R) + m_0(R) + \frac{J^2}{R^3}\,.
\end{equation}
\end{small}
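A minimal sketch of how these global quantities follow from the surface values of the integrated functions (the inputs below are hypothetical, in the same geometrized units):

```python
def global_quantities(R, kappa_R, omega_bar_R, M_R, m0_R):
    """Angular momentum J, angular velocity Omega, moment of inertia I,
    and total mass M_T from the surface values of the integration."""
    J = R ** 4 * kappa_R / 6.0
    Omega = omega_bar_R + 2.0 * J / R ** 3
    I = J / Omega
    M_T = M_R + m0_R + J ** 2 / R ** 3
    return J, Omega, I, M_T
```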
Also, we compute the quadrupolar momentum \cite{1967ApJ...150.1005H,1968ApJ...153..807H}
\begin{small}
\begin{equation}
Q = \frac{8}{5} M^3 \frac{h_2+v_2-\tfrac{J^2}{M R^{3^{\phantom 1}\!\!}}}{ \frac{2M}{\sqrt{R(R-2M)}^{\phantom A} } Q_2^{{\phantom 2}1} \left(\tfrac{R}{M} -1\right) + Q_2^{{\phantom 2}2}\left(\tfrac{R}{M} -1\right)} + \frac{J^2}{M},
\end{equation}
\end{small}
\noindent where $Q_m^{{\phantom m}n}(x)$ is the associated Legendre function of the second kind. The rotational deformation of the WD can be depicted through the displacement of the surface of constant density at radius $R$ in the static configuration to
\begin{small}
\begin{eqnarray}
&& r(R,\theta) = R + \xi_0 (R) + \xi_2 (R) P_2(\cos \theta), \\[1pt]
&& \xi_0 = - p_0^\ast (E+P) \! \left(\frac{dP}{dr}\right)^{\!-1}\!, \quad
\xi_2 = - p_2^\ast (E+P) \! \left(\frac{dP}{dr}\right)^{\!-1}\!
\end{eqnarray}
\end{small}
when rotating. The eccentricity is
\begin{small}
\begin{equation}
\varepsilon = \sqrt{1-\left(\frac{R_p}{R_{eq}}\right)^2},
\end{equation}
\end{small}
with
\begin{eqnarray}
R_p &=& r (R, 0) = R + \xi_0 (R) + \xi_2 (R), \\[1pt]
R_{eq} &=& r(R,\tfrac{\pi}{2})=R + \xi_0 (R) - \frac{\xi_2 (R)}{2}.
\end{eqnarray}
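Given the static radius $R$ and the perturbation amplitudes $\xi_0(R)$, $\xi_2(R)$, the deformed radii and the eccentricity follow directly; the short sketch below uses hypothetical numbers, not values from our solutions:

```python
import math

def deformed_radii(R, xi0, xi2):
    """Polar and equatorial radii from the displacement xi0 + xi2 P2(cos th)."""
    R_p = R + xi0 + xi2           # theta = 0:    P2(1)  = 1
    R_eq = R + xi0 - xi2 / 2.0    # theta = pi/2: P2(0) = -1/2
    return R_p, R_eq

def eccentricity(R_p, R_eq):
    return math.sqrt(1.0 - (R_p / R_eq) ** 2)
```

For an oblate configuration $\xi_2 < 0$, so that $R_p < R_{eq}$ and $0 < \varepsilon < 1$.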
\section{Results and discussion} \label{sec:res}
Fig.~\ref{fig:MRrho} shows the behavior of the mass for the static and the rotating configurations. In the left panel, we superpose the curves corresponding to $M$ versus $R$, $M_T$ versus $R_p$ and $M_T$ versus $R_{eq}$. In the right panel, both masses are shown as a function of the central density. All plots include the non-magnetic configuration as well as the solutions for the parallel and perpendicular pressures corresponding to $B=10^{12}$ G.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{MR3.eps}
\includegraphics[width=0.49\linewidth]{Mvsrho.eps}
\caption{Left panel: Mass versus radius for the static configuration and total mass as a function of equatorial and polar radii. Right panel: Static and total masses as a function of central density.}\label{fig:MRrho}
\end{figure}
As described in the previous section, slow rotation increases the mass of the stars. However, this increment diminishes as the density increases, so that the total mass remains below the Chandrasekhar mass even for the higher-density solutions, at least for magnetic field values below the Schwinger critical field, for which our EoS are valid. Furthermore, the precision of our stable solutions increases with the central density of the star.
In the left panel of Fig.~\ref{fig:IQEx} we present the moment of inertia $I$ and the quadrupolar momentum $Q$ as functions of density for $B=0$ and $B= 10^{12}$ G, for both parallel and perpendicular pressures. The right panel shows the eccentricity, also as a function of $\rho_c$. The changes of the magnetized solutions with respect to the non-magnetic ones are not substantial. Contrary to the behavior of $I$ and $Q$, the eccentricity decreases as the energy density increases.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{QI.eps}
\includegraphics[width=0.49\linewidth]{evsrho.eps}
\caption{Left panel: Moment of inertia and quadrupolar momentum as a function of central density. Right panel: Eccentricity versus central density.}
\label{fig:IQEx}
\end{figure}
Additionally, the facts that $0\!<\!\varepsilon\!<\!1$, $R_p\! <\! R_{eq}$ and $Q\!>\!0$ imply that the solutions correspond to oblate WD configurations. This can be easily pictured from Fig.~\ref{fig:Sph}, where we have plotted the polar radius versus the equatorial radius in the left panel, and, in the right panel, we have constructed a parametric surface of the non-magnetized solution at $\rho_c =2.49547\times 10^8$ g cm$^{-3}$ using the corresponding values of $R_{eq}$ and $R_p$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.51\linewidth]{ReqvsRp2.eps}\hspace*{0.01\linewidth}
\includegraphics[width=0.4\linewidth]{SpheroidsW_B0_rho2_49547E8.eps}
\caption{Left panel: Polar radius versus equatorial radius, with density increasing towards left. The orange point corresponds to the $B=0$ solution at $\rho_c =2.49547\times 10^8$ g cm$^{-3}$. Right panel: representation of the oblate spheroid shape of the WD associated to the orange point in the graph of the left panel.}
\label{fig:Sph}
\end{figure}
\section{Conclusions} \label{sec:concl}
We have implemented an algorithm to study slowly rotating MWDs using the formalism proposed by Hartle for magnetized equations of state. Numerical solutions have been computed for the total mass of the rotating star as well as its equatorial and polar radii, together with the static counterpart. The moment of inertia, the quadrupolar momentum and the eccentricity were analyzed, confirming that rotation deforms the stars, which become oblate spheroids. In all cases, results were obtained for the non-magnetic configuration and for a fixed value of $10^{12}$ G, lower than $10^{13}$ G, a critical field beyond which solutions are unstable. Our results for non-magnetized slowly rotating WDs are in agreement with Refs.~\cite{Boshkayev:2012bq,doi:10.1093/mnras/stw2614}. Also, the stable slowly rotating solutions obtained are bounded by the condition of applicability of Hartle's method, which is satisfied more accurately for WDs of higher densities.
Taking into account the splitting of the pressure, it would be interesting to investigate an alternative method to the one discussed here that includes both the parallel and perpendicular pressures at the same time. This could give insight into a more precise description of such anisotropic WDs, and would allow us to compare the deformation of the stars due to the magnetic field with the rotational deformation.
\section*{Acknowledgements}
D.A.T, D.M.P and A.P.M have been supported by the grant CB0407 and the ICTP Office of External Activities through NET-35. A.P.M thanks Consejo Nacional de Ciencia y tecnologia (CONACYT) for the support with the sabbatical Grant 264150 at ICN-UNAM, M\'exico, where this work was developed. D.M.P has been also supported by a DGAPA-UNAM fellowship.
\section{Appendix}
\subsection{Proof of Lemma \ref{lemma:update-eqn}}\label{sec:pf:lemma:update-eqn}
\input{lemma_update_eqn_proof}
\subsection{Proof of Theorem \ref{thm:x_tilde}}\label{sec:pf:thm:x_tilde}
\input{thm_xtilde_proof}
\iftr
\subsection{An Illustrative Example}\label{sec:example1by2}
\input{example1by2}
\fi
\iftr
\subsection{Proof of Lemma \ref{lemma:x_tilde_t}}\label{sec:pf:thm:x_tilde_t}
\input{lemma_xtilde_t_proof}
\fi
\iftr
\subsection{Proof of Lemma \ref{lem:wisharts-nice-dream}}\label{sec:pf:lem:wisharts-nice-dream}
\input{lemma_wisharts_nice_dream_proof}
\fi
\iftr
\subsection{Proof of Lemma \ref{lem:A^+A}}\label{sec:pf:lem:A^+A}
We first focus on $\matr{G}= \matr{A}^\mathrm{T}\Bar{\matr{A}}^\mathrm{T}\Bar{\matr{A}}\matr{A}$.
Using the definition of $\Bar{\matr{A}}$ in \eqref{eqn:def:Amat}, we express the product $\Bar{\matr{A}}^\mathrm{T}\Bar{\matr{A}}$ as follows
%
\begin{equation}
\Bar{\matr{A}}^\mathrm{T}\Bar{\matr{A}} = \sum_{k=1}^K (\matr{A}_k\matr{A}_k^\mathrm{T})^+,
\end{equation}
%
where we have used the following identities for the pseudoinverse: $(\matr{M}^+)^\mathrm{T} = (\matr{M}^\mathrm{T})^+$ and $(\matr{M}^\mathrm{T})^+\matr{M}^+ = (\matr{M}\Mmat^\mathrm{T})^+$ for any matrix $\matr{M}$.
Hence, we have
\begin{align}
\matr{G} =\matr{A}^\mathrm{T}\Bar{\matr{A}}^\mathrm{T}\Bar{\matr{A}}\matr{A} = \matr{A}^\mathrm{T} \sum_{k=1}^K (\matr{A}_k\matr{A}_k^\mathrm{T})^+ \matr{A}
\end{align}
The matrix $\matr{G}$ and hence $\, \mathbb E\,[\matr{G}]$ can be seen as a matrix consisting of $K\times K$ blocks of varying sizes. The $(k,j)$\textsuperscript{th} block of $\, \mathbb E\,[\matr{G}]$ ($k$\textsuperscript{th} horizontal, $j$\textsuperscript{th} vertical block) is given by
%
\begin{align}\label{eqn:Gblock}
\, \mathbb E\,[\matr{A}_k^\mathrm{T}\sum_{i=1}^K(\matr{A}_i\matr{A}_i^\mathrm{T})^+\matr{A}_j].
\end{align}
%
We now consider the cases with $k \neq j$ and $k= j$, separately.
For $k\neq j$, \eqref{eqn:Gblock} can be written as
\begin{align}
\begin{split}
&\, \mathbb E\,[\matr{A}_k^\mathrm{T}\sum_{i=1}^K(\matr{A}_i\matr{A}_i^\mathrm{T})^+\matr{A}_j]\\
&= \, \mathbb E\,[\matr{A}_k^\mathrm{T}(\matr{A}_k\matr{A}_k^\mathrm{T})^+\matr{A}_j + \matr{A}_k^\mathrm{T}(\matr{A}_j\matr{A}_j^\mathrm{T})^+\matr{A}_j \\
&\quad + \matr{A}_k^\mathrm{T}\sum_{\substack{i=1\\i\neq k\\i\neq j}}^K(\matr{A}_i\matr{A}_i^\mathrm{T})^+\matr{A}_j]
\end{split}\label{eqn:crazy_sum}\\
&= \zerobf
\end{align}
%
Here we have used the fact that the rows of $\matr{A}$ are i.i.d. $\Gauss{p}$ vectors, hence the matrices $\matr{A}_j$ and $\matr{A}_i$ are statistically independent for $i\neq j$.
For the second case, $k=j$, i.e., the $(k,k)$\textsuperscript{th} block is given by
%
\begin{equation}
\, \mathbb E\,[\matr{A}_k^\mathrm{T}\sum_{i=1}^K(\matr{A}_i\matr{A}_i^\mathrm{T})^+\matr{A}_k].
\end{equation}
We now consider the above expression together with the terms including $\matr{z}$. In particular, partitioning the vector $\matr{z}$ as $\matr{z}=[\matr{z}_1;\,\cdots;\,\matr{z}_K]$ where $\matr{z}_k\in\mathbb{R}^{p_k\times 1}$, we obtain
%
\begin{align}\label{eqn:double_sum_zAz}
\begin{split}
&\matr{z}^\mathrm{T}\, \mathbb E\,[\matr{A}^\mathrm{T}\Bar{\matr{A}}^\mathrm{T}\Bar{\matr{A}}\matr{A}]\matr{z} \\ &=\sum_{k=1}^K\left(\matr{z}_k^\mathrm{T}\sum_{i=1}^K\, \mathbb E\,\left[ \matr{A}_k^\mathrm{T}(\matr{A}_i\matr{A}_i^\mathrm{T})^+\matr{A}_k\right]\matr{z}_k\right).
\end{split}
\end{align}
To evaluate \eqref{eqn:double_sum_zAz}, we will first derive expressions for the terms with $k=i$ and then for the terms with $k\neq i$.
$k=i$: Note that for a matrix $\matr{M}$ and its pseudoinverse $\matr{M}^+$, we have $\matr{M}^\mathrm{T}(\matr{M}\Mmat^\mathrm{T})^+\matr{M}=\matr{M}^+\matr{M}$.
Hence, we obtain
%
\begin{equation}\label{eqn:AT(AAT)+A=A+A}
\matr{z}_k^\mathrm{T}\, \mathbb E\,[\matr{A}_k^\mathrm{T}(\matr{A}_k\matr{A}_k^\mathrm{T})^+\matr{A}_k]\matr{z}_k = \matr{z}_k^\mathrm{T}\, \mathbb E\,[\matr{A}_k^+\matr{A}_k]\matr{z}_k.
\end{equation}
%
By combining \eqref{eqn:AT(AAT)+A=A+A} with Lemma \ref{lem:wisharts-nice-dream}, we obtain
%
\begin{equation}\label{eqn:k=i}
\matr{z}_k^\mathrm{T}\, \mathbb E\,[\matr{A}_k^\mathrm{T}(\matr{A}_k\matr{A}_k^\mathrm{T})^+\matr{A}_k]\matr{z}_k = \norm{\matr{z}_k}^2 \frac{r_{\min,k}}{p_k},
\end{equation}
%
where $r_{\min,k} = \min\{n,p_k\}$.
$k\neq i$: Given that the rows of $\matr{A}$ are i.i.d. $\Gauss{p}$ vectors, the columns are also i.i.d.
Thus, $\matr{A}_k$ and $\matr{A}_i$ are statistically independent for $k\neq i$ and:
%
\begin{equation}
\, \mathbb E\,[\matr{A}_k^\mathrm{T}(\matr{A}_i\matr{A}_i^\mathrm{T})^+\matr{A}_k] = \, \mathbb E\,[\matr{A}_k^\mathrm{T}\, \mathbb E\,[(\matr{A}_i\matr{A}_i^\mathrm{T})^+]\matr{A}_k].
\end{equation}
%
Following the notation of \cite{cook_mean_2011}, $\matr{A}_i\matr{A}_i^\mathrm{T}$ follows the $n$-variate Wishart distribution with $p_i$ degrees of freedom: $W_n(\eye{n},p_i)$.
The pseudoinverse $(\matr{A}_i\matr{A}_i^\mathrm{T})^+$ follows the inverse Wishart distribution if $p_i > n + 1 $, and the generalized inverse Wishart distribution if $p_i < n - 1$ \cite{cook_mean_2011}.
From \cite{cook_mean_2011} we have the following expression for $\, \mathbb E\,[(\matr{A}_i\matr{A}_i^\mathrm{T})^+]$:
%
\begin{equation}\label{eqn:EAATp}
\, \mathbb E\,[(\matr{A}_i\matr{A}_i^\mathrm{T})^+]= \gamma_i' \eye{n},
\end{equation}
%
where
%
\begin{numcases}{\gamma_i'=}
\tfrac{1}{p_i-n-1} &\hspace{-15pt} for $p_i > n + 1$, \\
\tfrac{p_i}{n(n - p_i - 1)} &\hspace{-15pt} for $p_i < n - 1$, \\
+\infty &\hspace{-15pt} for $p_i \in \{n-1, n, n+1\}.$
\end{numcases}
Note that the more restrictive conditions on $p_i,n$ in \cite{cook_mean_2011} are due to the fact that Prop.~2.1 and Thm.~2.1 of \cite{cook_mean_2011} also present the second-order moments, for which more restrictive conditions are needed.
Hence, we have
%
\begin{equation}
\, \mathbb E\,[\matr{A}_k^\mathrm{T}(\matr{A}_i\matr{A}_i^\mathrm{T})^+\matr{A}_k] = \gamma_i'\, \mathbb E\,[\matr{A}_k^\mathrm{T}\matr{A}_k].
\end{equation}
The columns of $\matr{A}_k$ are i.i.d. standard Gaussian of dimension $n\times 1$, so we have
%
\begin{equation}\label{eqn:kneqi}
\gamma_i'\, \mathbb E\,[\matr{A}_k^\mathrm{T}\matr{A}_k] = \gamma_i' n \eye{p_k} = \gamma_i\eye{p_k},
\end{equation}
%
where the $\gamma_i$'s are as defined in \eqref{eqn:gamma_main}.
Combining \eqref{eqn:k=i} and \eqref{eqn:kneqi} with \eqref{eqn:double_sum_zAz} we obtain the desired equality
%
\begin{equation}
\matr{z}^\mathrm{T}\, \mathbb E\,[\matr{A}^\mathrm{T}\Bar{\matr{A}}^\mathrm{T}\Bar{\matr{A}}\matr{A}]\matr{z} = \sum_{k=1}^K \norm{\matr{z}_k}^2 \left( \tfrac{r_{\min,k}}{p_k} + \sum_{\substack{i=1\\i\neq k}}^K \gamma_i \right).
\end{equation}
This concludes the proof of Lemma~\ref{lem:A^+A}.
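As a numerical aside (a sketch, not part of the proof), the coefficients $\gamma_i$ appearing in the final expression can be evaluated case by case, assuming $\gamma_i = n\gamma_i'$ as in \eqref{eqn:kneqi}; note the blow-up as $p_i$ approaches $n$, the regime in which partitioning degrades generalization:

```python
def gamma(p_i, n):
    """gamma_i = n * gamma_i', so that E[A_k^T (A_i A_i^T)^+ A_k] = gamma_i I."""
    if p_i > n + 1:
        return n / (p_i - n - 1)
    if p_i < n - 1:
        return p_i / (n - p_i - 1)
    return float('inf')  # p_i in {n-1, n, n+1}: the expectation diverges
```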
\fi
\section{Introduction}
Distributed learning provides a framework for sharing the high computational burden of the learning task over multiple nodes, and growing need and interest from both academia and industry have led to rapid advances in the field \cite{verbraeken_survey_2019}.
Accordingly, distributed learning over wireless communication networks,
e.g., in the context of edge computing,
has emerged as a significant facilitator
\cite{niknam_federated_2019, wang2020convergence}.
We contribute to the overall understanding of these methods by characterizing potential pitfalls of distributed learning for linear regression in terms of generalization error and by providing guidelines for best practice.
In a standard learning task, the main aim is to be able to estimate an observation $y$ when a corresponding input $\matr{a}$ is given.
Estimation of the unknown model parameters using a set of training data, i.e., pairs $(y_i,\matr{a}_i)$, is referred to as model training.
How well the trained model can explain the training data is referred to as the training error, i.e., the error that the model makes for the estimation of $y_i$ in the training set.
A key performance criterion for any trained model is the generalization error, i.e., how well a trained model can estimate a new observation $y$ given the corresponding $\matr{a}$.
If the model performs well on new data, it is said to have low generalization error.
In general, low training error does not always guarantee a low generalization error. Hence, it is of central interest to develop methods that have both low training and generalization error \cite{zhang_understanding_2017}.
Modern machine learning techniques are often able to fit overparameterized models to exactly predict the training data, while still having low generalization error \cite{zhang_understanding_2017}.
Although various communications related challenges for distributed learning, such as energy constraints \cite{predd_distributed_2006}, quantization \cite{magnusson_communication_2018}
and privacy \cite{niknam_federated_2019}, have been successfully investigated,
to the best of our knowledge there has been no attempt to characterize the generalization properties of distributed learning schemes.
In this article, we address this gap.
In contrast to the setting where the observations (for instance, sensor readings) are distributed over the nodes \cite{predd_distributed_2006},
our approach follows the line of work initiated by the seminal work of \cite{tsitsiklis_distributed_1984} where the unknowns are distributed over the network.
We consider a linear model and utilize the successful distributed learning method \textsc{CoCoA}{} \cite{smith_cocoa_nodate}.
Our results show that the generalization performance of the distributed solution can heavily depend on the partitioning of the unknowns although the training error shows no such dependence, i.e., the distributed solution achieves training errors on the same level of accuracy as the centralized approach.
Motivated by the success of overparameterized models in machine learning \cite{zhang_understanding_2017} and recent results on the generalization error of such models \cite{belkin_reconciling_2019,belkin_two_2019}, we pay special attention to the overparameterized case, i.e., the number of unknowns is larger than the number of observations.
In particular, if the number of unknowns assigned to any node is close to the number of observations, then the generalization error of the distributed solution may take extremely large values compared to the generalization error of the centralized solution.
Our main analytical results in Theorem~\ref{thm:x_tilde} and Lemma~\ref{lemma:x_tilde_t} present the expectation of the generalization error as a function of the partitioning of the unknowns. Furthermore, these analytical results are verified by numerical results.
Using these results, we provide guidelines for optimal partitioning of unknowns for distributed learning.
\section{Problem Statement}
We focus on the linear model
%
\begin{equation}\label{eqn:model}
y_i = \matr{a}_i^\mathrm{T} \matr{x}+ w_i,
\end{equation}
%
where $y_i\in\mathbb{R}$ is the $i$\textsuperscript{th} observation, $\matr{a}_i\in\mathbb{R}^{p\times 1}$ is the $i$\textsuperscript{th} regressor, $w_i$ is the unknown disturbance for the $i$\textsuperscript{th} observation, and $\matr{x}= [x_1;\,\cdots\,;x_p]\in\mathbb{R}^{p\times1}$ is the vector of unknown coefficients.
We consider the problem of estimating $\matr{x}$ given $n$ data points, i.e., pairs of observations and regressors, $(y_i,\matr{a}_i),~{i=1,\,\dots,\,n},$ by minimizing the following regularized cost function:
%
\begin{equation}\label{eqn:problem}
\min_{\matr{x}\in\mathbb{R}^{p\times1}} \frac{1}{2}\norm{\matr{y} - \matr{A}\matr{x}}^2 + \frac{\lambda}{2} \norm{\matr{x}}^2,
\end{equation}
%
where $\matr{A}\in\mathbb{R}^{n\times p}$ is the regressor matrix whose $i$\textsuperscript{th} row is given by $\matr{a}_i^T\in\mathbb{R}^{1\times p}$.
%
We further denote the first term as $f(\matr{A}\matr{x}) = \tfrac{1}{2}\norm{\matr{y} - \matr{A}\matr{x}}^2$.
The second term $\tfrac{\lambda}{2}\norm{\matr{x}}^2$ with $\lambda\geq 0$ denotes the regularization function.
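For $\lambda>0$, the minimizer of \eqref{eqn:problem} is available in closed form as $\hat{\matr{x}} = (\matr{A}^\mathrm{T}\matr{A} + \lambda\eye{p})^{-1}\matr{A}^\mathrm{T}\matr{y}$. The following minimal NumPy sketch (not part of our experiments; problem sizes chosen arbitrarily) computes this solution and checks that the gradient of the cost vanishes there:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 20, 0.1          # illustrative sizes, not those of the paper
A = rng.standard_normal((n, p))  # rows a_i^T ~ N(0, I_p)
x_true = rng.standard_normal(p)
y = A @ x_true                   # noiseless model (w_i = 0)

# Closed-form minimizer of (1/2)||y - A x||^2 + (lam/2)||x||^2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

# The gradient A^T(A x - y) + lam x should vanish at the minimizer
grad = A.T @ (A @ x_hat - y) + lam * x_hat
print(np.linalg.norm(grad))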
We consider the setting where the regressors $\matr{a}_i^\mathrm{T} \in\mathbb{R}^{1\times p} $ are independent and identically distributed (i.i.d.) with $\matr{a}_i \sim \Gauss{p}$.
Under this Gaussian regressor model, we focus on the generalization error of the solution to \eqref{eqn:problem} found by the distributed solver CoCoA \cite{smith_cocoa_nodate}.
Our main focus is on the scenario where $\lambda = 0$ and $w_i=0$; solutions with ${\lambda>0}$ are used for comparison.
In the remainder of this section, we define the generalization error.
We provide details about our implementation of CoCoA in Section~\ref{sec:dist}.
Let $w_i=0$, $\forall i$, and let $\hat{\matr{x}}$ be an estimate of $\matr{x}$ found by using the data pairs $(y_i, \matr{a}_i),\,{i=1,\,\dots,\,n}$.
For a given $\matr{A}$, the generalization error, i.e., the expected error in estimating $y$ when a new pair $(y,\matr{a})$ with $\matr{a} \sim \Gauss{p}$ arrives, is given by
%
\begin{align}
\, \mathbb E\,_{a}[(y - \matr{a}^\mathrm{T}\hat{\matr{x}})^2 ] =& \, \mathbb E\,_{a}[(\matr{a}^\mathrm{T} \matr{x} - \matr{a}^\mathrm{T}\hat{\matr{x}})^2] \\
=& \, \mathbb E\,_{a}[ \text{Tr} [ (\matr{x}-\hat{\matr{x}}) (\matr{x}-\hat{\matr{x}})^\mathrm{T} \matr{a} \matr{a}^\mathrm{T} ]] \label{eqn:test:crossvanish} \\
=& \| \matr{x}-\hat{\matr{x}}\|^2, \label{eqn:test:avanish}
\end{align}
%
where $\matr{a}$ is statistically independent of $\matr{A}$ and we have used the notation $\, \mathbb E\,_{a}[\cdot]$ to emphasize that the expectation is over $\matr{a}$.
Here \eqref{eqn:test:avanish} follows from $\matr{a} \sim \Gauss{p}$.
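The identity \eqref{eqn:test:avanish} is easy to confirm by Monte Carlo simulation. The sketch below (illustrative only, with arbitrary dimensions and an arbitrary estimate) compares the sample average of $(\matr{a}^\mathrm{T}(\matr{x}-\hat{\matr{x}}))^2$ over fresh Gaussian regressors with $\|\matr{x}-\hat{\matr{x}}\|^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 5, 200_000
x = rng.standard_normal(p)
x_hat = rng.standard_normal(p)        # any fixed estimate of x

a = rng.standard_normal((N, p))       # fresh test regressors a ~ N(0, I_p)
mc = np.mean((a @ (x - x_hat)) ** 2)  # Monte Carlo estimate of E_a[(y - a^T x_hat)^2]

print(mc, np.sum((x - x_hat) ** 2))   # the two agree up to sampling error
```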
We are interested in the expected generalization error over the distribution of training data
%
\begin{align}\label{eqn:generalization_error_def}
\epsilon_G =& \, \mathbb E\,_{\matr{A}} [ \| \matr{x}-\hat{\matr{x}}\|^2],
\end{align}
%
where the expectation is over the regressor matrix $\matr{A}$ in the training data.
In the rest of the paper, we focus on the evolution of $\epsilon_G$ in CoCoA.
For notational simplicity, we drop the subscript $\matr{A}$ from our expectation expressions.
\kern-0.1em
\section{Distributed Solution Approach}\label{sec:dist}
\kern-0.1em
\SetAlgoSkip{bigskip}
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t]
\myCoLa
\end{algorithm}
As the distributed solution approach, we use the iterative approach \textsc{CoCoA}{} introduced in \cite{smith_cocoa_nodate}. In \textsc{CoCoA}{}, mutually exclusive subsets of coefficients of $\matr{x}$ and the associated subset of columns of $\matr{A}$ are distributed over $K$ nodes ($K \leq p$).
Hence, the $p$ unknown coefficients are partitioned over $K$ nodes so that each node governs the learning of $p_k$ variables, hence $\sum_{k=1}^K p_k =p$.
We denote the part of $\matr{A}$ available at node $k$ as $\matr{A}_k\in\mathbb{R}^{n\times p_k}$.
%
In particular, using this partitioning, $\matr{y}$ with $w_i=0$, $\forall i$, can be expressed as
\begin{align}
\matr{y}= \matr{A} \matr{x} = [\matr{A}_1,\cdots,\matr{A}_K] \begin{bmatrix}
\matr{x}_1\\\vdots\\\matr{x}_K
\end{bmatrix} =\sum_{k=1}^K \matr{A}_k \matr{x}_k,
\end{align}
where $\matr{x}_k$ is the partition at node $k$.
%
Note that the specific ordering in this partitioning incurs no loss of generality, since the columns of $\matr{A}$ are i.i.d. (the rows are i.i.d. with $\Gauss{p}$).
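The block decomposition $\matr{y}=\sum_{k} \matr{A}_k\matr{x}_k$ can be illustrated directly. In the sketch below (arbitrary sizes, equal block sizes $p_k=p/K$ for simplicity), the column-partitioned products sum to the full matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, K = 8, 6, 3
A = rng.standard_normal((n, p))
x = rng.standard_normal(p)

# Split the columns of A (and the entries of x) into K contiguous blocks
A_blocks = np.hsplit(A, K)            # each A_k is n x p_k
x_blocks = np.split(x, K)

y_sum = sum(Ak @ xk for Ak, xk in zip(A_blocks, x_blocks))
print(np.allclose(A @ x, y_sum))      # True: A x = sum_k A_k x_k
```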
In \textsc{CoCoA}{}, at iteration $t$, node $k$ shares its estimate of $\matr{y}$, denoted $\matr{v}_k^{t}$, over the network.
%
Note that the $\matr{A}_k$'s and the observation vector $\matr{y}$ are fixed over all iterations.
%
The variables $\hat{\matr{x}}^t_k\in\mathbb{R}^{p_k\times 1}$ and ${\Delta\matr{x}}^t_k\in\mathbb{R}^{p_k\times 1}$ are the estimate and its update computed by node $k$, respectively.
Hence, $\hat{\matr{x}}^t$ and ${\Delta\matr{x}}^t$ are partitioned as $\hat{\matr{x}}^t = [\hat{\matr{x}}^t_1; \cdots; \hat{\matr{x}}^t_K]$ and ${\Delta\matr{x}}^t=[{\Delta\matr{x}}^t_1;\cdots;{\Delta\matr{x}}^t_K]$.
The average over all local estimates $\matr{v}_k^t$ is denoted as $\bar{\matr{v}}^t$.
At iteration $t$, \textsc{CoCoA}{} solves the following minimization problem at each node \cite{smith_cocoa_nodate}:
%
\begin{align}\label{eqn:problem_t}
\begin{split}
\min_{{\Delta\matr{x}}^t_k} \nabla_{\bar{\matr{v}}^t} f(\bar{\matr{v}}^t)^\mathrm{T}\matr{A}_k{\Delta\matr{x}}^t_k &\\
+ \tfrac{\sigma'}{2\tau}\norm{\matr{A}_k{\Delta\matr{x}}^t_k}^2&+\tfrac{\lambda}{2}\norm{\hat{\matr{x}}^t_k+{\Delta\matr{x}}^t_k}^2.
\end{split}
\end{align}
%
Using $f(\matr{A}\matr{x}) = \tfrac{1}{2}\norm{\matr{y} - \matr{A}\matr{x}}^2$, we have the smoothness parameter $\tau=1$ \cite{he_cola_2019}.
We set $\sigma'=K$ since it is considered a safe choice \cite{he_cola_2019}.
Keeping only the terms that depend on ${\Delta\matr{x}}^t_k$ reveals that the solution to \eqref{eqn:problem_t} can equivalently be found by solving the following problem
%
\begin{align}\begin{split}\label{eqn:problem_open}
\min_{{\Delta\matr{x}}^t_k} & ~ ({\Delta\matr{x}}^t_k)^\mathrm{T}( \tfrac{K}{2}\matr{A}_k^\mathrm{T}\matr{A}_k + \tfrac{\lambda}{2} \eye{p_k}){\Delta\matr{x}}^t_k\\
& + ( \lambda\hat{\matr{x}}^t_k - \matr{A}_k^\mathrm{T}(\matr{y} - \bar{\matr{v}}^t) )^\mathrm{T}{\Delta\matr{x}}^t_k.
\end{split}
\end{align}
Taking the derivative with respect to ${\Delta\matr{x}}^t_k$ and setting it to zero, we obtain
%
\begin{align}\begin{split}\label{eqn:iteration_eqn_system}
(K & \matr{A}_k^\mathrm{T}\matr{A}_k + \lambda \eye{p_k}){\Delta\matr{x}}^t_k = - ( \lambda\hat{\matr{x}}^t_k - \matr{A}_k^\mathrm{T}(\matr{y} - \bar{\matr{v}}^t) ).
\end{split}\end{align}
%
With $\lambda=0$, the existence of a matrix inverse is not guaranteed.
Hence, the local solvers use the Moore-Penrose pseudoinverse to solve \eqref{eqn:iteration_eqn_system} as
%
\begin{equation}\label{eqn:pinv_solver}
{\Delta\matr{x}}^t_k = - (K\matr{A}_k^\mathrm{T}\matr{A}_k + \lambda \eye{p_k})^+ ( \lambda\hat{\matr{x}}^t_k - \matr{A}_k^\mathrm{T}(\matr{y} - \bar{\matr{v}}^t) ).
\end{equation}
%
The resulting algorithm for estimating $\matr{x}$ iteratively is presented in Algorithm~\ref{alg:cola}.
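The local update \eqref{eqn:pinv_solver} can be checked numerically. The sketch below (synthetic placeholder quantities, not the full Algorithm~\ref{alg:cola}) forms the update for one node and verifies that it satisfies the stationarity condition \eqref{eqn:iteration_eqn_system}; we take $\lambda>0$ so that the pseudoinverse coincides with an ordinary inverse:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_k, K, lam = 12, 5, 2, 0.5
A_k = rng.standard_normal((n, p_k))
y = rng.standard_normal(n)
v_bar = rng.standard_normal(n)        # current shared estimate of y
x_hat_k = rng.standard_normal(p_k)    # current local iterate

rhs = lam * x_hat_k - A_k.T @ (y - v_bar)
M = K * A_k.T @ A_k + lam * np.eye(p_k)
dx_k = -np.linalg.pinv(M) @ rhs       # local update, cf. (pinv_solver)

# dx_k solves the first-order condition of the local subproblem
print(np.allclose(M @ dx_k, -rhs))    # True
```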
In \cite{he_cola_2019}, a generalization of \textsc{CoCoA}{} is presented, named \textsc{CoLa}{}, where a mixing matrix $\matr{W}$ is introduced to model the quality of the connection between nodes.
For $\matr{W}=\tfrac{1}{K}\bm{1}_K$, \textsc{CoLa}{} reduces to \textsc{CoCoA}{}; hence our analysis also applies to this special case of \textsc{CoLa}{}.
\section{Partitioning and the Generalization Error}\label{sec:theorems}
This section presents our main results in Theorem \ref{thm:x_tilde} and Lemma~\ref{lemma:x_tilde_t}, which reveal how the generalization error changes based on the data partitioning.
We first provide a preliminary result to describe the evolution of the estimates of Algorithm~\ref{alg:cola}:
\input{lemma_update_eqn}
Proof: See Section~\ref{sec:pf:lemma:update-eqn}.
%
This result shows that when $\lambda=0$, the estimate in each iteration is a combination of the previous global estimate ($\hat{\matr{x}}^t$) and the local least-squares solutions ($\matr{A}_k^+ \matr{y}$) from each node.
%
We now present our main results:
\begin{theorem}\label{thm:x_tilde}
\input{thm_xtilde}
\end{theorem}
Proof: See Section \ref{sec:pf:thm:x_tilde}.
%
%
Here, while writing the expressions, we have used the notational convention that if any $\alpha_k=+\infty$ and the corresponding $\norm{\matr{x}_k}^2 = 0$, then that component of \eqref{eqn:xtildetp} is also zero.
%
Note that the infinity, i.e., $\infty$, in \eqref{eqn:gamma_main_b} denotes the indeterminate/infinite values due to divergence of the relevant integrals.
%
\iftr
Further discussions on this point are provided together with an illustrative example in Section~\ref{sec:example1by2}.
\else
Further discussions on this point are provided together with an illustrative example in \cite{HellkvistOzcelikkaleAhlen_distributed2020_technicalReport}.
\fi
Theorem~\ref{thm:x_tilde} shows how the partitioning of $\matr{x}$ (and hence $\matr{A}$) over the nodes affects the generalization error $\epsilon_G$.
%
Note that the interesting case of $p_k\in\{n-1,n,n+1\}$, $K>1$, occurs in the overparameterized scenario of $n\leq p$.
If $\matr{x}_k\neq\bm{0}$ and any ${p_i\in\{n-1,n,n+1\}},\,i\neq k$, the generalization error after the first iteration will be extremely large, since the corresponding $\alpha_k$ in \eqref{eqn:xtildetp} will be extremely large.
In order to avoid large generalization errors, no partition $\matr{A}_k$ should have a number of columns $p_k$ close to the number of observations $n$.
Note that according to \eqref{eqn:alpha_k}, having $p_k\in\{n-1,n,n+1\}$ in one node affects the generalization error associated with the partition in the other nodes.
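The mechanism behind this blow-up can also be seen numerically: when $p_k$ is close to $n$, the Gaussian matrix $\matr{A}_k$ is nearly square and its smallest nonzero singular value concentrates near zero, so $\matr{A}_k^+$ has a very large spectral norm. A small illustration (sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
pinv_norm = {}
for p_k in (25, 50, 100):
    A_k = rng.standard_normal((n, p_k))
    s = np.linalg.svd(A_k, compute_uv=False)
    pinv_norm[p_k] = 1.0 / s[-1]   # ||A_k^+||_2 = 1 / s_min
    print(p_k, pinv_norm[p_k])
# The norm of the pseudoinverse spikes when p_k is close to n
```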
We now consider evolution of the generalization error:
\begin{lemma}\label{lemma:x_tilde_t}
\input{lemma_xtilde_t}
\end{lemma}
\iftr
Proof: See Section~\ref{sec:pf:thm:x_tilde_t}.
\else
Due to page limitations, the proof is presented in \cite{HellkvistOzcelikkaleAhlen_distributed2020_technicalReport}.
\fi
%
Lemma \ref{lemma:x_tilde_t} reveals that if we have $\, \mathbb E\,[\norm{\matr{x}_k - \hat{\matr{x}}_k^t}^2]\neq 0$ with ${p_i\in\{n-1,n,n+1\}},\,i\neq k,$ at a given iteration, then the average generalization error will increase dramatically in the next iteration.
%
Hence, if the average generalization error takes large values, it will not decrease by iterating the algorithm further. Numerical illustrations are provided in Section~\ref{section:numerical}.
Following \cite{breiman_how_nodate}, a similar analysis is presented in \cite{belkin_two_2019} to explain the ``double descent'' curves of \cite{belkin_reconciling_2019}.
The analyses of \cite{belkin_two_2019,breiman_how_nodate} focus on the centralized problem where only a subset $\bar{p}$ of the $p$ unknowns are learnt and present how the generalization error increases when $\bar{p}$ is close to the number of observations $n$.
In this paper, we extend these results to distributed learning with \textsc{CoCoA}{}.
We note that the presence of noise $w_i$ in \eqref{eqn:model} during training would provide some numerical stability.
Similarly, having a non-zero regularization during training,~i.e., $\lambda>0$, will make the matrix in \eqref{eqn:pinv_solver} invertible, hence replacing the pseudo-inverse of \eqref{eqn:pinv_solver} with an inverse.
With a large enough $\lambda>0$ (compared to the machine precision), this will provide numerical stability which can reduce the large values in the generalization error significantly, at the cost of a larger training error.
On the other hand, a too large regularization will make the distributed solution penalize the norm of the solution too much, and the solution will neither fit the training data nor the test data.
We illustrate these effects in Section~\ref{section:numerical}.
\section{Numerical Examples}\label{section:numerical}
We now provide numerical results to illustrate the dependence of the generalization error on the partitioning and the effect of regularization.
%
We generate $\matr{x}$ with $\matr{x}\sim\Gauss{p}$ once in the numerical experiments and keep it fixed. We generate the rows of $\matr{A}\in\mathbb{R}^{n\times p}$ i.i.d. with distribution $\Gauss{p}$. We set $n=50$, $p=150$, $w_i=0,\,\forall i$. The data is partitioned over $K=2$ nodes, so $p = p_1+p_2$.
{\it{Verification of Theorem \ref{thm:x_tilde}}}: We first empirically verify the expression in \eqref{eqn:xtildetp} from Theorem \ref{thm:x_tilde}.
We obtain the empirical results by computing the first iteration of Algorithm \ref{alg:cola} for $N=100$ simulations.
%
Note that these values correspond to the average generalization error (i.e. risk) by \eqref{eqn:generalization_error_def}.
%
In Figure \ref{fig:expectation-sweep-fig}, we present the analytical value $\epsilon_G$, ~i.e, ${\, \mathbb E\,[\norm{\matr{x}-\hat{\matr{x}}^1}^2]}$ from~\eqref{eqn:xtildetp}, and the empirical average ${\tfrac{1}{N}\sum_{i=1}^N \norm{\matr{x}-\hat{\matr{x}}^1_{(i)}}^2}$, where the subscript $(i)$ denotes the $i$\textsuperscript{th} simulation.
%
Figure \ref{fig:expectation-sweep-fig} illustrates that the empirical average follows the analytical values for all $p_1$ in \eqref{eqn:gamma_main}.
When $p_k\in\{n-1,n,n+1\}$, the empirical average increases so drastically that the values fall outside the range of the plots.
For $p_k \approx n$ with $p_k \notin \{n-1,n,n+1\}$, the empirical averages take large values and lie exactly on the analytical curve.
%
Note that no analytical value is computed for $p_k\in\{n-1,n,n+1\}$; hence the increase in the analytical curve around $p_k \approx n$, $p_k \notin \{n-1,n,n+1\}$, comes directly from the large but finite values dictated by the analytical expression.
{\it{Generalization error after convergence}}: We now illustrate that the generalization error does not decrease when Algorithm~\ref{alg:cola} is run until convergence.
We set the number of iterations for Algorithm~\ref{alg:cola} as $T=200$.
We note that increasing $T$ further does not change the nature of the results.
The average training error is calculated as ${\frac{1}{Nn}\sum_{i=1}^N\norm{\matr{A}_{(i)}(\matr{x} - \hat{\matr{x}}_{(i)}^T)}^2}$, where the superscript $T$ denotes the final iteration.
The generalization error is calculated in a similar fashion but using a new data matrix $\matr{A}'\in\mathbb{R}^{10n\times p}$ from the same distribution as $\matr{A}\in\mathbb{R}^{n\times p}$.
Here $\matr{A}$, $\matr{A}'$ are independently sampled for each simulation.
The matrix $\matr{A}'$ is chosen to have $10n$ rows so that the generalization error is averaged over a large number of data points.
%
For benchmarking, we use the training and the generalization errors of the centralized least-squares (LS) solution, i.e., $\hat{\matr{x}} = \matr{A}^+ \matr{y}$ using the whole $\matr{A}$.
%
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0.1cm 0.5cm 0cm .36cm,clip]{figures/expect-sweep-fig}
\caption{Comparison of $\, \mathbb E\,[\norm{\matr{x} - \hat{\matr{x}}^1}^2]$ expression in \eqref{eqn:xtildetp} with the empirical ensemble average for $K=2$, $\lambda=0$.}
\label{fig:expectation-sweep-fig}
\end{figure}
%
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth, trim=0.1cm 0.36cm 0cm .36cm,clip]{figures/main-fig}
\caption{The generalization error and the training error for ${K=2}$, $\lambda=0$ after convergence.}
\label{fig:main-fig}
\end{figure}
%
In Figure \ref{fig:main-fig}, we plot the empirical average of the generalization error and the training error of Algorithm~\ref{alg:cola} as a function of $p_1$ with $\lambda=0$.
When either $p_1$ or $p_2$ approaches $n=50$, there is a large increase in the generalization error.
This behaviour is consistent with the general trend of Figure \ref{fig:expectation-sweep-fig}, which was obtained using Theorem~\ref{thm:x_tilde}.
This numerical result supports the result of Lemma~\ref{lemma:x_tilde_t}, i.e., once the generalization error increases drastically, iterations of Algorithm \ref{alg:cola} do not decrease it.
In particular, the peak generalization error for Algorithm~\ref{alg:cola} is on the order of $10^5$ (not shown on the plot).
On the other hand, the distributed solution fits the training data perfectly, as does the LS solution: the respective training errors are lower than $10^{-25}$.
In contrast to the distributed case, the LS solution fits the new data well with an average generalization error of $\approx 60$.
%
Hence, although Algorithm~\ref{alg:cola} successfully finds a solution that achieves a training error on the same level with the direct centralized solution, the generalization error is significantly higher when $p_k\in\{n-1,n,n+1\}$.
\textit{Effect of regularization:} We now investigate the effects of regularization on the peaks of Figure \ref{fig:main-fig}.
We set a non-zero regularization parameter $\lambda$ and run the same simulations as in Figure \ref{fig:main-fig}.
A value of $\lambda$ between $10^{-4}$ and $10^3$ dampens the peaks in generalization error (when $p_1$ is close to $50,\,100$) to between $10^4$ and $10^2$.
As $\lambda$ is increased beyond $10^{-4}$, the training error starts to grow.
In particular, for $\lambda=10^3$, the training error is on the same level as the generalization error.
Any further increase in $\lambda$ increases both the training and the generalization error.
These results are consistent with the discussions in Section~\ref{sec:theorems}.
\kern-0.2em
\section{Conclusions}
\kern-0.2em
We have presented a characterization of the generalization error showing how partitioning plays a major role in distributed linear learning.
In particular, our analytical results show that, to obtain good generalization performance, the partitioning must avoid assigning to any node a number of unknowns close to the number of available observations.
We have presented numerical results, simulating the distributed learning system \textsc{CoCoA}{}, that verify our analytical results. Extending this work to the fully decentralized setting of \textsc{CoLa}{} is an important direction for future work.
\input{appendix}
\bibliographystyle{IEEEtran}
\section*{Methods}
\subsection*{Experimental Setup}
Optical studies of single $Ce^{3+}$ ions were performed in a home-built confocal microscope in which the crystal was mounted on the cold finger of a helium flow cryostat. Optical access was provided through a window. The microscope objective lens (NA 0.95) was mounted inside the cryostat on a piezo nano-positioner. A toroidal permanent magnet was fixed onto the objective lens to provide a magnetic field parallel to the propagation direction of the excitation laser beam. The fluorescence of the cerium ions was collected by the same objective lens and detected with a single-photon-counting avalanche photodiode. In order to measure the emission spectra, the fluorescence was deflected, by a mirror mounted on a flip mount, onto a grating spectrometer equipped with a cooled CCD camera. For measurements involving microwaves, a high-power MW amplifier ($50\;dB$ amplification, $30\;W$ maximum output power) was used. The frequency of the microwaves was swept by a software-controlled MW synthesizer.
\subsection*{Sample Preparation}
In order to improve the fluorescence collection efficiency, a solid immersion microlens (SIL) was fabricated directly on the surface of the sample by focused ion beam milling. The lens had the shape of a hemisphere with a radius of $5\;\mu m$. In order to perform spin manipulations with microwaves, a copper microwave structure was created next to the location of the SIL by lithographic means. Even though the microwave structure was not impedance-matched to the outer MW network, the measured overall MW loss amounted to only $6\; dB$.
\section{Introduction}
\label{section:introduction}
\subsection{Overview}
\label{subsection:overview}
We work over the complex numbers $\mathbb{C}$.
The main result of this paper, \autoref{theorem:very-general-VHS}, is that an
analytically very general $n$-pointed curve of genus $g$
(defined in \autoref{definition:general})
does not carry any
non-isotrivial polarizable integral variations of Hodge structure of rank less than $2\sqrt{g+1}$. In particular, an
analytically very general $n$-pointed curve of genus $g$ carries no geometric local
systems of rank less than $2\sqrt{g+1}$ with infinite monodromy, as we show in
\autoref{corollary:geometric-local-systems}. This is a strong restriction on the
topology of smooth proper maps to an analytically very general curve, and contradicts conjectures of Esnault-Kerz \cite[Conjecture 1.1]{esnault2021local} and Budur-Wang \cite[Conjecture 10.3.1]{budur2020absolute}, as explained in \autoref{corollary:non-density-of-geometric-local-systems}.
The above results rely on an analysis of stability properties of isomonodromic deformations of flat vector bundles with regular singularities, and require correcting a number of errors in the literature on this topic.
We next state our main results on stability properties of isomonodromic
deformations of flat vector bundles.
Let $C_0$ be the central fiber of a family of curves $\mathscr{C}\to \Delta$
with $\Delta$ a contractible domain, and let $(E_0, \nabla_0)$ be a vector
bundle with flat connection
on $C_0$. Recall that, loosely speaking, the isomonodromic deformation of $(E_0, \nabla_0)$ is the deformation $(\mathscr{E}, \nabla)$ of $(E_0, \nabla_0)$ to $\mathscr{C}/\Delta$, such that the monodromy of the connection is constant.
In \autoref{corollary:counterexample}, we construct a flat vector bundle on a smooth proper curve over $\mathbb{C}$, whose isomonodromic deformations
to a nearby curve
are never semistable.
(See \autoref{definition:nearby} for precise definitions.)
The construction arises from the ``Kodaira-Parshin trick," and contradicts earlier claimed theorems of Biswas, Heu, and Hurtubise (\cite[Theorem 1.3]{BHH:logarithmic},
\cite[Theorem 1.3]{BHH:irregular}, and
\cite[Theorem 1.2]{BHH:parabolic}), which imply that such a construction is impossible. See \autoref{remark:bhh-error} for a discussion of the errors in those papers.
As a complement to this example, we show in
\autoref{theorem:hn-constraints-parabolic} that any logarithmic flat vector
bundle admits an isomonodromic deformation to a nearby curve which is \emph{close} to semistable,
in a suitable sense,
and moreover is (parabolically) semistable if the rank is small compared to the genus of the curve.
While our results contradict those of \cite{BHH:logarithmic, BHH:irregular,
BHH:parabolic}, our methods owe those papers a substantial debt.
Biswas, Heu,
and Hurtubise
pitch the question of isomonodromically deforming a vector bundle to a
semistable vector bundle (see \autoref{question:bhh})
as an
analogue of Hilbert's 21st problem, also known as the Riemann-Hilbert problem.
The semistability property of \autoref{theorem:hn-constraints-parabolic} is also the main
input to our Hodge-theoretic main results, mentioned above.
The applications to polarizable variations of Hodge structures come from the fact that flat vector bundles underlying polarizable variations are rarely (parabolically) semistable, due to well-known curvature properties of Hodge bundles.
\subsection{Main Hodge-theoretic results}
\label{subsection:hodge-intro}
Results from this subsection, \autoref{subsection:hodge-intro}, as well as the
next, \autoref{subsection:isomonodromy-intro}, will be proven later in the
paper, as detailed in \autoref{subsection:organization}.
For convenience, throughout the paper, our main results will primarily be stated for hyperbolic curves.
\begin{definition}
\label{definition:hyperbolic}
Let $C$ be a curve over $\mathbb C$ of genus $g$ and $D \subset C$ a
reduced effective divisor of degree $n$.
Call $(C, D)$ {\em hyperbolic} if $C$ is a smooth proper connected curve
and
either $g \geq 2$ and $n\geq 0$, $g = 1$ and $n > 0$, or $g =0$ and $n > 2$.
We call an $n$-pointed curve $(C, x_1, \ldots, x_n)$ {\em hyperbolic} if
$(C, x_1 +\cdots + x_n)$ is hyperbolic.
\end{definition}
\begin{remark}
Equivalently, $(C,D)$ is hyperbolic if and only if it has no infinitesimal automorphisms,
i.e., $H^0(C, T_C(-D)) = 0$.
\end{remark}
We will also work with the following analytic notion of a (very) general general point.
\begin{definition}
\label{definition:general}
A property holds for an {\em analytically general} point of a complex orbifold $X$,
if there exists a nowhere dense
closed analytic subset $S \subset X$ so that the property holds
on $X - S$.
We say that a property holds for an
{\em analytically very general} point if, locally on $X$, there exists a countable collection of nowhere dense closed analytic subsets such that the property holds on the complement of their union. If $\mathscr{M}_{g,n}$ is the analytic moduli stack of
$n$-pointed curves of genus $g$, we say that a property holds for an
analytically (very) general $n$-pointed curve if it holds for an analytically
(very) general point of $\mathscr{M}_{g,n}$.
\end{definition}
\begin{remark}
From the definition, it may appear that ``analytically very general'' is
a local notion, while ``analytically general'' is a global notion.
However, being ``analytically general'' also has the following
equivalent local definition, which is more similar to the definition of
``analytically very general'':
locally on $X$, there exists a nowhere dense closed analytic subset such
that the property holds on the complement of this subset.
\end{remark}
The main geometric consequence of this work is the following constraint on the
rank of non-isotrivial polarizable variations of Hodge structure (defined in
\autoref{section:hodge-theoretic-preliminaries}) on an analytically very general curve:
\begin{theorem}\label{theorem:very-general-VHS}
Let $K$ be a number field with ring of integers
$\mathscr{O}_K$. Suppose
$(C, x_1, \cdots, x_n)$
is an analytically very general $n$-pointed
hyperbolic curve of genus $g$, and $\mathbb{V}$ is a $\mathscr{O}_K$-local system on $C\setminus\{x_1, \cdots, x_n\}$ with infinite monodromy.
Suppose additionally that for each embedding $\iota: \mathscr{O}_K\to \mathbb{C}$, $\mathbb{V}\otimes_{\mathscr{O}_K, \iota}\mathbb{C}$ underlies a polarizable complex variation of Hodge structure.
Then, $$\on{rk}_{\mathscr{O}_K}(\mathbb{V})\geq 2\sqrt{g+1}.$$
\end{theorem}
\begin{remark}
Note that a result analogous to \autoref{theorem:very-general-VHS} does
not hold for variations without an underlying $\mathscr{O}_K$-structure.
Indeed, every smooth proper curve of genus at least $2$ admits a polarizable complex variation of Hodge structure of rank $2$ with infinite monodromy, arising from uniformization (see e.g.~\cite[bottom of p.~870]{simpson:constructing-VHS}).
\end{remark}
Let $X$ be a smooth variety. We say a complex local system $\mathbb{V}$ on $X$ is \emph{of geometric origin} if there exists a dense open $U\subset X$, and a smooth proper morphism $f: Y\to U$ such that $\mathbb{V}|_U$ is a direct summand of $R^if_*\mathbb{C}$ for some $i\geq 0$. As local systems of geometric origin satisfy the hypotheses of \autoref{theorem:very-general-VHS}, we have:
\begin{corollary}\label{corollary:geometric-local-systems}
Let $(C, x_1, \cdots, x_n)$ be an analytically very general hyperbolic $n$-pointed curve of genus $g$. If $\mathbb{V}$ is a local system on $C\setminus\{x_1, \cdots, x_n\}$ of geometric origin and with infinite monodromy, then $\dim_{\mathbb{C}}\mathbb{V}\geq 2\sqrt{g+1}$.
\end{corollary}
We will prove \autoref{corollary:geometric-local-systems}
in \autoref{subsubsection:geometric-origin-proof}.
As a consequence of \autoref{corollary:geometric-local-systems}
we obtain the following concrete geometric corollary:
\begin{corollary}
\label{corollary:abelian-schemes}
If $(C, x_1, \ldots, x_n)$ is an analytically
very general hyperbolic $n$-pointed genus $g$ curve, then any non-isotrivial
abelian scheme over $C\setminus\{x_1, \cdots, x_n\}$ has relative dimension at least $\sqrt{g+1}$.
Similarly, any relative smooth proper curve over $C\setminus\{x_1, \cdots, x_n\}$
has genus at least $\sqrt{g+1}$.
\end{corollary}
We will prove \autoref{corollary:abelian-schemes}
in \autoref{subsubsection:abelian-scheme-proof}.
In \autoref{proposition:hilbert-modular},
we prove a variant of the above corollary
with a stronger bound on the genus
when the abelian scheme has real multiplication, corresponding to a map from
$C\setminus\{x_1, \cdots, x_n\}$ to a Hilbert modular stack.
\begin{remark}
It is a well-known conjecture that integral local systems underlying a polarizable variation of Hodge structure are of geometric origin---see e.g.~\cite[Conjecture 12.4]{simpson62hodge} for a precise statement. \autoref{theorem:very-general-VHS} verifies this conjecture for local systems of rank less than $2\sqrt{g+1}$ on a analytically very general $n$-pointed hyperbolic curve of genus $g$, as local systems with finite monodromy arise from geometry.
\end{remark}
We are grateful to H\'el\`ene Esnault for pointing out the following consequence of \autoref{corollary:geometric-local-systems} to us. We let $$\mathscr{M}_{B,r}(C\setminus \{x_1, \cdots, x_n\}):=\on{Hom}(\pi_1(C\setminus\{x_1, \cdots, x_n\}), \on{GL}_r(\mathbb{C}))\sslash \on{GL}_r(\mathbb{C})$$ be the \emph{character variety} parametrizing conjugacy classes of semisimple representations of $\pi_1(C\setminus \{x_1, \cdots, x_n\})$ into $\on{GL}_r(\mathbb{C})$. See e.g.~\cite{sikora2012character} for a useful primer on character varieties.
\begin{corollary}
\label{corollary:non-density-of-geometric-local-systems}
Let $(C, x_1, \cdots, x_n)$ be an analytically very general hyperbolic $n$-pointed curve of genus $g$. Then if $1<r<2\sqrt{g+1}$, the local systems of geometric origin are not Zariski-dense in the character variety $\mathscr{M}_{B,r}(C\setminus \{x_1, \cdots, x_n\})$.
\end{corollary}
\begin{remark}
\autoref{corollary:non-density-of-geometric-local-systems} contradicts conjectures of Esnault-Kerz \cite[Conjecture 1.1]{esnault2021local} and Budur-Wang \cite[Conjecture 10.3.1]{budur2020absolute}, which imply the density of geometric local systems in the character variety of any smooth complex variety.
\end{remark}
We will prove \autoref{corollary:non-density-of-geometric-local-systems} in
\autoref{subsubsection:non-density-proof}.
In what follows, we say a flat vector bundle has \emph{unitary monodromy} if the associated monodromy representation
$\rho:\pi_1(C)\to \on{GL}_n(\mathbb{C})$
has image with compact closure.
We will deduce the above results from
\autoref{theorem:isomonodromic-deformation-CVHS} below,
using that a discrete subset of the image of a unitary $\rho$ is finite.
\begin{theorem}\label{theorem:isomonodromic-deformation-CVHS}
Let $(C, x_1, \cdots, x_n)$ be an $n$-pointed hyperbolic curve of genus $g$.
Let $({E}, \nabla)$ be a flat vector bundle on
$C$ with $\on{rk}{E}<2\sqrt{g+1}$ and with regular singularities at the $x_i$. If an isomonodromic
deformation of ${(E, \nabla)}$ to an analytically general nearby $n$-pointed curve underlies a polarizable complex variation of Hodge structure, then $({E},\nabla)$ has unitary monodromy.
\end{theorem}
\subsection{Main results on isomonodromic deformations}
\label{subsection:isomonodromy-intro}
As remarked in \autoref{subsection:overview}, the Hodge-theoretic results of
\autoref{subsection:hodge-intro} arise from an analysis of the Harder-Narasimhan filtrations of isomonodromic deformations of flat vector bundles on curves. Our first such result is a counterexample to
\cite[Theorem 1.3]{BHH:logarithmic},
\cite[Theorem 1.3]{BHH:irregular}, and
\cite[Theorem 1.2]{BHH:parabolic}, which demonstrates that the situation is somewhat more complicated than was previously believed --- there exist irreducible flat vector bundles whose isomonodromic deformations are never semistable.
Specifically, \cite{BHH:logarithmic} ask the following
question.
\begin{question}[\protect{\cite[p. 123]{BHH:logarithmic}}]
\label{question:bhh}
Let $X$ be a smooth proper curve, and $D\subset X$ a reduced effective divisor. Given a flat vector bundle $(E, \nabla)$ on $X$, with regular singularities along $D$, let $(E', \nabla')$ be the isomonodromic deformation of $(E,\nabla)$ to an analytically general nearby curve $(X', D')$. Is $E'$ semistable?
\end{question}
The main claim of \cite{BHH:logarithmic} is that \autoref{question:bhh} has a positive answer if $(E,\nabla)$ has irreducible monodromy and the genus of $X$ is at least $2$. However, the following results answer \autoref{question:bhh} in the negative, even in this case. See \autoref{remark:bhh-error} for a discussion of the errors in previous claims that \autoref{question:bhh} had a positive answer.
We use
$\mathscr{M}_{g,n}$ to denote the analytic moduli stack of smooth proper curves with
geometrically connected fibers and $n$ distinct marked points.
\begin{theorem}
\label{theorem:counterexample}
Let $g\geq 2$ be an integer. There exists a vector bundle with flat connection
$(\mathscr{F}, \nabla)$ on $\mathscr{M}_{g,1}$ such that for each fiber $C$ of
the forgetful morphism $\mathscr{M}_{g,1} \to \mathscr{M}_g$, the restriction of $(\mathscr{F}, \nabla)$ to $C$
\begin{enumerate}
\item has semisimple monodromy and
\item is not semistable.
\end{enumerate}
\end{theorem}
We also have the following variant, where the vector bundle has irreducible
monodromy, instead of just semisimple monodromy.
\begin{corollary}\label{corollary:counterexample}
Let $C$ be a smooth projective curve of genus at least $2$. There exists an irreducible flat
vector bundle $(E, \nabla)$ on $C$, whose isomonodromic deformations to
a nearby curve are never semistable.
\end{corollary}
\begin{proof}
The restriction $(\mathscr{F}, \nabla)|_C$ from \autoref{theorem:counterexample} provides a semisimple flat vector bundle, each of whose flat summands has degree zero; by \autoref{theorem:counterexample}(2), its
isomonodromic deformation to a nearby curve is never semistable. Hence one of
the irreducible summands of
$(\mathscr{F}, \nabla)|_C$
satisfies the statement of the corollary.
\end{proof}
In a positive direction, we have the following result, showing
that the
isomonodromic deformation of any semisimple flat vector bundle to an
analytically general nearby curve is close to being semistable, and moreover it is
semistable if the rank is small.
\begin{theorem}
\label{theorem:hn-constraints}
Let $(C,D)$ be hyperbolic of genus $g$ and let
$({E}, \nabla)$ be a flat
vector bundle on $C$ with regular singularities along $D$,
and irreducible
monodromy.
Suppose
$(E',\nabla')$
is an isomonodromic deformation
of $({E}, \nabla)$ to an analytically general nearby curve,
with Harder-Narasimhan filtration $0 = (F')^0 \subset (F')^1 \subset \cdots \subset
(F')^m =
E'$. For $1 \leq i \leq m$, let $\mu_i$ denote the slope of
$\on{gr}^{i}_{HN}E' := (F')^i/(F')^{i-1}$.
Then the following two properties hold.
\begin{enumerate}
\item If $E'$ is not semistable, then for every $0 < i < m$, there
exist $j$ and $k$ with $j < i < k$ such that $$\rk \on{gr}^{j+1}_{HN}E'\cdot \rk
\on{gr}^k_{HN}E'\geq g+1.$$
\item We have $0<\mu_i-\mu_{i+1}\leq 1$ for all $i<m$.
\end{enumerate}
\end{theorem}
In other words, the slopes of consecutive associated graded pieces of the generic
Harder-Narasimhan filtration differ by at most one, and, if there
are multiple pieces of the generic Harder-Narasimhan filtration, many of them must have large rank relative to $g$.
\autoref{theorem:hn-constraints} is a special case of
\autoref{theorem:hn-constraints-parabolic} below, where we allow certain
parabolic structures on the vector bundle $E$. These more general results are required for our Hodge-theoretic applications.
\begin{remark}
\label{remark:non-hyperbolic}
\autoref{theorem:hn-constraints} also holds without the hyperbolicity
assumption, as we will explain. Nevertheless, it is convenient to make the assumption
so that curves have no infinitesimal automorphisms. In this case
isomonodromic deformations are somewhat better behaved, see
\cite[p. 518]{Heu:universal-isomonodromic}.
We now explain the proof of \autoref{theorem:hn-constraints} in the case
$(C, D)$ is not hyperbolic.
Suppose $(C, D)$ is not hyperbolic, so either $g = 1, n = 0$ or $g = 0, n \leq 2$. In
this case the fundamental group $\pi_1(C - \{x_1, \ldots, x_n\})$ is abelian.
This implies any irreducible representation of $\pi_1(C - \{x_1, \ldots, x_n\})$
is $1$-dimensional, so the corresponding flat vector bundle is a line bundle. In
this case, $E$ and $E'$ are semistable, so
\autoref{theorem:hn-constraints} still holds.
\end{remark}
As a corollary, we are able to salvage the main theorem of \cite{BHH:logarithmic} for flat vector bundles whose rank is small relative to $g$, using the AM-GM inequality.
The following corollary can be deduced directly from
\autoref{theorem:hn-constraints-parabolic} and AM-GM.
It is also a special
case of \autoref{cor:stable-parabolic}.
\begin{corollary}\label{cor:stable}
Let $(C, D)$ be a hyperbolic curve of genus $g$.
Let $({E}, \nabla)$ be a flat
vector bundle on $C$ with regular singularities along $D$, and suppose that
$\on{rk}(E)<2\sqrt{g+1}$.
Then an isomonodromic deformation of
$E$
to an analytically general nearby curve is semistable.
\end{corollary}
As remarked above, \cite{BHH:logarithmic} claims an analogous theorem with no bound on the rank of $E$; our \autoref{corollary:counterexample} implies such a bound is necessary. Our methods are heavily inspired by those of \cite{BHH:logarithmic}, but our
technique requires some new input from Clifford's theorem for vector bundles.
These results appear to be new even for vector bundles with finite monodromy; we give some example applications in this case.
\begin{example}
\label{example:splitting-type}
In this example, we describe what
\autoref{theorem:hn-constraints}(2) tells us about splitting types
of certain vector bundles on $\mathbb P^1$.
Suppose we are given a finite group $G$ and
a finite $G$-cover $f: X \to \mathbb P^1$.
Consider the flat vector bundle $E := f_* \mathscr O_X$ on $\mathbb P^1$, with the connection $\nabla$ induced by the exterior derivative $d: \mathscr{O}_X\to \Omega^1_X$,
which has regular singularities along the branch locus of $f$.
Let $F$ be a summand of $E$ with irreducible monodromy
and let $F'$ be an isomonodromic deformation of $F$ to a very general
nearby pointed genus $0$ curve.
Since every vector bundle on $\mathbb P^1$ is a sum of line bundles,
we can write $F' = \mathscr O_{\mathbb P^1}(a_1)^{b_1} \oplus \cdots
\oplus \mathscr O_{\mathbb P^1}(a_m)^{b_m}$,
with $a_1 < a_2 < \cdots < a_m$ and $b_i > 0$.
Then \autoref{theorem:hn-constraints}(1) tells us nothing, but
\autoref{theorem:hn-constraints}(2) tells us that the $a_i$ are
consecutive, i.e., $a_{i + 1} = a_i + 1$ for $1 \leq i \leq m-1$.
Such $F'$ appear as a summand in $f'_* \mathscr O_{X'}$, where $f': X'
\to \mathbb P^1$ is a general $G$-cover of $\mathbb P^1$.
\end{example}
\begin{example}
\label{example:tschirnhausen}
We now give a sample application of \autoref{cor:stable} to
semistability of Tschirnhausen bundles of finite covers.
Consider a family $\mathscr X \xrightarrow{\alpha} \mathscr{Z}
\xrightarrow{\beta} \mathscr Y
\xrightarrow{\gamma} B$
where $\mathscr X \to B, \mathscr Y \to B$ and $\mathscr Z \to B$ are smooth proper curves
with geometrically connected fibers,
$\beta$ is finite locally free of degree $d$, and $\beta\circ \alpha$ is an $S_d$-cover
which is the Galois closure of $\beta$.
Suppose further $\beta \circ \alpha$ is branched
over a divisor $\mathscr D \subset \mathscr Y$ consisting of $n$ disjoint sections over $B$ so that
$(\mathscr Y, \mathscr D)$ is a relative hyperbolic curve of genus $g$ over $B$.
Assume the map $B \to \mathscr M_{g,n}$ induced by $\gamma$
is dominant.
We obtain a flat vector bundle $E := (\beta \circ \alpha)_* \mathscr
O_{\mathscr X}$ on $\mathscr Y$ with regular singularities along
$\mathscr D$.
One can decompose $E = \bigoplus\limits_{S_d \text{-irreps }
\rho} E_\rho^{\dim \rho}$ as a sum of flat bundles with irreducible
monodromy. Let $F$ denote one such
summand corresponding to the standard representation of dimension $d-1$.
The flat vector bundle $\beta_* \mathscr O_{\mathscr Z}$ decomposes as
$\mathscr O_{\mathscr Y} \oplus F$.
The dual $T$ of $F$ is known
as the Tschirnhausen bundle.
By \autoref{cor:stable},
the restriction of $T$ to a general fiber of $\mathscr Y \to B$
is semistable whenever $d -1 < 2 \sqrt{g+1}$.
Previous results on stability of $T$ were established in
\cite[Theorem 1.5]{deopurkarP:vector-bundles-and-finite-covers}.
If $h$ denotes the genus of the fibers of $\mathscr Z \to B$, they proved the
restriction of $T$ to a general fiber of $\mathscr Y \to B$ is
semistable whenever $h \geq dg + d(d-1)^2 g$
\cite[Remark 3.16]{deopurkarP:vector-bundles-and-finite-covers}.
\end{example}
\subsection{Motivation}
Our main motivation comes from the following question. Let $f: X\to Y$ be a map
of algebraic varieties. What are the restrictions on the topology of $f$? Our
\autoref{corollary:geometric-local-systems} places a very strong restriction on
the topology of morphisms to an analytically very general curve $C$ of genus
$g$. For example, it implies that if $f: X\to C$ is a proper morphism with smooth generic fiber and bad reduction at $n$ analytically very general points of $C$, then any non-isotrivial monodromy representation occurring in the cohomology of $X/C$ has dimension at least $2\sqrt{g+1}$.
We became interested in this question and its connection to isomonodromy while
trying to understand \cite{BHH:logarithmic}. In that paper Biswas, Heu, and
Hurtubise raise \autoref{question:bhh}, asking whether it is possible to isomonodromically deform irreducible flat vector bundles to achieve semistability, by analogy to Hilbert's 21st problem (also known as the Riemann-Hilbert problem).
Hilbert's 21st problem, as answered by Bolibruch \cite{bolibruch1995riemann} (correcting earlier work of Plemelj), poses the question of whether every monodromy representation can be realized by a Fuchsian system. Esnault and Viehweg generalize this question to higher genus in \cite{esnault1999semistable}: they ask when an irreducible representation can be realized as the monodromy of a flat vector bundle $(E, \nabla)$ with regular singularities at infinity, with $E$ semistable.
In Esnault-Viehweg's formulation, the complex structure on the underlying curve
is fixed, and the residues of the differential equation at regular singular
points are modified to achieve semistability. Flipping this around,
Biswas-Heu-Hurtubise's analogue asks if semistability can be achieved by
modifying the complex structure and fixing the residues. They claim that this
is always possible in the logarithmic, parabolic, and irregular settings, in
\cite{BHH:logarithmic, BHH:parabolic, BHH:irregular}. After discovering the
Hodge-theoretic counterexample to these claims in
\autoref{corollary:counterexample}, we proved \autoref{theorem:hn-constraints}
as an attempt (1) to understand to what extent Biswas, Heu, and Hurtubise's
\autoref{question:bhh} has a positive answer, and (2) to apply the cases when there is a positive answer to the analysis of variations of Hodge structure on curves.
\subsection{Idea of proof}
\label{subsection:idea-of-proof}
To prove \autoref{theorem:very-general-VHS}, we first reduce to proving
\autoref{theorem:isomonodromic-deformation-CVHS}, using that discrete compact
spaces are finite.
We then prove \autoref{theorem:isomonodromic-deformation-CVHS} by showing that
any flat vector bundle satisfying the hypotheses of the theorem is forced to be (parabolically) semistable on an analytically general curve, whence the Hodge
filtration consists of a single piece by \autoref{corollary:unstable}.
The polarization then gives a definite Hermitian form preserved by the monodromy, and hence
the monodromy is unitary.
The key issue, which follows from \autoref{theorem:hn-constraints-parabolic},
is therefore to show that low rank flat vector bundles
are parabolically semistable
on an analytically general curve.
To prove \autoref{theorem:hn-constraints-parabolic}, we assume we have a flat
vector bundle $(E, \nabla)$ on our hyperbolic curve $(C, D)$, and consider an
isomonodromic deformation to a nearby curve.
To this end, we use the deformation theory of this flat vector bundle with its Harder-Narasimhan filtration,
which is governed by a variant of the Atiyah bundle.
We show that if the Harder-Narasimhan filtration does not satisfy the conclusion of \autoref{theorem:hn-constraints-parabolic}, then there is a direction along which we can deform the curve so that
the filtration is destroyed.
Indeed, if the filtration persisted, deformation theory provides us with a map
from $T_C(-D)$ to a certain
parabolically semistable subquotient of $\mathrm{End}(E)$ which vanishes on $H^1$.
Taking the Serre duals gives a semistable coparabolic vector bundle of low rank
and large coparabolic slope
which is not generically globally generated.
In the end, we rule this out by a variant of Clifford's theorem for vector
bundles.
\subsection{Organization of the paper}
\label{subsection:organization}
In \autoref{section:parabolic}, we review background on parabolic bundles.
In \autoref{section:deformation-theory}, we give background on Atiyah bundles,
parabolic Atiyah bundles, and isomonodromic deformations.
In \autoref{section:hodge-theoretic-preliminaries} we give background on complex variations of Hodge structures and their associated parabolic Higgs bundles. Experts can likely skip these three sections.
In \autoref{section:counterexample}, we prove \autoref{theorem:counterexample} and \autoref{corollary:counterexample}, providing counterexamples to earlier published claims about semistability of isomonodromic deformations.
In \autoref{section:hn-filtration} we prove the main results on isomonodromic
deformations, \autoref{theorem:hn-constraints} and \autoref{cor:stable} (and their generalizations \autoref{theorem:hn-constraints-parabolic} and \autoref{cor:stable-parabolic}). This is the technical heart of the paper.
In \autoref{section:hodge-theoretic-results}, we prove the main consequences for
variations of Hodge structure, \autoref{theorem:very-general-VHS}, \autoref{corollary:geometric-local-systems}, \autoref{corollary:non-density-of-geometric-local-systems}, and \autoref{theorem:isomonodromic-deformation-CVHS}.
Finally, \autoref{section:questions} lists some questions motivated by our results.
For readers unfamiliar with the theory of parabolic bundles, we suggest first considering the case of \autoref{theorem:very-general-VHS} where $\mathbb{V}$ is assumed to have unipotent monodromy at infinity. In this case one may replace parabolic stability with the usual notion of stability for vector bundles, simplifying the proof; in particular, one may use \autoref{cor:stable} in place of \autoref{cor:stable-parabolic}.
\subsection{Acknowledgments}
This material is based upon work supported by the Swedish Research Council under
grant no. 2016-06596 while the authors were in residence at Institut
Mittag-Leffler in Djursholm, Sweden during the fall of 2021.
Landesman was supported by the National Science
Foundation under Award No. DMS-2102955
and
Litt was supported by NSF grant DMS-2001196.
We are grateful for useful conversations with
Donu Arapura,
Indranil Biswas,
Marco Boggi,
Juliette Bruce,
Anand Deopurkar,
H\'el\`ene Esnault,
Joe Harris,
Viktoria Heu,
Jacques Hurtubise,
David Jensen,
Jean Kieffer,
Matt Kerr,
Eric Larson,
Anand Patel,
Alexander Petrov,
Andrew Putman,
Will Sawin,
Ravi Vakil,
and
Isabel Vogt.
\section{Background on parabolic bundles}
\label{section:parabolic}
We now review some basics on parabolic sheaves and parabolic bundles, primarily following the notation of
\cite[\S2.3]{BHH:parabolic}.
Some useful references include
\cite[Part 3, \S1]{seshadri:fibres-vectoriels-sur-les-courbes-algebriques},
\cite[\S1 and \S3]{yokogawa:infinitesimal-deformation}, and
\cite[\S2]{bodenY:moduli-spaces-of-parabolic-higgs-bundles}.
Let $C$ be a smooth proper curve and $D\subset C$ a reduced divisor. Loosely speaking, a parabolic bundle is a vector bundle on $C$ together with an additional
filtration of the fibers over points of the given divisor, weighted by an increasing sequence of real numbers in $[0,1)$.
\subsection{Definition of parabolic bundles}
\label{subsubsection:definition-parabolic}
\begin{definition}
\label{definition:parabolic-bundle}
Let $E$ be a vector bundle over a curve $C$.
Let $D = x_1 + \cdots + x_n \subset C$ denote a divisor, with the $x_i$ distinct.
A {\em quasiparabolic structure} on $E$ over $D$ is a strictly decreasing
filtration of subspaces
\begin{align*}
E_{x_j} = E_j^1 \supsetneq E_j^2 \supsetneq \cdots \supsetneq E_j^{n_j+1}
= 0
\end{align*}
for each $1 \leq j \leq n$.
A {\em parabolic structure} on $E$ over $D$ is a quasiparabolic
structure together with
$n$ sequences of real numbers
\begin{align*}
0 \leq \alpha^1_j < \alpha^2_j < \cdots < \alpha^{n_j}_j < 1
\end{align*}
for $1 \leq j \leq n$.
A {\em parabolic bundle} is a vector bundle with a parabolic structure.
The collection $\{\alpha^i_j\}$ are called the {\em weights} of the
parabolic bundle. We say the parabolic bundle has {\em rational weights} if
all $\alpha^i_j$ are rational numbers.
We often notate the data of a parabolic bundle simply as $E_\star$
instead of $(E, \{E^i_j\}, \{\alpha^i_j\})$.
\end{definition}
\begin{definition}[Parabolic degree and slope]
\label{definition:parabolic-degree}
Let $E_\star := (E, \{E^i_j\}, \{\alpha^i_j\})$. We define the {\em
parabolic degree} of $E_\star$ as
\begin{align*}
\on{par-deg}(E_\star) := \deg(E) + \sum_{j=1}^n \sum_{i=1}^{n_j}
\alpha^i_j \dim(E_j^i/E_j^{i+1}).
\end{align*}
We define the {\em parabolic slope} as $\mu_\star(E_\star) :=
\on{par-deg}(E_\star)/\rk(E_\star)$.
\end{definition}
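To make these definitions concrete, here is a small worked computation, with illustrative numbers of our own choosing rather than data drawn from the references above.

```latex
% C a smooth proper curve, D = x_1 a single marked point,
% E a rank 2 bundle with \deg(E) = -1.
% Quasiparabolic structure at x_1: E_{x_1} = E_1^1 \supsetneq E_1^2 \supsetneq E_1^3 = 0,
% with \dim E_1^2 = 1, and weights \alpha_1^1 = 0, \alpha_1^2 = 1/2.
\begin{align*}
\on{par-deg}(E_\star)
&= \deg(E) + \alpha_1^1 \dim(E_1^1/E_1^2) + \alpha_1^2 \dim(E_1^2/E_1^3) \\
&= -1 + 0 \cdot 1 + \tfrac{1}{2} \cdot 1 = -\tfrac{1}{2},
\qquad \text{so} \qquad
\mu_\star(E_\star) = -\tfrac{1}{4}.
\end{align*}
```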
\subsection{Definition of parabolic sheaves}
\label{subsubsection:parabolic-sheaf}
In order to later apply Serre duality, we will not only need parabolic bundles, but also so-called
coparabolic bundles. Coparabolic bundles are examples of parabolic sheaves,
which we define next.
The definitions in this subsection follow
\cite{yokogawa:infinitesimal-deformation} and
\cite{bodenY:moduli-spaces-of-parabolic-higgs-bundles}.
\begin{definition}
\label{definition:filtered-O_X-module}
Let $X$ be a scheme and $D \subset X$ an effective Cartier divisor.
Let $\mathbb R$ denote the category whose objects are real numbers with
a single morphism $i^{\alpha,\beta}:\alpha\to\beta$ if $\alpha \geq \beta$ and no
morphisms otherwise.
Let $\mathscr M_X$ denote the category of sheaves of $\mathscr{O}_X$-modules.
An {\em $\mathbb R$-filtered $\mathscr O_X$-module} is a functor $E: \mathbb R \to \mathscr M_X$.
Notationally, we use $E_\star$ to denote the functor $E$, so $E_\alpha := E(\alpha)$.
We write $i_E^{\alpha,\beta} := E(i^{\alpha,\beta})$.
\end{definition}
\begin{example}
\label{example:shift}
Given an $\mathbb{R}$-filtered $\mathscr O_X$-module $E_\star$ and $\alpha \in \mathbb{R}$, define $E[\alpha]_\star$ as the $\mathbb{R}$-filtered $\mathscr O_X$-module given by
$E[\alpha]_\beta := E_{\alpha + \beta}$
and $i^{\beta,\gamma}_{E[\alpha]} := i_E^{\beta + \alpha, \gamma + \alpha}$.
Let $i^{[\alpha,\beta]}_E : E[\alpha]_\star \to E[\beta]_\star$ denote the
natural transformation whose value on $\gamma$ is $i_E^{\alpha + \gamma, \beta +
\gamma}$.
For $f: E_\star \to F_\star$, we use $f[\alpha] : E[\alpha]_\star \to F[\alpha]_\star$ for
the natural induced map.
By abuse of notation, we will frequently write $E$ in place of $E_0$.
\end{example}
\begin{definition}
\label{definition:parabolic-sheaf}
A {\em parabolic sheaf} on $X$ with respect to $D$ is
an $\mathbb R$-filtered $\mathscr O_X$-module $E_\star$ equipped with an isomorphism
\begin{align*}
j_E : E_\star \otimes \mathscr O_X(-D) \overset{\sim}{\to} E[1]_\star
\end{align*}
such that $i_E^{[1,0]} \circ j_E = \on{id}_{E_\star}
\otimes i_D: E_\star \otimes \mathscr O_X(-D) \to E_\star$, where $i_D: \mathscr{O}_X(-D)\to \mathscr{O}_X$ is the natural inclusion.
A natural transformation of parabolic $\mathscr O_X$-modules $f: E_\star \to
F_\star$ is a {\em parabolic morphism} if
\begin{equation}
\label{equation:parabolic-morphism}
\begin{tikzcd}
E_\star \otimes \mathscr O_X(-D) \ar {r}{f\otimes
\on{id}} \ar {d}{j_E} & F_\star \otimes \mathscr
O_X(-D) \ar {d}{j_F} \\
E[1]_\star \ar {r}{f[1]} & F[1]_\star
\end{tikzcd}\end{equation}
commutes.
Let $\on{Hom}(E_\star, F_\star)$ denote the set of parabolic morphisms.
We let $\mathscr{H}\kern -2pt om(E_\star, F_\star)$ denote the sheaf of homomorphisms
defined by taking $\mathscr{H}\kern -2pt om(E_\star, F_\star)(U):= \on{Hom}(E_\star|_U,
F_\star|_U).$
We also define a parabolic sheaf
$\mathscr{H}\kern -2pt om(E_\star, F_\star)_\star$ by taking
$\mathscr{H}\kern -2pt om(E_\star, F_\star)_\alpha := \mathscr{H}\kern -2pt om(E_\star, F[\alpha]_\star)$
and the $i_{\mathscr{H}\kern -2pt om(E_\star, F_\star)}^{\alpha,\beta}$ for $\alpha \geq \beta$
to be the natural maps
induced by $i_F^{[\alpha, \beta]}: F[\alpha]_\star \to F[\beta]_\star$.
We define $\on{End}(E_\star) := \on{Hom}(E_\star, E_\star)$
and use $\mathscr{E}\kern -1pt nd(E_\star) := \mathscr{H}\kern -2pt om(E_\star, E_\star)$.
\end{definition}
\begin{example}
\label{example:parabolic}
Any parabolic vector bundle $(E, \{E^i_j\}, \{\alpha^i_j\})$ defines a
parabolic sheaf as follows. For $0 \leq \alpha < 1$,
define
\begin{align*}
E_\alpha := \bigcap_{j=1}^n \ker(E \to
E_{x_j}/E^{\beta(\alpha, j)}_j),
\end{align*}
where $\beta(\alpha, j) := \min\{i : \alpha^i_j \geq \alpha\}$ for $\alpha \leq \max_i(\alpha^i_j)$, and $\beta(\alpha, j) := n_j + 1$ for $\max_i(\alpha^i_j) < \alpha < 1$.
For $n\in \mathbb{Z}$ and $\alpha \in [0,1)$, set $E_{\alpha+n} := E_\alpha(-nD)$ and take $i^{\alpha,\beta}_E$ to be the
natural inclusions. We will refer to parabolic vector bundles and their associated parabolic sheaves interchangeably.
\end{example}
\begin{remark}
\label{remark:end-definition}
It follows from the definition of $\on{End}(E_\star)$ that
$\mathscr{E}\kern -1pt nd(E_\star) \subset \mathscr{E}\kern -1pt nd(E)$ is the coherent subsheaf
corresponding to those endomorphisms preserving the quasiparabolic filtration
$\{E^i_j\}$ at each point $x_j$ of $D$.
\end{remark}
\begin{example}
\label{example:vector-bundle}
Every vector bundle $E$ defines a parabolic bundle on $X$ with respect
to $D$ by taking the quasiparabolic structure
$E_{x_j} = E_j^1 \subset E_{j}^2 = 0$ with $\alpha_j^1 = 0$.
In turn, this defines a parabolic sheaf by \autoref{example:parabolic}.
We say such a parabolic bundle has {\em trivial parabolic structure}.
\end{example}
\begin{remark}
\label{remark:map-vector-bundle-to-parabolic}
If $V$ is a vector bundle on a scheme $X$ and $E_\star$ is a parabolic
sheaf, we write a map $V \to E_\star$ to denote a map $V_\star \to
E_\star$, where $V_\star$ is the parabolic sheaf corresponding to $V$ as
in \autoref{example:vector-bundle}.
\end{remark}
In addition to parabolic vector bundles, we will also require coparabolic vector
bundles, in order to use Serre duality.
The essential idea is that while the parabolic sheaves associated to parabolic vector bundles are unchanged in intervals of
the form $(\alpha^i, \alpha^{i+1}]$, coparabolic vector bundles are unchanged in
intervals of the form $[\alpha^i, \alpha^{i+1})$.
That is, parabolic vector bundles can be viewed as lower semicontinuous functors, taking
the discrete topology on the set of isomorphism classes of vector bundles, while coparabolic vector
bundles
can be viewed as upper semicontinuous functors, see \cite[Figure
1]{bodenY:moduli-spaces-of-parabolic-higgs-bundles}.
\begin{definition}[Coparabolic vector bundles,
\protect{\cite[Definition 2.3]{bodenY:moduli-spaces-of-parabolic-higgs-bundles}}]
\label{definition:coparabolic-vector-bundle}
Let $E_\star$ denote a parabolic vector bundle, viewed as a parabolic
sheaf.
The associated {\em coparabolic vector bundle}, denoted
$\widehat{E}_\star$, is the parabolic sheaf defined by
$$\widehat{E}_\alpha :=
\on{colim}_{\beta > \alpha} E_{\beta}.$$
The colimit above is the union taken over the inclusions given by
$i^{\alpha,\beta}_E$.
\end{definition}
\begin{definition}[Coparabolic degree and slope]
\label{definition:coparabolic-degree}
If $F_\star$ is a coparabolic bundle of the form $F_\star = \widehat{E}_\star$,
for $E_\star$ a parabolic vector bundle, then the {\em coparabolic degree} of
$F_\star$ is defined by $\on{copar-deg}(F_\star) := \on{par-deg}(E_\star)$ and the
coparabolic slope of $F_\star$ is defined by $\mu_\star(F_\star) := \mu_\star(E_\star)$.
\end{definition}
\begin{example}
\label{example:vector-bundle-co}
Given a vector bundle $V$, one can define an associated parabolic bundle
$E_\star$ with the trivial parabolic structure, as in
\autoref{example:vector-bundle}. We say such a coparabolic bundle $\widehat{E}_\star$
has {\em trivial coparabolic structure}.
\end{example}
\subsection{Induced subbundles and quotient bundles}
\label{subsubsection:induced-subbundle}
Let $E_\star := (E, \{E^i_j\}, \{\alpha^i_j\})$ be a parabolic bundle.
Any subbundle $F \subset E$ has an induced parabolic structure $F_\star$ as
follows, see
\cite[Part 3, \S1.A, Definition 4]{seshadri:fibres-vectoriels-sur-les-courbes-algebriques}.
The quasiparabolic structure over $x_j$ on $F$ is obtained
from the filtration $$F_{x_j} = E^1_j \cap F_{x_j} \supset E^2_j \cap
F_{x_j} \supset \cdots \supset E^{n_j+1}_j \cap F_{x_j} = 0$$
by removing redundancies. For the weight associated to $F^i_j \subset
F_{x_j}$ one takes $$\max\{ \alpha^k_j : 1 \leq k \leq n_j,\ F^i_j = E^k_j \cap
F_{x_j}\}.$$
Similarly, any quotient $E \to Q$ with kernel $F$ has an induced parabolic structure
given as follows.
The quasiparabolic structure over $x_j$ on $Q$ is obtained from
$$Q_{x_j} = (E^1_j+F_{x_j})/F_{x_j} \supset (E^2_j+F_{x_j})/F_{x_j}\supset\cdots\supset
(E^{n_j+1}_j+F_{x_j})/F_{x_j} = 0$$
by removing redundancies. For the weight associated to a subspace $Q^i_j \subset
Q_{x_j}$ one takes $$\max\{ \alpha^k_j : 1 \leq k \leq n_j,\ Q^i_j = (E^k_j + F_{x_j})
/F_{x_j}\}.$$
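As a hedged illustration (with the same kind of toy rank $2$ data as before, not drawn from the references), the weight a line subbundle inherits depends on the position of its fiber in the quasiparabolic flag.

```latex
% Let E have rank 2 with a single parabolic point x, flag
% E_x = E^1 \supsetneq E^2 \supsetneq 0 with \dim E^2 = 1,
% and weights \alpha^1 = 0, \alpha^2 = 1/2.
% For a line subbundle F \subset E there are two cases:
\begin{itemize}
\item If $F_x = E^2$, the filtration $F_x = E^1 \cap F_x = E^2 \cap F_x
\supsetneq 0$ has a redundancy; removing it leaves $F_x \supsetneq 0$ with
induced weight $\max\{\alpha^k : F_x = E^k \cap F_x\} = \alpha^2 = \tfrac{1}{2}$,
so $\on{par-deg}(F_\star) = \deg(F) + \tfrac{1}{2}$.
\item If $F_x \neq E^2$, then $E^2 \cap F_x = 0$, the induced weight is
$\alpha^1 = 0$, and $\on{par-deg}(F_\star) = \deg(F)$.
\end{itemize}
```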
\subsection{Stability of parabolic and coparabolic bundles}
\label{subsubsection:stability-parabolic}
\begin{definition}[Parabolic stability]
\label{definition:parabolic-stability}
A parabolic vector bundle $E_\star$ is {\em parabolically semistable}
(respectively, parabolically stable) if $\mu_\star(F_\star) \leq
\mu_\star(E_\star)$ (respectively $\mu_\star(F_\star)< \mu_\star(E_\star)$) for all nonzero proper parabolic subbundles $F_\star \subset E_\star$ (with the
induced parabolic structure as described above).
\end{definition}
\begin{definition}[Coparabolic stability]
\label{definition:coparabolic-stability}
A coparabolic bundle $E_\star$ is {\em coparabolically semistable}
if $E_\star$ is of the form $\widehat{F}_\star$ for $F$ a semistable
parabolic bundle.
\end{definition}
\begin{remark}
\label{remark:coparabolic-strict-slope}
Note that parabolic and coparabolic stability are both defined with
respect to parabolic bundles.
\end{remark}
\begin{remark}
\label{remark:coparabolic-stability}
This remark will not be needed in what follows.
It turns out that if $n > 0$
and a coparabolic bundle $E_\star=\widehat{F}_\star$ is coparabolically semistable, then for any
injection from a parabolic vector bundle $G_\star \hookrightarrow E_\star$ we have $\mu_\star(G_\star) <
\mu_\star(E_\star)$.
That is, although the definition
only gives $\mu_\star(G_\star) \leq \mu_\star(E_\star)$, equality of
slopes is not possible when $n > 0$.
The reason for this is that any such map $G_\star \to E_\star$ also
induces a map $G[-\varepsilon]_\star \to E_\star\to F_\star$ for some sufficiently
small $\varepsilon > 0$.
We then have $\mu_\star(G_\star) < \mu_\star(G[-\varepsilon]_\star) \leq
\mu_\star(F_\star)=\mu_\star(E_\star)$.
\end{remark}
\begin{lemma}
\label{lemma:quotient-semistable}
Suppose $E_\star$ is a parabolic bundle and $F_\star \subset E_\star$ is
a parabolic subbundle with the induced subbundle structure. If $Q_\star
= E_\star/F_\star$ then $\on{par-deg}(E_\star) = \on{par-deg}(Q_\star) +
\on{par-deg}(F_\star)$.
In particular, a parabolic bundle $E_\star$ is semistable (respectively,
stable) if and only if for every nonzero proper quotient
bundle $Q_\star$ we have $\mu_\star(E_\star) \leq \mu_\star(Q_\star)$ (respectively $\mu_\star(E_\star) <
\mu_\star(Q_\star)$).
\end{lemma}
\begin{proof}
The second statement follows from the first, because
$\frac{\on{par-deg}(F_\star)}{\rk(F_\star)} <
\frac{\on{par-deg}(E_\star)}{\rk(E_\star)}$
is equivalent to
$\frac{\on{par-deg}(E_\star)- \on{par-deg}(F_\star)}{\rk(E_\star) -
\rk(F_\star)} >
\frac{\on{par-deg}(E_\star)}{\rk(E_\star)}$, and similarly where one
replaces the inequalities with equalities.
The first statement is stated in
\cite[Part 3, \S1.A, p. 69, Remark
3]{seshadri:fibres-vectoriels-sur-les-courbes-algebriques},
together with the definition of exact sequence of parabolic bundles
\cite[Part 3, \S1.A, p. 68]{seshadri:fibres-vectoriels-sur-les-courbes-algebriques}.
\end{proof}
\subsection{Harder-Narasimhan filtrations}
\label{subsubsection:harder-narasimhan-parabolic}
It is a standard fact that parabolic vector bundles have a Harder-Narasimhan
filtration, and its proof is similar to the construction of Harder-Narasimhan
filtrations of vector bundles, see
\cite[Part 3, \S1.B, Theorem 8]{seshadri:fibres-vectoriels-sur-les-courbes-algebriques}.
\subsection{Serre duality}
If $E_\star$ is a parabolic sheaf on a scheme
$X$, we have $H^0(X, E_\star) := \on{Hom}(\mathscr O_X, E_\star) =
\on{Hom}(\mathscr O_X, E_0) = \Gamma(X, E)$, and one can define the
higher cohomology groups by taking the corresponding right derived functor, as in
\cite[p. 130]{yokogawa:infinitesimal-deformation}, where, more generally,
$\on{Ext}^i(E_\star, F_\star)$ is defined for parabolic $\mathscr O_X$-modules.
In general we have $H^i(X, E_\star)=H^i(X, E_0)$.
To state Serre duality, we need the notion of parabolic tensor product and
duality.
\begin{definition}
\label{definition:dualization}
For $F_\star$ a parabolic sheaf, define $F^\vee_\star :=
\mathscr{H}\kern -2pt om(F_\star, \mathscr O_X)_\star$.
\end{definition}
The following gives a useful alternate description of parabolic dualization.
\begin{lemma}[Parabolic dualization, cf. \protect{\cite[(3.1)]{yokogawa:infinitesimal-deformation}}]
\label{lemma:parabolic-dual-formula}
If $F_\star$ arises from a parabolic vector bundle, we have
$F^\vee_\alpha \simeq (\widehat{F}_{-\alpha})^\vee (-D)$.
\end{lemma}
\begin{definition}[Parabolic tensor product, cf. \protect{\cite[Example
3.2]{yokogawa:infinitesimal-deformation}}]
\label{definition:parabolic-tensor}
Let $\tau: X - D \to X$ denote the inclusion.
Suppose $E_\star, F_\star$ are both parabolic modules such that each $F_\alpha$
and $E_\alpha$ is locally free.
Define $(E_\star \otimes F_\star)_\alpha := \sum_{\alpha_1 + \alpha_2 =
\alpha} E_{\alpha_1} \otimes F_{\alpha_2}$, viewed as a subbundle of
$\tau_* \tau^*(E \otimes F)$. We take $i^{\alpha,\beta}_{E_\star \otimes F_\star}$ to be the natural inclusion map.
\end{definition}
\begin{remark}
\label{remark:general-tensor-product}
In \cite[pp. 136--137]{yokogawa:infinitesimal-deformation}, the tensor
product of two arbitrary parabolic sheaves is defined, but the
definition is more difficult to state, and we will not require this
greater generality.
\end{remark}
The next lemma states that parabolic tensor products and duals interact in the usual way with degree.
We omit the proof, which is a matter of unwinding definitions.
\begin{lemma}
\label{lemma:degree-in-duals}
For $E_\star$ and $F_\star$ two parabolic bundles, $\on{par-deg}(E_\star \otimes
F_\star) = \on{par-deg}(E_\star) \rk F_\star + \on{par-deg}(F_\star) \rk E_\star$ and
$\on{par-deg}(E_\star^\vee) = - \on{par-deg}(E_\star)$.
\end{lemma}
\begin{proposition}[Serre duality]
\label{proposition:serre-duality}
Suppose $X$ is a smooth projective $n$-dimensional variety over an
algebraically closed field $k$ and let $\omega_X$ denote the dualizing
sheaf on $X$. For all parabolic vector bundles $E_\star$, we have a canonical isomorphism
\begin{align*}
H^i(X, E_\star) \simeq H^{n-i}(X, \widehat{E_\star^\vee} \otimes
\omega_X(D))^\vee.
\end{align*}
\end{proposition}
\begin{proof}
The version in \cite[Proposition
3.7]{yokogawa:infinitesimal-deformation} states
$\on{Ext}^i_X(E_\star, F_\star \otimes \omega_X(D)) \simeq
\on{Ext}^{n-i}_X(F_\star, \widehat{E}_\star)^\vee.$
Using \cite[Lemma 3.6]{yokogawa:infinitesimal-deformation}, and taking
$E_\star = \mathscr O_X$, we find
$H^i(X, F_\star \otimes \omega_X(D)) \simeq H^{n-i}(X, F_\star^\vee
\otimes
\widehat{\mathscr O_X})^\vee \simeq H^{n-i}(X, \widehat{F_\star^\vee})^\vee$.
Now, taking $E_\star$ as in the statement to be
$F_\star \otimes \omega_X(D)$, we find $F_\star^\vee = E_\star^\vee
\otimes \omega_X(D)$ and so
$H^i(X, E_\star) \simeq H^{n-i}(X, \reallywidehat{E_\star^\vee \otimes
\omega_X(D)})^\vee \simeq
H^{n-i}(X, \widehat{E_\star^\vee} \otimes
\omega_X(D))^\vee.$
\end{proof}
\section{Background on Atiyah bundles and isomonodromic deformations}
\label{section:deformation-theory}
\subsection{The Atiyah bundle of a filtered vector bundle}
\label{subsection:atiyah-bundle}
We begin by defining the Atiyah bundle.
Let $C$ be a smooth projective curve.
Following
\cite[16.8.1]{EGAIV.4},
for $E$ a vector bundle on $C$, define $\on{Diff}^1(E,E)$ as follows: for
$U\subset C$ open, $\on{Diff}^1({E}, {E})(U)$ is the set of
$\mathbb{C}$-linear endomorphisms $\tau$ of $E(U)$, such that for each $f\in
\mathscr{O}_C(U), v\in E(U)$, we have that $$\tau_f: v\mapsto
\tau(fv)-f\tau(v)$$ is $\mathscr{O}_C$-linear.
Here $\tau_f$ measures the failure of $\tau$ to be $\mathscr{O}_C$-linear, in
that $\tau_f$ is zero for all $f$ if and only if $\tau$ is $\mathscr{O}_C$-linear.
\begin{definition}[The Atiyah bundle, see
\protect{\cite[p. 5]{BHH:very-stable}}]
\label{definition:atiyah}
Let $E$ be a
vector bundle on a curve $C$.
Define the Atiyah bundle $$\on{At}_C(E)\subset \on{Diff}^1({E}, {E})$$ as the
subsheaf with sections on an open set $U \subset C$ given as follows.
Let $\on{At}_C(E)(U)$ consist of those $\mathbb{C}$-endomorphisms $\tau \in
\on{Diff}^1({E}, {E})(U)$ such that for each $f\in \mathscr{O}_C(U)$, the endomorphism of ${E}$ defined by $$\tau_f: v\mapsto \tau(fv)-f\tau(v)$$ is multiplication by a section $\delta_\tau(f)\in \mathscr{O}_C(U)$.
\end{definition}
One can also construct Atiyah bundles associated to filtered bundles.
\begin{definition}[Atiyah bundle of a filtered vector bundle]
\label{definition:atiyah-filtered}
Let $P^\bullet := (0 = P^0 \subset P^1
\subset \cdots \subset P^m = E)$ be a filtration on $E$.
We define $$\on{At}_C(E, P^\bullet)\subset \on{At}_C(E)$$ to be the subsheaf
consisting of those endomorphisms that preserve $P^\bullet$.
\end{definition}
\begin{remark}
\label{remark:atiyah-exact-sequence}
From the definition, (see also \cite[(2.7)]{BHH:logarithmic}), there is a
short exact sequence
\begin{equation} \label{equation:non-logarithmic-atiyah-sequence} 0\to
\mathscr{E}\kern -1pt nd(E, P^\bullet)\overset{\iota}{\to} \on{At}_C(E,
P^\bullet)\overset{\delta}{\to} T_C\to 0,\end{equation} where
$\mathscr{E}\kern -1pt nd(E, P^\bullet)\subset \mathscr{E}\kern -1pt nd(E)$ is the subsheaf of $\mathscr{O}_C$-linear endomorphisms preserving $P^\bullet$, $\iota$ is the evident inclusion,
and $\delta$ sends a differential operator $\tau$ to the derivation
$$\delta_\tau: f\mapsto \delta_\tau(f)$$ defined in
\autoref{definition:atiyah}.
\end{remark}
\begin{remark}
There is an alternate, perhaps more geometric, description of $\on{At}_C(E,
P^\bullet)$. Namely, the filtration $P^\bullet$ gives a restriction of
the structure group of ${E}$ to a parabolic subgroup ${P}\subset
\on{GL}_n$, and hence gives rise to a natural ${P}$-torsor $p:\Pi\to C$
over $C$, which is a subscheme of the frame bundle of ${E}$ (i.e.~it
consists of those frames which are compatible with $P^\bullet$). The tangent
exact sequence $$0\to T_{\Pi/C}\to T_\Pi\to p^*T_C\to 0$$ naturally admits a ${P}$-linearization (for the ${P}$-action on $\Pi$)
and hence descends to a short exact sequence on
$C$, which is precisely \eqref{equation:non-logarithmic-atiyah-sequence}.
\end{remark}
Next, we introduce Atiyah bundles with respect to divisors.
\begin{definition}[Atiyah bundle of a filtered vector bundle with respect to a divisor]
\label{definition:atiyah-bundle-divisor}
Let $D\subset C$ be a reduced effective divisor.
The Atiyah bundle $\on{At}_{(C,D)}(E,P^\bullet)$ is defined as the preimage
$$\on{At}_{(C,D)}(E,P^\bullet):=\delta^{-1}(T_C(-D)),$$ where $\delta$ is the map appearing in Sequence \eqref{equation:non-logarithmic-atiyah-sequence} and where $T_C(-D)\hookrightarrow T_C$ is the natural inclusion.
If $P^\bullet = (0 = P^0 \subset P^1 = E)$ is the trivial filtration, we omit it from the notation, e.g.~we will use the notation $\on{At}_{(C,D)}(E)$
in place of $\on{At}_{(C,D)}(E, 0 \subset E)$
when convenient.
\end{definition}
The
following alternate viewpoint on connections will be useful.
\begin{proposition}\label{proposition:atiyah-splittings}
Suppose $E$ is a vector bundle on $C$, $P^\bullet$ a filtration on $E$, and $D \subset C$ a reduced
effective divisor.
There is a natural bijection between splittings of the Atiyah exact sequence
\begin{equation}
\label{equation:Atiyah-exact-sequence-normal}
0\to \mathscr{E}\kern -1pt nd(E, P^\bullet)\to \on{At}_{(C,D)}(E, P^\bullet)\to T_C(-D)\to
0
\end{equation}
and flat connections on $E$ with
regular singularities along $D$ and preserving $P^\bullet$, given by
adjointness. That is, given a connection $$\nabla: E\to E\otimes
\Omega^1_C(\log D)$$ preserving $P^\bullet$, we may by adjointness view
$\nabla$ as a map $q^\nabla: T_C(-D)\to
\mathscr{E}\kern -1pt nd_{\mathbb{C}}(E)$.
This map factors through $\on{At}_{(C,D)}(E, P^\bullet)$ and
yields a splitting of \eqref{equation:Atiyah-exact-sequence-normal}.
Moreover, this correspondence between flat connections and splittings is bijective.
\end{proposition}
\begin{proof}
This is a matter of unwinding definitions.
\end{proof}
We will pass freely between $\nabla$ and $q^\nabla$ and refer to each as a connection.
\subsection{The parabolic Atiyah bundle}
\label{subsection:parabolic-atiyah}
We next recall the generalization of the Atiyah bundle to the parabolic setting.
Let $C$ be a curve with reduced divisor $D=x_1+\cdots+x_n$ and let $E_\star$ be a
parabolic vector bundle on $(C,D)$.
It will be useful to recall the explicit description of $\on{End}(E_\star)$ given in
\autoref{remark:end-definition}.
To define the parabolic Atiyah bundle, we also need the following
definition.
\begin{definition}
Let
$\on{End}_j(E_\star)$ denote the image of the natural inclusion
$\mathscr{E}\kern -1pt nd(E_\star)_{x_j} \to \mathscr{E}\kern -1pt nd(E)_{x_j}$, viewed as a subspace of $\mathscr{E}\kern -1pt nd(E)_{x_j}$.
\end{definition}
\begin{remark}
We can write $\mathscr{E}\kern -1pt nd(E)/\mathscr{E}\kern -1pt nd(E_\star)$ as a direct sum of skyscraper
sheaves supported on the $x_j$.
More precisely, with $\on{End}_j(E_\star) \subset \mathscr{E}\kern -1pt nd(E)_{x_j}$ as in the definition above,
there is an exact sequence
\begin{equation}
\begin{tikzcd}
0 \ar {r} & \mathscr{E}\kern -1pt nd(E_\star) \ar {r} & \mathscr{E}\kern -1pt nd(E) \ar {r} &
\oplus_{j= 1}^n \mathscr{E}\kern -1pt nd(E)_{x_j}/\on{End}_j(E_\star) \ar {r} &
0.
\end{tikzcd}\end{equation}
\end{remark}
\subsubsection{The residue homomorphism}
\label{subsubsection:residue}
To define the Atiyah bundle associated to $E_\star$, we essentially
carve it out of the Atiyah bundle $\on{At}_{(C,D)}(E)$ by requiring our differential operators to preserve the quasiparabolic
filtration over each $x_j$. For this, we need a certain homomorphism
${\phi}_j: \on{At}_{(C,D)}(E)_{x_j}
\to \mathscr{E}\kern -1pt nd(E)_{x_j}$,
referred to as a {\em residue homomorphism}.
The residue homomorphism is
defined in, for
example, \cite[(2.9)]{BHH:parabolic}, and we also recall it now.
Given any smooth proper curve $C$ and reduced divisor $D \subset C$ with $x_j$ in the support of $D$,
the residue homomorphism at $x_j$ is a map ${\phi}_j: \on{At}_{(C,D)}(E)_{x_j}
\to \mathscr{E}\kern -1pt nd(E)_{x_j}$ defined as follows.
There is a commutative diagram of vector spaces
\begin{equation}
\begin{tikzcd}
0 \ar {r} & \mathscr{E}\kern -1pt nd(E)_{x_j} \ar {r}{\nu_j} \ar {d} &
\on{At}_{(C,D)}(E)_{x_j} \ar {r} \ar {d}{\alpha_j} & T_C(-D)_{x_j}
\ar {r} \ar {d}{\beta_j} & 0 \\
0 \ar {r} & \mathscr{E}\kern -1pt nd(E)_{x_j} \ar {r}{\mu_j} & \on{At}_C(E)_{x_j} \ar
{r}{\gamma_j} & (T_C)_{x_j} \ar {r} & 0.
\end{tikzcd}\end{equation}
The map $\beta_j$ is induced from the natural inclusion of invertible sheaves
$T_C(-D) \hookrightarrow T_C$, which vanishes along $D$; hence $\beta_j = 0$.
Therefore, $\gamma_j \circ \alpha_j = 0$, which means $\alpha_j$ factors through
$\mathscr{E}\kern -1pt nd(E)_{x_j}$. This produces the desired map
${\phi}_j: \on{At}_{(C,D)}(E)_{x_j} \to \mathscr{E}\kern -1pt nd(E)_{x_j}$,
satisfying the property that $\mu_j\circ \phi_j = \alpha_j$.
By definition of $\alpha_j$, the restriction of $\alpha_j$ to $\mathscr{E}\kern -1pt nd(E)_{x_j}$ is the identity.
That is, $\phi_j\circ \nu_j = \mathrm{id}$.
This gives us a splitting
$\on{At}_{(C,D)}(E)_{x_j} \simeq \mathscr{E}\kern -1pt nd(E)_{x_j} \oplus T_C(-D)_{x_j}$.
Now, let $z$ denote a uniformizer of the local ring of $C$
at $x_j$ and let $z \frac{\partial
}{\partial z}$ denote the corresponding section of $T_C(-D)$ at $x_j$.
Observe that this is independent of the choice of $z$.
Given $q^\nabla: T_C(-D) \to \on{At}_{(C,D)}(E)$ a connection with regular
singularities,
let $q^\nabla_{x_j}(z \frac{\partial }{\partial z}) \in \on{At}_{(C,D)}(E)_{x_j} \simeq \mathscr{E}\kern -1pt nd(E)_{x_j} \oplus T_C(-D)_{x_j}$ denote the image of $z
\frac{\partial }{\partial z}$
under $q^\nabla$ restricted to the fiber at $x_j$.
Define the {\em residue} $\on{Res}(\nabla)(x_j) \in \mathscr{E}\kern -1pt nd(E)_{x_j}$ as the
projection of
$q^\nabla_{x_j}(z \frac{\partial }{\partial z}) \in \mathscr{E}\kern -1pt nd(E)_{x_j} \oplus T_C(-D)_{x_j}$ to $\mathscr{E}\kern -1pt nd(E)_{x_j}$.
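The residue admits a familiar coordinate description, which we record as an illustrative example; the computation assumes a choice of local trivialization and is not needed elsewhere.
\begin{example}
Suppose that, in some local trivialization of $E$ near $x_j$ with local coordinate $z$ centered at $x_j$, the connection takes the form $$\nabla = d + A(z)\frac{dz}{z}$$ with $A(z)$ a holomorphic matrix of functions. Contracting with $z\frac{\partial}{\partial z}$, the operator $q^\nabla(z\frac{\partial}{\partial z})$ acts on sections by $v \mapsto z\frac{\partial v}{\partial z} + A(z)v$, and projecting the fiber at $x_j$ to the summand $\mathscr{E}\kern -1pt nd(E)_{x_j}$ yields $$\on{Res}(\nabla)(x_j) = A(0).$$
\end{example}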
Having defined the map $\phi_j$ above, we next define the Atiyah bundle
associated to a parabolic bundle.
\begin{definition}[Atiyah bundle of a parabolic bundle]
\label{definition:parabolic-atiyah-bundle}
Let $D\subset C$ be a reduced effective divisor
and let $E_\star = (E, \{E^i_j\}, \{\alpha^i_j\})$ be a parabolic bundle on $(C,
D)$.
For $\phi_j$ the residue homomorphism, as defined above in
\autoref{subsubsection:residue},
let $\widehat{\phi}_j: \on{At}_{(C, D)}(E) \to \mathscr{E}\kern -1pt nd(E)_{x_j}/\on{End}_j(E_\star)$ denote the composition
\begin{align*}
\on{At}_{(C, D)}(E) \to \on{At}_{(C,D)}(E)_{x_j} \xrightarrow{\phi_j}
\mathscr{E}\kern -1pt nd(E)_{x_j} \to \mathscr{E}\kern -1pt nd(E)_{x_j}/\on{End}_j(E_\star).
\end{align*}
Define $\on{At}_{(C, D)}(E_\star)$ as the coherent subsheaf of $\on{At}_{(C, D)}(E)$ given by
\begin{align*}
\on{At}_{(C,D)}(E_\star) := \ker\left( \on{At}_{(C,D)}(E)
\xrightarrow{\oplus_{j=1}^n \widehat{\phi}_j } \oplus_{j=1}^n
\mathscr{E}\kern -1pt nd(E)_{x_j}/\on{End}_j(E_\star) \right).
\end{align*}
Similarly, for $P^\bullet := (0 = P^0 \subset P^1
\subset \cdots \subset P^m = E)$ a filtration on $E$,
we let
$\on{At}_{(C,D)}(E_\star, P^\bullet) \subset \on{At}_{(C, D)}(E_\star)$
denote the coherent subsheaf
consisting of those endomorphisms that preserve $P^\bullet$.
\end{definition}
\begin{remark}
Using \autoref{definition:parabolic-atiyah-bundle}
and \eqref{equation:non-logarithmic-atiyah-sequence},
we find that $\on{At}_{(C,D)}(E_\star, P^\bullet)$ fits into a short exact sequence
\begin{equation}\label{equation:Atiyah-exact-sequence}
0\to \mathscr{E}\kern -1pt nd(E_\star, P^\bullet)\to \on{At}_{(C,D)}(E_\star, P^\bullet)\to T_C(-D)\to
0.
\end{equation}
By comparing \eqref{equation:Atiyah-exact-sequence} for a filtration $P^\bullet$
and the trivial filtration, we obtain
the short exact sequence
\begin{equation}
\label{equation:quotient-atiyah-bundles}
\begin{tikzcd}
0 \ar {r} & \on{At}_{(C,D)}(E_\star, P^\bullet) \ar {r} &
\on{At}_{(C,D)}(E_\star) \ar {r}
& \mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star, P^\bullet)\ar {r} & 0,
\end{tikzcd}\end{equation}
where $\mathscr{E}\kern -1pt nd(E_\star, P^\bullet) \subset \mathscr{E}\kern -1pt nd(E_\star)$ is the coherent subsheaf
consisting of those endomorphisms preserving the filtration $P^\bullet$.
\end{remark}
\subsection{Parabolic structure and connections}
\label{subsection:atiyah-properties}
We continue our review of Atiyah bundles by recalling the parabolic bundle
associated to a connection, and a constraint on irreducibility of connections.
\begin{definition}
\label{definition:associated-parabolic}
Let $q^\nabla: T_C(-D) \to \on{At}_{(C,D)}(E)$ be a connection on $E$
with regular singularities along $D$.
Let $\on{Res}(\nabla)(x_j) \in \mathscr{E}\kern -1pt nd(E)_{x_j}$ denote the residue
of $\nabla$ at $x_j$, described in \autoref{subsubsection:residue}.
Suppose the eigenvalues of $\on{Res}(\nabla)(x_j)$ are $\eta_j^1, \ldots,
\eta^{s_j}_j$. Let $$\lambda^i_j:=\on{Re}(\eta^i_j)-\lfloor \on{Re}(\eta^i_j)\rfloor \in[0,1)$$ denote the fractional part of the real
part of $\eta^i_j$. Reorder the $\lambda^i_j$ and remove repetitions so that
\begin{align*}
0 \leq \lambda^1_j < \lambda^2_j < \cdots < \lambda_j^{n_j} < 1.
\end{align*}
Define $E^i_j \subset E_{x_j}$ as the sum of all generalized eigenspaces
of $\on{Res}(\nabla)(x_j)$
such that the fractional part of the real part of the associated
eigenvalue is $\geq \lambda^i_j$.
The data $(E, \{E^i_j\}, \{\lambda^i_j\})$ is the {\em parabolic bundle
associated to the connection} $\nabla$.
By \cite[Lemma 4.1]{BHH:parabolic}, the connection $q^\nabla: T_C(-D) \to
\on{At}_{(C, D)}(E)$
factors through $\on{At}_{(C, D)}(E_\star) \subset \on{At}_{(C, D)}(E)$
and we denote the induced map by $q^\nabla : T_C(-D) \to \on{At}_{(C,
D)}(E_\star)$ as well.
If the real part of each $\eta^i_j$ lies in $[0,1)$, then $(E_\star,
\nabla)$ is
called the {\em Deligne canonical extension} of $(E|_{C \setminus D},
\nabla|_{C \setminus D})$.
\end{definition}
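The following example, included only for illustration, shows how the recipe above converts residue eigenvalues into parabolic weights.
\begin{example}
Suppose $E$ has rank $2$ and $\on{Res}(\nabla)(x_j)$ has eigenvalues $\eta^1_j = \frac{1}{4}$ and $\eta^2_j = -\frac{2}{3}$. The fractional parts of the real parts are $\frac{1}{4}$ and $-\frac{2}{3}-(-1) = \frac{1}{3}$, so after reordering $\lambda^1_j = \frac{1}{4} < \lambda^2_j = \frac{1}{3}$. The resulting quasiparabolic filtration is $$E^1_j = E_{x_j} \supset E^2_j = \ker\left(\on{Res}(\nabla)(x_j) + \tfrac{2}{3}\right),$$ the generalized eigenspace for $\eta^2_j$, with weights $\lambda^1_j$ and $\lambda^2_j$ respectively.
\end{example}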
\begin{proposition}
\label{proposition:irreduciblity-splitting-condition}
Let $(E, \nabla: {E}\to {E}\otimes \Omega^1_C(\log D))$ be a flat
vector bundle on $C$ with regular singularities along $D$,
and let $E_\star$ denote the parabolic bundle associated to $\nabla$, as in
\autoref{definition:associated-parabolic}.
Suppose the monodromy representation $\rho$
associated to $(E, \nabla)|_{C\setminus D}$ via the Riemann-Hilbert
correspondence is irreducible. Let $q^\nabla:
T_C(-D)\to \on{At}_{(C,D)}(E_\star)$ be the corresponding splitting of the Atiyah
exact sequence via \autoref{proposition:atiyah-splittings} and
\autoref{definition:associated-parabolic}. Then for any
nontrivial filtration $P^\bullet$ of $E$, the composition
$$T_C(-D)\overset{q^\nabla}{\to} \on{At}_{(C,D)}(E_\star){\to}
\on{At}_{(C,D)}(E_\star)/\on{At}_{(C,D)}(E_\star, P^\bullet)\simeq
\mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star,P^\bullet)$$ is nonzero.
\end{proposition}
\begin{proof}
Assume not. Then $q^\nabla$ has image in $\on{At}_{(C,D)}(E_\star, P^\bullet)$, and
hence yields a splitting of \eqref{equation:Atiyah-exact-sequence}.
Using \autoref{proposition:atiyah-splittings},
the corresponding connection with regular singularities on $E$ preserves $P^1$ and hence yields a flat connection on $P^1$ with regular singularities along
$D$, whose monodromy is a sub-representation of the monodromy representation
$\rho$ associated to $({E}, \nabla)|_{C\setminus D}$.
But this contradicts the
assumption that $\rho$ is irreducible. (See \cite[Proof of Proposition
5.3]{BHH:logarithmic} for a similar argument in the non-parabolic setting.)
\end{proof}
\subsection{Isomonodromic deformations}
\label{subsection:isomonodromic-deformations}
We next recall the notion of isomonodromic deformation.
We also define the notion of an ``isomonodromic deformation to an analytically
general nearby curve'' which appears in many of our main results.
\begin{notation}
\label{notation:curve-and-sections}
Let $\mathscr{C}, \Delta$ be complex manifolds, and let $\pi: \mathscr{C}\to \Delta$
be a proper holomorphic submersion with connected fibers of relative dimension
one, with $\Delta$ contractible. Let $$s_1, \ldots, s_n: \Delta \to
\mathscr{C}$$ be disjoint sections to $\pi$, and let $\mathscr{D}$ be the union
$$\mathscr{D}:=\bigcup_i \on{im}(s_i).$$ Given a point $0\in \Delta$, let
$(C, D) := (\pi^{-1}(0), \pi^{-1}(0) \cap \mathscr D)$ and further assume
$(C, D)$ is hyperbolic.
\end{notation}
\begin{lemma}
\label{lemma:iso-extension}
With notation as in \autoref{notation:curve-and-sections}, let $$(E, \nabla: E \to E\otimes \Omega^1_C(\log
D))$$ be a flat vector bundle on $C$ with regular singularities along
$D$. Such a logarithmic flat vector bundle extends canonically to a
logarithmic flat vector bundle $$(\mathscr{E}, \widetilde{\nabla}:
\mathscr{E}\to \mathscr{E}\otimes \Omega^1_{\mathscr{C}}(\log
\mathscr{D}))$$ on $\mathscr{C}$ with regular singularities along $\mathscr{D}$.
\end{lemma}
\begin{proof}
This follows from Deligne's work on differential equations
with regular singularities \cite{deligne:regular-singular} and is explained in
\cite[Theorem 3.4]{Heu:universal-isomonodromic}, following work of Malgrange
\cite{malgrange:I, malgrange:II}.
In particular, \cite[Th\'eor\`eme 2.1]{malgrange:I} explains the case where $C$ has
genus zero, and the general case is similar.
We now recapitulate the proof.
The restriction
of $({E}, \nabla)$ to $C\setminus D$ is a flat vector bundle and hence
gives rise to a locally constant sheaf of $\mathbb{C}$-vector spaces
$$\mathbb{V}:=\ker(\nabla)$$ on $C\setminus D$. As $\Delta$ is contractible, the
inclusion $$C\setminus D\hookrightarrow \mathscr{C}\setminus \mathscr{D}$$ is a
homotopy equivalence; thus $\mathbb{V}$ extends uniquely (up to canonical
isomorphism) to a local system $\widetilde{\mathbb{V}}$ on $\mathscr{C}\setminus
\mathscr{D}$.
Manin's local results on extending flat vector bundles across divisors
\cite[Proposition 5.4]{deligne:regular-singular} imply that there is a
canonical extension, unique up to unique isomorphism,
$$(\mathscr{E}, \widetilde{\nabla}:
\mathscr{E}\to \mathscr{E}\otimes \Omega^1_{\mathscr{C}}(\log \mathscr{D}))$$ of
$$(\widetilde{\mathbb V}\otimes_{\mathbb{C}}\mathscr{O}_{\mathscr{C}\setminus
\mathscr{D}}, \on{id}\otimes d)$$ to a flat vector bundle on $\mathscr{C}$ with
regular singularities along $\mathscr{D}$, equipped with an isomorphism
$({\mathscr{E}}, \widetilde{\nabla})|_C\simeq (E,\nabla).$
\end{proof}
Using the above, we are ready to define isomonodromic deformations.
\begin{definition}[Isomonodromic Deformation]
\label{definition:isomonodromy}
With notation as in \autoref{notation:curve-and-sections}, let $D=x_1+ \cdots+ x_n$, so that $(C, D)$ is an $n$-pointed hyperbolic curve of genus $g$. Let $(E, \nabla)$ be a flat vector bundle on $C$ with regular singularities at the $x_i$.
We call the extension $(\mathscr{E}, \widetilde{\nabla})$ as in
\autoref{lemma:iso-extension} \emph{the isomonodromic deformation} of $({E}, \nabla)$.
If $\Delta=\mathscr{T}_{g,n}$ is the universal cover
of the analytic stack $\mathscr{M}_{g,n}$, and $\mathscr{C}\to \Delta$ is the universal curve,
we call the isomonodromic deformation over such $\Delta$ \emph{the universal
isomonodromic deformation}.
\end{definition}
\begin{definition}
\label{definition:nearby}
With notation as in \autoref{definition:isomonodromy}, let $\Delta$ be
the universal cover of $\mathscr{M}_{g,n}$.
We use \emph{an isomonodromic deformation to a nearby curve} to denote the
restriction of $(\mathscr{E}, \widetilde{\nabla})$ to any fiber of $\mathscr{C}
\to \Delta$.
We use
\emph{an isomonodromic deformation to an analytically general nearby curve}
to denote the restriction of $(\mathscr{E}, \widetilde{\nabla})$ to a
general fiber of $\mathscr{C}
\to \Delta$, i.e., a fiber in the complement of a nowhere dense closed analytic
subset.
\end{definition}
\begin{remark}
The construction of \autoref{lemma:iso-extension} is functorial: given a commutative diagram
$$\xymatrix{
D \ar@{^(->}[d] \ar@{^(->}[r] & \mathscr{D} \ar[r]\ar@{^(->}[d] & \mathscr{D}'\ar@{^(->}[d] \\
C \ar@{^(->}[r] \ar[d]& \mathscr{C} \ar[r] \ar[d]^\pi & \mathscr{C}' \ar[d]^{\pi'}\\
0 \ar@{^(->}[r] & \Delta \ar[r] & \Delta'
}$$
and a flat vector bundle $({E}, \nabla)$ on $C$ with regular
singularities along $D$, the isomonodromic deformation over $\Delta'$ pulls back
to the isomonodromic deformation over $\Delta$.
\end{remark}
\begin{example}[Families of families, essentially in \cite{doran:isomonodromic}]
With notation as in \autoref{notation:curve-and-sections}, suppose $\mathscr{D}=\emptyset$, and let $\tilde h:
\mathscr{X}\to \mathscr{C}$ be a proper holomorphic submersion. Let
$X=h^{-1}(C)$, and let $h=\tilde h|_X$. Then for each $i\geq 0,$ $R^i\tilde
h_*\Omega^\bullet_{dR, \mathscr{X}/\mathscr{C}}$ with its Gauss-Manin connection
is the isomonodromic deformation of $R^ih_*\Omega^\bullet_{dR, X/C}$ with its
Gauss-Manin connection.
\end{example}
\begin{remark}
\label{remark:parabolic-structure-on-isomonodromic-deformation}
Using residues of the connection, we were able to associate to $(E,
\nabla)$ a certain parabolic bundle $E_\star$ in
\autoref{definition:associated-parabolic}.
This induces the structure of a relative parabolic bundle on the
isomonodromic deformation $(\mathscr E, \widetilde{\nabla})$ of $(E,
\nabla)$, which we denote $\mathscr E_\star$, as explained in
\cite[\S4.3]{BHH:parabolic}.
Let $\mathscr C_t$ denote the fiber of $\mathscr C \to \Delta$
over the point $t \in \Delta$ defining the isomonodromic deformation.
By \cite[Lemma 4.2]{BHH:parabolic}, the parabolic weights and the
associated dimensions of the graded parts of the quasiparabolic
structure corresponding to those weights on $\mathscr E|_{\mathscr C_t}$ are independent
of the point $t \in \Delta$. Indeed, the parabolic structure on $\mathscr{E}|_{\mathscr{C}_t}$ is exactly the one associated to the natural connection on $\mathscr{E}|_{\mathscr{C}_t}$ obtained by restricting $\widetilde{\nabla}$.
\end{remark}
\begin{definition}
\label{definition:refinements}
Given two parabolic bundles $F_\star = (F, \{F^i_j\}, \{\alpha^i_j\})$
and $E_\star = (E, \{E^i_j\}, \{\beta^i_j\})$
with respect to $D = x_1 + \cdots + x_n$,
we say $F_\star$ is
{\em refined} by $E_\star$ if $F = E$, and for each $F^i_j$ there is
some $i'$ with $F^i_j = E^{i'}_j$ under the identification $E_{x_j}=
F_{x_j}$, and $\alpha^{i}_j = \beta^{i'}_j$.
In the case that the parabolic structures of $F_\star$ at $x_{j_1},
\ldots, x_{j_k}$ are trivial, i.e., they are of the form $F_{x_{j_t}} = F^1_{j_t} \supset
F^2_{j_t} = 0$,
we consider $F$ as a parabolic bundle with respect to
$D \setminus \{ x_{j_1}, \ldots, x_{j_k}\}$.
\end{definition}
\begin{remark}
\label{remark:refinement}
Continuing with the notation of
\autoref{remark:parabolic-structure-on-isomonodromic-deformation},
for any parabolic bundle $F_\star$ refined by $E_\star$,
there is a corresponding
isomonodromic deformation of $(F_\star,\nabla)$ over $\Delta$ given by taking the
parabolic structure from
\autoref{remark:parabolic-structure-on-isomonodromic-deformation}
and forgetting
part of the parabolic structure on $\mathscr{E}_\star$.
As an important special case, the trivial parabolic structure on $E$ described in \autoref{example:vector-bundle} is refined by the parabolic bundle $E_\star$ arising from \autoref{definition:associated-parabolic}.
\end{remark}
\subsection{Deformation theory of isomonodromic deformations}\label{subsection:filtered-bundle-deformations}
We now analyze the infinitesimal deformation theory of isomonodromic
deformations.
\begin{notation}\label{notation:deformation-theory-iso}
Let $\on{Art}_{\mathbb{C}}$ be the category of local Artin
$\mathbb{C}$-algebras. Let $C$ be a smooth proper curve over $\mathbb{C}$,
$D\subset C$ a reduced effective divisor with $D=x_1+\cdots+x_n$, and $$({E}, \nabla: {E}\to
{E}\otimes\Omega^1_C(\log D))$$ a flat vector bundle on $C$ with regular
singularities along $D$. Let $P^\bullet$ be a filtration of $E$.
\end{notation}
\begin{definition}[Deformations of a curve with divisor]
\label{definition:curve-deformation}
Let $$\on{Def}_{(C,D)}: \on{Art}_{\mathbb{C}}\to
\on{Set}$$ be the functor sending a local Artin $\mathbb{C}$-algebra
$(A,\mathfrak m, \kappa)$ (so $\mathfrak m$ is the maximal ideal and $\kappa$ is
the residue field) to the
set of flat deformations of $(C,D)$ over $A$.
More precisely, it assigns to $A$ the set of those
$(\mathscr C, \mathscr D, q, f)$ where
$q: \mathscr C \to \on{Spec} A$ is a flat morphism, $\mathscr D \subset \mathscr C$
is a relative Cartier divisor over $\on{Spec} A$ and $f: C \to \mathscr C$ is a map
inducing an isomorphism $C \to \mathscr C \times_{\on{Spec} A} \on{Spec} \kappa$
taking $D$ isomorphically to $\mathscr D \times_{\on{Spec} A} \on{Spec} \kappa$.
\end{definition}
\begin{proposition}\label{proposition:curve-deformation}
With notation as in \autoref{definition:curve-deformation}, there is a
canonical and functorial bijection
$$\on{Def}_{(C,D)}(\mathbb{C}[\varepsilon]/\varepsilon^2)\overset{\sim}{\to} H^1(C, T_C(-D)).$$
\end{proposition}
\begin{proof}
This is standard, see
\cite[Proposition 3.4.17]{sernesi:deformations-of-algebraic-schemes}.
\end{proof}
We next generalize the above to describe the deformation theory of filtered
vector bundles on curves.
\begin{definition}[Deformations of a parabolic filtered vector bundle]
Let
$$\on{Def}_{(C,D, {E_\star}, P^\bullet)}:\on{Art}_{\mathbb{C}}\to \on{Set}$$ be the functor
sending $A$ to the set of flat deformations of $(C, D, E_\star, P^\bullet)$ over $A$.
More precisely, it assigns to $A$ the set of
$(\mathscr C, \mathscr D, q, f, \mathscr E, \{\mathscr E^i_j\}, \mathscr{P}^\bullet, \psi)$
where $(\mathscr C, \mathscr D, q, f)$ is a flat deformation of $(C,D)$ over $A$
as in \autoref{definition:curve-deformation}, $\mathscr E$ is a vector bundle on $\mathscr C$,
$\oplus_j\mathscr E^i_j$ is a sub-bundle of $\mathscr{E}|_{\mathscr D}$ for each $i$,
$\mathscr{P}^\bullet$ is a filtration of $\mathscr{E}$ by sub-bundles,
and $\psi: f^* (\mathscr E, \mathscr{P}^\bullet) \to (E,
P^\bullet)$ is an isomorphism of filtered vector bundles on $C$ inducing an isomorphism $\mathscr{E}^i_j|_{x_j}\overset{\sim}{\to} E^i_j$ for each $i,j$.
\end{definition}
\begin{proposition}\label{proposition:filtered-bundle-deformation}
Let $D\subset C$ be a reduced effective divisor on a smooth projective curve $C$, and let $(E_\star, P^\bullet)$ be a filtered parabolic vector bundle on $(C, D)$.
There is a canonical and functorial bijection $$\on{Def}_{(C, D, E_\star,
P^\bullet)}(\mathbb{C}[\varepsilon]/\varepsilon^2)\overset{\sim}{\to} H^1(C,
\on{At}_{(C,D)}(E_\star,P^\bullet)).$$
\end{proposition}
\begin{proof}
In the non-parabolic case, this is explained in \cite[\S2.2]{BHH:logarithmic}.
The parabolic case is described in \cite[Lemma 3.2]{BHH:parabolic} for
the case of a $1$-step filtration $P^\bullet = (0 \subset P^1 \subset
E)$, and the case of arbitrary length filtrations is analogous.
\end{proof}
\begin{remark}
If $P^\bullet$ is trivial, we omit it from the notation. In particular,
$$\on{Def}_{(C, D,
E_\star)}(\mathbb{C}[\varepsilon]/\varepsilon^2)\overset{\sim}{\to} H^1(C,
\on{At}_{(C,D)}(E_\star)).$$
\end{remark}
Now, begin with a flat vector bundle $(E, \nabla)$ with regular singularities
along $D \subset C$, and let $E_\star$ denote the associated parabolic bundle defined in
\autoref{definition:associated-parabolic}.
There is an evident natural transformation $$\on{Forget}: \on{Def}_{(C, D,
{E_\star})}\to \on{Def}_{(C,D)}$$ given by forgetting ${E_\star}$. The construction of
isomonodromic deformations yields a section $$\on{iso}: \on{Def}_{(C,D)}\to
\on{Def}_{(C,D,E_\star)}$$ to this map (which depends on $\nabla$), as we now spell
out.
\begin{proposition}
\label{proposition:connection-h1}
With notation as above, let $\delta: \on{At}_{(C,D)}(E_\star)\to T_C(-D)$ be the natural quotient map, and
$$q^\nabla: T_C(-D)\to\on{At}_{(C,D)}({E_\star})$$ be the section to $\delta$
described in \autoref{definition:associated-parabolic}
arising
from $\nabla$ via \autoref{proposition:atiyah-splittings} and
\cite[Lemma 4.1]{BHH:parabolic}.
Under the natural identifications
$$\on{Def}_{(C,D)}(\mathbb{C}[\varepsilon]/\varepsilon^2)\overset{\sim}{\to}
H^1(C, T_C(-D))$$ and $$\on{Def}_{(C,D,
{E_\star})}(\mathbb{C}[\varepsilon]/\varepsilon^2)\overset{\sim}{\to} H^1(C,
\on{At}_{(C,D)}({E_\star}))$$
arising from \autoref{proposition:curve-deformation} and \autoref{proposition:filtered-bundle-deformation}, the two squares below commute:
$$\xymatrix@C=.8em{
\on{Def}_{(C,D, {E_\star})}(\mathbb{C}[\varepsilon]/\varepsilon^2) \ar[r]^-\sim
\ar[d]_{\on{Forget}}& H^1(C, \on{At}_{(C,D)}({E_\star})) \ar[d]_{\delta_*} &
\on{Def}_{(C,D, {E_\star})}(\mathbb{C}[\varepsilon]/\varepsilon^2) \ar[r]^-\sim &
H^1(C, \on{At}_{(C,D)}({E_\star})) \\
\on{Def}_{(C,D)}(\mathbb{C}[\varepsilon]/\varepsilon^2) \ar[r]^-\sim & H^1(C,
T_C(-D)) & \on{Def}_{(C,D)}(\mathbb{C}[\varepsilon]/\varepsilon^2) \ar[r]^-\sim
\ar[u]_{\on{iso}} & H^1(C, T_C(-D)) \ar[u]_{(q^\nabla)_*}.
}$$
\end{proposition}
\begin{proof}
This is explained in \cite[Lemma 3.1 and Lemma 4.3]{BHH:parabolic}.
Also see \cite[\S2.2 and \S 4.1]{BHH:logarithmic} for the non-parabolic
case.
\end{proof}
We recall one additional result describing when a filtration extends to a
deformation.
The proof of the following lemma in the non-parabolic case is explained following the proof of
\cite[Lemma 3.1]{BHH:logarithmic}.
\begin{lemma}
\label{lemma:parabolic-extension-obstruction}
Suppose we are given $(C,D,E_\star,\nabla)$ as in
\autoref{proposition:connection-h1}
and a filtration $P^\bullet$ of $E$ as in
\autoref{notation:deformation-theory-iso}.
Assume
further we have a first-order deformation $(\mathscr C, \mathscr D)$ of $(C, D)$
corresponding to an element $s \in \on{Def}_{(C,D)}(\mathbb{C}[\varepsilon]/\varepsilon^2) \simeq H^1(C, T_C(-D))$.
With $q^\nabla$ as in
\autoref{proposition:connection-h1},
suppose $q^\nabla(s)$
corresponds to a deformation $(\mathscr C, \mathscr D, \mathscr E_\star)$ of
$(C,D,E_\star)$
in which $P^\bullet \subset E$ admits an extension to a filtration $\mathscr
P^\bullet$
of $\mathscr E$.
Then $$q^\nabla(s) \in \ker
\left(H^1(C, \on{At}_{(C,D)}(E_\star)) \to H^1(C,
\mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star, P^\bullet))\right).$$
\end{lemma}
\begin{proof}
Note that the map
$$H^1(C, \on{At}_{(C,D)}(E_\star)) \to
H^1(C,\mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star, P^\bullet))$$
is induced by the surjection of sheaves
$\on{At}_{(C,D)}(E_\star)
\to \mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star, P^\bullet)$
from \eqref{equation:quotient-atiyah-bundles}.
In the above situation,
$q^\nabla(s) \in H^1(C, \on{At}_{(C,D)}(E_\star))$ is in the image of the natural map
$$H^1(C, \on{At}_{(C,D)}(E_\star, P^\bullet))\to H^1(C,
\on{At}_{(C,D)}(E_\star))$$ and the composition
$$H^1(C, \on{At}_{(C,D)}(E_\star, P^\bullet)) \to H^1(C,
\on{At}_{(C,D)}(E_\star)) \to H^1(C,
\mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star, P^\bullet))$$
vanishes. Indeed, this composition is part of the long exact sequence in cohomology induced by the short exact
sequence of sheaves \eqref{equation:quotient-atiyah-bundles}.
Therefore,
$q^\nabla(s) \in \ker
\left(H^1(C, \on{At}_{(C,D)}(E_\star)) \to H^1(C,
\mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star, P^\bullet))\right).$
\end{proof}
\section{Hodge-theoretic preliminaries}\label{section:hodge-theoretic-preliminaries}
We briefly recall the definition of a variation of Hodge structure, and some standard positivity and semistability results for the (parabolic Higgs) bundles associated to such variations.
In particular, \autoref{lemma:positive-Hodge-bundle}, which shows the first
filtered piece of the Hodge filtration tends to have positive parabolic degree, is crucial
to our semistability arguments.
The properties in \autoref{prop:basic-facts} will also be used repeatedly in this
paper.
\subsection{Complex variations of Hodge structure}
Let $X$ be a smooth irreducible complex variety.
\begin{definition}[Polarizable complex variations of Hodge structure]
\label{definition:complex-variation}
A {\em complex variation of Hodge structure} on $X$ is a triple $(V,
V^{p,q}, D)$, where $V$ is a $C^\infty$ complex vector bundle on $X$,
$V=\oplus V^{p,q}$ is a direct sum decomposition, and $D$ is a flat
connection satisfying Griffiths transversality: $$D(V^{p,q})\subset
A^{1,0}(V^{p,q})\oplus A^{0,1}(V^{p,q})\oplus A^{1,0}(V^{p-1,q+1})\oplus
A^{0,1}(V^{p+1, q-1}).$$
A {\em polarization} on $(V, V^{p,q}, D)$ is a flat Hermitian form $\psi$ on $V$ such that the $V^{p,q}$ are orthogonal to one another under $\psi$, and such that $(-1)^p\psi$ is positive definite on each $V^{p,q}$.
A {\em polarizable complex variation of Hodge structure} is a complex
variation of Hodge structure which admits a polarization.
We call the holomorphic flat vector bundle $(E,
\nabla):=(\ker(D)\otimes_{\mathbb{C}} \mathscr{O}, \on{id}\otimes d)$ the
{\em holomorphic flat vector bundle associated to the complex variation of
Hodge structure}. The filtration $F^pV:=\oplus_{j\geq p} V^{j,q}$ of $V$ induces a decreasing \emph{Hodge filtration} $F^\bullet V$ by holomorphic sub-bundles, such that \begin{equation}\label{eqn:griffiths-transversality} \nabla(F^p)\subset F^{p-1}\otimes \Omega^1_X.\end{equation}
If $\mathbb{V}$ is a complex local system on $X$ which is isomorphic to
$\ker(D)$ for some polarizable complex variation of Hodge structure $(V, V^{p,q}, D)$, we say that $\mathbb{V}$ \emph{underlies a polarizable complex variation of Hodge structure}.
\end{definition}
For the next definition, recall that, in the case of curves, we defined the
residues of a connection with regular singularities in
\autoref{subsubsection:residue}.
For the case of higher-dimensional varieties, see
\cite[p.~53]{deligne:regular-singular}.
\begin{definition}[Deligne canonical extension \protect{\cite[Remarques 5.5(i)]{deligne:regular-singular}}]
Let $\overline{X}$ be a smooth projective variety containing $X$ as a dense open subvariety with simple normal crossings complement $Z$.
Let $(E, \nabla)$ be a flat holomorphic vector bundle on $X$. The \emph{Deligne canonical extension}
$(\overline{E}, \overline{\nabla}: \overline{E}\to \overline{E}\otimes
\Omega^1_{\overline{X}}(\log Z))$ of $(E, \nabla)$ to $\overline{X}$ is the
unique flat vector bundle on $\overline{X}$ with regular singularities
along $Z$, equipped with an isomorphism $(\overline{E},
\overline{\nabla})|_X\overset{\sim}{\to}(E, \nabla)$, characterized by the
property that all eigenvalues of its residues
along components of $Z$
have real parts lying in $[0,1)$.
\end{definition}
\begin{definition}[The associated Higgs bundle]
Let $(E, F^\bullet, \nabla)$ be a holomorphic vector bundle $E$ on a smooth variety $\overline{X}$, with a flat connection $\nabla$ with regular singularities along a simple normal crossings divisor $Z\subset\overline{X}$, and a decreasing filtration $F^\bullet$ by holomorphic sub-bundles satisfying the Griffiths transversality condition \begin{equation}\label{eqn:griffiths-transversality-2} \nabla(F^p)\subset F^{p-1}\otimes \Omega^1_{\overline{X}}(\log Z).\end{equation} The \emph{associated Higgs bundle} is the pair $(\oplus_i \on{gr}^i_{F^\bullet}E, \theta)$, where the \emph{Higgs field} $$\theta:=\bigoplus_i (\theta_i: \on{gr}^i_{F^\bullet}E\to \on{gr}^{i-1}_{F^\bullet}E\otimes\Omega^1_{\overline{X}}(\log Z))$$ is the $\mathscr{O}_{\overline{X}}$-linear map induced by $\nabla.$
The vector bundle $E$ canonically has the structure of a parabolic bundle
$E_\star$
(if $X$ is a curve, this structure is described in
\autoref{definition:associated-parabolic}, and it is
described in general in \cite[Proposition 5.4]{arapura2019vanishing}).
This structure induces the structure of a parabolic bundle on $\oplus_i
\on{gr}^i_{F^\bullet} E_\star$, viewed as a direct sum of subquotients of $E$
via \autoref{subsubsection:induced-subbundle}, and the resulting parabolic
structure is preserved by $\theta$. We refer to the pair $(\oplus_i
\on{gr}^i_{F^\bullet}E_\star, \theta)$ with its parabolic structure as the \emph{parabolic Higgs bundle} associated to the variation of Hodge structure.
\end{definition}
We collect some basic facts about polarizable complex variations of Hodge structure, the canonical extensions thereof, and their associated Higgs bundles:
\begin{proposition}\label{prop:basic-facts}
Let $\overline{X}$ be a smooth projective curve, $Z\subset
\overline{X}$ a reduced divisor, and let
$X=\overline{X}\setminus Z$.
Let $(V, V^{p,q}, D)$ be a polarizable complex variation of Hodge structure on
$X$,
and let $(E, F^\bullet,
\nabla)$ be the holomorphic flat vector bundle associated to this variation of
Hodge structure, with its Hodge filtration. Let $(\overline{E}, \overline{\nabla})$ be its Deligne canonical extension.
Let $\overline{E}_\star$ be the parabolic bundle associated to $(\overline{E}, \overline{\nabla})$, as defined in
\autoref{definition:associated-parabolic}.
\begin{enumerate}
\item The local system $\mathbb{V}:=\ker(\nabla)$ associated to $(E, \nabla)$ is semisimple.
\item The local system $\mathbb{V}$ may be canonically decomposed as $$\mathbb{V}\simeq \bigoplus_i \mathbb{L}_i\otimes
W_i,$$ where the $\mathbb{L}_i$ are pairwise non-isomorphic
irreducible complex local systems on $X$, and each $W_i$ is a complex vector
space. Each $\mathbb{L}_i$ underlies a polarizable complex
variation of Hodge structure, and each $W_i$ carries a complex polarized Hodge structure, both unique up to shifting the grading, and compatible with the variation carried by $\mathbb{V}$.
\item $\overline{E}_\star$ has parabolic degree zero.
\item There exists a canonical extension of $F^\bullet$ to $\overline{E}$, such that $(\overline{E}, F^\bullet, \overline{\nabla})$ satisfies the Griffiths transversality condition (\ref{eqn:griffiths-transversality-2}).
\item The parabolic Higgs bundle $(\oplus_i
\on{gr}^i_{F^\bullet}\overline{E}_\star, \theta)$ associated
to $(\overline{E}_\star, F^\bullet, \overline{\nabla})$ is
parabolically polystable of degree zero.
That is, there exists a collection of parabolic vector bundles
$E^j_\star$ with
$\deg(E^j_\star) = 0$ and maps
$\theta^j : E^j_\star \to E^j_\star \otimes \Omega^1_{\overline{X}}(\log Z)$ such that
$(\oplus_i\on{gr}^i_{F^\bullet}\overline{E}_\star,
\theta)=\oplus_j (E^j_\star, \theta^j)$,
and such that each $(E^j_\star, \theta^j)$ is stable: for any nonzero
$\theta^j$-stable proper sub-bundle $H_\star \subset
E^j_\star$ with its induced parabolic structure, $\on{par-deg}
H_\star < 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
The proof of
(1) is explained in \cite[1.11-1.12]{deligne1987theoreme} and (2) is
\cite[1.13]{deligne1987theoreme}. The proof of (3) follows from
\cite[B.3]{esnault1986logarithmic}
while (4) is explained in e.g.~\cite[Section 7]{brunebarbe2017semi}.
Finally,
(5) is due to Simpson \cite[Theorem 5]{simpson1990harmonic}.
See the discussion in the introduction of \cite{arapura2019vanishing} for a nice summary of this and related topics.
\end{proof}
The next lemma is crucial in the proof of our main result
\autoref{theorem:isomonodromic-deformation-CVHS} since the positivity it
gives for $F^i \overline{E}_\star$ will contradict our later results on
semistability, unless the Hodge filtration has only a single part.
The connection to semistability is spelled out below in
\autoref{corollary:unstable}.
\begin{lemma}\label{lemma:positive-Hodge-bundle}
Let $C$ be a smooth proper curve, $Z\subset C$ a reduced effective divisor,
and $(V, V^{p,q}, D)$ a polarizable complex variation of Hodge structure
on $C\setminus Z$. Let $(\overline{E}_\star, F^\bullet, \nabla)$ be the Deligne canonical
extension of the associated flat holomorphic vector bundle to $C$ with its
canonical parabolic structure. Let $i$ be maximal such that
$F^i\overline{E}_\star$ is non-trivial, where $F^\bullet$ is the Hodge
filtration, and suppose that the Higgs field $$\theta_i:
F^i\overline{E}_\star\to
\on{gr}^{i-1}_{F^{\bullet}}\overline{E}_\star\otimes \Omega^1_C(\log Z)$$ is
non-zero. Then $F^i\overline{E}_\star$ has positive parabolic degree.
\end{lemma}
\begin{remark}
\autoref{lemma:positive-Hodge-bundle} is essentially due to Griffiths in the case of real variations of
Hodge structure on a
smooth proper curve. In that case it follows from the curvature formula \cite[Theorem
5.2]{griffiths1970periods}, and is observed there in some special cases
\cite[Corollary 7.10]{griffiths1970periods}. For a more precise
reference, see \cite[Corollary 2.2]{peters2000arakelov}, which
immediately implies the claim for real variations of Hodge structure on a smooth proper curve.
However, as we were unable to find a precise reference in the case of complex
variations on a quasi-projective curve, we now give a simple proof.
A similar argument is given in the proof of
\cite[Theorem 3.8]{esnaultK:d-modules-and-finite-monodromy}.
\end{remark}
\begin{proof}[Proof of \autoref{lemma:positive-Hodge-bundle}]
The parabolic vector bundle $F^i\overline{E}_\star$, equipped with the zero Higgs field, is a quotient of
the parabolic Higgs bundle $(\oplus_i \on{gr}^i_{F^\bullet}\overline{E}_\star, \theta)$, and
hence has non-negative parabolic degree, since the latter is polystable of
degree zero by \autoref{prop:basic-facts}(5). By polystability, it has degree zero if and only if
it is a direct summand of $(\oplus_i \on{gr}^i_{F^\bullet}\overline{E}_\star, \theta)$; but this is ruled out by the nonvanishing of $\theta_i$.
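To spell out why this quotient has non-negative parabolic degree (a routine consequence of polystability, recorded for the reader's convenience): the kernel $K_\star$ of the quotient map is a parabolic subsheaf preserved by $\theta$, so semistability of the polystable degree-zero Higgs bundle forces $\on{par-deg} K_\star \leq 0$, whence

```latex
\on{par-deg} F^i\overline{E}_\star
  = \on{par-deg}\Big(\bigoplus_j \on{gr}^j_{F^\bullet}\overline{E}_\star\Big)
    - \on{par-deg} K_\star
  = 0 - \on{par-deg} K_\star
  \geq 0.
```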
\end{proof}
\begin{remark}
By the construction of the Higgs field $\theta$, the condition that
$\theta_i$ is non-zero in \autoref{lemma:positive-Hodge-bundle} is
equivalent to the statement that $F^i\overline{E}_\star$ is not preserved by
$\overline{\nabla}$. For example, it is automatically nonzero if
$(\overline{E}_\star, \overline{\nabla})$ has irreducible monodromy and
$F^i\overline{E}_\star$ is a proper parabolic sub-bundle of
$\overline{E}_\star$.
\end{remark}
We now spell out how \autoref{lemma:positive-Hodge-bundle}
relates to semistability.
\begin{corollary}\label{corollary:unstable}
Let $(\overline{E}_\star, F^\bullet, \nabla)$ be as in
\autoref{lemma:positive-Hodge-bundle}. Then the parabolic bundle
$\overline{E}_\star$ is not semistable.
\end{corollary}
\begin{proof}
The parabolic vector bundle $\overline{E}_\star$ has parabolic degree zero
by \autoref{prop:basic-facts}(3). But by
\autoref{lemma:positive-Hodge-bundle}, $F^i\overline{E}_\star$ has positive degree, and is hence a destabilizing subsheaf.
\end{proof}
\section{Variations of Hodge structure and the Kodaira-Parshin trick}
\label{section:counterexample}
In this section we find that
variations of Hodge structure on $\mathscr{M}_{g,1}$ with monodromy which is
``big" in a suitable sense provide examples of flat vector bundles on curves
whose isomonodromic deformation to a nearby curve is never semistable. We then produce such variations of
Hodge structure via the Kodaira-Parshin trick.
This will be used to prove
\autoref{theorem:counterexample} and contradicts earlier claimed theorems of
Biswas, Heu, and Hurtubise
\cite[Theorem 1.3]{BHH:logarithmic},
\cite[Theorem 1.3]{BHH:irregular}, and
\cite[Theorem 1.2]{BHH:parabolic},
as described further in \autoref{remark:bhh-error}.
In \autoref{section:hodge-theoretic-results}, we will use that
variations of Hodge structure with suitably large monodromy yield flat vector
bundles which do not have isomonodromic deformations to semistable bundles.
This will be used to analyze variations of Hodge structure on an analytically very general curve.
\subsection{Construction of the counterexample}
We now set up the proof of \autoref{theorem:counterexample} and
\autoref{corollary:counterexample}. We will construct a variation of Hodge
structure over the analytic moduli stack $\mathscr{M}_{g,1}$ whose restriction to each fiber of the forgetful map $\mathscr{M}_{g,1}\to \mathscr{M}_g$ satisfies the hypotheses of \autoref{lemma:positive-Hodge-bundle}. We will do this via the Kodaira-Parshin trick
(see
\cite[Proposition 7]{parshin:algebraic-curves-over-function-fields-i}
and also
\cite[Proposition 7.1]{lawrenceV:diophantine-problems}),
which produces a family of curves over $\mathscr{M}_{g,1}$ which is non-isotrivial when restricted to each fiber of the natural forgetful map $\mathscr{M}_{g,1}\to \mathscr{M}_g$.
We give a proof appealing to \cite{cataneseD:answer-to-a-question-by-fujita},
but one can also prove it using \autoref{prop:basic-facts} and
\autoref{corollary:unstable}, as we mention in
\autoref{remark:alternate-hodge-proof}.
Because we had difficulty finding a suitable reference, we now present a version
of the Kodaira-Parshin trick in families.
\begin{proposition}[Kodaira-Parshin trick]
\label{proposition:kodaira-parshin}
Let $Y$ denote a Riemann surface of genus $g\geq 1$ with a point $p \in Y$ and
let $Y^{\circ} := Y - p$. Choose a basepoint $y \in Y^{\circ}$.
Suppose $G$ is a finite center-free group with a surjection
$\phi: \pi_1(Y^{\circ},y) \twoheadrightarrow G$ which sends the loop around the puncture $p \in Y$ to
a non-identity element of $G$.
Then there exists a smooth proper map of
analytic stacks $f: \mathscr{X} \to
\mathscr{M}_{g,1}$, of relative dimension $1$, so that the fiber over a geometric point $[(C,p)] \in
\mathscr{M}_{g,1}$ is a finite disjoint union of $G$-covers of $C$
ramified at $p$.
\end{proposition}
We will prove this below in \autoref{subsubsection:proof-kodaira-parshin}.
\begin{remark}
In the finite disjoint union of $G$-covers appearing at the end of the
statement of \autoref{proposition:kodaira-parshin},
we can explicitly identify the finite set of $G$-covers.
Namely, suppose $h \in \pi_1(\mathscr M_{g,2})$, viewed as an
automorphism of the fundamental group of a $2$-pointed genus $g$ curve $\pi_1(Y^{\circ},y)$. (See \autoref{subsubsection:setup} below for an explanation of the action of $\pi_1(\mathscr{M}_{g,2})$ on $\pi_1(Y^\circ, y)$.)
There is one cover associated to each map
of the form $\phi_h: \pi_1(Y^{\circ},y) \to G$, with $\phi_h(g) :=
\phi(hgh^{-1})$,
modulo the following equivalence relation:
we identify $\phi_h \sim \phi_{h'}$ if they
are conjugate, i.e., if there exists $m \in G$ with $\phi_h(s) = m
\phi_{h'}(s)m^{-1}$ for all $s \in \pi_1(Y^{\circ},y)$.
\end{remark}
\subsubsection{Setup to prove \autoref{proposition:kodaira-parshin}}
\label{subsubsection:setup}
Let $$\pi:\mathscr{M}_{g, 2}\to \mathscr{M}_{g,1}$$ be the natural forgetful map.
Let $x\in \mathscr{M}_{g,2}$ be a point. Let $\bar{x} := \pi(x) \in
\mathscr{M}_{g,1}$ and let $C^\circ\subset \mathscr{M}_{g,2}$
denote the fiber $\pi^{-1}(\bar{x})$.
Note that $C^\circ$ is the complement of a point in a smooth proper connected curve of genus $g$.
There is a natural short exact sequence
\begin{align}
1\to \pi_1(C^\circ, x) \to \pi_1(\mathscr M_{g,2}, x) \to \pi_1(\mathscr
M_{g,1}, \bar x)\to 1
\end{align}
associated to the map $\mathscr{M}_{g,2} \to \mathscr{M}_{g,1}$ with fiber
$C^{\circ}$.
We may obtain this from the Birman exact sequence \cite[Theorem
4.6]{farbM:a-primer} for mapping class groups,
after identifying the fundamental group for $\mathscr M_{g,n}$ with the mapping
class group of an $n$-times punctured genus $g$ surface.
(The case $n = 0$ follows from contractibility of the universal cover of
$\mathscr M_g$ \cite[Theorem 10.6]{farbM:a-primer} with covering group given by
the mapping class group,
and the case of general $n$ can be deduced from the Birman exact sequence
\cite[Theorem 4.6]{farbM:a-primer}.)
Let $G$ be a center-free finite group and suppose further there
is a surjection $$\gamma: \pi_1(C^\circ, x)\twoheadrightarrow G.$$
We assume that $\gamma$ takes the conjugacy class of the loop around the puncture of $C^\circ$ to a non-identity conjugacy class of $G$.
Define $\Gamma\subset \pi_1(\mathscr{M}_{g,2}, x)$ as the set of $h \in
\pi_1(\mathscr{M}_{g,2},x)$ such that there exists $\widetilde{\gamma}(h)\in G$ with $$\gamma(hgh^{-1})=\widetilde{\gamma}(h)\gamma(g)\widetilde{\gamma}(h)^{-1}$$ for all $g\in \pi_1(C^\circ, x).$
\begin{lemma}
\label{lemma:center-free-group}
Keeping notation as in \autoref{subsubsection:setup},
the map $\gamma$ determines a well-defined surjective homomorphism
\begin{align*}
\widetilde{\gamma}: \Gamma & \rightarrow G\\
h & \mapsto \widetilde{\gamma}(h).
\end{align*}
Moreover, $\Gamma \subset\pi_1(\mathscr{M}_{g,2},x)$ has finite index.
\end{lemma}
\begin{proof}
We first claim that $\Gamma$ contains $\pi_1(C^\circ, x)$ and surjects
onto $G$. Indeed, for $h\in
\pi_1(C^\circ, x)$, one may take $\widetilde{\gamma}(h)=\gamma(h)$.
Therefore, the surjectivity of $\gamma$ implies that $\widetilde{\gamma}$ is
also surjective.
Next, we claim that for each $h$, $\widetilde{\gamma}(h)$ is uniquely determined.
Indeed, suppose $\widetilde{\gamma}(h)$ could be taken to be either
$\alpha$ or $\beta$. Then $\alpha \gamma(g) \alpha^{-1} =
\beta \gamma(g) \beta^{-1}$ for all $g \in \pi_1(C^\circ, x)$.
Since $\gamma$ is surjective, as shown above, $\beta^{-1}\alpha$ commutes
with every element of $G$, hence lies in the center of $G$, and is therefore
trivial. So $\alpha = \beta$.
The uniqueness of $\widetilde{\gamma}(h)$ just established shows that $\widetilde{\gamma}$ determines a well-defined map.
It is moreover a homomorphism: both $\widetilde{\gamma}(h)\widetilde{\gamma}(h')$
and $\widetilde{\gamma}(hh')$ conjugate $\gamma$ in the same way, so by
uniqueness $\widetilde{\gamma}(h) \widetilde{\gamma}(h') =
\widetilde{\gamma}(hh')$.
Finally, we claim $\Gamma$ has finite index in $\pi_1(\mathscr{M}_{g,2}, x)$.
To see this, observe that there is an action of $\pi_1(\mathscr{M}_{g,2}, x)$
on the set of surjective homomorphisms $\pi_1(C^{\circ},x) \to G$ sending $\phi:
\pi_1(C^{\circ},x) \to G$ to the map $\phi^h(g) := \phi(hgh^{-1})$.
By definition, we have $h \in \Gamma$ if and only if $\gamma^h$ is conjugate to $\gamma$.
In particular, $\Gamma$ contains the stabilizer of $\gamma$ under the action of
$\pi_1(\mathscr{M}_{g,2}, x)$.
But this stabilizer has finite index in
$\pi_1(\mathscr{M}_{g,2}, x)$: since
$G$ is finite and $\pi_1(C^\circ, x)$ is finitely generated,
there are only
finitely many
homomorphisms $\pi_1(C^\circ, x) \to G$,
so the orbit of $\gamma$ under $\pi_1(\mathscr{M}_{g,2}, x)$ is finite, and
hence the stabilizer of $\gamma$ has finite index.
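Concretely, since $C^\circ$ is a once-punctured genus $g$ surface, $\pi_1(C^\circ, x)$ is free of rank $2g$, and the orbit-stabilizer theorem makes the finite-index claim quantitative:

```latex
[\pi_1(\mathscr{M}_{g,2}, x) : \on{Stab}(\gamma)]
  = \#\bigl(\pi_1(\mathscr{M}_{g,2}, x) \cdot \gamma\bigr)
  \leq \#\on{Hom}\bigl(\pi_1(C^\circ, x), G\bigr)
  \leq |G|^{2g} < \infty.
```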
\end{proof}
\subsubsection{}\label{subsubsection:proof-kodaira-parshin}\begin{proof}[Proof of \autoref{proposition:kodaira-parshin}]
Let $\widetilde{\Gamma}$ be the kernel of the map $\widetilde{\gamma}$ from
\autoref{lemma:center-free-group}.
The subgroup $\widetilde{\Gamma}$ corresponds to a finite \'etale cover
$\mathscr X^\circ \to \mathscr M_{g,2}$.
Observe that $\mathscr M_{g,2} \subset \mathscr{M}_{g,1} \times_{\mathscr M_g} \mathscr{M}_{g,1}$
can be viewed as a dense open substack, and
let $\mathscr{X}$ be the normalization
of $\mathscr{M}_{g,1} \times_{\mathscr M_g} \mathscr{M}_{g,1}$ in the function
field of $\mathscr{X}^\circ,$ forming the following cartesian diagram
\begin{equation}
\begin{tikzcd}
\mathscr X^\circ \ar {r} \ar {d} & \mathscr X \ar {d} \\
\mathscr M_{g,2} \ar {r} & \mathscr{M}_{g,1} \times_{\mathscr
M_g} \mathscr{M}_{g,1}.
\end{tikzcd}\end{equation}
Restricting the natural map $\mathscr{X}\to\mathscr{M}_{g,1} \times_{\mathscr M_g} \mathscr{M}_{g,1}$ to a fiber $C$ of the universal curve $\mathscr{M}_{g,1} \times_{\mathscr M_g} \mathscr{M}_{g,1} \to \mathscr M_{g,1}$ yields a finite disjoint union of $G$-covers of $C$, ramified only over the tautological marked point of $C$.
We then take our desired relative curve $f: \mathscr X \to \mathscr{M}_{g,1} \times_{\mathscr M_g}
\mathscr{M}_{g,1} \to \mathscr M_{g,1}$ as the resulting
composition.
\end{proof}
In order to use \autoref{proposition:kodaira-parshin}, we will need to know
there are groups $G$ satisfying its hypotheses. We now provide such an example.
\begin{example}
\label{example:kodaira-parshin}
As a concrete example of a group $G$ to which
\autoref{proposition:kodaira-parshin} applies, we can take $G = S_3$ to be the
symmetric group on three letters and identify $\pi_1(Y^\circ, y)$
with the free group on the generators $a_1, \ldots, a_g, b_1, \ldots, b_g$. The group $\pi_1(Y, y)$
is generated by $a_1, \ldots, a_g, b_1, \ldots, b_g$ with the relation
$\prod_{i=1}^g \left[ a_i, b_i \right]$.
Consider the surjection $\phi: \pi_1(Y^\circ, y)\twoheadrightarrow S_3$
sending $a_1\mapsto (12), b_1 \mapsto (13)$ and sending $a_i \mapsto
\mathrm{id}, b_i
\mapsto \mathrm{id}$ for $i > 1$.
The loop around the puncture maps to
$\phi(\prod_{i=1}^g \left[ a_i, b_i \right]) = (12)(13)(12)^{-1}(13)^{-1} =
(123) \neq \mathrm{id}$.
\end{example}
\subsubsection{}
\label{subsubsection:proof-counterexample}
\begin{proof}[Proof of \autoref{theorem:counterexample}]
Let $f: \mathscr X \to \mathscr{M}_{g,1}$ denote the map from
\autoref{proposition:kodaira-parshin}.
Concretely, we can take $G = S_3$ and the map $\phi$ as in
\autoref{proposition:kodaira-parshin} to be that given in
\autoref{example:kodaira-parshin}.
Define the local system $\mathbb V := R^1 f_* \mathbb C$ on $\mathscr M_{g,1}$,
and define $\mathscr{F}$ to be the vector bundle $\mathbb{V}\otimes
\mathscr{O}$. Note that $\mathscr{F}$ admits a natural (Gauss-Manin) connection $\nabla := \on{id}\otimes d$. The local system $\mathbb{V}$ evidently underlies a variation of Hodge structure.
Let $C$ be a
fiber of the forgetful morphism $\mathscr{M}_{g,1}\to \mathscr{M}_{g}$.
Let $X := f^{-1}(C) \subset \mathscr X$. We claim that the flat vector bundle $(\mathscr{F}, \nabla)$ satisfies the
conditions of \autoref{theorem:counterexample}, i.e.~it has semisimple monodromy and $\mathscr{F}|_C$ is not semistable.
We first check that $(\mathscr{F}, \nabla)|_C$ has semisimple monodromy. This is true for any flat vector bundle arising from the Gauss-Manin connection on the cohomology of a family of smooth proper varieties, by work of Deligne \cite[Th\'eor\`eme 4.2.6]{deligne1971theorie}.
We now check that $(\mathscr{F}, \nabla)|_C$ is not semistable.
By \cite[Theorem
4]{cataneseD:answer-to-a-question-by-fujita},
if $X \to C$ is not isotrivial, then
$(f|_X)_* \omega_{f|_X}$ is a destabilizing subsheaf of $\mathscr F|_C$.
It remains to show $X \to C$ is not isotrivial.
The fiber of $f|_X$ over a point $x\in C$ is a finite disjoint union of
finite covers of $C$, branched only over $x$. These fibers must vary in moduli as $x$ varies,
as there are only finitely many non-constant maps between any
two curves of genus at least $2$,
by de Franchis' theorem.
(See \cite{de1913teorema} or \cite[Corollary 3, p.~75]{samuel1966lectures}, for
example.)
\end{proof}
\begin{remark}
\label{remark:alternate-hodge-proof}
We can also give a somewhat more involved proof of
\autoref{theorem:counterexample} using \autoref{corollary:unstable} in
place of \cite[Theorem
4]{cataneseD:answer-to-a-question-by-fujita}, as we now explain. This argument inspired the Hodge-theoretic results \autoref{theorem:very-general-VHS} and \autoref{corollary:geometric-local-systems}, proven in \autoref{section:hodge-theoretic-results}.
With notation as in the proof of
\autoref{theorem:counterexample},
$\mathscr{F}$ has degree $0$ since it admits a flat connection, by
\autoref{prop:basic-facts}(3).
Therefore, it suffices to show $\mathscr{F}$ has a subsheaf of positive
degree.
The Hodge filtration exhibits $F^1\mathscr F\simeq (f|_X)_* \omega_{(f|_X)}$ as a subsheaf
of $\mathscr F$,
which is destabilizing by
\autoref{corollary:unstable}
once we verify that
$\delta: F^1\mathscr F \to F^0 \mathscr F/ F^1 \mathscr F \simeq R^1 (f|_X)_* \mathscr{O}_{X}$
is nonzero.
We now check $\delta$ is nonzero.
Locally around a point of $C$, $\delta$
can be identified with the derivative of the period map
\cite[Theorem 5.3.4]{carlsonMP:period-mappings}
sending a curve corresponding to a fiber of
$f|_X : X \to C$
to the corresponding Hodge structure on its first cohomology. To show this derivative is not identically zero it suffices to show that the period map is non-constant.
More concretely, by the Torelli theorem, we only need to check
$f|_X: X \to C$ is not isotrivial.
This follows by de Franchis' theorem, as explained in the proof of
\autoref{theorem:counterexample}.
\end{remark}
\begin{remark}
\label{remark:bhh-error}
As noted prior to its statement,
\autoref{theorem:counterexample} contradicts
\cite[Theorem 1.3]{BHH:logarithmic},
\cite[Theorem 1.3]{BHH:irregular}, and
\cite[Theorem 1.2]{BHH:parabolic}, which claim, for example, that any irreducible flat vector bundle on a smooth proper curve of genus at least $2$ admits a semistable isomonodromic deformation. We now explain the gaps in the proofs of
those results. The error in \cite[Theorem 1.3]{BHH:logarithmic} occurs in
\cite[Proposition 4.3]{BHH:logarithmic}; the issue is that the map denoted
$f^*\nabla$ in diagram (4.14) does not in general exist. The proof works
correctly if $G=\on{GL}_2$. An identical error occurs in \cite[Proposition
4.4]{BHH:irregular}. A different argument is given in \cite{BHH:parabolic}.
There, the error occurs in the proof of \cite[Proposition 5.1]{BHH:parabolic},
in which the large diagram claimed to be commutative does not in general commute.
\end{remark}
\section{Analysis of Harder-Narasimhan filtration}
\label{section:hn-filtration}
\subsection{Main results on isomonodromic deformations} In this section, we prove the following theorem and corollary, generalizing \autoref{theorem:hn-constraints}.
For the statement of this theorem, recall the notion of a refinement of a
parabolic bundle, introduced in \autoref{definition:refinements}.
\begin{theorem}
\label{theorem:hn-constraints-parabolic}
Let $(C,D)$ be hyperbolic of genus $g$ with $D = x_1 + \cdots + x_n$, and let
$({E}, \nabla)$ be a flat
vector bundle on $C$ with regular singularities along $D$ and irreducible
monodromy. Let $E_\star$ denote a parabolic structure on $E$ refined by the parabolic structure on $E$ associated to $\nabla$.
Suppose
$(E_\star',\nabla')$
is an isomonodromic deformation (which exists by \autoref{remark:refinement})
of $({E_\star}, \nabla)$ to an analytically general nearby curve,
with Harder-Narasimhan filtration $0 = (F_\star')^0 \subset (F_\star')^1 \subset \cdots \subset
(F_\star')^m =
E_\star'$. For $1 \leq i \leq m$, let $\mu_i$ denote the slope of
$\on{gr}^{i}_{HN}E_\star' := (F'_\star)^i/(F_\star')^{i-1}$.
Then the following two properties hold.
\begin{enumerate}
\item If $E_\star'$ is not parabolically semistable, then for every $0 < i < m$, there
exists $j < i < k$ with $$\rk \on{gr}^{j+1}_{HN}E_\star'\cdot \rk
\on{gr}^k_{HN}E_\star'\geq g+1.$$
\item We have $0<\mu_i-\mu_{i+1}\leq 1$ for all $i<m$.
\end{enumerate}
\end{theorem}
\begin{corollary}\label{cor:stable-parabolic}
Let $(C, D)$ be a hyperbolic curve of genus $g$.
Let $({E}, \nabla)$ be a flat
vector bundle on $C$ with regular singularities
along $D$, and suppose that
$\on{rk}(E)<2\sqrt{g+1}$.
Then, for $E_\star$ any parabolic structure refined by the parabolic structure on $E$ associated to $\nabla$,
the isomonodromic deformation of $E_\star$
to an analytically general nearby curve is parabolically semistable.
\end{corollary}
The proofs are given in \autoref{subsection:proof-hn} and
\autoref{subsubsection:corollary-proof}.
This latter corollary salvages the main theorem of \cite{BHH:parabolic} in the
case of vector bundles of low rank when $E_\star$ is given the parabolic
structure associated to $\nabla$; it salvages the main theorem of
\cite{BHH:logarithmic} in low rank when $E_\star$ is given the trivial parabolic structure (in which case it agrees with \autoref{cor:stable}).
Crucial in this section will be the notion of generic global generation.
\begin{definition}[Generic global generation]
A vector bundle $V$ is {\em generically globally generated} if the
evaluation map $H^0(C, V) \otimes \mathscr O_C \to V$ does not factor through a proper
subbundle of $V$, i.e.~if the cokernel of this map is torsion.
We call a parabolic sheaf $E_\star$ {\em generically globally generated} if $E =
E_0$ is a vector bundle which is generically globally generated.
\end{definition}
The basic idea of the proof of \autoref{theorem:hn-constraints-parabolic} is to show that any counterexample to
\autoref{theorem:hn-constraints-parabolic} would produce a certain semistable parabolic vector bundle of high
slope which is not generically globally generated.
In order to see why this failure of generic global generation
leads to a contradiction, we will need some facts about (generic) global generation of vector bundles on curves, arising from Clifford's theorem.
\subsection{Preliminary results on high slope bundles with many sections}
\label{subsection:high-slope-lemmas}
We start with a bound on the dimension of the space of global sections of a vector bundle
whose Harder-Narasimhan polygon has slopes between $0$ and $2g$.
\begin{lemma}
\label{lemma:cohomology-bound}
Suppose $V$ is a vector bundle on a smooth proper curve $C$ with Harder-Narasimhan filtration $0 = N^0 \subset N^1 \subset \cdots \subset N^m = V$. Suppose moreover that for each $i$, the slope of $\on{gr}^i_{N} V=N^i/N^{i-1}$
satisfies $$0\leq \mu(\on{gr}^i_{N} V):=\frac{\deg(\on{gr}^i_{N}V)}{\on{rk}(\on{gr}^i_{N}V)}\leq 2g.$$
Then $\dim H^0(C, V) \leq \frac{\deg V}{2} + \rk V$.
\end{lemma}
\begin{proof}
For convenience set $W_i := \on{gr}^i_{N}V=N^i/N^{i-1}$.
Suppose $W_1, \ldots, W_k$ have slopes $> 2g-2$ and $W_{k+1}, \ldots,
W_m$ have slopes $\leq 2g-2$.
Using Clifford's theorem for vector bundles
\cite[Theorem 2.1]{brambila-pazGN:geography-of-brill-noether-loci},
for $i > k$,
we have
\begin{align*}
\dim H^0(C, W_i)
\leq \frac{\deg W_i}{2} + \rk W_i.
\end{align*}
Also, for $i \leq k$, since each $W_i$ is semistable of slope greater than
$2g-2$, there are no nonzero maps $W_i \to \omega_C$.
Therefore, by Serre duality, $H^1(C, W_i) = 0$ when $i \leq k$.
It follows from Riemann Roch that
\begin{align*}
\dim H^0(C, W_i) = \deg W_i + (1-g) \rk W_i
\end{align*}
for $i \leq k$.
Summing over $i$, we get
\begin{align*}
\dim H^0(C, V) &\leq \sum_{i=1}^m \dim H^0(C, W_i) \\
&\leq \sum_{i=1}^k (\deg W_i + (1-g) \rk W_i)
+ \sum_{i=k+1}^m \left(\frac{\deg
W_i}{2} + \rk W_i\right) \\
&= \sum_{i=1}^m \left(\frac{\deg W_i}{2} + \rk W_i\right)
+ \sum_{i=1}^k \left(\frac{\deg W_i}{2} -g \rk W_i\right) \\
&= \frac{\deg V}{2} + \rk V+ \sum_{i=1}^k \left(\frac{\deg W_i}{2} -g \rk
W_i\right).
\end{align*}
To conclude, it is enough to show
$\frac{\deg W_i}{2} -g \rk W_i \le 0$.
However, since we were assuming the slope $\mu(W_i) \leq 2g$, we find
$\deg W_i \leq 2g \rk W_i$ and so $\frac{\deg W_i}{2} \leq g \rk W_i$,
as desired.
\end{proof}
The following lemma is a well known criterion for global generation, which we
spell out for completeness.
\begin{lemma}
\label{lemma:global-generation}
Let $V$ be a semistable vector bundle on a smooth proper curve $C$, such that the slope of $V$ satisfies $\mu(V)> 2g - 1$. Then
$V$ is globally generated.
\end{lemma}
\begin{proof}
Let $p\in C$ be a point. It suffices to show $V|_p$ is generated by global sections of $V$. Indeed, $V(-p)$ is a semistable bundle with slope $\mu(V(-p)) = \mu(V) - 1 > 2g-2$. Hence
$H^1(C, V(-p)) = 0$, as any nonzero map $V(-p) \to \omega_C$ would be
destabilizing.
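In more detail (a standard argument, recorded for convenience), the vanishing follows from Serre duality:

```latex
H^1(C, V(-p)) \cong \on{Hom}(V(-p), \omega_C)^\vee = 0,
```

since any nonzero map $V(-p) \to \omega_C$ would exhibit a quotient of $V(-p)$ of slope at most $\deg \omega_C = 2g-2 < \mu(V(-p))$, contradicting semistability.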
Since $H^1(C, V(-p)) = 0$, the sequence
\begin{equation}
\begin{tikzcd}
0 \ar {r} & V(-p) \ar {r} & V \ar {r} & V|_p \ar {r}
& 0
\end{tikzcd}\end{equation}
is exact on global sections, so
$H^0(C, V) \otimes \mathscr O \to V \to V|_p$ is surjective, as desired.
\end{proof}
The following result will be key to the proof of
\autoref{theorem:hn-constraints-parabolic}, via
\autoref{proposition:generic-parabolic-global-generation},
as it places a constraint on the rank of a
vector bundle which is not generically globally generated.
\begin{lemma}
\label{lemma:abstract-lower-bound-on-rank}
Suppose $V$ is a vector bundle on a smooth proper curve $C$ with
$\mu(V) \geq 2g - 2$ (respectively, $> 2g-2$).
Assume further that $U \subset V$ is a proper subbundle, set
$\delta := h^0(C, V) - h^0(C, U)$,
and assume either $U = 0$ or else both
\begin{enumerate}
\item $\mu(U) \leq \mu(V)$, and
\item each graded piece
$\on{gr}^i_{\on{HN}}U$ of the Harder Narasimhan filtration of $U$
satisfies $0 \leq \mu(\on{gr}^i_{\on{HN}}U) \leq 2g$.
\end{enumerate}
Then, $\rk V \geq g (\rk V - \rk U)-\delta$ (respectively, $> g (\rk V - \rk
U)-\delta$).
In particular, if
$h^0(C, U) = h^0(C, V)$, then $\rk V \geq g$ (respectively, $\rk V \geq g+1$).
\end{lemma}
\begin{proof}
In the case $U = 0$, the inequality
$\rk V \geq g (\rk V - \rk U)-\delta$ (respectively, $\rk V> g (\rk V - \rk
U)-\delta$)
is equivalent to
$h^0(C, V) \geq (g-1) \rk V$ (respectively $h^0(C, V) > (g-1) \rk V$).
This holds by Riemann-Roch.
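Explicitly, Riemann-Roch and the assumption $\mu(V) \geq 2g-2$ give
\begin{align*}
h^0(C, V) \geq \chi(C, V) = \deg V + (1-g) \rk V = (\mu(V) + 1 - g) \rk V \geq (g-1) \rk V,
\end{align*}
with the last inequality strict when $\mu(V) > 2g-2$ (and $V \neq 0$).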
We now assume $U \neq 0$.
Applying \autoref{lemma:cohomology-bound}, we conclude
\begin{align*}
\dim H^0(C,U) \leq \frac{\deg
U}{2} + \rk U.
\end{align*}
Using Riemann-Roch and the definition of $\delta$,
\begin{align*}
\dim H^0(C,U) + \delta = \dim H^0(C, V) \geq \deg V + (1-g)
\on{rk}
V.
\end{align*}
Combining the above gives
\begin{align}
\label{equation:section-bound}
\deg V + (1-g) \rk V \leq \frac{\deg U}{2} + \rk U + \delta.
\end{align}
To simplify notation, we use $c := \rk V - \rk U$ to denote the corank
of $U$ in $V$.
Rewriting \eqref{equation:section-bound}, and using $\rk U = \rk V - c$ and $\mu(U) \leq
\mu(V)$ gives
\begin{align*}
\mu(V) \rk (V) + (1-g) \rk V \leq \frac{\mu(U) \rk U}{2} + \rk U
+ \delta
\leq \frac{\mu(V)}{2} (\rk V - c) + \rk V - c + \delta.
\end{align*}
Rearranging the terms,
and multiplying both sides by $2$, we obtain
\begin{align*}
(\mu(V)+2) \cdot c \leq (2g-\mu(V)) \rk V + 2 \delta.
\end{align*}
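In detail, multiplying the preceding inequality by $2$ gives
\begin{align*}
2\mu(V) \rk V + 2(1-g) \rk V \leq \mu(V)(\rk V - c) + 2 \rk V - 2c + 2\delta,
\end{align*}
and collecting the terms involving $c$ on the left side recovers the displayed bound.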
Since $2g-2 \leq \mu(V)$, we find
$2g \leq \mu(V) + 2$ and $2g - \mu(V) \leq 2$, implying
\begin{align*}
2g \cdot c \leq (\mu(V) + 2)c \leq (2g-\mu(V)) \rk V +2\delta
\leq 2 \rk V +2\delta.
\end{align*}
Therefore, $\rk V \geq gc - \delta$.
In particular, if $\delta =0$, $\rk V \geq g$ as $c \geq 1$.
In the case $2g-2 < \mu(V)$, we similarly find $\rk V > gc - \delta$
and $\rk V \geq g + 1$ when $\delta = 0$.
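In more detail, when $2g - 2 < \mu(V)$ the inequalities $2g < \mu(V) + 2$ and $2g - \mu(V) < 2$ are strict, so the chain of inequalities above becomes $2g \cdot c < 2 \rk V + 2\delta$, giving $\rk V > gc - \delta$. When $\delta = 0$, since $c \geq 1$ and $\rk V$ is an integer, $\rk V > g$ forces $\rk V \geq g+1$.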
\end{proof}
\subsection{A constraint on global generation of parabolic bundles}
In this subsection, we show that semistable parabolic bundles with large slope which are not generically globally generated cannot have
small rank.
This is accomplished in
\autoref{proposition:generic-parabolic-global-generation}.
Although it is a special case of
\autoref{proposition:generic-parabolic-global-generation},
we start off by stating and proving the
following special, yet pivotal, case, in order to convey the main idea and orient the reader. We call it ``the non-GGG lemma,'' where GGG stands for ``generically globally generated.''
\begin{proposition}[The non-GGG lemma]
\label{proposition:lower-bound-on-rank}
Suppose $V$ is a nonzero semistable vector bundle on a smooth proper curve $C$
which is not generically globally generated.
\begin{enumerate}
\item[(a)] If $\mu(V) > 2g - 2$, then $\rk V \geq g+1$.
\item[(b)] If $\mu(V) = 2g-2$, then $\rk V \geq g$.
\end{enumerate}
\end{proposition}
\begin{proof}
The statement is trivial when $g = 0$, so we assume $g \geq 1$.
Let $U\subset V$ be the saturation of the image of the evaluation map $$H^0(C, V)\otimes \mathscr{O}_C\to V.$$
We aim to apply
\autoref{lemma:abstract-lower-bound-on-rank}.
If $V$ is not generically globally generated, $U
\subset V$ is a proper subbundle of $V$, with $H^0(C,U) \to H^0(C, V)$
an isomorphism. Hence, we will be done by the final statement of
\autoref{lemma:abstract-lower-bound-on-rank}, once we verify
hypotheses $(1)$ and $(2)$ of
\autoref{lemma:abstract-lower-bound-on-rank}.
Semistability of $V$ implies $\mu(U) \leq \mu(V)$, verifying $(1)$.
We conclude by checking hypothesis $(2)$.
Using \autoref{lemma:global-generation},
we may assume $2g-2 \leq \mu(V) \leq 2g - 1$.
Since $\mu(V) \leq 2g-1$ and $V$ is semistable, each
graded piece $\on{gr}^i_{\on{HN}}U$ of the Harder-Narasimhan filtration of $U$ must have slope
at most $2g-1$. Let $j$ be maximal such that $\on{gr}^j_{\on{HN}}U$ is non-zero. Since $U$ is generically globally generated, $\on{gr}^j_{\on{HN}}U$ has a global section, and therefore has
non-negative slope. By the definition of the Harder-Narasimhan filtration, the same is true for $\on{gr}^i_{\on{HN}}U$ for all $i$.
This verifies the final hypothesis $(2)$ of
\autoref{lemma:abstract-lower-bound-on-rank}, so we conclude $\rk V \geq
g + 1$ in case $(a)$ and $\rk V\geq g$ in case $(b)$.
\end{proof}
We next wish to generalize \autoref{proposition:lower-bound-on-rank} to the parabolic setting.
\begin{remark}
The main difficulty in generalizing to the parabolic setting is that
the graded parts of the Harder-Narasimhan filtration of the bundle
$U$ appearing in the proof of \autoref{proposition:lower-bound-on-rank} need no
longer have slope bounded above by $2g-1$.
We will get around this issue in the proof of
\autoref{proposition:generic-parabolic-global-generation}
by quotienting both $U$ and $V$ by the
part of the Harder-Narasimhan filtration with slope more than $2g-2$, and
applying the argument above to the resulting quotients.
\end{remark}
Before taking up this generalization, we record a couple of preliminary lemmas which will allow us to
understand generic global generation of
parabolic and coparabolic bundles.
\begin{lemma}
\label{lemma:pushforward-degree}
Suppose $W_\star= (W,\{ W^i_j\}, \{\alpha^i_j\})$ is a parabolic bundle on $C$ with respect to a reduced divisor $D=x_1+\cdots+x_n$.
Let $\alpha := \max_{i,j} \alpha^i_j$.
Then
$\mu_\star(W_\star) - \mu(W) \leq n\alpha$.
Further, equality holds if and only if all $\alpha^i_j$ are equal to
$\alpha$.
\end{lemma}
\begin{proof}
By definition, this difference $\mu_\star(W_\star) - \mu(W)$ is
$$\sum_{j=1}^n \sum_{i=1}^{n_j} \alpha^i_j \frac{\dim
(W^i_j/W_j^{i+1})}{\rk W}.$$
To verify the inequality, splitting the contribution from each point,
it suffices to show
$$\sum_{i=1}^{n_j} \alpha^i_j \frac{\dim
(W^i_j/W_j^{i+1})}{\rk W} \leq \alpha.$$
Indeed, this holds because, for all $j$,
\begin{align*}
\sum_{i=1}^{n_j} \alpha^i_j \frac{\dim
(W^i_j/W_j^{i+1})}{\rk W} \leq \sum_{i=1}^{n_j} \alpha \frac{\dim
(W^i_j/W_j^{i+1})}{\rk W} = \alpha \sum_{i=1}^{n_j} \frac{\dim
(W^i_j/W_j^{i+1})}{\rk W} = \alpha.
\end{align*}
Finally, equality holds in the above inequality for all $j$ if and only if all
$\alpha^i_j$ are equal to $\alpha$.
\end{proof}
Recall from
\autoref{definition:coparabolic-stability} that a coparabolic bundle $\widehat{E}_\star$ is defined to be
semistable if $E_\star$ is semistable.
\begin{lemma}
\label{lemma:quotient-slope}
Let $V_\star$ be a semistable coparabolic vector
bundle or a semistable parabolic bundle on a curve $C$ with respect to a
reduced divisor $D=x_1+\cdots +x_n$, and suppose $\mu_\star(V_\star)=r+n$.
Then any vector bundle $Q$ arising as a quotient of $V$ satisfies
$\mu(Q)\geq r$.
Moreover, $\mu(Q) > r$ holds in the parabolic case if $n > 0$.
\end{lemma}
\begin{proof}
First we deal with the case that $V_\star$ is a parabolic vector bundle.
By \autoref{lemma:quotient-semistable}, any
parabolic quotient of $V_\star$
(with the induced quotient structure of
\autoref{subsubsection:induced-subbundle})
has parabolic slope at least $r+n$.
Therefore, to complete the parabolic case,
it suffices to show that for any parabolic bundle $W_\star$ on $C$,
$\mu_\star(W_\star) - \mu(W) \leq n$, with a strict inequality if $n >
0$.
The $n = 0$ case is trivial while the $n > 0$ case
follows from \autoref{lemma:pushforward-degree} as $\alpha^i_j < 1$ for
all $i, j$.
Now, suppose $V_\star$ is a coparabolic bundle of the form $\widehat{W}_\star$ for $W_\star =
(W, \{W^i_j\}, \{\alpha^i_j\})$ a parabolic bundle.
For any $\varepsilon > 0$, there is a map $W[\varepsilon]_\star \to
\widehat{W}_\star$ and hence a map $W[\varepsilon]_0 \to Q$.
Let $Q^\varepsilon$ denote the image of this map, which we may endow
with the associated quotient parabolic structure to obtain a quotient
bundle
$W[\varepsilon]_\star \to Q^\varepsilon_\star$.
This implies
$\mu_\star(Q^\varepsilon_\star) \geq \mu_\star(W[\varepsilon]_\star) = r + n - \varepsilon$
for all $\varepsilon > 0$, by \autoref{lemma:quotient-semistable}.
By \autoref{lemma:pushforward-degree}, it follows that
$\mu_\star(Q^\varepsilon_\star) - \mu(Q^\varepsilon) \leq n$, so
$\mu(Q^\varepsilon) \geq r-\varepsilon$. Now $Q^\varepsilon\to Q$ has
torsion cokernel, so $\mu(Q)\geq \mu(Q^\varepsilon)\geq r-\varepsilon$
for all $\varepsilon>0$, giving the result.
\end{proof}
We also need the following fairly standard lemma, which has little to do with
parabolic bundles.
\begin{lemma}
\label{lemma:cohomology-vanish-for-large-slope}
Suppose $V$ is a vector bundle on a smooth proper curve $C$ so that each
graded part of the Harder-Narasimhan filtration of $V$ has slope more
than $2g - 2$. Then $H^1(C, V) = 0$.
\end{lemma}
\begin{proof}
Let $0= N^0 \subset N^1 \subset \cdots \subset N^s = V$ denote the
Harder-Narasimhan filtration.
From the exact sequence
\begin{equation}
\begin{tikzcd}
H^1(C, N^{i-1}) \ar {r} & H^1(C, N^i) \ar {r} & H^1(C,
N^i/N^{i-1}) \ar {r} & 0
\end{tikzcd}\end{equation}
by induction on $i$, it is enough to show $H^1(C, N^i/N^{i-1}) = 0$ for
every $i \leq s$.
Since $N^i/N^{i-1}$ is semistable of slope more than $2g-2$, there are
no nonzero maps $N^i/N^{i-1} \to \omega_C$, and so
$H^1(C, N^i/N^{i-1}) = 0$.
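Spelling out the last step: by Serre duality, $H^1(C, N^i/N^{i-1})$ is dual to $\on{Hom}(N^i/N^{i-1}, \omega_C)$. The image of a nonzero map $N^i/N^{i-1} \to \omega_C$ would be a quotient sheaf of $N^i/N^{i-1}$, hence of slope greater than $2g-2$ by semistability, and simultaneously a subsheaf of $\omega_C$, hence a line bundle of degree at most $\deg \omega_C = 2g-2$, a contradiction.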
\end{proof}
Combining the above, we are able to verify the parabolic analog of
\autoref{proposition:lower-bound-on-rank}, namely \autoref{proposition:generic-parabolic-global-generation} below.
In this paper, we will only require the case $\delta = 0, c = 1$
of \autoref{proposition:generic-parabolic-global-generation}
so we encourage the reader to focus on that case.
We include the more general version, as the proof is nearly the same,
and will be useful in future work.
\begin{proposition}
\label{proposition:generic-parabolic-global-generation}
Suppose $C$ is a smooth proper connected genus $g$ curve and
$E_\star = (E, \{E^i_j\}, \{\alpha^i_j\})$ is a nonzero parabolic bundle on $C$
with respect to $D = x_1 + \cdots + x_n$.
Suppose $\widehat{E}_\star$ is coparabolically semistable.
Let $U \subset \widehat E_0$ be a (non-parabolic) subbundle with
$c := \rk \widehat{E}_0 - \rk U$
and
$\delta := h^0(C, \widehat{E}_0) - h^0(C, U)$.
\begin{enumerate}
\item[(I)] If $\mu_\star(\widehat{E}_\star)> 2g-2 + n$,
then $\rk E >
gc - \delta$.
\item[(II)] If $\mu_\star(\widehat{E}_\star)= 2g-2+n$, then $\rk
E \geq gc - \delta$.
\end{enumerate}
In particular, if $\widehat{E}_\star$
fails to be generically globally generated, and
$\mu_\star(\widehat{E}_\star)> 2g-2 + n$, then $\rk E \geq g + 1$.
\end{proposition}
\begin{proof}
We first check the cases $g = 0$ and $g = 1$.
The statement is trivial when $g = 0$.
Case $(II)$ is trivial when $g = 1$, because
$\rk E \geq c$. Similarly, case $(I)$ is trivial when $g = 1$ and $\delta > 0$.
It remains to verify case $(I)$ when $g = 1$ and $\delta = 0$.
In this case, it is enough to show that $\rk U > 0$,
and since $H^0(C, U) = H^0(C, \widehat{E}_0)$,
it is enough to show
$H^0(C, \widehat{E}_0)\neq 0$.
By \autoref{lemma:quotient-slope}, $\mu(\widehat{E}_0) > 0$,
so Riemann-Roch implies $H^0(C, \widehat{E}_0) \neq 0$.
We now assume $g \geq 2$.
Let $V_\star := \widehat{E}_\star$ denote the given coparabolic bundle.
As a first step, we reduce to the case that $U$ is generically globally
generated.
Indeed, let $U'$ denote the saturation of the image
$H^0(C, U) \otimes \mathscr O_C \to U \to V$.
Since $h^0(C, U') \geq h^0(C, U)$ and $\rk V -
\rk U' \geq \rk V - \rk U$, the result holds for $U$ if it holds for
$U'$.
We may therefore assume $U$ is generically globally generated.
Let $0=N^0 \subset N^1 \subset \cdots \subset N^s = U$ denote the
Harder-Narasimhan filtration of $U$.
Let $t$ be the minimal index so that $\mu(N^{t+1}/N^{t}) \leq 2g-2$.
If no such index exists, take $t = s$.
We will show that in fact $\rk V/N^t > gc - \delta$ in case $(I)$ of the statement
of this proposition
and $\rk V/N^t \geq gc - \delta$ in case $(II)$.
We will do so by verifying the hypotheses of
\autoref{lemma:abstract-lower-bound-on-rank} applied to the subbundle
$U/N^t \subset V/N^t$.
To this end, using \autoref{lemma:quotient-slope},
we find
\begin{align}
\label{equation:quotient-slope-bound}
\mu(V/N^t) > 2g - 2 \text{ in case $(I)$ and }
\mu(V/N^t) \geq 2g - 2 \text{ in case $(II)$.}
\end{align}
We next verify
$h^0(C, V)- h^0(C, U) = h^0(C,V/N^t) - h^0(C, U/N^t)$,
which implies that the present value of $\delta$, $h^0(C, V)- h^0(C, U)$, agrees with the value of $\delta$ we will use
in our application of \autoref{lemma:abstract-lower-bound-on-rank},
$h^0(C,V/N^t) - h^0(C, U/N^t)$.
Indeed, $H^1(C, N^t) = 0$ by
\autoref{lemma:cohomology-vanish-for-large-slope}.
Therefore,
$h^0(C, U/N^t) = h^0(C, U) - h^0(C, N^t)$ and
$h^0(C, V/N^t) = h^0(C, V) - h^0(C, N^t)$.
Hence,
\begin{align*}
h^0(C, V/N^t)- h^0(C,U/N^t) = h^0(C, V) - h^0(C, U) = \delta.
\end{align*}
In the case $t = s$, we have $N^t = U$, so $U/N^t = 0$ and we have verified the hypotheses of
\autoref{lemma:abstract-lower-bound-on-rank};
we now assume $N^t \neq U$.
It remains to check hypotheses $(1)$ and $(2)$ of
\autoref{lemma:abstract-lower-bound-on-rank}.
We first check $(2)$. Each
graded piece of the Harder-Narasimhan filtration of $U/N^t$
has slope $\leq 2g - 2$ by construction of $N^t$, which verifies
the upper bound in $(2)$.
We claim the lower bound in $(2)$ follows from generic global generation of $U$.
Indeed, recall we are assuming $U$ is generically globally generated, via
the reduction made near the beginning of the proof.
Therefore,
$H^0(C, U/N^{s-1})
= H^0(C, N^s/N^{s-1}) \neq 0$,
as any quotient bundle of a generically globally generated bundle is
generically globally generated.
This implies $\mu(N^s/N^{s-1}) \geq 0$, and so $\mu(N^j/N^{j-1}) \geq 0$ for
all $1 \leq j \leq s$, as the slopes of the graded pieces are decreasing in $j$.
Finally, we verify $(1)$, again in the case $t< s$, i.e., $N^t \neq U$.
In the previous paragraph, we showed each graded piece of the
Harder-Narasimhan filtration of $U/N^t$ has slope at most $2g -2$, so
we also obtain $\mu(U/N^t) \leq 2g - 2$.
On the other hand, we have already verified that $\mu(V/N^t) \geq 2g-2$
in \eqref{equation:quotient-slope-bound}, above.
Hence,
$\mu(U/N^t) \leq 2g-2 \leq \mu(V/N^t)$,
verifying $(1)$.
Applying \autoref{lemma:abstract-lower-bound-on-rank} to the vector bundle
$V/N^t$ with the subbundle $U/N^t$ shows
\begin{align*}
\rk V/N^t &\geq g(\rk V/N^t -
\rk U/N^t) - \delta
= g(\rk V -
\rk U)-\delta
=gc - \delta,
\end{align*}
where the inequality is strict in case $(I)$ because $\mu(V/N^t) > 2g -
2$.
This proves cases $(I)$ and $(II)$ since $V = \widehat{E}_0$.
The final statement
holds by taking $U$ to be the saturation of the image of
$H^0(C, \widehat{E}_0) \otimes \mathscr O_C \to \widehat{E}_0$.
In this case, $H^0(C, U) = H^0(C, \widehat{E}_0)$, $\delta = 0$,
and $c \geq 1$
when $\widehat{E}_0$ is not generically globally generated. So we get
$\rk E \geq g + 1$.
\end{proof}
\subsection{Reduction for the proof of \autoref{theorem:hn-constraints-parabolic}}
\label{subsection:proving-hn}
We next prove some results in preparation for the proof of
\autoref{theorem:hn-constraints-parabolic}.
Reviewing the idea of the proof, described in \autoref{subsection:idea-of-proof},
may be helpful.
\begin{notation}
\label{notation:graded-E}
Let $(C, D)$ be a hyperbolic curve.
Let $(E, \nabla)$ be a flat vector bundle on $C$ with regular singularities along $D$, whose
associated monodromy representation is irreducible.
Let $E_\star$ be a parabolic structure on $E$ refined by the canonical parabolic structure associated to $\nabla$, as in \autoref{definition:associated-parabolic} and
\autoref{definition:refinements}. Let $q^\nabla: T_C(-D) \to \on{At}_{(C,D)}(E_\star)$ be the splitting of the Atiyah exact sequence associated to $\nabla$ and described in \autoref{definition:associated-parabolic}.
Let
$N^\bullet_\star$, given by $0 = N^0_\star \subset N^1_\star \subset \cdots
\subset N^m_\star = E_\star$, be a
nontrivial filtration of $E_\star$ by parabolic subbundles (with the induced parabolic structure). In particular, note $m > 1$.
Let $\on{gr}^i_{N_\star}(E_\star) := N_\star^i/N_\star^{i-1}$ denote the
quotient parabolic bundle with the induced parabolic quotient structure as described in
\autoref{subsubsection:induced-subbundle}.
The parabolic bundle $\mathscr{E}\kern -1pt nd(E_\star)_\star /\mathscr{E}\kern -1pt nd(E_\star, N^\bullet_\star)_\star$ has a filtration
by sheaves whose
associated graded sheaf is of the form
\begin{align*}
\oplus_{1 \leq i< j \leq m} \mathscr{H}\kern -2pt om(\on{gr}^i_{N_\star}(E_\star),
\on{gr}^{j}_{N_\star}(E_\star))_\star.
\end{align*}
For $i<j$ define $E_\star^{i,j} :=
\mathscr{H}\kern -2pt om(\on{gr}^i_{N_\star}(E_\star),\on{gr}^{j}_{N_\star}(E_\star))_\star.$
Let $\Delta=\mathscr{T}_{g,n}$ be the universal cover of the analytic stack
$\mathscr{M}_{g,n}$, and let $(\mathscr{C}, \mathscr{D})$ be the universal
marked curve over $\mathscr{T}_{g,n}$. Let $0\in \Delta$ be such that
$(\mathscr{C}, \mathscr{D})_0$ is isomorphic to $(C,D)$; fix such an
isomorphism. Let $(\mathscr{E}_\star, \widetilde{\nabla})$ be the universal
isomonodromic deformation of $(E_\star, \nabla)$ to $\mathscr{C}$.
\end{notation}
We will later take the filtration $N_\star^\bullet$ to be the Harder-Narasimhan
filtration (cf. \autoref{subsubsection:harder-narasimhan-parabolic}) of $E_\star$.
By \autoref{proposition:irreduciblity-splitting-condition},
the map $q^\nabla$ yields a non-zero map
\begin{align}\label{q-nabla-map}
T_C(-D)\overset{q^\nabla}\longrightarrow
\on{At}_{(C,D)}(E_\star) \to
\mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star, N_\star^\bullet).
\end{align}
We now observe that if $N_\star^\bullet$ extends to the universal isomonodromic deformation of
$(C,D, E_\star)$ on the first-order neighborhood of $(C,D)$, the induced map on first cohomology must vanish.
\begin{lemma}
\label{lemma:h1-map}
Retain notation as in \autoref{notation:graded-E}.
If the filtration $N_\star^\bullet$ extends to a filtration on the restriction of
$(\mathscr E_\star,\widetilde{\nabla})$ to the first-order neighborhood of $(C,D)=(\mathscr{C}, \mathscr{D})_0\subset (\mathscr{C}, \mathscr{D})$,
then the composite map
\begin{align*}
H^1(C,T_C(-D))
\overset{(q^\nabla)_*}\longrightarrow H^1(C, \on{At}_{(C,D)}(E_\star))
\to H^1(C, \mathscr{E}\kern -1pt nd(E_\star)_\star/\mathscr{E}\kern -1pt nd(E_\star, N^\bullet_\star)_\star).
\end{align*}
induced by \eqref{q-nabla-map} is identically zero.
\end{lemma}
\begin{proof}
By \autoref{proposition:connection-h1}
the map $$(q^\nabla)_*: H^1(C,T_C(-D))\to
H^1(C, \on{At}_{(C,D)}(E_\star))$$
induced by the connection
sends a first-order deformation of the pointed curve $(C,D)$
to the corresponding first-order deformation of the triple $(C,D,E_\star)$
obtained from isomonodromically deforming the connection $\nabla$.
But given a first-order deformation $(\widetilde{C}, \widetilde{D} ,
\widetilde{E}_\star)$
of $(C, D, E_\star)$
such that $N_\star^\bullet \subset E_\star$ admits an extension $\widetilde N_\star^\bullet$ to
$\widetilde E_\star$,
the corresponding element of
$H^1(C, \on{At}_{(C,D)}(E_\star))$ maps to $0$ in $H^1(C,
\mathscr{E}\kern -1pt nd(E_\star)_\star/\mathscr{E}\kern -1pt nd(E_\star, N^\bullet_\star)_\star)$,
by \autoref{lemma:parabolic-extension-obstruction}. The assumption is precisely that this is true for all elements of $H^1(C, \on{At}_{(C,D)}(E_\star))$ in the image of $(q^\nabla)_*$.
\end{proof}
We now analyze the parabolic bundles $E_\star^{i,j}:=\mathscr{H}\kern -2pt om(\on{gr}^i_{N_\star}(E_\star),
\on{gr}^j_{N_\star}(E_\star))_\star,$ for $i<j.$
\begin{lemma}
\label{lemma:nonzero-induced-connection}
With notation as in \autoref{notation:graded-E},
for every $0 < i < m$, there exist $j,k$ with $j < i$ and $k \geq
i + 1$ so that the nonzero map $T_C(-D)\to
\mathscr{E}\kern -1pt nd(E_\star)_\star/\mathscr{E}\kern -1pt nd(E_\star, N^\bullet_\star)_\star$ induces a nonzero map
$\psi_{j+1, k}: T_C(-D)\to E_\star^{j+1,k}$.
\end{lemma}
\begin{proof}
First, recall the non-zero map
$T_C(-D)\to
\mathscr{E}\kern -1pt nd(E_\star)/\mathscr{E}\kern -1pt nd(E_\star,N_\star^\bullet)$ induced by $q^\nabla$, produced in
\autoref{proposition:irreduciblity-splitting-condition}.
Let $j$ be maximal such that $\nabla(N^j)\subset N^i\otimes \Omega^1_C(D)$.
Note that $j< i$ as the monodromy of $(E,\nabla)|_{C\setminus D}$ is irreducible, so $N^i$ is
not a proper flat subbundle of $(E, \nabla)$, implying $\nabla(N^i) \not \subset
N^i\otimes \Omega^1_C(D)$.
Let $k$ be minimal such that $\nabla(N^{j+1})\subset N^k\otimes\Omega^1_C(D)$.
Note that $k\geq i+1$ by the definition of $j$.
By construction, the connection induces a nonzero $\mathscr{O}_C$-linear map of parabolic bundles
\begin{align*}
N_\star^{j+1}/N_\star^j \to (N_\star^k/N_\star^{i}) \otimes
\Omega_C^1(D)\to (N_\star^k/N_\star^{k-1}) \otimes
\Omega_C^1(D),
\end{align*}
or equivalently a nonzero map
\[\psi_{j+1, k}: T_C(-D) \to \mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{N_\star}(E_\star),
\on{gr}^k_{N_\star}(E_\star))_\star = E_\star^{j+1,k}.\qedhere\]
\end{proof}
We have shown that for each $i$, there exist $j<i< k$ and a nonzero map $$T_C(-D)\to E_\star^{j+1,k} =
\mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{N_\star}(E_\star),
\on{gr}^{k}_{N_\star}(E_\star))_\star.$$
We next refine \autoref{lemma:h1-map} by showing that if its hypotheses are satisfied and if in addition $N_\star^\bullet$ is the Harder-Narasimhan filtration of $E_\star$, the map on $H^1$
induced by
$\psi_{j+1, k}: T_C(-D)\to E_\star^{j+1,k}$
must also vanish.
\begin{lemma}
\label{proposition:filtered-h1-map}
Use notation as in \autoref{notation:graded-E}. Suppose in addition that $N_\star^\bullet$ is the Harder-Narasimhan filtration of $E_\star$. Fix $i$ with $0<i<m$ and let $j,k,$ and $$\psi_{j+1, k}: T_C(-D)\to E_\star^{j+1,k}$$ be the data constructed in \autoref{lemma:nonzero-induced-connection}. If the filtration $N_\star^\bullet$ extends to a filtration on the restriction of
$(\mathscr E_\star,\widetilde{\nabla} )$ to the first-order neighborhood of $(C,D)=(\mathscr{C}, \mathscr{D})_0\subset (\mathscr{C}, \mathscr{D})$,
then the map
$H^1(C, T_C(-D))\to H^1(C, E_\star^{j+1,k})$
induced by $\psi_{j+1, k}$
vanishes.
\end{lemma}
For the proof, we will need the following two lemmas.
\begin{lemma}
\label{lemma:hn-sections-vanishing}
Suppose $F_\star, G_\star$ are parabolic vector bundles on a smooth
proper connected curve $C$ with respect to a divisor $D$
such that the Harder-Narasimhan filtrations $0 = N^0_\star \subset N^1_\star
\subset \cdots \subset N^u_\star= F_\star$ and
$0 = M^0_\star \subset M^1_\star
\subset \cdots \subset M^v_\star= G_\star$
satisfy
$\mu_\star(\on{gr}_{N_\star^\bullet}^i(F_\star)) >
\mu_\star(\on{gr}_{M_\star^\bullet}^j(G_\star))$
for any $1 \leq i \leq u$ and $1 \leq j \leq v$.
Then $H^0(C, F_\star^\vee \otimes G_\star) = 0$.
\end{lemma}
\begin{proof}
Observe that $F_\star^\vee \otimes G_\star$ has a filtration whose associated graded pieces are
$\on{gr}_{N_\star^\bullet}^i(F_\star)^\vee \otimes
\on{gr}_{M_\star^\bullet}^j(G_\star)$,
so it is enough to show the latter have vanishing $H^0$.
A nonzero element of $H^0(C, \on{gr}_{N_\star^\bullet}^i(F_\star)^\vee \otimes
\on{gr}_{M_\star^\bullet}^j(G_\star))$ would yield a nonzero map
$\on{gr}_{N_\star^\bullet}^i(F_\star) \to
\on{gr}_{M_\star^\bullet}^j(G_\star)$.
The saturation of the image of this map would be a parabolic bundle
$H_\star$. By semistability, $\mu_\star(\on{gr}_{N_\star^\bullet}^i(F_\star)) \leq
\mu_\star(H_\star) \leq \mu_\star(\on{gr}_{M_\star^\bullet}^j(G_\star))$,
contradicting the assumption that
$\mu_\star(\on{gr}_{N_\star^\bullet}^i(F_\star)) >
\mu_\star(\on{gr}_{M_\star^\bullet}^j(G_\star))$.
\end{proof}
\begin{lemma}
\label{lemma:h1-injection}
Suppose $$0\to F_\star \to G_\star \to H_\star\to 0$$ is a short exact sequence of
parabolic sheaves on a smooth proper connected curve $C$ with respect to a divisor
$D$ and we are given a map $E \to F_\star$
inducing the $0$ map $H^1(C, E) \to H^1(C, F_\star) \to H^1(C,
G_\star)$.
If additionally $H^0(C, H_\star) = 0$, then $H^1(C, E) \to H^1(C,F_\star)$ vanishes.
\end{lemma}
\begin{proof}
The vanishing of $H^0(C, H_\star)$ yields an injection $H^1(C, F_\star)
\to H^1(C, G_\star)$. If the composition $H^1(C, E) \to H^1(C, F_\star)
\to H^1(C, G_\star)$ vanishes then so does $H^1(C, E) \to H^1(C,
F_\star)$.
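Concretely, the relevant portion of the long exact sequence in cohomology associated to the given short exact sequence is
\begin{align*}
0 = H^0(C, H_\star) \to H^1(C, F_\star) \to H^1(C, G_\star),
\end{align*}
which exhibits the injection used above.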
\end{proof}
We now proceed with the proof of \autoref{proposition:filtered-h1-map}.
\begin{proof}[Proof of \autoref{proposition:filtered-h1-map}]
The proof is a diagram chase.
Recall that the nonzero map in \autoref{lemma:nonzero-induced-connection} was
constructed by beginning with the map $T_C(-D) \to
\mathscr{E}\kern -1pt nd(E_\star)_\star/\mathscr{E}\kern -1pt nd(E_\star, N_\star^\bullet)_\star$, and then showing the induced map
$T_C(-D) \to
\mathscr{H}\kern -2pt om(N^{j+1}_\star, E_\star/N^{k-1})_\star$
factors through a map
$T_C(-D) \to
\mathscr{H}\kern -2pt om(N^{j+1}_\star, N^k_\star/N^{k-1})_\star$,
which in turn factors through
$T_C(-D) \to
\mathscr{H}\kern -2pt om(N^{j+1}_\star/N^j_\star, N^k_\star/N^{k-1})_\star$.
We will show each of the above three maps vanishes on $H^1$.
By \autoref{lemma:h1-map},
$T_C(-D) \to
\mathscr{E}\kern -1pt nd(E_\star)_\star$
vanishes on $H^1$.
Next, the injection of parabolic sheaves $N^{j+1}_\star \to E_\star$ induces a
surjection of parabolic sheaves $\mathscr{E}\kern -1pt nd(E_\star)_\star\twoheadrightarrow
\mathscr{H}\kern -2pt om(N_\star^{j+1}, E_\star)_\star$.
From this, we obtain
a surjection
$$\mathscr{E}\kern -1pt nd(E_\star)_\star /\mathscr{E}\kern -1pt nd(E_\star,N^\bullet_\star)_\star \twoheadrightarrow \mathscr{H}\kern -2pt om(N_\star^{j+1},
E_\star/N_\star^{k-1})_\star.$$
Note that $H^2$ of parabolic sheaves vanishes on a curve, because the same is true
for usual sheaves. We therefore obtain a surjection
$$H^1(C,\mathscr{E}\kern -1pt nd(E_\star)_\star/\mathscr{E}\kern -1pt nd(E_\star,N^\bullet_\star)_\star)\twoheadrightarrow H^1(C,\mathscr{H}\kern -2pt om(N_\star^{j+1},
E_\star/N_\star^{k-1})_\star).$$ Thus the composition $T_C(-D)\to
\mathscr{E}\kern -1pt nd(E_\star)_\star/\mathscr{E}\kern -1pt nd(E_\star,N^\bullet_\star)_\star\to
\mathscr{H}\kern -2pt om(N_\star^{j+1}, E_\star/N_\star^{k-1})_\star$ induces
the zero map on $H^1$, because the first map does by \autoref{lemma:h1-map}.
We next show that the natural map $T_C(-D)\to \mathscr{H}\kern -2pt om(\on{gr}_{N_\star}^{j+1}(E_\star),
E_\star/N_\star^{k-1})_\star$, to be described below, vanishes on $H^1$.
Using \autoref{lemma:hn-sections-vanishing}, we find that
$$H^0(C, \mathscr{H}\kern -2pt om(N_\star^{j}, E_\star/N_\star^{k-1})_\star)=0.$$
Therefore, applying \autoref{lemma:h1-injection} to the short exact sequence $$0\to \mathscr{H}\kern -2pt om(\on{gr}_{N_\star}^{j+1}(E_\star),
E_\star/N_\star^{k-1})_\star \to \mathscr{H}\kern -2pt om(N_\star^{j+1},
E_\star/N_\star^{k-1})_\star \to
\mathscr{H}\kern -2pt om(N_\star^{j}, E_\star/N_\star^{k-1})_\star \to 0$$
and the map $f: T_C(-D) \to \mathscr{H}\kern -2pt om(\on{gr}_{N_\star}^{j+1}(E_\star),
E_\star/N_\star^{k-1})_\star$
shows that $f$ induces the $0$ map on $H^1$.
We conclude by showing the map $$\psi_{j+1, k}: T_C(-D)\to
\mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{N_\star}(E_\star), \on{gr}^k_{N_\star}(E_\star))_\star=E_\star^{j+1,k}$$
vanishes on $H^1$.
Again by \autoref{lemma:hn-sections-vanishing},
$$H^0(C, \mathscr{H}\kern -2pt om(\on{gr}_{N_\star}^{j+1}(E_\star),
E_\star/N_\star^k)_\star) = 0.$$
Applying \autoref{lemma:h1-injection} to the exact sequence
\begin{equation}
\begin{tikzpicture}[baseline= (a).base]
\node[scale=.7] (a) at (0,0){
\begin{tikzcd}
0 \ar {r} & \mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{N_\star}(E_\star),
\on{gr}^k_{N_\star}(E_\star))_\star \ar {r} & \mathscr{H}\kern -2pt om(\on{gr}_{N_\star}^{j+1}(E_\star),
E_\star/N_\star^{k-1})_\star \ar {r} & \mathscr{H}\kern -2pt om(\on{gr}_{N_\star}^{j+1}(E_\star),
E_\star/N_\star^k)_\star \ar {r} & 0
\end{tikzcd}
};
\end{tikzpicture}
\end{equation}
and the map $\psi_{j+1, k}$ shows that $\psi_{j+1,k}$ vanishes on $H^1$, as
desired.
\end{proof}
We now show that if the map $H^1(C, T_C(-D)) \to H^1(C, E_\star^{j+1,k})$
vanishes, we will be able to produce a coparabolic bundle which is not generically
globally generated.
We will later apply \autoref{proposition:generic-parabolic-global-generation}
to this bundle
to obtain \autoref{theorem:hn-constraints-parabolic}(1).
\begin{lemma}
\label{lemma:not-generically-globally-generated}
With notation as in \autoref{notation:graded-E},
suppose the map
$H^1(C, T_C(-D)) \to
H^1(C, E_\star^{j+1,k})$ (induced by the non-zero map $\psi_{j+1, k}: T_C(-D)\to E_\star^{j+1,k}$ of \autoref{lemma:nonzero-induced-connection}) vanishes.
Then the coparabolic bundle $\reallywidehat{(E_\star^{j+1,k})^\vee} \otimes \omega_C(D)$ is not generically globally
generated.
\end{lemma}
\begin{proof}
Since
$\psi_{j+1, k}: T_C(-D) \to E_\star^{j+1,k}$ is nonzero,
we obtain a nonzero Serre dual map
\begin{align}
\label{equation:serre-dual-map}
\reallywidehat{(E_\star^{j+1,k})^\vee}
\otimes \omega_C(D) \to \omega_C \otimes \omega_C(D),
\end{align}
which induces the $0$ map
\begin{align*}
H^0(C,\reallywidehat{(E_\star^{j+1,k})^\vee} \otimes \omega_C(D) )
\to H^0(C, \omega_C\otimes \omega_C(D))
\end{align*}
by Serre duality \autoref{proposition:serre-duality} and
\autoref{proposition:filtered-h1-map}.
In particular,
$\reallywidehat{(E_\star^{j+1,k})^\vee} \otimes \omega_C(D)$ is not
generically globally generated. Indeed, any global section must lie in the kernel
of \eqref{equation:serre-dual-map}, which has corank one in
$\reallywidehat{(E_0^{j+1,k})^\vee} \otimes \omega_C(D)$.
\end{proof}
This concludes our setup for proving
\autoref{theorem:hn-constraints-parabolic}(1).
To prove
\autoref{theorem:hn-constraints-parabolic}(2), we will need the following
generalization of \autoref{lemma:global-generation} to the coparabolic setting.
\begin{lemma}
\label{lemma:coparabolic-global-generation}
If $V_\star$ is a semistable coparabolic bundle with respect to a
divisor $D = x_1 + \cdots + x_n$, of coparabolic slope
$\mu_\star(V_\star) > 2g - 1 + n$, then $V_\star$ is globally
generated.
\end{lemma}
\begin{proof}
Let $p\in C$ be a point.
Suppose $V_\star = \widehat{E}_\star$ for $E_\star$ a parabolic bundle.
It suffices to show $V_\star$ is generated
by global sections at $p$. Indeed,
$V_\star(-p)$ is a semistable coparabolic bundle with coparabolic slope
$\mu_\star(V_\star(-p)) > 2g-2+n$.
Therefore, $\mu_\star(E_\star^\vee(p) \otimes \omega_C(D)) < 0$,
implying $H^0(C, E_\star^\vee(p) \otimes \omega_C(D)) = 0$,
as any global section would be destabilizing.
By Serre duality \autoref{proposition:serre-duality}, (taking
$E_\star^\vee(p) \otimes \omega_C(D)$ in place of the parabolic
vector bundle $E_\star$ in \autoref{proposition:serre-duality}),
$H^1(C, V_\star(-p)) = H^0(C, E_\star^\vee(p) \otimes \omega_C(D))^\vee = 0$.
As in \autoref{lemma:global-generation},
the long exact sequence on cohomology associated to $0 \to V(-p) \to V \to V|_p \to 0$
implies $H^0(C, V) \otimes \mathscr O \to V \to V|_p$ is surjective, as
desired.
\end{proof}
\subsubsection{}
\label{subsection:proof-hn}
We now prove \autoref{theorem:hn-constraints-parabolic}.
\begin{proof}[Proof of \autoref{theorem:hn-constraints-parabolic}]
We use notation as in \autoref{notation:graded-E}. We aim first to show that if $(E_\star',
\nabla')$ is the isomonodromic deformation of $(E_\star, \nabla)$ to
an analytically
general nearby curve $C'$, and if $E'_\star$ is not semistable, then for every $i$ there are some $j < i < k$ with
$\rk \on{gr}^{j+1}_{HN}E_\star'\cdot \rk \on{gr}^k_{HN}E_\star'\geq g+1.$
By \cite[Lemma 5.1]{BHH:irregular},
the locus of bundles in a family $\mathscr E_\star$ on $\mathscr C \to \Delta$
which are not semistable form a closed analytic subset,
and if a general member is not semistable, then, after passing to an open subset
of $\Delta$, there is a filtration on $\mathscr E_\star$ restricting to the
Harder-Narasimhan filtration on each fiber. Thus after replacing $(C,D)$ with an
analytically general nearby curve $(C',D')$, and replacing $(E_\star,
\nabla)$ with
the restriction $(E_\star', \nabla')$ of the isomonodromic deformation to $(C',D')$,
we may assume the Harder-Narasimhan filtration $HN^\bullet$ of $E_\star'$ extends to a filtration of $\mathscr{E}_\star$ on a first-order neighborhood of $C'$. We set $(E'_\star)^{i,j} :=
\mathscr{H}\kern -2pt om(\on{gr}^i_{HN}(E'_\star),\on{gr}^{j}_{HN}(E'_\star))_\star$; note that $(E'_\star)^{i,j}$ is semistable by the definition of the Harder-Narasimhan filtration.
We next verify that for every $0 < i < m$, there is some $j < i < k$ for which
$\reallywidehat{((E'_\star)^{j+1,k})^\vee} \otimes \omega_C(D)$ is not generically globally generated.
By \autoref{lemma:nonzero-induced-connection},
for every $0 < i < m$,
there is some $j < i$ and $k \geq i+1$ so that the map $T_{C'}(-D') \to
\mathscr{E}\kern -1pt nd(E_\star')/\mathscr{E}\kern -1pt nd_{HN^\bullet}(E_\star')$
induces a nonzero map
$$T_{C'}(-D') \to (E'_\star)^{j+1,k}:=\mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{HN}E_\star', \on{gr}^k_{HN} E_\star')_\star.$$
By \autoref{proposition:filtered-h1-map},
$H^1(C', T_{C'}(-D')) \to
H^1(C', (E'_\star)^{j+1,k})$ vanishes.
Hence, by
\autoref{lemma:not-generically-globally-generated},
$\reallywidehat{((E'_\star)^{j+1,k})^\vee} \otimes \omega_C(D)$
is not generically globally generated. Note that
$\reallywidehat{((E'_\star)^{j+1,k})^\vee} \otimes \omega_C(D)$
has slope $> 2g - 2 + n$ by \autoref{lemma:degree-in-duals}.
We are finally in a position to prove \autoref{theorem:hn-constraints-parabolic}(1).
It follows from \autoref{proposition:generic-parabolic-global-generation} that $$\rk
\on{gr}^{j+1}_{HN}E_\star'\cdot \rk \on{gr}^k_{HN}E_\star'= \rk
\reallywidehat{((E'_\star)^{j+1,k})^\vee}
\otimes \omega_C(D) \geq g+1.$$ Thus \autoref{theorem:hn-constraints-parabolic}(1) holds.
We now conclude by verifying \autoref{theorem:hn-constraints-parabolic}(2).
By \autoref{lemma:coparabolic-global-generation},
\begin{align*}
\reallywidehat{((E')_\star^{j+1,k})^\vee} \otimes \omega_C(D) =
\reallywidehat{\mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{HN}E_\star',
\on{gr}^{k}_{HN} E_\star')^\vee} \otimes \omega_C(D)
\end{align*}
must have slope at most $2g-1+n$, since it is not generically globally generated.
As $\mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{HN}E_\star', \on{gr}^{k}_{HN} E_\star')$ has
negative parabolic slope by the
definition of the Harder-Narasimhan filtration and
\autoref{lemma:degree-in-duals}, we find
\begin{align*}
-1 \leq \mu_\star(\mathscr{H}\kern -2pt om(\on{gr}^{j+1}_{HN}(E_\star'),
\on{gr}^{k}_{HN}(E_\star'))) < 0.
\end{align*}
Using \autoref{lemma:degree-in-duals},
the parabolic slope of a tensor product of parabolic vector bundles is the sum of their
parabolic slopes, and taking duals negates parabolic slope.
Therefore,
\begin{align*}
0 < \mu_\star(\on{gr}^{j+1}_{HN}({E}_\star')) -
\mu_\star(\on{gr}^{k}_{HN}({E}_\star'))
\leq 1.
\end{align*}
Since
\begin{align*}
\mu_\star(\on{gr}^{j+1}_{HN}({E}_\star')) \geq
\mu_\star(\on{gr}^{i}_{HN}({E}_\star')) \geq
\mu_\star(\on{gr}^{i+1}_{HN}({E}_\star')) \geq
\mu_\star(\on{gr}^{k}_{HN}({E}_\star')),
\end{align*}
we also conclude
\[
0 < \mu_\star(\on{gr}^{i}_{HN}({E}_\star')) -
\mu_\star(\on{gr}^{i+1}_{HN}({E}_\star')) \leq 1.\qedhere
\]
\end{proof}
\subsubsection{}
\label{subsubsection:corollary-proof}
We now prove \autoref{cor:stable-parabolic}, using
\autoref{theorem:hn-constraints-parabolic}
and the AM-GM inequality.
\begin{proof}[Proof of \autoref{cor:stable-parabolic}]
It suffices to consider the case where $(E, \nabla)$ has irreducible monodromy, as an extension of semistable parabolic bundles of the same slope is semistable.
If $(E_\star', \nabla')$ is an isomonodromic deformation of $(E_\star,\nabla)$ to an
analytically general nearby curve which is not semistable, it follows from
\autoref{theorem:hn-constraints-parabolic}(1) that for each $i$, there will be $j,k$ with
$j<i<k$ so that the Harder-Narasimhan filtration $HN$ of $E_\star'$ satisfies
$\rk \on{gr}^{j+1}_{HN}E_\star'\cdot \rk
\on{gr}^k_{HN}E_\star'\geq g+1.$
Since $\rk \on{gr}^{j+1}_{HN}E_\star' + \rk \on{gr}^k_{HN}E_\star' \leq \rk E_\star' = \rk
E_\star$,
it follows from the arithmetic mean-geometric mean inequality that
$$g+1\leq \rk \on{gr}^{j+1}_{HN}E_\star'\cdot \rk \on{gr}^k_{HN}E_\star'\leq \left(\frac{\rk
E_\star}{2} \right)^2.$$
So
$\rk E_\star \geq 2 \sqrt{g+1}$ as desired.
\end{proof}
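The integer arithmetic behind this bound can be checked by brute force (an illustrative Python sketch, logically independent of the proof): any two positive integer ranks whose product is at least $g+1$ have sum, and hence total rank, at least $2\sqrt{g+1}$.

```python
import itertools, math

def min_total_rank(g, max_rank=60):
    # Smallest a + b over positive integers a, b with a * b >= g + 1.
    return min(a + b
               for a, b in itertools.product(range(1, max_rank), repeat=2)
               if a * b >= g + 1)

# By AM-GM, a + b >= 2 sqrt(ab) >= 2 sqrt(g + 1); verify for small g.
for g in range(1, 30):
    assert min_total_rank(g) >= 2 * math.sqrt(g + 1)
print("bound verified for g = 1..29")
```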
\subsubsection{}
\label{subsection:proof-intro-stable}
We conclude by proving \autoref{theorem:hn-constraints} and
\autoref{cor:stable}.
\begin{proof}[{Proof of \autoref{theorem:hn-constraints} and \autoref{cor:stable}}]
These follow immediately from \autoref{theorem:hn-constraints-parabolic} and \autoref{cor:stable-parabolic} respectively, taking
$E_\star$ to have the trivial parabolic structure.
\end{proof}
\section{Variations of Hodge structure on an analytically general curve}\label{section:hodge-theoretic-results}
We prove \autoref{theorem:isomonodromic-deformation-CVHS} in
\autoref{subsection:iso-def-proof},
\autoref{theorem:very-general-VHS} in \autoref{subsection:very-general-vhs},
\autoref{corollary:abelian-schemes} in
\autoref{subsubsection:abelian-scheme-proof},
and
\autoref{corollary:geometric-local-systems} in
\autoref{subsubsection:geometric-origin-proof}.
Finally, we prove an additional application concerning maps from very general
curves to Hilbert modular varieties in \autoref{subsection:hilbert-modular}.
\subsection{The proof of \autoref{theorem:isomonodromic-deformation-CVHS}}
\label{subsection:iso-def-proof}
We start with the following lemma, which gives a useful criterion for showing a
monodromy representation is unitary.
\begin{lemma}
\label{lemma:unitary}
Suppose $(C, x_1, \ldots, x_n)$ is an $n$-pointed hyperbolic curve and $(E,
\nabla)$ is a flat vector bundle on $C\setminus\{ x_1, \ldots, x_n\}$
underlying a polarizable complex variation of Hodge structure. Let
$(\overline{E}_\star, \overline{\nabla})$ be the Deligne canonical extension of $(E, \nabla)$ to a flat vector bundle on $C$ with regular singularities at the $x_i$, with the parabolic structure associated to $\overline{\nabla}$.
If $\overline{E}_\star$ is semistable, then the representation of $\pi_1(C\setminus\{x_1, \cdots, x_n\})$
associated to $(E, \nabla)$ is unitary.
\end{lemma}
\begin{proof}
By \autoref{prop:basic-facts}(2), we may write
$\mathbb{V}:=\ker(\nabla)$ as $$\mathbb{V}=\bigoplus_i
\mathbb{L}_i\otimes W_i$$ where the $\mathbb{L}_i$ each have irreducible
monodromy and also underlie polarizable variations of Hodge structure,
and the $W_i$ are constant complex Hodge structures.
It suffices to show the representation associated to each $\mathbb L_i$ has unitary monodromy.
We may therefore reduce to the case that $\mathbb{V} = \mathbb L_i$ and assume
that $(E,\nabla)$ has irreducible monodromy.
Let $i$ be maximal such that $F^i\overline{E}_\star$ is non-zero.
Since $\overline{E}_\star$ is semistable, it follows from
\autoref{lemma:positive-Hodge-bundle} that the natural map
\begin{align*}
F^i\overline{E}_\star \to
F^{i-1}\overline{E}_\star/F^i\overline{E}_\star\otimes \omega_C
\end{align*}
induced by the connection is zero, i.e.~the connection preserves
$F^i\overline{E}_\star$. By irreducibility
of the monodromy of $(E, \nabla)$, we must have that $F^i\overline{E}_\star$ equals
$\overline{E}_\star.$
But in this case $({E}, \nabla)$ is unitary, as the monodromy preserves the polarization, a definite Hermitian form.
\end{proof}
\subsubsection{}
We now recall the setup of
\autoref{theorem:isomonodromic-deformation-CVHS}. Let
$(C, x_1, \cdots, x_n)$
be an $n$-pointed
hyperbolic curve
of genus $g$.
Let $({E}, \nabla)$ be a flat vector bundle on
$C$ with $\on{rk}{E}<2\sqrt{g+1}$ such that $(E,\nabla)$ has regular singularities at the $x_i$. Our goal is to show that if an isomonodromic
deformation of ${(E, \nabla)}$ to an analytically general nearby $n$-pointed curve underlies a polarizable complex variation of Hodge structure, then $({E},\nabla)$ has unitary monodromy.
\begin{proof}[Proof of \autoref{theorem:isomonodromic-deformation-CVHS}]
As the hypothesis is about the restriction of $(E, \nabla)$ to $C\setminus \{x_1, \cdots, x_n\}$, we may without loss of generality assume $(E,\nabla)$ is the Deligne canonical extension of $(E, \nabla)|_{C\setminus \{x_1, \cdots, x_n\}}$.
Endow $E$ with the parabolic structure associated to $\nabla$, and denote the
corresponding parabolic bundle by $E_\star$.
After replacing $(C, x_1, \cdots, x_n)$ with an analytically general nearby
curve, and $(E,\nabla)$ with its isomonodromic deformation to this curve, we may
assume by \autoref{cor:stable-parabolic} that $E_\star$ is semistable. Thus $(E, \nabla)$ has unitary monodromy by \autoref{lemma:unitary}.
\end{proof}
\subsection{The proof of \autoref{theorem:very-general-VHS}}
\label{subsection:very-general-vhs}
The proof of \autoref{theorem:very-general-VHS} follows
from the integrality assumption and the following lemma.
\begin{lemma}
\label{lemma:integral-and-unitary-implies-finite}
Suppose $\Gamma$ is a group,
$K$ is a number field, and $\rho$ is
a representation $$\rho: \Gamma\to \on{GL}_m(\mathscr{O}_K).$$
If for each embedding $\iota: K\hookrightarrow \mathbb{C}$ the
representation $\rho\otimes_{\mathscr{O}_K, \iota} \mathbb{C}$ is unitary, then
$\rho$ has finite image.
\end{lemma}
\begin{proof}
Indeed, for $\iota: K\hookrightarrow\mathbb{C}$ an embedding, let $$\rho_\iota:
\Gamma\to \on{GL}_m(\mathbb{C})$$ be the corresponding representation $\rho\otimes_{\mathscr{O}_K, \iota} \mathbb{C}$. First,
$$\prod_\iota \rho_\iota: \Gamma\to \prod_\iota \on{GL}_m(\mathbb{C})$$ has
compact image by the definition of being unitary.
Moreover, the image of $$\mathscr{O}_K\hookrightarrow \prod_\iota \mathbb{C}$$
is discrete, since the difference of any two distinct elements of
$\mathscr{O}_K$ has norm at least $1$.
Hence the image of $\prod_\iota \rho_\iota$ is discrete and compact, and therefore finite.
\end{proof}
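The discreteness step — a nonzero algebraic integer cannot be small in every embedding at once, since the product of its embedded absolute values is a positive integer — can be illustrated concretely (a hypothetical Python sketch using $\mathbb{Z}[\sqrt{2}]$ in place of a general $\mathscr{O}_K$):

```python
import itertools, math

SQRT2 = math.sqrt(2.0)

def embeddings(a, b):
    # The two real embeddings of a + b*sqrt(2): sqrt(2) -> +/- sqrt(2).
    return (a + b * SQRT2, a - b * SQRT2)

def norm(a, b):
    # Product of the two embeddings, an ordinary integer: |a^2 - 2 b^2|.
    return abs(a * a - 2 * b * b)

# Over a box of nonzero elements, the norm is a positive integer, hence
# at least 1: the image of Z[sqrt(2)] in R x R is discrete.
min_norm = min(norm(a, b)
               for a, b in itertools.product(range(-20, 21), repeat=2)
               if (a, b) != (0, 0))
print(min_norm)  # 1
```

For instance the unit $1+\sqrt 2$ is large in one embedding and small in the other, but the product of the two absolute values is exactly $1$.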
Let $K$ be a number field with ring of integers
$\mathscr{O}_K$. Let $(C, x_1, \cdots, x_n)$ be an analytically general
hyperbolic $n$-pointed curve of genus $g$, and let $\mathbb{V}$ be an $\mathscr{O}_K$-local system on $C\setminus\{x_1, \cdots, x_n\}$ with infinite monodromy.
Suppose that for each embedding $\iota: \mathscr{O}_K\hookrightarrow
\mathbb{C}$, $\mathbb{V}\otimes_{\mathscr{O}_K, \iota}\mathbb{C}$ underlies a
polarizable complex variation of Hodge structure. Our goal is to prove
\autoref{theorem:very-general-VHS}, which states that $$\on{rk}_{\mathscr{O}_K}(\mathbb{V})\geq 2\sqrt{g+1}.$$
\begin{proof}[Proof of \autoref{theorem:very-general-VHS}]
We use
$\mathscr{T}_{g,n}$ to denote the universal cover of $\mathscr{M}_{g,n}$.
For a fixed representation
$\rho: \pi_1(C\setminus\{x_1, \cdots, x_n\})\to \on{GL}_m(\mathscr{O}_K)$,
let $T_\rho$ denote the set of $[(C', x_1', \cdots, x_n')]\in
\mathscr{T}_{g,n}$
for which the associated $\mathscr O_K$-local system $\mathbb V$ has the
following property:
for each embedding $\iota: \mathscr{O}_K\hookrightarrow \mathbb{C}$,
$\mathbb{V}\otimes_{\mathscr{O}_K, \iota}\mathbb{C}$ underlies a polarizable
complex variation of Hodge structure on $C' \setminus \{x_1', \ldots, x_n'\}$.
Let $M_\rho$ denote the image of $T_\rho$ under the covering map
$\mathscr{T}_{g,n} \to \mathscr M_{g,n}$.
Our goal is to show that an analytically very general point of $\mathscr M_{g,n}$ lies in the complement of
the union of the $M_\rho,$ where $\rho$ ranges over the set of representations
of $\pi_1(C\setminus\{x_1, \cdots, x_n\})\to \on{GL}_r(\mathscr{O}_K)$, with
infinite image, for $K$ a number field, and $r<2\sqrt{g+1}$.
Since there are only countably many such representations $\rho$,
it is enough to show that an analytically very general point lies in the complement
of $M_\rho$ for fixed $\rho$.
Since $(C, x_1, \ldots, x_n)$ is hyperbolic,
$\mathscr{T}_{g,n} \to \mathscr M_{g,n}$
is a covering space of countable degree, and so the image of a closed analytic
set is locally contained in a countable union of closed analytic subsets.
It therefore suffices to show that for any $\rho$ with infinite monodromy and
rank $< 2 \sqrt{g+1}$, $T_\rho$ is contained in a closed analytic subset
of $\mathscr T_{g,n}$.
We now show any such $T_\rho$ is contained in a closed analytic subset of
$\mathscr{T}_{g,n}$.
Indeed, suppose $\mathbb V$ is the local system associated to $\rho$ on some curve $C\setminus\{x_1, \cdots, x_n\}$, and that for each embedding $\iota: \mathscr{O}_K\hookrightarrow \mathbb{C}$, $\mathbb{V}\otimes_{\mathscr{O}_K, \iota} \mathbb{C}$ underlies a polarizable complex variation of Hodge structure.
It is enough to show this complex polarizable variation of Hodge structure does
not extend to an analytically general nearby curve.
Indeed, if it did,
\autoref{theorem:isomonodromic-deformation-CVHS} implies
$\prod_{\iota: \mathscr O_K \to \mathbb C} \rho_\iota$
has unitary monodromy,
and
\autoref{lemma:integral-and-unitary-implies-finite} implies its monodromy is
finite.
This contradicts our assumption that $\rho$ has infinite monodromy. \end{proof}
\subsection{}\label{subsubsection:geometric-origin-proof}
\begin{proof}[Proof of \autoref{corollary:geometric-local-systems}]
Let $g\geq 1$ be an integer and let $(C, x_1, \cdots, x_n)$ be an analytically
general hyperbolic $n$-pointed curve of genus $g$. Let $U\subset C\setminus\{x_1, \cdots, x_n\}$ be a dense Zariski-open subset. Let $f: Y\to U$ be a smooth proper morphism, $i\geq 0$ an integer, and suppose $\mathbb{V}$ is a complex local system on $C\setminus\{x_1, \cdots, x_n\}$ with infinite monodromy such that $\mathbb{V}|_U$ is a summand of $R^if_*\mathbb{C}$. Then we wish to show that $\dim_{\mathbb{C}}\mathbb{V}\geq 2\sqrt{g+1}$.
It suffices to show that $\mathbb{V}$ satisfies the hypotheses of
\autoref{theorem:very-general-VHS}. The existence of an
$\mathscr{O}_K$-structure follows from the fact that $R^if_*\mathbb{C}$ has a
$\mathbb{Z}$-structure.
Let $\mathbb{W}$ be the corresponding $\mathscr{O}_K$-local system. All that remains is to verify that for each embedding
$\iota:\mathscr{O}_K\hookrightarrow \mathbb{C}$, the corresponding complex local
system $\mathbb{W}_\iota:=\mathbb{W}\otimes_{\mathscr{O}_K, \iota} \mathbb{C}$
underlies a polarizable complex variation of Hodge structure. But each such
embedding yields a summand $\mathbb{W}_\iota|_U$ of $R^if_*\mathbb{C}$,
Galois-conjugate to the original embedding $\mathbb{W}|_U\subset
\mathbb{V}|_U\subset R^if_*\mathbb{C}.$ Any such summand underlies a polarizable
complex variation of Hodge structure, by \autoref{prop:basic-facts}(2). Now
$\mathbb{W}_\iota|_U$ extends from $U$ to $C\setminus \{x_1, \cdots, x_n\}$ and
so the same is true for the corresponding complex polarizable variation of Hodge
structure by \cite[Corollary 4.11]{schmid1973variation}, which completes the proof.
\end{proof}
\subsection{}
\label{subsubsection:abelian-scheme-proof}
\begin{proof}[Proof of \autoref{corollary:abelian-schemes}]
The statement for relative curves $h: S \to C \setminus \{x_1, \ldots, x_n\}$ reduces to the statement for abelian varieties
upon taking the relative Jacobian $\on{Pic}^0_h$, as the Torelli theorem implies that
$h$ is isotrivial if $\on{Pic}^0_h$ is.
Hence, it suffices to show that any abelian scheme $f: A \to C \setminus\{x_1,
\ldots, x_n\}$ of relative dimension $r < \sqrt{g+1}$ is isotrivial.
If $r < \sqrt{g+1}$ then $\rk R^1 f_* \mathbb C = 2r < 2 \sqrt{g+1}$, so $R^1 f_* \mathbb C$ has
finite monodromy by \autoref{corollary:geometric-local-systems}.
To show $f$ is isotrivial, it is enough to show $f$ is trivial after a finite base change.
After passing to a finite \'etale cover of
$C \setminus\{x_1,\ldots, x_n\}$, we may therefore assume
$R^1 f_* \mathbb C$ has trivial monodromy.
In this case, the theorem of the fixed part
\cite[Corollaire 4.1.2]{deligne:hodge-ii}
implies that the first part of the Hodge filtration $F^1(R^1 f_* \mathbb
C) \subset R^1 f_* \mathbb C$ is also a trivial local system.
Since $A$ is a quotient of
$F^1(R^1 f_* \mathbb C)^\vee$ by the constant local system
$R^{2r-1} f_* \mathbb Z \simeq \wedge^{2r-1} R^1 f_* \mathbb Z$
(on fibers this local system is identified with the first homology of the
fibers of $f$ via Poincar\'e duality),
$f$ is also trivial.
\end{proof}
\subsection{}\label{subsubsection:non-density-proof}
We next prove that local systems of geometric origin with low rank are not
Zariski dense in the character variety of an analytically very general genus $g$
curve.
Recall that we use $\mathscr{M}_{B,r}(X)$ for the character variety
parametrizing conjugacy classes of semisimple representations $$\rho:
\pi_1(X)\to \on{GL}_r(\mathbb{C}).$$
\begin{proof}[Proof of \autoref{corollary:non-density-of-geometric-local-systems}]
Let $(C, x_1, \cdots, x_n)$ be an analytically very general hyperbolic
$n$-pointed curve of genus $g$, and fix an integer $r$ with $1<r<2\sqrt{g+1}$.
Note that we must have $g \neq 0$ because there are no integers $r$ with $1 < r
< 2 \sqrt{0 + 1}$.
By \autoref{corollary:geometric-local-systems}, the points of
$\mathscr{M}_{B,r}(C\setminus\{x_1, \cdots, x_n\})$ of geometric origin
correspond to representations $\rho$ with finite image whenever $r < 2
\sqrt{g+1}$.
We wish to show these finite image representations are not Zariski dense in
$\mathscr{M}_{B,r}(C\setminus\{x_1, \cdots, x_n\})$.
This follows from \autoref{lemma:finite-image-not-dense}
upon choosing a topological identification $C\setminus\{x_1, \cdots, x_n\} \simeq
\Sigma_{g,n}$.
\end{proof}
\begin{lemma}
\label{lemma:finite-image-not-dense}
Let $\Sigma_{g,n}$ be a topological $n$-punctured genus $g$ surface with
basepoint $p \in \Sigma_{g,n}$. For $r > 1$, the set of representations $\rho: \pi_1(\Sigma_{g,n},p) \to
\on{GL}_r(\mathbb C)$ with finite image is not Zariski dense in the character
variety $\mathscr{M}_{B,r}(\Sigma_{g,n}, p)$.
\end{lemma}
\begin{proof}
It suffices to
prove non-density of representations with finite image in the \emph{framed}
character variety
$$\mathscr{M}_{B,r}^\square(\Sigma_{g,n}):=\on{Hom}(\pi_1(\Sigma_{g,n},p),
\on{GL}_r(\mathbb{C})).$$
For $s\in \mathbb{Z}_{>0}$, let $V_s\subset
\mathscr{M}_{B,r}^\square(\Sigma_{g,n})$ be the closed subvariety
consisting of those representations $\rho$ such that $$[\rho(x)^s,
\rho(y)^s]=1$$ for all $x,y\in \pi_1(\Sigma_{g,n},p)$.
By Jordan's theorem (\cite[p.~91]{jordan1878memoire} or \cite[Theorem
36.13]{curtis1966representation}) on finite subgroups of $\on{GL}_r(\mathbb{C})$,
there is some constant $m_r$ such that for each finite subgroup $G\subset
\on{GL}_r(\mathbb{C})$, there exists an abelian normal subgroup $H\subset G$ of index
at most $m_r$. Hence $V_{(m_r!)}$ contains all representations with finite
image. Thus it suffices to show $V_s$ is not all of
$\mathscr{M}_{B,r}^\square(\Sigma_{g,n})$ for any $s>0$.
We now write $$\pi_1(\Sigma_{g,n},p)=\left\langle a_1, \cdots, a_g, b_1,\cdots,
b_g, \gamma_1, \cdots, \gamma_n \mid \prod_{i=1}^g [a_i,b_i]\cdot \prod_{j=1}^n
\gamma_j\right\rangle$$ for the standard presentation of the fundamental group.
If $g\geq 2$, let $$\rho: \pi_1(\Sigma_{g,n},p)\to \on{GL}_2(\mathbb{C})$$ be the representation defined by $$a_1 \mapsto \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
$$a_2 \mapsto \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$$
and sending all other generators to the identity. If $g=1$, hyperbolicity of
$\Sigma_{g,n}$ implies $n>0$. Then $\pi_1(\Sigma_{g,n},p)$ is free on $n+1$ generators, and we may set $\rho$ to be a representation sending two of the generators to the matrices above, and all other generators to the identity.
Then $\rho$ does not lie in $V_s$ for any $s$ because $\rho(a_1^s)$ does not commute with
$\rho(a_2^s)$ for any integer $s > 0$. This completes the case $r = 2$. For the case that $r > 2$, the representation $\rho\oplus \on{triv}^{\oplus r-2}$ lies outside of $V_s$ for all $s$, for the same reason.
\end{proof}
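The non-commutation claim at the end of the proof amounts to the matrix identity $A^s B^s \neq B^s A^s$ for the two unipotent generators; a quick numerical check (illustrative only, not part of the proof):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])  # rho(a_1)
B = np.array([[1, 0], [1, 1]])  # rho(a_2)

def commutator_is_trivial(s):
    # Check whether [A^s, B^s] = 1. Note A^s = [[1, s], [0, 1]] and
    # B^s = [[1, 0], [s, 1]], which fail to commute for every s > 0.
    As = np.linalg.matrix_power(A, s)
    Bs = np.linalg.matrix_power(B, s)
    return np.array_equal(As @ Bs, Bs @ As)

results = [commutator_is_trivial(s) for s in range(1, 50)]
print(any(results))  # False: rho lies outside V_s for every s tested
```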
\subsection{An application to Hilbert modular varieties}
\label{subsection:hilbert-modular}
Let $K$ be a totally real number field of degree $h$ over $\mathbb Q$, $\mathscr O_K$ its ring of integers, and
use $\mathscr X_K$ to denote the Hilbert modular stack parameterizing
$h$-dimensional principally polarized abelian varieties $A$ with an injection
$\mathscr O_K \hookrightarrow \on{End}(A)$.
Using \autoref{corollary:abelian-schemes}, we find there are no nonconstant maps
from an analytically very general $n$-pointed curve of genus $g$
to the moduli stack of principally polarized abelian varieties of dimension $h$
whenever $h < \sqrt{g+1}$, or equivalently when $g \geq h^2$.
We now show that the analogous statement for Hilbert modular stacks holds
whenever $g \geq 1$.
In particular, this improved bound is independent of $h$.
We thank Alexander Petrov for pointing out the following application.
\begin{proposition}
\label{proposition:hilbert-modular}
Let $\mathscr X_K$ be the Hilbert modular stack associated to a totally
real field $K$.
Let $(C, x_1, \ldots, x_n)$ be an analytically very general hyperbolic $n$-pointed
curve of genus $g \geq 1$.
Any map $\phi: C\setminus \{x_1, \ldots, x_n\} \to \mathscr X_K$ is constant.
\end{proposition}
\begin{proof}
We first use the map $\phi$ to produce a rank $2$ integral local system
on $C \setminus \{x_1, \ldots, x_n\}$.
A map $\phi: C\setminus \{x_1, \ldots, x_n\} \to \mathscr X_K$ yields an
abelian scheme $f: A \to C\setminus \{x_1, \ldots, x_n\}$ with an $\mathscr
O_K$-action. In particular, $R^1 f_* \mathbb Z$ is a rank $2h$ local
system with an $\mathscr O_K$-action, underlying a polarizable variation of Hodge structure.
If $K$ has degree $h$, then this rank $2h$ local system may not be a
$\mathscr O_K$-local system, as its fibers may only be locally free, as opposed to free.
However, after passing to a finite extension $K'$ of $K$, such as the
Hilbert class field of $K$, any such rank $2$ locally free module
becomes a free module,
and so $R^1 f_* \mathbb Z
\otimes_{\mathscr O_K} \mathscr O_{K'}$ is an $\mathscr O_{K'}$-local system of rank $2$.
We now show the map $\phi$ must be constant.
By construction,
for any embedding $\mathscr O_{K'} \to \mathbb C$, the local system
$R^1 f_* \mathbb Z\otimes_{\mathscr O_K} \mathscr O_{K'} \otimes_{\mathscr
O_{K'}} \mathbb C$ underlies a $\mathbb{C}$-PVHS. Hence by \autoref{theorem:very-general-VHS}, $R^1 f_* \mathbb Z\otimes_{\mathscr O_K} \mathscr O_{K'}$ has finite monodromy, as
\begin{align*}
2 = \rk_{\mathscr O_{K'}}R^1 f_* \mathbb Z\otimes_{\mathscr O_K} \mathscr O_{K'} < 2 \sqrt{g+1}.
\end{align*}
Hence, the same holds for
$R^1 f_* \mathbb Z$.
This implies the map $\phi$ is constant using the theorem of the fixed part
\cite[Corollaire 4.1.2]{deligne:hodge-ii} (see the proof
of \autoref{corollary:abelian-schemes} for a similar argument).
\end{proof}
\section{Questions}\label{section:questions}
We conclude with some open questions related to our results.
\subsection{Bounds}
\begin{question}\label{question:bounds}
Is the bound of $2\sqrt{g+1}$ appearing in \autoref{cor:stable},
\autoref{theorem:very-general-VHS} and
\autoref{theorem:isomonodromic-deformation-CVHS} sharp? If not, can one
explicitly construct low rank geometric variations of Hodge structure with
infinite monodromy on a general curve or general $n$-pointed curve?
Do there exist counterexamples to the above results if one replaces $2
\sqrt{g+1}$ by a linear function of $g$?
\end{question}
We have no reason to believe the bound is sharp. The Kodaira-Parshin trick (as
used in \autoref{section:counterexample}, for example) is one source of
variations of Hodge structure on $\mathscr{M}_{g,n}$ of rank bounded in terms of
$g, n$, but it is not the only one. For example, the representations constructed
in \cite{koberda2016quotients} are cohomologically rigid and hence underlie
integral variations of Hodge structure by \cite[Theorem
1.1]{esnault2018cohomologically} and \cite[Theorem 3]{simpson1992higgs}.
Assuming Simpson's motivicity conjecture (\cite[Conjecture,
p.~9]{simpson1992higgs}) these constructions are geometric in nature, though this is not clear from the construction. Of course it would be extremely interesting to prove that these representations arise from algebraic geometry.
The representations constructed in \cite{koberda2016quotients} have rank
growing exponentially in $g$ \cite[Corollary 4.5]{koberda2016quotients}.
It is natural, given our results, to ask if one can use their methods to produce representations of smaller rank.
We also raise a related question about bounds on maps to the moduli space of
curves.
\begin{question}
\label{question:}
Fix an integer $g\geq 2$.
What is the smallest integer $h \geq 2$ for which the generic genus $g$ curve,
i.e., the generic fiber of
$\mathscr M_{g,1} \to \mathscr M_g$,
has a
non-constant map to
$\mathscr M_h$?
\end{question}
\begin{remark}
\label{remark:}
Since a map $C \to \mathscr{M}_h$ corresponds to a family of
smooth curves of genus $h$ over $C$, by considering the associated family of
Jacobians, it follows from \autoref{corollary:abelian-schemes} that $h \geq
\sqrt{g+1}$.
The Kodaira-Parshin trick
\autoref{proposition:kodaira-parshin}
does not a priori apply to construct maps from the generic curve to $\mathscr{M}_h$, because as written it produces disconnected covers.
But one can apply a variant where one takes a cover defined by a characteristic quotient of the fundamental group
to show there is some (fairly large) value of $h$ for which the generic genus
$g$ curve has a non-constant map to $\mathscr M_h$.
See \cite[Theorem
1.4]{mcmullen:from-dynamics-on-surfaces-to-rational-points-on-curves} for more
details.
\end{remark}
\subsection{Non-abelian Hodge loci}
Let $(C, x_1, \cdots, x_n)$ be an $n$-pointed curve, $\mathbb{V}$ a
$\mathbb{Z}$-local system on $C\setminus\{x_1, \cdots, x_n\}$ with
quasi-unipotent local monodromy around the $x_i$, and let $(\mathscr{E}, \nabla)$ be the associated flat vector bundle. We refer to the locus $H_{\mathbb{V}}$ in $\mathscr{T}_{g,n}$ where the corresponding isomonodromic deformation of $(\mathscr{E}, \nabla)$ underlies a polarizable variation of Hodge structure as a \emph{non-abelian Hodge locus}. By analogy to the famous result on algebraicity of Hodge loci of Cattani-Deligne-Kaplan \cite{cattani1995locus}, it is natural to ask:
\begin{question}[Compare to {\cite[Conjecture 12.3]{simpson62hodge}}] \label{question:non-abelian-Hodge}
Let $Z$ be an irreducible component of $H_{\mathbb{V}}$. Is the image of $Z$ in $\mathscr{M}_{g,n}$ algebraic?
\end{question}
This would follow if all $\mathbb{Z}$-local systems which underlie polarizable variations of Hodge structure arise from geometry, which is perhaps a folk conjecture (and is conjectured explicitly in \cite[Conjecture 12.4]{simpson62hodge}). Just as \cite{cattani1995locus} provides evidence for the Hodge conjecture, a positive answer to \autoref{question:non-abelian-Hodge} would provide evidence for this conjecture.
When we refer to an analytically very general curve we mean in the sense
of \autoref{definition:general}.
A positive answer to \autoref{question:non-abelian-Hodge} would allow us to replace this with the usual algebraic notion of a very general curve in \autoref{theorem:very-general-VHS}.
It seems plausible that one can make this replacement in \autoref{corollary:geometric-local-systems} without requiring input from \autoref{question:non-abelian-Hodge}, using the main result of \cite{cattani1995locus}.
\bibliographystyle{alpha}
Computational models for generating audio signals are a means of exploring and understanding our perception of sound. Natural sounds, defined here as everyday non-music, non-speech sounds, are an appealing medium with which to study perception since they exclude cognitive factors such as language and musical interpretation. McDermott \cite{mcdermott2011sound} used synthesis as a means to demonstrate that the human auditory system utilises time-averaged statistics of subband amplitudes to classify sound textures. In a similar vein, Turner \cite{turner2010statistical} constructed a synthesis model based on probabilistic latent variable analysis of those same subband amplitudes. One main advantage of a latent variable approach is the possibility that the uncovered latent behaviour may represent either \emph{i)} the primitive source that generated the signal, or \emph{ii)} the latent information that the human auditory system encodes when it calculates time-averaged statistics.
Latent variable analysis captures correlations across multiple dimensions by modelling the data's shared dependence on some unobserved (latent) variable or function. It is, by its very nature, ill-posed; we typically aim to simultaneously predict both the latent functions \emph{and} the mapping from this latent space to the observation data. As such, infinitely many potential solutions exist and we cannot guarantee that our prediction will encode the true sound source or our true perceptual representation.
The ill-posed nature of the problem necessitates the use of prior information. It is commonly suggested that nonnegativity, smoothness and sparsity form a suitable set of prior assumptions about real life signals. We argue that, even after imposing such constraints, a simple scalar mapping between the latent space and observation space is insufficient to capture all the complex behaviour that we observe in the subband amplitude envelopes of an audio signal. We construct a latent force model (LFM) \cite{alvarez2009latent} to incorporate prior knowledge about how amplitude envelopes behave via a discrete differential equation that models exponential decay \cite{wilkinson2017latent}.
Utilising the state space formulation \cite{hartikainen2011sequential}, we augment the standard LFM by explicitly including in the current state information from many discrete time steps. This allows us to capture phenomena such as feedback, damping and to some extent reverberation. In this probabilistic approach the latent functions are modelled with Gaussian processes, which provide uncertainty information about our predictions whilst also guaranteeing that the latent functions are smooth. Nonnegativity is imposed via a nonlinear transformation.
Evaluating latent representations is not straightforward. Objective measures of our ability to reconstruct the observation data do not inform us about the interpretability of our predictions. We hypothesise that if the latent functions capture physically or perceptually meaningful information, then a generative model based on synthesising latent functions that are statistically similar should generate realistic data when projected back to the observation space.
In this paper we introduce a generative model, applicable to a wide range of natural sounds, based on an extended LFM\footnote{Matlab source code and example stimuli can be found at \protect\url{c4dm.eecs.qmul.ac.uk/audioengineering/natural_sound_generation}} (Section \ref{sec:LFMaudio}). Comparative models based on variants of nonnegative matrix factorisation (NMF) are implemented to perform evaluation-by-synthesis, which shows how listeners often perceive the LFM approach to generate more realistic sounds even in cases where NMF is more efficient from a reconstruction error perspective (Section \ref{sec:results}).
\vspace{-0.2cm}
\section{Background}
\label{sec:background}
\vspace{-0.1cm}
The perceptual similarity of two sounds is not determined by direct comparison of their waveforms, but rather by comparison of their statistics \cite{mcdermott2011sound}. Hence it is argued that prior information for natural sounds should take a statistical form \cite{turner2010statistical}. We argue in Section \ref{sec:LFMaudio} that these statistical representations can be improved through the inclusion of assumptions about the physical behaviour of sound, resulting in a hybrid statistical-physical prior.
In order to analyse sound statistics, both McDermott \cite{mcdermott2011sound} and Turner \cite{turner2010statistical} utilise the subband filtering approach to time-frequency analysis, in which the signal is split into different frequency channels by a bank of band-pass filters. The time-frequency representation is then formed by tracking the amplitude envelopes of each subband. McDermott generates sound textures by designing an objective function which allows the statistics of a synthetic signal to be matched to that of a target signal. Turner utilises probabilistic time-frequency analysis combined with probabilistic latent variable analysis to represent similar features. Turner's approach has the advantage that once the parameters have been optimised, new amplitude envelopes can be generated by drawing samples from the latent distribution. It should be noted that samples drawn from the model will not exhibit the fast attack and slow decay we observe in audio amplitude envelopes, since the model is temporally symmetric.
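The subband front end itself is simple to sketch (a hedged Python illustration; the filter design, channel centre frequencies and toy signal below are our own choices, not those of either cited model):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def subband_envelopes(x, fs, centres_hz, bandwidth=0.3):
    """Band-pass x into channels and return each channel's amplitude envelope.

    centres_hz : channel centre frequencies in Hz.
    bandwidth  : fractional bandwidth of each band-pass filter.
    """
    envs = []
    for fc in centres_hz:
        lo, hi = fc * (1 - bandwidth / 2), fc * (1 + bandwidth / 2)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)          # zero-phase band-pass filtering
        envs.append(np.abs(hilbert(band)))  # Hilbert-transform envelope
    return np.array(envs)

# Toy signal: a decaying 1 kHz tone in low-level noise, sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.exp(-3 * t) * np.sin(2 * np.pi * 1000 * t) + 0.01 * rng.standard_normal(fs)
E = subband_envelopes(x, fs, centres_hz=[500, 1000, 2000])
# The 1 kHz channel should carry most of the envelope energy.
print(int(np.argmax(E.mean(axis=1))))  # 1
```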
NMF is a ubiquitous technique for decomposing time-frequency audio data \cite{fevotte2009nonnegative,smaragdis2003non,virtanen2007monaural}; however, a common criticism is its inability to take into account temporal information. The most common approach to dealing with this issue is to impose smoothness on the latent functions, the idea being that smoothness is a proxy for local correlation across neighbouring time steps. Temporal NMF (tNMF) imposes smoothness by penalising latent functions which change abruptly \cite{virtanen2007monaural} or by placing a Gaussian process prior over them \cite{turner2014time}. An alternative approach is to use a hidden Markov model to capture the changes in an audio signal's spectral make up over time \cite{mysore2010non}. High resolution NMF (HR-NMF) models the temporal evolution of a sound by utilising the assumption that natural signals are a sum of exponentially modulated sinusoids, with each frequency channel being assigned its own decay parameter estimated using expectation-maximisation \cite{badeau2011gaussian}.
\vspace{-0.2cm}
\subsection{Latent Force Models}
\label{ssec:LFMs}
To incorporate our prior assumptions into data-driven analysis we use latent force models (LFMs) \cite{alvarez2009latent}, a probabilistic modelling approach which assumes $M$ observed output functions $x_m$ are produced by some $R<M$ unobserved (latent) functions $u_r$ being passed through a set of differential equations. If the chosen set of differential equations represents some physical behaviour present in the system we are modelling, even if only in a simplistic manner, then such a technique can improve our ability to learn from data \cite{alvarez2013linear}. This is achieved by placing a Gaussian process (GP) prior \cite{rasmussen2006gaussian} over the $R$ latent functions, calculating the cross-covariances (which involves solving the ODEs), and performing regression.
It was shown by Hartikainen and S\"arkk\"a \cite{hartikainen2011sequential} that, under certain conditions, an equivalent regression task can be performed by reformulating the model (i.e. the ODE representing our physical knowledge of the system) into state space (SS) form, reformulating the GP as a stochastic differential equation (SDE), and then combining them into a joint SS model:
\begin{equation} \label{joint}
\frac{{d\bm{{x}}(t)}}{dt}=\bm{f}(\bm{{x}}(t))+Lw(t) \; .
\end{equation}
Here $\bm{{x}}(t)$ represents the state vector containing $\{x_m(t)\}^M_{m=1}$ \emph{and} the states of the SDE $\{u_r(t),\dot{u}_r(t),...\}^R_{r=1}$, $w(t)$ is a white noise process, $\bm{f}$ is the transition function which is dependent on $\theta$, the set of all ODE parameters and GP / SDE hyperparameters, and $L$ is a vector determining which states are driven by the white noise. The model's discrete form is
\begin{equation} \label{discModel}
\bm{{x}}[t_k]=\bm{\hat{f}}(\bm{{x}}[t_{k-1}],\Delta t_{k})+\bm{q}[t_{k-1}] \; ,
\end{equation}
where $\Delta t_{k}$ is the time step size, $\bm{\hat{f}}$ is the discretised transition function and $\bm{q}[t_{k-1}]\sim N(\bm{{0}},Q[\Delta t_{k}])$ is the noise term with process noise matrix $Q$. The corresponding output measurement model is
\begin{equation} \label{measurement}
\bm{y}[t_k] = H\bm{x}[t_k] + \epsilon[t_k], \hspace{0.3cm} \epsilon[t_k] \sim N(0,\sigma^2) \; ,
\end{equation}
where measurement matrix $H$ simply selects the outputs from the joint model.
The posterior process $\bm{x}[t_k]$, i.e. the solution to (\ref{discModel}), is a GP in the linear case such that the filtering distribution $p(\bm{x}[t_{k}]\hspace{0.1cm}|\hspace{0.1cm}\bm{y}[t_1],...,\bm{y}[t_{k}])$ is Gaussian. Hence state estimation can be performed via Kalman filtering and smoothing \cite{sarkka2013bayesian}.
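As a minimal illustration of the filtering recursion just described, the sketch below runs a scalar Kalman filter on the linear special case of (\ref{discModel})--(\ref{measurement}); all parameter values (transition coefficient, noise variances) are invented for the example rather than taken from the paper.

```python
import math
import random

# Scalar illustration of the discrete model (2)-(3):
#   x[t_k] = a * x[t_{k-1}] + q[t_{k-1}],  q ~ N(0, Q)   (linear f_hat)
#   y[t_k] = h * x[t_k] + eps,             eps ~ N(0, R)
# In this linear case the filtering distribution is exactly Gaussian.

def kalman_filter(ys, a=0.95, q_var=0.1, h=1.0, r_var=0.5, m0=0.0, p0=1.0):
    """Return the filtered means and variances of p(x[t_k] | y[t_1..t_k])."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        m_pred = a * m                        # prediction step
        p_pred = a * a * p + q_var
        s = h * h * p_pred + r_var            # innovation variance
        k_gain = p_pred * h / s               # Kalman gain
        m = m_pred + k_gain * (y - h * m_pred)  # update step
        p = (1.0 - k_gain * h) * p_pred
        means.append(m)
        variances.append(p)
    return means, variances

# Simulate noisy observations of a slowly decaying latent state
random.seed(0)
x_true, ys = 5.0, []
for _ in range(200):
    x_true = 0.95 * x_true + random.gauss(0.0, math.sqrt(0.1))
    ys.append(x_true + random.gauss(0.0, math.sqrt(0.5)))

means, variances = kalman_filter(ys)
```

Note how the filtered variance settles to a steady state independent of the data, a property specific to the linear-Gaussian case.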
However, if $\bm{f}$ is a nonlinear function, as is the case if we wish to impose nonnegativity on the latent functions, then calculation of the predictive and filtering distributions involves integrating equations which are a combination of Gaussian processes and nonlinear functions. We may approximate the solutions to these integrals numerically using Gaussian cubature rules. This approach is known as the cubature Kalman filter (CKF) \cite{hartikainen2012state}.
The Kalman update steps provide us with the means to calculate the marginal data likelihood $p(\bm{y}[t_{1:T}]\hspace{0.1cm}|\hspace{0.1cm}\theta)$. Model parameters $\theta$ can therefore be estimated from the data by maximising this likelihood using gradient-based methods.
\vspace{-0.2cm}
\section{Latent Force Models for Audio Signals}
\label{sec:LFMaudio}
\vspace{-0.2cm}
To obtain amplitude data in the desired form we pass an audio signal through an equivalent rectangular bandwidth (ERB) filter bank. We then use Gaussian process probabilistic amplitude demodulation (GPPAD) \cite{turner2011demodulation} to calculate the subband envelopes and their corresponding carrier signals. GPPAD allows for control over demodulation time-scales via GP lengthscale hyperparameters. We are concerned with slowly varying behaviour correlated across the frequency spectrum, in accordance with the observation that the human auditory system summarises sound statistics over time \cite{mcdermott2011sound}. Fast-varying behaviour is relegated to the carrier signal and will be modelled as independent filtered noise.
The number of channels in the filter bank and the demodulation lengthscales must be set manually during this first analysis stage. Keeping the number of total model parameters small is a priority (see Section \ref{ssec:AmpEnvLFM}), so we typically set the number of filters to 16, and the lengthscales such that we capture amplitude behaviour occurring over durations of 10ms and slower.
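The analysis stage can be caricatured in a few lines. The sketch below extracts a slowly varying envelope from one synthetic subband by rectifying and smoothing; this is a crude stand-in for the ERB filter bank and GPPAD used here, with the sample rate, carrier frequency and window size all chosen arbitrarily for illustration.

```python
import math

# Crude stand-in for subband envelope extraction: rectify one synthetic
# subband and smooth with a boxcar window (GPPAD, used in the text, is a
# probabilistic method; this is only an illustrative sketch).

def moving_average(xs, width):
    """Boxcar smoother; width is the window length in samples."""
    half = width // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

fs = 8000                                  # sample rate in Hz (assumed)
t = [i / fs for i in range(fs)]            # 1 second of signal
# Synthetic subband: 500 Hz carrier under an exponentially decaying envelope
true_env = [math.exp(-4.0 * ti) for ti in t]
subband = [e * math.sin(2 * math.pi * 500 * ti) for e, ti in zip(true_env, t)]

rectified = [abs(s) for s in subband]
envelope = moving_average(rectified, 161)  # ~20 ms window: keep slow variation
# The fast-varying residual becomes the carrier (modelled later as noise)
carrier = [s / max(e, 1e-6) for s, e in zip(subband, envelope)]
```

The recovered envelope is proportional to the true one (scaled by the mean of the rectified carrier, $2/\pi$ for a sinusoid) and varies only on time scales longer than the window.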
\vspace{-0.2cm}
\subsection{Augmented Latent Force Models for Amplitude Envelopes}
\label{ssec:AmpEnvLFM}
We use a first order differential equation to model the exponential decay that occurs in audio amplitude envelopes \cite{wilkinson2017latent}. However this overly simplistic model does not take into account varying decay behaviour due to internal damping, or feedback and other nonstationary effects which occur as a sound is generated and propagates towards a listener.
Since we require nonnegativity of our latent functions, which is imposed via nonlinear transformation, we use the nonlinear LFM whose general form is (\ref{discModel}) with nonlinear $\bm{\hat{f}}$. For a first order ODE its discrete form is
\begin{equation} \label{eq:LFM}
\dot{x}_m[t_k]=-D_mx_m[t_k]+\sum^R_{r=1} S_{mr}g(u_r[t_k]) \; ,
\end{equation}
for $m=1,...,M$ where $M$ is the number of frequency channels. $D_m$ and $S_{mr}$ are the damping and sensitivity parameters respectively and $g(u)=\log(1+e^u)$ is the positivity-enforcing nonlinear transformation. The model progresses forwards in time with step size $\Delta_t$ using Euler's method: ${x}_m[t_{k+1}]={x}_m[t_k] + \Delta_t\dot{x}_m[t_k]$.
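A minimal simulation of (\ref{eq:LFM}) under Euler integration behaves as expected: the envelopes rise while the (softplus-transformed) latent force is active and decay exponentially afterwards. Two channels, one latent force, and all parameter values below are invented for illustration, not fitted.

```python
import math

# Euler simulation of the first-order LFM (4): M = 2 channels, R = 1 force.

def g(u):
    return math.log(1.0 + math.exp(u))   # softplus nonlinearity

dt = 0.001
D = [8.0, 3.0]          # damping per channel (illustrative)
S = [1.0, 0.5]          # sensitivity of each channel to the single force

def u(t):
    # Hypothetical latent force: a short pulse around t = 0.05 s;
    # softplus(-10) is effectively zero, so the force is sparse.
    return 4.0 if 0.05 <= t < 0.07 else -10.0

x = [0.0, 0.0]
trace = []
for k in range(1000):
    t = k * dt
    xdot = [-D[m] * x[m] + S[m] * g(u(t)) for m in range(2)]
    x = [x[m] + dt * xdot[m] for m in range(2)]
    trace.append(list(x))
```

Channel 0, with the larger damping $D_m$, rises and decays faster than channel 1, mimicking the frequency-dependent decay of struck objects.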
To account for the complex behaviour mentioned above that occurs in real audio signals, we extend this discrete model such that predictions at the current time step $t_k$ can be influenced explicitly by predictions from multiple time steps in the past. As in \cite{wilkinson2017latent} we augment the model by adding a parameter $\gamma_m$ which controls the ``linearity" of decay. Our final model becomes
\begin{equation} \label{myModel}
\dot{x}_m[t_k]=-D_mx^{\gamma_m}_m[t_k]+\sum_{p=1}^PB_{mp}x_m[t_{k-p}]+\sum_{q=0}^P\sum^R_{r=1} S_{mrq}g(u_r[t_{k-q}]) \; .
\end{equation}
We restrict $\gamma_m \in [0.5,1]$, and for sounding objects with strong internal damping we expect $\gamma_m$ to be small, representing an almost linear decay. Parameters $B_{mp}$ are \emph{feedback} coefficients which determine how the current output is affected by output behaviour from $p$ time steps in the past. $S_{mrq}$ are \emph{lag} parameters which determine how sensitive the current output is to input $r$ from $q$ time steps ago.
The lag term is important since modes of vibration in a sounding object tend to be activated at slightly different times due to deformations in the object as it vibrates, and due to the interaction of multiple modes of vibration. It can also capture effects due to reverberation. The feedback terms allow for long and varied decay behaviour that cannot be described by simple exponential decay.
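The augmented model (\ref{myModel}) can be simulated the same way by keeping short histories of past outputs and inputs for the feedback and lag sums. The sketch below uses one channel, one latent force, and illustrative coefficient values; it is not a fitted model.

```python
import math

# Simulation of the augmented model (5): one channel, one latent force,
# feedback over P past output steps and lagged input terms.

def g(u):
    return math.log(1.0 + math.exp(u))

dt, P = 0.001, 3
D, gamma = 6.0, 0.8          # damping and decay-linearity gamma_m in [0.5, 1]
B = [0.5, 0.0, 0.2]          # feedback coefficients B_p, p = 1..P (sparsified)
S = [1.0, 0.4, 0.0, 0.1]     # lag sensitivities S_q, q = 0..P

def u(t):                    # hypothetical latent pulse
    return 3.0 if 0.02 <= t < 0.04 else -10.0

x_hist = [0.0] * (P + 1)     # x[t_k], x[t_{k-1}], ..., x[t_{k-P}]
u_hist = [-10.0] * (P + 1)   # u[t_k], u[t_{k-1}], ..., u[t_{k-P}]
trace = []
for k in range(600):
    t = k * dt
    u_hist = [u(t)] + u_hist[:-1]
    xk = x_hist[0]
    xdot = (-D * xk ** gamma
            + sum(B[p - 1] * x_hist[p] for p in range(1, P + 1))
            + sum(S[q] * g(u_hist[q]) for q in range(P + 1)))
    x_next = max(xk + dt * xdot, 0.0)   # keep the envelope nonnegative
    x_hist = [x_next] + x_hist[:-1]
    trace.append(x_next)
```

The history shifts performed each step correspond to the ``passing down'' of states into the augmented state vector described next.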
The challenge is to incorporate (\ref{myModel}) into our filtering procedure. We do this by augmenting our state vector $\bm{{x}}[t_k]$ and transition model
\begin{equation} \label{eq:transitionModel}
\bm{\hat{f}}(\bm{{x}}[t_{k-1}],\Delta t_{k}) = \bm{{x}}[t_{k-1}] + \Delta t_{k}\,\bm{\dot{x}}[t_{k-1}]
\end{equation}
with new rows corresponding to the delayed terms. Fig. \ref{fig:lfm_diagram} shows how after each time step the current states $X[t_k]=\{x_m[t_k]\}_{m=1}^M, U[t_k]=\{u_r[t_k]\}_{r=1}^R$ are ``passed down'' such that at the next time step they are in the locations corresponding to feedback and lag terms. When performing the Kalman filter prediction step, augmented states are included since they influence predictions for the current state, however the predictions for these augmented entries are simply exact copies from the previous time step.
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{LFM_diagram_tex_2}
\vspace{-0.3cm}
\caption{The augmented LFM stores terms from previous time steps in the state vector. Blue represents output predictions $X$ (amplitudes), green represents latent predictions $U$. Each step, predictions pass down to feedback and lag state locations. The entire state is used to predict the next step's outputs and latents via Kalman filtering.}
\label{fig:lfm_diagram}
\vspace{-0.3cm}
\end{figure}
Fig. \ref{fig:lfmMetal} shows the latent prediction for a metal impact sound with one latent force, $R=1$. The mean of the distribution is the minimum least squares error estimate, so we pass it through discrete model (\ref{myModel}) to reconstruct the amplitude envelopes. Despite the single latent force, we observe that some of the complex behaviour has been learnt. Additionally, the latent force is both smooth and sparse, and the reconstructed envelopes have a slow decay despite this sparsity.
\vspace{-0.2cm}
\subsection{Generating Novel Instances of Natural Sounds}
\label{ssec:genModel}
A significant benefit of probabilistic approaches such as LFM or tNMF is that, as well as providing us with uncertainty information about our predictions, they provide the means to sample new latent functions from the learnt distribution. By passing these new functions through the model we can generate amplitude envelopes. These envelopes modulate carrier signals produced using a sinusoids-plus-noise approach based on analysis of the original carriers. The subbands are then summed to create a new synthetic audio signal distinct from the original but with similar characteristics.
Sampling from the prior of the learnt distribution generates functions with appropriate smoothness and magnitude, however the desired energy sparsity is not guaranteed. Latent functions are modelled independently, but in practice they tend to co-occur and are activated in similar regions of the signal. We use GPPAD again to demodulate our latent functions with a slowly varying envelope, then fit a GP with a squared exponential covariance function to this envelope \cite{rasmussen2006gaussian}. We sample from this high-level envelope and use it to modulate our newly generated latent functions; the result of this product is latent behaviour with sparse energy, as demonstrated in Fig. \ref{fig:genModel}(d).
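The sampling step can be sketched as follows. Smoothed white noise stands in for a draw from the learnt GP, and a hand-built two-burst function stands in for a sample of the high-level envelope; both substitutions are simplifications of the probabilistic machinery described above.

```python
import math
import random

# Generate a new latent function: a smooth random draw modulated by a
# slowly varying, sparse high-level envelope (all shapes illustrative).

random.seed(1)
n = 2000

def smooth(xs, width):
    half = width // 2
    return [sum(xs[max(0, i - half):min(len(xs), i + half + 1)])
            / (min(len(xs), i + half + 1) - max(0, i - half))
            for i in range(len(xs))]

raw = [random.gauss(0.0, 1.0) for _ in range(n)]
latent_sample = smooth(raw, 101)          # smooth "GP-like" draw

def modulator(i):
    # Sparse high-level envelope: energy concentrated in two bursts
    return (math.exp(-((i - 500) / 120.0) ** 2)
            + 0.6 * math.exp(-((i - 1400) / 80.0) ** 2))

new_latent = [modulator(i) * s for i, s in enumerate(latent_sample)]
```

The product inherits smoothness from the latent draw and sparsity from the modulator, which is the combination of properties Fig. \ref{fig:genModel}(d) illustrates.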
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{lfm_metal}
\vspace{-0.3cm}
\caption{LFM applied to a metal impact sound, with mean and 95\% confidence of the latent distribution shown. The mean is passed through the model (\ref{myModel}) to reconstruct the envelopes. Complex behaviour is maintained despite using a single force.}
\label{fig:lfmMetal}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.2cm}
\subsection{Optimisation Settings}
\label{ssec:paramOpt}
The set of model parameters $\{D_{m},B_{mp},S_{mrq},\gamma_m,\lambda_r\}$, with GP lengthscales $\lambda_r$, becomes large as $R$, $P$ increase. To alleviate issues that occur when our parameter space becomes large we sparsify the feedback and sensitivity parameters. For example, if $P=10$, we may manually fix $B_{mp}$ to zero for $p\in\{3,4,6,7,9\}$ such that only half the parameters are included in the optimisation procedure.
Reliability of the optimisation procedure suffers as the number of parameters increases, so in practice not all $M$ frequency channels are optimised together. We select the $6$ envelopes contributing the most energy and train the model on the observations from only these channels. The remaining channels are then appended on and optimised whilst keeping the already-trained parameters fixed. This improves reliability but prioritises envelopes of high energy. We also skip prediction steps for periods of the signal that are of very low amplitude, which speeds up the filtering step. Despite these adjustments, optimisation still takes up to 3 days for a 2 second sound sample.
\section{Evaluation}
\label{sec:results}
To evaluate our method we collated a set of 20 audio recordings, selected as being representative of everyday natural sounds\footnote{From \protect{\url{freesound.org}} and from the Natural Sound Stimulus set: \protect\url{mcdermottlab.mit.edu/svnh/Natural-Sound/Stimuli.html}}. Music and speech sounds were not included, nor were sounds with significant frequency modulation, since our model does not capture this behaviour.
\vspace{-0.2cm}
\subsection{Reconstruction Error of Original Sound}
\label{ssec:reconError}
We analyse our ability to reconstruct the original data by projecting the latent representation back to the output space. For the LFM this means passing the mean of the learnt distribution through model (\ref{myModel}). Fig. \ref{fig:objRes} shows reconstruction RMS error and cosine distance of LFM and tNMF relative to NMF for the 20 recordings. The smoothness constraint enforced by placing a GP prior over the latent functions negatively impacts the reconstruction. This is demonstrated by the fact that tNMF performs poorly from an RMS error perspective. Despite this, the LFM has much descriptive power, and is sometimes capable of achieving a lower RMS error than the unconstrained NMF. Interestingly however, tNMF consistently outperforms the other two models based on cosine distance.
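The two metrics in Fig. \ref{fig:objRes} can be stated compactly. The toy example below (invented data, single envelope pair rather than all subbands) shows why they can disagree: a reconstruction with the right shape but wrong scale has essentially zero cosine distance yet nonzero RMS error.

```python
import math

# The two reconstruction metrics, applied to a toy envelope pair.

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

target = [math.exp(-0.01 * k) for k in range(500)]  # decaying envelope
recon = [1.05 * v for v in target]                  # right shape, wrong scale
```

Here `cosine_distance(target, recon)` is zero to machine precision while `rms_error(target, recon)` is not, which is one way tNMF can win on cosine distance while losing on RMS error.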
\begin{figure}[t]
\centering
\includegraphics[width=11.25cm]{gen_model_story}
\vspace{-0.3cm}
\caption{LFM generative model with 3 latent forces applied to an applause sound. The high-level modulator (black line in (b)) is calculated by demodulating the latent forces.}
\label{fig:genModel}
\vspace{-0.4cm}
\end{figure}
\subsection{Listening Test for Novel Sounds}
\label{ssec:listTest}
Objective results suggest that smoothness constraints harm reconstruction of the original signal. However, our aim is to learn realistic latent representations that will be the foundation of a generative model. To test their suitability, we designed an experiment to compare generative models based on LFM, NMF and tNMF. The approach outlined in Section \ref{ssec:genModel} was used for all model types. Since NMF is non-probabilistic, it does not provide an immediate way in which to sample new data, therefore GPs were fit to the latent functions after analysis.
Our experiment followed a multi-stimulus subjective quality rating paradigm\footnote{The test was run online and implemented with the Web Audio Evaluation Tool: \protect{\url{github.com/BrechtDeMan/WebAudioEvaluationTool}}}: 24 participants were shown 20 pages (order randomised), one per sound example, and asked to listen to the reference recording and then rate 7 generated sounds (2 from each model plus an anchor) based on their credibility as a new sound of the same type as the reference. Ratings were on a scale of 0 to 1, with a score of 1 representing a very realistic sound. Fig. \ref{fig:listTest} shows the mean realism ratings. Whilst variation was large between sound examples, LFM was generally rated as more realistic than the other methods.
\begin{figure}[!t]
\centering
\includegraphics[width=12cm]{objective_results}
\vspace{-0.3cm}
\caption{Reconstruction error of LFM and tNMF plotted relative to NMF. Crosses represent the median, error bars range from first to third quartile.}
\label{fig:objRes}
\vspace{-0.3cm}
\end{figure}
To test for significance we applied a generalised linear mixed effects model (GLMM), with beta regression, in which \emph{sound example} and \emph{participant} were treated as random effects. Table \ref{AllTable} shows that the mean realism rating was highest for LFM regardless of number of latent functions. The difference was significant at a 5\% level except for LFM vs. NMF with 3 latent functions. This suggests that for sounds requiring many latent functions to capture their behaviour, such as textural sounds, LFM may not offer a significant gain over purely statistical approaches. For example, the wind recording in Fig. \ref{fig:listTest}, a textural sound whose envelopes do not exhibit clear exponential decay, was captured best with tNMF.
\vspace{-0.4cm}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\begin{table}[!ht]
\fontsize{8}{8.2}\selectfont
\setlength\extrarowheight{3pt}
\begin{center}
\begin{tabular} { |p{2.0cm}||P{1.15cm} P{1.15cm}|P{1.15cm} P{1.15cm}|P{1.15cm} P{1.15cm}|P{1.15cm} P{1.15cm}| }
\hline
& \multicolumn{2}{c}{All sounds} & \multicolumn{2}{c}{1 latent fn.} & \multicolumn{2}{c}{2 latent fns.} & \multicolumn{2}{c|}{3 latent fns.} \\
\hline
& Estimate & p value & Estimate & p value & Estimate & p value & Estimate & p value\\
\hline
LFM vs. NMF & 0.3839 & \textbf{\textless1e-04} & 0.8248 & \textbf{\textless1e-05}& 0.3140 & \textbf{0.0448} & 0.2052 & 0.2867\\
LFM vs. tNMF & 0.4987 & \textbf{\textless1e-04} & 0.7976 & \textbf{\textless1e-05} & 0.5134 & \textbf{\textless0.001} & 0.3243 & \textbf{0.0285}\\
NMF vs. tNMF & 0.1148 & 0.3750 & -0.0272 & 0.9980 & 0.1994 & 0.3218 & 0.1191 & 0.7154\\
\hline
\end{tabular}
\vspace{0.1cm}
\caption{\label{AllTable}{GLMM with three-way comparison applied to listening test results. LFM received higher mean ratings, but confidence decreases with number of latent forces, indicated by increasing \emph{p values}. \emph{Estimate} can be interpreted as the ratio increase in realism rating when choosing model A over model B.}}
\end{center}
\end{table}
\vspace{-1.3cm}
\section{Conclusion}
\label{sec:conclusion}
\vspace{-0.2cm}
Our results show that in order to extend existing synthesis techniques to a larger class of sounds, it is important to utilise prior knowledge about how natural sound behaves. We achieved this by using latent force modelling to capture exponential decay, and augmented the standard approach to include feedback and delay across many discrete time steps. Doing so allowed us to make smooth, sparse latent predictions that we argue are more representative of the real source that generated a given sound.
This claim is supported by the fact that a generative model based on LFM was consistently rated as more realistic by listeners than alternatives based on variants of NMF, even in cases where it was not superior in reconstruction of the original signal. Resonance, decay and modulations in the subband amplitudes were captured well by our model, which is flexible enough to be applicable to sounds ranging from glass breaking to dogs barking.
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{listening_test}
\vspace{-0.3cm}
\caption{Mean realism ratings obtained from the listening test.}
\label{fig:listTest}
\vspace{-0.3cm}
\end{figure}
The nonlinear ODE representing our physical knowledge contains a large number of parameters, making our approach impractical in some cases, so a more compact model would be of huge benefit. Efficient nonlinear filtering methods or numerical ODE solvers would make the computation time more acceptable. Future work includes modelling amplitude behaviour occurring on multiple time scales at once; models for frequency modulation and other nonstationary effects would further expand the class of sounds to which such techniques can be applied.
\vspace{-0.3cm}
\bibliographystyle{splncs}
\section{Introduction: physics goals}
The International Linear Collider (ILC) is a mature proposal for the next major high energy accelerator after the Large Hadron Collider (LHC). The ILC Technical Design Report (TDR) \cite{Behnke:2013xla,Baer:2013cma,Phinney:2007gp,Behnke:2013lya} demonstrates that the accelerator project is technically feasible and construction ready. Moreover two detector designs detailed in the TDR, the Silicon Detector (SiD) and the International Large Detector (ILD), are prepared to enter the technical design phase. See figs. \ref{fig:ilc} and \ref{fig:sid} for renderings of the ILC and SiD.
\begin{figure*}[t]
\begin{center}
\framebox{\includegraphics[width=0.9\textwidth]{ilc.pdf}}
\caption{Schematic diagram (not to scale) of the main systems of the ILC assuming the nominal TDR design. Shown are the electron linac, the positron linac, the damping rings and two detectors at the collision point. Credit: ILC TDR \cite{Behnke:2013xla,Baer:2013cma,Phinney:2007gp,Behnke:2013lya}}
\label{fig:ilc}
\end{center}
\end{figure*}
The primary motivation for the ILC is the precision study of the Higgs boson. The Higgs phenomenon was independently proposed in 1964 by Higgs \cite{Higgs:1964pj} and Englert and Brout \cite{Englert:1964et} as a possible explanation for how the $W$ and $Z$ bosons obtain their mass. In fact the Higgs mechanism can explain how every particle obtains its mass. The scalar particle $H$ associated with the Higgs field, which mediates the Higgs mechanism, was jointly discovered in 2012 at the CERN LHC by the ATLAS \cite{Aad:2012tfa} and CMS \cite{Chatrchyan:2012ufa} Collaborations. In 2013, 49 years after their papers were published in the same journal, Higgs and Englert were awarded the Nobel Prize in Physics.
The ILC and its detectors are multipurpose, and address secondary physics motivations which elaborate their scientific merit. Undiscovered new particles and interactions postulated by various theoretical models can be discovered, constrained or ruled out with a full ILC program. In brief, the ILC goals outlined in the TDR, are as follows:
\begin{enumerate}
\item Measuring Higgs boson branching ratios and other properties with high precision
\item Searching for new particles, including dark matter and supersymmetric particles
\item Constraining new interactions by high precision measurements of the $W,Z$ and $t$ particles
\end{enumerate}
\noindent We quote at length from the executive summary in the TDR Volume 1 \cite{Behnke:2013xla}, which elaborates on these goals. First, precision study of the Higgs boson:
\begin{quote}
The initial program of the ILC for a 125 GeV Higgs boson will be centered at an energy of 250 GeV, which gives the peak cross section for the reaction $e^+ e^- \rightarrow Zh$. In this reaction, the identification of a $Z$ boson at the energy appropriate to recoil against the Higgs boson tags the presence of the Higgs boson. In this setting, it is possible to measure the rates for all decays of the Higgs boson - even decays to invisible or unusual final states — with high precision \dots
The study of the Higgs boson will continue, with additional essential elements, at higher energies. At 500 GeV, the full design energy of the ILC, measurement of the process $e^+ e^- \rightarrow \nu \bar{\nu} h$ will give the absolute normalization of the underlying Higgs coupling strengths, needed to determine the individual couplings to the percent level of accuracy. Raising the energy further allows the ILC experiments to make precise measurements of the Higgs boson coupling to top quarks and to determine the strength of the Higgs boson’s nonlinear self interaction \dots
\end{quote}
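\noindent As a numerical aside (not part of the TDR text), the recoil tag works because at fixed $\sqrt{s}$ the $Z$ energy alone determines the mass recoiling against it, via $M_{\mathrm{recoil}}^2 = s - 2\sqrt{s}\,E_Z + m_Z^2$. The sketch below checks this self-consistently at the first-stage energy:

```python
import math

# Toy check of the Z-recoil tag (all quantities in GeV; rounded PDG masses).
sqrt_s = 250.0            # first-stage centre-of-mass energy
m_Z, m_H = 91.2, 125.1    # used only to generate the "measured" Z energy

s = sqrt_s ** 2
# Two-body kinematics of e+e- -> ZH in the centre-of-mass frame
E_Z = (s + m_Z ** 2 - m_H ** 2) / (2.0 * sqrt_s)

# Recoil mass reconstructed from the Z energy alone
m_recoil = math.sqrt(s - 2.0 * sqrt_s * E_Z + m_Z ** 2)
```

The reconstructed recoil mass equals $m_H$ without any reference to the Higgs decay products, which is why even invisible decays can be tagged this way.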
\noindent Next, the search for new particles:
\begin{quote}
The ILC also will make important contributions to the search for new particles associated with the Higgs field, dark matter, and other questions of particle physics. For many such particles with only electroweak interactions, searches at the LHC will be limited by low rates relative to strong interaction induced processes, and by large backgrounds. The ILC will identify or exclude these particles unambiguously up to masses at least as high as the ILC beam energy \dots
\end{quote}
\noindent Finally, constraining new interactions:
\begin{quote}
The ILC will also constrain or discover new interactions at higher mass scales through pair production of quarks and leptons, $W$ and $Z$ bosons, and top quarks. Much of our detailed knowledge of the current Standard Model comes from the precision measurement of the properties of the $Z$ boson at $e^+ e^-$ colliders. The ILC will extend this level of precision to the $W$ boson and the top quark. The ILC will measure the mass of the top quark in a direct way that is not possible at hadron colliders, fixing a crucial input to particle physics calculations \dots
\end{quote}
The TDR outlines several reasons why the ILC is the preferred tool for these goals. First, \emph{cleanliness}. At the LHC a large number of background events contaminate each collision event, constraining the detector design to improve radiation hardness and forcing some detector elements away from the collision point. At the ILC the number of background events from spurious collisions is much lower, so that detectors are not as limited by radiation hardness constraints and may be placed very near the collision point. Second, \emph{democracy}. ILC signal cross sections are not much smaller than background cross sections since all backgrounds are electroweak in origin. At the LHC backgrounds from strong interaction processes are very high compared to signal processes. Third, \emph{calculability}. At a hadron collider the uncertainties associated with QCD calculations are large; in contrast, $e^+ e^-$ cross sections are calculated at very high precision, so that experimental deviations from the SM are more readily apparent. Finally, \emph{detail}. Due to the clean event environment and the potential to polarize beams, the detailed spins of initial and final states can be reconstructed.
Realizing the physics goals of the ILC program will require knowledge of the theoretical and experimental techniques fundamental to high energy physics, as well as the software written to simulate the underlying physics at the ILC and its detectors. The target audience for this primer is advanced undergraduates and beginning graduate students who have not yet had the benefit of a course in particle physics and who may be starting research on the ILC and one of its detectors. The goal is not to introduce particle physics at the ILC with depth and rigor, but rather to provide a fairly complete story in one place, together with references and suggestions for further reading where more depth and rigor can be found. The exercises are not meant to be deeply challenging but rather to provide a good starting point and working knowledge of particle physics and the technology used to study it.
\begin{figure*}[t]
\begin{center}
\vspace{-0.8in}
\resizebox{0.9\textwidth}{!}{\includegraphics*{SiD.pdf}}
\vspace{-0.8in}
\caption{Rendering of SiD. The ILC electron and positron beams collide in the detector center. Credit: SiD Consortium.}
\label{fig:sid}
\end{center}
\end{figure*}
In the first section we focus on the Standard Model (SM) of particle physics, describing the particles and gauge fields which mediate their interactions. Gauge invariance and the Higgs phenomenon are described. Next we focus on quantum scattering, first the nonrelativistic version in the Born Approximation and then the relativistic version encoded in the Feynman Calculus. We describe the production and decay of particles with prescriptions for how to calculate cross sections and lifetimes, then turn to the Higgs signal and background processes expected at a Higgs factory like the ILC. Suggestions for further reading follow at the end of the section, and exercises can be found in Appendix \ref{appendix1}.
In the following section we first survey the historical development of particle physics and the evolution in size and complexity of the machines driving that development. We then describe the fundamentals of particle accelerators and colliders, as well as the detectors built to study the results of particle collisions. We then focus on the technical designs of the ILC and SiD. SiD was first described in detail in the Letter of Intent (LoI) \cite{Aihara:2009ad}. In the following section we switch from ILC physics to software meant to simulate that physics. We describe event generators, which produce the four-vectors of particles created in collisions, and detector simulations, which simulate the response of a detector like SiD to the particles and their decay products. Techniques for the reconstruction of short-lived particles like the Higgs boson are elaborated. Suggestions for further reading follow at the end of both sections, and exercises can be found in Appendix \ref{appendix1}.
Most software in high energy particle physics runs on the Linux operating system. At the time of writing, CERN CentOS 7 (CC7), a version of CentOS, is the default distribution. Instructions on downloading and installing CC7 are available on the web. All of the simulation software discussed in this primer is freely downloadable on the web. Installation instructions can be found on the webpages easily located with a search engine. Familiarity with a shell like \emph{bash} or \emph{csh} is required for installing the software, but only a small subset of shell commands is required. Instructions for installing and using ILCsoft, the nominal software for the global ILC effort, can be found in Appendix \ref{appendix2}.
\section{Higgs factory physics}
\subsection{Standard Model \label{sec:sm}}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c||c|c|c|c|} \hline
\multicolumn{4}{|c|}{Leptons} & \multicolumn{4}{|c||}{Quarks} & \multicolumn{4}{|c|}{Bosons} \\ \hline
& Q & M & ID & & Q & M & ID & & Q & M & ID \\ \hline \hline
$e^{\pm}$ & $\pm$1 & 0.0005 & $\mp$11 & $u$ & $+2/3$ & 0.002 & 2 & $g$ & 0 & 0 & 21\\
$\nu_e$ & 0 & 0 & 12 & $d$ & $-1/3$ & 0.005 & 1 & $\gamma$ & 0 & 0 & 22\\ \cline{1-8}
$\mu^{\pm}$ & $\pm$1 & 0.106 & $\mp$13 & $c$ & $+2/3$ & 1.28 & 4 & $Z$ & 0 & 91.2 & 23 \\
$\nu_{\mu}$ & 0 & 0 & 14 & $s$ & $-1/3$ & 0.095 & 3 & $W^{+}$ & $+1$ & 80.4 & 24 \\ \cline{1-8}
$\tau^{\pm}$ & $\pm$1 & 1.78 & $\mp$15 & $t$ & $+2/3$ & 173 & 6 & $W^{-}$ & $-1$ & 80.4 & -24 \\ \cline{9-12}
$\nu_{\tau}$ & 0 & 0 & 16 & $b$ & $-1/3$ & 4.18 & 5 & $H$ & 0 & 125.1 & 25 \\ \hline
\end{tabular}
\caption{Elementary fermions and bosons of the Standard Model (SM) with their electric charge (in $e$), mass (in GeV) and Particle Data Group (PDG) identification numbers. Masses are rounded and uncertainties are suppressed. For current mass precision, see the PDG \cite{Tanabashi:2018oca}.}
\label{tab:elementary}
\end{center}
\end{table*}
The Standard Model (SM) of particle physics comprises the elementary (noncomposite) particles and their strong, weak and electromagnetic interactions. The elementary spin 1/2 fermions are the quarks and leptons, while the elementary bosons are the spin 1 gauge bosons, which mediate interactions, and the spin 0 Higgs boson. See Table \ref{tab:elementary}. The SM also accounts for composite particles in bound states of quarks $q$ like the mesons ($q\bar{q}$) and baryons ($qq^{\prime}q^{\prime \prime}$). See Table \ref{tab:mesons}.
The Lagrangian density $\mathcal{L}_{SM}=\mathcal{L}_{particles}+\mathcal{L}_{interactions}$ encodes the SM. If fields $\phi$ represent a scalar (the Higgs boson), $\psi$ a fermion (lepton or quark) and $A^{\mu}$ a vector boson, then
\begin{eqnarray}
\mathcal{L}_{particles} & = & \mathcal{L}_0 +\sum_{\psi} \mathcal{L}_{1/2}+ \sum_{A^{\mu}} \mathcal{L}_{1}
\end{eqnarray}
\noindent where $\mathcal{L}_0$, $\mathcal{L}_{1/2}$, $\mathcal{L}_1$, are the Lagrangians appropriate for spins 0, 1/2 and 1, namely
\begin{eqnarray}
\mathcal{L}_0 & = & \frac{1}{2}(\partial_{\mu} \phi)(\partial^{\mu}\phi)-\frac{1}{2} m_{\phi}^2 \phi^2 \\
\mathcal{L}_{1/2} & = & \bar{\psi}(i\gamma^{\mu} \partial_{\mu}-m_{\psi}) \psi \\
\mathcal{L}_{1} & = & -\frac{1}{4} F_{\mu \nu}F^{\mu \nu} + \frac{1}{2} m_{A}^2 A^{\mu}A_{\mu} \label{eqn:l1}
\end{eqnarray}
\noindent Here $\gamma^{\mu}$ are the Dirac gamma matrices and $F_{\mu \nu}$ is the field strength tensor. When the Euler-Lagrange equation is applied to these Lagrangians, they yield the Klein-Gordon, Dirac, and Maxwell equations, respectively. See Table \ref{tab:elementary} for the masses of the elementary fermions associated with $\psi$ and vector bosons associated with $A^{\mu}$ of the SM.
The term $\mathcal{L}_{interactions}$ describes the electromagnetic, weak and strong interactions of particles in the SM, whose forms are determined by demanding \emph{gauge invariance}. The field theories of the electromagnetic and strong interactions are referred to as Quantum Electrodynamics (QED) and Quantum Chromodynamics (QCD). The gauge groups are $U(1)$ for the electromagnetic interaction, $SU(2) \times U(1)$ for the electroweak interaction and $SU(3)$ for the strong interaction. See fig. \ref{fig:vertices} for diagrams of the SM $\gamma f^{+}f^{-}$, $Zf\bar{f}$, $Wf_u f_d$ and $gq\bar{q}$ interactions demanded by gauge invariance and the $Hf\bar{f}$ and $HVV$ interactions determined by the Higgs mechanism (see below). In addition to these interactions the SM includes the triple and quartic boson interactions which do not involve fermions: $3g$, $4g$, $3H$, $4H$, $HHWW$, $HHZZ$, $ZWW$, $ZZWW$, $4W$, $\gamma WW$, $\gamma \gamma WW$, $\gamma Z WW$. The associated \emph{couplings} give the relative strength of each interaction with respect to the others. We have $g_s/g_e =\sqrt{\alpha_s/\alpha} \approx 4$, explaining why the strong interaction is considered strong (and why nuclei hold together). The \emph{electroweak unification condition} is $g_e=g_W \sin \theta_W=g_Z \cos \theta_W$, where $\sin^2 \theta_W \approx 0.231$ is the \emph{weak mixing angle}, so in general weak interactions are not much weaker than electromagnetic ones. It is only for low energy weak phenomena $E/m_W \ll 1$, \emph{e.g.} nuclear beta decay, that the weak interaction is suppressed by $m_W^2$ relative to the electromagnetic interaction.
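The relative coupling strengths quoted above can be checked numerically. The sketch below uses illustrative rounded values of $\alpha$, $\alpha_s$ and $\sin^2 \theta_W$ (not a precision fit; couplings run with energy scale):

```python
import math

# Illustrative rounded values (see the PDG for running couplings).
alpha = 1.0 / 137.0      # electromagnetic fine structure constant
alpha_s = 0.118          # strong coupling near the Z mass scale
sin2_thetaw = 0.231      # weak mixing angle

g_e = math.sqrt(4 * math.pi * alpha)
g_s = math.sqrt(4 * math.pi * alpha_s)

# Electroweak unification: g_e = g_W sin(theta_W).
g_w = g_e / math.sqrt(sin2_thetaw)

print(f"g_s/g_e = {g_s / g_e:.2f}")   # strong vs. electromagnetic: ~4
print(f"g_W/g_e = {g_w / g_e:.2f}")   # weak is NOT much weaker: ~2
```

The point of the second line is the one made in the text: the weak coupling itself is comparable to (in fact larger than) the electromagnetic one.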
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{smvertices.pdf}
\caption{Fundamental SM interaction vertices and their couplings for fermions (solid lines), gauge bosons (wavy or loopy lines) and the Higgs boson (dashed lines). For the $Z$ boson vertex, $c_{A}^{f},c_{V}^{f}$ are axial and vector factors, and for the $W$ vertex $V_{ud}$ is the Cabibbo, Kobayashi, Maskawa (CKM) matrix element. For the Higgs boson vertices, $v\approx 246$~GeV and $V=W,Z$.}
\label{fig:vertices}
\end{center}
\end{figure*}
Not all vertices in fig. \ref{fig:vertices} have been observed in nature, in particular for vertices involving neutrinos. A particle which has nonzero spin may align its spin either with its direction of motion (righthanded, helicity +1) or against it (lefthanded, helicity -1). Processes in nature are \emph{spatially invariant}, or \emph{parity conserving}, if they occur with lefthanded or righthanded particles nonpreferentially. Most processes are parity conserving, but processes involving neutrinos violate parity conservation because righthanded neutrinos $\nu_{R}$ and lefthanded antineutrinos $\bar{\nu}_{L}$ are not observed in nature, and therefore excluded from the SM. Only $\nu_{L}$ and $\bar{\nu}_{R}$ exist in the SM. This has a critical consequence for weak interactions involving neutrinos. For example, the vertices $We^{-}_{L}\bar{\nu}_{R}$ and $We^{+}_{R}\nu_{L}$ exist in the SM, but $We^{-}_{R}\bar{\nu}_{L}$ and $We^{+}_{L}\nu_{R}$ do not.
The interaction vertices of fig. \ref{fig:vertices}, together with conservation laws, explain how the unstable particles of the SM decay. The electron and all neutrinos are stable against decay. Single quarks decay only within bound states, apart from the top quark through $t \rightarrow bW$ due to its short lifetime. Electron decay is ruled out by energy conservation, but muon decay proceeds through two connected vertices: $\mu \rightarrow W^{\star} \nu_{\mu}$, where the $W$ is off mass shell, and $W^{\star} \rightarrow e\nu_e$. No other decay channel is available due to energy conservation. For the $\tau$ lepton, $\tau \rightarrow W^{\star} \nu_{\tau}$ and $W^{\star} \rightarrow e\nu_e, \mu \nu_{\mu}, q_u q_d$ are accessible if $q_u$, $q_d$ are first and second generation quarks. Onshell gauge bosons decay via $Z \rightarrow f\bar{f}$ and $W \rightarrow f_u f_d$ for all fermions except the top quark. The photon is stable. The Higgs boson decays through single vertices $H \rightarrow f\bar{f}$ and $H \rightarrow ZZ^{\star},WW^{\star}$ where one gauge boson is off mass shell, and also through multiple vertex processes.
The photon $\gamma$, stable with zero mass, is the exemplar of the gauge boson. Maxwell's field equations exhibit the canonical gauge invariance under $U(1)$ transformation, but only if the term $\frac{1}{2} m_{A}^2 A^{\mu}A_{\mu}$ in eq. \ref{eqn:l1} is zero. Similarly, the strong interaction exhibits invariance under $SU(3)$ transformation only if $m_{g}=0$. In contrast, the $Z$ and $W$ are \emph{massive}: the only particles more massive in the SM are the Higgs boson and the top quark. Early weak interaction theory predicted what their masses should be based on experimental data, but their nonzero mass was a puzzle. Electroweak $SU(2) \times U(1)$ gauge invariance is spoiled if the term in $\mathcal{L}_{1}$ (eq. \ref{eqn:l1}) quadratic in the $W$ and $Z$ fields is nonzero. How do mass terms for $W$ and $Z$ appear in $\mathcal{L}_{SM}$ if not through eq. \ref{eqn:l1}? The $W$ and $Z$ were discovered at the Super Proton Antiproton Synchrotron (S$p\bar{p}$S) collider at CERN in 1983 with $m_{W} \approx 80.4$~GeV and $m_{Z}\approx 91.2$~GeV, just where the experimental data pointed.
How $W$ and $Z$ mass terms appear in $\mathcal{L}_{SM}$ is explained by the \emph{Higgs mechanism}, which also predicts the Higgs boson $H$ and its couplings. We postulate a complex scalar field $\phi=(\phi_1 + i \phi_2)/\sqrt{2}$ in a potential $V(\phi)$. A generic potential is $V(\phi)=\sum_{n} c_{n} (\phi^{\star}\phi)^{n}$, but imposing theoretical constraints like renormalizability requires $c_{1}<0,c_{2}>0$, and $c_{n}=0$ for $n\neq 1,2$, so
\begin{eqnarray}
V(\phi) & = & \mu^2 \phi^{\star}\phi + \lambda (\phi^{\star} \phi)^2
\end{eqnarray}
\noindent where $\mu^2 < 0$ and $\lambda >0$. Then by construction we have symmetric ground states $\phi_0=\frac{v}{\sqrt{2}} e^{i \theta}$ ($v=\sqrt{-\mu^2/\lambda}$) for the Lagrangian $\mathcal{L}_{\phi}=T-V=1/2( \partial_{\mu} \phi)^{\star} (\partial^{\mu} \phi) -V(\phi)$. If the symmetry is broken by choosing a particular $\theta$, \emph{e.g.} $\theta=0$, we have a ground state $\phi_0=\frac{v}{\sqrt{2}}$. For excitations near $\phi_0$, write $\phi=\phi_0+\frac{1}{\sqrt{2}}(\kappa_1 + i \kappa_2)$; the Lagrangian rewritten in the shifted fields yields a term quadratic in $\kappa_1$ corresponding to a mass $m_{\kappa_1}=\sqrt{-2 \mu^2}$. A mass term for a boson $\kappa_1$ has been generated by \emph{spontaneous symmetry breaking}.
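Writing out the expansion makes the mass term explicit. Substituting $\phi=\frac{1}{\sqrt{2}}(v+\kappa_1+i\kappa_2)$ into $V(\phi)$ and using $\lambda v^2 = -\mu^2$,
\begin{eqnarray*}
V & = & \frac{\mu^2}{2}\left[(v+\kappa_1)^2+\kappa_2^2\right] + \frac{\lambda}{4}\left[(v+\kappa_1)^2+\kappa_2^2\right]^2 \\
  & = & V(\phi_0) + (-\mu^2)\,\kappa_1^2 + 0 \cdot \kappa_2^2 + \mathcal{O}(\kappa^3)
\end{eqnarray*}
\noindent Comparing the $\kappa_1^2$ coefficient with the canonical mass term $\frac{1}{2}m^2 \kappa_1^2$ gives $m_{\kappa_1}=\sqrt{-2\mu^2}$, while $\kappa_2$ stays massless; in the full electroweak theory this massless (Goldstone) mode is absorbed as the longitudinal polarization of the massive gauge bosons.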
The SM fermions exhibit an interesting pattern: they fit into three \emph{generations} ordered by mass. Within each generation, fermions are paired together in $SU(2)$ \emph{electroweak doublets}. Each charged lepton $\ell$ is paired with its corresponding neutrino $\nu_{\ell}$ in the doublet $(\nu_{\ell} \ell)^{T}$, and each \emph{up-type} quark $q_u$ is paired with a \emph{down-type} quark $q_d$ in the doublet $(q_u q_d)^{T}$. Each fermion also has an anti-fermion of the same mass but opposite charge. The Large Electron Positron (LEP) collider established that there are exactly three generations, assuming $m_{\nu}<\frac{1}{2}m_{Z}$ for all neutrinos of generation four or higher, by measuring the cross section for $e^{+}e^{-} \rightarrow Z \rightarrow \sum_{\ell} \nu_{\ell} \bar{\nu}_{\ell}$ to high precision.
The first generation, the least massive, contains the electron $e$ and its neutrino $\nu_e$ as well as the up quark $u$ and down quark $d$. Bound states of the $e$, $u$ and $d$ explain all ordinary matter bound up in atoms: the proton ($p=uud$) and the neutron ($n=udd$) are bound states of three first generation quarks. An atom with atomic number $Z$ and atomic weight $A$ contains $Z$ electrons bound to a nucleus with $Z$ protons and $A-Z$ neutrons. Quarks carry fractional charge: $q_{u}=+2/3$ and $q_{d}=-1/3$. Thus $u$ and $d$ quarks also explain the pions discovered in cosmic rays in 1947 ($\pi^+=u\bar{d},\pi^-=\bar{u}d$).
The second generation contains the muon $\mu$ and its neutrino $\nu_{\mu}$, and the charm quark $c$ and strange quark $s$. The muon can be considered a heavy copy of the electron, with $m_{\mu}/m_{e} \approx 210$, but an unstable one since the muon can decay without violating energy conservation. The muon was discovered, like the pion, in cosmic rays. The second generation quarks are copies of the first generation quarks, with $m_{c}/m_{u} \approx 640$ and $m_s/m_d \approx 19$. Similar to the first generation, $q_{c}=+2/3$ and $q_{s}=-1/3$. While the second generation quarks do form bound states, the bound states are all unstable and decay to first generation free or bound fermions. The strange quark was discovered in 1947 in cosmic rays in the decay $K^0 \rightarrow \pi^+ \pi^-$ ($K^0=d\bar{s}$), while the charm quark, or rather the bound state $J/\psi$ ($J/\psi=c\bar{c}$), was codiscovered in 1974 at the Stanford Positron Electron Accelerator Ring (SPEAR) and the Brookhaven Alternating Gradient Synchrotron (AGS) accelerator.
The third generation contains the $\tau$ and its neutrino $\nu_{\tau}$, and the top quark $t$ and bottom quark $b$. The $\tau$, the heaviest lepton, has $m_{\tau}/m_{e} \approx 3560$ and is unstable like the muon but has many more decay channels open. Like the $J/\psi$, the $\tau$ was discovered at SPEAR. The bottom quark $b$ has $m_{b}/m_{d}\approx 840$ and the top quark has a whopping $m_{t}/m_{u} \approx 8.65 \times 10^4$! Similar to the first generation, $q_{t}=+2/3$ and $q_{b}=-1/3$. The $b$ quark forms bound states with other quarks but the $t$, alone among quarks in this regard, decays well before it can form a bound state. The $b$ was discovered at Fermilab in 1977 in its bound state $\Upsilon$ ($\Upsilon=b\bar{b}$), while the $t$ had to wait until 1995 for discovery at the Fermilab Tevatron.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
\multicolumn{8}{|c|}{Mesons} \\ \hline
& $q\bar{q}^{\prime}$ & Q & M [GeV]& $c\tau$ & ID & Decay1(BR) & Decay2(BR) \\ \hline
$\pi^{+}$ & $u \bar{d}$ & +1 & 0.140 & 7.80m & +211 & $\mu^{+} \nu_{\mu}$(1.000) & $e^{+} \nu_{e}$ (0.000) \\
$\pi^0$ & $u\bar{u}-d\bar{d}$ & 0 & 0.135 & 25.5nm & 111 & $\gamma \gamma$(0.988) & $e^+e^- \gamma$(0.012) \\ \hline
$K^+$ & $u\bar{s}$ & +1 & 0.494 & 3.71m & +321 & $\mu^+ \nu_{\mu}$(0.636) & $\pi^+ \pi^0$(0.207) \\
$K^{0}_{S}$ & $d\bar{s}$ & 0 & 0.498 & 2.68cm & 310 & $\pi^+ \pi^-$(0.692) & $\pi^0 \pi^0$(0.307) \\
$K^{0}_{L}$ & $d\bar{s}$ & 0 & 0.498 & 15.3m & 130 & $\pi^{\pm} e^{\mp} \nu_e$(0.406) & $\pi^{\pm} \mu^{\mp} \nu_{\mu}$(0.270) \\
$\phi$ & $s\bar{s}$ & 0 & 1.019 & 46.5fm & 333 & $K^+ K^-$(0.492) & $K^{0}_{L}K^{0}_{S}$(0.340) \\ \hline
$D^+$ & $c\bar{d}$ & +1 & 1.870 & 312$\mu$m & +411 & $K^{0} \bar{K^{0}}X$(0.61) & $K^-X$(0.257) \\
$D^0$ & $c\bar{u}$ & 0 & 1.865 & 123$\mu$m & 421 & $K^- X$(0.547) & $K^{0} \bar{K^{0}}X$(0.47) \\
$J/\psi$ & $c\bar{c}$ & 0 & 3.097 & 2.16pm & 443 & $ggg$(0.641) & $\ell^+ \ell^-$(0.119) \\ \hline
$B^+$ & $u\bar{b}$ & +1 & 5.279 & 491$\mu$m & +521 & $\bar{D^0}X$(0.79) & $D^- X$(0.099) \\
$B^0$ & $d\bar{b}$ & 0 & 5.280 & 455$\mu$m & 511 & $\bar{D^0}X$(0.474) & $D^- X$(0.369) \\
$\Upsilon$ & $b\bar{b}$ & 0 & 9.460 & 3.63pm & 553 & $ggg$(0.817) & $\ell^+ \ell^-$(0.075) \\ \hline
\multicolumn{8}{|c|}{Baryons} \\ \hline
& $qq^{\prime}q^{\prime \prime}$ & Q & M & $c\tau$ & ID & Decay1(BR) & Decay2(BR) \\ \hline
$p$ & $uud$ & +1 & 0.938 & $\infty$ & 2212 & - & - \\
$n$ & $udd$ & 0 & 0.940 & 264Gm & 2112 & $pe^- \bar{\nu_e}$(1.00) & - \\ \hline
$\Sigma^{+}$ & $uus$ & +1 & 1.189 & 2.40cm & 3222 & $p\pi^0$(0.516) & $n\pi^+$(0.483) \\
$\Sigma^0$ & $uds$ & 0 & 1.193 & 22.2pm & 3212 & $\Lambda \gamma$(1.00) & - \\
$\Sigma^{-}$ & $dds$ & -1 & 1.197 & 4.43cm & 3112 & $n\pi^-$(0.998) & $ne^- \bar{\nu}_e$(0.001) \\ \hline
\end{tabular}
\caption{Some common meson and baryons, their valence quark content, charge (in $e$), mass (in GeV), $c$ times lifetime, PDG identification number and two dominant decays with their branching ratios. Measured values are rounded and uncertainties are suppressed. For current precision, see the PDG \cite{Tanabashi:2018oca}.}
\label{tab:mesons}
\end{center}
\end{table*}
Because of a property of QCD known as \emph{confinement}, single quarks $u,d,s,c,b$ are not observed. Rather, when produced they form bound states with other quarks produced either in association or pulled from the vacuum. Bound states of quarks, mesons ($q\bar{q}$) and baryons ($qq^{\prime}q^{\prime \prime}$), are \emph{colorless}. Color charge, the QCD analog of electric charge in QED, is either red, green or blue ($r,g,b$) and mesons carry color charge $r\bar{r}$, $g\bar{g}$ or $b\bar{b}$ while baryons carry color charge $rgb$ or $\bar{r} \bar{g}\bar{b}$. We have seen the first generation mesons ($\pi^+=u\bar{d}$), second generation mesons ($K^0=d\bar{s}$, $\phi=s\bar{s}$, $J/\psi=c\bar{c}$), as well as the third generation meson ($\Upsilon=b\bar{b}$), but there are many more. Similarly, the first generation baryons $p=uud$ and $n=udd$ are only the tip of the iceberg. See Table \ref{tab:mesons} for a slightly larger tip of the iceberg and the PDG \cite{Tanabashi:2018oca} for the complete iceberg as it is presently known.
A meson, a bound state of two quarks, may have a variety of total spin and total angular momenta. Another degree of freedom, \emph{isospin}, a flavor symmetry which treats the up and down quarks as two states of one particle, is analogous to spin and adds further to the variety. Thus mesons with the same valence quark content may nevertheless be distinct based on how their spin and isospin add. Distinct radial and orbital angular momentum quantum numbers can also yield distinct mesons. For example, see Table \ref{tab:g1nm} for the first generation mesons $\pi^0$,$\pi^{\pm}$,$\eta^0$,$\omega^0$,$\rho^0$ and $\rho^{\pm}$, all of which have \emph{valence} quark content $u\bar{u},d\bar{d}$,$u\bar{d}$, $\bar{u}d$.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Meson & J & I & M [GeV]& $\Gamma$ [MeV]& ID \\ \hline
$\pi^{0}$ & 0 & 1 & 0.135 & $7.73 \times 10^{-6}$ & 111 \\
$\pi^{\pm}$ & 0 & 1 & 0.140 & $2.53 \times 10^{-14}$ & $\pm211$ \\
$\eta^0$ & 0 & 0 & 0.548 & $1.31 \times 10^{-3}$ & 221 \\
$\omega^0$ & 1 & 0 & 0.783 & 8.49 & 223 \\
$\rho^0$ & 1 & 1 & 0.775 & 149 & 113 \\
$\rho^{\pm}$ & 1 & 1 & 0.775 & 149 & $\pm213$ \\ \hline
\end{tabular}
\caption{Some first generation mesons with valence quark content $u\bar{u}$,$d\bar{d}$,$u\bar{d}$, $\bar{u}d$. They differ in their total angular momentum J and isospin I. Decays of the $\rho$, $\omega$ and $\eta$ are to pions and photons. Measured values are rounded and uncertainties are suppressed. For current precision, see the PDG \cite{Tanabashi:2018oca}.}
\label{tab:g1nm}
\end{center}
\end{table}
\subsection{Quantum scattering \label{sec:scattering}}
\begin{figure*}[t]
\begin{center}
\framebox{\includegraphics[width=0.9\textwidth]{scatter.pdf}}
\caption{Diagrams for classical and nonrelativistic quantum elastic scattering. For classical scattering, the crucial functional relation is $\theta=\theta(b)$ determined by the force law. For nonrelativistic quantum scattering, it is the scattering amplitude $f(\theta)$ determined by the Schr{\"o}dinger equation.}
\label{fig:scattering}
\end{center}
\end{figure*}
The fundamental quantities which determine the number and kind of events produced in particle collisions are essentially geometric: the \emph{cross section} for the process and the \emph{luminosity} of particle production. The number of events $N$ produced in a process with cross section $\sigma$ and luminosity $\mathcal{L}$ is $N=\sigma \mathcal{L}$. The units of cross section are area, typically the \emph{barn} $b=10^{-28}$~m$^2$. Most processes of interest in modern particle physics have cross sections of a few femtobarns (fb) or picobarns (pb). Luminosity $\mathcal{L}$, also known as the \emph{integrated} or \emph{total} luminosity, therefore has units of inverse area, typically fb$^{-1}$ or pb$^{-1}$.
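The bookkeeping $N=\sigma \mathcal{L}$ is simple unit arithmetic. A sketch with hypothetical round numbers (not measured values):

```python
# N = sigma * L: consistent units are all that matter here.
sigma_fb = 200.0       # hypothetical cross section, in femtobarns
lumi_ifb = 2000.0      # hypothetical integrated luminosity, in fb^-1

n_events = sigma_fb * lumi_ifb
print(f"expected events: {n_events:.0f}")  # 400000
```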
In classical scattering the incident and target particle are treated as Newtonian particles and the cross section can be calculated geometrically given a force law. For a central force the critical ingredient for the calculation of a cross section is the relation between the \emph{impact parameter} $b$ and the \emph{scattering angle} $\theta$. If the coordinate system is taken so that the origin is the target location and the $z$ axis points in the direction of the incident particle, $b$ is the distance in the $xy$ plane from the incident particle to the $z$ axis, and $\theta$ is the polar angle from the $z$ axis. The target presents a cross-sectional area $\sigma$ to the incident particle. See fig. \ref{fig:scattering} (left).
Each small solid angle $d\Omega$ the incident particle scatters into contributes a quantity $d\sigma$ to the total cross section, the quantitative amount $d\sigma/d\Omega$ depending on the nature of the force law. This quantity $d\sigma/d\Omega$ is the \emph{differential cross section}. From fig. \ref{fig:scattering} it is clear that $d\sigma=bd\phi db$ and $d\Omega=\sin \theta d\theta d\phi$, so that
\begin{eqnarray}
\frac{d\sigma}{d\Omega} & = & \frac{b}{\sin \theta} \left\vert \frac{db}{d\theta} \right\vert
\end{eqnarray}
\noindent In the case of a pointlike particle scattering off a hard sphere with radius $R$, for example, the relation is $b=R\cos \frac{1}{2}\theta$. In this case $d\sigma/d\Omega=\frac{1}{4}R^2$ and $\sigma=\int d\sigma=\pi R^2$, the cross-sectional area of the sphere. For $b \leq R$ we have one collision, so the luminosity $\mathcal{L}=1/\pi R^2$. For $b > R$ we have no collision so $\mathcal{L}=0$.
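The hard-sphere result can be checked numerically from $b=R\cos \frac{1}{2}\theta$ alone. A sketch using only standard library tools (the radius is an arbitrary illustrative value):

```python
import math

R = 2.0  # sphere radius, arbitrary units

def dsigma_domega(theta, dtheta=1e-6):
    """(b / sin(theta)) |db/dtheta| with b = R cos(theta/2)."""
    b = R * math.cos(theta / 2)
    dbdtheta = (R * math.cos((theta + dtheta) / 2) - b) / dtheta
    return b / math.sin(theta) * abs(dbdtheta)

# The differential cross section is isotropic: R^2/4 at every angle.
for theta in (0.5, 1.5, 2.5):
    assert abs(dsigma_domega(theta) - R**2 / 4) < 1e-4

# Integrate over solid angle (midpoint rule): sigma = pi R^2.
n = 20000
h = math.pi / n
sigma = 2 * math.pi * h * sum(
    dsigma_domega((i + 0.5) * h) * math.sin((i + 0.5) * h) for i in range(n))
print(sigma, math.pi * R**2)  # both approximately 12.566
```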
In accelerators, we generalize this notion of luminosity to include bunches of colliding particles, not just single particles, repeatedly colliding at fixed intervals of time. The \emph{instantaneous luminosity} is $L=d\mathcal{L}/dt$. See eq. \ref{eqn:lumi}. The total number of scattering events from beam collisions is
\begin{eqnarray}
N & = & \int dt \frac{d\mathcal{L}}{dt} \int d\Omega \frac{d\sigma}{d\Omega}
\end{eqnarray}
\noindent where the integrations are over time and solid angle.
In nonrelativistic quantum scattering in a central potential $V(r)$, the incident particle is treated as a plane wave and the scattered particle is treated as a spherical wave. The ansatz is a superposition of these two,
\begin{eqnarray}
\Psi(r,\theta) & = & A \left( \exp (ikz)+\frac{f(\theta)}{r} \exp(ikr) \right)
\label{eqn:ansatz}
\end{eqnarray}
\noindent where $f(\theta)$ is the \emph{scattering amplitude} determined by solving the time-independent Schr{\"o}dinger equation. See fig. \ref{fig:scattering} (right). By equating the plane wave probability flowing into the scattering center with the spherical wave probability flowing out it can be shown that $d\sigma/d\Omega=\vert f(\theta) \vert^2$.
This ansatz follows naturally from the \emph{first Born approximation}. The time independent Schr{\"o}dinger equation can be cast in integral form with the aid of a Green's function,
\begin{eqnarray}
\Psi(r) & = & \Psi_0(r)+\int d^3r_0 g(r-r_0)V(r_0)\Psi(r_0) \\
g(r) & = & - \frac{m}{2 \pi \hbar^2}\left( \frac{\exp (i k r)}{r} \right)
\end{eqnarray}
\noindent where $g$, proportional to the Green's function of the Helmholtz operator, is known as the \emph{propagator}. If $\Psi_0=A\exp(ikz)$ is the plane wave of the incident particle, the Born series iteratively bootstraps solutions
\begin{eqnarray}
\Psi_1 & = & \Psi_0 + \int gV \Psi_0 \\
\Psi_n & = & \Psi_{n-1} + \int g^nV^n \Psi_0
\label{eqn:born}
\end{eqnarray}
\noindent where for clarity some notation has been omitted. Each successive term corrects the previous ones, and the series, in principle, converges to the solution.
The first Born approximation is simply $\Psi_1$, the plane wave plus the Fourier transform of the potential. By comparing $\Psi_1$ with eq. \ref{eqn:ansatz}, the scattering amplitude can be extracted for a potential $V(r)$ which is localized at the scattering center and drops to zero elsewhere:
\begin{eqnarray}
f(\theta) & = & - \frac{2m}{\hbar^2 \kappa} \int dr r V(r) \sin (\kappa r) \label{eqn:scatamp}
\end{eqnarray}
\noindent where $\kappa=\vert k-k^{\prime} \vert$.
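Eq. \ref{eqn:scatamp} can be tested against the one case where the integral is elementary, the screened (Yukawa) potential $V(r)=\beta e^{-\mu r}/r$, whose textbook Born amplitude is $f(\theta)=-2m\beta/\hbar^2(\mu^2+\kappa^2)$. A sketch in natural units ($\hbar=m=1$) with illustrative parameter values:

```python
import math

beta, mu = 1.0, 1.0   # illustrative Yukawa strength and screening mass
kappa = 2.0           # momentum transfer |k - k'|
hbar = m = 1.0        # natural units

# Numerical form of eq. (scatamp): f = -(2m / hbar^2 kappa) int r V(r) sin(kappa r) dr,
# where r V(r) = beta exp(-mu r).  Midpoint rule; integrand dies off as exp(-mu r).
n, rmax = 200000, 60.0
h = rmax / n
integral = h * sum(beta * math.exp(-mu * (i + 0.5) * h) * math.sin(kappa * (i + 0.5) * h)
                   for i in range(n))
f_numeric = -(2 * m) / (hbar**2 * kappa) * integral

# Closed form for the Yukawa potential.
f_exact = -2 * m * beta / (hbar**2 * (mu**2 + kappa**2))

print(f_numeric, f_exact)  # agree closely
```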
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{stu.pdf}
\caption{Feynman diagrams for two-body scattering in the $s$-channel (left), the $t$-channel (middle) and $u$-channel (right). Amplitudes will have vertex factors $g_{12}g_{34}$, $g_{13}g_{24}$ and $g_{14}g_{23}$ respectively. Cross sections will depend on the Mandelstam variables $s$, $t$, and $u$ respectively.}
\label{fig:mandelstam}
\end{center}
\end{figure*}
In classical scattering and nonrelativistic Born scattering discussed above, the scattering is \emph{elastic}. We now generalize to relativistic scattering and consider \emph{inelastic} scattering, in which the interaction may produce new particles distinct from the incident particles and the concepts of luminosity and cross section generalize. We show how differential cross sections and lifetimes are calculated with the fully relativistic Feynman prescription. The \emph{amplitude} $\mathcal{M}$ of the process is the key to calculating both cross sections and decay rates.
Fermi's Golden Rule states that the rate of a process from initial state $i$ to final state $f$ is the product of the phase space available in the final state $PS$ times the modulus of the amplitude squared, $\vert \mathcal{M} \vert ^2$:
\begin{eqnarray}
T_{i \rightarrow f} & = & \frac{2\pi}{\hbar} \vert \mathcal{M} \vert^2 \times PS
\end{eqnarray}
\noindent The amplitude $\mathcal{M}$ is calculated using Feynman rules described below. For each particle in the final state $f$ there is a contribution $\frac{c}{(2\pi)^3} \frac{d^3p}{2E}$ to $PS$ and an overall delta function to enforce energy conservation.
In the case of scattering the amplitude $\mathcal{M}$ is similar to the scattering amplitude $f(\theta)$ from Born scattering. The coupling of the scattered particles is a factor in the amplitude. In the limit of zero coupling or zero $PS$ in the final state, the transition rate is zero. In either case the process will not occur. For large couplings and $PS$, the transition rate is large. A large coupling can be counterbalanced by small $PS$, and \emph{vice versa}.
From Fermi's Golden Rule it can be shown that, after integrating phase space for two-body scattering $1+2 \rightarrow 3+4$ and two-body decay $1 \rightarrow 2+3$ of a particle with mass $m$, the differential cross section and decay rate are
\begin{eqnarray}
\frac{d\sigma}{d\Omega} & = & S\left( \frac{\hbar c}{8\pi} \right)^2 \frac{\vert \vec{p}_f \vert}{\vert \vec{p}_i \vert} \frac{\vert \mathcal{M} \vert^2}{(E_1+E_2)^2} \\
\Gamma & = & \frac{S \vert \vec{p}_f \vert}{8\pi \hbar c} \frac{\vert \mathcal{M} \vert^2}{m^2}
\label{eqn:twobody}
\end{eqnarray}
\noindent where $E_1+E_2$ is the sum of energies in the initial state, $\vec{p}_f$ is the momentum of either final state particle, and $\vec{p}_i$ is the momentum of either initial state particle. The statistical factor $S=1/2$ for identical final state particles and $S=1$ for distinct final state particles.
For a given scattering or decay amplitude, there is a corresponding \emph{Feynman diagram} which connects the initial state particles to the final state particles through any number of intermediate vertices. The Feynman prescription for calculating an amplitude $\mathcal{M}$ for a Feynman diagram is this:
\begin{enumerate}
\item \emph{Momenta}. Label external momenta $p_i$, and internal momenta $q_j$.
\item \emph{Vertex Factor}. For each vertex with coupling $g$, write a factor $-ig$.
\item \emph{Propagator}. For each internal momentum $q_j$, write a factor $\frac{i}{q_{j}^{2}-m_{j}^2 }$.
\item \emph{Energy Conservation}. For each vertex with incoming four-momenta $k_1,k_2,k_3$, write a factor $(2\pi)^4 \delta(k_1+k_2+k_3)$.
\item \emph{Integration}. For each internal momentum $q_j$, integrate $\frac{1}{(2\pi)^4} \int d^4 q_{j}$.
\end{enumerate}
\noindent What remains after this procedure is $-i\mathcal{M}$. In fact this simplified prescription applies to scalars, rather than fermions or vector bosons, but broadly the idea is the same. Only a little more complexity is required to describe strong and electroweak interactions of fermions and vector bosons.
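Applied to the simplest case, the $s$-channel diagram of fig. \ref{fig:mandelstam} with a scalar mediator of mass $m$, the five steps collapse to one line: the vertex factors give $(-ig_{12})(-ig_{34})$, the propagator gives $i/(q^2-m^2)$, and the delta functions fix $q=p_1+p_2$ so that $q^2=s$, leaving $\mathcal{M}=g_{12}g_{34}/(s-m^2)$. A sketch with hypothetical couplings:

```python
# Tree-level s-channel scalar exchange: after the delta functions fix
# q = p1 + p2 and the q integration is done, M = g12 g34 / (s - m^2).
def amplitude_s_channel(s, g12, g34, m_mediator):
    return g12 * g34 / (s - m_mediator**2)

# Hypothetical couplings and a CM energy well above the mediator mass (GeV).
M = amplitude_s_channel(s=200.0**2, g12=0.3, g34=0.3, m_mediator=91.2)
print(M)  # far above the pole, M ~ g12 g34 / s
```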
\subsection{Particle production and decay\label{sec:production}}
Particle production cross sections and decay rates are related in that both are calculated with the Feynman prescription using Feynman diagrams. We consider the cases of two-body production and two-body decay.
\textbf{Production.} It should be evident that a total cross section must be a relativistic invariant. In two-body scattering $1+2 \rightarrow 3+4$, the cross section must depend on the four-vectors $p_{\mu}^{1}$,$p_{\mu}^{2}$,$p_{\mu}^{3}$,$p_{\mu}^{4}$, but if it is a relativistic invariant it can \emph{only} depend on functions of the four-vectors which are relativistic invariants like the contractions $p_{\mu}p^{\mu}$.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.9\textwidth]{eeqq.pdf}
\caption{Cross section for $e^+ e^- \rightarrow \sum_{q} q\bar{q}$ \emph{vs.} $\sqrt{s}$ (in GeV) with experimental data from various sources. Breit-Wigner meson resonances $u\bar{u},d\bar{d},s\bar{s},c\bar{c},b\bar{b}$ lie atop a nonresonant component with a $1/s$ dependence. At low $\sqrt{s}$ the $\gamma^{\star}$ process dominates, while at higher $\sqrt{s}$ the $Z^{\star}$ process dominates. The Breit-Wigner $Z$ resonance dominates at $\sqrt{s}=m_{Z}$. Credit: PDG \cite{Tanabashi:2018oca}.}
\label{fig:eetohad}
\end{center}
\end{figure*}
We define the \emph{Mandelstam variables} $s,t,u$ for two-body scattering $1+2 \rightarrow 3+4$, which are contractions and therefore relativistic invariants:
\begin{eqnarray}
s & = & (p_1+p_2)_{\mu}(p_1+p_2)^{\mu} \\
t & = & (p_1-p_3)_{\mu}(p_1-p_3)^{\mu} \\
u & = & (p_2-p_3)_{\mu}(p_2-p_3)^{\mu}
\end{eqnarray}
\noindent Note that in two-body scattering there are ten distinct contractions $p_{\mu}^{i} p_{j}^{\mu}$ and four conservation constraints $p_{\mu}^{1}+p_{\mu}^{2}=p_{\mu}^{3}+p_{\mu}^{4}$. This leaves seven invariants of interest, the four $m_{i}^2$ together with $s,t,u$, but it can be shown that $s+t+u=\sum_i m_{i}^2$. Therefore any two-body scattering cross section can be written as a combination of $s,t,u$ and three masses, or $s,t$ and all four masses. If the masses are negligible compared to $s,t,u$, then the latter are sufficient.
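The identity $s+t+u=\sum_i m_i^2$ is easy to verify numerically for any kinematically valid configuration. A sketch with hypothetical CM-frame momenta, metric signature $(+,-,-,-)$, energies in GeV:

```python
import math

def dot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def plus(p, q):  return tuple(a + b for a, b in zip(p, q))
def minus(p, q): return tuple(a - b for a, b in zip(p, q))

# Hypothetical CM-frame kinematics for 1 + 2 -> 3 + 4.
m1 = m2 = 0.1
m3 = m4 = 0.5
E = 2.0                                # per-beam energy
pin = math.sqrt(E**2 - m1**2)          # initial momentum magnitude
pout = math.sqrt(E**2 - m3**2)         # final momentum magnitude
th = math.pi / 3                       # scattering angle

p1 = (E, 0.0, 0.0,  pin)
p2 = (E, 0.0, 0.0, -pin)
p3 = (E,  pout*math.sin(th), 0.0,  pout*math.cos(th))
p4 = (E, -pout*math.sin(th), 0.0, -pout*math.cos(th))

s = dot(plus(p1, p2), plus(p1, p2))
t = dot(minus(p1, p3), minus(p1, p3))
u = dot(minus(p2, p3), minus(p2, p3))

print(s + t + u, m1**2 + m2**2 + m3**2 + m4**2)  # both ~0.52
```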
For each Mandelstam variable, there is a corresponding type of two-body scattering Feynman diagram or \emph{channel}: $s$-channel, $t$-channel, $u$-channel. See fig. \ref{fig:mandelstam}. When a propagator is combined with the delta function meant to enforce energy conservation at a vertex, the result is a simple function of a Mandelstam variable. For simplicity, we consider $m_i/E_i \ll 1$. For an $s$-channel Feynman diagram $\int d^4 q \delta(p_1+p_2-q)/q^2 \propto 1/s$. Thus the Mandelstam variables enter amplitudes naturally through the propagators. Moreover, the couplings $g_1$ and $g_2$ associated with the two vertices will contribute a factor $g_{1}g_{2}$ to the amplitude. If more than one channel is possible, the total amplitude will be a sum, \emph{e.g.} $\mathcal{M}=\mathcal{M}_{s}+\mathcal{M}_{t}+\mathcal{M}_{u}$, and the channels will produce interference terms in $\vert \mathcal{M} \vert^2$.
We cite a few instructive inelastic scattering cross sections. First compare the cross sections for analogous processes, $e^+ e^- \rightarrow \gamma \gamma$ from QED and $q\bar{q} \rightarrow gg$ from QCD. Both have $t$- and $u$-channel diagrams, the former mediated by a virtual electron and the latter by a virtual quark:
\begin{eqnarray}
\left( \frac{d\sigma}{d\Omega}\right)_{e\bar{e} \rightarrow \gamma \gamma} & = & \frac{\alpha^2}{2s} \left( \frac{t^2+u^2}{ut} \right) \\
\left(\frac{d\sigma}{d\Omega}\right)_{q\bar{q} \rightarrow gg} & = & \frac{8\alpha_{s}^2}{27s} \left( \frac{t^2+u^2}{ut} \right) \left(1-\frac{9tu}{4s^2} \right)
\end{eqnarray}
\noindent The vertices are from fig. \ref{fig:vertices}. Because $g_e=\sqrt{4 \pi \alpha}$ and $g_s=\sqrt{4 \pi \alpha_s}$, the processes have amplitudes proportional to $\alpha$ and $\alpha_s$ respectively, and cross sections proportional to $\alpha^2$ and $\alpha_{s}^2$. In the case of gluon pair production, however, there is also an $s$-channel process which interferes with the $t$-channel process due to the $3g$ vertex (there is no SM $3\gamma$ vertex).
Next consider quark pair production $e^+ e^- \rightarrow q\bar{q}$ in the $s$-channel mediated either by a virtual $\gamma$ or a virtual $Z$. For the virtual $\gamma$ process, there will be a vertex factor $g_e$ at the $e^+ e^-$ vertex and another vertex factor $g_e Q_{q}$ for the $q\bar{q}$ vertex, leading to an overall cross section factor of $\alpha^2 Q_{q}^2$, and the characteristic $1/s$ dependence. For the virtual $Z$ process, the vertex factors will yield an overall cross section factor of $X_q X_e g_{Z}^2$, where the shorthand $X_{f}\equiv (c_{V}^{f} )^2 + (c_{A}^{f} )^2$ is used. Because the $Z$ is unstable (whereas the $\gamma$ is not) the $Z$ propagator must also be modified, $m_{Z}^2 c^2 \rightarrow m_{Z}^{2}c^2 - i \hbar m_{Z} \Gamma_{Z}$, to account for the fact that the $Z$ decays with decay rate $\Gamma_{Z}>0$.
The total cross sections are given by
\begin{eqnarray}
\sigma_{e\bar{e} \rightarrow \gamma^{\star} \rightarrow q\bar{q}} & = & 3 Q_{q}^2 \frac{4 \pi \alpha^2}{3s} \\
\sigma_{e\bar{e} \rightarrow Z^{\star}\rightarrow q\bar{q}} & = & X_{q} X_{e} \frac{g_{Z}^2 }{192 \pi} \frac{s}{(s-m_{Z}^2)^2 + m_{Z}^{2} \Gamma_{Z}^{2}}
\end{eqnarray}
\noindent For $\sqrt{s} \ll m_{Z}$, the total cross section is dominated by the virtual photon process, but closer to the $Z$ mass the virtual $Z$ diagram dominates.
Note that the cross section diverges if the $Z$ is stable, \emph{i.e.} its decay width $\Gamma_Z=0$. Indeed all unstable particles have their propagators modified in this way. The $Z$ cross section is an example of a \emph{Breit-Wigner} cross section, characteristic of \emph{resonances} with very small lifetimes $1/\Gamma$. The full width at half maximum of a Breit-Wigner resonance is just the width $\Gamma$. See fig. \ref{fig:eetohad} for the total cross section for $e^+ e^- \rightarrow \sum_q q\bar{q}$ \emph{vs.} $\sqrt{s}$ together with experimental data. Breit-Wigner meson resonances $u\bar{u},d\bar{d},s\bar{s},c\bar{c},b\bar{b}$, along with the $Z$ resonance, lie atop a nonresonant component with a $1/s$ dependence.
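The width-resonance relationship can be verified numerically. The sketch below uses the standard relativistic Breit-Wigner shape $\sigma \propto s/[(s-m_Z^2)^2+m_Z^2\Gamma_Z^2]$ with rounded $Z$ parameters (overall normalization dropped) and measures the full width at half maximum directly:

```python
m_z, gamma_z = 91.2, 2.5   # GeV, rounded

def bw(E):
    """Breit-Wigner shape vs. sqrt(s), normalization dropped."""
    s = E * E
    return s / ((s - m_z**2)**2 + (m_z * gamma_z)**2)

# Scan sqrt(s) around the peak and measure the FWHM numerically.
Es = [80.0 + i * 0.001 for i in range(20001)]   # 80..100 GeV
vals = [bw(E) for E in Es]
half = max(vals) / 2
above = [E for E, v in zip(Es, vals) if v >= half]
fwhm = above[-1] - above[0]
print(fwhm)  # close to gamma_z = 2.5
```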
\textbf{Decay.} Consider the decay rate $\Gamma$ of an unstable particle. Elementary particle decay is a purely statistical process, and occurs without regard for the history of the particle. For $N$ particles the small change in $dN$ in a small amount of time $dt$ is $dN=-N \Gamma dt$, from which it follows that $N(t)=N_0 \exp(-\Gamma t)$ and the mean \emph{lifetime} of a single particle is $\tau=1/\Gamma$. In general unstable particles may decay to a variety of final states, so the total decay rate is $\Gamma=\sum_f \Gamma_{f}$ where the sum is over all final states. The \emph{branching ratio} for an unstable particle to a particular final state $f$ is $BR(i \rightarrow f)=\Gamma_f/\Gamma$.
There is a natural connection between the decay rate $\Gamma$ of a particle and the uncertainty in its mass through the Heisenberg Uncertainty Principle, $\Delta E \Delta t \geq \hbar/2$. A short-lived particle with mass $m$ and mass uncertainty $\Delta E/c^2$ may exist for a short lifetime $\Delta t=1/\Gamma$, so at the minimum $\Delta E = \hbar \Gamma/2$. The mass will be in the interval $(m-\Delta E/c^2,m+\Delta E/c^2)$, so the \emph{natural width} of the particle is $2\Delta E/c^2=\hbar \Gamma$, or in natural units the width is $\Gamma$, the decay rate, with units of energy. Hereafter the terms decay width and decay rate are used interchangeably.
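In these units $\tau = \hbar/\Gamma$ with $\hbar \approx 6.582\times10^{-25}$~GeV$\cdot$s. A quick check (Python sketch) using the measured total $Z$ width and the predicted total Higgs width quoted in the tables of this section:

```python
hbar = 6.582e-25  # reduced Planck constant, GeV*s

def lifetime(width_gev):
    """Mean lifetime tau = hbar / Gamma for total width Gamma (GeV)."""
    return hbar / width_gev

tau_Z = lifetime(2.4952)   # measured Z width, GeV
tau_H = lifetime(4.07e-3)  # predicted Higgs width, GeV
print(f"tau_Z = {tau_Z:.2e} s, tau_H = {tau_H:.2e} s")
```

Both lifetimes are far too short to observe directly, which is why widths rather than lifetimes are quoted for the heavy bosons.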
The only leptons which decay are the $\mu$ and the $\tau$, both by virtual $W$ emission, the former via $\mu \rightarrow e \bar{\nu}_e \nu_{\mu}$ with branching ratio near unity, the latter in a plethora of final states. The top quark decays via $t \rightarrow bW$ before hadronization can occur, with branching ratio near unity. All hadrons except the proton are unstable; weak hadron decays proceed \emph{via} virtual $W$ emission from a quark within the hadron. The reader is referred to the PDG \cite{Tanabashi:2018oca} for $\tau$ and hadron partial decay widths and branching ratios. We consider here partial widths and branching ratios of the bosons $W,Z,H$.
For the $W$, the decay is either to leptons $\ell \nu_{\ell}$ or quark pairs $q_i \bar{q}_j$. For the $Z$, the decay is either to lepton pairs $\ell^+ \ell^-$ and $\nu_{\ell} \bar{\nu}_{\ell}$ or quark pairs $q\bar{q}$. Applying the Feynman rules with vertex factors from fig. \ref{fig:vertices} yields
\begin{eqnarray}
\Gamma_{W \rightarrow \ell \bar{\nu},q_{i} \bar{q}_{j}} & = & \frac{\sqrt{2} G_{F} m_{W}^3}{12\pi} \times \left(1, 3 \vert V_{ij}\vert^2 \right) \\
\Gamma_{Z \rightarrow \ell^+ \ell^-, \nu \bar{\nu}, q\bar{q}} & = & \frac{\sqrt{2} G_F m_{Z}^3}{12 \pi} \times \left( X_{\ell}, X_\nu, 3X_q \right)
\end{eqnarray}
\noindent where $G_F\equiv \sqrt{2}g_{W}^{2}/8m_{W}^2$ is Fermi's constant, which absorbs the vertex factors. The extra factors $3$ appear for quarks because they carry an extra three degrees of freedom: strong color $r,g,$ or $b$. Leptons $\ell_i \bar{\nu}_{i}$ carry no color charge. See Table \ref{tab:wzbr} for the measured decay rates and branching ratios for the $W$ and $Z$.
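The $W$ formula can be checked numerically (Python sketch; $m_W = 80.38$~GeV and $G_F = 1.1664\times10^{-5}$~GeV$^{-2}$ are PDG inputs, and the CKM sum over the two kinematically open generations is approximated by unitarity):

```python
import math

G_F = 1.1664e-5  # Fermi constant, GeV^-2
m_W = 80.38      # W mass, GeV

# Per-flavor leptonic width at first order
gamma_lep = math.sqrt(2.0) * G_F * m_W**3 / (12.0 * math.pi)

# Hadronic width: color factor 3 times the CKM sum over the two
# kinematically open generations, sum |V_ij|^2 ~ 2 by unitarity
gamma_had = 3.0 * 2.0 * gamma_lep

print(f"Gamma(W -> l nu)    = {gamma_lep:.3f} GeV (table: 0.226)")
print(f"Gamma(W -> hadrons) = {gamma_had:.3f} GeV (table: 1.41)")
```

The first-order leptonic width lands within a percent of the measured value; the hadronic width is a few percent low, the gap being closed by QCD corrections.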
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|} \hline
Decay & $\Gamma_{f}$ (GeV) & BR (\%)\\ \hline
$W \rightarrow \ell \nu_{\ell}$ & 0.226 & $10.86\pm 0.09$ \\
$W \rightarrow$ hadrons & 1.41 & $67.41 \pm 0.27$ \\ \hline
$Z \rightarrow \ell^+ \ell^-$ & 0.08398 & $3.3658 \pm 0.0023$\\
$Z \rightarrow$ invisible & 0.4990 & $20.000 \pm 0.055$\\
$Z \rightarrow$ hadrons & 1.744 & $69.911 \pm 0.056$ \\ \hline
\end{tabular}
\caption{Measured $W,Z$ boson partial widths and branching ratios. Partial width uncertainties are suppressed. Total $\Gamma_W=2.085\pm0.042$~GeV, $\Gamma_Z=2.4952\pm0.0023$~GeV. From the PDG \cite{Tanabashi:2018oca}.}
\label{tab:wzbr}
\end{center}
\end{table}
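The entries of Table \ref{tab:wzbr} illustrate $BR(i\rightarrow f)=\Gamma_f/\Gamma$ directly; a minimal Python check against the quoted total width $\Gamma_Z = 2.4952$~GeV:

```python
Gamma_Z_total = 2.4952  # GeV, measured total Z width
partial = {"l+l-": 0.08398, "invisible": 0.4990, "hadrons": 1.744}  # GeV

for mode, gamma_f in partial.items():
    br = 100.0 * gamma_f / Gamma_Z_total
    print(f"BR(Z -> {mode:9s}) = {br:6.3f}%")
```

The computed ratios reproduce the tabulated branching ratios to the quoted precision.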
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
Decay & $\Gamma_{f}$/MeV & BR/\% & $\mu/\mu_{SM}$\\ \hline
$H \rightarrow b\bar{b}$ & 2.35 & $57.7^{+3.21}_{-3.27}$ & $1.02^{+0.15}_{-0.15}$ \\
$H \rightarrow WW^{\star}$ & 0.875 & $21.5^{+4.26}_{-4.20}$ & $ 1.08^{+0.18}_{-0.16} $\\
$H \rightarrow gg$ & 0.349 & $8.57^{+10.22}_{-9.98}$ & -\\
$H \rightarrow \tau^+ \tau^-$ & 0.257 & $6.32^{+5.71}_{-5.67}$ & $1.11^{+0.17}_{-0.17}$\\
$H \rightarrow c\bar{c}$ & 0.118 & $2.91^{+12.17}_{-12.21}$ & -\\
$H \rightarrow ZZ^{\star}$ & 0.107 & $2.64^{+4.28}_{-4.21}$ & $1.19^{+0.12}_{-0.11} $\\
$H \rightarrow \gamma \gamma$ & 0.00928 & $0.228^{+4.98}_{-4.89}$ & $1.10^{+0.10}_{-0.09}$ \\
$H \rightarrow \mu^+ \mu^-$ & 0.000891 & $0.0219^{+6.01}_{-5.86}$ & $0.6^{+0.8}_{-0.8}$ \\ \hline
Combined & 4.07 & 100.0 & $1.10 \pm 0.11$ \\ \hline
\end{tabular}
\caption{Theoretical Higgs boson partial widths (uncertainties suppressed) and branching ratios for $m_{H}=125$~GeV at highest current order from the LHC Higgs Cross Section Working Group \cite{Dittmaier:2011ti,Dittmaier:2012vm,Heinemeyer:2013tqa}, and the measured signal strength relative to the SM $\mu/\mu_{SM}$ from the PDG \cite{Tanabashi:2018oca}. }
\label{tab:hbr}
\end{center}
\end{table}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{feynman_ilc.pdf}
\caption{Feynman diagrams for some main signals and backgrounds at the ILC. At far left, the $s$-channel diagrams for fermion pair production (top) and Higgstrahlung (bottom). The remaining $t$-channel diagrams, from left to right, show $WZ$ fusion production of a single $W$ (top), $ZZ$ fusion production of a single $H$ (bottom), $WW$ fusion production of a single $Z$ (top) and $H$ (bottom), and diboson production of $ZZ$ (top) and $WW$ (bottom).}
\label{fig:ilcdiagrams}
\end{center}
\end{figure*}
Now consider decay rates of the Higgs boson to fermion pairs $f\bar{f}$ and gauge boson pairs $ZZ,W^+W^-,gg,\gamma \gamma$. For fermion pairs the Feynman diagram has a single vertex and the amplitude at first order is simply the vertex factor since there are no internal momenta. The results at first order are:
\begin{eqnarray}
\Gamma_{H \rightarrow f\bar{f}} & = & n_{c} \frac{G_F m_{f}^2 m_{H}}{4 \pi \sqrt{2}} \left[ 1 - \frac{4m_{f}^2}{m_{H}^2} \right] ^{3/2}
\end{eqnarray}
\noindent where $n_c=1$ for leptons and $n_c=3$ for quarks.
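A first-order evaluation (Python sketch) shows both the utility and the limits of this formula: for the $\tau$ lepton it lands close to the tabulated width, while for $b\bar{b}$ the choice of quark mass matters, and the higher-order QCD corrections, largely captured by running $m_b$ to the Higgs scale, are needed to approach the tabulated 2.35~MeV. The mass values used are assumed inputs, not from the text:

```python
import math

G_F = 1.1664e-5  # Fermi constant, GeV^-2
m_H = 125.0      # Higgs mass, GeV

def gamma_ffbar(m_f, n_c):
    """First-order H -> f fbar partial width in GeV."""
    beta3 = (1.0 - 4.0 * m_f**2 / m_H**2) ** 1.5
    return n_c * G_F * m_f**2 * m_H / (4.0 * math.pi * math.sqrt(2.0)) * beta3

# b quark with pole mass, with an approximate running mass at the Higgs
# scale, and the tau lepton
print(f"H->bb, m_b=4.18 GeV: {1e3 * gamma_ffbar(4.18, 3):.2f} MeV")
print(f"H->bb, m_b~2.8 GeV:  {1e3 * gamma_ffbar(2.8, 3):.2f} MeV (table: 2.35)")
print(f"H->tautau, m=1.777:  {1e3 * gamma_ffbar(1.777, 1):.3f} MeV (table: 0.257)")
```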
For the $ZZ$ and $W^+W^-$ diagrams the amplitudes are $\mathcal{M} \propto -2i m_{Z}^2/v,-2i m_{W}^2/v$. The results for $V=Z,W$ at first order are
\begin{eqnarray}
\Gamma_{H \rightarrow VV} & = & S \frac{G_F}{8\pi \sqrt{2}} m_{H}^3 (1-4 \lambda_{V})^{1/2} (12 \lambda_{V}^2-4 \lambda_V +1)
\label{eqn:offshell}
\end{eqnarray}
\noindent where $\lambda_V = (m_V/m_H)^2$ and $S=1$ for $V=W$ and $S=1/2$ for $V=Z$. Decay rates to offshell gauge bosons $ZZ^{\star}$ and $WW^{\star}$ are complicated because one gauge boson is virtual, since $m_{H}<2m_{V}$, and the decay rates in eqn. \ref{eqn:offshell} must be adjusted by phase space factors before comparison to experiment.
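The physical Higgs at 125~GeV lies below both thresholds, so eqn. \ref{eqn:offshell} cannot be evaluated directly for it; as an illustration, the sketch below evaluates it for a hypothetical 200~GeV Higgs, for which both channels are on shell:

```python
import math

G_F = 1.1664e-5                 # Fermi constant, GeV^-2
m_V = {"W": 80.38, "Z": 91.19}  # gauge boson masses, GeV
S = {"W": 1.0, "Z": 0.5}        # symmetry factor

def gamma_VV(V, m_H):
    """First-order H -> VV width (GeV); valid only for m_H > 2 m_V."""
    lam = (m_V[V] / m_H) ** 2
    return (S[V] * G_F / (8.0 * math.pi * math.sqrt(2.0)) * m_H**3
            * math.sqrt(1.0 - 4.0 * lam) * (12.0 * lam**2 - 4.0 * lam + 1.0))

for V in ("W", "Z"):
    print(f"Gamma(H->{V}{V}) at m_H = 200 GeV: {gamma_VV(V, 200.0):.2f} GeV")
```

The GeV-scale widths for a heavy Higgs, compared with the MeV-scale total width at 125~GeV, show how strongly the $m_H^3$ dependence and the onshell thresholds shape the decay pattern.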
Finally, decays to $gg$ and $\gamma \gamma$ do not occur at first order, and require integrations over internal momenta in top quark loops. The vertex factors are $g_{s}^2 m_{t}/v$ and $g_{e}^2 m_{t}/v$:
\begin{eqnarray}
\Gamma_{H \rightarrow gg,\gamma \gamma} & = & \frac{G_F m_{H}^3}{36 \sqrt{2} \pi} \left[ \frac{\alpha_s}{\pi} \right]^2 \vert I \vert^2, \frac{G_F m_{H}^3}{8 \sqrt{2} \pi} \left[ \frac{\alpha}{\pi} \right]^2 \vert I \vert^2
\end{eqnarray}
\noindent where $\vert I \vert^2 \approx 1$ contains the $m_t$ dependence.
As noted, these contributions to decay rates only represent the first order at which they occur. Higher order corrections can be large. See Table \ref{tab:hbr} for the calculated Higgs boson decay rates for $m_{H}=125$~GeV at the current highest order together with the currently measured signal strengths.
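A first-order evaluation of the $gg$ width (Python sketch; $\alpha_s \approx 0.113$ near the Higgs mass scale is an assumed input) gives roughly half the tabulated value, illustrating just how large the higher-order QCD corrections are:

```python
import math

G_F = 1.1664e-5  # Fermi constant, GeV^-2
m_H = 125.0      # Higgs mass, GeV
alpha_s = 0.113  # strong coupling near the Higgs mass scale (assumed)

# First-order H -> gg width with |I|^2 ~ 1
gamma_gg = (G_F * m_H**3 / (36.0 * math.sqrt(2.0) * math.pi)
            * (alpha_s / math.pi) ** 2)
print(f"Gamma(H -> gg) at first order: {1e3 * gamma_gg:.3f} MeV (table: 0.349)")
```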
\subsection{ILC signal and background}
All processes at the ILC can be classified according to the number of fermions $f$ in their final state after boson decay. Thus $e^+e^- \rightarrow f\bar{f}$ is a 2f process, while $e^+e^- \rightarrow ZZ,WW$ are 4f processes. If the beam electron or positron splits $e \rightarrow \gamma e$, or if both split, then the initial state may contain one or two photons. Thus $\gamma e \rightarrow \gamma e$ is a 1f process, while $\gamma e \rightarrow e Z, \nu W$ are 3f processes. Processes 2f,4f also arise from $\gamma \gamma$ initial states: $\gamma \gamma \rightarrow f\bar{f},WW$. See fig. \ref{fig:ilcdiagrams} for the Feynman diagrams of some of the main 2f,4f,6f signals and backgrounds at the ILC.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{higgsplot.pdf}
\caption{Cross sections for Higgstrahlung, $WW$ fusion, $ZZ$ fusion, $t\bar{t}$ associated production and double $H$ production with $m_{H}=125$~GeV. ISR is included. Solid lines indicate polarized beams with 80\% lefthanded electrons and 30\% righthanded positrons. Dashed lines indicate 80\% righthanded electrons and 30\% lefthanded positrons. Obtained with Whizard 2.6.4 \cite{Kilian:2007gr}.}
\label{fig:hxsec}
\end{center}
\end{figure*}
\textbf{Signal.} The Higgs boson can be produced singly at the ILC in four ways: $e^+ e^- \rightarrow ZH$ (\emph{Higgstrahlung}), $e^+ e^- \rightarrow \nu \bar{\nu} H$ (\emph{WW fusion}), $e^+ e^- \rightarrow e^+ e^- H$ (\emph{ZZ fusion}) and $e^+ e^- \rightarrow t\bar{t}H$ (\emph{$t\bar{t}$} associated). Higgs bosons may also be produced doubly in more rare processes, $e^+ e^- \rightarrow ZHH$ (\emph{double Higgstrahlung}) and $e^+ e^- \rightarrow \nu \bar{\nu} HH$ (\emph{double WW fusion}). In double Higgstrahlung the triple Higgs coupling $HHH$ is accessible, while in double $WW$ fusion the $HHWW$ coupling is accessible. Associated $t\bar{t}$ production and double Higgs production are only available at or above $\sqrt{s}=500$~GeV.
Higgstrahlung is an $s$-channel process in which the $H$ is radiated from a $Z$. Higgstrahlung turns on near threshold at $\sqrt{s} \approx m_{Z}+m_{H}$ with the $e_{R}^{+}e_{L}^{-}$ cross section rapidly reaching a maximum of $\sigma_{ZH} \approx 300$~fb near $\sqrt{s} \approx 250$~GeV. Thereafter it decreases with the characteristic $1/s$ dependence of an $s$-channel process, reaching $\sigma_{ZH} \approx 200$~fb (100~fb) near $\sqrt{s}=350$~GeV (500~GeV). The $e_{L}^{+}e_{R}^{-}$ cross section is approximately 2/3 of the $e_{R}^{+}e_{L}^{-}$ cross section.
Vector boson ($ZZ$ or $WW$) fusion production is a $t$-channel process in which a $Z$ or a $W$ is exchanged and the large $ZZH$ and $WWH$ couplings produce a Higgs boson. $WW$ fusion turns on at threshold and the $e_{R}^{+}e_{L}^{-}$ cross section rises to $\sigma_{\nu \nu H} \approx$ 37~fb (72~fb,162~fb) at $\sqrt{s} \approx 250$~GeV (350~GeV,500~GeV). The $ZZ$ fusion $e_{R}^{+}e_{L}^{-}$ cross section rises to $\sigma_{eeH} \approx$ 11~fb (10~fb,12~fb) at $\sqrt{s} \approx 250$~GeV (350~GeV,500~GeV). The $e_{L}^{+}e_{R}^{-}$ cross section is approximately 2/3 of the $e_{R}^{+}e_{L}^{-}$ cross section for processes involving a $Z$ boson but considerably smaller for processes involving a $W$ boson.
See fig. \ref{fig:hxsec} for signal cross sections \emph{vs.} $\sqrt{s}$, and Table \ref{tab:ilcxsec} for signal cross sections at $\sqrt{s}=250,350,500$~GeV in the Higgstrahlung and vector boson fusion production channels assuming the nominal ILC design beam polarizations.
\textbf{Background.} See fig. \ref{fig:ilcxsec} for the 2f,4f,6f background cross sections \emph{vs.} $\sqrt{s}$ assuming unpolarized beams. Since the main Higgs boson production processes are 4f or 6f, depending on its decay, and 2f backgrounds are fairly straightforwardly suppressed, the 4f and 6f backgrounds are the most important to consider here.
\begin{figure*}[p]
\begin{center}
\vspace{-1in}
\includegraphics[width=1.\textwidth]{ilccross.pdf}
\vspace{-0.5in}
\caption{Total cross sections for $e^+ e^-$ to various SM background final states \emph{vs.} $\sqrt{s}$ with unpolarized beams and without ISR or beamstrahlung. The cross section for $\sum q\bar{q}$ below $\sqrt{s} \approx 100$~GeV is the same as fig. \ref{fig:eetohad}. Credit: ref. \cite{Murayama:1996ec}.}
\label{fig:ilcxsec}
\end{center}
\end{figure*}
Beams at the ILC will be polarized. By polarizing the electrons and positrons $e_{L}^{+}e_{R}^{-}$ or $e_{R}^{+}e_{L}^{-}$ a process involving a $W$ boson can be turned on or off: if the process requires the $W$ to couple to a $\nu_R$ or $\bar{\nu}_{L}$ then it does not occur. Because it is not possible to polarize 100\% of electrons or positrons, some fraction of each beam will not have the desired polarization. Hereafter we quote cross sections for 30\% polarized positron beams and 80\% polarized electron beams, the ILC design goal. For both signal and background, cross sections are higher for $e_{R}^{+}e_{L}^{-}$ than for $e_{L}^{+}e_{R}^{-}$ with the nominal polarization fractions, but in the case of background processes the difference is more dramatic.
See Table \ref{tab:ilcxsec} for 2f, 4f, 6f background cross sections for polarized $e^+e^-$ beams at $\sqrt{s}=$250,350,500 GeV calculated with Whizard 2.6.4 \cite{Kilian:2007gr}. No requirements have been imposed, except for the process $\gamma \gamma \rightarrow W^+ W^-$, for which a minimum $t$-channel momentum transfer requirement $q^2>1$~MeV and $e \rightarrow e\gamma$ splitting function $x>0.001$ are imposed in order to prevent divergence. Systematic uncertainties reported by Whizard are typically below 1\% but are in a few cases of order 10\%. More information on ILC backgrounds, including those with initial states $e\gamma$ and $\gamma \gamma$, can be found in \cite{Potter:2017rlo}, where the generator MG5 aMC@NLO \cite{Alwall:2014hca} has been used instead of Whizard.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline
Process & \multicolumn{2}{|c|}{$\sqrt{s}=250$~GeV} & \multicolumn{2}{|c|}{$\sqrt{s}^{\star}=250$~GeV} & \multicolumn{2}{|c|}{$\sqrt{s}=350$~GeV} & \multicolumn{2}{|c|}{$\sqrt{s}=500$~GeV} \\
& $+,-$ & $-,+$ & $+,-$ & $-,+$ & $+,-$ & $-,+$ & $+,-$ & $-,+$ \\ \hline \hline
$e^+ e^- \rightarrow ZH$ & 0.313 & 0.211 & 0.297 & 0.200 & 0.198 & 0.134 & 0.096 & 0.064 \\
$e^+ e^- \rightarrow \nu \bar{\nu} H$ & 0.037 & 0.015 & 0.034 & 0.014 & 0.072 & 0.012 & 0.162 & 0.014\\
$e^+ e^- \rightarrow e^+ e^- H$ & 0.011 & 0.007 & 0.010 & 0.007 & 0.010 & 0.006 & 0.012 & 0.007\\ \hline
$e^+ e^- \rightarrow b\bar{b}$ & 15.4 & 8.87 & 16.3 & 9.44 & 7.52 & 4.34 & 3.72 & 2.14 \\
$e^+ e^- \rightarrow c\bar{c}$ & 15.5 & 9.64 & 16.5 & 10.7 & 7.76 & 5.03 & 3.97 & 2.42\\
$e^+ e^- \rightarrow u\bar{u},d\bar{d},s\bar{s}$ & 47.0 & 28.0 & 49.5 & 29.9 & 23.1 & 13.6 & 11.3 & 6.92\\
$e^+ e^- \rightarrow \tau^+ \tau^-$ & 6.10 & 4.74 & 6.36 & 5.02 & 3.02 & 2.43 & 1.57 & 1.21\\
$e^+ e^- \rightarrow \mu^+ \mu^-$ & 6.19 & 4.63 & 6.43 & 5.15 & 3.00 & 2.48 & 1.50 & 1.20\\ \hline
$e^+ e^- \rightarrow WW$ & 37.5 & 2.58 & 37.9 & 2.62 & 27.1 & 1.79 & 17.9 & 1.15\\
$e^+ e^- \rightarrow e^{\pm} \nu W^{\mp}$ & 10.2 & 0.109 & 10.4 & 0.108 & 10.1 & 0.134 & 10.9 & 0.215\\
$e^+ e^- \rightarrow e^+ e^- Z$ & 2.51 & 2.63 & 2.38 & 2.13 & 2.64 & 2.23 & 2.64 & 3.04\\
$e^+ e^- \rightarrow ZZ$ & 1.80 & 0.827 & 1.82 & 0.837 & 1.20 & 0.552 & 0.761 & 0.348 \\
$e^+ e^- \rightarrow \nu \bar{\nu} Z$ & 0.354 & 0.117 & 0.347 & 0.117 & 0.470 & 0.092 & 0.780 & 0.088 \\ \hline
$e^+ e^- \rightarrow t\bar{t}$ & 0 & 0 & 0 & 0 & 0.267 & 0.117 & 0.890 & 0.421 \\
$e^+ e^- \rightarrow WWZ$ & 0 & 0 & 0 & 0 & 0.024 & 0.002 & 0.083 & 0.006 \\
$e^+ e^- \rightarrow ZZZ$ & 0 & 0 & 0 & 0 & 0.001 & 0.000 & 0.002 & 0.001 \\ \hline
\end{tabular}
\caption{Cross sections (in pb) for some signal and background processes at the ILC for $\sqrt{s}=$250, 350, 500~GeV. ISR is included. For $\sqrt{s}^{\star}$ beamstrahlung is included, otherwise not. Beam polarization $+,-$ indicates 30\% righthanded positrons, 80\% lefthanded electrons while $-,+$ indicate 30\% lefthanded positrons and 80\% righthanded electrons. Obtained with Whizard 2.6.4 \cite{Kilian:2007gr}. }
\label{tab:ilcxsec}
\end{center}
\end{table*}
The cross sections in Table \ref{tab:ilcxsec} include an important effect: \emph{initial state radiation} (ISR). In a Feynman diagram with initial state $e^+e^-$ a photon may attach to either electron or positron. The photon carries away energy, effectively lowering the center of mass energy of the $e^+e^-$ system, subjecting the interacting particles to a cross section for a lower $\sqrt{s}$ than the nominal $\sqrt{s}$ of the beams. Thus ISR effectively increases the cross section for a process with a decreasing cross section \emph{vs.} $\sqrt{s}$, and decreases it for a process with an increasing cross section \emph{vs.} $\sqrt{s}$. The probability for ISR to occur and the resulting change in cross section is folded into the cross sections reported by Whizard. The effect can be dramatic: in \emph{radiative return} to the $Z$, including ISR at $\sqrt{s}=250$~GeV increases the $e^+ e^- \rightarrow q\bar{q}$ cross section fivefold.
As will be discussed in the next section, beam particles are bunched together and the \emph{bunches} are spaced discretely. One side effect is \emph{beamstrahlung}, photon radiation from electron or positron in the $e^+e^-$ system induced by the field of an oncoming bunch. The effect is similar to ISR: the effective $\sqrt{s}$ of the $e^+e^-$ system is lowered somewhat. For $\sqrt{s}=250$~GeV in Table \ref{tab:ilcxsec}, cross sections are reported both with beamstrahlung and without. Beamstrahlung is sensitive to the details of the beam parameters, and for the case shown in Table \ref{tab:ilcxsec} the parameters for the staged ILC250 \cite{Evans:2017rvt} are assumed.
Another side effect of bunching beam particles is \emph{pileup}. For signal processes and even many background processes, cross sections are low enough such that the probability of two overlaid events per bunch crossing is very low. However some background processes, like $t$-channel $\gamma \gamma \rightarrow e^+ e^-,q\bar{q}$, have high enough cross sections that they can contaminate the nominal event. The effect is to overlay the nominal event with one or more $e^+e^-$ or $q\bar{q}$ pairs. If pileup events are reconstructed correctly these 2f pairs are easily suppressed, but pileup events introduce problematic ambiguities in event reconstruction. These interactions are described in \cite{Schulte:1997nga}.
\subsection{Further reading and exercises}
For the SM, see \emph{Introduction to the Standard Model of Particles Physics} (Cottingham and Greenwood) \cite{Cottingham:396082} for a concise and elegant introduction. The chapter on nonrelativistic quantum scattering in \emph{Introduction to Quantum Mechanics} (Griffiths) \cite{Griffiths2004Introduction} is very good.
For more depth and rigor, see \emph{Introduction to Elementary Particles} (Griffiths) \cite{Griffiths:2008zz}, \emph{Gauge Theories of the Strong, Weak, and Electromagnetic Interactions} (Quigg) \cite{Quigg:1983gw}, \emph{Quarks and Leptons} (Halzen and Martin) \cite{Halzen:1984mc} and \emph{Collider Physics} (Barger) \cite{Barger:1987nn}. For quantum field theory, see \emph{An Introduction to Quantum Field Theory} (Peskin and Schroder) \cite{Peskin:1995ev}.
The Particle Data Group review articles \emph{The Standard Model and Related Topics}, \emph{Kinematics, Cross Section Formulae and Plots} and \emph{Particle Properties} \cite{Tanabashi:2018oca} are invaluable. Particle physics continues to evolve, and the most recent and precise measurements can be found under the PDG \emph{Summary Tables} \cite{Tanabashi:2018oca}.
Exercises for this section can be found in sect. \ref{sec:higgs} of Appendix \ref{appendix1}.
\section{ILC: accelerators and detectors}
\subsection{Historical perspective}
\begin{table*}
\begin{center}
\begin{tabular}{|l|c|c|} \hline
Year & Recipient & Reason Given By Nobel Committee\\ \hline \hline
1984 & Carlo Rubbia* & “discovery of the field particles W and Z” \\
1979 & Steven Weinberg* & “theory of the unified weak and electromagnetic interaction” \\
1976 & Burton Richter* & “discovery of a heavy elementary particle of a new kind” \\
1969 & Murray Gell-Mann & “classification of elementary particles and their interactions” \\
1968 & Luis Alvarez & “technique of using hydrogen bubble chamber and data analysis” \\
1965 & Richard Feynman* & “fundamental work in quantum electrodynamics” \\
1961 & Robert Hofstadter & “discoveries concerning the structure of the nucleons” \\
1960 & Donald Glaser & “for the invention of the bubble chamber” \\
1945 & Wolfgang Pauli & “for the discovery of the Exclusion Principle” \\ \hline
1939 & E.O. Lawrence & “for the invention and development of the cyclotron”\\
1938 & Enrico Fermi & “demonstrations of the existence of new radioactive elements” \\
1936 & Carl Anderson & “for his discovery of the positron” \\
1935 & James Chadwick & “for the discovery of the neutron” \\
1933 & Paul Dirac* & “for the discovery of new productive forms of atomic theory” \\
1927 & Charles Wilson* & “making the paths of electrically charged particles visible”\\
1922 & Niels Bohr & “structure of atoms and of the radiation emanating from them”\\
1921 & Albert Einstein & “discovery of the law of the photoelectric effect”\\
1906 & J.J. Thomson & “conduction of electricity by gases”\\ \hline
\end{tabular}
\caption{Nobel Prizes in Physics awarded to physicists discussed in this section. Adapted from the Nobel Prize webpage \cite{nobelpage}. Asterisked names had co-recipients.}
\label{tab:nobel}
\end{center}
\end{table*}
In the previous section, the particles and interactions of the Standard Model (SM) were presented as an ahistorical \emph{fait accompli}, apart from mentioning where and when some particles were discovered. Such a presentation belies the metaphorical - and sometimes real - blood, sweat and tears of many physicists, both experimental and theoretical, over many decades, as well as the considerable cost of designing, building and operating the technology which provides the experimental foundation for the SM. See Table \ref{tab:nobel} for the physicists discussed in this section who were awarded the Nobel Prize in Physics.
The danger of this approach is in underestimating the magnitude of both the cost and the socio-technological challenge of building the ILC and its detectors. Before turning to the fundamentals of accelerators and detectors, we briefly remedy this shortcoming. The history of particle physics in the 20th century is a steady progression to higher energies: resolving smaller structures requires shorter de Broglie wavelengths $\lambda=h/p$, and hence probes of ever increasing momentum. At the beginning of the 20th century, probes from cathodes or radioactive nuclear decay were sufficient for tabletop discoveries, but as the century progressed more complex and expensive technology was required.
For the first generation of SM fermions, tabletop experiments carried out by one experimentalist, aided by a few assistants, were sufficient for major discoveries. J.J. Thomson discovered the electron in 1897 with a cathode ray tube, a simple handheld evacuated glass tube with low voltages for electron emission, acceleration and deflection. The detector was the glass tube itself. The experiment of Geiger and Marsden which led Ernest Rutherford to the discovery of the atomic nucleus in 1911 was a simple setup of a radium source of incident alpha particles, a lead collimator, gold foil providing heavy nuclei targets and a phosphorescent zinc sulfide screen for a detector.
Similarly, the discovery of the photon as the gauge boson which mediates the electromagnetic interaction occurred with considerable theoretical energy on the part of James Clerk Maxwell, Max Planck and Albert Einstein but, by today's standards, negligible cost and simple experimental technology. The photoelectric effect, blackbody spectrum, Compton scattering and Franck-Hertz experiments are easily demonstrated in beginning undergraduate physics courses. By the time the results of inexpensive spectroscopic experiments were being used by Niels Bohr and others to work out how the electron, nucleus and photon form the nonrelativistic quantum atom, the energies and event rates of the tabletop experiments were becoming insufficient for new discoveries. The tabletop experiments of nuclear beta decay, from which Wolfgang Pauli inferred the existence of the neutrino in 1930, and of James Chadwick, used to discover the neutron in 1932, were some of the last.
The first step away from the tabletop experiment came when physicists looked not to the earth for electrons traversing a voltage difference or nuclear fragments escaping a disintegrating nucleus, but to the heavens for a new source of energetic particles: secondary showers of particles created by collisions of highly energetic cosmic rays (protons or atomic nuclei) with atoms in the atmosphere. Like the tabletop experiments using radioactive nuclei, cosmic ray experiments could not provide a uniform energy or intensity, but the energies could be orders of magnitude larger than in the tabletop experiments and the event rates were large enough for new discoveries by sufficiently patient physicists.
The year the neutron was discovered, 1932, was also the year the positron $e^+$ was discovered among cosmic secondaries by Carl Anderson in a detector known as a cloud chamber, invented by Charles Wilson in 1911. The positron is the antimatter version of the electron $e^-$ first predicted by Paul Dirac with his fully relativistic quantum mechanics in 1931. The cloud chamber is an enclosed device filled with supersaturated water or alcohol vapor which, when traversed by a charged particle, exhibits a visible track due to condensation centers made by ions created from the traversing charged particle. If a magnetic field is applied, the momentum can be inferred from the radius of curvature of the track. A few years after the positron was discovered, the charged pions $\pi^{\pm}$ and muons $\mu^{\pm}$ were discovered in photographic emulsions exposed to cosmic rays in the Bolivian Andes.
The neutral and charged kaons $K^0,K^{\pm}$ were also discovered in cosmic secondaries in 1947 in cloud chambers. The kaons were inferred from their visible decays $K^0 \rightarrow \pi^+ \pi^-$ and $K^+ \rightarrow \pi^+ \pi^- \pi^+$, unlike any known particle, and dubbed \emph{strange} particles. Shortly thereafter the meson discoveries were confirmed in accelerator experiments at Berkeley and Brookhaven (see below). Thereafter few major SM discoveries took place without accelerators, which provide the experimentalist with control over both the energies and event rates of their experiments.
\begin{figure*}[t]
\begin{center}
\framebox{\includegraphics[width=0.3\textwidth, height=2in]{cyclotron.pdf}
\includegraphics[width=0.64\textwidth, height=2in]{linac.pdf}}
\caption{Schematic diagrams of a cyclotron (left) and linear accelerator (right). Encircled dots indicate constant magnetic fields perpendicular to the page in the cyclotron dees. Arrows indicate electric fields which oscillate in direction and magnitude due to the applied alternating current. Successively longer drift tubes after each linac voltage kick correspond to successively longer semicircles in the dees after each cyclotron voltage kick.}
\label{fig:linac}
\end{center}
\end{figure*}
But with experimental control comes the cost of the technology required for it, as well as a new scale of scientific cooperation on a single experiment. It became clear that no single physicist, and no single university, could provide the funding or personnel required for building and running the accelerator experiments. Only national governments could and, in the wake of World War II and the first use of nuclear weapons, many were willing to do so. Soon after the war many major national and international laboratories were formed to build and operate accelerators and their detectors at increasingly higher energies and luminosities.
In Europe, nations devastated by the war came together in 1954 to form a major new laboratory, the Conseil Europ{\'e}en pour la Recherche Nucl{\'e}aire (CERN) in Geneva. Shortly afterward in the Soviet Union, the Joint Institute for Nuclear Research (JINR) was established in Dubna in 1956. In West Germany, the Deutsches Elektronen Synchrotron (DESY) laboratory was formed in 1960 in Hamburg. In China, the Institute for High Energy Physics (IHEP) was established in 1973. In Japan, the K\={o} Enerug\={i} Kasokuki Kenky\={u} Kik\={o} (KEK, High Energy Accelerator Research Organization) was formed in 1997 in Tsukuba.
In the eastern US, a university consortium came together in 1947 to partner with the government to form a national laboratory at Brookhaven on Long Island, and in the western US the nuclei of later national laboratories were formed at Stanford University and the University of California at Berkeley. These later became the Brookhaven National Lab (BNL), the Stanford Linear Accelerator Center (SLAC) and the Lawrence Berkeley National Lab (LBNL). The Fermi National Accelerator Laboratory (FNAL), also known as Fermilab, came into being just outside Chicago in 1967.
The center of accelerator research in the postwar US was Berkeley under the leadership of Ernest O. Lawrence and later Luis Alvarez. Lawrence invented and developed the \emph{cyclotron}, a circular accelerator. The first version of the cyclotron was of tabletop dimensions, but subsequent versions were much larger. The cyclotron comprises two 'D' shaped magnets (\emph{dees}) placed back to back with a small gap, providing a uniform magnetic field $B$ pointing perpendicular to the faces of the dees. An alternating current between the dees provides acceleration each time a charged particle traverses the gap, the polarity switching between gap crossings, and the magnetic field of the dees keeps the particle in a circular orbit with increasing radius on each gap traversal. See fig. \ref{fig:linac} (left).
The resulting trajectory is an outward spiral. Including relativistic effects, the orbital frequency of a particle with mass $m$, charge $q$ and velocity $v$ is
\begin{eqnarray}
\nu & = & \frac{qB}{2\pi m} \sqrt{1-\frac{v^2}{c^2}}
\end{eqnarray}
\noindent where $\nu_0=qB/2\pi m$ is the \emph{cyclotron frequency} and $1/\gamma=\sqrt{1-v^2/c^2}$ is the relativistic correction factor. For a low energy particle like a 25 MeV proton, with $1/\gamma \approx 1$, an alternating current with fixed frequency will stay synchronized (within tolerance) with the particle, but for relativistic particles some correction must be applied to maintain synchronicity.
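A numerical illustration (Python sketch; the 1.5~T field and SI constants are assumed inputs, not from the text): for a 25~MeV proton the relativistic correction to the cyclotron frequency is only a few percent, consistent with the statement that a fixed-frequency alternating current stays synchronized.

```python
import math

q = 1.602e-19    # proton charge, C
m_p = 1.673e-27  # proton mass, kg
B = 1.5          # magnetic field, T (assumed)

nu0 = q * B / (2.0 * math.pi * m_p)  # nonrelativistic cyclotron frequency
gamma = 1.0 + 25.0 / 938.3           # 25 MeV kinetic energy proton
nu = nu0 / gamma                     # nu = nu0 * sqrt(1 - v^2/c^2)
print(f"nu0 = {nu0 / 1e6:.2f} MHz, nu = {nu / 1e6:.2f} MHz "
      f"(relativistic shift {100.0 * (1.0 - nu / nu0):.1f}%)")
```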
In a \emph{synchrocyclotron}, the alternating current frequency is ramped to stay in sync with the particle. In a \emph{synchrotron} the magnetic field is ramped such that $B/\gamma$ is constant and the alternating current frequency can remain constant. In both cases accelerated particles must be bunched together at the same radius. Due to \emph{phase stability}, perturbations from the common radius are corrected by restoring forces. Both the cyclotron and the synchrocyclotron are limited by the amount of iron required for the dees, a severe cost constraint. With the synchrotron, a fixed orbital radius using magnets placed around a ring became possible and the dees were no longer necessary. One of the last functioning synchrocyclotrons built at Berkeley had a diameter of 184 in and reached 720 MeV, the practical limit. The Berkeley Bevatron, a 6.5 GeV synchrotron, enabled the discovery of the antiproton in 1955 and the antineutron shortly afterward. All modern circular accelerators are synchrotrons.
By this time detectors had also advanced considerably over the cloud chamber. Donald Glaser invented the bubble chamber in 1952. In contrast to the cloud chamber, where tracks are formed from condensed liquid, the tracks in a bubble chamber are formed by vapor created by small energy deposits left by the traversing charged particle. The bubble chamber is filled with liquefied gas just below its boiling point, then expanded with a piston into a superheated state which allows small vapor bubbles to form near the charged particle's track. Bubble chambers were used in experiments at the Brookhaven and Berkeley machines, and thus the $\rho, \omega$ and $\eta$ mesons were observed in 1961. Synchronizing the bubble chamber piston with accelerator bunch timing, and using computers to analyze pictures of tracks in the bubble chamber, brought the state of affairs very close to modern accelerators and detectors. With the development of spark, streamer and drift chambers we have nearly arrived at modern trackers.
Meanwhile, at Stanford, the potential of the linear accelerator was being developed first under the leadership of Robert Hofstadter and later Pief Panofsky, SLAC Director from 1961 to 1984. See fig. \ref{fig:linac} (right). In 1954 Hofstadter discovered the finite size of the proton in an experiment using 188 MeV electrons from a linear accelerator, suggesting it was not a point particle but rather a composite particle. By measuring the scattering amplitude $f(\theta)$ of the electrons on a proton target, Hofstadter showed that it was not consistent with an amplitude predicted for a potential $V(r)$ from a pointlike proton. The proton had structure. A subsequent version of the linear accelerator first proposed by Hofstadter stretched 2 miles long and came into operation in 1966 under Panofsky with an energy of 17 GeV. Deep inelastic scattering experiments using electrons from the linear accelerator established that protons are composite, with constituent \emph{partons}, thus helping establish the 1964 quark model of Gell-Mann. The theory of partons was developed by Richard Feynman, already famous for his work in QED. In a foray into circular machines, the Stanford Positron Electron Accelerator Ring (SPEAR) was built based on a design by Burton Richter, and SPEAR quickly discovered the $J/\psi$ and $\tau$. Richter served as Director of SLAC from 1984 to 1999. When a positron linac beam was established and brought into collision with the electron linac beam in 1987, the first high energy linear collider was born as the Stanford Linear Collider (SLC).
At Fermilab, built and operated first under the directorship of Robert Wilson starting in 1967 and later Leon Lederman, the last major fixed target experiments were used to discover the $\Upsilon$ meson, the bound state of a $b$ quark and its antiquark. The Tevatron, a 1km radius $p\bar{p}$ synchrotron built to reach $\sqrt{s}$=1 TeV, came into operation in 1983. The top quark was discovered there in 1995. The threshold for Higgs boson production was reached by the Tevatron, but the luminosity was not sufficient for separation of signal from background.
In Europe, CERN had been aggressively pursuing large scale circular colliders and detectors. The Intersecting Storage Rings (ISR), which operated from 1971 until 1984, was the first hadron collider and reached energies up to $\sqrt{s}=$64 GeV. Under Herwig Schopper, Director General from 1981 to 1988, CERN not only carried out the experiment which led to the discovery of the $W$ and $Z$ bosons at the $\sqrt{s}=$540 GeV S$p\bar{p}$S synchrotron, but also proposed and began construction on the Large Electron Positron (LEP) collider, a 4.2 km radius synchrotron which reached up to $\sqrt{s}=$200 GeV. Steven Weinberg and others had predicted the $W$ and $Z$ based on their theory of a unified electroweak interaction. Carlo Rubbia led the UA1 collaboration, which built the detector which discovered the $W$ and $Z$ in 1983, and served as CERN Director General from 1989 to 1993. LEP came online in 1989 and operated until 2000, when it had to make way for the Large Hadron Collider (LHC) in its tunnels. The LHC has reached $\sqrt{s}=$13 TeV.
\subsection{Accelerators and the ILC}
\subsubsection{Fixed target \emph{vs.} collider}
In a generic particle accelerator experiment, the particles in collision may have different momenta and a nonzero \emph{crossing angle}. In a \emph{fixed target} accelerator, one particle is stationary while the other is boosted. In a \emph{collider}, both particles are boosted. In a \emph{symmetric collider}, the colliding particles have equal but opposite momenta in the lab frame, while in an \emph{asymmetric collider} momenta are unequal in the lab frame. Early accelerator experiments were exclusively fixed target experiments but as the energy necessary to discover new particles increased, the collider came to dominate.
The reason is as follows. Consider two particles with four momenta $(E_1,\vec{p}_1)$ and $(E_2,\vec{p}_2)$ colliding to create a new particle. In a fixed target experiment $E_2=m_2$ and $\vec{p}_2=0$, so the sum is $(E_1+m_2,\vec{p}_1)$, and
\begin{eqnarray}
m^2 & = & (E_1+m_2)^2-\vec{p}_{1}\cdot \vec{p}_1 \\
& = & m_{1}^{2}+m_{2}^{2}+2m_2 E_1
\end{eqnarray}
\noindent so that the mass reach is $m \propto \sqrt{E_1}$ for $m_1,m_2 \ll E_1$. But in a symmetric collider with no crossing angle colliding particles of equal mass, $E_1=E_2$ and $\vec{p}_1+\vec{p}_2=0$, so the sum is $(E_1+E_2,0)$ and
\begin{eqnarray}
m^2 & = & (E_1+E_2)^2-0^{2} \\
& = & 4E_{1}^2
\end{eqnarray}
\noindent so that the mass reach is $m \propto E_1$.
Thus a collider with beam energies $\frac{1}{2}E$ can produce new particles of much higher mass than a fixed target accelerator with beam energy $E$. This is because in a fixed target accelerator most of the energy of the incident particle is used in conserving momentum and cannot go into creating a new particle. In a symmetric collider with no crossing angle all beam energy is available for new particle creation. Asymmetric colliders, colliders with crossing angles and colliders colliding particles with different mass are intermediate cases between fixed target and symmetric collider.
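The scaling above is easy to check numerically. The following Python sketch compares the two mass reaches for a proton beam; the 980 GeV beam energy is chosen as an illustrative, Tevatron-like value, not a number from the text:

```python
import math

def mass_reach_fixed_target(E1, m1=0.0, m2=0.938):
    """Invariant mass sqrt(m1^2 + m2^2 + 2*m2*E1) for a beam of
    energy E1 (GeV) on a stationary target of mass m2 (GeV)."""
    return math.sqrt(m1**2 + m2**2 + 2.0 * m2 * E1)

def mass_reach_collider(E1):
    """Invariant mass 2*E1 for a symmetric, head-on collider."""
    return 2.0 * E1

E = 980.0  # GeV, one beam
print(mass_reach_fixed_target(E))  # ~43 GeV: sqrt(E1) growth
print(mass_reach_collider(E))      # 1960 GeV: linear growth
```

The two orders of magnitude between the results illustrate why colliders displaced fixed target machines at the energy frontier.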
\subsubsection{Luminosity}
In sect. \ref{sec:scattering} we defined total luminosity $\mathcal{L}=1/A$ for a single collision of one incident particle with one target particle of cross sectional area $A$. Maximizing the rate of interesting events at a collider means maximizing the number of particles brought into collision per unit time, and in accelerators particles are grouped and accelerated in \emph{bunches} of multiple particles.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|} \hline
Collider & SPEAR $(e^+e^-)$ & S$p\bar{p}$S $(p\bar{p})$ & LEP $(e^+ e^-)$ & Tevatron $(p\bar{p})$ & LHC $(pp)$ \\ \hline \hline
$\sqrt{s}$ [GeV] & 8 & 630 & 209 & 1960 & 13000 \\
$C$ [m] & 234 & 6911 & 26659 & 6280 & 26659\\
$L$ [cm$^{-2}$s$^{-1}$] & $1 \times 10^{31}$ & $6 \times 10^{30}$ & $1 \times 10^{32}$ & $4.3 \times 10^{32}$ & $2 \times 10^{34}$ \\
Years & 1972-1990 & 1981-1990 & 1989-2000 & 1987-2011 & 2009-? \\
Laboratory & SLAC & CERN & CERN & Fermilab & CERN \\
Discoveries & $c,\tau$ & $Z,W$ & $n_{gen}=3$ & $t$ & $H$ \\ \hline
\end{tabular}
\caption{Historic circular colliders, their center of mass energies, circumference, peak luminosities, operational years, host labs and main discoveries. All are synchrotrons. Adapted from the PDG \cite{Tanabashi:2018oca}.}
\label{tab:circular}
\end{center}
\end{table*}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|} \hline
$e^+ e^-$ & \multicolumn{3}{|c||}{Linear} & \multicolumn{3}{|c|}{Circular} \\ \cline{2-7}
Collider & SLC & ILC & CLIC & LEP & CEPC & FCCee \\ \hline
$\sqrt{s}$ [GeV] & 100 & 250,500 & 380,3000 & 209 & 240 & 240,366 \\
$D$ or $C$ [km] & $2 \times 1.5$ & $2 \times 20.5,31$ & $2 \times 11,50$ & 27 & 100 & 98 \\
$L$ [cm$^{-2}$s$^{-1}$] & $2.5 \times 10^{30}$ & $1.8 \times 10^{34}$ & $6 \times 10^{34}$ & $1 \times 10^{32}$ & $3.2 \times 10^{34}$ & $ 2.3 \times 10^{36}$ \\
Years & 1989-1998 & - & - & 1989-2000 & - & - \\
Laboratory & SLAC & KEK? & CERN? & CERN & IHEP? & CERN? \\ \hline
\end{tabular}
\caption{Parameters of possible future $e^+ e^-$ colliders and their direct antecedents, the Stanford Linear Collider (SLC) and LEP. Adapted from the PDG \cite{Tanabashi:2018oca}.}
\label{tab:lepton}
\end{center}
\end{table*}
If $n_1$ particles in a bunch are incident on $n_2$ targets in a colliding bunch, and bunches are brought into collision at frequency $f$, then the time rate of particle-particle interactions is $f n_1 n_2$. Then we generalize the integrated luminosity $\mathcal{L}=1/A$ to the \emph{instantaneous luminosity},
\begin{eqnarray}
L & = & f \frac{n_1 n_2}{A}
\label{eqn:lumi}
\end{eqnarray}
\noindent where $f$ is the bunch frequency and $n_1,n_2$ are the bunch populations. Thus maximizing luminosity means maximizing $f,n_1$ and $n_2$ while minimizing $A$ within accelerator constraints. Note that only $f$ and $A$ have dimensions so the units of $L$ are cm$^{-2}$s$^{-1}$. \emph{Integrated or total luminosity} is time integrated $\mathcal{L}=\int dt L$, and has units of cm$^{-2}$ (fb$^{-1}$, pb$^{-1}$, \emph{etc.}).
For bunches with Gaussian populations of horizontal width $\sigma_x$ and vertical width $\sigma_y$ at the interaction point, the bunch cross section is elliptical with area $A=4 \pi \sigma_x \sigma_y$ assuming axes of length $2\sigma_x$ and $2\sigma_y$. In many cases bunches are collected into \emph{pulses}, so $f=f_{r} n_{b}$ where $f_{r}$ is the pulse repetition rate and $n_{b}$ is the number of bunches per pulse. Assuming these expressions, eq. \ref{eqn:lumi} becomes
\begin{eqnarray}
L & = & f_{r} n_{b} \frac{n_1 n_2}{4 \pi \sigma_{x}^{\star} \sigma_{y}^{\star}} \epsilon
\label{eqn:lumi2}
\end{eqnarray}
\noindent where the star indicates evaluation at the interaction point. The $\epsilon$ factor has been introduced to account for luminosity reductions due to crossing angle and other small accelerator effects, with $\epsilon$ close to, but less than, unity.
The cross sectional area of the bunches is not constant. As the bunches move toward the interaction point the $\sigma_x$ and $\sigma_y$ exhibit harmonic oscillation due to electromagnetic fields with an amplitude determined by $\beta_x$ and $\beta_y$, the \emph{amplitude functions}. To reach maximum luminosity, the accelerator is thus \emph{tuned} so that at the interaction point the amplitude functions $\beta_x^{\star}$ and $\beta_y^{\star}$ are minimal. Finally, the horizontal and vertical \emph{emittance} are defined to be $\epsilon_{x,y} \equiv \sigma_{x,y}^{2}/\beta_{x,y}$, so that eq. \ref{eqn:lumi2} is often written using $\sigma_{x,y}=\sqrt{\epsilon_{x,y} \beta_{x,y}}$.
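Eq. \ref{eqn:lumi2} can be evaluated directly from the beam parameters in Table \ref{tab:beams}. The Python sketch below uses the staged $\sqrt{s}=250$~GeV column; note that beam-beam (pinch) enhancement and the $\epsilon$ factor are not modeled, so the purely geometric value comes out below the quoted design luminosity:

```python
import math

def inst_luminosity(f_r, n_b, n1, n2, sigma_x, sigma_y, eps=1.0):
    """Geometric instantaneous luminosity,
    L = f_r n_b n1 n2 / (4 pi sx* sy*) * eps, in cm^-2 s^-1.
    sigma_x and sigma_y are given in cm."""
    return f_r * n_b * n1 * n2 / (4.0 * math.pi * sigma_x * sigma_y) * eps

# Staged sqrt(s) = 250 GeV beam parameters (nm converted to cm)
L = inst_luminosity(f_r=5.0, n_b=1312, n1=2.0e10, n2=2.0e10,
                    sigma_x=516e-7, sigma_y=7.8e-7)
print(f"{L:.2e} cm^-2 s^-1")  # ~5e33 geometric; pinch enhancement,
                              # not modeled here, raises the design value
```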
\subsubsection{Circular \emph{vs.} linear colliders}
If the earliest accelerator experiments were fixed target experiments, they were also \emph{linear} accelerator experiments. At one end of the line was the source for beam particles, while at the other end was the fixed target. But physicists soon realized that if the accelerator could be bent back around upon itself in a circle, the final collision energy could be greatly enhanced by multiple transits of the same accelerator, as with Lawrence's cyclotron.
See Table \ref{tab:circular} for the parameters of five historically important circular colliders: SPEAR is the Stanford Positron Electron Accelerating Ring, S$p\bar{p}$S is the Super Proton Antiproton Synchrotron, LEP is the Large Electron Positron collider and LHC is the Large Hadron Collider.
Strong bending dipole magnets are required for keeping the beams in circular orbits. By equating the Lorentz force on a particle with charge $e$ and transverse momentum $p$ passing through a magnetic field $B$ with the centripetal force necessary for a circular orbit of radius $R$, one obtains
\begin{eqnarray}
p & = & aBR
\label{eqn:pisqbr}
\end{eqnarray}
\noindent where $a \approx 0.3$~GeV\,T$^{-1}$\,m$^{-1}$. This result holds for relativistic particles as it does for nonrelativistic ones.
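Eq. \ref{eqn:pisqbr} sets the dipole field a circular machine needs. As a quick sketch (the $\sim$2800~m effective bending radius used below is an assumed, LHC-like value, not a figure from the text):

```python
def dipole_field(p_GeV, R_m, a=0.2998):
    """B = p / (a R), inverted from p = a B R with
    a ~ 0.3 GeV T^-1 m^-1 for unit charge."""
    return p_GeV / (a * R_m)

# Illustrative: 7000 GeV protons on an assumed bending radius of ~2800 m
print(dipole_field(7000.0, 2800.0))  # ~8.3 T, superconducting territory
```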
In a circular collider, counterrotating beams can be brought into collision at an \emph{interaction point} by specialized dipole magnets. Detectors are placed around the collision point (or points) to study the results of the collisions. Since beams made of bunches of identical charged particles will become unfocused over time due to electromagnetic repulsion, focusing quadrupole magnets are necessary to bring the bunches back into focus. Focusing quadrupoles usually alternate with bending dipoles in a circular collider.
\begin{figure*}[t]
\begin{center}
\vspace{-0.8in}
\resizebox{0.9\textwidth}{!}{\includegraphics*{cryo.pdf}}
\vspace{-0.8in}
\caption{ILC cryogenic modules. Each module contains 9 cavities (Type A) or 8 cavities plus a quadrupole magnet (Type B), Helium tanks and support and extraction tubes. Approximately 1800 such modules are necessary for the nominal TDR ILC. Credit: $\textcopyright$ Rey Hori and KEK.}
\label{fig:module}
\end{center}
\end{figure*}
One important drawback for a circular collider is \emph{synchrotron radiation}. Charged particles in circular orbits radiate photons. For relativistic particles with charge $q$, energy $E$ and mass $m$,
\begin{eqnarray}
\Delta E & = & \frac{q^2}{3 \epsilon_0 R} \left( \frac{E}{m} \right)^4
\end{eqnarray}
\noindent for each orbit. Because $\Delta E \propto m^{-4}$, light particles are particularly susceptible to synchrotron radiation. Comparing the two most commonly accelerated particles in a circular collider, $\Delta E_{e}/\Delta E_{p} = (m_{p}/m_{e})^4 \approx 10^{13}$, making electron losses considerably more severe than proton losses. Since energy lost to synchrotron radiation must be injected back into the beams in order to maintain fixed $\sqrt{s}$, the power required for $e^+ e^-$ colliders can be prohibitive. Since losses $\Delta E \propto E^4$, as higher center of mass energies are required to probe new physics the technological challenge of circular $e^+ e^-$ colliders will deepen.
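The scale of the problem can be sketched with the standard SI expression $\Delta E = q^2\gamma^4/(3\epsilon_0 R)$ per turn. The LEP-like bending radius of 3026~m below is an assumed value for illustration:

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def sync_loss_per_turn_GeV(E_GeV, m_GeV, R_m):
    """Energy radiated per orbit, dE = q^2 gamma^4 / (3 eps0 R),
    for an ultrarelativistic particle of energy E and mass m."""
    gamma = E_GeV / m_GeV
    dE_J = E_CHARGE**2 * gamma**4 / (3.0 * EPS0 * R_m)
    return dE_J / E_CHARGE / 1e9  # joules -> GeV

# 100 GeV beams on an assumed ~3026 m bending radius
print(sync_loss_per_turn_GeV(100.0, 0.000511, 3026.0))  # electron: ~3 GeV/turn
print(sync_loss_per_turn_GeV(100.0, 0.938272, 3026.0))  # proton: negligible
```

The electron-to-proton ratio of the two results reproduces the $(m_p/m_e)^4 \approx 10^{13}$ factor quoted above.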
For linear colliders there is no synchrotron radiation. In a simple linear accelerator, or \emph{linac}, drift tubes of successively greater length guide the beam particles while oscillating electric fields parallel to the beamline provide acceleration in the gaps between the drift tubes. Particles are bunched so they only experience the electric field when it accelerates them toward the target, and are within the drift tubes otherwise. A linear $e^+ e^-$ collider is made when an $e^+$ linac beam is brought into collision with an $e^-$ linac beam. A detector is then placed around the collision point. See Table \ref{tab:lepton} for the parameters of proposed linear and circular $e^+ e^-$ colliders ILC, the Compact LInear Collider (CLIC), the Circular Electron Positron Collider (CEPC) and the Future Circular Collider $e^+ e^-$ (FCCee), together with their direct historical antecedents, the Stanford Linear Collider (SLC) and LEP.
\subsubsection{International Linear Collider}
We note that for \emph{hadron colliders} like the LHC, both the luminosity and the center of mass energy can be misleading because they refer to the colliding hadrons, which are composite, not the underlying constituents of the hadron which undergo interactions during collision. By contrast, in a \emph{lepton collider} like the ILC the luminosity and center of mass energy refer directly to the elementary particles undergoing the interaction of interest.
For processes at a proton collider like the LHC, the elementary particles interacting are gluons and quarks, and the share of the proton's energy carried by each gluon or quark described by a \emph{parton density function} is necessarily smaller than that of the proton. For processes at a lepton collider like the ILC, the elementary interacting particles are leptons where the parton density function is identically unity. Furthermore, in a lepton collider the initial state of an interaction is known on an event-by-event basis, whereas in a hadron collider it is not. In particular, momentum conservation along the beamline can be exploited at a lepton collider like the ILC but not at a hadron collider like the LHC.
The ILC design represents an international convergence of several decades of research and development. The design described in the TDR \cite{Behnke:2013xla,Baer:2013cma,Phinney:2007gp,Behnke:2013lya} calls for $\sqrt{s}=500$~GeV upgradeable to $\sqrt{s}=1000$~GeV, with 11 km linacs and a total footprint of 31 km including 6 km for damping rings and the beam delivery system, and another 3 km for the rings to the main linacs. In the ILC Machine Staging Report \cite{Evans:2017rvt}, the goal reverts to $\sqrt{s}=250$~GeV with possible upgrade to $\sqrt{s}=500$~GeV.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
Parameter & Staged $\sqrt{s}=250$~GeV & $\sqrt{s}=250$~GeV & $\sqrt{s}=350$~GeV & $\sqrt{s}=500$~GeV \\ \hline \hline
$n_{1,2}$ & $2.0 \times 10^{10}$ & $2.0 \times 10^{10}$ & $2.0 \times 10^{10}$ & $2.0 \times 10^{10}$ \\
$f_{r}$ & 5Hz & 5Hz & 5Hz & 5Hz \\
$n_{b}$ & 1312 & 1312 & 1312 & 1312 \\
$\sigma_{x}^{\star}$ & 516nm & 729nm & 684nm & 474nm \\
$\sigma_{y}^{\star}$ & 7.8nm & 7.7nm & 5.9nm & 5.9nm \\ \hline
$\int dt\, L$ & 2000fb$^{-1}$ & 2000fb$^{-1}$ & 200fb$^{-1}$ & 4000fb$^{-1}$ \\
$-,+/+,-$ & 900/900fb$^{-1}$ & 1350/450fb$^{-1}$ & 135/45fb$^{-1}$ & 1600/1600fb$^{-1}$ \\ \hline
\end{tabular}
\caption{ILC beam parameters for various $\sqrt{s}$ from the TDR \cite{Phinney:2007gp} and the staging report \cite{Evans:2017rvt}, with projected integrated luminosities and luminosity sharing between 30\% negatively polarized positrons and 80\% positively polarized electrons $-,+$, and \emph{vice versa} $+,-$, from scenario H-20 in the operating scenarios report \cite{barklow2015ilc}.}
\label{tab:beams}
\end{center}
\end{table*}
Polarized electrons are produced by photoemission from a photocathode illuminated by a polarized laser. Positrons are produced in pair conversion $\gamma \rightarrow e^+ e^-$, where the energetic photon is produced by a high energy electron beam passing through a superconducting undulator. Positron polarization is a significantly greater technical challenge than electron polarization, and the nominal design calls for 80\% polarized electrons and 30\% polarized positrons.
Once produced, the electrons and positrons are injected into the main tunnel, where they are accelerated to 5 GeV and injected into the damping rings, storage rings with radius 1~km. See fig. \ref{fig:ilc} for reference. In the damping rings the beams are brought to the small cross sectional area necessary for high luminosity. They are then extracted and sent by transport lines for injection into the main linacs through bending rings. In the process the beams are accelerated to 15 GeV from 5 GeV and the bunches are compressed to their nominal bunch sizes.
The main linacs themselves consist of superconducting Niobium RF cavities cooled to 2K with supercooled He II. Each cavity is 1m long and consists of nine elliptical cells, which serve functions analogous to the drift tubes in the simple linac. Nine such cavities fit inside one cryogenic module of Type A. Eight such cavities, together with one focusing quadrupole magnet, fit inside a cryogenic module of Type B. Both modules A and B are 12.65 m in length and are assembled together in the pattern AABAAB to provide acceleration and beam focus. See fig. \ref{fig:module}. RF power is provided to the cavities by klystrons, yielding the nominal TDR accelerating gradient of 31.5 MV/m.
At the end of the linacs, a beam delivery system collimates the beams, administers a final focus with quadrupole magnets and delivers the accelerated electrons and positrons to the interaction point at a 14 mrad crossing angle. See Table \ref{tab:beams} for ILC beam parameters for several $\sqrt{s}$. These parameters determine the ILC luminosity in eq. \ref{eqn:lumi2}. Table \ref{tab:beams} also shows projected ILC integrated luminosity and sharing between beam polarizations $(-,+)$ and $(+,-)$, that is 30\% negatively polarized positrons, 80\% positively polarized electrons and 30\% positively polarized positrons, 80\% negatively polarized electrons for scenario H-20 described in the operating scenarios report \cite{barklow2015ilc}.
In the ILC interaction region space is made for two detectors in the \emph{push-pull} scheme, wherein one detector may be easily swapped into the interaction region as the other is swapped out. The two nominal ILC detectors are SiD and ILD. The advantage of using two detectors is scientific reproducibility of results by two independent teams using distinct detector designs.
\subsection{Detectors and the SiD}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Element & Z & A & $X_0$ [cm] & $\lambda$ [cm] & $\langle \frac{dE}{dx} \rangle$ [MeV/cm] & $\rho$ [g/cm$^3$] \\ \hline \hline
H$_2$ & 1 & 1.0 & 888.0 & 732.4 & 0.3 & 0.071 \\
C & 6 & 12.0 & 19.3 & 38.8 & 3.8 & 2.2 \\
Si & 14 & 28.1 & 9.4 & 46.5 & 3.9 & 2.3 \\
Fe & 26 & 55.8 & 1.8 & 16.8 & 11.4 & 7.9 \\
W & 74 & 183.8 & 0.4 & 9.9 & 22.1 & 19.3 \\
U & 92 & 238.0 & 0.3 & 11.0 & 20.5 & 19.0 \\ \hline
\end{tabular}
\caption{Atomic number, atomic mass, radiation lengths, nuclear absorption lengths, minimum mean ionization energy loss and density for several elements. Low Z materials are typically used in trackers to minimize $dE/dx$ and maximize $X_0$ and $\lambda$, while high Z materials are used to minimize $X_0$ ($\lambda$) in electromagnetic (hadronic) calorimeters.}
\label{tab:properties}
\end{center}
\end{table*}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Parameter & SLD \cite{Rowson:2001cd} & OPAL \cite{1991275} & ATLAS \cite{Collaboration_2008} & SiD \cite{Behnke:2013lya} \\ \hline \hline
Track $\Delta p_t/p_T$ & 0.010,0.0024 & -,0.0015 & 0.36,0.013 & 0.002,0.00002\\
ECal $\Delta E/E$ & -,0.08 & -,0.05 & 0.4, 0.10 & 0.01, 0.17 \\
HCal $\Delta E/E$ & -, 0.6 & -, 1.2 & 0.15,0.80 & 0.094,0.56 \\ \hline
\end{tabular}
\caption{Tracker and calorimeter performance parameters $a,b$ for several historically important collider detectors and one proposed collider detector. Parameters for SLD, OPAL and ATLAS are obtained from data; those for SiD are obtained from simulated data.}
\label{tab:performance}
\end{center}
\end{table*}
\subsubsection{Collider detectors}
The quantitative signature of stable or quasistable particles traversing a collider detector is measured by energy transfers from the particle to the detector material mediated by electromagnetic or nuclear interactions.
A particle's phenomenological signature in a collider detector can be classified as a \emph{shortlived} particle ($Z$, $W$, $t$, $\pi^0$, $\rho^{\pm}$, \emph{etc.}) with lifetimes too short to observe directly, a \emph{displaced vertex} ($B$, $D$, $\tau$, \emph{etc.}) with $\tau \approx 10^{-12}$~s, a \emph{quasistable} particle ($\mu$, $K$, $n$, \emph{etc.}) with lifetimes $\tau \approx 10^{-8}$~s or \emph{stable} ($e$ or $p$). Thus the ranges for relativistic particles are effectively of order $c\tau \approx$ 0, 1mm, 1m, $\infty$, respectively. For macroscopic detectors a few meters deep, only quasistable and stable particles are directly detected, but shortlived particles and displaced vertices can be reconstructed from their quasistable or stable decay products by four-vector addition. Neutrinos, because they only interact weakly, escape undetected.
For electrically charged particles, energy loss occurs through ionization, Coulomb scattering, bremsstrahlung induced by detector nuclei, and nuclear scattering or absorption if the particle is a hadron. For electrically neutral particles, energy loss occurs through photoelectric absorption, Compton scattering and pair production (for photons) or nuclear scattering and absorption (for hadrons).
For an example of energy loss, consider the mean ionization energy loss per unit length in a material, given by the Bethe-Bloch equation,
\begin{eqnarray}
\langle \frac{dE}{dx} \rangle & = & \frac{b}{\beta^2} \left( \frac{\rho Z}{A} \right) \left( \ln \left[ \frac{2m_e \beta^2}{I(1-\beta^2)} \right] -\beta^2 \right)
\end{eqnarray}
\noindent where $b$ is a constant, $Z$ is the atomic number, $A$ is the atomic weight, $\rho$ is the density, $I$ is the mean ionization potential and $\beta=v/c$. Hence the material dependence enters entirely through the factor $\rho Z/A$ and the potential $I$, and the only remaining dependence is on $\beta$. After a $1/\beta^2$ fall at low $\beta$, the mean loss passes through a minimum near $\beta^2 \approx 0.9$ and begins a relativistic, logarithmic rise. See Table \ref{tab:properties} for the mean ionization energy loss for a minimum ionizing particle for several elements.
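The $\beta$ dependence can be scanned numerically to locate the ionization minimum. In this sketch the material-dependent prefactor is dropped and the silicon mean ionization potential $I \approx 173$~eV is an assumed value:

```python
import math

ME = 0.511e6   # electron mass, eV
I_SI = 173.0   # mean ionization potential of Si, eV (assumed value)

def rel_ionization_loss(beta2):
    """Beta-dependent factor of the Bethe-Bloch formula:
    (1/beta^2) * (ln(2 m_e beta^2 gamma^2 / I) - beta^2)."""
    b2g2 = beta2 / (1.0 - beta2)  # beta^2 * gamma^2
    return (math.log(2.0 * ME * b2g2 / I_SI) - beta2) / beta2

# coarse scan over beta^2 to find the minimum-ionizing point
beta2_min = min((b2 / 1000.0 for b2 in range(300, 999)),
                key=rel_ionization_loss)
print(beta2_min)  # near 0.9, as stated in the text
```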
The modern collider detector is a complex, integrated system of interdependent subdetectors coordinated by fast electronics. It combines subdetectors like \emph{trackers}, which measure the spatial position and, if a magnetic field is applied, momentum of traversing charged particles, with \emph{calorimeters}, which trap charged and neutral particles to measure their spatial position and energy, and a variety of other specialized subdetectors.
The earliest trackers were the photographic emulsions and cloud chambers used to study cosmic rays, which left visible tracks of chemical grains or condensation. With the advent of high energy colliders, new detector techniques were developed. Gaseous tracking detectors convert ionization electron avalanches from traversing charged particles to electric signals collected on wire cathodes. Modern trackers also employ semiconductors made of Silicon or Germanium, for example, in which the electron-ion pair in the gaseous tracker is replaced by an electron-hole pair in the valence and conduction bands of the semiconductor.
Whatever the tracker technology, the spatial \emph{hits} left in the tracker are mathematically fitted to reconstruct the trajectory of the traversing charged particle. If the active tracking region is subjected to a uniform magnetic field, the parameters of a charged particle's helical trajectory can be extracted from the fitted track and, from these parameters, the momentum is determined with eq. \ref{eqn:pisqbr}. The vertex detector is a specialized tracker designed for precision tracking to resolve displaced vertices near the interaction point. Good spatial resolution in a tracker yields both precise spatial vertexing and precise momentum determination.
While trackers are designed to induce minimal energy loss in traversing particles, calorimeters are designed to induce maximal energy loss. In the most common calorimeter configuration, a \emph{sampling calorimeter}, layers of absorbing material meant to induce showers alternate with layers of sensitive material to sample the energy deposition. The \emph{segmentation} of a calorimeter, the size of its sensitive elements, greatly impacts its energy resolution.
The electromagnetic calorimeter traps electrons and photons by inducing electromagnetic showers. In the presence of matter, the electron undergoes bremsstrahlung, $e \rightarrow \gamma e$, and a photon undergoes conversion $\gamma \rightarrow e^+ e^-$. Thus an incident electron $e \rightarrow \gamma e \rightarrow 3e \gamma \dots$ and an incident photon $\gamma \rightarrow e^+ e^- \rightarrow 2\gamma 2 e \dots$, producing a binary tree of cascading electrons and photons with successively lower energy until all electrons and photons are captured.
For an incident electron (photon) with initial energy $E_0$, the energy at depth $x$ is described by $E_0 \exp( -x/X_0)$ ($E_0 \exp( -7x/9X_0)$), where $X_0$ is a characteristic of the traversed material called the \emph{radiation length}. For an electron, if the cross section for bremsstrahlung is $\sigma_{brem}$ and the radiation length is $X_{0}$, then the effective volume of an atom is $\sigma_{brem}X_0$. The effective volume is also $m_{atom}/\rho$, the atomic mass divided by the material density, or $1/n$, where $n$ is the number of atoms per unit volume. Therefore $X_0=1/n\sigma_{brem}$. Since the cross section for pair production $\gamma \rightarrow e^+ e^-$ is approximately $\sigma_{pair}=\frac{7}{9} \sigma_{brem}$, the effective pair production length is $\frac{9}{7} X_0$. See Table \ref{tab:properties} for the radiation lengths for several elements.
\begin{figure*}[t]
\begin{center}
\vspace{-0.5in}
\includegraphics[height=0.9\textwidth, angle=90]{sidpar.pdf}
\vspace{-0.5in}
\caption{Technical drawing of the SiD detector, barrel view (left) and quadrant view (right). Shown are the vertex detector (blue), the tracker (red), the ECal (black), the HCal (magenta), the solenoid (white) and the muon detector (orange). Credit: SiD Consortium and SLAC.}
\label{figs.id}
\end{center}
\end{figure*}
The hadronic calorimeter traps charged and neutral hadrons by inducing hadronic showers in which the incident hadron and secondaries successively lose energy to nuclear collisions until complete absorption. Because the hadronic calorimeter is placed at a macroscopic distance from the interaction point, where unstable hadrons decay to stable or quasistable hadrons, the hadrons captured in a hadronic calorimeter are almost exclusively charged pions, kaons, protons and neutrons. In a hadronic calorimeter, unlike an electromagnetic calorimeter, not all energy from the incident hadron is seen due to (sometimes large) losses to nuclear binding energy, making it inherently less precise than an electromagnetic calorimeter.
For an incident hadron of energy $E_0$, the energy at depth $x$ is $E_0 \exp(-x/\lambda)$, where $\lambda$ is the \emph{nuclear absorption length}, a characteristic of the traversed material. The relation between nuclear absorption length and inelastic nuclear scattering cross section $\sigma_{nuc}$ is straightforward. If we consider the effective volume of one nucleus of the traversed material, this is $\lambda \sigma_{nuc}$. The effective volume is also the nuclear mass $m_{nuc}$ divided by the density $\rho$, or $m_{nuc}/\rho=1/n$, where $n$ is the number of nuclei per unit volume. Therefore $\lambda=1/n\sigma_{nuc}$. See Table \ref{tab:properties} for the nuclear interaction lengths for several elements.
A calorimeter is meant to contain and measure all of the energy from an incident particle, and at $x=nX_0$ ($x=n\lambda$) the containment fraction is $1-\exp(-n)$ in an electromagnetic (hadronic) calorimeter. Thus at $x=3X_0$ an electromagnetic shower is 95\% contained on average, and at $x=5X_0$ it is 99\% contained on average, and similarly for a hadronic shower. Calorimeter showers are statistical processes, however, so shower penetration depth varies from shower to shower.
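Both relations are easy to exercise numerically. The sketch below inverts $\lambda = 1/n\sigma_{nuc}$ for Iron using the $\lambda$, $\rho$ and $A$ values from Table \ref{tab:properties}, and evaluates the average containment at a few depths:

```python
import math

N_A = 6.022e23  # Avogadro's number, nuclei per mole

def nuclear_sigma_cm2(lam_cm, rho, A):
    """Invert lambda = 1/(n sigma) with n = rho N_A / A nuclei per cm^3."""
    n = rho * N_A / A
    return 1.0 / (n * lam_cm)

def containment(depth_over_length):
    """Average contained fraction 1 - exp(-x/X0) (or -x/lambda)."""
    return 1.0 - math.exp(-depth_over_length)

# Iron: lambda = 16.8 cm, rho = 7.9 g/cm^3, A = 55.8 (from the table)
sigma = nuclear_sigma_cm2(16.8, 7.9, 55.8)
print(f"{sigma:.1e} cm^2")  # ~7e-25 cm^2, i.e. roughly 0.7 barn
print(containment(3.0))     # ~0.95 at three lengths
print(containment(5.0))     # ~0.99 at five lengths
```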
The only particles which exit the tracker are quasistable or stable, almost exclusively electrons, muons, photons, pions, kaons, neutrons and protons. Since electrons and photons are absorbed by the electromagnetic calorimeter, while pions, kaons and nucleons are absorbed by the hadronic calorimeter, in principle only muons (and undetectable neutrinos) penetrate the hadronic calorimeter. In practice some hadronic (electromagnetic) showers do penetrate the hadronic (electromagnetic) calorimeter in \emph{leakage}, and individual particles can \emph{punch through}.
Muons, which are too heavy to undergo bremsstrahlung sufficient for absorption in the calorimetry and do not participate in nuclear interactions, are therefore easily identified in the muon detector, a tracker placed outside the hadronic calorimetry. Tracks reconstructed in the muon detector can be matched to tracks reconstructed in the main tracker, which typically measures momentum much more precisely.
The momentum resolution of a tracking detector can be parametrized with constants by the transverse momentum $p_T$ and the polar angle $\theta$ with respect to the beamline. The curvature $\Omega=R^{-1}$, so by eq. \ref{eqn:pisqbr} $d\Omega/dp=-qB/p^2$ and $\Delta p/p \propto p \Delta \Omega$. Similarly, the energy resolution of a calorimeter can be parametrized with constants by the energy $E$. Showers in calorimeters are statistical processes which deposit energy $E \propto N$, where $N$ is the number of shower particles, and $\Delta E \propto \sqrt{N}$. Thus $\Delta E/E \propto 1/\sqrt{E}$.
Thus the tracking and calorimeter performance can be parametrized by the following expressions:
\begin{eqnarray}
\frac{\Delta p_T}{p_{T}^{2}} & = & a \oplus \frac{b}{\sin \theta} \label{eqn:parametrize1}\\
\frac{\Delta E}{E} & = & a \oplus \frac{b}{\sqrt{E}}
\label{eqn:parametrize2}
\end{eqnarray}
\noindent where $x\oplus y \equiv \sqrt{x^2+y^2}$ is addition in quadrature. See Table \ref{tab:performance} for a comparison of tracking and calorimeter performance for several historically important detectors: SLD is the SLC Detector \cite{Rowson:2001cd}, OPAL is the OmniPurpose Apparatus at LEP \cite{1991275}, ATLAS is A Toroidal LHC ApparatuS \cite{Collaboration_2008}. SiD is the ILC Silicon Detector \cite{Behnke:2013lya}.
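The calorimeter parametrization of eq. \ref{eqn:parametrize2} can be evaluated at a few energies, here with the SiD ECal goal parameters $a=0.01$, $b=0.17$ quoted later in the text; the sample energies are illustrative:

```python
import math

def quad(x, y):
    """Addition in quadrature, x (+) y."""
    return math.hypot(x, y)

def frac_E_resolution(E, a, b):
    """Delta E / E = a (+) b/sqrt(E): the stochastic term b/sqrt(E)
    dominates at low energy, the constant term a at high energy."""
    return quad(a, b / math.sqrt(E))

# SiD ECal goal: a = 0.01, b = 0.17
for E in (1.0, 10.0, 100.0):
    print(E, frac_E_resolution(E, 0.01, 0.17))
```

At 1 GeV the resolution is essentially the stochastic 17\%, while at 100 GeV it approaches the 1\%-level constant term.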
\subsubsection{Silicon Detector (SiD)}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Subdetector & Technology & $n_{layer}$ & Thickness & $r_{in}$ [cm] & $r_{out}$ [cm] & $\pm z_{max}$ [cm] \\ \hline \hline
Vertex Detector & Si Pixels & 5 & 0.015$X_0$& 1.4 & 6.0 & 6.25 \\
Tracker & Si Strips & 5 & 0.1$X_0$ & 21.7 & 122.1 & 152.2 \\
ECal & W-Si Pixels & 20+10 & 26$X_0\approx 1 \lambda$ & 126.5 & 140.9 & 176.5 \\
HCal & RPC-Steel & 40 & 4.5$\lambda$ & 141.7 & 249.3 & 301.8 \\
Solenoid & 5T SC & - & - & 259.1 & 339.2 & 298.3 \\
Muon Detector & Sc-Steel & 10 & - & 340.2 & 604.2 & 303.3 \\ \hline
\end{tabular}
\caption{Parameters of the baseline SiD barrel design, adapted from \cite{Behnke:2013lya}: technology, number of layers, thickness in $X_0$ or $\lambda$, inner radius, outer radius and $z$ extent. In the ECal there are 20 thin Tungsten layers and 10 thick Tungsten layers. In the HCal the sensitive elements are Resistive Plate Chambers (RPC). In the muon detector the sensitive elements are scintillators (Sc).}
\label{tab:sid}
\end{center}
\end{table*}
SiD was designed to meet the physics goals of the ILC \cite{Behnke:2013lya}. SiD comprises a precise vertex detector, a main tracker, a sampling electromagnetic calorimeter (ECal) with Tungsten absorber, a sampling hadronic calorimeter (HCal) with Iron absorber, a 5T solenoid, and a muon detector instrumented in the solenoid flux return. See fig. \ref{figs.id} for a technical drawing of the SiD detector, and Table \ref{tab:sid} for a summary of the important SiD barrel parameters.
Throughout, SiD was designed to enable the \emph{particle flow} reconstruction technique in which charged particle trajectories are extrapolated from the tracker to the calorimeters and matched either to an ECal energy deposit, an HCal energy deposit, or a muon tracker track. Remaining calorimeter deposits which are unmatched to tracker tracks are designated neutral, either photons in the ECal or neutral hadrons in the HCal. This enables the substitution of the far more accurate tracking momentum measurement for the calorimeter energy measurement in the case of charged particles.
The vertex detector is made of five barrel layers at radii from $r=1.4$~cm to $r=6.0$~cm, centered on the beamline and capped by four endcap disks perpendicular to the beamline. The barrel layers and endcap disks are instrumented with $20 \times 20~\mu$m$^2$ Silicon pixels. The primary goals of the vertex detector are 5 $\mu$m hit resolution, less than 0.3\% $X_0$ per layer, low power consumption and single bunch timing resolution. Achieving these goals enables precise vertexing with minimal energy loss in the active volume and rejection of backgrounds out of time with the bunch crossings.
The main tracker comprises five barrel layers ranging from $r=22$~cm to $r=122$~cm, capped by four slightly conical disks instrumented with Silicon microstrips. The performance goals of the main tracker include hermetic coverage for polar angles $10^{\circ}< \theta < 170^{\circ}$, momentum resolution $\delta (1/p_T) \approx 2 \times 10^{-5}$/GeV, less than $0.1X_0$ in the central region, and greater than 99\% hit efficiency.
The ECal barrel ranges from $r=127$~cm to $r=141$~cm with 20 thin layers of Tungsten and 10 thick layers of Tungsten, each absorbing layer alternating with sensitive layers with 13 mm$^2$ Silicon pixels. ECal endcaps close the barrel at $z=\pm 177$~cm. The ECal performance goal is energy resolution $\Delta E/E =0.01 \oplus 0.17/\sqrt{E}$.
The HCal barrel ranges from $r=142$~cm to $r=249$~cm with 40 Steel absorber layers alternating with gas Resistive Plate Chamber (RPC) sensitive layers. RPCs alternate highly resistive layers held at high voltage with gas layers. When a traversing charged particle ionizes the gas, the ionization electrons induce avalanches of secondary ionization electrons which are then read out by sensitive strips. Endcaps close the HCal barrel at $z=\pm 302$~cm. The HCal performance goal is energy resolution $\Delta E/E =0.094 \oplus 0.56/\sqrt{E}$.
A solenoid occupying $r=259$~cm to $r=339$~cm provides the 5T magnetic field necessary for measuring momentum in the tracker and particle flow in the calorimetry. Finally, a steel flux return for the solenoid occupies $r=340$~cm to $r=604$~cm and is instrumented with scintillators for the muon tracker.
\subsection{Further reading and exercises}
\emph{The Experimental Foundations of Particle Physics} (Cahn and Goldhaber) \cite{cahn_goldhaber_2009} reprints key papers in the experimental development of the SM and presents bridging commentary and exercises. For accounts of the historical development of the SM and the colorful characters involved, see \emph{Inward Bound} (Pais) \cite{Pais:1988} and \emph{From X-Rays to Quarks} (Segr\`e) \cite{Segrè:100978}, both written by physicists deeply involved in the story.
For brief pedagogical introductions to accelerators and detectors, the chapters on experimental methods in \emph{Particle Physics} (Martin and Shaw) \cite{Martin:2211679} and \emph{Introduction to High Energy Physics} (Perkins) \cite{Perkins:396126} are good. For textbooks on accelerators see \emph{An Introduction to the Physics of High Energy Accelerators} (Edwards and Syphers) \cite{Edwards:1993qe} and \emph{RF Linear Accelerators} (Wangler) \cite{Wangler:368392}. \emph{Experimental Techniques in High Energy Nuclear and Particle Physics} (Ferbel) \cite{Ferbel:238481} reprints several good review papers on tracking, calorimetry and particle identification. The Particle Data Group reviews on \emph{Accelerator Physics of Colliders}, \emph{High Energy Collider Parameters}, \emph{Passage of Particles Through Matter} and \emph{Particle Detectors at Accelerators} \cite{Tanabashi:2018oca} are invaluable lifelong, regularly updated references.
The SiD LoI \cite{Aihara:2009ad} is the most complete technical documentation of SiD. The ILC Technical Design Report is essential reading. There are four volumes, \emph{Volume 1: Executive Summary} \cite{Behnke:2013xla}, Volume 2: Physics \cite{Baer:2013cma}, Volume 3: Accelerator \cite{Phinney:2007gp} and Volume 4: Detectors \cite{Behnke:2013lya}. For a more recent overview see \cite{Bambade:2019fyw}. Research and development of SiD technologies has continued since the TDR as reported in the Linear Collider Collaboration Detectors R\&D Liaison Report \cite{jan_strube_2020_3766518}.
Exercises for this section can be found in sect. \ref{sec:ilc} of Appendix \ref{appendix1}.
\section{SiD: simulation and reconstruction}
\subsection{Generation of ILC events}
\subsubsection{Monte Carlo integration}
Two-body scattering and decay yield straightforward expressions for differential cross sections and decay widths (eq.s \ref{eqn:twobody}), and for tree level Feynman diagrams with only one internal momentum $q$ there is just one four-vector integration $\int d^4 q$ prescribed by the Feynman rules. But for $n$-body processes with arbitrary $m$ internal momenta, calculations can quickly become intractable analytically, and very challenging numerically.
The \emph{Monte Carlo} integration technique is best suited for such calculations because it promises faster convergence for arbitrary $n$ and $m$ compared to other numerical techniques. The idea is to randomly sample the integrand $f$ at $N$ points in a sampling volume $V^{\prime}$ which contains the integration region (setting $f=0$ outside that region) and form the mean $\bar{f}$. Then the integral is $\int_{V} dV f \approx V^{\prime} \bar{f}$, with convergence at a rate $\propto 1/\sqrt{N}$. This converges for any reasonably well behaved $f$ and boundary. Performing whatever integrations can be done by hand before applying the Monte Carlo technique, however, can improve the convergence considerably.
For example, consider the integral $\int_{D} dA\, f(x,y)$, where $D$ is the unit disk in $\mathbb{R}^2$ and $f(x,y)=1$. This integral is easily calculated analytically, $\int_{D} dA=\pi$. With the Monte Carlo technique we sample $N$ points $(x,y)\in [-1,1] \times [-1,1]$ from a uniform distribution, setting $f_i=1$ if $x^2+y^2 \leq 1$ and $f_i=0$ otherwise. Then $V^{\prime}\bar{f}=\frac{4}{N}\sum_{i=1}^{N} f_i$ converges to $\pi$ at a rate $\propto 1/\sqrt{N}$. Going to $(r,\theta)$ coordinates first and performing the $\int d \theta= 2 \pi$ integration, however, we sample only $r\in [0,1]$ with integrand $f(r)= 2\pi r$ and obtain faster convergence.
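A minimal Python sketch of both estimates (the function names are illustrative):

```python
import math, random

def mc_area_2d(n):
    """Naive 2D estimate of the unit disk area: sample the square
    [-1,1] x [-1,1] (V' = 4) and average the indicator of the disk."""
    hits = sum(1 for _ in range(n)
               if random.uniform(-1, 1)**2 + random.uniform(-1, 1)**2 <= 1)
    return 4.0 * hits / n

def mc_area_1d(n):
    """After doing the angular integral by hand, only r remains:
    estimate the integral of 2*pi*r over r in [0,1]."""
    return sum(2.0 * math.pi * random.random() for _ in range(n)) / n
```

Both estimates approach $\pi$ at a rate $\propto 1/\sqrt{N}$.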
For a more \emph{apropos} example, consider Yukawa scattering in the first Born approximation. The Yukawa potential is $V(r)= \beta \exp(-\mu r)/r$, with constants $\beta$ and $\mu$ representing the strength $\beta$ and range $1/\mu$ of the Yukawa interaction. After integrating over $\theta$ and $\phi$, the scattering amplitude for a particle with mass $m$ and wave number $\kappa=\sqrt{2mE}/\hbar$ is
\begin{eqnarray}
f(\theta) & = & - \frac{2 m \beta}{ \hbar^2 \kappa} \int_{0}^{R} dr r \frac{e^{-\mu r}}{r} \sin \kappa r
\end{eqnarray}
\noindent for $R \rightarrow \infty$. The analytical result for the integral is $I(\mu,\kappa)=\kappa(\mu^2 + \kappa^2)^{-1}$. For concreteness, consider unit $\mu,\kappa$ so $I(1,1)=0.5$. See Table \ref{tab:mc} for Python code which implements the Monte Carlo technique for approximating $I(1,1)$. This is an example of \emph{simple} Monte Carlo sampling. Much more sophisticated sampling methods are available.
\begin{table}[t]
\begin{center}
\begin{tabular}{|l|} \hline
\texttt{import math, random, matplotlib.pyplot as plt} \\
\texttt{def function(x):} \\
\hspace{0.2in} \texttt{return math.exp(-x)*math.sin(x)} \\
\texttt{def monte\_carlo(n,R):} \\
\hspace{0.2in} \texttt{sample=[R*random.random() for i in range(n)]} \\
\hspace{0.2in} \texttt{integrands=[function(point) for point in sample]} \\
\hspace{0.2in} \texttt{return R*sum(integrands)/n} \\
\texttt{n\_values=range(1,10000)} \\
\texttt{approximation=[monte\_carlo(n,100) for n in n\_values]} \\
\texttt{plt.scatter(n\_values,approximation,s=1)} \\
\texttt{plt.show()} \\ \hline
\end{tabular}
\caption{Python code to evaluate the Yukawa scattering amplitude integral $I(1,1)$ discussed in the text by the Monte Carlo technique. The code also plots the approximation \emph{vs.} the number of integrand samplings.}
\label{tab:mc}
\end{center}
\end{table}
Once the integration for the total cross section $\sigma_{tot}$ or decay rate $\Gamma_{tot}$ has been performed with the Monte Carlo technique, a closely related task can be performed. For a fixed initial state, we would like to generate a sample of $N$ events whose final states, for large $N$, reproduce the differential cross sections for scattering or the differential distributions for decays. For example, for two-body final state four-vectors $p_{\mu}^{1},p_{\mu}^{2}$, we would like to generate $N$ events with probabilities consistent with the differential distributions determined by the Feynman rules. This is precisely what Monte Carlo \emph{event generators} do: they generate a sample of events with final state four-vectors which correctly reproduce the underlying final state physics.
A straightforward algorithm to correctly reproduce the final state probabilities proceeds as follows. Suppose the integrand $I=\frac{d\sigma}{d\Omega}$ or $I=\frac{d\Gamma}{d\Omega}$ reaches a maximum $I_{max}$ on the integration domain. Then repeat the following steps until $N$ final states are accepted:
\begin{enumerate}
\item Randomly sample $p_{\mu}^{1},p_{\mu}^{2},...,p_{\mu}^{n}$ from possible final states.
\item Randomly sample the unit interval, $R\in [0,1]$.
\item If $I(p_{\mu}^{1},p_{\mu}^{2},...,p_{\mu}^{n})/I_{max} \geq R$ accept the final state, otherwise reject it.
\end{enumerate}
For example, consider the differential cross section for relativistic ($\beta \approx 1$) scattering $e^+ e^- \rightarrow \gamma^{\star} \rightarrow q\bar{q}$. The differential cross section is
\begin{eqnarray}
\frac{d\sigma}{d\Omega} & = & 3 Q_{q}^2 \frac{\alpha^2}{4s} \left( 1+ \cos^{2} \theta \right)
\end{eqnarray}
\noindent so that $I_{max}=3Q_{q}^2 \alpha^2/2s$. If $\cos \theta$ is sampled uniformly on $[-1,1]$ and $I(\theta)/I_{max}$ is compared against a random number $R$ sampled from the unit interval, the algorithm above correctly reproduces the $\cos \theta$ distribution of the final state quarks for large $N$.
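The three steps above can be sketched in Python for this process; the overall constants cancel in $I/I_{max}=(1+\cos^{2}\theta)/2$:

```python
import random

def sample_costheta(n):
    """Acceptance-rejection sampling of cos(theta) for a differential
    cross section proportional to 1 + cos^2(theta)."""
    samples = []
    while len(samples) < n:
        c = random.uniform(-1.0, 1.0)   # step 1: propose a final state
        r = random.random()             # step 2: uniform R in [0,1]
        if (1.0 + c * c) / 2.0 >= r:    # step 3: accept with probability I/I_max
            samples.append(c)
    return samples
```

For large $n$ the accepted sample satisfies $\langle \cos^2 \theta \rangle = 2/5$, as follows from the normalized $1+\cos^2\theta$ distribution.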
\subsubsection{Whizard2: $W$, $H$, $Z$}
Polarized electron and positron beams are central to the design of the ILC. Any event generator considered for use in ILC studies must therefore simulate polarized beams. This requirement narrows the field of potential event generators considerably.
Two event generators which simulate polarized beams are in common use for ILC studies, Whizard2 \cite{Kilian:2007gr} and MG5 aMC@NLO \cite{Alwall:2014hca}.
Whizard2 further distinguishes itself by simulating two additional important initial state effects.
\emph{Initial state radiation} (ISR) refers to either photon emission from a charged initial state particle or gluon emission from a colored initial state particle. For an $e^+e^-$ collider it refers to photon emission from the electron or positron (or both).
\emph{Beamstrahlung} refers to bremsstrahlung which occurs when a particle in one collider bunch emits bremsstrahlung due to the field produced by the oncoming bunch, and is therefore sensitive to the detailed structure of the accelerator bunches.
Whizard2 can simulate both ISR and beamstrahlung. MG5 aMC@NLO simulates neither, although the authors have indicated plans to include ISR in future versions.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|l|l|} \hline
\multicolumn{1}{|c|}{Whizard2} & \multicolumn{1}{|c|}{Madgraph5\_aMC@NLO} \\ \hline
\multicolumn{2}{|c|}{Model, Process and Parameters} \\ \hline
\texttt{model=SM\_CKM} & \texttt{import model sm} \\
\texttt{process zh250pm=E1,e1=>e2,E2,b,bbar} & \texttt{generate e+e-> zh,z> mu+mu-,h> bb\textasciitilde }\\
\hspace{0.1in} \texttt{$\$ \! \!$ restrictions="1+2\textasciitilde{}Z\&\&3+4\textasciitilde{}Z\&\&5+6\textasciitilde{}H"} & \texttt{output zh250pm}\\
\texttt{compile} & \texttt{launch} \\
\texttt{mH=125.0 GeV} & \texttt{set mh 125.0}\\
\texttt{wH=0.004 GeV} & \texttt{set wh 0.004}\\ \hline
\multicolumn{2}{|c|}{Initial State $\sqrt{s}$, ISR and Polarization} \\ \hline
\texttt{beams= E1,e2=>isr,isr} & \texttt{set lpp1 0} \\
\hspace{2.5in} & \texttt{set lpp2 0} \\
\texttt{beams\_pol\_density=@(+1), @(-1)} & \texttt{set polbeam1 +30}\\
\texttt{beams\_pol\_fraction=30\%, 80\%} & \texttt{set polbeam2 -80} \\
\texttt{sqrts=250 GeV} & \texttt{set ebeam1 125.} \\
\texttt{integrate(zh250pm)} & \texttt{set ebeam2 125.} \\ \hline
\multicolumn{2}{|c|}{Final State QCD Showering and Hadronization} \\ \hline
\texttt{$\$ \! \!$ shower\_method="PYTHIA6"} & \texttt{shower=Pythia8} \\
\texttt{?hadronization\_active=true} & \hspace{2.5in}\\
\texttt{$\$ \! \!$ hadronization\_method="PYTHIA6"} & \\ \hline
\multicolumn{2}{|c|}{Event Generation} \\ \hline
\texttt{n\_events=10000} & \texttt{set nevents 10000}\\
\texttt{seed=12345} & \texttt{set iseed 12345} \\
\texttt{sample\_format=lhef,stdhep} & \texttt{} \\
\texttt{simulate(zh250pm)} & \\ \hline
\end{tabular}
\caption{Whizard2 and Madgraph5\_aMC@NLO scripts to generate $10^4$ Higgstrahlung events at $\sqrt{s}=250$~GeV. In both cases beams are polarized, 30\% positively polarized positrons and 80\% negatively polarized electrons. Here Whizard2 uses Pythia6 for final state showering and hadronization while MG5 aMC@NLO uses Pythia8. Whizard2 simulates ISR while MG5 aMC@NLO does not. }
\label{tab:generators}
\end{center}
\end{table*}
The Whizard2 executable runs with input from a script which specifies the parameters of the event generation. See Table \ref{tab:generators} (left) for a script to calculate the cross section for Higgstrahlung events with polarized beams and generate $10^4$ events. The first step is to specify a model, which includes all particles, their masses and decay widths, and the interactions. For example, \texttt{model=SM} specifies to use the SM model. This is the default, so if model is not specified then Whizard2 assumes you want the SM with the trivial CKM matrix. Other models implemented in Whizard2 are SM\_CKM (the SM with nontrivial CKM), MSSM\_CKM (the Minimal Supersymmetric SM with nontrivial CKM) and NMSSM\_CKM (the Next to MSSM with nontrivial CKM). Model parameter settings can be displayed and changed: \texttt{show(mH)} displays the default Higgs boson mass, \texttt{mH=125.0 GeV} sets it explicitly.
The next step is to specify a process to simulate and give it a name. For example, \texttt{process zh250pm=E1,e1 => E2,e2,b,bbar} specifies $b$ quark pair and muon pair production from electron positron initial states and names it \texttt{zh250pm} in order to reference it later. One can impose restrictions on a process when defining it. For example, adding the text \texttt{$\$ \! \!$ restrictions="1+2\textasciitilde{}Z \&\&3+4\textasciitilde{}Z \&\&5+6\textasciitilde{}H" } to the above requests Whizard2 to couple the $\mu^+ \mu^-$ pair to an internal $Z$ boson and the $b$ quark pair to an internal Higgs boson, thereby requiring the $s$-channel Higgstrahlung. When the \texttt{compile} command is invoked, Whizard2 generates, compiles and loads Fortran code which calculates the amplitudes for the defined process.
Next one specifies the beam parameters, first defining $\sqrt{s}$ with \texttt{sqrts=250 GeV}, for example, and then specifying beam polarization and ISR, if desired, with the \texttt{beams}, \texttt{beams\_pol\_density} and \texttt{beams\_pol\_fraction} variables. Whizard2 now has all information necessary for calculating a cross section for the defined process, and invoking \texttt{integrate(zh250pm)} performs the necessary integrals for Higgstrahlung $e^+ e^- \rightarrow ZH$ with $Z \rightarrow \mu^+ \mu^-$ and $H \rightarrow b \bar{b}$ at $\sqrt{s}=250$~GeV with the specified beams.
After specifying a few more parameters, Whizard2 is ready to generate events which reproduce the correct differential cross sections. The next few commands in Table \ref{tab:generators} instruct Whizard2 to pass the handling of some QCD effects in the final state to Pythia6 (see below). Finally, the desired number of events are specified with \texttt{n\_events=10000}, the desired output formats are specified \texttt{sample\_format=lhef,stdhep}, and the event generation is invoked with \texttt{simulate(zh250pm)}. If the script from Table \ref{tab:generators} is saved in a file \texttt{script.sin} and the binary is \texttt{whizard}, the execution command is simply
\begin{center}
\texttt{<path>/bin/whizard script.sin}
\end{center}
\noindent The \texttt{lhef} format is the Les Houches Event File (LHEF) format \cite{Alwall:2006yp}, agreed to by a group of generator experts in 2006. The \texttt{stdhep} format is the standardized HEP (StdHep) format \cite{stdhep} which is still in common use. The \texttt{hepmc} format can also be specified for the HEP Monte Carlo format \cite{Dobbs:2001ck}. LHEF reports events prior to final state QCD effects carried out by Pythia6, while StdHep and HepMC formats include such effects.
\subsubsection{MG5 aMC@NLO}
MG5 aMC@NLO is a merging of the older MadGraph program with next-to-leading order techniques, a Monte Carlo at Next-to-Leading Order. Feynman diagrams at arbitrary orders in QED and QCD can be easily specified when defining a process. Table \ref{tab:generators} (right) shows a script for the same process as the Whizard2 script in Table \ref{tab:generators} (left): Higgstrahlung $e^+ e^- \rightarrow ZH$ at $\sqrt{s}=250$~GeV with $Z \rightarrow \mu^+ \mu^-$ and $H \rightarrow b \bar{b}$.
As with Whizard2, the first step is to define a model. Here \texttt{import model sm} specifies the SM. Other models include \texttt{loop\_sm}, \texttt{MSSM\_SLHA2} and \texttt{heft}. The default model is SM. The next step is to define a process within the chosen model, in this case \texttt{generate e+e- > zh, z>mu+mu-, h>bb\~}. Finally invoking \texttt{output zh250pm} names the process, creates a directory for event generation code, and generates Feynman diagrams. Invoking \texttt{launch} then allows the user to set various parameters of the run.
First, the Higgs boson mass and width are set, replacing the default settings. Next the initial state parton density functions are set with \texttt{set lpp1 0} and \texttt{set lpp2 0}, meaning we use trivial parton density functions for electrons. Next the energy and polarizations of each beam are set. Finally, \texttt{shower=Pythia8} instructs MG5 aMC@NLO to pass final state QCD effects to Pythia8 (see below) and after the number of events and a random number seed are set, the cross section calculation and event generation begin. The execution command is
\begin{center}
\texttt{<path>/bin/mg5\_aMC script.mg5}
\end{center}
Two file formats are saved by default, LHEF and HepMC, in the Events directory. In the bin directory an executable \texttt{generate\_events} is also produced. In the Cards directory one finds the \texttt{run\_card.dat} and \texttt{param\_card.dat}, which specify the parameters of the run and parameters of the model and process, respectively. Finally, \texttt{index.html} includes the cross section and Feynman diagrams.
\subsubsection{Pythia6 and Pythia8 \label{sec:pythia}}
Pythia is frequently used for specialized functions for handling quarks and gluons in the final state. Pythia6 \cite{Sjostrand:2006za} is the last version to use Fortran, the original implementation language. Pythia8 \cite{Sjostrand:2007gs} is the C++ implementation. Both Pythia6 and Pythia8 are in common use (there is no Pythia7).
\emph{Final state radiation} (FSR) refers to either photon emission from a charged final state particle or gluon emission from a colored final state particle. In principle FSR can be included in the Feynman diagram and calculated explicitly, but in practice it is straightforward to use \emph{parton showering} for the $q \rightarrow g q$ and $e \rightarrow \gamma e$ processes after the calculation of the diagram without these vertices. Parton refers to the $q$ or $e$, shower refers to the showerlike cascade of particles from $q \rightarrow g q \rightarrow q\bar{q}q,...$, for example. The Pythia code for parton showering has been extensively tested and tuned against experiment.
\emph{Fragmentation} refers to the process of meson and baryon formation from energetic quark pairs under the conditions imposed by QCD confinement. In Pythia, \emph{hadronization} means the formation of hadrons through both fragmentation and the decay of the hadrons. Because QCD confinement is not well understood, fragmentation must be simulated using phenomenological models. Pythia implements the \emph{Lund fragmentation model}, which uses a relativistic massless string connecting the two fragmenting quarks with a linear potential $V=\kappa z$, where $z$ is the distance separating the quarks and $\kappa$ is a constant determined experimentally ($\kappa \approx$~1 GeV/fm).
In the Lund model, when the initial quarks are separated enough to concentrate a large amount of energy in the string, a new quark pair $q^{\prime} \bar{q}^{\prime}$ appears in the middle of the string and two proto-mesons $q\bar{q}^{\prime}$ and $q^{\prime} \bar{q}$ appear, each pair separated by a new string. This fragmentation repeats until the energy in the proto-meson is close to the mass of a physical meson. Baryons are handled similarly, except that instead of a quark pair separated by a string, a quark and diquark are separated by a string.
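A toy sketch of this iterative splitting follows (not the actual Lund algorithm, which draws fragments from a tuned splitting function and conserves full four-momentum; the meson mass threshold here is illustrative):

```python
import random

M_MESON = 0.14  # illustrative proto-meson mass scale in GeV

def fragment(energy):
    """Recursively split the string energy between two proto-mesons
    until each fragment is near the meson mass scale."""
    if energy <= 2.0 * M_MESON:
        return [energy]
    z = random.uniform(0.2, 0.8)  # a new quark pair appears along the string
    return fragment(z * energy) + fragment((1.0 - z) * energy)
```

The fragments conserve the initial string energy by construction, and fragmentation stops once each proto-meson carries an energy near the meson mass scale.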
In the scripts in Table \ref{tab:generators}, both generators invoke Pythia for showering and hadronization of the $b\bar{b}$ quark pair. Whizard2 employs Pythia6 while MG5 aMC@NLO employs Pythia8 (\texttt{shower=Pythia8} specifies Pythia8 for hadronization as well as showering). In fact both generators can invoke either. For ILC studies, the advantage of using Pythia6 over Pythia8 is that the parameter tunes from LEP2 can be used. In Pythia8 there is no exact correspondence to the Pythia6 tuning parameters.
\subsection{Simulation of SiD response}
\subsubsection{Geant4: GEometry ANd Tracking}
Geant4 \cite{AGOSTINELLI2003250,Allison:2006ve,ALLISON2016186} is a giant in the world of simulating physical processes in matter, and its applications run well beyond collider physics. Geant4 enables the precise description of detector geometry down to the finest detail, and tracks particles from their origin through the detector, simulating all physical processes which apply to the particles and modifying position, energy and momentum accordingly. GEANT-3, its predecessor, was written in Fortran while Geant4 is written in C++.
Geant4 needs a source of the particles meant to traverse the detector in each event. In the collider detector case this means the files made by generators like Whizard2 and MG5 aMC@NLO containing collider events, but other sources can also be specified. Single particle guns, for example, can be specified, and are useful in evaluating the performance of the detector. Each particle has a definition including all of its properties, like charge, mass and spin, necessary for implementing the processes assigned to apply at each step.
Each event in Geant4 thus contains one or more particles, each particle defined by the particle's physical properties and four-vector. Before processing, the Geant4 event contains only these things, and after processing the event contains only \emph{hits} and \emph{digitizations}, the energy deposits and electronics responses to those energy deposits in the detector.
In Geant4 detector geometry is described by volumes, the largest of which is the \emph{world} volume. Smaller volumes are placed within the world volume, and each such volume may contain any number of daughter volumes. A \emph{logical} volume, in Geant4, is defined by a shape and the matter which composes it. Shapes may be arbitrarily complex, or simple, as with a box or cylinder. Matter can be defined as an element (atomic mass, atomic number, cross section, etc) or as a material (density, state, temperature, radiation length, etc). Once a logical volume is placed physically within its mother volume, it is a \emph{physical} volume. This hierarchy of volumes allows local, as opposed to global, coordinate systems within each detector volume.
Tracking in Geant4 means applying a list of physical processes to particles in discrete steps, either time steps in the case of decay, or spatial steps in the case of interactions, and altering particle four-vectors accordingly. Of the major categories of physical process in Geant4, most relevant to collider detectors are the electromagnetic, hadronic, decay and transportation processes. For the electromagnetic process the step scale is set by the radiation length $X_0$ of the traversed material, and for the hadronic process by the nuclear interaction length $\lambda$. For the decay process, the step scale is set by the particle lifetime $\tau$.
For each process, there are actions which are applied before, during, and after each step to alter the traversing particle as well as the material around it. As the particle traverses the detector material, each relevant process proposes a numerical value for the step, and the smallest such step is chosen to implement all processes. Geant4 finishes applying processes to particles when the particle decays or the entire detector volume has been traversed.
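The step-selection logic can be sketched as follows (this is an illustration of the idea, not the Geant4 API; the proposal values and material scales are invented for the example):

```python
def choose_step(proposals):
    """Each process proposes a step length; the smallest proposal wins,
    and that step is used when applying all processes."""
    winner = min(proposals, key=proposals.get)
    return winner, proposals[winner]

# Hypothetical proposals, with scales as discussed in the text:
# electromagnetic ~ radiation length X0, hadronic ~ interaction
# length lambda, decay ~ the (boosted) particle lifetime.
proposals = {
    "electromagnetic": 0.1 * 1.76,   # cm, a fraction of X0 in iron
    "hadronic":        0.05 * 17.0,  # cm, a fraction of lambda in iron
    "decay":           1.0e6,        # cm, effectively stable particle
}
winner, step = choose_step(proposals)  # electromagnetic wins here
```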
A minimal Geant4 program to simulate a collider detector might contain the following hierarchy of classes:
\begin{enumerate}
\item \texttt{G4DetectorConstruction}
\item \texttt{G4PhysicsList}
\begin{enumerate}
\item \texttt{G4ElectromagneticProcesses}
\item \texttt{G4HadronicProcesses}
\end{enumerate}
\item \texttt{G4ActionInitialization}
\begin{enumerate}
\item \texttt{G4PrimaryGeneratorAction}
\end{enumerate}
\end{enumerate}
\noindent Within the \texttt{G4DetectorConstruction} class lies the necessary code to construct trackers, calorimeters and any other specialized subdetectors. Among the electromagnetic process classes for an electron in Geant4, for example, are \texttt{G4eIonisation} and \texttt{G4eBremsstrahlung}. Within the primary generator class the Whizard2 and MG5 aMC@NLO collider event output can be specified, for example.
\subsubsection{DD4hep: Detector Description for HEP}
While Geant4 is complete and standalone, it is also generic and widely applicable. DD4hep \cite{frank_markus_2018_1464634} was designed as a generic collider detector description toolkit with a more specialized goal: a full, single source detector description suitable for the full lifetime of a collider experiment with full visualization and alignment functionality.
DD4hep simplifies the use of Geant4 for HEP, in particular for collider detector simulation. It serves as a simplifying interface between the user simulating a collider detector and Geant4. It provides the \emph{compact} XML detector description suitable for early stages of detector design as well as the full detector description suitable for a running experiment.
Nine XML tags define the detector description in DD4hep. Among them are the \texttt{define}, \texttt{materials}, \texttt{display}, \texttt{readouts} and \texttt{fields} tags. In the \texttt{define} tag constants like subdetector component dimensions are defined numerically. In \texttt{materials} all materials and their properties necessary for detector construction are defined. The \texttt{display} tag defines how each subdetector appears visually in a detector display. The \texttt{fields} tag defines the magnetic field created by a collider detector solenoid, for example.
The \texttt{readouts} tag defines how each subdetector cell, for example a pixel in the ECal or a strip in the tracker, reports itself. Each cell thus has a unique cell ID, defined by an identifier string. For example, the string
\begin{center}
\texttt{$\langle$id$\rangle$system:5, barrel:1, layer:2, module:4, sensor:2, side:32:-2, strip:24$\langle$id$/\rangle$}
\end{center}
\noindent defines the bit-field layout of the cell ID: 5 bits for the subdetector system, 1 bit for the barrel flag, 2 bits for the layer, 4 bits for the module, 2 bits for the sensor, a signed 2-bit field starting at bit 32 for the side, and 24 bits for the strip. Any hit report created by a particle traversing a cell will report the cell ID packed in this layout.
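Reading the identifier string as a bit-field layout (field name, then width in bits), a toy pack/unpack in Python illustrates the idea; DD4hep itself provides this in C++, and the signed \texttt{side} field is omitted here for simplicity:

```python
# Toy cell ID bit packing, with field widths taken from the example string.
FIELDS = [("system", 5), ("barrel", 1), ("layer", 2),
          ("module", 4), ("sensor", 2), ("strip", 24)]

def encode(values):
    """Pack named field values into a single integer cell ID."""
    cellid, shift = 0, 0
    for name, width in FIELDS:
        assert 0 <= values[name] < (1 << width), name
        cellid |= values[name] << shift
        shift += width
    return cellid

def decode(cellid):
    """Unpack a cell ID back into its named field values."""
    values, shift = {}, 0
    for name, width in FIELDS:
        values[name] = (cellid >> shift) & ((1 << width) - 1)
        shift += width
    return values
```

A round trip recovers the field values: \texttt{decode(encode(vals)) == vals}.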
DD4hep also includes tools for detector alignment studies and several utilities useful for debugging detector descriptions: \texttt{geoDisplay} for visualizing the detector with the compact description, \texttt{geoConverter} for converting the DD4hep XML description to other detector representations like the one used by Geant4, \texttt{checkOverlaps.py} for checking if detector volumes intersect, \texttt{checkGeometry.py} for overlap checking and scans for detector boundary crossing, and \texttt{materialScan} for reporting all materials traversed in a specified direction.
\subsubsection{ILCsoft simulation with DD4hep/Geant4}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|l|} \hline
\multicolumn{1}{|c|}{DD4hep SiD ECal XML} \\ \hline
$\langle$ layer repeat="20" $\rangle$ \\
\hspace{0.25in} $\langle$ slice material = "TungstenDens24" thickness = "0.25*cm" /$\rangle$ \\
\hspace{0.25in} $\langle$ slice material = "Air" thickness = "0.025*cm" /$\rangle$ \\
\hspace{0.25in} $\langle$ slice material = "Silicon" thickness = "0.032*cm" sensitive = "yes"/$\rangle$ \\
\hspace{0.25in} $\langle$ slice material = "Copper" thickness = "0.005*cm" /$\rangle$ \\
\hspace{0.25in} $\langle$ slice material = "Kapton" thickness = "0.030*cm" /$\rangle$ \\
\hspace{0.25in} $\langle$ slice material = "Air" thickness = "0.033*cm" /$\rangle$ \\
$\langle$ /layer$\rangle$ \\ \hline
\multicolumn{1}{|c|}{Delphes SiD ECal TCL} \\ \hline
module SimpleCalorimeter ECal \{ \\
\hspace{0.25in} set ParticleInputArray ParticlePropagator/stableParticles \\
\hspace{0.25in} set TowerOutputArray ecalTowers \\
\hspace{0.25in} add EnergyFraction \{0\} \{0.0\} \\
\hspace{0.25in} add EnergyFraction \{11\} \{1.0\} \\
\hspace{0.25in} add EnergyFraction \{22\} \{1.0\} \\
\hspace{0.25in} add EnergyFraction \{111\} \{1.0\} \\
\hspace{0.25in} set ResolutionFormula \{sqrt(energy\textasciicircum 2 * 0.01\textasciicircum 2 + energy * 0.17\textasciicircum 2)\} \\
\} \\ \hline
\end{tabular}
\caption{Full and fast simulation descriptions of the SiD ECal. Above, the DD4hep XML fragment which defines the SiD 20 thin ECal layers. Below, the Delphes TCL fragment which defines the SiD ECal performance.}
\label{tab:fullfast}
\end{center}
\end{table*}
In ILCsoft the executable for invoking Geant4 for detector simulation using a DD4hep compact detector description is \texttt{ddsim}. See Appendix \ref{appendix2} for instructions on installing and testing ILCsoft.
Assuming a generator file in HepMC format, a DD4hep compact detector description XML file and an output file in LCIO format, the syntax for invoking \texttt{ddsim} is
\begin{center}
\texttt{ddsim --compactFile compact.xml --inputFiles in.hepmc --outputFile out.lcio}
\end{center}
\noindent There are many options for running \texttt{ddsim} not listed here. For concreteness we assumed a HepMC input file, but \texttt{ddsim} also reads generator files in StdHep and LCIO formats. For a full description of the LCIO file format, see ref. \cite{gaede_2017}.
As an example of how a subdetector is configured with DD4hep, consider the XML fragment used to define the 20 thin ECal layers in the SiD compact description. See Table \ref{tab:fullfast} (top), where it is evident how to specify the materials used, their thicknesses, and the number of layers required.
\subsubsection{Delphes fast simulation}
Geant4 detector simulation is an example of \emph{full} simulation, that is, all underlying physical processes are simulated. Full simulation is typically resource intensive, one collider event taking computation time of order 1 to 10 seconds depending on the complexity of the event, complexity of the detector, and the processor speed.
\emph{Fast} simulation bypasses the underlying physics and simply applies parametrized identification efficiencies, fake rates, momentum resolution and energy resolution as measured in data or full simulation. Parametrizations like eqs. \ref{eqn:parametrize1} and \ref{eqn:parametrize2} are applied directly to generator particle four-vectors using random numbers.
A fast simulation can be considered a map from the domain of generator particle four-vectors (Monte Carlo \emph{truth}) in each event to a range of energy and momentum \emph{smeared} particles which survive a veto to account for identification inefficiencies. The amount of smearing depends on the resolution parametrization as encoded in the fast simulation. Thus there is no detector geometry, material specification or physical process simulation. Such simplification speeds processing time greatly, so that fast simulation can be several orders of magnitude faster than full simulation.
Delphes \cite{Selvaggi:2014mya,Mertens:2015kba} is a powerful and flexible fast simulation C++ program in widespread use. It has been used extensively for fast LHC studies and is now in use for possible future colliders. Delphes uses the TCL scripting language to define detector performance in a detector \emph{card}. The detailed baseline design performance of the SiD detector has been encoded in the Delphes SiD (DSiD) card, available on HepForge \cite{Potter:2016pgp}. See Table \ref{tab:fullfast} (bottom) for the TCL fragment which defines the performance of the SiD ECal in DSiD. Finally see ref. \cite{Potter:2017rlo} for examples of ILC backgrounds generated with Whizard2 and MG5 aMC@NLO and simulated with Delphes using the DSiD card.
Once Delphes is installed, the executable name depends on the format of the generator input file. For StdHep, the executable is \texttt{DelphesSTDHEP}. To run with the DSiD TCL card,
\begin{center}
\texttt{<path>/bin/DelphesSTDHEP delphes\_card\_DSiDi.tcl out.root in.stdhep}
\end{center}
\noindent Delphes can also read HepMC, LHEF and other generator formats. The output file is in Root \cite{Brun:1997pa} format. Root is a C++ data analysis framework, the successor to the Fortran based forerunner PAW. Most recent analysis in HEP is carried out in Root, though Python is becoming more prevalent.
While fast simulation delivers results on a short timescale, the underlying assumptions are typically too optimistic. In Table \ref{tab:fullfast} (bottom), for example, the SiD ECal card specifies that 100\% of the energy of electrons, photons and neutral pions (since $\pi^0 \rightarrow \gamma \gamma$) is contained and reported. This ignores the phenomenon of leakage, in which the electromagnetic shower starts too late for all energy to be contained in the ECal. It also assumes, by default, that all other particles leave no energy in the ECal. But hadrons, for example, will often start hadronic showers in the ECal before entering the HCal, and some hadron showers may be entirely contained in the ECal.
Similar arguments for other subdetectors apply. With tracks, for example, one can specify an identification efficiency, but one cannot easily include fake tracks created by complicated beam scenarios in which particles unrelated to the primary interaction leave hits in the tracker. Because the performance of fast simulation is typically too optimistic, its performance for a particular study is usually compared to performance on a smaller full simulation for validation or, at least, an estimate of systematic uncertainty due to the use of fast simulation.
Fast simulation not only bypasses full simulation of the detector geometry and physical processes, it also bypasses the necessary task of converting hits and digitizations reported in the full simulation into particle candidate four-vectors. We consider this task in the following section.
\subsection{Reconstruction of ILC/SiD events \label{sec:reco}}
\subsubsection{Track finding and fitting}
A charged particle traveling in a constant magnetic field directed in the $z$ direction exhibits helical motion. In the $xy$ plane the particle follows a circle, while in the $z$ direction the particle follows a line. Where the helix intersects the layers of the tracker, the particle most probably leaves a hit detected by the tracker readout. Track \emph{finding} is the project of correctly grouping together the hits left by the particle so that, in track \emph{fitting}, the helix can be explicitly reconstructed.
Suppose a tracker is such that, in any direction, a particle will traverse $m$ layers. Then any stable or quasistable particle traversing the tracker will leave $m$ hits (assuming 100\% hit efficiency). If a collider event contains $n$ charged particles, the task of track finding is to correctly group the $n\times m$ hits into the $n$ tracks left by each such particle. There is a finite number of such groupings,
\begin{eqnarray}
N_{g} & = & {n m \choose m} \times {(n-1)m \choose m} \times \cdots \times {m \choose m}
\end{eqnarray}
\noindent but $N_g$ gets unreasonably large for even modest track multiplicity $n$. Fortunately there are constraints on each grouping. First, each hit in a nonpathological track must belong to a distinct tracker layer. Second, each track must be consistent with the tracker spatial resolution.
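The combinatorial growth of $N_g$ is easy to verify directly. A minimal sketch, evaluating the product of binomial coefficients above (the function name is illustrative):

```python
from math import comb

def n_groupings(n, m):
    """Number of ways to assign n*m hits to n ordered tracks of m hits each,
    i.e. the product of binomial coefficients in the equation above."""
    total, remaining = 1, n * m
    for _ in range(n):
        total *= comb(remaining, m)
        remaining -= m
    return total

# Even modest multiplicities explode combinatorially:
print(n_groupings(2, 5))    # -> 252
print(n_groupings(20, 5))   # astronomically large
```

This is why the layer and $\chi^2$ constraints described next are essential: exhaustive enumeration of groupings is hopeless for realistic track multiplicities.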
The measure of spatial resolution consistency is $\chi^2 = \frac{1}{n_{hits}-1} \sum_{hits} \vert \vec{r}_{fit}-\vec{r}_{hit} \vert^2/\sigma^2$, where $\vec{r}_{fit}$ is the point of intersection of the helical fit with the tracker layer, $\vec{r}_{hit}$ is the measured position of the associated hit and $\sigma$ is the spatial resolution of the tracker. Thus, of the $N_g$ track groupings, the unique grouping for which for each track all hits lie on distinct layers and the $\chi^2$ is minimal is likely to be the correct one.
Constructing all track groupings for which every track has hits with distinct layers is straightforward using track \emph{seeds}. Two hits underdetermine a circle, four hits overdetermine it. A seed track consists of three hits, each from a distinct layer. The seed track is fitted with a circle in $xy$ and a line in $sz$, where $s$ is arclength, and the fit is extrapolated to the remaining $m-3$ layers not already in the seed. At each such layer the nearest hit is accumulated to the seed. All tracks from such seeds are found and for each the $\chi^2$ is calculated. After requiring $\chi^2 < \chi^{2}_{max}$ for some maximum $\chi^2$, the remaining tracks are good candidates for charged particle trajectory reconstruction.
Once the track finding has finished, a final $xy$ circular fit to determine the three circular parameters (center $(x_0,y_0)$ and radius $R$) and $sz$ linear fit to determine two linear parameters (slope $m$ and intercept $b$) are performed for each track. Many fitting procedures are possible: least squares fitting suffices. There are five track fit parameters to extract:
\begin{enumerate}
\item $\Omega=R^{-1}$, the curvature where $R$ is the radius
\item $d_0$, the transverse impact parameter in $xy$
\item $\phi_0$, the azimuthal angle $\phi$ at closest approach in $xy$
\item $\tan \lambda=\frac{ds}{dz}$, the ratio of arclength $s$ to $z$ traversed
\item $z_0$, the longitudinal impact parameter in $z$
\end{enumerate}
\noindent The impact parameters $d_0,z_0$ measure how far from the interaction point the track began. The radius of curvature determines the transverse momentum $p_T$ from eq. \ref{eqn:pisqbr}, and $\tan \lambda$ determines the longitudinal momentum $p_z$. For a full definition of these parameters as saved in LCIO files, see ref. \cite{kramer_2006_004}.
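The $xy$ circular fit can be sketched with a simple algebraic least-squares method; this is an illustrative choice, not the fitter used in ILCsoft. Writing the circle as $x^2+y^2+Dx+Ey+F=0$ makes the problem linear in $(D,E,F)$, and the curvature follows as $\Omega=R^{-1}$:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) by linear least squares,
    then convert to center (x0, y0) and radius R."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0 = -D / 2.0, -E / 2.0
    R = np.sqrt(x0**2 + y0**2 - F)
    return x0, y0, R

# Four hits on a circle of radius 3 centered at (1, 2):
x0, y0, R = fit_circle([4, 1, -2, 1], [2, 5, 2, -1])
omega = 1.0 / R   # the curvature track parameter
```

Production track fitters use more sophisticated techniques (e.g. Kalman filters that account for material effects), but the linear algebraic fit illustrates the idea.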
This idealized description of track finding and fitting is complicated in practice. Adjustments in track finding must be made for the case of nonzero hit inefficiency. Electrons will frequently emit bremsstrahlung while transiting the tracker, after which the track radius of curvature changes. Frequently there are additional hits in the tracker from events unrelated to the primary event, including noise hits. All of these issues are addressed in track finding. For the case of charged particles leaving fewer than 3 hits there is no redress.
One example of a track finder and fitter implemented in ILCsoft is Conformal Tracking \cite{Brondolin:2019awm}. This algorithm first maps hits from $(x,y) \mapsto (u,v)=(x/(x^2+y^2),y/(x^2+y^2))$. This map sends circles centered on the origin to circles centered on the origin, exchanging large and small radii, and sends circles which pass through the origin, such as the trajectories of prompt tracks, to straight lines. The effect is to make track pattern recognition intuitively clear, though at present no clear performance advantage has been demonstrated.
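The conformal map itself is a one-liner. The sketch below (illustrative, not the ILCsoft implementation) checks that points on a circle of unit radius centered at $(1,0)$, i.e. a track circle passing through the origin, all map onto the vertical line $u = 1/2$:

```python
def conformal(x, y):
    """Conformal map (x, y) -> (u, v) = (x, y) / (x^2 + y^2)."""
    r2 = x * x + y * y
    return x / r2, y / r2

# Points on the circle (x-1)^2 + y^2 = 1, which passes through the origin:
for x, y in [(2.0, 0.0), (1.0, 1.0), (1.0, -1.0)]:
    u, v = conformal(x, y)
    # every u comes out as 0.5: the circle has become a straight line
```

In the conformal space, finding a track reduces to finding collinear points, which is the pattern-recognition simplification the algorithm exploits.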
\subsubsection{Calorimeter cluster finding}
A calorimeter hit is a spatial location, like a tracker hit, together with an energy deposit. A single particle traversing the calorimeter, whether charged or neutral, will leave a hit in each traversed layer if it undergoes sufficient energy loss, and may leave hits in adjacent cells in the same layer.
The group of calorimeter hits left by a single particle is a \emph{cluster}. Cluster \emph{finding} is the project of correctly grouping calorimeter hits in events with multiple particles leaving multiple clusters. Once the hits are grouped into clusters, the cluster energy for each cluster is calculated by summing the cluster hit energies.
One strategy for cluster finding exploits the fact that clusters left by multiple particles are not usually contiguous, they are usually topologically distinct. The algorithm for a \emph{topological} cluster finder begins with seed hits, usually all hits satisfying a minimum energy requirement, and for each seed proceeds by recursively associating hits adjacent to the seed in the same layer and nearby hits in adjacent layers. If two seeds belong to the same cluster, they are naturally merged by the algorithm. The algorithm proceeds until there are no more adjacent hits. Setting the minimum energy for a seed is a tradeoff between the speed of the cluster finding and identification of low energy clusters. If the minimum seed energy is zero, all clusters will be identified but with a time penalty. Conversely, if the seed energy is high, low energy clusters will not be identified but the cluster finding is fast.
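A minimal topological cluster finder along these lines might look as follows. The hit and adjacency structures are hypothetical stand-ins for a real calorimeter geometry, and the seed threshold plays the speed/completeness tradeoff role described above:

```python
def find_clusters(hits, adjacency, seed_energy=0.1):
    """Toy topological cluster finder. `hits` maps hit id -> energy;
    `adjacency` maps hit id -> list of neighboring hit ids (same layer
    or adjacent layers). Clusters grow recursively from energetic seeds."""
    unassigned = set(hits)
    clusters = []
    for seed in sorted(hits, key=hits.get, reverse=True):
        if seed not in unassigned or hits[seed] < seed_energy:
            continue
        # grow the cluster outward from the seed
        cluster, stack = [], [seed]
        unassigned.discard(seed)
        while stack:
            h = stack.pop()
            cluster.append(h)
            for nb in adjacency.get(h, []):
                if nb in unassigned:
                    unassigned.discard(nb)
                    stack.append(nb)
        clusters.append(cluster)
    return clusters
```

Two seeds in the same connected group of hits are automatically merged, since the second seed is already assigned by the time it is considered; hits below the seed threshold are still absorbed when adjacent to a growing cluster.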
Complicating cluster finding are cases in which distinct clusters, by chance, overlap. There may also be noise hits which do not properly belong in any cluster. Clusters may be split between barrel and endcaps, and for hadrons they may be split between ECal and HCal. All such complicating issues are addressed in a cluster finder.
After cluster finding, the concept of \emph{particle flow} enables further classification of clusters. Particle flow relies on the straightforward observation that clusters left by charged particles will have associated tracks left in the tracker, while clusters left by neutral particles will not. A particle flow algorithm will extrapolate tracks into the calorimetry and associate each track to the nearest cluster. Track-associated clusters are considered to be left by charged particles and the momentum of the track, inherently more precise than the energy in the cluster, supersedes the energy of the cluster. Any cluster unassociated to a track is considered to be left by a neutral particle. PandoraPFA \cite{Thomson200925} is an example of a particle flow algorithm which has been implemented in ILCsoft.
Finally, if most of the cluster energy lies in the ECal, the cluster is considered to be left by an electron or photon. If most of the cluster energy lies in the HCal the cluster is considered to be left by a hadron. Thus are built candidate lists of electrons and charged hadrons, which inherit the inherently more precise track momentum, and candidate lists of photons and neutral hadrons, which inherit the cluster energy. Tracks which cannot be associated to any calorimeter cluster but which extrapolate to muon detector hits build candidate lists of muons. Objects identified in this way are called \emph{particle flow objects} (PFOs).
\subsubsection{Jet and vertex finding}
In sect. \ref{sec:pythia} we saw that quarks and gluons produced in a collider event undergo showering and hadronization: gluons split to quark pairs, quarks radiate gluons, confinement generates many mesons and baryons, and those mesons and baryons decay. The project of \emph{jet finding} is to correctly assign the reconstructed stable and quasistable particles to a \emph{jet} and to reconstruct the four-vector of the initiating quark or gluon. The project of \emph{vertex finding} is to identify the location of a hadron decay within a jet from the tracks left by the charged particles in the hadron decay products.
Two types of jet finding algorithms are in common use: cone-based and sequential recombination algorithms. Cone-based algorithms start from a set of seeds, usually objects which satisfy a minimum energy, and group all other objects within a cone of fixed angular radius $R$ together. The objects may be hits, clusters, tracks, or PFOs. Each seed defines a cone axis. For each seed, the energy weighted position $\sum_i E_i \vec{r}_i/\sum_i E_i$ (the sum is over all objects $i$ in the cone), or \emph{centroid}, defines a new geometric axis and the procedure iterates until the cone axis converges.
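The iterative cone step can be sketched as below. The flat $(\eta,\phi)$ coordinates are a simplifying assumption, $\phi$ wrap-around is ignored, and the cone is assumed never to be empty:

```python
import numpy as np

def iterate_cone(etas, phis, energies, seed_axis, R=0.5, tol=1e-6):
    """Iterate a cone axis to the energy-weighted centroid of the objects
    inside the cone, until the axis converges. Uses flat (eta, phi)
    coordinates, ignores phi wrap-around, and assumes a non-empty cone."""
    axis = np.asarray(seed_axis, dtype=float)
    pts = np.column_stack([etas, phis])
    E = np.asarray(energies, dtype=float)
    while True:
        in_cone = np.linalg.norm(pts - axis, axis=1) < R
        new_axis = np.average(pts[in_cone], axis=0, weights=E[in_cone])
        if np.linalg.norm(new_axis - axis) < tol:
            return new_axis, in_cone
        axis = new_axis
```

Starting from a seed slightly displaced from a group of objects, the axis converges to their energy-weighted centroid in a few iterations.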
Whereas cone algorithms are top down, sequential recombination algorithms are bottom up. They begin by defining a distance measure between all object pairs, $d_{ij}=\min(k_{ti}^{2p},k_{tj}^{2p}) \Delta_{ij}^2/R^2$, and for single objects $d_{i}=k_{ti}^{2p}$. Here $k_{t}$ is transverse momentum, $\Delta_{ij}$ is the angular distance between objects $i$ and $j$, $R$ is a tuneable parameter and $p=+1,0,-1$ for the $k_t$, Cambridge/Aachen, and anti-$k_t$ algorithms. If the minimum over the list of all distance measures is a $d_{ij}$, the objects are merged, the distances are re-calculated, and the procedure repeats. If the minimum is a $d_i$, the object is removed from the list and called a jet.
Jet finders perform differently when faced with radiated low energy gluons. If the gluon is radiated between two jets, it may cause the jets to be incorrectly merged in jet finding. Algorithms which prevent this are called \emph{infrared safe}. Algorithms which prevent problems in jet finding due to gluon emission along the jet axis are called \emph{collinear safe}.
FastJet \cite{Cacciari:2011ma} is a C++ library of jet finders. It has been incorporated into both Delphes and ILCsoft, but it can be installed for standalone jet finding. Of particular interest for $e^+ e^-$ collider events is the Durham algorithm, otherwise known as the $k_t$ algorithm for $e^+ e^-$ events, with distance measure $d_{ij}=2 \min(E_i^2,E_j^2)(1-\cos \theta_{ij})$. All sequential recombination algorithms are implemented in FastJet, and the various cone algorithms used by collider experiments are implemented as plugins.
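A toy implementation of Durham clustering (illustrative only; FastJet should be used in practice) shows the sequential recombination logic. Here the dimensionless measure $y_{ij}=2\min(E_i^2,E_j^2)(1-\cos\theta_{ij})/s$ is compared to a cut $y_{cut}$, and pairs are merged with simple four-vector addition:

```python
import numpy as np

def durham_cluster(particles, ycut, s):
    """Toy Durham (e+e- kt) clustering: repeatedly merge the pair with
    smallest y_ij = 2*min(Ei^2, Ej^2)*(1 - cos(theta_ij))/s until every
    remaining y_ij exceeds ycut. `particles` are (E, px, py, pz) tuples."""
    ps = [np.asarray(p, dtype=float) for p in particles]
    while len(ps) > 1:
        best = None
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                Ei, Ej = ps[i][0], ps[j][0]
                cos_ij = np.dot(ps[i][1:], ps[j][1:]) / (
                    np.linalg.norm(ps[i][1:]) * np.linalg.norm(ps[j][1:]))
                y = 2.0 * min(Ei**2, Ej**2) * (1.0 - cos_ij) / s
                if best is None or y < best[0]:
                    best = (y, i, j)
        y, i, j = best
        if y > ycut:
            break
        merged = ps[i] + ps[j]   # E-scheme recombination
        ps = [p for k, p in enumerate(ps) if k not in (i, j)] + [merged]
    return ps
```

For two nearly collinear pairs of particles emitted back to back, the collinear pairs merge first and the algorithm stops with two jets, as expected.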
Within jets are hadrons produced by fragmentation and decay. Top quarks decay $t \rightarrow bW$ before they can hadronize, but every other quark in the SM hadronizes into mesons and baryons, which then produce a cascade of decays to hadrons composed of bound states of lighter quarks. For a $b$ quark, the cascade is $b \rightarrow W^- c$ followed by $c \rightarrow W^+ s$, producing $B$ mesons, $D$ mesons, and $K$ mesons respectively, together with the $W$ decay products. The kaons finish the cascade decay to hadrons made from first generation quarks. If two or more charged particles appear in their decays, they form tracks which form a vertex at the location of the decay.
From Table \ref{tab:mesons}, $c\tau$ for the $B$ and $D$ mesons is approximately $0.5$ mm and $0.1$ mm respectively, and the decay distance can be significantly increased by time dilation. Thus we expect a \emph{primary vertex} at the collision point, a \emph{secondary vertex} at the $B$ decay point, and a \emph{tertiary vertex} at the $D$ decay point. The number of vertices and their distance from the primary vertex serve to distinguish events containing $b$ quarks from other events. A \emph{$b$-tag} incorporates such information in making a determination of the jet \emph{flavor}.
In principle the track impact parameters $d_0$ and $z_0$ should be sufficient to determine which tracks form vertices. In practice the uncertainty on those measurements makes this challenging. Nevertheless track impact parameter significances, $d_0/\sigma_{d_0}$ and $z_0/\sigma_{z_0}$ are often used as additional inputs to a $b$-tag since they can be large for tracks from $B$ decay. The number of tracks in an event with large impact parameter significances is also frequently used as an input to a $b$-tag.
In ILCsoft, vertex finding packages have been developed for ILC detectors in the Linear Collider Flavor Identifier (LCFI). LCFIPlus \cite{Suehara:2015ura} combines jet and vertex finding with multivariate techniques for optimal performance. In another approach to vertex finding, one wraps each track with a Gaussian probability tube using the measured track parameter uncertainties. For each track $i$ ($i=1,\dots,n$), the probability function $f_i(r)$ is formed, and for the collision point an ellipsoidal probability function $f_0(r)$ is formed. Then the \emph{vertex function}
\begin{eqnarray}
V(r) & = & \sum_{i=0}^{n} f_{i}(r) - \frac{\sum_{i=0}^{n} f_{i}^{2}(r)}{\sum_{i=0}^{n} f_{i}(r)}
\end{eqnarray}
\noindent yields maxima at vertices, provided they are resolved, and minima where there is only one track or no track at all. This technique was used successfully at SLD with ZVTOP \cite{JACKSON1997247}.
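The vertex function is straightforward to evaluate once the probability tubes are given. In this sketch the tubes are hypothetical spherical Gaussians around fixed points rather than true tubes following the fitted tracks, which is enough to see the suppression of single-track maxima:

```python
import numpy as np

def vertex_function(r, tubes):
    """ZVTOP-style vertex function:
    V(r) = sum_i f_i(r) - sum_i f_i(r)^2 / sum_i f_i(r)."""
    f = np.array([tube(r) for tube in tubes])
    total = f.sum()
    if total == 0.0:
        return 0.0
    return total - (f**2).sum() / total

def gaussian_tube(center, sigma=1.0):
    """Hypothetical spherical Gaussian probability function around a point."""
    c = np.asarray(center, float)
    return lambda r: np.exp(-np.sum((np.asarray(r, float) - c)**2) / (2 * sigma**2))
```

With a single tube, $V(r)$ vanishes identically, since $f - f^2/f = 0$; where two tubes overlap, $V(r)$ is positive, which is exactly the behavior that makes resolved vertices appear as maxima.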
\subsubsection{ILCsoft reconstruction with Marlin}
In ILCsoft the executable for running the reconstruction chain, including tracking, cluster finding, particle flow, jet finding and vertex finding is \texttt{Marlin}. See Appendix \ref{appendix2} for instructions on installing and testing ILCsoft.
The particular C++ code which is invoked for each reconstruction function is defined in the \texttt{reconstruction.xml} XML file and passed to \texttt{Marlin} as the first argument. The input and output LCIO filenames are the second and third arguments:
\begin{center}
\texttt{Marlin reconstruction.xml --global.LCIOInputFiles=in.lcio \textbackslash\ --MyLCIOOutputProcessor.LCIOOutputFile=out.lcio}
\end{center}
\noindent See Table \ref{tab:reco} for an XML fragment within the reconstruction XML which defines the underlying compiled C++ processors to invoke. All such processors are named within the XML \texttt{execute} tag. Algorithm parameters can be passed to the processors from the XML in the \texttt{processor} tags which follow the \texttt{execute} tag.
\begin{table}[t]
\begin{center}
\begin{tabular}{|l|} \hline
\multicolumn{1}{|c|}{Marlin SiD Reconstruction XML} \\ \hline
$\langle$execute$\rangle$ \\
\hspace{0.2in} $\langle$processor name="InitDD4hep"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="VertexBarrelDigitiser"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="TrackerBarrelPlanarDigiProcessor"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="MyConformalTracking"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="ECalBarrelDigi"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="ECalBarrelReco"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="HCalBarrelDigi"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="HCalBarrelReco"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="MyDDSimpleMuonDigi"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="MyDDMarlinPandora"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="MyFastJetProcessor"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="MyZVTOP\_ZVRES"/$\rangle$ \\
\hspace{0.2in} $\langle$processor name="MyLCIOOutputProcessor"/$\rangle$ \\
$\langle$/execute$\rangle$ \\ \hline
\end{tabular}
\caption{Schematic of the processors selected within the \texttt{execute} tag of the SiD reconstruction XML passed to the ILCsoft \texttt{Marlin} executable. Each processor is a compiled C++ program. Algorithm parameters are configured within the \texttt{processor} tags (not shown) and passed to the C++ processors.}
\label{tab:reco}
\end{center}
\end{table}
For the particular reconstruction sequence defined in Table \ref{tab:reco}, the following processors are invoked:
\begin{enumerate}
\item Digitization: default digitization for all subdetectors
\item Track Finding and Fitting: Conformal Tracking \cite{Brondolin:2019awm}
\item Cluster Finding and Particle Flow: PandoraPFA \cite{Thomson200925}
\item Jet Finding: FastJet \cite{Cacciari:2011ma} configured by XML
\item Vertex Finding: ZVTOP \cite{JACKSON1997247} topological vertexing
\end{enumerate}
\noindent The output of each step in the reconstruction chain is the input for the subsequent step: digitization output is input for all following steps, track finding output is input for particle flow, particle flow objects are input for jet finding, jets and tracks are input for vertex finding.
Alternative processors for any of the steps in the reconstruction chain may be specified, as long as they are implemented in C++ in ILCsoft. Furthermore, each algorithm processor parameter is configurable in separate tags in the reconstruction XML. In some cases they are highly configurable, and the user is warned that expert advice is required for optimal configuration. A Python alternative to Marlin reconstruction is described in \cite{Potter:2020ihz}, and a Julia alternative in \cite{Stanitzki:2020bnx}.
\subsubsection{Shortlived particle reconstruction}
Standard reconstruction sequences usually end after tracks, clusters, PFOs, jets and vertices are found, leaving shortlived particle reconstruction to the analysis of individual users. Such analyses are usually carried out in C++ or Python, though in principle any language can be used. See Appendix \ref{appendix2} for instructions on accessing LCIO files with Python.
Particles like the $\pi^0$, $K_S$, $\phi$, $J/\Psi$ and $\Upsilon$ are straightforward to reconstruct using four-vector addition. For $\pi^0 \rightarrow \gamma \gamma$ one simply iterates through all pairs of photons reconstructed in the event, forms four-vectors of each photon in the pair from the measured energy and position of the associated clusters, adds the four-vectors and calculates the invariant mass, which must be near the measured mass of the $\pi^0$. Similarly for $K_S \rightarrow \pi^+ \pi^-$ and $\phi \rightarrow K^+ K^-$ one iterates through pairs of oppositely charged hadrons. For leptonic decays $J/\Psi,\Upsilon \rightarrow \ell^+ \ell^-$ where $\ell=e,\mu$ this is also straightforward.
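The $\pi^0 \rightarrow \gamma\gamma$ case can be sketched as below, assuming photon energies and unit directions are taken from ECal clusters; the mass window is an arbitrary illustrative choice:

```python
import numpy as np
from itertools import combinations

PI0_MASS = 0.135  # GeV, approximate PDG value

def photon_four_vector(energy, direction):
    """Massless four-vector (E, px, py, pz) from a cluster energy and direction."""
    d = np.asarray(direction, dtype=float)
    return np.concatenate([[energy], energy * d / np.linalg.norm(d)])

def pi0_candidates(photons, window=0.02):
    """Iterate over all photon pairs; keep pairs whose invariant mass
    m^2 = E^2 - |p|^2 lies within `window` of the pi0 mass."""
    out = []
    for (i, p1), (j, p2) in combinations(enumerate(photons), 2):
        p = p1 + p2
        m2 = p[0]**2 - np.dot(p[1:], p[1:])
        m = np.sqrt(max(m2, 0.0))
        if abs(m - PI0_MASS) < window:
            out.append((i, j, m))
    return out
```

The same pattern, with different particle hypotheses for the four-vectors, applies to $K_S \rightarrow \pi^+\pi^-$, $\phi \rightarrow K^+K^-$ and the leptonic quarkonium decays.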
Gauge bosons can be more challenging. Reconstruction of gluons in $g \rightarrow q\bar{q}$ is straightforward since each quark produces a jet. If the jets are resolved, the gluon is reconstructed from the pair of jets. In many cases the quark pair produces only one jet which is itself the reconstructed gluon. Photons are usually reconstructed directly in the ECal, but can also be reconstructed if they pair produce $\gamma \rightarrow e^+ e^-$.
Electroweak gauge boson reconstruction can be complicated by the presence of one or more neutrinos in the decay products. For a hermetic detector which measures all particles (apart from neutrinos) produced in a collider event, the \emph{missing energy} and \emph{missing momentum} can be calculated by imposing conservation of energy and momentum. For a lepton collider the initial state beam energy and momentum and the final state energy and momentum must be equal, which yields the four-vector sum of the neutrinos in the event. For a hadron collider only the initial state transverse to the beamline is known.
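At a lepton collider the missing four-vector computation is essentially a one-line subtraction; this sketch ignores the crossing angle and beamstrahlung, so the initial state is taken as $(\sqrt{s},0,0,0)$:

```python
import numpy as np

def missing_four_vector(sqrt_s, visible):
    """Missing (E, px, py, pz) at a lepton collider: the initial state
    (sqrt_s, 0, 0, 0) minus the sum of all visible four-vectors.
    Crossing angle and beamstrahlung are ignored in this sketch."""
    initial = np.array([sqrt_s, 0.0, 0.0, 0.0])
    return initial - np.sum(np.asarray(visible, dtype=float), axis=0)
```

In a hermetic detector the result equals the four-vector sum of the neutrinos (and any other invisible particles) in the event.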
For $Z\rightarrow \nu \bar{\nu}$ there is no recourse unless the signal event topology and missing four-vector provide additional leverage. For $Z \rightarrow \tau^+ \tau^-$ the situation is somewhat improved, but a neutrino pair still make reconstruction challenging. The situation is much easier with $Z \rightarrow \ell^+ \ell^-$ for $\ell=e,\mu$. For $Z \rightarrow q\bar{q}$ one looks for two jets initiated by the quark pair, requiring a jet pair mass consistent with the $Z$ mass within jet energy resolution.
For $W \rightarrow q \bar{q}^{\prime}$ the strategy is the same as for $Z \rightarrow q\bar{q}$. For $W \rightarrow \ell \nu$ where $\ell =e,\mu$ the $W$ may be reconstructed in a signal topology with a single neutrino, whose four-vector may be equated to the missing four-vector. For two or more neutrinos this cannot be done. For $W \rightarrow \tau \nu$ the reconstruction is complicated by the presence of at least one additional neutrino in the $\tau$ decay.
Higgs boson reconstruction occurs in many more final states. For decays to boson pairs $H \rightarrow \gamma \gamma,gg$ the reconstruction is straightforward. For decays to boson pairs $H \rightarrow WW^{\star},ZZ^{\star}$, one boson is virtual so the $W$ or $Z$ mass constraint cannot be applied, but the decay products are nonetheless the same as for on shell decays.
For decays to quark pairs $H \rightarrow b\bar{b},c\bar{c}$ the strategy is the same as for $Z \rightarrow q\bar{q}$, where the $b$-tag may be employed to distinguish the flavor of the quark pairs. For $H \rightarrow \mu^+ \mu^-$ the reconstruction is straightforward, while for $H \rightarrow \tau^+ \tau^-$ the reconstruction is complicated by the presence of a neutrino pair. The \emph{collinear approximation} exploits the signal topology of a massive particle decaying to tau pairs by assuming that the neutrinos are collinear with the visible tau decay products, and can therefore be extracted by projecting the missing momentum onto the visible decay products.
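The collinear approximation can be sketched as below: writing the full tau momentum as $p_\tau = p_{vis}/x$ with visible fraction $x$, the two transverse components of the missing momentum give two linear equations for $1/x_1 - 1$ and $1/x_2 - 1$:

```python
import numpy as np

def collinear_approximation(vis1, vis2, met_xy):
    """Collinear approximation for H -> tau tau: assume each neutrino
    system is collinear with its visible tau, p(tau_i) = p(vis_i)/x_i,
    and solve the transverse missing-momentum equations
    met_xy = v1_xy*(1/x1 - 1) + v2_xy*(1/x2 - 1) for x1, x2."""
    v1, v2 = np.asarray(vis1, float), np.asarray(vis2, float)
    A = np.column_stack([v1[1:3], v2[1:3]])   # (px, py) of each visible tau
    a = np.linalg.solve(A, np.asarray(met_xy, float))
    x1, x2 = 1.0 / (a + 1.0)
    return v1 / x1, v2 / x2
```

The method fails when the two visible taus are back to back (the matrix becomes singular), which is why it works best for boosted Higgs topologies.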
\subsection{Further reading and exercises}
The Particle Data Group reviews on \emph{Monte Carlo Techniques}, \emph{Monte Carlo Event Generators} and the \emph{Monte Carlo Particle Numbering Scheme} \cite{Tanabashi:2018oca} are useful (the last is invaluable). The reprinted paper on Monte Carlo in \emph{Experimental Techniques in High Energy Nuclear and Particle Physics} (Ferbel) \cite{Ferbel:238481} is good.
For the software discussed here (Whizard2, MG5 aMC@NLO, Pythia6, Pythia8, Geant4, Delphes, DD4hep, FastJet) the technical writeups in journals referenced in the endnotes are, of course, invaluable. So are the user's manuals available, in most cases, on the software webpages. The Pythia6 writeup is a literary classic in technical documentation.
Exercises for this section can be found in sect. \ref{sec:sid} of Appendix \ref{appendix1}.
\section{Conclusion: sensitivity and optimization}
We have given a comprehensive overview of the physics potential for the ILC as well as the software tools available for research and development on SiD, one of two detector proposals detailed in the ILC TDR. While the nominal SiD design is complete, rigorously evaluated, and carefully costed, a final round of costing and optimization involving a larger community of physicists is likely to occur in the event that construction is approved by a host nation and SiD proceeds to the technical design phase. That new round will necessarily involve a new generation of physicists adept at modern software.
The author hopes that this primer will provide a foundation of scientific knowledge about the ILC and the technical knowledge necessary for exploiting currently available tools explicitly developed for this purpose, as well as pointers to more advanced reading and specialized technical tools. Before concluding this primer, we review the concept of the sensitivity of an experiment and its relation to optimization of the experimental design within cost constraints.
The \emph{sensitivity} of a live experiment refers to the statistical significance in the case of signal observation, the limit on the maximum possible signal in the case of signal nonobservation, or the precision with which a physical parameter can be measured. The \emph{expected sensitivity} of a future experiment is evaluated by simulation of the experiment.
If we denote the total number of background events by $B$ then the Gaussian statistical \emph{uncertainty} on $B$ is $\sqrt{B}$. For small $B$ a Poisson treatment of uncertainties is necessary, but we assume here we are in the Gaussian regime. If we denote the signal $S$, then the statistical significance of the signal is $S/\sqrt{B}$ and, if we account for uncertainty on the signal as well as the background, $S/\sqrt{S+B}$. Statistical uncertainty is sometimes, somewhat misleadingly, called \emph{error}. It has become accepted practice in HEP to refer to a signal with significance $S/\sqrt{B} \approx 3$ as signal \emph{evidence} and a significance $S/\sqrt{B} \approx 5$ as signal \emph{observation}. These significances correspond to one-sided probabilities of about $1.3\times 10^{-3}$ and $2.9\times 10^{-7}$ for a normally distributed background fluctuation of that size.
The signal $S$ and background $B$ are estimated by an \emph{analysis} of data and simulated data, usually performed with computer code. An analysis can be considered a filter which maximizes signal selection and minimizes background selection in order to maximize the signal significance. The \emph{efficiency} of the analysis for a signal or background is the event count $n$ after applying the filter divided by the event count $N$ before applying the filter.
Because efficiency is a proportion, the binomial uncertainty is the appropriate way to evaluate the statistical uncertainty. For an efficiency $\epsilon=n/N$, the uncertainty is
\begin{eqnarray}
\delta_{\epsilon} & = & z_{ci} \sqrt{\frac{\epsilon (1-\epsilon)}{N}}
\label{eqn:uncertainty}
\end{eqnarray}
\noindent where $z_{ci}=1.00, 1.96,2.58$ for the 68\%, 95\% and 99\% confidence intervals. For the standard $1\sigma$ error, $z_{ci}=1$. As one would expect, an efficiency becomes more precise when a larger number of events $N$ are used to evaluate it.
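Eq. \ref{eqn:uncertainty} is simple to apply; a minimal sketch:

```python
from math import sqrt

def efficiency_with_uncertainty(n, N, z_ci=1.0):
    """Binomial efficiency n/N with uncertainty z_ci*sqrt(eps*(1-eps)/N),
    as in the equation above; z_ci = 1.0 gives the standard 1-sigma error."""
    eps = n / N
    return eps, z_ci * sqrt(eps * (1.0 - eps) / N)

# 80 selected events out of 100 processed:
eps, d_eps = efficiency_with_uncertainty(80, 100)   # -> (0.8, 0.04)
```

Quadrupling the sample size $N$ halves the uncertainty, which is the usual $1/\sqrt{N}$ scaling.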
Another source of uncertainty, qualitatively much different from statistical uncertainty, is \emph{systematic} uncertainty. Systematic uncertainties are attempts to parametrize our ignorance of an experiment. For example, one might identify a systematic uncertainty arising from differences in analysis selection efficiencies by using different event generators to simulate the signal process. Evaluating \emph{systematics} is as much art as it is science, and identifying all possible systematics can be challenging. Systematic and statistical uncertainties are usually reported separately, but when they must be combined they are usually combined in quadrature.
Suppose we would like to optimize the expected ILC sensitivity to Higgstrahlung events. We first write the analysis code to try to maximize the Higgstrahlung event selection and simultaneously minimize the corresponding background selection. Then we evaluate the efficiencies for signal ($\epsilon_s$) and background ($\epsilon_b$) by running the code over simulated signal and background events, taking care to evaluate all possible backgrounds. The expected signal significance for Higgstrahlung at the ILC is overwhelmingly large - this is precisely the argument for building the ILC, a Higgs factory - but claiming this and proving it with full simulation data and a careful consideration of backgrounds are two different things.
If we assume an integrated luminosity $\int dt \mathcal{L}$ and cross sections for signal Higgstrahlung $\sigma_s$ and backgrounds $\sigma_b$, then
\begin{eqnarray}
\frac{S}{\sqrt{B}} & = & \frac{\epsilon_s \sigma_s \int dt \mathcal{L}}{\sqrt{\sum_b \epsilon_b \sigma_b \int dt \mathcal{L}}}
\label{eqn:sig}
\end{eqnarray}
\noindent where the sum is over all backgrounds. We emphasize that the luminosity, center of mass energy and beam polarization (which determine the cross section) and their uncertainties depend on the ILC, while the efficiencies and their uncertainties depend on the detector, in our case SiD.
It should be noted that the precision with which an analysis can measure a property of the Higgs boson, its branching ratios for example, depends on the number of Higgs bosons produced. The numerator $\epsilon_s \sigma_s \int dt \mathcal{L}$ in the signal significance of eq. \ref{eqn:sig} is also the denominator $N$ in the uncertainty of eq. \ref{eqn:uncertainty}, the uncertainty on measuring a proportion like a branching ratio. Clearly, choosing beam parameters to maximize $\sigma_{s}$ and $\int dt \mathcal{L}$, while minimizing $\sigma_{b}$, is part of the optimization. Thus $\sqrt{s}$, beam polarization and luminosity are key ILC parameters. The remaining part of the optimization, maximizing $\epsilon_{s}$, is the job of detector optimization. For recent estimates of the sensitivity of the ILC to a variety of physical parameters, including Higgs boson branching ratios, see \cite{Bambade:2019fyw,Fujii:2019zll}.
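As a purely illustrative numerical sketch, the chain from efficiencies and beam parameters to significance and statistical precision can be written as follows; all efficiencies, cross sections and the luminosity below are hypothetical placeholders, not official ILC or SiD numbers.

```python
import math

def signal_yield(eff_s, xsec_s, lumi):
    # S = eps_s * sigma_s * int dt L   (cross sections in fb, lumi in fb^-1)
    return eff_s * xsec_s * lumi

def significance(eff_s, xsec_s, backgrounds, lumi):
    # S/sqrt(B), summing over (efficiency, cross section) background pairs
    s = signal_yield(eff_s, xsec_s, lumi)
    b = lumi * sum(eff_b * xsec_b for eff_b, xsec_b in backgrounds)
    return s / math.sqrt(b)

# hypothetical inputs: eps_s = 0.5, sigma_s = 200 fb, two backgrounds,
# integrated luminosity 500 fb^-1
S = signal_yield(0.5, 200.0, 500.0)
Z = significance(0.5, 200.0, [(0.01, 1000.0), (0.02, 500.0)], 500.0)
# the same yield S sets the denominator N of the proportion uncertainty
precision = 1.96 * math.sqrt(0.5 * 0.5 / S)
print(S, Z, precision)
```

The point of the sketch is the coupling noted in the text: raising the yield $S$ improves both the significance and the precision on a branching-ratio-like proportion.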
The efficiencies $\epsilon_s$ and $\epsilon_{b}$ in eq. \ref{eqn:sig} with their uncertainties are complex quantities, dependent on many underlying detector performance measures in particle reconstruction, identification and precision. Tracking efficiency and precision, calorimeter cluster finding and precision, jet finding and precision, and vertex finding and precision all play a role in determining $\epsilon_s$, and therefore signal sensitivity, for a given signal process. Ultimately these parameters are all determined by the detector design.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|} \hline
Material & Unit Cost [USD]\\ \hline
ECal Tungsten & ($180 \pm 75$)/kg \\
Silicon Detector & ($6 \pm2$)/cm$^2$ \\
HCal Tungsten & ($105 \pm 45$)/kg \\
HCal Steel & ($4500 \pm 1000$)/ton \\ \hline
\end{tabular}
\caption{Material costs per unit (2008 USD) agreed to by SiD, ILD and CLIC for the ILC TDR. Adapted from the ILC TDR \cite{Behnke:2013lya}.}
\label{tab:material}
\end{center}
\end{table}
Reducing a detector's cost to a minimum is straightforward, but the performance and ultimate physics goals will suffer. Conversely, designing a highly performant detector to maximize $\epsilon_s$ is also straightforward, but the cost may be too high to pay. The right balance between cost and performance must be struck. Typically a detector community targets a performance goal and then, in an effort to minimize the cost necessary to reach that goal, performs a detector \emph{optimization}. If the optimized cost is too high, the performance goals are reduced and the detector is reoptimized. Material cost assumptions are key inputs to this process. For SiD (as well as ILD and CLIC) the estimated material costs of Silicon, Tungsten and Steel assumed in the ILC TDR are summarized in Table \ref{tab:material}.
In the SiD LoI \cite{Aihara:2009ad} an optimization of the solenoid field strength $B_z$, the calorimetry inner radius $R$ (equivalently the tracking outer radius) and the HCal depth $n\lambda$ was performed, yielding the nominal configuration $B_z=5$~T, $R=1.25$~m and $n=5$. These parameters together determine the jet energy resolution, a critical factor in determining the precision to which Higgs boson branching ratios may be measured with SiD. The initial LoI optimization yielded the cost estimates detailed in the ILC TDR \cite{Behnke:2013lya} chapter on SiD costs. See Table \ref{tab:cost} for a summary of these costs.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
Subdetector & Base [MUSD]& Eng. [MY] & Tech. [MY]\\ \hline
Vertex Det. & $2.8 \pm 2.0$ & 8.0 & 13.2 \\
Tracker & $18.5 \pm 7.0$ & 24.0 & 53.2 \\
ECal & $104.8 \pm 47.1$ & 13.0 & 288.0 \\
HCal & $51.2 \pm 23.6$ & 13.0 & 28.1 \\
Solenoid & $115.7 \pm 39.7$ & 28.3 & 11.8 \\
Muon Det. & $8.3 \pm 3.0$ & 5.0 & 22.1 \\ \hline
\end{tabular}
\caption{Baseline material cost (2008 MUSD) and engineering and technical labor (MY) estimated to build SiD subdetectors. Costs not shown are beamline systems, electronics, installation and management. Adapted from the ILC TDR \cite{Behnke:2013lya}.}
\label{tab:cost}
\end{center}
\end{table}
Determining which costs will be borne by the accelerator side and which will be determined by the detector side is a critical component of costing a detector. The TDR detector costing assumes the following costs are borne by the accelerator: detector hall with lighting and electrical power, internet and compressed air utilities, compressed Helium piping, and surface buildings and construction cranes. Another critical component in costing is who bears the cost of gray areas: research and development, detector commissioning, operating costs and physicist salaries. The ILC TDR cost estimate does not include these.
If we consider the HCal and Solenoid optimization to be final, then the most conspicuous cost is for the ECal, requiring a material baseline of $104.8 \pm 47.1$ MUSD and labor costs of 301.0 person years. The nominal ECal design thus requires more than three times the combined vertex detector, tracker, and muon detector cost in material alone. For labor the factor is even larger. A global optimization of ECal design parameters, including the total number of layers and thin and thick Tungsten layer widths, may find that a substantial reduction in cost results with a minimal loss in performance. One preliminary study finds this to be the case \cite{Braun:2020eme}.
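The comparison quoted above is simple arithmetic; a quick check using the baseline material costs of Table \ref{tab:cost} (2008 MUSD):

```python
# baseline material costs from the table above, in 2008 MUSD
base_cost = {"vertex": 2.8, "tracker": 18.5, "ecal": 104.8,
             "hcal": 51.2, "solenoid": 115.7, "muon": 8.3}
others = base_cost["vertex"] + base_cost["tracker"] + base_cost["muon"]
ratio = base_cost["ecal"] / others
print(f"ECal / (vertex + tracker + muon) = {ratio:.2f}")  # more than 3
```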
\section*{Acknowledgements}
\begin{acknowledgement}
Much of this primer began as lecture notes for the graduate courses \emph{Physics 610: Collider Physics} and \emph{Physics 662: Elementary Particle Phenomenology}, taught at UO in Winter and Spring terms in 2018. Thanks to Jim Brau, who generously allowed the author to fill in as instructor of record for 610 and 662 while Brau attended to ILC matters. Brau first taught the author about measuring Higgs boson branching ratios at a linear collider two decades ago \cite{potter2003}.
Thanks to doyens Marty Breidenbach and Andy White for sharing their wisdom at the SiD Optimization meetings. Breidenbach, who maintains an incomprehensibly large store of experience, worked on deep inelastic scattering at Stanford, SPEAR and SLD and was an early architect of SiD. Thanks also to Jan Strube and Dan Protopopescu, who have convened the SiD Optimization meetings over the past few years. Protopopescu cowrote and maintains the DD4hep SiD detector description.
The historical material on accelerators and detectors was written while preparing lectures for \emph{Honors College 209H: Discovery of Fundamental Particles and Interactions} taught at UO Winter and Spring 2020. Thanks to Daphne Gallagher, Associate Dean in the Clark Honors College, for encouraging the development of this experimental undergraduate course.
\end{acknowledgement}
\section{Introduction}
Perturbation theory is often the first option for performing analytic calculations in many fields of physics.
However, the usual perturbation theory cannot capture the nonperturbative nature of a model because a perturbation series is not a convergent series but merely an asymptotic one.
To address this problem of the perturbation theory, various methods to improve the convergence properties of the perturbation series have been suggested.
Optimized perturbation theory (OPT), considered in this paper, is a resummation technique first developed for quantum mechanics~\cite{Caswell:1979qh,Seznec:1979ev,Halliday:1979vn,Killingbeck1981} and immediately extended to quantum field theories~\cite{Stevenson:1981vj,Stevenson:1982qw,Okopinska:1987hp,Stancu:1989sk,Chiku:1998kd}. (See also~\cite{Arteca, Kleinert,Andersen:2004fp,Jakovac} and references therein).
The basic idea of OPT is combining the usual perturbation theory and variational methods.
In OPT, one introduces an artificial parameter, say $z$, and reorganizes an interaction term of an action.
After performing the perturbative expansion in terms of the modified interaction term, one obtains physical quantities depending on the artificial parameter.
When one imposes an appropriate variational criterion, $z$ has a nontrivial dependence in terms of an expansion parameter of the usual perturbation theory like coupling constants, and thus, non-perturbative effects could be included in the physical quantities\footnote{Precisely speaking, OPT is always applicable even if an action has no natural expansion parameters like supersymmetric Yang-Mills integral or IIB matrix model ~\cite{Sugino:2001fn,Kawai:2002jk,Kawai:2002ub}.}.
Similar techniques, referred to as order dependent mapping~\cite{Seznec:1979ev}, variational perturbation~\cite{KLEINERT1993332}, self-consistent expansion~\cite{Schwartz1992}, the Gaussian expansion method~\cite{Kabat:1999hp, Nishimura:2001sx} and so on~\cite{Bender:1987dn, Bender:1988rq, Shaverdian:1983ay}, have been developed in various fields of physics.
However, OPT itself does not tell us how to impose the variational criterion.
Actually, little is known about this issue beyond a few concrete examples.
An empirical prescription is to require that the higher order terms of the modified perturbation series have a smaller contribution than the lower order ones.
This is known as the fastest apparent convergence (FAC) condition~\cite{Seznec:1979ev,Halliday:1979vn,Halliday:1979xh}.
At first sight, the FAC condition appears natural because it is a necessary condition for the convergence of the modified perturbation series.
On the other hand, there are serious conceptual issues in the use of the FAC condition.
Because exact physical quantities are independent of the unphysical parameter $z$ by definition, approximants obtained in OPT should be insensitive to a choice of the parameter.
In other words, the approximants should be locally flat functions of $z$.
However, it is not guaranteed that a solution of the FAC condition is an interior point of the flat region.
In this paper,
we address this issue by studying analytic properties of the FAC condition, relying on its integral representation.
We show that asymptotic properties of solutions of the FAC condition
can be clarified by studying the topological structure of Lefschetz thimbles, which are a generalization of steepest descent contours.
This paper is organized as follows.
In section~\ref{sec:opt}, we review OPT and discuss issues of the FAC condition in detail.
In section~\ref{sec:thimble}, we develop a way to analyze properties of solutions of the FAC condition on the basis of the theory of Lefschetz thimbles.
Finally in section~\ref{sec:application}, we apply our method to a one dimensional integral and discuss an underlying mechanism that a solution of the FAC condition gives a reasonable choice of the artificial parameter of OPT.
Section~\ref{sec:summary} is devoted to summary and concluding remarks.
\section{Optimized perturbation theory}\label{sec:opt}
In this section, we explain an idea of OPT.
Let $S(x; \lambda)$ be an action which involves a coupling constant $\lambda$.
Our goal is to evaluate an integral:
\begin{align}
Z(\lambda)=\int_\mathcal{D} dx e^{-S(x;\lambda)},
\label{Z}
\end{align}
where $\mathcal{D}$ is an integration domain.
In OPT,
the action is decomposed as
\begin{align}
S(x;\lambda) = S_0(x;z) + \delta S_I(x;\lambda,z)|_{\delta=1},
\label{action}
\end{align}
such that $S_0(x;z)$ becomes an action of an exactly solvable model.
Here, we introduce a complex parameter $z$, which will be determined later.
One can evaluate the value of the integral~\eqref{Z}
by performing a formal expansion in terms of $\delta$.
We denote the $K$th-order approximant of the integral by $Z_K$:
\begin{align}
Z_K(\lambda, z) &\equiv
\sum_{k=0}^K a_k(\lambda,z) \delta^k\Big|_{\delta=1}, \label{Zk}
\\
a_k(\lambda, z) &=
\frac{(-1)^k}{k!} \int_\mathcal{D} dx S_I^k(x; \lambda, z) e^{-S_0(x;\,z)} .
\label{ak}
\end{align}
A peculiar feature of the approximant $Z_K$ is that it depends on the unphysical parameter $z$.
Because the original quantity we hope to obtain does not have such $z$-dependence,
it is natural to require that the approximant should be insensitive to the choice of the unphysical parameter.
Thus, a possible choice of $z$ is a solution of the following equation:
\begin{align}
\frac{\partial}{\partial z}
Z_K(\lambda, z)=0.
\label{PMS}
\end{align}
This condition is known as the principle of minimal sensitivity (PMS)~\cite{Caswell:1979qh, Stevenson:1981vj}.
Although the physical meaning of the PMS condition is clear,
it is sometimes not suitable for analytic studies when Eq.~\eqref{PMS} takes a complicated form.
In that case,
another possible option to determine $z$ is the fastest apparent convergence (FAC) condition,
which requires that the higher order terms of $\delta$ in Eq.~\eqref{Zk} should have lesser contribution than the lower-order terms.
In practice,
the simplest version of the FAC condition,
\begin{align}
Z_K(\lambda, z) - Z_{K-1}(\lambda, z) = 0
\label{FAC1}
\end{align}
is frequently imposed.
In our notation, this condition is equivalent to
\begin{align}
a_K(\lambda, z) = 0.
\label{FAC2}
\end{align}
The FAC condition is also a reasonable criterion in the sense that it is a necessary condition for the convergence of the infinite series $\lim_{K\to\infty}Z_K$.
Moreover, the FAC condition plays an essential role in several cases.
One example is an application of OPT to quantum field theories slightly away from local-equilibrium.
There, the FAC condition naturally arises in a derivation of hydrodynamic equations~\cite{Hayata:2015lga}.
The actual precision of the approximation depends on the choice of the criterion, namely the PMS or the FAC condition.
On the other hand,
both conditions share the same mechanism for taking into account the nonperturbative nature of the model.
Indeed, through Eq.~\eqref{PMS} or Eq.~\eqref{FAC1},
the approximant $Z_K$ has a nontrivial $\lambda$-dependence that the usual perturbation theory never achieves.
Another important feature is that the artificial parameter also depends on $K$, a truncation order of the $\delta$-expansion.
It is known that this $K$-dependence is crucial for the convergence of OPT.
Indeed, it has been proven that the infinite series $\lim_{K\to\infty}Z_K(z(K))$ converges to the exact value $Z$ if $z$ has an appropriate $K$-dependence for
one dimensional integrals~\cite{Buckley:1992pc, Bender:1993nd, Remez:2018tle} and quantum mechanical anharmonic oscillators~\cite{Halliday:1979xh, Duncan:1992ba, Guida:1994zv, Guida:1995px, Kleinert:1995hc}.
For these known cases, the solutions of the PMS and FAC conditions reproduce an appropriate $K$-dependence.
In spite of these good properties,
the FAC condition is not fully reliable.
In particular, we raise two issues in a use of the FAC condition.
The first issue is that,
a solution of the FAC condition is not unique in general.
For instance,
the FAC condition for a one dimensional integral we will discuss in this paper becomes a polynomial equation whose order is $2K$.
Hence, one has to impose additional criteria by hand to resolve this ambiguity.
The second issue is that the FAC condition does not guarantee the insensitivity of the approximant as a function of the unphysical parameter.
In order to address these issues, we begin our discussion with studying distribution of the zeros of $a_K(\lambda, z)$ in the complex $z$-plane.
\section{Lefschetz-thimbles}\label{sec:thimble}
In this section,
we give a way to examine zeros of $a_K(\lambda, z)$, the $K$th-order coefficient of the $\delta$-expansion.
For this purpose,
the integral representation of $a_K$ Eq.~\eqref{ak} is useful as we will see below.
Let us start with defining an effective action by
\begin{align}
I_K(x;z)\equiv S_0(x;z) - K\log S_I(x;z).
\end{align}
Here, we fix the coupling constant and omit its dependence.
From Eqs.~\eqref{ak} and~\eqref{FAC2},
the FAC condition is written by
\begin{align}
a_K(z) = \frac{(-1)^K}{K!}\int_\mathcal{D} dx e^{-I_K(x;\,z)} = 0.
\label{integral rep of FAC}
\end{align}
In order to evaluate this integral,
it is convenient to deform the integration domain on the real axis into a set of steepest descent contours in a complex plane.
Such contour is characterized by a flow whose source is a saddle point of the effective action.
Let us denote the saddle point by $\sigma_i$,
and suppose that a number of the saddle points is $N$, i.e.,
\begin{align}
\frac{\partial I_K(\xi; z)}{\partial \xi}\Big|_{\xi = \sigma_i} = 0,
\quad
i = 1, \dots, N.
\end{align}
We also assume that
\begin{align}
\frac{\partial^2 I_K(\xi; z)}{\partial \xi^2}\Big|_{\xi = \sigma_i} \neq 0,
\end{align}
and all saddle points are isolated.
Then, the steepest descent contour is obtained by
\begin{align}
\mathcal{J}_i
=
\left\{
\xi(t) \in \mathbb{C} \, \Big| \,
\frac{d \xi(t)}{d t} = +\overline{\frac{\partial I_K}{\partial \xi}}, \,
\xi(t=0) = \sigma_i
\right\}.
\label{J}
\end{align}
This contour, and its generalization to higher dimensions, is known as a Lefschetz thimble.
One can easily confirm that the real part of the effective action increases monotonically along the flow.
We also note that the imaginary part of the effective action is constant along the flow.
An endpoint of the Lefschetz thimble is a point at infinity or one of the logarithmic singularities of the effective action.
Since any well-defined integral should be represented by a linear combination of the contours $\sum_{i=1}^N n_i\mathcal{J}_i$, Eq.~\eqref{integral rep of FAC} can be written as
\begin{align}
a_K(z)
=
\sum_{i=1}^N n_i e^{-i\Im I_K(\sigma_i;\,z)}
\int_{\mathcal{J}_i} d\xi e^{-\Re I_K(\xi;\,z)},
\label{thimble decomposition of FAC}
\end{align}
where $n_i$ is an unknown weight factor at this moment.
Relying on this expression,
we find that $a_K(z)$ vanishes if and only if
two or more steepest descent contours contribute to $a_K(z)$ and cancel out with each other.
In order to estimate when does the cancellation occur,
one can use the saddle point technique.
Indeed, for a general class of models,
we will find that the saddle point approximation makes sense.
To see this,
we suppose that the action has a following form:
\begin{align}
S(x) = \frac{\omega^2}{2}x^2 + \sum_{p=3}^q \lambda_p x^p, \quad q = 4, 6, \dots,
\label{action:general form}
\end{align}
where $\lambda_p$ is an arbitrary complex constant such that the integral~\eqref{Z} converges.
The corresponding effective action is given by
\begin{align}
I_K(x) = \frac{zx^2}{2}
- K\log \left( -\frac{z}{2}x^2 + \sum_{p=3}^q \lambda_p x^p + \frac{\omega^2}{2}x^2 \right).
\label{effective action:general form}
\end{align}
Replacing $z$ by $K^{1-2/q}z$ and changing the variable $x$ to $K^{1/q}x$,
the effective action reads
\begin{align}
I_K(x) =
K \left[
\frac{zx^2}{2}
- \log \left( -\frac{z}{2}x^2 + \lambda_q x^q + O(K^{-1/q}) \right)
\right] + \text{const.}
\label{effective action: factorized form}
\end{align}
Therefore, if the truncation order of the $\delta$-expansion $K$ is large enough,
the integrand appeared in Eq.~\eqref{thimble decomposition of FAC} has a sharp peak around its saddle point.
In the lowest order approximation,
the FAC condition becomes
\begin{align}
\sum_{i=1}^N n_i e^{-I_K(\sigma_i;\,z)} = 0.
\label{anti Stokes line}
\end{align}
A detailed derivation of this equation is given in appendix~\ref{App:saddle point}.
A set of solutions of this equation forms line segments in the complex $z$-plane, and these are known as the anti-Stokes lines.
From the above discussion, we find that solutions of the FAC condition distribute on the anti-Stokes lines.
In particular, the solutions appear exactly on the anti-Stokes line in the limit $K\to\infty$.
A remaining issue is how to compute the weight factors $\{n_i\}$.
Actually, this is a complicated task since the weight factors are governed by a global structure or topology of the Lefschetz thimbles in the complex $\xi$-plane.
Fortunately,
a concrete way to obtain $\{n_i\}$ is well-studied for finite dimensional integrals on the basis of Picard-Lefschetz theory~\cite{Pham:1983, Howls10.2307/53139, Delabaere2002}.
Since there are already many applications of this framework to physics including field theories~\cite{Witten:2010cx, Witten:2010zr}, many reviews are available. (For instance, see~\cite{Tanizaki:2015gpl} and references therein.)
Here, we just give a sketch how to compute the weight factors $\{n_i\}$.
What plays a key role is a steepest ascent contour defined by
\begin{align}
\mathcal{K}_i
=
\left\{
\xi(t) \in \mathbb{C} \, \Big| \,
\frac{d \xi(t)}{d t} = -\overline{\frac{\partial I_K}{\partial \xi}}, \,
\xi(t=0) = \sigma_i
\right\}.
\label{K}
\end{align}
We assume that different saddle points are not connected by the flows, and introduce an orientation of $\{\mathcal{J}_i\}$ and $\{\mathcal{K}_i\}$ in an appropriate manner.
By their definitions, $\mathcal{J}_i$ has an intersection point with $\mathcal{K}_i$ only at $\xi = \sigma_i$.
In this case,
one can define a kind of an \textit{inner product} by
\begin{align}
\braket{\mathcal{J}_i, \mathcal{K}_j} = \delta_{ij}.
\end{align}
Thus, the weight factor is given by
\begin{align}
n_i = \braket{\mathcal{D}, \mathcal{K}_i}.
\end{align}
This means that the weight factor is obtained as an intersection number between the original integration contour $\mathcal{D}$ and a steepest ascent contour $\mathcal{K}_i$.
Since the weight factor $n_i$ is an integer,
it is a discontinuous function of $z$.
A sudden change of $n_i$ at certain $z$ means that the topological structure of the Lefschetz thimbles changes at that point.
A set of these points again forms line segments in the complex $z$-plane.
They are referred to as the Stokes lines~\cite{Berry_1988,Berry_1989}.
For multiple integrals, it is quite difficult to determine $n_i$, and hence, the Stokes lines.
On the other hand, if the complex dimension is one, the Stokes lines can exceptionally be obtained from a simple criterion.
In that case, each Lefschetz thimble is uniquely labeled by the imaginary part of the effective action on the thimble.
Therefore, the Stokes line is determined by
\begin{align}
\Im I_K(\sigma_i; z) - \Im I_K(\sigma_j; z) = 2\pi n,
\quad i \neq j, \quad n \in \mathbb{Z}.
\end{align}
We note that nonzero $n$ is allowed because the imaginary part of the effective action jumps by $2\pi$ across a branch cut of the logarithmic function.
\section{Application}\label{sec:application}
In this section,
we explicitly show the relation between the anti-Stokes line and a distribution of solutions of the FAC condition for a one dimensional integral.
The simplest nontrivial example of Eq.~\eqref{action:general form} would be
\begin{align}
Z(\omega^2,\lambda)
=
\int_{-\infty}^\infty \frac{dx}{\sqrt{2\pi}} e^{-S(x;\,\omega^2,\lambda)}, \quad
S(x;\omega^2,\lambda)
=\frac{\omega^2}{2}x^2 + \frac{\lambda}{4}x^4,
\label{model}
\end{align}
where $\lambda \in \mathbb{C}$ is a complex constant which obeys ${\rm Re} \lambda > 0$.
The sign of the parameter in the quadratic term $\omega^2$ affects the asymptotic behavior of the integral.
For instance,
the analytic expressions of $Z(\omega^2,\lambda)$ for $\omega^2=1,0,-1$ are given by
\begin{align}
Z(1,\lambda)
&=
\frac{1}{2\sqrt{\pi\lambda}}
\exp\left(\frac{1}{8\lambda}\right)
K_{1/4}\left(\frac{1}{8\lambda}\right), \\
Z(0,\lambda)
&=
\frac{\Gamma(\frac{1}{4})}{2\sqrt{\pi}\,\lambda^{1/4}}, \\
Z(-1,\lambda)
&=
\frac{1}{2}\sqrt{\frac{\pi}{2\lambda}}
\exp\left(\frac{1}{8\lambda}\right)
\left(
I_{-1/4}\left(\frac{1}{8\lambda}\right) + I_{1/4}\left(\frac{1}{8\lambda}\right)
\right),
\end{align}
where
$I_\nu(x)$ and $K_\nu(x)$ are the modified Bessel function of the first and second kind, respectively.
The convergence property of the $\delta$-expansion is well studied for this integral~\cite{Buckley:1992pc, Bender:1993nd, Remez:2018tle}.
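As a sanity check, the closed-form expressions can be compared against direct numerical integration. The following sketch uses only NumPy, evaluating $K_{1/4}$ through the integral representation $K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,dt$, and, for the pure quartic case, the formula $Z(0,\lambda)=\Gamma(1/4)/(2\sqrt{\pi}\,\lambda^{1/4})$.

```python
import math
import numpy as np

def trap(y, x):
    # plain trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def Z_direct(omega2, lam, xmax=10.0, n=200001):
    # direct numerical evaluation of Eq. (model)
    x = np.linspace(-xmax, xmax, n)
    return trap(np.exp(-0.5 * omega2 * x**2 - 0.25 * lam * x**4), x) / math.sqrt(2 * math.pi)

def bessel_k(nu, z, tmax=30.0, n=200001):
    # K_nu(z) = int_0^inf exp(-z cosh t) cosh(nu t) dt
    t = np.linspace(0.0, tmax, n)
    return trap(np.exp(-z * np.cosh(t)) * np.cosh(nu * t), t)

lam = 1.0
closed_1 = math.exp(1 / (8 * lam)) * bessel_k(0.25, 1 / (8 * lam)) / (2 * math.sqrt(math.pi * lam))
closed_0 = math.gamma(0.25) / (2 * math.sqrt(math.pi) * lam**0.25)
print(Z_direct(1.0, lam), closed_1)
print(Z_direct(0.0, lam), closed_0)
```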
\subsection{$\delta$-expansion}
Let us apply OPT to evaluate the integral~\eqref{model}.
First, we decompose the action by introducing a complex constant $z$ as
\begin{align}
&S(x;\omega^2,\lambda)=S_0(x;z)+\delta S_I(x;\omega^2,\lambda,z)|_{\delta=1}, \\
&S_0(x;z)=\frac{z}{2}x^2, \quad
S_I(x;\omega^2,\lambda,z)=
\left( \frac{\omega^2-z}{2}x^2 + \frac{\lambda}{4}x^4 \right).
\end{align}
Here, $z$ is arbitrary as long as $\Re z > 0$.
Performing the Taylor expansion of $Z$ in terms of $\delta$ up to terms of order $K$,
we get
\begin{align}
Z_K(z)
&=
\sum_{k=0}^K a_k(\omega^2,\lambda;z) \delta^k|_{\delta=1},\\
a_k(z)&=\frac{(-1)^k}{k!}
\int_{-\infty}^\infty\frac{dx}{\sqrt{2\pi}}
\left(\frac{\omega^2-z}{2}x^2 + \frac{\lambda}{4}x^4 \right)^k
e^{-z x^2/2}.
\label{ak integral expression}
\end{align}
By using the binomial expansion and computing the Gaussian integrals term by term,
we arrive at
\begin{align}
a_k(z)=
\sqrt{\frac{\pi}{z}}
\frac{(-\lambda)^k}{k!z^{2k}}
\frac{1}{\Gamma(\frac{1}{2}-2k)}
{}_1F_1\left(-k,\frac{1}{2}-2k;\frac{(\omega^2-z)z}{\lambda}\right)
\label{ak analytic expression},
\end{align}
where ${}_1F_1(a,b;x)$ is the confluent hypergeometric function.
Since ${}_1F_1(a,b;x)/\Gamma(b)$ is an entire function,
$a_k(z)$ is analytic when $z\neq0$.
In particular, $z^{2k+1/2}a_k(z)$ is a polynomial function of $z$ whose order is $2k$.
Due to the symmetry under the exchange of $z \leftrightarrow (\omega^2-z)$,
the FAC condition $a_K(z) = 0$ has $K$ solutions in the right half-plane.
Numerically, they are easily obtained, for instance, with the Durand-Kerner-Aberth method.
However,
the analytic expression~\eqref{ak analytic expression} is of little use for arguing about the distribution of the zeros.
Thus, we rely on the integral expression~\eqref{ak integral expression} rather than on Eq.~\eqref{ak analytic expression} to clarify the properties of the FAC condition.
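For concreteness, the degree-$2K$ polynomial $z^{2K+1/2}a_K(z)$ (up to an irrelevant overall constant) can be constructed from the binomial expansion of Eq.~\eqref{ak integral expression} using $\int dx/\sqrt{2\pi}\, x^{2m}e^{-zx^2/2} = (2m-1)!!\,z^{-m-1/2}$, and its zeros can be found from the companion matrix with NumPy, a simpler alternative to the Durand-Kerner-type iterations mentioned above.

```python
import numpy as np
from math import comb

def fac_polynomial(K, omega2=1.0, lam=1.0):
    # z^{2K+1/2} a_K(z) up to an overall constant:
    # a_K(z) ~ sum_j C(K,j) ((w2-z)/2)^{K-j} (lam/4)^j (2(K+j)-1)!! z^{K-j}
    z = np.polynomial.Polynomial([0.0, 1.0])
    p = np.polynomial.Polynomial([0.0])
    for j in range(K + 1):
        dblfac = float(np.prod(np.arange(2 * (K + j) - 1, 0, -2)))  # (2(K+j)-1)!!
        p = p + comb(K, j) * ((omega2 - z) / 2) ** (K - j) * (lam / 4) ** j * dblfac * z ** (K - j)
    return p

def fac_solutions(K, omega2=1.0, lam=1.0):
    # all 2K zeros; the text keeps those with positive real part
    return fac_polynomial(K, omega2, lam).roots()

print(fac_solutions(1))  # K = 1: z = (1 +/- sqrt(7))/2 for omega2 = lam = 1
```

Since the polynomial depends on $z$ only through $z^{K-j}(\omega^2-z)^{K-j}$, the symmetry under $z \leftrightarrow \omega^2-z$ is manifest in this construction.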
\subsection{Stokes and anti-Stokes lines}
As we discuss in Sec.~\ref{sec:thimble},
we introduce the effective action $I_K$ as follows:
\begin{align}
a_K(z)
&=
\frac{(-1)^K}{\sqrt{2\pi}K!}
\int_\mathcal{D} d\xi
e^{-I_K(\xi;\,z)} \label{aK quartic} \\
I_K(\xi;z)
&=
\frac{z}{2}\xi^2
-K\log\left(\frac{\omega^2-z}{2}\xi^2 + \frac{\lambda}{4}\xi^4\right),
\label{effective action: FAC}
\end{align}
where $\mathcal{D}$ is the real axis at this moment.
Thanks to the symmetry $I_K(\xi;z)=I_K(-\xi;z)$,
we consider only the right half of the complex $\xi$-plane.
Inside this region,
there are two saddle points, which are given by
\begin{align}
\sigma_\pm
=
\sqrt{\frac{2K}{z}(1 + w \pm \sqrt{1 + w^2})}.
\label{saddle}
\end{align}
Here,
we define
\begin{align}
w = \frac{z(z-\omega^2)}{2K\lambda},
\label{z to w}
\end{align}
for later convenience.
Thus, we find that the integral~\eqref{aK quartic} is rewritten in terms of integrals on the Lefschetz thimbles, which are obtained by solving the flow equation defined in Eq.~\eqref{J} from the saddle points $\sigma_\pm$.
We denote the Lefschetz thimbles by $\mathcal{J}_\pm$ and corresponding weight factors by $n_\pm$.
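To make the flow construction concrete, the thimbles can be traced numerically. The following sketch integrates the downward flow with RK4, using the illustrative values $\lambda=\omega^2=1$, $K=2$ from the text and an arbitrary test point $z$ with $\Re z>0$ (our own choice); it checks that Eq.~\eqref{saddle} solves the saddle point equation and that $\Re I_K$ increases monotonically along the flow, as it must.

```python
import numpy as np

K, lam, omega2 = 2, 1.0, 1.0
z = 3.0 + 0.5j   # arbitrary illustrative point with Re z > 0

def I_eff(xi):
    # I_K(xi; z) of Eq. (effective action: FAC)
    return z*xi**2/2 - K*np.log((omega2 - z)*xi**2/2 + lam*xi**4/4)

def dI(xi):
    SI = (omega2 - z)*xi**2/2 + lam*xi**4/4
    return z*xi - K*((omega2 - z)*xi + lam*xi**3)/SI

def saddles():
    w = z*(z - omega2)/(2*K*lam)          # Eq. (z to w)
    return [np.sqrt(2*K/z*(1 + w + s*np.sqrt(1 + w**2))) for s in (1, -1)]

def flow(xi0, dt=1e-3, steps=4000, stop=20.0):
    # RK4 integration of d(xi)/dt = +conj(dI/dxi); stops near a singular
    # endpoint of the thimble, where |dI| blows up
    f = lambda y: np.conj(dI(y))
    xi, path = xi0, [xi0]
    for _ in range(steps):
        if abs(dI(xi)) > stop:
            break
        k1 = f(xi); k2 = f(xi + dt*k1/2); k3 = f(xi + dt*k2/2); k4 = f(xi + dt*k3)
        xi = xi + dt*(k1 + 2*k2 + 2*k3 + k4)/6
        path.append(xi)
    return np.array(path)

# start slightly off a saddle point; Re I_K must grow monotonically
path = flow(saddles()[0] * (1 + 1e-4))
```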
As we will see below,
the weight factors, which are difficult to calculate in general, can be obtained explicitly in our case.
By a little algebra,
we find that the difference of the values of the effective action on the saddle points reads
\begin{align}
\Delta(w)
\equiv
\frac{I_K(\sigma_+) - I_K(\sigma_-)}{K}
=
2\sqrt{1 + w^2} - \log \frac{1 + \sqrt{1 + w^2}}{1 - \sqrt{1 + w^2}}.
\end{align}
Therefore, the Stokes lines are obtained as solutions of the equation\footnote{Because there are only two Lefschetz thimbles in this simple example, we do not need to care about the multiple-of-$2\pi$ ambiguity.}
\begin{align}
\Im \Delta(w) = 0. \label{stokes quartic model}
\end{align}
Since $z^{2k+1/2}a_k(z)$ is invariant under flipping the sign of the imaginary part of $z$,
one can restrict to $\Im w \geq 0$ without loss of generality.
When $\Re w = 0$, Eq.~\eqref{stokes quartic model} has a trivial solution $w = ib$, $(0\leq b \leq 1)$.
Other solutions at $\Re w \neq 0$ can be found numerically.
We show these solutions in the upper left panel of Fig.~\ref{fig:stokes} by red solid lines.
Thus, we find that the upper half of the complex $w$-plane is partitioned into three areas.
From each area, we take a representative point,
say $w=-1+0.5i$ (A), $1+0.5i$ (B) and $2+0.5i$ (C),
and calculate the Lefschetz thimbles~\eqref{J} and upward flows~\eqref{K} for these values of $w$.
Other parameters, $\lambda$, $\omega^2$ and $K$, which are not relevant to this argument, are fixed to $\lambda=1$, $\omega^2=1$ and $K=2$.
In the remaining panels of Fig.~\ref{fig:stokes},
we show the Lefschetz thimbles by orange solid lines and the upward flows by blue dotted lines.
As we discussed in Sec.~\ref{sec:thimble},
intersections of the Lefschetz thimbles and the upward flows are given by the saddle points of the effective action~\eqref{saddle}, which are denoted by circles.
Endpoints of the Lefschetz thimbles are a point at infinity or singular points of the effective action:
\begin{align}
\zeta_1=0, \ \ \ \zeta_2=\sqrt{\frac{2(z-\omega^2)}{\lambda}},
\end{align}
which are denoted by crosses.
Now, it is easy to read off the intersections of the upward flows with the real axis.
As a result, we get
\begin{align}
&n_+ = 0, \quad n_- = 1, \quad
w\in \text{left bottom area (represented by A)}, \\
&n_+ = 1, \quad n_- = 1, \quad
w\in \text{right bottom area (represented by B)}, \\
&n_+ = 1, \quad n_- = 0, \quad
w\in \text{top area (represented by C)}.
\end{align}
\begin{figure}[tbh]
\centering
\fig{7.5cm}{stokes_on_w.pdf}
\fig{7.5cm}{thimble_A.pdf}
\fig{7.5cm}{thimble_B.pdf}
\fig{7.5cm}{thimble_C.pdf}
\caption{(Left top) The Stokes (red solid lines) and anti-Stokes lines (black dotted lines) in the complex $w$-plane.
(Others) Lefschetz thimbles (orange solid lines) and steepest ascent contours (blue dotted lines) in the complex $\xi$-plane at $w = -1+0.5i$ (A), $1+0.5i$ (B) and $2+0.5i$ (C), respectively. Circle dots and crosses denote saddle points and singular points of the effective action $I_K$, respectively.}
\label{fig:stokes}
\end{figure}
Thanks to the above argument on the Stokes line,
the definition of the anti-Stokes line~\eqref{anti Stokes line} can be simplified as follows.
Since the Boltzmann factor $e^{-I_K(\sigma_i(w);\,w)}$ is finite,
Eq.~\eqref{anti Stokes line} has solutions if and only if both weight factors $n_-$ and $n_+$ are not zero.
In other words, $w$ should be a point in the right bottom area.
If this is the case,
Eq.~\eqref{anti Stokes line} becomes
\begin{align}
e^{-I_K(\sigma_+(w);\,w)} + e^{-I_K(\sigma_-(w);\,w)} = 0.
\end{align}
Since the relative phase between these terms is $\pi$,
we get
\begin{align}
\Re\Delta(w) = 0, \quad w\in \text{right bottom area}
\label{anti-Stokes line quartic model}
\end{align}
as a criterion for the anti-Stokes line.
This equation indeed has solutions in the right bottom area,
and they form a continuous line, depicted in the left top panel of Fig.~\ref{fig:stokes} by a black dotted line.
The endpoint of the line at $w = i$ is the point where the two saddle points become degenerate.
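Numerically, both lines are easy to trace from $\Delta(w)$ directly. The sketch below checks the trivial Stokes solution on the imaginary axis and locates one point of the anti-Stokes line by bisection of $\Re\Delta$ along an illustrative segment which, by inspection of the figure, lies in the right bottom area.

```python
import numpy as np

def Delta(w):
    # difference of I_K at the two saddles, divided by K
    s = np.sqrt(1 + w**2)
    return 2*s - np.log((1 + s)/(1 - s))

# trivial Stokes solutions w = ib, 0 <= b <= 1: Im Delta vanishes there
print(Delta(0.5j).imag)   # ~ 0

# bisection of Re Delta along an illustrative segment in the right
# bottom area; Re Delta changes sign between the two endpoints
w0, w1 = 0.3 + 0.2j, 0.5 + 0.9j
f = lambda t: Delta(w0 + t*(w1 - w0)).real
a, b = 0.0, 1.0
for _ in range(60):
    m = 0.5*(a + b)
    a, b = (a, m) if f(a)*f(m) <= 0 else (m, b)
w_star = w0 + 0.5*(a + b)*(w1 - w0)   # a point on the anti-Stokes line
print(w_star)
```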
Before closing this subsection,
we remark that the length of the anti-Stokes line in the complex $w$-plane $L$ is bounded as
\begin{align}
L< \frac{\pi}{2},
\label{length of ASL}
\end{align}
and thus, $L$ is independent of the truncation order of the $\delta$-expansion $K$.
This fact seems obvious, but as we will see soon, it is crucial to argue an asymptotic behavior of the FAC condition.
\subsection{Solutions of the FAC condition and the anti-Stokes line}
As we discussed in Sec.~\ref{sec:thimble},
solutions of the FAC condition should be distributed around the anti-Stokes line.
This property is confirmed by Fig.~\ref{FAC and ASL},
where the solutions of the FAC condition in the complex $z$-plane are denoted by white circles for $\lambda = 1$, $\omega^2=1$ and $K=2, 4, 8, 16$.
The Stokes and anti-Stokes lines obtained in the previous subsection are mapped into the complex $z$-plane by Eq.~\eqref{z to w}, and they are denoted by red solid and white dotted lines, respectively.
We also show the absolute value of $z^{2K+1/2}a_K(z)$ by the contour plot.
A remarkable feature of $|z^{2K+1/2}a_K(z)|$ is that it tends to form a deep and flat valley around the anti-Stokes line as the truncation order of the $\delta$-expansion $K$ increases.
This tendency can be understood as follows.
Since the length of the anti-Stokes line in the complex $w$-plane is bounded as Eq.~\eqref{length of ASL},
that in the complex $z$-plane is given by $cK^{1/2}$, where $c$ is a $K$-independent constant.
On the other hand,
the FAC condition has $K$ non-degenerate solutions.
Thus, the linear density of the solutions of the FAC condition along the line is given by
\begin{align}
\rho_K = \frac{K}{cK^{1/2}} = c^{-1} K^{1/2}.
\end{align}
This means that the solutions of the FAC condition accumulate on the anti-Stokes line in the limit $K\to\infty$.
In this limit, the FAC condition holds everywhere as long as $z\propto K^{1/2}$ due to the identity theorem~\footnote{
The scaling behavior $z\propto K^{1/2}$ agrees with the previous proofs of the convergence of the $\delta$-expansion.}.
This result is natural
because the approximant of the $\delta$-expansion $Z_K$ should be insensitive to the choice of the unphysical parameter $z$.
A similar discussion can be carried out for general $q$, the exponent of the highest-order term of the general effective action, Eq.~\eqref{effective action: factorized form}.
In this case, $z$ should scale as $z \propto K^{1-2/q}$ so that the saddle point approximation is justified.
Once $K$ is factorized from the effective action,
the Stokes and anti-Stokes lines should be determined irrespective of $K$.
Thus, the density of the solutions of the FAC condition around the anti-Stokes line would be given by
\begin{align}
\rho_K = \frac{K}{cK^{1-2/q}} = c^{-1} K^{2/q}.
\end{align}
Therefore, as long as $q$ is finite,
the FAC condition holds everywhere due to the accumulation of the solutions.
The only exception is the case of infinitely large $q$,
in which physical quantities could remain sensitive to the choice of the parameter $z$.
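This accumulation, and the underlying $z\propto K^{1/2}$ scaling, can be checked numerically. The following Python sketch is illustrative only: it assumes the analytic expression of $a_K$ in terms of the hypergeometric polynomial ${}_1F_1(-K, \tfrac{1}{2}-2K; \zeta)$ with $\zeta=(\omega^2-z)z/\lambda$ (quoted later in this section), computes the zeros of that polynomial, and confirms that their scale in the $\zeta$-plane grows linearly in $K$, which translates into $|z|\propto K^{1/2}$.

```python
import numpy as np
from math import factorial

def fac_zeros_zeta(K):
    """Zeros of 1F1(-K, 1/2 - 2K; zeta), a degree-K polynomial in zeta."""
    b = 0.5 - 2 * K
    def poch(a, j):  # Pochhammer symbol (a)_j
        p = 1.0
        for i in range(j):
            p *= a + i
        return p
    coeffs = [poch(-K, j) / (poch(b, j) * factorial(j)) for j in range(K + 1)]
    return np.roots(coeffs[::-1])  # numpy.roots wants highest degree first

# zeta = (omega^2 - z) z / lambda, so |zeta| ~ K corresponds to |z| ~ K^{1/2}.
med = {K: np.median(np.abs(fac_zeros_zeta(K))) for K in (4, 8, 16)}
ratio_16_8 = med[16] / med[8]  # expected to be close to 2 (linear scaling)
ratio_8_4 = med[8] / med[4]
```

Doubling $K$ should roughly double the median $|\zeta|$, in line with the density estimate above.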
\begin{figure}[tbh]
\centering
\fig{7.5cm}{FAC_Z_k=2_o2=1_lam=1+0j.pdf}
\fig{7.5cm}{FAC_Z_k=4_o2=1_lam=1+0j.pdf}
\fig{7.5cm}{FAC_Z_k=8_o2=1_lam=1+0j.pdf}
\fig{7.5cm}{FAC_Z_k=16_o2=1_lam=1+0j.pdf}
\caption{The solutions of the FAC condition in the complex $z$-plane for $\lambda=1$, $\omega^2=1$ and $K=2, 4, 8, 16$ are shown by white circles.
Red solid and white dotted lines stand for the Stokes and anti-Stokes lines, respectively. The contour levels are given by the absolute value of $z^{2K+1/2}a_K(z)$.}
\label{FAC and ASL}
\end{figure}
In the rest of this subsection,
we discuss how physical quantities depend on the parameter $z$ around the anti-Stokes line.
To see this,
we depict the remainder of the $\delta$-expansion
\begin{align}
R_K(z) \equiv |Z-Z_K(z)|,
\label{reminder}
\end{align}
in the left panel of Fig.~\ref{fig:RK} for $K=16$.
We also put $\lambda=1$ and $\omega^2=1$, the same values as used for Fig.~\ref{fig:stokes}.
Again, we find a deep and flat valley around the anti-Stokes line similar to that of $|z^{2K+1/2}a_K(z)|$.
In order to visualize the finer structure of the valley and its $K$ dependence,
we plot $R_K$ along the anti-Stokes line for $K=2,4,8$ and 16 in the right panel of Fig.~\ref{fig:RK}.
In this figure, the horizontal axis shows the angle of the anti-Stokes line.
Even when $z$ is complex, the order of magnitude of $R_K$ decreases as $K$ becomes large.
We obtain similar results for different $\lambda$ and $\omega^2$ as shown in appendix~\ref{App:RK}.
In Fig.~\ref{fig:plateau},
we compare $K$th-order approximants $Z_K$ along the real axis and $\arg z = \pi/8$ line with the exact value.
The horizontal axis is shifted as $z - z_\text{asl}$, where $z_\text{asl}$ is a point on the anti-Stokes line.
In both figures,
$Z_K$ forms a plateau around $z_\text{asl}$ as $K$ increases, and thus, it becomes insensitive to the choice of the unphysical parameter $z$.
In many applications of OPT, this kind of plateau structure is used as a guiding principle to determine an optimal value of $z$.
What we have clarified here is that the underlying mechanism of the plateau formation is an accumulation of the solutions of the FAC condition in the complex $z$-plane.
\begin{figure}[tbh]
\centering
\fig{7.5cm}{FAC_R_k=16_o2=1_lam=1+0j.pdf}
\fig{7.5cm}{Rk_on_asl_o2=1_lam=1.pdf}
\caption{
The value of $R_K$ at $K=16$ in the complex $z$-plane (left) and at $K=2,4,8$ and 16 on the anti-Stokes line (right).
In the right panel, $\theta$ denotes the angle of the anti-Stokes line.
In both panels, $\lambda=1$, $\omega^2=1$.
Symbols and lines are the same as in Fig.~\ref{fig:stokes}.
}
\label{fig:RK}
\end{figure}
\begin{figure}[tbh]
\centering
\fig{7.5cm}{Zk_plateau_o2=1_lam=1_angle=0.pdf}
\fig{7.5cm}{Zk_plateau_o2=1_lam=1_angle=0125pi.pdf}
\caption{
The $z$-dependence of the $K$th-order approximants $Z_K$ on the real axis (left) and along the $\arg z=\pi/8$ line (right) around the anti-Stokes line.
The horizontal line shows the exact value of $Z$.
}
\label{fig:plateau}
\end{figure}
\subsection{Relation to the PMS condition}
In the previous subsection,
we saw that the plateau structure of the approximant $Z_K$ of the physical quantity emerges around the anti-Stokes line determined by the FAC condition.
However, strictly speaking, this is quite a nontrivial phenomenon, because the FAC condition is a condition on the highest-order term of the $\delta$-expansion, not on the physical quantity itself.
Here, we show why this phenomenon happens through the relation to the PMS condition, which could detect the plateau structure of $Z_K$ more directly.
As pointed out in~\cite{Buckley:1992pc},
the first-order derivative of $Z_K$ with respect to $z$ also has a simple integral form:
\begin{align}
\frac{\partial Z_K(z)}{\partial z}
&=
\frac{(-1)^{K+1}}{2K!}
\int_{-\infty}^\infty\frac{dx}{\sqrt{2\pi}}
\left(\frac{\omega^2-z}{2}x^2 + \frac{\lambda}{4}x^4 \right)^K x^{2}
e^{-z x^2/2} \\
&=
\frac{(-1)^{K+1}}{2K!}
\int_{-\infty}^\infty\frac{dx}{\sqrt{2\pi}}
e^{-I_K^\text{PMS}(x;z)},
\label{PMS: integral rep.}
\end{align}
where an effective action for the PMS condition $I_K^\text{PMS}(x;z)$ reads
\begin{align}
I_K^\text{PMS}(x;z)
=
\frac{zx^2}{2}
- K\log \left(\frac{\omega^2-z}{2}x^2 + \frac{\lambda}{4}x^4 \right)
- \log x^2.
\label{effective action: PMS}
\end{align}
Equation~\eqref{effective action: PMS} involves an additional term $\log x^2$ compared with the effective action for the FAC condition~\eqref{effective action: FAC}.
This term only gives a subleading contribution in the saddle point approximation after the rescaling $z \to K^{1/2}z$ and $x \to K^{1/4}x$ discussed in Sec.~\ref{sec:thimble}.
Therefore, the distribution of solutions of the PMS condition coincides with the anti-Stokes line determined by the FAC condition in the limit $K \to \infty$.
In addition,
one can estimate locations of the solutions of the PMS condition at finite $K$ based on a geometrical argument.
Comparing Eq.~\eqref{PMS: integral rep.} with Eq.~\eqref{ak integral expression}, we find
\begin{align}
\frac{\partial a_{K+1}}{\partial \omega^2}
=
\frac{\partial Z_K}{\partial z}.
\end{align}
By using the analytic expression of $a_K$~\eqref{ak analytic expression},
the PMS condition can be rewritten as
\begin{align}
\frac{\partial}{\partial\omega^2}
{}_1F_1\left(-K, \frac{1}{2}-2K; \frac{(\omega^2-z)z}{\lambda}\right)
=0,
\end{align}
or equivalently
\begin{align}
\frac{z}{\omega^2-2z}
\frac{\partial}{\partial z}
{}_1F_1\left(-K, \frac{1}{2}-2K; \frac{(\omega^2-z)z}{\lambda}\right)
=0.
\end{align}
Thus, we find that the solutions of the PMS condition are given by those of the first-order derivative of the FAC condition.
Thanks to this relation,
we can constrain the distribution of the solutions of the PMS condition by the following argument.
Let us put $\zeta = (\omega^2-z)z/\lambda$,
and calculate zeros of $P(\zeta) \equiv {}_1F_1\left(-K, \frac{1}{2}-2K; \zeta\right)$.
Because $P(\zeta)$ is a polynomial function of $\zeta$,
all zeros of $P'(\zeta)$ lie in the convex hull of the set of the zeros of $P(\zeta)$~\footnote{This is known as Lucas's theorem.}.
These are depicted in the left panel of Fig.~\ref{fig:PMS}.
Mapping the boundary of the convex hull on the $\zeta$-plane to the $z$-plane,
we obtain the allowed region where the solutions of the PMS condition can appear, as shown in the right panel of Fig.~\ref{fig:PMS}.
Since the location of the boundary of the convex hull has the same $K$-dependence as that of the anti-Stokes line\footnote{Specifically, the dependence is given by $K^{1/2}$.},
this constraint ensures that the solutions of the PMS condition are also distributed around the anti-Stokes line determined by the integral representation of the FAC condition.
Therefore, we conclude that physical quantities are insensitive to $z$ as long as $z$ is chosen to be close to the anti-Stokes line.
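The convex hull constraint can be verified numerically for a given $K$. The following Python sketch ($K=8$ and the tolerance are illustrative choices) computes the zeros of $P(\zeta)$ and $P'(\zeta)$, and checks Lucas's theorem in its constructive form: $P'(w)=0$ implies $\sum_i (w-\zeta_i)^{-1}=0$, which exhibits $w$ as a convex combination of the zeros $\zeta_i$ of $P$ with weights proportional to $|w-\zeta_i|^{-2}$.

```python
import numpy as np
from math import factorial

K = 8
b = 0.5 - 2 * K

def poch(a, j):  # Pochhammer symbol (a)_j
    p = 1.0
    for i in range(j):
        p *= a + i
    return p

# P(zeta) = 1F1(-K, 1/2 - 2K; zeta) is a degree-K polynomial.
coeffs = [poch(-K, j) / (poch(b, j) * factorial(j)) for j in range(K + 1)]
P = np.array(coeffs[::-1])
zeros_P = np.roots(P)               # images of the FAC solutions in the zeta-plane
zeros_dP = np.roots(np.polyder(P))  # images of the PMS solutions

def hull_deviation(w):
    """|w - sum_i lam_i zeta_i| with Lucas weights lam_i ~ 1/|w - zeta_i|^2."""
    lam = 1.0 / np.abs(w - zeros_P) ** 2
    lam /= lam.sum()
    return abs(np.dot(lam, zeros_P) - w)

max_dev = max(hull_deviation(w) for w in zeros_dP)  # should vanish numerically
```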
\begin{figure}[tbh]
\centering
\fig{7.5cm}{PMS_vs_FAC_k=8_on_zeta.pdf}
\fig{7.5cm}{PMS_vs_FAC_k=8.pdf}
\caption{(Left) Zeros of $P$ and $P'$ are shown by circles and crosses for $K=8$.
The solid line shows the convex hull of the set of the zeros of $P$.
(Right) The left figure obtained in the $\zeta$-plane is converted to the $z$-plane for $\lambda=1$ and $\omega^2=-1$.
The red dotted line shows the anti-Stokes line.
}
\label{fig:PMS}
\end{figure}
Our argument here would be useful for practical applications of OPT, where one cannot obtain higher order terms of perturbation theory.
As demonstrated in Fig.~\ref{fig:plateau},
the plateau structure is not clear when $K$ is small.
In that case, the PMS condition is of little use for determining $z$.
Nevertheless,
the FAC condition could give a rigorous criterion for $z$.
\subsection{Analogy to the statistical physics}
In this subsection,
we point out that there is a clear analogy between
the accumulation of the solutions of the FAC condition
and phase transitions in statistical physics.
According to the renowned work by Lee and Yang,
zeros of the partition function in a complex parameter plane, referred to as Lee-Yang zeros, carry rich information about phase transitions~\cite{Yang:1952be, Lee:1952ig}.
In general,
the set of Lee-Yang zeros forms a continuous arc in the complex parameter plane in the thermodynamic limit.
If the arc pinches the real axis,
the free energy develops a singularity, which signals a first-order phase transition.
Thus,
the distribution of the Lee-Yang zeros in the thermodynamic limit is crucial for arguments about the phase transition.
It is known that this distribution is also well described by an anti-Stokes line.
These properties have been studied in detail in the Ising model, the mean-field Gross-Neveu model, and so on.
Based on this picture,
the pinching is understood as an accumulation of the Lee-Yang zeros on the anti-Stokes line~\cite{Itzykson:1983gb, Pisani1993, Kanazawa:2014qma}.
This argument is parallel to that in the previous sections.
In the case of OPT,
the $K$th-order term $a_K(z)$ of the $\delta$-expansion corresponds to the partition function, and the solutions of the FAC condition correspond to the Lee-Yang zeros.
Thus,
the procedure to find an optimal $z$ based on the FAC condition can be regarded as the problem of locating a first-order transition in the complex $z$-plane.
The analogy between OPT and statistical physics
is summarized in Table~\ref{analogy}.
On the other hand,
there is also a difference between OPT and statistical physics.
While only the zeros on the real axis have physical meaning in arguments about first-order phase transitions,
the solutions of the FAC condition are not necessarily real even when the coupling constant is real.
Moreover, the best choice of $z$ is no longer given by the intersection of the anti-Stokes line and the real axis when the coupling constant is complex.
These are demonstrated in App.~\ref{App:RK}.
\begin{table}[th]
\caption{The analogy between OPT and statistical physics.}
\label{analogy}
\centering
\begin{tabular}{cc}
\hline
Optimized perturbation theory & Statistical physics\\
\hline
$K$th-order coefficient $a_K$ & Partition function\\
Optimal $z$ & Lee-Yang zero\\
$K \to \infty$ limit & Thermodynamic limit \\
\hline
\end{tabular}
\end{table}
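The statistical-physics side of the analogy is easy to reproduce in a toy model. The Python sketch below (the model and the parameters $N=8$, $\beta J=0.5$ are our illustrative choices) enumerates the one-dimensional ferromagnetic Ising ring, organizes the partition function as a polynomial in the fugacity $z=e^{-2\beta h}$, and confirms the Lee-Yang circle theorem: all zeros lie on the unit circle in the complex $z$-plane.

```python
import numpy as np
from itertools import product

N, betaJ = 8, 0.5  # ring size and ferromagnetic coupling (illustrative)

# Z(h) = sum_s exp(betaJ * sum_i s_i s_{i+1} + beta*h * sum_i s_i).
# Up to the prefactor e^{N beta h}, Z is a polynomial in z = e^{-2 beta h};
# the coefficient of z^k sums the bond Boltzmann weights of configurations
# with exactly k down spins.
coeff = np.zeros(N + 1)
for s in product((1, -1), repeat=N):
    bond = sum(s[i] * s[(i + 1) % N] for i in range(N))
    coeff[s.count(-1)] += np.exp(betaJ * bond)

lee_yang_zeros = np.roots(coeff[::-1])
radii = np.abs(lee_yang_zeros)  # all equal to 1 up to roundoff
```

Here the zeros stay away from $z=1$ (no phase transition at finite temperature in one dimension); pinching of the real axis is what accumulation would produce in a genuinely transitioning model.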
\section{Summary and Concluding remarks}\label{sec:summary}
In this paper,
we have studied fundamental properties of the FAC condition which is used as a variational criterion in OPT.
We have pointed out that the FAC condition has two conceptual problems.
One is that additional criteria must be imposed by hand to determine the artificial parameter, since the solution of the FAC condition is not unique.
The other is that the insensitivity of the approximant to the artificial parameter is not guaranteed.
We have clarified that the distribution of the solutions of the FAC condition is related to the topology of Lefschetz thimbles.
As a result, we have found that, within the saddle point approximation, the solutions are distributed around the anti-Stokes line, on which the contributions from the individual thimbles cancel each other.
Moreover, we have argued that the approximation becomes exact in the limit $K\to \infty$, where $K$ is a truncation order of the reorganized perturbative expansion.
We have performed detailed studies on the anti-Stokes line for a one dimensional integral.
In that concrete example,
the saddle point approximation makes sense when we assume that the artificial parameter $z$ is scaled as $z \propto K^{1/2}$.
Then, we have shown that the length of the anti-Stokes line in the complex $z$-plane is also proportional to $K^{1/2}$.
Since the FAC condition has $K$ non-degenerate solutions, we have concluded that these solutions accumulate on the anti-Stokes line in the $K\to\infty$ limit.
If this is the case, the FAC condition is satisfied for any $z$ as long as $z \propto K^{1/2}$.
This is the underlying mechanism by which physical quantities calculated by OPT can be insensitive to the choice of $z$.
We have confirmed that an approximated quantity obtained in OPT agrees with the exact value along the anti-Stokes line and a flat region is developed around the line as $K$ increases.
In addition, we have discussed the relation between the FAC and PMS conditions in order to clarify why the FAC condition leads to the insensitivity of the physical quantities.
We have found that the PMS condition has the same effective action as that for the FAC condition in the limit $K\to\infty$.
In addition, we have argued that the solutions of the PMS conditions must be interior points of the convex hull of the solutions of the FAC condition,
and this ensures that the solutions of the PMS conditions always appear around the anti-Stokes line determined by the FAC condition.
Finally, we have pointed out that there is a clear analogy between our argument and the physics of phase transitions.
According to the analogy, the partition function and the thermodynamic limit correspond to the integral representation of the FAC condition and the $K\to\infty$ limit, respectively.
While we have restricted our attention to one dimensional integrals in this paper,
the arguments performed here can be generalized to higher-dimensional integrals, and presumably to path integrals.
Admittedly, calculating the topology of Lefschetz thimbles in infinite dimensions is a formidable task.
However, we expect that it becomes tractable once we reformulate OPT as the physics of phase transitions in terms of the artificial parameter $z$, where many familiar tools and techniques are available even in quantum field theories.
We will present these discussions elsewhere.
\section*{Acknowledgments}
T.M.D. is supported by
the RIKEN Special Postdoctoral Researchers Program.
\section{Introduction}
Quantum error correcting codes have played a crucial role in
quantum complexity theory
(see, e.g., \cite{Got2,Got3,BC, BCG})
and their study is a vastly growing field
(see, e.g.,\cite{Got,Zemor2,Zemor3, Kovalev1,
Kovalev2,TerhalTradeoffs,Fetaya}); they are related to
a variety of issues including resilience to noise
and fault tolerance, quantum cryptography, topological order,
multi-particle entanglement, and more.
Here, we initiate the study of the quantum analogue of Locally Testable
Codes ($\LTC$s). $\LTC$s, first defined in \cite{FS,RS,A},
are a particularly interesting class of error correcting codes which
played an instrumental role in all proofs of the celebrated
$\PCP$ theorem \cite{ALMSS, AS, Din}; their study had inspired the
definition of property testing \cite{GGR}
and the understanding of their limitations and possible constructions
has developed into a very interesting field of its own
(see for example Goldreich's survey \cite{Gold}).
To define $\LTC$s, consider the following question:
given a code of $n$-bit strings,
defined by $O(1)$-local constraints,
and a word which is of distance $\delta n>0$ from
the code (we say it has {\it proximity} $\delta$),
what is the probability that a randomly chosen
constraint is violated?
We denote by $R(\delta)$ (called the {\it soundness})
the lower bound on the probability
that {\it any} word of proximity $\delta$ from
the code will violate a randomly
chosen constraint.
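To make these notions concrete, soundness can be computed by brute force for toy codes. The Python sketch below uses our own illustrative example, the $n$-bit repetition code on a cycle with constraints $x_i=x_{i+1 \bmod n}$; it minimizes the violated fraction over all words at proximity at least $\delta$. This toy code is a poor $\LTC$: flipping a contiguous segment of bits violates only the two constraints at the segment boundary, so $R(\delta)$ is stuck at $2/n$.

```python
from itertools import product

n = 8
codewords = [(0,) * n, (1,) * n]                    # cycle repetition code
constraints = [(i, (i + 1) % n) for i in range(n)]  # x_i == x_{i+1 mod n}

def distance(w):
    """Hamming distance from the word w to the code."""
    return min(sum(a != b for a, b in zip(w, c)) for c in codewords)

def soundness(delta):
    """R(delta): minimum violated fraction over words of proximity >= delta."""
    return min(sum(w[i] != w[j] for i, j in constraints) / len(constraints)
               for w in product((0, 1), repeat=n)
               if distance(w) >= delta * n)

# A block of flipped bits violates only its two boundary constraints,
# so the soundness stays at 2/n = 0.25 for every nonzero proximity:
print(soundness(1 / 8), soundness(4 / 8))  # -> 0.25 0.25
```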
$\LTC$s of excellent soundness at proximities larger than some
constant are known, most notably the Reed-Muller code \cite{Mac},
the Hadamard code \cite{AB}, and Hastad's
long-code \cite{Hastad} which
were used in the $\PCP$ proofs of \cite{ALMSS, AS,Din}.
Though of excellent soundness, these codes are not so satisfying when
considering other parameters of interest. For example, the rates of
the Hadamard
and long code are exponentially and doubly exponentially small,
respectively. Much research \cite{FS,RS} was devoted to optimizing
the parameters of $\LTC$s, maintaining constant relative distance
and constant {\it query complexity}
(namely, the number of bits in each constraint), and improving the rate.
The best known $\LTC$s in this respect are \cite{Din,BS} which
have constant distance, constant query complexity,
and rates which are $1/polylog$.
It is a major open question (called the $c^3$ problem \cite{GS})
whether good (namely, constant relative rate and distance)
$\LTC$s exist.
\subsection{Quantum Locally Testable Codes - Definition and Motivation}
To the best of our knowledge the quantum analogue of
$\LTC$s was not defined before.
We provide a definition of
general quantum Locally Testable Codes ($\qLTC$s)
in Definition \ref{def:qLTC}.
To define $\qLTC$s, we recall that a
quantum code defined by $O(1)$-local constraints can be viewed
as the groundspace (namely, the zero eigenspace)
of a local Hamiltonian $H=\sum_{i=1}^m \Pi_i$
whose local terms are
projections, which we will refer to as the quantum constraints.
We define Quantum Locally Testable Codes ($\qLTC$s) with soundness
$R(\delta)$ as
those codes for which
when a state $\Psi$ is within distance
at least $\delta n$ from the code
space, its average energy with respect to the constraints,
$\frac{1}{m}\langle \psi|H|\psi\rangle$,
is at least $R(\delta)$
(for an exact definition see Subsection \ref{sec:qLTC}).
The average energy is the natural and commonly used
analogue, in quantum Hamiltonian complexity,
of the probability to detect a violation in a randomly chosen
constraint (see for example \cite{AAV}).
This definition sets the stage for a wide range of
interesting questions.
What are the limitations on quantum $\LTC$s, and what are possible
constructions?
Are there $\qLTC$s which achieve, or get close to,
the best classical $\LTC$s in terms of parameters,
or are the quantum versions of those codes inherently limited by some quantum
phenomenon? What can we learn from $\qLTC$s regarding the notion
of local testability of proofs, a notion which in the classical setting
is tightly related to that of $\LTC$s \cite{Gold},
and which is still widely
evasive in the quantum setting \cite{AAV}?
Our motivation in introducing $\qLTC$s
in order to study the above questions
stems not only from trying to import the interesting
classical local-testability paradigm into the quantum setting, but
also from their strong relations to
questions which are of inherent
interest to quantum information, quantum complexity as well as to
quantum physics. We highlight here several such connections.
An important motivation is to gain insight into the widely open quantum
$\PCP$ conjecture \cite{Aha},
a quantum analogue of the $\PCP$ theorem; it states, roughly,
that it is quantum-$\NP$ hard
to approximate the ground energies
of local Hamiltonians even to within
a constant fraction. This conjecture is tightly related to
deep questions about
multiparticle entanglement, and
there has been much recent work attempting to make progress on it
(see the recent survey \cite{AAV} and references therein).
In the classical setting,
$\LTC$s have been instrumental in $\PCP$ theory \cite{ALMSS, AS, Din}
and are intimately related to the
notion of local testability of proofs
\cite{Gold}, and
understanding the
limitations of their quantum counterparts
might shed light on the $\qPCP$ problem.
Another important open question is that of
the feasibility of quantum self correcting memory. This is
a medium in which
a quantum state is maintained almost in tact
for a long time without active error correction,
even at constant temperatures;
errors are corrected passively by the interaction with the environment.
Clearly such a system is of high practical as well as theoretical interest,
and the topic has been studied extensively in recent years
(e.g., \cite{BT,Den,Ha,HaahPreskill,CL, Haah, Yoshida}).
It is of major interest to devise feasible constructions of
quantum self correcting memory.
A crucial role in this area is played by
the {\it energy barrier} of the quantum code, which
is the amount of energy required in order to
move from one codeword to an orthogonal one.
This notion, which has also been studied extensively (see, e.g.,
\cite{M}), is tightly related to the soundness of the code,
which can be viewed as the {\it energy cost} of
large errors; understanding $\qLTC$s might thus provide insights
into possible constructions of self-correcting memories.
A fundamental open question related to both of the above is
whether multiparticle entanglement can be made robust
at room temperatures. The question was formalized by Hastings in terms of
the $\NLTS$ (No Low-energy Trivial States)
conjecture \cite{Has}, which, roughly, states
that there exist local Hamiltonians
such that all their low-energy states are highly entangled.
Such Hamiltonians are necessary for the $\qPCP$ conjecture to
hold (\cite{Has}, and see also \cite{AAV}).
$\NLTS$ Hamiltonians and $\qLTC$s seem related:
while in $\qLTC$s, low energies imply
closeness to the code, in $\NLTS$ Hamiltonians they imply high entanglement,
which is well known to be necessary for code states.
Indeed, some weak connections between the two notions were already proven
\footnote{One can show that $\qLTC$s do not have tensor-product states with small (constant) mean energy.}.
In the following, we will investigate the behavior of $\qLTC$s in
various scenarios.
The behavior of $\LTC$s is usually explored in one of two contexts:
as an error-correcting code, or in relation to locally testable
proofs (see \cite{Gold}); depending on the context, one is interested
in different parameters. In particular, in the context of error correction,
the interesting regime of proximities, namely distance of the word from the
code, is at most half the distance {\it of} the code; in this regime,
the error can still be corrected. In the context of $\PCP$s, on the other
hand, much larger distances can be of interest, since a cheating prover may
provide witnesses of arbitrary distance from the code.
At any given point, we will mention the range which we will be
considering.
\subsection{Contributions}
\subsubsection{Definition and Basic Examples}
We provide a general definition of $\qLTC$s in Definition \ref{def:qLTC}.
Being probably the richest and most well-studied class of quantum
codes, stabilizer codes \cite{Got} are compelling to work with.
We thus provide a simpler definition for stabilizer $\LTC$s (denoted
$\sLTC$ -- Definition \ref{def:sLTC}) and
prove that it coincides with the definition of $\qLTC$s on stabilizer codes,
in Claim \ref{cl:sLTCqLTC}.
An illuminating example to consider is Kitaev's $2D$
toric code \cite{Kit2}, which turns out to have very bad soundness,
since a string-like error of any length -- e.g.,
an error made of Pauli operators applied on a $\Theta(\sqrt{n})$ long
line-segment of qubits --
only violates two constraints - those that intersect its two edges.
So, at small (up to $1/\sqrt{n}$) values of proximity,
the soundness is bounded from above by $1/\sqrt{n}$.
One can in fact extend this phenomenon to derive bounds on the soundness
for constant values of proximity.
Another illuminating example is that of quantum Reed-Muller codes
\cite{Ste}. Certain classical Reed-Muller codes are known
to have good (constant) soundness \cite{Alon}.
Quantum Reed-Muller codes can be constructed
using classical Reed-Muller codes and their dual,
in the usual $\CSS$ paradigm \cite{Nielsen}.
By construction, the resulting code will
inherit its soundness from one of the two classical codes that defines the
$\CSS$ code -- the one with the worse soundness.
Unfortunately, the rate and distance of the quantum Reed-Muller codes
are much worse even than the optimal classical Reed-Muller codes,
as is expected from $\CSS$ codes \cite{Nielsen}
(more details will be provided in the journal version).
\subsubsection{Bound on the soundness of $\sLTC$s on small set expanders}
We provide two upper bounds
on the soundness of $\qLTC$s at low, constant
values of proximities $\delta>0$.
We focus on $\sLTC$'s on $n$ qudits,
which are {\it good} quantum codes,
defined by $m$ $k=O(1)$-local check terms, where
each qudit participates in $D_L=O(1)$ constraints.
For such codes, we consider bounds on the soundness at values of
proximities which are at most some constant;
this constant is a function of $k,D_L$, and
in particular, $\delta<1/k$.
Usually, in the classical setting, it is much easier to derive $\LTC$s
whose soundness is good (large) for those {\it small} proximity values.
Here, we show that in this supposedly {\it easier} range of
parameters, $\qLTC$s are severely limited compared to their classical
counterparts.
To make the statement of the results simpler, we observe that
the soundness $R(\delta)$ is bounded above by
the number of constraints that touch the erred qudits, divided by $m$:
hence it is at most $\delta n D_L/m=k\delta$
(using $D_L n =km$). It is more informative
to present our results in terms of the {\it relative soundness}
$r(\delta)= R(\delta)/k\delta$, which is the soundness normalized
by its maximal value
(for exact definition see Definition \ref{def:smallr}).
Our first main result proves that good $\qLTC$s exhibit a severe
limitation on their relative soundness, when set on good expanders.
More precisely, consider the bi-partite graph of the code
defined with $n$ bits on the left side,
$m$ constraints on the other side, and edges connecting each
constraint to all of its bits.
We say that the bi-partite graph is an $\epsilon$ small-set expander
if every small (size $k=O(1)$) subset of bits is examined by nearly as many
constraints as it possibly can be, namely, by at least $(1-\epsilon)kD_L$
constraints.
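For intuition, small-set expansion can be checked mechanically on a toy bipartite graph. The Python sketch below (the cycle-code graph, with $k=2$ and $D_L=2$, is our illustrative choice) computes, for every size-$k$ subset of bits, the number of distinct constraints examining it, and reports the smallest $\epsilon$ for which the graph is $\epsilon$-small-set expanding.

```python
from itertools import combinations

n = 8
# Toy bipartite graph: constraint i examines bits {i, i+1 mod n},
# so each constraint has k = 2 bits and each bit lies in D_L = 2 constraints.
k, D_L = 2, 2
constraints = [frozenset({i, (i + 1) % n}) for i in range(n)]

def worst_epsilon():
    """Smallest eps such that every size-k set touches >= (1-eps)*k*D_L constraints."""
    worst = 0.0
    for S in combinations(range(n), k):
        touched = sum(1 for c in constraints if c & set(S))
        worst = max(worst, 1 - touched / (k * D_L))
    return worst

# Adjacent bit pairs share one constraint (3 touched instead of 4):
print(worst_epsilon())  # -> 0.25
```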
Theorem \ref{thm:QECC} shows that in the quantum setting,
when the underlying bi-partite graph of the $\sLTC$ code
is an $\epsilon$ small set expander, the relative
soundness is $O(\epsilon)$. In other words, the better the expansion,
the worse the soundness. This holds
for all proximities smaller than some constant $\delta_0$.
More formally, we show:
\begin{theorem}\label{thm:QECC}
Let $C$ be a good stabilizer code, on $n$ $d$-dimensional qudits, of
relative distance $>0$, and a $k$-local
generating set ${\cal G}\subset \Pi_d^n$,
such that each qudit is examined by $D_L$ generators.
Put $\delta_0 = \min \left\{\frac{1}{k^3 \cdot D_L},\frac{1}{2n}dist(C)\right\}$.
Suppose the bi-partite interaction graph of ${\cal G}$ is $\eps$-small
set expanding,
for $\eps < 1/2$.
Then, for all $0<\delta < \delta_0$,
we have $r(\delta)\le 2\eps$.
\end{theorem}
See subsection \ref{sec:qLTC} for exact definitions of
Stabilizer codes and their generators, and Definition
\ref{def:smallr} for the exact
definition of relative soundness.
Theorem \ref{thm:QECC} stands in sharp contrast to
the classical domain. Classically,
codes can easily be constructed on good expanders
so that for small proximities their soundness is excellent;
we provide an explicit
such example whose relative soundness is arbitrarily close to
$1$ by plugging the {\it lossless expanders} constructed in \cite{CRVW}, into
the expander code construction of Sipser and Spielman \cite{Spi}. This
implies good classical codes with
constant query complexity and with almost optimal
soundness for any proximity $\delta$ smaller than some constant
(see Claim \ref{cl:classical} in Appendix \ref{app:classical}).
\subsubsection{Bound on the soundness of general $\qLTC$s}
Our second main result is an upper bound on the relative soundness which
holds for $\sLTC$s set on {\it any} underlying bi-partite graph,
not necessarily small-set expanders.
\begin{theorem}\label{thm:sound} (Roughly)
For any good stabilizer code $C$
of $k$-local terms ($k\geq 4$) over $d$-dimensional qudits, where
each qudit interacts with $O(1)$
local terms,
errors of fractional weight $\delta<\delta_0\le 1$, for
$\delta_0 = \Omega(1)$
have relative soundness at most
$\alpha(d)(1-\gamma_{gap})$
for some constant function $\gamma_{gap}=\gamma_{gap}(k,d)>0$.
\end{theorem}
\noindent
$\alpha(d)$ in the above theorem is defined to be $1-1/d^2$;
this is a technical upper bound
on the relative soundness
of $\qLTC$s defined on $d$-dimensional qudits,
stemming quite easily from the size of the alphabet $d$
(see subsection \ref{sec:alphabet});
Theorem \ref{thm:sound} shows
that the soundness is further bounded by some seemingly deeper quantum
phenomenon.
We stress that this upper bound, which is not exhibited in classical
codes, is found in the range of parameters of
$\delta$ (small constants) in which it
is supposed to be {\it easiest} to achieve
soundness for $\LTC$s, e.g., our
Claim \ref{cl:classical}.
\subsubsection{Quantum $\PCP$s of Proximity}
$\LTC$s are tightly connected \cite{Gold} to
$\PCP$'s of proximity ($\PCPP$s), which
are proof systems defined very similarly to
$\PCP$s (See \cite{BGHSV}).
For the reader familiar with $\PCP$s, they too consider a verifier
who gets access to an untrusted proof, however, $\PCPP$s differ from
$\PCP$s in two important aspects:
first, they are weaker, in the sense that they
are required to reject only inputs that are {\it far} from the language,
whereas in $\PCP$s any input out of the language should be rejected.
On the other hand, the verifier is charged not only for the number of
queries out of the proof,
but also for the number of queries out of (part of) the input.
For a formal definition see Appendix \ref{app:PCPP}.
Ben-Sasson et al.~\cite{BGHSV} provide a standard construction of
an $\LTC$ from a $\PCPP$. Given a $\PCPP$ for membership in a code,
and an error correcting code $C$,
they construct an $\LTC$ code $C'$, which inherits its soundness parameter
from the soundness parameter of the $\PCPP$ and its distance from the
code $C$ (Construction 4.3, and Proposition 4.4
in \cite{BGHSV}, see Appendix \ref{app:PCPP}).
In Appendix \ref{app:PCPP}, we suggest a definition of quantum $\PCPP$s,
and show that a similar result to that of \cite{BGHSV}
holds in the quantum setting.
The meaning
of the definition of $\qPCPP$ and of the above described connection,
and their relevance and importance to the quantum $\PCP$ conjecture,
are far from clear (see, for example, \cite{AAV} for doubts
regarding the classical approach to proving the quantum $\PCP$ conjecture,
and the direct applicability of quantum error correcting codes in this
context). Still, we provide these definitions and results in the appendix,
to make the point that a syntactic connection
does carry over also to the quantum regime. It is a wide open question
to give deep meaning to the connection between
$\qLTC$s and quantum local testability
of proofs, as is known in the classical case \cite{Gold}.
\subsection{Overview of Proofs of Theorems \ref{thm:QECC} and
\ref{thm:sound}}\label{subsec:overview}
\subsubsection{Bounds on $\sLTC$ codes on Expanders}
To prove Theorem \ref{thm:QECC}, we want to use good small-set expansion
in order to construct an error which will not have a large energy penalty (namely, will not violate too many constraints)
but which will be of large weight. More precisely, the error should have
a large weight modulo
the centralizer of the stabilizer group (see Definition \ref{def:sound}),
and yet should not
violate too many stabilizer generators
(recall that an error violates a stabilizer
generator, or constraint, if it does not commute with it;
see definition \ref{def:stab}).
The key idea is that in a small-set expander,
intersections between stabilizer generators which consist of more than
one qudit are rare (see Fact \ref{fact:deg}).
The size of the intersection matters, since for
two generators that intersect on
a single qudit, the restrictions of those operators to that qudit
must {\it commute}, because the two generators commute overall
(see Definition \ref{def:stab}).
We note that it cannot be that {\it all} generators, when restricted
to a given qudit, commute,
because this would mean
this qudit is trivial for the code (see the remark at the end of
Subsection \ref{sec:QECCdefs}).
An error defined on a qudit in such a way that it
commutes with the majority of the
generators acting on it will violate only a small
fraction of the
constraints acting on that qudit.
To extend this to errors of larger weight (up to some small
constant fraction), we apply the above idea to each of
the generators in
a large ``sparse'' set of generators, namely
a set in which any two terms are at least
some constant distance apart in the bi-partite interaction graph
(formally, a $1$-independent set of terms;
see Definition \ref{def:Lind}).
It is not difficult to see that due to the distance between the
generators, the error weight remains large
even modulo the centralizer.
\subsubsection{Upper bound on soundness for stabilizer $\sLTC$s on any
graph}
To prove Theorem \ref{thm:sound}, we want to show that,
regardless of the graph they are set on, the
relative soundness of $\qLTC$s is bounded from above by a constant
strictly smaller than $1$.
We use the bound of Theorem \ref{thm:QECC}
(the ``surprising'' side), augmented
with a claim that quantum stabilizer codes not only
suffer from the
quantum effect of Theorem \ref{thm:QECC}
but also cannot avoid the classical
effect by which codes with {\it poor} small-set expansion have low soundness:
large error patterns are examined by relatively few
check terms, so the number of constraints they violate is relatively low.
Together, this means that for {\it any} underlying graph,
whether a good or a bad small set expander,
the relative soundness is non-trivially bounded.
While in the classical case the fact that poor expansion implies poor
relative soundness is very easy to argue,
in the quantum case the proof turns out to be
quite non-trivial; still, a similar phenomenon holds.
Let us clarify what we are trying to show.
We want to show that if the expansion is bad, one can construct
an error of large weight but which does not
have large relative penalty.
Suppose we would like to show that the soundness function $r(\delta)$
is small, for some range of proximity values $(0,\delta_0]$.
Consider a set of qudits $S$ whose fractional size is some $\delta \in (0,\delta_0]$, and which has positive expansion error
$\eps>0$. A priori,
if we have an error supported on $S$, then the maximal number of violations
is at most $|S| D_L (1-\eps)$,
by the assumption on the expansion.
This might seem as though it proves the result trivially.
The technical problem here, however, is that
an error on $S$ may merely ``seem'' large, while possibly being
representable much more succinctly modulo the centralizer group.
This problem is, once again, inherently quantum: it corresponds, essentially,
to showing that a given error has large weight even modulo
the {\it dual} code,
namely the code spanned by the generators themselves.
We would hence like to devise an error pattern that
cannot be downsized significantly by operations in the centralizer group,
but would still ``sense''
the non-expanding nature of $S$, and hence incur fewer-than-optimal violations.
To this end we prove the Onion fact (Fact \ref{fact:succinct}), which
might be of independent interest. It
states that given an error on at most $k/2$
of the $k$ qudits supporting
a generator, its weight cannot be reduced modulo the centralizer
within the $k$-neighborhood
of the
generator (the $k$-neighborhood is, roughly,
the set of qudits belonging to the
terms within distance $k$ of that generator
in the interaction graph).
The ``Onion'' in the name is due to the
fact that the proof (given in Subsection \ref{sec:onion})
works via some hybrid argument on
the onion-like layers $\Gamma^{(i)}(u)$
surrounding the qudits of a generator $u$.
Our idea is to concentrate the error on
a large set of far-away generators whose $k$-neighborhoods
are non-intersecting (we call those generators ``islands'').
We now argue as follows. If we draw a random error on the qudits belonging to
these ``islands'', with probability calibrated so
that the expected number of errors per ``island''
is, say, $1$ error, the following will occur:
on one hand, many islands have more than one error, so they ``sense''
the sub-optimality of expansion.
On the other hand, only a meager fraction, exponentially small in $k$,
of the ``islands'' with at least two errors will have more than $k/2$ errors;
only those, by the Onion fact (Fact \ref{fact:succinct}),
can potentially be reduced modulo the centralizer.
Hence, with high probability,
the weight of the random error cannot be significantly reduced
modulo the centralizer, yet the error still incurs a
less-than-optimal number of violations due to the expansion.
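The ``exponentially small in $k$'' fraction can be made concrete by a direct binomial-tail computation. The following sketch is only illustrative: it assumes islands of $k$ qudits, each hit independently with rate $1/k$ (so one expected error per island), and estimates the probability that an island receives more than $k/2$ errors.

```python
from math import comb

def tail_above_half(k):
    """P[Binomial(k, 1/k) > k/2]: the chance an 'island' of k qudits,
    each erred independently with rate 1/k, gets more than k/2 errors."""
    p = 1.0 / k
    return sum(comb(k, j) * p ** j * (1 - p) ** (k - j)
               for j in range(k // 2 + 1, k + 1))

for k in (4, 8, 16, 32):
    print(k, tail_above_half(k))
```

For $k=16$ the tail is already below $10^{-6}$, consistent with the claim that only a negligible fraction of islands can escape the Onion fact.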
\subsection{Related work}
Theorem \ref{thm:QECC}
is related to our recent result \cite{CLH}
in which it was shown
that when a quantum local Hamiltonian, whose terms mutually
commute, is set on
a good small-set expander, then the approximation of its ground energy
lies in $\NP$. In that result,
the better the small-set expansion, the better the
approximation. In other words, as the expansion improves, the problem becomes
less interesting from the
quantum point of view. Another result of the same spirit
was derived by Brandao and Harrow \cite{BH} for non-commuting $2$-local
Hamiltonians on standard expanders.
In both results good expansion poses a limitation on the
expressiveness of quantum constraint systems.
We note that the starting point of both the proof of our
Theorem \ref{thm:QECC} and the result of \cite{CLH} are
Facts \ref{fact:essence} and \ref{fact:deg} regarding the percentage
of unique neighbors in good small set expanders; however,
the proofs proceed from that point onwards in very different directions.
Dinur and Kaufman \cite{Din3}
showed that classical $\LTC$ codes {\it must} be set on
a good small-set expander.
More precisely, given a code with soundness
$R(\delta) = \rho \cdot \delta$ for all
$\delta>\delta_0$ for some constant $\delta_0$,
the edge expansion of the underlying graph
is at least $c\rho$, for some constant $c$.
This might seem to provide another classical contrast to our Theorem
\ref{thm:QECC}, in addition to our Claim \ref{cl:classical}.
However,
\cite{Din3} does not use bi-partite graph expansion but rather the
graph in which an edge connects any two nodes that participate in a common
constraint; the two notions of expansion are very different
and hence direct comparison to the \cite{Din3} result
is not possible.
\subsection{Discussion and Further directions}\label{sec:discussion}
Many open questions arise regarding $\qLTC$s.
Can we find other $\qLTC$s with much better parameters than
those mentioned in this article?
It is a natural starting point to check known quantum codes that have good
self-correcting properties, or high
energy barrier \cite{Haah,M}.
Do $\qLTC$s
exist with parameters which are as good as those of \cite{Din, BS},
namely, constant distance, constant query complexity, constant soundness for
all proximities larger than some constant $\delta_0>0$,
and rate which is inverse
polylogarithmic?
If not, can we prove appropriate upper bounds on $\qLTC$s?
The upper bounds we provided here
point to an inherently quantum phenomenon,
which constitutes an obstacle against local testability for
$\qLTC$s in the low-proximity range of parameters.
Both of our main theorems reflect, in fact, a deeper phenomenon called {\it monogamy of entanglement},
which was identified also in \cite{CLH} for commuting local Hamiltonians, and in \cite{BH} for $2$-local general Hamiltonians.
Essentially, this phenomenon limits the amount of entanglement that a single qudit with $O(1)$ quantum levels can ``handle''.
In quantum codes based on commuting check terms, the entanglement of code states arises
from the fact that the operators do not commute per qudit, but only
over sets of qudits.
Incidentally, per-qudit non-commutativity is also the phenomenon
responsible for the energy ``penalty'' incurred by certain (sparse) errors.
Hence, in cases where monogamy of entanglement is a significant factor, for example in small-set expander geometry,
we witness an inherent decline in the energy ``penalty'' of such errors, thus upper-bounding the quantum local testability.
It is thus the combination of {\it monogamy of entanglement} in small-set expanders and the poor local testability of non-expanders
that is responsible for the apparently quantum phenomenon.
Whether Theorem \ref{thm:sound}
hints at a more profound limitation on quantum
local testability, that holds also
for larger values of $\delta$,
calls for further research.
Perhaps refuting the $c^3$ open problem is doable
in the quantum case?
Finally, the link between quantum local testability of proofs and
$\qLTC$s, so crucial in the classical world \cite{Gold},
is far from clear in the quantum setting.
We have merely touched upon it (see the
result on quantum $\PCPP$s in the
appendix); however, much further clarification of this connection
is called for.
{~}
\noindent
\textbf{Organization of paper}
In Section \ref{sec:bg} we provide the necessary background
on quantum error correcting codes and on small-set expanders.
Section \ref{sec:qLTC} provides definitions of quantum locally
testable codes ($\qLTC$s) and stabilizer $\qLTC$s, and basic results.
Section \ref{sec:QECC} provides bounds on the soundness of
quantum $\LTC$s on small-set expanders, and
Section \ref{sec:sound} provides an absolute bound on soundness
of stabilizer $\LTC$s regardless of the expansion of their
underlying graph.
Finally, in the Appendices we
provide several proofs which are on the more technical side.
In Appendix \ref{app:PCPP} we provide
our definition of quantum $\PCPP$s and
the construction and proof of the induced $\qLTC$.
\section{Background}\label{sec:bg}
\subsection{The Pauli groups}
\begin{definition}
\textbf{Pauli Group}
\noindent
The group $\Pi^n$ is the $n$-fold tensor product of Pauli operators $A_1\otimes A_2 \otimes \hdots \otimes A_n$, where $A_i\in \left\{I,X,Y,Z\right\}$,
along with multiplicative factors $\pm 1, \pm i$ with matrix multiplication as group operation.
\end{definition}
The Pauli group can be generalized to particles of any dimensionality
$d$:
\begin{definition}\label{def:generalpauli}
\textbf{The Pauli group generalized to $F_d$ }
\noindent
Let $X^k_d:|i\rangle \mapsto |(i+k) \bmod d\rangle$ and
$P_d^{\ell}:|j\rangle \mapsto w_d^{j\ell}|j\rangle$
be the generalized bit- and phase-flip operators on the
$d$-dimensional Hilbert space, where $w_d=e^{2\pi i/d}$
is the primitive $d$-th root of unity.
Let $\Pi_d$ be the group generated by these operators and all roots of
unity of order $d$.
The group $\Pi_d^n$ is the $n$-fold tensor product of Pauli operators $A_1\otimes A_2 \otimes \hdots \otimes A_n$, where $A_i\in \left\{X_d^kP_d^\ell\right\}$
along with these multiplicative factors.
\end{definition}
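As a sanity check of Definition \ref{def:generalpauli}, one can track a single-qudit Pauli $X_d^aP_d^b$ by its exponent pair $(a,b)$ together with a phase exponent; this representation is our own illustrative choice, not notation from the text.

```python
# A single-qudit generalized Pauli X_d^a P_d^b is tracked by exponents (a, b);
# composition picks up powers of w_d via the relation P_d X_d = w_d X_d P_d.

def compose(p, q, d):
    """(X^a1 P^b1)(X^a2 P^b2) = w_d^(b1*a2) X^(a1+a2) P^(b1+b2).
    Returns the new exponent pair and the w_d phase exponent picked up."""
    (a1, b1), (a2, b2) = p, q
    return ((a1 + a2) % d, (b1 + b2) % d), (b1 * a2) % d

def commute(p, q, d):
    """Two such Paulis commute iff the symplectic form a1*b2 - b1*a2 = 0 mod d."""
    (a1, b1), (a2, b2) = p, q
    return (a1 * b2 - b1 * a2) % d == 0

d = 5
X, P = (1, 0), (0, 1)
assert compose(P, X, d)[1] == 1      # P X = w_d X P: a phase w_d appears
assert compose(X, P, d)[1] == 0      # X P picks up no phase
assert not commute(X, P, d)          # X_d and P_d never commute for d > 1
assert commute((1, 1), (2, 2), d)    # but XP commutes with (XP)^2
```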
The weight of a Pauli operator is defined to be the
number of locations where it is non-identity.
\subsection{General Quantum Error Correction}
\begin{definition}
\textbf{Quantum Code}\label{def:code}
\noindent A quantum code on $n$ qudits is given by
a set of ($m$) projections $\Pi_i$. The code is defined to be
the simultaneous $0$ eigenstates of all those projections.
\end{definition}
\begin{definition}
\textbf{Quantum Error detection 1}\label{def:det1}\cite{Knill}
\noindent
Let $C\subseteq {\cal H}$ be a quantum code on $n$ qudits.
Let $\Pi_C$ be the orthogonal projection onto $C$.
We say that the set of errors ${\cal E}$
is detectable by $C$ if for any $E\in {\cal E}$, we have:
\begin{equation}\label{eq:genqecc}
\Pi_C E \Pi_C = \gamma_{E} \Pi_C,
\end{equation}
where $\gamma_{E}$ is some constant which may depend on $E$.
\end{definition}
\begin{definition}
\textbf{Quantum Error detection 2}\label{def:det2}\cite{Knill}
\noindent
A set ${\cal E}$ is detectable by $C$, if
for any $\ket{\psi},\ket{\phi}\in C$ with
$\langle \psi | \phi \rangle=0$, and any $E\in {\cal E}$,
$\bra{\psi} E \ket{\phi}=0$.
\end{definition}
\begin{claim}\label{cl:defequiv}\cite{Knill}
Definitions (\ref{def:det2}) and (\ref{def:det1}) are equivalent.
\end{claim}
The proof can be found in the Appendix.
Definition (\ref{def:det2}) gives rise to the following
natural definition:
\begin{definition}\label{def:qeccdist}
\textbf{Distance of a code}\cite{Knill}
\noindent
Let $C$ be a quantum code detecting error set ${\cal E}\subset \Pi_d^n$.
$C$ has distance $dist(C)$ if for any two orthogonal code states $\ket{\phi},\ket{\psi}$, and any $E\in {\cal E}$ of weight at most $dist(C)-1$, we have $\langle \phi | E|\psi \rangle = 0$.
\end{definition}
\subsection{Stabilizer Quantum Error Correcting Codes}\label{sec:QECCdefs}
\begin{definition}\label{def:stab}
\textbf{Stabilizer Code}
\noindent
A stabilizer code $C$
is defined by an Abelian subgroup $A=A({\cal G})\subset \Pi_d^n$, generated by a set ${\cal G} \subset \Pi_d^n$.
The codespace is
defined as the mutual $1$-eigenspace of all elements in ${\cal G}$
(we require that $-I\notin A({\cal G})$ so that this codespace is
not empty).
An element $E \in \Pi_d^n$ is said to be an error if
it does not commute with at least one element of ${\cal G}$,
i.e. $E \notin \mathbf{Z}({\cal G})$, where
$\mathbf{\mathbf{Z}}({\cal G})$ is the centralizer of ${\cal G}$.
An element $E \in \Pi_d^n$ is said to be a logical operation, if
it commutes with all of ${\cal G}$, but is not generated by ${\cal G}$, i.e.,
$E \in \mathbf{Z}({\cal G})-A.$
A stabilizer code is said to be $k$-local if each term $g\in {\cal G}$
is an element of $\Pi_d^n$, with weight exactly $k$.
\end{definition}
To fit with the terminology of Definition \ref{def:code}, consider for each
generator $g$ the
projection $\Pi_g$ which projects onto the subspace orthogonal to the
$1$-eigenspace of $g$.
\begin{definition}
\textbf{Succinct representation}
\noindent
A $k$-local set of generators ${\cal G}$ is said to be {\it succinct}, if there does not exist
a different generating set
${\cal G}'$, such that $A({\cal G}) = A({\cal G}')$ and $wt(g)<k$ for some $g\in {\cal G}'$.
\end{definition}
\noindent
The following is a well-known fact \cite{Got}
which will be useful later on; we prove it in Appendix
\ref{sec:lemPauli}.
\begin{lemma}\label{lem:Pauli}
\textbf{Stabilizer Decomposition}
\noindent
Let $C$ be a stabilizer code on $n$ qudits, and consider the sets
$EC= \left\{E \ket{\phi} , \ket{\phi}\in C\right\}$
with $E\in \Pi_d^n$.
Then two sets $EC$, $E'C$ are either orthogonal or equal to each other,
and $\left\{EC\right\}_{E\in \Pi_d^n}$ span the entire Hilbert space.
Moreover, consider the partition of the entire Hilbert space to
sets of states which are mutual eigenvectors of all generators of $C$ with
exactly the same set of eigenvalues for each generator.
Then this partition is exactly the partition derived by the $EC$'s,
and two orthogonal $EC$'s have two lists of eigenvalues which differ
on at least one generator.
In particular, any $n$ qudit state $\ket{\psi}$ may be written as
a sum of orthogonal vectors $$ \ket{\psi} = \sum_i E_i \ket{\eta_i},$$
where $E_i\in \Pi_d^n$ and $\ket{\eta_i}\in C$.
\end{lemma}
\begin{definition}\label{def:weight}
\textbf{Weight of an error in stabilizer codes}
\noindent
Let $C$ be a stabilizer code on $n$ $d$-dimensional qudits, with
generating set ${\cal G} \subset \Pi_d^n$.
For $E\in \Pi_d^n$, we denote:
\begin{enumerate}
\item
The number of locations in which
$E$ is non-identity - by $wt(E)$.
\item
The weight of $E$ modulo the group
$A({\cal G})$ - by $wt_{\cal G}(E)$:
$wt_{\cal G}(E)=\min_{f\in A({\cal G})}\{wt(fE)\}$.
\item
The weight of $E$ modulo the centralizer
$\mathbf{Z}({\cal G})$ - by $wt_{\mathbf{Z}({\cal G})}(E)$:
$wt_{\mathbf{Z}({\cal G})}(E)=\min_{z\in \mathbf{Z}({\cal G})}\{wt(zE)\}$.
\end{enumerate}
\end{definition}
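For small instances, items 2 and 3 of Definition \ref{def:weight} can be evaluated by brute force. The sketch below does this for item 2 on qubits ($d=2$), encoding an $n$-qubit Pauli by $(x,z)$ exponent tuples; the toy generators $Z_1Z_2$ and $Z_2Z_3$ are our own illustrative assumption, not a code from the text.

```python
from itertools import product

# An n-qubit Pauli is encoded as (x, z) bit-tuples: position i carries
# X^x[i] Z^z[i] (phases are irrelevant for weight computations).

def weight(x, z):
    return sum(1 for xi, zi in zip(x, z) if xi or zi)

def mul(p, q):
    (x1, z1), (x2, z2) = p, q
    return (tuple((a + b) % 2 for a, b in zip(x1, x2)),
            tuple((a + b) % 2 for a, b in zip(z1, z2)))

def wt_mod_group(E, gens):
    """wt_G(E): min over f in the group generated by gens of wt(f*E)."""
    n = len(E[0])
    best = weight(*E)
    for powers in product((0, 1), repeat=len(gens)):
        f = ((0,) * n, (0,) * n)            # identity
        for g, pw in zip(gens, powers):
            if pw:
                f = mul(f, g)
        best = min(best, weight(*mul(f, E)))
    return best

gens = [((0, 0, 0), (1, 1, 0)),   # Z1 Z2
        ((0, 0, 0), (0, 1, 1))]   # Z2 Z3
# Z1 Z3 = (Z1 Z2)(Z2 Z3) has weight 2, but weight 0 modulo the group:
assert wt_mod_group(((0, 0, 0), (1, 0, 1)), gens) == 0
# X1 is not a product of the generators and keeps weight 1 modulo the group:
assert wt_mod_group(((1, 0, 0), (0, 0, 0)), gens) == 1
```

Replacing the minimization set by the centralizer gives item 3 in the same way; brute force is of course only feasible at toy sizes.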
\noindent
The above claims give rise to the following definition of distance in a stabilizer code:
\begin{definition}\label{def:stabdist}
\textbf{Distance of a stabilizer code}
\noindent
Let $C$ be a $k$-local stabilizer code on $n$ $d$-dimensional qudits,
with generating set ${\cal G}\subset \Pi_d^n$.
The distance of $C$ is defined as the minimal weight of any logical operation on $C$:
$$dist(C) = \min_{E\in \mathbf{Z}({\cal G})-A({\cal G})} wt(E).$$
\end{definition}
\begin{claim}\label{cl:distequiv} {\bf Equivalence of distance definitions}
A stabilizer code $C$ has $dist(C)\geq \rho$ by definition \ref{def:stabdist},
iff it has distance $\geq\rho$ by definition \ref{def:qeccdist}.
\end{claim}
The proof is given in Appendix \ref{sec:distequiv}.
A code $C$ on $n$ qudits is said to have a constant relative distance $\delta>0$, if its distance
is at least $\delta n$.
We will make use of the following assumption which we isolate so that
we can refer to it later on:
{~}
\noindent{\bf Remark:}
If there is a qudit $q$
such that
all states in the code
look like $|\alpha\rangle$ tensor with some state on the remaining qudits,
for some fixed one-qudit state $|\alpha\rangle$ of that qudit $q$,
we say that $q$ is {\it trivial} for the code.
We will assume in the remainder of the paper that for all codes we handle,
no qudits are trivial for the code,
since such qudits can be simply discarded.
\subsection{Interaction graphs and their expansion}
We assume in the rest of the paper that each qudit
participates in exactly $D_L$ constraints.
We define bi-partite expanders, similar to
\cite{Spi}, \cite{CRVW}, who used them to
construct locally-testable classical codes.
Note that we require expansion to hold only for sets of constant size $k$.
\begin{definition}\label{def:bipgraph}
\textbf{Bi-Partite Interaction Graph}
\noindent
Let $C$ be a quantum code on $n$ $d$-dimensional
qudits, whose check terms $\left\{\Pi_i\right\}_i$
are $k$-local.
We define the bi-partite interaction graph of $C$, $G=G(C) = (L,R;E)$, as follows:
the nodes $L$ correspond to the qudits, the nodes $R$ correspond to the check terms,
and the set of edges connect each constraint $\Pi_i\in R$
to all the qudits in $L$ on which it acts non-trivially.
We note that $G$ is left $D_L$-regular, and right $k$-regular.
\end{definition}
\begin{definition}\label{def:expbi}
\textbf{Bi-partite expansion}
\noindent
Let $G=(L,R;E)$ be a bi-partite graph, that is left $D_L$-regular,
right $k$-regular.
A subset of qudits $S\subseteq L$ is said to be $\eps$-expanding if $|\Gamma(S)| \geq |S| D_L (1-\eps)$, where $\Gamma(S)$ is the
set of neighbors of $S$ in this graph; $\eps$ is called the expansion error
for this set.
$G$ is said to be $\eps$-small-set-expanding if every subset
$S\subseteq L$ with $|S|\leq k$ has expansion error at most $\eps$.
\end{definition}
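The expansion error of Definition \ref{def:expbi} is straightforward to check mechanically; a minimal sketch, where the toy adjacency structure is our own illustration:

```python
def expansion_error(S, neighbors, D_L):
    """Expansion error eps of a qudit set S, i.e. the eps with
    |Gamma(S)| = |S| * D_L * (1 - eps); neighbors[q] is the set of
    constraints touching qudit q (each qudit has degree exactly D_L)."""
    gamma = set()
    for q in S:
        gamma |= neighbors[q]
    return 1.0 - len(gamma) / (len(S) * D_L)

# A left-2-regular toy graph: 4 qudits and 4 constraints arranged in a cycle.
neighbors = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3, 0}}
assert expansion_error({0}, neighbors, 2) == 0.0      # singletons expand perfectly
assert expansion_error({0, 1}, neighbors, 2) == 0.25  # constraint 1 is shared
```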
We state two technical facts on good bi-partite expanders that will be useful later on.
The proofs are in the appendix (\ref{sec:bipartite}).
\begin{fact} \label{fact:essence}
Consider $S\subseteq L$ in a bi-partite graph $G=(L,R;E)$,
and let $S$ be $\eps$-expanding, for $\eps<\frac{1}{2}$.
Then a fraction at most $2\eps$ of all
vertices of $\Gamma(S)$ have degree strictly larger than $1$ in $S$.
\end{fact}
\begin{fact}\label{fact:deg}
Let $S\subseteq L$ be a set in a
bi-partite graph $G = (L,R;E)$ which is $\eps$-expanding, for $\eps<\frac{1}{2}$.
Then there exists a vertex $q\in S$,
such that the fraction of neighbors of $q$ with at least two neighbors in $S$
is at most $2\eps$.
\end{fact}
\subsection{Notation}
We use the following notation.
$d$ is the dimension of the qudits involved.
For a bi-partite graph we denote $G=(L,R;E)$, $L$ denotes the
left set of vertices of size $|L|=n$ (corresponding to qudits),
$R$ denotes the right vertices $|R| = m$ (corresponding to constraints),
and $E$ is the set of edges between $L$ and $R$.
$D_L$ will denote the left degree of a bi-partite graph.
$k$ will denote the locality of the
constraints, namely the right degree of the graph.
Given $S\subseteq R$ (or $L$) in a bi-partite graph,
$\Gamma(S)$ denotes the neighbor set of $S$ in $L$ (or $R$).
${\cal N}(q)$
will denote the qudit-neighborhood
of a qudit $q\in L$, namely
all the qudits participating in all the constraints acting on $q$
(so ${\cal N}(q) = \Gamma^{(2)}(q)$).
We will use
$\eps$
to denote the expansion error for bi-partite
graphs (as in Definition \ref{def:expbi}).
We will use $\delta$ (and sometimes $\mu$)
to denote the proximity, namely, the
relative distance of a word from a code.
\section{Locally-testable quantum codes}\label{sec:qLTC}
In this section we define locally testable quantum codes,
both in the general case, and in the specific case of stabilizer codes.
We then show that our definitions coincide for stabilizer codes.
\subsection{Local testability of general quantum codes}
We first generalize definition (\ref{def:qeccdist}),
from a definition of distance
{\it of a code} to a definition of distance {\it from a code}:
\begin{definition}\label{def:distcode}
\textbf{Distance from a quantum code}
\noindent
Let $C$ be a quantum code detecting error set ${\cal E}\subset \Pi_d^n$.
For any two orthogonal states $\ket{\phi},\ket{\psi}\in {\cal H}$,
we define the Hamming distance
between them
$dist_C(\ket{\phi},\ket{\psi})$
as the maximal integer $\rho$,
such that for any $E\in {\cal E}$,
with $wt(E)\leq \rho-1$,
we have $\bra{\psi} E \ket{\phi}=0$.
Similarly, given a state $\ket{\phi}$ orthogonal to $C$,
we say that the distance of
$\ket{\phi}$ from $C$ denoted by $dist(\ket{\phi},C)$ is the minimum
over all $\ket{\psi}\in C$ of $dist_C(\ket{\phi},\ket{\psi})$.
\end{definition}
We note here that the distance of a state {\it from the code} in the above can be much larger than the distance {\it of the code}.
This is akin to the classical case, where locally-testable codes are required
to identify words far from the code, even if they cannot
be (uniquely)
decoded, so that these codes can be used as proof systems.
\noindent
\begin{definition}
\textbf{Quantum Locally Testable Codes ($\qLTC$)}\label{def:qLTC}
\noindent
Let $R = R(\delta)$ be some function $R(\delta): [0,1] \mapsto [0,1]$;
this is called the soundness function.
Let $C$ be a quantum code
on $n$ $d$-dimensional qudits, defined as the groundspace of
$H=\sum_{i=1}^m\Pi_C^i$, where
$\Pi_C^i$ are $m$ $k$-local projections for some constant $k$.
We say that $C$
is {\it quantum locally testable} with soundness $R(\delta)$,
if:
$$
\forall \delta>0, \ket{\Psi}: ~~
\mbox{ }
dist(\ket{\Psi},C) \geq \delta n
\;\Rightarrow\;
\frac{1}{m}\langle
\Psi |H|\Psi
\rangle
\geq
R(\delta).
$$
The query complexity of the code is defined to be $k$.
\end{definition}
\subsection{Local testability of quantum stabilizer codes}
We now show that local testability defined above (Definition \ref{def:qLTC})
has a natural interpretation in the context of stabilizer codes.
\begin{definition}\label{def:sound}
\textbf{Local Testability for Stabilizer Codes ($\sLTC$)}\label{def:sLTC}
\noindent
Let $R(\delta)$ be some function $R(\delta): [0,1] \mapsto [0,1]$.
We say that a stabilizer code $C$ on $n$ $d$-dimensional qudits is an
$\sLTC$ with query complexity $k$ and soundness $R(\delta)$,
if there exists a generating set ${\cal G}$ for $C$,
where each element has support
$k$, such that the following holds:
for any $E\in \Pi_d^n$ with $wt_{\mathbf{Z}({\cal G})}(E) \geq \delta n$,
a uniformly random generator $g\in {\cal G}$ does not commute with $E$ with probability at least $R(\delta)$.
\end{definition}
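For Pauli errors, the commutation test in Definition \ref{def:sLTC} amounts to evaluating a symplectic form on exponent vectors. The following sketch uses an $(x,z)$ exponent encoding of Pauli strings and toy generators of our own choosing, and counts the fraction of violated generators:

```python
def violates(E, g, d):
    """n-qudit Paulis given as (x, z) exponent tuples mod d fail to commute
    iff the symplectic form sum_i (x_E[i]*z_g[i] - z_E[i]*x_g[i]) != 0 mod d."""
    (xE, zE), (xg, zg) = E, g
    s = sum(x1 * z2 - z1 * x2 for x1, z1, x2, z2 in zip(xE, zE, xg, zg))
    return s % d != 0

def violated_fraction(E, gens, d):
    """Fraction of generators a uniformly random g fails to commute with."""
    return sum(violates(E, g, d) for g in gens) / len(gens)

gens = [((0, 0, 0), (1, 1, 0)),   # Z1 Z2 on three qubits (d = 2)
        ((0, 0, 0), (0, 1, 1))]   # Z2 Z3
assert violated_fraction(((0, 1, 0), (0, 0, 0)), gens, 2) == 1.0  # X2 hits both
assert violated_fraction(((1, 0, 0), (0, 0, 0)), gens, 2) == 0.5  # X1 hits Z1Z2 only
```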
\subsubsection{Equivalence of definitions of locally testable codes}
We now show that the definition of stabilizer locally testable codes
(Definition \ref{def:sLTC}) is in fact a special case of the general quantum locally testable codes (Definition \ref{def:qLTC}).
\begin{claim}\label{cl:sLTCqLTC}
\noindent
\begin{enumerate}
\item
If $C$ is a Stabilizer code with generating set ${\cal G}$,
which is an $\sLTC$
with query complexity $k$, and soundness $R(\delta)$,
then the set of projections $\left\{\Pi_g\right\}_{g\in {\cal G}}$, where $I-\Pi_g$ is the projection on the $1$-eigenspace
of $g$, defines a $\qLTC$ with query complexity $k$, and soundness $R(\delta)$.
\item
If $C$ is a $\qLTC$
with query complexity $k$, and soundness $R(\delta)$,
defined by a set of projections
$\left\{\Pi_g\right\}_{g\in {\cal G}}$,
such that the set $\left\{I-\Pi_g\right\}_{g\in {\cal G}}$ spans an Abelian subgroup of $\Pi_d^n$,
then $C$ is also an $\sLTC$
with query complexity $k$, and soundness $R(\delta)$.
\end{enumerate}
\end{claim}
\begin{proof}
\noindent
\textbf{$\sLTC \mapsto \qLTC$}
\noindent
By definition of a stabilizer code, for any $\ket{\phi}\in C$, we have $g\ket{\phi}=\ket{\phi}$ for all $g\in {\cal G}$,
so $\Pi_g \ket{\phi}=0$ for all $g\in {\cal G}$.
Next, consider a state $\ket{\phi}$ orthogonal to $C$, such that $dist(\ket{\phi},C)\geq \delta n$.
We would now like to show that a projection chosen randomly
from $\left\{\Pi_g\right\}_{g\in G}$
is violated by $\ket{\phi}$ with
probability at least
$R(\delta)$.
Consider the following orthogonal decomposition of $\ket{\phi}$, as
implied by Lemma (\ref{lem:Pauli}):
\begin{equation}\label{eq:partition}
\ket{\phi} =
\sum_i \alpha_i \ket{\alpha_i} = \sum_i \alpha_i E_i \ket{\eta_i},
\end{equation}
where $E_i\in \Pi_d^n$, $\ket{\eta_i}\in C$, and
$E_i \ket{\eta_i}$ are orthogonal.
We claim that for each $i$, $wt_{\mathbf{Z}({\cal G})}(E_i) \geq \delta n$:
otherwise, it is easy to see that
there exists some $E'\in \Pi_d^n$ with $wt(E')<\delta n$, such that for
at least one $i$,
we have $E' E_{i} \in \mathbf{Z}({\cal G})$.
Since $JC=C$ for any $J\in\mathbf{Z}({\cal G})$, this means
that $E' \ket{\alpha_i} \in C$.
Since $E'$ is unitary, and the $\ket{\alpha_i}$'s are orthogonal, the $E' \ket{\alpha_i}$'s are orthogonal as well, thus $E' \ket{\phi}$ has a non-zero projection on $C$,
contradicting the assumption that $dist(\ket{\phi},C)\geq \delta n$.
If $E_i$ and $g\in {\cal G}$ do not commute, then
$E_i g = \omega g E_i$ for some $\omega \neq 1$.
In particular, $E_i\ket{\eta_i}$
is an $\omega$-eigenstate of $g$. This means it is orthogonal to
the $1$-eigenspace of $g$, and therefore:
$$
\bra{\alpha_i} \Pi_g \ket{\alpha_i} = 1.
$$
Now, by the $\sLTC$ property of $C$, for each $i$, $E_i$ does not commute with a fraction at least $R(\delta)$ of the generators of ${\cal G}$.
Thus, a randomly chosen check term is violated by $\ket{\alpha_i}$ with probability at least $R(\delta)$,
so
$$
\frac{1}{|{\cal G}|}
\sum_{g\in {\cal G}} \langle \alpha_i | \Pi_g | \alpha_i \rangle
\geq
R(\delta).
$$
Since by lemma (\ref{lem:Pauli}) the decomposition above coincides with the simultaneous eigenbasis of ${\cal G}$,
we have:
$$
\frac{1}{|{\cal G}|}
\langle
\phi |
\sum_{g\in {\cal G}} \Pi_g |
\phi
\rangle
=
\frac{1}{|{\cal G}|}
\sum_i \sum_{g\in {\cal G}} |\alpha_i|^2 \langle \alpha_i | \Pi_g | \alpha_i \rangle
\geq
R(\delta).
$$
\noindent
\textbf{$\qLTC \mapsto \sLTC$}
\noindent
First, by definition, the set of states that are in the mutual
groundspace of the $\Pi_g$'s, are stabilized (i.e. eigenvalue $1$) w.r.t.
the terms ${\cal G}$, and vice versa.
Now, let $E\in \Pi_d^n$, whose weight modulo $\mathbf{Z}({\cal G})$ is
at least $\delta n$.
Let $\ket{\phi}\in C$ be any code state, and denote $\ket{\psi} = E \ket{\phi}$.
We claim that $dist(\ket{\psi},C)\geq \delta n$.
Otherwise there exists $E'\in \Pi_d^n$ with $wt(E')<\delta n$,
such that $E' \ket{\psi}$ has a non-zero projection on $C$,
hence $E' E \ket{\phi}$ has a nonzero projection on $C$,
so by lemma (\ref{lem:Pauli}), we have that $E'E C = C$.
Therefore, $E' E$ commutes with all ${\cal G}$,
and hence $E' E\in \mathbf{Z}({\cal G})$, which implies that
$wt_{\mathbf{Z}({\cal G})}(E) < \delta n$, in contradiction.
By the $\qLTC$ property of $C$, we have
\begin{equation}\label{eq:gpenalty}
\langle
\psi |
\sum_{g\in {\cal G}} \Pi_g |
\psi
\rangle
\geq
|{\cal G}| \cdot R(\delta).
\end{equation}
Since $\ket{\psi} = E \ket{\phi}$, for any generator $g$ we have
$g \ket{\psi} = g E \ket{\phi} = \omega E g \ket{\phi} = \omega E \ket{\phi}$, for some $\omega\in \mathbf{C}$.
So for any $g\in {\cal G}$, $\ket{\psi}$ is some eigenstate of $g$.
Hence $\ket{\psi}$ is either
in the $1$-eigenspace of $\Pi_g$
or in its $0$-eigenspace, so by equation (\ref{eq:gpenalty}) it violates
a fraction at least $R(\delta)$ of all generators ${\cal G}$.
\end{proof}
\section{Bound on the soundness of stabilizer $\LTC$s on small-set
expanders}\label{sec:QECC}
In this section we prove Theorem \ref{thm:QECC}.
We define the {\it relative soundness} formally:
\begin{definition}\label{def:smallr}{\bf Relative Soundness}
Define
$$
r(\delta) : [0,1] \mapsto [0,1],
$$
as follows:
$r(\delta)=R(\delta)/\Theta(\delta)$, where
$\Theta(\delta)\equiv \min\{\delta k,1\}$.
\end{definition}
We note that in all the following we will be interested
in $\delta<1/k$, and in this range $r(\delta)=R(\delta)/k\delta$.
\subsection{A useful fact about restrictions of stabilizers}
\begin{definition}
\textbf{Restriction of stabilizers}
\noindent
For $E\in \Pi_d^n$, let $E|_q$ denote the $q$-th component of the tensor product $E$, and let $E|_{-q}$ denote the tensor product of all components except the $q$-th.
Similarly, for a generating set ${\cal G}$, we denote by ${\cal G}|_q$ the set $\left\{g|_q : g\in {\cal G} \right\}$, and similarly for ${\cal G}|_{-q}$.
\end{definition}
\noindent
We now prove a useful fact: that the restrictions to a given qudit
$q$ of all the generators of a stabilizer code with absolute
distance strictly larger than
$1$ cannot all commute.
\begin{fact}\label{fact:nocommute}
Let $C$ be a stabilizer code
with absolute minimal distance strictly larger than $1$.
Then for any qudit $q$ and any generator $g$ acting on $q$, there exists
another
generator $h$
acting on $q$ such that $[g|_q, h|_q] \neq 0$.
\end{fact}
\begin{proof}
Assume, towards a contradiction, that there is a qudit $q$
and a generator $g$ such that for all other generators $h$ we have $[g|_q,h|_q]=0$.
Let $Q=g|_q$.
We have that $Q'=Q\otimes I_{-q}$, namely the tensor product of $Q$ with the identity
on the other qudits, commutes with all $g\in {\cal G}$, and thus
$Q'\in \mathbf{Z}({\cal G})$.
However, $Q'$ cannot be inside $A({\cal G})$: otherwise
$q$ would be in
some fixed state $\ket{\alpha}$ (a $1$-eigenvector of $Q$)
for all code states, and thus $q$ would be trivial for the
code (see the remark at the end of Subsection \ref{sec:QECCdefs}).
Hence, $Q'\in \mathbf{Z}({\cal G})-A({\cal G})$, so by Definition (\ref{def:stabdist}) the distance of the code is $1$, in contradiction to our assumption.
\end{proof}
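To make the single-qudit non-commutativity used above concrete, here is a small numerical sketch (an illustration of ours, not part of the proof), using the generalized Pauli ``shift'' and ``clock'' matrices which, up to phases, generate $\Pi_d$ on one qudit; the dimension $d=3$ is an arbitrary choice.

```python
import numpy as np

d = 3                                  # qudit dimension (arbitrary illustrative choice)
omega = np.exp(2j * np.pi / d)

# Generalized Paulis on one qudit: X|j> = |j+1 mod d>,  Z|j> = omega^j |j>.
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))

# X and Z do not commute; they satisfy X Z = omega^{-1} Z X.
assert not np.allclose(X @ Z, Z @ X)
assert np.allclose(X @ Z, omega ** (-1) * (Z @ X))
```

Any two elements of $\Pi_d$ commute only up to such a phase, which is exactly what Fact (\ref{fact:nocommute}) exploits.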
\subsection{Proof of Theorem \ref{thm:QECC}}
In the proof we will make use of ``sparse'' sets of constraints,
defined as follows.
\begin{definition}
\textbf{$1$-independent set of constraints}\label{def:Lind}
\noindent
For a given constraint $u$, consider $\Gamma^3(u)$, the set of qudits
acted upon by constraints which act on qudits in $u$.
A set of constraints $U$ is said to be $1$-independent if for any
two constraints $u,w \in U$,
$\Gamma^3(u)\cap\Gamma^3(w)=\Phi$.
\end{definition}
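A $1$-independent set of linear size can be extracted greedily, as invoked in the proof below. The following sketch is our own illustration (the incidence structure, function names, and the ring example are not the paper's notation):

```python
def gamma3(constraints, q2c, u):
    """Qudits acted on by any constraint that touches a qudit of u."""
    quds = set()
    for q in constraints[u]:
        for c in q2c[q]:
            quds |= constraints[c]
    return quds

def greedy_1_independent(constraints):
    """Greedily pick constraints with pairwise-disjoint Gamma^3 neighborhoods."""
    q2c = {}                                   # qudit -> constraints acting on it
    for c, quds in constraints.items():
        for q in quds:
            q2c.setdefault(q, set()).add(c)
    chosen, alive = [], set(constraints)
    while alive:
        u = min(alive)                         # pick any surviving constraint
        chosen.append(u)
        blocked = gamma3(constraints, q2c, u)
        alive = {c for c in alive
                 if gamma3(constraints, q2c, c).isdisjoint(blocked)}
    return chosen

# 2-local constraints on a ring of 10 qudits: constraint i acts on {i, i+1}.
ring = {i: {i, (i + 1) % 10} for i in range(10)}
ind = greedy_1_independent(ring)               # picks constraints 0 and 4
```

Each picked constraint eliminates only constantly many (in $k$ and $D_L$) other constraints, which is why a $1$-independent set of size linear in $n$ exists for small enough $\delta$.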
\begin{proof}(Of theorem \ref{thm:QECC})
\paragraph{Generating the error}
We want to construct an error $E \in \Pi_d^n$, $wt_{\mathbf{Z}({\cal G})}(E)\geq \delta n$, that will not violate too many constraints in ${\cal G}$.
Let $C$ be a stabilizer code
with a $k$-local generating set ${\cal G}$, such that the
bi-partite interaction graph of $C$
is an $\eps$ small-set bi-partite expander.
Let $U$ be a $1$-independent set of constraints of size $\delta n$.
We note that since $\delta \leq \frac{1}{k^3 D_L}$,
a $1$-independent set of this size must exist, by a simple greedy algorithm.
For a given constraint
$u\in U$, and $i\in [k]$, let $\alpha_i(u)$
denote the number of
generators $g\in {\cal G}$
that act on the $i$-th qudit of $u$ and intersect $u$ in at least one other
qudit.
Then for each $u\in U$ we define $q(u)$ to
be a qudit of minimal $\alpha_i(u)$ over all $i\in [k]$.
Let $T = \left\{q(u) | u\in U\right\}$.
Let us define an error pattern:
$$ E = \bigotimes_{u\in U} u|_{q(u)}.$$
We first note that $E \notin \mathbf{Z}({\cal G})$;
this is true by Fact (\ref{fact:nocommute}):
for each qudit $q$ in the support of $E$,
$E|_q$ does not commute with $h|_q$ for some $h\in {\cal G}$.
But since $T$ is induced by a $1$-independent set, $h$
does not touch any other qudit in the support of $E$ except $q$,
so this implies $[h,E]=[h|_q,E|_q]\neq 0$.
We will now show that $E$ has large weight modulo $\mathbf{Z}({\cal G})$,
but is penalized by a relatively small fraction of ${\cal G}$.
\paragraph{Weight Analysis}
By definition, we have that $wt(E) = |T|=|U|=\delta n$.
We claim that:
\begin{equation}\label{eq:tweight}
wt_{\mathbf{Z}({\cal G})}(E)= |T|
\end{equation}
Since $\delta$ was chosen to be smaller than half the distance of
the code $C$,
$wt_{\mathbf{Z}({\cal G})}(E)= wt_{\cal G}(E)$
and so it suffices to lower-bound $wt_{{\cal G}}(E)$.
Suppose for contradiction that $wt_{\cal G}(E)< |T|$.
Then there exists $\Delta\in A({\cal G})$, such that $E' = \Delta E$ has $wt(E')<|T|$.
Since the weight of $E'$ is strictly smaller than
that of $E$, there must be one qudit $q_0$ in $T$, s.t.
on the neighborhood ${\cal N}(q_0)$ the weight of $E'$ is strictly
smaller than that of $E$, which is $1$;
namely, $E'$ must be equal to the identity on all the qudits in the qudit-neighborhood of $q_0$.
Here, we have used the fact that the qudit-neighborhoods of different qudits
in $T$
are non-intersecting. This is true
by the fact that the qudits were chosen by picking one qudit from each
constraint out of
a $1$-independent set of constraints
(definition \ref{def:Lind}).
This means that $\Delta$ must be equal to the inverse of $E$
on this neighborhood. But this inverse is exactly the following:
it is equal to $E|_{q_0}^{-1}$ on $q_0$, and to the identity on all other
qudits in the neighborhood.
By construction, $E|_{q_0}$ (and therefore also
$E^{-1}|_{q_0}=\Delta|_{q_0}$) does not commute with $h|_{q_0}$, for some $h\in {\cal G}$.
Since $\Delta$ is identity on all qudits of $h$ other than $q_0$, this implies that
$\Delta$ does not commute with $h$, in contradiction
to the fact that $\Delta \in A({\cal G})$.
\paragraph{Soundness Analysis}
We upper-bound the number of generators that do not commute with $E$.
For each $u\in U$, the number of generators $g\in {\cal G}$
that do not commute with $E|_{q(u)}$ is at most
the number of generators that share at least two qudits with $u$.
By fact (\ref{fact:deg}) there exists a qudit
$q\in \Gamma(u)$ such that the fraction of its check terms
with at least two qudits in $\Gamma(u)$ is at most $2\eps$;
since we chose $q(u)$ to be the qudit that minimizes that fraction
over all qudits on which $u$ acts,
we have that for $q(u)$, the fraction of terms acting on it
that intersect $u$ in at least two qudits is at most $2\eps$.
Thus, the absolute number of generators acting on $q(u)$
that intersect $u$ in at least two qudits is at most $2\eps D_L$.
Hence the overall number of
generators violated by $E$ is at most $2 \eps |T| D_L$.
By Equation \ref{eq:tweight} this is equal to
$2\eps D_L wt_{\mathbf{Z}({\cal G})}(E)$.
Using $D_Ln=mk$,
we have $R(\delta)\le 2\eps k \delta$ and so $r(\delta) \leq 2 \eps$.
\end{proof}
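As a numerical sanity check of the closing arithmetic (the parameter values below are our own illustrative choices, constrained only by $D_L n = mk$):

```python
n, k, D_L, eps, delta = 1000, 5, 10, 0.1, 0.01   # illustrative values; D_L*n == m*k
m = n * D_L // k                                  # number of generators

violated = 2 * eps * D_L * (delta * n)            # bound on violated generators
R = violated / m                                  # fraction of violated generators
assert abs(R - 2 * eps * k * delta) < 1e-12       # R(delta) <= 2*eps*k*delta
r = R / (k * delta)                               # relative soundness (delta < 1/k)
assert abs(r - 2 * eps) < 1e-12                   # hence r(delta) <= 2*eps
```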
We now show that a slightly stronger version of the above theorem holds.
This version will be used for showing Theorem (\ref{thm:sound}).
\begin{claim} \label{cl:QECC}
Let $C$ be a good
stabilizer code, with a $k$-local succinct generating set, where each qudit is examined
by $D_L$ constraints.
If there exists
a $1$-independent set of constraints $U\subseteq R$,
s.t. $|U| = \delta n$ for some $0<\delta<1/k$,
and $\Gamma(U)$,
the set of qudits that the constraints in $U$ act on, satisfies
$|\Gamma(\Gamma(U))|\geq |\Gamma(U)| D_L (1-\eps)$,
then for any $\delta'\leq \delta$ we have that
$r(\delta')\le 2\epsilon$.
\end{claim}
\begin{proof}
For a set $S\subseteq L$, let $\Gamma_1(S)$ denote the set of neighbors of $S$ having a single neighbor in $S$, and let
$\Gamma_{\geq 2}(S) \equiv \Gamma(S) - \Gamma_1(S)$.
Put $S = \Gamma(U)$, and let $S = \bigsqcup_{i=1}^k S_i$ denote a partition of $S$ into $k$
disjoint sets, where each $S_i$ takes a single (arbitrary) qudit from each $\Gamma(u)$, $u\in U$.
By assumption, $|\Gamma(S)| \geq |S| D_L (1-\eps)$, whereas the total degree of $S$ is $|S| D_L$.
Hence, $|\Gamma_{\geq 2}(S)| \leq |S| D_L \eps$, so $|\Gamma_1(S)| \geq |S| D_L (1-2\eps)$.
Since each unique neighbor of $S$ examines exactly one part $S_j$,
there exists a part $S_0$ examined by at least $|S_0| D_L (1-2\eps) = \delta n D_L (1-2\eps)$ constraints from $\Gamma_1(S)$.
Now, given any $\delta'\leq \delta$,
let $S_0'$ be a subset of $S_0$ of size $\delta' n$, maximizing the ratio $|\Gamma_1(S')|/ |S'|$ over all sets $S'\subseteq S_0$ of this size.
Since each element of $\Gamma_1(S)$ examines just one element of $S$, such a set exists, with ratio at least $D_L (1-2\eps)$.
For the tensor-product error $E$ defined by taking, for each $u\in U$ whose qudit in $S_0$ belongs to $S_0'$, the restriction of $u$ to that qudit,
we have by equation (\ref{eq:tweight}) that $wt_{\mathbf{Z}({\cal G})}(E) = \delta' n$,
whereas the maximal penalty is at most $2 \eps D_L \delta' n$.
Since $\delta'\leq \delta<1/k$ it follows that $r(\delta')\leq 2\eps$.
\end{proof}
\section{An upper-bound on soundness}\label{sec:sound}
We now show an absolute constant strictly less than $1$, upper-bounding the
relative soundness of any
good quantum stabilizer code spanned by $k$-local generators, whose qudits
are acted upon by $D_L$ stabilizers each.
We start with an easy alphabet-based upper bound.
\subsection{Alphabet-based bound on soundness}\label{sec:alphabet}
In attempting to understand soundness of good
stabilizer codes, one must first account for
limitations on the soundness that seem almost trivial,
and occur even when there is just a single error.
\begin{definition}\label{def:single}
{\bf Single error soundness}
\noindent
Let $t(d) = 1/(d^2-1)$; the single-error
relative soundness in dimension $d$
is defined to be $\alpha(d)=1-t(d)$.
\end{definition}
The motivation for the above definition is as follows.
For any qudit $q$, there always exists $Q\in \Pi_d$, $Q\neq I$,
such that a fraction at least $t(d)$ of the generators touching
$q$ are equal to $Q$ when restricted to $q$.
If we consider a single-qudit error on $q$ to be equal to
$Q$, then it commutes with a fraction at least $t(d)$ of the generators acting on
$q$; thus it can violate at most
a fraction $\alpha(d)$ of the constraints acting on $q$.
Hence, one can expect that, using qudits whose neighboring constraints are far from each other,
it is possible to construct an error of linear weight
whose relative soundness $r(\delta)$
is bounded by the single-error relative soundness.
Indeed, we show:
\begin{fact}\label{fact:alphabet}
\textbf{Alphabet bound on soundness}
\noindent
For any good stabilizer code $C$ on $n$ $d$-dimensional qudits,
with a $k$-local succinct generating set ${\cal G}$, whose left-degree is $D_L$,
we have $r(\delta) \leq \alpha(d)$, for any $\delta \leq 1/(k^3 D_L)$.
\end{fact}
\begin{proof}
Similarly to Theorem (\ref{thm:QECC}), given the parameters assumed
in the statement of the fact, there exists a
$1$-independent set of constraints $U$ of size $\delta n$.
For each constraint $u\in U$ we select arbitrarily
some qudit $q=q(u)\in \Gamma(u)$ and examine
the restrictions to $q$ of all stabilizers acting non-trivially on $q$.
Let $P(q)$ denote the set of all such restrictions.
Let $MAJ(q)$ denote the element of $\Pi_d$ that appears
a maximal number of times in $P(q)$.
We then set $E = \bigotimes_{u\in U} MAJ(q(u))$.
We first realize that $E$ is an error:
we want to show that there exists a generator $g$ such that
$E$ and $g$ do not commute.
Otherwise,
$E$ commutes with all generators;
since by construction each generator intersects the support of $E$ in at most one qudit,
this means that the restrictions also commute: $[E|_q,g|_q]=0$
for every qudit $q=q(u)$ acted upon by $E$. This
contradicts Fact (\ref{fact:nocommute}); hence, there must be a
generator which does not commute with $E$, so $E$ is indeed an error.
Similarly to the proof of Equation (\ref{eq:tweight}) in the proof of
Theorem (\ref{thm:QECC}), we also have
$wt_{\mathbf{Z}({\cal G})}(E) = \delta n$.
Furthermore, for each qudit $q$, the fraction of generators on $q$ whose restriction to $q$ does not commute with $E|_q$ is at most $\alpha(d)$, since the fraction
of appearances of $E|_q = MAJ(q)$ in $P(q)$ is at least $t(d) = 1-\alpha(d)$.
Hence the number of violated constraints is at most $\alpha(d) \cdot |U|
\cdot D_L = \alpha(d)\delta n D_L$.
Since $\delta < 1/k$ it follows that $r(\delta) \leq \alpha(d)$.
\end{proof}
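The pigeonhole step behind the alphabet bound — the most frequent restriction appears with frequency at least $t(d)=1/(d^2-1)$ — can be checked exhaustively on small cases (the label encoding and multiset size are our own illustration):

```python
from collections import Counter
import itertools

d = 3
labels = [(a, b) for a in range(d) for b in range(d) if (a, b) != (0, 0)]
assert len(labels) == d * d - 1        # the d^2 - 1 non-identity elements of Pi_d

# For every multiset P(q) of 4 non-identity restrictions on a qudit q,
# the majority element MAJ(q) covers a fraction >= t(d) = 1/(d^2 - 1).
for P_q in itertools.product(labels, repeat=4):
    maj = Counter(P_q).most_common(1)[0][1]
    assert maj / len(P_q) >= 1.0 / (d * d - 1)
```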
We note that classically there is no direct analogue of the
non-commutativity requirement for achieving constraint violation; thus no analogue of
$\alpha(d)$ exists.
\subsection{Separation from alphabet-based soundness}
In this section we show that the alphabet-based
bound on the relative soundness
in fact cannot be achieved, and the relative soundness is further bounded
by a constant factor strictly less than $1$, which is due
to what seems to be an inherently quantum phenomenon.
We will use the geometry of the underlying interaction graph
to achieve this separation, by treating
differently expanding instances and non-expanding instances.
Before stating the main theorem of this section,
we require a generalization of definition \ref{def:Lind}
and a simple fact.
\begin{definition}
\textbf{$t$-independent set of constraints}\label{def:kind}
\noindent
Let $C$ be a quantum code with a set of $k$-local constraints, whose
underlying bi-partite graph is $G(C)=(L,R;E)$.
A set of constraints $U\subseteq R$ is said to
be $t$-independent
if for any $u,v\in U$ we have $\Gamma^{(2t+1)}(u) \cap \Gamma^{(2t+1)}(v) = \Phi$.
\end{definition}
The following fact can be easily derived by a greedy algorithm:
\begin{fact}\label{fact:indset}
Let $\eta = \eta(k,D_L) = k^{-(2k+1)} D_L^{-(2k-1)}$.
For any
quantum code $C$ whose bi-partite graph $G(C)$ is left
$D_L$-regular, and right $k$-regular, there exists a $k$-independent
set of size at least $\eta n$.
\end{fact}
\begin{proof}
Pick a constraint $u$, remove all constraints in
$\Gamma^{(4k)}(u)$, and repeat.
The number of constraints removed for each picked constraint is at most
$(kD_L)^{2k}$. Hence, we can proceed for at least $m/(kD_L)^{2k}$ steps.
We get that the fraction of constraints picked is at
least $k^{-(2k)}D_L^{-(2k)}$,
and since $mk = n D_L$, we get the desired result.
\end{proof}
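The bookkeeping at the end of this proof can be verified with exact rational arithmetic (the concrete values of $k$, $D_L$, $n$ below are illustrative choices of ours):

```python
from fractions import Fraction

k, D_L = 3, 4
n = 3 * 10**6                          # chosen so that m = n*D_L/k is an integer
m = n * D_L // k

picked = Fraction(m, (k * D_L) ** (2 * k))                # greedy steps: m/(k D_L)^{2k}
eta = Fraction(1, k ** (2 * k + 1) * D_L ** (2 * k - 1))  # eta(k, D_L) from the fact
assert picked == eta * n               # i.e. a k-independent set of size eta * n
```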
\noindent
{\bf Theorem (\ref{thm:sound})}
{\it Let $C$ be a stabilizer code on $n$ $d$-dimensional qudits, of minimal distance at least $k$, and a $k$-local ($k\geq 4$)
succinct generating set ${\cal G}\subset \Pi_d^n$, where the left
degree of the interaction graph of ${\cal G}$ is $D_L$.
Then there exists a function $\gamma_{gap}=\gamma_{gap}(k)\geq \min\left\{10^{-3},0.01/k \right\}$
such that for any
$\delta \leq \min\{dist(C)/2n,\eta/10\}$,
(for $\eta$ as defined in Fact \ref{fact:indset})
we have
$r(\delta')\leq \alpha(d) \left(1-\gamma_{gap}\right)$,
where $\delta' \in (0.99 \delta, 1.01\delta)$.
}
{~}
\noindent
The proof of the theorem will use, on one hand, Claim (\ref{cl:QECC}), which upper-bounds the soundness of expanding instances, and on the other hand a lemma on non-expanding instances, which tries to ``mimic'' the behavior of the classical setting, in which non-expanding topologies suffer from poor soundness.
We now state this lemma:
\begin{lemma}\label{lem:indexp}
Let $C$ be a stabilizer code on $n$
qudits of dimension $d$, with minimal distance at least $k$
and a $k$-local ($k\geq 4$) succinct generating set ${\cal G}$,
where the left degree of the interaction graph of ${\cal G}$ is $D_L$.
Let $\gamma_{gap} = \gamma_{gap}(k)=
\min\left\{10^{-3},0.01/k \right\}$.
If there exists a $k$-independent set $U$ of size $|U| = \delta n$,
with $\delta<dist(C)/2n$,
such that the bi-partite expansion error of $\Gamma(U)$ is at least $0.32$,
i.e.\ $|\Gamma(\Gamma(U))|= |\Gamma(U)| D_L (1-\eps')$ for some $\eps'\geq 0.32$, then
$$
r(\delta') \leq \alpha(d) \cdot (1-\gamma_{gap}),
$$
for some $\delta' \in (0.099 \delta,0.101\delta)$.
\end{lemma}
The proof of this Lemma is technically non-trivial, and we defer it to a
separate section.
From this lemma, it is easy to show theorem (\ref{thm:sound}):
\begin{proof}(of theorem \ref{thm:sound})
The parameters of the theorem allow us to directly apply Fact (\ref{fact:indset});
hence there exists a $k$-independent set $S$ of size at least $\eta n$, for $\eta$ as defined in Fact (\ref{fact:indset}).
Since $\delta \leq \eta/10$ there exists a $k$-independent set $S$ of size $10 \delta n$.
Now, either:
\begin{enumerate}
\item
$S$ has expansion error at least $0.32$.
By lemma (\ref{lem:indexp}), we have
$$
r(\mu) < \alpha(d)(1-\gamma_{gap}),
$$
for some $\mu \in (0.099 \cdot (10 \delta),0.101 \cdot (10\delta)) = (0.99 \delta,1.01\delta)$,
and $\gamma_{gap}(k)$ from lemma (\ref{lem:indexp}),
which is at least $\min\left\{10^{-3},0.01/k \right\}$.
\item
The set $S$ is $\eps$-expanding for some $\eps< 0.32$, in which case,
since $S$ is in particular $1$-independent, by Claim (\ref{cl:QECC})
the soundness satisfies $r(\delta')\le 2\eps < 2/3 - 0.01 \leq \alpha(d) -0.01$ for all $\delta' \leq |S|/n$; in particular $r(\mu) < \alpha(d)(1-0.01/k)$.
\end{enumerate}
Taking the weaker of these two bounds, we get the desired upper bound for $r(\mu)$.
\end{proof}
\subsection{Proof of Lemma (\ref{lem:indexp})}
In the following we first define the error;
we prove that the expected penalty of this error is small in Fact
(\ref{fact:penalty}), then state and prove the Onion fact in Sub-subsection
\ref{sec:onion} and use it to prove Fact (\ref{fact:weight}),
in which we show that the error has large weight modulo the group.
Finally, we combine all the above to finish the proof of the lemma.
\subsubsection{Constructing the error}\label{sec:err}
Let $U\subseteq R$ be a $k$-independent set as promised by the conditions of the lemma.
Then $|U| = \delta n$, and denoting $S=\Gamma(U)$, we have that $|S| = \delta n k$.
Therefore, $|\Gamma(S)| = |S| D_L (1-\eps')$, for some $\eps' \geq 0.32$.
Let ${\cal E}$ be the following random error process: to each qudit of $S$ independently, we apply $I$ w.p. $1-p$ for $p=1/(10k)$, and each of the other elements of $\Pi_d$ with equal probability $p \cdot t(d)$, where $t$ is defined in
Definition (\ref{def:single}).
$$
{\cal E} = \bigotimes_{i\in S} {\cal E}_i
\mbox{, where }
{\cal E}_i =
\left\{
\begin{array}{ll}
I_i & \mbox{w.p. } 1-1/(10k) \\
X_d^a P_d^b & \mbox{w.p. } t/(10k) \mbox{, for each } (a,b)\neq (0,0)
\end{array}
\right.
$$
We note here that the choice of $p$ is such that on average, each $k$-tuple has only a small number of errors; the expectation of the number of errors is an absolute constant $1/10$ (not a fraction of $k$).
This will help, later on, to lower-bound the weight of the error modulo the group.
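The single-qudit error distribution can be sketched as follows (variable names are ours); note that the probabilities sum to one and that, as remarked above, each $k$-local constraint sees only $kp = 1/10$ errors in expectation:

```python
d, k = 3, 5                            # illustrative dimension and locality
p = 1 / (10 * k)
t = 1 / (d * d - 1)

# E_i: identity w.p. 1-p; each of the d^2-1 non-identity Paulis w.p. p*t.
probs = {'I': 1 - p}
for a in range(d):
    for b in range(d):
        if (a, b) != (0, 0):
            probs[('X', a, 'P', b)] = p * t

assert abs(sum(probs.values()) - 1.0) < 1e-12    # a valid distribution
assert abs(k * p - 0.1) < 1e-12                  # expected errors per constraint
```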
\subsubsection{Analyzing Penalty}
We first claim that, on average, ${\cal E}$ has a relatively
small penalty w.r.t.\ ${\cal G}$, using the fact that the expansion error
is at least $0.32$, as in the condition of Lemma \ref{lem:indexp}.
For any ${\cal E}$, let $Penalty({\cal E})$ denote the number of generators of ${\cal G}$ that do not commute with ${\cal E}$.
\begin{fact}\label{fact:penalty}
$$
\mathbf{E}_{\cal E}\left[Penalty({\cal E})\right]
\leq
p \alpha |S| D_L \left(1- 0.02/k \right)
$$
\end{fact}
\begin{proof}
Let $G=(L,R;E)$
denote the bi-partite graph corresponding to ${\cal G}$,
with $R$ being the generators of ${\cal G}$ and $L$ the qudits.
Let $S=\Gamma(U)$ be as before.
Let the error process ${\cal E}$ be the one defined above.
For any constraint $c\in \Gamma(S)$ which is violated by
this error, observe that there must be a qudit
$i\in supp(c)$ such that
$\left[c|_i, {\cal E}_i\right] \neq 0$.
We now would like to bound the number of constraints violated by ${\cal E}$
using this observation, and linearity of expectation.
For an edge $e\in E$ connecting a qudit $i$ in $S$ and a constraint $c$ in
$\Gamma(S)$,
let $x(e)$ denote the binary variable which is $1$ iff the error
term ${\cal E}_i$ does not commute with $c|_i$.
In other words, an edge marked by $1$ is an edge whose qudit
causes its constraint to be violated.
By construction, for each $e\in E$ which
connects the qudit $i$ and the constraint $c$ we have
\begin{equation}
\label{eq:singleexp}
\mathbf{E}_{\cal E}[x(e)] = p(1-t).
\end{equation}
This is true since the constraint $c$ restricted to the qudit $i$,
$c|_i$, does not commute with the error restricted to the same qudit,
${\cal E}_i$, iff ${\cal E}_i$ is both non-identity (which happens with
probability $p$) and not equal to $c|_i$.
If we simply summed $x(e)$ over all edges going out of $S$
(whose number is $|S| D_L$), then by linearity
of expectation this would give an upper bound on the expected
number of violated constraints equal to
\begin{equation}\label{eq:sumexp}
\sum_e p(1-t)=p |S| D_L \alpha(d).
\end{equation}
Unfortunately, this upper bound does not suffice; to strengthen it
we take advantage of the fact that
many of those edges go to the same constraint, due to the bad
expansion: instead of simply summing these expectations,
we use the fact that two qudits touching the same
constraint
cannot contribute twice to its violation.
Observe that some edges may even cause
constraints to become ``unviolated'', so the actual bound may be even lower.
Let $E_{inj} \subseteq E$ be a subset of the edges between $S$ and $\Gamma(S)$,
chosen by picking a single edge for each constraint in $\Gamma(S)$.
For an edge $e\in E$ let $c(e)$ denote the constraint incident on $e$,
and let $e_{inj}(c(e))$ denote the edge
in $E_{inj}$ that is connected to $c(e)$.
We now bound the expectation by subtracting $x(e)$ from the sum
if the Boolean variable
$x(e_{inj}(c(e)))$ is $1$; this avoids counting the violation of the same
constraint twice due to the two edges.
We have:
$$
\mathbf{E}_{\cal E}\left[Penalty\right]
\leq
\mathbf{E}_{\cal E}\left[
\sum_{e\in E_{inj}} x(e) +
\sum_{e \notin E_{inj}}
\left(1 - x(e_{inj}(c(e))) \right) \cdot x(e)
\right].
$$
Expanding the above by linearity of expectation:
$$
\mathbf{E}\left[Penalty\right]
\leq
\sum_{e\in E_{inj}} \mathbf{E}_{\cal E}\left[ x(e) \right] +
\sum_{e \notin E_{inj}} \mathbf{E}_{\cal E}\left[x(e)\right] -
\sum_{e \notin E_{inj}} \mathbf{E}_{\cal E}\left[x(e_{inj}(c(e))) \cdot x(e) \right]=
$$
$$
\sum_{e\in E} \mathbf{E}_{\cal E}\left[ x(e) \right]-
\sum_{e \notin E_{inj}} \mathbf{E}_{\cal E}\left[x(e_{inj}(c(e))) \cdot x(e) \right].
$$
We have already calculated the first term in the sum in Equation
(\ref{eq:sumexp});
we now lower-bound the correction given by the second term.
We use the fact that for any $e\notin E_{inj}$
$$
\mathbf{E}_{\cal E}\left[x(e_{inj}(c(e))) \cdot x(e)\right] =
\mathbf{E}_{\cal E}[x(e_{inj}(c(e)))]
\mathbf{E}_{\cal E}[x(e)] $$
since ${\cal E}$ is independent between different qudits.
We can thus substitute
Equation \ref{eq:singleexp}, and get:
$$
\mathbf{E}_{\cal E}\left[Penalty\right]
\leq
p \alpha |S| D_L - |S| D_L \eps (p \alpha)^2,
$$
where we have used the fact that $|E \backslash E_{inj}| = |S| D_L \eps$.
This is equal to
$$
p\alpha |S| D_L ( 1 - p\alpha \eps).
$$
Using $p = 1/(10k)$, $\eps \geq 0.32$, and
$\alpha(d) \geq 2/3$, we get the desired bound.
\end{proof}
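The final substitution — that $1 - p\alpha\eps \leq 1-0.02/k$ under these constants — holds for every relevant $k$, as a quick numeric check (ours) confirms:

```python
alpha, eps = 2 / 3, 0.32               # alpha(d) >= 2/3 for every d >= 2; eps >= 0.32
for k in range(4, 1001):
    p = 1 / (10 * k)
    # p*alpha*eps = 0.64/(30k) >= 0.02/k, hence 1 - p*alpha*eps <= 1 - 0.02/k.
    assert p * alpha * eps >= 0.02 / k
```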
\subsubsection{The Onion fact (\ref{fact:succinct})}\label{sec:onion}
\begin{fact}\label{fact:succinct}
\textbf{Onion fact}
\noindent
Let $C$
be a stabilizer code on $n$ qudits
with a succinct generating set ${\cal G}$ of locality $k$, such that $dist(C)\geq k$.
Let $E\in \Pi_d^n$ s.t.
$supp(E) \subseteq \Gamma(u)$ for some generator $u\in {\cal G}$.
Finally let $\Delta\in A({\cal G})$, and let
$E_{\cal G}=\Delta\cdot E$.
Then, for any $i\in [k]$, if
$wt(E|_{\Gamma(u)}) = i$, then
$wt(E_{\cal G}|_{\Gamma^{(2k+1)}(u)}) \geq \min\left\{i,k-i\right\}$.
\end{fact}
\begin{proof}
If $\Delta|_{\Gamma(u)}=I$ then
\begin{equation}\label{eq:i}
wt \left(E_{\cal G}|_{\Gamma^{(2k+1)}(u)}\right) \ge
wt \left(E_{\cal G}|_{\Gamma(u)}\right) =
wt \left(E|_{\Gamma(u)}\right) = i,
\end{equation}
so in this case we are done.
Otherwise, $\Delta|_{\Gamma(u)}$
is non-identity, and so has at least one non-identity
coordinate. Since $\Delta$ is non-identity,
by the assumption on the succinctness of ${\cal G}$ we
have $wt(\Delta) \geq k$.
Moreover, we claim that $wt \left(\Delta|_{\Gamma^{(2k+1)}(u)}\right) \geq k$.
Otherwise, consider the following process.
Start with the generator $u$, and consider the qudits in $\Gamma(u)$.
Now add the qudits in $\Gamma^{(3)}(u)$ (namely the qudits that are acted
upon by generators intersecting $u$).
Then add the next level, and so on for $k$ levels,
by which point we have added all qudits belonging
to $\Gamma^{(2k+1)}(u)$.
By the pigeonhole principle, if $wt \left(\Delta|_{\Gamma^{(2k+1)}(u)}\right)<k$,
then there must exist a level $t$, $1\le t \le k$, such that
$\Delta$ has zero support on qudits added in this level.
We now claim that ${\tilde \Delta}=\Delta|_{\Gamma^{(2(t-1)+1)}(u)}$,
is in the centralizer $\mathbf{Z}({\cal G})$ but its weight is less than $k$.
This, together with the fact that ${\tilde \Delta} \notin A({\cal G})$,
shown in the next paragraph, contradicts the assumption that $dist(C)\geq k$.
To see that ${\tilde \Delta}$ is in the centralizer,
we observe first that $\Delta$ commutes with all elements of ${\cal G}$
that act only on qudits in ${\Gamma^{(t-1)}(u)}$,
and since ${\tilde \Delta}$ agrees with $\Delta$ on
${\Gamma^{(2(t-1)+1)}(u)}$, ${\tilde \Delta}$ also commutes with them.
We also observe that ${\tilde \Delta}$ trivially commutes with all
elements in ${\cal G}$ whose support does not intersect
$\Gamma^{(2(t-1)+1)}(u)$. Hence we only need to worry about
those terms that act on at least one qudit in
$\Gamma^{(2t+1)}(u)-\Gamma^{(2(t-1)+1)}(u)$
and at least one qudit in $\Gamma^{(2(t-1)+1)}(u)$.
Let $v$ be some such term.
Note that $v$ does not act on any qudit outside $\Gamma^{(2t+1)}(u)$
by definition.
We know that $\Delta$ commutes with $v$.
But by the choice of $t$, we know that $\Delta$ is trivial on those qudits
added at the $t$-th level, and hence $\Delta$ restricted to
$\Gamma^{(2t+1)}(u)$ (which contains the qudits of $v$)
is the same as $\Delta$ restricted to $\Gamma^{(2(t-1)+1)}(u)$.
And so $\Delta$ restricted to $\Gamma^{(2(t-1)+1)}(u)$ commutes with $v$.
We showed that ${\tilde \Delta}$ is in $\mathbf{Z}({\cal G})$.
If it also belongs to $A({\cal G})$, this contradicts succinctness
of ${\cal G}$;
otherwise it is in $\mathbf{Z}({\cal G})-A({\cal G})$
implying the distance of $C$
is at most $k-1$, contrary to assumption.
This means that $wt \left(\Delta|_{\Gamma^{(2k+1)}(u)}\right) \geq k$.
Therefore, we now know
by the triangle inequality on the Hamming distance, that
\begin{equation}\label{eq:k-i}
wt \left(E_{\cal G}|_{{\Gamma^{(2k+1)}(u)}}\right) \geq
wt \left(\Delta|_{{\Gamma^{(2k+1)}(u)}}\right) -
wt \left(E|_{{\Gamma^{(2k+1)}(u)}}\right)
=
\end{equation}
$$
wt \left(\Delta|_{{\Gamma^{(2k+1)}(u)}}\right) -
wt \left(E|_{\Gamma(u)}\right)
\geq
k-i.
$$
Taking the minimum of the bounds from Equations (\ref{eq:i}) and (\ref{eq:k-i})
completes the proof.
\end{proof}
\subsubsection{Analyzing error weight}
We note that the expected weight of ${\cal E}$ is
$p|S|$, and since $|S|$ is linear in $n$,
by a Chernoff bound
the probability that the weight of ${\cal E}$ is smaller than
this expectation by more than a constant fraction is $2^{-\Omega(n)}$.
We need to show a similar bound on the weight modulo the centralizer group;
given that $\delta<dist(C)/2n$ we only need to bound the weight modulo
$A({\cal G})$.
Let $\Delta\in A({\cal G})$ be some element of the stabilizer group
and let ${\cal E}_{\cal G}=\Delta \cdot {\cal E}$.
We now need to lower-bound $wt({\cal E}_{\cal G})$.
\begin{fact}\label{fact:weight}
For integer $k$, let $\hat{k} = \floor{k/2}+1$.
Let $y(k): [4,\infty] \mapsto \mathbf{R}$ be the function:
$$
y(k) =
\left\{
\begin{array}{ll}
1-2^{(-\hat{k}+1)\log(k)+k-2.3\hat{k}+4.54} & k\geq 12 \\
0.9999 & 6\leq k\leq 11 \\
0.9992 & k=5 \\
0.9985 & k=4
\end{array}
\right.
$$
We claim:
$$
Prob_{\cal E}
\left(
wt({\cal E}_{\cal G}) <
|S| p y(k)
\right)
=
2^{-\Omega(n)}.$$
\end{fact}
\begin{proof}(\textbf{Sketch. The detailed proof can be found in Appendix
(\ref{sec:weight}).})
The proof builds on the Onion fact (\ref{fact:succinct}) as follows: the Onion fact shows that ``islands'' with fewer than $k/2$ errors cannot ``lose'' error weight modulo the centralizer of ${\cal G}$.
The proof uses standard probabilistic arguments
to show that, for the random error pattern we chose, the vast majority of islands have fewer errors than this threshold, and so the overall error weight is
virtually unharmed.
\end{proof}
\subsubsection{Concluding the proof of lemma (\ref{lem:indexp})}
\begin{proof}
By fact (\ref{fact:penalty}) the average penalty of ${\cal E}$ is small, i.e.
$$\mathbf{E}\left[Penalty({\cal E})\right] \leq |S| D_L p \alpha (1-0.02/k)
\triangleq P.$$
Yet, by fact (\ref{fact:weight}) w.p. exponentially close to $1$, we have
$$
wt({\cal E}_{\cal G})
\geq
|S| p y(k) \triangleq W_{low}\geq |S| p \cdot 0.99.
$$
Similarly, by the Hoeffding bound w.p. exponentially close to $1$, we have
$$
wt({\cal E}_{\cal G}) < |S| p (1+0.01) \triangleq W_{high}.
$$
Since all penalties are non-negative, we conclude that {\it conditioned} on
$\left|wt({\cal E}_{\cal G})/ (|S|p) -1\right| < 0.01$, we have
$\mathbf{E}\left[Penalty({\cal E})\right] \leq P+2^{-\Omega(n)}$.
Therefore, there must exist an error ${\cal E}$, whose weight modulo
${\cal G}$ deviates
by a fraction at most $0.01$ from $|S| p$, and whose penalty is at most $P+2^{-\Omega(n)}$.
We would like to bound the soundness of this error, which is the ratio
of its penalty to $D_L$ times its weight modulo ${\cal G}$.
We get that its soundness is at most (for large enough $n$, absorbing the exponentially small term into the constant)
\begin{equation}\label{eq:sound}
r = \frac{P+2^{-\Omega(n)}}{D_L W_{low}}\leq
\frac{1}{D_L}
\cdot
\frac
{|S| D_L p \alpha (1-0.019/k)}
{|S| p y(k)}
=
\alpha \left(\frac{1-0.019/k}{y(k)}\right).
\end{equation}
We now note that in the last expression, for all $k\geq 12$,
the ratio $\frac{1-0.019/k}{y(k)}$
is at most $1-0.01/k$.
For all values of $4 \leq k < 12$ we substitute the appropriate
value of $y(k)$ and get similarly that
the ratio $\frac{1-0.019/k}{y(k)}$ is at most $1- 10^{-3}$.
Hence the soundness of the error, $r$, is at most $\alpha(d)(1-\gamma_{gap})$,
where $\gamma_{gap}$ is as defined in the statement of Theorem
(\ref{thm:sound}).
\end{proof}
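The two numeric claims about the ratio $\frac{1-0.019/k}{y(k)}$ can be verified directly (a check of ours; we assume, as the $2^{(\cdot)}$ form of $y$ suggests, that $\log$ denotes the base-$2$ logarithm):

```python
import math

def y(k):
    """y(k) from Fact (fact:weight); log taken base 2 (our assumption)."""
    if k >= 12:
        khat = k // 2 + 1
        return 1 - 2 ** ((-khat + 1) * math.log2(k) + k - 2.3 * khat + 4.54)
    return {4: 0.9985, 5: 0.9992}.get(k, 0.9999)   # table values for 4 <= k <= 11

for k in range(12, 101):
    assert (1 - 0.019 / k) / y(k) <= 1 - 0.01 / k  # the k >= 12 claim
for k in range(4, 12):
    assert (1 - 0.019 / k) / y(k) <= 1 - 1e-3      # the 4 <= k < 12 claim
```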
\section{Acknowledgements}
The authors would like to thank
Eli Ben-Sasson, Irit Dinur and Tali Kaufman for insightful discussions.
\label{sec:intro}
Boundaries are important interpretable visual cues that can describe both the low-level image characteristics as well as high-level semantics in an image. Human vision uses occluding contours and boundaries to interpret unseen or seen objects and classes. In several vision tasks, they are exploited as priors~\citep{Zhu2020-edgeofdepth,Kim2021-boundaryrepdevil,Hatamizadeh2019-bioboundary,revaud2015epicflow,cashman2012shape}. Some key works on contours~\citep{Cootes2001-active,Matthews2004-active,Kass1988-snakes} have greatly impacted early research in computer vision. Although the advent of end-to-end deep learning has somewhat shifted the focus away from interpretable visual cues, boundary discovery still remains important in computer vision tasks.
Supervised deep learning has greatly transformed problems such as object detection and segmentation~\citep{Redmon2016-yolo,Chen2017-deeplab,Cheng2020-panopticdeeplab} by redefining the problem~\citep{Kirillov2019-panoptic}, using high-quality datasets~\citep{Cordts2016-cityscapes,neuhold2017mapillary} and better network architectures~\citep{Cheng2020-panopticdeeplab,cheng2021maskformer,wang2021max}.
Boundary detection, however, has seen a rather modest share of such progress. Although modern deeply-learned methods~\citep{xie2015holistically,liu2017richer,Maninis2017-cob} provide better accuracy and the possibility to learn only the high-level boundaries, a particularly elusive goal in learned boundary detection has been the so-called crisp boundaries~\citep{isola2014crisp,wang2018deep,deng2018learning}. The formulation of boundary detection as a binary segmentation task naturally introduces class imbalance, which makes detecting pixel-thin boundaries extremely difficult. Arguably, a majority of recent methods in boundary detection are proposed in order to tackle this same issue. Many methods address the lack of `crispness' by fusing high-resolution features with the middle- and high-level features~\citep{xie2015holistically,liu2017richer}. Such a strategy has been successful in other dense prediction tasks~\citep{ronneberger2015u} as well. Others propose different loss functions~\citep{kokkinos2016pushing,deng2018learning, kervadec2019boundary} to address class imbalance.
Despite the improvements, we identify two issues regarding crisp boundary detection. The first is that the evaluation protocol~\citep{martin2004learning} does not necessarily encourage crisp detection, as the quantification is done after Non-Maximal Suppression (NMS). Such an evaluation may be misleading when the network outputs need to be used at training time for other high-level tasks, e.g.\@\xspace, segmentation~\citep{Kim2021-boundaryrepdevil}. Second, the current losses~\citep{kokkinos2016pushing,xie2015holistically,LossOdyssey} push for edges as crisp as the ground-truth rather than as crisp as possible. This is particularly harmful since many boundary detection datasets~\citep{bsds,silberman2012indoor} contain ambiguous boundaries or inconsistently thick boundaries.
In this paper, we take a different perspective on boundary detection. Boundaries are formed where visual features change, popularly referred to as the differential representation~\citep{boykov2006integral,kervadec2019boundary}. Such an assumption treats the boundaries as a set of 1-D surfaces embedded in a continuous 2D space, implying they do not have any thickness. Many previous energy-minimization based approaches~\citep{Chan2005-levelset,Chan2001-activeedges,Paragios2002-matching,boykov2006integral,ma2000edgeflow} and a few current methods~\citep{kervadec2019boundary} tackle boundaries in a similar way. Level-set methods~\citep{Chan2005-levelset,boykov2006integral,ma2000edgeflow} consider boundaries as the level set of a continuous function of the image. Specifically, \citet{ma2000edgeflow} define an energy function related to the distance and direction of the boundary at each pixel and extract the directional normals at the boundary using this energy. Inspired by such works and also by recent work on 3D implicit functions~\citep{Tancik2020-fourier,Sitzmann2020-implicit,Mildenhall2020-nerf}, we represent boundaries via a field of unit vectors defined at each pixel, pointing towards the closest boundary surface. The proposed vector field representation naturally solves class imbalance. In distance transforms, vector fields are considered incomplete Euclidean transforms~\citep{Osher1988-fronts}, equal to the gradient of the signed distance field; the vector field we use is the (negative) gradient of the unsigned distance field. In contrast to distance fields, it provides high sensitivity at the boundaries and is easily localizable. We demonstrate the equivalence of the normal field to the surface contour representation using the level set of the normal field's divergence, providing infinitely sharp boundaries. Owing to the zero thickness, we refer to our result as the zero-pixel boundary.
Our method is virtually hyper-parameter free at training and test time, and can provide zero-pixel thick boundaries at training time.
In order to evaluate the boundaries using the surface interpretation, we also advocate the use of surface distances including the average symmetric surface distance (assd) metric that is less prone to class imbalance and variable boundary thickness in the ground-truth~\citep{kervadec2019boundary}. Such metrics are very popular in biomedical image segmentation~\citep{yeghiazaryan2018family}. We show significant improvements in all metrics using our boundary representation when compared to the various combinations of Dice~\citep{dice1945measures} and cross-entropy losses in several large datasets.
\section{Related Work}\vspace{-1mm}
There is a rich history of boundary detection in computer vision.
Previous work on boundaries showed a diversity of definitions and approaches. We differentiate them based on two different interpretations of boundaries: i.e.\@\xspace, \emph{i)} a boundary is a separation between two or more image regions with different visual features and \emph{ii)} a boundary is a thin group of pixels belonging to a specific class. It should be noted that most modern methods fall under the second category.
Perhaps the most notable representatives of the first strand are the energy-based segmentation methods \citep{Chan2001-activeedges,comaniciu2002mean,boykov2006integral,grady2006random,ma2000edgeflow}. These methods relied on hand-crafted features and various optimization strategies to compute the low-level boundaries. In particular, \citet{ma2000edgeflow} compute the normal vectors of the low-level boundaries from an energy function, without looking into an equivalent learnable representation. Graph-based segmentation methods \citep{shi2000normalized,felzenszwalb2004efficient,cheng2016hfs} construct a graph from the image and cut the graph to obtain non-overlapping regions whose boundaries are viewed as image boundaries. A few deep learning methods followed a similar approach \citep{Wang2021-active,kervadec2019boundary}. Despite the advantage of the definition, current representations in this category are hard to adapt to generic boundaries and a compact multi-boundary representation with good performance remains lacking.
A larger corpus of work utilizes pixel-wise image features to decide whether pixels belong to a `boundary' class. They form our category \emph{ii)} methods. Early methods utilize various filter operators to detect discontinuities in image intensities or colors \citep{canny1986,sobel1972camera}. Learning based methods substantially boost the edge detection performance by classifying handcrafted features~\citep{konishi2003statistical,martin2004learning,bsds,dollar2013structured,hallman2015oriented}. Modern deep neural network (DNN) methods have further improved this field by learning powerful feature representations, particularly high-level semantic information \citep{shen2015deepcontour,bertasius2015deepedge,xie2015holistically,liu2017richer,wang2018deep,deng2018learning,he2019bi}. \citet{yang2016object} leveraged the powerful deep features to detect only object boundaries. Others try to simultaneously detect edges and predict the semantic class of each edge point, so-called semantic edge detection \citep{hariharan2011semantic,yu2017casenet,liu2018semantic,yu2018simultaneous}.
On the other hand, classifying pixels into a boundary class introduces class imbalance during training. A common counter-strategy is to use a weighted cross-entropy loss, giving the non-boundary pixels a small weight and the boundary class a large weight \citep{xie2015holistically,liu2017richer,he2019bi}. Yet, despite an improvement over regular cross-entropy, it does not solve the problem. To thin the boundaries, Non-Maximal Suppression (NMS) is usually adopted. Such non-differentiable post-processing may be harmful when the boundaries are directly integrated with higher-level tasks such as segmentation \citep{Kim2021-boundaryrepdevil}. The Dice loss \citep{dice1945measures} was thus advocated to generate crisp boundaries before NMS \citep{deng2018learning}, but it still produces several-pixel-thick boundaries and suffers more from missed predictions. Variations of the Dice loss~\citep{shit2021cldice} have been proposed to counter the missed detections. However, the right approach still depends on the downstream tasks~\citep{Zhu2020-edgeofdepth,Kim2021-boundaryrepdevil,Hatamizadeh2019-bioboundary,shit2021cldice} and in either case a careful selection of training as well as testing hyper-parameters is required. We provide an alternative approach, motivated by the class imbalance, with no sensitive hyper-parameters.
\section{Boundary Transform and Representation}
\label{sec:boundarytransform}
In this section, we first discuss the surface representation of boundaries as a normal vector field transform and prove its relevant properties.
Our boundary representation is inspired by recent work on implicit neural 3D surface representations~\citep{Park2019-deepsdf,Mescheder2019-occupancy}, energy-based methods on edge detection~\citep{ma2000edgeflow,boykov2006integral} and distance transforms~\citep{Osher1988-fronts}. In 3D surface representations, a Signed Distance Function (SDF)~\citep{Osher1988-fronts} or occupancy map~\citep{Mescheder2019-occupancy} is used as representation. We instead propose a unit vector field from every point to the closest boundary. This choice is motivated by the high sensitivity and richer boundary context provided by the unit vector field, as shown in our experimental results in \S~\ref{sec:experiments}. Fig.~\ref{fig:rep} shows our vector field representation of the ground-truth, with predictions using a standard quiver plot on a sub-sampled set of points.
\begin{figure}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\textwidth]{figs/orig_im.png}
\caption{Image crop}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\textwidth]{figs/bd_im.png}
\caption{Traditional}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\textwidth]{figs/Representation.pdf}
\caption{Vector transform}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\textwidth]{figs/repzoom.pdf}
\caption{Zoomed in}
\end{subfigure}
\caption{\textbf{Boundary representations}. We contrast the conventional binary boundary representation with our representation. From left to right, we show (a) top-left crop of an image in a test set, (b) the standard binary representation of ground-truth boundary, (c) the vector transform plot of the prediction \emph{(in red)} overlaid on the ground-truth representation \emph{(in green)} and the conventional binary representation for clarity. Finally (d) shows the zoomed in view of (c) on the yellow rectangle.}
\label{fig:rep}
\end{figure}
We assume a continuous boundary image domain $\Omega \subset \mathbb{R}^2$ with the set of boundary points $\{\mathsf{x}'\} = \Pi\subset \Omega$. Each point is denoted as a 2-vector $\mathsf{x}=(x,y)\in\mathbb{R}^2$.
In order to encode boundary properties on the whole image, we compute a signed $x$ and $y$ distance field separately and finally encode only the direction. The result is a unit vector field that represents the boundary. We can express our boundary representation by the following transform for any point $\mathsf{x} \in \Omega$:
\begin{align}
\begin{split}
\mathsf{f}(\mathsf{x}) &= - (\mathsf{x} - \argmin _{\mathsf{x}'\in \Pi } d(\mathsf{x},\mathsf{x}')) \\
\mathsf{v}(\mathsf{x}) &= \frac{\mathsf{f}(\mathsf{x})}{\Vert \mathsf{f}(\mathsf{x}) \Vert_2}, \quad \text{if} \ \Vert \mathsf{f}(\mathsf{x}) \Vert_2\neq 0, \ \text{otherwise} \ \mathsf{n}, \quad
\mathsf{n}\ = \lim_{{\mathsf{f}_x} \to 0^+}\frac{\mathsf{f}(\mathsf{x})}{\Vert \mathsf{f}(\mathsf{x}) \Vert_2}.
\end{split}
\label{eq:vectransform}
\end{align}
\Eqref{eq:vectransform} defines the transform as a function $\mathsf{v}(\mathsf{x}):\Omega\to\mathbb{R}^2$ going from the boundary $\Pi$ to a field representation. Here, $d$ is the distance operator and $\mathsf{f}_x$ is the $x$ component of the field $\mathsf{f}$. Note that at boundary points, where two opposite field vectors are possible, we consistently choose one of them by approaching the boundary from the side of positive $\mathsf{f}_x$.
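For a discrete image, the transform of \eqref{eq:vectransform} can be computed from a binary boundary mask via a Euclidean distance transform that also returns nearest-boundary indices. The sketch below is our own illustration (not the authors' code); for simplicity it keeps a zero vector at boundary pixels instead of applying the limit tie-breaking rule of \eqref{eq:vectransform}.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def vector_transform(boundary_mask):
    """Discrete sketch of the vector transform: a unit vector field
    pointing from every pixel to its nearest boundary pixel."""
    # The EDT finds the nearest *zero* element, so invert the mask
    # (boundary pixels become 0); indices give the closest boundary point.
    _, idx = distance_transform_edt(~boundary_mask, return_indices=True)
    f = idx - np.indices(boundary_mask.shape)   # f(x) = x' - x, toward boundary
    norm = np.linalg.norm(f, axis=0)
    norm[norm == 0] = 1.0                       # boundary pixels keep a zero vector
    return f / norm

# Toy example: a vertical boundary line at column 2 of a 5x5 image.
mask = np.zeros((5, 5), dtype=bool)
mask[:, 2] = True
v = vector_transform(mask)   # shape (2, 5, 5): (row, column) components
```

In this toy example, pixels left of the boundary carry the unit vector $(0, 1)$ and pixels right of it $(0, -1)$, both pointing at the boundary column.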
We note the following properties of the vector field $\mathsf{v}$.
\begin{property}\label{prop:normal}
The vector field $\mathsf{v}(\mathsf{x})$ is equal to the unit normal field at the boundary.
\end{property}
\begin{proof}
This is a well-known result~\citep{osher2003level} and can be proved easily (see equation (2.4) in the reference). The fact that we forcefully choose one of the two opposite normals at the boundary points does not affect the statement.
\end{proof}
\begin{property}\label{prop:second}
Given a vector field representation $\mathsf{v}(\mathsf{x})$ of a boundary, one can obtain the binary boundary representation by considering the following transform:
\begin{align}
\label{eq:divvt}
\begin{split}
& g(\mathsf{x}) = \operatorname {div} \mathsf{v}(\mathsf{x}). \\
\end{split}
\end{align}
The original boundary set $\Pi$ can then be found by taking the zero level set of $g(\mathsf{x})+2$, i.e.\@\xspace,
\begin{equation}
\label{eq:invtransform}
\Pi = L_0(g+2).
\end{equation}
\end{property}
\begin{proof}
In an infinitesimal neighborhood of the boundary points, by Property~\ref{prop:normal}, the vector field is normal to the boundary, provided that good approximate normals can be obtained from \eqref{eq:vectransform}. As the neighborhood size approaches zero, the tangential vector components approach zero around a point on a continuous boundary segment. Thus, within such an infinitesimal neighborhood, the unit normal fields on the two sides of the boundary point in opposite directions and subtract perfectly, creating a divergence of $-2$ at the boundary, while the divergence remains around zero or positive away from the boundaries.
\end{proof}
Strictly speaking, the result holds only for piece-wise smooth surfaces~\citep{osher2003level}; a divergence lower than $-2$ is possible at discontinuous surface points.
\begin{property}
The relation is one-to-one between the binary boundary representation and the proposed vector field representation in a continuous domain.
\end{property}
\begin{proof}
This property follows from \eqref{eq:vectransform} for the forward transform and \eqref{eq:invtransform} for the inverse transform, which together provide a one-to-one relation.
\end{proof}
Note that the vector field transform as defined in \eqref{eq:vectransform} has to correct for two different kinds of indeterminate states. The first is on the boundary, which is resolved by using the right-hand limit so that one of the two opposite directions is chosen consistently. The second occurs when the minimization in \eqref{eq:vectransform} yields two or more equidistant closest points; this is resolved by choosing any one of them. The vector fields around such points flip directions, creating a positive divergence as shown in Fig.~\ref{fig:network}. The latter points are discussed further in \S\ref{sec:experiments}; they are in fact helpful for deciding superpixel centers.
The above properties and their proofs are crucial for the validity of the proposed boundary representation and also to go from one representation to another for inference and visualization.
\paragraph{Vector Transform and the Distance Transform.}
In essence, the normalized vector field proposed in \eqref{eq:vectransform} is another representation of the distance transform. Let $\phi(\mathsf{x})\in \mathbb{R}^{+}$ define the distance transform, then the vector field $\mathsf{v}(\mathsf{x})$ in \eqref{eq:vectransform} can be obtained by the following partial derivatives~\citep{Osher1988-fronts, osher2003level}:
\begin{equation}
\label{eq:sdf}
\mathsf{v}(\mathsf{x}) = -\nabla \phi(\mathsf{x}).
\end{equation}
One can optimize a given network by minimizing a loss on the distance transform (DT) or SDF~\citep{dapogny2012computation,caliva2019distance,Park2019-deepsdf} instead of using the loss on the normalized vector field. Compared to the binary mask, the Vector transform (VT), DT and SDF have the added advantage that they are sensitive to small topological changes. The SDF, on the other hand, does not support overlapping and open surfaces and is not easily adaptable to the image boundary problem. However, there are several reasons that make the DT unsuitable for learning boundaries. During training, when the distance field is close to $0$, i.e.\@\xspace, around the boundaries, any loss on the DT or SDF loses importance. Apart from the convergence problems of the DT with gradient descent \citep{osher2003level}, the DT is also hard to localize by thresholding under noise compared to the SDF and VT. In the SDF, localizing the surface amounts to finding the zero crossings, and in the VT, the divergence measure in \eqref{eq:divvt} provides an extremely sharp contrast, making \eqref{eq:invtransform} trivial to solve. These differences in the thresholding problem can be seen in Fig.~\ref{fig:VTvsDT} in Appendix B and the experimental results. Additionally, despite reducing the class imbalance compared to binary boundary prediction, the DT has an implicit bias to the weighted average of its typical range.
On the other hand, a normalized vector field from VT~\eqref{eq:vectransform} is sensitive to the topology similar to a distance field while also being localizable and sensitive at the boundaries, as shown in Fig.~\ref{fig:rep}.
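The relation $\mathsf{v} = -\nabla \phi$ in \eqref{eq:sdf} is easy to verify numerically. The snippet below is our own check (not part of the paper), discretizing the gradient with central differences via \texttt{np.gradient}; away from the boundary it recovers the unit field of \eqref{eq:vectransform}.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A single vertical boundary at column 3 of a 7x7 image.
mask = np.zeros((7, 7), dtype=bool)
mask[:, 3] = True

phi = distance_transform_edt(~mask)   # unsigned distance transform
gy, gx = np.gradient(phi)             # row- and column-wise derivatives
v = np.stack([-gy, -gx])              # v = -grad(phi), Eq. (4)
```

Columns left of the boundary carry $(-g_y, -g_x) = (0, 1)$ and columns right of it $(0, -1)$, both pointing toward the boundary; on the boundary column itself the central difference vanishes.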
\section{Boundary Detection with the Vector Field}
In this section we provide the details for the construction of the boundary detection method using the representation proposed in \S \ref{sec:boundarytransform}.
\subsection{Network Architecture}
Most convolutional architectures~\citep{liu2017richer,xie2015holistically} for boundary detection take advantage of both the low-level high resolution features and the deep high-level features, using several fusion strategies. We choose a similar network architecture, HR-Net~\citep{wang2020deep}, which was proposed for segmentation and object detection tasks on high resolution images.
We further enrich the high resolution representation provided by HR-Net with an additional skip connection at $\times2$ downsampling level from the encoder to the decoder. This helps retrieve very high resolution details that are necessary in the boundary prediction task. The output of the HR-Net is first bilinearly upsampled and fused with the skip connection. Inspired by \citet{deng2018learning}, the high resolution skip connection goes through a ResNeXT module~\citep{Xie_2017_CVPR} before being fused with the decoder signal with a pixel-wise fully connected layer. Finally, the output goes through a convolutional layer and is further upsampled to formulate a prediction at the same resolution as the input. For more details refer to Appendix \S\ref{sec:archandtrain}.
This architecture allows us to test our method as well as traditional, pixel-wise classification methods, without giving an unfair advantage to one over the other.
Fig.~\ref{fig:network} shows the simple setup of our complete pipeline with training and inference.
The output of our network is a two-channel image which contains, respectively, the $x$-component $\hat{\mathsf{v}}_x$ and the $y$-component $\hat{\mathsf{v}}_y$ of the field prediction corresponding to \eqref{eq:vectransform}. We denote the transform of \eqref{eq:vectransform} as the Vector Transform (VT), upon which we build our method. Although it is possible to use a single angle representation with $\theta \in [-\pi, \pi]$, we choose the $x$- and $y$-component representation because it avoids the discontinuity at $\pm\pi$ as the vector direction changes. To constrain the network to output predictions in the $[-1, 1]$ range typical of the sine and cosine functions, a hyperbolic tangent activation ($\tanh$) is applied to the output of the network.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{figs/ICLR_image_l.pdf}
\caption{\textbf{Training and Inference overview}. The predicted field $\hat{\mathsf{v}}(\mathsf{x})$ is convolved with pre-selected filters to obtain the divergence at pixel boundaries for inference. For visualization, we show the divergence and the predicted boundary in pixel locations without using the support image.}
\label{fig:network}
\end{figure}
\paragraph{Training loss.}
To train the $x$ and $y$ components of VT prediction, the mean-squared error (MSE) loss is used, which can be formulated as
\begin{equation}
\label{eq:vt_loss}
\ell_{VT} = \left\|\mathsf{v}^{gt} - \hat{\mathsf{v}}\right\|_2^2.
\end{equation}
Unlike many other methods, there are no hyper-parameters in the loss formulation.
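Concretely, the loss reduces to a plain mean-squared error over the two field channels. The following is our minimal NumPy sketch; the paper trains in a standard deep learning framework, and the exact reduction over pixels is our assumption.

```python
import numpy as np

def vt_loss(v_gt, v_pred):
    """MSE between ground-truth and predicted vector fields,
    both of shape (2, H, W); mean over channels and pixels.
    No class weights or other hyper-parameters are involved."""
    return np.mean((v_gt - v_pred) ** 2)
```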
\subsection{Inference}
\label{sec:inference}
At inference time, we predict the boundary VT densely on the image. It is entirely possible that for some downstream tasks, such a prediction may already provide the required endpoint. However, to obtain the surface representation of the boundary, we need to use the properties listed in \S \ref{sec:boundarytransform}. In particular, property \ref{prop:second} provides a differential means to obtain a zero-thickness boundary on a continuous domain. In the discrete space, the same principle can be used to derive the boundary as the inter-pixel points, which we refer to as the zero-pixel boundary.
At this stage, at least two different approaches can be used to obtain the inter-pixel separation.
One can be formulated by using $2\times1$ or $2\times 2$ kernels to extract the divergence, which provides the necessary derivatives at the pixel boundaries.
We describe in detail a second and simpler approach which uses a support image $\tilde{I}$ of twice the resolution, to extract boundaries as a single-pixel detection. We define an operator $Z(I)=\tilde{I}$, which takes in any image $I$ and provides the image $\tilde{I}$ of twice the resolution according to the following rule:
\begin{equation}
\label{eq:dilated_image}
Z(I) = \tilde{I},\
\begin{cases}
\tilde{I}\left(x, y\right) = I\left(\frac{x}{2}, \frac{y}{2}\right) & \text{if $(x \operatorname{mod} 2 = 0\ \text{and}\ y\operatorname{mod} 2 = 0)$}, \\
\tilde{I}\left(x, y\right) = 0 & \text{otherwise}.
\end{cases}
\end{equation}
\Eqref{eq:dilated_image} is a simple copy and concatenate operation. Using the operator $Z$ on the predicted VT field $\hat{\mathsf{v}}$, we obtain $\tilde{\mathsf{v}}=Z(\hat{\mathsf{v}})$. We then compute the divergence using standard Sobel filters as:
\begin{equation}
\label{eq:divergence}
\mathbf{\nabla}\cdot \tilde{\mathsf{v}}(\mathsf{x}) = \frac{\partial \tilde{\mathsf{v}}_x(x,y)}{\partial x} + \frac{\partial \tilde{\mathsf{v}}_y(x,y)}{\partial y}.
\end{equation}
The image with boundaries ${I}_b$ can then be obtained by negating the divergence of the VT prediction image in \eqref{eq:divergence}, subtracting $1$ from it, and applying the ReLU activation~\citep{relu}:
\begin{equation} \label{eq:boundary}
I_b = \operatorname{ReLU}\ \left(- (\nabla\cdot \tilde{\mathsf{v}}(\mathsf{x}) + 1)\right).
\end{equation}
The resulting image will have non-zero values only on the predicted boundary surface, with boundary pixels having values around $1$. In practice we obtain small deviations of about $0.1$ around the expected value of $1$ due to imperfections in the field predictions.
In the support image structure, it can be observed that only pixels not belonging to the original image can be classified as boundaries, as they are the only ones for which the divergence can take a value different from zero. Note that the above divergence definition using the support image is only required for the zero-thickness boundary; for a standard inference process, one may instead follow Fig.~\ref{fig:network}.
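The inference steps, from the $Z$ operator of \eqref{eq:dilated_image} to the boundary image of \eqref{eq:boundary}, can be sketched as follows. This is our simplified illustration: a plain $[-1, 0, 1]$ difference kernel replaces the Sobel filters of \eqref{eq:divergence}, which on the zero-interleaved support image already yields a divergence of $-2$ between two opposing unit vectors.

```python
import numpy as np
from scipy.ndimage import correlate1d

def zero_pixel_boundary(v):
    """v: predicted field of shape (2, H, W) with v[0] = v_y (rows)
    and v[1] = v_x (columns). Returns the boundary image I_b on the
    2x-resolution support image."""
    H, W = v.shape[1:]
    support = np.zeros((2, 2 * H, 2 * W))
    support[:, ::2, ::2] = v                               # the Z operator
    div = (correlate1d(support[1], [-1, 0, 1], axis=1)     # d v_x / dx
           + correlate1d(support[0], [-1, 0, 1], axis=0))  # d v_y / dy
    return np.maximum(0.0, -(div + 1.0))                   # ReLU(-(div + 1))

# Toy field: a vertical boundary between columns 2 and 3 of a 4x6 image.
v = np.zeros((2, 4, 6))
v[1, :, :3], v[1, :, 3:] = 1.0, -1.0
Ib = zero_pixel_boundary(v)   # nonzero only at support column 5
```

In the toy example, the opposing unit vectors at original columns 2 and 3 produce a divergence of $-2$ at the inter-pixel support column between them, so $I_b$ takes the value $1$ there and $0$ everywhere else.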
In order to evaluate the zero-pixel boundary with traditional metrics, and to visualize it, boundary pixels need to be represented on the original image. We do so by copying the values of boundary pixels in the support image to the corresponding neighboring pixels belonging to the original image, averaging the values if necessary; all the other values are set to zero. This leads to two-pixel boundaries with the added property of being differentiable, which may be particularly useful when integrating the boundary detection task in a larger deep network.
When evaluating the method with the surface distance metrics discussed in \S~\ref{sec:metrics_intro}, the resulting image is thresholded to obtain a binary boundary image. More specifically, all positive pixels are considered to belong to a boundary, and this remains fixed throughout the experiments.
An important aspect of VT is that it provides directional information on each pixel, including those pixels which are associated to the boundaries. Directional boundaries have been previously explored with applications and advantages~\citep{Maninis2017-cob}. We provide two key examples. In the first application, we detect boundary pixels which are proposal candidates for straight lines in a differentiable way. In the second, we detect superpixels by grouping pixels inside a convex boundary.
These related tasks are discussed more in depth with some qualitative results in the appendices.
\subsection{Metrics}
\label{sec:metrics_intro}
The standard measures for boundary evaluation~\citep{martin2004learning} are fixed contour threshold (ODS) and per-image best threshold (OIS). With these approaches it is possible to compute the recall (R), the precision (P) and the F score.
These are the metrics traditionally applied using the approach proposed by \citet{martin2004learning} to determine the true positives and false negatives. It proceeds by creating a one-to-one correspondence between the ground-truth boundary pixels and the predicted boundary pixels. In this context, each pixel without a correspondence within a fixed threshold distance is considered either a false positive or a false negative.
This approach, however, suffers from a few drawbacks:
\begin{itemize}
\item It is extremely sensitive to differences in thickness between the prediction and the ground-truth, as the one-to-one correspondence will not be satisfied, resulting in a large number of false positives or false negatives without regard for the actual boundary quality.
\item ODS and OIS optimize the threshold hyper-parameter on the test set. This may lead to an unfair evaluation of the quality of the detected boundaries.
\item As it is based on pixel correspondences, it cannot be directly applied to zero pixel thin boundaries.
\end{itemize}
To overcome these drawbacks and have a metric that can be applied to the zero-pixel surfaces as well, we propose to use the average surface distances (asd), favored in the medical imaging community~\citep{yeghiazaryan2018family}. For every boundary pixel or segment in the prediction, the distance to the closest ground-truth boundary surface point (either a pixel or a segment) is computed, and these distances are averaged. The same can be done starting from the ground-truth pixels or segments. The distance from the prediction to the ground truth ($asd_P$) is representative of the quality of the prediction, a precision-like term, while the distance from the ground truth to the prediction ($asd_R$), like recall, shows how effectively all the boundaries are detected. The two scores can be averaged to obtain a single metric, the average symmetric surface distance ($assd$).
Given that no one-to-one correspondences between pixels are computed, $asd_P$, $asd_R$ and $assd$ are less sensitive to the boundary thickness and more influenced by the overall quality of the prediction, compared to the previously defined metrics. However, one drawback of the surface distance metrics is their high sensitivity to isolated false detections or unmatched ground-truth boundary pixels.
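A pixel-level version of these metrics can be computed with two distance transforms. The sketch below is our own approximation (pixels only, no boundary segments; the function name is ours), taking $assd$ as the unweighted mean of the two directed terms.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def surface_distances(pred, gt):
    """asd_P: mean distance from predicted boundary pixels to the
    nearest ground-truth boundary pixel (precision-like); asd_R: the
    reverse (recall-like); assd: their average. Both inputs are
    binary boundary masks."""
    dist_to_gt = distance_transform_edt(~gt)
    dist_to_pred = distance_transform_edt(~pred)
    asd_p = dist_to_gt[pred].mean()
    asd_r = dist_to_pred[gt].mean()
    return asd_p, asd_r, 0.5 * (asd_p + asd_r)

# Example: a prediction shifted one pixel right of the ground truth.
gt = np.zeros((5, 5), dtype=bool)
gt[:, 2] = True
pred = np.zeros((5, 5), dtype=bool)
pred[:, 3] = True
```

In this example every predicted pixel lies one pixel from the ground truth and vice versa, so $asd_P = asd_R = assd = 1$; note that, unlike the F score, a uniformly thickened prediction would change these values only mildly.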
\section{Experiments}
\label{sec:experiments}
We compare the proposed method on Cityscapes \citep{Cordts2016-cityscapes}, Mapillary Vistas \citep{neuhold2017mapillary} and Synthia \citep{Ros2016TheSD}, three datasets providing high quality instance and semantic boundaries. We also compare our method on BSDS500~\citep{bsds}, despite its inconsistently thick boundary annotations. The dataset contains only 200 training images of relatively low resolution. For this evaluation, every method is trained on the BSDS500 training set, using the network pretrained on Mapillary Vistas.
We compare VT against three different losses on the binary representation: the Dice loss (DL) \citep{dice1945measures}, a weighted combination of the Dice loss and cross-entropy (DCL) \citep{deng2018learning}, and the weighted cross-entropy (WCL) \citep{xie2015holistically}.
As part of the ablation, we further compare our method against the Distance Transform (DT) representation of boundaries which predicts a field of distances to the closest boundary. This is trained with a simple L1 loss between the ground truth $d_{gt}$ and the predicted distance $\hat{d}$.
Each representation and training loss is computed using the same network architecture and optimization.
\begin{table}[!t]
\centering
\setlength{\tabcolsep}{3.4mm}
\resizebox{0.90\columnwidth}{!}{
\begin{tabular}{ c | c c c | c c c | c c c}\thickhline
\multirow{2}{*}{Method} & \multirow{2}{*}{$asd_R$} & \multirow{2}{*}{$asd_P$} & \multirow{2}{*}{$assd$} & \multicolumn{3}{c|}{ODS} & \multicolumn{3}{c}{OIS} \\
& & & & R & P & F & R & P & F \\ \hline
\rule{0pt}{2ex}OP $@t_0$ & 5.37 & 3.29 & 4.33 & 0.814 & 0.878 & 0.845 & 0.819 & 0.874 & 0.846 \\
TP & 5.16 & 3.92 & 4.54 & 0.699 & 0.740 & 0.719 & 0.726 & 0.718 & 0.722 \\
TP $@t_0$ & 5.16 & 3.92 & 4.54 & 0.877 & 0.500 & 0.637 & / & / & / \\
TG & 6.02 & 3.26 & 4.64 & 0.621 & 0.850 & 0.718 & 0.621 & 0.850 & 0.718 \\
TG $@t_0$ & 6.02 & 3.26 & 4.64 & 0.464 & 0.917 & 0.616 & / & / & / \\ \thickhline
\end{tabular}}
\caption{Comparison of the sensitivity to thickness in prediction and ground truth of the used metrics. OP is the original prediction, TP is the thickened version of the prediction and TG the thickened ground truth. In TP, we use the original ground truth and, in TG, the original prediction.}
\label{tab:metric_result}
\end{table}
\vspace{-1mm}
\begin{table}[!t]
\centering
\setlength{\tabcolsep}{3.4mm}
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{ c c c c | c c c | c c c} \thickhline
\multirow{2}{*}{Datasets and Method} & \multirow{2}{*}{$asd_R$} & \multirow{2}{*}{$asd_P$} & \multirow{2}{*}{$assd$} & \multicolumn{3}{c|}{ODS} & \multicolumn{3}{c}{OIS} \\
& & & & R & P & F & R & P & F \\ \hline
\xrowht[()]{10pt}\textbf{Cityscapes} & \multicolumn{3}{c}{train: 2500} & \multicolumn{3}{c}{validation: 475} & \multicolumn{3}{c}{test: 500} \\ \hline
\rule{0pt}{2ex}\emph{VT} & 5.37 & \textbf{3.29} & \textbf{4.33} & \textbf{0.814} & \textbf{0.878} & \textbf{0.845} & \textbf{0.819} & \textbf{0.874} & \textbf{0.846} \\
DCL & \textbf{5.17} & 4.24 & 4.71 & 0.711 & 0.811 & 0.758 & 0.722 & 0.806 & 0.762 \\
DL & 5.40 & 6.51 & 5.96 & 0.758 & 0.747 & 0.752 & 0.747 & 0.760 & 0.754 \\
WCL & 6.42 & 5.98 & 6.20 & 0.773 & 0.756 & 0.764 & 0.755 & 0.779 & 0.767 \\
DT & 7.76 & 3.50 & 5.63 & 0.651 & 0.683 & 0.667 & 0.642 & 0.696 & 0.668 \\ \thickhline
\xrowht[()]{10pt}\textbf{Synthia} & \multicolumn{3}{c}{train: 6600} & \multicolumn{3}{c}{validation: 800} & \multicolumn{3}{c}{test: 1600} \\ \hline
\rule{0pt}{2ex}\emph{VT} & \textbf{1.73} & 1.61 & \textbf{1.67} & 0.767 & 0.877 & 0.819 & 0.767 & 0.878 & 0.819 \\
DCL & 3.60 & 2.69 & 3.15 & 0.682 & 0.754 & 0.717 & 0.710 & 0.730 & 0.720 \\
DL & 3.02 & \textbf{0.79} & 1.91 & 0.810 & 0.905 & 0.855 & 0.816 & 0.898 & 0.855 \\
WCL & 1.76 & 1.81 & 1.79 & \textbf{0.874} & \textbf{0.929} & \textbf{0.900} & \textbf{0.888} & \textbf{0.927} & \textbf{0.907} \\
DT & 4.72 & 2.76 & 3.74 & 0.786 & 0.840 & 0.812 & 0.782 & 0.846 & 0.813 \\ \thickhline
\xrowht[()]{10pt}\textbf{Mapillary Vistas} & \multicolumn{3}{c}{train: 17000} & \multicolumn{3}{c}{validation: 1000} & \multicolumn{3}{c}{test: 2000} \\ \hline
\rule{0pt}{2ex}\emph{VT} & 3.99 & \textbf{3.20} & \textbf{3.60} & 0.761 & \textbf{0.857} & \textbf{0.806} & 0.778 & \textbf{0.842} & \textbf{0.809} \\
DCL & 4.64 & 4.06 & 4.35 & 0.670 & 0.807 & 0.750 & 0.724 & 0.784 & 0.753 \\
DL & 5.16 & 3.28 & 4.22 & 0.735 & 0.787 & 0.760 & 0.733 & 0.792 & 0.761 \\
WCL & \textbf{2.86} & 5.67 & 4.27 & 0.759 & 0.730 & 0.744 & 0.767 & 0.763 & 0.765 \\
DT & 9.42 & 4.83 & 7.13 & \textbf{0.856} & 0.271 & 0.412 & \textbf{0.856} & 0.271 & 0.412 \\ \thickhline
\xrowht[()]{10pt}\textbf{BSDS500} & \multicolumn{3}{c}{train: 200} & \multicolumn{3}{c}{validation: 100} & \multicolumn{3}{c}{test: 200} \\ \hline
\rule{0pt}{2ex}\emph{VT} & 5.06 & 6.59 & 5.83 & \textbf{0.72} & 0.638 & 0.676 & \textbf{0.721} & 0.637 & 0.676 \\
DCL & 6.44 & 6.14 & 6.29 & 0.598 & 0.559 & 0.578 & 0.597 & 0.560 & 0.578 \\
DL & 7.99 & 5.81 & 6.90 & 0.540 & 0.534 & 0.537 & 0.543 & 0.531 & 0.537 \\
WCL & \textbf{4.04} & 7.28 & \textbf{5.66} & 0.660 & 0.718 & \textbf{0.688} & 0.662 & 0.716 & \textbf{0.688} \\
DT & 6.44 & \textbf{5.30} & 5.87 & 0.395 & \textbf{0.860} & 0.541 & 0.395 & \textbf{0.860} & 0.541 \\
Human & / & / & / & / & / & 0.8 & / & / & 0.8 \\ \hline
\end{tabular}}
\caption{Evaluation results on the Cityscapes, Synthia, Mapillary Vistas and BSDS500 datasets. For each dataset we indicate the number of images respectively in the train, validation and test set.}
\label{tab:results}
\end{table}
\paragraph{Evaluation Metrics.}
We evaluate each method based on the traditionally used R, P and F score in the ODS and OIS setups as well as on $asd_R$, $asd_P$, and $assd$ scores.
For the traditional metrics, we follow the BSDS500 evaluation protocol \citep{bsds} and use an error tolerance of $0.0025$ of the diagonal length of the image. This is lower than what is commonly used on the BSDS500 dataset, to account for the larger image sizes.
To compute the surface distance metrics ($asd_R$, $asd_P$, and $assd$), each boundary representation is converted to a binary form with a thresholding operation. For the DL, DCL, WCL, and DT models, the threshold is fixed using a selected validation set for each dataset. For VT, instead, it is fixed across all tests to a divergence value of $-1$, the same offset used during inference before applying the ReLU (\S~\ref{sec:inference}); points with lower divergence are classified as boundaries and the others as non-boundaries.
\subsection{Metric Analysis}
We first show an analysis to support the use of the surface distance metrics~\citep{yeghiazaryan2018family}, $asd_R$, $asd_P$, and $assd$ over R, P, and F score in ODS and OIS conditions. We use the VT method results on the Cityscapes dataset, where boundaries have a constant 2-pixel thickness.
To show the sensitivity of the metrics to different thicknesses in the prediction and ground truth,
we compare the original prediction (OP) with the scores achieved when \emph{doubling} the thickness of the prediction (TP) or of the ground truth (TG). When computing the R, P, and F scores, we use two experimental setups: \emph{(i)} changing the prediction threshold in an ODS and OIS fashion and \emph{(ii)} keeping it fixed to the same value $t_0$ used in OP. For the surface distance metrics, the threshold is always fixed as in every other experiment.
From the results in Table \ref{tab:metric_result}, it is clear that the R, P and F scores depend strongly on the relative thickness of ground truth and prediction, with variations in the F score of over $27\%$ versus only a $7\%$ change in $assd$. This small influence of thickness shows that the surface distance metrics are suited to evaluating predictions without post-processing. Furthermore, the evaluation also shows that the metric does not favor the proposed method for its thin predictions.
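To make the metric definitions concrete, the surface distance scores can be sketched as follows. This is an illustrative brute-force Python implementation, not the evaluation code used in the experiments; the function names, and the choice of an unweighted average for $assd$, are our assumptions.

```python
import numpy as np

def avg_surface_distance(src, dst):
    """Mean Euclidean distance from each boundary pixel in the binary
    mask `src` to its nearest boundary pixel in `dst` (brute force)."""
    sp = np.argwhere(src)                      # source boundary coordinates
    dp = np.argwhere(dst)                      # destination boundary coordinates
    if len(sp) == 0 or len(dp) == 0:
        return np.inf
    d = np.linalg.norm(sp[:, None, :] - dp[None, :, :], axis=-1)
    return d.min(axis=1).mean()                # nearest neighbour, then average

def surface_distance_scores(pred, gt):
    asd_p = avg_surface_distance(pred, gt)     # precision-like: prediction to GT
    asd_r = avg_surface_distance(gt, pred)     # recall-like: GT to prediction
    assd = 0.5 * (asd_p + asd_r)               # symmetric average of the two
    return asd_r, asd_p, assd
```

Because the distances are taken between boundary surfaces, doubling the thickness of either map changes the scores only slightly, unlike pixel-overlap metrics.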
\subsection{Representation Performance Comparison}
\begin{figure}
\centering
\includegraphics[width=0.22\linewidth]{figs/qual_image_78.jpg}
\includegraphics[width=0.22\linewidth]{figs/qual_truth_78.png}
\includegraphics[width=0.22\linewidth]{figs/qual_prediction_78.png} \\
\includegraphics[width=0.22\linewidth]{figs/qual_cross_78.png}
\includegraphics[width=0.22\linewidth]{figs/qual_dice_cross_78.png}
\includegraphics[width=0.22\linewidth]{figs/qual_dice_78.png}
\includegraphics[width=0.22\linewidth]{figs/qual_dt_78.png}
\caption{Qualitative comparison between methods on a Mapillary Vistas image from the test set. From left to right on the first row: the original image, the ground truth and the prediction using VT. On the second row: the prediction using WCL, DCL, DL, and DT.}
\label{fig:main_qual}
\vspace{-2mm}
\end{figure}
In this section, we show the performance of each representation on the four datasets considered. We do not apply non-maximum suppression, so as to keep the post-processing identical across methods and to evaluate them under the same conditions in which they could be integrated in other tasks (\S \ref{sec:intro}).
On Cityscapes, Synthia and Mapillary Vistas, our method consistently outperforms the other boundary representations in terms of $assd$, the metric least affected by prediction thickness. Furthermore, it achieves competitive results on all other metrics, being the best performing method on Cityscapes and Mapillary Vistas in terms of F score. Throughout Table \ref{tab:results}, it is possible to see that VT is the most stable method on the F score, being able to predict uniformly thin results. On BSDS500, VT is the second best performing method on $assd$ and F score, with a strong drop in overall performance given the dataset limitations. The VT field, in particular, suffers from the inconsistent annotations, as they change the morphology of the field both on and away from the boundaries, with errors that are not limited to isolated pixels. Despite not achieving the best score on BSDS500, the result shows that VT is also able to predict low-level image contours without semantic meaning. To show its full capabilities in this task, it will be necessary to evaluate on a larger dataset with a clear definition of boundary, which, to our knowledge, is not currently available.
From the qualitative comparison between predictions in Fig.~\ref{fig:main_qual}, it is evident that the Vector Transform is the only method able to predict crisp boundaries without requiring any non-maximum suppression. A particularly important comparison here is that of DT versus VT. We observe from the results that the DT prediction in particular tends to detect thick boundaries, since a threshold higher than 0 is required to obtain enough recall, in turn leading to thick detections. The discussion provided at the end of \S \ref{sec:boundarytransform} is directly relevant to this behavior.
\section{Conclusion}
\label{sec:conclusion}
Despite being an old computer vision problem, boundary detection remains an active research field, both due to the many issues involved and its potential for other vision problems. In this paper, we propose the Vector Transform and theoretically demonstrate its equivalence with a surface representation of boundaries. The substantially different representation and corresponding approach of the Vector Transform automatically solve two of the biggest challenges: class imbalance in the prediction and the ability to output crisp boundaries.
We show the high performance capabilities of the representation and a few related tasks that can be tackled from such an approach, with extensive evaluations.
The formulation proposed in our work opens possibilities of new approaches on the use of boundaries and adds research questions on its applicability for several other related tasks.
\paragraph{Acknowledgements.} This research was funded by Align Technology Switzerland GmbH (project AlignTech-ETH). Research was also funded by the EU Horizon 2020 grant agreement No. 820434.
\section{Vector Transform in Practice}
There are some aspects of the Vector Transform which require special attention in several non-ideal cases. The first concerns ground truth generation in datasets that were not designed with the interpretation of boundaries as surfaces. The second concerns theoretical and practical aspects related to discontinuities and discretization.
\subsection*{Ground Truth Generation}
We now explain how, in practice, we extract the Vector Transform from the ground truth boundary images to train the proposed method. For non-boundary pixels, the solution of the Vector Transform is clearly defined as the direction towards the closest boundary.
In practice, to mitigate discretization-related errors, the closest boundary pixel is computed by averaging the multiple closest ones. This has the same effect as interpolating a planar surface over the set of closest boundary pixels and computing the normal direction to that surface. In the extreme case in which there are multiple equally distant boundaries, one is chosen randomly as the closest and the direction is computed towards it.
For boundary pixels, there can be multiple scenarios to take into account depending on the dataset quality:
\begin{itemize}
\item If the boundary is taken from a segmentation image, it can be treated as a non-boundary pixel, with its direction computed toward the closest set of pixels outside its semantic class/object instance.
\item If the boundary is a manually annotated single-pixel boundary, it is possible to consistently pair it to one of the neighboring non-boundary pixels and use the same vector direction. Any neighboring non-boundary pixel can be taken, as long as the selection process remains consistent for each case.
\item In case of thick boundaries, it is possible to devise a technique to assign a direction to each of their pixels - such as the direction of the closest non-boundary pixel, resolving ties as in the previous point. However, this is not a proper definition of a surface boundary and, in practice, it is not considered in any of the tested cases.
\end{itemize}
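The non-boundary case above can be sketched as follows. As a simplification over the averaging described in the text, ties are broken by taking a single nearest boundary pixel, and the brute-force search is only practical for small maps; the function name is ours.

```python
import numpy as np

def vector_transform_gt(boundary):
    """Unit vectors pointing from every pixel toward its closest boundary
    pixel in the binary map `boundary`; boundary pixels get a zero vector."""
    bp = np.argwhere(boundary > 0)               # boundary pixel coordinates
    yy, xx = np.indices(boundary.shape)
    pts = np.stack([yy, xx], axis=-1).reshape(-1, 2)
    d = np.linalg.norm(pts[:, None, :] - bp[None, :, :], axis=-1)
    nearest = bp[d.argmin(axis=1)]               # one closest boundary pixel each
    v = (nearest - pts).astype(float)            # vector toward that pixel
    norm = np.linalg.norm(v, axis=1)
    norm[norm == 0] = 1.0                        # keep zero vectors on the boundary
    v = (v / norm[:, None]).reshape(*boundary.shape, 2)
    return v[..., 0], v[..., 1]                  # (vy, vx) unit components
```

A distance transform with index back-tracking would give the same result in linear time; the quadratic version above is kept for clarity.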
\subsection*{Discontinuities and Discretization}
As explored in \S\ref{sec:boundarytransform}, the invertibility of the Vector Transform does not strictly hold at discontinuities. In the infinitesimal neighborhood of a discontinuous boundary, the divergence is evaluated using an approximate normal. Under a large number of circumstances, the transform still yields a divergence of $-2$ or lower at discontinuities. However, discretization, possibly combined with discontinuities, can affect the transform, so that the divergence of the field may rise above $-2$. Fortunately, the representation provides a relatively large margin between the divergence of a boundary point and that of a non-boundary point. Such a margin provides the robustness required to predict correct boundaries even in crowded regions.
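The margin discussed here can be checked numerically with a discrete divergence. The sketch below uses central differences (`np.gradient`) rather than the Sobel filter of the implementation, so the threshold must be matched to the estimator: a perfect one-pixel sink evaluates to $-1$ here, while non-boundary pixels stay near $0$. The function names and the default threshold are ours.

```python
import numpy as np

def divergence(vy, vx):
    """Discrete divergence of the VT field via central differences."""
    return np.gradient(vy, axis=0) + np.gradient(vx, axis=1)

def boundaries_from_field(vy, vx, thresh=-0.5):
    """Boundary mask: pixels whose divergence falls below `thresh`.
    The right threshold depends on the derivative estimator used."""
    return divergence(vy, vx) < thresh
```

With the Sobel-based divergence of the implementation, the corresponding threshold is the $-1$ value mentioned in the evaluation setup.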
\section{Additional Results Analysis}
In this section we report additional results with qualitative visualizations to support the observations in Section \ref{sec:experiments} and an additional experiment to analyze prediction profiles of DT versus VT measures around a predicted boundary.
We show qualitative results obtained with every method on an image taken from each of the datasets. From the qualitative comparison of predictions in Fig.~\ref{fig:sup_qual}, it is evident that the Vector Transform is the only method able to consistently predict crisp boundaries without requiring any non-maximum suppression. This provides a significant advantage, particularly in crowded regions where traditional methods require NMS post-processing before being used for downstream tasks. Therefore, our method provides a strong advantage when used at training time as an aid for different tasks.
\begin{figure}[h]
\centering
\includegraphics[width=0.23\linewidth]{figs/sup_qual/image_frankfurt_000000_001236.jpg}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/truth_frankfurt_000000_001236.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/prediction_frankfurt_000000_001236.png} \\
\includegraphics[width=0.23\linewidth]{figs/sup_qual/cross_frankfurt_000000_001236.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dice_cross_frankfurt_000000_001236.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dice_frankfurt_000000_001236.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dt_frankfurt_000000_001236.png} \\
\includegraphics[width=0.23\linewidth]{figs/sup_qual/image_2.jpg}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/truth_2.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/prediction_2.png} \\
\includegraphics[width=0.23\linewidth]{figs/sup_qual/cross_2.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dice_cross_2.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dice_2.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dt_2.png} \\
\includegraphics[width=0.23\linewidth]{figs/sup_qual/image_54.jpg}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/truth_54.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/prediction_54.png} \\
\includegraphics[width=0.23\linewidth]{figs/sup_qual/cross_54.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dice_cross_54.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dice_54.png}
\includegraphics[width=0.23\linewidth]{figs/sup_qual/dt_54.png}
\caption{Qualitative comparison between methods. For each of the three examples, from left to right on the first row: the original image, the ground truth and the prediction using VT. On the second row: the prediction using WCL, DCL, DL, and DT. From top to bottom, the first image is taken from the Cityscapes test set, the second from Synthia and the third from Mapillary Vistas.}
\label{fig:sup_qual}
\end{figure}
Additionally, for the prediction profile experiment, we measure the divergence of the VT prediction along the normal direction of the boundary at increasing distances. We compute the divergence versus distance for each predicted pixel, and at each distance we compute the mean and standard deviation of these measurements. We perform the exact same experiment for the DT values instead of the VT divergence. Both results are plotted in Fig.~\ref{fig:VTvsDT}. The mean VT divergence (or DT value) versus the distance along the normal is plotted in black, while the shaded region shows the standard deviation of the measure at each distance. Note how the VT divergence quickly saturates from $-2$ to around $0$ within a single pixel, while having an extremely low standard deviation. On the other hand, the DT prediction is mostly linear w.r.t.\ distance by design, but shows a large uncertainty around distance $0$.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\linewidth]{figs/profiles/image_frankfurt_000000_001016.png}
\includegraphics[width=0.45\linewidth]{figs/profiles/truth_frankfurt_000000_001016.png} \\
\includegraphics[width=0.45\linewidth]{figs/profiles/vt_frankfurt_000000_001016.png}
\includegraphics[width=0.45\linewidth]{figs/profiles/dt_frankfurt_000000_001016.png} \\
\includegraphics[width=0.45\linewidth]{figs/profiles/vt_profile.pdf}
\includegraphics[width=0.45\linewidth]{figs/profiles/dt_profile.pdf} \\
\caption{Comparison of the predictions using VT and DT on a randomly selected image from Cityscapes dataset. Going from top to the bottom row, we show the image (left) with the corresponding ground truth (right), the predicted divergence (left) and predicted DT (right) and the two prediction profiles. The profiles show the divergence on predicted VT (left) and the predicted DT (right) versus the distance from the mid-boundary surface for the VT and DT methods. The plots show the mean (black line) and the standard deviation (gray shading) around it.}
\label{fig:VTvsDT}
\end{figure}
\section{Boundary Direction Estimation}
As our method outputs the VT at each pixel, once the boundaries are localized in the image it is natural to also extract the boundary direction.
Differently from other methods \citep{Maninis2017-cob}, no additional module or post-processing is required to estimate the direction. Furthermore, our method is able to predict continuous angles, without having to select from a discrete set of values.
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth]{figs/paper_direction_2.png}
\includegraphics[width=0.3\linewidth]{figs/paper_direction_7.png}
\includegraphics[width=0.3\linewidth]{figs/paper_direction_11.png}
\caption{Qualitative results of the boundary direction estimation on three images of the Mapillary Vistas test set. In the figures, the direction of the boundaries is plotted at the boundary pixels, which are thickened to ease visualization. Vertical lines correspond to colors in the range of green and blue, which gradually turn into red and pink for horizontal lines.}
\label{fig:angle}
\end{figure}
In Fig.~\ref{fig:angle}, we show a qualitative representation of our boundary direction estimation on three example images from Mapillary Vistas, to visualize the prediction quality.
From a quantitative standpoint, the predicted VT field achieves a root mean squared error of $7.2$ degrees on the estimated angle $\theta$.
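One possible recipe for the direction estimate, assumed here for illustration rather than taken from the implementation: since the VT vector is normal to the boundary, the boundary orientation is the field angle rotated by $90$ degrees and folded into $[0, 180)$.

```python
import numpy as np

def boundary_direction(vy, vx):
    """Boundary orientation in degrees, in [0, 180), from the VT field.
    The field is normal to the boundary, so rotate its angle by 90 deg
    and discard the sign of the direction (orientation only)."""
    normal = np.degrees(np.arctan2(vy, vx))   # angle of the field itself
    return (normal + 90.0) % 180.0            # perpendicular orientation
```

For example, a field pointing horizontally across a vertical edge yields a boundary orientation of $90$ degrees.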
\section{Straight Line Proposals}
Given that in our method each boundary pixel has a direction feature, it is natural to tackle the task of straight-line proposal generation without any specific supervision. A straight line is defined as a boundary whose direction does not change across neighboring pixels.
To detect such points, it is possible to apply a simple algorithm:
\begin{itemize}
\item First, the VT field is converted using only the absolute values of the two channels. In this way, differences between vectors having the same orientation but opposite directions on the two sides of a boundary are removed.
\item Then, the derivatives of the two channels are approximated using a Sobel filter and their absolute values are summed in a pixel-wise manner.
\item In the obtained image, pixels with a high value indicate places where the orientation changes, while low values indicate constant orientation. Therefore, we define a threshold ($t=0.05$) and consider boundary pixels with a value below the threshold to be part of a straight line.
\end{itemize}
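The three steps above can be sketched directly in Python. The small hand-rolled filtering routine stands in for any Sobel implementation; the function names and the toy cross-correlation are ours.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def filt2_same(img, k):
    """Tiny 'same' 3x3 cross-correlation with zero padding (no extra deps)."""
    H, W = img.shape
    out = np.zeros((H, W))
    p = np.pad(img, 1)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + H, dx:dx + W]
    return out

def straight_line_mask(vy, vx, boundary, t=0.05):
    """Boundary pixels where the field orientation is locally constant."""
    ay, ax = np.abs(vy), np.abs(vx)          # fold opposite directions together
    change = sum(np.abs(filt2_same(c, k))    # |d/dx| + |d/dy| of both channels
                 for c in (ay, ax) for k in (SOBEL_X, SOBEL_X.T))
    return boundary & (change < t)
```

On a field with constant orientation the summed derivatives vanish away from the image border, so the corresponding boundary pixels are kept as straight-line proposals.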
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth]{figs/paper_line_8.png}
\includegraphics[width=0.3\linewidth]{figs/paper_line_10.png}
\includegraphics[width=0.3\linewidth]{figs/paper_line_16.png}
\caption{Qualitative example of line detection using the VT field on three images of Mapillary Vistas test set. In the images, the straight lines in the boundaries are identified with the red colors. For visualization purposes, the boundaries have first been artificially thickened before detecting straight lines.}
\label{fig:line_det}
\end{figure}
In Fig.~\ref{fig:line_det}, we show some qualitative results obtained with the above method. It is possible to identify short straight lines detected in areas of the image without an apparent straight line. This is due to the small size ($3\times3$) of the derivative kernel used to detect a straight line and could be solved by filtering the prediction. We show the result without post-processing as proof that our method can be applied regardless of the type of line detection needed, from small segments to long lines.
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_26.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_33.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_51.png} \\
\includegraphics[width=0.3\linewidth]{figs/sup_super/part_clusters_26.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/part_clusters_33.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/part_clusters_51.png} \\
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_26_COB.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_33_COB.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_51_COB.png} \\
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_26_MCG.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_33_MCG.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_51_MCG.png} \\
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_26_SCG.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_33_SCG.png}
\includegraphics[width=0.3\linewidth]{figs/sup_super/image_51_SCG.png}
\caption{Examples of superpixels obtained on three images of Mapillary Vistas test set using four different methods. From top to bottom: the original images, the superpixels obtained using the VT field, COB \citep{Maninis2017-cob}, MCG \citep{mcg}, and SCG \citep{mcg}, respectively.}
\label{fig:part_seg}
\end{figure}
\section{Superpixel}
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{figs/Net_clarifier.pdf}
\caption{\textbf{Network architecture overview}. Schematics of the network architecture highlighting the way HRNet is used and how full resolution boundaries are predicted. Each convolution, except from the last one, includes batch normalization \citep{pmlr-v37-ioffe15} and a ReLU activation \citep{relu}. As output, we show the predicted boundary; this can be obtained from any method as we use the same network changing only the post-processing.}
\label{fig:architecture}
\end{figure}
The VT field can also be used to create superpixels without any specific supervision. More specifically, when the field is trained on semantically meaningful boundaries, it can be used to extract objects or object parts.
This can be done without first extracting boundaries, using only the field and applying a region-growing algorithm to it.
Specifically, we use the following algorithm:
\begin{itemize}
\item First, the divergence is computed on the VT field using a Sobel filter, as done to obtain boundary pixels. Pixels with high divergence values are the source points of the field and are treated as centroids of the parts. In case there is a connected region of high divergence, it is considered a single centroid region.
\item Each pixel is associated with a two-dimensional point based on its coordinates on the image grid. The point is then moved (its coordinates are changed) opposite to the field direction, towards the center and away from the border. This continues until the algorithm converges or for a maximum number of steps, resulting in each pixel being associated with a point that has moved away from its original position.
\item Based on the position of the obtained point, each pixel is then assigned to the closest part centroid. Thanks to the complex shapes that the centroid regions can assume, the resulting clusters can have high structural complexity and represent complete objects or large parts.
In the special case of pixels whose associated point falls outside the image border, an assignment to a part centroid cannot be made. These pixels are instead clustered with the DBSCAN \citep{dbscan} algorithm, using the positions of the associated points as features.
\end{itemize}
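A minimal sketch of the advection step, with simplifications: a fixed number of iterations instead of a convergence test, nearest-pixel sampling of the field, and no DBSCAN fallback. Names and defaults are ours.

```python
import numpy as np

def advect_points(vy, vx, steps=50, step_size=1.0):
    """Move every pixel's coordinate against the VT field, i.e. toward
    the part centroids and away from the boundaries."""
    H, W = vy.shape
    py, px = np.indices((H, W)).astype(float)
    for _ in range(steps):
        iy = np.clip(py.round().astype(int), 0, H - 1)   # sample the field
        ix = np.clip(px.round().astype(int), 0, W - 1)   # at the current pos.
        py -= step_size * vy[iy, ix]                     # opposite of the field
        px -= step_size * vx[iy, ix]
    return py, px
```

After advection, points on the two sides of a boundary have drifted apart, so a simple nearest-centroid assignment (or DBSCAN for points that left the image) separates the superpixels.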
The technique benefits from being highly parallelizable, with the clustering algorithm only needing to be applied to a limited number of well separated points.
We show some results obtained by grouping pixels based on the predicted Vector Transform in Fig.~\ref{fig:part_seg}. We compare to some well-known superpixel methods, i.e.\@\xspace, Convolutional Oriented Boundaries (COB) \citep{Maninis2017-cob}, Multiscale Combinatorial Grouping (MCG) \citep{mcg}, and Singlescale Combinatorial Grouping (SCG) \citep{mcg}, whose results are generated by thresholding the occlusion boundaries (Ultrametric Contour Map, UCM \citep{bsds}) using the optimal threshold of each method. It is visible that the VT field can group an object of high complexity as a single superpixel, which can ease downstream tasks that benefit from such a characteristic. In contrast, previous methods tend to create a higher number of clusters whose boundaries do not always coincide with complete objects. This task further justifies our re-definition of boundary detection.
\section{Network Architecture and Training}
\label{sec:archandtrain}
In this section, we provide more details on the network architecture and the training procedure. Figure \ref{fig:architecture} shows a schematic of the architecture used. The architecture is chosen to take into account both high and low resolution details, as commonly done by boundary detection methods~\citep{liu2017richer,xie2015holistically}. More specifically, the use of a ResNeXt \citep{Xie_2017_CVPR} block to process the high resolution signals and the upsampling stage are inspired by \cite{deng2018learning}. Analyzing the architecture in detail, we can identify four different sections:
\begin{itemize}
\item \textbf{Downsampling phase}: the image goes through two strided convolution layers which reduce the image resolution and increase the number of channels to $64$. This is part of HRNet \citep{wang2020deep} but is represented separately for a better understanding.
\item \textbf{HRNet Main Body}: this is the main part of HRNet \citep{wang2020deep}, which extracts multi-resolution features after the two strided convolutions. The input and output of this block have the same resolution.
\item \textbf{ResNeXt Block}: this is a single ResNeXt \citep{Xie_2017_CVPR} block that is used to extract complex features to be used while retrieving the full resolution. More specifically, the block has cardinality 4, using the same notation as \cite{Xie_2017_CVPR}.
\item \textbf{Upsampling phase}: the final part of the network takes as input the output of HRNet and upsamples it to produce full resolution predictions. First, the input is bilinearly upsampled and the result is merged with the output of the ResNeXt block by concatenating the two and applying a pixel-wise fully-connected layer. Finally, the result is further bilinearly upsampled and one final convolution layer is used to predict the output representation.
\end{itemize}
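The upsampling phase can be sketched as follows in PyTorch. The channel counts and layer placement are our assumptions for illustration; only the overall data flow (bilinear upsampling, concatenation with the ResNeXt features, a pixel-wise fully-connected layer realized as a $1\times1$ convolution, and a final prediction convolution) follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleHead(nn.Module):
    """Sketch of the upsampling phase: merge upsampled HRNet features with
    full-resolution ResNeXt features, then predict the 2-channel VT field."""
    def __init__(self, backbone_ch=64, highres_ch=32, out_ch=2):
        super().__init__()
        # a pixel-wise fully-connected layer is a 1x1 convolution
        self.merge = nn.Conv2d(backbone_ch + highres_ch, highres_ch, 1)
        self.predict = nn.Conv2d(highres_ch, out_ch, 3, padding=1)

    def forward(self, feats, highres):
        up = F.interpolate(feats, scale_factor=2, mode='bilinear',
                           align_corners=False)           # first upsampling
        x = F.relu(self.merge(torch.cat([up, highres], dim=1)))
        x = F.interpolate(x, scale_factor=2, mode='bilinear',
                          align_corners=False)            # back to full res.
        return self.predict(x)                            # output representation
```

Here `feats` stands for the HRNet output at quarter resolution and `highres` for the ResNeXt-block output at half resolution; both names are hypothetical.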
The training protocol used for the network resembles the one used in Panoptic-DeepLab \citep{Cheng2020-panopticdeeplab}. More specifically, we apply an initial learning rate of $0.001$, the `poly' learning rate scheduler \citep{poly} and a batch size of $32$. The images are augmented using random resizing, random horizontal flipping and random cropping of the resulting image to a size of $512\times512$, irrespective of the dataset. The dimensions for randomly resizing the shortest side are drawn from the set $\{512, 640, 704, 832, 896, 1024, 1152, 1216, 1344, 1408, 1536, 1664, 1728, 1856, 1920, 2048\}$. Inference, instead, is done at a single image resolution irrespective of the method applied. For Cityscapes \citep{Cordts2016-cityscapes} and Mapillary Vistas \citep{neuhold2017mapillary} the shortest edge has dimension $1024$ pixels, and for Synthia \citep{Ros2016TheSD} it is $960$ pixels.
The authors wish to acknowledge the financial support of the Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training (CDT) in Communications (EP/I028153/1), as well as Toshiba Research Europe Ltd.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec_intro}
\textbf{Context:}
In recent years Software Defined Networking (SDN) has gained traction as a means of bringing scalability and programmability to network architecture. Particularly in Data Center and Optical Networks, SDN has been shown to offer a high degree of network configurability, reduction in capital expenditure, and a platform for virtualizing network functions \cite{sdn_comprehensive_survey}.
\textbf{Motivation:}
The advantages of SDN have led to a number of research efforts to apply the concept within the IEEE 802.15.4 low-power wireless standard, which underpins many Internet of Things (IoT) and sensor networks. In particular, the reconfigurability conferred through SDN would allow low-power wireless networks to treat sensor and control traffic on a per-flow basis, providing guarantees to critical data whilst optimizing the network for low-energy communication. Elements of this idea can be seen in the approach to centralized scheduling defined within the IETF 6TiSCH architecture \cite{6tisch_ietf_architecture}, which uses SDN concepts to provide spatial and frequency diversity within IEEE 802.15.4-2015 industrial IoT networks.
\textbf{Challenge:}
IoT sensor networks typically consist of constrained devices in a Low-Power and Lossy Network (LLN), and are limited in terms of reliability, throughput, and energy. Implementing a centralized SDN architecture in this environment therefore faces considerable challenges: not only is controller traffic subject to jitter due to unreliable links and network contention, but the overhead generated by SDN can severely affect the performance of other traffic. These limitations force us to revisit a number of traditionally held assumptions about how SDN operates, and how it can be applied within a constrained environment.
\textbf{Approach:}
In this paper, we tackle the challenge of adapting high-overhead SDN architecture for constrained, low-power wireless networks. We introduce $\mu$SDN, a low-overhead SDN architecture which builds on recent trends towards centralization in protocols for IEEE 802.15.4 networks, and extends concepts introduced in recent works, where efforts have mainly considered non-IPv6 networks. $\mu$SDN implements additional optimization techniques, compatibility with IPv6 networks, and interoperability with existing distributed routing protocols such as RPL \cite{rpl_rfc} (Routing Protocol for Low-Power and Lossy Networks). We then use $\mu$SDN to evaluate the effect of SDN traffic on network performance, and consider a scenario where SDN can improve Quality of Service (QoS) for high-priority flows over traditional approaches.
\textbf{Contribution:}
\begin{itemize}
\item We introduce a number of optimization techniques to reduce control overhead and manage the challenges faced in applying SDN within low-power, multi-hop wireless networks.
\item We incorporate these techniques within $\mu$SDN, a lightweight SDN architecture for low-power wireless networks, and implement $\mu$SDN on top of the Contiki Operating System (OS) \cite{contiki} for constrained IoT networks.
\item We evaluate the performance of $\mu$SDN, and show that $\mu$SDN maintains scalability when compared against a conventional IEEE 802.15.4-2012 network, whilst allowing the network to benefit from SDN architecture.
\item We present results showing how $\mu$SDN can maintain application and control paths on a per-flow basis, dramatically reducing delay and jitter in an interference scenario where conventional approaches struggle.
\end{itemize}
\textbf{Outline:}
In Section \ref{sec_sdn} we examine, in general terms, the advantages inherent within SDN architecture, before discussing the motivation for applying SDN within low-power IoT scenarios in \ref{sec_motivation}. We then cover the challenges that must be overcome in order to apply SDN within a low-power, multi-hop environment in \ref{sec_challenges}. Additionally, in \ref{sec_rpl}, we provide necessary background information on RPL \cite{rpl_rfc}, the routing and topology control protocol typically employed within the IEEE 802.15.4 stack. An overview of key related works examining SDN in low-power wireless networks is given in \ref{sec_related_work}. In Section \ref{sec_approach} we frame the problem within the context of the challenges set out in \ref{sec_challenges}, covering various approaches necessary to optimize SDN architecture for constrained networks. We introduce $\mu$SDN, our lightweight SDN framework, in Section \ref{sec_design}, where we discuss its architectural design and implementation. Finally, we evaluate SDN overhead in IEEE 802.15.4 networks in Section \ref{sec_results}, as well as providing a use-case in which SDN reduces both delay and jitter for selected flows in an interference scenario. We then conclude in Section \ref{sec_conclusion}.
\section{Background and Related Work}
\label{sec_background}
\subsection{The Advantages of SDN}
\label{sec_sdn}
Though originally conceived for campus networks, SDN has been proposed as a solution to some of the problems inherent within traditional network architecture \cite{sdn_comprehensive_survey}. SDN separates the data and control planes by abstracting distributed state, network specification, and device forwarding \cite{shenker_future_past}. Through a network state model exposed by the Network Operating System (NOS), applications are then able to provide network services without knowledge of the underlying hardware or topology. The NOS utilizes data forwarding protocols, such as OpenFlow \cite{openflow}, to configure the network state based on compiled network behavior defined by an application layer. SDN can thereby provide a platform for virtualization of network functions and dynamic reconfiguration of services that is unavailable with current protocols.
\textbf{Network (Re)Configurability:} In wired and optical networks, the programmability provided by SDN allows configuration of forwarding paths in the network, protocol independence, and customized processing of individual flows. In sensor networks in particular, this would allow freedom from protocol constraints. It would enable, for example, more stable networks to be configured to deliver greater performance, whereas more dynamic and mobile networks would be able to divert resources to critical and high-priority flows.
\textbf{Global Knowledge:} Constrained wireless networks typically employ distributed protocols in order to reduce the overall load on the network and minimize the inherent uncertainty. Whilst this approach has been widely successful, there is a growing acknowledgement that, through global knowledge, there are a number of areas in which centralized architectures could provide benefits to low-power wireless networks, particularly in managing interference and heterogeneity in dense and non-isolated networks.
\textbf{Virtualization:} The abstraction of protocol logic away from individual nodes lends itself well to the introduction of virtualization and network slicing. These are seen as key enablers in the provision of multi-tenant IoT networks, where multiple operators can utilize network infrastructure from a single vendor.
\subsection{Motivation for SDN in Low-Power IoT}
\label{sec_motivation}
SDN is now seen as a key enabler for next-generation wireless networks, particularly in low-power IoT sensor networks, which typically operate within an extremely constrained environment. Specifically, within IEEE 802.15.4-2012, power limitations force the use of multi-hop mesh networking in order to allow the network to reach beyond the radio transmission range. Typical networks may include dozens to hundreds of devices (within a single mesh) with multiple sensors per device. However, multiple networks may be connected across a backbone network, and protocols such as 6LoWPAN \cite{6lowpan_rfc} allow devices to be interoperable with IPv6. The flexibility and scalability provided by SDN presents further opportunity to move beyond the traditional notions of low-power IoT as small, horizontal islands serving a single application, typically categorized as one of three areas: \textit{Data Collection} (many-to-one), \textit{Alerts and Actuation} (point-to-point), and \textit{Data Dissemination} (one-to-many). The opportunity for SDN in this context can be summed up through examining the advantages in the previous section.
Using \textit{Global Knowledge}, SDN can help distribute flows and allocate network resources according to QoS requirements. This concept is already touched upon in the work on centralized scheduling for IEEE 802.15.4-2015 (as discussed in Section \ref{sec_related_work}).
\textit{Network (Re)configuration} allows low-power wireless networks to be re-purposed on an ad-hoc basis, based on changing application and business needs. This would free low-power networks from serving a single application over their lifetime, and allow operators to add new sensors, actuators, and network capabilities with relative ease, without updating firmware.
By employing the SDN architecture to \textit{Virtualize} network functions such as routing, security, and data aggregation, IoT sensor networks can take advantage of greater computing resources at the controller. As well as allowing functions to be initialized or torn down depending on application needs, this process additionally permits the association of flows with individual functions. Moving from a horizontal island, SDN can allow the network to dynamically serve multiple applications, such as both \textit{Data Collection} and \textit{Actuation}, with varying QoS requirements.
\subsection{The Challenge of Low-Power Wireless Mesh}
\label{sec_challenges}
SDN is, by nature, an architecture with a high associated overhead, both from the centralized control traffic and from flowtable lookups. Low-power wireless networks, on the other hand, are the antithesis of this. We provide an overview of the main constraints faced, and how these affect the task of trying to implement SDN within the network.
\textbf{Device Hardware Restrictions:} Low-power wireless networks consist of constrained devices with limited energy, memory, and processing capabilities. These restrictions allow devices to operate for months, or even years, on little power. The consequence of this is that concessions need to be made at all layers of the network stack. This is particularly limiting for SDN, which traditionally employs devices capable of processing thousands of flows per second, and sorting through tables that can sometimes hold hundreds of thousands of entries. Yet IEEE 802.15.4 devices often have only a few KB of memory, and excessive radio activity will quickly deplete a node's energy supply.
\textbf{Unreliable Links:} The lossy medium present in low-power wireless networks means they can be prone to unreliability. This is a direct consequence of the low-power hardware requirements, which forces concessions at the physical and MAC layers, but in order for SDN applications to provide effective decisions, there needs to be an up-to-date model of the network. The compounded problems of lossy links and a multi-hop mesh network mean that addressing this can be problematic. Packets updating the controller of the network state can be dropped or subject to severe delays.
\textbf{Fragmentation:} IEEE 802.15.4 has a Maximum Transmission Unit (MTU) of 127B. After the link-layer header, the 6LoWPAN standard \cite{6lowpan_rfc} introduces IP capabilities but further reduces the remaining space in a single, unfragmented packet. A full IPv6 6LoWPAN implementation with 64-bit addressing allows for a mere 53B of application data. In order to prevent packet fragmentation, and hence multiple transmissions per packet, SDN control messages need to fit within this allotted length.
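The payload budget above can be expressed as a simple compile-time check. This is an illustrative sketch, not code from the $\mu$SDN implementation; the 53B figure is taken from the text, and the exact budget in practice depends on addressing modes and 6LoWPAN header compression.

```c
#include <assert.h>

/* Illustrative single-packet payload budget, using the figures quoted
 * above (indicative values; real header sizes vary with addressing
 * modes and 6LoWPAN compression). */
#define IEEE802154_MTU   127  /* link-layer MTU in bytes */
#define USDN_MAX_PAYLOAD  53  /* app payload left after link-layer,
                                 6LoWPAN/IPv6 (64-bit addr) headers */

/* Returns 1 if an SDN control message of msg_len bytes fits in a
 * single, unfragmented packet; 0 if 6LoWPAN would fragment it. */
static int sdn_msg_fits(unsigned msg_len)
{
    return msg_len <= USDN_MAX_PAYLOAD;
}
```

A control protocol designed against this bound never pays the cost of reassembly or of losing one fragment of a multi-fragment message.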
\textbf{Interference:}
The low-power nature of transmissions means IEEE 802.15.4 networks can be sensitive to interference from nearby higher-power communications operating at the same frequency, such as IEEE 802.11 networks transmitting on the same channel at 2.4 GHz. This can potentially affect entire branches of the network, and hamper the delivery of messages from/to sensors and actuators. In an SDN architecture with centralized control, this prevents nodes from querying or receiving instructions from the controller.
\textbf{Multi-Hop Mesh Topology:} Distributed routing protocols, such as RPL, are commonly used to locally maintain topology whilst reducing control overhead in the overall network. As low-power devices reduce radio range, a multi-hop mesh allows networks to be extended over a greater area than if all nodes communicated with a single base station. Unfortunately, by introducing multiple hops, link uncertainty is compounded across the hop distance and can increase the chance of packets being dropped along the way.
\subsection{A Brief Overview of RPL}
\label{sec_rpl}
RPL (Routing Protocol for Low-Power and Lossy Networks) \cite{rpl_rfc} forms an integral part of many low-power wireless networks. It allows the formation of tree-like ad-hoc network topologies, where each node keeps an immediate one-hop parent based on a configurable Objective Function (OF) that determines which parent to select (often the node's rank within the graph, or link metrics). As nodes form part of the topology based solely on information provided by their immediate neighbors, RPL is effective in allowing nodes to quickly send information up the tree. However, the RPL graph forces nodes nearer the root to serve messages from nodes further down the tree, exacerbating their energy drain. Some RPL terminology is used in later sections, and a brief description of these terms is given below:
\begin{itemize}
\item \textit{Destination-Oriented Directed Acyclic Graph (DODAG):} A tree-like graph with no cycles, and a single root node with no outgoing edge (although this often acts as a border router).
\item \textit{DODAG Information Solicitation (DIS):} ICMPv6 message used by nodes to request RPL DAG information from one-hop neighbors.
\item \textit{DODAG Information Object (DIO):} ICMPv6 message sent as a response to DIS messages.
\item \textit{Destination Advertisement Object (DAO):} Sent from child nodes to the parent or root (depending on the RPL mode) in order to advertise themselves as a destination within the DAG.
\item \textit{RPL Storing Mode:} Nodes maintain a routing table for their Sub-DODAG.
\item \textit{RPL Non-Storing (NS) Mode:} Nodes only know their parent, and the root keeps a routing table for the whole DODAG.
\end{itemize}
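The rank-based parent selection described above can be sketched in a few lines. This is a minimal illustration loosely modelled on Objective Function Zero; the types and names are hypothetical and not taken from any RPL implementation.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative rank-based parent selection: a node picks the candidate
 * neighbour advertising the lowest rank in its DIO. */
struct neighbour {
    uint16_t id;
    uint16_t rank;  /* rank advertised in the neighbour's DIO */
};

/* Returns the id of the lowest-rank neighbour, or 0 if there are none. */
static uint16_t select_parent(const struct neighbour *nbr, size_t n)
{
    uint16_t best_id = 0;
    uint16_t best_rank = UINT16_MAX;
    for (size_t i = 0; i < n; i++) {
        if (nbr[i].rank < best_rank) {
            best_rank = nbr[i].rank;
            best_id = nbr[i].id;
        }
    }
    return best_id;
}
```

Because the choice depends only on one-hop DIO information, the graph converges quickly, but a node cannot see problems (such as interference) beyond its chosen parent.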
\subsection{Related Work}
\label{sec_related_work}
SDN is an increasingly well-defined concept which has been successfully applied to other networking areas. However, there is still considerable debate over what SDN means when it comes to low-power wireless networks. The IETF 6TiSCH Working Group (WG) \cite{6tisch_ietf_architecture} is engaged in efforts to develop scheduling mechanisms for IEEE 802.15.4-2015 TSCH, which allows the creation of channel hopping schedules but does not define how these schedules should be configured or maintained. 6TiSCH aims to incorporate SDN concepts within the standard, but foregoes traditional SDN elements such as flowtables and focuses more on the centralized allocation of resources (the TSCH {channel/time} slots) within the network.
A number of more traditional SDN architectures for Low-Power Wireless Networks have been proposed and we briefly summarize these. In particular, we present the prevailing ideas from key contributions in this area; however, this is not an extensive list of works, which are covered in detail in recent surveys \cite{wsdn_survey_taxonomy, sdwn_opportunities_challenges,sdn_for_iot_survey}.
The authors of Sensor OpenFlow \cite{sensor_openflow} were early advocates for using SDN in sensor networks. Their proposal highlights the difficulties of implementing Out-Of-Band (OOB) control plane communication within a sensor network, and explicitly argues for a \textit{custom low-power protocol}, rather than utilizing OpenFlow directly. They also propose \textit{energy saving through data aggregation}, and attempt to mitigate SDN overhead through the introduction of Control Message Quenching (CMQ) \cite{cmq}. This technique suppresses additional queries upon flowtable misses, purportedly allowing the network sufficient time to respond to the initial request.
Costanzo et al. \cite{sdwn} propose SDWN, an architectural framework which highlights novel uses for SDN in Low-Power Wireless Sensor Networks. The authors introduce the idea of using SDN flowtables to facilitate \textit{data aggregation} and \textit{Radio Duty-Cycling (RDC)} techniques, theoretically allowing SDN to programmatically configure the energy consumption of the node. In addition, to further reduce energy consumption, SDWN suggests a form of \textit{Protocol Oblivious Forwarding (POF)} \cite{pof} to reduce memory footprint, allowing flowtables to match on byte arrays within the packet.
Following from the proposals in SDWN, the authors of SDN-WISE \cite{sdn_wise} provide a public implementation of the architecture using Contiki \cite{contiki}. SDN-WISE introduces \textit{stateful flowtables}, essentially turning the flowtables into a Finite State Machine (FSM). This allows simple controller logic to be `programmed' into the nodes, where they can perform certain actions under one state, whilst performing a different set of actions when in another. For example, this could be used to allow nodes to run their SDN flowtable actions in a low-energy mode.
More recent works in the field directly consider the overhead incurred by SDN in Low-Power Wireless Networks, and try to reduce the overhead of other protocols within the stack. CORAL \cite{coral-demo} takes a similar approach to this paper, adapting SDN for IPv6-based IEEE 802.15.4 RPL networks, and tries to deal with network overhead by using SDN to fine-tune RPL timer settings in order to reduce the number of RPL transmissions after initialization and provide more resources for the SDN protocol.
\section{Optimizing SDN for Low-Power Wireless}
\label{sec_approach}
Section \ref{sec_challenges} categorizes the challenges faced by SDN in low-power wireless networks. We address these challenges by looking at four core areas and how they might be optimized: the SDN \textit{protocol}, the SDN \textit{architecture}, the SDN \textit{flowtable and buffers}, and the SDN \textit{controller}.
\textbf{Protocol Optimization:}
\begin{itemize}
\item \textit{Eliminate Fragmentation} through tailoring the SDN control protocol so that it does not exceed the allocated packet size after the link layer and 6LoWPAN headers.
\item \textit{Reduce Packet Frequency} to minimize potential for congestion, as well as to reduce opportunities for retransmissions at the link layer.
\item \textit{Match on Byte Arrays/Index} rather than specific header fields, allowing greater reconfigurability and programmability in the mesh.
\end{itemize}
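The byte-array/index matching named in the last item can be sketched as a rule over (offset, length, value) triples applied to the raw packet, in the spirit of Protocol Oblivious Forwarding. The structure and field names below are illustrative, not taken from the $\mu$SDN code.

```c
#include <assert.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative byte-array match: a rule matches on raw bytes at a
 * given offset rather than on a named, protocol-specific header
 * field, so the same table can match any protocol's packets. */
struct byte_match {
    uint8_t offset;    /* byte index into the packet */
    uint8_t len;       /* number of bytes to compare */
    uint8_t value[8];  /* expected bytes */
};

static int match_packet(const struct byte_match *m,
                        const uint8_t *pkt, size_t pkt_len)
{
    if ((size_t)m->offset + m->len > pkt_len)
        return 0;  /* rule would read past the end of the packet */
    return memcmp(pkt + m->offset, m->value, m->len) == 0;
}
```

Matching on indices rather than named fields is what makes the flowtable protocol-independent, at the cost of the controller needing to know the packet layouts in use.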
\break
\textbf{Architectural Optimization:}
\begin{itemize}
\item \textit{Use Source Routing} to prevent intermediate nodes from generating new control requests as the packet is transported from source to destination (assuming that the intermediate nodes have no rules for that flow).
\item \textit{Throttle Control Messages} ensuring that repeated control requests, from a node to the controller, are not sent in quick succession. Additionally, this has security implications in that it is a possible defense against a denial-of-service style attack.
\item \textit{Refreshing Flowtable Timers} reduces reliance on instructions from the controller as repeated successful matches will not expire. This is, however, a trade-off between configurability and performance.
\end{itemize}
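The throttling point above can be illustrated with a simple rate gate on flowtable-miss queries. This is a hedged sketch: the tick-based clock, constant, and names are hypothetical, and the clock is passed in explicitly to keep the example deterministic.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative control-message throttle: a flowtable query towards
 * the controller is suppressed if the previous one was sent less
 * than a configured interval ago. */
#define FTQ_THROTTLE_TICKS 8

static uint32_t last_ftq_sent;  /* timestamp of last query, in ticks */

/* Returns 1 (and records the send time) if a query may be sent now. */
static int ftq_allowed(uint32_t now)
{
    if (last_ftq_sent != 0 && now - last_ftq_sent < FTQ_THROTTLE_TICKS)
        return 0;  /* too soon: suppress the repeated request */
    last_ftq_sent = now;
    return 1;
}
```

Beyond saving radio time, the same gate bounds how fast a misbehaving or compromised node can flood the controller, which is the denial-of-service defense mentioned above.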
\textbf{Memory Optimization:}
\begin{itemize}
\item \textit{Re-Use Flowtable Matches/Actions} by eliminating repeated entries. For example, if there are entries for two flows which are then forwarded to the same destination, that forwarding action should be stored as a single item, rather than being included in both entries.
\item \textit{Reduce Buffer Sizes} by setting specific fields to be buffered at the initial controller configuration, rather than buffering the whole packet.
\end{itemize}
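The match/action re-use idea can be sketched as an interning pool: flowtable entries store an index into a shared action store rather than embedding a full copy, so two flows forwarded to the same next hop share one stored action. All structures and sizes below are illustrative.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative shared action pool for flowtable entries. */
struct action {
    uint8_t  type;      /* e.g. forward */
    uint16_t next_hop;  /* destination node id */
};

#define ACTION_POOL_SIZE 8

static struct action pool[ACTION_POOL_SIZE];
static size_t pool_used;

/* Returns the pool index of an equivalent existing action, or inserts
 * a new one. Returns -1 when the pool is full. */
static int action_intern(uint8_t type, uint16_t next_hop)
{
    for (size_t i = 0; i < pool_used; i++)
        if (pool[i].type == type && pool[i].next_hop == next_hop)
            return (int)i;  /* re-use the stored action */
    if (pool_used == ACTION_POOL_SIZE)
        return -1;
    pool[pool_used].type = type;
    pool[pool_used].next_hop = next_hop;
    return (int)pool_used++;
}
```

On a device with a few KB of RAM, deduplicating actions in this way lets the entry count grow without a proportional growth in stored action state.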
\textbf{Controller Optimization:}
\begin{itemize}
\item \textit{An Embedded Controller} would allow simple requests to be responded to more quickly, rather than sending them on to the external IPv6 backbone network.
\end{itemize}
\section{$\mu$SDN Design and Implementation}
\label{sec_design}
\subsection{Overview}
In order to provide a platform for SDN experimentation in wireless sensor networks we have implemented $\mu$SDN, a lightweight SDN framework for Contiki OS. $\mu$SDN builds on some of the architectural concepts proposed in the recent works highlighted in Section \ref{sec_related_work}, whilst incorporating the optimization techniques from Section \ref{sec_approach} in order to mitigate control overhead and enhance scalability. $\mu$SDN sits above the IP layer within the IEEE 802.15.4-2012 stack (as shown in Figure \ref{fig:usdn_stack}), and $\mu$SDN nodes are, in theory, inter-operable with legacy nodes in an IEEE 802.15.4 network. Although left unexplored in this paper, incorporating SDN nodes within a wider low-power wireless network is a potential area for future work: SDN nodes could locally control branches in a hierarchical manner, reducing the overhead cost of SDN even further and allowing local controllers to make decisions without traversing a large number of hops. Finally, $\mu$SDN utilizes RPL to provide a fallback communication path to the controller, though this could be replaced with any distributed routing protocol.
$\mu$SDN provides a modular architecture and API which allows application specific features to be separated from core SDN processes. This architecture is presented in Figure \ref{fig:usdn_arch} and is fully integrated with the IEEE 802.15.4-2012 protocol stack. It has been tested in Cooja using TI's exp5438 platform with MSP430F5438 CPU and CC2420 radio.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{images/usdn/stack-usdn.pdf}
\caption{$\mu$SDN in IEEE 802.15.4-2012 stack with CSMA MAC Layer.}
\label{fig:usdn_stack}
\end{figure}
\subsection{Modular Stack Implementation}
The \textit{$\mu$SDN Stack} provides a layered architecture and API to separate core function handling from the specifics of the SDN implementations.
\begin{itemize}
\item \textit{$\mu$SDN Protocol}: $\mu$SDN uses its own lightweight protocol for controller communication. It is transported over UDP to allow for secure DTLS (Datagram Transport Layer Security) when communicating with controllers outside the mesh, and is highly optimized to ensure no packet fragmentation.
\item \textit{Controller Adapter:} Exposes an abstract controller interface to the SDN layer, allowing the $\mu$SDN Protocol to be switched out for any other protocol which implements that interface.
\item \textit{SDN Engine:} Defines how the messages to and from the controller are handled. It is essentially the concrete implementation of the protocol logic, dictating how the node handles controller communication.
\item \textit{SDN Driver:} Provides an API for the SDN Engine by defining how the flowtable is handled. It provides high-level functions for performing certain tasks through the setting of flowtable entries such as: creating firewall entries, setting routing paths through the network, or aggregating flows. It also provides handling of the flowtable actions, and determines how and when nodes should defer to the controller for instruction.
\end{itemize}
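The Controller Adapter abstraction above can be sketched as a function table that the SDN layer calls without knowing which protocol sits behind it. This is a hypothetical illustration of the idea; the names, signatures, and stub behavior are not taken from the $\mu$SDN source.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative controller-adapter interface: the SDN layer talks to
 * this function table, so the uSDN protocol could be swapped for any
 * other protocol implementing the same interface. */
struct controller_adapter {
    const char *name;
    int (*send)(const uint8_t *msg, size_t len);  /* to controller */
    int (*recv)(uint8_t *buf, size_t max);        /* from controller */
};

/* A stub adapter that pretends every send succeeds immediately. */
static int stub_send(const uint8_t *msg, size_t len)
{
    (void)msg;
    return (int)len;  /* report all bytes "sent" */
}

static int stub_recv(uint8_t *buf, size_t max)
{
    (void)buf; (void)max;
    return 0;         /* nothing pending from the controller */
}

static const struct controller_adapter usdn_adapter = {
    "usdn-stub", stub_send, stub_recv
};
```

The indirection costs one function-pointer call per message but keeps the SDN Engine and Driver independent of the wire protocol.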
\begin{figure}[b]
\centering
\includegraphics[width=0.6\columnwidth]{images/usdn/usdn-arch.pdf}
\caption{$\mu$SDN architecture. Blue denotes $\mu$SDN modules, whilst gray shows the core IEEE 802.15.4-2012 and 6LoWPAN layers.}
\label{fig:usdn_arch}
\end{figure}
\begin{table*}[t]
\renewcommand{\arraystretch}{1.0}
\caption{$\mu$SDN Packet Types}
\label{table:usdn_packets}
\centering
\begin{tabular}{ |l|l|l|l| }
\hline
\bfseries Packet Type & \bfseries Direction & \bfseries Behavior & \bfseries Description\\ \hline
Node State Update (NSU) & UP & Periodic & Updates the controller with node information \\ \hline
Flowtable Query (FTQ) & UP & Intermittent & Requests flowtable instructions from controller \\ \hline
Flowtable Set (FTS) & DOWN & Intermittent & Sets an entry in a node's flowtable \\ \hline
Configuration (CONF) & DOWN & Initial & Configures a node's non-flowtable settings \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[t]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth]{images/nsu_period/40_node_cm_nsu_lat.pdf}
\caption{Effect of NSU period on average end-to-end application latency.}
\label{fig:30_node_join}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth]{images/ft_lifetime/40_node_cm_ft_lat.pdf}
\caption{Effect of FT lifetime on average end-to-end application latency.}
\label{fig:30_node_pdr}
\end{subfigure}
\caption{Comparison of the effect of a range of SDN controller update periods and flowtable lifetimes on application traffic delay. Simulation parameters are detailed in Table \ref{table:sim_params}.}
\label{fig:param_test}
\end{figure*}
\subsection{Core SDN Processes}
The \textit{$\mu$SDN Core} provides essential SDN processes, allowing protocol and framework specific implementations to be built on top.
\begin{itemize}
\item \textit{Controller Discovery:} Integrates with the network's distributed routing protocol. This gives the node \textit{fallback} or \textit{default} routing in the event that a node loses its connection to the controller. Although this is currently RPL (both Storing and Non-Storing), this could in theory be any distributed routing protocol implemented within the network. This grants controller connections within $\mu$SDN an element of robustness, and ensures nodes will always attempt to find a path to the controller.
\item \textit{Controller Join:} This Layer-3 process employs both the underlying RPL topology and the $\mu$SDN protocol provided by the SDN stack. When the controller receives an RPL DAO (destination advertisement) message, it will send a $\mu$SDN CONF message to the joining node in order to initialize the node and provide configuration information. The joining node uses this CONF message as acknowledgement that it is connected to the controller.
\item \textit{Configuration and Metrics:} Allows controllers to setup the SDN network, choose which metrics to receive from the node, and select what information to receive in controller requests.
\item \textit{Flowtable:} Optimized for memory due to node hardware constraints. Using a similar approach to Protocol Oblivious Forwarding (POF) \cite{pof}, this allows for a flowtable with a minimal memory footprint. Additionally, a Hierarchical Flowtable Structure (HFS) interface is provided to allow controllers to configure multiple flowtables with varying priority levels. This, for example, allows the controller to configure a \textit{whitelist} which is processed before the main flowtable. Packets matched in this \textit{whitelist} are then handed back to the regular Layer-3 processes.
\item \textit{Overhead Reduction:} A number of functions are implemented to mitigate SDN control overhead. CMQ \cite{cmq} is used to handle repeated flowtable misses. Partial Packet Queries (PPQ) allow flowtable requests to be sent to the controller using only partial packet information, reducing 6LoWPAN fragmentation. Source Routing Header Insertion (SRHI) allows routing headers to be inserted onto packets, and can be read by either the RPL or SDN layer. Finally, Flowtable Refresh (FR) allows controllers to instruct particularly active flowtable entries to reset their lifetimers, rather than having the entry expire.
\end{itemize}
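The Flowtable Refresh (FR) mechanism above can be sketched as a lifetimer reset on a successful match: an active entry that the controller has marked refreshable never expires while its flow is in use, avoiding a fresh controller round-trip. The fields and tick values below are illustrative, not taken from the $\mu$SDN implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative refreshable flowtable entry. */
struct ft_entry {
    uint32_t expires;   /* absolute expiry time, in ticks */
    uint32_t lifetime;  /* configured lifetime, in ticks */
    uint8_t  refresh;   /* 1 if the controller allows refresh */
};

/* Returns 1 if the entry is live at `now`; a hit on a refreshable
 * entry pushes its expiry back by one full lifetime. */
static int ft_entry_hit(struct ft_entry *e, uint32_t now)
{
    if (now >= e->expires)
        return 0;  /* expired: the resulting miss triggers a query */
    if (e->refresh)
        e->expires = now + e->lifetime;  /* active flow: reset timer */
    return 1;
}
```

The trade-off noted in Section \ref{sec_approach} is visible here: a refreshed entry saves control traffic, but a stale rule for an active flow also stays in force longer.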
\subsection{$\mu$SDN Protocol and Traffic Characterization}
The $\mu$SDN protocol follows the main packet types present in traditional SDN protocols such as OpenFlow: with basic flowtable request/set functionality, as well as configuration and metric update packets. All $\mu$SDN packet types are listed in Table \ref{table:usdn_packets}. As discussed in Section \ref{sec_approach}, it is essential that any SDN protocol for low-power wireless networks is highly optimized to eliminate 6LoWPAN fragmentation, and the packets therefore have limits on the amount of information that can be sent to and from the controller. To this end, $\mu$SDN compresses information such as node addresses, as well as using node configuration tables to ensure that the controller is able to specify information sent. The traffic generated by $\mu$SDN stems from three processes: controller discovery, node updates, and requests for controller instruction.
\textbf{Controller Discovery:} $\mu$SDN employs the RPL protocol to inform the controller of nodes which have joined the DAG, and are therefore reachable. However, it generates additional traffic in the form of a Configuration (CONF) response to each node joining. This allows nodes which have joined the network to receive initialization information from the controller, such as: NSU timer settings, flowtable lifetimes, and default flowtable entries.
\textbf{Node Updates:} A Node State Update (NSU) message, from a node to the controller, carries information about that node, such as energy, node state, and buffer congestion. This includes observations about its immediate neighbors and link performance. These periodic messages are sent on a timer process within the \textit{Configuration and Metrics} module, which can be configured by the controller using a CONF message.
\textbf{Controller Instruction:}
Flowtable Query (FTQ) packets are sent from a node to the controller in response to a flowtable miss, i.e. the SDN layer checks the flowtable for instructions on how to handle a packet but is unable to find a matching entry. With Partial Packet Queries (PPQ), FTQ messages send a portion of the packet data up to the controller. The controller then actions that data, and transmits a response back to the sender in the form of a Flowtable Set (FTS) message. The behavior of this traffic is by nature intermittent, though it also depends on whether or not the flowtable uses source routing headers for forwarding. If source routing is not used, then it will exhibit bursty behavior as FTQ packets are generated by each node in the path between the source and destination.
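The Partial Packet Query idea can be sketched as copying only a bounded prefix of the offending packet into the query, so the FTQ itself stays inside a single unfragmented 6LoWPAN packet. The message layout and 16-byte cutoff below are hypothetical, chosen only for illustration.

```c
#include <assert.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative partial-packet query: only the first PPQ_BYTES of the
 * packet that missed the flowtable are forwarded to the controller. */
#define PPQ_BYTES 16

struct ftq {
    uint8_t node_id;
    uint8_t data_len;
    uint8_t data[PPQ_BYTES];
};

static void build_ftq(struct ftq *q, uint8_t node_id,
                      const uint8_t *pkt, size_t pkt_len)
{
    q->node_id = node_id;
    q->data_len = (uint8_t)(pkt_len < PPQ_BYTES ? pkt_len : PPQ_BYTES);
    memcpy(q->data, pkt, q->data_len);
}
```

The controller only needs the headers it configured the node to match on, so truncating the payload loses nothing it would have acted upon.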
\section{Evaluation}
\label{sec_results}
This section evaluates $\mu$SDN against a base RPL case with no SDN implementation. Experimentation was performed on an emulated EXP5438 platform with TI's MSP430F5438 CPU and CC2420 radio, using the Cooja simulator for Contiki OS \cite{contiki}. We firstly present the scenarios, configuration, and metrics used in the evaluation. This is followed by a comparison of $\mu$SDN performance against a standard RPL network with no SDN implementation. Finally, we present a use-case scenario showing how SDN can be used within a low-power wireless network in order to programmatically manage interference and provide QoS to high-priority flows. We show that the SDN overhead can be minimized to an extent that performance is close to, or on par with, a network that implements no SDN, and that in certain scenarios the configurability conferred by the SDN architecture can enhance network performance.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.0}
\caption{Cooja Simulation Parameters}
\label{table:sim_params}
\centering
\begin{tabular}{ |l|l| }
\hline
\bfseries Parameter & \bfseries Setting \\ \hline
Duration & 1h \\ \hline
MAC Layer & ContikiMAC \cite{contikimac}\\ \hline
Transmission Range & 100m \\ \hline
Transmitting Nodes & All \\ \hline
Receiving Node & Root/Controller \\ \hline
Network Size & 30 Nodes \\ \hline
Packet Send Interval & 60 - 75s \\ \hline
Link Quality & 90\% \\ \hline
Radio Medium & UDGM \\ \hline
RPL Mode & Non-Storing \\ \hline
RPL Route Lifetime & 10min \\ \hline
RPL Default Route Lifetime & $\infty$ \\ \hline
$\mu$SDN Update Period & 180s \\ \hline
$\mu$SDN Flowtable Lifetime & 10min \\
\hline
\end{tabular}
\end{table}
\begin{table}[ht]
\renewcommand{\arraystretch}{1.0}
\caption{Interference Scenario Parameters}
\label{table:interference_params}
\centering
\begin{tabular}{ |l|l| }
\hline
\bfseries Parameter & \bfseries Setting \\ \hline
Interference Period & 100ms \\ \hline
Interference Duration & 15ms \\ \hline
Flow $F_0$ Send Interval & 0.25s \\ \hline
Flow $F_1$ Send Interval & 10s \\
\hline
\end{tabular}
\end{table}
\textbf{Configuration:}
All simulations were performed in Cooja using a Unit Disk Graph Medium (UDGM) distance loss model with the configuration specified in Table \ref{table:sim_params} (unless otherwise stated). Configuration parameters specific to the interference scenario are specified in Table \ref{table:interference_params}.
\textbf{Scenarios:}
We evaluate $\mu$SDN in the following scenarios.
\begin{itemize}
\item \textit{SDN Traffic Test:} We examine the effect of update period and flowtable lifetime settings on SDN performance.
\item \textit{Full Overhead Reduction:} We evaluate $\mu$SDN with all overhead reduction mechanisms employed. The intent is to show a broad analysis of the effect of an optimized low-overhead SDN framework on network performance.
\item \textit{Interference Re-Route:} We demonstrate how $\mu$SDN can be used to counter interference in the network, providing reduced delay and jitter to high-priority flows.
\end{itemize}
\textbf{Metrics:}
We discuss the following performance metrics.
\begin{itemize}
\item \textit{Node Join Times:} Time taken until all nodes have discovered both the RPL DAG and SDN controller.
\item \textit{Traffic Ratio:} Overhead incurred both by RPL and SDN, with respect to application traffic transmitted from each node in the network.
\item \textit{End-to-End Latency:} The effect of SDN overhead on application traffic delay.
\item \textit{Packet Delivery Ratio:} The effect of SDN overhead on network reliability.
\item \textit{Radio Duty Cycle (RDC):} The effect of SDN overhead on node energy.
\end{itemize}
\subsection{Scenario: SDN Traffic Test} \label{sec_param_test}
We compare the effect of controller update periods and flowtable lifetimes on the performance of an SDN network. Figure \ref{fig:param_test} highlights how application delay is sensitive to increases in the frequency of FTQ/FTS transmissions and NSU update period.
\subsection{Scenario: Full Overhead Reduction} \label{sec_full_reduction}
We evaluate the SDN performance in Figure \ref{fig:optimizd_scenario}, where $\mu$SDN has been configured to reduce SDN overhead through SRHI, FR, CMQ, and PPQ as discussed in Section \ref{sec_design}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{images/40_node/40_node_csma_tr.pdf}
\caption{Ratio of application, RPL, and SDN traffic in a $\mu$SDN network.}
\label{fig:traffic_ratio}
\end{figure}
The ratio of traffic generated in a $\mu$SDN network is examined in Figure \ref{fig:traffic_ratio}. While the RPL traffic clearly presents the highest overhead within the network, it should be noted that this is a combined figure of DIS, DIO, and DAO messages; only the latter of which is transmitted across multiple hops, while the others are exchanged between neighbors. Constant Bit Rate (periodic) and Variable Bit Rate (intermittent) SDN messages are shown separately. These are NSU and CONF/FTQ/FTS messages respectively.
In Figure \ref{fig:node_join} we present the time taken for all nodes in the network to join both the RPL DAG, and the SDN controller. In the case of the former, this is the time for the controller to learn about the routing path to that node through RPL DAO messages, which then triggers the join process.
End-to-end application latency is evaluated in Figure \ref{fig:node_e2e}. Although there is a slight increase in delay for application packets in the $\mu$SDN scenario, this is generally consistent with the slight overhead incurred by the SDN processes at each node. That is, as each node needs to perform a flowtable lookup for incoming packets, this lookup time increases the further the source node is from the destination.
Figure \ref{fig:node_pdr} shows $\mu$SDN application traffic PDR against application traffic routed through RPL. $\mu$SDN experiences a slightly lower PDR due to increased congestion and MAC-layer drops shortly after initialization. As nodes forward application packets through SRHI they need to receive this source routing header from the controller. The increased network activity means that FTQ/FTS packets are occasionally lost, and the application packet is dropped.
As $\mu$SDN operates on top of the RPL protocol there is always an associated cost, particularly when considering the energy performance of nodes. Figure \ref{fig:node_rdc} shows the average RDC of nodes in a 30 node network at 1 to 5 hops, where there is a slight increase over the RPL case.
\subsection{Scenario: Interference Re-Routing} \label{sec_interference_rr}
Though SDN inevitably adds an associated cost to general performance, the authors of this paper argue that the configurability conferred by the SDN architecture allows increased QoS in cases where a distributed protocol will fail. To demonstrate the effectiveness of $\mu$SDN programmability we implemented an interference scenario as shown in Figure \ref{fig:interference-rr-topo} and Table \ref{table:interference_params}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{images/scenario/interference-rr-topo.pdf}
\caption{Topology of intermittent interference scenario. The source node (S) is in green, whilst the destination/controller node is in orange (D/C). Intermittent interference is generated at I, interfering with node 5.}
\label{fig:interference-rr-topo}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{images/scenario/interference_contikimac.pdf}
\caption{Delay and jitter of flows in the intermittent interference re-routing scenario (Section \ref{sec_interference_rr}). Compares an SDN scenario against an RPL scenario with no SDN. In the SDN scenario, $\mu$SDN is configured to reroute flow $F_1$ around the interference. The achieved reduction in delay and jitter can be seen in the highlighted area of the figure.}
\label{fig:interference_contikimac}
\end{figure}
In this setup, a source node creates two flows, $F_0$ and $F_1$. $F_0$ is a low-priority but high-volume flow, whereas $F_1$ is a critical flow with a much lower bit rate but high priority. RPL Objective Function (OF) Zero was used, which instructs RPL nodes to choose their parents based on node rank. In this case the source node \textit{S} will receive DAG information from both node \textit{3} and node \textit{4}; however, it will choose \textit{4} as its parent, as that node will have a lower rank due to its proximity to the root node \textit{D}, which in this scenario is also the destination node and the SDN controller. An interferer node was placed so that node \textit{5} would experience a short burst (15ms) of interference every 100ms, causing flows across the RPL route to experience a high degree of degradation. As the interference is not constant, the RPL DAG is unable to heal and form a new path through node \textit{3}.
The introduction of $\mu$SDN to the network allows the controller to handle flows individually, and to re-route $F_1$ through \textit{3} even though it is the longer path and is not the next hop dictated by the RPL OF. Flow $F_1$ is therefore able to bypass the interference, experiencing reduced delay and jitter, whilst flow $F_0$ continues to be routed using RPL. This also has the side-effect of reducing the delay of $F_0$, as the path [S, 5, 4, D] experiences less traffic. These results are shown in Figure \ref{fig:interference_contikimac}, where $\mu$SDN exhibits dramatically reduced delay in comparison to the scenario without the benefit of SDN configurability.
\section{Conclusions}
\label{sec_conclusion}
As low-power wireless communications move beyond simple sensor networks and towards multi-tenant and multi-application IoT scenarios, there is an increasing need for flexibility within the network. This paper has introduced $\mu$SDN, a lightweight SDN architecture which overcomes the challenges of introducing SDN in low-power wireless networks. We argue that co-existence with a distributed routing protocol is necessary to provide a framework for \textit{controller discovery}, although this means that any control traffic generated through SDN is an additional overhead. To this end we have proposed the combination of a number of overhead reduction functions, and $\mu$SDN implements these to substantially mitigate the cost of SDN within a constrained environment. We have shown that it maintains comparable scalability with RPL-based IEEE 802.15.4-2012 networks, whilst providing the network with the opportunities inherent in SDN architecture, such as \textit{Global Knowledge}, \textit{Network (Re)Configurability}, and \textit{Virtualization}. In particular, this paper has demonstrated a scenario where $\mu$SDN is used to implement per-flow QoS handling within a simple network under intermittent interference, showing how $\mu$SDN can provide redundancy to priority flows and achieve a considerable reduction in latency and jitter in comparison to a conventional low-power wireless network.
\section{Introduction}
\label{introd}
A system of interacting hadrons, first of all nuclear matter,
is an attractive subject of study
\cite{nm-1,nm-2,nm-3,nm-4,nm-5,nm-6,nm-7,nm-8,nm-9,nm-10,GK99,nm-11}.
Realistic versions of the nuclear matter equation of state include both
attractive and repulsive forces
between particles.
The thermodynamical behavior of this matter
leads to a first-order liquid-gas phase transition
which ends at the critical point.
Experimentally, the presence of the liquid-gas phase transition
in nuclear matter was reported and then analyzed
in numerous papers
(see, e.g., Refs.~\cite{ex-1,ex-2,ex-3,ex-4,ex-5,ex-5a,ex-6}).
Recently, a van der Waals (vdW) equation of state
accounting for quantum statistics
was used to describe the properties of
hadronic matter \cite{marik} and was also extended
to multicomponent systems \cite{vova}.
Many works have extended
the phase-transition theory to effective density-dependent Skyrme
forces in terms of the potential density \cite{AV15,satarov0,satarov}; see also the
review \cite{La81}. These forces are
especially helpful for the description of the Bose condensate in
bosonic systems \cite{satarov}.
Starting from the pioneering work of Skyrme
(Ref.~\cite{Sk56}) and the famous self-consistent Skyrme Hartree-Fock
calculations by Vautherin and Brink (Ref.~\cite{VB72}), these forces became very
popular in nuclear physics and astrophysics; see, e.g., the review
articles
\cite{La81,La16}.
In different systems of hadrons, the critical points, including the Bose condensate, for
the classical and quantum approaches based on the vdW and Skyrme mean-field forces were studied in
Refs.~\cite{vova,roma,satarov0,vova1,AMS19,satarov,roma1,satarov1,SBSPV,St21-1,St21-2,KSSG21}.
The role and size of the quantum statistics effects
were analytically studied for nuclear matter, as well as
for pure neutron and pure $\alpha$-particle matter, in Ref.~\cite{FMG19}.
In this approximation, the dependence
of the critical point parameters on
the particle mass $m$,
degeneracy factor $g$, and the
vdW inter-particle interaction parameters $a$ and $b$
was well described for
each of these systems.
Our consideration
was restricted to small temperatures,
$T \siml 30$~MeV, and not too large
particle densities. Within these restrictions, the
number of nucleons
is a conserved quantity,
and the chemical potential of
such systems
is determined by the particle number.
An extension to a fully relativistic hadron-resonance gas
formulation with
vdW interactions between
baryons and between antibaryons was considered in
Ref.~\cite{VGS-17}.
We do not include the Coulomb forces
and make no distinction between protons and neutrons
(both these particles are referred to as nucleons).
In addition, under these restrictions the
non-relativistic treatment becomes very accurate
and is adopted in our studies.
In the present work we apply
the same analytical method as presented in Ref.~\cite{FMG19}
to systems of nucleons and $\alpha$-particles, but with
another inter-particle interaction,
expressed in terms of the density-dependent effective Skyrme potential.
This method will also be applied
to a mixed two-component system of nucleons and $\alpha$-particles.
Another attractive application of
our analytical results, the analysis of
particle number fluctuations near the critical point of
nuclear matter (see, e.g., Ref.~\cite{roma}, and the recent Ref.~\cite{FMG20}),
will be studied in a separate forthcoming work.
The paper is organized as follows.
In Sec.~\ref{sec-2} we consider the ideal quantum gases, introducing the small quantum statistics
parameter related to the de Broglie wavelength.
In Sec.~\ref{sec-3},
the quantum statistics corrections of the perturbation expansion over this parameter, including
the inter-particle interaction near the critical point,
are presented
on the basis of
the vdW model.
Section~\ref{sec-4} is devoted to the
extension of our analytical results
to the case of the
effective Skyrme
potential.
In Sec.~\ref{sec-5},
the
quantum statistics effects
near the critical point are studied for a mixed system of
isotopically symmetric nuclear matter with a small impurity of
$\alpha$-particles. The results of our calculations are
discussed in Sec.~\ref{sec-disc} and
summarized in
Sec.~\ref{sec-sum}.
Some details of our derivations are presented in
the Appendix.
\section{Ideal quantum gases and quantum statistics parameter}\label{sec-2}
The pressure $P_i(T,\mu)$ for the
system of
particles (e.g., $i=N$ for nucleons, $i=\alpha$ for $\alpha$ particles)
plays the role of the thermodynamical potential in
the grand canonical ensemble (GCE)
where temperature
$T$ and chemical potential $\mu$ are independent variables \cite{LLv5}.
The particle number density
$n_i(T,\mu)$, entropy density $s_i(T,\mu)$, and energy density
$\mathcal{E}_i(T,\mu)$ are given as
\be\label{term}
n_i=\left(\frac{\partial P_i}{\partial \mu}\right)_T,~
s_i=\left(\frac{\partial P_i}{\partial T}\right)_\mu,~
\mathcal{E}_i= Ts_i+\mu n_i-P_i~.
\ee
In the thermodynamic limit $V\rightarrow \infty$ considered in the present
paper all intensive thermodynamical functions -- $P$, $n$, $s$,
and
$\mathcal{E}$ --
depend on $T$ and $\mu$,
rather than on the system volume $V$, see for instance
Ref.~\cite{BG-08}.
We start with the
GCE expressions for the pressure $P^{\rm id}(T,\mu)=\sum_iP^{\rm id}_i(T,\mu)$
and particle number density $n^{\rm id}(T,\mu)=\sum_in^{\rm id}_i(T,\mu)$
of the ideal
non-relativistic
quantum gas \cite{G,LLv5},
\bea\l{Pid}
& P^{\rm id}_i=\frac13 g_i\int \frac{d {\bf p}}{(2\pi \hbar)^3}\frac{p^2}{m_i}
\left[\exp\left( \frac{p^2}{2m_iT} -\frac{\mu}{T }\right) -
\theta_i\right]^{-1},\\
& n^{\rm id}_i=g_i\int \frac{d {\bf p}}{(2\pi \hbar)^3}
\left[\exp \left( \frac{p^2}{2m_iT} -\frac{\mu}{T} \right) -
\theta_i\right]^{-1}~,\l{nid}
\eea
where $m_i$ and $g_i$
are, respectively, the particle mass and degeneracy factor of the $i$
component. The value of $\theta_i=-1$ corresponds to the Fermi gas,
$\theta_i=1$ to the Bose gas, and $\theta_i=0$ is the Boltzmann (classical)
approximation when
effects of the quantum statistics
are neglected\footnote{The units
with Boltzmann constant
$\kappa^{}_{\rm B}=1$ are used. We keep the Planck constant in the
formulae to illustrate
the effects of quantum statistics,
but put $\hbar=h/2\pi=1$ in all
numerical calculations. For simplicity, here and below we omit the
subscript id for the ideal gas wherever
it does not lead to a misunderstanding.}.
Equations (\ref{Pid}) and (\ref{nid}) for the pressure $P_i^{\rm id}$ and density
$n_i^{\rm id}$,
proportional to the famous Fermi-Dirac and Bose-Einstein integrals,
can be expressed in terms of
the fugacity,
\be\l{fuga}
z \equiv \exp(\mu/T)~,
\ee
as
\bea
P^{\rm id}_i(T,z) &
\equiv \frac{g_iT}{\theta_i \lambda^3_i}\,{\rm Li}_{5/2}(\theta_i z)~,
\l{Pid-1}\\
n^{\rm id}_i(T,z) &
\equiv
\frac{g_i}{\theta_i \lambda^3_i}\,{\rm Li}_{3/2}(\theta_i z)
~.
\l{nid-1}
\eea
Here, $\lambda_i$ is the de Broglie
thermal wavelength \cite{LLv5},
\be\l{lambdaT}
\lambda_i~\equiv ~\hbar\sqrt\frac{2 \pi}{m_iT}~,
\ee
and $\mbox{Li}_\nu$
is the polylogarithmic function of order $\nu$.
The integral representation of the polylogarithmic
functions was used in these derivations; see Eqs.~(\ref{Pid})
and (\ref{nid}), and Refs.~\cite{Grad-Ryzhik,Li}.
It is convenient also to use the power
series for the polylogarithmic functions,
\be\l{Liexp}
{\rm Li}_{\nu}(\theta_i z)\equiv \frac{\theta_i z}{\Gamma(\nu)}
\int_0^{\infty}\frac{\d x~x^{\nu-1}}{\exp(x)-\theta_i z}
= \sum_{k=1}^\infty
\frac{(\theta_i z)^k}{k^{\nu}}~,
\ee
where $\Gamma(x)$ is the gamma function.
Indexes $\nu=3/2$ and $5/2$ of these functions were used in
Eqs.~(\ref{Pid-1}) and (\ref{nid-1}).
The values of $\mu >0$, i.e., $z>1$, are forbidden
in the ideal Bose gas. The point
$\mu=0$ corresponds to an onset of the Bose-Einstein condensation in the system
of bosons. For fermions, any values of $\mu$
are possible, i.e., integrals (\ref{Pid}) and (\ref{nid})
[see also Eq.~(\ref{Liexp})] exist for
$\theta_i=-1$ at all real values of $\mu$.
The power series
[see Eqs.~(\ref{Pid-1}) and (\ref{nid-1})
with Eq.~(\ref{Liexp})]
is
obviously convergent at
$z < 1$ ($z>0$)
(see, e.g., Ref.~\cite{Li}). For the Fermi statistics
at
$z \simg 1$, the integral
representation of the corresponding polylogarithmic function
[see Eq.~(\ref{Liexp})] in
Eqs.~(\ref{Pid-1}) and (\ref{nid-1})
can be used
(see Ref.~\cite{Grad-Ryzhik}).
In particular,
at $z\rightarrow \infty$ one can use the asymptotic Sommerfeld expansion of the
$\mbox{Li}_{\nu}(-z)$ functions over $1/\mbox{ln}^2|z|$;
see Ref.~\cite{brack}.
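As a quick numerical check (ours, not part of the original derivation), the partial sums of Eq.~(\ref{Liexp}) are easy to evaluate; the sketch below uses the known closed form $\mbox{Li}_{1}(z)=-\ln(1-z)$ as a sanity check and evaluates the Fermi-type series $\mbox{Li}_{3/2}(-z)$ entering Eq.~(\ref{nid-1}); the cut-off values are arbitrary choices:

```python
import math

def polylog_series(nu, x, k_max):
    """Partial sum of Li_nu(x) = sum_{k>=1} x^k / k^nu, Eq. (Liexp); valid for |x| <= 1."""
    return sum(x**k / k**nu for k in range(1, k_max + 1))

# Sanity check against the closed form Li_1(z) = -ln(1 - z) at z = 0.5 (-> ln 2).
li1 = polylog_series(1.0, 0.5, 200)
# Alternating Fermi series for the density, Eq. (nid-1), converges quickly at z < 1.
li32 = polylog_series(1.5, -0.5, 50)
```

For $z\simg 1$ the alternating Fermi series still converges, but slowly, which is why the integral representation is preferable there.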
For the nucleon gas we take $m^{}_N \cong 938$~MeV neglecting
a small difference between proton and neutron masses.
The degeneracy factor is then $g^{}_N=4$, which takes into account
the two spin and two isospin
states of the nucleon.
For the ideal Bose gas of $\alpha$-nuclei,
one has $g_\alpha=1$ and $m_\alpha\cong 3727$~MeV.
At $z\ll 1$, only the first
term, $k=1$, of the series (\ref{Liexp}) is sufficient
in Eqs.~(\ref{Pid-1}) and (\ref{nid-1}), which leads
to the classical ideal gas relationship
$P_i=n_i\,T$~.
Note that this
result
follows automatically from
Eqs.~(\ref{Pid}) and (\ref{nid})
at $\theta_i=0~$.
The classical Boltzmann approximation
at $z\ll 1$
is valid in the large-$T$ and/or
small-$n$ region of the $n$-$T$ plane.
In fact, at very small $n_i$, one observes $z<1$ at small $T$ too.
\begin{figure}
\begin{center}
\includegraphics[width=7.0cm,clip]{Fig1-fe-fin.eps}
\end{center}
\caption{
Fugacity $z$ [Eq.~(\ref{fuga})] as a function of the
quantum statistics parameter $\varepsilon$ [Eq.~(\ref{eps})]
at its small values,
where one finds the critical points
$\varepsilon_c =0.15-0.16$ [Eq.~(\ref{eps}) at $T=T_c$ and $n=n_c$;
see Ref.~\cite{FMG19} and Table \ref{table-1} below for
nuclear matter]. The solid black curve shows the exact fugacity
$z(\varepsilon)$,
found by inverting Eq.~(\ref{nid-1}) and using
Eq.~(\ref{eps}) at $\theta=-1$;
$k_{\rm max}$ is the maximal power at which the series for the polylogarithm
$\mbox{Li}_{3/2}(-z)$ is truncated; see Eq.~(\ref{Liexp}).
Arrows show approximately the
maximal critical values $\varepsilon_c$ and the corresponding $z_c$
under our consideration (Ref.~\cite{FMG19}).
}
\label{fig1-fe}
\end{figure}
Inverting Eq.~(\ref{nid-1})
with respect to the fugacity, $z=z(n_i)$, and substituting it into
Eq.~(\ref{Pid-1}), one obtains the equation of state of the ideal gas,
$P_i=P_i(T,n_i)$, for each component $i$.
Instead of the particle number
density, $n_i$, it is convenient to introduce the
dimensionless argument, $e_i\propto n_i$, of the fugacity $z$ at a
given point of the $\mu-T$ plane:
\bea\l{eps}
&e_i \equiv
-\theta_i\, \varepsilon^{}_{i},\qquad
\varepsilon^{}_{i}=\frac{n_i \lambda^{3}_i}{4\sqrt2\, g_i}=D_i n_i,\nonumber\\
& D_i =
\frac{ \hbar^3\,\pi^{3/2}}{2\,g_i\,(m_iT)^{3/2}}~.
\eea
The fugacity $z$ as a function of the
quantum statistics parameter $\varepsilon$,
at its small values,
is shown for nuclear matter in Fig.~\ref{fig1-fe}.
Considering a given component $i$, e.g., nucleon matter
($\theta_i=-1$),
we omit, for simplicity,
the subscript $i$ in
the discussion of this figure.
In Fig.~\ref{fig1-fe}, the exact fugacity $z(\varepsilon)$
was obtained by multiplying Eq.~(\ref{nid-1}) by the
factor
$\lambda^3/(4\sqrt{2}~g)$
to get $\varepsilon=\varepsilon(z) $ and then inverting
this equation with respect to $z$.
So far, we did not use the
series representation [Eq.~(\ref{Liexp})] for the
polylogarithms $\mbox{Li}_\nu$
in discussions of Fig.~\ref{fig1-fe}, in particular, for calculations of
the solid curve ``exact''.
The other curves in this figure
present the calculations for the
maximal power, $k_{\rm max}$, in
the partial sum of
Eq.~(\ref{Liexp}) over $k$.
We multiply Eq.~(\ref{nid-1}) with Eq.~(\ref{Liexp}) by the same factor,
$\lambda^3/(4\sqrt{2}~g)$,
and truncate the power
series (\ref{Liexp})
for the polylogarithmic function
$\mbox{Li}_{3/2}(-z)$
at the power $k_{\rm max}$.
As seen from this figure, one has
asymptotic convergence (see
Ref.~\cite{Grad-Ryzhik})
of $z=z(\varepsilon)$ over $k_{\rm max}$, with good convergence
at $\varepsilon \siml 0.2$.
This convergence is
the better the
smaller $\varepsilon$ is.
Even the first-order correction is dominant
near the critical point
(Refs.~\cite{ex-5,ex-6,marik,FMG19}) in the region
of $\varepsilon \ll 1$. More accurately, this region is given by
$\varepsilon \siml \varepsilon_c \approx 0.15-0.16$,
where $\varepsilon_c$ is the critical value of
$\varepsilon$
(see Eq.~(\ref{eps}) at the critical values $T=T_c$ and $n=n_c$,
Ref.~\cite{FMG19},
or Table \ref{table-1} below).
This region of the variable $\varepsilon$ is related to that of
$z \siml z_c \approx 0.8-1.2$ (see Fig.~\ref{fig1-fe}), which
covers well the corresponding critical value $z_c$.
The first (at $k_{\rm max}=2$) and, even better, second ($k_{\rm max}=3$)
quantum statistics corrections
improve
the convergence.
The truncated sum (\ref{Liexp}) for $\mbox{Li}_{3/2}$ at the maximal power
$k_{\rm max}=3$ practically, within the line thickness,
reproduces the exact result for the fugacity $z=z(\varepsilon)$
(Fig.~\ref{fig1-fe})
at $\varepsilon \siml 0.2$.
In Fig.~\ref{fig1-fe}, the arrows show approximately
the maximal values of the quantum statistics parameter
$\varepsilon$ and the corresponding fugacity $z$ for which the first few
quantum statistics corrections still provide a very good approximation.
However, for larger
$\varepsilon$,
where the fugacity $z$ is larger than or
about $1.5$
(e.g., in the
small temperature limit), the inversion of the truncated sum
[Eq.~(\ref{Liexp})] for the polylogarithmic function $\mbox{Li}_{3/2}$ fails:
more
and more terms are needed, and the series over
$k$ diverges with increasing cut-off value
$k_{\rm max}$. The region of larger
fugacity $z$ (and, respectively, larger $\varepsilon$) is shown in
Fig.~\ref{fig1-fe}
for contrast with
the region of small values of $z \siml 1$,
which are actually used in our approach.
As mentioned above, in the
region of a very large
fugacity, $z\gg 1$,
one has to use
another asymptotic expansion, for instance, over $1/\ln^2|z|$,
as suggested by
Sommerfeld
\cite{brack}.
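The numerical inversion $z=z(\varepsilon)$ behind the solid curve of Fig.~\ref{fig1-fe} can be sketched as follows (an illustrative Python fragment of ours; the cut-off $k_{\rm max}=60$ and the bisection bracket $0<z\le 1$ are assumed choices that keep the truncated series inside its convergence domain):

```python
def fermi_eps(z, k_max=60):
    """epsilon(z) = -Li_{3/2}(-z)/(4*sqrt(2)) for fermions; see Eqs. (nid-1) and (eps)."""
    s = sum((-z)**k / k**1.5 for k in range(1, k_max + 1))
    return -s / (4.0 * 2.0**0.5)

def fugacity(eps, z_max=1.0, tol=1e-12):
    """Invert the monotone function eps(z) by bisection; valid for eps < fermi_eps(z_max)."""
    lo, hi = 0.0, z_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fermi_eps(mid) < eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is preferred here over inverting the truncated power series term by term, precisely because the latter fails at $z\simg 1.5$, as discussed above.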
The expansion of $z(\varepsilon)$ in powers of $\varepsilon$ is then inserted
into Eq.~(\ref{Pid-1}).
At small values,
$\varepsilon_i \ll 1$, the
expansion of the pressure in
powers of $\varepsilon_i$
is rapidly
convergent.
This expansion converges well
to the exact
(polylogarithmic) function results (\ref{Pid-1}) and (\ref{nid-1}).
Its convergence is the faster the smaller $\varepsilon_i$ is, so that
the first few terms
already provide a good approximation of the quantum
statistics
effects.
Keeping
the first few
terms (e.g., $k_{\rm max}=4$)
in the
power series of Eq.~(\ref{Liexp}),
one obtains from Eqs.~(\ref{Pid-1}) and (\ref{nid-1}) the classical gas
result, $P_i=n_iT$,
and the leading
lowest-order corrections due to the quantum statistics effects:
\be\label{Pid-n}
P^{\rm id}_i(T,n_i)= n_i T \left[1 +
e_i - c_2 e_i^2 - c_3 e_i^3
+ \mbox{O}(e^4_i)\right]~,
\ee
where
$c^{}_2=4[16/(9\sqrt{3})-1] \cong 0.106$~,
$c_3=4(15+9\sqrt{2}-16\sqrt{3})/3\cong 0.0201$~,
and so on.
For brevity, we will refer to
the linear and quadratic
$\varepsilon_i$ terms in
Eq.~(\ref{Pid-n})
as the first- and second-order
quantum
statistics corrections.
Equation (\ref{Pid-n}) demonstrates explicitly the deviation of the quantum
ideal-gas pressure
from
its classical
value:
the Fermi statistics corrections
lead to an
increase of the classical pressure,
while the Bose statistics
yields its decrease.
This is often interpreted \cite{LLv5} as an effective
Fermi `repulsion'
and Bose `attraction' between
particles.
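The coefficients $c_2$ and $c_3$ and the accuracy of the expansion (\ref{Pid-n}) can be verified directly; the Python sketch below (our illustration) compares the truncated expansion with the exact ratio $P^{\rm id}_i/(n_iT)=\mbox{Li}_{5/2}(\theta_i z)/\mbox{Li}_{3/2}(\theta_i z)$ following from Eqs.~(\ref{Pid-1}) and (\ref{nid-1}):

```python
import math

# Coefficients of the expansion (Pid-n); closed forms as given in the text.
C2 = 4.0 * (16.0 / (9.0 * math.sqrt(3.0)) - 1.0)                        # ~0.106
C3 = 4.0 * (15.0 + 9.0 * math.sqrt(2.0) - 16.0 * math.sqrt(3.0)) / 3.0  # ~0.0201

def li(nu, x, k_max=80):
    """Truncated polylogarithm series, Eq. (Liexp); valid for |x| < 1."""
    return sum(x**k / k**nu for k in range(1, k_max + 1))

def pressure_ratio_exact(z, theta):
    """P_id/(n_id T) = Li_{5/2}(theta z) / Li_{3/2}(theta z), Eqs. (Pid-1), (nid-1)."""
    return li(2.5, theta * z) / li(1.5, theta * z)

def pressure_ratio_series(z, theta):
    """Expansion (Pid-n) with e = -theta*eps = -Li_{3/2}(theta z)/(4 sqrt(2))."""
    e = -li(1.5, theta * z) / (4.0 * math.sqrt(2.0))
    return 1.0 + e - C2 * e**2 - C3 * e**3
```

At $z=0.2$ the two expressions agree to better than $10^{-5}$, and the exact ratio is above unity for fermions and below unity for bosons, in line with the effective Fermi `repulsion' and Bose `attraction'.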
\section{Quantum statistics effects with the van der Waals inter-particle
interaction
}\label{sec-3}
Recently, the
vdW equation of state
was extended by taking into account the effects of quantum statistics
for nuclear matter
in Ref.~\cite{marik}.
The pressure function of the quantum vdW (QvdW) model
for a one-component
system was presented
in that paper
as
\bea\label{PQvdW}
& P(T,n)=P_{\rm id}\left[T, n_{\rm id}(T,\mu^*)\right]
- an^2~, \\
& n_{\rm id}(T,\mu^*)~=~ \frac{n}{1-bn}~,\label{nvdw}
\eea
where $P_{\rm id}$ and $n_{\rm id}$ are respectively given by Eqs.~(\ref{Pid}) and
(\ref{nid}).
The modified chemical potential, $\mu^*$, is the solution of
a transcendental
equation;
see more details in Ref.~\cite{marik}
and Appendix \ref{appA}, as applied
to a one-component system.
Following Ref.~\cite{FMG19}, we introduce
a small quantum statistics
parameter $\delta$ for the
expansion of the pressure $P(T,n)$, accounting for the vdW interaction
in terms of the
parameters $a$ and $b$,
\be\label{del}
\delta \equiv -\frac{\theta \varepsilon}{1-bn}=
-\theta\, \frac{ \hbar^3\,\pi^{3/2}~n}{2\,g\,(1-bn)(mT)^{3/2}}~,
\ee
where $\theta$ was defined above for different statistics.
Both first and second quantum statistics corrections over $\delta$
to the
vdW model
will be presented below.
Expanding the pressure component
$P_{\rm id}\left[T, n_{\rm id}(T,\mu^*)\right]$
in Eq.~(\ref{PQvdW})
over the small parameter $\delta$ [Eq.~(\ref{del})],
and using Eqs.~(\ref{Pid-n}) and (\ref{nvdw}),
one obtains
\be\label{Pvdw-n}
P(T,n) = \frac{nT}{1-bn}\left[1+\delta
-c^{}_2 \delta^2+
\mbox{O}\left(\delta ^3\right) \right]
-a\,n^2,
\ee
where $c^{}_2$ is the same small numerical coefficient
as in Eq.~(\ref{Pid-n}). We proved \cite{FMG19} that
at small $|\delta|$
the expansion of the pressure
in powers of $\delta$ rapidly converges
to the exact results. Therefore,
the first few
terms already provide
a good approximation.
A new point of
our consideration is the analytical estimation
of the
quantum statistics effects
and a further study of the convergence of the results,
including the second order in $\delta$.
Similarly to the ideal gases, the quantum corrections in
Eq.~(\ref{Pvdw-n})
increase
with the particle number density $n$ and decrease
with the system temperature $T$,
particle mass $m$, and degeneracy factor $g$. A new feature of
quantum statistics
effects in the system of particles with the
vdW
interaction is the additional
factor $(1-bn)^{-1}$ in the
correction $\delta$ [Eq.~(\ref{del})].
Thus, the
quantum statistics effects
become stronger
because of the repulsive
interaction between particles.
The vdW model, both in its classical form and in its QvdW
extension (\ref{PQvdW}) and (\ref{nvdw}),
describes the first order liquid-gas
phase transition. The critical point (CP) of this transition satisfies
the following
equations \cite{LLv5}:
\be\label{CP-0}
\left(\frac{\partial P}{\partial n}\right)_T = 0~,~~~~
\left(\frac{\partial^2 P}{\partial n^2}\right)_T=0~.
\ee
Using Eq.~(\ref{Pvdw-n}) in the first
and second approximations
over $\delta$,
one derives
from Eq.~(\ref{CP-0})
a
system of two equations
for the CP
parameters $n_c$ and $T_c$
at the
corresponding order.
The solutions of this
system, in the first- and second-order approximations
over $\delta$, have
the form:
\bea
& T_c^{(1)} ~ \cong
T_c^{(0)}\left(1 - 2 \delta^{}_0 \right)
~,\nonumber \\
& n_c^{(1)} \cong n^{(0)}_c\left(1 - 2\delta^{}_0 \right)
~, \label{nc-1}
\eea
and
\bea
& T_c^{(2)} ~ \cong
T_c^{(0)}\left(1 - 2 \delta^{}_0 + \frac{4}{3}\;\delta_0^2 \right)
~,\nonumber \\
& n_c^{(2)} \cong n^{(0)}_c\left(1 - 2\delta^{}_0 + 4.62\;\delta_0^2 \right)
~. \label{nc-2}
\eea
In Eqs.~(\ref{nc-1}) and (\ref{nc-2}),
the values $ T_c^{(0)}$ and $ n_c^{(0)}$ are the CP parameters of the classical
vdW model; see
Eq.~(\ref{CP}).
They are
defined by Eq.~(\ref{CP-0}) and the vdW equation of state
at the zero approximation [see Eq.~(\ref{Pvdw-n}) at
$\delta=0$].
The parameter $\delta^{}_0$
in Eqs.~(\ref{nc-1}) and
(\ref{nc-2}) is given by Eq.~(\ref{del}),
taken at the CP
of the zero-order approximation (\ref{CP}),
i.e. at $n=n_c^{(0)}$ and $T=T_c^{(0)}$.
For simplicity,
the cumbersome exact coefficient in Eq.~(\ref{nc-2}) is
approximated by the number 4.62.
Substituting the results of Eqs.~(\ref{nc-1}) and (\ref{nc-2})
for the
corresponding critical temperature, $T_c^{(j)}$, and density,
$n_c^{(j)}$, where $j=1$ and $2$,
into the equation of state
[Eq.~(\ref{Pvdw-n})] at a given
perturbation order, one can calculate the
CP pressure $P_c^{(j)}$
at the same order.
Notice that the temperature $ T_c^{(1)}$ and density $ n_c^{(1)}$ are decreased for Fermi and increased for Bose particles
with respect to $ T_c^{(0)}$ and $ n_c^{(0)}$.
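A minimal numerical sketch (ours) illustrates the size of these shifts for nuclear matter, using the classical vdW critical point $T_c^{(0)}=8a/(27b)$, $n_c^{(0)}=1/(3b)$ and the parameters of Eq.~(\ref{ab}) below, with $\hbar c=197.327$~MeV$\cdot$fm restored in Eq.~(\ref{del}):

```python
import math

# Nuclear-matter vdW parameters [Eq. (ab) below] and nucleon properties.
a, b = 329.8, 3.35            # MeV fm^3, fm^3
m, g = 938.0, 4.0             # MeV, degeneracy
hbarc = 197.327               # MeV fm; restores hbar = c = 1 units

# Classical vdW critical point: T0 = 8a/(27b), n0 = 1/(3b).
T0 = 8.0 * a / (27.0 * b)
n0 = 1.0 / (3.0 * b)

# Parameter delta of Eq. (del) at the zero-order critical point (Fermi, theta = -1).
delta0 = hbarc**3 * math.pi**1.5 * n0 / (2.0 * g * (1.0 - b * n0) * (m * T0)**1.5)

# Corrected critical temperatures, Eqs. (nc-1) and (nc-2).
Tc1 = T0 * (1.0 - 2.0 * delta0)
Tc2 = T0 * (1.0 - 2.0 * delta0 + 4.0 / 3.0 * delta0**2)
```

The resulting $\delta_0\approx 0.18$ lowers the critical temperature from $T_c^{(0)}\approx 29$~MeV to $T_c^{(1)}\approx 19$~MeV, while the positive second-order term partially compensates this shift.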
\section{
The
Skyrme potential model with quantum statistics corrections}\label{sec-4}
The pressure function of the quantum
Skyrme mean-field (QSMF) model
\cite{satarov}, after some transformations, can be presented
as
\be\l{PQSkyr}
P_{sk,i}(T,n_{i})=P^{\rm id}_ {i}\left(T, n_{i}\right)
- a^{}_{sk,i}n_{i}^2 + b^{}_{sk,i}n_{i}^{\gamma +2}~,
\ee
where $P^{\rm id}_ {i}$ is given by Eq.~(\ref{Pid});
$a^{}_{sk,i}$, $b^{}_{sk,i}$, and $\gamma$ are
parameters of the
QSMF parametrization.
The
index $i$ denotes, e.g.,
nucleons $N$ or $\alpha$ particles
($i=\{N,\alpha\}$).
The QSMF parameters
are chosen by fitting the
properties of
one-component nucleon or $\alpha$ matter at
temperature $T=0$.
Within the QSMF
model, one can
consider the critical points for a
first-order liquid-gas phase transition
for pure nucleon ($i=N$) or $\alpha$ ($i=\alpha$) matter, separately.
The critical point (CP) for
the QSMF model obeys
the
same equation (\ref{CP-0}) but
with the
quantum Skyrme mean-field pressure,
$P_i=P_{sk,i}(T,n_{i})$ [Eq.~(\ref{PQSkyr})] for each of components
$i$,
\be\l{CPSkyr}
\left(\frac{\partial P_{i}}{\partial n_{i}}\right)_T = 0~,~~~~
\left(\frac{\partial^2 P_{i}}{\partial n_{i}^2}\right)_T=0~.
\ee
For the calculation of the first-order quantum statistics
corrections over the small parameter $|e_i|$
[see Eq.~(\ref{eps})]
to the
QSMF pressure $P_{sk,i}(T,n_{i})$ [Eq.~(\ref{PQSkyr})],
one obtains approximately from Eq.~(\ref{Pid-n})
the following expression for the pressure component
$ P^{\rm id}_i(T,n_i)$
of Eq.~(\ref{PQSkyr}):
\be\l{PSkyr1ord}
P^{\rm id}_i(T,n_i) = n_i\,T\,\left(1 + e_{i}\right)~.
\ee
Then, the system of two equations
[Eq.~(\ref{CPSkyr}) for a given $i$]
for the CP density and temperature
values, $n_{sk,c}$ and $T_{sk,c}$,
up to the same first order over $e_i$, is reduced to
\bea\l{Skyr-cp-1}
&\!\!T_{sk}(1+2e_i)\!-\!2a^{}_{sk,i}n^{}_{sk}
\!+\!(\gamma +2)b^{}_{sk,i}n_{sk}^{\gamma +1}\!=\!0~,~\nonumber\\
&2T_{sk}e_i\!-\!2a^{}_{sk,i}n^{}_{sk}
\!+\!(\gamma +2)(\gamma +1)b^{}_{sk,i}n_{sk}^{\gamma +1}\!=0\!~.
\eea
Solving the system [Eq.~(\ref{Skyr-cp-1})] of equations
for the CP parameters, in the first-order approximation
over $e_i$,
one obtains
\bea
& T_{sk,c}^{(1)} ~ \cong
T_{sk,c}^{(0)}\left(1 - 2 e^{}_{i,0}\right)
~,\nonumber \\
& n_{sk,c}^{(1)} \cong n^{(0)}_{sk,c}
\left(1 - \frac{2e^{}_{i,0}T_{sk,c}^{(0)}}{\gamma(\gamma+1)(\gamma+2)b^{}_{sk,i}\;
\left[n^{(0)}_{sk,c}\right]^{\gamma+1}} \right)
~. \label{SkyrCP-1}
\eea
In Eq.~(\ref{SkyrCP-1}), the
temperature $T_{sk,c}^{(0)}$ and density $n_{sk,c}^{(0)}$ are the
solutions
of equations
[see Eq.~(\ref{Skyr-cp-1})]
at zero perturbation order, $e_i=0$,
\bea\l{SkyrCP-0}
& T_{sk,c}^{(0)}=\frac{2\gamma a^{}_{sk,i}}{\gamma+1}\left[\frac{2a^{}_{sk,i}}{b^{}_{sk,i}(\gamma+1)(\gamma+2)}\right]^{1/\gamma} ~,\nonumber ~~~\\
& n_{sk,c}^{(0)}=\left[\frac{2a^{}_{sk,i}}{b^{}_{sk,i}(\gamma+1)(\gamma+2)}\right]^{1/\gamma}~;
\eea
see also Ref.~\cite{satarov0} where another Skyrme parametrization
for the critical temperature and particle number density at zero
quantum statistics
corrections was used.
The parameters of the Skyrme parametrization, $a^{}_{sk,i}$ and $b^{}_{sk,i}$,
and their dimensions
are given in the captions of Tables \ref{table-2} and \ref{table-3}.
The value $e^{}_{i,0}$ in Eq.~(\ref{SkyrCP-1})
is defined
by Eq.~(\ref{eps}) at $T=T_{sk,c}^{(0)}$ and $n=n_{sk,c}^{(0)}$ [Eq.~(\ref{SkyrCP-0})].
For the CP pressure at $e_i=0$, from Eqs.~(\ref{PQSkyr}),
(\ref{PSkyr1ord}) and (\ref{SkyrCP-0}) one finds
\be\label{SkyrCP-0P}
P_{sk,c}^{(0)}=n_{sk,c}^{(0)} T_{sk,c}^{(0)}
-a^{}_{sk,i}\left[n_{sk,c}^{(0)}\right]^2
+b^{}_{sk,i}\left[n_{sk,c}^{(0)}\right]^{\gamma+2}.
\ee
The first-order pressure, $P_{sk,c}^{(1)}$,
can be
straightforwardly
calculated from Eq.~(\ref{PQSkyr}) by using
Eq.~(\ref{PSkyr1ord}) and the expressions for $T_{sk,c}^{(1)}$ and
$n_{sk,c}^{(1)}$ [Eq.~(\ref{SkyrCP-1})].
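The zero-order solution (\ref{SkyrCP-0}) can be checked by direct substitution into the CP conditions (\ref{CPSkyr}) taken in the Boltzmann limit, $P=nT-a^{}_{sk}n^2+b^{}_{sk}n^{\gamma+2}$; the sketch below (ours) does this for illustrative placeholder values of $\gamma$, $a^{}_{sk,i}$, and $b^{}_{sk,i}$, chosen only to give nuclear-like scales and not to be confused with the fitted constants of Tables \ref{table-2} and \ref{table-3}:

```python
# Placeholder Skyrme-like parameters (demo values only, NOT the fitted constants).
gamma = 1.0 / 6.0
a_sk, b_sk = 300.0, 348.4     # MeV fm^3 and MeV fm^(3*gamma + 3)

# Zero-order critical point, Eq. (SkyrCP-0).
n0 = (2.0 * a_sk / (b_sk * (gamma + 1.0) * (gamma + 2.0)))**(1.0 / gamma)
T0 = 2.0 * gamma * a_sk * n0 / (gamma + 1.0)

# Both CP conditions (CPSkyr) for P = nT - a n^2 + b n^(gamma+2) must vanish at (n0, T0).
dP = T0 - 2.0 * a_sk * n0 + (gamma + 2.0) * b_sk * n0**(gamma + 1.0)
d2P = -2.0 * a_sk + (gamma + 2.0) * (gamma + 1.0) * b_sk * n0**gamma
```

With these demo values the critical density and temperature come out near $0.1$~fm$^{-3}$ and $8.6$~MeV, and both derivatives vanish to machine precision, confirming Eq.~(\ref{SkyrCP-0}).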
\section{
Quantum statistics effects in the QvdW
model for $N $ and $\alpha$ particles system}
\label{sec-5}
For an infinite system of a mixture of different Fermi and Bose
particles, e.g.,
nucleons and $\alpha$ particles, one can present a simpler model
based on the vdW forces, as a continuation of Sec.~\ref{sec-3}.
For this aim, we present the results for
the pressure function of the vdW model
with the quantum statistics
ingredients of the QvdW model
\cite{vova},
\bea\l{PQvdWM}
&P(T,n)=P^{\rm id}_N(T,\mu^\ast_N)
+ P^{\rm id}_\alpha(T,\mu^\ast_\alpha)
\nonumber\\
&-a^{}_{NN} n^2_N -2 a^{}_{N\alpha}n^{}_Nn^{}_\alpha -
a^{}_{\alpha\alpha}n^2_\alpha~,
\eea
where $P^{\rm id}_i$ is the pressure of an ideal
($i=N,\alpha$)
gas [Eq.~(\ref{PNal})].
The chemical potential, $\mu_i^\ast$, in Eq.~(\ref{PQvdWM}) is
modified, as
shown in Appendix \ref{appA}, through the transcendental
system of equations (\ref{nNals}) and (\ref{nN})
within the QvdW model in terms of the particle number
densities $n_i$.
Following Ref.~\cite{vova}, one can fix the model
inter-particle interaction parameters $a_{ij}$ and
$b_{ij}$.
Then, it is convenient to introduce
new volume-exclusion parameters, $\tilde{b}_{ij}=2 b_{ii}b_{ij}/(b_{ii}+b_{jj})$,
where
$b_{ij}=\frac{2\pi}{3} (R_i+R_j)^3$, and $R_i$ is the hard-core radius of the $i$th particle species of a
multicomponent system; see Refs.~\cite{GK99,vova}.
Using the ground state
properties of the corresponding system components
(see, e.g., Ref.~\cite{marik}), one has
\bea\l{ab}
&a=a^{}_{NN}=329.8\, \mbox{MeV} \cdot \mbox{fm}^3 ~,\nonumber\\
&b=b^{}_{NN}=\tilde{b}^{}_{NN}=3.35~\mbox{fm}^3 ~.
\eea
Again, these values are very close to those
found in Refs.~\cite{marik,vova}.
Small differences appear because of the
non-relativistic formulation used in the
present studies.
For
simplicity, for other
attractive inter-particle interaction components we will
put
\cite{vova}
\be\l{aij}
a^{}_{N\alpha}=a^{}_{\alpha N}=a_{\alpha\alpha}=0~.
\ee
For the repulsive-interaction components
$\tilde{b}^{}_{ij}$ of the vdW
exclusion-volume constants
we will use those of Ref.~\cite{vova}:
\bea\l{bij}
& \tilde{b}^{}_{\alpha\alpha}=16.76~\mbox{fm}^3, \nonumber\\
&\tilde{b}{}_{\alpha N}=13.95~\mbox{fm}^3,~~~\tilde{b}{}_{N\alpha}=
2.85~\mbox{fm}^3~.
\eea
Notice that
the system of $N$ and $\alpha$ particles was studied
in Ref.~\cite{satarov} within the quantum
Skyrme mean-field model
(Sec.~\ref{sec-4}).
However, the authors of that article
criticized the
QvdW approach
because the Bose condensation cannot be described within the
QvdW model. This phenomenon is beyond the scope of the present study
and will be worked out
within our analytical approach, based
on the QSMF model of the previous section,
in a forthcoming work.
In the Boltzmann approximation,
i.e., at $\theta=0$ in Eqs.~(\ref{Pid}) and (\ref{nid}), the
quantum vdW model
reduces to the classical vdW one
\cite{vova},
\be\l{vdW}
P_i=\frac{n_iT}{1-n_j\tilde{b}_{ij}} - a_{ij}n_in_j~,
\ee
where summation over the repeated subscript $j$ is implied.
Note that the classical
vdW approach (\ref{vdW}) is further reduced
to the ideal
classical gas, $P_i=n_iT$,
at $a_{ij}=0$ and $b_{ij}=0$.
At $a_{ij}=0$ and $b_{ij}=0$, the QvdW approach
turns into that of the quantum ideal gas
[Eqs.~(\ref{Pid}) and (\ref{nid})].
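With the parameters of Eqs.~(\ref{ab}), (\ref{aij}), and (\ref{bij}), the classical two-component pressure (\ref{vdW}) can be sketched in Python with the summation over $j$ made explicit (our illustration; the sample densities and temperature in the check are arbitrary):

```python
# Interaction parameters for the N + alpha mixture, Eqs. (ab), (aij), (bij).
a = {('N', 'N'): 329.8, ('N', 'a'): 0.0,           # MeV fm^3
     ('a', 'N'): 0.0, ('a', 'a'): 0.0}
bt = {('N', 'N'): 3.35, ('N', 'a'): 2.85,          # fm^3
      ('a', 'N'): 13.95, ('a', 'a'): 16.76}

def pressure(n, T):
    """Total classical vdW pressure sum_i P_i of Eq. (vdW);
    n = {'N': n_N, 'a': n_alpha} in fm^-3, T in MeV."""
    P = 0.0
    for i in n:
        excluded = 1.0 - sum(bt[(i, j)] * n[j] for j in n)
        P += n[i] * T / excluded - sum(a[(i, j)] * n[i] * n[j] for j in n)
    return P
```

Setting $n_\alpha=0$ reduces the expression to the one-component vdW pressure of Sec.~\ref{sec-3}, which provides a simple consistency check.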
As mentioned above, the first few
quantum statistics corrections
of the QvdW model
will be considered.
Expanding the pressure component
$P^{\rm id}_i\left(T, \mu^\ast_i\right)$
[Eq.~(\ref{PNal})]
over the
small parameters $|e^\ast_i|$
($i=N,\alpha$),
given by Eq.~(\ref{eps}) with
$n_i$ replaced
by $n_i^\ast$, which will be used below, at first order in $e^\ast_{i}$
one obtains
\be\l{Pvdw-nA}
P^{\rm id}_i(T,n^\ast_i) = n^\ast_i\,T\,\left(1+e^\ast_{i}\right)~.
\ee
This expression is similar to those of
Eq.~(\ref{Pid-n}) at the first order
over $e_i$.
As found below,
at small $e^\ast_i$,
the expansion of the pressure
in powers of $e^\ast_i$
rapidly converges to the exact results, and
even the first term
already provides a good approximation.
Our results, Eqs.~(\ref{PQvdWM}) and (\ref{Pvdw-n}),
for the equations of state
take into account the particle interaction effects,
in contrast to Eq.~(\ref{Pid-n})
discussed in Refs.~\cite{LLv5,BR75}
(cf.\ Sec.~\ref{sec-3}).
A new point of
our consideration now is the analytical estimation
of the quantum statistics
effects
in a mixed system of
interacting fermions and bosons.
Similarly to the ideal gases, the quantum statistics
corrections in
Eqs.~(\ref{PQvdWM})
and (\ref{Pvdw-n})
increase the pressure for Fermi and decrease it for Bose particles;
their magnitude grows with the particle number density $n_i$
and decreases with the system
temperature $T$,
particle mass $m_i$, and degeneracy factor $g_i$.
As in Ref.~\cite{vova}, we introduce
the ``mass fraction'' of
the $\alpha$-particle impurity,
\be\l{Xal}
X_\alpha= \frac{4 n_\alpha}{n_N +4 n_\alpha} \equiv
\frac{4 n_\alpha}{n}~,
\ee
where $n$ is the baryon particle-number density,
$n=n^{}_N+4 n_{\alpha}$.
According to the numerical solutions in Ref.~\cite{vova},
for the parameters of Eqs.~(\ref{ab}), (\ref{aij}), and (\ref{bij}),
the value of
$X_\alpha$
[Eq.~(\ref{Xal})] obtained
from the thermodynamical equilibrium of our
mixed system is approximately
$X_\alpha\approx 0.013$.
As shown in Ref.~\cite{roma1}, the critical point
of a similar two-component (neutron-proton) system, as a
function of $X_i$ (where $i$ denotes protons), converges smoothly with
decreasing $X_i$ to that of the
one-component (neutron) system.
In line with these results,
one may
similarly assume a small change of the critical point with the
small $\alpha$-particle impurity, $X_\alpha$, mentioned above.
Therefore, for simplicity, we will exploit
below the smallness of the $\alpha$-particle
contribution, $X_\alpha$, in our approximate CP
calculations with Eq.~(\ref{CP-0}), which was applied in Secs.~\ref{sec-3}
and \ref{sec-4} to one-component systems.
Taking this
estimate as a simple exemplary case,
one can easily find $n^\ast_N$ and
$n^\ast_\alpha$ from
Eq.~(\ref{nN}).
Then, using Eqs.~(\ref{Xal})
and (\ref{bij}), one can present them in the following approximate form:
\be\l{nistar}
n^\ast_N\approx\frac{r^{}_1 n}{1- b^{}_1 n},\quad
n^\ast_\alpha\approx\frac{r^{}_2 n}{1- b^{}_2 n}~,
\ee
where
\be\l{tbest}
r^{}_1\approx 1-X_\alpha \approx 0.987, \quad r^{}_2\approx
\frac{X_\alpha}{4}\approx 0.0033~.
\ee
Here, $b^{}_1$ and $b^{}_2$
are coefficients related approximately
to the repulsive-interaction
constants $\tilde{b}_{ij}$ [$i,j=N,\alpha$; see Eq.~(\ref{bij})].
As functions of $\tilde{b}_{ij}$, these coefficients
can be evaluated as
\be\l{tbest1}
b^{}_1\approx 3.29\,\mbox{fm}^3~,
\quad b^{}_2\approx 2.81 \,\mbox{fm}^3~.
\ee
For the modified attractive-interaction parameter
$a_1$, one can use
\be\l{tbest2}
a^{}_1= r^2_1\, a^{}_{NN}\approx 321.3 \,\mbox{MeV}\cdot \mbox{fm}^3~.
\ee
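These numbers can be cross-checked directly, as in the sketch below. Note that the value $a^{}_{NN}\approx 329.8$~MeV$\cdot$fm$^3$ is inferred here from $a^{}_1=r_1^2\,a^{}_{NN}$, since Eq.~(\ref{ab}) is not reproduced in this section:

```python
X_ALPHA = 0.013                    # alpha "mass fraction", Eq. (Xal)
r1 = 1.0 - X_ALPHA                 # Eq. (tbest)
r2 = X_ALPHA / 4.0
a_NN = 329.8                       # MeV*fm^3, inferred value (assumption, see lead-in)
a1 = r1 ** 2 * a_NN                # Eq. (tbest2)

B1, B2 = 3.29, 2.81                # fm^3, Eq. (tbest1)
n = 0.07                           # fm^-3, a sample baryon density near the CP
n_N = r1 * n / (1.0 - B1 * n)      # effective densities of Eq. (nistar)
n_alpha = r2 * n / (1.0 - B2 * n)
print(r1, r2, a1, n_N, n_alpha)
```

The excluded-volume denominators enhance the effective nucleon density above $r_1 n$, while the $\alpha$ contribution stays two orders of magnitude smaller.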
Using Eqs.~(\ref{PQvdW}), (\ref{Pvdw-nA}) and (\ref{nistar}),
for the total system pressure
$P(T,n)$
one arrives at
\bea\l{PQvdw-na}
\!\!P(T,n)=
T \frac{r^{}_1 n \left(1\!+\!\rho^{}_1\right)}{1-b^{}_1 n}+
T \frac{r^{}_2 n\left(1-\rho^{}_2\right)}{1-b^{}_2 n}-a^{}_1n^2,
\eea
where
\be\l{delta}
\rho^{}_{1}=\frac{D_{N}r^{}_1 n}{1-b^{}_1 n}, \quad
\rho^{}_{2}=\frac{D_{\alpha}r^{}_2 n}{1-b^{}_2 n},
\ee
$D_{N}$ and $D_{\alpha}$ are the constants given by Eq.~(\ref{eps}), and
$r^{}_1$ and $r^{}_2$ are given by
Eq.~(\ref{tbest}).
Note that the expression for the pressure,
Eq.~(\ref{PQvdw-na}), in
the case of $r^{}_2=0$ and $r^{}_1=1$ is exactly the same as that for
pure
nuclear matter
presented in Ref.~\cite{FMG19}
(see Sec.~\ref{sec-3}).
A new
feature of the quantum statistics
effects in the system of particles with the vdW interactions is the additional
factors
$(1-b^{}_{1}n)^{-1}$ and $(1-b^{}_{2}n)^{-1}$ in the perturbation parameters.
Thus, the quantum statistics
effects become stronger
because of the repulsive interactions between particles.
\begin{figure*}
\begin{center}
\includegraphics[width=8.0cm,clip]{fnT.eps}
\includegraphics[width=8.1cm,clip]{enTC.eps}
\end{center}
\caption{
Contour plots for the first-order fugacity $z(n,T)$ ($k_{\rm max}=2$)
and corresponding parameter $\varepsilon(n,T)$
for nucleon matter in the plane of density
$n$ and temperature $T$
are shown in the
left and right panels, respectively.
The red line in the left panel shows the zero-entropy line;
the white area corresponds to the nonphysical region where
the entropy of the ideal gas is
negative.
The critical points of our first-order and the zeroth-order (standard vdW)
approximations
for nuclear matter at the parameters $a$ and $b$ [Eq.~(\ref{ab})]
are shown in the right panel by
the red [see Eq.~(\ref{nc-1}), Table \ref{table-1}, and Ref.~\cite{FMG19}]
and the black (vdW)
points, respectively. The blue point in the same plot
presents the numerical result for the critical point
(Ref.~\cite{marik} and Table \ref{table-1}).
}
\label{fig2}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=8.0cm,clip]{Fig3a_PvT-fin.eps}
\includegraphics[width=8.0cm,clip]{Fig3b-PnT-fin.eps}
\end{center}
\caption{
Pressures $P$ as functions of the reduced volume $v$ (a)
and
particle number density
$n$
(b) at different temperatures $T$ (in units of the
critical value $T_c$),
at first order in the quantum-statistics expansion over $\delta$
[Eq.~(\ref{del})],
for the simplest case of
symmetric nucleon matter. The critical point is shown by the
closed circle; see text for details.
The dotted line shows the second-order
approximation over $\delta$;
see Ref.~\cite{FMG19} and Eq.~(\ref{Pvdw-n}).
The horizontal lines in panel (a) are plotted using the Maxwell area
law, with the corresponding lines shown in panel (b).
The unstable and metastable parts of the isothermal lines are presented
by dashed and dash-dotted lines, respectively.
Other closed dots show schematically the binodal
boundary of the two-phase coexistence curve.
}
\label{fig3}
\end{figure*}
The vdW model, both in its classical form (\ref{vdW}) and in its QvdW
extension
[Eqs.~(\ref{PQvdW}) and (\ref{PQvdw-na})],
describes the first-order
liquid-gas
phase transition. As the value of $X_\alpha$ used in our derivations is
very small, the
critical points in the considered approach can be
determined approximately, as mentioned above,
by the
same conditions, Eq.~(\ref{CP-0}).
Using Eq.~(\ref{PQvdw-na}) at first order
in the quantum-statistics corrections,
one derives
from Eq.~(\ref{CP-0})
the
system of two equations
for the CP
parameters $n_c$ and $T_c$
at the same first order:
\vspace{0.3cm}
\begin{table*}[pt]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Critical point
& vdWM
& $N$ 1st-order & $N$ 2nd-order & $N$ numerical & $N+\alpha$ 1st-order & $N+\alpha$ numerical\\
parameters & Eq.~(\ref{CP})
& Eq.~(\ref{nc-1}) & Eq.~(\ref{nc-2}) & full QvdW & Eq.~(\ref{cp-1ab})
& full QvdW \\
\hline
$T_c$~[MeV] & ~29.2~& ~19.0~& ~20.0
& ~19.7~&~19.4~&~19.9\\
\hline
$n_c$~[fm$^{-3}$] &0.100 & 0.065 & 0.079
& 0.072 &~0.072~&~0.073\\
\hline
$P_c$~[MeV$\cdot$ fm$^{-3}$] & 1.09 & 0.48 & 0.56
& 0.52 &~0.51~&~0.56\\
\hline
\end{tabular}
\vspace{0.2cm}
\caption{{\small
Results for the CP parameters of the van der Waals model
(2nd column), the symmetric nuclear
matter ($N$) ($g^{}_N=4, ~m^{}_N=938~\mbox{MeV}$, 3rd, 4th and 5th columns), and
the mixed $N+\alpha$ matter
($g^{}_\alpha=1, ~m^{}_\alpha=3727~\mbox{MeV}$,
6th and 7th columns). Numerical results
obtained within the full
QvdW model in Refs.~\cite{marik} and \cite{satarov}
are shown in the 5th and 7th columns,
respectively.
}}
\label{table-1}
\end{center}
\end{table*}
\vspace{-0.7cm}
\bea\l{cp-1ab}
&2na^{}_1 = \frac{T r_1\,\left(1+2\rho_1\right)}{\left(1-b^{}_1n\right)^2}
+\frac{T r_2\,\left(1-2\rho_2\right)}{\left(1-b^{}_2n\right)^2}~,
\nonumber\\
&a^{}_1 = \frac{T r_1 b_1}{\left(1-b^{}_1n\right)^3}
\left[1+\rho^{}_1\frac{(1+2b_1n)}{b^{}_1n}\right]\,\nonumber\\
&+\frac{T r_2 b_2}{\left(1-b^{}_2n\right)^3}
\left[1-\rho^{}_2\frac{(1+2b_2n)}{b^{}_2n}\right]~.
\eea
Note that
Eq.~(\ref{cp-1ab}) for the CP
in
the case of $r^{}_1=1$, $r^{}_2=0$, and $b^{}_1=b^{}_{NN}$ is exactly the same as
that for pure
nucleon
matter derived in Ref.~\cite{FMG19}
(see Sec.~\ref{sec-3}).
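The critical point following from Eq.~(\ref{cp-1ab}) can also be located directly from the isotherms of Eq.~(\ref{PQvdw-na}), by finding the temperature at which the minimal slope $\partial P/\partial n$ first vanishes. The sketch below does this by bisection. It assumes the standard Landau--Lifshitz form of the first quantum-statistics coefficient, $D_i=\pi^{3/2}\hbar^3/[2 g_i (m_i T)^{3/2}]$ (positive for fermions, entering with the opposite sign for bosons), since Eq.~(\ref{eps}) is not reproduced in this section; the other parameter values are those of Eqs.~(\ref{tbest})--(\ref{tbest2}):

```python
import math

HBARC = 197.327                   # MeV*fm

def d_quantum(T, m, g):
    # assumed Landau-Lifshitz first quantum-statistics coefficient (see lead-in)
    return math.pi ** 1.5 * HBARC ** 3 / (2.0 * g * (m * T) ** 1.5)

R1, R2 = 0.987, 0.0033            # Eq. (tbest)
B1, B2 = 3.29, 2.81               # fm^3, Eq. (tbest1)
A1 = 321.3                        # MeV*fm^3, Eq. (tbest2)

def pressure(T, n):
    """First-order QvdW pressure of the N+alpha mixture, Eq. (PQvdw-na)."""
    rho1 = d_quantum(T, 938.0, 4) * R1 * n / (1.0 - B1 * n)   # nucleons: Fermi (+)
    rho2 = d_quantum(T, 3727.0, 1) * R2 * n / (1.0 - B2 * n)  # alphas: Bose (-)
    return (T * R1 * n * (1.0 + rho1) / (1.0 - B1 * n)
            + T * R2 * n * (1.0 - rho2) / (1.0 - B2 * n)
            - A1 * n * n)

def min_slope(T, n_max=0.24, steps=400):
    """Minimal dP/dn along the isotherm and the density where it occurs."""
    best, n_best, h = float("inf"), None, 1e-5
    for i in range(1, steps):
        n = n_max * i / steps
        slope = (pressure(T, n + h) - pressure(T, n - h)) / (2.0 * h)
        if slope < best:
            best, n_best = slope, n
    return best, n_best

# bisection in T: below T_c the isotherm has a negative-slope (unstable) part
lo, hi = 5.0, 40.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if min_slope(mid)[0] < 0.0 else (lo, mid)
T_c = 0.5 * (lo + hi)
n_c = min_slope(T_c)[1]
P_c = pressure(T_c, n_c)
print(T_c, n_c, P_c)
```

With these assumptions the bisection lands close to the first-order $N+\alpha$ values of Table \ref{table-1} ($T_c\approx 19$~MeV, $n_c\approx 0.07$~fm$^{-3}$), confirming that Eq.~(\ref{cp-1ab}) and the stationary-inflection condition on the isotherms are equivalent.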
\section{Discussion of the results}
\l{sec-disc}
A summary of the results for the CP parameters
is presented for the QvdW model and the Skyrme mean-field
parametrization in Figs.~\ref{fig2} and \ref{fig3} and
Tables \ref{table-1}--\ref{table-3}.
Figure~\ref{fig2} shows
contour plots in the $n-T$ plane, where black lines indicate
$z(n,T)={\rm const}$ [see Fig.~\ref{fig1-fe} and
Eqs.~(\ref{nid-1}) and (\ref{fuga})]
in the left panel and $\varepsilon(n,T)={\rm const}$ [Eq.~(\ref{eps})]
in the right panel, with the constant values written in
white squares. As seen from these plots, all values of
$z \siml 1$ ($z \siml 1.2$)
correspond to
$\varepsilon \ll 1$ ($\varepsilon \siml 0.2$) above the blue regions.
Therefore, together with Fig.~\ref{fig1-fe},
this justifies the use of the expansion
in the small parameter $\varepsilon$,
even when
the fugacity is of the order of one or slightly larger.
In particular,
the critical points
obtained in Ref.~\cite{FMG19} and shown in
Fig.~\ref{fig2} and Table \ref{table-1} belong to
such a region.
\vspace{0.3cm}
\begin{table}[pt]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Critical point & 0th-order & 1st-order & numerical\\
parameters & Eq.~(\ref{SkyrCP-0}) & Eq.~(\ref{SkyrCP-1}) & full QSMF \\
\hline
$T_{sk,c}$~[MeV] & ~20.06~& ~15.1~
& ~15.3~~\\
\hline
$n_{sk,c}$~[fm$^{-3}$] &0.06 & 0.047
& 0.048
\\
\hline
$P_{sk,c}$~[MeV$\cdot$ fm$^{-3}$] & 0.325 & 0.194
& -
\\
\hline
\end{tabular}
\vspace{0.2cm}
\caption{{\small
Results for the CP parameters of the symmetric
nuclear matter in the quantum Skyrme mean-field (QSMF) model
($g=4,~m=938~\mbox{MeV},~\gamma =1/6,~a^{}_{sk,N}
=1.167~\mbox{GeV}\cdot\mbox{fm}^3,~b^{}_{sk,N}
=1.475~\mbox{GeV}\cdot \mbox{fm}^{3 +3\gamma}$).
Numerical results obtained within the full QSMF model
in Ref.~\cite{satarov}
are shown in the 4th column.
}}
\label{table-2}
\end{center}
\end{table}
\vspace{0.3cm}
\begin{table}[pt]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Critical point & 0th-order
& 1st-order & numerical \\
parameters & Eq.~(\ref{SkyrCP-0}) & Eq.~(\ref{SkyrCP-1}) & full QSMF \\
\hline
$T_{sk,c}$~[MeV] & ~9.667~& ~10.198~
& ~10.200~~\\
\hline
$4n_{sk,c}$~[fm$^{-3}$] &0.0353 & 0.037
& 0.037
\\
\hline
$P_{sk,c}$~[MeV$\cdot$ fm$^{-3}$] & 0.023 & 0.025
& -
\\
\hline
\end{tabular}
\vspace{0.2cm}
\caption{{\small
Results for the CP parameters of pure $\alpha$ matter
in the
QSMF model ($g=1,~m=3727~\mbox{MeV},~\gamma =1/6,~a^{}_{sk,\alpha}
=3.831~\mbox{GeV}\cdot \mbox{fm}^3,~ b^{}_{sk,\alpha}
=6.667~\mbox{GeV}\cdot \mbox{fm}^{3 +3\gamma}$).
Numerical results obtained within the full QSMF model
in Ref.~\cite{satarov}
are shown in the 4th column.
}}
\label{table-3}
\end{center}
\end{table}
\vspace{-0.6cm}
Figure~\ref{fig3} shows the isotherms of the pressure
$P$ as functions of the reduced volume $v=1/n$
and the
particle number density $n$ for isotopically
symmetric nuclear matter.
The
first- (and second-) order quantum-statistics
corrections are presented in this figure and Table \ref{table-1}.
The critical point is shown by the
closed circle, found from the
accurate solution of
equations (\ref{CP-0})
[see Eq.~(\ref{nc-1})]
for nucleon matter; see also the closed red circle in Fig.~\ref{fig2}.
The dotted line shows the second-order
approximation
[see Eq.~(\ref{nc-2})
and Eq.~(\ref{Pvdw-n})]
at the same
nuclear matter parameters.
Closed dots present schematically the binodal
boundary of the two-phase coexistence curve in
the transition from the two- to the one-phase region \cite{FMG19}.
Results for the CP parameters obtained by
Eqs.~(\ref{nc-1}) and (\ref{nc-2}),
and by solving the system
of equations, Eq.~(\ref{cp-1ab}),
are presented in Table \ref{table-1}.
These analytical results are close to the numerical
results obtained in Refs.~\cite{marik,vova}.
For the same nucleon-matter case,
the difference between the results of the
vdW
[Eq.~(\ref{CP})] and
QvdW [Eqs.~(\ref{nc-1}) and (\ref{nc-2})] models
in
Table \ref{table-1}
demonstrates
the significant role of the
quantum-statistics effects
for the CP of the symmetric
nuclear matter.
Table \ref{table-1} also shows
a good convergence
for the case of the mixed $N-\alpha$
system with a small $X_\alpha$, given by Eq.~(\ref{Xal}). Even the first-order
corrections are in good agreement with the exact numerical QvdW results;
see Refs.~\cite{marik,vova,FMG19}. Many other examples
were recently considered in Ref.~\cite{VOV-17}.
As seen from
Table \ref{table-1}, the quantum-statistics
effects of the $\alpha$-particle impurity
are, in fact, small, first of all because of the
small relative concentration
$X_\alpha$ of this impurity, according to Eq.~(\ref{Xal}),
which was suggested in
Ref.~\cite{vova}. The size of
these effects would
be rather different
for impurity contributions
$X_\alpha \cong 1$ of the
$\alpha$ particles in the nucleon matter.
As stated above, our analysis can be applied
beyond the vdW model. In fact, similar estimates
of the quantum-statistics effects
have been
carried out
for one of the mean-field
models (Ref.~\cite{AV15} and references therein) in
Sec.~\ref{sec-4}; see Tables \ref{table-2} and \ref{table-3}.
The QSMF
calculations were performed
for $\gamma=1/6$ and the other corresponding parameters from
Ref.~\cite{satarov}, and are presented in these tables.
There is very good agreement between the analytical
results of Eq.~(\ref{SkyrCP-1}),
including the first-order corrections in $e_i$, and the
numerical results obtained in Ref.~\cite{satarov};
see Table \ref{table-2} for nucleon matter and Table \ref{table-3}
for $\alpha$ matter.
Similarly good agreement with the results of Ref.~\cite{satarov}
was obtained for another parameter value,
$\gamma=1$,
which appears in the derivation of the SMF approach
\cite{La81} from
the original Skyrme forces \cite{Sk56,VB72}.
Thus,
the first
order in the small parameter $e_i$ of the pressure expansion,
within the quantum Skyrme mean-field approach as well
as in the QvdW model,
turns out to be sufficient for very good agreement with the
numerical calculations \cite{satarov} of the CP parameters.
It is remarkable that the results obtained
up to the
first-order corrections
reproduce the quantum-statistics
effects
with high accuracy (see Tables \ref{table-1}--\ref{table-3}).
The contribution of higher-order (e.g., second-order)
corrections in the expansion over
$\delta_i$ for
the QvdW model, or over
$e_i$ for the Skyrme mean-field parametrization,
is much
smaller than the first-order correction,
which
shows
a fast convergence
in $\delta_i$ or $e_i$
already at the
first-order terms.
Notice that the
smallness of the
parameters $\delta_i$ is associated with that of
$e_i$, since $\delta_i\propto e_i$.
Therefore, higher-order
corrections due to the quantum-statistics
effects can be neglected
in the main evaluations of the critical-point
values.
\section{Conclusions}
\label{sec-sum}
We derived the critical point temperature,
particle number density, and pressure for the nucleon and $\alpha$ particle
matter within the quantum van der Waals (QvdW) and quantum Skyrme mean-field
(QSMF) models
taking into account the Fermi and Bose statistics corrections. We found their
analytical dependence on the system parameters of particles, such as their
mass, degeneracy, and inter-particle interaction constants.
It is shown that, in order
to determine the equation of state and the critical
point,
accounting for the quantum-statistics and interaction effects,
it is sufficient to keep
only the first term in the pressure expansion
over the small quantum-statistics parameter $e_i$,
where $i$ labels the
baryon species under consideration.
The specific properties of the particles
(their mass and degeneracy
factor) enter the CP values through this small
parameter $e_i$.
Our derivations were carried out
for systems of Fermi or Bose particles in two cases:
for the
QvdW model and the
QSMF parametrization.
In both cases, taking into account already the first-order terms in
expansion of the pressure
over $e_i$
greatly simplifies the form of the equation of state
and solution of this equation
for the critical values of temperature, density, and pressure. The
values of these critical quantities at leading first order
turn out to be very close to those obtained in accurate
numerical calculations
within the full
QvdW and QSMF models.
For relatively small temperatures $T$ and/or
large particle-number densities
$n$,
the quantum statistics parameter,
$|e_i| \propto n_i T^{-3/2}$,
becomes large. In this region of the phase
diagram, the perturbation expansion diverges and, therefore,
the
QvdW and QSMF approaches
should be treated within the full quantum statistical
formulation.
However, as is well known \cite{LLv5}, in the limit of small
temperatures $T$ and/or large
particle densities $n$, the vdW approach fails.
In particular, as shown earlier (Ref.~\cite{satarov}),
the Bose-condensation phenomenon
should be treated within the QSMF model,
in contrast to the vdW approach.
A simple and explicit dependence on the system parameters,
such as the particle mass $m_i$ and degeneracy factor $g_i$,
is demonstrated at the few leading
orders of the
expansion over $e_i$.
Such a dependence is absent within the classical van der Waals
and
Skyrme mean-field models. The quantum-statistics parameter $e_i$
is proportional to $m_i^{-3/2} g_i^{-1}$. Therefore,
the effects of quantum statistics become smaller for
heavier
particles and/or for larger
values of their degeneracy factor.
The quantum statistics corrections to the CP parameters
of the symmetric nuclear
matter appear to be quite significant.
For pure nuclear matter, the value of
$T_c^{(0)} = 29.2$~MeV in the classical vdW model
is decreased dramatically
to the QvdW value $T_c^{(1)}=19.0$~MeV
at the first-order approximation in the
quantum-statistics expansion.
On the other hand,
this approximate analytical result of the first-order
quantum-statistics approach
is already close to the accurate
numerical value $T_c=19.7$~MeV,
obtained within the full QvdW model.
For the Skyrme
mean-field parametrization, the quantum-statistics effect is
smaller than that for the quantum van der Waals model.
This strengthens the
justification of the
perturbation approach with respect to
the small parameter $e_i$.
The agreement of the first-order
QSMF approach
with the full numerical calculations \cite{satarov}
is even
better than that within the QvdW model.
The nuclear-matter value of
$T_c^{(0)}=20.6$~MeV
in the classical SMF case is decreased
to the quantum SMF
value $T_c^{(1)}=15.1$~MeV.
This result
is very close to
that of the numerical calculations, $T_c=15.3$~MeV,
obtained in Ref.~\cite{satarov}.
The QvdW
equation of state has been derived analytically and used to study
the quantum-statistics effects
in the vicinity of the critical point of the two-component system of
nucleon and $\alpha$-particle matter.
The expressions for the pressure of the equation of state
were obtained by using the quantum-statistics expansion
over the two small parameters $e^{\ast}_i$
($i=\{N,\alpha\}$) on top of the vdW approach.
The trend of the critical-value corrections
owing to the impurity
of
$\alpha$ particles in the nucleon system
is
the following:
the
CP parameters are somewhat increased
as compared to those of the pure nucleon system.
These analytical results are
in good agreement with those of more accurate
numerical calculations.
A very small impurity
of $\alpha$
particles in the nucleon matter leads to very
small
corrections to the equation of state and to the
critical point
of the nuclear matter.
Finally, one can conclude that our derivations within
the quantum van der Waals
and Skyrme mean-field parametrizations
can be straightforwardly
extended to other types of inter-particle interactions. In particular,
this is expected to be the case for a more general mean-field approach.
\begin{acknowledgments}
We thank M.I.~Gorenstein and A.I.~Sanzhur for fruitful discussions and
suggestions, as well as D.V.~Anchishkin, A.~Motornenko, R.V.~Poberezhnyuk,
and V.~Vovchenko for many useful discussions.
The work of S.N.F. and A.G.M. on the project
``Nuclear collective dynamics for high temperatures and
neutron-proton asymmetries'' was
supported in part by the Program ``Fundamental researches in high energy physics
and nuclear physics (international collaboration)''
at the Department of Nuclear Physics and Energy of the National
Academy of Sciences of Ukraine. S.N.F., A.G.M., and U.V.G. acknowledge
partial support from the budget program ``Support for the
development of priority areas of scientific researches'',
the project of the Academy
of Sciences of Ukraine (Code 6541230, No 0120U100434).
\end{acknowledgments}
\label{sec:intro}
Magnetic Resonance Imaging (MRI) allows the study of the brain in a non-invasive and in-vivo way. In particular, structural MRI (sMRI)
provides an anatomical differentiation of the main brain tissues, enabling their automatic segmentation. The cortical surface can be extracted by software such as FreeSurfer\footnote{https://surfer.nmr.mgh.harvard.edu/fswiki} \cite{c1,c2} or BrainVISA\footnote{http://brainvisa.info} \cite{c3}.
A cortex parcellation, i.e., a subdivision of the cortex into several parcels or regions \cite{c4}, can be performed based on different criteria, mostly anatomical, functional, or diffusion-based information. This is a very complicated task due to the restrictions of each modality and the high inter-subject variability that exists, in particular, in white matter (WM) and gray matter (GM).
When studying the human connectome, brain region definition plays an important role in the study of brain connectivity and function \cite{c5}.
Anatomical parcellation methods take into account the macroscopical anatomy, like the gyri and sulci \cite{c6, c7}.
For example, Cachia et al. labeled the cortex mesh vertices using a geodesic distance, based on two nested Vorono\"{i} diagrams and labeled sulci \cite{c7}. Another method uses a statistical surface-based atlas, which includes information on the cortex curvature and the manual labeling of 35 regions of interest (ROIs) per hemisphere \cite{c8}.
In order to evaluate diffusion-based \cite{c9} or functional-based \cite{c10} parcellations of the cortical surface, these can be compared to a geodesic parcellation, based on the geodesic properties of the mesh. However, this computation can be time-consuming.
Therefore, in this work, we propose a parallel method for the complete parcellation of the cortical surface based on the geodesic distance. The goal is to create a fast individual cortical parcellation, available to the community for parcellation comparisons. The algorithm can be applied to subdivide each anatomical parcel given by the Desikan-Killiany (\textit{DK}) atlas, or to perform the cortical parcellation of the entire brain, depending on the method to be evaluated.
\section{Materials and Methods}
\label{sec:matmet}
\subsection{Database and tractography datasets}
\label{ssec:dbtr}
We took 50 subjects from the ARCHI database \cite{c11} for the experiments.
The database was acquired with a 3T MRI scanner (Siemens, Erlangen). The MRI protocol included the acquisition of a T1-weighted dataset using an MPRAGE sequence (160 slices; matrix=256x240; voxel size=1x1x1.1\,mm).
BrainVISA software was used to pre-process the images. Then, FreeSurfer was applied to compute the cortical mesh and to obtain the automatic labeling of the cortical regions according to the \textit{DK} atlas. The database also contains deterministic tractography datasets, based on a SS-EPI single-shell HARDI acquisition along 60 optimized DW directions, b=1500 $s/mm^2$ (70 slices, matrix=128x128, voxel size=1.7x1.7x1.7\,mm).
The experimental procedures involving human subjects described in this paper were approved by the Local Ethical Committee, ``Comité de Protection des Personnes Ile-de-France VII", with codes CPP100002/CPP10002, and all subjects signed an informed consent before inclusion.
\subsection{GeoSP: geodesic cortical surface parcellation}
\label{ssec:app}
The method implemented is called \textit{GeoSP}, and performs the cortical parcellation based on a geodesic distance over the surface. The algorithm has two different modes. The default mode is based on the \textit{DK} atlas to delimit the anatomical parcels and performs a geodesic subdivision of each anatomical parcel. Note that other atlases could also be used. The second mode creates a cortical parcellation for the entire brain.
For the first mode, the method receives for each anatomical parcel (35 in total) a value $k$, used to divide each anatomical parcel into the specified $k$ sub-parcels (for both hemispheres), i.e. an anatomical parcel with $k = 2$ will be divided into two sub-parcels. On the other hand, the second mode receives a unique $k$ value, which will be used to divide each brain hemisphere into $k$ sub-parcels, based on a geodesic distance, without using any other cortical parcellation.
The method can be subdivided into two main steps: \textit{(1)} a pre-processing that creates a graph representation of the mesh, and \textit{(2)} K-means clustering based on geodesic distance over the mesh \cite{c12}.\\
\begin{figure}[t!]
\centering
\includegraphics[scale=.28]{edvsgeo.eps}
\caption{Euclidean distance versus geodesic distance. By using the Euclidean distance between two points, a straight line is obtained (orange path). While the geodesic distance considers the shortest path across the gyri and sulci of the mesh (blue path).}
\label{fig:edvsgeo}
\end{figure}
\begin{algorithm}[ht]
\caption{Parallel\_kmeans}
\label{alg:parallel}
\begin{algorithmic}[1]
\STATE{$groups \leftarrow [ ]$} \COMMENT{list of sub-parcels containing the indexes of the vertices}
\STATE{$\textbf{if}$ $k>1$ $\textbf{then}$} \COMMENT{number of clusters}
\STATE\hspace{\algorithmicindent} {$centers \leftarrow initialize()$} \COMMENT{initializing centroids}
\STATE\hspace{\algorithmicindent} {$centers\_old \leftarrow centers$}
\STATE\hspace{\algorithmicindent} {$converge \leftarrow FALSE$}
\STATE\hspace{\algorithmicindent} {$i \leftarrow 0$}
\STATE\hspace{\algorithmicindent} {$\textbf{while}$ $i$ $\le$ $nIter$ \textbf{and} $!converge$} \COMMENT{iterations and criterion for convergence}
\STATE\hspace{0.8cm} {$groups \leftarrow calc\_groups()$} \COMMENT{calculating groups}
\STATE\hspace{0.8cm} {$centers \leftarrow comp\_centroids()$} \COMMENT{computing centroids}
\STATE\hspace{0.8cm} {$converge \leftarrow stop\_critery()$}
\STATE\hspace{0.8cm} {$centers\_old \leftarrow centers$}
\STATE\hspace{0.8cm} {$i \leftarrow i+1$}
\STATE\hspace{\algorithmicindent} {$\textbf{end while}$}
\STATE{$\textbf{else}$}
\STATE\hspace{\algorithmicindent} {$groups \leftarrow [all\_indices]$} \COMMENT{all the indices for one anatomic parcel}
\STATE {$\textbf{end if}$}
\RETURN ${groups}$ \COMMENT{returns the list whose elements are the sub-parcel groups(indices))}
\end{algorithmic}
\end{algorithm}
\noindent
\textbf{1) Pre-processing}\\
Each anatomical parcel (for default mode) or each hemisphere (for the second mode) is represented with an undirected graph. The graph $G = (V,E)$ directly represents the mesh structure, formed by the vertices $V$ and the edges $E$ that join them.
For the default mode, that performs the subdivision of each anatomical parcel given by the \textit{DK} atlas, the labels of each region are used to identify each parcel.
Finally, the Euclidean distance ($d_E$) is calculated between each pair of adjacent vertices to create weighted graphs.\\
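This pre-processing step can be sketched as follows (illustrative names, not those of the released code; \texttt{math.dist} requires Python 3.8+):

```python
import math

def mesh_to_graph(vertices, triangles):
    """Build the undirected weighted graph G=(V,E) of a triangular mesh:
    one node per vertex, one edge per triangle side, with the Euclidean
    distance d_E between the two endpoints as edge weight."""
    adj = {i: {} for i in range(len(vertices))}
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (a, c)):
            w = math.dist(vertices[u], vertices[v])
            adj[u][v] = w
            adj[v][u] = w
    return adj
```

For the default mode, one such graph would be built per anatomical parcel (using the \textit{DK} labels to select the vertices); for the second mode, one graph per hemisphere.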
\noindent
\textbf{2) K-means clustering}\\
To subdivide an anatomical parcel or a hemisphere into $k$ sub-parcels, K-means clustering is applied. The algorithm consists of the following sub-steps: \textit{(a)} initializing centroids, \textit{(b)} (re)calculating groups and \textit{(c)} (re)computing centroids. The algorithm uses a parallel implementation, whose pseudocode is shown in Algorithm \ref{alg:parallel}. For the default mode, the method launches the K-means algorithm in parallel for each anatomical parcel given by the \textit{DK} atlas, while for the second mode, it launches a single thread per hemisphere. In addition, parallelism is exploited within the \textit{(re)computing centroids} sub-step.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=1.06]{all_Desikan.eps}
\caption{Parcellation of the cortical mesh obtained for one subject with the mode based on the \textit{DK} atlas and the mode for the entire cortex. Right sagittal, coronal, axial, and left sagittal views. First and second rows: parcellation based on the \textit{DK} atlas, subdivided into 140 (first row) and 350 (second row) sub-parcels, with execution times of 42.9\,s and 18.1\,s, respectively. Third and fourth rows: parcellation of the entire cortex into 140 (third row) and 350 (fourth row) sub-parcels, with execution times of 75.4\,s and 82.25\,s, respectively.}
\label{fig:alldesikan}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[scale=1.04]{graph_cons.eps}
\caption{Brain connectivity analysis scheme. First, the tractography of each subject in T1 space is intersected with the cortical mesh, which is parcellated based on an atlas. Then, a connectivity matrix is created containing for each cell, corresponding to a pair of sub-parcels, the total number of connections between them. The matrix is then binarized, indicating with a 1 if the sub-parcels are connected and with a 0 if there is no connectivity between them. The connectivity matrix is finally converted to a network graph to analyze network metrics as the Dice coefficient.}
\label{fig:graph_cons}
\end{figure*}
\begin{enumerate}[label=(\alph*)]
\item \textbf{Initializing centroids}. To perform this sub-step, the K-means++ algorithm \cite{c13} is used to select the initial centroids; it provides a solution that is $\mathcal{O}(\log k)$-competitive with the optimal clustering in expectation. First, the method receives $k$, the number of sub-parcels (clusters) into which each anatomical parcel (or each hemisphere) is divided. Each anatomical parcel may have a different $k$. Also, $k$ can be randomly set. Although the selection of starting centroids takes additional time, with K-means++ the convergence of K-means occurs quickly, reducing the total computation time. It also leads to initial centroids that are better distributed across the anatomical parcels than random selection.
\item \textbf{(Re)Calculating groups}. In this sub-step, clusters are (re)calculated by assigning each vertex to the closest centroid. To achieve this, the \textit{single-source shortest path problem} ($SSSP$) is used, which looks for the shortest path from a vertex $v$ (centroid) to the rest of the vertices of the graph $G$.
To calculate the distance between vertices, instead of the Euclidean distance, the geodesic distance is used. Then, based on an implementation of the \textit{Dijkstra algorithm with Fibonacci heaps} \cite{c14}, the $SSSP$ is calculated for all the centroids, that is, the shortest path from each centroid to all other vertices. This algorithm runs with low complexity ($\mathcal{O}(|E| + |V| log |V|)$). Finally, for each graph vertex, the distances obtained to the different centroids are compared, and each vertex is assigned to the centroid with the smallest geodesic distance. Figure \ref{fig:edvsgeo} illustrates the Euclidean and geodesic distances for two vertices over the mesh. The path between two points for Euclidean distance is a straight line while the path for the geodesic distance is a route along the surface of the mesh, taking into account the gyri and the sulci.
\item \textbf{(Re)Computing centroids}. This is the last sub-step of the algorithm, in which the centroids of the clusters must be (re)calculated. First, the \textit{all-pairs shortest paths} problem has to be solved, that is, the shortest path has to be calculated for each pair of vertices. This is done with the \textit{Floyd–Warshall algorithm} \cite{c15}, which runs in $\mathcal{O}(|V|^3)$. Although the time complexity of this step is high, it is still polynomial (cubic) in the size of the input. The result is a new centroid, which is the vertex closest to all the other vertices in the cluster.
\end{enumerate}
Sub-steps (b) and (c) are executed until the convergence criterion is reached. For this, the centroids of the current iteration are compared with those of the previous one. The algorithm stops if the centroid displacement is less than 2\,mm or a maximum of 20 iterations is reached.
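The clustering loop described above can be sketched as follows. This is a simplified serial version: it uses a binary-heap Dijkstra rather than Fibonacci heaps, plain random initialization rather than K-means++, and recomputes each centroid as the medoid of its own cluster instead of running Floyd–Warshall on the whole graph; all names are illustrative and do not belong to the released code:

```python
import heapq
import random

def dijkstra(adj, src):
    """SSSP on a weighted graph given as {vertex: {neighbor: weight}}."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                       # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def geodesic_kmeans(adj, k, n_iter=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(sorted(adj), k)   # simple random init (not K-means++)
    groups = []
    for _ in range(n_iter):
        # (b) assign each vertex to the geodesically closest centroid
        dists = [dijkstra(adj, c) for c in centers]
        groups = [[] for _ in range(k)]
        for v in adj:
            groups[min(range(k), key=lambda i: dists[i][v])].append(v)
        # (c) recompute each centroid as the medoid of its cluster
        new_centers = []
        for i, g in enumerate(groups):
            if not g:                      # keep the old centroid if empty
                new_centers.append(centers[i])
                continue
            new_centers.append(
                min(g, key=lambda u: sum(dijkstra(adj, u)[v] for v in g)))
        if new_centers == centers:         # convergence: centroids unchanged
            break
        centers = new_centers
    return centers, groups
```

On a real cortical mesh, \texttt{adj} would be the weighted graph built in the pre-processing step, and this loop would be launched in parallel per anatomical parcel or per hemisphere.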
\section{Results}
\label{sec:res}
The experiments were performed on a computer with an Intel Core i7-8700K 6-core 3.70 GHz CPU, 32 GB of RAM and 12 MB of shared L3 cache. The programming language used is Python 3.6 and the operating system is Ubuntu 18.04.2 LTS with kernel 4.15.0-74. The code is freely accessible at https://github.com/andvazva/GeoSP.
First, Figure \ref{fig:alldesikan} displays the results for one subject with 140 sub-parcels and 350 sub-parcels, for both modes of the method. To obtain 140 sub-parcels using the \textit{DK} atlas, we divide each anatomical parcel into $k = 2$ sub-parcels. Since \textit{DK} atlas has $35$ anatomical parcels per hemisphere, with $k = 2$, we obtain $70$ sub-parcels per hemisphere, leading to a total of 140 sub-parcels for the whole brain. Following the same procedure, to obtain 350 sub-parcels we divide each anatomical parcel into $5$ sub-parcels, which generates $175$ sub-parcels per hemisphere. It can be seen that the method generates homogeneous parcels both for the entire cortex and for the \textit{DK} atlas-based parcellation.
Then, to illustrate an example of use, we calculated the reproducibility of structural connectivity across subjects for three different parcellations: \textit{GeoSP}, \textit{DK} and \textit{Destrieux}.
Figure \ref{fig:graph_cons} displays the scheme of processing performed to obtain the reproducibility analysis for a parcellation. For each subject, we used the tractography dataset in T1 space to calculate the structural connectivity matrix, based on each parcellation. To construct a matrix, the intersection of the fibers with the cortical mesh is determined and the labels of the pair of parcels connected by each fiber are used to add a count in the corresponding cell of the matrix. Next, the matrix is binarized and converted into a graph so that network metrics can be applied. One of these metrics, the Dice coefficient, was calculated between each pair of subjects for each method. Figure \ref{fig:dice} shows a boxplot of the reproducibility among the 50 subjects between \textit{GeoSP} and the other anatomical atlases. The reproducibility is slightly higher for \textit{GeoSP} in both cases, with a difference of $0.024$ between \textit{GeoSP} and \textit{DK} (70 parcels) and of $0.043$ for \textit{GeoSP} and \textit{Destrieux} (150 parcels).
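The binarization and Dice computation described above can be sketched as follows. This is an illustrative sketch under assumed conventions (any nonzero fiber count marks a connection), not the exact code used in the paper.

```python
def binarize(conn):
    """Binarize a connectivity (fiber-count) matrix: 1 if any fiber
    connects the pair of parcels, 0 otherwise."""
    return [[1 if c > 0 else 0 for c in row] for row in conn]

def dice(a, b):
    """Dice coefficient between two binarized connectivity matrices:
    2*|A intersect B| / (|A| + |B|).  1.0 means identical connection
    patterns; values closer to 1 mean higher reproducibility."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    na = sum(sum(row) for row in a)
    nb = sum(sum(row) for row in b)
    return 2.0 * inter / (na + nb) if (na + nb) else 1.0
```

Averaging `dice` over all subject pairs for a given parcellation yields the boxplot values reported in Figure \ref{fig:dice}.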
Finally, the execution time for both modes was compared. Figure \ref{fig:times} displays the execution times according to the number of sub-parcels into which the cortex is subdivided. For mode one, based on the \textit{DK} parcellation, the execution time decreases with the number of parcels. This is because the sub-parcels are delimited by the anatomical parcels of the atlas, so a greater number of sub-parcels yields smaller clusters and the algorithm performs fewer computations when recomputing the centroids. On the other hand, for the entire cortex, a greater number of sub-parcels requires more time to subdivide the cortex. This is due to the size of the graphs (one per hemisphere), where the recomputation of centroids becomes very expensive since all the shortest paths between all pairs of vertices must be recalculated.
\begin{figure}[ht!]
\centering
\includegraphics[scale=1.1]{dice_binary.eps}
\caption{Comparison of the structural connectivity reproducibility between \textit{GeoSP} and the two atlases, with equal number of parcels. X-axis shows the different atlases used. Y-axis contains the Dice coefficient, the closer to one, the greater the reproducibility. The rhombus indicates the mean and the black line the median for each atlas. Results show a difference of $0.024$ between \textit{GeoSP} and \textit{DK} atlas, and $0.043$ between \textit{GeoSP} and \textit{Destrieux} atlas.}
\label{fig:dice}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=1.1]{times.eps}
\caption{Execution time (seconds) for each mode, depending on the number of sub-parcels. As expected, the subdivision into sub-parcels according to the delimitation given by the Desikan-Killiany atlas is less expensive than subdividing the entire cortex.}
\label{fig:times}
\end{figure}
\addtolength{\textheight}{-2.5cm}
\section{Conclusions}
\label{sec:con}
We propose a parallel method to perform a parcellation of the cortical surface mesh based on geodesic distance.
The algorithm was tested in 50 subjects. Results show homogeneous sub-parcels for both modes and different number of sub-parcels.
Structural connectivity reproducibility between \textit{GeoSP} and two anatomical atlases is very similar and slightly higher for \textit{GeoSP}. This may be due to the higher homogeneity of the parcels with \textit{GeoSP}. Moreover, the greater the number of parcels, the lower the reproducibility obtained. Hence, this test shows that special attention should be given to the indices used in comparisons between parcellations. In any case, we provide a fast and configurable parcellation method based on geodesic distance, available to the community, to perform the comparison and evaluation of data-driven parcellations, like those based on diffusion or functional MRI.
\section{Status of Proton Spin Structure}
For the past twenty years, much work has been done to understand the spin structure of the nucleons.
There has been progress in determining the contribution of the lightest quarks to the spin, but there
is still considerable uncertainty about the gluon contribution. Transversity studies have contributed additional insight about quark dynamics, but little is known about the orbital angular momentum
of the constituents.\cite{ji} This paper will summarize a project that provides a method of gaining insight into the nature of the orbital angular momentum of the nucleon constituents.
Recent experiments \cite{compass,hermes} have significantly lowered the measurement errors of the quark longitudinal spin contribution ($\Delta \Sigma$) to the proton. The COMPASS collaboration analysis quotes a result
\begin{equation}
\Delta \Sigma = 0.30\pm 0.01 (\mbox{stat}) \pm 0.02 (\mbox{evol}) ,\qquad \mbox{all data} \label{DSigC}
\end{equation}
while the HERMES collaboration analysis quotes a result
\begin{equation}
\Delta \Sigma = 0.330\pm 0.025 (\mbox{exp}) \pm 0.011 (\mbox{th})\pm 0.028 (\mbox{evol}),\qquad \mbox{all data} \label{DSigH}.
\end{equation}
These groups and others \cite{rhic} have been working on providing a significant measure of the proton's spin weighted gluon density,
\begin{equation}
\Delta G(x,t)\equiv G_{++}(x,t)-G_{+-}(x,t),
\end{equation}
where $x$ is the Bjorken scaling variable and $t\equiv \log\bigl(\alpha_s(Q_0^2)/\alpha_s(Q^2)\bigr)$ is the $Q^2$ evolution variable. The combination of these measurements is summarized in terms of the
$J_z={1\over 2}$ sum rule:
\begin{equation}
J_z={1\over 2}\equiv {1\over 2}\Delta \Sigma+\Delta G+L_z. \label{JSR}
\end{equation}
Here $\Delta \Sigma=\int^1_0\>dx\Delta q(x,t)$ and $\Delta G=\int^1_0\>dx\Delta G(x,t)$ are the
projections of the spin carried by all quarks and the gluons on the $z$-axis, respectively. Also $L_z$ is the net $z$-component of the orbital angular momentum of the constituents. We do not attempt to separate the flavor components of $L_z$ within the sum rule. \\
\section{Modeling the Gluon Asymmetry}
Experimental groups at the COMPASS, HERMES and RHIC collaborations are measuring the asymmetry, $A\equiv \Delta G/G$, to determine the gluon polarization
\cite{compass, hermes, rhic}. Since there is no suitable theoretical model for $\Delta G$, we have
devised a way to model the asymmetry, $A(x,t)$, to gain insight into the structure of $\Delta G$. This,
coupled with the $J_z={1\over 2}$ sum rule can then shed light on the nature of the orbital angular
momentum of the constituents, $L_z$. To model $A(x,t)$, we write the polarized gluon asymmetry using the decomposition
\begin{equation}
A(x,t)\equiv \Delta G/G=A_0(x)+\epsilon(x,t), \label{Adef}
\end{equation}
where
\begin{equation}
A_0(x)\equiv \Bigl[({{\partial \Delta G}\over {\partial t}})/
({{\partial G}\over {\partial t}})\Bigr] \label{A0def}
\end{equation}
is a scale invariant calculable reference form \cite{gpr}. Here
$\epsilon(x,t)$ represents the difference between the calculated and gauge-invariant asymmetry. Since $\Delta G$ is unknown, a
useful form is to write equation (\ref{Adef}) as
\begin{equation}
\Delta G = A_0(x)\>G(x,t)+\Delta G_{\epsilon}(x). \label{DG}
\end{equation}
Although the quantity $\Delta G_{\epsilon}(x)$ is not a physical parameter, it allows the theoretical
development of the calculable quantity, $A_0$. Once an asymmetry is generated from equations
(\ref{A0def}) and (\ref{DG}), the gauge-invariant quantity
$A(x,t)$ can be compared to data. Thus, each Ansatz for
$\Delta G_{\epsilon}(x)$ gives a corresponding form for
$\Delta G$ and a parametrization for $L_z$. These can be compared
to existing data to provide a range of suitable models for these
contributions.
With the definition for the asymmetry in equation (\ref{A0def}), the DGLAP equations can then be used to
evaluate the evolution terms on the right side.
\begin{equation}
A_0=\Biggl[{{\Delta P_{Gq} \otimes \Delta q+\Delta P_{GG}\otimes
\Delta G}\over {P_{Gq} \otimes q+P_{GG}\otimes G}}\Biggr]. \label{A00}
\end{equation}
The polarized gluon distribution in the numerator of equation (\ref{A00}) is replaced by
$\Delta G\equiv A_0\cdot G+\Delta G_{\epsilon}$. For certain unpolarized distributions, there are points
at which the denominator vanishes. To avoid this, we write equation (\ref{A00}) as:
\begin{eqnarray}
{{\partial{\Delta G}}\over {\partial{t}}}&=&(2/\beta_0) \Bigl[\Delta P_{gq}^{LO}\otimes \Delta q+\Delta P_{gg}^{LO}\otimes (A_0\cdot G+\Delta G_{\epsilon})\Bigr] \nonumber \\
&=&A_0\cdot{{\partial{G}}\over {\partial{t}}} \nonumber \\
&=&(2/\beta_0) A_0 \Bigl[P_{gq}^{LO}\otimes q+P_{gg}^{LO}\otimes G\Bigr]. \label{ADGe}
\end{eqnarray}
The NLO form is essentially the same as equation (\ref{ADGe}) with the splitting functions $P^{LO}$
replaced with their NLO counterparts. The quark and gluon unpolarized distributions are CTEQ5 and
the polarized quark distributions are a modified GGR set. \cite{GGR}
There are constraints on $A_0(x)$ that must be imposed to satisfy the physical behavior of the gluon
asymmetry, $A(x)$. These are:
\begin{itemize}
\item positivity: $|A_0(x)| \le 1$ for all x, and
\item endpoint values: $A_0(0)=0$ and $A_0(1)=1$
\end{itemize}
Note that the constraint of $A_0\to 1$ is built in to satisfy the assumption that the large $x$
parton distributions are dominated by the valence up quarks in the proton. The
convolutions are dominated by the quark terms, which force the asymmetry to unity as $x\to 1$.
To investigate the possible asymmetry models, we use a parameterization for $A_0$ in the form
\begin{equation}
A_0\equiv Ax^{\alpha}-(B-1)x^{\beta}+(B-A)x^{\beta+1}, \label{A0form}
\end{equation}
which automatically satisfies the constraints that $A_0(0)=0$ and $A_0(1)=1$. Once a parametrization
for $\Delta G_{\epsilon}(x)$ is chosen, equation (\ref{ADGe}) is used to determine the parameters in
equation (\ref{A0form}).
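The parametrization of equation (\ref{A0form}) can be checked numerically against the constraints stated above. The sketch below is our own device (a simple grid test of positivity); the sample parameter values $A=3$, $B=4$, $\alpha=1.5$, $\beta=2.2$ correspond to $A_0=3x^{1.5}-3x^{2.2}+x^{3.2}$.

```python
def A0(x, A, B, alpha, beta):
    """Reference asymmetry of eq. (A0form):
    A0(x) = A*x**alpha - (B-1)*x**beta + (B-A)*x**(beta+1).
    For alpha, beta > 0 the endpoints A0(0)=0 and A0(1)=1 hold by
    construction, since A - (B-1) + (B-A) = 1."""
    return A * x**alpha - (B - 1) * x**beta + (B - A) * x**(beta + 1)

def satisfies_constraints(A, B, alpha, beta, n=2001):
    """Check positivity |A0(x)| <= 1 on a uniform grid in [0, 1]."""
    return all(abs(A0(i / (n - 1), A, B, alpha, beta)) <= 1 + 1e-12
               for i in range(n))
```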
\section{Results and Conclusions}
The models for $\Delta G_{\epsilon}(x)$ that led to asymmetries that satisfied these constraints were all
in the range $|\int^1_0 \Delta G_{\epsilon} dx|\le 0.25$, with positive and negative values included.
Larger values of $\Delta G_{\epsilon}$ violate one or both of the constraints. A representative sample of
models that satisfy the constraints is listed in Table 1.
\begin{table}[htdp]
\caption{Gluon Asymmetry Parameters}
\begin{center}
\begin{tabular}{||c|c|c|c||}
\hline
$\Delta G_{\epsilon}$ & $\int _0^1 \Delta G_{\epsilon} dx$ & $A_0$ & $\int _0^1 \Delta G dx$ \\
\hline
\hline
$0$ & $0$ & $3x^{1.5}-3x^{2.2}+x^{3.2}$ & 0.18 \\
\hline
$2(1-x)^7$ & $0.25$ & $4x^{1.6}-4x^{2.1}+x^{3.1}$ & 0.42 \\
\hline
$-2(1-x)^7$ & $-0.25$ & $1.75x^{1.1}-1.5x^{2.1}+0.75x^{3.1}$ & 0.01 \\
\hline
$-90x^2(1-x)^7$ & $-0.25$ & $3.5x^{1.3}-4.5x^{2.2}+2x^{3.2}$ & 0.05 \\
\hline
$9x(1-x)^7$ & $0.125$ & $3.75x^{1.4}-3x^{1.6}+0.25x^{2.6}$ & 0.29 \\
\hline
$-9x(1-x)^7$ & $-0.125$ & $3.25x^{1.4}-3.75x^{2.2}+1.5x^{3.2}$ & 0.11 \\
\hline
$4.5x(1-x)^7$ & $0.0625$ & $2x^{0.9}-1.5x^{1.2}+0.5x^{2.2}$ & 0.37 \\
\hline
$-4.5x(1-x)^7$ & $-0.0625$ & $2.25x^{1.1}-2.25x^{1.9}+x^{2.9}$ & 0.23 \\
\hline
\end{tabular}
\end{center}
\label{default}
\end{table}
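The first-moment column of Table 1 can be spot-checked numerically. The short, self-contained sketch below (composite Simpson integration; not part of the original analysis) reproduces the quoted values of $\int_0^1 \Delta G_{\epsilon}\,dx$ for four of the models.

```python
def simpson(f, a=0.0, b=1.0, n=2000):
    """Composite Simpson's rule on [a, b] with an even number n of panels."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

# Delta-G_epsilon models from Table 1 and their quoted first moments
models = [
    (lambda x: 2 * (1 - x)**7,           0.25),
    (lambda x: -90 * x**2 * (1 - x)**7, -0.25),
    (lambda x: 9 * x * (1 - x)**7,       0.125),
    (lambda x: 4.5 * x * (1 - x)**7,     0.0625),
]
moments = [simpson(f) for f, _ in models]
```

These are beta-function integrals, so the exact values follow from $\int_0^1 x^{m}(1-x)^{7}dx = m!\,7!/(m+8)!$; the quadrature simply confirms the table entries.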
Note that the integrals for $\Delta G$ are all positive, ranging from about 0.01 to 0.42. The models that
gave negative values for these integrals did not agree with the existing asymmetry data, reported at this
workshop to be:
\begin{itemize}
\item $\Delta G/G=0.016\pm 0.058\pm 0.055$ at $x=0.09$ from COMPASS, $Q^2>1$ GeV$^2$
\item $\Delta G/G=0.060\pm 0.31\pm 0.06$ at $x=0.13$ from COMPASS, $Q^2<1$ GeV$^2$
\item $\Delta G/G=0.078\pm 0.034\pm 0.011$ at $x=0.204$ from HERMES, factorization method
\item $\Delta G/G=0.071\pm 0.034\pm 0.010$ at $x=0.222$ from HERMES, approximate method.
\end{itemize}
The models in Table 1 that are within one $\sigma$ of the preliminary data stated above are in the third,
fourth and sixth rows. Plots of the full asymmetry are shown in Figure 1. None of the models in Table 1 are ruled
out, since they all fall within two $\sigma$ of the data for our values of $Q^2>1$ GeV$^2$. All of these models except for
the fourth row in Table 1 (impulses in Figure 1) generate total
asymmetries $A(x,t=0)$ that are close to $A(x)=x$. Ironically,
early treatments of the polarized gluon assumed this
functional form as a naive estimate of the asymmetry. Next-to-leading order corrections to these
asymmetries tend to make them less positive, but with the same general shape. A full set of viable asymmetries will be presented
in an upcoming paper. \cite{brs}
\begin{figure}[htbp]
\begin{center}
\rotatebox{270}{\resizebox{8cm}{!}{\includegraphics{ramsey_fig1.eps}}}
\caption{The gluon asymmetries most closely in agreement with data. Solid line, impulses and
linespoints represent the models in rows 3, 4 and 6 of Table 1 respectively.}
\label{ramsey_fig1}
\end{center}
\end{figure}
Using the data on $\Delta \Sigma$ in Section 1, the relation between $<\Delta G>$ and $<L_z>$ can be written as:
\begin{equation}
<\Delta G>=0.35\>-<L_z>\pm 0.02. \label{DGLz}
\end{equation}
The three models of the asymmetry that agree most closely with existing data give values of $\Delta G$ in the approximate range of $0\to 0.11$. Thus, the existing data together with equation (\ref{DGLz}) imply the
approximate relation $0.24\le L_z\le 0.35\pm 0.02$. The contribution of the orbital motion of the
constituents to the proton spin may therefore be comparable to the total quark contribution. A recent lattice calculation of the
contribution of the quark orbital motion to the proton spin
($L_z^q$) is consistent with zero. \cite{lattice} Hence, the
gluonic orbital motion appears to provide the majority contribution to $L_z$ in the $J_z={1\over 2}$ sum rule.
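The arithmetic behind equation (\ref{DGLz}) and the quoted $L_z$ range is simple enough to verify directly. The sketch below uses central values only (errors ignored), with the COMPASS value $\Delta\Sigma = 0.30$ and the range $0 \le \Delta G \le 0.11$ from the favored models.

```python
def Lz_from_sum_rule(delta_sigma, delta_G):
    """Solve the J_z sum rule, 1/2 = (1/2)*DeltaSigma + DeltaG + L_z,
    for the orbital angular momentum L_z."""
    return 0.5 - 0.5 * delta_sigma - delta_G

# DeltaSigma = 0.30 (COMPASS central value); DeltaG in [0, 0.11]
lz_upper = Lz_from_sum_rule(0.30, 0.0)    # L_z = 0.35 when DeltaG = 0
lz_lower = Lz_from_sum_rule(0.30, 0.11)   # L_z = 0.24 when DeltaG = 0.11
```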
It is clear that future measurements of $\Delta G$ and
$\Delta G/G$ must be made in a wider kinematic range of $x$ and
$Q^2$ with improved precision to better specify the appropriate
model of the asymmetry and to extract the $x$ and $Q^2$
dependence of the orbital angular momentum of the constituents.
\section{Introduction and summary}
Recently Komargodski and Schwimmer have proved the four dimensional ``$a$-theorem" \cite{Komargodski:2011vj,Komargodski:2011xv}.
This theorem, conjectured originally by Cardy \cite{Cardy:1988cwa}, states that for any Renormalization Group (RG) flow between UV and IR fixed points the following inequality holds
\be
a_{UV} \geq a_{IR}
\ee
where $a_{UV}$ and $ a_{IR}$ are the UV and IR conformal anomalies related to the expectation value of the energy-momentum tensor as follows
\begin{equation}\label{Afour}
\vev{T^\mu_\mu}=c \cW^2-a E_4,
\end{equation}
where $\cW^2$ is the Weyl tensor squared and $E_4$ is the Euler density. The $a$-theorem is the four dimensional relative of the well known two dimensional Zamolodchikov's $c$-theorem \cite{Zamolodchikov:1986gt}. The proof of the $a$-theorem of \cite{Komargodski:2011vj}
follows from the analyticity and unitarity of the forward 4-point scattering amplitude
of a scalar field $\tau(x)$. The latter corresponds to the dilaton, the Goldstone boson associated with the spontaneously broken conformal invariance or to the conformal compensator (spurion) for RG flows driven by relevant operators.
Another interesting result based on the ideas behind the $a$-theorem is a proof that perturbative theories always flow to CFTs \cite{Luty:2012ww}, although it seems that some aspects remain to be understood \cite{Fortin:2012cq}.
Since there is a monotonically decreasing function associated with RG flows in two and four dimensions, a natural question to raise is whether this theorem can be generalized to field theories at any even dimensions.
It turns out that the generalization of the arguments of \cite{Komargodski:2011vj} to six dimensional field theories is not straightforward as was revealed in \cite{Elvang:2012st}.
An alternative to proving a generalized theorem directly for $d$ dimensional field theory is to do it using a dual $d+1$ dimensional holographic gravitational bulk system.
RG flows have been intensively studied using holography, starting with the pioneering works \cite{Akhmedov:1998vf,Balasubramanian:1999jd,de Boer:1999xf} and soon after within the formalism of holographic renormalization \cite{Bianchi:2001kw,Bianchi:2001de}. In most of the cases
(e.g.\cite{Freedman:1999gk,Freedman:1999gp,Girardello:1999bd,Arutyunov:2000rq,Bigazzi:2001ta,Berg:2001ty,Halmagyi:2005pn,Hotta:2008xt}), the RG flows are driven by a relevant deformation of the Lagrangian. RG flows associated with spontaneous breaking of conformal symmetry were studied in, for instance, \cite{Martelli:2001tu,D'Hoker:2002aw,Freedman:2003ax}. Based on holography, $a$-theorems in an arbitrary number of dimensions were proven using various different approaches. By studying the renormalization group flow along null geodesic congruences the authors of \cite{Alvarez:1998wr} have proven a holographic version of an ``$a$-theorem".
An ``$a$-function" that is positive and monotonic if a weak energy condition holds in the bulk gravity theory was constructed in \cite{Freedman:1999gp}. For even-dimensional boundaries, the $a$-function was shown to coincide with the trace anomaly coefficients of the corresponding boundary field theory in limits where conformal invariance is recovered.
In \cite{Myers:2010tj} it was demonstrated that unitary higher curvature gravity theories in both odd and even dimensions admit $a$-theorems in terms of their corresponding entanglement entropy. In cases where the dual CFT is even-dimensional it was shown that it is the coefficient of the ``$a$-anomaly" that flows.
The RG flows we study have a holographic dual description as a solitonic solution interpolating between two $AdS$ spaces of dynamical Einstein gravity coupled to a single scalar field in $(d+1)$ dimensions, which is generated by an appropriate scalar potential. We determine the solution of the scalar field and of the metric that admits a $d$-dimensional Poincar\'e isometry. The scalar and the metric both depend only on the $(d+1)^{th}$ dimensional (radial) coordinate. We then introduce the Fefferman-Graham coordinates to express the general fluctuations of the metric components and of the scalar field in the $AdS$ regions in terms of a power expansion in the radial coordinate which is also an expansion in powers of the $d$ dimensional space-time derivatives. This expansion is valid up to some value of the radial coordinate, so we need to introduce a cutoff in the radial direction that acts as an IR regulator. We then introduce the spurion via a generalization of the Penrose-Brown-Henneaux (PBH) diffeomorphism \cite{Penrose:1986ca,Brown:1986nw}, whose effect is to perform a Weyl transformation on the boundary values of the metric and the scalar. The same kind of transformation (in pure $AdS$) has been used recently to compute the effective action of the dilaton from the action of probe branes \cite{Elvang:2012st}. A generalization of the transformation to theories with scalars was also used in \cite{Hung:2011ta} to compute contributions of relevant operators to the entanglement entropy. Let us stress here that in our case there is no analog to the scalar fields on the probe branes and the parameter of the PBH transformation is not a dynamical mode.\footnote{However, it could be promoted to a dynamical mode by changing the kind of boundary conditions of the metric, we discuss this possibility in Sec.~\ref{sec:promote}}
The PBH transformation admits the same kind of expansion as the metric and scalar fluctuations, in such a way that terms in the expansion transform covariantly. There is an exception that appears at $d$th order in derivatives, whose effect is to introduce a term related to the Weyl anomaly in the generating functional computed using the usual holographic prescription. The term takes the following form
\begin{equation}
\int d^d x\,\sqrt{-\hat g}\,\tau(x)\,\left(\hat \cA_d^{UV}-\hat \cA_d^{IR}\right),
\end{equation}
where $\tau(x)$ is the spurion field and $\hat g$ is the determinant of a background metric, dressed with the spurion to make a Weyl-invariant combination, i.e. $\hat g_{\mu\nu}=e^{2\tau}g_{\mu\nu}$. $\cA_d$ is the conformal anomaly in even dimensions, at the UV or IR fixed points. In general, the anomaly term takes the form \cite{Boulanger:2007ab}
\begin{equation}
\cA_d=-a E_d+\sum c_i I_i,
\end{equation}
where $E_d$ is the Euler density and $I_i$ are Weyl-invariant polynomials of the curvature and its derivatives with $d$ derivatives of the metric, examples in $d=2$ and $d=4$ are given by \eqref{Atwo},\eqref{Afour2}. When the action of the gravitational dual is Einstein gravity plus matter fields, as in our case, the value of the coefficients $c_i$ are proportional to $a$ with fixed factors. In more general cases than the ones we consider the coefficients $c_i$ are independent. Our computation confirms that there is a term in the generating functional of the form
\begin{equation}
- (a_{UV}-a_{IR})\int d^d x\,\sqrt{-\hat g}\tau(x)\,{\hat E}_d.
\end{equation}
In deriving this result we have used the fact that
\begin{equation}
a_{UV}-a_{IR} \propto \frac{1}{\kappa^2}\left(\Luv^{d-1}-\Lir^{d-1}\right),
\end{equation}
with $L_{UV}$ and $L_{IR}$ denoting the radii of the two asymptotically $AdS$ spaces and $\kappa$ is the $d$ dimensional Newton constant.
Since according to \cite{Freedman:1999gp} $\Luv\geq \Lir$,\footnote{For this to be true, the energy-momentum tensor of the bulk matter fields should satisfy the null energy condition $T_{MN}u^M u^N\geq 0$, with $g_{MN}u^M u^N=0$.} it indeed follows that the coefficient of the anomalous term is non-negative, $a_{UV}-a_{IR} \geq 0$.
Note that we do not give here an independent holographic proof of this positivity since we use the null energy condition. However, it does show that we have identified consistently the holographic dilaton mode. The anomalous term in curved space implies the existence of a Wess-Zumino term in flat space for the dilaton with the same coefficient and a term $(\partial\tau)^d$. A consequence is that the scattering amplitude of $d$ dilatons should satisfy a positivity condition. This result is in full accordance with the ``$a$-theorem" of \cite{Komargodski:2011vj}.
We would like to emphasize that whereas in field theory such a statement has been proven in two and four dimensions, the holographic $a$-theorem holds at any even dimension.
In addition to the study of the holographic ``$a$-theorem", we analyze in detail the manifestations of the Goldstone theorem related to the spontaneous breaking of conformal symmetry by computing the one and two point functions of the operator dual of the scalar and the energy momentum tensor.
To implement this, we consider a suitable potential for the scalar field which admits the appropriate asymptotic behavior in the UV corresponding to a vacuum expectation value (vev) of a dual operator $\mathcal{O}$ of the $CFT_{UV}$. More precisely, the non-normalizable mode of the scalar field is set to zero, but the normalizable mode can have any value. There is a family of classical gravity solutions satisfying these boundary conditions and that at the same time are regular in the bulk. The near-horizon geometry is $AdS$, meaning that there is a conformal symmetry in the IR of the dual field theory. The renormalized on-shell action of the gravitational theory is independent of the value of the normalizable mode, hence there is a ``moduli space'' for the field theory dual with spontaneous symmetry breaking.
We first expand Einstein's equations to linear order in the fluctuations, fixing a particular (radial) gauge.
We then analyze the tensor and scalar fluctuations at low momenta away from the horizon and compare with the finite momentum solution close to the horizon. We fix the boundary conditions at the horizon to be ingoing, so that we compute retarded correlators or, by analytic continuation, Euclidean correlators. By carefully matching the two asymptotic behaviors
we can determine the full solution to leading order in momentum. The spectrum of fluctuations reveals a zero mode corresponding to a change of the expectation value, in agreement with what is expected when the symmetry is spontaneously broken. Evaluating the on-shell action on these solutions we can compute correlation functions using the standard holographic prescription. First, we confirm that the operator dual to the scalar field has a non-zero expectation value $\vev{\cO}$, while the trace of the energy-momentum tensor vanishes.
The two-point function of the energy-momentum tensor reads
\begin{equation}\label{TTcorrintro}
\vev{T^{\mu\nu}(-q) T^{\alpha\beta}(q)}\propto \frac{\Lir^d}{\kappa^2\Luv}\frac{1}{(q^2)^2}\Pi^{\mu\nu,\alpha\beta}(\sqrt{-q^2})^d\log(\Lir \sqrt{-q^2})^2,
\end{equation}
where $q^\mu$ is the four momentum and $\Pi^{\mu\nu,\alpha\beta}$ is a kinematic factor depending on the momentum and the metric which is transverse and traceless. The explicit expression is given in \eqref{projector}.
The two-point function involving the energy-momentum tensor and the scalar operator is given by
\begin{equation}\label{TOcorrintro}
\left< T^{\mu\nu}(-q) \mathcal{O}(q) \right> =\frac{\Delta_{UV}\left<\mathcal{O}\right>}{2(d-1)} \frac{1}{q^2} P_T^{\mu\nu}-\vev{\cO}\eta^{\mu\nu},
\end{equation}
where $P_T^{\mu\nu}$ is a kinematic factor transverse to the momentum (see (\ref{projector})).
This result is in agreement with the field theory expectation based on Goldstone's theorem.
For the two-point function of the scalar operator we find
\begin{equation}\label{OOcorrintro}
\vev{\cO(-q)\cO(q)} \propto \frac{\kappa^2\Lir^{d+2-2\Delta_{IR}}}{\Luv}\vev{\cO}^2(\sqrt{-q^2})^{d-2\Delta_{IR}}.
\end{equation}
This result exhibits a long range interaction, which is consistent with Goldstone's theorem (a more detailed explanation is given in Sec.~\ref{sec:compare}), but the zero-momentum singularity is not a pole, as one would expect to find for a free dilaton mode, and in fact the singularity leads to an IR divergence that is at odds with unitarity.
Recall that na\"\i vely one would expect to find in the IR a $CFT_{IR}~\times$ free dilaton. We comment more on this in the Discussion section.
The absence of a pole is in contrast with the probe brane analysis \cite{Elvang:2012st}, where the dilaton is a mode which essentially decoupled from the remaining massless degrees of freedom.
The paper is organized as follows. In Section \ref{sec:RG} the basic setup of the holographic RG flows which is analyzed in this paper is established. We also explain which assumptions we make to simplify the analysis. Section \ref{sec:matching} is devoted to the manifestation of the conformal anomaly matching in holography. We first briefly review the picture of the boundary field theory. A compensator field (spurion) is introduced to maintain conformal invariance of the field theory coupled to an external curved space-time. The anomalous terms of the generating functional are written down and the corresponding ``$a$-theorem" \cite{Komargodski:2011vj} is stated. Back to the holographic dual, we show that the generating functional of the field theory includes an anomaly term whose coefficient is given by the difference between the $a$-anomaly of the UV and of the IR, in full accordance with the ``$a$-theorem of \cite{Komargodski:2011vj}.
The holographic spontaneous breaking of conformal invariance is discussed in Section \ref{sec:spont}, where our strategy to find approximate solutions valid at small momentum is described.
We determine the tensor and scalar fluctuations
at low momenta and extract the holographic correlators
$\vev{\cO}, \vev{\cO\cO},\vev{T^{\mu\nu}\cO}$ and $\vev{T^{\mu\nu}T^{\alpha\beta}}$.
In Section \ref{sec:promote} we promote the spurion field to a dynamical mode by introducing a new set of boundary conditions for the metric. We conclude, summarize our assumptions and discuss several open questions in section \ref{sec:discuss}.
We end this paper with four appendices. In the first we elaborate on the equations of motions of the fluctuations. In appendix \ref{app:matching} we elaborate on the matching procedure and explicitly show the existence of an overlapping region between the boundary and the near-horizon region. Appendix \ref{app:coulomb} is devoted to an application of our matching procedure to the case of the Coulomb branch of the ${\cal N}=4$ SYM theory that was previously analyzed using a different procedure in \cite{Papadimitriou:2004rz,DeWolfe:2000xi,Arutyunov:2000rq,Mueck:2001cy}. In appendix \ref{app:toy} we present a family of toy models that describe an RG flow between two fixed points.
\section{Holographic RG flows}\label{sec:RG}
In most known examples, theories with a holographic dual have a classical weakly-coupled gravitational description when they are in a strong-coupling and large-$N$ limit, the canonical example being $\cN=4$ $SU(N)$ super Yang-Mills at large 't Hooft coupling. As we will see later we will study toy models of classical gravity coupled to matter with no known field theory description. We will assume that if a dual description of any of the models exists, the corresponding field theory dual will also be strongly coupled and in some sort of large-$N$ limit, as in the known examples.
A holographic dual of a theory with an RG flow between two fixed points will be a geometry that interpolates between two anti-de Sitter spaces with different radii. In Gaussian coordinates, the metric will be
\begin{align}
\notag &ds^2_{UV}=dr^2+e^{2r/\Luv}\eta_{\mu\nu}dx^\mu dx^\nu, \ \ r\to +\infty,\\
&ds^2_{IR}=dr^2+e^{2r/\Lir}\eta_{\mu\nu}dx^\mu dx^\nu, \ \ r\to-\infty,
\end{align}
where, imposing a null energy condition on the bulk matter fields, the radius close to the boundary $r\to \infty$ is larger than the radius close to the horizon $r\to-\infty$: $\Luv>\Lir$. The radial coordinate $r$ maps to the RG scale of the dual theory, while the coordinates $x^\mu$ map to the space-time where the dual theory lives. Other RG flows where the IR geometry is not AdS are also possible; however, they do not describe a CFT in the far IR and for this reason we will not consider them.
Asymptotically the geometry has the following isometry
\begin{equation}
r\to r+\lambda, \ \ x^\mu\to e^{-\lambda/L}x^\mu,
\end{equation}
where $\lambda$ is a constant and $L=\Luv$ or $L=\Lir$ depending on which limit we take. A shift in the $r$ coordinate is then equivalent to a dilatation transformation of the space-time coordinates.
There are examples in supergravity of geometries that interpolate between two different AdS spaces (e.g. \cite{Freedman:1999gk,Freedman:1999gp,Girardello:1999bd,Bigazzi:2001ta,Berg:2001ty,Halmagyi:2005pn,Hotta:2008xt}), but they describe RG flows driven by a relevant deformation of the Lagrangian and not spontaneous breaking of conformal symmetry. Since we are interested mainly in the latter, we will construct our own holographic toy models with the desired properties, keeping the discussion as general as possible, so the results could easily be extended to particular examples in supergravity in case some are eventually found. Our construction is inspired by similar models that have been studied in the past \cite{Martelli:2001tu,D'Hoker:2002aw,Freedman:2003ax}.
It is relatively easy to construct holographic models dual of a flow between fixed points, taking as inspiration some of the known supergravity flows. The simplest case is a scalar field with some potential $V$ coupled to Einstein gravity in $d+1$ dimensions.
\begin{equation}\label{EHaction}
S_{EH}=\frac{1}{\kappa^2}\int d^{d+1} x \sqrt{-g} \left( -\frac{1}{2}R-\frac{1}{2}\partial_M \phi \partial^M \phi -V(\phi)\right).
\end{equation}
Note that for this theory the null energy condition is automatically satisfied.
The requirements on the potential are
\begin{itemize}
\item[a)] There are at least two critical points where the potential is negative, one is a maximum and the other is a minimum.
\item[b)] There is a classical solution that interpolates between the two critical points.
\end{itemize}
The general case is still too complicated, but in certain cases it is possible to find a system of first order equations whose solutions are also solutions of the full second order system. That is the case when the potential is written in terms of a superpotential $W$\footnote{Locally in field space the potential can always be written in terms of a superpotential, by solving the equation \eqref{potential} understood as a differential equation. For each solution one has a different RG flow geometry, that could be dual to a theory with explicit or spontaneous breaking of conformal invariance \cite{Skenderis:1999mm,Skenderis:2006jq,Skenderis:2006rr}. Here we assume a prior knowledge of the superpotential, which is assumed to be valid in all configuration space, but the analysis could be extended to situations where this is not the case. We thank Ioannis Papadimitriou for pointing this out.}
\begin{equation}\label{potential}
V=\frac{1}{2}\left[(\partial W(\phi))^2-\frac{d}{d-1}W(\phi)^2 \right].
\end{equation}
One introduces an ansatz for the metric with Poincar\'e invariance along the space-time directions
\begin{equation}\label{gaussmet}
ds^2=dr^2+e^{2A}\eta_{\mu\nu}dx^\mu dx^\nu,
\end{equation}
and the first order equations that give solutions for the Einstein plus scalar equations of motion are
\begin{equation}\label{floweq}
\phi'=-\partial W, \ \ A'=\frac{W}{d-1},
\end{equation}
where primes denote radial derivatives. Critical points of the superpotential are also critical points of the potential, so in order to have a flow we should demand that the superpotential has two critical points. Without loss of generality we can assume that one of the critical points is at $\phi=0$,
\begin{equation}\label{superpot}
W\simeq \frac{d-1}{\Luv}+\frac{\Delta_{UV}}{2\Luv} \phi^2, \ \ V\simeq-\frac{d(d-1)}{2\Luv^2}+\frac{\Delta_{UV}(\Delta_{UV}-d)}{2\Luv^2}\phi^2.
\end{equation}
Here $\Delta_{UV}$ is the dimension of the operator dual to the field $\phi$. We will assume in the following that the dual operator is relevant, so $\Delta_{UV}<d$. In this case the potential has a maximum.
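Note that solutions of the first order equations \eqref{floweq} automatically solve the second order equations of motion. For the scalar field in the background \eqref{gaussmet}, for instance,
\begin{equation}
\phi''+d\,A'\phi'=\partial^2 W\,\partial W-\frac{d}{d-1}\,W\partial W=\partial V,
\end{equation}
which is the scalar equation of motion for the potential \eqref{potential}.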
The second critical point will be at some value $\phi_m$, such that
\begin{equation}\label{superpot2}
W\simeq \frac{d-1}{\Lir}+\frac{d-\Delta_{IR}}{2\Lir} (\phi-\phi_m)^2, \ \ V\simeq-\frac{d(d-1)}{2\Lir^2}+\frac{\Delta_{IR}(\Delta_{IR}-d)}{2\Lir^2}(\phi-\phi_m)^2.
\end{equation}
We will demand that $\Delta_{IR}>d$, so the dual operator becomes irrelevant in the IR fixed point and the potential has a minimum. Solutions can be simply obtained by using the field $\phi$ as radial coordinate,
\begin{equation}
ds^2=\frac{d\phi^2}{(\partial W)^2}+e^{2A(\phi)}\eta_{\mu\nu} dx^\mu dx^\nu, \ \ \partial A(\phi)=-\frac{1}{d-1}\frac{W}{\partial W}.
\end{equation}
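This form follows directly from the first order equations \eqref{floweq}: using $d\phi=-\partial W\, dr$,
\begin{equation}
dr^2=\frac{d\phi^2}{(\partial W)^2},\qquad \partial A=\frac{A'}{\phi'}=-\frac{1}{d-1}\frac{W}{\partial W}.
\end{equation}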
A family of simple superpotentials satisfying the requirements is given in Appendix \ref{app:toy}. In Figure \ref{figpot} we have plotted an example of a superpotential with the required properties and its corresponding potential.
Note that the asymptotic value of the scalar field is proportional to the expectation value of the scalar operator, and that one would recover pure AdS if this value were set to zero. An explicit way to see this is to first expand the superpotential when $\phi\to 0$ and then solve the equation of motion for $\phi$,
\begin{equation}
\phi\simeq C_1 e^{-\Delta_{UV} r/L_{UV}}+\cdots,
\end{equation}
where $C_1 \propto \left\langle {\cal O} \right\rangle$. Higher order terms can be found by continuing the expansion of the superpotential and solving the equation iteratively; they involve higher powers of $C_1 e^{-\Delta_{UV} r/L_{UV}}$.
With this solution we can solve for $A$
\begin{equation}
A=A_0+\frac{r}{L_{UV}}-\frac{1}{4(d-1)}C_1^2 e^{-2\Delta_{UV} r/L_{UV}}+\cdots.
\end{equation}
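The coefficient of the last term can be checked by integrating the second flow equation in \eqref{floweq} with the expanded superpotential \eqref{superpot},
\begin{equation}
A'=\frac{W}{d-1}\simeq \frac{1}{L_{UV}}+\frac{\Delta_{UV}}{2(d-1)L_{UV}}\,C_1^2\, e^{-2\Delta_{UV} r/L_{UV}},
\end{equation}
whose integral indeed reproduces the term $-\frac{1}{4(d-1)}C_1^2 e^{-2\Delta_{UV} r/L_{UV}}$, up to the integration constant $A_0$.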
Therefore, in the limit where the expectation value vanishes, $C_1\to 0$, the geometry becomes
\begin{equation}
\phi \to 0, \ \ \ A\to A_0+\frac{r}{L_{UV}}.
\end{equation}
This is just pure AdS with no scalar turned on. A technical but important point is that we will take the dimensions $\Delta_{UV}$ and $\Delta_{IR}$ not to be integers, the
reason being that the analysis simplifies and one avoids potential contributions to the Weyl anomaly from multitrace operators. We will also impose that the number of dimensions is even (since there is no anomaly in odd dimensions) and that $d>2$ for the calculation of correlators (since for $d=2$ one does not expect to have a dilaton \cite{Coleman:1973ci}).
\FIGURE[t]{
\includegraphics[width=6cm]{W.eps}
\includegraphics[width=6cm]{V.eps}
\caption{ A holographic RG flow interpolates between the minimum and the maximum of the superpotential shown on the left, or equivalently between the maximum and the minimum of the potential shown on the right.
}
\label{figpot}
}
The construction can be generalized to several scalars, which may have a non-trivial metric in configuration space. The basics remain the same: the flow between two fixed points is dual to a solution of the equations interpolating between two critical points of the superpotential with the right properties. In this case one has to worry about the directions transverse to the flow in configuration space, and make sure that there are no unstable modes.
\section{Conformal anomaly matching in holography}\label{sec:matching}
Our purpose in this section is to show explicitly how the anomaly matching arguments presented in \cite{Komargodski:2011vj,Komargodski:2011xv}\footnote{See also \cite{Schwimmer:2010za} for the original arguments regarding anomaly matching in theories with spontaneously broken conformal invariance.} for the generating functional are realized in gauge/gravity models.
\subsection{Field theory preliminaries}
Consider first a conformally invariant theory. In the presence of a background metric conformal invariance is explicitly broken. However, one can introduce a compensator field such that the theory is invariant under an extended symmetry. The compensator field, or spurion, will enter as a Weyl factor of the background metric, as was discussed in \cite{Komargodski:2011vj,Komargodski:2011xv},
\begin{equation}
g_{\mu\nu} \to \hat g_{\mu\nu}=e^{2\tau} g_{\mu\nu}.
\end{equation}
The extended symmetry involves both a Weyl rescaling of the metric and a shift of the spurion:
\begin{equation}\label{spuriont}
g_{\mu\nu}\to e^{2\sigma}g_{\mu\nu}, \ \ \tau\to \tau-\sigma.
\end{equation}
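Note that the dressed metric is inert under the combined transformation,
\begin{equation}
\hat g_{\mu\nu}=e^{2\tau}g_{\mu\nu}\ \to\ e^{2(\tau-\sigma)}\,e^{2\sigma}g_{\mu\nu}=\hat g_{\mu\nu},
\end{equation}
which is what makes it a useful building block for the generating functional.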
Correlation functions of the energy-momentum tensor in the CFT can be obtained by differentiating a generating functional (generally non-local), $\Gamma$, with respect to the background metric. The generating functional should have the same symmetries as the original theory, so it should be a diffeomorphism-invariant functional of the dressed metric $\hat g_{\mu\nu}$. In the absence of anomalies this would be the end of the story. However, when the number of dimensions $d$ is even, a CFT in curved space has in general a conformal anomaly. The generating functional should reproduce the anomaly, which implies that it is not completely invariant under the transformation \eqref{spuriont}; rather, it transforms as
\begin{equation}
\left.\delta_\sigma \Gamma\right|_{\tau=0} = \int d^d x\sqrt{-g} \sigma {\cal A}_d,
\end{equation}
where ${\cal A}_d$ is a polynomial in the curvature and its derivatives containing $d$ derivatives of the metric. The anomaly in the generating functional coincides with the expectation value of the trace of the energy-momentum tensor,
\begin{equation}
\vev{T^\mu_{\ \mu}}=\left.g_{\mu\nu}\frac{2}{\sqrt{-g}}\frac{\delta \Gamma}{\delta g_{\mu\nu}}\right|_{\tau=0}={\cal A}_d.
\end{equation}
For instance, in two-dimensions the anomaly polynomial is
\begin{equation}\label{Atwo}
{\cal A}_2=-\frac{c}{24\pi}R,
\end{equation}
where $c$ is the central charge of the CFT. In four and higher dimensions the number of independent coefficients is larger. Concretely, in four dimensions there are three possible independent contributions to the anomaly
\begin{equation}\label{Afour2}
{\cal A}_4=c \cW^2-a E_4+b \square R,
\end{equation}
where $\cW^2$ is the Weyl tensor squared and $E_4$ is the Euler density. The last term is not directly related to an anomaly, but it is a contact term that may appear in the energy-momentum tensor correlators and that can be eliminated through the addition of a local counterterm. The first and second terms are labeled by the coefficients $c$ and $a$. The $a$-anomaly (known as type A) is the analogue of the conformal anomaly in two dimensions, and it shares some properties with chiral anomalies: it is determined by a one-loop diagram with $d/2+1$ legs, no ultraviolet divergence is involved, and there is a set of descent equations related to the anomaly \cite{Boulanger:2007ab}. The $c$-anomaly (known as type B), on the other hand, is related to UV divergences of correlation functions (see e.g. \cite{Schwimmer:2010za}). In dimensions larger than four there is also a type A anomaly, proportional to the Euler density. Type B anomalies also appear, but their number increases with the number of dimensions; their classification is based on the construction of Weyl-invariant polynomials of the curvature, although in the presence of other sources there can be additional B anomalies.
Now consider a situation where conformal symmetry has been broken, either spontaneously or explicitly, and the theory flows to an IR fixed point. In the latter case it is still possible to maintain a symmetry of the form \eqref{spuriont} by dressing dimensionful quantities with the spurion field. In other words, any mass scale $m$ should be multiplied by a factor $e^{-\tau}$. In this way the generating functional takes essentially the same form in both cases. The generating functional will suffer in general from IR divergences due to the massless degrees of freedom of the IR CFT. One can avoid this by introducing an IR regulator $\mu$ and integrating out all the degrees of freedom above this scale, while keeping the degrees of freedom of the IR CFT. The resulting partition function will then take the form
\begin{equation}
Z=e^{\Gamma[\mu; \hat g_{\mu\nu}]}Z_{IR\; CFT}[\hat g_{\mu\nu}].
\end{equation}
With the IR regulator, $\Gamma[\mu; \hat g_{\mu\nu}]$ is a local functional of the dressed metric and couplings.
From the anomaly matching arguments of \cite{Schwimmer:2010za} and \cite{Komargodski:2011vj,Komargodski:2011xv}, the variation of the total partition function is determined by the coefficients of the conformal anomaly in the UV theory, $a_{UV}$ and $c_{UV}$. However, the IR CFT in general contributes to the anomaly with different coefficients, $a_{IR}$ and $c_{IR}$. Therefore, the generating functional $\Gamma[\mu; \hat g_{\mu\nu}]$ must account for the difference. This implies that there is a term in the generating functional whose variation gives
\begin{equation}
\delta_\sigma \Gamma[\mu; \hat g_{\mu\nu}] = \int d^d x \sqrt{-g} \sigma \left(\cA_d^{UV}-\cA_d^{IR}\right).
\end{equation}
For instance, in two dimensions
\begin{equation}
\Gamma[\mu; \hat g_{\mu\nu}] \supset -\frac{c_{UV}-c_{IR}}{24\pi}\int d^2 x \sqrt{- g} \tau\left( R+(\partial\tau)^2\right).
\end{equation}
The terms in four and six dimensions can be found in \cite{Komargodski:2011vj} and \cite{Elvang:2012st} respectively. In general, the coefficients of the anomalous terms coming from the variation of the functional should be proportional to the difference between the coefficients of the anomalies in the UV and IR CFTs. In two and four dimensions this allows one to prove a $c$- or $a$-theorem \cite{Komargodski:2011vj,Komargodski:2011xv},
\begin{equation}
c_{UV}\geq c_{IR}, \ \ a_{UV}\geq a_{IR}.
\end{equation}
Of course, in two dimensions the $c$-theorem was proven long ago by Zamolodchikov using a different approach \cite{Zamolodchikov:1986gt}. The new proof is based on the analytic properties of scattering amplitudes; unfortunately, it seems that the extension of these arguments to six dimensions is not straightforward, and a similar statement does not exist there \cite{Elvang:2012st}.
This is in contrast with holographic models, where there are various existing proofs of $a$-theorems in an arbitrary number of dimensions \cite{Alvarez:1998wr,Freedman:1999gp,Myers:2010tj}. We will show that the proof given in \cite{Freedman:1999gp} for RG flow geometries implies that the coefficient of the anomalous term in the generating functional is positive. Therefore, there may still be an argument purely within field theory that fixes the sign of the coefficient in six dimensions, even though it would be more subtle than in two and four dimensions.
\subsection{Spurion field for Weyl transformations}
We will now explain how to introduce a spurion field in a holographic dual. We start with the RG flow metric \eqref{gaussmet} and do the change of coordinates $\rho=e^{-2A}$
\begin{equation}\label{FGmetric}
ds^2=\ell^2(\rho)\frac{d\rho^2}{4\rho^2}+\frac{1}{\rho}\eta_{\mu\nu}dx^\mu dx^\nu,
\end{equation}
where
\begin{equation}
\ell(\rho)=\frac{1}{A'},
\end{equation}
asymptotes to $\Luv$ in the $\rho\to 0$ limit and to $\Lir$ in the $\rho\to \infty$ limit. In these two limits the metric just becomes $AdS$ in Fefferman-Graham coordinates. We can generalize this form of the metric to arbitrary solutions of the equations of motion
\begin{equation}
ds^2=\ell^2(\rho,x)\frac{d\rho^2}{4\rho^2}+\frac{1}{\rho}g_{\mu\nu}(x,\rho)dx^\mu dx^\nu,\ \ \ \phi=\phi(x,\rho),
\end{equation}
where, close to the boundary $\rho\to 0$, the lapse factor becomes a constant, $\ell\to \Luv$. In this limit the metric and the scalar field will have an expansion in powers of the radial coordinate (for $d$ even dimensions) \cite{deHaro:2000xn}
\begin{align}\label{metexp}
\notag g_{\mu\nu}(x,\rho)&=g_{\mu\nu}^{(0)}(x)+g_{\mu\nu}^{(2)}(x)\rho+\cdots+g_{\mu\nu}^{(d)}(x)\rho^{d/2} +h_{\mu\nu}^{(d)}(x)\rho^{d/2}\log\rho\\ \notag
&+\cdots+g_{\mu\nu}^{(d-\Delta_{UV})}(x)\rho^{d-\Delta_{UV}}+\cdots \\
\phi(x,\rho)&=\phi^{(d-\Delta_{UV})}(x)\rho^{(d-\Delta_{UV})/2}+\cdots+\phi^{(\Delta_{UV})}(x)\rho^{\Delta_{UV}/2}+\cdots
\end{align}
For odd $d$ dimensions, the logarithmic term proportional to $h_{\mu\nu}^{(d)}(x)$ is not present. As we will see this term is crucially related to the conformal anomaly.
We are assuming that $\Delta_{UV}$ is not an integer, otherwise there can be additional logarithmic terms in the expansion of the scalar field that will make the analysis more complicated.
In this expansion all the coefficients are completely determined by the leading terms $g_{\mu\nu}^{(0)}$ and $\phi^{(d-\Delta_{UV})}(x)$, up to the terms $g_{\mu\nu}^{(d)}(x)$ and $\phi^{(\Delta_{UV})}(x)$, for which the boundary conditions when $\rho\to\infty$ are also needed. The leading term in the expansion of the scalar field, $\phi^{(d-\Delta_{UV})}(x)\neq 0$, maps to a source for the dual operator in the field theory and explicitly breaks conformal invariance. The scalar solution still vanishes at the boundary, so the asymptotic AdS form of the metric is not affected. From the perspective of the holographic dual we are introducing a relevant deformation, so the UV physics will not be affected by it.
The coefficients $g_{\mu\nu}^{(n)}$ have been computed in pure gravity and in theories with scalar fields as well \cite{deHaro:2000xn,Bianchi:2001de,Bianchi:2001kw,Papadimitriou:2004rz}.
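For instance, in pure gravity the first coefficient reads, in units where $L=1$ and up to curvature conventions \cite{deHaro:2000xn},
\begin{equation}
g^{(2)}_{\mu\nu}=-\frac{1}{d-2}\left(R_{\mu\nu}-\frac{1}{2(d-1)}R\, g^{(0)}_{\mu\nu}\right),
\end{equation}
where the curvatures are those of $g^{(0)}_{\mu\nu}$; in the presence of the scalar there are additional contributions depending on $\phi^{(d-\Delta_{UV})}$.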
Each term has $n$ derivatives, so the expansion above can also be seen as a derivative expansion. In particular, if the derivatives are small enough, it can be extended to values of $\rho$ that are not asymptotically close to $\rho=0$. This implies that for small enough derivatives
\begin{equation}
|\partial_\alpha g_{\mu\nu}^{(0)}|, \ \ |\partial_\alpha\phi^{(d-\Delta_{UV})}| \ \ \ll \frac{1}{\rho_{IR}^{1/2}},
\end{equation}
the expansion above will also be valid in a region of the geometry where the space asymptotes to $AdS$ with radius $\Lir$, which we will call the near-horizon region, even though a solution with explicit dependence on the space-time coordinates will most likely continue to a different geometry when $\rho\to \infty$.
Note that the solution in the near-horizon region is determined by the same leading term of the metric $g_{\mu\nu}^{(0)}$, but that the scalar field will have a different expansion
\begin{equation}
\phi(x,\rho)=\phi^{(d-\Delta_{IR})}(x)\rho^{(d-\Delta_{IR})/2}+\cdots+\phi^{(\Delta_{IR})}(x)\rho^{\Delta_{IR}/2}+\cdots
\end{equation}
From the perspective of the near-horizon AdS, $\phi^{(d-\Delta_{IR})}$ acts as a ``source'' although of course it is not independent of the boundary value $\phi^{(d-\Delta_{UV})}$, and the same is true for the subleading terms. For the explicit solution of the holographic RG flow, one sees that the scalar solution interpolates between the two scalings in a way determined by the superpotential.
In the case where the geometry is dual to a theory with spontaneous breaking of the symmetry, the source term will vanish: $\phi^{(d-\Delta_{UV})}=0$. This is the case we considered in the previous section; here we will keep the source term in order to make the discussion more general.
We now proceed to introduce the spurion field. Note that in field theory the spurion was introduced by making a Weyl transformation of the metric and of dimensionful quantities. The question is what is the realization of this in the gravity dual. The answer for infinitesimal Weyl transformations in pure AdS is the so-called Penrose-Brown-Henneaux (PBH) diffeomorphism \cite{Penrose:1986ca,Brown:1986nw}, introduced in a holographic context in \cite{Imbimbo:1999bj}. We will generalize it to the RG flow geometries we introduced in the previous section and to non-infinitesimal transformations. The PBH transformation has also been used recently to construct the effective action of the dilaton from the action of probe branes embedded in $AdS$ \cite{Elvang:2012st}. The difference with this approach is that the probe brane represents a breaking of conformal invariance that is suppressed in the large-$N$ limit, so that to leading order the geometry that describes the UV fixed point is not affected and a decoupled dilaton shows up in the spectrum as a fluctuation of the brane. In the cases we study the breaking is not suppressed and the effect of the dilaton is seen indirectly in correlation functions.
In pure AdS space the metric is \eqref{FGmetric} with a constant $\ell^2(\rho)=L^2$. The infinitesimal PBH transformation takes the form
\begin{align}
&\rho\to e^{-2\tau}\rho\simeq (1-2\tau)\rho, \ \ \ x^\mu\to x^\mu+a^\mu,\\
&a^\mu=\frac{L^2}{2}\int_0^\rho d\rho' g^{\mu\nu}(x,\rho')\partial_\nu\tau(x),
\end{align}
and it maps a solution with boundary metric $g_{\mu\nu}^{(0)}(x)$ to a new solution where the original metric has changed by an infinitesimal Weyl transformation
\begin{equation}\label{varmet}
\delta g_{\mu\nu}^{(0)}(x)=+2\tau(x) g_{\mu\nu}^{(0)}(x).
\end{equation}
A similar infinitesimal transformation exists for the more general case \eqref{FGmetric}
\begin{align}\label{inftPBH}
&\rho\to e^{-2\tau}\rho\simeq (1-2\tau)\rho, \ \ \ x^\mu\to x^\mu+a^\mu,\\
&a^\mu=\frac{1}{2}\int_0^\rho d\rho' \ell^2(\rho')g^{\mu\nu}(x,\rho')\partial_\nu\tau(x),
\end{align}
The effect of this transformation on the leading term of the metric is again \eqref{varmet}. The leading terms of the scalar field transform as
\begin{equation}
\delta \phi^{(d-\Delta_{UV})}=-(d-\Delta_{UV})\tau(x) \phi^{(d-\Delta_{UV})}, \ \ \ \delta \phi^{(d-\Delta_{IR})}=-(d-\Delta_{IR})\tau(x) \phi^{(d-\Delta_{IR})}.
\end{equation}
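These transformation rules follow directly from the form of the expansions: replacing $\rho\to e^{-2\tau}\rho$ in the leading terms gives
\begin{equation}
\frac{1}{\rho}\,g^{(0)}_{\mu\nu}\to \frac{e^{2\tau}}{\rho}\,g^{(0)}_{\mu\nu},\qquad \phi^{(d-\Delta)}\rho^{(d-\Delta)/2}\to e^{-(d-\Delta)\tau}\phi^{(d-\Delta)}\rho^{(d-\Delta)/2},
\end{equation}
with $\Delta=\Delta_{UV}$ or $\Delta_{IR}$, which to linear order in $\tau$ reproduces \eqref{varmet} and the variations above.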
Subleading terms in the expansion of the metric and the scalar field \eqref{metexp} have a covariant dependence on the leading terms, so the effect of this transformation is to make them dependent on the combinations $g_{\mu\nu}^{(0)}+\delta g_{\mu\nu}^{(0)}(x)$, $\phi^{(d-\Delta_{UV})}+\delta \phi^{(d-\Delta_{UV})}$. There is however one term that does not change covariantly, due to the logarithmic term in the expansion of the metric \eqref{metexp}. We then have, when $\rho\to 0$,
\begin{equation}
\delta g_{\mu\nu}^{(d)}=\frac{\delta g_{\mu\nu}^{(d)}}{\delta g_{\alpha\beta}^{(0)}}\delta g_{\alpha\beta}^{(0)}+\frac{\delta g_{\mu\nu}^{(d)}}{\delta\phi^{(d-\Delta_{UV})}}\delta \phi^{(d-\Delta_{UV})}-2\tau(x)h_{\mu\nu}^{(d)},
\end{equation}
while, when $\rho\to\infty$,
\begin{equation}
\delta g_{\mu\nu}^{(d)}=\frac{\delta g_{\mu\nu}^{(d)}}{\delta g_{\alpha\beta}^{(0)}}\delta g_{\alpha\beta}^{(0)}+\frac{\delta g_{\mu\nu}^{(d)}}{\delta\phi^{(d-\Delta_{IR})}}\delta \phi^{(d-\Delta_{IR})}-2\tau(x)h_{\mu\nu}^{(d)}.
\end{equation}
Introducing a spurion field requires promoting the infinitesimal PBH transformation to a finite transformation, including non-linear effects
\begin{align}
\rho\to e^{-2\tau}\rho+\delta \rho,\ \ x^\mu\to x^\mu+a^\mu,
\end{align}
where
\begin{equation}
a^\mu=\frac{1}{2}\int_0^{\rho e^{-2\tau}} d\rho' \ell^2(\rho')g^{\mu\nu}(x,\rho')\partial_\nu\tau(x)+\delta a^\mu.
\end{equation}
The boundary values of the fields are affected by this transformation, so in general the resulting solution will no longer interpolate between two AdS spaces. However, if the derivatives of $\tau(x)$ are small enough, we can apply the same derivative expansion \eqref{metexp} in the near-horizon region up to some value of the radial coordinate $\rhir$.
Indeed, the non-linear generalization of the PBH transformation is consistent with an expansion of $\delta \rho$ and $\delta a^\mu$ in terms of the number of derivatives of $\tau$
\begin{align}
\notag &\delta \rho=\delta\rho^{(2)}(\partial^2\tau)+\delta\rho^{(4)}(\partial^4\tau)+\delta\rho^{(6)}(\partial^6\tau)+\cdots,\\
&\delta a^\mu=\delta a^\mu_{(3)}(\partial^3\tau)+\delta a^\mu_{(5)}(\partial^5\tau)+\delta a^\mu_{(7)}(\partial^7\tau)+\cdots
\end{align}
Each term is fixed by demanding that the metric keeps the form
\begin{equation}\label{FGmetric2}
ds^2=\ell^2(\rho e^{-2\tau})\frac{d\rho^2}{4\rho^2}+\frac{1}{\rho}e^{2\tau}\tilde g_{\mu\nu}(\rho e^{-2\tau},x)dx^\mu dx^\nu.
\end{equation}
For instance, the contribution to the shift in $\rho$ that is second order in derivatives is determined by the equation
\begin{equation}
\left[e^{2\tau}\partial_\rho+\frac{\ell'}{\ell} -\frac{1}{\rho e^{-2\tau}}\right]\delta \rho^{(2)} =-\frac{\rho e^{-2\tau}}{2}g^{\alpha\beta}\partial_\alpha\tau\partial_\beta\tau.
\end{equation}
Although straightforward, this expansion can be quite cumbersome. It is also not completely unambiguous; for instance, the equation above has a homogeneous solution
\begin{equation}
\delta \rho^{(2)}_H= \rho e^{-2\tau}\frac{1}{\ell} f(x),
\end{equation}
for some arbitrary function $f(x)$ that has to be second order in derivatives in order to be consistent with the expansion; an example is $f(x)=e^{-2\tau}g_{(0)}^{\alpha\beta}\partial_\alpha\tau\partial_\beta\tau$. Close to the boundary, this can be seen as a field redefinition of $\tau$ adding higher derivative terms
\begin{equation}
\tau\to \tau+\frac{1}{\Luv} e^{-2\tau}g_{(0)}^{\alpha\beta}\partial_\alpha\tau\partial_\beta\tau+O(\partial^4).
\end{equation}
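Incidentally, one can check that $\delta \rho^{(2)}_H$ solves the homogeneous equation. Taking the primes on $\ell$ to denote derivatives with respect to its argument $\rho e^{-2\tau}$, one finds
\begin{equation}
\left[e^{2\tau}\partial_\rho+\frac{\ell'}{\ell}-\frac{e^{2\tau}}{\rho}\right]\frac{\rho e^{-2\tau}}{\ell}f(x)=\frac{f}{\ell}-\frac{\rho e^{-2\tau}\ell'}{\ell^2}f+\frac{\rho e^{-2\tau}\ell'}{\ell^2}f-\frac{f}{\ell}=0.
\end{equation}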
The final effect of the transformation is to do a finite Weyl rescaling of the sources for the metric and the scalar field, both at the boundary and in the near-horizon region
\begin{align}\label{dressedsources}
\notag & g_{\mu\nu}^{(0)}(x)\to e^{2\tau(x)} g_{\mu\nu}^{(0)}(x)\equiv \hat g_{\mu\nu},\\
\notag & \phi^{(d-\Delta_{UV})}\to e^{-(d-\Delta_{UV})\tau(x)} \phi^{(d-\Delta_{UV})}\equiv \hat \phi_{UV},\\ & \phi^{(d-\Delta_{IR})}\to e^{-(d-\Delta_{IR})\tau(x)} \phi^{(d-\Delta_{IR})}\equiv \hat \phi_{IR}.
\end{align}
All other terms in the derivative expansion depend on these sources and transform covariantly according to the Weyl transformation, except for the anomalous contribution
\begin{equation}\label{shiftmetric}
g_{\mu\nu}^{(d)}[g_{\mu\nu}^{(0)},\phi^{(d-\Delta)}]\to g_{\mu\nu}^{(d)}[\hat g_{\mu\nu},\hat \phi]-2\tau(x)h_{\mu\nu}^{(d)}[\hat g_{\mu\nu},\hat \phi].
\end{equation}
The additional contribution appears only in even $d$ dimensions, and is due to the logarithmic term in the expansion of the metric in \eqref{metexp}.
\subsection{Anomalous contribution to the generating functional}
Having established how the spurion should be introduced in the holographic dual, we can now derive the anomalous contribution to the generating functional and check whether the result agrees with the expectation from field theory.
According to the AdS/CFT correspondence, the generating functional of the field theory is the on-shell action of the gravitational theory, as a function of the boundary values of the fields. We will assume a large-$N$, strong coupling approximation, such that the on-shell action reduces to the classical gravitational action \eqref{EHaction} evaluated on the solutions. We will define the on-shell action with cutoffs in the radial direction, $\rhuv\ll 1$ and $\rhir\gg 1$
\begin{align}\label{action}
\notag S[\rhuv,\rhir]&=-\frac{1}{2\kappa^2}\int d^d x\int_\rhuv^\rhir d\rho \sqrt{-G} \left(R+\partial_M \phi \partial^M \phi +2V(\phi)\right)\\ &+\frac{1}{\kappa^2}\left.\int d^{d} x \sqrt{-\tilde G} K\right|_{\rhir}-\frac{1}{\kappa^2}\left.\int d^{d} x \sqrt{-\tilde G} K\right|_{\rhuv}.
\end{align}
On the second line we have added Gibbons-Hawking terms, since we will perform variations of the classical solution that affect the value of the metric at both cutoffs. Here $K$ is the extrinsic curvature, defined from the metric components $G_{\mu\nu}=\frac{1}{\rho}g_{\mu\nu}$; $\sqrt{-\tilde G}$ is the square root of the determinant of $G_{\mu\nu}$, while $\sqrt{-G}$ is the square root of the determinant of the full metric, and $N^2=g_{\rho\rho}=\frac{\ell^2}{4\rho^2}$ defines the lapse function $N$. The explicit form of the extrinsic curvature is
\begin{equation}
K_{\mu\nu}=\frac{1}{2N} \partial_\rho G_{\mu\nu}=\frac{1}{\ell}\left(\partial_\rho-\frac{1}{\rho}\right) g_{\mu\nu}, \ \
K=G^{\mu\nu} K_{\mu\nu}=\rho g^{\mu\nu}K_{\mu\nu}.
\end{equation}
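As a simple check of conventions, for pure AdS ($g_{\mu\nu}=\eta_{\mu\nu}$, $\ell=L$) these formulas give
\begin{equation}
K_{\mu\nu}=-\frac{1}{L\rho}\,\eta_{\mu\nu},\qquad K=-\frac{d}{L},
\end{equation}
so the Gibbons-Hawking terms are proportional to the volume of the cutoff surfaces.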
In anti-de Sitter space, the on-shell action has the form \cite{Henningson:1998gx,Imbimbo:1999bj}
\begin{equation}\label{intS}
S=-\frac{L}{2\kappa^2}\int_{\rho_{UV}} d\rho d^d x\,\rho^{-\frac{d}{2}-1}\sqrt{-g^{(0)}}\,b(x,\rho),
\end{equation}
where, close to the boundary $\rho\sim \rho_{UV}\to 0$,
\begin{equation}
b(x,\rho)=\sum_{n=0}^\infty b_{2n}(x)\rho^n+\cdots.
\end{equation}
The dots refer to possible contributions that are not integer powers of $\rho$, which can appear in the presence of matter fields in the bulk. From this expansion one can compute the form of the divergent contributions to the on-shell action. Note that in even dimensions there is a divergent logarithmic contribution from the order-$d$ term,
\begin{equation}
S_{\rm div}\supset \frac{1}{\kappa^2}\int d^d x\, \log(\rho_{UV}) b_{d}(x),
\end{equation}
which is in fact proportional to the anomaly
\begin{equation}\label{bd}
b_d=-\frac{d}{2}L^{d-2} \cA_d.
\end{equation}
Here and in the following, the anomaly polynomial $\cA_d$ has been defined in such a way that the coefficient of the Euler density $E_d$ is one. The divergence is eliminated by adding a counterterm $S_{ct}=-S_{\rm div}$.
Under an infinitesimal PBH transformation the $\rho$ coordinate is rescaled by a $e^{-2\tau}$ factor, so that a change of variables in the integral appearing in \eqref{intS} puts this factor in the limits of integration $\rhuv\to \rhuv e^{-2\tau}$, $\rhir\to \rhir e^{-2\tau}$. Then, the on-shell action is shifted by a finite term proportional to the anomaly
\begin{equation}
\delta S_{\rm div} =-\frac{dL^{d-1}}{\kappa^2}\int d^d x\,\tau(x)\sqrt{-g^{(0)}} \cA_d.
\end{equation}
This can be compensated by a change of the finite part of the action if one combines the PBH transformation with a Weyl transformation of the boundary metric, in such a way that the boundary metric is left invariant. Note that this term is scheme-dependent: in a different scheme one could choose the cutoffs to depend on the space-time coordinates, $\rhuv\to \rhuv e^{-2\sigma}$ and $\rhir\to \rhir e^{-2\sigma}$, in such a way that the value of $\tau$ is shifted, $\tau\to \tau+\sigma$. Demanding that there are no anomalous contributions before performing the PBH transformation fixes the scheme to $\sigma=0$.
In our case the action takes the form
\begin{equation}
S=-\frac{1}{\kappa^2}\int_{\rho_{UV}}^{\rho_{IR}} d\rho d^d x\,\ell(\rho)\rho^{-\frac{d}{2}-1}\sqrt{-g^{(0)}}\,b(x,\rho).
\end{equation}
We can expand the function $b(x,\rho)$ in the $AdS$ regions
\begin{align}
&\rho\sim \rhuv, \ \ b(x,\rho)=\sum_{n=0}^\infty b_{2n}^{UV}(x)\rho^n,\\
&\rho\sim \rhir, \ \ b(x,\rho)=\sum_{n=0}^\infty b_{2n}^{IR}(x)\rho^n.
\end{align}
The UV expansion is the usual $\rho\to 0$ expansion. The IR expansion is a derivative expansion that is valid for finite $\rho$ deep in the IR $AdS$ region but not all the way to the horizon, where the expansion breaks down. The coefficients $b_d^{UV}(x)$ and $b_d^{IR}(x)$ take the same form as \eqref{bd}, with $L$ changed to $\Luv$ and $\Lir$, respectively. The factor $\ell(\rho)$ becomes a constant, $\Luv$ or $\Lir$, plus $\rho$-dependent corrections. Since the flow is driven by the matter fields, these contributions take the form of non-integer powers of $\rho$.
From these expansions we can infer that there are contributions proportional to the anomaly coming from both the UV and the IR cutoffs,
\begin{equation}
S\supset -\frac{d}{2\kappa^2}\int d^d x\, \sqrt{-g^{(0)}}\left(\Lir^{d-1}\log(\rhir)-\Luv^{d-1}\log(\rhuv)\right)\cA_d.
\end{equation}
Following the same arguments as for the pure $AdS$ case, a PBH transformation will add to the action a term linear in $\tau$ and proportional to the difference between the UV and IR anomalies
\begin{equation}\label{anomterm1}
\delta S=-\frac{d}{\kappa^2}\left(\Luv^{d-1}-\Lir^{d-1}\right)\int d^d x\,\tau(x) \sqrt{-\hat g^{(0)}}\,\hat \cA_d.
\end{equation}
The contribution to the energy-momentum tensor of the theory can be computed from a variation of the expression above with respect to the metric. We will now give an alternative derivation that will serve as a cross-check of our result. Using the equations of motion, the on-shell action can also be expressed as
\begin{equation}\label{varonshell}
S[\rhuv,\rhir]=-\frac{1}{\kappa^2}\int d^d x\int_\rhir^\rhuv d\rho \sqrt{-G} \left(\frac{1}{2}\pi^{\mu\nu} \partial_\rho G_{\mu\nu}+\pi_\phi \partial_\rho \phi\right).
\end{equation}
This has the usual form of a classical action with a vanishing Hamiltonian, a well-known result in the ADM formalism that has also been used in the Hamilton-Jacobi formulation of RG flows \cite{deBoer:1999xf,Papadimitriou:2004rz}. The momenta conjugate to the metric and the scalar field are defined as
\begin{equation}
\pi^{\mu\nu}=\frac{\kappa^2}{\sqrt{-G}}\frac{\delta S}{\delta \partial_\rho G_{\mu\nu}}= K^{\mu\nu}-KG^{\mu\nu}, \ \ \ \pi_\phi=\frac{\kappa^2}{\sqrt{-G}}\frac{\delta S}{\delta \partial_\rho \phi}=\partial_\rho \phi.
\end{equation}
Because the action is evaluated on-shell, its variation only receives contributions from the cutoffs
\begin{equation}\label{variation}
\delta S[\rhuv,\rhir]=-\left.\frac{1}{\kappa^2}\int d^d x\,\sqrt{-G} \left(\frac{1}{2}\pi^{\mu\nu} \delta G_{\mu\nu}+\pi_\phi \delta \phi\right)\right|^{\rhuv}_{\rhir}.
\end{equation}
The variation at the boundary should vanish for dynamical modes; this corresponds to the Dirichlet boundary conditions $\delta \phi=0$, $\delta G_{\mu\nu}=0$.
The regulated expectation value of the energy-momentum tensor is
\begin{equation}
T_{\mu\nu}^{\rm reg}=-\frac{1}{\kappa^2}\rho^{-\frac{(d-2)}{2}}\pi_{\mu\nu}.
\end{equation}
At the boundary $\rhuv\to 0$, this is divergent and one needs to add counterterms that cancel those divergences, which can be done by adding covariant local terms at the cutoff \cite{deHaro:2000xn}. In the near-horizon region the regulated energy-momentum tensor is finite and no additional contributions should be added.
After eliminating the divergences, the $\rhuv\to 0$ limit leaves only finite contributions to the energy-momentum tensor. After the holographic renormalization procedure has been implemented, the variation of the action takes the form
\begin{equation}
\delta S = \int d^d x \sqrt{-g^{(0)}}\left[\frac{1}{2}\vev{T^{\mu\nu}}\delta g^{(0)}_{\mu\nu}+\vev{\cO}\delta \phi^{(d-\Delta_{UV})} \right].
\end{equation}
In our case the expectation value of the energy-momentum tensor has contributions from the boundary and the near-horizon region. Near-horizon contributions depend in general on $\rhir$, which acts as an IR regulator. The exception is the order $d$ term (with no logarithmic factors), which has the same form as the renormalized boundary contribution. Using the results of \cite{Henningson:1998gx, deHaro:2000xn}, the contributions of order $d$ are
\begin{equation}
\vev{T_{\mu\nu}}_d[g^{(0)},\phi^{(d-\Delta_{UV})}] = \frac{d}{2\kappa^2}\left(\Luv^{d-1}-\Lir^{d-1}\right)e^{-(d-2)\tau}g^{(d)}_{\mu\nu}[g^{(0)},\phi^{(d-\Delta_{UV})}]+\cdots
\end{equation}
where the dots refer to contributions depending on lower order terms $g^{(n<d)}$.
When we perform the non-linear PBH transformation, the energy-momentum tensor becomes a function of the dressed sources \eqref{dressedsources}. The transformation is fully covariant, except for the additional shift of the order $d$ component \eqref{shiftmetric}, which produces an additional contribution to the energy-momentum tensor
\begin{equation}
\delta \vev{T_{\mu\nu}}[\hat g,\hat \phi_{UV}]= -\tau(x) \frac{d}{\kappa^2}\left(\Luv^{d-1}-\Lir^{d-1}\right)e^{-(d-2)\tau} h^{(d)}_{\mu\nu}[\hat g_{\mu\nu},\hat \phi_{UV}].
\end{equation}
As was explained in \cite{deHaro:2000xn}, $h^{(d)}_{\mu\nu}$ equals the variation of the conformal anomaly with respect to the metric, so this implies that the generating functional contains a new term
\begin{equation}
\Gamma\supset \frac{d}{\kappa^2}\left(\Luv^{d-1}-\Lir^{d-1}\right)\int d^d x\,\sqrt{-\hat g}\tau(x)\,\hat \cA_d,
\end{equation}
in agreement with our previous derivation \eqref{anomterm1}. Note, however, that the variation of the Euler density vanishes, so strictly speaking this derivation only captures the contributions to the conformal anomaly that are Weyl-invariant.
The overall coefficient of the Euler density in the new term is proportional to the difference of the $a$-central charges of the UV and IR fixed points \cite{Henningson:1998gx}
\begin{equation}
a_{UV}-a_{IR} \propto \frac{d}{\kappa^2}\left(\Luv^{d-1}-\Lir^{d-1}\right).
\end{equation}
The holographic $a$-theorem of \cite{Freedman:1999gp} implies that $\Luv\geq \Lir$, so the coefficient of the anomalous term is indeed positive. For dimensions larger than four this is a prediction from holography; no field-theory proof of this statement exists. The holographic calculation may be a hint that such a proof actually exists.
For two or four dimensions the result is in agreement with the field theory analysis. Note that the trace of the new contribution to the energy-momentum tensor vanishes, $g^{(0)\,\mu\nu}\delta \vev{T_{\mu\nu}}=0$, so the new contribution does not affect the trace anomaly in the one-point function. The trace anomaly is proportional to the difference of central charges, but it is determined by the trace of the $g^{(d)}_{\mu\nu}$ term
\begin{equation}
g^{\mu\nu}\vev{T_{\mu\nu}}[ g,\phi_{UV}]= \frac{d}{\kappa^2}\left(\Luv^{d-1}-\Lir^{d-1}\right) \cA_d.
\end{equation}
This is the expected result on physical grounds: by introducing the cutoff $\rhir$ we are discarding the massless degrees of freedom at the IR fixed point, so the anomaly generated by the degrees of freedom above the cutoff is proportional to the total anomaly minus the anomaly of the IR CFT.
\section{Spontaneous breaking and Goldstone boson}\label{sec:spont}
When conformal symmetry is spontaneously broken we expect in principle that a massless Goldstone boson associated to dilatations will be present at low energies.\footnote{In $d= 2$ dimensions the analysis becomes more subtle, since IR divergences prevent the appearance of free massless modes, although it is still possible to have massless scalar states in the theory \cite{Coleman:1973ci}.} According to the anomaly matching arguments of \cite{Komargodski:2011vj} its effective action should contain an anomalous term similar to the one we have computed for the spurion field. Although it is encouraging that one should find the same result for the generating functional as in the field theory calculation, there are some caveats in the use of the spurion field that make a separate analysis of the dilaton mode necessary. The caveats are the use of a radial cutoff in order to compute the generating functional in a derivative expansion and the fact that introducing the spurion field alters the boundary values of the metric and other fields; according to the usual dictionary of the AdS/CFT correspondence, the spurion should not be interpreted as a dynamical mode of the dual field theory.
Our goal in this section is to study correlation functions of the energy-momentum tensor and the scalar operator whose expectation value breaks conformal invariance. When the symmetry is broken spontaneously, the behavior of these correlators at low momenta is constrained. In particular, a massless pole should appear in the transverse part of the mixed tensor-scalar correlator, due to the existence of zero energy states.
In order to compute correlation functions we need to study small fluctuations of the metric and the scalar field. Related works include the calculation of correlation functions \cite{DeWolfe:2000xi,Arutyunov:2000rq,Mueck:2001cy,Papadimitriou:2004rz} in a geometry dual to a particular state in the Coulomb branch of $\cN=4$ SYM \cite{Freedman:1999gk}, which provides an example where there is a massless pole that one can identify with the dilaton. In this case the near-horizon geometry is not AdS but a singular geometry, and there is a mass gap in the spectrum. In \cite{Porrati:1999ew} a calculation in a four-dimensional flow between two fixed points was carried out, with the result that at small momenta the two-point function of a probe scalar becomes that of the IR CFT.
A calculation of correlators in a two dimensional flow was also made in \cite{Berg:2002hy}.
Although for a general superpotential we were not able to find the exact solutions to the equations of motion, we can perform an expansion in momentum that can be solved order by order. This expansion is sufficient to capture the low momentum behavior of the correlation functions. It breaks down, however, very close to the horizon, where regularity or other physical conditions have to be imposed in order to fix the solution.
Fortunately, we can find the near-horizon solution using a different approximation, namely approximating the near-horizon geometry by $AdS$. There is a region in the geometry where the two approximations overlap, and the matching between them fixes the form of the correlators at small momenta. In appendix \ref{app:coulomb} we check that this procedure indeed captures the leading low momentum behavior in a geometry dual to $\cN=4$ SYM in the Coulomb branch, where the full solution can be computed analytically.
\subsection{Tensor and scalar fluctuations}
We will solve the equations of motion for the scalar and tensor modes using a low momentum approximation. We will distinguish two regions in the geometry. In the first region, which extends from the boundary to close to the horizon, the solutions are expanded in momentum $q^2$. We solve first for the zero momentum solution and then use it as a starting point to find the next order corrections iteratively. For very low momentum this expansion is valid in the region close to the horizon where the geometry is approximately AdS as long as $q^2 L_{IR}^2 e^{-2r/L_{IR}}\ll 1$. In this way one can construct the two independent solutions corresponding to normalizable and non-normalizable asymptotic behavior at the boundary. In order to compute correlation functions of the dual theory one needs to impose a boundary condition at the horizon that fixes a relation between the coefficients of the two solutions. However, when $r\to -\infty$ the low-momentum expansion will break down, and this defines the second (near-horizon) region. We take the geometry in the second region to be AdS as a good approximation, and solve for arbitrary values of the momentum $q^2$, imposing the boundary condition that the solutions are ingoing at the horizon.\footnote{This would give retarded correlators. An equivalent condition is to impose a regularity condition at the horizon in Euclidean signature.} For low enough momentum there is an overlap between the first and second regions, so that, to leading order, the low-momentum expansion of the solution found in the second region should coincide with the near-horizon limit of the solution found in the first region. Comparing the two solutions one can then fix the relation between normalizable and non-normalizable modes and use this information to compute the correlation functions of the dual theory.
In the models we are considering the dual of the RG flow is described by Einstein gravity coupled to a scalar field. The equations of motion are
\begin{equation}\label{einsteqs}
R_{MN}=-\partial_M\phi\partial_N\phi-\frac{2}{d-1}g_{MN}V(\phi), \ \ \ \nabla^2 \phi=\frac{\partial V}{\partial \phi}.
\end{equation}
We will adopt Gaussian coordinates for the metric,
\begin{equation}
ds^2=g_{MN}dx^M dx^N=dr^2+g_{\mu\nu}(r,x)dx^\mu dx^\nu.
\end{equation}
The choice $g_{\mu r}=0$, $g_{rr}=1$ corresponds to fixing $d+1$ diffeomorphisms up to transformations of the form
\begin{equation}
\delta g_{MN}=\nabla_{(M} \xi_{N)}=0, \quad {\rm if} \ M=r \; {\rm and / or} \; N=r.
\end{equation}
For the background metric $g_{\mu\nu}=e^{2A}\eta_{\mu\nu}$, we can distinguish between $d$-dimensional diffeomorphisms
\begin{equation}\label{diff1}
\xi_r=0, \ \ \xi_\mu=e^{2 A} \sigma_\mu(x),
\end{equation}
and translations in the radial direction
\begin{equation}\label{gaugetr}
\xi_r=-\sigma(x), \ \ \xi_\mu=\partial_\mu\sigma(x) e^{2 A} \int^r dr' e^{-2 A(r')}.
\end{equation}
For $\sigma$ a constant, these transformations become scale transformations at the $AdS$ boundary, so it is natural to associate them with dilatations in the dual field theory.
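As a simple illustration, consider a pure AdS background with $A=r/\Luv$. Choosing the integration constant in \eqref{gaugetr} appropriately (a different choice just adds a transformation of the type \eqref{diff1}), one finds
\begin{equation}
\xi_r=-\sigma(x), \qquad \xi_\mu=-\frac{\Luv}{2}\,\partial_\mu\sigma(x),
\end{equation}
so that the induced metric fluctuation is $\delta g_{\mu\nu}=-\frac{2\sigma}{\Luv}\,e^{2r/\Luv}\eta_{\mu\nu}-\Luv\,\partial_\mu\partial_\nu\sigma$. For constant $\sigma$ the boundary metric is simply rescaled, $\delta g^{(0)}_{\mu\nu}=-\frac{2\sigma}{\Luv}\,g^{(0)}_{\mu\nu}$, which is a Weyl transformation.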
We will expand the equations to linear order in fluctuations
\begin{equation}
g_{\mu\nu}=e^{2A}(\eta_{\mu\nu}+h_{\mu\nu}), \ \ \phi=\phi_0+\varphi,
\end{equation}
and use the flat metric to raise and contract indices. For instance, the trace of the metric fluctuation is defined as $h=h^\mu_\mu=\eta^{\mu\nu}h_{\mu\nu}$.
We will use an expansion in Fourier modes. Each mode of the metric admits the following decomposition into transverse and longitudinal, traceless and trace parts
\begin{equation}\label{hdec}
h_{\mu\nu}=h_{\mu\nu}^{TT}+h_{\mu\nu}^{TL}+h_{\mu\nu}^T+h_{\mu\nu}^L.
\end{equation}
More explicitly, we define the projectors
\begin{align}\label{projector}
\notag &P_T^{\mu\nu}=q^2\eta^{\mu\nu}-q^\mu q^\nu,\ \ P_L^{\mu\nu}=q^\mu q^\nu\\
&\Pi^{\mu\nu,\alpha\beta}=P_T^{\mu\alpha}P_T^{\nu\beta}-\frac{1}{d-1}P_T^{\mu\nu}P_T^{\alpha\beta},
\end{align}
so that
\begin{align}\label{qtens}
\notag & h_{\mu\nu}^{TT}=\frac{1}{(q^2)^2}\Pi_{\mu\nu}^{\ \ \ \alpha\beta}h_{\alpha\beta},\\
\notag & h_{\mu\nu}^{TL}=\frac{1}{(q^2)^2}\left(P_{T \mu}^{\ \ \alpha}P_{L \nu}^{\ \ \beta}+P_{T \nu}^{\ \ \beta}P_{L \mu}^{\ \ \alpha} \right)h_{\alpha\beta},\\
\notag & h_{\mu\nu}^T=\frac{1}{(d-1)(q^2)^2}P_{T\ \mu\nu}P_T^{\alpha\beta} h_{\alpha\beta},\\
& h_{\mu\nu}^L=\frac{1}{(q^2)^2}P_{L\ \mu}^{\ \ \alpha}P_{L \ \nu}^{\ \ \beta} h_{\alpha\beta}.
\end{align}
For convenience, we will also define $h=h_T+h_L$, where
\begin{equation}
h_T=\eta^{\mu\nu}h_{\mu\nu}^T=\frac{1}{q^2}P_T^{\alpha\beta}h_{\alpha\beta}, \ \ h_L=\eta^{\mu\nu}h_{\mu\nu}^L=\frac{1}{q^2}P_L^{\alpha\beta}h_{\alpha\beta}.
\end{equation}
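As a quick numerical sanity check of these definitions (our own cross-check, not part of the derivation), one can verify in $d=4$ with a mostly-plus metric that the projectors satisfy the expected algebra and that the TT component defined in \eqref{qtens} is transverse and traceless:

```python
# Sanity check (ours, not from the paper) of the projectors and of the
# TT projection, in d = 4 with mostly-plus metric eta = diag(-1,1,1,1).
import numpy as np

d = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # eta_{mu nu}; numerically its own inverse
q_up = np.array([0.3, 1.2, -0.7, 0.5])  # q^mu, an arbitrary off-shell momentum
q_low = eta @ q_up                      # q_mu
q2 = q_up @ q_low                       # q^2

PT_low = q2 * eta - np.outer(q_low, q_low)  # P_T with both indices down
PT_up = q2 * eta - np.outer(q_up, q_up)     # P_T with both indices up
PT_mix = PT_low @ eta                       # P_{T mu}^{nu}
PL_mix = np.outer(q_low, q_up)              # P_{L mu}^{nu}

# completeness and idempotency (up to factors of q^2)
assert np.allclose(PT_mix + PL_mix, q2 * np.eye(d))
assert np.allclose(PT_mix @ PT_mix, q2 * PT_mix)
assert np.isclose(np.trace(PT_mix), (d - 1) * q2)

# TT projection of an arbitrary symmetric tensor eps_{mu nu}
rng = np.random.default_rng(0)
eps = rng.normal(size=(d, d))
eps = eps + eps.T
hTT = (PT_mix @ eps @ PT_mix.T
       - PT_low * np.einsum('ab,ab->', PT_up, eps) / (d - 1)) / q2**2
assert np.allclose(q_up @ hTT, 0.0)                     # transverse: q^mu h_{mu nu} = 0
assert np.isclose(np.einsum('mn,mn->', eta, hTT), 0.0)  # traceless
```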
The details of the derivation of the equations of motion can be found in Appendix \ref{app:eoms}. Here we quote the results for the two modes we will analyze in the following:
\begin{itemize}
\item Tensor fluctuation:
\begin{equation}\label{eqTT}
0={h^{TT}_{\mu\nu}}''+dA'{h^{TT}_{\mu\nu}}'-q^2e^{-2A}{h^{TT}_{\mu\nu}}.
\end{equation}
\item Scalar fluctuations:\\
There are two constraints for the trace components of the metric
\begin{align}
\label{eqA} &0= (d-1)A'h'+2\partial V\varphi -2\phi_0'\varphi'-q^2 e^{-2A} h_T,\\
\label{consT} &0=h_T'+2\phi_0'\varphi,
\end{align}
and, defining $h'=e^{-2A}H$, two second order equations for the metric and the scalar fluctuations:
\begin{align}
\label{scalareqs1} &0= e^{-2A}H'+\frac{4}{d-1}\partial V \varphi+4\phi_0'\varphi',\\
\label{scalareqs2} &0=\varphi''+dA'\varphi'-\partial^2 V\varphi-q^2 e^{-2A}\varphi+\frac{\phi_0'}{2}e^{-2A}H.
\end{align}
\end{itemize}
The equation of motion for the vector fluctuation is simply ${h^{TL}}'=0$. This fluctuation is not dynamical and can be set to zero using the residual diffeomorphism invariance \eqref{diff1}.
\subsection{Tensor fluctuation}
We start with the simpler case of a tensor fluctuation, given by \eqref{eqTT}, with $h_{\mu\nu}^{TT}\equiv \frac{1}{(q^2)^2}\Pi_{\mu\nu}^{\ \ \alpha\beta}\varepsilon_{\alpha\beta} h^{TT}$, where $\varepsilon_{\mu\nu}$ is an arbitrary symmetric tensor. We will also define $Q\equiv \sqrt{-q^2}=\sqrt{\omega^2-\vec{q}^2}$. The analysis can be extended to Euclidean signature by considering spacelike momenta, $Q=i\sqrt{\vec{q}^{\,2}}$.
In general, we cannot solve \eqref{eqTT} exactly but we can solve it perturbatively for small $Q^2$
\begin{equation}\label{tenssol}
h^{TT}=h_{b}^{TT}+h_{N}^{TT}\int dr e^{-dA}+O(Q^2),
\end{equation}
where $h_{b}^{TT}$ and $h_{N}^{TT}$ are the coefficients of the non-normalizable and normalizable modes, respectively. The value of $h_{b}^{TT}$ determines the tensor part of the boundary metric. Neither coefficient depends on the radial coordinate, but both may depend on $q$. One can solve recursively for the next orders
\begin{equation}
h^{TT}=\sum_{n\geq 0} h^{TT}_{(n)}, \qquad h^{TT}_{(n)}=O(Q^{2n}),
\end{equation}
where $h^{TT}_{(0)}=h_{b}^{TT}$ and the recursive equation is
\begin{equation}
{h^{TT}_{(n)}}''+dA'{h^{TT}_{(n)}}'=-e^{-2A}Q^2 h^{TT}_{(n-1)}.
\end{equation}
The solution to this equation takes the form
\begin{equation}
h^{TT}_{(n)}=-\int dr e^{-dA} \int dr e^{(d-2)A}Q^2 h^{TT}_{(n-1)}.
\end{equation}
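As a cross-check, in a region where the geometry is pure AdS with $A=r/L$ the first iteration can be carried out explicitly (dropping integration constants, which can be absorbed into $h_{b}^{TT}$ and $h_{N}^{TT}$):
\begin{equation}
h^{TT}_{(1)}=-Q^2 h_{b}^{TT}\int dr\, e^{-dr/L}\int dr\, e^{(d-2)r/L}=\frac{(LQ)^2}{2(d-2)}\,e^{-2r/L}\,h_{b}^{TT},
\end{equation}
which reproduces the $O(Q^2 e^{-2r/\Lir})$ corrections displayed in \eqref{hb}.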
One should actually keep terms up to order $O(Q^d)$ (for $d$ even), since the normalizable term is of this order.
If $Q^2$ is small enough this solution can be a good approximation even close to the horizon where the geometry is approximately AdS. Assuming $(Q L_{IR})^2 e^{-2r/\Lir}\ll 1$,
\begin{eqnarray}\label{hb}
\nonumber h^{TT} &\simeq& h_b^{TT}\left[1+O(Q^2 e^{-2r/\Lir})\right] \\
&+& e^{-dr/\Lir}\left[-\frac{\Lir}{d}h_{N}^{TT} +\left(\frac{\Lir Q}{2(d-2)} \right)^dh_b^{TT}+O(Q^2e^{-2r/\Lir})\right].
\end{eqnarray}
This expression is valid for $d>2$ and even. The perturbative solution breaks down very close to the horizon at $r\to -\infty$ since the factor $e^{-2A}$ that multiplies $Q^2$ in the equation becomes very large. Therefore we cannot use this solution in order to impose boundary conditions in the IR. For that we will have to solve the equation of motion in the near horizon region, where it takes the form
\begin{equation}
0={h^{TT}_h}''+\frac{d}{\Lir}{h^{TT}_h}'-e^{-2r/\Lir}q^2 h^{TT}_h.
\end{equation}
The solutions to this equation are Bessel functions. There are several possible choices of boundary conditions at the horizon. We impose ingoing boundary conditions at the horizon, following the prescription of \cite{Son:2002sd}. This condition basically means that the solution behaves as an ingoing plane-wave in the vicinity of the horizon. Note that if the solution is continued analytically to Euclidean signature, this choice corresponds to fixing the exponential behavior of the solution, in such a way that the solution is regular.
The solution satisfying ingoing boundary conditions at the horizon takes the form
\begin{equation}\label{hh}
h^{TT}_h=C\left(\Lir Q e^{-r/\Lir}\right)^{d/2} H^{(1)}_{\frac{d}{2}}\left(\Lir Q e^{-r/\Lir}\right).
\end{equation}
Here $H_\frac{d}{2}^{(1)}(x)$ is the Hankel function of the first kind.
In the low momentum limit there is a region in the geometry where both the expansion in momentum of the solution $h^{TT}$ and the AdS approximation $h^{TT}_h$ are valid. This is the region where $r\to-\infty$ but $\Lir Q e^{-r/\Lir}\ll 1$. In this case we should be able to match the leading contributions of both solutions \eqref{hb} and \eqref{hh}. In the case at hand, for even $d$, the solution \eqref{hb} in the limit $r\rightarrow - \infty$ takes the form
\begin{equation}
h^{TT}\simeq h_{b}^{TT}-\frac{\Lir}{d}h_{N}^{TT} e^{-dr/\Lir}+\cdots
\end{equation}
while the solution \eqref{hh} in the limit $\Lir Q e^{-r/\Lir}\ll 1$ is
\begin{equation}
h^{TT}_h \simeq C\left(-\frac{i a_d}{\pi}+\cdots +\frac{ib_d}{\pi}(\Lir Q)^d\log(\Lir Q)^2e^{-dr/\Lir}+\cdots \right)
\end{equation}
where $a_d=2^{d/2}\Gamma\!\left(\frac{d}{2}\right)$ and $b_d=\left(2^{d/2}\Gamma\!\left(\frac{d}{2}+1\right)\right)^{-1}$; for $d=2,4,6$ this gives $a_d=2,4,16$ and $b_d=1/2, 1/8,1/48$. There are analytic terms proportional to $(\Lir Q)^d$, like the term proportional to $h_{b}^{TT}$ in \eqref{hb}, but for $d\geq 4$ those will only introduce contact terms in the correlation function and we can ignore them.
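The leading small-argument constant can be checked numerically (this is our own cross-check, not part of the derivation): as $x\to0$, $x^{d/2}H^{(1)}_{d/2}(x)\to -i\,a_d/\pi$ with $a_d=2^{d/2}\Gamma(d/2)$.

```python
# Numerical check (ours) of the leading small-argument constant of
# x^{d/2} H^(1)_{d/2}(x), which should approach -i a_d / pi
# with a_d = 2^{d/2} Gamma(d/2), i.e. a_d = 2, 4, 16 for d = 2, 4, 6.
from mpmath import mp, hankel1, gamma, pi, mpf

mp.dps = 30  # high precision to avoid cancellation at small argument

for d in (2, 4, 6):
    nu = mpf(d) / 2
    x = mpf('1e-6')
    leading = x**nu * hankel1(nu, x)  # x^nu H^(1)_nu(x) for x -> 0
    a_d = 2**nu * gamma(nu)           # predicted a_d = 2^{d/2} Gamma(d/2)
    assert abs(leading - (-1j * a_d / pi)) < mpf('1e-9') * a_d
```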
In both cases we have identified the leading terms of the normalizable and non-normalizable solutions. Note that the non-normalizable solution can have contributions that are less suppressed than the normalizable solution when $r\to-\infty$. The matching of the two solutions determines
\begin{equation}
C=\frac{i\pi}{a_d} h_{b}^{TT}, \ \ h_{N}^{TT}=\frac{d}{\Lir}\frac{b_d}{a_d}(\Lir Q)^d\log(\Lir Q)^2h_{b}^{TT}.
\end{equation}
The ratio of the normalizable to the non-normalizable solution is then
\begin{equation}\label{irGtensor}
\frac{h_{N}^{TT}}{ h_{b}^{TT}}=\frac{d}{\Lir}\frac{b_d}{a_d}(\Lir Q)^d\log(\Lir Q)^2.
\end{equation}
The dependence on the momentum is in agreement with the two-point function of the energy-momentum tensor in a CFT.
In terms of the boundary value of the metric $h_{b\,\mu\nu}$, we can write the fluctuation as
\begin{equation}
h^{TT}_{\mu\nu}=\frac{1}{(q^2)^2}\Pi_{\mu\nu}^{\ \ \ \alpha\beta}h_{b\,\alpha\beta}\left[1+G^{TT}(q)e^{-dr/\Luv}+\cdots\right],
\end{equation}
where, at low momentum
\begin{equation}\label{gtt}
G^{TT}(q)\simeq -\frac{\Luv}{\Lir}\frac{b_d}{a_d}(\Lir Q)^d\log(\Lir Q)^2.
\end{equation}
\subsection{Scalar fluctuation}
A scalar fluctuation involves both the scalar field and the scalar components of the metric, which are coupled through (\ref{scalareqs1},\ref{scalareqs2}). We will solve the equations using the same approximations as for the tensor mode, but this case is more complicated because we are dealing with coupled equations. In order to solve the system we will first write a single equation for the scalar fluctuation $\varphi$, solve it, and then check the remaining equations, including the constraints. We will find that there are only two independent solutions. The relation between the coefficients of the two independent solutions, which determines the correlator of the dual scalar operator, will be fixed by a boundary condition at the horizon.
We can decouple the scalar field by multiplying \eqref{scalareqs2} by $e^{2A}/\phi_0'$, taking the derivative once with respect to the radial coordinate, substituting $H'$ by the expression obtained from \eqref{scalareqs1} and multiplying by $\phi_0' e^{-2A}$. The result is a third order equation for $\varphi$, for which there are three solutions. In order to get a simplified expression one can use the first order equations of the background scalar and metric:
\begin{align}
\notag &A'=\frac{W}{d-1}, \ \ \phi_0'=-\partial W, \ \ A''=-\frac{(\partial W)^2}{d-1},\\ & \phi_0''=\partial^2 W\partial W, \ \ \phi_0'''=-\partial^2 W(\partial W)^2-(\partial^2 W)^2\partial W .
\end{align}
We will also change variables from the radial coordinate $r$ to the background scalar field $\phi_0$, using that $\partial_r=\phi_0'\partial=-\partial W\partial$, where $\partial$ is now a derivative with respect to the background field $\phi_0$. In addition, we substitute the potential and its derivative by expressions in terms of the superpotential \eqref{potential}. The resulting equation has the form
\begin{equation}
\partial^3 \varphi+\alpha_2\partial^2\varphi+\alpha_1\partial\varphi+\alpha_0\varphi=0,
\end{equation}
where
\begin{align}
\alpha_2 &=-\frac{(d+2) W}{(d-1) \partial W}+\frac{2 \partial^2 W}{\partial W},\\
\alpha_1 &= -\frac{q^2 e^{-2 A}}{(\partial W)^2}+\frac{d^2 W \partial^2 W-3 d W \partial^2 W+2 d W^2+2 W \partial^2 W}{(d-1)^2
(\partial W)^2}-\frac{(\partial^2 W)^2}{(\partial W)^2}-2,\\
\notag \alpha_0 &=\frac{q^2 e^{-2 A} \partial^2 W}{(\partial W)^3}+\frac{-\frac{\partial^3 W \left(d^2 (-W)-d W+2 W\right)}{(\partial W)^2}-\frac{2 d W^2 \partial^2 W}{(\partial W)^3}}{(d-1)^2}\\
&-\frac{d W (\partial^2 W)^2-2 W (\partial^2 W)^2}{(d-1)
(\partial W)^3}-\frac{\partial^4 W}{\partial W}+\frac{(\partial^2 W)^3}{(\partial W)^3}+\frac{2 \partial^2 W}{\partial W}-\frac{2 \partial^3 W \partial^2 W}{(\partial W)^2}.
\end{align}
The form of the equation is quite involved, but we can simplify things by factorizing out one solution. This is possible thanks to the residual diffeomorphism \eqref{gaugetr}, under which the scalar field transforms as
\begin{equation}
\delta \phi= \phi_0' \xi_r =-\sigma(x)\partial W.
\end{equation}
The metric is also affected by this transformation
\begin{equation}\label{metg}
\delta g_{\mu\nu}= \partial_\mu \xi_\nu+ \partial_\nu \xi_\mu+\xi_r\partial_r g_{\mu\nu}=2 e^{2 A}\left( \partial_\mu\partial_\nu\sigma(x)\int^r dr' e^{-2 A}-\frac{W}{d-1}\eta_{\mu\nu}\sigma(x)\right).
\end{equation}
The \emph{gauge mode} $\varphi_g=\partial W$ is a solution to the equation of motion. This fact enables us to factorize it as follows
\begin{equation}\label{eqscalar}
(\partial^2+a_1\partial +a_0)\left(\partial-\frac{\partial^2 W}{\partial W }\right)\varphi=0,
\end{equation}
where
\begin{align}
\notag &a_1=3\frac{\partial^2 W}{\partial W}-\frac{(d+2) W}{ (d-1) \partial W},\\
&a_0=-\frac{q^2 e^{-2 A}}{(\partial W)^2}-\frac{4}{d-1}\frac{ W\partial^2 W}{ (\partial W)^2}+\frac{2 d }{ (d-1)^2}\frac{W^2}{(\partial W)^2}+\frac{2 \partial^3 W}{\partial W}-2.
\end{align}
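Indeed, the gauge mode is annihilated by the first-order factor,
\begin{equation}
\left(\partial-\frac{\partial^2 W}{\partial W }\right)\partial W=\partial^2 W-\partial^2 W=0,
\end{equation}
so $\varphi_g=\partial W$ solves \eqref{eqscalar} for any value of the momentum, as anticipated.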
The full solution is a superposition of the gauge solutions plus two more solutions
\begin{equation}
\varphi=C_g \partial W+C_1\varphi_1+C_2\varphi_2.
\end{equation}
Here $C_g$, $C_1$ and $C_2$ are arbitrary coefficients. Note that the gauge mode is a valid solution for any value of the momentum and of the radial coordinate, and that it vanishes at the boundary in the way expected for a normalizable solution, $\varphi \sim \phi_0$. The transformation of the metric \eqref{metg}, on the other hand, does not vanish at the boundary. Imposing a Dirichlet boundary condition on the metric will therefore fix this mode; we will do this explicitly below, but first we fix the boundary conditions at the horizon.
We proceed to find the two other solutions $\varphi_1$ and $\varphi_2$. In order to simplify things further, we define
\begin{equation}\label{gaugeinvG}
\left(\partial-\frac{\partial^2 W}{\partial W }\right)\varphi=\frac{W}{(\partial W)^2}e^{-dA} G.
\end{equation}
Then, $G$ satisfies the following second order equation
\begin{equation}\label{eqG}
\partial^2 G+\partial B \partial G-q^2\frac{e^{-2A}}{(\partial W)^2}G=0,
\end{equation}
where
\begin{equation}
B=2 \log(W)-\log(\partial W)-(d-2)A.
\end{equation}
In general, we cannot solve \eqref{eqG} exactly, but we can use the matching procedure in the same way as for the tensor components of the metric. The function $G$ is explicitly gauge invariant; it coincides, up to a momentum-independent factor, with the variable $R$ defined in section 4 of \cite{Bianchi:2001de} when the $g_{rr}$ component of the metric is fixed.
First we solve \eqref{eqG} perturbatively for small $q^2$. The solution takes the form
\begin{eqnarray}\label{q2sol}
\nonumber G &=& C_2\left[ 1+q^2\int d\phi _0 e^{-B} \int d\phi _0 \frac{e^{B-2A}}{(\partial W)^2} +\mathcal{O}(q^4) \right] \\
&+& C_1 \left[ \int d\phi _0 e^{-B} + q^2 \int d\phi _0 e^{-B} \int d\phi _0 \frac{e^{B-2A}}{(\partial W)^2} \int d\phi _0 e^{-B} +\mathcal{O}(q^4) \right]
\end{eqnarray}
Then we solve \eqref{eqG} in the near horizon region, where it takes the form
\begin{equation}
\partial ^2 G + \frac{\frac{d-2}{\lambda}-1}{\phi_0 - \phi _m} \partial G +\left(\frac{L_{IR}Q}{\lambda}\right)^2 (\phi_0 - \phi _m)^{\frac{2}{\lambda} -2} G = 0
\end{equation}
The solutions to this equation are Bessel functions. We pick the solution satisfying ingoing boundary conditions at the horizon (regularity in the Euclidean)
\begin{eqnarray}\label{horizonSol}
\nonumber G &=& C_h \left( L_{IR}Q (\phi_0 - \phi _m)^{\frac{1}{\lambda}} \right) ^{-\nu} H^{(1)}_{\nu} \left( L_{IR}Q (\phi_0 - \phi _m)^{\frac{1}{\lambda}} \right) \\
\nu &\equiv& \frac{d-2-2\lambda}{2}
\end{eqnarray}
where $C_h$ is an arbitrary coefficient and we have defined
\begin{equation}
\lambda = d- \Delta _{IR}.
\end{equation}
Note that the relation between $\varphi$ and $G$ \eqref{gaugeinvG} is such that close to the horizon $\varphi$ has a different power-like dependence on $(\phi_0-\phi_m)$, but the exponential behavior is the same. Then, the ingoing condition on $G$ is equivalent to an ingoing condition for $\varphi$. Note that the gauge mode does not contribute to the exponential behavior since $\partial W \sim (\phi_0-\phi_m)$, so it is not affected by the ingoing boundary condition. One can systematically find corrections to this solution by expanding the coefficients of the differential equation in powers of $(\phi_0-\phi_m)$ and solving order by order.
Expanding the perturbative solution \eqref{q2sol} around the horizon $\phi_0 = \phi _m$ we have
\begin{eqnarray}\label{Gboundary}
G & \simeq & C_2 + C_1 \frac{L_{IR}\lambda ^2}{(d-1)^2(2\lambda-d+2)}\left(\phi_0 - \phi _m\right)^{\frac{2\lambda -d+2}{\lambda}}
\end{eqnarray}
Expanding the near horizon solution \eqref{horizonSol} in small momenta
\begin{equation}
G\simeq C_h \left[ a + b \left( L_{IR}Q (\phi_0 - \phi _m)^{\frac{1}{\lambda}} \right) ^{2\lambda -d+2} \right]
\end{equation}
where
\begin{equation}
a=\frac{2^{-\nu}(1+i \cot(\pi \nu))}{\Gamma (\nu+1)} , \qquad b=-\frac{i 2^{\nu} \Gamma (\nu)}{\pi}
\end{equation}
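The coefficients $a$ and $b$ can be verified numerically as well (our own cross-check, not part of the derivation; the value $\nu=0.3$ below is a generic non-integer choice made only for the test):

```python
# Numerical check (ours) of the small-argument expansion used in the matching:
# for non-integer nu > 0 and y -> 0,
#   y^{-nu} H^(1)_nu(y) ~ a + b y^{-2 nu},
# with a = 2^{-nu} (1 + i cot(pi nu)) / Gamma(nu+1) and b = -i 2^nu Gamma(nu) / pi.
from mpmath import mp, hankel1, gamma, cot, pi, mpf

mp.dps = 25
nu = mpf('0.3')   # generic non-integer order, chosen for the test
y = mpf('1e-3')   # small argument

a = 2**(-nu) * (1 + 1j * cot(pi * nu)) / gamma(nu + 1)
b = -1j * 2**nu * gamma(nu) / pi

val = y**(-nu) * hankel1(nu, y)
approx = a + b * y**(-2 * nu)
assert abs(val - approx) < 1e-5 * abs(val)
```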
As we show in Appendix \ref{app:matching}, higher order corrections to the near-horizon solution do not affect the leading terms in the low momentum expansion. Matching the two solutions fixes
\begin{equation}
\frac{C_1}{C_2} = \frac{b}{a} \left(L_{IR}Q\right)^{2\lambda-d+2} = \frac{b}{a} \left(L_{IR}Q\right)^{d-2\Delta_{IR}+2}
\end{equation}
Near the boundary the solution for $G$ takes the same form as \eqref{Gboundary} with the replacements $L_{IR}\rightarrow L_{UV}$, $\lambda = \Delta _{UV}$ and $\phi_0 - \phi _m \rightarrow \phi_0$ (see \eqref{superpot}-\eqref{superpot2}). Solving \eqref{gaugeinvG} for $\varphi$ and including the gauge solution $\varphi=C_g \partial W$, we can then write its asymptotic form
\begin{align}
\notag \varphi &= \phi _0 \left[ C_g \frac{\Delta_{UV}}{L_{UV}} + \frac{C_1}{ (\Luv Q)^2} \frac{\Delta_{UV}}{2(d-1)(2\Delta_{UV} -d +2)} \left(L_{UV}Q \phi _0 ^ {\frac{1}{\Delta_{UV}}}\right)^2 \right]\\
&+\phi_0 ^{\frac{d}{\Delta_{UV}}-1} \left[C_2 L_{UV} \frac{d-1}{\Delta_{UV}(d-2\Delta_{UV})} \right]. \label{scalarSol}
\end{align}
In order to simplify the expressions we will define the leading order coefficients of the expansion close to the boundary $\phi_0\to 0$ as:
\begin{equation}
\varphi \simeq \varphi_N \phi_0+C_N \phi_0^{1+\frac{2}{\Delta_{UV}}}+\varphi_b\phi_0^{\frac{d}{\Delta_{UV}}-1},
\end{equation}
where
\begin{align}
\varphi_N= \frac{\Delta_{UV}}{L_{UV}}C_g , \ \ C_N =\frac{\Delta_{UV}C_1}{2(d-1)(2\Delta_{UV} -d +2)}, \ \ \varphi_b= \frac{(d-1)L_{UV} C_2}{\Delta_{UV}(d-2\Delta_{UV})}.
\end{align}
The metric fluctuations $h$ and $h_T$ can be computed from \eqref{scalareqs1} and \eqref{consT} respectively. The expansion of the transverse component close to the boundary is, to leading order
\begin{align}\label{htexp}
\notag h_T &= h_{T\, b}-2C_g(W-W(0))-2\int_0^{\phi_0} d\phi_0\,(\varphi-C_g \partial W)\\ &\simeq h_{T\, b}-\varphi_N\phi_0^2-\frac{C_N}{\Delta_{UV}+1}\phi_0^{2+\frac{2}{\Delta_{UV}}}-\frac{2\Delta_{UV}}{d}\varphi_b\phi_0^{\frac{d}{\Delta_{UV}}}.
\end{align}
Here $h_{T\, b}$ is the boundary value of the metric, which we fix by imposing Dirichlet boundary conditions. This determines the lower limit of integration, which should be such that no additional contributions change the value of $h_{T\, b}$.
The expansion of the full trace is
\begin{equation}\label{hexp}
h \simeq h_{ b}-\frac{d}{d-1}\varphi_N\phi_0^2-C_N\frac{d(\Delta_{UV}+1)-2}{(d-1)(\Delta_{UV}+1)}\phi_0^{2+\frac{2}{\Delta_{UV}}}+\frac{4\Delta_{UV}(\Delta_{UV}-d)}{d(d-1)}\varphi_b\phi_0^{\frac{d}{\Delta_{UV}}}
\end{equation}
Plugging \eqref{htexp} and \eqref{hexp} into \eqref{eqA}, we find the following condition
\begin{equation}\label{cond}
\varphi_N =(d-2\Delta_{UV}-2)\frac{C_N}{(Q\Luv)^2}.
\end{equation}
This relation implies that close to the boundary the solution for the scalar field has the same radial dependence as the solutions of the decoupled system, where the background scalar is set to zero and the geometry is pure $AdS$.
In terms of $C_g$:
\begin{equation}\label{Cg}
C_g = -\frac{L_{UV}}{(d-1)} \frac{C_1}{(\Luv Q)^2} = -\frac{L_{IR}^2}{(d-1)\Luv} \frac{b}{a} \left( L_{IR}Q \right)^{d-2\Delta_{IR}} C_2
\end{equation}
Summarizing, if we define the coefficients of the boundary expansion of the scalar field and the scalar component of the metric as
\begin{eqnarray}
\varphi &=& \varphi_b \phi_0^{\frac{d}{\Delta_{UV}}-1}+\varphi_N\phi_0+\cdots \\
h &=& h_b + h_N \phi_0^{\frac{d}{\Delta_{UV}}}+\cdots
\end{eqnarray}
we find that
\begin{equation}
\varphi_N=C_g \frac{\Delta_{UV}}{L_{UV}}= G_s(q)\varphi_b,
\end{equation}
where
\begin{equation}\label{gs}
G_s(q)=\frac{b}{a}\frac{1}{2\Delta_{UV}-d}\frac{\Lir^2}{\Luv^2} \left( L_{IR}Q \right)^{d-2\Delta_{IR}}.
\end{equation}
For the metric we find, using \eqref{consT} and \eqref{eqtracea}
\begin{eqnarray}
h_{N T} &=& G^T (q) \varphi_b, \\
h_{N L} &=& G^L (q) \varphi_b,
\end{eqnarray}
where
\begin{eqnarray}
G^T (q) &=& -2 \frac{\Delta_{UV}}{d}, \\
G^L (q) &=& \frac{2\Delta_{UV}(2\Delta_{UV}-d-1)}{d(d-1)} .
\end{eqnarray}
We have checked that the equations for the gauge-invariant variable $R$ in \cite{Bianchi:2001de} and the equation for $G$ are equivalent and lead to the same results in the Coulomb branch geometry. The conventions are such that to map to our results one should change the normalization of the superpotential and the scalar field as $W\to -W/2$, $\phi\to\phi/\sqrt{2}$. In this case, the equation that determines the normalizable part of $\phi$, (4.10) in \cite{Bianchi:2001de}, is proportional to $\partial G$ and then, using the solutions for $G$ that we derive, one can see that the overall factor $1/q^2$ is canceled, leaving a $q^{2\Delta_{IR}-d}$ dependence. In the Coulomb branch $\partial G$ goes as $O(q^0)$ to leading order and the $1/q^2$ factor eventually introduces a pole in the correlation function.
\subsection{One and two-point functions}
In the previous sections we have found the solutions of the linearized equations of motion for the metric and the scalar fields in a low-momentum approximation. We will use them to compute correlation functions by applying the usual holographic dictionary. The classical on-shell action for the fluctuations equals the generating functional of the dual field theory. The coefficients of the non-normalizable modes $h_{b\,\mu\nu}$ and $\varphi_b$ are proportional to sources in the dual field theory for the energy-momentum tensor and the scalar operator $\cO$. One can then compute correlation functions by taking the variation of the on-shell action with respect to these coefficients. In order to have a well-defined variation one first has to ensure that the action is finite; this requires adding counterterms depending on the fields evaluated at a radial cutoff that acts as a regulator.
We will now compute the on-shell action for the fluctuations up to quadratic order (this is all that is needed for one- and two-point functions), adding the necessary counterterms. Because of the low-momentum expansion we only need to worry about counterterms with no derivatives of the fields, although a full treatment would also require counterterms with derivatives.
In order to compute the correlation functions of the scalar operator and the energy-momentum tensor, we need to evaluate the on-shell action for the linearized fluctuations. We can use the results of \cite{Mueck:2001cy,Bianchi:2001de,Papadimitriou:2004rz} adapted to our case. The on-shell action to quadratic order in the fluctuations is
\begin{equation}
S[g_{\mu\nu}+e^{2A}h_{\mu\nu},\phi_0+\varphi]=S[g_{\mu\nu},\phi_0]+\delta S\left[g_{\mu\nu}+\frac{1}{2}e^{2A}h_{\mu\nu},\phi_0+\frac{1}{2}\varphi ;e^{2A}h_{\mu\nu},\varphi \right],
\end{equation}
where the variation of the on-shell action coincides with \eqref{varonshell} but now the only contribution is evaluated at the boundary and we are using the Gaussian coordinate $r$, so the extrinsic curvature is $K_{\mu\nu}=\frac{1}{2}\partial_r g_{\mu\nu}$ and the variation is
\begin{equation}
\delta S[g_{\mu\nu},\phi_0;e^{2A}h_{\mu\nu},\varphi ]=\lim_{r\to \infty} -\frac{1}{\kappa^2}\int d^d x\sqrt{-g}\left[\frac{1}{2}(K^{\mu\nu}-K g^{\mu\nu})e^{2A}h_{\mu\nu}+\phi_0' \varphi \right].
\end{equation}
We will implicitly assume the $r\to \infty$ limit. In order for the action to be well defined we have to add counterterms that cancel the divergent pieces. The counterterms take the form \cite{Papadimitriou:2004rz}\footnote{This choice of counterterms corresponds to a particular renormalization scheme, as explained in \cite{Papadimitriou:2004rz}. In this scheme the vacuum energy density vanishes for the particular flows we are considering. In general, any solution of the differential equation relating the potential to the superpotential in \eqref{potential} can be used as a counterterm, for superpotentials that describe explicit breaking. We thank Ioannis Papadimitriou for explaining this point to us and Kostas Skenderis for further clarifications.}
\begin{equation}
S_{ct}=-\frac{1}{\kappa^2}\int d^d x\sqrt{-g}\left[\frac{d-1}{\Luv}+\frac{d-\Delta_{UV}}{2\Luv}\phi^2\right]+\cdots.
\end{equation}
The first term cancels the volume divergence of the on-shell action, while the second term is necessary in order to cancel divergences from the scalar action. The reason is also explained in \cite{Papadimitriou:2004rz}: the counterterm should take the form of a superpotential $W$ for a flow that breaks the symmetry explicitly; expanding this superpotential to quadratic order in the scalar field gives a counterterm of the form above. Note that the superpotential that determines the flow, $W$, has a different quadratic term because it describes spontaneous breaking.
For our analysis it is enough to consider counterterms involving no derivatives. In order to compute corrections that are of higher order in the low momentum expansion, we will need to add more counterterms depending on the curvature of the background metric and derivatives of the scalar field.
To linear order in the fluctuations, the finite contributions in the $r\to \infty$ or, equivalently, the $\phi_0\to 0$ limit are
\begin{equation}
S_{(1)}=-\frac{d-2\Delta_{UV}}{\kappa^2\Luv}\int d^d x\, e^{dA}\phi_0 \varphi =-\frac{d-2\Delta_{UV}}{\kappa^2\Luv}\int d^d x\, \varphi_b.
\end{equation}
This shows that the one-point function of the scalar field is a nonzero constant. Here $\varphi_b$ is dimensionless, while the product of the source for the scalar field $J_b$ times the expectation value $\vev{\cO}$ has dimension $d$. Then, $\varphi_b$ is related to the actual source by a factor $\varphi_b=M^{\Delta_{UV}-d}J_b$, where $M$ is some mass scale. The expectation value of the scalar operator is
\begin{equation}
\vev{\cO}=\frac{\delta S^{(1)}}{\delta J_b}=-\frac{d-2\Delta_{UV}}{\kappa^2\Luv}M^{\Delta_{UV}-d}.
\end{equation}
We can use this expression to fix the relation between the source $J_b$ and the boundary value of the scalar field $\varphi_b$ as
\begin{equation}\label{sourceJ}
\varphi_b=\frac{\kappa^2\Luv}{2\Delta_{UV}-d}\vev{\cO}J_b.
\end{equation}
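The relation \eqref{sourceJ} amounts to eliminating the scale $M$ between the two previous expressions. A quick symbolic check (a sketch, treating $\kappa^2\Luv$ and $M^{\Delta_{UV}-d}$ as abstract symbols):

```python
import sympy as sp

# Abstract symbols: kL = kappa^2 * L_UV, Md = M^(Delta_UV - d)
d, Delta, kL, Md, Jb = sp.symbols('d Delta kL Md Jb', positive=True)

# One-point function <O> = -(d - 2*Delta_UV)/(kappa^2 L_UV) * M^(Delta_UV - d)
vevO = -(d - 2*Delta)/kL * Md

# Dimensionless boundary value: phi_b = M^(Delta_UV - d) * J_b
phi_b = Md*Jb

# Eliminating Md gives phi_b = kappa^2 L_UV/(2*Delta_UV - d) * <O> * J_b
claimed = kL/(2*Delta - d)*vevO*Jb
assert sp.simplify(phi_b - claimed) == 0
```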
The on-shell action to quadratic order is, in our gauge
\begin{align}
\notag S_{(2)} &=-\frac{1}{\kappa^2}\int d^d x\sqrt{-g}\left[\frac{1}{8}{ h_{\mu\nu} h^{\mu\nu}}'-\frac{1}{8}h^\nu_\nu{ h^\mu_\mu}' +\frac{1}{2}\varphi\varphi'+\frac{d-\Delta_{UV}}{2\Luv}\varphi^2\right.\\
&\left.+\left(\frac{1}{4}\phi_0'+\frac{d-\Delta_{UV}}{2\Luv}\phi_0\right) h^\mu_\mu \varphi\right],\label{regaction}
\end{align}
where indices are raised with the flat metric $\eta_{\mu\nu}$. Using the background scalar field $\phi_0$ as radial coordinate, the on-shell action becomes
\begin{align}
\notag S_{(2)} &=-\frac{1}{\kappa^2}\int d^d x\, e^{dA}\partial W\left[-\frac{1}{8}h_{\mu\nu}{ \partial h^{\mu\nu}}+\frac{1}{8}h^\nu_\nu{ \partial h^\mu_\mu} -\frac{1}{2} \varphi \partial \varphi + \frac{d-\Delta_{UV}}{2\Luv\partial W}\varphi^2\right.\\
&\left.+\left(\frac{1}{4}+\frac{d-2\Delta_{UV}}{2\Luv}\frac{\phi_0}{\partial W}\right) h^\mu_\mu \varphi\right].
\label{regaction2}
\end{align}
Expanding close to the boundary as
\begin{equation}
\varphi=\varphi_b \phi_0^{\frac{d}{\Delta_{UV}}-1}+\varphi_N \phi_0+\cdots, \ \ h_{\mu\nu}=h_{b\ \mu\nu}+h_{N\ \mu\nu}\phi_0^{\frac{d}{\Delta_{UV}}}+\cdots,
\end{equation}
one can check that the divergent contributions to the action vanish:
\begin{align}
S_{(2)} &=-\frac{1}{\Luv\kappa^2}\int d^d x\, \phi_0^{1-\frac{d}{\Delta_{UV}}}\left[-\frac{d-\Delta_{UV}}{2} \phi_0^{2\frac{d}{\Delta_{UV}}-1}\varphi_b^2+\frac{d-\Delta_{UV}}{2} \phi_0^{2\frac{d}{\Delta_{UV}}-1}\varphi_b^2\right]=0.
\label{regaction3}
\end{align}
The first contribution comes from the term $\propto \varphi \partial\varphi $, while the second contribution comes from the term $\propto \varphi^2$ in \eqref{regaction2}.
The finite contributions to the action are
\begin{align}
S_{(2)} &=-\frac{1}{\kappa^2\Luv}\int d^d x\, \left[-\frac{d}{8}h_N^{\mu\nu}h_{b\,\mu\nu}+\frac{d}{8} h^{\ \mu}_{N\ \mu} h^{\ \nu}_{b\ \nu}+\frac{d-2\Delta_{UV}}{2}\varphi_N\varphi_b+\frac{2d-3\Delta_{UV}}{4} h^{\ \mu}_{b\ \mu} \varphi_b\right].
\label{regaction4}
\end{align}
Using the decomposition \eqref{hdec} and \eqref{qtens} and using that $h^{TL}_{\mu\nu}=0$,\footnote{As we commented before, this is always possible using residual diffeomorphisms.} we can write the action as
\begin{align}
\notag S_{(2)} &=-\frac{1}{\kappa^2\Luv}\int \frac{d^d q}{(2\pi)^d} \left[-\frac{d}{8}\left(h^{TT}_{N,\mu\nu}\frac{1}{(q^2)^2}\Pi^{\mu\nu,\alpha\beta}h_{b\ \alpha\beta}\right.\right.
\\ \notag &\left. +h^T_{N\,\mu\nu}\frac{1}{(q^2)^2(d-1)} P_T^{\mu\nu}P_T^{\alpha\beta}h_{b\ \alpha\beta} +h^L_{N\,\mu\nu}\frac{1}{(q^2)^2} P_L^{\mu\alpha}P_L^{\nu\beta}h_{b\ \alpha\beta}\right)\\
&\left. +\frac{d}{8} h^{\ \mu}_{N \ \ \mu} h^{\ \nu}_{b \ \ \nu}+\frac{d-2\Delta_{UV}}{2}\varphi_N\varphi_b+\frac{2d-3\Delta_{UV}}{4} h^{\ \mu}_{b \ \ \mu} \varphi_b\right].\label{regaction5}
\end{align}
The solution we have found depends on the boundary sources as
\begin{align}
& h^{TT}_{N,\mu\nu}=\frac{1}{(q^2)^2}\Pi^{\mu\nu,\alpha\beta}G^{TT}(q)h_{b\ \alpha\beta},\\
& h_{N\,T}=\frac{1}{q^2} P_T^{\mu\nu}h_{N\,\mu\nu}=G^T(q) \varphi_b,\\
& h_{N\,L}=\frac{1}{q^2} P_L^{\mu\nu}h_{N\,\mu\nu}=G^L(q) \varphi_b,\\
& \varphi_N=G_s(q)\varphi_b.
\end{align}
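In manipulating these expressions we repeatedly use completeness, transversality and idempotency of the projectors. These can be checked numerically; the sketch below assumes the conventional transverse and longitudinal projectors $P_T^{\mu\nu}=q^2\eta^{\mu\nu}-q^\mu q^\nu$ and $P_L^{\mu\nu}=q^\mu q^\nu$ (their precise definitions are fixed earlier in the text; the normalization here is the one consistent with the identities we use):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus Minkowski metric, d = 4
q = rng.normal(size=4)                 # a generic (off-shell) momentum q^mu
q2 = q @ eta @ q                       # q^2 = eta_{mu nu} q^mu q^nu

# Assumed projector definitions:
PL = np.outer(q, q)                    # P_L^{mu nu} = q^mu q^nu
PT = q2*eta - PL                       # P_T^{mu nu} = q^2 eta^{mu nu} - q^mu q^nu

# Completeness: (P_T + P_L)/q^2 = eta^{mu nu}
assert np.allclose((PT + PL)/q2, eta)
# Transversality: q_mu P_T^{mu nu} = 0
assert np.allclose(eta @ q @ PT, 0)
# Idempotency: P_T^{mu a} eta_{a b} P_T^{b nu} = q^2 P_T^{mu nu}
assert np.allclose(PT @ eta @ PT, q2*PT)
```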
Therefore, using that $\frac{1}{q^2}P_T^{\alpha\beta}+\frac{1}{q^2}P_L^{\alpha\beta}=\eta^{\alpha\beta}$,
\begin{align}
\notag S_{(2)} &=-\frac{1}{\kappa^2\Luv}\int \frac{d^d q}{(2\pi)^d} \left[-\frac{d}{8}h_{b,\mu\nu}\frac{1}{(q^2)^2}\Pi^{\mu\nu,\alpha\beta}G^{TT}(q)h_{b\ \alpha\beta}\right.
&\left.+\frac{1}{8}\varphi_b\left[ \frac{1}{q^2} P_T^{\alpha\beta}G_s^T(q)+\frac{1}{q^2} P_L^{\alpha\beta}G_s^L(q)\right]h_{b\ \alpha\beta}+\frac{d-2\Delta_{UV}}{2}\varphi_b G_s(q)\varphi_b\right],
\label{regaction6}
\end{align}
where
\begin{align}
&G_s^T(q)=\frac{d(d-2)}{d-1}G^T(q)+d G^L(q)+2(2d-3\Delta_{UV}),\\
&G_s^L(q)=d G^T(q)+2(2d-3\Delta_{UV}).
\end{align}
Therefore we find
\begin{eqnarray}
G_s^T(q) &=& \frac{2\Delta_{UV}(2\Delta_{UV}-d)}{d-1}+G_s^L(q), \\
G_s^L(q) &=& 4(d-2\Delta_{UV}), \\
G_s^T(q) - G_s^L(q) &=& \frac{2\Delta_{UV}(2\Delta_{UV}-d)}{d-1}.
\end{eqnarray}
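These relations can be verified symbolically from the expressions for $G^T(q)$ and $G^L(q)$ found above (a sketch using sympy):

```python
import sympy as sp

d, D = sp.symbols('d Delta_UV', positive=True)

# Metric response functions found earlier
GT = -2*D/d
GL = 2*D*(2*D - d - 1)/(d*(d - 1))

# Combinations entering the quadratic on-shell action
GsT = d*(d - 2)/(d - 1)*GT + d*GL + 2*(2*d - 3*D)
GsL = d*GT + 2*(2*d - 3*D)

# G_s^L = 4(d - 2*Delta_UV)
assert sp.simplify(GsL - 4*(d - 2*D)) == 0
# G_s^T - G_s^L = 2*Delta_UV*(2*Delta_UV - d)/(d - 1)
assert sp.simplify(GsT - GsL - 2*D*(2*D - d)/(d - 1)) == 0
```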
The two-point function of the energy-momentum tensor is obtained by taking two derivatives of the action with respect to the boundary metric; at low momentum it is determined by \eqref{gtt},
\begin{equation}\label{TTcorr}
\vev{T^{\mu\nu} T^{\alpha\beta}}\propto \frac{\Lir^d}{\kappa^2\Luv}\frac{1}{(Q^2)^2}\Pi^{\mu\nu,\alpha\beta}Q^d\log(\Lir Q)^2,
\end{equation}
up to a numerical factor depending on the number of dimensions. Note also that the overall coefficient is dimensionless. This is the expected behavior when the dual field theory flows to an IR CFT. As expected for spontaneous breaking of conformal symmetry, there are no contributions to the trace of the energy-momentum tensor in the two-point function.
The scalar correlation function is obtained from \eqref{regaction6} taking the derivative twice with respect to the source $J_b$, defined in \eqref{sourceJ},
\begin{equation}\label{OOcorr}
\vev{\cO\cO}=\frac{\kappa^2\Luv}{2\Delta_{UV}-d}\vev{\cO}^2 G_s(q)\propto \frac{\kappa^2\Lir^{d+2-2\Delta_{IR}}}{\Luv}\vev{\cO}^2Q^{d-2\Delta_{IR}},
\end{equation}
up to a numerical factor depending on the number of dimensions and on the dimension of the scalar operator $\Delta_{UV}$. Note that the dimension of the two-point function agrees with its UV value $2\Delta_{UV}-d$. Since $\Delta_{IR}>d$, there is a singularity at small momentum, in agreement with the expectations from field theory. However, if the dilaton were a free massless mode, we would expect this singularity to be a pole; we comment more on this in the Discussion. Note that for a generic value of $\Delta_{IR}$ a pole cannot appear at higher order in the momentum expansion, because the overall singular factor is multiplied by integer powers of $Q^2$. On the other hand, if $\Delta_{IR}$ is an integer such a pole could appear, but checking whether this is the case would require a separate analysis, since the expansion we have used throughout the paper assumes that $\Delta_{IR}$ is not an integer.
The mixed correlator of the energy momentum-tensor and the scalar operator is obtained by taking a derivative with respect to each source:
\begin{equation}
\left< T^{\mu\nu} \mathcal{O} \right> = \frac{\left<\mathcal{O}\right>}{4(2\Delta_{UV}-d)} \left[ \frac{1}{q^2} P_T^{\mu\nu}G_s^T(q) + \frac{1}{q^2} P_L^{\mu\nu}G_s^L(q) \right].
\end{equation}
Writing the projection operators explicitly we are left with
\begin{align}
\notag \left< T^{\mu\nu} \mathcal{O} \right> &= \frac{\left<\mathcal{O}\right>}{4(2\Delta_{UV}-d)} \left[ \frac{1}{q^2} P_T^{\mu\nu}(G_s^T -G_s^L )+\eta ^{\mu\nu}G_s^L(q)\right]\\
\label{TOcorr} &=\frac{\Delta_{UV}\left<\mathcal{O}\right>}{2(d-1)} \frac{1}{q^2} P_T^{\mu\nu}-\vev{\cO}\eta ^{\mu\nu}.
\end{align}
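The coefficients in the last line follow from the values of $G_s^T$ and $G_s^L$ derived above; as a quick symbolic check:

```python
import sympy as sp

d, D, vevO = sp.symbols('d Delta_UV vevO', positive=True)

# Values found in the low-momentum solution
GsL = 4*(d - 2*D)
GsT = 2*D*(2*D - d)/(d - 1) + GsL

prefactor = vevO/(4*(2*D - d))

# Coefficient of (1/q^2) P_T^{mu nu}: Delta_UV <O> / (2(d-1))
cT = prefactor*(GsT - GsL)
assert sp.simplify(cT - D*vevO/(2*(d - 1))) == 0

# Coefficient of eta^{mu nu} (contact term): -<O>
cL = prefactor*GsL
assert sp.simplify(cL + vevO) == 0
```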
The first term is the one expected when conformal symmetry is spontaneously broken. The second term is a contact term that appears in the retarded correlator
\begin{equation}
\vev{T^{\mu\nu}(x)\cO(y)}_R\sim \vev{\cO}\eta^{\mu\nu}\delta^{(d)}(x-y),
\end{equation}
A similar term appears in the correlation function computed in the Coulomb branch flow \cite{Mueck:2001cy,Bianchi:2001de,Papadimitriou:2004rz}. In \cite{Bianchi:2001de} it was argued that this term appears because the generating functional has a term of the form $\sim \vev{\cO}_{J_b} J_b$, which gives an additional contribution. Subtracting it one would recover the true energy-momentum tensor of the field theory in the absence of sources.
We have argued that the spurion does not correspond to the Goldstone boson of spontaneous breaking of conformal invariance according to the usual holographic dictionary between normalizable solutions and dynamical modes. However, correlators of the energy-momentum tensor and scalar operator have a low energy-momentum behavior which is in agreement with the expectations from field theory. So what is the dilaton? It should correspond to a normalizable solution of the equations of motion at zero momentum. The normalizable solution in \eqref{q2sol} is the one with $C_2=0$, but the condition \eqref{cond} at $q^2=0$ implies that $C_1=0$ as well. Therefore, the only possible solution has the same form as the linearized form of the spurion, but with appropriate boundary conditions for the metric:
\begin{equation}
\varphi=\tau(x)\partial W, \ \ h_{\mu\nu}=-2\tau(x)(W-W(0))\eta_{\mu\nu}.
\end{equation}
Note the similarity with the linearized form of the spurion, the difference is in the boundary value of the metric, that for the dilaton mode is fixed while for the spurion it changes by a Weyl factor.
\subsection{Comparison with field theory}\label{sec:compare}
Let us first show that the results we have obtained are consistent with low-energy theorems for a field theory with spontaneous breaking of symmetry. There are two versions of the theorem, which can be found in textbooks, e.g.\ \cite{Weinberg:1996kr}, and which we sketch here for convenience. The first version is based on the effective action. Suppose the action and the measure of the path integral are invariant under a continuous symmetry; for simplicity let us assume translation invariance and constant field configurations,
\begin{equation}
\delta \phi_m =i \epsilon t_{mn}\phi_n,
\end{equation}
where $\phi_n$ are generic fields, $\epsilon$ a small parameter and $t_{mn}$ the symmetry generators.
If the transformation is a dilatation then $t_{mn}$ is the matrix of conformal dimensions.
The effective potential will be invariant under this symmetry
\begin{equation}
\sum_{m,n}\frac{\delta V[\phi]}{\delta \phi_m} t_{mn} \phi_n =0.
\end{equation}
Taking a derivative of this expression with respect to $\phi_l$, one finds the following condition
\begin{equation}
\sum_m \frac{\delta V[\phi]}{\delta \phi_m} t_{ml} +\sum_{m,n}\frac{\delta^2 V[\phi]}{\delta \phi_m\delta \phi_l} t_{mn} \phi_n =0.
\end{equation}
In a vacuum configuration, the first term vanishes since it corresponds to a minimum of the effective potential. The second term can be identified with the inverse of the zero-momentum propagator
\begin{equation}
\sum_{m,n}\Delta^{-1}_{ml}(0) t_{mn} \phi_n =0.
\end{equation}
Therefore, there is an eigenvector of the zero-momentum inverse propagator with zero eigenvalue. This is usually interpreted as a pole at zero momentum for the propagator itself. However, from this argument one can only establish the existence of a singularity at zero momentum, not the nature of the singularity itself. The result we find for \eqref{OOcorr} is then consistent with this version of Goldstone's theorem.
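The mechanics of this argument can be illustrated in a simple toy model, independent of the holographic setup: for an $O(2)$-invariant potential $V=\frac{\lambda}{4}(\phi_1^2+\phi_2^2-v^2)^2$, the Hessian at the minimum (the zero-momentum inverse propagator) annihilates the symmetry direction $t_{mn}\phi_n$:

```python
import sympy as sp

lam, v = sp.symbols('lambda v', positive=True)
p1, p2 = sp.symbols('phi1 phi2', real=True)

# O(2)-invariant Mexican-hat potential
V = sp.Rational(1, 4)*lam*(p1**2 + p2**2 - v**2)**2

# Hessian = inverse propagator at zero momentum
H = sp.hessian(V, (p1, p2))

# Generator of O(2) rotations acting on (phi1, phi2)
t = sp.Matrix([[0, -1], [1, 0]])
phi = sp.Matrix([p1, p2])

# At the vacuum phi = (v, 0) the symmetry direction is a zero mode: H.(t.phi) = 0
vac = {p1: v, p2: 0}
zero_mode = (H*t*phi).subs(vac)
assert sp.simplify(zero_mode) == sp.zeros(2, 1)
```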
The second version of the theorem is based on the Ward identities of the theory. For concreteness we will specialize to space-time transformations, working in four space-time dimensions. Using the energy-momentum tensor we can construct the following charges that generate translations, Lorentz transformations and dilatations:
\begin{equation}
P^\mu=\int d^3 x\, T^{\mu 0}, \ \ M^{\mu\nu}=\int d^3 x\, x^{[\mu}T^{\nu]0}, \ \ D=\int d^3 x\, x_\nu T^{\nu 0}.
\end{equation}
When conformal symmetry is spontaneously broken by a constant expectation value of an operator $\cO$ of dimension $\Delta$, the commutator of the generators with the operator vanishes for translations and Lorentz transformations, but not for dilatations:
\begin{equation}\label{comm}
\vev{[P^\mu,\cO]}=0, \ \ \vev{[M^{\mu\nu},\cO]}=0, \ \ \vev{[D,\cO]}=-\Delta \vev{\cO}.
\end{equation}
On the other hand, since the symmetry is only spontaneously broken, the currents associated to the symmetries should be conserved
\begin{equation}\label{ward}
\partial_\nu^y\vev{[T^{\mu\nu}(y),\cO(x)]}=0, \ \ \partial_\alpha^y\vev{[y^{[\mu}T^{\nu]\alpha}(y),\cO(x)]}=0, \ \ \partial_\mu^y\vev{[y_\nu T^{\nu\mu}(y),\cO(x)]}=0.
\end{equation}
The next step is to write the commutator between the energy-momentum tensor and the scalar field in terms of a spectral decomposition:
\begin{equation}
\vev{[T^{\mu\nu}(y),\cO(x)]}=\int \frac{d^4 p}{(2\pi)^3}\left[\rho^{\mu\nu}(p)e^{ip(y-x)}+\overline{\rho}^{\mu\nu}(p)e^{-ip(y-x)} \right].
\end{equation}
From \eqref{comm} and \eqref{ward}, one finds that the spectral function should take the form
\begin{equation}
\rho^{\mu\nu}(p)=(\eta^{\mu\nu}p^2-p^\mu p^\nu)\theta(p_0)\rho(-p^2),
\end{equation}
with the condition
\begin{equation}
\mu^2 \rho(\mu^2)=0, \ \ \int d\mu^2\, \rho(\mu^2)=\Delta \vev{\cO}.
\end{equation}
The two conditions can be satisfied only if
\begin{equation}
\rho(\mu^2)=\Delta \vev{\cO}\delta(\mu^2).
\end{equation}
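This is easy to verify with sympy's distributional integration (a sketch; $\Delta$ and $\vev{\cO}$ enter only as constant prefactors):

```python
import sympy as sp

mu2 = sp.Symbol('mu2', real=True)
Delta, vevO = sp.symbols('Delta vevO', positive=True)

# Candidate spectral function: rho(mu^2) = Delta <O> delta(mu^2)
rho = Delta*vevO*sp.DiracDelta(mu2)

# First condition: mu^2 rho(mu^2) integrates to zero ...
assert sp.integrate(mu2*rho, (mu2, -sp.oo, sp.oo)) == 0
# ... while the sum rule gives Delta <O>
assert sp.integrate(rho, (mu2, -sp.oo, sp.oo)) == Delta*vevO
```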
A delta function in the commutator between $T^{\mu\nu}$ and $\cO$ implies that there should be a pole in the real part of the time-ordered correlators, proportional to $\Delta \vev{\cO}$. The holographic calculation \eqref{TOcorr} indeed exhibits this pole.
\subsection{Promoting the spurion to a dynamical mode}\label{sec:promote}
Since the spurion behaves as a regular and normalizable mode for the scalar field, we can promote it to a dynamical mode by introducing a new set of boundary conditions for the metric. This may be interesting in the context of effective low energy theories for applications to condensed matter, as it can be an example of ``holographic deconstruction'' following the proposal made in \cite{Nickel:2010pr}. In this case, the interesting IR dynamics is captured by the geometry below some cutoff in the radial direction. The IR cutoff effectively gaps the modes living between the cutoff and the boundary, but the information about the UV physics enters through the coupling of the modes below the cutoff with a massless mode, a ``Goldstone boson'' with a radial profile between the IR cutoff and the boundary. The spurion field will be one of these modes for theories with spontaneous breaking of conformal invariance.
Starting with the on-shell action \eqref{action} we can impose Dirichlet conditions for the scalar field (we will fix the source to vanish, so this is allowed) and a mixed condition for the metric if we add additional boundary terms. First, we split the variation into the traceless and the trace contributions
\begin{equation}
\delta G_{\mu\nu}=G_{\mu\nu}\delta \tau(x)+\delta G_{\mu\nu}^T, \ \ G^{\mu\nu}\delta G_{\mu\nu}^T=0.
\end{equation}
We will impose a Dirichlet condition as usual for the traceless part $\delta G_{\mu\nu}^T=0$,\footnote{Although the Weyl transformation also changes the traceless components, we are ultimately interested in the effective action of the dilaton in flat space, where these components vanish and the Weyl transformation does not affect them.} but for the trace part we will introduce a Neumann condition. In order to have a good variational principle we should add boundary terms to the action that cancel the contribution from the on-shell action $S_{\rm tot}=S+I_{UV}+I_{IR}$
\begin{equation}\label{bc}
I_{UV}=\frac{d-1}{2\kappa^2\rhuv}\int d^d x \sqrt{-\tilde G(\rhuv)}, \ \ I_{IR}=-\frac{d-1}{2\kappa^2\rhir}\int d^d x \sqrt{-\tilde G(\rhir)}.
\end{equation}
Note that $\sqrt{-G}$ in \eqref{variation} has an additional factor $\ell/2\rho$.
Now, using \eqref{variation}, the total variation of the action is
\begin{equation}
\delta S_{\rm tot}[\rhuv,\rhir]=\delta S[\rhuv,\rhir]+\delta I_{UV}+\delta I_{IR}=0.
\end{equation}
Imposing the Dirichlet conditions $\delta\phi=0$, $\delta G_{\mu\nu}^T=0$, this takes the form
\begin{equation}\label{offhellvariation}
\delta S_{\rm tot}[\rhuv,\rhir]=-\left.\frac{1}{2\kappa^2}\int d^d x\,\sqrt{-G} \left(G_{\mu\nu} \pi^{\mu\nu}-\frac{d(d-1)}{\ell} \right)\delta\tau(x)\right|^{\rhuv}_{\rhir}.
\end{equation}
The variation vanishes for any $\delta \tau(x)$ if
\begin{equation}\label{trmetbc}
\left. g^{\alpha\beta}\rho \partial_\rho g_{\alpha\beta}\right|^\rhuv_\rhir=0.
\end{equation}
Since the variation is evaluated at the cutoffs, we can use the expansion \eqref{metexp} in the boundary condition \eqref{trmetbc}. At the boundary there are divergent terms that should be removed with local counterterms; the remainder in the $\rhuv\to 0$ limit is a finite contribution. For $d>2$, the leading contribution in the derivative expansion of \eqref{trmetbc} is
\begin{equation}
\rhir \,g^{(0)\, \alpha\beta}g_{\alpha\beta}^{(2)}+\cdots =0.
\end{equation}
This will determine the equations of motion of the dilaton. For a flat background metric $g_{\mu\nu}^{(0)}=\eta_{\mu\nu}$, the infinitesimal PBH transformation \eqref{inftPBH} changes the metric to
\begin{equation}
g_{\mu\nu}^{(2)}\simeq \Lir^2\partial_\mu\partial_\nu\delta\tau,
\end{equation}
so, to leading order, the boundary condition \eqref{trmetbc} is satisfied for a massless mode
\begin{equation}
\square \delta\tau\simeq 0.
\end{equation}
The full equations of motion can be derived from the effective action of the dilaton, that can be constructed by computing the generating functional as before including the boundary contributions \eqref{bc} and performing the non-linear PBH transformation. The anomalous term with a coefficient proportional to the difference of central charges will also appear in this case.
\section{Discussion}\label{sec:discuss}
We have shown how the conformal anomaly matching arguments presented in \cite{Komargodski:2011vj,Komargodski:2011xv} are realized in theories with holographic duals. The dilaton/spurion is introduced through a coordinate transformation and our analysis shows that the effective action contains a term whose variation is the difference between the anomalies of the UV and IR theory.
$$
\delta_\sigma \Gamma \propto \sigma \left(\cA_d^{UV}-\cA_d^{IR}\right).
$$
The term is naturally present in even dimensions and absent in odd dimensions, where there is no anomaly.
Let us recapitulate the various assumptions we have made. The analysis we have presented is valid for theories whose holographic dual consists of Einstein gravity plus matter fields, with a geometry that interpolates smoothly between two $AdS$ spaces in the UV and the IR.
We have assumed that the potential is derived from a globally defined superpotential, a somewhat mild assumption since it still allows for a rather general form of the potential. We have also assumed that the scalar operator $\mathcal{O}$ carries a non-integer dimension; this assumption simplifies the tedious analysis that goes into finding the small-momentum approximate solution. Tacitly, we assume that the usual AdS/CFT correspondence holds, i.e.\ large $N$ (even though we do not identify a particular CFT, so the role of $N$ in the underlying theory is not apparent) and weak-strong duality. This allows us to identify the on-shell action with the classical gravitational action evaluated on the solution. Moreover, we do not include higher-curvature gravity terms, hence we consider only theories with $a=c$.
The null energy condition guarantees $a_{UV}\geq a_{IR}$ and makes the coefficient of the anomalous term positive definite. This is a general statement within the present, best understood, class of holographic models; however, it is far from being a general statement about field theories (even leaving large $N$ and strong coupling aside). Note, in the first place, that in the holographic theories we are considering the coefficients of the anomaly polynomial at the fixed points are not independent: there is a single independent coefficient, which can be taken to be the coefficient of the Euler density. For instance, in four dimensions
\begin{equation}
\cA_4= c\cW^2-a E_4,
\end{equation}
the relation $c=a$ is fixed. In particular this means that the holographic $c$-theorem (or rather $a$-theorem) applies not only to the $a$ coefficient but also to all the other coefficients of the anomaly polynomial. We know this is not the case in general (e.g. \cite{Anselmi:1997am,Anselmi:1997ys}). It would then be interesting to extend the analysis to cases where $a\neq c$. In holographic models this can be achieved by including higher-derivative curvature corrections; an example is Gauss-Bonnet gravity in five dimensions, for which a holographic $a$-theorem also exists for the $a$ coefficient \cite{Myers:2010xs}. An interesting observation is that the relation between the anomaly coefficient and the radius of anti-de Sitter is not straightforward when higher-derivative corrections are included, so that flows where the radius of anti-de Sitter is larger in the dual to the IR theory could be allowed.\footnote{We thank Andrei Parnachev for pointing this out to us.}
Let us now comment on the behavior of the scalar correlator for the case of spontaneous breaking, whose low-momentum behavior we have found to be
$$
\vev{\cO\cO}\sim (\sqrt{-q^2})^{d-2\Delta_{IR}},
$$
where $\Delta_{IR}>d$. Note that this is more singular than the pole behavior we expected to see. We can compare our results with the results for the singular Coulomb branch geometry. There, it was found that there is a pole in this correlator \cite{Mueck:2001cy,Bianchi:2001de,Papadimitriou:2004rz}. Although we understand this as coming from the properties of the solution close to the horizon, we are not sure about the physical origin of this difference in the two cases. A qualitative difference is that in the Coulomb branch the spectrum is gapped, which probably limits the singular behavior of the scalar correlator to a pole, while in the cases we study there is always a large number of massless degrees of freedom. It would be interesting to explore other models with spontaneous breaking of global symmetries, and compare the two-point function of the operator that triggers the breaking in cases where the spectrum is gapped with those where there are more massless states beside the Goldstone bosons.
There are several examples where massless degrees of freedom affect the qualitative behavior of correlators. For instance, in the Landau-Ginzburg model long-range interactions can in general modify the momentum dependence of the correlation function of the order parameter, leading to non-analytic dependence on momentum \cite{Ferrell:1972zz}. Long-range interactions can also modify the dispersion relation of Goldstone bosons in non-relativistic theories (some examples are discussed in section 6 of \cite{Brauner:2010wm}), avoiding the conclusions of the Nielsen-Chadha theorem \cite{Nielsen:1975hm}.
That said, the behavior of the scalar correlator is quite surprising to us. Our na\"ive expectation was that $\cO$ would reproduce the propagator of a weakly coupled dilaton, which would lead to a different momentum dependence. Assume that, as na\"ively expected, the low-energy effective field theory description consists of the IR CFT plus an almost free dilaton. The dilaton does not couple directly to marginal operators, so the only allowed couplings to the CFT are to irrelevant operators, which would decouple at very low energy. The effective action at low energies would then be
$$
\cL_{IR}\simeq \cL_{IR\, CFT}+\cL_{\rm dilaton}+g\left(\vev{\cO}\right)e^{\Delta_{IR}\tau} \cO_{\Delta_{IR}}.
$$
Here $g(\vev{\cO})$ is the coupling of the irrelevant operator that drives the IR theory away from the fixed point in the RG flow. This coupling depends on the expectation value of the operator that breaks conformal invariance. Using that $\vev{\cO_{IR}\cO_{IR}}\sim (\sqrt{-q^2})^{2\Delta_{IR}-d}$, the dilaton propagator would take the form
$$
\vev{\tau \tau}\sim \frac{1}{q^2+(\sqrt{-q^2})^{2\Delta_{IR}-d}}.
$$
Clearly, this is different from the scalar correlator $\vev{\cO\cO}$ we have computed. In our case, the low-momentum behavior of the correlator coincides with that of an operator of dimension $d-\Delta_{IR}$ in the IR CFT:
$$
\vev{\cO\cO}\sim (\sqrt{-q^2})^{2(d-\Delta_{IR})-d}.
$$
This is the dimension of the {\em coupling} $g(\vev{\cO})$. We do not have a formal argument in field theory that explains this behavior; heuristically, it looks as if fluctuations of the expectation value lead to fluctuations of the coupling in the effective IR theory. Note that there is a strong IR divergence: the position-space Euclidean correlator grows like a power law at long distances,
$$
\vev{\cO(x)\cO(0)}\sim |x|^{2(\Delta_{IR}-d)}.
$$
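The position-space power quoted here follows from the momentum-space behavior via the standard Fourier scaling $|q|^{\alpha}\leftrightarrow |x|^{-d-\alpha}$ (valid up to a dimension-dependent constant for generic non-integer $\alpha$); the exponent bookkeeping can be checked symbolically:

```python
import sympy as sp

d, DIR = sp.symbols('d Delta_IR', positive=True)

# Momentum-space exponent of the scalar correlator: 2*(d - Delta_IR) - d
alpha = 2*(d - DIR) - d

# Fourier transform of |q|^alpha scales as |x|^(-d - alpha)
x_exponent = -d - alpha
assert sp.simplify(x_exponent - 2*(DIR - d)) == 0
```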
This is even stronger than the logarithmic divergence of a massless scalar field in two dimensions, that precludes the formation of a condensate \cite{Coleman:1973ci}. Taken at face value, it implies that there is no unitary field theory dual to the class of models we have chosen. This behavior may be avoided in different models with no global definition of a superpotential or with more than one scalar field. Recall that having a globally defined superpotential is not a generic situation. The power law we observe in the scalar correlator seems to be related to the coefficient of the quadratic term of the superpotential at the IR critical value $\phi_m$, which for a single scalar field in the bulk is necessarily negative. However, when there are more scalar fields classical trajectories in field space could connect two critical points of the superpotential along directions of positive curvature, a cartoon is given in figure \ref{fig1}. Another possibility is that there is no global definition of the superpotential, so close to the IR critical value $\phi_m$ the local form of the superpotential has a positive coefficient in front of the quadratic term.
\FIGURE[h]{
\includegraphics[width=8.5cm]{figure1.eps}
\caption{ Cartoon of a trajectory between two critical points of the superpotential in a two-dimensional space. The trajectory approaches the critical points along the directions of positive curvature.
}
\label{fig1}
}
One may ask whether the model we constructed really corresponds to spontaneous breaking of symmetry. We already made some arguments in the Introduction that this is indeed the case; let us elaborate on them. In the first place, we have chosen the superpotential in such a way that in the background solution only the normalizable mode of the scalar is turned on, which on the field theory side is interpreted as having a non-zero expectation value for the dual operator but no sources. Since there are no sources, the renormalized action that we have computed is independent of the expectation value.\footnote{Note that this may be different for marginal operators, or operators with integer dimensions such that multi-trace operators are marginal.} Therefore, we are allowed to make the bulk transformation
\begin{equation}
\delta\phi=\tau(x)\partial W, \ \ \delta g_{\mu\nu}=-2\tau(x)(W-W(0))\eta_{\mu\nu},
\end{equation}
which shifts the coefficient of the normalizable scalar solution and hence the expectation value of the dual operator. This also corresponds to the zero mode which we identified in the spectrum of fluctuations. The metric changes in the bulk, but the boundary metric is held fixed. This transformation can be interpreted as having a moduli space of vacua spanned by the expectation value, with the `dilaton' mode corresponding to fluctuations along the moduli space. Moreover, the boundary conditions in both the UV and IR do not break the scale invariance explicitly. In the model we study these fluctuations are very strong, which leads to the singular behavior that we observe in the scalar correlator. Further evidence in support of a scenario with spontaneous breaking of symmetry is that the results for the correlation functions of the energy-momentum tensor agree with the expectations from Goldstone's theorem.
It would be interesting to study more general models with one or several scalar fields and see if the singular behavior can be avoided. These models are also interesting for other reasons, for instance they can also be used to study the spontaneous breaking of Abelian or non-Abelian global symmetries. A particularly intriguing fact is that, to our knowledge, there are no examples in consistent truncations of ten-dimensional supergravity of geometries interpolating between two AdS spaces where the flow is dual to a spontaneously broken theory. The known models are either singular or involve explicit breaking. In ten-dimensional supergravity those models can exist, in the case of $\cN=4$ SYM in the Coulomb branch one expects to have several $AdS_5$ throats corresponding to different stacks of D3 branes, and explicit examples have been constructed for $\cN=4$ itself \cite{Costa:1999sk,Costa:2000gk} and in geometries that explore the baryonic branch of Klebanov-Witten theories, interpolating between two $AdS_5$ spaces \cite{Klebanov:2007us,Martelli:2007mk}. The fact that the smooth geometries correspond to localized branes in the internal space while smooth distributions seem to lead generically to singular geometries \cite{PandoZayas:2000sq,Freedman:1999gk} suggests that the singularity is an artifact of the approximation, and that by resolving at small enough distances, the singularity will split in multiple $AdS_5$ throats.
Summarizing, our result for the correlators leads to the following possibilities:
\begin{itemize}
\item For some reason the low momentum expansion breaks down and there are additional terms in the scalar correlator that make it less singular, modifying it to a $1/q^2$ pole. We checked corrections to the next few orders and did several consistency checks and could not figure out how this would happen. We also checked the method in the Coulomb branch singular solutions, where the analytic solutions are known, and found no problem there with the low momentum expansion; we reproduce previous results that found a $1/q^2$ pole in the scalar correlator. In any case, it would still be desirable to have an analytic example where the horizon geometry is $AdS$ and the low momentum expansion is not necessary.
\item The singular behavior is an artifact of the large-$N$, strong coupling approximation that is implicit in the holographic approach and a proper treatment will avoid these problems. This would imply that the supergravity approximation breaks down, which is difficult to understand from the bulk perspective, since the classical solution is completely smooth and the curvature is small everywhere.
\item The model does not correspond to any known consistent truncation of string theory, so that the field theory dual is not known. It is possible that it does not exist, at least in the form of a quantum field theory satisfying the usual properties of locality and unitarity. One would expect that inconsistencies in the field theory would also be manifested in some form in the gravity dual. We do not observe them at the level at which we are doing our analysis, but they may appear elsewhere.
\end{itemize}
In our opinion, the most likely explanation is the last point. From the field theory point of view, in order to have spontaneous breaking of scale invariance it is necessary to have a moduli space of flat directions. These are ubiquitous in supersymmetric theories. For instance, in four dimensions the simplest example is $\cN=4$ SYM, where quantum corrections do not modify the classical moduli space. In a string theory setup, the moduli space of $\cN=4$ can be explored by moving D3 branes around. As we have explained, we do not expect a fully smooth gravity dual solution describing the whole theory to the far IR when we do a consistent truncation to five dimensions, since there are always some missing low energy degrees of freedom (corresponding to the unbroken gauge groups on the D3 branes). It is probably not easy to find a situation where this difficulty is overcome. A lesson is that one should be cautious when working with holographic toy models: even when the background solutions have no obvious singularities or instabilities, other kinds of inconsistencies may arise in correlation functions.
We would like to explore these issues in the future.
\section*{Acknowledgements}
We thank Ofer Aharony for many useful discussions and suggestions and Stefano Cremonesi, Ori Ganor, Rob Myers, Andrei Parnachev, Koenraad Schalm, Adam Schwimmer and Stefan Theisen for useful comments.
We also thank Ioannis Papadimitriou and Kostas Skenderis for very helpful comments on the holographic renormalization and the fluctuation analysis.
This work was supported in part by the Israel Science Foundation (grant number 1468/06).
\section{INTRODUCTION}
The efficiency of the new large telescopes to increase our
knowledge of the universe and its constituents will depend
significantly on how astronomers can utilize them.
Only creatively-used and well-operated telescopes
will be able to contribute their share to the development of
astronomy.
Operational concepts currently discussed by the various large
telescope projects can be divided into two main groups which differ
mainly by the composition of their user communities. On the one side are
the privately-owned observatories with access restricted to a small
community of astronomers. For many of these observatories it
is most economical and also easiest to continue operation of their
telescopes as in the past. The astronomers
perform the observations themselves and are fully responsible for
the acquired data. Normally they are assisted by experienced
telescope operators. In the following this situation will be referred
to as conventional observing, often also referred to as
classical observing.
National and international observatories on the
other hand are actively experimenting with service, or queue,
observing modes$^{1,2,3}$. The attempt is to take better advantage of the
varying conditions by combining the best-suited observations independent
of individual astronomical projects (cf. [4]).
Active interaction with the scientist,
who proposed the observations and will analyze the data, during the observing
process becomes
nearly impossible in this case. The break in the
observational chain has to be recovered by operational procedures
which enable the astronomers to maintain control over their
observations. At the same time interesting new
astronomical possibilities emerge with such a mode.
The next
generation instrumentation which comes with the new telescopes will
also deliver unprecedented data at the cost of increased complexity.
Astronomers will need every possible assistance in the preparation and
execution of their observations.
sufficient detail the actual data acquisition can be delegated to the
observatory. Many aspects of the advantages of the various modes and
the changes involved have been collected in [4].
In the following we will discuss some of the advantages of the two
observational modes and develop criteria when they are suited best (\S2).
The concepts of the VLT specific data flow project which is designed
to accommodate the needs of service observing are presented in section 3.
Open issues and future directions are discussed in the conclusions.
\section{ADVANTAGES --- ONE WAY OR ANOTHER}
The way current observatories are operated has been developed to
efficiently distribute the scarce resource of observing time.
Major guidelines
are scientific merit of the proposed observations, fair
distribution among the subfields of astronomy and -- to a certain
degree -- democracy. Each successful applicant
is awarded a certain amount of time at fixed
calendar dates to perform the experiment. All the observatory provides
is a functioning telescope and instruments with no or little
guidance on how to best perform the observations. The observatory's
r\^ole is essentially to administer the resources while developing
and maintaining the infrastructure.
With the astronomers guiding the observations at the
telescope a quick scientific assessment of the data in respect to their
suitability for the project can be done. The vital link of the
astronomical observing experience and the researcher is maintained
which assures that the data can be analyzed properly. Psychological
aspects like the personal involvement of the astronomers with the data
acquisition or the astronomers' detachment from their
regular work during observing trips can also contribute to the
creativity of the scientific process. Advantages of this conventional
mode for the observatory are the interaction of observatory staff
with the astronomers, the experience brought in by the external users
to the observing process, and the possibility to transfer the
responsibility of the data acquisition to the astronomers directly.
This is particularly important in cases of specialized observations,
the outcome of which can not be predicted.
Typical research favored by these operations is based on observations
which can be obtained under ``regular'' environmental conditions
delivered
by the site and the telescope. Projects which require special
conditions are clearly discriminated, unless there is easy
access to the facilities as is often the case at
observatories with small user communities. There, astronomers
typically obtain a larger share of observing time and can
mix their own projects to make adequate use of particular
meteorological circumstances. Other projects which are typically
disfavored in this mode are surveys since the normally large time
demands can not be readily allocated at observatories which serve
large communities. Other projects which normally suffer
from this conventional scheduling mode are targets of opportunity
and general time series observations of objects with time scales
of weeks or months. Smaller communities, where the data exchange is
organized informally, have been able to arrange for such occasions, but
the larger observatories had to introduce formal and often
awkward rules to handle these situations.
The previous paragraph lists a few astronomical reasons why it would
be advantageous to change scheduling procedures. The exploration of
parameter space mostly inaccessible so far can add
significantly to astronomical progress. It is this widening of the
observational options which is scientifically most interesting and
leads to the discussion of service observing.
The possibility
to select observations matching the prevailing conditions best will
enable projects depending on special circumstances.
This observational edge may speed up results
which otherwise would rely on the meteorological luck of the draw.
Surveys can be
carried out mixed with other observations. It has to be noted
that most massive surveys have been obtained in service mode, as
the examples of the Palomar Observatory and the ESO/SERC Sky Surveys,
the
current near-IR surveys (DENIS and 2MASS), or the searches for massive
halo objects (MACHO, EROS, OGLE) with their scientific spin-offs show.
Another observational niche difficult to access at large
observatories is the monitoring of variable objects and observations
of targets of opportunity. Such synoptic observations can very easily
be fitted into a flexible scheduling process, which is a prerequisite
for service observing. A small number of objects spaced around the
sky requiring just a few observations can easily be accommodated in
service mode, while they may not reach critical program size for
time allocation in the conventional case.
Another aspect more difficult to quantify may be the
more efficient use of the available time as programs will be performed
to the exact amount of exposures needed. Finer adjustment to the
moon phases gains some dark hours which are often lost when programs
are scheduled conventionally in blocks of complete nights$^{5,6}$.
Observing projects may improve through the required preparations, which entail a
complete road map of how the data will be analyzed. This should
happen even though there will be no formal requirement to do so.
A further
possible spinoff is the availability of an extensive archive of
observations. Archival research, although not yet a primary resource,
may contribute significantly to projects where the combination
of data from different wavelength regimes is advantageous.
While there remain many astronomical projects which clearly are
best served in conventional mode, the above examples show that
there are a few reasons why it may be interesting to explore some
other operational modes with new telescopes. The selection of
the most appropriate observing mode for each project will be an
important decision which will need careful evaluation by the
astronomer.
The move to new operational schemes must be driven
by improvements in the scientific process. Many discussions on
operational issues have focussed on advantages for observatories,
which are not negligible, but can not by themselves justify the
proposed, stringent changes. The astronomical community must
embrace the new opportunities for success. It should also be noted
that all observatories plan to offer both observing modes
leaving all options open.
\section{INFORMATION EXCHANGE IN A COMPLEX OBSERVATORY --- THE VLT DATA FLOW}
Removing the astronomers from the actual observations at the telescope
implies that
other means must be provided to retain their control over the
observational process. The development of the
procedures to guarantee the astronomer's participation has started at
ESO within the On-Line Data Flow project (see also [1]).
Its basis was developed in the VLT
Science Operations Plan$^5$ and a document which defines the
astronomical requirements on the observatory information chain$^7$.
The link between the astronomer and the observatory should be close
and transparent. It also should remain flexible.
There are a few very clearly separated stages in the astronomical
information and data cycle$^{1,7}$.
Each phase has demands and services which have
to be identified and carefully combined. The definition of
the observing program and its scientific evaluation by peers as well as the
definition of the individual observations should be provided by the
astronomy community. The observatory solely supports and administers
this process. Scheduling of programs in conventional mode and
the observations in service mode, however, are performed by
the observatory.
The demand for exact observation definition drives the
requirement for the second phase of the astronomer -- observatory
interaction.
To build a schedule which optimally matches the prevailing conditions
with the observational requirements the scheduler will need accurate
input from a meteorological site monitor, telescope and instrument
status, and astronomical restrictions (e.g. moon phase). The actual
observations will be handled by specific telescope, instrument, and
detector software$^8$ which will return the
data products (frames and logs) to the data flow. The further data
handling involves
archiving and pipeline processing for provisional quality control. All
these processes fall into the responsibility of the observatory.
These considerations led to the adoption of a two-layered system
relaying information among the various subprocesses. The VLT could be
run from the control system alone without the data flow
superstructure, but the technical description of the
instrument and the interaction with the scheduling process are
considered too detailed and cumbersome for astronomers who infrequently
interact with the system. The data flow acts as the intermediary between the
astronomers and the technical software. There are four fundamental
agents the data flow is connecting: the astronomer, the scheduler,
the technical software, i.e. the observing facilities, and the archive.
The VLT concept does reflect their basic needs. ``Observation
Blocks'' contain the complete information relevant for an
individual observation$^{1,9}$.
An observation in what follows is
considered a single pointing of the telescope with a specific instrument
setup to acquire a coherent data set.
Apart from obvious quantities, like
coordinates and instrumental setup, observation blocks can also
contain global requirements of importance to the scheduler, general
comments of interest to the observer, and links to reduction
procedures or quality control. A specific feature is the modularity by
which observation blocks can refer to other observation blocks to
combine observations. It is thus possible, e.g., to link regular
observations with the acquisition of calibration data.
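The modularity described above can be sketched as a simple data structure. The following Python sketch is purely illustrative: the class and field names are hypothetical and do not correspond to the actual VLT software interfaces.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ObservationBlock:
    """Illustrative sketch of an observation block: a single telescope
    pointing with a specific instrument setup. Field names are hypothetical."""
    coordinates: tuple                     # (RA, Dec) of the pointing
    instrument_template: str               # commonly used setup served by the observatory
    template_parameters: Dict[str, float]  # the accompanying template parameter file
    scheduling_requirements: Dict[str, float] = field(default_factory=dict)
    comments: str = ""                     # general remarks of interest to the observer
    linked_blocks: List["ObservationBlock"] = field(default_factory=list)


# Modularity: a science block refers to a calibration block.
calibration = ObservationBlock((150.00, -30.00), "IMG_flatfield", {"exptime": 2.0})
science = ObservationBlock(
    (150.12, -30.25), "IMG_broadband",
    {"exptime": 300.0, "filter_id": 3.0},
    scheduling_requirements={"max_seeing": 0.8},
    linked_blocks=[calibration],
)
```

A scheduler or observer reading such a block finds the pointing, the setup, the global requirements, and any linked calibration in one place.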
All information on the instrument and its operation during the
observation is encapsulated in ``instrument templates$^{1,9}$.'' These
structures define commonly used setups and are embedded in the
observation blocks. The astronomer defines the
specific parameters of the setup in a template parameter file
which accompanies
the template. This should ease the astronomers' interaction with the
VLT system as many details of instrument operations can be served by
the templates. Some observations will not be offered with templates in
which case the option to
drop to the level of the VLT technical software is still available.
Astronomers whose proposals successfully passed the selection process
will hence have to prepare the observation blocks and the templates for
each observation in their program. This preparatory phase
is foreseen for all proposals and guarantees the close
involvement of the astronomers. Even conventional observations
will be prepared in this preparatory phase to familiarize the
astronomers with the system. In this case the astronomers will use
their observation blocks at the telescope during their assigned
nights. During service time all available observation blocks are
provided in a central database polled by the scheduling software. The
scheduling is a very complex process which currently is not yet fully
defined for the VLT. Since the best performance is achieved only when
sufficient information is available and the detailed procedure depends
on many different sources, it is important to collect as much
intelligence as possible. Observations with detailed descriptions
of their requirements are more likely to achieve the requested
quality as they can be scheduled accordingly.
A long-term plan for the semester or a fair fraction of it schedules
conventional observing runs and defines the requested instrument
setups. This will be required even with instrument changes becoming
possible at short notice as special filters or gratings will have
to be mounted ahead of time.
Flexible scheduling itself will rely on local information sources
which describe the prevailing conditions of and at the observatory.
Meteorological input is provided by a site monitor.
Image quality assessment (including
sky background), possible forecast of critical parameters (e.g. cloud
cover, precipitable water vapor, seeing) will provide the basis of
the selection process.
ESO has maintained a program to characterize prevailing
meteorological conditions over the past several years$^{10}$
and is embarking on a
project to forecast some of the meteorological parameters on time scales
of a few hours.
Options for operations with limited information have to be developed
as well.
An important aspect of the scheduling process is the underlying
criteria which govern the selection. This is largely unexplored
territory for all observatories with the notable exceptions of HST and
the NOAO operations of the WYIN telescope$^2$. Important results are
expected from simulations with mock projects which encompass a large
set of observations and a variety of requirements. Although it will
be impossible to fully simulate the scheduling of a semester
describing
the creativity of programs of real astronomers and vagaries of
real-time operations, simple strategies can be tested and compared. The
effect of operational overheads, instrument changes,
decision time scales,
importance of program completion, and weighing of different
conditions can be explored ahead of time.
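As a toy illustration of such a simulation, flexible scheduling can be modeled as repeatedly selecting, from the pool of pending observation blocks, the one whose requirements best match the prevailing conditions. The scoring rule below is a hypothetical example, not the VLT selection criteria.

```python
def pick_next_block(pending, conditions):
    """Greedy selection: return the pending block best matched to the current
    conditions, or None if no block's requirements are satisfied. Each block is
    a dict with a 'max_seeing' requirement and a 'priority'; 'conditions' holds
    the measured seeing. Purely illustrative."""
    feasible = [b for b in pending if conditions["seeing"] <= b["max_seeing"]]
    if not feasible:
        return None
    # Prefer high priority and, among equals, the tightest seeing requirement,
    # so that good conditions are not wasted on undemanding programs.
    return min(feasible, key=lambda b: (-b["priority"], b["max_seeing"]))


pending = [
    {"id": "survey",  "max_seeing": 1.5, "priority": 1},
    {"id": "deep",    "max_seeing": 0.6, "priority": 2},
    {"id": "monitor", "max_seeing": 1.0, "priority": 2},
]
best = pick_next_block(pending, {"seeing": 0.5})   # excellent seeing -> "deep"
```

Running many mock semesters with different rules of this kind is one way to compare simple strategies and quantify the effect of overheads and decision time scales ahead of time.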
The telescope and instrument software returns raw data frames to the
data flow. Archiving and further processing complete the cycle. The
data archive captures all relevant information for a given
observation. This includes the original request contained in the
observation block and template together with the actual
conditions. Other relevant observations, e.g. calibrations and
standard star data, linked to the project are also
stored in this central place. The astronomers will receive
(or retrieve)
their data from this archive. Once data become public it will be
accessible by the whole astronomical community.
At the VLT a routine pipeline processing of all data obtained in
service mode and, possibly, conventional observing will
be attempted. The pipeline results are used for a quick assessment
of the data, potentially influencing the further observations of the
night. They will provide preliminary removal of instrumental and detector
effects.
The quality control will follow the pipeline reductions to ensure that
the observations correspond to what was requested.
To test the concepts and procedures of the data flow a set of
reference proposals has been defined. These observational projects
with some scientific background have been designed to cover a large
range of observational requests and techniques. They will be used to
check the interfaces and the interactions of the various parts of the
data flow. For a first check they will play the r\^ole of external
astronomers. The reference proposals can also be used for simple
scheduling simulations.
\section{BUILDING THE EXPERIENCE}
The complexity of the operations of modern large telescopes should
not be underestimated. The required information exchange between the
astronomers and the observatory represents a vital link to assure
that the observatory delivers what is requested and expected by the
astronomers. The future will look different for the regular user
even observing conventionally. Telescope operations
have been long ago delegated to specialists and astronomers have
accepted the help provided by telescope operators. The complexity
of the instrumentation and the observational procedures will place
further demands on the astronomers' understanding of technical aspects.
It should be the goal of the operations to keep the astronomers'
interaction with the facilities and the staff as simple as possible.
Every
possible help the observatory can provide should be available to
support the astronomers in their scientific experiments. They should
be able to concentrate on the observational aspects rather than
technicalities. Nevertheless, the astronomers will have
to be provided with sufficient information so that they can understand
and assess their data in all observational aspects.
To ensure acceptance of these conceptual changes, the
collaboration of the astronomical community has to be secured.
The early involvement of future users of the system
can only improve the operations. Several science test cases for the
VLT have been solicited from the European astronomical community.
These test cases will be used in addition to the internal reference
proposals to test the procedures. They have the additional advantage
of being based on real science projects, and the external astronomers are
experts in the requested observations who can provide helpful
criticism. Since the data flow is the VLT's
interface with the astronomical community it will largely
define the perception of the observatory. The input from users is
essential for a successful development.
The recommissioning of ESO's New Technology Telescope (NTT) will be
combined with the start of service observing at ESO. Several programs
have been approved and the data flow system will undergo a first real
test before the end of 1996. At first, the service mode will be restricted to
direct imaging in the optical, some of the operationally least
demanding observations. Spectroscopic observations in the service
mode will be offered only during the following semester.
\section{CONCLUSIONS}
The introduction of new observing modes in combination with the
improved instrumental capabilities is a daunting task. The prospective
advantages are significant and may provide an important observational
edge over other approaches. At the same time the success of the
experiment almost entirely depends on the acceptance by the community.
The astronomers will have to learn to optimally use the new
possibilities. Since many of the changes have been initiated by the
observatories it will be their r\^ole to convince the rest of the
community of the gains. A fundamental requirement is the smooth
operation of the observatory and the improved data quality has to
become an essential argument.
At the VLT the data flow will link the observatory to the astronomers
and expand the interaction between them. It presents the astronomers
with all observational possibilities and lets them make best
use of the facilities. An integrated approach has yielded a
system which will encompass all operational aspects. It should be noted
that there exists a clear separation between the needed
infrastructure, the data flow, and the operational model. A flexible
data flow system will provide the options to build and improve
operational models without major limitations.
Definition of an operational model for the VLT will have to tackle
open questions like the scale of the preparatory phase, the
exact criteria for the scheduling process, and
the degree of pipeline processing. A convincing model must include
compelling reasons for the expanded preparatory phase. Flexible
scheduling drives most of the complexity of the VLT data cycle. The
operational model will have to set the astronomical priorities,
possibly based on the results from simulations.
First lessons from
simulating the information and data cycle with mock projects will be
followed by service operations of the NTT in early 1997. Further
refinement in the VLT environment at the NTT$^{11}$
will provide a solid basis for a successful start of service
observations at the VLT itself.
Service observing will remain an experiment for the first few years.
It must not be seen in isolation as it is introduced to complement the
current observational capabilities. Observing with the VLT will be
possible in conventional as well as service modes.
The selection of which mode suits the program and its observation best
should ideally be based on astronomical criteria. This can be achieved
when sufficient trust has been built through reliable delivery of
high-quality data.
\section{ACKNOWLEDGEMENTS}
Building the operations of a complete observatory is not a small
task and depends on many people. The views expressed in this article
are based on discussions with many colleagues at ESO. The data flow
project is headed by P. Quinn and includes M. Albrecht, E. Allaert,
D. Baade, A. M. Chavan, P. Grosb\o l, M. Peron, G. Raffi, and
J. Spyromilio. They have been instrumental in developing some of
the ideas. I am also grateful to A. Renzini, the VLT project scientist,
for many discussions regarding these issues.
\section{Introduction}
At the large hadron collider (LHC), the searches for new physics beyond the standard model (SM) have a preference for colored particles. This is due to two reasons. First, from the argument for solving the gauge hierarchy problem, colored partners of the top quark are expected, to cancel the quadratic divergence of the Higgs mass induced by the top quark. Second, from the viewpoint of detectability, colored particles have sizable production rates even at the well motivated TeV scale. Nevertheless, it is also of importance to investigate the status and prospects of new electroweak (EW) particles. They are no less motivated in particle physics. But at the LHC these particles, typically with small production rates, are inclined to be buried in the huge SM EW and/or QCD backgrounds, except for those with characteristic signatures, e.g., large missing transverse energy or same-sign di-lepton (SSDL) events. The latter frequently originate from particles with a larger electric charge, and the doubly charged Higgs boson, denoted as $H^{\pm\pm}$, is a good case in point.
A lot of works have been done on the LHC search for $H^{\pm\pm}$ that come from the (scalar) $SU(2)_L$ triplet representation with hypercharge $\pm1$ (denoted as $\Delta$).~\footnote{$H^{\pm\pm}$ can also be arranged in a singlet~\cite{Zee:1985id}, doublet~\cite{Law:2013dya} $SU(2)_L$ and even higher dimensional~\cite{Cirelli:2005uq,Babu:2009aq,Cai:2011qr} representations. Some of them may produce signatures similar to those studied in this paper.} As a matter of fact, extension of the SM Higgs sector by $\Delta$ is well motivated in various new physics contexts, e.g., solving the hierarchy problem~\cite{GMM,little}, providing a viable dark matter candidate~\cite{FileviezPerez:2008bj} and in particular generating neutrino masses via the seesaw mechanism~\cite{type2}. In supersymmetry, such triplets provide an effective way to lift the SM-like Higgs boson mass, thus greatly relieving the fine-tuning problem~\cite{Kang:2013ft}. In addition, a light $\Delta$ in the loop of Higgs decay into a pair of photons may appreciably affect the corresponding branching ratio~\cite{Arhrib:2011vc,Chun:2013ft,Kang:2013ft,Dev:2013ff}; this would have been of particular interest at the early stage of the LHC, which hinted at a sizable di-photon excess.
Most of the previous works on $H^{\pm\pm}$ searches concentrate on the heavy mass region, while in this article we will focus on the complementary light mass region, i.e., below $2m_W$ but above $m_W$. Extensive attention has been paid to the decay modes of $H^{\pm\pm}$ dominated by either the SSDL~\cite{Perez:2008ha,Rentala:2011mr} or di-$W$ \cite{Han:2007bk, Chiang, Ding:2014nga} channels, or the cascade decay among scalar fields \cite{H_cas,Chakrabarti:1998qy,Han:2015}. For a comprehensive discussion on the relative importance of the decay channels of $H^{\pm\pm}$, see Ref.~\cite{Melfo:2011nx}. The search for $H^{\pm\pm}$ through the SSDL channel has been performed at the LHC, which already excludes the mass of $H^{\pm\pm}$ up to about 300 GeV~\cite{Chatrchyan:2012ya,ATLAS:2012hi}. However, in the current experimental searches other decay modes like di-$W$ may still allow a much lighter $H^{\pm\pm}$~\cite{Kanemura:2013vxa}, for instance, even below $2m_W$. Note that such an $H^{\pm\pm}$ decays into di-$W$ with one $W$ being off-shell, thus this channel is dubbed $WW^*$.
Mainly owing to the softness of the final products, hunting for $H^{++}\rightarrow WW^*$ is a challenging task at the LHC, even with the merits of a relatively large pair production cross section and the remarkable SSDL signature. So it is very important to elaborate the LHC search for such a light $H^{\pm\pm}$. We shall perform a detailed background simulation for the SSDL signature, especially including the non-prompt $\bar t t$ background, which is the dominant one but was nevertheless ignored before. We find that $H^{\pm\pm}$ should be observable at the 14 TeV LHC with $10-30\rm\,fb^{-1}$ integrated luminosity. Last but not least, we take a simplified model approach and discuss the search for $H^{\pm\pm}$ in the simplified model at the LHC, which makes our result less model-dependent and conveniently translatable into
other specific models~\cite{Rentala:2011mr,Alves:2011wf}.
This paper is organized as follows. In Section~\ref{model}, we describe some details about the simplified model for
the doubly charged Higgs boson in the $SU(2)_L$ triplet representation and consider some relevant constraints. Section~\ref{property} is devoted to the properties of the doubly charged Higgs boson, including its production
and decay modes at the LHC. In Section~\ref{LHCsearch}, we study the detailed collider simulation for both signal
and background events, and present the LHC reach of the doubly charged Higgs boson.
Finally we conclude and give an outlook in Section~\ref{conclusion},
and some necessary details are given in Appendix A.
\section{The SM Extension with a Hypercharge $Y=\pm1$ Triplet Higgs} \label{model}
\subsection{The simplified model}
There are many well-motivated new physics models which contain an $SU(2)_L$ triplet Higgs boson $\Delta$ with hypercharge $Y=\pm1$. In order to make our discussion as general as possible, in this work we take the simplified model approach and assume that new particles other than $\Delta$ are absent or decoupled. Thus, the relevant terms in the Lagrangian can be written as
\begin{eqnarray}
\mathcal{L}\supset\mathcal{L}_{\rm kin}+\mathcal{L}_Y-V(\Phi,\Delta),
\end{eqnarray}
where $\mathcal{L}_{\rm kin}, \mathcal{L}_Y$ and $V(\Phi,\Delta)$ are the kinetic term, the Yukawa interaction, and the Higgs potential, respectively. Let us define the SM Higgs doublet and the triplet as
\begin{eqnarray}\label{fields}
\Phi = \left( \begin{array}{c}
\phi^+ \\
\phi^{0}
\end{array} \right),\;\;
\Delta = \left( \begin{array}{cc}
\frac{\delta^+}{\sqrt{2}} & \delta^{++} \\
\delta^{0} & -\frac{\delta^+}{\sqrt{2}}
\end{array} \right),
\end{eqnarray}
with $\phi^0=\frac{1}{\sqrt{2}}(\phi+v_\phi+i\chi),\; \delta^0=\frac{1}{\sqrt{2}}(\delta+v_\Delta+i\eta)$.
Generically, the scalar potential $V(\Phi,\Delta)$ generates a non-vanishing vacuum expectation value (VEV) $v_\Delta$ for the neutral component of $\Delta$. The most general scalar potential is
\begin{eqnarray}\label{higgspotential}
V(\Phi,\Delta) &=& m^2 \Phi^{\dagger}\Phi+ M^2 {\rm Tr}
(\Delta^{\dagger}\Delta) + \lambda_1 (\Phi^{\dagger}\Phi)^2 + \lambda_2 [{\rm Tr}(\Delta^{\dagger}\Delta)]^2 + \lambda_3 {\rm Tr}[(\Delta^{\dagger}\Delta)^2] \nonumber \\
&& + \lambda_4 (\Phi^{\dagger}\Phi){\rm Tr}(\Delta^{\dagger}\Delta) +
\lambda_5 \Phi^{\dagger}\Delta \Delta^{\dagger} \Phi
+ \left[\mu (\Phi^{\intercal} \mathrm{i}\tau_2 \Delta^{\dagger}\Phi) + h.c.\right].
\end{eqnarray}
If $\mu=0$, the potential respects a $Z_2$ symmetry acting on $\Delta$ and the triplet need not acquire a VEV. Otherwise, $\delta^0$ is expected to develop a non-vanishing VEV. Minimizing the potential in Eq.~(\ref{higgspotential}) and assuming a very small $v_\Delta$ (for reasons discussed shortly), one gets
\begin{eqnarray}
v_{\Delta} \simeq \frac{\mu}{\sqrt{2}} \frac {v^2_\phi}{ M^2+\frac{1}{2}\left(\lambda_4+\lambda_5\right) v^2_\phi}
=\frac{\mu}{\sqrt{2}} \frac {v^2_\phi}{ M^2_\Delta}.
\end{eqnarray}
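The interplay between $\mu$, $M_\Delta$ and $v_\Delta$ in the relation above can be checked numerically; the following short script is an illustrative sketch (all quantities in GeV, valid only in the regime $v_\Delta\ll v_\phi$):

```python
import math

def v_delta(mu, m_delta, v_phi=246.0):
    """Triplet VEV from the seesaw-like relation
    v_Delta ~ mu * v_phi^2 / (sqrt(2) * M_Delta^2); inputs in GeV."""
    return mu / math.sqrt(2.0) * v_phi**2 / m_delta**2

# Case (A): weak-scale mu pushes the triplet to the TeV region.
print(v_delta(mu=100.0, m_delta=2000.0))   # ~1 GeV
# Case (B): a weak-scale triplet forces mu below the GeV scale.
print(v_delta(mu=0.005, m_delta=246.0))    # sub-GeV v_Delta
```

The two printed cases correspond to scenarios (A) and (B) discussed in the text.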
We can see that there are typically two ways to achieve a sufficiently small $v_{\Delta}$: (A) $\mu$ is around the weak scale, and then the triplet is pushed up to the TeV region; (B) by contrast, the triplet is around the weak scale with $M_\Delta\sim v_\phi=246$ GeV, and then $\mu$ is forced to lie below the GeV scale as $\mu=v_\Delta$.\footnote{Since as $\mu\ra0$ a symmetry arises, this case is at least technically natural according to the 't Hooft principle.}
We now explain why $v_\Delta$ is restricted to be very small. The Higgs kinetic terms are
\begin{eqnarray}\label{kinetic1}
\mathcal{L}_{\rm kin}\supset(D_\mu\Phi)^\dagger(D^\mu\Phi)+{\rm Tr}\left[(D_\mu\Delta)^\dagger(D^\mu\Delta)\right],
\end{eqnarray}
where the covariant derivatives are defined by
\begin{eqnarray}\label{kinetic2}
D_\mu\Phi=\left(\partial_\mu+i\frac{g}{2}\tau^aW^a_\mu+i\frac{g^\prime}{2}B_\mu\right)\Phi,
\;\;
D_\mu\Delta=\partial_\mu\Delta+i\frac{g}{2}[\tau^aW^a_\mu,\Delta]+ig^\prime B_\mu\Delta,
\end{eqnarray}
with $(W^a_\mu,g)$ and $(B_\mu, g^\prime)$ being, respectively, the $SU(2)_L$ and $U(1)_Y$ gauge fields and couplings, and $\tau^a=\sigma^a/2$ with $\sigma^a\,(a=1,2,3)$ the Pauli matrices. According to Eqs.~(\ref{fields}), (\ref{kinetic1}) and (\ref{kinetic2}), the tree-level masses of the $W$ and $Z$ gauge bosons are
\begin{eqnarray}
m^2_W=\frac{g^2}{4}(v^2_\phi+2v^2_\Delta),\;\; m^2_Z=\frac{g^2}{4\cos^2\theta_W}(v^2_\phi+4v^2_\Delta).
\end{eqnarray}
Aside from the SM contributions, both masses receive additional contributions from the triplet. As a consequence, the oblique parameter $\rho$ is modified and now reads
\begin{eqnarray}
\rho=\frac{m^2_W}{m^2_Z\cos^2\theta_W}=\frac{1+2x^2}{1+4x^2}\approx1-2x^2,
\end{eqnarray}
with $x=v_\Delta/v_\phi$. The current experimental value of $\rho$~\cite{Beringer:1900zz} imposes a strict constraint on its deviation from 1 and yields the upper bound $x\lesssim 0.01$, i.e., $v_\Delta\lesssim 2.46$ GeV. We will return to this later.
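The tree-level $\rho$ parameter and the resulting bound on $v_\Delta$ follow directly from the formula above; a minimal numerical check (illustrative only, inputs in GeV):

```python
def rho_tree(v_delta, v_phi=246.0):
    """Tree-level rho parameter of the Y=1 triplet model:
    rho = (1 + 2x^2) / (1 + 4x^2) with x = v_Delta / v_phi."""
    x = v_delta / v_phi
    return (1.0 + 2.0 * x**2) / (1.0 + 4.0 * x**2)

# x <~ 0.01 (v_Delta <~ 2.46 GeV) keeps rho within ~2e-4 of unity
print(1.0 - rho_tree(2.46))
```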
Although almost irrelevant to our later LHC studies, for completeness we still include the Yukawa interactions of the triplet field, which are crucial for generating neutrino masses in the type-II seesaw mechanism.~\footnote{In this paper we use this model as the benchmark completion of the simplified model.} They take the form
\begin{eqnarray} \label{yukawa}
-\mathcal{L}_Y &\supset& y_{ij} L^{T}_{i} \mathcal{C} i \tau_2 \Delta L_{j} + h.c.\nonumber \\
&=&y_{ij}\left[\nu^T_i \mathcal{C}P_L\nu_j\delta^0- \frac{1}{\sqrt{2}}(\nu^T_i \mathcal{C}P_L\ell_j-\ell^T_i \mathcal{C}P_L\nu_j)\delta^+ -\overline{\ell^C_i}P_L\ell_j\delta^{++}\right]
+h.c.~,~\,
\end{eqnarray}
where $y_{ij}\,(i,j=1,\,2,\,3)$ is an arbitrary symmetric complex matrix, $\mathcal{C}=i\gamma^0\gamma^2$ is the charge conjugation operator, and $L^T_i=(\nu_{iL},\ell_{iL})$ is a left-handed lepton doublet of the SM. After electroweak symmetry breaking, the Majorana neutrino mass terms are generated:
\begin{eqnarray*}
(M_\nu)_{ij}=\sqrt{2} y_{ij} v_\Delta~.
\end{eqnarray*}
To end this subsection, we give a quick recapitulation of the scalar mass spectrum. In addition to the three Nambu-Goldstone bosons $G^\pm$ and $G^0$, which are absorbed by the longitudinal components of the $W^\pm$ and $Z$ gauge bosons, the model has seven physical Higgs bosons ($H^{\pm\pm}, H^{\pm}, H^0, A^0$, and $h$). The doubly charged Higgs $H^{\pm\pm}$ is purely from the triplet ($H^{\pm\pm}=\Delta^{\pm\pm}$), while the other Higgs bosons are in general mixtures of the SM Higgs and triplet fields. Such mixings are proportional to $x$ and hence strongly suppressed. For simplicity, the masses of the triplet-like Higgs bosons are collected as
follows (neglecting $\mathcal{O}(v^2_\Delta/v^2_\phi)$ terms):
\begin{eqnarray}\label{spectum}
M_{H^{\pm \pm}}^2 &\approx& M^2_\Delta - \frac{1}{2}\lambda_5v^2_\phi~,~
\nonumber\\
M_{H^{\pm}}^2 &\approx &M^2_\Delta - \frac{1}{4}\lambda_5v^2_\phi ~,~ \nonumber\\
M_{H,A}^2 &\approx & M^2_\Delta~.~
\end{eqnarray}
So we can see that the quartic $\lambda_5$-term is responsible for the mass splittings, which satisfy the relations
\begin{eqnarray}\label{mass}
M_{H^{\pm \pm}}^2-M_{H^{\pm}}^2=M_{H^{\pm}}^2-M_{H,A}^2=-\frac{1}{4}\lambda_5v^2_\phi.
\end{eqnarray}
There thus exist three patterns for the mass spectrum of the triplet-like Higgs bosons. When $\lambda_5=0$, all the triplet-like Higgs bosons are degenerate in mass. In the case $\lambda_5>0$ ($\lambda_5<0$), the resulting mass ordering becomes $M_{H,A}>M_{H^\pm}>M_{H^{\pm\pm}}$ ($M_{H,A}<M_{H^\pm}<M_{H^{\pm\pm}}$).
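The three mass patterns can be verified directly from Eq.~(\ref{spectum}); the following sketch (masses in GeV, tree level, neglecting $\mathcal{O}(v_\Delta^2/v_\phi^2)$ terms) illustrates the orderings:

```python
import math

def triplet_masses(m_delta, lam5, v_phi=246.0):
    """Tree-level triplet-like Higgs masses (GeV) from Eq. (spectum):
    M_H++^2 = M_D^2 - lam5 v^2/2, M_H+^2 = M_D^2 - lam5 v^2/4, M_HA = M_D."""
    m_hpp = math.sqrt(m_delta**2 - 0.5 * lam5 * v_phi**2)
    m_hp = math.sqrt(m_delta**2 - 0.25 * lam5 * v_phi**2)
    return m_hpp, m_hp, m_delta

print(triplet_masses(140.0, 0.0))    # degenerate spectrum
print(triplet_masses(140.0, -0.1))   # M_H++ > M_H+ > M_H,A
```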
\subsection{Possible constraints}
There are various possible theoretical and experimental constraints on the triplet Higgs model or the type-II seesaw
model~\cite{Arhrib:2011uy,Chun:2012jw,Aoki:2012jj,Queiroz:2014zfa}. Here, we only include the constraints
most relevant to our study.
\subsubsection{On the magnitude of $v_\Delta$}
As discussed above, the VEV $v_\Delta\neq 0$ modifies the tree-level electroweak $\rho$ parameter to $\rho\approx 1-2 v^2_\Delta/v^2_\phi$. However, the mass splittings between the components of $\Delta$ induce an additional positive contribution to $\rho$, proportional to the splitting, which can partially cancel the effect of $v_\Delta$; for example, the perturbativity limit $\lambda_5\lesssim 3$ then allows $v_\Delta\lesssim 7 ~{\rm GeV}$ for $m_H=120~{\rm GeV}$ \cite{Melfo:2011nx}. Conservatively, we take the upper bound $v_\Delta \lesssim 2 ~{\rm GeV}$, corresponding to $x=v_\Delta/v_\phi\lesssim 0.01$.
Lepton flavor violating processes involving $\mu$ and $\tau$ provide the strongest constraint on the $y_{ij}$ and thus on $v_\Delta \sim (M_\nu)_{ij}/y_{ij}$. To accommodate the current experimental constraints, there is a lower limit $ v_\Delta M_{H^{\pm\pm}}\gtrsim 100 ~ {\rm eV\, GeV}$~\cite{Akeroyd:2009nu}, which is quite loose. A related constraint comes from the neutrino masses. If the triplet Yukawa coupling is the unique origin of neutrino mass, the current observations from neutrino oscillation experiments and cosmological bounds give \cite{Beringer:1900zz}:
\begin{eqnarray}\label{neutrinomass}
m_\nu=\sqrt{2}y_{ij} v_\Delta\lesssim 10^{-10} ~{\rm GeV}~.
\end{eqnarray}
For our purpose, a larger $v_\Delta$ is of interest. For $v_\Delta=1\, {\rm GeV}$ one then needs an extremely small $y_{ij}\lesssim 10^{-10}$ to accommodate the correct neutrino mass scale. This is not a concern for the simplified model, which is not a model of neutrino physics: beyond the simplified model there may be other sources of neutrino masses, in which case the triplet Yukawa couplings can even be absent altogether. In summary, $v_\Delta$ can be as large as 1 GeV without spoiling any constraint; moreover, the Yukawa couplings $y_{ij}$ can be made arbitrarily small in order to suppress the direct decay into a lepton pair.
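As a quick consistency check of the quoted numbers, the Yukawa coupling required to reproduce the neutrino mass scale for a given $v_\Delta$ follows from Eq.~(\ref{neutrinomass}):

```python
import math

def yukawa_from_numass(m_nu, v_delta):
    """Invert m_nu = sqrt(2) y v_Delta, assuming the triplet Yukawa
    saturates the neutrino mass; inputs in GeV."""
    return m_nu / (math.sqrt(2.0) * v_delta)

# v_Delta = 1 GeV and m_nu ~ 1e-10 GeV require y ~ 7e-11
print(yukawa_from_numass(1e-10, 1.0))
```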
\subsubsection{Experimental bounds on $M_{H^{\pm\pm}}$}
The mass of the doubly charged Higgs, $M_{H^{\pm\pm}}$, has been constrained by past experiments such as SLC and LEP, independently of the decay modes of $H^{\pm\pm}$. At LEP, the width of the $Z$ boson was measured precisely. When $M_{H^{\pm\pm}}$ is less than half of the $Z$ boson mass, the new decay mode $Z \rightarrow H^{\pm\pm}H^{\mp\mp}$ opens, and the total $Z$ width receives a sizable contribution from the partial width
\begin{eqnarray}
\Gamma(Z \rightarrow H^{\pm\pm}H^{\mp\mp}) = \frac{G_Fm_Z^3}{6\pi\sqrt{2}}(1-2s_W^2)^2\left(1-\frac{4M_{H^{\pm\pm}}^2}{m_Z^2}\right)^{\frac{3}{2}}~.~
\end{eqnarray}
On the other hand, from \cite{Beringer:1900zz} we know
\begin{eqnarray}
\Gamma_Z^{NP} < 3 ~ \textrm{MeV}~ (95\% \textrm{CL})~,
\end{eqnarray}
and this puts a stringent constraint on the mass of the doubly charged scalar: the lower bound $M_{H^{\pm\pm}} > 42.9$ GeV at the $95\%$ confidence level.
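The quoted LEP bound can be reproduced approximately by scanning the partial width formula above against the 3 MeV limit; the sketch below uses round-number inputs for $G_F$, $m_Z$ and $\sin^2\theta_W$, so it agrees with the quoted 42.9 GeV only to within a few hundred MeV:

```python
import math

GF, MZ, SW2 = 1.166e-5, 91.19, 0.231  # GeV^-2, GeV, sin^2(theta_W)

def gamma_z_to_hpp(m_hpp):
    """Partial width Gamma(Z -> H++ H--) in GeV; zero above threshold."""
    if 2.0 * m_hpp >= MZ:
        return 0.0
    beta = 1.0 - 4.0 * m_hpp**2 / MZ**2
    return (GF * MZ**3 / (6.0 * math.pi * math.sqrt(2.0))
            * (1.0 - 2.0 * SW2)**2 * beta**1.5)

# scan for the mass where the width drops below the 3 MeV limit on new Z decays
bound = next(m / 100.0 for m in range(3000, 4600)
             if gamma_z_to_hpp(m / 100.0) < 3e-3)
print(bound)  # ~43 GeV, consistent with the quoted 42.9 GeV bound
```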
A mass bound on $M_{H^{\pm\pm}}$ can also be obtained from direct searches at the LHC. The ATLAS Collaboration has searched for doubly charged Higgs bosons via pair production in the SSDL channel. Based on a data sample corresponding to an integrated luminosity of 4.7 $\text{fb}^{-1}$ at $\sqrt{s} = 7$ TeV, masses below 409 GeV, 375 GeV and 398 GeV have been excluded for the $e^{\pm}e^{\pm}$, $e^{\pm}\mu^{\pm}$ and $\mu^{\pm}\mu^{\pm}$ final states, respectively, assuming a branching ratio of $100\%$ for each \cite{ATLAS:2012hi}. Besides pair production, the CMS Collaboration also considered the associated production $pp \rightarrow H^{\pm\pm}H^{\mp}$, in which the masses of $H^{\pm\pm}$ and $H^{\mp}$ are assumed to be degenerate. Using final states with three or more isolated charged leptons, upper limits on $M_{H^{\pm\pm}}$ were derived under specific assumptions on the branching ratios \cite{Chatrchyan:2012ya}. However, other decay modes of $H^{\pm\pm}$, such as di-$W$, become dominant under certain conditions. A preliminary search for the doubly charged Higgs boson in this channel was studied in Ref.~\cite{Kanemura:2013vxa}: by fully utilizing the ATLAS SSDL search (with $4.7~ \text{fb}^{-1}$ of integrated luminosity at $\sqrt{s} = 7$ TeV), a lower limit of 60 GeV at the $95\%$ C.L. was obtained, and with an integrated luminosity of $20~ \text{fb}^{-1}$ the bound was estimated to reach 85 GeV. Since the treatment of backgrounds and signals in the $WW^*$ channel differs in principle from the pure SSDL case, a detailed analysis of this topic is necessary. In this article, we concentrate on this scenario and elaborate the search for such an $H^{\pm\pm}$ at the LHC.
\section{Production and decay of $H^{\pm\pm}$}\label{property}
\subsection{Production}
The production of the doubly charged scalar $H^{\pm\pm}$ has been widely studied at hadron colliders such as the Tevatron and the LHC; for an elaborate discussion of this topic, see \cite{Rentala:2011mr}.
The main production processes for $H^{\pm\pm}$ at the LHC are pair production via the Drell-Yan process $p p \rightarrow \gamma^\ast /Z \rightarrow H^{\pm\pm} H^{\mp\mp}$ and the associated production $p p \rightarrow W^{\pm\ast} \rightarrow H^{\pm\pm} H^{\mp}$. Note that these processes depend only on the mass of the doubly charged Higgs boson $M_{H^{\pm\pm}}$ and are independent of $v_\Delta$, even when it is as large as 1 GeV. The next-to-leading-order (NLO) QCD corrections to the pair production increase the cross section by about $20-30\%$ \cite{Muhlleitner:2003me}. Moreover, the two-photon fusion contribution to the pair production has been found to be comparable to the NLO QCD corrections to the Drell-Yan process \cite{Han:2007bk}. To be conservative, we only consider the leading-order (LO) cross section in this work.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.5]{prod.eps}
\caption{\label{xsec} The leading order production cross sections for $H^{\pm\pm}H^{\mp\mp}$, $H^{\pm\pm}H^{\mp}$ and $H^{\pm}H^{\mp}$ at
the 14 TeV LHC. We assume the degenerate mass of $H^{\pm\pm}$ and $H^{\pm}$ for $H^{\pm\pm}H^{\mp}$ associate production. }
\end{center}
\end{figure}
In Fig.~\ref{xsec}, we show the LO production cross sections for the corresponding charged Higgs pair production processes at the 14 TeV LHC. The production rate ranges from a few fb to a few pb in the mass range [50, 500] GeV.
We also show in this figure the rate of the $H^{\pm\pm}H^{\mp}$ associated production, assuming mass degeneracy between $H^{\pm\pm}$ and $H^\pm$, which is a few times larger than that of $H^{\pm\pm}H^{\mp\mp}$ pair production. Hereafter, we only consider the $H^{\pm\pm}H^{\mp\mp}$ pair production, as the more conservative choice.
\subsection{Decays}
In the simplified model given in the previous section, the possible decay modes of a light $H^{\pm\pm}$ considered in this paper are: (1) the lepton-number violating (LNV) decay mode $H^{\pm\pm}\rightarrow \ell^\pm_i\ell^\pm_j$; (2) the $WW^*$ decay mode $H^{\pm\pm}\rightarrow W^\pm W^{\pm\ast} \rightarrow W^\pm f\bar{f} $; (3) the cascade decay mode $H^{\pm\pm}\rightarrow H^\pm W^{\pm\ast} \rightarrow H^\pm f\bar{f}$. The corresponding decay rates can be found in Appendix \ref{DHdecay}. In particular, in models with the type-II seesaw mechanism, the LNV decay rates are proportional to the Yukawa couplings squared and hence inversely proportional to $v_\Delta^2$, since $y\sim M_\nu/v_\Delta$. In contrast, the $WW^*$ rate grows with $v_\Delta$: the larger $v_\Delta$ is, the more important the $WW^*$ mode becomes, with a corresponding decrease of the LNV modes. As for the cascade decay mode, it is induced by the gauge interactions and is highly sensitive to the mass splitting $\Delta M= M_{H^{\pm\pm}}-M_{H^{\pm}}$.
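The competition between the LNV and $WW^*$ modes can be illustrated with a schematic scaling argument; the overall prefactors are set to unity below, so only the $v_\Delta$ dependence is meaningful (the full width expressions are in Appendix \ref{DHdecay}):

```python
def relative_widths(v_delta, m_nu=5e-11):
    """Schematic scaling only: Gamma_ll ~ y^2 ~ (m_nu/v_Delta)^2,
    Gamma_WW ~ v_Delta^2 (GeV units, prefactors dropped).
    Returns the (BR_ll, BR_WW) fractions under these assumptions."""
    gamma_ll = (m_nu / v_delta)**2
    gamma_ww = v_delta**2
    total = gamma_ll + gamma_ww
    return gamma_ll / total, gamma_ww / total

print(relative_widths(1.0))  # WW* dominates for v_Delta ~ 1 GeV
```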
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=1.0]{BRvd.eps}
\caption{\label{BRvd} The branching ratios of the doubly charged Higgs boson versus $v_\Delta$ for $M_{H^{\pm\pm}}=100~{\rm GeV}$ (dashed lines) and $M_{H^{\pm\pm}}=150~{\rm GeV}$ (solid lines). The red and blue lines are for the LNV decays and the $WW^*$ mode,
respectively.}
\end{center}
\end{figure}
To be quantitative, the above facts are demonstrated in Fig.~\ref{BRvd} and Fig.~\ref{BRmdh}. Fig.~\ref{BRvd} shows that, for a degenerate mass spectrum of the triplet-like Higgs bosons, a relatively large $v_\Delta=1~{\rm GeV}$ makes the $WW^*$ mode the dominant decay channel of $H^{\pm\pm}$ when $M_{H^{\pm\pm}}$ lies in the mass range [100, 150] GeV. The degeneracy is lifted for a sizable $\lambda_5$; see Eq.~(\ref{spectum}). For $\lambda_5<0$, i.e., $\Delta M= M_{H^{\pm\pm}}-M_{H^{\pm}}>0$, the cascade decays of $H^{\pm\pm}$ open up. We show all the possible decay modes of $H^{\pm\pm}$ in Fig.~\ref{BRmdh}. For a relatively light $H^{\pm\pm}$, a mass splitting $\Delta M= 5~{\rm GeV}$ makes the cascade decays rapidly overcome the $WW^*$ mode and become the dominant channel. Again, in the type-II seesaw, due to the relatively large $v_\Delta$ chosen here, the branching ratios of the LNV decays of $H^{\pm\pm}$ are always vanishingly small.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=1.0]{BRmdh.eps}
\caption{\label{BRmdh} The branching ratios of the doubly charged Higgs boson versus $M_{H^{\pm\pm}}$ for $\Delta M=2~{\rm GeV}$ (solid lines) and $\Delta M=5~{\rm GeV}$ (dashed lines) with $v_\Delta=1~{\rm GeV}$. The yellow, red, and blue
lines are for the cascade decays, the di-$W$ mode, and the LNV decays, respectively.}
\end{center}
\end{figure}
A comment on the associated production $H^{\pm\pm}H^{\mp}$, with $H^\pm$ subsequently decaying into $H^{\pm\pm}$, is in order here. The kinematic distributions of $H^{\pm\pm}$ can be similar to those from the direct $H^{\pm\pm} H^{\mp\mp}$ pair production as long as the mass splitting $\Delta M$ stays small. Moreover, as can be seen from Fig.~\ref{xsec}, the cross section of the associated production is about 2 times larger than that of the pair production. Thus, when $\Delta M$ is small, the extra contribution from the associated production can help the discovery of $H^{\pm\pm}$ (while still being safe from the current LHC constraints mentioned later). Even though we only consider the direct $H^{\pm\pm}H^{\mp\mp}$ pair production in the following discussion, from a technical viewpoint our results can be generalized to include the associated production by a simple rescaling.
\section{The LHC prospect of light $H^{\pm\pm}$}\label{LHCsearch}
In this Section, we first collect the current LHC searches for $H^{\pm\pm}$ using the SSDL signature and find that the light region of $H^{\pm\pm}$ in our scenario has not been probed yet. We then conduct a detailed study of the discovery prospects for a light $H^{\pm\pm}$ at the future LHC. It is found that the 14 TeV LHC is able to cover the whole mass region of the light $H^{\pm\pm}$ using the SSDL signature, aided by multi-jets and missing energy.
\subsection{The status of $H^{\pm\pm}$ in the face of the SSDL searches}
The searches for $H^{\pm\pm}$ by the ATLAS and CMS Collaborations are both based on its LNV decays. However, when $v_\Delta$ is significantly large and the mass spectrum of the triplet-like Higgs bosons is nearly degenerate, $H^{\pm\pm}$ mainly decays into $WW^*$. The search for a light $H^{\pm\pm}$ via the $WW^*$ channel at the LHC, using the SSDL signature, is the main aim of this work.
\begin{itemize}
\item Searching for new physics through the SSDL signature has been done before~\cite{ATLAS:2013tma,ATLAS:2012sna,Chatrchyan:2012ira,ATLAS:2012ai, CMS:2012xza}, and very strong bounds were derived. However, besides the presence of SSDL, those searches required either a number of $b$-tagged jets, very large missing transverse energy $E^{miss}_T$, or very large $H_T = \sum_i p_T(j_i)+E^{\rm miss}_T$, the scalar sum of the transverse momenta of jets and $E^{miss}_T$. The light $H^{\pm\pm}$ decay produces neither bottom quarks nor large $E^{miss}_T$/$H_T$, so those bounds are easily evaded. The latter fact can also be seen from the top panels of Fig.~\ref{distribution}: in the mass range we consider, $E^{miss}_T \lesssim 100$ GeV and $H_T \lesssim 400$ GeV.
\item Strong bounds ($\sim 400$ GeV) on $M_{H^{\pm\pm}}$ have also been derived for the case where $H^{\pm\pm}$ decays directly into SSDL~\cite{ATLAS:2012hi,Chatrchyan:2012ya}. But in our scenario the SSDL signature comes from the decay products along the $WW^*$ chain, so the dilepton invariant mass $m_{ll}$, which peaks around the $H^{\pm\pm}$ mass and is thus a very efficient cut for $H^{\pm\pm} \to l^{\pm}_il^{\pm}_j$, no longer works well here; see the middle left panel of Fig.~\ref{distribution}. In addition, the SSDL rate in our scenario is suppressed by the $W$ boson branching ratio. Therefore, there is no bound from these searches either.
\item Recently, the CMS Collaboration has searched for SSDL signals with jets in the low-$E^{miss}_T$ and low-$H_T$ region, both with and without $b$-tagging~\cite{Chatrchyan:2013fea}. First, from the CMS data, we estimate the upper limit on the number of new physics events in each signal region (SR), $N_i^{max}$. Then, following a procedure similar to that of~\cite{Guo:2013asa}, we recast the analysis of~\cite{Chatrchyan:2013fea} and calculate our signal events in each SR, $N_i^{new}$. Finally, we define the ratio $R\equiv \max_i\{N_i^{max}/N_i^{new}\}$, which quantifies the sensitivity of the CMS search to our signal process at the 8 TeV LHC: for our model to be excluded, the cross section would have to be $R$ times larger than the model prediction. In the first row of Table~\ref{sig}, we list the value of $R$ for each $M_{H^{\pm\pm}}$. It is seen that $R\sim 4$, i.e., the production rates would have to be about 4 times larger to be excluded. Thereby, the benchmark points are free from this constraint even when the contribution from the associated production is taken into account.
\end{itemize}
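The recast described in the last item can be summarized in a one-line function; the per-SR numbers below are purely hypothetical, for illustration only:

```python
def recast_ratio(n_max, n_new):
    """R = max_i(N_i^max / N_i^new) over signal regions, as defined in the text:
    the quoted R ~ 4 means the signal rate would have to grow by roughly
    that factor before the recast search becomes constraining."""
    return max(nm / nn for nm, nn in zip(n_max, n_new) if nn > 0)

# hypothetical per-SR event counts, not taken from the CMS analysis
print(recast_ratio([12.0, 8.0, 20.0], [3.0, 2.0, 4.0]))  # -> 5.0
```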
\subsection{Backgrounds}
The backgrounds of the SSDL signature can be divided into three categories: real SSDL from rare SM processes, non-prompt lepton backgrounds, and opposite-sign dilepton events with charge misidentifications.
The non-prompt lepton backgrounds, which are dominant for SSDL, arise from events either with jets misidentified as leptons or with leptons from heavy flavor quark decays (HF fakes). To suppress the non-prompt backgrounds caused by jet misidentification, in our simulation we require the final-state leptons to be both ``tight"~\cite{Aad:2011mk} and isolated, where isolation means that the scalar sum of the transverse momenta of calorimeter deposits within a cone of $R=0.3$ around the lepton, excluding the lepton itself, must be less than $16\%$ of the lepton's $p_T$. We find that the rate of jets misidentified as leptons after these requirements is highly suppressed, below $\mathcal{O}(10^{-6})$. Thus in the following analysis we only need to consider the non-prompt background from heavy flavor quark decays, concretely the semi-leptonic $t\bar{t}$ events with a non-prompt lepton from $b$-quark decay. With our detector setup, the probability that an isolated lepton is produced in a $b$-quark decay is $\sim \mathcal{O}(0.1\%)$. The dominant SM processes that generate SSDL and their production cross sections at the 14 TeV LHC are listed in Table~\ref{smxsec}. The NLO production cross sections, except for $t\bar{t}Z$ and $W^\pm W^\pm jj$, are calculated with MCFM-6.6~\cite{Campbell:2011bn,Campbell:2012dh}. The NLO cross section of $t\bar{t}Z$ is taken from Refs.~\cite{Lazopoulos:2008de,Hirschi:2011pa,Garzelli:2011is,Kardos:2011na,Garzelli:2012bn}.
As for $W^\pm W^\pm jj$, a conservatively estimated constant $K$-factor of 1.5 is applied to its LO cross section calculated with MadGraph5~\cite{Alwall:2011uj}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|} \hline
Processes & $\sigma /$pb \\ \hline
$t\bar{t}$ & 843.338 \\ \hline
$W^+Z$ & 29.82 \\ \hline
$W^-Z$ & 18.33 \\ \hline
$ZZ$ & 16.12 \\ \hline
$W^+t\bar{t}$ & 0.507 \\ \hline
$W^-t\bar{t}$ & 0.262 \\ \hline
$Zt\bar{t}$ & $1.09$ \\ \hline
$W^+W^+ j j$ & $0.2377 \times 1.5$ \\ \hline
$W^-W^- j j$ & $0.1037 \times 1.5$ \\ \hline
\end{tabular}
\end{center}
\caption{Production cross sections of the background processes at the 14 TeV LHC.}
\label{smxsec}
\end{table}
Let us comment on the other subdominant backgrounds. The first category is real SSDL from rare SM processes involving the Higgs boson: $tth$ (0.6 pb), $Wh$ (1.5 pb) and $Zh$ (0.8 pb), where the Higgs boson decays into $WW^*$ and $ZZ^*$ with branching ratios of 21\% and 2.5\%, respectively. Among these, the most important background is $W_l(h\rightarrow W_lW_j)$. Its production rate is similar to that of $W^\pm W^\pm j j$, whose contribution to our signal region is found to be small. The cross sections of $tt(h\rightarrow V_lV_j)$, $W_l(h\rightarrow Z_lZ_j)$ and $Z_l(h\rightarrow W_lW_j)$ are at least one order of magnitude smaller than the corresponding backgrounds with similar final states that are already incorporated in our work, e.g., $ttV$ and $WZ$. Therefore, these backgrounds can be neglected. The second category is the background due to charge misidentification, which is dominated by Drell-Yan processes and leptonic decays of $t\bar{t}$ and $W^+W^-$, in which an electron undergoes hard bremsstrahlung with subsequent photon conversion. As pointed out in~\cite{sslbgrate}, this kind of background usually contributes less than 5\% of the total and will thus also be neglected.~\footnote{This background can also be suppressed by the isolated lepton requirement.}
\subsection{Event generation and analysis}
The signals and backgrounds are generated by MadGraph$5\_$v$1\_5\_11$~\cite{Alwall:2011uj}, where Pythia6~\cite{Sjostrand:2006za} and Delphes$\_3.0.9$~\cite{deFavereau:2013fsa} have been packed to implement parton shower and detector simulation. We implement the simplified model for doubly charged Higgs in FeynRules~\cite{Alloul:2013bka}, generating the UFO format of this model for MadGraph. Some important details in our simulation are summarized here.
First, the matrix elements of the signals and of all backgrounds, except for $W^\pm W^\pm j j$, are generated with up to 2 additional jets.
Next, we use the MLM matching adopted in MadGraph5 to avoid double counting between the matrix-element and parton-shower generation of additional jets.
Finally, when generating backgrounds from rare SM processes involving weak gauge bosons, we let the bosons decay at the parton level (in this way the helicity information is retained).
The resulting cross sections are obtained by multiplying the cross sections in Table~\ref{smxsec} by the corresponding branching ratios. Note that only the gauge bosons decaying into $e/\mu$ contribute to the backgrounds.
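The rescaling of the cross sections in Table~\ref{smxsec} by leptonic branching ratios can be sketched as follows (the branching-ratio values below are approximate PDG numbers for $e/\mu$ only). As a check, $W^+Z$ with both bosons decaying leptonically reproduces the $W_l^+Z_l$ event count at 10 fb$^{-1}$ in Table~\ref{background} to within a fraction of a percent:

```python
BR_W_LNU = 0.213   # BR(W -> e/mu + nu), approximate
BR_Z_LL = 0.0673   # BR(Z -> ee or mumu), approximate

def effective_xsec(sigma_pb, n_w_leptonic=0, n_z_leptonic=0):
    """Rate (pb) after requiring the listed gauge bosons to decay to e/mu (+nu)."""
    return sigma_pb * BR_W_LNU**n_w_leptonic * BR_Z_LL**n_z_leptonic

# W+Z from Table 1 (29.82 pb), both decaying leptonically:
sigma_eff = effective_xsec(29.82, n_w_leptonic=1, n_z_leptonic=1)
print(sigma_eff * 1e4)  # events at 10 fb^-1 (1 pb * 10 fb^-1 = 10^4 events)
```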
\begin{figure}
\begin{center}
\includegraphics[scale=0.37]{met.eps}
\includegraphics[scale=0.37]{ht.eps}
\includegraphics[scale=0.37]{mll.eps}
\includegraphics[scale=0.37]{mjjj.eps}
\includegraphics[scale=0.37]{drll.eps}
\includegraphics[scale=0.37]{dphillmis.eps}
\caption{\label{distribution} The distributions of the kinematic variables for backgrounds and signals after the SSDL cut. The numbers of signal events have been magnified by the factors indicated in the corresponding panels in order to highlight the signal distributions.}
\end{center}
\end{figure}
With the simulated background and signal events in hand, we apply the following event selection procedure:
\begin{itemize}
\item Events should contain exactly one SSDL pair; events with additional leptons are vetoed. The leptons are
required to satisfy
\begin{align}
p_{T,1/2} > 10 ~\text{GeV}, \quad |\eta| < 2.5.
\end{align}
\item We require at least one jet and moreover no $b$-tagged jets\footnote{In the simulation, we take the $b$-tagging efficiency 0.7~\cite{ATLAS:2012aoa}.} in the signal events. The jets are required to have
\begin{align}
p_T>20~ \text{GeV},\quad |\eta|<4.5~.~
\end{align}
\item The leptonically decaying $W$ bosons in the signal give moderate missing energy, whereas the hadronic decay products of $H^{\pm\pm}$ give an $H_T$ whose magnitude scales with the $H^{\pm\pm}$ mass. Thus we require
\begin{align}
E^{miss}_T > 20~ \text{GeV},\quad H_T > 100~\text{GeV}~.~
\end{align}
\item The invariant mass of SSDL pair should be smaller than $H^{\pm\pm}$ mass, i.e.,
\begin{align}\label{ll:cut}
m_{ll}<75 ~\text{GeV}
\end{align}
\item Since $H^{\pm\pm}$ is light, it can be fairly boosted when it is produced at the 14 TeV LHC. As a result, the SSDL pair and the missing transverse momentum will tend to align with each other. Therefore, we impose cuts
\begin{align}
\Delta R(l_1, l_2) < 1.5,\quad
|\Delta \phi(ll, p^{miss}_T)| < 1.5~,~
\end{align}
where $\Delta R(l_1, l_2)$ and $\Delta \phi(ll, p^{miss}_T)$ denote, respectively, the angular separation between the two leptons and the azimuthal difference between the SSDL system and the missing transverse momentum.
\item In the $H^{\pm\pm}$ decays, the two hadronically decaying $W$ bosons produce many jets, especially in the larger $M_{H^{\pm\pm}}$ region. We require at least three jets in the signal events, with the invariant mass of the three leading jets smaller than 150 GeV.
\end{itemize}
The cut efficiencies for backgrounds and signals are listed in Table~\ref{background} and Table~\ref{sig}, respectively.
Since our signal events are generated through the process $p p \rightarrow H^{++}(\to W^+ l \nu) H^{--}(\to W^{-} j j)$, the event numbers in the 3rd row of Table~\ref{sig} are calculated as $\mathcal{L} \times \sigma(H^{++}H^{--}) \times Br(W \rightarrow \text{hadrons}) \times Br(W \rightarrow l\nu) \times 2 = 2.88 \times \sigma(H^{++}H^{--})$, where the factor of 2 accounts for either $W$ decaying leptonically, the integrated luminosity is $\mathcal{L}=10$ fb$^{-1}$, and the cross section (in fb) is shown in Fig.~\ref{xsec}.
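The normalization of the signal event numbers can be reproduced as follows (the branching ratios are approximate PDG values, with $l=e,\mu$):

```python
BR_W_HAD, BR_W_LNU = 0.676, 0.213  # BR(W -> hadrons), BR(W -> e/mu + nu)

def signal_events(sigma_fb, lumi_fb=10.0):
    """N = L * sigma(H++H--) * BR(W->had) * BR(W->lnu) * 2,
    the factor 2 accounting for either W decaying leptonically."""
    return 2.0 * lumi_fb * sigma_fb * BR_W_HAD * BR_W_LNU

print(signal_events(1.0))  # ~2.88 events per fb of pair-production cross section
```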
We make some observations from these two tables.
\begin{itemize}
\item As expected, the SSDL cut is the most efficient one for suppressing the huge $t\bar{t}$ background, which yields SSDL through heavy flavor quark decays. Even though the SSDL requirement suppresses $t\bar{t}$ by more than three orders of magnitude, it remains the dominant background for the SSDL signal because of its large production rate.
\item After the SSDL cut, a $b$-jet veto is imposed to further reduce the backgrounds. We also apply the well-studied $E^{miss}_T$ and $H_T$ cuts for comparison, even though they show only weak discriminating power in the small-$M_{H^{\pm\pm}}$ region. Additionally, the mild $E^{miss}_T$ and $H_T$ cuts help suppress the non-prompt QCD background where jets fake leptons.
\item Since all the signal benchmark points have a very small SSDL invariant mass, the signal is hardly affected by the cut $m_{ll}< 75$ GeV, while all backgrounds are reduced by a factor of a few.
\item Another feature of the signal process, the alignment of the SSDL pair, can also substantially improve the signal significance. In the background events, the SSDL usually comes from the decays of two different mother particles, so the two leptons tend to have a relatively large azimuthal angle difference. In contrast, the lightness of $H^{\pm\pm}$ ensures that the SSDL pair and the missing transverse momentum align with each other, as can be seen from the $\Delta \phi(ll, p^{miss}_T)$ distribution shown at the bottom of Fig.~\ref{distribution}.
\item Finally, as seen in the middle right panel of Fig.~\ref{distribution}, the backgrounds either have fewer than three jets (di-boson backgrounds) or have a relatively large invariant mass of the three leading jets ($t\bar{t}$ background). So after we require at least three jets with the invariant mass of the three leading jets smaller than 150 GeV ($N_j>2$, $m_{jjj}< 150$ GeV), all the backgrounds are suppressed by an order of magnitude, while the signals are only reduced by a factor of a few.
\end{itemize}
Increasing $M_{H^{\pm\pm}}$ yields two competing effects on the cuts. On one hand, the decay products, both leptons and jets, of a heavier $H^{\pm\pm}$ tend to be more energetic, and consequently one has a higher SSDL rate and a better sensitivity after the $N_j >2$ cut. On the other hand, a larger $M_{H^{\pm\pm}}$ also renders $m_{ll}$ larger, which makes the cut in Eq.~(\ref{ll:cut}) less efficient; moreover, the angular-difference cuts also become slightly weaker for larger $M_{H^{\pm\pm}}$, simply because the $H^{\pm\pm}$ is less boosted.
To have an impression on the discovery potential, we calculate the signal significance
\begin{align}
\sigma=S/\sqrt{B+(\beta B)^2}~,~
\end{align}
in which we have assumed a Poisson statistical uncertainty $\sqrt{B}$
and a systematic uncertainty $\beta=5\%$.\footnote{Because the number of background events in our analysis is very small, the statistical uncertainty is around 35\%; a systematic uncertainty of up to $\sim \mathcal{O}(10\%)$ does not affect our results much.}
The signal significances for all benchmark points are given in the last row of Table~\ref{sig}. From these we conclude that an $H^{\pm\pm}$ in the whole region of $100-150$ GeV can be discovered at the 14 TeV LHC with 10-30 fb$^{-1}$ of integrated luminosity.
We choose the cuts such that our search is the most conservative over the whole mass range of interest. For a specific benchmark point, the cuts can be further optimized to obtain a better sensitivity. For example, for a heavier $H^{\pm\pm}$ one can lower the $m_{ll}$ cut in Eq.~(\ref{ll:cut}) to gain in significance; for $M_{H^{\pm\pm}}=100$ GeV, the $m_{jjj}$ cut can even be dropped, in which case the significance reaches 5.3$\sigma$.
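The significance formula can be evaluated directly; in the sketch below the background yield $\approx 8.2$ is the sum of the last row of Table~\ref{background}, while the signal yield is a hypothetical placeholder:

```python
import math

def significance(s, b, beta=0.05):
    """S / sqrt(B + (beta*B)^2): Poisson statistics plus a flat
    fractional systematic uncertainty beta on the background."""
    return s / math.sqrt(b + (beta * b)**2)

# total background after all cuts (sum of the last row of the cut-flow table)
B_TOT = 8.2
print(significance(s=20.0, b=B_TOT))  # hypothetical signal yield, for illustration
```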
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
&$t\bar{t}$ & $W_l^{+}Z_l$ & $W_l^{-}Z_l$ & $Z_lZ_l$ & $t\bar{t}W_l^{+}$ & $t\bar{t}W_l^{-}$ & $t\bar{t}Z_l$ & $W_l^{+}W_l^{+}jj$ & $W_l^{-}W_l^{-}jj$ \\ \hline
Events Number & 8433380 & 4278.0 & 2629.7 & 729.9 & 1080.9 & 558 & 733 & 162.1& 70.7 \\ \hline
2SSL & 1978.6 & 499.7 & 314.1 & 56.5 & 88.4 & 52.4 & 35.7& 56.1 & 26.1 \\ \hline
$N_j >0$, $N_b$=0 & 698.4 & 380.3 & 245.4 & 47.9 &14.7 & 8.0 & 5.8 & 53.5 & 24.7 \\ \hline
$E_T^{miss}>20$ & 639.1 & 336.3 & 214.0 & 17.2 & 14.0 & 7.7 & 5.3 & 50.7 & 22.7\\ \hline
$H_T>100$ GeV & 621.7 & 244.0 & 155.6 & 10.5 & 13.9 & 7.6 & 5.3 & 49.5 & 22.1 \\ \hline
$m_{ll}<$75 GeV & 367.3 & 102.2 & 58.6 & 5.5 & 4.5 & 2.3 & 1.7 & 14.2 & 5.1 \\ \hline
$\Delta R(l,l)<1.5$ & 137.2 & 49.3 & 29.2 & 2.9 & 2.2 & 1.4 & 1.1 & 6.2 & 2.7 \\ \hline
$\Delta \phi(ll,p^{miss}_T)<1.5$ & 74.9 & 16.6 & 8.9 & 0.7 & 1.0 & 0.4 & 0.4 & 2.3 & 0.8 \\ \hline
$N_j >2$, $m_{jjj} < 150$ GeV & 6.9 & 0.6 & 0.5 & 0.03 & 0.06 & 0.03 & 0 & 0.05 & 0.02 \\ \hline
\end{tabular}
\end{center}
\caption{\label{background} The cut flow for the backgrounds. The event numbers have been normalised to 10 fb$^{-1}$. $W_l$ and $Z_l$ represent the leptonic decays of the gauge bosons. }
\end{table}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
&100 & 110 & 120 & 130 & 140 & 150 \\ \hline
Ratio required to be excluded & 4.5 & 4.0 & 4.2 & 4.3 & 4.5 & 4.3 \\ \hline
Events Number & 2608 & 1864 & 1365 & 1024 & 786 & 612 \\ \hline
2SSL & 126.3 & 123.3 & 102.9 & 84.5 & 70.2 & 57.8 \\ \hline
$N_j >0$, $N_b$=0 & 114.0 & 112.9 & 94.7 & 78.1 & 64.6 & 53.1 \\ \hline
$E_T^{miss}>20$ & 104.1 & 103.7 & 87.5 & 72.4 & 60.8 & 50.4 \\ \hline
$H_T>100$ GeV & 95.5 & 95.0 & 82.5 & 69.5 & 59.2 & 49.4 \\ \hline
$m_{ll}<$75 GeV & 95.5 & 95.0 & 81.5 & 65.8 & 53.2 & 41.6 \\ \hline
$\Delta R(l,l)<1.5$ & 76.4 & 72.2 & 59.5 & 46.5 & 37.7 & 30.0 \\ \hline
$\Delta \phi(ll,p^{miss}_T)<1.5$ & 61.3 & 56.8 & 46.5 & 36.6 & 29.7 & 23.4 \\ \hline
$N_j >2$, $m_{jjj} < 150$ GeV & 11.2 & 16.3 & 14.4 & 13.6 & 11.3 & 8.8 \\ \hline
$\sigma$ & 3.89 & 5.64 & 4.98 & 4.70 & 3.91 & 3.04 \\ \hline
\end{tabular}
\end{center}
\caption{\label{sig} Cut flow for the signal benchmark points. The event numbers have been normalised to 10 fb$^{-1}$.
The first row shows
the factors by which the production rates would need to be enhanced for the benchmark points to be excluded by the CMS search~\cite{Chatrchyan:2013fea}.
In the last row, we show the corresponding signal significances for those benchmark points in our search.}
\label{bksectionbr}
\end{table}
To conclude this section, we comment on possible effects on the $H^{\pm\pm}$ search sensitivity if different triplet mass spectra are considered. As discussed before, for a non-degenerate spectrum with a proper mass splitting, one should include the $H^{\pm\pm} H^\mp$ associated production, which significantly increases the sensitivity if $H^{\pm\pm}$ is the lightest component of the triplet~\cite{H_cas}. In contrast, if $H^0$ is the lightest, the cascade decay of $H^{\pm\pm}$ opens up; we can then expect the sensitivity to deteriorate due to the decrease of Br$(H^{\pm\pm}\rightarrow WW^*)$~\cite{Chakrabarti:1998qy}.
\section{Conclusion and discussion}\label{conclusion}
The doubly charged Higgs boson $H^{\pm\pm}$ is predicted in many new physics models beyond the SM, and in this paper we carry out an LHC analysis of the $H^{\pm\pm}$ search based on a simplified model with a triplet scalar of hypercharge $\pm1$. LHC searches for $H^{\pm\pm}$ have been studied widely, but most of them focus on a relatively heavy ($\gtrsim 200$ GeV) $H^{\pm\pm}$ dominantly decaying into a same-sign dilepton pair. In this paper we focus on the complementary region, $m_W\lesssim M_{H^{\pm\pm}}\lesssim 2m_W$. Such a light $H^{\pm\pm}$ is hidden at current colliders as long as the $WW^*$ mode is dominant, which is possible even in the type-II seesaw mechanism when the triplet VEV is significantly large ($\sim$1 GeV) and the mass spectrum of the triplet-like Higgs bosons is nearly degenerate. To investigate the LHC prospects for $H^{\pm\pm}$ in that scenario, we performed detailed signal and background simulations, in particular including the non-prompt $t\bar{t}$ background, which is the dominant one but was ignored in previous studies. We found that $H^{\pm\pm}$ can be discovered at the 14 TeV LHC with 10-30 fb$^{-1}$ of integrated luminosity.
\section*{Acknowledgements}
We would like to thank Prof. Eung Jin Chun for helpful discussions.
This research was supported in part by the China Postdoctoral Science Foundation under
grant number 2013M530006 (KZ), by the Natural Science
Foundation of China under grant numbers 10821504, 11075194, 11135003, and 11275246, and by the National
Basic Research Program of China (973 Program) under grant number 2010CB833000 (JL, TL, and YL).
\newpage
\section{Introduction}
QCD depends on a remarkably small number of parameters. In
conventional discussions these include the strong coupling constant
$\alpha_s$, the quark masses $m_i$, and the topological parameter
$\Theta$. But unlike with QED, the theory of electrons and photons,
the connection with physical observables is rather non-trivial. In
electrodynamics, both the electric charge and the electron mass are
directly observable.
In QCD, because of asymptotic freedom
\cite{Politzer:1973fx,Gross:1973id, Gross:1973ju} and dimensional
transmutation\cite{Coleman:1973jx}, the strong coupling constant is
tied to the overall scale. That in turn is connected with the
particle masses. A natural scale to use is the mass of the proton;
once that is determined the strong coupling constant is no longer an
adjustable parameter. The quark masses are most directly tied to the
pseudo-scalar spectrum. They can be adjusted to give the correct
masses to the pions, kaons, etc.
The parameter $\Theta$ is perhaps the most subtle. When non-zero,
this gives rise to CP violation. It is thus natural to adjust it to
give the correct neutron electric dipole moment. Since such has not
been seen, we know that $\Theta$ is either zero or very small. This
is the strong CP problem.
At the heart of these issues is the confinement phenomenon. The
underlying quarks are not free particles. The connection to the
scattering of physical particles is subtle, and ambiguities, such as
the so called renormalons, can arise. These ambiguities are closely
related to defining $\Theta$, a non-trivial task since typical fields
in the path integral are non-differentiable. The present talk
summarizes some of these issues. Many of these topics are covered in
considerably more detail in Ref. \cite{Creutz:2018dgh}.
\section {Quark masses and {$\Theta$}}
Looking back at previous editions of this meeting, I see that many of
my contributions were closely related to this subject. This is
particularly true of my presentation at the 1996 edition in
Como \cite{Creutz:1996wg}. There I started by considering a naive
variable change on a quark field
\begin{equation}
\psi
\longrightarrow e^{i\gamma_5\Theta} \psi
\label{rotate}
\end{equation}
This will modify the usual mass term
\begin{equation}
\overline\psi \psi \longrightarrow \cos(\Theta)\ \overline\psi \psi
+i\sin(\Theta)\ \overline\psi \gamma_5 \psi
\label{variables}
\end{equation}
This suggests that it might be interesting to study
a generalized mass term
\begin{equation}
m\ \overline \psi\psi \rightarrow m_1\ \overline\psi\psi+im_5
\ \overline\psi\gamma_5\psi
\end{equation}
and explore how QCD depends on the two parameters {$m_1$} and
{$m_5$}. Were the above change of variables valid, one might expect
physics to only depend on the combination $\sqrt{m_1^2+m_5^2}$. We
will shortly see that this is false.
Our tool is an effective chiral Lagrangian. Working with two flavors,
consider the usual ``Mexican hat'' or ``wine bottle bottom'' potential
\begin{equation}
V=(\sigma^2+\vec\pi^2-v^2)^2 -m_1\sigma.
\label{effective}
\end{equation}
Here the mass term $m_1\overline\psi\psi \longrightarrow m_1\sigma$
tilts the sombrero, thereby putting the lowest energy state at
positive sigma and giving the pion a small mass $M_\pi^2\propto m_1$.
This is sketched in Fig.~\ref{sombrero}.
\begin{figure}
\centerline{ \includegraphics[height=.25\textheight]{v3.eps}}
\caption{The starting effective potential. The mass term results in the minimum being in the positive $\sigma$ direction.}
\label{sombrero}
\end{figure}
So in this picture, what does $m_5$ do? Indeed, a term like
\begin{equation}
im_5 \overline \psi\gamma_5\psi \longrightarrow m_5\eta
\end{equation}
does not appear in the above effective potential. The effect of $m_5$
is of higher order in the chiral theory. The first thing $m_5$ does
is to induce an expectation value for the eta field
\begin{equation}
\langle \eta \rangle \propto m_5/M_\eta^2.
\end{equation}
To proceed, note that the flavored chiral rotation $\psi \rightarrow
e^{i\tau_3 \gamma_5 \Theta}\psi$ mixes the eta field $i\overline\psi
\gamma_5 \psi\sim \eta$ with the isovector scalar field
$\overline\psi \tau_3 \psi \sim {a_0}_3$. The
combination $(\eta, \vec a_0)$ represents a chiral pair in direct
analogue with the original combination $(\sigma,\vec \pi).$
Flavored chiral symmetry is consistent with a coupling between
these fields of form
\begin{equation}
\sim \ \left(\pmatrix {\sigma & \vec \pi\cr}\cdot \pmatrix
{\eta\cr \vec a_0}
\right)^2.
\label{distortion}
\end{equation}
Here the square appears to preserve parity symmetry.
With an expectation for eta this gives an effective term
\begin{equation}
(\sigma\eta+\vec\pi \cdot\vec a_0)^2
\rightarrow \langle\eta\rangle^2 \sigma^2.
\end{equation}
Including this in our original potential, $m_5$ induces a distortion
proportional to the sigma field squared
\begin{equation}
V\rightarrow V-\alpha m_5^2 \sigma^2.
\end{equation}
where $\alpha$ is an undetermined constant. The sign of this term is
related to pi--eta mixing. This gives rise to a quadratic warping of
the effective potential, as sketched in Fig. \ref{m5}.
\begin{figure}
\centerline{ \includegraphics[height=.25\textheight]{m5.eps}}
\caption{Including the $m_5$ term in the theory generates a quadratic
warping of the effective potential.}
\label{m5}
\end{figure}
From this argument, we see that {$m_5$} also gives the pions a mass
\begin{equation}
M_\pi^2\propto m_5^2.
\end{equation}
It is crucial to note that unlike the original mass $m_1$ this
is quadratic and not linear in {$m_5$}. Also note that this term
induces a barrier between {$\sigma>0$} and {$\sigma<0$}. If we
look at the structure of the theory as a function of the two
parameters $m_1$ and $m_5$, whenever $m_5$ is non-vanishing a first
order transition appears at $m_1=0$. This is sketched in
Fig. \ref{m1m5}.
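Both mass dependences can be read off directly from the warped effective potential. (The following short check is not part of the original argument, but follows immediately from the formulas above.) Setting $\vec\pi=0$ and minimizing $V=(\sigma^2+\vec\pi^2-v^2)^2-m_1\sigma-\alpha m_5^2\sigma^2$ over $\sigma$ gives the stationarity condition
\begin{equation*}
4\sigma_0(\sigma_0^2-v^2)-m_1-2\alpha m_5^2\sigma_0=0,
\end{equation*}
while the pion mass is the curvature in the $\vec\pi$ directions,
\begin{equation*}
M_\pi^2\propto\left.\frac{\partial^2 V}{\partial\pi_i^2}\right|_{\sigma_0,\,\vec\pi=0}
=4(\sigma_0^2-v^2)=\frac{m_1}{\sigma_0}+2\alpha m_5^2,
\end{equation*}
which is indeed linear in $m_1$ but quadratic in $m_5$.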
\begin{figure}
\centerline{ \includegraphics[height=.25\textheight]{m1m5.eps}}
\caption{The phase structure as a function of $m_1$ and $m_5$
exhibits a first order transition along the $m_5$ axis. The
transition is denoted by the dashed line and occurs when the
conventional parameter $\Theta$ takes the value pi.}
\label{m1m5}
\end{figure}
The transition occurs when the conventional parameter
$\Theta$ is $\pi$. The mapping between $\Theta$ and the mass parameters is given by
\begin{equation}
{m_5\over m_1}=\tan(\Theta/2).
\end{equation}
The crucial conclusion is that although Eq.~(\ref{rotate}) may look
like a harmless change of variables, physics does not only depend on
the combination $\sqrt{m_1^2+m_5^2}$. The rotation in
Eq.~(\ref{variables}) is ``anomalous,'' and physics depends
non-trivially on $\Theta$. The important point of this section is
that $m_1$ and $m_5$ are physically independent parameters.
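The first-order nature of the transition at $m_1=0$, $m_5\neq0$ can also be checked numerically. The sketch below uses a crude grid-search minimization in Python; the values of $v$ and $\alpha$ are arbitrary illustrative choices, not fitted to anything. It tracks the minimum of the warped potential along the $\sigma$ axis and shows the order parameter $\sigma_0$ jumping discontinuously as $m_1$ changes sign.

```python
import numpy as np

def sigma_min(m1, m5, v=1.0, alpha=1.0):
    """Location of the minimum of the warped effective potential
    V(sigma) = (sigma^2 - v^2)^2 - m1*sigma - alpha*m5^2*sigma^2
    along the sigma axis (pi = eta = 0), by brute-force grid search."""
    s = np.linspace(-2.0, 2.0, 100001)
    V = (s**2 - v**2) ** 2 - m1 * s - alpha * m5**2 * s**2
    return s[np.argmin(V)]

# With m5 != 0 the quadratic warping keeps two minima near sigma = +/- sigma_0
# separated by a barrier; an infinitesimal tilt m1 selects one of them,
# so sigma_0 jumps discontinuously as m1 passes through zero.
for m1 in (0.001, -0.001):
    print(m1, sigma_min(m1, m5=0.5))
```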
\section{Why is {$\psi \longrightarrow e^{i\gamma_5\Theta} \psi$} not
a symmetry?}
The fact that gamma five rotations are anomalous has been known for
some time \cite{Adler:1969gk,Adler:1969er, Bell:1969ts}. However a
deep connection between $\Theta$ and the fermionic measure in the path
integral was later elucidated in the work of Fujikawa
\cite{Fujikawa:1979ay}. When a gauge field configuration has
non-trivial topology, the Dirac operator has chiral zero modes. The
index theorem relates the number of these modes to the topological
index of the gauge field
\begin{equation}
n_+-n_- =\nu.
\end{equation}
Here $\nu$ is the gauge field winding number, and $n_+(n_-)$ counts
the number of left (right) zero modes. Now if we define the trace of
$\gamma_5$ via a sum over the modes of the Dirac operator, we find
that this trace need not vanish
\begin{equation}
{\rm Tr}\, \gamma_5\equiv \sum_i \langle \psi_i| \gamma_5
|\psi_i\rangle=\nu.
\end{equation}
On configurations carrying topology, the chiral change of variables will introduce a phase in the fermionic measure of the path integral
\begin{equation}
d \psi \rightarrow e^{i \nu \Theta}\ d\psi.
\end{equation}
Thus this change of variables is equivalent to inserting a factor of $e^{i \nu \Theta}$ into the path integral
\begin{equation}
Z=\int (dA)(d\psi)(d\overline\psi)\ e^{-\beta S}
\longrightarrow
\int (dA)(d\psi)(d\overline\psi)\ e^{i\nu\Theta}\ e^{-\beta S}.
\end{equation}
This represents a physically different theory. With $\Theta$ present, the theory is CP violating since the $m_5$ term is.
This emphasizes yet further that {$m_1$} and {$m_5$} are inequivalent
parameters that have nothing to do with each other.
A natural question at this point is whether there is an independent
{$m_5$} for each flavor? The answer is no because flavored chiral
symmetries remain valid. A change of variables such as
\begin{equation}
\psi \longrightarrow e^{i\gamma_5\lambda_\alpha\Theta} \psi
\end{equation}
is a valid symmetry when $\lambda_\alpha$ is one of the traceless
generators of the flavor group. Rotations of this form allow us to
move the chiral phase between different flavors. It is perhaps
amusing to consider rotating any non-trivial $\Theta$ into the top
quark. In this sense, the top quark does not fully decouple from low
energy physics.
\section{The strong CP problem}
The parameter $m_5$ is inherently CP violating. But the experimental
bounds on such a symmetry breaking are extremely small, of order
$10^{-10}$ for the ratio $m_5/m_1$. Why should this be so?
This is actually only a problem for theories that unify the strong
with the electro-weak interactions. We know experimentally that the
weak interactions do violate CP. Thus on reducing energies from the
weak scale, it would be ``natural'' for a residue of this breaking
to survive in the low energy strong interactions. On the other hand,
if the strong interactions are never unified with the others, then the
symmetries of the theory make $\Theta=0$ stable under renormalization.
One possible solution to this conundrum involves introducing a new
particle called the axion. It is arranged to make the parameter
$\Theta$ into a dynamical field with the ground state of this field at
$\Theta=0$. Effectively this corresponds to adding something like
$(\partial_\mu \Theta)^2$ to the action. The coefficient of this is
arbitrary, and thus the coupling of the axion particle to the strong
interactions can be made arbitrarily small.
While potentially viable, this procedure seems rather ad hoc and
involves introducing a new particle. One might also worry that on
descending from the unification scale a linear term in the $\Theta$
field might survive to give an expectation value to the CP violating
effect.
\section{The up-quark mass}
It has been occasionally suggested that a vanishing up-quark mass
would solve the strong CP problem. The argument is that flavored
rotations can put all phases in the up-quark mass, and if the up-quark
mass vanishes, it cannot have a phase. The shortcoming of this naive
argument is that the up-quark mass involves the two parameters $m_1$
and $m_5$, and these parameters are independent. The strong CP
problem is why is $m_5$ small, and has nothing to do with the parameter
$m_1$.
Nevertheless, the possibility of a vanishing up-quark mass has received considerable attention over the years, and thus it is interesting to explore what it would mean. To begin with, we know that the pion masses are non-zero, and thus $m_u$ and $m_d$ cannot both vanish. So to proceed we introduce an
up--down mass difference term to the theory
\begin{equation}
m\ \overline \psi\psi \rightarrow m_1\ \overline\psi\psi+m_2
\ \overline\psi\tau_3\psi.
\end{equation}
The term $m_2\overline\psi\tau_3\psi \sim {a_0}_3$ transforms as an
isovector scalar. This, just as the $m_5$ term, does not appear
in the starting effective potential of Eq.~(\ref{effective}). To
study its effect, we parallel the earlier discussion on $m_5$ and note
that $m_2$ will give ${a_0}_3$ an expectation value
\begin{equation}
\langle {a_0}_3 \rangle \propto m_2/M_{a_0}^2
\end{equation}
This will enter the same effective term as in Eq.~(\ref{distortion})
and warp the effective potential. But now the distortion is downward
in the $\pi_3$ direction,
\begin{equation}
V\rightarrow V-\alpha m_2^2 \pi_3^2.
\end{equation}
If we do not include an overall tilt in the $\sigma$ direction, the
{$\pi_3$} field will gain an expectation value! This represents the
so-called CP violating ``Dashen phase'' \cite{Dashen:1970et}. This
situation is sketched in Fig.~\ref{mdiff}.
\begin{figure}
\centerline{ \includegraphics[height=.25\textheight]{mdiff.eps}}
\caption{The presence of a quark mass difference distorts the effective potential downward in the $\pi_3$ direction. }
\label{mdiff}
\end{figure}
To understand this phase better, consider fixing the down quark mass
to some positive value and vary the up-quark mass. As the up-quark
becomes lighter, the pions will decrease in mass, as sketched in
Fig. \ref{iso2}. Note that the pions remain massive as the up
quark goes to zero. At that point the mass gap of the theory remains,
and no singularity occurs. As we continue the up-quark mass into the
negative regime, the pions continue to become lighter, with the
breaking of isospin making the neutral pion lighter than the charged
ones. But if we make the up-quark mass sufficiently negative, the
neutral pion mass can vanish. Beyond that point {$\pi_3$} gains its
expectation value, and we are in the Dashen phase. Note that in this
regime the product of the quark masses is negative and we are formally
at $\Theta=\pi.$
\begin{figure}
\centerline{ \includegraphics[height=.4\textheight]{iso2.eps}}
\caption{As the up-quark mass is reduced below that of the down quark, isospin breaking separates the charged and neutral pion. As the up-quark goes through zero mass, no singularity is expected. If the up-quark mass becomes sufficiently negative, the neutral pion can condense and we enter the CP violating Dashen phase.}
\label{iso2}
\end{figure}
We now have at a rather simple picture for the qualitative behavior of
two flavor QCD as a function of the conventional parameters
$\alpha_s$, $m_u$, $m_d$, and $\Theta$. These map in a non-linear way into
the overall scale, the tilt of the effective potential, and a possible
quadratic warping. However in general the tilt and warp need not be
in the same direction, and the fourth parameter represents a possible
angle between them, as sketched in Fig. \ref{warp}.
\begin{figure}
\centerline{ \includegraphics[height=.25\textheight]{warp.eps}}
\caption{The four parameters of two flavor QCD map onto the effective chiral Lagrangian as the overall scale, the tilt, the warp, and the angle between the tilt and warp.}
\label{warp}
\end{figure}
Extending these arguments, we can obtain the full two flavor phase diagram
as a function of the parameters $m_1$, $m_2$, and $m_5$ \cite{Creutz:2010ts}
as sketched in Fig. \ref{phasediagram}. The plane at $m_1=0$
extends the first order transition from Fig. \ref{m1m5}. In addition,
there is another first order transition extending into the $m_5=0$ plane;
the order parameter here is the sign of the non-vanishing expectation
for the $\pi_3$ field.
\begin{figure}
\centerline{ \includegraphics[height=.3\textheight]{phasediagram.eps}}
\caption{The full phase diagram for two flavor QCD as a function of
$m_1$, $m_2$, and $m_5$.}
\label{phasediagram}
\end{figure}
\section{Symmetries in the masses}
This diagram has a variety of symmetries. First, it is invariant
under changing the sign of the parameter $m_5$. This is associated
with CP and will protect $m_5$ from any additive renormalization. What
about the other mass parameters? To proceed, concentrate on the {$m_5=0$}
plane, sketched in Fig.~\ref{iso4}. At the edge of the Dashen phase
there is a second order transition where the neutral pion becomes
massless. The order parameter for the transition is the expectation
value of the neutral pion field, $\langle \pi_0 \rangle$.
The next invariance to note is under exchanging the up and down quark
masses. This is isospin, and protects the quark mass difference $m_2$
from additive renormalization. Then there is a symmetry under
$m_u\leftrightarrow -m_d$. This represents isospin at $\Theta=\pi$
and protects $m_1$ from additive renormalization. Another symmetry,
not really independent of the above, is under flipping the signs of
both quark masses. This can be implemented by a flavored chiral
rotation
\begin{equation}
\psi \rightarrow e^{i\pi\tau_3\gamma_5} \psi
\end{equation}
which is not anomalous.
\begin{figure}
\centerline{ \includegraphics[height=.25\textheight]{iso4.eps}}
\caption{The phase diagram for two flavor QCD at $m_5=0$ as a function of
the up and down quark masses.}
\label{iso4}
\end{figure}
A crucial observation is that this diagram is not symmetric under
$m_u\leftrightarrow -m_u$. The concept of a vanishing up-quark mass
is not protected by any symmetry! While symmetries protect the three
parameters $m_1$, $m_2$, and $m_5$ individually, these quantities in general
can have independent renormalizations. One might try to define
\begin{equation}
\hbox{``}{m_u}\hbox{''}= {m_1+m_2\over 2}+im_5
\end{equation}
however this combines independent parameters and is an artificial
construct.
So this leaves us with a conundrum. Can any experiment tell if the up
quark mass vanishes? If not, is $m_u=0$ a well defined concept? The
issues here are all non-perturbative, so relating the up-quark mass to
a perturbative definition cannot answer this. Non-perturbative issues
require a regulator such as the lattice. And at least naively the
lattice can answer the question. One should adjust the lattice
parameters until the physical hadron spectrum comes out right. Then
one can read off the input lattice quark masses and see if $m_u=0$.
There is a complication to this process. As shown by 't Hooft
some time ago \cite{'tHooft:1976fv}, a non-vanishing down quark mass
can induce an effective mass for the up-quark. Both the combinations
$i\overline u \gamma_5 u$ and $i\overline d \gamma_5 d$ couple to the
pion. This means that through the pion field, a left handed up-quark
can turn into a right handed one by an amount proportional to the down
quark mass. This is sketched in Fig. \ref{induced2}. Because of this
effect, ratios of quark masses are not renormalization group invariant
when non-perturbative effects are taken into account.
\begin{figure}
\centerline{ \includegraphics[height=.15\textheight]{induced2.eps}}
\caption{A non-vanishing down quark mass can, through the pion field,
convert a left handed up quark into a right handed one.}
\label{induced2}
\end{figure}
Can we use the topology of the gauge fields to get a handle on this
issue? The fermion determinant suppresses non-trivial topology when a
quark mass vanishes. The concept of a vanishing quark mass is
equivalent to the vanishing of the average gauge field topology. So
this leaves us with the question of how to define lattice topology.
This is also rather non-trivial. As is well known, typical
configurations in the path integral involve non-differentiable fields.
Indeed, the space of lattice fields is simply connected in most
formulations. Topology is lost at the outset since small instantons
can ``fall through the lattice.'' Many studies over the years
\cite{Teper:1985rb,Bruckmann:2009cv} have attempted to get around this
by some sort of cooling process to remove short distance fluctuations
from the gauge fields. In this way the action is observed to settle
into multiple well defined instantons. The problem is that while this
procedure is often stable, it is not unique. The net winding can
depend on the details of the cooling algorithm
\cite{Creutz:2010ec}. Additional questions are what action should we
use to cool, and how long should we cool.
Can we use the index theorem to resolve this? Topology is associated
with zero modes of the Dirac operator. So we might try to count the
small real eigenvalues of the Wilson fermion operator. The issue here
is that at finite cutoff these modes are not exact zeros. How should
we define ``small'' for the eigenvalues? The result will depend on
the density of real eigenvalues in the first Wilson circle, which in
general need not vanish. One might instead count zero modes of the
overlap operator \cite{Neuberger:1997fp}, which do occur at the
origin. The problem here is that the overlap operator is not unique,
depending on a parameter often called the ``domain wall height.''
This parameter is closely related to the Wilson operator and its
eigenvalues.
Should we care if topology is ambiguous? This is an abstract
theoretical construction and not something directly measured in
laboratory experiments. One should concentrate on physical
quantities, such as the mass of the eta prime. The famous
Witten-Veneziano \cite{Witten:1979vv,Veneziano:1979ec} formula does
relate topological susceptibility of the pure gauge theory to the eta
prime mass, but this is only a result in the limit of a large number
of colors.
\section{Summary}
QCD depends on $N_f+1$ possible mass parameters, where $N_f$ is the
number of fermion flavors. One of these is the CP violating parameter
usually referred to as $\Theta$. The effects of $\Theta$ are not
visible in perturbation theory. Indeed, theories with different
values of this parameter have identical perturbative expansions.
Existing experiments have found no evidence for a non-vanishing value
for $\Theta$. This is a puzzle for models of unification since CP
violation is evident in weak interaction processes. One possible
solution involves introducing an axion to make $\Theta$ into a
dynamical field that then relaxes to zero. The possibility of $m_u=0$
is not a viable solution since it involves an unnatural fine tuning.
Three unrelated parameters, $m_1$, $m_2$, and $m_5$, contribute to the
up-quark mass. Only one of these is associated with CP violation.
The other two have nothing to do with the puzzle.
\section{Discussion}
\label{sec:analysis}
\begin{figure}[t]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[scale=0.75]{discounting_tikz.pdf}
\caption{Perplexity with noising on Penn Treebank while varying the value of $\gamma_0$. Using discounting to scale
$\gamma_0$ (yielding $\gamma_\mathrm{AD}$) maintains gains
for a range of values of noising probability, which is not true for the unscaled case.}
\label{fig:discounting}
\end{minipage}\hspace{0.04\textwidth}%
\begin{minipage}{0.48\textwidth}
\vspace{-0.15cm}
\includegraphics[scale=0.75]{kl_bar_plot.pdf}
\vspace{-0.15cm}
\caption{Mean KL-divergence over
validation set between softmax distributions of noised and unnoised models
and lower order distributions.
Noised model distributions are closer to the uniform and
unigram frequency distributions.}
\label{fig:kls}
\end{minipage}
\end{figure}
\input{discounting.tex}
\input{intermodel.tex}
\section{Sketch of Noising Algorithm}
We provide pseudocode of the noising algorithm corresponding to bigram Kneser-Ney
smoothing for $n$-grams (In the case of sequence-to-sequence tasks,
we estimate the count-based parameters separately for source and target).
To simplify, we assume a batch size of one. The noising
algorithm is applied to each data batch during training. No noising is applied at
test time.
\begin{algorithm}
\caption{Bigram KN noising (Language modeling setting)}\label{bgkn}
\begin{adjustwidth}{0.35cm}{0.0cm}
\textbf{Require} counts $c(x)$, number of distinct continuations $N_{1+}(x, \bullet)$,
proposal distribution $q(x) \propto N_{1+}(\bullet, x)$\\
\textbf{Inputs} $X$, $Y$ batch of unnoised data indices, scaling factor $\gamma_0$\\
\rule{\linewidth}{0.3pt}\vspace{0em}
\end{adjustwidth}
\begin{algorithmic}[0]
\Procedure{NoiseBGKN}{$X,Y$}\Comment{$X=(x_1,\dots,x_t), Y = (x_2,\dots,x_{t+1})$}
\State $\tilde{X}, \tilde{Y}\gets X, Y$
\For{$j=1,\dots,t$}
\State $\gamma\gets\displaystyle \gamma_0 N_{1+}(x_j, \bullet)/c(x_j)$
\If{$\sim \mathrm{Bernoulli}(\gamma)$}
\State $\tilde{x}_j \sim \mathrm{Categorical}(q)$ \Comment{Updates $\tilde{X}$}
\State $\tilde{y}_j \sim \mathrm{Categorical}(q)$
\EndIf
\EndFor\label{euclidendwhile}
\State \Return $\tilde{X}, \tilde{Y}$ \Comment{Run training iteration with noised batch }
\EndProcedure
\end{algorithmic}
\end{algorithm}
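For concreteness, the pseudocode above can be spelled out in plain Python. This is only a sketch under the same simplifications (batch size one, count statistics precomputed from the training corpus); the names follow the pseudocode rather than any released implementation.

```python
import random
from collections import Counter, defaultdict

def kn_noising_stats(corpus):
    """Precompute c(x), the continuation sets behind N_{1+}(x, .), and the
    proposal weights q(x) ~ N_{1+}(., x) from a corpus (list of token lists)."""
    c = Counter()
    continuations = defaultdict(set)  # x -> distinct tokens following x
    histories = defaultdict(set)      # x -> distinct tokens preceding x
    for sent in corpus:
        c.update(sent)
        for a, b in zip(sent, sent[1:]):
            continuations[a].add(b)
            histories[b].add(a)
    vocab = list(c)
    weights = [len(histories[x]) for x in vocab]  # unnormalized proposal q
    return c, continuations, vocab, weights

def noise_bgkn(X, Y, gamma0, c, continuations, vocab, weights, rng=random):
    """Bigram KN noising of one (input, target) pair of token sequences:
    with probability gamma0 * N_{1+}(x_j, .) / c(x_j), resample x_j and y_j
    from the proposal q. Applied only during training, never at test time."""
    Xn, Yn = list(X), list(Y)
    for j, x in enumerate(X):
        gamma = gamma0 * len(continuations[x]) / c[x]
        if rng.random() < gamma:
            Xn[j] = rng.choices(vocab, weights=weights)[0]
            Yn[j] = rng.choices(vocab, weights=weights)[0]
    return Xn, Yn
```

The noised batch then simply replaces the clean one for that training iteration.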
\subsection{Preliminaries}
We consider language models where given a sequence of indices $X = (x_1, x_2, \cdots, x_T)$,
over the vocabulary $V$,
we model
\begin{equation*}
p(X) = \prod_{t=1}^T p(x_t | x_{<t})
\end{equation*}
In $n$-gram models, it is not feasible to model the full context
$x_{<t}$ for large $t$ due to the exponential number of possible histories.
Recurrent neural network (RNN) language models can (in theory) model
longer dependencies, since they operate over distributed hidden states
instead of modeling an exponential number of discrete counts
\citep{bengio2003neural,mikolov2012statistical}.
An $L$-layer
recurrent neural network is modeled as
$h^{(l)}_t = f_\theta(h^{(l)}_{t-1}, h^{(l-1)}_t)$,
where $l$ denotes the layer index, $h^{(0)}$ contains the one-hot encoding of $X$,
and in its simplest form
$f_\theta$ applies an affine transformation followed by a nonlinearity.
In this work, we use RNNs with a more complex form of $f_\theta$,
namely long short-term memory (LSTM) units \citep{hochreiter1997long}, which
have been shown to ease training and allow RNNs to capture longer
dependencies.
The output distribution over the vocabulary $V$ at time $t$ is
$p_\theta(x_t|x_{<t}) = \mathrm{softmax}(g_\theta(h^{(L)}_t))$,
where $g: \mathbb{R}^{|h|} \rightarrow \mathbb{R}^{|V|}$
applies an affine transformation.
The RNN is then trained by minimizing over its parameters $\theta$ the
sequence cross-entropy loss
$\ell(\theta) = -\sum_t \log p_\theta(x_t | x_{<t})$,
thus maximizing the likelihood $p_\theta(X)$.
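As a minimal illustration of this objective, the sketch below computes the sequence cross-entropy from raw scores with NumPy. This is a toy stand-in: in the actual models the logits $g_\theta(h^{(L)}_t)$ are produced by LSTM layers, which are omitted here.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sequence_cross_entropy(logits, targets):
    """ell(theta) = -sum_t log p_theta(x_t | x_{<t}) for one sequence.
    logits: (T, |V|) unnormalized scores; targets: length-T index array."""
    probs = softmax(logits)
    return -np.sum(np.log(probs[np.arange(len(targets)), targets]))

# Uniform logits over |V| = 4 give a per-step loss of log 4.
loss = sequence_cross_entropy(np.zeros((3, 4)), np.array([0, 1, 2]))
print(loss)  # 3 * log(4) ~ 4.159
```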
As an extension, we also consider encoder-decoder or
sequence-to-sequence \citep{cho2014learning,sutskever2014sequence} models where given
an input sequence $X$ and output sequence $Y$ of length $T_Y$, we model
\begin{equation*}
p(Y|X) = \prod_{t=1}^{T_Y} p(y_t | X, y_{<t}).
\end{equation*}
and minimize the loss $\ell(\theta) = -\sum_t \log p_\theta(y_t | X, y_{<t})$.
This setting can also be seen as conditional language modeling, and encompasses
tasks such as machine translation, where $X$ is a source
language sequence and $Y$ a target language sequence, as well as language modeling, where
$Y$ is the given sequence and $X$ is the empty sequence.
\section{Conclusion}
In this work, we show that data noising is
effective for regularizing neural network-based sequence models.
By deriving a correspondence between noising
and smoothing, we are able to adapt advanced smoothing methods
for $n$-gram models to the neural network setting,
thereby incorporating well-understood generative assumptions of language.
Possible applications include
exploring noising for improving performance in low resource settings,
or examining how these techniques generalize to sequence modeling in other domains.
\subsection{Scaling $\gamma$ via Discounting}
We now examine whether discounting has the desired effect of noising
subsequences according to their uncertainty.
If we consider the discounting
$$\gamma_\mathrm{AD}(x_1) =\displaystyle\gamma_0 \frac{N_{1+}(x_1, \bullet)}{c(x_1)}$$
we observe that the denominator $c(x_1)$ can be much larger
than the numerator $N_{1+}(x_1, \bullet)$.
Common tokens are thus noised infrequently when discounting is used to rescale the
noising probability, while rare tokens are noised comparatively much more frequently;
in the extreme case where a token appears exactly once, we have $\gamma_\mathrm{AD} = \gamma_0$.
Due to word frequencies following a Zipfian power law distribution, however,
common tokens constitute the majority of most texts, and thus discounting leads
to significantly less noising.
We compare the performance of models trained with a fixed $\gamma_0$ versus
a $\gamma_0$ rescaled using discounting.
As shown in Figure~\ref{fig:discounting}, bigram discounting
leads to gains in perplexity for a much broader range of $\gamma_0$.
Thus the discounting ratio seems to effectively capture the ``right'' tokens to noise.
\section{Experiments}
\input{langmodel}
\input{translation}
\subsection{Noised versus Unnoised Models}
\label{ssec:modelcompare}
\paragraph{Smoothed distributions}
In order to validate
that data noising for RNN models has a similar effect to that of smoothing
counts in $n$-gram models, we consider three models trained
with unigram noising as described
in Section~\ref{ssec:lm} on
the Penn Treebank corpus with $\gamma=0$ (no noising),
$\gamma=0.1$, and $\gamma=0.25$.
Using the trained models, we measure the Kullback-Leibler divergence
$D_\mathrm{KL}(p\|q) = \sum_i p_i \log(p_i/q_i)$
over the validation set between
the predicted softmax distributions, $\hat{p}$,
and the uniform distribution
as well as the unigram frequency distribution.
We then take the mean KL divergence over all tokens in the validation set.
Recall that in interpolation smoothing, a weighted combination of
higher and lower order $n$-gram models is used.
As seen in Figure~\ref{fig:kls}, the softmax distributions
of noised models are significantly
closer to the lower order frequency distributions than unnoised models,
in particular in the case of the unigram distribution,
thus validating our analysis in Section~\ref{ssec:noisesmooth}.
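The KL measurement used here can be sketched as follows (a minimal NumPy example; the predicted distributions are made-up stand-ins for actual softmax outputs):

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i log(p_i / q_i); terms with p_i = 0 contribute 0.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def mean_kl_to_reference(softmax_outputs, reference):
    # Mean KL divergence between per-token predictions and a fixed reference
    # distribution (e.g. uniform or unigram frequencies).
    return float(np.mean([kl_divergence(p, reference) for p in softmax_outputs]))

# Toy predictions for two validation tokens over a vocabulary of size 4.
preds = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.25, 0.25, 0.25, 0.25]])
uniform = np.full(4, 0.25)
mean_kl = mean_kl_to_reference(preds, uniform)
```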
\begin{table}
\begin{center}
\begin{tabular}{l r r}
\toprule
Noising & Bigrams & Trigrams \\
\midrule
none (dropout only) & 2881 & 381\\
blank noising & 2760 & 372\\
unigram noising & 2612 & 365\\
\bottomrule
\end{tabular}
\end{center}
\caption{Perplexity of last unigram for unseen bigrams and trigrams in
Penn Treebank validation set. We compare noised and unnoised models
with noising probabilities chosen such that models have
near-identical perplexity on full validation set.}
\label{tab:unseen}
\end{table}
\paragraph{Unseen $n$-grams}
Smoothing is most beneficial for increasing the probability of
unobserved sequences. To measure whether noising has a similar
effect, we consider bigrams and trigrams in the
validation set that do not appear in the training set.
For these unseen bigrams (15062 occurrences) and trigrams (43051 occurrences),
we measure the perplexity for noised and unnoised models with near-identical
perplexity on the full set.
As expected, noising yields lower perplexity for these unseen instances.
\section{Introduction}
Language models are a crucial component in many domains, such
as autocompletion, machine translation, and speech recognition.
A key challenge when performing estimation in language modeling is the
\textit{data sparsity} problem: due to large vocabulary sizes
and the exponential number of possible
contexts, the majority of possible sequences are rarely or never
observed, even for very short subsequences.
In other application domains, data augmentation has been key to improving
the performance of neural network models in the face of insufficient data.
In computer vision, for example, there exist well-established primitives for synthesizing
additional image data,
such as by rescaling or applying affine distortions to images \citep{LeCun1998,Krizhevsky2012}.
Similarly, in speech recognition adding a background audio track or applying small shifts along the time dimension
has been shown to yield significant gains, especially in noisy settings~\citep{deng2000large,hannun2014deep}.
However, widely-adopted noising primitives have not yet been developed for neural
network language models.
Classic $n$-gram models of language cope with rare and unseen sequences
by using smoothing methods, such as interpolation or absolute discounting~\citep{chen1996empirical}.
Neural network models, however, have no notion of discrete counts, and
instead use distributed representations to combat
the curse of dimensionality \citep{bengio2003neural}.
Despite the effectiveness of distributed representations,
overfitting due to data sparsity remains an issue.
Existing regularization methods, however, are typically applied
to weights or hidden units within the network~\citep{srivastava2014dropout,le2015simple}
instead of directly considering the input data.
In this work, we consider noising primitives as
a form of data augmentation
for recurrent neural network-based language models.
By examining the expected pseudocounts
from applying the noising schemes, we draw connections between
noising and
linear interpolation smoothing.
Using this connection, we then derive noising schemes that are analogues of more
advanced smoothing methods.
We demonstrate the effectiveness of
these schemes for regularization through experiments on language modeling and machine translation.
Finally, we validate our theoretical claims by examining
the empirical effects of noising.
\subsection{Language Modeling}
\label{ssec:lm}
\paragraph{Penn Treebank}
\begin{table}[bt]
\begin{center}
\begin{tabular}{l r r r }
\toprule
Noising scheme & & Validation & Test \\
\midrule
\multicolumn{4}{c}{Medium models (512 hidden size)}\\
\midrule
\multicolumn{2}{l}{none (dropout only)}
& 84.3 & 80.4 \\
\multicolumn{2}{l}{blank}
& 82.7 & 78.8 \\
\multicolumn{2}{l}{unigram}
& 83.1 & 80.1 \\
\multicolumn{2}{l}{bigram Kneser-Ney}
& \textbf{79.9} & \textbf{76.9} \\
\midrule
\multicolumn{4}{c}{Large models (1500 hidden size)}\\
\midrule
\multicolumn{2}{l}{none (dropout only)}
& 81.6 & 77.5 \\
\multicolumn{2}{l}{blank}
& 79.4 & 75.5 \\
\multicolumn{2}{l}{unigram}
& 79.4 & 76.1 \\
\multicolumn{2}{l}{bigram Kneser-Ney}
& \textbf{76.2} & \textbf{73.4} \\
\midrule
\multicolumn{2}{l}{\cite{zaremba2014recurrent}} & 82.2 & 78.4\\
\multicolumn{2}{l}{\cite{gal2015dropout} variational dropout (tied weights)} & 77.3 & 75.0\\
\multicolumn{2}{l}{\cite{gal2015dropout} (untied weights, Monte Carlo)} & {---\hspace{0.15cm}} & \textbf{73.4}\\
\bottomrule
\end{tabular}
\end{center}
\caption{Single-model perplexity on Penn Treebank with different noising schemes. We also compare
to the variational method of \cite{gal2015dropout}, who also train LSTM
models with the same hidden dimension. Note that performing Monte Carlo dropout at test time
is significantly more expensive than our approach, where test time is unchanged.}
\label{tab:ptb}
\end{table}
\begin{table}[bt]
\begin{center}
\begin{tabular}{l r r }
\toprule
Noising scheme & Validation & Test \\
\midrule
none & 94.3 & 123.6 \\
blank & 85.0 & 110.7 \\
unigram & 85.2 & 111.3 \\
bigram Kneser-Ney & 84.5 & 110.6 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Perplexity on Text8 with different noising schemes.}
\label{tab:text8}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{perplexity_curves.pdf}
\caption{Penn Treebank corpus.}
\label{fig:ptbcurve}
\end{subfigure}
\hspace{1em}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{text8_perplexity_curves.pdf}
\caption{Text8 corpus.}
\label{fig:text8curve}
\end{subfigure}
\caption{Example training and validation curves for an unnoised model and model regularized using the bigram Kneser-Ney noising scheme.}
\label{fig:trainingcurve}
\end{figure}
We train networks for word-level language modeling on the Penn Treebank dataset,
using the standard preprocessed splits with a 10K size vocabulary~\citep{mikolov2012statistical}.
The PTB dataset contains 929k training tokens, 73k validation tokens, and 82k test tokens.
Following \cite{zaremba2014recurrent},
we use minibatches of size 20 and unroll for 35
time steps when performing backpropagation through time.
All models have two hidden layers and use LSTM units.
Weights are initialized uniformly in the range $[-0.1, 0.1]$.
We consider models with hidden sizes of $512$ and $1500$.
We train using stochastic gradient descent with an initial learning
rate of 1.0, clipping the gradient if its norm exceeds 5.0.
When the validation cross entropy does not decrease after a training epoch,
we halve the learning rate.
We anneal the learning rate 8 times before stopping training, and pick
the model with the lowest perplexity on the validation set.
For regularization, we apply feed-forward dropout~\citep{pham2014dropout}
in combination with our noising schemes.
We report results in Table~\ref{tab:ptb} for the best setting of the dropout rate (which
we find to match the settings reported in~\cite{zaremba2014recurrent})
as well as the best setting of noising probability $\gamma_0$ on the validation set.\footnote{Code will be made available at: \url{http://deeplearning.stanford.edu/noising}}
Figure~\ref{fig:trainingcurve} shows the training and validation perplexity
curves for a noised versus an unnoised run.
Our large models match the state-of-the-art regularization method for single model performance on this task.
In particular, we find that picking $\gamma_\mathrm{AD}(x_1)$ and $q(x)$ corresponding
to Kneser-Ney smoothing yields significant gains in validation perplexity,
both for the medium and large size models.
Recent work ~\citep{merity2016pointer,zilly2016recurrent}
has also achieved impressive results on this task
by proposing different architectures
which are orthogonal to our data augmentation schemes.
\paragraph{Text8}
In order to determine whether noising remains effective with a larger dataset, we
perform experiments on the Text8 corpus\footnote{\url{http://mattmahoney.net/dc/text8.zip}}.
The first 90M characters are used for training,
the next 5M for validation, and the final 5M for testing, resulting in
15.3M training tokens, 848K validation tokens, and 855K test tokens.
We preprocess the data by mapping all words which appear 10 or fewer times
to the unknown token, resulting in a 42K size vocabulary. Other parameter
settings are the same as described in the Penn Treebank experiments, except
that only models with hidden size 512 are considered, and
noising is not combined with feed-forward dropout. Results are given in
Table~\ref{tab:text8}.
\section*{Acknowledgments}
We thank Will Monroe for feedback on a draft of this paper,
Anand Avati for help running experiments, and Jimmy Wu for
computing support.
We also thank the developers of Theano~\citep{2016theano} and Tensorflow~\citep{abadi2016tensorflow}.
Some GPUs used in this work were donated by NVIDIA Corporation.
ZX, SW, and JL were supported by an NDSEG Fellowship, NSERC PGS-D Fellowship,
and Facebook Fellowship, respectively.
This project was funded in part by DARPA MUSE award FA8750-15-C-0242 AFRL/RIKF.
\section{Method}
\input{background}
\setlength{\abovedisplayskip}{10pt}
\setlength{\belowdisplayskip}{10pt}
\subsection{Smoothing and Noising}
Recall that for a given context length $l$, an $n$-gram model of order $l+1$ is optimal
under the log-likelihood criterion.
Hence in the case where an RNN with finite context attains nearly
the lowest possible cross-entropy loss, it behaves like an $n$-gram model.
Like $n$-gram models, RNNs are trained using maximum likelihood, and can easily
overfit~\citep{zaremba2014recurrent}. While generic regularization methods such as $L_2$-regularization and dropout are effective,
they do not take advantage of specific properties of sequence modeling.
In order to understand sequence-specific regularization,
it is helpful to examine $n$-gram language models,
whose properties are well-understood.
\paragraph{Smoothing for $n$-gram models}
When modeling $p(x_t|x_{<t})$,
the maximum likelihood estimate $c(x_{<t}, x_t) / c(x_{<t})$
based on empirical counts puts zero probability on unseen
sequences, and thus smoothing is crucial for obtaining good estimates.
In particular, we consider interpolation,
which performs a weighted average between higher and lower order
models. The idea is that when there are not enough observations of
the full sequence, observations of subsequences can help us
obtain better estimates.%
\footnote{For a thorough review of smoothing methods, we defer
to~\cite{chen1996empirical}.}
For example, in a bigram model,
$p_\mathrm{interp}(x_t|x_{t-1}) = \lambda p(x_t|x_{t-1}) +
(1-\lambda)p(x_t)$,
where $0 \le \lambda \le 1$.
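From raw corpus counts, this interpolated estimate can be computed as in the sketch below (toy corpus and $\lambda$ chosen for illustration; assumes the conditioning token was observed in training):

```python
from collections import Counter

def interpolated_bigram(corpus, lam):
    """Return p_interp(x_t | x_{t-1}) = lam * p(x_t | x_{t-1}) + (1 - lam) * p(x_t)."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    total = len(corpus)

    def p(x_t, x_prev):
        # Maximum likelihood bigram estimate, smoothed toward the unigram model.
        p_bigram = bigrams[(x_prev, x_t)] / unigrams[x_prev]
        p_unigram = unigrams[x_t] / total
        return lam * p_bigram + (1 - lam) * p_unigram

    return p

corpus = ["the", "cat", "sat", "on", "the", "mat"]
p = interpolated_bigram(corpus, lam=0.8)
```

Note that the unseen bigram ``the sat'' still receives nonzero probability through the unigram term, which is precisely the point of smoothing.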
\paragraph{Noising for RNN models}
We would like to apply well-understood smoothing methods such as
interpolation to RNNs, which are also trained using maximum likelihood.
Unfortunately, RNN models have no notion of counts, and we cannot
directly apply one of the usual smoothing methods.
In this section, we consider two
simple noising schemes which we proceed to show correspond to smoothing methods.
Since we can noise the data while training an RNN,
we can then incorporate well-understood generative
assumptions that are known to be helpful in the domain.
First consider the following two noising schemes:
\begin{itemize}[leftmargin=*]
\item \textbf{unigram noising}\hspace{1em} For each $x_i$ in $x_{<t}$,
with probability $\gamma$ replace $x_i$ with a sample from the unigram
frequency distribution.
\item \textbf{blank noising}\hspace{1em} For each $x_i$ in $x_{<t}$,
with probability $\gamma$ replace $x_i$ with a placeholder
token \blanktoken.
\end{itemize}
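Both schemes are straightforward to implement; the sketch below is one possible version (the \texttt{<blank>} token name and the toy counts are our own illustrative choices):

```python
import random
from collections import Counter

BLANK = "<blank>"

def unigram_noising(context, gamma, unigram_counts, rng):
    # With probability gamma, replace each token by a draw from the
    # unigram frequency distribution.
    tokens, weights = zip(*unigram_counts.items())
    return [rng.choices(tokens, weights=weights)[0] if rng.random() < gamma else x
            for x in context]

def blank_noising(context, gamma, rng):
    # With probability gamma, replace each token by the placeholder token.
    return [BLANK if rng.random() < gamma else x for x in context]

rng = random.Random(0)
counts = Counter(["the", "the", "cat", "sat"])
context = ["the", "cat", "sat"]
```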
While blank noising can be seen as a way to avoid overfitting
on specific contexts, we will see that both schemes are related to smoothing,
and that unigram noising provides a path
to analogues of more advanced smoothing methods.
\subsection{Noising as Smoothing}
\label{ssec:noisesmooth}
We now consider the maximum likelihood estimate of $n$-gram probabilities
estimated using the pseudocounts of the noised data.
By examining these estimates, we draw a connection between linear interpolation
smoothing and noising.
\paragraph{Unigram noising as interpolation}
To start, we consider the simplest case of bigram probabilities.
Let $c(x)$ denote the count of a token $x$ in the original data, and let
$\noised{c}(x) \eqdef \E_{\tx}\left[c(\tx)\right]$ be the expected count of $x$
under the unigram noising scheme. We then have
\begin{align*}
\noised{p}(x_t|x_{t-1}) &= \displaystyle\frac{\noised{c}(x_{t-1}, x_t)}{\noised{c}(x_{t-1})}\\
&= [(1-\gamma)c(x_{t-1}, x_t) + \gamma\ p(x_{t-1}) c(x_t)] / c(x_{t-1})\\
&= (1-\gamma)p(x_t | x_{t-1}) + \gamma\ p(x_t),
\end{align*}
where $\noised{c}(x) = c(x)$ since our proposal distribution $q(x)$
is the unigram distribution, and the last line follows since
$c(x_{t-1})/p(x_{t-1}) = c(x_t)/p(x_t)$ is equal to
the total number of tokens in the training set.
Thus we see that the noised data has pseudocounts
corresponding to \emph{interpolation} or a mixture of different order
$n$-gram models with fixed weighting.
More generally, let $\tx_{<t}$ denote the noised version of the context $x_{<t}$.
We consider the expected prediction under noise
\begin{align*}
\noised{p}(x_t|x_{<t}) &= \E_{\tx_{<t}}\left[p(x_t|\tx_{<t})\right] \\
&= \sum_{\setsty{J}} \underbrace{\pi(|\setsty{J}|)}_{p(|J| \text{\ swaps})} \sum_{x_{\setsty{K}}} \underbrace{p(x_t | x_{\setsty{J}}, x_{\setsty{K}})}_{p(x_t|\text{noised\ context})} \prod_{z \in x_\setsty{K}} \underbrace{p(z)}_{p(\text{drawing\ } z)}
\end{align*}
where the mixture coefficients are
$\pi(|\setsty{J}|) = (1-\gamma)^{|\setsty{J}|} \gamma^{t-1-|\setsty{J}|}$
with $\sum_{\setsty{J}} \pi(|\setsty{J}|) = 1$. Here $\setsty{J} \subseteq \{1,2,\ldots,t-1\}$
denotes the set of indices whose corresponding tokens are left unchanged,
and $\setsty{K}$ the set of indices whose tokens were replaced.
\paragraph{Blank noising as interpolation}
Next we consider the blank noising scheme and show that it corresponds to
interpolation as well. This also serves as an alternative explanation for the gains that related work
has found with the ``word-dropout'' idea~\citep{kumar2015ask,dai2015semi,bowman2015generating}.
As before, we do not noise the token being predicted $x_t$.
Let $\tx_{<t}$ denote the random variable where each of its tokens is
replaced by \blanktoken{} with probability $\gamma$, and let $x_\setsty{J}$ denote the
sequence with indices $\setsty{J}$ unchanged, and the rest replaced by
\blanktoken{}. To make a prediction, we use the expected probability over
different noisings of the context
\begin{equation*}
\noised{p}(x_t|x_{<t}) = \E_{\tx_{<t}}\left[p(x_t|\tx_{<t})\right]
= \sum_{\setsty{J}} \underbrace{\pi(|\setsty{J}|)}_{\hspace{-0.5em} p(|J| \text{\ swaps})} \underbrace{p(x_t | x_{\setsty{J}})}_{p(x_t | \text{noised\ context})},
\end{equation*}
where $\setsty{J} \subseteq \{1,2,\ldots,t-1\}$,
which is also a mixture of the unnoised probabilities over subsequences of the current context.
For example, in the case of trigrams, we have
\begin{align*}
\noised{p}(x_3|x_1, x_2) =\ &\pi(2)\ p(x_3 |x_1, x_2 ) + \pi(1)\ p(x_3 |x_1, \blanktok) + \pi(1)\ p(x_3 |\blanktok, x_2 ) + \pi(0)\ p(x_3 |\blanktok, \blanktok )
\end{align*}
where the mixture coefficient $\pi(i) = (1-\gamma)^i \gamma^{2-i}$\nolinebreak.
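A quick sanity check confirms that these coefficients form a proper convex combination for any $\gamma$: the weights of the four noised trigram contexts always sum to one, since $((1-\gamma)+\gamma)^2 = 1$.

```python
def pi(i, gamma, n=2):
    # pi(i) = (1 - gamma)^i * gamma^(n - i) for a context of length n.
    return (1 - gamma) ** i * gamma ** (n - i)

def trigram_mixture_weight_sum(gamma):
    # Weights of the four contexts (x1, x2), (x1, _), (_, x2), (_, _).
    return pi(2, gamma) + pi(1, gamma) + pi(1, gamma) + pi(0, gamma)
```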
\subsection{Borrowing Techniques}
With the connection between noising and smoothing in place, we now consider how we can improve
the two components of the noising scheme by considering:
\begin{enumerate}
\item Adaptively computing noising probability $\gamma$ to reflect our confidence
about a particular input subsequence.
\item Selecting a proposal distribution $q(x)$ that is less naive
than the unigram distribution by leveraging higher order
$n$-gram statistics.
\end{enumerate}
\paragraph{Noising Probability}
\label{ssec:absdiscount}
Although it simplifies analysis, there is no reason why we should
choose fixed $\gamma$; we now consider defining an adaptive $\gamma(x_{1:t})$ which
depends on the input sequence. Consider the following bigrams:
\begin{center}
{``and the''} \hspace{3cm} {``Humpty Dumpty''}
\end{center}
The first bigram is one of the most common in English corpora;
its probability is hence well estimated and should not be
interpolated with lower order distributions.
In expectation, however, using fixed $\gamma_0$ when noising
results in the same lower order interpolation weight $\pi_{\gamma_0}$
for common as well as rare bigrams.
Intuitively, we should define $\gamma(x_{1:t})$ such that commonly
seen bigrams are less likely to be noised.
The second bigram, ``Humpty Dumpty,'' is relatively uncommon, as are
its constituent unigrams.
However, it forms what \cite{brown1992class} term a ``sticky pair'':
the unigram ``Dumpty'' almost always follows the unigram ``Humpty'',
and similarly, ``Humpty'' almost always precedes ``Dumpty''.
For pairs with high mutual information, we
wish to avoid backing off from the bigram to the unigram distribution.
Let $N_{1+}(x_1, \bullet) \eqdef |\{x_2 : c(x_1, x_2) > 0\}|$
be the number of distinct continuations following $x_1$,
or equivalently the number of bigram types beginning with $x_1$ \citep{chen1996empirical}.
From the above intuitions, we arrive at the \textit{absolute discounting}
noising probability
$$\gamma_\mathrm{AD}(x_1) = \gamma_0 \frac{N_{1+}(x_1, \bullet)}{\sum_{x_2} c(x_1, x_2)}$$
where for $0 \le \gamma_0 \le 1$ we have $0 \le \gamma_\mathrm{AD} \le 1$,
though in practice we can also clip larger noising probabilities to $1$.
Note that this encourages noising of unigrams that precede many possible other
tokens while discouraging noising of common unigrams, since if we ignore
the final token, $\sum_{x_2} c(x_1, x_2) = c(x_1)$.
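Computing $\gamma_\mathrm{AD}$ from bigram statistics takes only a few lines, as in this sketch (toy corpus; $\gamma_0 = 0.4$ is an arbitrary illustrative value):

```python
from collections import Counter

def gamma_ad(corpus, gamma_0):
    """Per-token noising probability gamma_AD(x) = gamma_0 * N_{1+}(x, .) / c(x)."""
    unigram_counts = Counter(corpus)
    continuations = {}  # x -> set of distinct tokens observed directly after x
    for x1, x2 in zip(corpus, corpus[1:]):
        continuations.setdefault(x1, set()).add(x2)
    # Clip at 1 in case the ratio exceeds the valid probability range.
    return {x: min(1.0, gamma_0 * len(continuations.get(x, set())) / c)
            for x, c in unigram_counts.items()}

# "the" is frequent with few continuation types; "Humpty" occurs exactly once.
corpus = ["the", "cat", "the", "dog", "the", "cat", "Humpty", "Dumpty"]
g = gamma_ad(corpus, gamma_0=0.4)
```

As expected, the singleton ``Humpty'' is noised at the full rate $\gamma_0$, while the common token ``the'' is noised less.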
\paragraph{Proposal Distribution}
\begin{table}[bt]
\begin{center}
\begin{tabular}{c l l l}
\toprule
Noised & $\gamma(x_{1:2})$ & $q(x)$ & Analogue\\
\midrule
$x_1$ & $\gamma_0$ & $q(``\blanktok")=1$ & interpolation \\
$x_1$ & $\gamma_0$ & unigram & interpolation \\
$x_1$ & $\gamma_0 N_{1+}(x_1, \bullet)/c(x_1)$ & unigram & absolute discounting \\
$x_1, x_2$ & $\gamma_0 N_{1+}(x_1, \bullet)/c(x_1)$ & $q(x) \propto N_{1+}(\bullet, x)$ & Kneser-Ney \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Noising schemes} Example noising schemes and their bigram smoothing analogues.
Here we consider the bigram probability $p(x_1, x_2) = p(x_2|x_1)p(x_1)$.
Notation: $\gamma(x_{1:t})$ denotes the noising probability for a given input sequence $x_{1:t}$,
$q(x)$ denotes the proposal distribution, and $N_{1+}(x, \bullet)$ denotes
the number of distinct bigrams in the training set where $x$ is the first unigram. In all but the last case
we only noise the context $x_1$ and not the target prediction $x_2$.}
\label{tab:noiseschemes}
\end{table}
While choosing the unigram distribution as the proposal
distribution $q(x)$ preserves unigram frequencies, by borrowing from
the smoothing literature we find another distribution performs better.
We again begin with two motivating examples:
\begin{center}
``San Francisco''\hspace{3cm}``New York''
\end{center}
Both bigrams appear frequently in text corpora.
As a direct consequence, the unigrams ``Francisco'' and ``York'' also appear frequently.
However, since ``Francisco'' and ``York'' typically follow
``San'' and ``New'', respectively, they should not have high probability
in the proposal distribution as they might if we use unigram frequencies~\citep{chen1996empirical}.
Instead, it would be better to increase
the proposal probability of unigrams with diverse histories,
or more precisely unigrams that complete a large number of bigram types.
Thus instead of drawing from the unigram distribution,
we consider drawing from
$$q(x) \propto N_{1+}(\bullet, x).$$
Note that we now noise the prediction $x_t$ in addition to the context $x_{1:t-1}$.
Combining this new proposal distribution with the discounted $\gamma_\mathrm{AD}(x_1)$
from the previous section, we obtain the noising
analogue of Kneser-Ney smoothing.
Table~\ref{tab:noiseschemes} summarizes the discussed noising schemes.
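As an illustration, the Kneser-Ney-style proposal can be built from bigram type counts as below (the toy corpus is our own example, constructed so that ``Francisco'' has a single history):

```python
from collections import Counter

def kn_proposal(corpus):
    """q(x) proportional to N_{1+}(., x), the number of distinct bigram types ending in x."""
    histories = {}  # x -> set of distinct tokens observed directly before x
    for x1, x2 in zip(corpus, corpus[1:]):
        histories.setdefault(x2, set()).add(x1)
    weights = {x: len(h) for x, h in histories.items()}
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

corpus = ["San", "Francisco", "is", "the", "city", "by", "the", "bay",
          "near", "the", "park", "San", "Francisco"]
q = kn_proposal(corpus)
counts = Counter(corpus)
```

Although ``Francisco'' occurs twice, it completes only the single bigram type ``San Francisco'', so its proposal weight matches that of a singleton like ``city'' and is well below that of ``the'', which has three distinct histories.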
\subsection{Training and Testing}
During training, noising is performed per batch and is done online such that
each epoch of training sees a different noised version of the training data.
At test time, to match the training objective we should sample multiple
corrupted versions of the test data, then average the predictions~\citep{srivastava2014dropout}.
In practice, however, we find that simply using the maximum
likelihood (uncorrupted) input sequence works well; evaluation runtime remains
unchanged.
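Schematically, online noising in the training loop looks like the following (a sketch with stand-in names: \texttt{train\_step} would be the actual parameter update, and \texttt{<blank>} the placeholder token):

```python
import random

BLANK = "<blank>"

def noise_batch(batch, gamma, rng):
    # Draw a fresh corruption on every call, so each epoch sees a
    # different noised version of the training data.
    return [[BLANK if rng.random() < gamma else tok for tok in seq] for seq in batch]

def train(batches, n_epochs, gamma, seed=0):
    rng = random.Random(seed)
    seen = []  # record of the noised batches, standing in for model updates
    for _ in range(n_epochs):
        for batch in batches:
            noised = noise_batch(batch, gamma, rng)
            seen.append(noised)  # stand-in for: train_step(model, noised)
    return seen

batches = [[["the", "cat", "sat"], ["on", "the", "mat"]]]
seen = train(batches, n_epochs=2, gamma=0.5)
# At evaluation time the uncorrupted sequence is fed in directly,
# so test-time cost is unchanged.
```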
\subsection{Extensions}
The schemes described are for the language model setting.
To extend them to the sequence-to-sequence or encoder-decoder setting, we noise both
$x_{<t}$ as well as $y_{<t}$.
While in the decoder we have $y_{<t}$ and $y_t$ as analogues to language model
context and target prediction, it is unclear whether noising $x_{<t}$ should be beneficial.
Empirically, however, we find this to be the case (Table~\ref{tab:mt}).
\section{Related Work}
Our work can be viewed as a form of data augmentation,
for which to the best of our knowledge
there exist no widely adopted schemes
in language modeling with neural networks.
Classical regularization methods such as $L_2$-regularization are typically applied to the
model parameters, while dropout is applied
to activations which can be along the forward as well as the recurrent
directions~\citep{zaremba2014recurrent,semeniuta2016recurrent,gal2015dropout}.
Others have introduced methods for recurrent neural networks encouraging
the hidden activations to remain
stable in norm, or constraining the recurrent weight matrix to have eigenvalues
close to one~\citep{krueger2015regularizing,arjovsky2015unitary,le2015simple}.
These methods, however, all consider weights and hidden units instead of the
input data, and
are motivated by the vanishing and exploding gradient problem.
Feature noising has been demonstrated to be effective for structured
prediction tasks, and has been interpreted as an explicit
regularizer~\citep{wang2013feature}.
Additionally,~\cite{wager2014altitude} show that noising can
inject appropriate generative assumptions into discriminative
models to reduce their generalization error, but do not consider sequence models~\citep{wager2016data}.
The technique of randomly zero-masking input word embeddings
for learning sentence representations has been proposed by~\cite{iyyer2015deep},~\cite{kumar2015ask}, and~\cite{dai2015semi},
and adopted by others such as \cite{bowman2015generating}.
However, to the best of our knowledge, no analysis has been provided besides
reasoning that zeroing embeddings may result in a model ensembling effect
similar to that in standard dropout.
This analysis is applicable to classification tasks involving
sum-of-embeddings or bag-of-words models, but does not capture sequence-level effects.
\cite{bengio2015scheduled} also make an empirical observation that the method
of randomly replacing words with fixed probability with a draw from the
uniform distribution
improved performance slightly for an image captioning task;
however, they do not examine why performance improved.
\newcommand\Chapter[2]{\chapter{#2}\label{chp:#1}}
\newcommand\Section[2]{\section{#2}\label{sec:#1}}
\newcommand\Subsection[2]{\subsection{#2}\label{sec:#1}}
\newcommand\Subsubsection[2]{\subsubsection{#2}\label{sec:#1}}
\ifthenelse{\isundefined{\definition}}{\newtheorem{definition}{Definition}}{}
\ifthenelse{\isundefined{\assumption}}{\newtheorem{assumption}{Assumption}}{}
\ifthenelse{\isundefined{\hypothesis}}{\newtheorem{hypothesis}{Hypothesis}}{}
\ifthenelse{\isundefined{\proposition}}{\newtheorem{proposition}{Proposition}}{}
\ifthenelse{\isundefined{\theorem}}{\newtheorem{theorem}{Theorem}}{}
\ifthenelse{\isundefined{\lemma}}{\newtheorem{lemma}{Lemma}}{}
\ifthenelse{\isundefined{\corollary}}{\newtheorem{corollary}{Corollary}}{}
\ifthenelse{\isundefined{\alg}}{\newtheorem{alg}{Algorithm}}{}
\ifthenelse{\isundefined{\example}}{\newtheorem{example}{Example}}{}
\newcommand\cv{\ensuremath{\to}}
\newcommand\cvL{\ensuremath{\xrightarrow{\mathcal{L}}}}
\newcommand\cvd{\ensuremath{\xrightarrow{d}}}
\newcommand\cvP{\ensuremath{\xrightarrow{P}}}
\newcommand\cvas{\ensuremath{\xrightarrow{a.s.}}}
\newcommand\eqdistrib{\ensuremath{\stackrel{d}{=}}}
\newcommand{\E}{\ensuremath{\mathbb{E}}}
\newcommand\KL[2]{\ensuremath{\text{KL}\left( #1 \| #2 \right)}}
\subsection{Machine Translation}
\begin{table}[bt]
\begin{center}
\begin{tabular}{l l l}
\toprule
Scheme & Perplexity & BLEU \\
\midrule
dropout, no noising & 8.84 & 24.6\\
blank noising & 8.28 & 25.3 ($+0.7$)\\
unigram noising & 8.15 & 25.5 ($+0.9$) \\
bigram Kneser-Ney & \textbf{7.92} & \textbf{26.0 ($+1.4$)} \\
\midrule
\hfill source only & 8.74 & 24.8 ($+0.2$) \\
\hfill target only & 8.14 & 25.6 ($+1.0$) \\
\bottomrule
\end{tabular}
\end{center}
\caption{ Perplexities and BLEU scores for machine translation task. Results
for bigram KN noising on only the source sequence and only the target sequence are given as well.}
\label{tab:mt}
\end{table}
For our machine translation experiments we consider
the English-German machine translation track of IWSLT 2015\footnote{\url{http://workshop2015.iwslt.org/}}.
The IWSLT 2015 corpus consists of sentence-aligned subtitles of TED and TEDx talks.
The training set contains roughly 190K sentence pairs with 5.4M tokens.
Following \cite{luong2015stanford}, we
use TED tst2012 as a validation set and report
BLEU score results~\citep{papineni2002bleu} on tst2014.
We limit the vocabulary
to the top 50K most frequent words for each language.
We train a two-layer LSTM encoder-decoder network \citep{sutskever2014sequence,cho2014learning}
with $512$ hidden units in each layer.
The decoder uses an attention mechanism~\citep{bahdanau2014neural} with the dot alignment function~\citep{luong2015effective}.
The initial learning rate is 1.0 and
we start halving the learning rate when the relative difference in perplexity on the validation set
between two consecutive epochs is less than $1\%$.
We follow training protocols as described in \cite{sutskever2014sequence}:
(a) LSTM parameters and word embeddings
are initialized from a uniform distribution
between $[-0.1,0.1]$, (b) inputs
are reversed, (c) batch size is set to 128, (d) gradient clipping is performed
when the norm exceeds a threshold of 5.
We set hidden unit dropout rate to 0.2 across all settings as suggested in \cite{luong2015effective}.
We compare unigram, blank, and bigram Kneser-Ney noising.
Noising rate $\gamma$ is selected on the validation set.
Results are shown in Table~\ref{tab:mt}.
We observe performance gains for both blank noising and unigram noising, giving roughly $+0.7$ BLEU score on the test set. The proposed bigram Kneser-Ney noising scheme gives an additional performance boost of $+0.5$-$0.7$ on top of the blank noising and unigram noising models, yielding a total gain of $+1.4$ BLEU.
\section*{Introduction}
The understanding of Ricci-flat metrics is a classical issue in Riemannian geometry. When they are non-compact, these metrics have at most Euclidean volume growth by the Bishop-Gromov inequality, and those with exactly Euclidean volume growth are asymptotically conical. In dimension $4$, this implies that they are Asymptotically Locally Euclidean (ALE), that is, asymptotic to a quotient of Euclidean space; see Definition \ref{defn-ALE}.
The class of ALE Ricci-flat metrics models the formation of singularities in various noncollapsed situations, in particular for spaces with Ricci curvature bounds \cite{and,Ban-Kas-Nak}. Moreover, these metrics model potential singularities of a $4$-dimensional Ricci flow with bounded scalar curvature \cite{bz}. Even more recently, however, it has been shown that Ricci flows on a closed Riemannian manifold of dimension less than $8$ with bounded scalar curvature exist for all time \cite{Buz-DiM}. Finally, ALE Ricci-flat metrics actually appear as finite-time blow-up limits of some Ricci flows \cite{app-EH}. Their stability therefore becomes a crucial question for the Ricci flow.
\subsection*{Adapting Perelman's $\lambda$-functional to the ALE setting}
In \cite{Per-Ent}, Perelman introduced three functionals denoted $\lambda$, $\mu$ and $\nu$ of which the Ricci flow is the gradient flow. These functionals were the core of his spectacular proofs and revolutionized the understanding of the formation of singularities of the Ricci flow.
In particular, recall that if $(M^n,g)$ is a closed smooth Riemannian manifold, Perelman's energy, denoted by $\lambda(g)$, is defined as follows:
\begin{equation}
\lambda(g):=\inf_{\|\varphi\|_{L^2}=1}\int_M4|\nabla^g\varphi|^2_g+\mathop{\rm R}\nolimits_g\varphi^2\,d\mu_g.\label{Per-Def-Lam}
\end{equation}
In other terms, $\lambda(g)$ is the bottom of the spectrum of the Schr\"odinger operator $-4\Delta_g+\mathop{\rm R}\nolimits_g$. Then Perelman showed that $\lambda$ is monotone non-decreasing along the Ricci flow and it is constant on Ricci-flat metrics only.
Let us point out a major drawback of using the same definition (\ref{Per-Def-Lam}) in a non-compact setting: if $(M^n,g)$ is an ALE metric in a neighborhood (with respect to some natural topology designed on polynomially decaying tensors at a certain rate) of a given ALE Ricci-flat metric, then it can be shown that $\lambda(g)=0$. In other words, the $\lambda$-functional in its usual $L^2$-constrained form is not well-suited because of the lack of nontrivial minimizers.
Our goal here is twofold. On the one hand, we aim at defining a functional on suitable neighborhoods of any ALE Ricci-flat metrics which detects Ricci-flat metrics only. On the other hand, we wish to define an adequate notion of linear stability for an ALE Ricci-flat metric tied to our functional in order to study the relation between linear stability and dynamical stability along the Ricci flow. The second goal will be addressed in a forthcoming paper.
Our work relies on that of Haslhofer \cite{Has-Per-Fct} who introduced a functional which we denote by $\lambda_{\operatorname{ALE}}^0$ and where the minimization in (\ref{Per-Def-Lam}) takes place among test functions $\varphi$ such that $\varphi-1$ is compactly supported. It is a convenient functional when the scalar curvature is nonnegative and integrable: in \cite{Has-Per-Fct}, $\lambda_{\operatorname{ALE}}^0(g)$ is compared to the ADM mass $m_{\operatorname{ADM}}(g)$ of $g$ in order to give a quantitative version of the positive mass theorem on asymptotically flat manifolds (satisfying additional assumptions).
Despite its good properties with respect to the Ricci flow or its link to the ADM mass, the functional $\lambda_{\operatorname{ALE}}^0$ is only defined on a suitable neighborhood of metrics of a given ALE Ricci-flat metric $(N^n,g_b)$ whose scalar curvature is either integrable or decays sufficiently fast at infinity. Such sets of metrics are not closed with respect to the topology induced by H\"older spaces modeled on polynomially decaying tensors with rate $\tau>\frac{n-2}{2}$. This classical restriction on the rate ensures that any deformation of the metric has its gradient lying in $L^2$. This observation is a major obstacle to establishing finer properties of the functional $\lambda_{\operatorname{ALE}}^0$ such as the \L ojasiewicz inequality we discuss below.
To remedy these issues, we refine the definition of $\lambda_{\operatorname{ALE}}^0$ by subtracting the ADM mass. This yields a new functional called $\lambda_{\operatorname{ALE}}$ and formally defined by:
\begin{equation}
\lambda_{\operatorname{ALE}}(g):= \lambda_{\operatorname{ALE}}^0(g)-m_{\operatorname{ADM}}(g).\label{formal-intro-lambda}
\end{equation}
At first sight, the functional $\lambda_{\operatorname{ALE}}$ still seems to require the integrability of the scalar curvature to make sense of each term: it turns out that (\ref{formal-intro-lambda}) can be reinterpreted as the limit of the difference of two a priori divergent integrals: see Section \ref{extension tilde lambda} for a precise statement. Again, in case a metric $g$ has integrable scalar curvature and lies in a neighborhood of an ALE Ricci-flat metric, the formula (\ref{formal-intro-lambda}) makes sense and lets one compute $\lambda_{\operatorname{ALE}}$ more explicitly. We show that this functional fulfills our specifications: it is defined on a whole neighborhood of an ALE Ricci-flat metric, whether its ADM mass is finite or not. Furthermore, it is analytic, its gradient vanishes at Ricci-flat ALE metrics only, and the Ricci flow is moreover its gradient flow: see Section \ref{extension tilde lambda} for rigorous statements and proofs.
The second variation of $\lambda_{\operatorname{ALE}}$ at an ALE Ricci-flat metric $(N^n,g_b)$ along divergence-free variations is half the Lichnerowicz operator $L_{g_b}:=\Delta_{g_b}+2\mathop{\rm Rm}\nolimits(g_b)\ast$, i.e. if $h$ is a smooth compactly supported $2$-tensor on $N$,
\begin{equation}
\delta^2_{g_b}\lambda_{\operatorname{ALE}}(h,h)=\frac{1}{2}\left<L_{g_b}h,h\right>_{L^2},\quad \mathop{\rm div}\nolimits_{g_b}h=0.
\end{equation}
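Here $\mathop{\rm Rm}\nolimits(g_b)\ast$ denotes the usual action of the curvature tensor on symmetric $2$-tensors which, in local coordinates and up to sign conventions on $\mathop{\rm Rm}\nolimits$, reads
\begin{equation*}
\left(\mathop{\rm Rm}\nolimits(g_b)\ast h\right)_{ij}=\mathop{\rm Rm}\nolimits(g_b)_{ikjl}h^{kl},
\end{equation*}
so that $L_{g_b}$ coincides with the Lichnerowicz Laplacian of $g_b$, the Ricci terms of the latter vanishing since $g_b$ is Ricci-flat.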
This fact alone strongly suggests that the linear stability of an ALE Ricci-flat metric $(N^n,g_b)$ should be defined in terms of the non-positivity of the associated Lichnerowicz operator $L_{g_b}$ restricted to divergence-free variations: this guess is not new and was investigated in the setting of Ricci-flat cones by Haslhofer-Hall-Siepmann \cite{Hal-Has-Sie}. Linear stability actually gives some non-trivial local information in the integrable case, i.e. in the case where the space of ALE Ricci-flat metrics in the neighborhood of a fixed ALE Ricci-flat metric is a smooth finite-dimensional manifold: see Proposition \ref{local maximum stable integrable} for a formal statement.
\begin{prop}\label{prop-sta-int}
Any ALE Ricci-flat metric $(N^n,g_b)$ which is locally stable and integrable is a local maximum for the functional $\lambda_{\operatorname{ALE}}$.
\end{prop}
The statement of Proposition \ref{prop-sta-int} echoes \cite{Has-Sta}: the proof is however technically different.
We also obtain in Section \ref{sec-covid-mass} new quantitative positive mass theorems in the same spirit as \cite{Has-Per-Fct} for small metric perturbations of \emph{stable} Ricci-flat ALE metrics: see Corollary \ref{local positive mass}. Note that the positive mass theorem generally does not hold on ALE manifolds \cite{LeB-Counter-Mass}.
The positive mass theorem does however hold for spin ALE manifolds, see \cite{nak}. Pushing Nakajima's estimate further, similarly to \cite{Has-Per-Fct} in the asymptotically Euclidean setting, we prove the following \emph{global} property satisfied by the functional $\lambda_{\operatorname{ALE}}$ on spin manifolds.
\begin{prop}[Proposition \ref{prop-spin-def-local}]
Let $(N^4,g)$ be an ALE metric of order $\tau>1 = \frac{4}{2}-1$ on a spin manifold asymptotic to $\mathbb{R}^4\slash\Gamma$ for $\Gamma\subset SU(2)$.
Assume the scalar curvature $\mathop{\rm R}\nolimits_g$ is integrable and non-negative. Then, we have $$\lambda_{\operatorname{ALE}}(g)\leq 0,$$
that is
$$m_{\operatorname{ADM}}(g)\geqslant\lambda_{\operatorname{ALE}}^0(g)\geqslant 0,$$
with equality if and only if $(N^4,g)$ is one of Kronheimer's gravitational instantons \cite{kro}.
\end{prop}
These well-known gravitational instantons are therefore the only maximizers of $\lambda_{\operatorname{ALE}}$ among metrics with nonnegative and integrable scalar curvature.
\subsection*{A \L{}ojasiewicz inequality for $\lambda_{\operatorname{ALE}}$}
A number of difficulties arise when it comes to studying the dynamical stability of ALE Ricci-flat metrics along the Ricci flow.
A first obstacle is the presence of a non-trivial kernel of the Lichnerowicz operator. This issue already occurs in the case of a closed Ricci-flat metric. The non-compactness of the underlying space is an additional source of trouble since $0$ is not isolated in the spectrum of the linearized operator. This fact explains a polynomial-in-time convergence instead of an exponential rate in the case of an integrable Ricci-flat metric on a closed manifold, as demonstrated in \cite{Has-Sta}.
One tool that has been quite popular to study the stability of fixed points of geometric evolution equations is the notion of \L{}ojasiewicz-Simon inequalities. Its name comes both from the classical work of \L{}ojasiewicz \cite{loj} on finite-dimensional dynamical systems of gradient type and from that of L. Simon \cite{sim}, who systematically extended these inequalities to functionals defined on infinite-dimensional spaces. The main geometric applications obtained in \cite{sim} concern the uniqueness of tangent cones of isolated singularities of minimal surfaces in Euclidean space together with the uniqueness of tangent maps of minimizing harmonic maps with values into an analytic closed Riemannian manifold. These geometric equations have the advantage of being strongly elliptic. Notice that none of these results holds if one drops the analyticity assumption on the data under consideration.
\L{}ojasiewicz-Simon inequalities have been extensively used these last years in the context of mean curvature flow, especially to prove the uniqueness of blow-ups \cite{Col-Min-Uni-MCF}.
In the compact setting, \L{}ojasiewicz inequalities have been proved for Perelman's $\lambda$-functional in the neighborhood of \emph{compact} Ricci-flat metrics in \cite{Has-Sta} in the integrable case, and in \cite{Has-Mul} in the general case, see also \cite{Kro-Sta-Ins} for Ricci solitons. They have been applied to characterize the stability and instability of the Ricci flow at compact Ricci-flat metrics.
Our main application is to prove a similar result for Ricci-flat ALE metrics. We provide a general scheme of proof for \L{}ojasiewicz inequalities on non-compact manifolds based on the theory of elliptic operators between weighted H\"older spaces.
We first show how such an inequality can be proved ``by hand'' in the integrable and stable situation, but this only leads to an actual \L ojasiewicz inequality in dimensions greater than or equal to $5$. We then introduce a general scheme of proof, based on that of \cite{Col-Min-Ein-Tan-Con}, for weighted \L ojasiewicz inequalities for ALE metrics which holds without integrability or stability assumption in dimensions greater than or equal to $4$, and we provide an optimal exponent in the integrable case. By weighted \L ojasiewicz inequalities, we mean that the norm of the gradient of the corresponding functional is a weighted $L^2$-norm. More specifically here, we consider the space $L^2_{\frac{n}{2}+1}$ which, roughly speaking, is the space of tensors $T$ such that $r\cdot T$ belongs to $L^2$, $r$ being the distance from a fixed point: see Definition \ref{def-weighted-sobolev-norms} for a formal definition. The necessity of using such weighted norms rather than the usual $L^2$-norm is explained below.
Our main result is that the functional $\lambda_{\operatorname{ALE}}$ satisfies an $L^2_{\frac{n}{2}+1}$-\L ojasiewicz inequality in a neighborhood of any ALE Ricci-flat metric with respect to the topology of weighted H\"older spaces $C^{2,\alpha}_{\tau}$, $\alpha\in(0,1)$, with polynomial decay of rate $\tau\in (\frac{n-2}{2},n-2)$.
\begin{theo}\label{dream-thm-loja-intro}
Let $(N^n,g_b)$ be an ALE Ricci-flat manifold of dimension $n\geq 4$. Let $\alpha\in(0,1)$ and $\tau\in(\frac{n-2}{2},n-2)$. Then there exist a neighborhood $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ of $g_b$, a constant $C>0$ and $\theta\in (0,1)$ such that for any metric $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, we have the following $L^2_{\frac{n}{2}+1}$-\L ojasiewicz inequality,
\begin{equation}
|\lambda_{\operatorname{ALE}}(g)|^{2-\theta}\leq C\|\nabla \lambda_{\operatorname{ALE}}(g)\|_{L^2_{\frac{n}{2}+1}}^{2}.\label{loj-ineq-lambda-ALE}
\end{equation}
Moreover, if $(N^n,g_b)$ has integrable infinitesimal Ricci-flat deformations, then $\theta=1$.
In particular, if $n\geq 5$, one has the following $L^2$-\L ojasiewicz inequality for integrable Ricci-flat ALE metrics: if $\tau\in(\frac{n}{2},n-2)$ then for any $0<\delta<\frac{2\tau-(n-2)}{2\tau-(n-4)}$, there exists $C>0$ such that for all $g\in B_{C^{2,\alpha}_\tau}(g_b,\varepsilon)$,
$$ |\lambda_{\operatorname{ALE}}(g)|^{2-\theta_{L^2}}\leq C \|\nabla \lambda_{\operatorname{ALE}}(g)\|_{L^2}^{2}, \quad\theta_{L^2}:=2-\frac{1}{\delta}.$$
\end{theo}
Here $\nabla \lambda_{\operatorname{ALE}}$ denotes the gradient of $\lambda_{\operatorname{ALE}}$ in the $L^2$ sense.
The fact that our spaces are non-compact induces quite a lot of new difficulties. In particular, the spectrum of the Lichnerowicz operator is not discrete anymore and $0$ belongs to the essential spectrum. This explains the need to consider weighted Sobolev spaces different from $L^2$ for which the differential of $\nabla\lambda_{\operatorname{ALE}}$ at a Ricci-flat ALE metric is Fredholm. We underline the fact that while Theorem \ref{dream-thm-loja-intro} gives an optimal $L^2_{\frac{n}{2}+1}$-\L ojasiewicz inequality, we cannot reach the usual optimal $L^2$-\L{}ojasiewicz exponent $\theta_{L^2} = 1$ in the integrable case (note that we approach it as $\tau$ tends to $n-2$ and the dimension tends to $+\infty$): see also \cite[Theorem $2.1$]{Har-jen-Loja-Hil} for a proof of this fact in a general linear setting. This is consistent with the known fact that the DeTurck-Ricci flow only converges polynomially fast for perturbations of the Euclidean space: see for instance \cite{Sch-Sch-Sim} and \cite{app-scal}. Indeed, an exponent $\theta_{L^2} =1$ implies that the convergence is exponential.
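At a purely heuristic level, the link between the \L ojasiewicz exponent and the convergence rate can be seen on the following model ODE: if $u(t):=-\lambda_{\operatorname{ALE}}(g(t))\geq0$ satisfies $u'\leq-cu^{2-\theta}$ for some $c>0$ along a gradient flow of $\lambda_{\operatorname{ALE}}$, then integrating this differential inequality yields
\begin{equation*}
u(t)\leq u(0)e^{-ct}\,\text{ if }\theta=1,\qquad u(t)\leq\left(u(0)^{\theta-1}+c(1-\theta)t\right)^{-\frac{1}{1-\theta}}\,\text{ if }\theta\in(0,1),
\end{equation*}
i.e. exponential convergence to $0$ for $\theta=1$ and a polynomial rate otherwise.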
\begin{rk}
Most of the analysis of this article should apply to the Ricci-flat asymptotically conical case. However, the other classical asymptotics in dimension $4$, namely ALF, ALG and ALH, should require more involved arguments.
\end{rk}
In a forthcoming article, we use the $L^2_{\frac{n}{2}+1}$-\L{}ojasiewicz inequality \eqref{loj-ineq-lambda-ALE} to investigate the dynamical stability of Ricci-flat ALE metrics.
\subsection*{Outline of paper}
In Section \ref{sec-rel-ene-ALE}, we begin by recalling the basics of ALE Ricci-flat metrics including the definitions of polynomially weighted H\"older and Sobolev function spaces. Next, Section \ref{sec-mass-def} recalls the definition of the $\operatorname{ADM}$-mass of an ALE Ricci-flat metric and discusses various topologies on the set of nearby metrics which appear in the literature on the study of the mass on ALE metrics. Section \ref{sec-def-fun-lambda-0} introduces the $\lambda_{\operatorname{ALE}}^0$-functional and studies its basic properties in the previously mentioned topology: this is the content of Proposition \ref{existence propriete-wg}. Section \ref{sec-first-sec-var} is devoted to compute the first and second variations of $\lambda_{\operatorname{ALE}}^0$ which are summarized in Propositions \ref{first-var-prop} and \ref{second-var-prop}.
In Section \ref{extension tilde lambda}, Proposition \ref{lambdaALE analytic} proves that subtracting the ADM mass from $\lambda_{\operatorname{ALE}}^0$ yields a much better-behaved, analytic functional that we denote $\lambda_{\operatorname{ALE}}$. The second variation of $\lambda_{\operatorname{ALE}}$ is computed in Proposition \ref{snd-var-gal-lambda}. Moreover, the monotonicity of $\lambda_{\operatorname{ALE}}$ along the Ricci flow is established in Proposition \ref{prop-mono-lambda}. We also take the opportunity to define the linear stability of an ALE Ricci-flat metric: Lemma \ref{lemma-equiv-def-stable} gives two other equivalent ways of defining the notion of linear stability for such a class of metrics.
In Sections \ref{sec-ene-est-pot-fct} and \ref{fred-sec-prop}, we prove some technical results which are crucial for the rest of the paper. We first prove energy bounds for the potential function appearing in the definition of $\lambda_{\operatorname{ALE}}$ in Proposition \ref{prop-ene-pot-fct}. We then give the Fredholm properties of the Hessian of $\lambda_{\operatorname{ALE}}$, the Lichnerowicz Laplacian, in weighted H\"older and Sobolev spaces: this is the content of Proposition \ref{prop-lic-fred}.
In Sections \ref{loj-sim-sec-int-case} and \ref{sec-loja-ineq-gal-case}, we prove \L{}ojasiewicz inequalities satisfied by the functional $\lambda_{\operatorname{ALE}}$. We first consider a na\"ive proof ``by hand'' in the integrable and stable case in dimension greater than or equal to $5$ in Section \ref{loj-sim-sec-int-case}. More specifically, Section \ref{sec-int-ric-fla} starts with a precise description of neighborhoods of integrable ALE Ricci-flat metrics: see Proposition \ref{gauge fixing ALE integrable}. Using it, we prove Proposition \ref{prop-sta-int}, reformulated more accurately in Proposition \ref{local maximum stable integrable}. In Section \ref{naive-loja-sec}, a first attempt to prove a \L ojasiewicz inequality for integrable ALE Ricci-flat metrics is given by following an idea due to Haslhofer \cite{Has-Sta}: Proposition \ref{prop-baby-loja} yields an infinitesimal version of the \L ojasiewicz inequality for $\lambda_{\operatorname{ALE}}$ with a non-trivial \L ojasiewicz exponent $\theta$ in dimension greater than or equal to $5$. This can be interpreted as a preliminary step to the general case developed in Section \ref{sec-loja-ineq-gal-case}.
We then prove a general \L{}ojasiewicz inequality in Section \ref{sec-loja-ineq-gal-case} by extending the classical reduction to the finite-dimensional situation of \cite{sim}: more precisely, we adapt the concise version of \cite{Col-Min-Ein-Tan-Con} to this non-compact setting in Sections \ref{sec-gal-loja} and \ref{sec-prop-loja-ineq}. This requires the study of the Fredholm properties of the Lichnerowicz operator between weighted H\"older and Sobolev spaces. The proof of a general \L ojasiewicz inequality (\ref{loj-ineq-lambda-ALE}) needs a priori energy estimates which are taken care of by Proposition \ref{prop-energy-est} in Section \ref{sec-proof-gal-loja}, as required by Proposition \ref{Lojasiewicz ineq weighted}. Section \ref{sec-proof-gal-loja} ends with the proof of Theorem \ref{dream-thm-loja-intro}, restated as Theorem \ref{theo-loja-ALE} (the general case) and Theorem \ref{theo-loja-int-opt} in the integrable case.
In Section \ref{sec-covid-mass}, we give some connections between the mass $m_{\operatorname{ADM}}$ and the functional $\lambda_{\operatorname{ALE}}$. We deduce that the positive mass theorem holds in neighborhoods of Ricci-flat ALE metrics which are local maximizers of $\lambda_{\operatorname{ALE}}$. In particular, any compactly supported (or sufficiently decaying) deformation with nonnegative scalar curvature of a given integrable and locally stable Ricci-flat ALE metric is itself Ricci-flat: see Corollary \ref{local positive mass}. Proposition \ref{prop-spin-def-local} shows that ALE Ricci-flat metrics on spin manifolds are global maximizers of $\lambda_{\operatorname{ALE}}$.
Finally, Appendix \ref{app-A} recalls some basic results about real analytic maps between Banach spaces, Appendix \ref{app-B} proves a divergence-free gauge-fixing for ALE metrics, and Appendix \ref{app-C} recalls the variations of some geometric quantities.
\subsection{Acknowledgements}
The first author is supported by grant ANR-17-CE40-0034 of the French National Research Agency ANR (Project CCEM).
\section{A relative energy for ALE Ricci-flat metrics}\label{sec-rel-ene-ALE}
Let us introduce a non-compact version of Perelman's $\lambda$-functional and study its properties on ALE metrics.
\subsection{Main definitions}~~\\
We start by defining the class of metrics as well as the function spaces we will be interested in.
\begin{defn}[Asymptotically locally Euclidean (ALE) manifolds]\label{defn-ALE}
We will call a Riemannian manifold $(N^n,g)$ \emph{asymptotically locally Euclidean} (ALE) of order $\tau>0$ if the following holds: there exist a compact set $K\subset N$, a radius $R>0$, a finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$ and a diffeomorphism $\Phi : (\mathbb{R}^n\slash\Gamma )\backslash B_e(0,R)\to N\backslash K$ such that, if we denote by $g_e$ the Euclidean metric on $\mathbb{R}^n\slash\Gamma$, we have, for all $k\in \mathbb{N}$,
$$ \rho^k\big|\nabla^{g_e,k}(\Phi^*g-g_e)\big|_e = O(\rho^{-\tau}),$$
on $\big(\mathbb{R}^n\slash\Gamma\big) \backslash B_e(0,R)$, where $\rho = d_e(.,0)$.
\end{defn}
If $g$ is an ALE metric on $N$, then we denote by $\rho_g$ any smooth positive extension of (the push-forward of) the radial distance on $\mathbb{R}^n$, $\Phi_{\ast}\rho$. In particular, we will constantly use the fact that the level sets $\{\rho_g=R\}$ of $\rho_g$ are smooth closed connected hypersurfaces for sufficiently large height $R$.
We will study ALE metrics in a neighborhood of a Ricci-flat ALE metric. Let us start by defining this neighborhood thanks to weighted norms:
\begin{defn}[Weighted Hölder norms for ALE metrics]\label{def-weighted-norms}
Let $(N,g,p)$ be an ALE manifold of dimension $n$ and let $\beta>0$. For any tensor $s$, we define the following weighted $C^{k,\alpha}_\beta$-norm:
$$\| s \|^g_{C^{k,\alpha}_\beta} := \sup_{N}\rho_g^\beta\Big( \sum_{i=0}^k\rho_g^{i}|\nabla^{g,i} s|_{g} + \rho_g^{k+\alpha}[\nabla^{g,k}s]_{C^{0,\alpha}}\Big).$$
\end{defn}
\begin{defn}[Weighted Sobolev norms for ALE metrics]\label{def-weighted-sobolev-norms}
Let $\beta>0$ and let $(N,g,p)$ be an ALE manifold of dimension $n$. For any tensor $s$, we define the following weighted $L_\beta^2$-norm:
$$\| s \|_{L^{2}_{\beta}} ^g:= \Big(\int_N |s|^2 \rho_{g}^{2\beta-n}d\mu_{g}\Big)^\frac{1}{2}.$$
We moreover define the $H^k_{\beta}$-norm of $s$ as
$$\|s\|_{H^k_\beta}^g:= \sum_{i= 0}^k {\|\nabla^i s \|^g_{L^{2}_{\beta+i}}}.$$
\end{defn}
\begin{rk}
The usual $L^2$ space equals $L^2_\frac{n}{2}$ with the above definition. The intuition behind these norms is that $ \rho_{g}^{-\beta}\in C^{k,\alpha}_\beta$ and for all $\beta'>\beta$, $\rho_{g}^{-\beta'}\in H^k_\beta$.
\end{rk}
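As a sanity check of the previous remark, if $\beta'>\beta$ then $\rho_g^{-\beta'}\in L^2_\beta$ since, outside a compact set,
\begin{equation*}
\int_{\{\rho_g\geq R\}}\rho_g^{-2\beta'}\rho_g^{2\beta-n}\,d\mu_g\leq C\int_R^{+\infty}r^{2(\beta-\beta')-1}\,dr<+\infty,
\end{equation*}
the exponent $2(\beta-\beta')-1$ being less than $-1$ exactly when $\beta'>\beta$. Similarly, $\rho_g^{-\beta}\in C^{k,\alpha}_\beta$ since each term $\rho_g^{\beta+i}|\nabla^{g,i}\rho_g^{-\beta}|_g$ is bounded.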
\begin{rk}\label{sobolev embeddings}
Moreover, we have the following embeddings: for $k$ large enough depending on the dimension: $$H^k_\beta\subset C^{0,\alpha}_\beta,$$ and for $\beta<\beta'$, we have $$C^{k,\alpha}_{\beta'}\subset H^k_\beta,$$ see \cite[Theorem $1.2$]{Bart-Mass}. Notice that in \cite{Bart-Mass}, $W^{k,2}_{\beta}$ coincides with our space $H^k_{-\beta}$ for $\beta\in\mathbb{R}$.
\end{rk}
\begin{note}
In the rest of this article, we will almost always work in a neighborhood of a fixed Ricci-flat metric with a given topology. Since the above definitions do not formally depend on the type of tensor, we will often abusively omit to mention this information and simply denote these spaces by $C^{k,\alpha}_\tau$ for instance.
\end{note}
Finally, we state a version of Hardy's inequality proved by Minerbe \cite{Min-Har-Ine} for Riemannian metrics $(N^n,g)$ with nonnegative Ricci curvature and maximal volume growth, i.e. $\mathop{\rm AVR}\nolimits(g):=\lim_{r\rightarrow+\infty}r^{-n}\mathop{\rm Vol}\nolimits_gB_g(p,r)>0$ for some point $p$, and hence for all points by the Bishop-Gromov theorem:
\begin{theo}[Minerbe's Hardy's inequality]\label{thm-min-har-inequ}
Let $(N^n,g)$ be a complete Riemannian manifold such that $\mathop{\rm Ric}\nolimits(g)\geq 0$ and $\mathop{\rm AVR}\nolimits(g)>0$. Then for some point $p\in N$,
\begin{equation*}
\int_Nr_p^{-2}\varphi^2\,d\mu_g\leq C(n,\mathop{\rm AVR}\nolimits(g))\int_N|\nabla^{g}\varphi|_{g}^2\,d\mu_{g},\quad \forall \varphi\in C_c^{\infty}(N),
\end{equation*}
where $r_p(x):=d_g(p,x)$ for $x\in N$.
\end{theo}
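To fix ideas, on flat $\mathbb{R}^n$ with $n\geq3$, Theorem \ref{thm-min-har-inequ} reduces to the classical Hardy inequality, which holds with the sharp constant $\left(\frac{2}{n-2}\right)^2$:
\begin{equation*}
\int_{\mathbb{R}^n}\frac{\varphi^2}{|x|^2}\,dx\leq\left(\frac{2}{n-2}\right)^2\int_{\mathbb{R}^n}|\nabla\varphi|^2\,dx,\quad\forall\varphi\in C_c^{\infty}(\mathbb{R}^n).
\end{equation*}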
\subsection{The mass of an ALE metric}\label{sec-mass-def}~~\\
Next, we define the class of ALE metrics (with a finite amount of regularity at infinity) we will focus on at the beginning of this article:
let $(N^n,g_b)$ be an ALE Ricci-flat metric and let us consider for $\tau>\frac{n-2}{2}$ and $\alpha\in(0,1)$, the following space of metrics
\begin{equation}
\mathcal{M}^{2,\alpha}_{\tau}(g_b):= \left\{\text{$g$ is a metric on $N$}\,|\,g-g_b\in C^{2,\alpha}_{\tau}(S^2T^*N)\,,\, \mathop{\rm R}\nolimits_{g} = O(\rho_{g_b}^{-\tau'})\,\text{for some $\tau'>n$}\right\}.
\end{equation}
It turns out that this space is convex:
\begin{lemma}\label{lemm-charac-M}
We have the following characterization of the space $\mathcal{M}^{2,\alpha}_{\tau}(g_b)$: a metric $g\in \mathcal{M}^{2,\alpha}_{\tau}(g_b)$ if and only if $g-g_b\in C^{2,\alpha}_{\tau}(S^2T^*N)$ and the function $\mathop{\rm div}\nolimits_{g_b}\mathop{\rm div}\nolimits_{g_b}(g-g_b)-\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}(g-g_b)$ is in $C^0_{\tau'}(N)$ for some $\tau'>n$.
In particular, the space $\mathcal{M}^{2,\alpha}_{\tau}(g_b)$ is convex.
\end{lemma}
\begin{proof}
If $g_1$ and $g_2$ belong to $\mathcal{M}^{2,\alpha}_{\tau}(g_b)$, it is straightforward that any convex combination $\lambda_1g_1+\lambda_2g_2$, $\lambda_i\geq 0$, $i=1,2$, $\lambda_1+\lambda_2=1$, is again a metric on $N$ such that $\lambda_1g_1+\lambda_2g_2-g_b\in C_{\tau}^{2,\alpha}(S^2T^*N)$. Now, by linearizing the scalar curvatures of the metrics $g_i=:g_b+h_i$, $i=1,2$ with respect to the metric $g_b$ thanks to [(\ref{lem-lin-equ-scal-first-var}), Lemma \ref{lem-lin-equ-Ric-first-var}], one gets:
\begin{equation*}
\mathop{\rm R}\nolimits_{g_b+h_i}=\mathop{\rm R}\nolimits_{g_b}+\mathop{\rm div}\nolimits_{g_b}\mathop{\rm div}\nolimits_{g_b}h_i-\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}h_i+O(\rho_{g_b}^{-2\tau-2}).
\end{equation*}
Since $\mathop{\rm R}\nolimits_{g_b+h_i}=O(\rho_{g_b}^{-\tau_i'})$, for some $\tau'_i>n$, $i=1,2$, one concludes that
\begin{equation}
\mathop{\rm div}\nolimits_{g_b}\mathop{\rm div}\nolimits_{g_b}h_i-\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}h_i=O\left(\rho_{g_b}^{-\min\{2\tau+2,\tau_i'\}}\right).\label{decay-lin-scal-cur}
\end{equation}
In particular, by summing the previous Taylor expansions of the scalar curvatures of $g_b+h_i$, $i=1,2$ together with (\ref{decay-lin-scal-cur}), one observes that:
\begin{equation*}
\begin{split}
\mathop{\rm R}\nolimits_{g_b+\lambda_1h_1+\lambda_2h_2}&=\sum_{i=1}^2\lambda_i\left(\mathop{\rm div}\nolimits_{g_b}\mathop{\rm div}\nolimits_{g_b}h_i-\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}h_i\right)+O(\rho_{g_b}^{-2\tau-2})\\
&=O\left(\rho_{g_b}^{-\min\{2\tau+2,\tau_1',\tau'_2\}}\right).
\end{split}
\end{equation*}
This proves the convexity of the space under consideration since $2\tau+2>n$.
\end{proof}
\begin{rk}
We already see the importance of the assumption $\tau>\frac{n-2}{2}$, which will be crucial throughout this paper: it ensures that the nonlinear terms in the expansion of the Ricci or scalar curvature around a given ALE Ricci-flat metric decay faster than the linear ones and are integrable. When dealing with improper integrals for instance, only the linear terms have to be taken care of.
\end{rk}
We endow the space $\mathcal{M}^{2,\alpha}_{\tau}(g_b)$ with the distance induced by the norm $\|\cdot\|_{C^{2,\alpha}_{\tau}}$ as a convex subspace of $C^{2,\alpha}_{\tau}(S^2T^*N)$ and write $\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$ for $\mathcal{M}^{2,\alpha}_{\tau}(g_b)\,\cap B_{C^{2,\alpha}_{\tau}}(0_{S^2T^*N},\varepsilon).$
Our choice of notation follows that of Dai-Ma \cite{Dai-Ma-Mass}, Lee-Parker \cite{Lee-Parker} and Bartnik \cite{Bart-Mass}, who consider the more classical space of metrics
\begin{equation}
\mathcal{M}_\tau:= \left\{\text{$g$ is a metric on $N$}\,|\, g-g_b\in C^{1,\alpha}_{\tau}(S^2T^*N),\ \mathop{\rm R}\nolimits_g \in L^1\right\},
\end{equation}
on which the mass of an ALE metric is well-defined:
\begin{equation}
m_{\operatorname{ADM}}(g):=\lim_{R\rightarrow+\infty}\int_{\{\rho_{g_b}=R\}}\left<\mathop{\rm div}\nolimits_{g_b}(g-g_b)-\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}(g-g_b),\mathbf{n}\right>_{g_b}\,d\sigma_{g_b},\label{def-mass}
\end{equation}
where $\mathbf{n}$ denotes the outward unit normal of the closed smooth hypersurfaces $\{\rho_{g_b}=R\}$ for $R$ large.
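To fix the normalization in (\ref{def-mass}), consider a model computation of Schwarzschild type on $\mathbb{R}^n\slash\Gamma$, $n\geq 3$ (a formal computation, assuming the chart at infinity is exactly Euclidean): if $g=g_e+A\rho^{2-n}g_e+O(\rho^{1-n})$ for some constant $A$, then $\mathop{\rm div}\nolimits_{g_e}(g-g_e)-\nabla^{g_e}\mathop{\rm tr}\nolimits_{g_e}(g-g_e)=(1-n)A\nabla^{g_e}\rho^{2-n}+O(\rho^{-n})$, so that
\begin{equation*}
m_{\operatorname{ADM}}(g)=\lim_{R\rightarrow+\infty}\int_{\{\rho=R\}}(n-1)(n-2)A\rho^{1-n}\,d\sigma_{g_e}=\frac{(n-1)(n-2)\mathop{\rm Vol}\nolimits(\mathbb{S}^{n-1})}{|\Gamma|}\,A,
\end{equation*}
since the level set $\{\rho=R\}$ has area $R^{n-1}\mathop{\rm Vol}\nolimits(\mathbb{S}^{n-1})\slash|\Gamma|$.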
\subsection{Definition of the functional $\lambda_{\operatorname{ALE}}^0$ and its main properties}\label{sec-def-fun-lambda-0}~~\\
We will use the renormalized Perelman's functional introduced by Haslhofer in \cite{Has-Per-Fct} to study ALE metrics that are close to Ricci-flat metrics. It can be defined in the following way for ALE metrics.
\begin{defn}[$\lambda_{\operatorname{ALE}}^0$, a first renormalized Perelman's functional]
Let $(N^n,g_b)$ be an ALE Ricci-flat metric and let $g\in \mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$. Define the $\mathcal{F}_{\operatorname{ALE}}$-energy by:
\begin{eqnarray}
\mathcal{F}_{\operatorname{ALE}}(w,g):=\int_N\big(4|\nabla^g w|_g^2 +\mathop{\rm R}\nolimits_g w^2 \big)\,d\mu_g,
\end{eqnarray}
where $w-1\in C^{\infty}_c(N)$, the space of compactly supported smooth functions on $N$.
The $\lambda_{\operatorname{ALE}}^0$-functional associated to the $\mathcal{F}_{\operatorname{ALE}}$-energy is:
$$\lambda_{\operatorname{ALE}}^0(g) := \inf_w \mathcal{F}_{\operatorname{ALE}}(w,g),$$
where the infimum is taken over functions $w:N\rightarrow \mathbb{R}$ such that $w-1\in C_c^\infty(N)$.
\end{defn}
\begin{rk}
By testing the infimum condition with $w \equiv 1$, we get the upper bound
\begin{equation}
\lambda_{\operatorname{ALE}}^0(g)\leq \int_N\mathop{\rm R}\nolimits_g\,d\mu_g.\label{borne sup lambda ALE}
\end{equation}
The assumptions on the decay rates $\tau$ and $\tau'$ in the definition of the space $\mathcal{M}^{2,\alpha}_{\tau}(g_b)$ are crucial to make sense of the functional $\lambda_{\operatorname{ALE}}^0(g)$: in particular, they ensure the integrability of the scalar curvature $\mathop{\rm R}\nolimits_g$ of such an ALE metric $g$. The non-trivial fact that $\lambda_{\operatorname{ALE}}^0(g)>-\infty$ will be established in the following section: it relies on Hardy's inequality (Theorem \ref{thm-min-har-inequ}). See the proof of Proposition \ref{existence propriete-wg}.
\end{rk}
Let us now prove that the functional $\lambda_{\operatorname{ALE}}^0$ has nice properties in sufficiently small neighborhoods of Ricci-flat ALE metrics. Let us mention that according to \cite{Ban-Kas-Nak,Che-Tian-Ric-Fla}, any $n$-dimensional Ricci-flat ALE metric is ALE of order $n$.
\begin{prop}\label{existence propriete-wg}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in (0,1)$.
Then, there exists some positive $\varepsilon$ such that for any metric $g$ in a neighborhood $\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)= \mathcal{M}^{2,\alpha}_{\tau}(g_b)\cap B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ of $g_b$, the infimum defining the functional $\lambda_{\operatorname{ALE}}^0(g)$ is attained by the unique solution $w_g$ to the following equation,
\begin{equation}
\left\{
\begin{aligned}
-4\Delta_g w_g + \mathop{\rm R}\nolimits_g w_g =0, \label{equ-criti-lambda}\\
w_g-1 \in C^{2,\alpha}_{\tau}(N)\cap C^{1,\alpha}_{n-2}(N).
\end{aligned}
\right.
\end{equation}
Moreover, $w_g$ is positive on $N$ and we have the following expansion of $w_g$ at infinity:
\begin{equation}
w_g = 1 - \frac{\lambda_{\operatorname{ALE}}^0(g) |\Gamma|}{4(n-2)\mathop{\rm Vol}\nolimits{\mathbb{S}^{n-1}}}\frac{1}{\rho_{g_b}^{n-2}} +O(\rho_{g_b}^{-n+2-\gamma}), \label{developpement w}
\end{equation}
for some positive $\gamma$ and where $|\Gamma|$ is the cardinality of $\Gamma$.
Next, we have the equalities:
\begin{equation}
\lambda_{\operatorname{ALE}}^0(g)=\int_N \big(4|\nabla^g w_g|_g^2 +\mathop{\rm R}\nolimits_g w_g^2 \big)\,d\mu_g= \int_N \mathop{\rm R}\nolimits_g w_g \,d\mu_g,\label{egalite lambda 1}
\end{equation}
and,
\begin{equation}
\lambda_{\operatorname{ALE}}^0(g) = \lim_{R\to \infty} 4\int_{\{\rho_{g_b} = R\}} \langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle\, d\sigma_g,\label{egalite lambda 2}
\end{equation}
where $\mathbf{n}_{g_b}$ denotes the outward unit normal of $\{\rho_{g_b}=R\}$.
Finally, the map $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)\mapsto w_g-1\in C^{2,\alpha}_{\tau}(N)$ is analytic in the sense of Definition \ref{def-analytic}. \end{prop}
\begin{proof}
First of all, let us show that $\lambda_{\operatorname{ALE}}^0(g)$ is finite, i.e. $\lambda_{\operatorname{ALE}}^0(g)>-\infty$.
Since $(N^n,g_b)$ is a Ricci-flat ALE metric, Theorem \ref{thm-min-har-inequ} ensures that the following Hardy inequality holds true:
\begin{equation}
C_H\int_N\frac{\varphi^2}{\rho_{g_b}^2}d\mu_{g_b}\leq \int_N|\nabla^{g_b}\varphi|_{g_b}^2\,d\mu_{g_b},\quad \varphi\in C_c^{\infty}(N),\label{har-inequ}
\end{equation}
for some positive constant $C_H$ depending on $g_b$, the dimension $n$ and the base point $p\in N$ used in Definition \ref{def-weighted-norms} of $\rho_g$.
Since the metrics $g$ and $g_b$ are equivalent, i.e. $C^{-1}g_b\leq g\leq Cg_b$ for some positive constant depending on the neighborhood $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, the same Hardy inequality holds with a positive constant $C_H(g_b)/2$ if $\varepsilon$ is chosen small enough. Moreover, by density, (\ref{har-inequ}) remains valid for $\varphi=w-1$ whenever $w-1\in C_{\tau}^{2,\alpha}(N)$. This implies that:
\begin{equation*}
\begin{split}
\int_N4|\nabla^gw|^2_g+\mathop{\rm R}\nolimits_gw^2d\mu_g&=\int_N4|\nabla^g(w-1)|^2_g+\mathop{\rm R}\nolimits_g(w-1+1)^2d\mu_g\\
&\geq2C_H\int_N\frac{(w-1)^2}{\rho_g^2}d\mu_g-\varepsilon\int_N\frac{(w-1)^2}{\rho_g^2}d\mu_g-c\int_N|\mathop{\rm R}\nolimits_g|d\mu_g\\
&\geq-c\int_N|\mathop{\rm R}\nolimits_g|d\mu_g,
\end{split}
\end{equation*}
if $\varepsilon$ is chosen not greater than $2C_H$ and where $c$ is a universal positive constant that may vary from line to line.
This proves the finiteness of $\lambda_{\operatorname{ALE}}^0(g)$ together with the fact that the operator $-4\Delta_g+\mathop{\rm R}\nolimits_g$ is non-negative and dominates $-\Delta_g$ in the $L^2$ sense, i.e. if $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ then
\begin{equation}
\langle-4\Delta_g\varphi+\mathop{\rm R}\nolimits_g\varphi,\varphi\rangle\geq c\|\nabla^g\varphi\|^2_{L^2},\quad \forall \varphi\in C_c^{\infty}(N).\label{dom-lap-per-ope}
\end{equation}
In particular, by density, inequality (\ref{dom-lap-per-ope}) holds for functions in $C^2_{\tau}(N)$ with $2\tau>n-2$.
\begin{claim}\label{claim-iso}
The operator $-4\Delta_g+\mathop{\rm R}\nolimits_g: C^{2,\alpha}_{\tau}(N)\rightarrow C^{0,\alpha}_{\tau+2}(N)$ is an isomorphism of Banach spaces for all $\alpha\in(0,1)$. Moreover, the map $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)\mapsto (-4\Delta_g+\mathop{\rm R}\nolimits_g)^{-1}\mathop{\rm R}\nolimits_g \in C^{2,\alpha}_{\tau}(N) $ is analytic.
\end{claim}
\begin{proof}[Proof of Claim \ref{claim-iso}]
Consider the map $\Psi:B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)\times C^{2,\alpha}_{\tau}(N)\rightarrow C^{0,\alpha}_{\tau+2}(N)$ defined by $\Psi(g,v):=-4\Delta_gv+\mathop{\rm R}\nolimits_gv$. The map $\Psi$ is analytic in the sense of Definition \ref{def-analytic}.
According to [Theorem $8.3.6$ $(a)$, \cite{Joy-Book}], $\Delta_g: C^{2,\alpha}_{\tau}(N)\rightarrow C^{0,\alpha}_{\tau+2}(N)$ is an isomorphism of Banach spaces for all $\alpha\in(0,1)$. Fix $\alpha\in(0,1)$. Since $\mathop{\rm R}\nolimits_g: C^{2,\alpha}_{\tau}(N)\rightarrow C^{0,\alpha}_{\tau+2}(N)$ is a compact operator, the operator $-4\Delta_g+\mathop{\rm R}\nolimits_g: C^{2,\alpha}_{\tau}(N)\rightarrow C^{0,\alpha}_{\tau+2}(N)$ is a Fredholm operator of index $0$. In particular, it is an isomorphism if (and only if) it is injective. This in turn is ensured by (\ref{dom-lap-per-ope}) since $(N,g)$ has infinite volume.
Therefore, the analytic version of the implicit function Theorem given by Lemma \ref{th fcts implicites} applied to the map $\Psi$ gives us the expected result.
\end{proof}
Now, let $\alpha\in(0,1)$ such that Claim \ref{claim-iso} holds: since $\mathop{\rm R}\nolimits_g\in C^{0,\alpha}_{\tau+2}(N)$, there exists a unique solution $v_g\in C^{2,\alpha}_{\tau}(N)$ to $-4\Delta_gv_g+\mathop{\rm R}\nolimits_gv_g=-\mathop{\rm R}\nolimits_g$ and the map $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)\rightarrow v_g\in C^{2,\alpha}_{\tau}(N)$ is analytic. In particular, if $w_g:=1+v_g$ then $-4\Delta_gw_g+\mathop{\rm R}\nolimits_gw_g=0$. Let us show that this implies (\ref{egalite lambda 1}) and (\ref{egalite lambda 2}) by integrating by parts over sublevel sets $\{\rho_{g_b}\leq R\}$ of large radii $R$ whose boundary is $\{\rho_{g_b}= R\}$:
\begin{equation*}
\begin{split}
\int_{\{\rho_{g_b}\leq R\}}4|\nabla^gw_g|^2+\mathop{\rm R}\nolimits_gw_g^2\,d\mu_g=\,&\int_{\{\rho_{g_b}\leq R\}}-4\Delta_gw_g\cdot w_g+\mathop{\rm R}\nolimits_gw_g\cdot w_g\,d\mu_g\\
&+4\int_{\{\rho_{g_b}= R\}}\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle \cdot w_g\,d\sigma_g\\
=\,&0+4\int_{\{\rho_{g_b}= R\}}\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle\, d\sigma_g +4\int_{\{\rho_{g_b}= R\}}\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle \cdot v_g\,d\sigma_g\\
=\,&4\int_{\{\rho_{g_b}= R\}}\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle\,d\sigma_g+\textit{o}(1),
\end{split}
\end{equation*}
as $R$ tends to $+\infty$. Similarly, by using (\ref{equ-criti-lambda}):
\begin{equation*}
\begin{split}
\int_{\{\rho_{g_b}\leq R\}}\mathop{\rm R}\nolimits_gw_g\,d\mu_g&=4\int_{\{\rho_{g_b}\leq R\}}\Delta_gw_g\,d\mu_g=4\int_{\{\rho_{g_b}= R\}}\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle \,d\sigma_g.
\end{split}
\end{equation*}
Since $\mathop{\rm R}\nolimits_g$ is integrable, taking a limit in the previous identity as $R$ tends to $+\infty$ is meaningful. To sum it up:
\begin{equation}
\int_{N}\mathop{\rm R}\nolimits_gw_g\,d\mu_g=\lim_{R\rightarrow+\infty}4\int_{\{\rho_{g_b}= R\}}\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle \,d\sigma_g=\int_{N}4|\nabla^gw_g|^2+\mathop{\rm R}\nolimits_gw_g^2\,d\mu_g.\label{equ-diff-form-lambda}
\end{equation}
Finally, to end the proof of (\ref{egalite lambda 1}) and (\ref{egalite lambda 2}), it suffices to show that:
\begin{equation}
\int_N4|\nabla^g(w_g+\varphi)|^2_g+\mathop{\rm R}\nolimits_g(w_g+\varphi)^2\,d\mu_g\geq \int_N4|\nabla^gw_g|^2_g+\mathop{\rm R}\nolimits_gw_g^2\,d\mu_g, \quad \forall \varphi\in C^{\infty}_{c}(N).\label{w_g-min-F-ALE}
\end{equation}
This amounts to proving that:
\begin{equation}
\int_N4|\nabla^g\varphi|^2_g+\mathop{\rm R}\nolimits_g\varphi^2\,d\mu_g+2\int_N4\langle\nabla^gw_g,\nabla^g\varphi\rangle+\mathop{\rm R}\nolimits_gw_g\varphi \,d\mu_g\geq 0,\quad\forall \varphi\in C^{\infty}_{c}(N).\label{w_g-min-F-ALE-bis}
\end{equation}
This is implied by (\ref{dom-lap-per-ope}) together with (\ref{equ-criti-lambda}) after an integration by parts on the second integral of the lefthand side of the previous inequality (\ref{w_g-min-F-ALE-bis}).
We are left with proving the positivity of $w_g$ and the asymptotic expansion (\ref{developpement w}).
By Kato's inequality, one can check that
\begin{eqnarray}
\lambda_{\operatorname{ALE}}^0(g)=\mathcal{F}_{\operatorname{ALE}}(w_g,g)\geq \mathcal{F}_{\operatorname{ALE}}(|w_g|,g).\label{min-F-bis}
\end{eqnarray}
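Indeed, recalling that $\mathcal{F}_{\operatorname{ALE}}(w,g)=\int_N4|\nabla^gw|^2_g+\mathop{\rm R}\nolimits_gw^2\,d\mu_g$, (\ref{min-F-bis}) follows from Kato's inequality for functions, $|\nabla^g|w_g||_g\leq |\nabla^gw_g|_g$ almost everywhere: since $|w_g|^2=w_g^2$,
\begin{equation*}
\mathcal{F}_{\operatorname{ALE}}(|w_g|,g)=\int_N4|\nabla^g|w_g||^2_g+\mathop{\rm R}\nolimits_g|w_g|^2\,d\mu_g\leq \int_N4|\nabla^gw_g|^2_g+\mathop{\rm R}\nolimits_gw_g^2\,d\mu_g=\mathcal{F}_{\operatorname{ALE}}(w_g,g).
\end{equation*}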
Notice that (\ref{w_g-min-F-ALE}) still holds for functions $\varphi\in H_c^1(N)$, the completion of $C_c^{\infty}(N)$ with respect to the norm $\varphi\mapsto\|\nabla^g\varphi\|_{L^2}$. In particular, the function $|w_g|$ is a test function, i.e. $|w_g|-1\in H_c^1(N)$, and it is a minimizer of $\lambda_{\operatorname{ALE}}^0(g)$ by (\ref{min-F-bis}). As such, $|w_g|$ is a continuous weak solution to (\ref{equ-criti-lambda}). Elliptic regularity together with Schauder estimates implies that $|w_g|$ is a $C^{2,\alpha}_{loc}$-solution, so that $w_g$ has a sign and, since it tends to $1$ at infinity, $w_g$ is nonnegative. Now, $w_g$ is positive by the strong maximum principle for parabolic equations. Indeed, if $$W_g(x,t):=\exp\left(\frac{\sup_N\mathop{\rm R}\nolimits_g}{4} t\right)w_g(x),$$ for $x\in N$ and $t\in\mathbb{R}$, then $W_g$ is a super-solution to the heat equation:
\begin{equation*}
\left(\partial_t-\Delta_g\right)W_g\geq 0.
\end{equation*}
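Indeed, since $w_g$ satisfies $-4\Delta_gw_g+\mathop{\rm R}\nolimits_gw_g=0$, i.e. $\Delta_gw_g=\frac{1}{4}\mathop{\rm R}\nolimits_gw_g$, and since $w_g$ is nonnegative, one computes:
\begin{equation*}
\left(\partial_t-\Delta_g\right)W_g=\exp\left(\frac{\sup_N\mathop{\rm R}\nolimits_g}{4}t\right)\left(\frac{\sup_N\mathop{\rm R}\nolimits_g}{4}w_g-\Delta_gw_g\right)=\frac{1}{4}\exp\left(\frac{\sup_N\mathop{\rm R}\nolimits_g}{4}t\right)\left(\sup_N\mathop{\rm R}\nolimits_g-\mathop{\rm R}\nolimits_g\right)w_g\geq 0.
\end{equation*}
Notice that $\sup_N\mathop{\rm R}\nolimits_g$ is finite since $\mathop{\rm R}\nolimits_g$ decays at infinity.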
Therefore, the maximum principle leads to $W_g(x,t)\geq \int_NK(x,y,t)w_g(y)d\mu_g(y)>0$ for $t> 0$, where $K(x,y,t)$ denotes the positive heat kernel associated to $\Delta_g$.\\
Let us prove the asymptotic estimate (\ref{developpement w}).\\
Observe that if $w_g=1-c\rho_{g_b}^{2-n}+\textit{O}(\rho_{g_b}^{2-n-\gamma})$ up to first order, for some constant $c$ and some $\gamma>0$, then (\ref{egalite lambda 2}) necessarily implies that
\begin{eqnarray}
4c(n-2)\mathop{\rm Vol}\nolimits\left(\mathbb{S}^{n-1}/\Gamma\right)=\lim_{R\rightarrow+\infty}4\int_{\{\rho_{g_b}= R\}}\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle\,d\sigma_g=\lambda_{\operatorname{ALE}}^0(g).\label{identification-lambda-sol-inf}
\end{eqnarray}
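Let us sketch why the first equality of (\ref{identification-lambda-sol-inf}) holds: the expansion of $w_g$ up to first order gives, on $\{\rho_{g_b}=R\}$,
\begin{equation*}
\langle\nabla^gw_g,\mathbf{n}_{g_b}\rangle=c(n-2)R^{1-n}+\textit{o}(R^{1-n}),
\end{equation*}
while the area of $\{\rho_{g_b}=R\}$ is $(1+\textit{o}(1))R^{n-1}\mathop{\rm Vol}\nolimits\left(\mathbb{S}^{n-1}/\Gamma\right)$ since $(N^n,g_b)$ is asymptotic to $\mathbb{R}^n\slash\Gamma$. Multiplying these two asymptotics and invoking (\ref{egalite lambda 2}) yields (\ref{identification-lambda-sol-inf}).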
Observe that $-4\Delta_gv_g=-\mathop{\rm R}\nolimits_gw_g=\textit{O}(\rho_{g_b}^{-\tau'})$ with $\tau'>n$. Therefore, by [Theorem $8.3.6$ $(b)$, \cite{Joy-Book}], there exists a unique solution $u_g\in C^{1,\alpha}_{n-2}(N)$ to $-4\Delta_gu_g=-\mathop{\rm R}\nolimits_gw_g$: here $\mathop{\rm R}\nolimits_g$ is only assumed to lie in $C^0_{\tau'}$ for some $\tau'>n$, which does not ensure the higher regularity $u_g\in C^{2,\alpha}_{n-2}(N)$. Moreover, it is shown there that:
\begin{equation*}
\begin{split}
u_g&=-\left(\frac{|\Gamma|}{4(n-2)\mathop{\rm Vol}\nolimits\mathbb{S}^{n-1}}\int_N\mathop{\rm R}\nolimits_gw_g\,d\mu_g\right)\rho_{g_b}^{2-n}+\textit{O}(\rho_{g_b}^{2-n-\gamma})\\
&=-\frac{\lambda_{\operatorname{ALE}}^0(g)|\Gamma|}{4(n-2)\mathop{\rm Vol}\nolimits\mathbb{S}^{n-1}}\rho_{g_b}^{2-n}+\textit{O}(\rho_{g_b}^{2-n-\gamma}),
\end{split}
\end{equation*}
for some positive $\gamma$ where we used (\ref{identification-lambda-sol-inf}) together with the first equality of (\ref{equ-diff-form-lambda}) in the second line. To conclude, one gets that $v_g-u_g$ is a harmonic function on $N$ that converges to $0$ at infinity. These facts imply that $v_g=u_g$ by the maximum principle.
\end{proof}
\begin{rk}
Under the assumptions of Proposition \ref{existence propriete-wg}, the usual $L^2$-constrained $\lambda$-functional, i.e. the bottom of the $L^2$-spectrum of the operator $-4\Delta_g+\mathop{\rm R}\nolimits_g$, vanishes because mass can escape to infinity, and therefore this functional does not give any information: indeed, by (\ref{dom-lap-per-ope}), $\lambda_1(-4\Delta_g+\mathop{\rm R}\nolimits_g)\geq c\lambda_1(-\Delta_g)=0$. One can then argue by contradiction as in \cite{Cheng-Yau} to show that if $\lambda_1(-4\Delta_g+\mathop{\rm R}\nolimits_g)$ were positive, then geodesic balls of large radii would grow faster than any polynomial.
\end{rk}
The functional $\lambda_{\operatorname{ALE}}^0$ is moreover invariant under diffeomorphisms decaying at infinity.
\begin{prop}\label{inv-diff-lambda}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\alpha\in (0,1)$ and $\tau\in(\frac{n-2}{2},n-2)$.
Let $g$ be a metric in $ \mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$ and let $w_g$ be the minimizer of $\lambda_{\operatorname{ALE}}^0(g)$ whose existence is ensured by Proposition \ref{existence propriete-wg}. Consider $\phi$ a diffeomorphism close to the identity in the $C^{3,\alpha}_{\tau-1}(TN)$ topology. Then, we have $\lambda_{\operatorname{ALE}}^0(\phi^*g) = \lambda_{\operatorname{ALE}}^0(g)$ and $w_{\phi^*g} = \phi^*w_g$.
\end{prop}
\begin{proof}
Let $g=:g_b+h$ be a metric such that $g_b+h\in \mathcal{M}^{2,\alpha}_{\tau}(g_b)$, and consider $w_g$ the minimizer of $\lambda_{\operatorname{ALE}}^0(g)$. Let $\phi : N\to N$ be a diffeomorphism defined as $\phi :x \mapsto \exp_x^{g}(X(x))$ for a vector field $X\in C^{3,\alpha}_{\tau-1}(TN)$ close to $0_{TN}$. Consider $\phi^*g$, which is also a metric on $N$ of order $\tau$ lying in a small neighborhood of $g_b$ in the $C^{2,\alpha}_{\tau}$ topology. Moreover, since $\mathop{\rm R}\nolimits_{\phi^*g}= \phi^*\mathop{\rm R}\nolimits_g$, we still have $\mathop{\rm R}\nolimits_{\phi^*g}\in C^0_{\tau'}(N)$ for some $\tau'>n$. Then, for any $w$ such that $w-1$ is compactly supported, $\phi^*w-1$ is also compactly supported and we clearly have $\mathcal{F}_{\operatorname{ALE}}(\phi^*w,\phi^*g) = \mathcal{F}_{\operatorname{ALE}}(w,g)$ by the change of variables theorem. Therefore we have $\lambda_{\operatorname{ALE}}^0(\phi^*g) = \lambda_{\operatorname{ALE}}^0(g)$ and, finally, the uniqueness statement of Proposition \ref{existence propriete-wg} ensures that $w_{\phi^*g} = \phi^*w_g$.
\end{proof}
We end this section by giving another sufficient condition to ensure the finiteness of $\lambda_{\operatorname{ALE}}^0$ together with its behavior under scalings of the metrics:
\begin{lemma}\label{scaling lambdaALE}
Let $(N^n,g)$ be a complete Riemannian manifold with non-negative scalar curvature $\mathop{\rm R}\nolimits_g\in L^1(N)$. Then, $$0\leq \lambda_{\operatorname{ALE}}^0(g)\leq \|\mathop{\rm R}\nolimits_g\|_{L^1}.$$
Moreover, if $(N^n,g)$ is a complete Riemannian manifold such that $\lambda_{\operatorname{ALE}}^0(g)$ is finite, then for any $s>0$, we have $$\lambda_{\operatorname{ALE}}^0(sg) = s^{\frac{n-2}{2}}\lambda_{\operatorname{ALE}}^0(g).$$
\end{lemma}
\begin{proof}
By testing the function $w\equiv1$ in the definition of $\lambda_{\operatorname{ALE}}^0(g)$, one gets the upper bound. Moreover, one has the straightforward inequality for any function $w$ such that $w-1\in C_c^{\infty}(N)$: $$0\leq \int_N4|\nabla^g w|^2_g\,d\mu_g\leq \int_N4|\nabla^g w|^2_g+\mathop{\rm R}\nolimits_gw^2\,d\mu_g.$$ By considering the infimum over such functions $w$, one gets the expected lower bound on $\lambda_{\operatorname{ALE}}^0(g)$.
For any smooth $w$ such that outside a compact set we have $w\equiv 1$,
$\mathcal{F}_{\operatorname{ALE}}(w,sg) = s^{\frac{n}{2}- 1}\mathcal{F}_{\operatorname{ALE}}(w,g)$, because of the scaling behavior of the different operations: $|\nabla^{sg}f|_{sg}^2=s^{-1}|\nabla^{g}f|_{g}^2$, $\mathop{\rm R}\nolimits_{sg}= s^{-1}\mathop{\rm R}\nolimits_g$ and $d\mu_{sg} = s^{\frac{n}{2}}d\mu_g$.
We therefore have $ \lambda_{\operatorname{ALE}}^0(sg) = s^{\frac{n}{2}- 1}\lambda_{\operatorname{ALE}}^0(g)=s^{\frac{n-2}{2}}\lambda_{\operatorname{ALE}}^0(g)$, as claimed.
\end{proof}
\section{First and second variations of $\lambda_{\operatorname{ALE}}^0$}\label{sec-first-sec-var}
In this section, we compute the first and second variations of the functional $\lambda_{\operatorname{ALE}}^0$ introduced in Section \ref{sec-rel-ene-ALE}. Before doing so, we define the notion of a potential function associated to a metric $g$ lying in a $C^{2,\alpha}_{\tau}$-neighborhood of an ALE Ricci-flat metric $g_b$.
\begin{defn}
For a metric $g$ in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, with $\tau\in(\frac{n-2}{2},n-2)$, let us define the potential function associated to $g$ by $$f_g:=-2\ln w_g,$$
where $w_g$ is defined as in Proposition \ref{existence propriete-wg}.
\end{defn}
Notice that $f_g$ is well-defined by the positivity of $w_g$ ensured by Proposition \ref{existence propriete-wg}. Moreover, we sum up the properties of $f_g$ in the next proposition; they follow in a straightforward way from Proposition \ref{existence propriete-wg}:
\begin{prop}\label{prop-pot-fct}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$.
Then there exists some positive $\varepsilon$ such that $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)\rightarrow f_g\in C^{2,\alpha}_{\tau}(N)$ is analytic and satisfies on $N$,
\begin{equation}
2\Delta_gf_g-|\nabla^gf_g|^2_g+\mathop{\rm R}\nolimits_g =0. \label{equ-criti-lambda-pot}
\end{equation}
Moreover, the following asymptotic expansion holds true for $g\in\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$:
\begin{equation}
f_g=\frac{\lambda_{\operatorname{ALE}}^0(g) |\Gamma|}{2(n-2)\mathop{\rm Vol}\nolimits{\mathbb{S}^{n-1}}}\frac{1}{\rho_{g_b}^{n-2}}+\textit{O}(\rho_{g_b}^{-n+2-\gamma}),
\end{equation}
for some positive real number $\gamma$.
Finally, if $g\in\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$,
\begin{equation}
\begin{split}
\lambda_{\operatorname{ALE}}^0(g) &= \int_N \big(|\nabla^g f_g|_g^2 +\mathop{\rm R}\nolimits_g \big)e^{-f_g}\,d\mu_g\\
& =2\int_N\left(|\nabla^gf_g|^2_g-\Delta_gf_g\right)\,e^{-f_g}d\mu_g\\
&= -\lim_{R\to \infty} 2\int_{\{\rho_{g_b} = R\}} \langle\nabla^gf_g,\mathbf{n}_{g_b}\rangle\, d\sigma_g. \label{egalite-lambda-2-bis}
\end{split}
\end{equation}
\end{prop}
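For the reader's convenience, let us indicate how (\ref{equ-criti-lambda-pot}) follows from the equation $-4\Delta_gw_g+\mathop{\rm R}\nolimits_gw_g=0$ satisfied by $w_g$, via the substitution $w_g=e^{-f_g/2}$: the chain rule gives
\begin{equation*}
\Delta_ge^{-f_g/2}=e^{-f_g/2}\left(\frac{1}{4}|\nabla^gf_g|^2_g-\frac{1}{2}\Delta_gf_g\right),
\end{equation*}
so that
\begin{equation*}
0=-4\Delta_gw_g+\mathop{\rm R}\nolimits_gw_g=e^{-f_g/2}\left(2\Delta_gf_g-|\nabla^gf_g|^2_g+\mathop{\rm R}\nolimits_g\right),
\end{equation*}
which is exactly (\ref{equ-criti-lambda-pot}) since $e^{-f_g/2}>0$.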
Before stating the first variation of $\mathcal{F}_{\operatorname{ALE}}$ for arbitrary variations, we introduce several notions associated to a smooth metric measure space $(N^n,g,\nabla^gf)$ where $f$ is a given $C^1_{loc}$ function on $N$. The \textbf{weighted Laplacian} of a tensor $T$ on $N$, denoted by $\Delta_fT$, is defined by:
\begin{equation}
\Delta_fT:=\Delta_gT-\nabla^g_{\nabla^gf}T,
\end{equation}
where $\Delta_g$ denotes the rough Laplacian associated to the Riemannian metric $g$.
The \textbf{weighted divergence} of a $C^1_{loc}$ vector field $X$ on $N$ is defined by:
\begin{equation}
\mathop{\rm div}\nolimits_fX:=\mathop{\rm div}\nolimits_gX-g(\nabla^gf,X).\label{def-wei-div-vec}
\end{equation}
Finally, the \textbf{weighted divergence} of a $C^1_{loc}$ symmetric $2$-tensor $T$ on $N$ is defined by:
\begin{equation}
\mathop{\rm div}\nolimits_fT:=\mathop{\rm div}\nolimits_gT-T(\nabla^gf).\label{def-wei-div-sym}
\end{equation}
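These weighted operators are designed so that integration by parts holds with respect to the weighted measure $e^{-f}d\mu_g$. For instance, if $X$ is a $C^1_{loc}$ vector field and $T$ is a $C^1_{loc}$ symmetric $2$-tensor, one of them being compactly supported, then
\begin{equation*}
\int_N\mathop{\rm div}\nolimits_fX\,e^{-f}d\mu_g=0,\qquad \int_N\langle\mathop{\rm div}\nolimits_fT,X\rangle\,e^{-f}d\mu_g=-\frac{1}{2}\int_N\langle T,\mathop{\rm \mathscr{L}}\nolimits_X(g)\rangle\,e^{-f}d\mu_g,
\end{equation*}
the first identity coming from $\mathop{\rm div}\nolimits_g(e^{-f}X)=\mathop{\rm div}\nolimits_fX\,e^{-f}$ and the second one from the symmetry of $T$.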
\begin{prop}[First variation of $\mathcal{F}_{\operatorname{ALE}}$]\label{first-var-prop} Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$.
The first variation of $\mathcal{F}_{\operatorname{ALE}}$ at a couple $(g,f)\in \mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)\times C^{2}_{\tau}(N)$ along directions $(h,\varphi)\in C^{2}_{\tau}(S^2T^*N)\times C^{2}_{\tau}(N)$ such that $g+h\in \mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$ is
\begin{equation}
\begin{split}
\delta_{g,f}\mathcal{F}_{\operatorname{ALE}}(h,\varphi)&=-\int_N\langle h,\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f\rangle_g\, e^{-f}d\mu_g\\
&+\int_N\left(2\Delta_gf-|\nabla^gf|^2_g+\mathop{\rm R}\nolimits_g\right)\left(\frac{\mathop{\rm tr}\nolimits_gh}{2}-\varphi\right)e^{-f}d\mu_g+m_{\operatorname{ADM}}(g_b+h),
\label{first-var-F}
\end{split}
\end{equation}
where $m_{\operatorname{ADM}}(g_b+h)$ is the mass of the metric $g_b+h$ defined in (\ref{def-mass}).
Finally, the first variation of $\lambda_{\operatorname{ALE}}^0$ on a neighborhood of $\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$ is:
\begin{equation}
\begin{split}
\delta_g \lambda_{\operatorname{ALE}}^0(h)=&-\int_N\langle h,\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\rangle_g \,e^{-f_g}d\mu_g+m_{\operatorname{ADM}}(g_b+h).\label{first-var-lambda}
\end{split}
\end{equation}
\end{prop}
\begin{rk}
Notice that (\ref{first-var-lambda}) gives a link between the variation of the functional $\lambda_{\operatorname{ALE}}^0$, the mass of an ALE metric with integrable scalar curvature and its associated Bakry-\'Emery tensor.
\end{rk}
\begin{proof}
We follow [Chapter $2$, \cite{Cho-Boo}] closely by using (\ref{lem-lin-equ-scal-first-var}) from Lemma \ref{lem-lin-equ-Ric-first-var}:
\begin{equation*}
\begin{split}
\delta_{g,f}\big[\big(|\nabla^gf|^2_g+\mathop{\rm R}\nolimits_g\big)\,&e^{-f}d\mu_g\big](h,\varphi)\\
=\,&\left(\mathop{\rm div}\nolimits_g(\mathop{\rm div}\nolimits_gh)-\Delta_g\mathop{\rm tr}\nolimits_gh-\langle h,\mathop{\rm Ric}\nolimits(g)\rangle-h(\nabla^gf,\nabla^gf)\right) e^{-f}d\mu_g\\
&+\left(2\langle \nabla^gf,\nabla^g\varphi\rangle+(\mathop{\rm R}\nolimits_g+|\nabla^gf|^2_g)\left(\frac{\mathop{\rm tr}\nolimits_gh}{2}-\varphi\right)\right)e^{-f}d\mu_g.
\end{split}
\end{equation*}
Now, by integrating by parts twice on the domain $\{\rho_{g_b}\leq R\}$ with $R$ sufficiently large such that $\{\rho_{g_b}= R\}$ is a smooth compact hypersurface:
\begin{equation*}
\begin{split}
\int_{\{\rho_{g_b}\leq R\}}\mathop{\rm div}\nolimits_g(\mathop{\rm div}\nolimits_gh)\,e^{-f}d\mu_g=\,&\int_{\{\rho_{g_b}\leq R\}}\left(h(\nabla^gf,\nabla^gf)-\langle h,\nabla^{g,2}f\rangle\right)e^{-f}d\mu_g\\
&+\int_{\{\rho_{g_b}=R\}}\left<\mathop{\rm div}\nolimits_gh+h(\nabla^gf),\mathbf{n}_{g_b}\right>\,e^{-f}d\sigma_g.
\end{split}
\end{equation*}
Moreover,
\begin{equation*}
\begin{split}
-\int_{\{\rho_{g_b}\leq R\}}\Delta_g\mathop{\rm tr}\nolimits_gh\,e^{-f}d\mu_g=\,&-\int_{\{\rho_{g_b}\leq R\}}\langle\nabla^g\mathop{\rm tr}\nolimits_gh,\nabla^gf\rangle \,e^{-f}d\mu_g\\
&-\int_{\{\rho_{g_b}= R\}}\left<\nabla^g\mathop{\rm tr}\nolimits_gh,\mathbf{n}_{g_b}\right>\,e^{-f}d\sigma_g\\
=\,&\int_{\{\rho_{g_b}\leq R\}}\left(\Delta_gf-|\nabla^gf|^2_g\right)\mathop{\rm tr}\nolimits_gh\,e^{-f}d\mu_g\\
&-\int_{\{\rho_{g_b}= R\}}\left<\nabla^g\mathop{\rm tr}\nolimits_gh,\mathbf{n}_{g_b}\right>\,e^{-f}d\sigma_g\\
&-\int_{\{\rho_{g_b}=R\}}\mathop{\rm tr}\nolimits_{g}h\langle\nabla^gf,\mathbf{n}_{g_b}\rangle\,e^{-f}d\sigma_g,\\
2\int_{\{\rho_{g_b}\leq R\}}\langle \nabla^gf,\nabla^g\varphi\rangle\,e^{-f}d\mu_g=\,& -2\int_{\{\rho_{g_b}\leq R\}}\left(\Delta_gf-|\nabla^gf|^2_g\right)\varphi\,e^{-f}d\mu_g\\
&+2\int_{\{\rho_{g_b}=R\}}\varphi\langle\nabla^gf,\mathbf{n}_{g_b}\rangle\,e^{-f}d\sigma_g.
\end{split}
\end{equation*}
Therefore the expected result follows by summing the previous equalities and letting $R$ go to $+\infty$, using the asymptotics assumed on $h$, $f$ and $\varphi$ together with the definition of the mass of $g_b+h$ given in (\ref{def-mass}).
In order to compute the first variation of $\lambda_{\operatorname{ALE}}^0$, one proceeds similarly by setting $\varphi:=\delta_gf(h)$ and by using
(\ref{equ-criti-lambda-pot}) to cancel the second integral term on the righthand side of (\ref{first-var-F}). By density of $C_c^{\infty}$ in $L^2_{\frac{n}{2}-1}$, notice that (\ref{first-var-lambda}) still holds true for variations $h\in L^2_{\frac{n}{2}-1}$ since $\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g=\textit{O}(\rho_{g_b}^{-\tau-2})\in L^2_{\frac{n}{2}+1}$.
\end{proof}
As a first consequence of Proposition \ref{first-var-prop}, we recover (in our non-compact setting) the fact stated in [Remark $4.6$, \cite{Has-Sta}] that the weighted $L^2$-norm of the Bakry-\'Emery tensor $\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g$ associated to the functional $\lambda_{\operatorname{ALE}}^0$ is dominated by that of the Ricci curvature. More precisely, we have the following result:
\begin{coro}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$.
Then there exists $\varepsilon>0$ such that if $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, the tensor $\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g$ is weighted divergence-free, i.e.
\begin{equation}
\mathop{\rm div}\nolimits_{f_g}\left(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\right)=0.\label{wei-div-free-obs-ten}
\end{equation}
In particular, if $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$,
\begin{equation}
\|\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\|_{L^2(e^{-f_g}d\mu_g)}\leq \|\mathop{\rm Ric}\nolimits(g)\|_{L^2(e^{-f_g}d\mu_g)}.\label{easy-inequ-ric-bak-eme}
\end{equation}
\end{coro}
Notice that the quantitative version of the reverse inequality of (\ref{easy-inequ-ric-bak-eme}) is more delicate to prove in general: see \cite[Theorem C]{Has-Sta} for closed Ricci-flat metrics in the integrable case.
\begin{proof}
Let us prove that $\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g$ is divergence-free in the (weighted) sense of (\ref{def-wei-div-sym}):
\begin{equation}
\begin{split}
2\mathop{\rm div}\nolimits_{f_g}\left(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\right)&=2\mathop{\rm div}\nolimits_g\left(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\right)-2\left(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\right)(\nabla^gf_g)\\
&=\nabla^g\mathop{\rm R}\nolimits_g+\mathop{\rm div}\nolimits_g\mathop{\rm \mathscr{L}}\nolimits_{\nabla^gf_g}(g)-2\left(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\right)(\nabla^gf_g)\\
&=\nabla^g\mathop{\rm R}\nolimits_g+\frac{1}{2}\nabla^g\mathop{\rm tr}\nolimits_g\mathop{\rm \mathscr{L}}\nolimits_{\nabla^gf_g}(g)+\Delta_g\nabla^gf_g+\mathop{\rm Ric}\nolimits(g)(\nabla^gf_g)\\
&-2\left(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\right)(\nabla^gf_g)\\
&=\nabla^g\left(\mathop{\rm R}\nolimits_g+2\Delta_gf_g-|\nabla^gf_g|^2_g\right)\\
&=0.
\end{split}
\end{equation}
Here, we have used the Bianchi identity (its traced version) in the second line together with the Bochner formula for vector fields in the third line and the one for functions in the fourth line. The last line comes from [(\ref{equ-criti-lambda-pot}), Proposition \ref{prop-pot-fct}].\\
The proof of (\ref{easy-inequ-ric-bak-eme}) relies essentially on (\ref{wei-div-free-obs-ten}).
Indeed, if $X$ is any smooth vector field which is compactly supported (or decaying faster than $\rho_{g_b}^{-\frac{n}{2}+2}$) on $N$, then
\begin{equation*}
\begin{split}
\left<\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g,\mathop{\rm \mathscr{L}}\nolimits_X(g)\right>_{L^2(e^{-f_g}d\mu_g)}=0,
\end{split}
\end{equation*}
by integration by parts. In particular, by applying this fact to $X=\nabla^{g}f_g=O(\rho_{g_b}^{-\tau-1})$ by Proposition \ref{prop-pot-fct}, one gets,
\begin{equation}
\begin{split}
\|\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\|_{L^2(e^{-f_g}d\mu_g)}^2=\,&\|\mathop{\rm Ric}\nolimits(g)\|^2_{L^2(e^{-f_g}d\mu_g)}+2\left<\mathop{\rm Ric}\nolimits(g),\nabla^{g,2}f_g\right>_{L^2(e^{-f_g}d\mu_g)}\\
&+\|\nabla^{g,2}f_g\|^2_{L^2(e^{-f_g}d\mu_g)}\\
=\,&\|\mathop{\rm Ric}\nolimits(g)\|^2_{L^2(e^{-f_g}d\mu_g)}+2\left<\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g,\nabla^{g,2}f_g\right>_{L^2(e^{-f_g}d\mu_g)}\\
&-\|\nabla^{g,2}f_g\|^2_{L^2(e^{-f_g}d\mu_g)}\\
=\,&\|\mathop{\rm Ric}\nolimits(g)\|^2_{L^2(e^{-f_g}d\mu_g)}-\|\nabla^{g,2}f_g\|^2_{L^2(e^{-f_g}d\mu_g)}\\
\leq\,& \|\mathop{\rm Ric}\nolimits(g)\|^2_{L^2(e^{-f_g}d\mu_g)}.
\end{split}
\end{equation}
All the integrals and integration by parts are justified here by the sufficiently fast decays at infinity satisfied by $\mathop{\rm Ric}\nolimits(g)$ and $f_g$ and their covariant derivatives.
\end{proof}
We are in a good position to compute the second variation of $\lambda_{\operatorname{ALE}}^0$. We first need one more definition:
\begin{defn}\label{defn-Lic-Op}
Let $(N^n,g)$ be a Riemannian manifold. Then the \emph{Lichnerowicz operator} associated to $g$ acting on symmetric $2$-tensors, denoted by $L_{g}$, is defined by:
\begin{equation}
L_{g}h:=\Delta_{g}h + 2\mathop{\rm Rm}\nolimits(g)(h)-\mathop{\rm Ric}\nolimits(g)\circ h-h\circ\mathop{\rm Ric}\nolimits(g),\quad h\in C_{loc}^2(S^2T^*N),\label{defn-Lic-op-eq}
\end{equation}
where $\Delta_{g}=-\nabla^*\nabla$ and where $\mathop{\rm Rm}\nolimits(g)(h)(X,Y) := h(\mathop{\rm Rm}\nolimits(g)(e_i,X)Y,e_i)$ for an orthonormal basis $(e_i)_{i=1}^n$ with respect to $g$. In particular, if $(N^n,g)$ is a Ricci-flat metric, then,
\begin{equation}
L_{g}h=\Delta_{g}h + 2\mathop{\rm Rm}\nolimits(g)(h),\quad h\in C_{loc}^2(S^2T^*N).
\end{equation}
\end{defn}
With this definition in hand, we are able to identify the second variation of $\lambda_{\operatorname{ALE}}^0$ at an ALE Ricci-flat metric as follows.
\begin{prop}[Second variation of $\lambda_{\operatorname{ALE}}^0$ at a Ricci-flat metric]\label{second-var-prop}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$. Then the second variation of $\lambda_{\operatorname{ALE}}^0$ at $g_b$ along a divergence-free variation $h\in S^2T^*N$ such that $g_b+h\in \mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$ is:
\begin{equation}
\delta^2_{g_b}\lambda_{\operatorname{ALE}}^0(h,h)=\frac{1}{2}\langle L_{g_b}h,h\rangle_{L^2}.
\end{equation}
\end{prop}
\begin{proof}
Recall by Lemma \ref{lem-lin-equ-Ric-first-var} that, if we denote the Bianchi operator $B_g(h):= \mathop{\rm div}\nolimits_g\big(h-\frac{1}{2}(\mathop{\rm tr}\nolimits_g h)g\big)$, we have
\begin{equation*}
\begin{split}
\delta_{g_b}(-2\mathop{\rm Ric}\nolimits)(h)&=L_{g_b}h-\mathop{\rm \mathscr{L}}\nolimits_{B_{g_b}(h)}(g_b)\\
&=L_{g_b}h+\frac{1}{2}\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h}(g_b),
\end{split}
\end{equation*}
if $\mathop{\rm div}\nolimits_{g_b}h=0$. Since $f_{g_b}=0$, denoting by $\delta_{g_b}f(h)$ the first order variation of $g\mapsto f_g$ at $g_b$ in the direction $h$, we have:
\begin{equation*}
\delta_{g_b}\mathop{\rm \mathscr{L}}\nolimits_{\nabla^gf_g}(h)=\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_b}\delta_{g_b}f(h)}(g_b).
\end{equation*}
Therefore, according to Proposition \ref{first-var-prop},
\begin{equation*}
2\delta_{g_b}^2\lambda_{\operatorname{ALE}}^0(h,h)=\langle L_{g_b}h,h\rangle_{L^2}+ \frac{1}{2}\langle\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h}(g_b),h\rangle_{L^2}-\langle \mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_b}\left(\delta_{g_b}f(h)\right)}(g_b),h\rangle_{L^2}.
\end{equation*}
By integrating by parts:
\begin{equation*}
\begin{split}
2\delta_{g_b}^2\lambda_{\operatorname{ALE}}^0(h,h)&=\langle L_{g_b}h,h\rangle_{L^2}-\langle\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h,\mathop{\rm div}\nolimits_{g_b}h\rangle_{L^2}+2\langle \nabla^{g_b}\delta_{g_b}f(h),\mathop{\rm div}\nolimits_{g_b}h\rangle_{L^2}\\
&=\langle L_{g_b}h,h\rangle_{L^2},
\end{split}
\end{equation*}
if $\mathop{\rm div}\nolimits_{g_b}h=0$.
\end{proof}
\begin{rk}\label{remark jauge div}
Notice that if $h$ is a symmetric $2$-tensor on $N$, then differentiating (\ref{equ-criti-lambda-pot}) at $g_b$ gives:
\begin{eqnarray}
2\Delta_{g_b}\left(\delta_{g_b}f_{g}(h)-\frac{\mathop{\rm tr}\nolimits_{g_b}h}{2}\right)&=&-\mathop{\rm div}\nolimits_{g_b}(\mathop{\rm div}\nolimits_{g_b}h).\label{equ-first-var-pot-fct}
\end{eqnarray}
Now, if $\mathop{\rm div}\nolimits_{g_b}h=0$ then the function $\delta_{g_b}f_{g}(h)-\frac{\mathop{\rm tr}\nolimits_{g_b}h}{2}$ is a harmonic function on $N$. In the setting of Proposition \ref{second-var-prop}, the maximum principle applied to this function shows that it vanishes identically, i.e. the weighted measure $e^{-f_g}d\mu_g$ is preserved to first order under such a variation. On the other hand, if $B_{g_b}(h)=0$, then one gets by (\ref{equ-first-var-pot-fct}) that $\delta_{g_b}f_{g}(h)-\frac{\mathop{\rm tr}\nolimits_{g_b}h}{4}=0$. This implies that the second variation of $\lambda_{\operatorname{ALE}}^0$ at $g_b$ along $h\in B_{g_b}^{-1}(0)$ satisfies:
\begin{equation*}
2\delta^2_{g_b}\lambda_{\operatorname{ALE}}^0(h,h)=\langle L_{g_b}h,h\rangle_{L^2}+\frac{1}{2}\|\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h\|^2_{L^2}.
\end{equation*}
This leaves us with a less tractable formula for the second derivative of $\lambda_{\operatorname{ALE}}^0$.
\end{rk}
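Let us spell out the claim made in the remark above, namely that $B_{g_b}(h)=0$ forces $\delta_{g_b}f_{g}(h)=\frac{\mathop{\rm tr}\nolimits_{g_b}h}{4}$: the gauge condition reads $\mathop{\rm div}\nolimits_{g_b}h=\frac{1}{2}\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h$, hence $\mathop{\rm div}\nolimits_{g_b}(\mathop{\rm div}\nolimits_{g_b}h)=\frac{1}{2}\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}h$, and (\ref{equ-first-var-pot-fct}) becomes
\begin{equation*}
2\Delta_{g_b}\left(\delta_{g_b}f_{g}(h)-\frac{\mathop{\rm tr}\nolimits_{g_b}h}{2}\right)=-\frac{1}{2}\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}h,\quad\text{i.e.}\quad \Delta_{g_b}\left(\delta_{g_b}f_{g}(h)-\frac{\mathop{\rm tr}\nolimits_{g_b}h}{4}\right)=0,
\end{equation*}
and a harmonic function decaying at infinity vanishes identically by the maximum principle.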
\section{A functional defined on a $C^{2,\alpha}_\tau$-neighborhood of Ricci-flat ALE metrics}\label{extension tilde lambda}
\subsection{First and second variations of $\lambda_{\operatorname{ALE}}$}~~\\
Let $(N^n,g_b)$ be an ALE Ricci-flat metric. We first recall that there exists a sequence of metrics $C^{2,\alpha}_\tau$-converging to $ g_b $ while having unbounded mass and $\lambda_{\operatorname{ALE}}^0$-functional.
\begin{exmp}\label{exemple masse infinie}
For $A>0$ large enough, let $\chi_A$ be a cut-off function supported in $\{\rho_{g_b}>A\}$, where $g_b$ has ALE coordinates, equal to $1$ on $\{\rho_{g_b}>2A\}$, and such that, for some uniform constant $c>0$ and all $k\in \{1,2,3\}$, its $k$-th derivative is bounded by $cA^{-k}$. Let us define the metric
$$g_{A,m}:= \left(1+\chi_A\frac{m}{\rho_{g_b}^{n-2}}\right)^{\frac{4}{n-2}}g_b,$$
whose scalar curvature vanishes on $\{\rho_{g_b}>2A\}$ by the usual variation of the scalar curvature for conformal changes of metric.
Then, for some constant $C>0$ we have $$\|g_{A,m}-g_b\|_{C^{2,\alpha}_\tau}\leq C |m|A^{\tau-(n-2)}, $$
and for some $c_n>0$, we have $m_{\operatorname{ADM}}(g_{A,m}) = c_n m$. As a consequence, by choosing $m\to \pm \infty$ while $|m|A^{\tau-(n-2)}\to 0$, we get a sequence of metrics $C^{2,\alpha}_\tau$-converging to $g_b$ while its mass tends to $\pm\infty$.
\end{exmp}
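The vanishing of the scalar curvature of $g_{A,m}$ on $\{\rho_{g_b}>2A\}$ in the example above can be read off the standard conformal transformation law: writing $u:=1+\chi_A m\rho_{g_b}^{2-n}$, one has
\begin{equation*}
\mathop{\rm R}\nolimits_{u^{\frac{4}{n-2}}g_b}=u^{-\frac{n+2}{n-2}}\left(-\frac{4(n-1)}{n-2}\Delta_{g_b}u+\mathop{\rm R}\nolimits_{g_b}u\right)=-\frac{4(n-1)}{n-2}\,u^{-\frac{n+2}{n-2}}\Delta_{g_b}u,
\end{equation*}
since $g_b$ is Ricci-flat. On $\{\rho_{g_b}>2A\}$ one has $u=1+m\rho_{g_b}^{2-n}$, and the claim follows once $\rho_{g_b}^{2-n}$ is chosen to be $\Delta_{g_b}$-harmonic outside a compact set, mimicking the Green function $r^{2-n}$ of the Euclidean metric (an assumption implicit in the example above).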
The computation of the first variation of $\lambda_{\operatorname{ALE}}^0$ in Proposition \ref{first-var-prop} motivates the study of the difference $\lambda_{\operatorname{ALE}}^0-m_{\operatorname{ADM}}$.
\begin{defn}[$\lambda_{\operatorname{ALE}}$, a renormalized Perelman's functional]\label{defn-ale-lambda}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric and let $g\in \mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$ for $\tau>\frac{n-2}{2}$. We define
$$\lambda_{\operatorname{ALE}}(g) := \lambda_{\operatorname{ALE}}^0(g)-m_{\operatorname{ADM}}(g).$$
\end{defn}
\begin{rk}
This is reminiscent of the introduction of the mass in General Relativity in order to replace the Hilbert-Einstein functional $\int_N\mathop{\rm R}\nolimits_gd\mu_g$ by $\int_N\mathop{\rm R}\nolimits_gd\mu_g-m_{\operatorname{ADM}}(g)$ which is better-behaved in the setting of Asymptotically Euclidean metrics. Our functional $\lambda_{\operatorname{ALE}}$ is moreover an approximation up to third order:
$$\int_N\mathop{\rm R}\nolimits_{g_b+h}\,d\mu_{g_b+h}-m_{\operatorname{ADM}}(g_b+h)- \lambda_{\operatorname{ALE}}(g_b+h) = O\left(\|h\|^3_{C^{2,\alpha}_\tau}\right).$$
\end{rk}
Notice that Definition \ref{defn-ale-lambda} only makes sense for metrics lying in $\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$ a priori. The following proposition ensures that the functional $\lambda_{\operatorname{ALE}}$ is well-defined on a whole neighborhood of a given Ricci-flat ALE metric in the $C^{2,\alpha}_{\tau}$-topology.
\begin{prop}\label{lambdaALE analytic}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$. Then, the functional $\lambda_{\operatorname{ALE}}$, initially defined on $\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$, extends to a $C^{2,\alpha}_\tau$-neighborhood of $ g_b $ as an analytic functional
\begin{itemize}
\item whose $L^2(e^{-f_g}d\mu_g)$-gradient at $g$ is $-(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g),$
\item and whose second variation at $g_b$ for divergence-free $2$-tensors is $\frac{1}{2}L_{g_b}$.
\end{itemize}
Moreover, if $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$,
\begin{equation}
\begin{split}
\lambda_{\operatorname{ALE}}(g)=\lim_{R\rightarrow+\infty}\Bigg(\int_{\{\rho_{g_b}\leq R\}}&\left(|\nabla^{g}f_{g}|^2_{g}+\mathop{\rm R}\nolimits_{g}\right)\,e^{-f_{g}}d\mu_{g}\\
&-\int_{\{\rho_{g_b}=R\}}\left<\mathop{\rm div}\nolimits_{g_b}(g)-\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}(g),\mathbf{n}_{g_b}\right>_{g_b}\,d\sigma_{g_b}\Bigg).\label{true-def-lambda}
\end{split}
\end{equation}
Finally, the functional $\lambda_{\operatorname{ALE}}$ remains unchanged on $C^{2,\alpha'}_{\tau'}$-neighborhoods of $g_b$ for $\tau'\in (\frac{n-2}{2},\tau]$ and $\alpha'\in(0,\alpha]$.
\end{prop}
Before proving Proposition \ref{lambdaALE analytic}, we make a couple of remarks.
\begin{rk}
The largest space on which these quantities make sense seems to be $h\in H^2_{\frac{n}{2}-1}$: the first derivative $\langle h, \mathop{\rm Ric}\nolimits_g+\nabla^{g,2}f_g\rangle_{L^2(e^{-f_g}d\mu_g)}$ is well-defined for $g-g_b\in H^2_{\frac{n}{2}-1}$, and the second derivative $\frac{1}{2}\langle h, L_{g_b}h\rangle_{L^2(e^{-f_g}d\mu_g)}$ is also well-defined for $2$-tensors in $H^2_{\frac{n}{2}-1}$. However, it is not clear if the definition of the mass $m_{\operatorname{ADM}}$ is invariant under changes of coordinates under these assumptions.
\end{rk}
\begin{rk}
As already noticed in the Introduction, $\lambda_{\operatorname{ALE}}(g)$ is the limit of the difference of two integrals which could be divergent in general. However, if $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ is such that its scalar curvature is integrable, then $\lambda_{\operatorname{ALE}}(g)$ really is the difference between $\lambda_{\operatorname{ALE}}^0(g)$ and the mass $m_{\operatorname{ADM}}(g)$.
\end{rk}
\begin{proof}[Proof of Proposition \ref{lambdaALE analytic}]
Let us consider a small $h$ in $ C^{2,\alpha}_\tau$ for $\tau>\frac{n-2}{2}$ such that $g_b+h$ is a metric on $N$. Thanks to [(\ref{lem-lin-equ-scal-first-var}), Lemma \ref{lem-lin-equ-Ric-first-var}], we have
\begin{align}
\mathop{\rm R}\nolimits_{g_b+h} =& \int_0^1 \Big(\mathop{\rm div}\nolimits_{g_b+th}( \mathop{\rm div}\nolimits_{g_b+th}h - \nabla^{g_b+th}\textup{tr}_{g_b+th}h )-\left<h,\mathop{\rm Ric}\nolimits(g_b+th)\right>_{g_b+th}\Big)\,dt\\
=& \mathop{\rm div}\nolimits_{g_b}( \mathop{\rm div}\nolimits_{g_b}h - \nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h ) \nonumber\\
&+\int_0^1 \Big[\big(\mathop{\rm div}\nolimits_{g_b+th}( \mathop{\rm div}\nolimits_{g_b+th} - \nabla^{g_b+th}\textup{tr}_{g_b+th})\big)-\big(\mathop{\rm div}\nolimits_{g_b}( \mathop{\rm div}\nolimits_{g_b} - \nabla^{g_b}\textup{tr}_{g_b})\big)\Big](h)\, dt\nonumber\\
&-\int_0^1\left<h,\mathop{\rm Ric}\nolimits(g_b+th)\right>_{g_b+th}\,dt\nonumber\\
=&:\mathop{\rm div}\nolimits_{g_b}( \mathop{\rm div}\nolimits_{g_b}h - \nabla^{g_b}\textup{tr}_{g_b}h ) + Q_{g_b}(h),\label{dvp scal}
\end{align}
where $Q_{g_b}:C^{2,\alpha}_\tau\to C^{0,\alpha}_{2\tau+2}$ is analytic and satisfies, for some positive constant $C$ and any two symmetric $2$-tensors $h$ and $h'$, $$\|Q_{g_b}(h)- Q_{g_b}(h')\|_{C^{0,\alpha}_{2\tau +2}}\leq C\|h-h'\|_{C^{2,\alpha}_{\tau}} \left(\|h\|_{C^{2,\alpha}_{\tau}}+\|h'\|_{C^{2,\alpha}_{\tau}}\right).$$
For a symmetric $2$-tensor $h$ such that $g_b+h\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, let $v_{g_b+h}\in C^{2,\alpha}_\tau$ denote the unique solution to
$$-4\Delta_{g_b+h}v_{g_b+h} + \mathop{\rm R}\nolimits_{g_b+h}v_{g_b+h} = -\mathop{\rm R}\nolimits_{g_b+h} =- \mathop{\rm div}\nolimits_{g_b}\left(\mathop{\rm div}\nolimits_{g_b}h-\nabla^{g_b}\textup{tr}_{g_b}h\right) - Q_{g_b}(h) \in C^{0,\alpha}_{\tau+2}.$$
Its existence is ensured because $-4\Delta_{g_b+h} + \mathop{\rm R}\nolimits_{g_b+h}: C^{2,\alpha}_\tau\to C^{0,\alpha}_{\tau+2}$ is invertible: indeed, we are in the invertibility range $0<\tau<n-2$ of the Laplacian as already noticed in Claim \ref{claim-iso}.
We have already seen in (the proof of) Proposition \ref{existence propriete-wg} by integration by parts against $1+v_{g_b+h}$ that we actually have
$$\lambda_{\operatorname{ALE}}^0(g_b+h) = \int_N (1+v_{g_b+h})\mathop{\rm R}\nolimits_{g_b+h}\,d\mu_{g_b+h},$$
if $g_b+h\in\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$.
Let us now consider the following expression, for $g_b+h\in\mathcal{M}^{2,\alpha}_{\tau}(g_b,\varepsilon)$, $$ \int_N (1+v_{g_b+h})\mathop{\rm R}\nolimits_{g_b+h}\,d\mu_{g_b+h} - m_{\operatorname{ADM}}(g_b+h). $$ Use \eqref{dvp scal} together with the fact that
\begin{equation*}
\begin{split}
&\int_N\mathop{\rm div}\nolimits_{g_b}( \mathop{\rm div}\nolimits_{g_b}h - \nabla^{g_b}\textup{tr}_{g_b}h)\,d\mu_{g_b} - m_{\operatorname{ADM}}(g_b+h) =\\
&\lim_{R\rightarrow+\infty}\Bigg(\int_{\{\rho_{g_b}\leq R\}}\mathop{\rm div}\nolimits_{g_b}( \mathop{\rm div}\nolimits_{g_b}h - \nabla^{g_b}\textup{tr}_{g_b}h)\,d\mu_{g_b}-\int_{\{\rho_{g_b}=R\}}\left<\mathop{\rm div}\nolimits_{g_b}h-\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h,\mathbf{n}_{g_b}\right>_{g_b}\,d\sigma_{g_b}\Bigg)\\
&=0,
\end{split}
\end{equation*}
noticed in (\ref{def-mass}), to obtain
\begin{align}
\lambda_{\operatorname{ALE}}(g_b+h)=&\;\int_N v_{g_b+h}\mathop{\rm R}\nolimits_{g_b+h}\,d\mu_{g_b+h} + \int_N Q_{g_b}(h)\,d\mu_{g_b+h} \\
&+ \Big(\int_N\mathop{\rm div}\nolimits_{g_b}( \mathop{\rm div}\nolimits_{g_b}h - \nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h)\,d\mu_{g_b} - m_{\operatorname{ADM}}(g_b+h) \Big)\\
=&\; \int_N v_{g_b+h}\mathop{\rm R}\nolimits_{g_b+h}\,d\mu_{g_b+h} + \int_N Q_{g_b}(h)\,d\mu_{g_b+h}.\label{last-exp-make-sense-lambda}
\end{align}
This last expression (\ref{last-exp-make-sense-lambda}) is well-defined and analytic on a $C^{2,\alpha}_\tau$-neighborhood of $g_b$. A similar argument leads to the proof of (\ref{true-def-lambda}) by using the first expression of $\lambda_{\operatorname{ALE}}^0(g)$ given in [(\ref{egalite-lambda-2-bis}), Proposition \ref{prop-pot-fct}], $\lambda_{\operatorname{ALE}}^0(g)=\int_N(|\nabla^gf_g|^2_g+\mathop{\rm R}\nolimits_g)\,e^{-f_{g}}d\mu_g$ instead.
Moreover, for any metric $g$ in this $C^{2,\alpha}_\tau$-neighborhood of $g_b$ and from the computations of Proposition \ref{first-var-prop}, the $L^2(e^{-f_g}d\mu_g)$-gradient of $\lambda_{\operatorname{ALE}}$ at $g$ is $-(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2} f_g)$. Since $h\mapsto m_{\operatorname{ADM}}(g_b+h)$ is linear, the Hessian of $\lambda_{\operatorname{ALE}}$ at $g_b$ for divergence-free deformations is $\frac{1}{2}L_{g_b}$.
Finally, $\lambda_{\operatorname{ALE}}$ is independent of $(\tau',\alpha')\in(\frac{n-2}{2},\tau]\times(0,\alpha]$ since the potential function $f_g$ is. Indeed, let $\frac{n-2}{2}<\tau'\leq \tau<n-2$ and $0<\alpha'\leq \alpha<1$ and let us show that, for $g\in B_{C^{2,\alpha'}_{\tau'}}(g_b,\varepsilon)\,\cap\,B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, the potential function $f_g$ does not depend on the choice of parameters. Recall from Proposition \ref{existence propriete-wg} that $w_g:=e^{-\frac{f_g}{2}}$ satisfies $-4\Delta_gw_g+\mathop{\rm R}\nolimits_gw_g=0$ and $w_g-1\in C^{2,\alpha}_{\tau}$ by definition. Let $w$ (respectively $w'$) denote the solution obtained by applying Proposition \ref{existence propriete-wg} with parameters $(\tau,\alpha)$ (respectively $(\tau',\alpha')$). Then the difference $w-w'\in C^{2,\alpha'}_{\tau'}$ lies in the kernel of $-4\Delta_g+\mathop{\rm R}\nolimits_g$, and Claim \ref{claim-iso} of the proof of Proposition \ref{existence propriete-wg} leads to $w=w'$.
\end{proof}
The next proposition computes the second variation of $\lambda_{\operatorname{ALE}}$ at any metric $C^{2,\alpha}_{\tau}$-close to a given ALE Ricci-flat metric. This is used in the proof of Theorem \ref{theo-loja-ALE} in order to check condition [(\ref{item-0-bis}), Proposition \ref{Lojasiewicz ineq weighted}].
\begin{prop}\label{snd-var-gal-lambda}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$. Then there exists $\varepsilon>0$ such that for any $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ and any $h\in C^{2,\alpha}_{\tau}$,
\begin{equation}
\begin{split}\label{snd-var-gal-lambda-formula}
\delta^2_g\lambda_{\operatorname{ALE}}&(h,h)=\\
&\frac{1}{2}\int_N\left\langle\Delta_{f_g}h+2\mathop{\rm Rm}\nolimits(g)(h)-\mathop{\rm \mathscr{L}}\nolimits_{B_{f_g}(h)}(g),h\right\rangle_g\,e^{-f_g}d\mu_g\\
&\quad+\frac{1}{2}\int_N\left\langle h\circ\mathop{\rm Ric}\nolimits_{f_g}(g)+\mathop{\rm Ric}\nolimits_{f_g}(g)\circ h-2\left(\frac{\mathop{\rm tr}\nolimits_gh}{2}-\delta_gf(h)\right)\mathop{\rm Ric}\nolimits_{f_g}(g),h\right\rangle_g\,e^{-f_g}d\mu_g.
\end{split}
\end{equation}
Here, $\mathop{\rm Ric}\nolimits_{f_g}(g)$ denotes the Bakry-\'Emery tensor $\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g$ associated to the smooth metric measure space $(N^n,g,\nabla^gf_g)$ and $B_{f_g}(h)$ denotes the weighted linear Bianchi gauge defined by
$$B_{f_g}(h):=\mathop{\rm div}\nolimits_{f_g}h-\nabla^g\left(\frac{\mathop{\rm tr}\nolimits_gh}{2}-\delta_gf(h)\right).$$
\end{prop}
\begin{rk}
In \eqref{snd-var-gal-lambda-formula}, notice that the function $\frac{\mathop{\rm tr}\nolimits_gh}{2}-\delta_gf(h)$ is nothing but the infinitesimal variation of the weighted volume $e^{-f_g}d\mu_g$ and the weighted linear Bianchi gauge $B_{f_g}(h)$ differs from the linear Bianchi gauge defined in [\ref{defn-bianchi-op}, Lemma \ref{lem-lin-equ-Ric-first-var}] by $-h(\nabla^gf_g)+\nabla^g\delta_gf(h)$. This vector field is in turn the variation of the vector field $\nabla^g f_g$.
\end{rk}
\begin{rk}
Proposition \ref{snd-var-gal-lambda} recovers the second variation of $\lambda_{\operatorname{ALE}}$ at $g=g_b$ along divergence-free variations.
\end{rk}
\begin{proof}
We consider $\varepsilon>0$ so small that $f_g$ is well-defined by Proposition \ref{prop-pot-fct}. Now, as the $L^2(e^{-f_g}d\mu_g)$-gradient of $\lambda_{\operatorname{ALE}}$ is $-\mathop{\rm Ric}\nolimits(g)-\nabla^{g,2}f_g=:-\mathop{\rm Ric}\nolimits_{f_g}(g)$ by Proposition \ref{lambdaALE analytic}, we deduce the following formula with the help of Lemma \ref{lem-lin-equ-Ric-first-var} and [\eqref{first-var-lie-der-app}, Lemma \ref{lemma-app-lie-der-lin}]:
\begin{equation}
\begin{split}\label{form-var-bak-eme}
2\delta_g(-\mathop{\rm Ric}\nolimits_{f_g}(g))(h)&=\left(L_gh-\mathop{\rm \mathscr{L}}\nolimits_{B_g(h)}(g)\right)-\mathop{\rm \mathscr{L}}\nolimits_{\nabla^g(\delta_gf(h))}(g)-\mathop{\rm \mathscr{L}}\nolimits_{\nabla^gf_g}(h)+\mathop{\rm \mathscr{L}}\nolimits_{h(\nabla^gf_g)}(g)\\
&=\Delta_gh+2\mathop{\rm Rm}\nolimits(g)(h)-\mathop{\rm Ric}\nolimits(g)\circ h-h\circ\mathop{\rm Ric}\nolimits(g)\\
&\quad-\nabla^g_{\nabla^gf}h-\nabla^{g,2}f_g\circ h-h\circ\nabla^{g,2}f_g-\mathop{\rm \mathscr{L}}\nolimits_{B_{f_g}(h)}(g)\\
&=\Delta_{f_g}h+2\mathop{\rm Rm}\nolimits(g)(h)-\mathop{\rm Ric}\nolimits_{f_g}(g)\circ h-h\circ\mathop{\rm Ric}\nolimits_{f_g}(g)-\mathop{\rm \mathscr{L}}\nolimits_{B_{f_g}(h)}(g).
\end{split}
\end{equation}
Here we have used the general fact that $\mathop{\rm \mathscr{L}}\nolimits_{\nabla^gf}T=\nabla^g_{\nabla^gf}T+T\circ\nabla^{g,2}f+\nabla^{g,2}f\circ T$ for any symmetric $C^1_{loc}$ $2$-tensor $T$ and any $C^2_{loc}$ function $f$.
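For the reader's convenience, this identity can be checked in local coordinates: writing $X:=\nabla^gf$, so that $\nabla^g_iX^k=\nabla^g_i\nabla^{g,k}f$, one has
\begin{equation*}
(\mathop{\rm \mathscr{L}}\nolimits_{X}T)_{ij}=X^k\nabla^g_kT_{ij}+T_{kj}\nabla^g_iX^k+T_{ik}\nabla^g_jX^k=\left(\nabla^g_{\nabla^gf}T+\nabla^{g,2}f\circ T+T\circ\nabla^{g,2}f\right)_{ij}.
\end{equation*}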
Next, we observe that the volume variation is
\begin{equation}\label{form-weig-var}
\delta_g(e^{-f_g}d\mu_g)(h)=\left(\frac{\mathop{\rm tr}\nolimits_gh}{2}-\delta_gf(h)\right)e^{-f_g}d\mu_g.
\end{equation}
Finally, we compute the variation of the inner product on symmetric $2$-tensors induced by the metric $g$ as follows:
\begin{equation}
\begin{split}\label{gal-form-var-norm}
\delta_g\left(\left\langle h,T\right\rangle_g\right)(h)&=-g^{im}h_{mn}g^{nk}g^{jl}h_{ij}T_{kl}-g^{ik}g^{jm}h_{mn}g^{nl}h_{ij}T_{kl}\\
&=-\left\langle h\circ T+T\circ h,h\right\rangle_g,
\end{split}
\end{equation}
for any symmetric $2$-tensor $T$. Then \eqref{snd-var-gal-lambda-formula} follows by considering the linear combination $\frac{1}{2}$\eqref{form-var-bak-eme}+\eqref{form-weig-var} to which we add \eqref{gal-form-var-norm} applied to $T:=-\mathop{\rm Ric}\nolimits_{f_g}(g)$.
\end{proof}
We end this section by establishing the weighted elliptic equation satisfied by the variation of the weighted volume measure at a metric $C^{2,\alpha}_{\tau}$-close to a given ALE Ricci-flat metric.
Again, this is used in the proof of Theorem \ref{theo-loja-ALE} in order to check conditions [(\ref{item-0}), (\ref{item-0-bis}), Proposition \ref{Lojasiewicz ineq weighted}].
\begin{prop}\label{var-vol-var-ell-eqn-prop}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$. Then there exists $\varepsilon>0$ such that for any $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ and any $h\in C^{2,\alpha}_{\tau}$,
\begin{equation}
\begin{split}\label{var-vol-var-ell-eqn-for}
\Delta_{f_g}\left(\frac{\mathop{\rm tr}\nolimits_gh}{2}-\delta_gf(h)\right)=\frac{1}{2}\left(\mathop{\rm div}\nolimits_{f_g}(\mathop{\rm div}\nolimits_{f_g}h)-\langle h,\mathop{\rm Ric}\nolimits_{f_g}(g)\rangle_g\right).
\end{split}
\end{equation}
\end{prop}
\begin{proof}
We consider $\varepsilon>0$ so small that $f_g$ is well-defined by Proposition \ref{prop-pot-fct}. In particular, the potential function $f_g$ satisfies the Euler-Lagrange equation \eqref{equ-criti-lambda-pot} that we differentiate along a variation $h\in C^{2,\alpha}_{\tau}$ as follows:
\begin{equation}
\begin{split}\label{diff-var-for-eul-lag}
2\delta_g\left(\Delta_gf_g\right)(h)&=2\Delta_g\delta_gf(h)-2\langle B_g(h),\nabla^gf_g\rangle_g-2\langle h,\nabla^{g,2}f_g\rangle_g,\\
\delta_g(|\nabla^gf_g|^2_g)(h)&=-h(\nabla^gf_g,\nabla^gf_g)+2\langle\nabla^gf_g,\nabla^g(\delta_gf(h))\rangle_g,\\
\delta_g\mathop{\rm R}\nolimits(h)&=\mathop{\rm div}\nolimits_{g}\mathop{\rm div}\nolimits_{g}h-\Delta_{g}\mathop{\rm tr}\nolimits_gh-\left<h,\mathop{\rm Ric}\nolimits(g)\right>_g.
\end{split}
\end{equation}
Here, we have used [\eqref{first-var-lie-der-app}, Lemma \ref{lemma-app-lie-der-lin}] in the first line and the last line is simply [\eqref{lem-lin-equ-scal-first-var}, Lemma \ref{lem-lin-equ-Ric-first-var}]. Considering a suitable linear combination of the first variations described in \eqref{diff-var-for-eul-lag} and using \eqref{equ-criti-lambda-pot} leads to:
\begin{equation*}
\begin{split}
0&=\delta_g\left(2\Delta_gf_g-|\nabla^gf_g|^2_g+\mathop{\rm R}\nolimits_g\right)(h)\\
&=2\Delta_{f_g}\left(\delta_gf(h)-\frac{\mathop{\rm tr}\nolimits_gh}{2}\right)-\langle h,\mathop{\rm Ric}\nolimits_{f_g}(g)\rangle_g\\
&\quad-2\langle \mathop{\rm div}\nolimits_gh,\nabla^gf_g\rangle_g-\langle h,\nabla^{g,2}f_g\rangle_g+h(\nabla^gf_g,\nabla^gf_g)+\mathop{\rm div}\nolimits_g\mathop{\rm div}\nolimits_gh.
\end{split}
\end{equation*}
This ends the proof of the desired equation satisfied by $\delta_gf(h)-\frac{\mathop{\rm tr}\nolimits_gh}{2}$ once we observe that:
\begin{equation*}
\mathop{\rm div}\nolimits_{f_g}(\mathop{\rm div}\nolimits_{f_g}h)=-2\langle \mathop{\rm div}\nolimits_gh,\nabla^gf_g\rangle_g-\langle h,\nabla^{g,2}f_g\rangle_g+h(\nabla^gf_g,\nabla^gf_g)+\mathop{\rm div}\nolimits_g\mathop{\rm div}\nolimits_gh.
\end{equation*}
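For the reader's convenience, let us sketch this last identity with the convention $\mathop{\rm div}\nolimits_{f_g}T:=\mathop{\rm div}\nolimits_gT-T(\nabla^gf_g,\cdot)$:
\begin{equation*}
\begin{split}
\mathop{\rm div}\nolimits_{f_g}(\mathop{\rm div}\nolimits_{f_g}h)&=\mathop{\rm div}\nolimits_g\left(\mathop{\rm div}\nolimits_gh-h(\nabla^gf_g)\right)-\langle\mathop{\rm div}\nolimits_gh-h(\nabla^gf_g),\nabla^gf_g\rangle_g\\
&=\mathop{\rm div}\nolimits_g\mathop{\rm div}\nolimits_gh-2\langle\mathop{\rm div}\nolimits_gh,\nabla^gf_g\rangle_g-\langle h,\nabla^{g,2}f_g\rangle_g+h(\nabla^gf_g,\nabla^gf_g),
\end{split}
\end{equation*}
where we have used $\mathop{\rm div}\nolimits_g(h(\nabla^gf_g))=\langle\mathop{\rm div}\nolimits_gh,\nabla^gf_g\rangle_g+\langle h,\nabla^{g,2}f_g\rangle_g$.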
\end{proof}
\subsection{Further properties of $\lambda_{\operatorname{ALE}}$}~~\\
We can approximate $C^{2,\alpha}_{\tau}$-perturbations by metrics which are Ricci-flat outside a compact subset: this is the content of the following lemma that we state without proof.
\begin{lemma}\label{cutoffC2alphatau}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n-2}{2},n-2\right)$ and $\alpha\in (0,1)$. Let $g$ be a metric in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ for $\varepsilon$ sufficiently small. Then, for a sequence of cut-off functions $\chi_s$ for $s>1$ vanishing in smaller and smaller neighborhoods of infinity, we have for any $\alpha'\in(0,\alpha)$ and $\tau'<\tau$,
\begin{equation*}
\begin{split}
&\chi_sg + (1-\chi_s)g_b\xrightarrow[s\to +\infty]{C^{2,\alpha'}_{\tau'}} g,\\
&\lambda_{\operatorname{ALE}}^0(\chi_sg + (1-\chi_s)g_b)-m_{\operatorname{ADM}}(\chi_sg + (1-\chi_s)g_b)\xrightarrow[s\to +\infty]{} \lambda_{\operatorname{ALE}}(g).
\end{split}
\end{equation*}
\end{lemma}
The following proposition sums up the scaling properties and the diffeomorphism invariance of the functional $\lambda_{\operatorname{ALE}}$: it echoes Proposition \ref{inv-diff-lambda} and Lemma \ref{scaling lambdaALE} established for $\lambda_{\operatorname{ALE}}^0$.
\begin{prop}\label{scaling diffeo tildelambda}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n-2}{2},n-2\right)$ and $\alpha\in (0,1)$.
Let $g$ be a metric in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ and let $\phi$ be a diffeomorphism close to the identity in the $C^{3,\alpha}_{\tau-1}(TN)$ topology. Then, we have $\lambda_{\operatorname{ALE}}(\phi^*g) = \lambda_{\operatorname{ALE}}(g)$.
Moreover, if $s>0$,
$$\lambda_{\operatorname{ALE}}(sg) = s^{\frac{n-2}{2}}\lambda_{\operatorname{ALE}}(g).$$
\end{prop}
\begin{proof}
Let us prove the result assuming that $\mathop{\rm R}\nolimits_g=0$ in a neighborhood of infinity: the general case is obtained by approximation thanks to Lemma \ref{cutoffC2alphatau}. Under this assumption, we know that $\lambda_{\operatorname{ALE}}^0$ is bounded and has the expected behavior under scaling and under the action of diffeomorphisms thanks to Proposition \ref{inv-diff-lambda} and Lemma \ref{scaling lambdaALE}.
The mass behaves the same way under rescaling. Indeed, if $g$ is ALE asymptotic to $g_e$ at infinity, then $sg$ is ALE asymptotic to $sg_e$ at infinity, and we therefore have
$$m_{\operatorname{ADM}}(sg) = s^{\frac{n}{2}- 1} m_{\operatorname{ADM}}(g),$$
by studying the scaling of the operators involved.
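For the reader's convenience, here is a sketch of this computation, the mass of $sg$ being computed with respect to the background metric $sg_e$: the $1$-form $\mathop{\rm div}\nolimits_{sg_e}(sg)-d\mathop{\rm tr}\nolimits_{sg_e}(sg)=\mathop{\rm div}\nolimits_{g_e}g-d\mathop{\rm tr}\nolimits_{g_e}g$ is scale-invariant, while $\mathbf{n}_{sg_e}=s^{-\frac{1}{2}}\mathbf{n}_{g_e}$ and $d\sigma_{sg_e}=s^{\frac{n-1}{2}}d\sigma_{g_e}$, so that
\begin{equation*}
m_{\operatorname{ADM}}(sg)=\lim_{R\rightarrow+\infty}\int_{\{\rho_{g_b}=R\}}\left(\mathop{\rm div}\nolimits_{g_e}g-d\mathop{\rm tr}\nolimits_{g_e}g\right)(\mathbf{n}_{sg_e})\,d\sigma_{sg_e}=s^{-\frac{1}{2}}s^{\frac{n-1}{2}}\,m_{\operatorname{ADM}}(g)=s^{\frac{n}{2}-1}m_{\operatorname{ADM}}(g).
\end{equation*}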
The invariance of $m_{\operatorname{ADM}}$ by $C^{3,\alpha}_{\tau-1}$-diffeomorphisms was proved in \cite{Bart-Mass}.
\end{proof}
We end this section with the proof of the monotonicity of the functional $\lambda_{\operatorname{ALE}}$ along the Ricci flow in case it stays in a neighborhood of a Ricci-flat metric.
\begin{prop}\label{prop-mono-lambda}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n-2}{2},n-2\right)$ and $\alpha\in (0,1)$ and let $(g(t))_{t\in[0,T]}$ be a solution to the Ricci flow on $N$ starting from $g(0)\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$.
Then, $t\in[0,T]\rightarrow \lambda_{\operatorname{ALE}}(g(t))\in\mathbb{R}$ is non-decreasing along the Ricci flow as long as $g(t)\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ for every $t\in[0,T]$ and,
\begin{equation}
\frac{d}{dt}\lambda_{\operatorname{ALE}}(g(t))=2\|\mathop{\rm Ric}\nolimits(g(t))+\nabla^{g(t),2}f_{g(t)}\|_{L^2(e^{-f_{g(t)}}d\mu_{g(t)})}^2.\label{first-mono-lambda}
\end{equation}
Moreover, $t\in[0,T]\rightarrow\lambda_{\operatorname{ALE}}(g(t))$ is constant if and only if the metrics $g(t)$ are Ricci-flat.
\end{prop}
\begin{rk}
Notice that Proposition \ref{prop-mono-lambda} is not a direct consequence of Proposition \ref{lambdaALE analytic} since in general, the curve $t\in[0,T]\rightarrow g(t)\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ induced by a solution to the Ricci flow is a priori only $C^0$ when interpreted with values in the space $C^{2,\alpha}_{\tau}$.
\end{rk}
\begin{rk}
Checking that a solution to the Ricci flow stays in a neighborhood $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ of $g_b$ is a delicate problem. In a forthcoming paper, we will investigate this question in the case where $(N^n,g_b)$ is stable.
\end{rk}
\begin{proof}
Let $R\geq R_0>0$ where $R_0$ is sufficiently large so that the level sets $\{\rho_{g_b}=R\}$ are closed smooth hypersurfaces. Then, if $(g(t))_{t\in[0,T]}$ is a Ricci flow in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, the proof of Proposition \ref{first-var-prop} applied with $h(t):=-2\mathop{\rm Ric}\nolimits(g(t))$ and $\varphi(t):=\delta_{g(t)}f(-2\mathop{\rm Ric}\nolimits(g(t)))$ gives:
\begin{equation}
\begin{split}\label{delicate-first-var}
&\int_{\{\rho_{g_b}\leq R\}}\left(|\nabla^{g(t)}f_{g(t)}|^2_{g(t)}+\mathop{\rm R}\nolimits_{g(t)}\right)\,e^{-f_{g(t)}}d\mu_{g(t)}\\
&-\int_{\{\rho_{g_b}=R\}}\left<\mathop{\rm div}\nolimits_{g_b}(g(t)-g_b)-\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}(g(t)-g_b),\mathbf{n}_{g_b}\right>_{g_b}\,d\sigma_{g_b}\Bigg\rvert_{t=t_1}^{t_2}=\\
&2\int_{t_1}^{t_2}\int_{\{\rho_{g_b}\leq R\}}|\mathop{\rm Ric}\nolimits(g(t))+\nabla^{g(t),2}f_{g(t)}|^2_{g(t)}\,e^{-f_{g(t)}}d\mu_{g(t)}dt\\
&+\int_{t_1}^{t_2}\int_{\{\rho_{g_b}=R\}}\left<\nabla^{g(t)}\mathop{\rm R}\nolimits_{g(t)}+2\mathop{\rm Ric}\nolimits(g(t))(\nabla^{g(t)}f_{g(t)}),\mathbf{n}_{g(t)}\right>_{g(t)}\,e^{-f_{g(t)}}d\sigma_{g(t)}dt\\
&+\int_{t_1}^{t_2}\int_{\{\rho_{g_b}= R\}}2\left(\mathop{\rm R}\nolimits_{g(t)}+\delta_{g(t)}f(-2\mathop{\rm Ric}\nolimits(g(t)))\right)\langle\nabla^{g(t)}f_{g(t)},\mathbf{n}_{g(t)}\rangle_{g(t)}\,e^{-f_{g(t)}}d\sigma_{g(t)}dt\\
+2\int_{t_1}^{t_2}\int_{\{\rho_{g_b}=R\}}\left<\mathop{\rm div}\nolimits_{g_b}(\mathop{\rm Ric}\nolimits(g(t)))-\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}(\mathop{\rm Ric}\nolimits(g(t))),\mathbf{n}_{g_b}\right>_{g_b}\,d\sigma_{g_b}dt.
\end{split}
\end{equation}
All we need to check is that the boundary integrals go to $0$ as $R$ tends to $+\infty$ uniformly in time. First, observe that by properties of the map $g\rightarrow f_g$ summarized in Proposition \ref{prop-pot-fct},
\begin{equation}
\begin{split}\label{easy-peasy-est}
&\left|\int_{t_1}^{t_2}\int_{\{\rho_{g_b}=R\}}\left<2\mathop{\rm Ric}\nolimits(g(t))(\nabla^{g(t)}f_{g(t)}),\mathbf{n}_{g(t)}\right>_{g(t)}\,e^{-f_{g(t)}}d\sigma_{g(t)}dt\right|\leq CR^{n-2\tau-4},\\
&\left|\int_{t_1}^{t_2}\int_{\{\rho_{g_b}= R\}}2\left(\mathop{\rm R}\nolimits_{g(t)}+\delta_{g(t)}f(-2\mathop{\rm Ric}\nolimits(g(t)))\right)\langle\nabla^{g(t)}f_{g(t)},\mathbf{n}_{g(t)}\rangle_{g(t)}\,e^{-f_{g(t)}}d\sigma_{g(t)}dt\right|\leq CR^{n-2\tau-2},
\end{split}
\end{equation}
for some positive constant $C=C(n,g_b,\varepsilon)$. Since $\tau>\frac{n-2}{2}$, the right-hand sides of (\ref{easy-peasy-est}) decay to $0$ as $R$ tends to $+\infty$. Now, the remaining boundary integrals in (\ref{delicate-first-var}) would cancel each other if the last integral were expressed in terms of the evolving metric $g(t)$ thanks to the Bianchi identity. To conclude, it is sufficient to notice that:
\begin{equation*}
|\mathbf{n}_{g(t)}-\mathbf{n}_{g_b}|_{g_b}+\left|e^{-f_{g(t)}}\frac{d\sigma_{g(t)}}{d\sigma_{g_b}}-1\right|\leq C\rho_{g_b}^{-\tau},
\end{equation*}
for some positive constant $C=C(n,g_b,\varepsilon)$. Indeed, this implies that:
\begin{equation*}
\begin{split}
\Bigg\lvert\int_{t_1}^{t_2}&\int_{\{\rho_{g_b}=R\}}\Bigg(\left<\nabla^{g(t)}\mathop{\rm R}\nolimits_{g(t)},\mathbf{n}_{g(t)}\right>_{g(t)}\,e^{-f_{g(t)}}\frac{d\sigma_{g(t)}}{d\sigma_{g_b}}\\
&+2\left<\mathop{\rm div}\nolimits_{g_b}(\mathop{\rm Ric}\nolimits(g(t)))-\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}(\mathop{\rm Ric}\nolimits(g(t))),\mathbf{n}_{g_b}\right>_{g_b}\Bigg)\,d\sigma_{g_b}dt\Bigg\rvert\leq CR^{n-2-2\tau}.
\end{split}
\end{equation*}
Here we have used that $\nabla^{g(t)}\mathop{\rm Ric}\nolimits(g(t))=O(\rho_{g_b}^{-\tau-2})$ for $t>0$ by Shi's estimates \cite{Shi-Def}.
Letting $R$ tend to $+\infty$ in (\ref{delicate-first-var}) then gives the expected result:
\begin{equation*}
\lambda_{\operatorname{ALE}}(g(t_2))-\lambda_{\operatorname{ALE}}(g(t_1))=2\int_{t_1}^{t_2}\int_{N}|\mathop{\rm Ric}\nolimits(g(t))+\nabla^{g(t),2}f_{g(t)}|^2_{g(t)}\,e^{-f_{g(t)}}d\mu_{g(t)}dt.
\end{equation*}
\end{proof}
\begin{rk}
Under our assumptions, the mass $m_{\operatorname{ADM}}$, when it is defined, is constant along the Ricci flow by \cite{Dai-Ma-Mass} (see also \cite{Li-Yu-ALE}); the variations of the functional $\lambda_{\operatorname{ALE}}$ therefore only come from those of $\lambda_{\operatorname{ALE}}^0$.
\end{rk}
\subsection{Local properties of stable ALE Ricci-flat metrics}~~\\
Propositions \ref{second-var-prop} and \ref{lambdaALE analytic} justify the following notion of stability for an ALE Ricci-flat metric.
\begin{defn}\label{defn-stable}
An ALE Ricci-flat metric $(N^n,g_b)$ asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$ is said to be \emph{linearly stable} if the second variation of $\lambda_{\operatorname{ALE}}$ at $g_b$ along a divergence-free variation $h\in C^{2,\alpha}_{\tau}(S^2T^*N)$, $\tau\in\left(\frac{n-2}{2},n-2\right)$, $\alpha\in(0,1)$, is nonpositive, i.e. if $L_{g_b}$ is a nonpositive operator in the $L^2$ sense when restricted to divergence-free variations in $ C^{2,\alpha}_{\tau}(S^2T^*N)$.
\end{defn}
Definition \ref{defn-stable} is natural with respect to the functional $\lambda_{\operatorname{ALE}}$, but it has the apparent disadvantage of depending a priori on a choice of parameters $(\tau,\alpha)\in\left(\frac{n-2}{2},n-2\right)\times(0,1)$. The following lemma shows that Definition \ref{defn-stable} is actually independent of this choice of parameters.
\begin{lemma}\label{lemma-equiv-def-stable}
Let $(N^n,g_b)$, $n\geq 4$, be an ALE Ricci-flat metric. Then the following assertions are equivalent:
\begin{enumerate}
\item \label{first-def-sta}$(N^n,g_b)$ is linearly stable in the sense of Definition \ref{defn-stable}.\\
\item \label{sec-def-sta}$\left<-L_{g_b}h,h\right>_{L^2}\geq 0$ for all $h\in C_c^{\infty}(S^2T^*N)$.\\
\item \label{thir-def-sta}$\left<-L_{g_b}h,h\right>_{L^2}\geq 0$ for all $h\in H^2_{\frac{n}{2}-1}(S^2T^*N)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We proceed by proving the implications $(\ref{first-def-sta})\Rightarrow (\ref{sec-def-sta})\Rightarrow (\ref{thir-def-sta})\Rightarrow (\ref{first-def-sta}).$
Notice first that we have the following inclusions $$C^{\infty}_c(S^2T^*N)\subset C^{2,\alpha}_{\tau}(S^2T^*N)\subset H^2_{\frac{n}{2}-1}(S^2T^*N),$$ for any $\tau>\frac{n}{2}-1$ and $\alpha\in(0,1)$ according to Remark \ref{sobolev embeddings}. Moreover, $C_c^{\infty}(S^2T^*N)$ is dense in $H^2_{\frac{n}{2}-1}(S^2T^*N)$. In particular, the implications $(\ref{sec-def-sta})\Rightarrow (\ref{thir-def-sta})\Rightarrow (\ref{first-def-sta})$ are straightforward.
We claim that if $(\ref{first-def-sta})$ holds true then
\begin{equation}
\left<-L_{g_b}h,h\right>_{L^2}\geq 0,\quad\text{for all $h\in C^{2,\alpha}_{\tau}(S^2T^*N)$.}\label{cond-1-bis-lin-sta}
\end{equation}
Taken (\ref{cond-1-bis-lin-sta}) for granted, it is immediate to conclude the proof of the implication $(\ref{first-def-sta})\Rightarrow (\ref{sec-def-sta}).$
Therefore, all that is left to prove is (\ref{cond-1-bis-lin-sta}) under Condition $(\ref{first-def-sta})$. By Proposition \ref{prop-decomp-2-tensor}, if $h$ is a symmetric $2$-tensor in $C^{2,\alpha}_{\tau}$ with $\beta:=\tau\in\left(\frac{n-2}{2},n-2\right)\subset(1,n-1)$, then there exist a divergence-free symmetric $2$-tensor $h'$ in $C^{2,\alpha}_{\tau}$ and a vector field $X\in C^{3,\alpha}_{\tau-1}$ such that $h=h'+\mathop{\rm \mathscr{L}}\nolimits_X(g_b)$. Now, observe by bilinearity that:
\begin{equation}
\begin{split}\label{easy-quad-form}
\left<-L_{g_b}h,h\right>_{L^2}=\,&\left<-L_{g_b}h',h'\right>_{L^2}+\left<-L_{g_b}h',\mathop{\rm \mathscr{L}}\nolimits_X(g_b)\right>_{L^2}\\
&+\left<-L_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b),h'\right>_{L^2}+\left<-L_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b),\mathop{\rm \mathscr{L}}\nolimits_X(g_b)\right>_{L^2}.
\end{split}
\end{equation}
On the one hand, an integration by parts shows that:
\begin{equation}\label{easy-IBP}
\left<L_{g_b}h',\mathop{\rm \mathscr{L}}\nolimits_X(g_b)\right>_{L^2}=\left<h',L_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b)\right>_{L^2}.
\end{equation}
Here we have used the fact that $2\tau+1>n-1$ to handle the boundary term in the integration by parts.
On the other hand, by using the flow $(\phi^X_t)_{t\in \mathbb{R}}$ generated by the vector field $X$, one observes that the one-parameter family $((\phi_t^X)^*g_b)_{t\in \mathbb{R}}$ is a curve of Ricci-flat metrics. In particular, by differentiating the Ricci-flat equation at $t=0$ with the help of Lemma \ref{lem-lin-equ-Ric-first-var},
\begin{equation}\label{identif-im-lie-der}
L_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b)=\mathop{\rm \mathscr{L}}\nolimits_{B^X}(g_b),\quad B^X:=B_{g_b}(\mathop{\rm \mathscr{L}}\nolimits_X(g_b)).
\end{equation}
Plugging (\ref{identif-im-lie-der}) in (\ref{easy-IBP}) leads to:
\begin{equation}
\begin{split}
\left<L_{g_b}h',\mathop{\rm \mathscr{L}}\nolimits_X(g_b)\right>_{L^2}=\,&\left<h',\mathop{\rm \mathscr{L}}\nolimits_{B^X}(g_b)\right>_{L^2}\\
=\,&-2\left<\mathop{\rm div}\nolimits_{g_b}h',B^X\right>_{L^2}=0,\label{delicate-orth-decom}
\end{split}
\end{equation}
since by definition, $h'$ is divergence-free. Here again, the integration by parts is justified by the fact that $B^X=O(\rho_{g_b}^{-\tau-1})$.
Going back to (\ref{easy-quad-form}), the vanishing (\ref{delicate-orth-decom}) shows that it is sufficient to prove that
\begin{equation}
\left<-L_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b),\mathop{\rm \mathscr{L}}\nolimits_X(g_b)\right>_{L^2}\geq 0,\label{last-cond-fulfill}
\end{equation}
since Condition (\ref{first-def-sta}) is assumed to hold. Because of (\ref{identif-im-lie-der}), it is equivalent to check the following:
\begin{equation}
\begin{split}\label{IBP-Bianchi-lemma-loc-stab}
-\left<\mathop{\rm \mathscr{L}}\nolimits_X(g_b),\mathop{\rm \mathscr{L}}\nolimits_{B^X}(g_b)\right>_{L^2}=\,&2\left<\mathop{\rm div}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b),B^X\right>_{L^2}\\
=\,&2\|B^X\|_{L^2}^2+\left<\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b),B^X\right>_{L^2}\\
=\,&2\|B^X\|_{L^2}^2-\left<\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b),\mathop{\rm div}\nolimits_{g_b}B^X\right>_{L^2}\\
=\,&2\|B^X\|_{L^2}^2+\frac{1}{2}\left<\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b),-\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_{X}(g_b)\right>_{L^2}\\
=\,&2\|B^X\|_{L^2}^2+\frac{1}{2}\|\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_{X}(g_b)\|_{L^2}^2\geq 0.
\end{split}
\end{equation}
Here, we have integrated by parts in the first, third and last lines. The second line uses the definition of $B^X$ given in (\ref{identif-im-lie-der}) only. Taking into account the Ricci-flatness of $g_b$, the penultimate line is obtained by considering the trace of (\ref{identif-im-lie-der}) with respect to the metric $g_b$. This concludes the proof of (\ref{last-cond-fulfill}).
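For the reader's convenience, the identity used in the penultimate line follows from taking the trace of (\ref{identif-im-lie-der}) with respect to $g_b$: since $g_b$ is Ricci-flat, $\mathop{\rm tr}\nolimits_{g_b}\left(L_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b)\right)=\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b)$, while $\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_{B^X}(g_b)=2\mathop{\rm div}\nolimits_{g_b}B^X$, whence
\begin{equation*}
\mathop{\rm div}\nolimits_{g_b}B^X=\frac{1}{2}\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}\mathop{\rm \mathscr{L}}\nolimits_X(g_b).
\end{equation*}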
\end{proof}
We end this section with the following strong positivity property of the Lichnerowicz operator associated to a stable ALE Ricci-flat metric $(N^n,g_b)$: this result, established in \cite[Theorem $3.9$]{Der-Kro}, has proved useful for the dynamical stability of integrable Ricci-flat ALE metrics. Its proof is essentially due to Devyver \cite{Dev-Gau-Est} in a more general setting.
\begin{theo}\label{theo-der-kro}
Let $(N^n,g_b)$ be a linearly stable ALE Ricci-flat metric. Then there exists a constant $\varepsilon(g_b)\in[0,1)$ such that
\begin{equation*}
(1-\varepsilon(g_b))\left<-\Delta_{g_b}h,h\right>_{g_b}\leq \left<-L_{g_b}h,h\right>_{g_b},
\end{equation*}
for all $h\in H^2_{\frac{n}{2}-1}$ which is $L^2(g_b)$-orthogonal to $\ker_{L^2(g_b)}L_{g_b}.$
\end{theo}
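\begin{rk}
Assuming the usual expression of the Lichnerowicz operator of a Ricci-flat metric, $L_{g_b}h=\Delta_{g_b}h+2\mathop{\rm Rm}\nolimits(g_b)\ast h$ (the schematic form also used in (\ref{sec-est-int-lin-sta})), Theorem \ref{theo-der-kro} amounts, after an integration by parts, to a control of the curvature term by the Dirichlet energy:
\begin{equation*}
2\left<\mathop{\rm Rm}\nolimits(g_b)\ast h,h\right>_{L^2}\leq \varepsilon(g_b)\|\nabla^{g_b}h\|_{L^2}^2,
\end{equation*}
for all $h\in H^2_{\frac{n}{2}-1}$ which are $L^2(g_b)$-orthogonal to $\ker_{L^2(g_b)}L_{g_b}$.
\end{rk}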
\section{Energy estimates on the potential function}\label{sec-ene-est-pot-fct}
If $(N^n,g_b)$ is an ALE Ricci-flat metric, we establish energy estimates on the gradient and the (weighted) laplacian of the potential function $f_g$ associated to a metric $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ in terms of the norm $\|g-g_b\|_{H^2_{\frac{n}{2}-1}}$. These estimates will be crucially used in the proof of Proposition \ref{prop-energy-est}.
\begin{prop}\label{prop-ene-pot-fct}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n-2}{2},n-2\right)$ and $\alpha\in (0,1)$. Then there exists a neighborhood $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ of $g_b$ such that the following energy estimates hold true for the function $v_g:=e^{-f_g}-1$, defined through the potential function $f_g$ associated to a metric $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, and for its first variation:
\begin{equation}
\|\nabla^gv_g\|_{L^2}\leq C(n,g_b,\varepsilon)\|\nabla^{g_b}(g-g_b)\|_{L^2}\label{grad-est-int-ene},
\end{equation}
and, if $g_1$ and $g_2$ are metrics in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$,
\begin{equation}
\left\|\nabla^{g_t}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|_{L^2}\leq C(n,g_b,\varepsilon)\|\nabla^{g_t}h\|_{L^2},\label{est-grad-first-der-pot-fct-ene}
\end{equation}
where $g_t:=g_1+(t-1)h:=g_1+(t-1)(g_2-g_1)$ for $t\in[1,2]$.
Moreover,
\begin{equation}\label{first-var-pot-fct-ell-equ-gal-case-prop}
\left\|\Delta_{g_t,f_{g_t}}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|_{L^2_{\frac{n}{2}+1}}\leq C(n,g_b,\varepsilon)\left(\|\nabla^{g_t}h\|_{L^2}+\|\nabla^{g_t,2}h\|_{L^2_{\frac{n}{2}+1}}\right),\quad t\in[1,2].
\end{equation}
\end{prop}
\begin{proof}
Let us remark first that if $v_g:=w_g-1=e^{-f_g}-1$ then:
\begin{equation*}
\begin{split}
\Delta_g(w_g-1)^2&=\Delta_gv_g^2=2|\nabla^gv_g|^2+2\Delta_gv_g\cdot v_g\\
&=2|\nabla^gv_g|^2+\frac{1}{2}\mathop{\rm R}\nolimits_gw_g\cdot v_g\\
&=2|\nabla^gv_g|^2+\frac{1}{2}\mathop{\rm R}\nolimits_gv_g^2+\frac{1}{2}\mathop{\rm R}\nolimits_gv_g.
\end{split}
\end{equation*}
Integrating the previous identity by parts gives:
\begin{eqnarray}
2\|\nabla^gv_g\|^2_{L^2}&\leq&\int_N|\mathop{\rm R}\nolimits_g|v_g^2\,d\mu_g-\frac{1}{2}\int_N\mathop{\rm R}\nolimits_gv_g\,d\mu_g.\label{first-int-est-nabla-v}
\end{eqnarray}
Notice that the last term on the right-hand side is kept as it is for the following reason: if $n=4$, $v_g$, or equivalently $f_g$, does not lie in $L^2$, so one needs to proceed in a subtler way than simply applying Young's inequality. By tracing Lemma \ref{Ric-lin-lemma-app} together with [(\ref{lem-lin-equ-scal-first-var}), Lemma \ref{lem-lin-equ-Ric-first-var}], recall that, pointwise:
\begin{equation*}
\left|-\mathop{\rm R}\nolimits_g+\mathop{\rm div}\nolimits_{g_b}(\mathop{\rm div}\nolimits_{g_b}h)-\Delta_{g_b}\mathop{\rm tr}\nolimits_{g_b}h\right|\leq C(n,g_b,\varepsilon)\left(|\nabla^{g_b}h|_{g_b}^2+|h|_{g_b}|\nabla^{g_b,2}h|_{g_b}\right).
\end{equation*}
In particular, by integrating by parts, if $\gamma>0$,
\begin{equation}
\begin{split}
\left|\int_N\mathop{\rm R}\nolimits_gv_g\,d\mu_g\right|&\leq C\left(\int_N|\nabla^{g_b}h|_{g_b}|\nabla^{g_b}v_g|_{g_b}\,d\mu_g\right)+C\int_N\left(|\nabla^{g_b}h|^2_{g_b}+|h|_{g_b}|\nabla^{g_b,2}h|_{g_b}\right)|v_g|d\mu_g\\
&\leq\gamma\|\nabla^{g_b}v_g\|_{L^2}^2+C\left(\gamma^{-1}\|\nabla^{g_b}h\|_{L^2}^2+\|\rho_{g_b}^{-1}h\|_{L^2}\|\rho_{g_b}\cdot v_g\cdot |\nabla^{g_b,2}h|\|_{L^2}\right)\\
&\leq \gamma\|\nabla^{g_b}v_g\|_{L^2}^2+C\left(\gamma^{-1}\|\nabla^{g_b}h\|_{L^2}^2+\|\rho_{g_b}^{-1}h\|_{L^2}\|\rho_{g_b}^{-1}\cdot v_g\|_{L^2}\|\rho_{g_b}^2\nabla^{g_b,2}h\|_{C^0}\right)\\
&\leq \gamma\left(\|\rho_{g_b}^{-1}\cdot v_g\|_{L^2}^2+\|\nabla^{g_b}v_g\|_{L^2}^2\right)+C\gamma^{-1}\left(\|\nabla^{g_b}h\|_{L^2}^2+\|\rho_{g_b}^{-1}h\|_{L^2}^2\right),
\label{sec-int-est-nabla-v}
\end{split}
\end{equation}
where $C=C(n,g_b,\varepsilon)$ is a positive constant that may vary from line to line, and where we used the inequality $2\|\rho_{g_b}^{-1}h\|_{L^2}\|\rho_{g_b}^{-1}\cdot v_g\|_{L^2}\leq \gamma\|\rho_{g_b}^{-1}\cdot v_g\|_{L^2}^2+\gamma^{-1}\|\rho_{g_b}^{-1}h\|_{L^2}^2$. Here we have also used the Cauchy-Schwarz inequality, together with the fact that $\nabla^{g_b,2}h$ decays at least quadratically since $\|h\|_{C^2_0}$ is finite. Summing up, (\ref{first-int-est-nabla-v}) and (\ref{sec-int-est-nabla-v}) together with Hardy's inequality give, for any $\gamma>0$:
\begin{equation*}
\begin{split}
\|\nabla^gv_g\|^2_{L^2}&\leq\int_N|\mathop{\rm R}\nolimits_g|v_g^2\,d\mu_g+C\gamma\|\nabla^gv_g\|_{L^2}^2+C\gamma^{-1}\|\nabla^{g_b}h\|_{L^2}^2.
\end{split}
\end{equation*}
By choosing $\gamma$ small enough together with the fact that $\sup_{N}\rho^2_{g_b}|\mathop{\rm R}\nolimits_g|$ can be made arbitrarily small by shrinking $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ if necessary, Hardy's inequality yields:
\begin{equation}
\begin{split}\label{est-grad-pot-fct-ene}
\|\nabla^gv_g\|^2_{L^2}&\leq\frac{1}{2}\|\nabla^gv_g\|^2_{L^2}+C(n,g_b,\varepsilon,\gamma)\|\nabla^{g_b}h\|_{L^2}^2,
\end{split}
\end{equation}
i.e. one gets (\ref{grad-est-int-ene}) as expected.
\\
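Let us also make the dimensional obstruction mentioned above explicit, under the sole assumption that $v_g$ is exactly of order $\rho_{g_b}^{-\tau}$ at infinity, with $\tau\in\left(\frac{n-2}{2},n-2\right)$: when $n=4$, one has $\tau<2$, so that
\begin{equation*}
\int_Nv_g^2\,d\mu_g\approx\int_1^{+\infty}r^{3-2\tau}\,dr=+\infty,
\end{equation*}
which is why the term $\int_N\mathop{\rm R}\nolimits_gv_g\,d\mu_g$ in (\ref{first-int-est-nabla-v}) had to be handled by integration by parts instead of being absorbed by Young's inequality.\\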
Now, let us turn to the proof of (\ref{est-grad-first-der-pot-fct-ene}).\\
We invoke the variation of equation (\ref{equ-criti-lambda-pot}) satisfied by the potential function $f_g$ established in [\eqref{var-vol-var-ell-eqn-for}, Proposition \ref{var-vol-var-ell-eqn-prop}].
Consequently, $\delta_{g_t}f(h)$ satisfies for $t\in[1,2]$:
\begin{equation}
\begin{split}\label{first-var-pot-fct-ell-equ-gal-case}
\Delta_{f_{g_t}}\left(\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}-\delta_{g_t}f(h)\right)&=\frac{1}{2}\left(\mathop{\rm div}\nolimits_{f_{g_t}}(\mathop{\rm div}\nolimits_{f_{g_t}}h)-\langle h,\mathop{\rm Ric}\nolimits_{f_{g_t}}(g_t)\rangle_{g_t}\right).
\end{split}
\end{equation}
Now, thanks to Proposition \ref{prop-pot-fct} and the fact that $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$,
\begin{equation}
\rho_{g_b}|\nabla^{g_t}f_{g_t}|_{g_t}+\rho^2_{g_b}\left(|\mathop{\rm Ric}\nolimits(g_t)|_{g_t}+|\nabla^{g_t,2}f_{g_t}|_{g_t}\right)\leq C(n,g_b,\varepsilon).\label{est-pot-fct-ene-est}
\end{equation}
Multiplying the elliptic equation (\ref{first-var-pot-fct-ell-equ-gal-case}) by $\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}$ and integrating by parts gives:
\begin{equation}
\begin{split}\label{grad-first-var-pot-fct-ene-est}
\left\|\nabla^{g_t}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|^2_{L^2(e^{-f_{g_t}}d\mu_{g_t})}&=-\frac{1}{2}\int_N\left\langle\nabla^{g_t}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right),\mathop{\rm div}\nolimits_{f_{g_t}} h\right\rangle_{g_t}\,e^{-f_{g_t}}d\mu_{g_t}\\
&\quad-\frac{1}{2}\int_N\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\left\langle h,\mathop{\rm Ric}\nolimits_{f_{g_t}}(g_t)\right\rangle_{g_t}\,e^{-f_{g_t}}d\mu_{g_t}\\
&=:I_1+I_2.
\end{split}
\end{equation}
The first integral $I_1$ on the right-hand side of the previous computation can be handled as follows:
\begin{equation}
\begin{split}\label{I_1-est}
|I_1|&\leq \frac{1}{4}\left\|\nabla^{g_t}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|^2_{L^2(e^{-f_{g_t}}d\mu_{g_t})}+C\|\mathop{\rm div}\nolimits_{f_{g_t}}h\|^2_{L^2(d\mu_{g_t})}\\
&\leq \frac{1}{4}\left\|\nabla^{g_t}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|^2_{L^2(e^{-f_{g_t}}d\mu_{g_t})}\\
&\quad+C(n,g_b,\varepsilon)\left(\|\nabla^{g_t}h\|^2_{L^2(d\mu_{g_t})}+\|\rho_{g_b}^{-1}|h|_{g_b}\|_{L^2(d\mu_{g_t})}^2\right)\\
&\leq \frac{1}{4}\left\|\nabla^{g_t}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|^2_{L^2(e^{-f_{g_t}}d\mu_{g_t})}+C(n,g_b,\varepsilon)\|\nabla^{g_t}h\|^2_{L^2(d\mu_{g_t})},
\end{split}
\end{equation}
where we have used Young's inequality in the first line. The second inequality follows from (\ref{est-pot-fct-ene-est}) and the third inequality uses Hardy's inequality from Theorem \ref{thm-min-har-inequ}, together with the fact that $g_t-g_b$ (and therefore $f_{g_t}$, by Proposition \ref{prop-pot-fct}) is arbitrarily close to $0$ in the $C^{2,\alpha}_{\tau}$-topology.
The integral $I_2$ can be estimated from above in a similar way for any $t\in[1,2]$ and $\gamma\in(0,1)$:
\begin{equation}
\begin{split}\label{I_2-est}
|I_2|&\leq C(n,g_b,\varepsilon)\int_N\rho_{g_b}^{-2}\left|\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right||h|_{g_t}\,d\mu_{g_t}\\
&\leq \gamma \int_N\rho_{g_b}^{-2}\left|\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right|^2\,d\mu_{g_t}+C(\gamma,n,g_b,\varepsilon)\int_N\rho_{g_b}^{-2}|h|^2_{g_t}\,d\mu_{g_t}\\
&\leq \frac{1}{4}\left\|\nabla^{g_t}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|^2_{L^2(e^{-f_{g_t}}d\mu_{g_t})}+C(\gamma,n,g_b,\varepsilon)\|\nabla^{g_t}h\|^2_{L^2(d\mu_{g_t})},
\end{split}
\end{equation}
if $\gamma=\gamma(n,g_b,\varepsilon)$ is chosen sufficiently small; here, we have used Hardy's inequality in the last line.
Putting (\ref{grad-first-var-pot-fct-ene-est}), (\ref{I_1-est}) and (\ref{I_2-est}) together leads to the expected estimate (\ref{est-grad-first-der-pot-fct-ene}).
By considering the $L^2_{\frac{n}{2}+1}$ norm of (\ref{first-var-pot-fct-ell-equ-gal-case}), (\ref{first-var-pot-fct-ell-equ-gal-case-prop}) follows by using Hardy's inequality and (\ref{est-pot-fct-ene-est}).
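In more detail, and assuming the weighted norm takes the form $\|u\|_{L^2_{\frac{n}{2}+1}}=\|\rho_{g_b}u\|_{L^2}$ (the computation is identical, up to shifts of exponents, for any other normalization), (\ref{first-var-pot-fct-ell-equ-gal-case}), (\ref{est-pot-fct-ene-est}) and Hardy's inequality give:
\begin{equation*}
\begin{split}
\left\|\Delta_{g_t,f_{g_t}}\left(\delta_{g_t}f(h)-\frac{\mathop{\rm tr}\nolimits_{g_t}h}{2}\right)\right\|_{L^2_{\frac{n}{2}+1}}&\leq \frac{1}{2}\left\|\mathop{\rm div}\nolimits_{f_{g_t}}(\mathop{\rm div}\nolimits_{f_{g_t}}h)\right\|_{L^2_{\frac{n}{2}+1}}+\frac{1}{2}\left\|\langle h,\mathop{\rm Ric}\nolimits_{f_{g_t}}(g_t)\rangle_{g_t}\right\|_{L^2_{\frac{n}{2}+1}}\\
&\leq C\left(\|\nabla^{g_t,2}h\|_{L^2_{\frac{n}{2}+1}}+\|\nabla^{g_t}h\|_{L^2}+\|\rho_{g_b}^{-1}h\|_{L^2}\right)\\
&\leq C\left(\|\nabla^{g_t,2}h\|_{L^2_{\frac{n}{2}+1}}+\|\nabla^{g_t}h\|_{L^2}\right),
\end{split}
\end{equation*}
where the bound (\ref{est-pot-fct-ene-est}) controls all the lower-order terms produced by the weighted divergences, and where $C=C(n,g_b,\varepsilon)$.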
\end{proof}
We conclude this section by stating a quantitative version of Proposition \ref{prop-ene-pot-fct} whose proof is very similar and is therefore omitted:
\begin{prop}\label{prop-ene-pot-fct-bis}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n-2}{2},n-2\right)$ and $\alpha\in (0,1)$. Then there exists a neighborhood $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ of $g_b$ such that if $g_1$ and $g_2$ are metrics in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ and $h\in C^{2,\alpha}_{\tau}$,
\begin{equation}
\begin{split}
\left\|\nabla^{g_1}\left(\delta_{g_2}f(h)-\delta_{g_1}f(h)\right)\right\|_{L^2}&\leq C(n,g_b,\varepsilon)\|g_2-g_1\|_{C^{2,\alpha}_{\tau}}\|h\|_{H^2_{\frac{n}{2}-1}},\\
\left\|\Delta_{g_1,f_{g_1}}\left(\delta_{g_2}f(h)-\delta_{g_1}f(h)\right)\right\|_{L^2_{\frac{n}{2}+1}}&\leq C(n,g_b,\varepsilon)\|g_2-g_1\|_{C^{2,\alpha}_{\tau}}\|h\|_{H^2_{\frac{n}{2}-1}}.
\end{split}
\end{equation}
\end{prop}
\section{Fredholm properties of the Lichnerowicz operator in weighted spaces}\label{fred-sec-prop}
Another motivation for our weighted H\"older spaces is that they ensure that the Lichnerowicz operator, which controls the second variations of $\lambda_{\operatorname{ALE}}$, has adequate Fredholm properties. Indeed, the Lichnerowicz operator $L_{g_b}:H^{2}(S^2T^*N)\rightarrow L^2(S^2T^*N)$ is symmetric and bounded, but it is \emph{not} Fredholm and does not enjoy satisfactory analytical properties. These properties are restored once suitable weighted spaces are considered.
\begin{prop}[Fredholm properties of the Lichnerowicz operator] \label{prop-lic-fred}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric, asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma\neq \{\textup{Id}\}$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. If $\beta\in(0,n-2)\cup(n-2,n)$, then $$L_{g_b}:C^{2,\alpha}_{\beta}(S^2T^*N)\rightarrow C^{0,\alpha}_{\beta+2}(S^2T^*N)$$
is Fredholm for every $\alpha\in(0,1)$, and
$$L_{g_b}:H^2_{\beta}(S^2T^*N)\rightarrow L^2_{\beta+2}(S^2T^*N)$$
is Fredholm.
Moreover, if a $2$-tensor $h\in C^{2,\alpha}_{\beta}(S^2T^*N)$ is in the kernel of $L_{g_b}$, then $h\in C^{\infty}_{n}(S^2T^*N)$ and is divergence-free.
In dimension $n\geq 3$, the operator $L_{g_b}:H^2_{\frac{n}{2}-1}(S^2T^*N)\rightarrow L^2_{\frac{n}{2}+1}(S^2T^*N)$ is Fredholm and both its kernel and $L^2$-cokernel equal $\ker_{L^2}L_{g_b}$. As a consequence, there exists $C>0$ such that for any $h\perp_{L^2} \ker_{L^2}L_{g_b}$, we have the following control:
\begin{equation}
\|\nabla^{g_b, 2}h\|_{L^2_{\frac{n}{2} +1}}\leq\|h\|_{H^2_{\frac{n}{2}-1}} \leq C \|L_{g_b}h\|_{L^2_{\frac{n}{2}+1}}.\label{controle hessienne lic n}
\end{equation}
\end{prop}
\begin{rk}
The statement holds for $\mathbb{R}^n$ if we additionally assume $\beta<n-1$.
\end{rk}
\begin{proof}
The fact that the two operators are Fredholm is a consequence of the theory of elliptic operators between weighted H\"older (respectively Sobolev) spaces. Indeed, the elements of the kernel of the operator $-\nabla^*\nabla$ on a flat nontrivial quotient of $\mathbb{R}^n$ are sums of homogeneous $2$-tensors of order $k$ or $-n+2-k$ for $k\in \mathbb{N}\backslash\{1\}$; the value $1$ is excluded because there are no nonvanishing linear functions on $\mathbb{R}^n$ invariant under the group action induced by $\Gamma$.\\
Let us now consider a $2$-tensor $h\in C^{2,\alpha}_{\beta}(S^2T^*N)$ satisfying $L_{g_b}h = 0$. Since $\mathop{\rm div}\nolimits_{g_b}L_{g_b}=\frac{1}{2}(\nabla^{g_b})^{\ast}\nabla^{g_b}\mathop{\rm div}\nolimits_{g_b}$, the maximum principle yields $\mathop{\rm div}\nolimits_{g_b}h=0$. At infinity, we have $h = H^{n-2} + O(\rho_{g_b}^{-n+2-\epsilon})$ for a harmonic homogeneous $2$-tensor $H^{n-2}\sim \rho_{g_b}^{-n+2}$. Such a divergence-free $2$-tensor $H^{n-2}$ must vanish by \cite[Lemma 4.1]{ozu2}. We therefore have $h = O(\rho_{g_b}^{-n+2-\epsilon})$, and since the next decay rate in the kernel of $-\nabla^*\nabla$ is $\rho_{g_b}^{-n}$, we conclude that $h\in C^{\infty}_{n}(S^2T^*N)$.
\\
For the weighted Sobolev spaces, the kernel of the operator $$L_{g_b}:H^2_{\frac{n}{2}-1}(S^2T^*N)\rightarrow L^2_{\frac{n}{2}+1}(S^2T^*N)$$ reduces to the $L^2$-kernel of $L_{g_b}$, because there is no exceptional value between $\frac{n}{2}-1$ and $n-2$. Its cokernel is the kernel of $L_{g_b}$ on $L^2_{\frac{n}{2}-1}(S^2T^*N)\approx \left(L^2_{\frac{n}{2}+1}(S^2T^*N)\right)^*$ (see Note \ref{note L2 cokernel} below), which is also equal to $\ker_{L^2}L_{g_b}$. In particular, this operator has index $0$. By the bounded inverse theorem, there then exists $C>0$ depending on $g_b$ such that for any $h\perp\ker_{L^2}L_{g_b}$, we have
$$\|\nabla^{g_b,2}h\|_{L^2_{\frac{n}{2}+1}}\leq \|h\|_{H^2_{\frac{n}{2}-1}}\leq C \|L_{g_b} h \|_{L^2_{\frac{n}{2}+1}}.$$
Note that the $L^2$-product is well-defined between elements of $L^2_{\frac{n}{2}-1}(S^2T^*N)$ and elements of $\ker_{L^2}L_{g_b} \subset C^0_n(S^2T^*N)\subset L^2_{\frac{n}{2}+1}(S^2T^*N)$.
\end{proof}
\begin{note}\label{note L2 cokernel}
For any $s\in \mathbb{R}$, the dual of $ L^2_{\frac{n}{2}+s} $ classically identifies with $ L^2_{\frac{n}{2}-s} $: by definition, this is the set of $2$-tensors whose $L^2$-product with any $2$-tensor in $ L^2_{\frac{n}{2}+s} $ is well-defined. We therefore define the $L^2$-cokernel of a symmetric operator $H^2_{\frac{n}{2}+s-2}\to L^2_{\frac{n}{2}+s}$ as its kernel on the dual of its target space, $L^2_{\frac{n}{2}-s}$. This is the $L^2$-orthogonal of its image.
We keep this definition on subsets of $ L^2_{\frac{n}{2}+s}$. For instance, consider a smooth elliptic operator $L$, asymptotic to the Euclidean Laplacian at infinity, between $ C^{2,\alpha}_{\frac{n}{2}+s-2} $ and $ C^{0,\alpha}_{\frac{n}{2}+s} \subset L^2_{\frac{n}{2}+s-\epsilon}$ for all $\epsilon>0$, and assume moreover that ${\frac{n}{2}+s}$ is not a critical exponent of the Laplacian. Then the $L^2$-cokernel of $L$ is the kernel of $L$ on $C^{0,\alpha}_{\frac{n}{2}-s}$. Indeed, by the above discussion, we first identify it with the kernel on $L^2_{\frac{n}{2}-s+\epsilon}$, which by elliptic regularity is the kernel on $H^k_{\frac{n}{2}-s+\epsilon}$ for all $k$. For $k$ large enough, this embeds in $C^{0,\alpha}_{\frac{n}{2}-s+\epsilon}$, and finally, by choosing $\epsilon$ small enough so that there is no critical exponent of the Laplacian in $[\frac{n}{2}-s,\frac{n}{2}-s+\epsilon]$, this is also the kernel of $L$ on $C^{0,\alpha}_{\frac{n}{2}-s}$.
\end{note}
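For the reader's convenience, let us record the elementary computation behind the duality identification of Note \ref{note L2 cokernel}. Assuming the weighted norm takes the form $\|u\|_{L^2_{\delta}}=\|\rho_{g_b}^{\delta-\frac{n}{2}}u\|_{L^2}$ (the exponents shift accordingly for any other normalization), the Cauchy-Schwarz inequality gives, for $h\in L^2_{\frac{n}{2}+s}$ and $k\in L^2_{\frac{n}{2}-s}$,
\begin{equation*}
\left|\int_N\left<h,k\right>_{g_b}\,d\mu_{g_b}\right|=\left|\int_N\left<\rho_{g_b}^{s}h,\rho_{g_b}^{-s}k\right>_{g_b}\,d\mu_{g_b}\right|\leq \|h\|_{L^2_{\frac{n}{2}+s}}\|k\|_{L^2_{\frac{n}{2}-s}},
\end{equation*}
so that every $k\in L^2_{\frac{n}{2}-s}$ defines a bounded linear form on $L^2_{\frac{n}{2}+s}$; conversely, every such form arises in this way, by the Riesz representation theorem applied after multiplication by the weight.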
\section{Properties of $\lambda_{\operatorname{ALE}}$ in the integrable case}\label{loj-sim-sec-int-case}
Now that we have proved that the functional is well-defined and well-behaved in the relevant function spaces, we first investigate the stable integrable case (which covers all known examples), where a \L{}ojasiewicz inequality can be proved ``by hand''.
\subsection{Integrability of Ricci-flat ALE metrics}\label{sec-int-ric-fla}
We say that a Ricci-flat ALE metric $(N^n,g_b)$ is integrable if the moduli space of Ricci-flat ALE metrics on $N$ with the same cone at infinity is a smooth manifold around $g_b$.
\begin{defn}[Integrable Ricci-flat ALE metric]\label{definition integrable}
A Ricci-flat ALE metric $(N^n,g_b)$ is \emph{integrable} if for all $v\in\ker_{L^2}L_{g_b}$ small enough, there exists a (unique) Ricci-flat ALE metric $\Bar{g}_v$ satisfying $\Bar{g}_v-(g_b+v)\perp \ker_{L^2}L_{g_b}$, and such that $\mathop{\rm div}\nolimits_{g_b}\Bar{g}_v=0$ and $\|\Bar{g}_v-g_b\|_{C^{2,\alpha}_{n}}\leq 2 \|v\|_{C^{2,\alpha}_{n}}$.
\end{defn}
We will need the following description of the neighborhood of an \emph{integrable} Ricci-flat ALE metric to restrict ourselves to deformations which are transverse to the Ricci-flat deformations.
\begin{prop}\label{gauge fixing ALE integrable}
Let $n\geqslant 4$ and $(N^n,g_b)$ be an integrable ALE Ricci-flat metric, asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in(1,n)$ and $\alpha\in(0,1)$. Then there exist $C>0$ and $\varepsilon>0$ such that for any metric $g$ satisfying $ \|g-g_b\|_{C^{2,\alpha}_\tau(g_b)}\leq \varepsilon $, there exists a Ricci-flat ALE metric $ g'_b $ such that
\begin{itemize}
\item $\| g_b - g'_b \|_{C^{2,\alpha}_n(g_b)}\leq C \|g-g_b\|_{C^{2,\alpha}_\tau(g_b)}$,
\item $ g - g'_b \perp_{L^2(g'_b)}\ker_{L^2(g'_b)}L_{g'_b} $, and
\item $\mathop{\rm div}\nolimits_{g'_b}g = 0$.
\end{itemize}
\end{prop}
\begin{proof}
According to \cite[Corollary 5.16]{ozu2} and Definition \ref{definition integrable}, the integrability assumption can be rephrased in the following way. For any $v\in\ker_{L^2(g_b)}L_{g_b}$ small enough, there exists a unique Ricci-flat ALE metric $\bar{g}_v$ satisfying
\begin{enumerate}
\item $\mathop{\rm div}\nolimits_{g_b}\Bar{g}_v=0$,
\item $\Bar{g}_v-(g_b+v)\perp \ker_{L^2}L_{g_b}$,
\item $\mathop{\rm Ric}\nolimits(\bar{g}_v) = 0$.
\end{enumerate}
Moreover, since these metrics are obtained from the implicit function theorem (Lemma \ref{th fcts implicites}) applied to $(v,g)\mapsto \mathop{\rm Ric}\nolimits(g) + \frac{1}{2}\mathcal{L}_{\mathop{\rm div}\nolimits_{g_b}\Bar{g}_v}g$, seen as an operator from $(\ker_{L^2}L_{g_b})\times C^{2,\alpha}_\tau$ to $C^{0,\alpha}_{\tau+2}$ for $\tau\in (\frac{n-2}{2},n-2)$, they vary analytically in $v$. Similarly, the elements of $\ker_{L^2(\bar{g}_v)}L_{\bar{g}_v}$ vary analytically in $v$ as solutions $h\in L^2(g_b)$ of the parametrized equation $L_{\bar{g}_v} h = 0$.
According to Proposition \ref{prop-gauge-div-free}, which is proved by the implicit function theorem, for any $\alpha\in(0,1)$ there exists $\varepsilon>0$ such that for any metric $g\in B_{C^{2,\alpha}_\tau}(g_b,\varepsilon)$ and any $v\in \ker_{L^2}L_{g_b}$, there exists a unique vector field $X(g,v)\in C^{3,\alpha}_{\tau-1}(TN)$, depending analytically on $g$ and $v$, for which
$$\mathop{\rm div}\nolimits_{(\exp_{X(g,v)})_*\bar{g}_v}g =0,$$
where $\exp_{X}g:x \mapsto \exp_x^{g}(X(x))$. Define $\phi_{g,v} := (\exp_{X(g,v)})^{-1}$. We will naturally look for a $2$-tensor $v$ such that $g - \phi_{g,v}^*\bar{g}_v\perp \ker_{L^2(\phi_{g,v}^*\bar{g}_v)}L_{\phi_{g,v}^*\bar{g}_v}$, where the $L^2$-scalar product is defined with respect to $\phi_{g,v}^*\bar{g}_v$ and such that the elements of $\ker_{L^2(\phi_{g,v}^*\bar{g}_v)}L_{\phi_{g,v}^*\bar{g}_v}$ vary analytically in $v$.
Let us therefore consider the analytic map
$$F:(g,v)\mapsto \pi^{\phi_{g,v}^*\bar{g}_v}\big(\phi_{g,v}^*\bar{g}_v-g\big),$$
where $\pi^{\phi_{g,v}^*\bar{g}_v}$ is the $L^2(\phi_{g,v}^*\bar{g}_v)$-orthogonal projection on $\ker_{L^2(\phi_{g,v}^*\bar{g}_v)}L_{\phi_{g,v}^*\bar{g}_v}$. Note that this projection is smooth on $C^{2,\alpha}_\tau$ for $\tau>0$ since the elements of $\ker_{L^2(\phi_{g,v}^*\bar{g}_v)}L_{\phi^*_{g,v}\bar{g}_v}$ decay like $\rho_{g_b}^{-n}$ at infinity.
We consider $F$ on a neighborhood of $(g_b,0)$ in $C^{2,\alpha}_\tau\times \ker_{L^2(g_b)}L_{g_b}$, with values in $\ker_{L^2(\phi_{g,v}^*\bar{g}_v)}L_{\phi_{g,v}^*\bar{g}_v}$. We can therefore apply the implicit function theorem, as stated in Lemma \ref{th fcts implicites}, to the analytic map $F$: indeed, $F(g_b,0)=0$, the differential $d_{(g_b,0)}F(0,v) = v$ is an isomorphism, and the spaces $C^{2,\alpha}_\tau\times \ker_{L^2(g_b)}L_{g_b}$ and $\ker_{L^2(\phi_{g,v}^*\bar{g}_v)}L_{\phi_{g,v}^*\bar{g}_v}$ are Banach spaces. We then conclude that there exists an analytic map (unique as a continuous map) $V$ such that for all metrics $g$ in a $C^{2,\alpha}_\tau$-neighborhood of $g_b$, we have
$$ F(g,V(g))=0. $$
Now, for any $g$ satisfying $ \|g-g_b\|_{C^{2,\alpha}_\tau(g_b)}\leq \varepsilon $ for $\varepsilon>0$ small enough, we consider $g'_b = \phi_{g,V(g)}^*\bar{g}_{V(g)}$, which satisfies the desired properties.
\end{proof}
\begin{rk}
The integrability of the Ricci-flat ALE metric $(N^n,g_b)$ is crucial to obtain the above statement.
\end{rk}
Next, we prove that if a Ricci-flat ALE metric is stable and integrable, then it is a local maximum of $\lambda_{\operatorname{ALE}}$: this result echoes \cite[Theorem A]{Has-Sta}. Before stating and proving it, we pause to discuss the relevant notion of stability needed here:
\begin{defn}[Locally stable integrable Ricci-flat ALE metrics]\label{definition integrable-loc-stable}
An integrable Ricci-flat ALE metric $(N^n,g_b)$ is \emph{locally stable} if for any metric $g$ satisfying $ \|g-g_b\|_{C^{2,\alpha}_\tau(g_b)}\leq \varepsilon $ for $\varepsilon>0$ small enough, there exists a linearly stable Ricci-flat ALE metric $ g'_b $ satisfying the conclusions of Proposition \ref{gauge fixing ALE integrable}.
\end{defn}
Let us show that being linearly stable and integrable implies being locally stable.
\begin{prop}\label{prop-int-lin-sta-loc-sta}
Let $(N^n,g_b)$ be an integrable and linearly stable ALE Ricci-flat metric. Then it is locally stable in the sense of Definition \ref{definition integrable-loc-stable}.
\end{prop}
\begin{proof}
Denote $\Bar{g}_0:=g_b$ and, for $v\in \ker_{L^2(\Bar{g}_0)} L_{\Bar{g}_0}$ small enough, let $\Bar{g}_v$ be the Ricci-flat ALE metric satisfying $\Bar{g}_v-(\Bar{g}_0+v)\perp \ker_{L^2}L_{\Bar{g}_0}$ and $\textup{div}_{\bar{g}_0}\bar{g}_v=0$, which exists since $\Bar{g}_0$ is assumed to be integrable. The map $v\in C^{2,\alpha}_\tau\rightarrow \Bar{g}_v\in C^{2,\alpha}_\tau$ is analytic: as already seen in the proof of Proposition \ref{gauge fixing ALE integrable}, this implies that there exists a basis of each $\ker_{L^2(\bar{g}_v)} L_{\Bar{g}_v}$ depending analytically on $v$. Therefore, the $L^2(\bar{g}_v)$-projection on $\ker_{L^2(\bar{g}_v)}L_{\bar{g}_v}$, denoted by $\pi_v: H_{\frac{n}{2}-1}^1(\bar{g}_0)\to H^1_{\frac{n}{2}-1}(\bar{g}_0)$, depends analytically on $v$ since $\ker_{L^2(\bar{g}_v)}L_{\bar{g}_v}\subset C^\infty_n$ by Proposition \ref{prop-lic-fred}.
As $\Bar{g}_0$ is assumed to be linearly stable, thanks to Lemma \ref{lemma-equiv-def-stable}, there exists $c>0$ such that if $h_0\perp \ker_{L^2(\bar{g}_0)} L_{\Bar{g}_0}$ and $h_0\in H^1_{\frac{n}{2}-1}$, then one has
\begin{equation}
\left<-L_{\Bar{g}_0} h_0,h_0\right>_{L^2}\geq c\|\nabla^{\Bar{g}_0}h_0\|_{L^2}^2,\label{stability at bar g0}
\end{equation}
by Theorem \ref{theo-der-kro}.
Now, if $h\perp \ker_{L^2(\bar{g}_v)} L_{\Bar{g}_v}$ and $h\in C_c^\infty(S^2T^*N)$, we decompose it as $h = h_0+h'$ where
$h_0\perp \ker_{L^2(\bar{g}_0)} L_{\Bar{g}_0}$ and $h'\in \ker_{L^2(\bar{g}_0)} L_{\Bar{g}_0}$.
Let us first show that $h'$ is small when $v$ is. We have $0 = \pi_vh$ and $h' = \pi_0 h$, and since $v\mapsto \pi_v$ is analytic, there exists $C>0$ such that for $v$ small enough, one has $\|h'\|_{H^1_{\frac{n}{2}-1}(\bar{g}_0)}\leq C\|v\|_{C^{2,\alpha}_{\tau}}\|h\|_{H^1_{\frac{n}{2}-1}(\bar{g}_0)}$.
Using the fact that $h'\in\ker_{L^2(\bar{g}_0)} L_{\bar{g}_0}$, one gets immediately that
\begin{equation*}
\begin{split}
\left<-L_{\Bar{g}_0} h,h\right>_{L^2(\bar{g}_0)}=\,&\left<-L_{\Bar{g}_0} h_0,h_0\right>_{L^2(\bar{g}_0)}\\
\geq\,& c\|\nabla^{\Bar{g}_0}h_0\|_{L^2(\bar{g}_0)}^2,
\end{split}
\end{equation*}
where we used the inequality \eqref{stability at bar g0}.
In particular, if $\|v\|_{C^{2,\alpha}_\tau}$ is chosen small enough, then we have $\|\nabla^{\Bar{g}_0}h_0\|_{L^2(\bar{g}_0)}\geq \|\nabla^{\Bar{g}_0}h\|_{L^2(\bar{g}_0)}-\|\nabla^{\Bar{g}_0}h'\|_{L^2(\bar{g}_0)}\geq \frac{1}{\sqrt{2}}\|\nabla^{\Bar{g}_0}h\|_{L^2(\bar{g}_0)}$ and $\|\nabla^{\Bar{g}_0}h\|^2_{L^2(\bar{g}_0)}\geqslant \frac{1}{2}\|\nabla^{\Bar{g}_v}h\|_{L^2(\bar{g}_v)}^2$ since $\|\Bar{g}_v-\Bar{g}_0\|_{C_{n}^{2,\alpha}}\leq 2\|v\|_{C_{n}^{2,\alpha}}$ by Definition \ref{definition integrable}. Therefore,
\begin{equation}
\left<-L_{\Bar{g}_0} h,h\right>_{L^2(\bar{g}_0)}\geq c\|\nabla^{\Bar{g}_0}h\|^2_{L^2(\bar{g}_0)}\geqslant \frac{c}{4}\|\nabla^{\Bar{g}_v}h\|_{L^2(\bar{g}_v)}^2,\label{fir-est-int-lin-sta}
\end{equation}
where $c$ is a positive constant independent of $v$ and $h$ that may vary from line to line. On the other hand, comparing $L_{\Bar{g}_0}$ with $L_{\Bar{g}_v}$, we have
\begin{equation}
\begin{split}\label{sec-est-int-lin-sta}
\left|\left<(L_{\Bar{g}_0}-L_{\Bar{g}_v}) h,h\right>_{L^2(\bar{g}_v)}\right|&\leq\left|\left<(\Delta_{\Bar{g}_0}-\Delta_{\Bar{g}_v}) h,h\right>_{L^2(\bar{g}_v)}\right|+2\left|\left<\left(\mathop{\rm Rm}\nolimits(\Bar{g}_0)-\mathop{\rm Rm}\nolimits(\Bar{g}_v)\right)\ast h,h\right>_{L^2(\bar{g}_v)}\right|\\
&\leq\sum_{i=0}^2\left\langle\nabla^{\bar{g}_v,i}(\bar{g}_0-\bar{g}_v)\ast\nabla^{\bar{g}_v,2-i}h, h \right\rangle_{L^2(\bar{g}_v)}\\
&\leq C\|\bar{g}_v-\bar{g}_0\|_{C^{2,\alpha}_{n}}\sum_{i=1}^2\left\|\rho_{\bar{g}_0}^{-i}\nabla^{\bar{g}_0,2-i}h\ast h \right\|_{L^1(\bar{g}_0)}\\
&\quad+\left\langle\nabla^{\bar{g}_v}(\bar{g}_0-\bar{g}_v)\ast\nabla^{\bar{g}_v}h, h\right\rangle_{L^2(\bar{g}_v)}+\left\langle(\bar{g}_0-\bar{g}_v)\ast\nabla^{\bar{g}_v}h\ast \nabla^{\bar{g}_v}h\right\rangle_{L^2(\bar{g}_v)}\\
&\leq C\|v\|_{C^{2,\alpha}_{n}}\|\nabla^{\Bar{g}_v}h\|^2_{L^2(\bar{g}_v)},
\end{split}
\end{equation}
for some positive constant $C$ independent of $v$ and $h$.
Here we have used the density of $C_c^\infty$ in $H^1_{\frac{n}{2}-1}$, integration by parts in the third inequality and Hardy's inequality (Theorem \ref{thm-min-har-inequ}) in the last line.
Combining (\ref{fir-est-int-lin-sta}) and (\ref{sec-est-int-lin-sta}), one gets, for $v$ small enough and for some constant $c>0$ independent of $v$ and $h$:
\begin{equation}
\left<-L_{\Bar{g}_v} h,h\right>_{L^2(\bar{g}_v)}\geq c\|\nabla^{\Bar{g}_v}h\|^2_{L^2(\bar{g}_v)},
\end{equation}
for any $h\perp\ker_{L^2(\Bar{g}_v)} L_{\Bar{g}_v}$. This shows that $\bar{g}_v$ is also linearly stable.
\end{proof}
Notice that all known examples of $4$-dimensional Ricci-flat ALE metrics are hyperk\"ahler, which implies that they are integrable. Furthermore, each infinitesimal deformation lying in the kernel of the corresponding Lichnerowicz operator is the first jet of a curve of hyperk\"ahler metrics. Since a hyperk\"ahler metric is linearly stable, an ALE hyperk\"ahler metric is integrable and locally stable in the sense of Definition \ref{definition integrable-loc-stable}: see \cite[Section $11$, Chapter $12$]{Besse} for more details.
\begin{prop}\label{local maximum stable integrable}
Let $(N^n,g_b)$, $n\geq 4$, be an ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n-2}{2},n-2\right)$ and let $\alpha\in(0,1)$. If $(N^n,g_b)$ is assumed to be integrable and linearly stable then it is a local maximum for the energy $\lambda_{\operatorname{ALE}}$ with respect to the topology defined by $C^{2,\alpha}_{\tau}(S^2T^*N)$.
\end{prop}
\begin{proof}
Consider the following Taylor expansion of the functional $\lambda_{\operatorname{ALE}}$ at $g_b$ of order $3$:
\begin{equation}
\lambda_{\operatorname{ALE}}(g_b+h)=\lambda_{\operatorname{ALE}}(g_b)+\delta_{g_b}\lambda_{\operatorname{ALE}}(h)+\delta^2_{g_b}\lambda_{\operatorname{ALE}}(h,h)+\int_0^1\frac{(1-t)^2}{2}\delta_{g_b+th}^3\lambda_{\operatorname{ALE}}(h,h,h)\,dt.\label{tay-exp-ord-3}
\end{equation}
Now, by definition of $\lambda_{\operatorname{ALE}}$, $\lambda_{\operatorname{ALE}}(g_b)=0$, and by (\ref{first-var-lambda}) together with Proposition \ref{lambdaALE analytic}, one has $\delta_{g_b}\lambda_{\operatorname{ALE}}(h)=0$ as well. Finally, by Proposition \ref{second-var-prop}, $\delta^2_{g_b}\lambda_{\operatorname{ALE}}(h,h)=\frac{1}{2}\langle L_{g_b}h,h\rangle_{L^2}$ if $\mathop{\rm div}\nolimits_{g_b}h=0$. Since $(N^n,g_b)$ is integrable and linearly stable, it is locally stable by Proposition \ref{prop-int-lin-sta-loc-sta}. Thanks to Theorem \ref{theo-der-kro} and Proposition \ref{gauge fixing ALE integrable}, it therefore suffices to estimate the integral on the right-hand side of (\ref{tay-exp-ord-3}) in such a way that it can be absorbed by $\|\nabla^{g_b} h\|^2_{L^2}$. More precisely, we claim the following:
\begin{claim}\label{third-der-est}
\begin{equation*}
|\delta_{g_b+th}^3\lambda_{\operatorname{ALE}}(h,h,h)|\leq C\|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_b} h\|_{L^2}^2,\quad \mathop{\rm div}\nolimits_{g_b}h=0,
\end{equation*}
for some positive constant $C=C(n,g_b,\varepsilon)$ uniform in $t\in[0,1]$.
\end{claim}
\begin{proof}[Proof of Claim \ref{third-der-est}]
Recall from (\ref{first-var-lambda}) that:
\begin{equation*}
\delta^3_{g_b+th} \lambda_{\operatorname{ALE}}(h,h,h)=-\frac{d^2}{dt^2}\int_N\langle \mathop{\rm Ric}\nolimits(g_b+th)+\nabla^{g_b+th,2}f_{g_b+th},h\rangle_{g_b+th} \,e^{-f_{g_b+th}}d\mu_{g_b+th}.
\end{equation*}
Denote the curve of metrics $(g_b+th)_{t\in[0,1]}$ by $(g_t)_{t\in[0,1]}$: the metrics $(g_t)_{t\in[0,1]}$ are uniformly equivalent to $g_b$ since each $g_t$ lies in an arbitrarily small neighborhood of $g_b$ in the $C^{2,\alpha}_{\tau}$-topology. Moreover, Proposition \ref{prop-pot-fct} ensures that $\|f_{g_t}\|_{C^{2,\alpha}_{\tau}}\leq \varepsilon\left(\|g_t-g_b\|_{C^{2,\alpha}_{\tau}}\right)$, where $\varepsilon(\cdot)$ is a positive function on $[0,+\infty)$ that tends to $0$ as its argument goes to $0$. Notice, by [(\ref{lin-Bian-app}), Lemma \ref{Ric-lin-lemma-app}] applied to $g_1:=g_b$ and $g_2:=g_b+h$, that the Bianchi gauge satisfies:
\begin{equation*}
\begin{split}
B&=\mathop{\rm div}\nolimits_{g_1}(g_t-g_1)-\frac{1}{2}\nabla^{g_1}\mathop{\rm tr}\nolimits_{g_1}(g_t-g_1)+g_t^{-1}\ast(g_t-g_1)\ast\nabla^{g_1}g_t\\
&=-\frac{t}{2}\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h+t^2g_t^{-1}\ast h\ast\nabla^{g_b}h,
\end{split}
\end{equation*}
since $\mathop{\rm div}\nolimits_{g_b}h=0$. Finally, since $\nabla^{g_t}T=\nabla^{g_b}T+g_t^{-1}\ast \nabla^{g_b}(g_t-g_b)\ast T$ for any tensor $T$, one gets
\begin{equation}
\begin{split}
\mathop{\rm \mathscr{L}}\nolimits_B(g_t)&=-t\nabla^{g_b,2}\mathop{\rm tr}\nolimits_{g_b}h+t^2\nabla^{g_b}(g_t^{-1}\ast h\ast\nabla^{g_b}h)\\
&+tg_t^{-1}\ast \nabla^{g_b}h\ast \left(-\frac{t}{2}\nabla^{g_b}\mathop{\rm tr}\nolimits_{g_b}h+t^2g_t^{-1}\ast h\ast\nabla^{g_b}h\right).\label{lie-der-gauge-bianchi}
\end{split}
\end{equation}
Moreover, it can be shown with the help of Lemmata \ref{Ric-lin-lemma-app} and \ref{lem-lin-equ-Ric-first-var} that:
\begin{equation}
\left|-2\mathop{\rm Ric}\nolimits(g_t)-tL_{g_b}h-t\nabla^{g_b,2}\mathop{\rm tr}\nolimits_{g_b}h\right|\lesssim |\mathop{\rm Rm}\nolimits(g_b)|_{g_b}|h|^2_{g_b}+|\nabla^{g_b}h|^2_{g_b}+|h|_{g_b}|\nabla^{g_b,2}h|_{g_b},
\end{equation}
and similarly,
\begin{equation}
\left|-2\partial_t\mathop{\rm Ric}\nolimits(g_t)-L_{g_b}h-\nabla^{g_b,2}\mathop{\rm tr}\nolimits_{g_b}h\right|\lesssim |\mathop{\rm Rm}\nolimits(g_b)|_{g_b}|h|^2_{g_b}+|\nabla^{g_b}h|^2_{g_b}+|h|_{g_b}|\nabla^{g_b,2}h|_{g_b},
\end{equation}
where the symbol $\lesssim$ denotes less than or equal to up to a positive multiplicative constant uniform in $t\in[0,1]$ which might depend on $n$, $g_b$, $\varepsilon$.
By [(\ref{sec-der-Ric-rough}), Lemma \ref{Ric-lin-lemma-app}],
\begin{equation}
\begin{split}
\left|\int_N\left<\frac{\partial^2}{\partial t^2}\mathop{\rm Ric}\nolimits(g_t),h\right>_{g_t}\,e^{-f_{g_t}}d\mu_{g_t}\right|&\lesssim \int_N|h|^2_{g_b}|\nabla^{g_b,2}h|_{g_b}+|\mathop{\rm Rm}\nolimits(g_b)||h|_{g_b}^3+|\nabla^{g_b}h|^2_{g_b}|h|_{g_b}\,d\mu_{g_b}\\
&\lesssim \left(\|\nabla^{g_b,2}h\|_{C^0_{2}}+\|h\|_{C^0_{0}}\right)\|\rho_{g_b}^{-1}h\|_{L^2}^2+\|h\|_{C^0_{0}}\|\nabla^{g_b}h\|_{L^2}^2\\
&\lesssim \|h\|_{C^2_{\tau}}\|\nabla^{g_b}h\|^2_{L^2},
\end{split}
\end{equation}
where we have used Hardy's inequality that holds on $(N^n,g_b)$ thanks to Theorem \ref{thm-min-har-inequ}. A similar estimate holds for mixed derivatives with respect to the parameter $t\in[0,1]$ that involve the terms $\mathop{\rm Ric}\nolimits(g_t)$, the scalar product on symmetric $2$-tensors induced by $g_t$ and the Riemannian volume $d\mu_{g_t}$.
We use Proposition \ref{prop-pot-fct} that ensures that
\begin{equation}\label{control-covid-delta-f}
\|\delta^k_{g_t}f(h,...,h)\|_{C^{2,\alpha}_{\tau}}\leq C\left(\|g_t-g_b\|_{C^{2,\alpha}_{\tau}}\right)\|h\|^k_{C^{2,\alpha}_{\tau}},\,\quad k\geq 0,
\end{equation}
to handle the derivatives falling on $e^{-f_{g_t}}$.
For instance, let us handle the term involving $\left<\partial_t\mathop{\rm Ric}\nolimits(g_t),h\right>_{g_t}\delta_{g_t}f(h)$ as follows:
\begin{equation*}
\begin{split}
\left|\int_N\left<\partial_t\mathop{\rm Ric}\nolimits(g_t),h\right>_{g_t}\delta_{g_t}f(h)\,e^{-f_{g_t}}d\mu_{g_t}\right|\lesssim\,&\left|\int_N\langle L_{g_b}h+\nabla^{g_b,2}\mathop{\rm tr}\nolimits_{g_b}h,h\rangle_{g_b}\delta_{g_t}f(h)\,e^{-f_{g_t}}d\mu_{g_b}\right|\\
&+\|h\|_{C^{2,\alpha}_{\tau}}\int_N |\mathop{\rm Rm}\nolimits(g_b)|_{g_b}|h|^2_{g_b}+|\nabla^{g_b}h|^2_{g_b}\,d\mu_{g_b}\\
&+\|h\|_{C^{2,\alpha}_{\tau}}\int_N|h|^2_{g_b}|\nabla^{g_b,2}h|_{g_b}\,d\mu_{g_b}\\
\lesssim\,&\left|\int_N\langle \Delta_{g_b}h+\nabla^{g_b,2}\mathop{\rm tr}\nolimits_{g_b}h,h\rangle_{g_b}\delta_{g_t}f(h)\,e^{-f_{g_t}}d\mu_{g_b}\right|\\
&+\|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_b}h\|_{L^2}^2.
\end{split}
\end{equation*}
Here we have used Hardy's inequality three times: in the first line, to get rid of the zeroth order term appearing in the Lichnerowicz operator; in the first term of the second line, using that the curvature $\mathop{\rm Rm}\nolimits(g_b)$ decays at least quadratically; and in the third line, invoking the quadratic decay of $\nabla^{g_b,2}h$. Finally, the weighted Riemannian measure $e^{-f_{g_t}}d\mu_{g_t}$ on the righthand side of the first line has been turned into $d\mu_{g_b}$ since the two are uniformly equivalent as measures by (\ref{control-covid-delta-f}) and the fact that $g_b+h$ and $g_b$ are uniformly equivalent as metrics. Notice that we have only made use of $h\in C^2_0$ here.
It remains to estimate the terms involving the second covariant derivatives of $h$: by integration by parts,
\begin{equation*}
\begin{split}
\left|\int_N\langle \Delta_{g_b}h,h\rangle_{g_b}\delta_{g_t}f(h)e^{-f_{g_t}}d\mu_{g_b}\right|&\lesssim\int_N |\nabla^{g_b}h|^2_{g_b}|\delta_{g_t}f(h)|+|\nabla^{g_b}h|_{g_b}|h|_{g_b}|\nabla^{g_b}\delta_{g_t}f(h)|_{g_b}d\mu_{g_b}\\
&\quad+\int_N|\nabla^{g_b}h|_{g_b}|h|_{g_b}|\delta_{g_t}f(h)||\nabla^{g_b}f_{g_t}|_{g_b}d\mu_{g_b}\\
&\lesssim \|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_b}h\|_{L^2}^2+\|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_b}h\|_{L^2}\|\rho_{g_b}^{-1}h\|_{L^2}\\
&\lesssim \|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_b}h\|_{L^2}^2.
\end{split}
\end{equation*}
Here, we have made constant use of (\ref{control-covid-delta-f}). In particular, we have used the fact that $\nabla^{g_b}f_{g_t}$ and $\nabla^{g_b}\delta_{g_t}f(h)$ decay at least linearly together with Cauchy-Schwarz inequality in the second line and Hardy's inequality (Theorem \ref{thm-min-har-inequ}) in the third line. The integral involving the Hessian of $\mathop{\rm tr}\nolimits_{g_b}h$ can be handled similarly.
We are left with estimating integrals involving (the $t$-derivatives of) $\nabla^{g_t,2}f_{g_t}$. Let us notice first that any term which contains a $t$-derivative that falls either on the scalar product on symmetric $2$-tensors induced by $g_t$ or on the volume element $d\mu_{g_t}$ can be estimated as previously by using Hardy's inequality (Theorem \ref{thm-min-har-inequ}) in a quite straightforward way. Consequently, only the integrals that contain $t$-derivatives of $\nabla^{g_t,2}f_{g_t}$ and $e^{-f_{g_t}}$ will be estimated.
We start with terms involving derivatives of $e^{-f_{g_t}}$ only. By integration by parts,
\begin{equation*}
\begin{split}
\Bigg|\int_N\langle\nabla^{g_t,2}f_{g_t},h\rangle_{g_t}\delta_{g_t}^2f(h,h)&e^{-f_{g_t}}d\mu_{g_t}\Bigg|\\
\lesssim\,&\int_N\left(|h|_{g_t}|\nabla^{g_t}f_{g_t}|^2_{g_t}+|\mathop{\rm div}\nolimits_{g_t}h|_{g_t}|\nabla^{g_t}f_{g_t}|_{g_t}\right)|\delta_{g_t}^2f(h,h)|d\mu_{g_t}\\
&+\int_N|h|_{g_t}|\nabla^{g_t}f_{g_t}|_{g_t}|\nabla^{g_t}\delta_{g_t}^2f(h,h)|_{g_t}\,d\mu_{g_t}\\
\lesssim\,&\|h\|^2_{C^{2,\alpha}_{\tau}}\left(\|\rho_{g_b}^{-1}h\|_{L^2}\|\nabla^{g_t}f_{g_t}\|_{L^2}+\|\nabla^{g_t}h\|_{L^2}\|\nabla^{g_t}f_{g_t}\|_{L^2}\right)\\
\lesssim\,&\|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_t}h\|_{L^2}\|\nabla^{g_t}f_{g_t}\|_{L^2},
\end{split}
\end{equation*}
where we have used the linear decay of $\nabla^{g_t}f_{g_t}$ together with the fact that $\|h\|_{C^{2,\alpha}_{\tau}}\lesssim 1$. Now, from [(\ref{grad-est-int-ene}), Proposition \ref{prop-ene-pot-fct}],
\begin{equation*}
\begin{split}
\left|\int_N\langle\nabla^{g_t,2}f_{g_t},h\rangle_{g_t}\delta_{g_t}^2f(h,h)e^{-f_{g_t}}d\mu_{g_t}\right|&\lesssim\|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_t}h\|_{L^2}^2,
\end{split}
\end{equation*}
as desired.
With the help of (\ref{first-var-lie-der-app}) from Lemma \ref{lemma-app-lie-der-lin}, we proceed to estimate terms involving $t$-derivatives of $\nabla^{g_t,2}f_{g_t}$: by integrating by parts,
\begin{equation}
\begin{split}\label{huge-est-covid}
\left|\int_N\langle\frac{\partial}{\partial t}\nabla^{g_t,2}f_{g_t},h\rangle_{g_t}\delta_{g_t}f(h)\,e^{-f_{g_t}}d\mu_{g_t}\right|\lesssim& \left|\int_N\langle\nabla^{g_t,2}\delta_{g_t}f(h),h\rangle_{g_t}\delta_{g_t}f(h)\,e^{-f_{g_t}}d\mu_{g_t}\right|\\
& +\left|\int_N\langle\mathcal{L}_{\nabla^{g_t}f_{g_t}}(h),h\rangle_{g_t}\delta_{g_t}f(h)\,e^{-f_{g_t}}d\mu_{g_t}\right|\\
& +\left|\int_N\langle\mathcal{L}_{h(\nabla^{g_t}f_{g_t})}(g_t),h\rangle_{g_t}\delta_{g_t}f(h)\,e^{-f_{g_t}}d\mu_{g_t}\right|.
\end{split}
\end{equation}
Let us estimate the first integral on the righthand side of the previous inequalities (\ref{huge-est-covid}):
\begin{equation}
\begin{split}\label{intermed-est-covid}
\Bigg|\int_N\langle\nabla^{g_t,2}\delta_{g_t}f(h),h\rangle_{g_t}\delta_{g_t}f(h)\,&e^{-f_{g_t}}d\mu_{g_t}\Bigg|\\
\lesssim\,& \int_N|\nabla^{g_t}\delta_{g_t}f(h)|_{g_t}|\mathop{\rm div}\nolimits_{g_t}h|_{g_t}|\delta_{g_t}f(h)|d\mu_{g_t}\\
&+\int_N|\nabla^{g_t}\delta_{g_t}f(h)|_{g_t}|h|_{g_t}\left(|\nabla^{g_t}f_{g_t}|_{g_t}+|\nabla^{g_t}\delta_{g_t}f(h)|_{g_t}\right)d\mu_{g_t}.
\end{split}
\end{equation}
Now, since $\mathop{\rm div}\nolimits_{g_b}h=0$, one has $|\mathop{\rm div}\nolimits_{g_t}h|_{g_t}\lesssim |h|_{g_b}|\nabla^{g_b}h|_{g_b}.$ Using that $|\nabla^{g_t}\delta_{g_t}f(h)|_{g_t}$ decays at least linearly together with (\ref{control-covid-delta-f}) applied to $k=1$,
\begin{equation*}
\begin{split}
\int_N|\nabla^{g_t}\delta_{g_t}f(h)|_{g_t}|\mathop{\rm div}\nolimits_{g_t}h|_{g_t}|\delta_{g_t}f(h)|d\mu_{g_t}&\lesssim \|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_b}h\|_{L^2}^2+\|\rho_{g_b}^{-1}h\|_{L^2}\|\nabla^{g_b}h\|_{L^2}\\
&\lesssim \|h\|_{C^{2,\alpha}_{\tau}}\|\nabla^{g_b}h\|_{L^2}^2,
\end{split}
\end{equation*}
by Young's inequality and Hardy's inequality (Theorem \ref{thm-min-har-inequ}). Thanks to [(\ref{est-grad-first-der-pot-fct-ene}), Proposition \ref{prop-ene-pot-fct}],
\begin{equation*}
\int_N|\nabla^{g_t}\delta_{g_t}f(h)|^2_{g_t}|h|_{g_t}d\mu_{g_t}\lesssim\|h\|_{C^0}\|\nabla^{g_t}\delta_{g_t}f(h)\|_{L^2}^2\lesssim \|h\|_{C^0}\|\nabla^{g_b}h\|_{L^2}^2.
\end{equation*}
The other integrals involved on the righthand sides of (\ref{intermed-est-covid}) and (\ref{huge-est-covid}) can be treated in a similar way. The same is true for terms involving the second $t$-derivatives of $\nabla^{g_t,2}f_{g_t}$ by using (\ref{sec-var-lie-der-app}) from Lemma \ref{lemma-app-lie-der-lin}.
\end{proof}
This concludes the proof of Proposition \ref{local maximum stable integrable}.
\end{proof}
\subsection{A first way to prove a \L{}ojasiewicz inequality}\label{naive-loja-sec}~~\\
In order to prove a \L{}ojasiewicz inequality in the stable integrable case, a natural strategy consists in proving it at an infinitesimal level; it then suffices to control the nonlinear terms, see \cite{Has-Sta} for example in the case of a closed manifold. Here, we succinctly mention how one can implement this strategy in a non-compact situation. This will be proven in general in the next section.
We start by proving an infinitesimal version of \L{}ojasiewicz inequality.
\begin{prop}\label{prop-baby-loja-l2n/2+1} Let $(N^n,g_b)$, $n\geq 4$, be a linearly stable ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n-2}{2},n-2\right)$. Then the following \L{}ojasiewicz inequality holds true for any sufficiently small symmetric $2$-tensor $h\in C_{\tau}^{2}(S^2T^*N)$:
\begin{equation}
\langle-L_{g_b} h,h\rangle_{L^2}\leq C\|L_{g_b}h\|^2_{L^2_{\frac{n}{2}+1}}, \quad \label{loja-ineq-stab-babyL2n/2+1}
\end{equation}
for some positive constant $C=C(n,\tau,g_b)$.
\end{prop}
\begin{rk}
This is the first order version of the inequality
$$|\lambda_{\operatorname{ALE}}(g)|\leq C \|\nabla \lambda_{\operatorname{ALE}}(g)\|^2_{L^2_{\frac{n}{2}+1}},$$
which is an $L^2_{\frac{n}{2}+1}$-\L{}ojasiewicz inequality with optimal exponent $\theta=1$.
\end{rk}
\begin{proof}
Let us prove inequality (\ref{loja-ineq-stab-babyL2n/2+1}) for functions on $(N^n,g_b)$ first. Let $u\in C_{\tau}^{2}(N)$, which implies in particular that for any $\tau'<\tau$, $\Delta_{g_b} u\in L^2_{\tau'+2}$ and $u\in L^2_{\tau'}$. Since $\tau>\frac{n-2}{2}=\frac{n}{2}-1$, this gives $u\in L^2_{\frac{n}{2}-1}$, $\nabla^{g_b} u\in L^2_{\frac{n}{2}}$ and $\Delta_{g_b} u \in L^2_{\frac{n}{2}+1}$. By the Cauchy-Schwarz inequality, we have $\langle-\Delta_{g_b} u,u\rangle_{L^2}\leq \|\Delta_{g_b} u\|_{L^2_{\frac{n}{2}+1}}\|u\|_{L^2_{\frac{n}{2}-1}}$. Now, by Hardy's inequality from Theorem \ref{thm-min-har-inequ}, we get
\begin{equation*}
\|u\|_{L^2_{\frac{n}{2}-1}}^2 = \|\rho_{g_b}^{-1}u\|_{L^2_{\frac{n}{2}}}^2\leq C \|\nabla^{g_b}u\|^2_{L^2} = C\langle-\Delta_{g_b} u,u\rangle_{L^2}.
\end{equation*}
Therefore,
\begin{equation}
\langle-\Delta_{g_b} u,u\rangle_{L^2}\leq C(n,\tau,g_b)\|\Delta_{g_b} u\|_{L^2_{\frac{n}{2}+1}}\langle-\Delta_{g_b} u,u\rangle_{L^2}^\frac{1}{2}.\label{int-baby-loj-fcts}
\end{equation}
Now, notice that the proof goes almost verbatim for symmetric $2$-tensors $h\in C_{\tau}^{2}(S^2T^*N)$ such that $\|h\|_{C_{\tau}^0}\leq 1$. Notice also that it is sufficient to restrict to tensors orthogonal to $\ker_{L^2}L_{g_b}$.
Indeed, by Cauchy-Schwarz inequality, $\langle-L_{g_b} h,h\rangle_{L^2}\leq \|L_{g_b} h\|_{L^2_{\frac{n}{2}+1}}\|h\|_{L^2_{\frac{n}{2}-1}}$. On the one hand, by following the same reasoning as in the proof of (\ref{int-baby-loj-fcts}), one then gets:
\begin{equation}
\langle-L_{g_b} h,h\rangle_{L^2}\leq C(n,\tau,g_b)\|L_{g_b} h\|_{L^2_{\frac{n}{2}+1}}\langle-\Delta_{g_b} h,h\rangle_{L^2}^\frac{1}{2}.\label{int-baby-loj-tensors}
\end{equation}
On the other hand, by Theorem \ref{theo-der-kro}, if $h\in C^{2}_{\tau}(S^2T^*N)$ and if $h\perp\ker_{L^2}L_{g_b}$ (which is well-defined because $\tau>0$ and $\ker_{L^2}L_{g_b}\subset C^0_n$), one gets thanks to (\ref{int-baby-loj-tensors}),
\begin{equation}
\langle-L_{g_b} h,h\rangle_{L^2}\leq C(n,\tau,g_b)\|L_{g_b} h\|_{L^2_{\frac{n}{2}+1}}\langle-L_{g_b} h,h\rangle_{L^2}^\frac{1}{2}.\label{int-baby-loj-tensors-bis}
\end{equation}
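To conclude, note that (\ref{int-baby-loj-tensors-bis}) self-improves: if $\langle-L_{g_b} h,h\rangle_{L^2}>0$, dividing both sides by $\langle-L_{g_b} h,h\rangle_{L^2}^{\frac{1}{2}}$ and squaring gives
\begin{equation*}
\langle-L_{g_b} h,h\rangle_{L^2}\leq C(n,\tau,g_b)^2\|L_{g_b} h\|^2_{L^2_{\frac{n}{2}+1}},
\end{equation*}
which is exactly (\ref{loja-ineq-stab-babyL2n/2+1}); the case $\langle-L_{g_b} h,h\rangle_{L^2}=0$ is trivial.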
\end{proof}
We now state and prove an interpolation inequality between weighted Sobolev spaces. By definition of our weighted norms, we have $\|\cdot\|_{L^2}\leq \|\cdot\|_{L^2_{\frac{n}{2}+1}}$. We will show that, assuming our tensors decay at infinity, a weaker reverse inequality holds.
\begin{lemma}\label{lemma-interpol-delig}
Let $T$ be in $L^2(S^2T^*N)\,\cap \,L^2_\beta(S^2T^*N)$ for $\beta>\frac{n}{2}+1$. Then, for $\delta\in(0,1)$ such that $\beta = \frac{n}{2}+\frac{1}{1-\delta}$, we have the following control:
\begin{equation}
\|T\|_{L^2_{\frac{n}{2}+1}}\leq \|T\|_{L^2}^\delta\|T\|_{L^2_\beta}^{1-\delta}.\label{interpolation L2}
\end{equation}
In particular, if $T$ is small enough in norm $C^0_{\tau+2}$ for $\tau>\frac{n-2}{2}$, we have $$\|T\|_{L^2_{\frac{n}{2}+1}}\leq \|T\|_{L^2}^\delta,$$ for any $\delta$ such that $ \tau+2> \frac{n}{2}+\frac{1}{1-\delta}$, that is $\delta<\frac{2\tau-(n-2)}{2\tau-(n-4)}$.
\end{lemma}
\begin{proof}
Let $0<\delta<1$ be such that $\beta=\frac{n}{2}+\frac{1}{1-\delta}$; such a $\delta$ exists precisely because $\beta>\frac{n}{2}+1$. By H\"older's inequality with conjugate exponents $\frac{1}{\delta}$ and $\frac{1}{1-\delta}$, we then have:
\begin{align}
\|T\|^2_{L^2_{\frac{n}{2}+1}} &= \int_{N}|T|_{g_b}^{2\delta}\cdot |T|_{g_b}^{2(1-\delta)}\rho_{g_b}^{2}\,d\mu_{g_b}\nonumber\\
&\leq \left(\int_{N}|T|_{g_b}^2\,d\mu_{g_b}\right)^{\delta}\cdot\left(\int_{N}|T|_{g_b}^2\rho_{g_b}^{\frac{2}{1-\delta}}\,d\mu_{g_b}\right)^{1-\delta}\nonumber\\
&=\left(\int_{N}|T|_{g_b}^2\,d\mu_{g_b}\right)^{\delta}\cdot\left(\int_{N}|T|_{g_b}^2\rho_{g_b}^{2\beta-n}\,d\mu_{g_b}\right)^{1-\delta}\nonumber\\
&=\|T\|_{L^2}^{2\delta}\|T\|_{L^2_\beta}^{2(1-\delta)}\nonumber.
\end{align}
Here we have made use of the assumption $\frac{2}{1-\delta} = 2\beta-n$ together with the definition of the space $L^2_\beta(S^2T^*N)$.
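The last claim then follows by bounding the $L^2_\beta$-norm by the weighted $C^0$-norm; with the convention that $|T|_{g_b}\leq\|T\|_{C^0_{\tau+2}}\,\rho_{g_b}^{-(\tau+2)}$ on $N$, one gets, for $\beta<\tau+2$,
\begin{equation*}
\|T\|_{L^2_{\beta}}^2=\int_N|T|^2_{g_b}\rho_{g_b}^{2\beta-n}\,d\mu_{g_b}\leq \|T\|^2_{C^0_{\tau+2}}\int_N\rho_{g_b}^{2(\beta-\tau-2)-n}\,d\mu_{g_b}\leq C(n,\beta,\tau,g_b)\,\|T\|^2_{C^0_{\tau+2}},
\end{equation*}
the last integral being finite at infinity precisely because $\beta<\tau+2$. Hence $\|T\|_{L^2_\beta}\leq 1$ once $\|T\|_{C^0_{\tau+2}}$ is small enough, and (\ref{interpolation L2}) yields $\|T\|_{L^2_{\frac{n}{2}+1}}\leq \|T\|_{L^2}^{\delta}$.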
\end{proof}
We use this inequality in order to find a \L{}ojasiewicz inequality with an $L^2$ norm of $\nabla\lambda_{\operatorname{ALE}}$ on the right-hand side.
\begin{prop}\label{prop-baby-loja} Let $(N^n,g_b)$, $n\geq 5$, be a linearly stable ALE Ricci-flat metric asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$. Let $\tau\in\left(\frac{n}{2},n-2\right)$. Then for any $0<\delta<\frac{2\tau-(n-2)}{2\tau-(n-4)}$, the following \L{}ojasiewicz inequality holds true for any sufficiently small symmetric $2$-tensor $h\in C_{\tau}^{2}(S^2T^*N)$:
\begin{equation}
\langle-L_{g_b} h,h\rangle_{L^2}^{2-\theta_{L^2}}\leq C\|L_{g_b}h\|^2_{L^2},\quad\theta_{L^2}:=2-\frac{1}{\delta}, \quad \label{loja-ineq-stab-baby}
\end{equation}
for some positive constant $C=C(n,\theta_{L^2},\tau,g_b)$.
\end{prop}
\begin{proof}
Let us use the interpolation inequality [\eqref{interpolation L2}, Lemma \ref{lemma-interpol-delig}] on the $L^2_{\frac{n}{2}+1}$- norm on the right-hand side of the inequality \eqref{loja-ineq-stab-babyL2n/2+1}. This leads to the desired inequality (\ref{loja-ineq-stab-baby}) with $\theta>0$ such that $1-\frac{\theta}{2} = \frac{1}{2\delta}$ by considering $T:= L_{g_b} h$. Note however that this leads to $\theta>0$ only if $\delta>\frac{1}{2}$ which is only possible when $\tau>\frac{n}{2}$.
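Spelled out, for $h$ small enough that Lemma \ref{lemma-interpol-delig} applies to $T:=L_{g_b}h$, the chain of inequalities reads:
\begin{equation*}
\langle-L_{g_b} h,h\rangle_{L^2}\leq C\|L_{g_b}h\|^2_{L^2_{\frac{n}{2}+1}}\leq C\|L_{g_b}h\|^{2\delta}_{L^2},
\end{equation*}
and raising both sides to the power $\frac{1}{\delta}=2-\theta_{L^2}$ gives (\ref{loja-ineq-stab-baby}).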
\end{proof}
\begin{rk}
The proof above works in all dimensions and yields a nontrivial infinitesimal \L{}ojasiewicz inequality (that is, with $0<\theta<1$) assuming that $\tau>\frac{n}{2}$. However, in dimensions $3$ and $4$, this imposes $\tau>n-2$ and induces several difficulties later on. We will also see in a forthcoming work that we cannot expect $C^{2,\alpha}_\tau$-convergence for $\tau>n-2$ along the Ricci flow: this shows that we crucially need a \L{}ojasiewicz inequality adapted to the case $\tau<n-2$.
\end{rk}
Both of the exponents obtained for the $L^2_{\frac{n}{2}+1}$- and the $L^2$-\L{}ojasiewicz inequalities of Propositions \ref{prop-baby-loja-l2n/2+1} and \ref{prop-baby-loja} are moreover optimal, as one can see in the following example on functions, which extends in an obvious way to conformal deformations.
\begin{exmp}
Let $A\gg 1$ and $\tau>0$. Consider the cut-off function $\chi_A$ of Example \ref{exemple masse infinie}. Let us define $u_{A,\tau} = \chi_A \rho_{g_b}^{-\tau}\in C^{2,\alpha}_{\tau}$.
Then, denoting $f(u_{A,\tau})\sim g(A,\tau)$ if there exists $C>0$ independent of $A$ large enough such that $C^{-1}g(A,\tau)<f(u_{A,\tau})<Cg(A,\tau)$, we have the following controls:
\begin{itemize}
\item $-\int_{N}u_{A,\tau}\Delta_{g_b} u_{A,\tau}dv_{g_b}=\int_{N}|\nabla^{g_b} u_{A,\tau}|^2dv_{g_b} \sim A^{(n-2)-2\tau}$,
\item $\|\Delta_{g_b} u_{A,\tau}\|_{L^2}^2=\int_{N}|\Delta_{g_b} u|^2dv_{g_b} \sim A^{(n-4)-2\tau}$, and
\item $\|\Delta_{g_b} u_{A,\tau}\|_{L^2_{\frac{n}{2}+1}}^2=\int_{N}\rho_{g_b}^2|\Delta_{g_b} u|^2dv_{g_b} \sim A^{(n-2)-2\tau}$.
\end{itemize}
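To check the first control, assume, say, that $\chi_A$ vanishes on $\{\rho_{g_b}\leq A\}$ and equals $1$ on $\{\rho_{g_b}\geq 2A\}$ with $|\nabla^{g_b}\chi_A|_{g_b}\lesssim A^{-1}$ (the convention we assume here for the cut-off of Example \ref{exemple masse infinie}). Then, for $\tau>\frac{n-2}{2}$, the Euclidean-like volume growth of $(N^n,g_b)$ gives:
\begin{equation*}
\int_{N}|\nabla^{g_b} u_{A,\tau}|^2\,dv_{g_b}\sim\int_{A}^{+\infty}r^{-2(\tau+1)}\,r^{n-1}\,dr\sim A^{(n-2)-2\tau},
\end{equation*}
the integral converging at infinity precisely because $\tau>\frac{n-2}{2}$. The two other controls follow in the same way from $|\Delta_{g_b}u_{A,\tau}|\sim\rho_{g_b}^{-(\tau+2)}$ at infinity.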
We therefore see that we exactly have
$$-\int_{N}u_{A,\tau}\Delta_{g_b} u_{A,\tau}dv_{g_b}\sim \|\Delta_{g_b} u_{A,\tau}\|_{L^2_{\frac{n}{2}+1}}^2, $$
and
$$-\int_{N}u_{A,\tau}\Delta_{g_b} u_{A,\tau}dv_{g_b}\sim \|\Delta_{g_b} u_{A,\tau}\|_{L^2}^{2\delta}$$
with $\delta = \frac{2\tau-(n-2)}{2\tau-(n-4)}$.
\end{exmp}
\begin{rk}\label{rk-exp-loja}
Notice that the exponent $2-\theta$ is larger than $1$ and is asymptotically $1$ as the dimension $n$ increases. This is in contrast with the case where $(N^n,g_b)$ is a closed Riemannian manifold endowed with an integrable Ricci-flat metric $g_b$.
\end{rk}
By bounding from above the nonlinear terms of $\|\nabla\lambda_{\operatorname{ALE}}(g_b+h)\|_{L^2}$ by $\frac{1}{2}\|L_{g_b}h\|_{L^2}$ and by bounding the higher order terms of $|\lambda_{\operatorname{ALE}}(g_b+h)|$ by $\frac{1}{2}|\langle-L_{g_b}h,h\rangle_{L^2}|$ in an analogous way to the proof of Proposition \ref{local maximum stable integrable}, we get an $L^2$-\L{}ojasiewicz inequality. Note that controlling these nonlinear terms at this point is far from being an easy task, and in particular it does not seem to be possible to do so in dimension $3$ and $4$, see the discussion below Remark \ref{rk-exp-loja}. For these reasons, we do not state it and refer the reader to Theorem \ref{theo-loja-int-opt}.
There are several difficulties in developing a similar argument in dimension $n=4$.
\begin{itemize}
\item The exponent given in \eqref{loja-ineq-stab-baby} does not provide a \L{}ojasiewicz inequality with $\theta>0$.
\item In the proof of the bounds for the nonlinear terms, a difficulty is that $2 = n-2 = \frac{n}{2}$ as well as $0 = \frac{n}{2}-2$ are exceptional values of the Laplacian in this dimension, which complicates the use of Fredholm theory. In particular, the space $\big(\ker_{L^2}L_{g_b}\big)^\perp$ is not closed for the norms $ H^2_0 $ (or $H^2_{-\epsilon}$).
\item There are also asymptotically constant $2$-tensors in the kernel of the Lichnerowicz Laplacian which have to be dealt with to obtain information on the $L^2 = L^2_2$-norm of the Hessian of functions. Indeed, an inequality $$\|\nabla^{g_b,2}h\|_{L^2}\leq C \|L_{g_b}h\|_{L^2}$$
for $h\perp \ker_{L^2}L_{g_b}$ is contradicted by the elements of $\ker_{C^0}L_{g_b}\cap (\ker_{L^2}L_{g_b})^\perp$.
\end{itemize}
A priori, on this last point, the orthogonality is not well defined because the elements of $\ker_{L^2}L_{g_b}$ are $O(r^{-4})$ while the elements of $\ker_{C^0}L_{g_b}$ are $O(1)$ and this might lead to a nonconvergent integral. However, the elements of $\ker_{L^2}L_{g_b}$ are asymptotic to $H_2\cdot \rho_{g_b}^{-4}$ for $H_2$ a $2$-tensor whose coefficients are eigenfunctions of the spherical Laplacian for the second eigenvalue while the elements of $\ker_{C^0}L_{g_b}$ are asymptotic to some constant $2$-tensor $H_0$. Since $\int_{\mathbb{S}^3}\langle H_0,H_2\rangle_{\mathbb{S}^3}\, d\mu_{\mathbb{S}^3} = 0$, the $L^2$-product of an element of $\ker_{L^2}L_{g_b}$ and an element of $\ker_{C^0}L_{g_b}$ is well-defined.
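Schematically, writing $h_2\in\ker_{L^2}L_{g_b}$ and $h_0\in\ker_{C^0}L_{g_b}$ with the asymptotics above, the pairing over large balls is controlled, up to faster-decaying remainders, by:
\begin{equation*}
\int_{\{1\leq\rho_{g_b}\leq R\}}\langle h_0,h_2\rangle_{g_b}\,d\mu_{g_b}=\int_1^{R}\left(\int_{\mathbb{S}^3\slash\Gamma}\langle H_0,H_2\rangle\,d\mu_{\mathbb{S}^3}\right)r^{-4}\,r^{3}\,dr+O(1)=O(1),
\end{equation*}
since the spherical average of $\langle H_0,H_2\rangle$ vanishes: the potentially logarithmically divergent contribution drops out sphere by sphere, and what remains decays fast enough to be integrable.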
\section{A \L{}ojasiewicz inequality for $\lambda_{\operatorname{ALE}}$: the general case}\label{sec-loja-ineq-gal-case}
In this section, we check that the classical proof of the \L{}ojasiewicz-Simon inequality, and in particular its version summarized in \cite{Col-Min-Ein-Tan-Con}, holds in the context of weighted spaces. We will then deduce a \L{}ojasiewicz-Simon inequality for $\lambda_{\operatorname{ALE}}$ in the neighborhood of a given Ricci-flat ALE metric.
\subsection{A general $L^2_{\frac{n}{2}+1}$-\L{}ojasiewicz inequality for functionals on ALE metrics}\label{sec-gal-loja}~~\\
The scheme of proof of \L{}ojasiewicz-Simon inequality by Lyapunov-Schmidt reduction summarized in \cite{Col-Min-Ein-Tan-Con} extends to the setting of weighted norms. Indeed, the fact that the function spaces are modeled on H\"older spaces $C^{k,\alpha}$ is not so essential in Colding-Minicozzi's proof, the crucial properties of these spaces being that they are Banach and that the linearization of the gradient is Fredholm between them.
Before we state the main result of this section, we discuss the notion of the gradient of functionals in the setting of ALE metrics when defined on $C^{2,\alpha}_{\tau}$ or on $C^{0,\alpha}_{\tau+2}$, $\tau\in \left(\frac{n-2}{2},n-2\right)$, $\alpha\in(0,1)$. If $F:\mathcal{O}\rightarrow \mathbb{R}$ is a $C^1$ functional defined on an open subset $\mathcal{O}$ of $C^{2,\alpha}_{\tau}$, then it admits a unique gradient, denoted by $\nabla F$, defined on $\mathcal{O}$ with values in $L^{2}_{\frac{n}{2}+1}$ such that $D_{h}F(v)=\left<\nabla F(h),v\right>_{L^2}$ for all $h\in\mathcal{O}$ and $v\in L^2_{\frac{n}{2}-1}$. Similarly, we use the same notation to denote the gradient of any $C^1$ functional defined on $C^{0,\alpha}_{\tau+2}\subset L^2_{\frac{n}{2}+1}$, the only difference here being that the gradient belongs to $L^2_{\frac{n}{2}-1}$.
\begin{prop}\label{Lojasiewicz ineq weighted}
Let $\tau\in(\frac{n-2}{2},n-2)$, $\alpha\in (0,1)$ and let $(N^n,g_b)$ be an ALE Ricci-flat metric. Let $E$ (respectively $F$) be a closed subspace of $L^2_{\frac{n}{2}-1}(S^2T^*N)$ (respectively of $L^2_{\frac{n}{2}+1}(S^2T^*N)$) such that $ C^{2,\alpha}_{\tau}(S^2T^*N)\,\cap\,E$ and $C^{0,\alpha}_{\tau+2}(S^2T^*N)\,\cap\,F$ are respectively a closed subset of $C^{2,\alpha}_{\tau}(S^2T^*N)$ and a closed subset of $C^{0,\alpha}_{\tau+2}(S^2T^*N)$. Let $G : \mathcal{O}\subset C^{2,\alpha}_{\tau}(S^2T^*N) \to \mathbb{R}$ be an analytic functional in the sense of Definition \ref{def-analytic} defined on $\mathcal{O}$, a neighborhood of $0$ in $ C^{2,\alpha}_{\tau}(S^2T^*N)\cap E$.
If $G$ satisfies the following conditions:
\begin{enumerate}
\item\label{item-0} the gradient of $G$, $\nabla G : \mathcal{O}\to C^{0,\alpha}_{\tau+2}$ has a Fr\'echet derivative at each point which varies continuously, with $\nabla G(0)=0$, and
\begin{equation}
\big\|\nabla G(x)-\nabla G(y)\big\|_{L^2_{\frac{n}{2}+1}} \leq C\|x-y\|_{H^{2}_{\frac{n}{2}-1}},\label{lip-bd-nabla-G}
\end{equation}
\item \label{item-0-bis}the Fr\'echet derivative of $\nabla G:\mathcal{O}\rightarrow C^{0,\alpha}_{\tau+2}$ is continuous when interpreted as a map from $H^{2}_{\frac{n}{2}-1}$ to $L^2_{\frac{n}{2}+1}$,
\item the linearization $L$ of (the extension to $H^2_{\frac{n}{2}-1}$ of) $\nabla G$ at $0$,
\begin{enumerate}
\item \label{item-1}is bounded from $H^{2}_{\frac{n}{2}-1}$ to $L^2_{\frac{n}{2}+1}$ and Fredholm from $H^{2}_{\frac{n}{2}-1}\cap E$ to $L^2_{\frac{n}{2}+1}\cap F$,
\item \label{item-2} its kernel $\mathbf{K}$ on $H^{2}_{\frac{n}{2}-1}\cap E$ equals its $L^2$-cokernel on $L^2_{\frac{n}{2}+1}\cap F$,
\item \label{item-3} is bounded from $C^{2,\alpha}_{\tau}$ to $C^{0,\alpha}_{\tau+2}$,
\item \label{item-4} is Fredholm from $C^{2,\alpha}_{\tau}\,\cap\, E$ to $C^{0,\alpha}_{\tau+2}\,\cap\, F$,
\item \label{item-5} its kernel on $C^{2,\alpha}_\tau\,\cap \,E$ equals its $L^2$-cokernel on $C^{0,\alpha}_{\tau+2}\,\cap\,F$ and is equal to $\mathbf{K}$,
\end{enumerate}
\end{enumerate}
then, there exist $\theta \in (0,1]$ and $C>0$ such that for all sufficiently small $x$ in $C^{2,\alpha}_{\tau}(S^2T^*N)\cap E$,
\begin{equation*}
|G(x)-G(0)|^{2-\theta}\leq C\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}^{2}.
\end{equation*}
Moreover, the constant $\theta$ is independent of $\alpha\in(0,1)$ and is a monotone nondecreasing function of $\tau\in\left(\frac{n-2}{2},n-2\right)$.
\end{prop}
\begin{rk}
We will use this quite general statement with
\begin{itemize}
\item the closed subspaces $E = \ker\mathop{\rm div}\nolimits_{g_b}\subset C^{2,\alpha}_{\tau}$ and $F:=\mathop{\rm div}\nolimits_{g_b}^{\ast}\left(C^{\infty}_c(TN)\right)^{\perp}$,
\item the functional $G=\lambda_{\operatorname{ALE}}(g_b +\cdot)$ defined on a $C^{2,\alpha}_\tau$-neighborhood of $0$ by Section \ref{extension tilde lambda},
\item its $L^2(e^{-f_g}d\mu_g)$-gradient, $$h\mapsto\mathop{\rm Ric}\nolimits(g_b+h) + \nabla^{g_b+h,2}f_{g_b+h},$$
\item the linearization of this gradient at $0$ on $E$, which is $\frac{1}{2}L_{g_b}$.
\end{itemize}
\end{rk}
\begin{rk}
Proposition \ref{Lojasiewicz ineq weighted} is stated in terms of function spaces modelled on symmetric $2$-tensors, the only reason for this restriction being its main application to the functional $\lambda_{\operatorname{ALE}}$ defined on metrics. The condition $\frac{n-2}{2}<\tau< n-2$ is assumed to ensure the Fredholmness of the linearization of the gradient in the particular case of $\lambda_{\operatorname{ALE}}$.
\end{rk}
\subsection{Proof of Proposition \ref{Lojasiewicz ineq weighted}}\label{sec-prop-loja-ineq}~~\\
Let us consider $G$ satisfying the assumptions of Proposition \ref{Lojasiewicz ineq weighted} and let us prove that we can reduce to the finite-dimensional case by Lyapunov-Schmidt reduction, following the scheme of proof of \cite[Section 7]{Col-Min-Ein-Tan-Con}. We first have to find a replacement for \cite[Lemma 7.5]{Col-Min-Ein-Tan-Con}, which generally does not hold in the setting of weighted H\"older spaces because of the following remark.
\begin{rk}\label{cokernel weighted spaces}
Even if $L$ is self-adjoint, its kernel and its $L^2$-cokernel might be different because we are dealing with weighted H\"older spaces. For example, the $L^2$-cokernel of the Lichnerowicz operator $L_{g_b} : C^{2,\alpha}_{\tau}(S^2T^*N)\to C^{0,\alpha}_{\tau+2}(S^2T^*N)$, that is, the space $\mathbf{C}$ such that $L_{g_b}(C^{2,\alpha}_{\tau}) = \mathbf{C}^\perp \cap C^{0,\alpha}_{\tau+2}$, is the kernel of $L$ on $C^{k,\alpha}_{n-(\tau+2)}(S^2T^*N)$ where $-2<n-(\tau+2)<\frac{n}{2}-1$ (see Note \ref{note L2 cokernel}), which is larger than $\mathbf{K}$ if $\tau>n-2$. We therefore always have $\mathbf{K}\subset\mathbf{C}$ here, and the inclusion can be strict in our applications because there are asymptotically constant $2$-tensors in the kernel of the Lichnerowicz operator on a Ricci-flat ALE metric if we do not impose a decay at infinity.
\end{rk}
Denote by $\mathbf{K}$ the finite dimensional kernel (and $L^2$-cokernel for $0<\tau<n-2$) of $L : C^{2,\alpha}_{\tau}(S^2T^*N)\cap E\to C^{0,\alpha}_{\tau+2}(S^2T^*N)\cap F$, and define $\Pi_\mathbf{K}$ to be the associated $L^2$-projection onto $\mathbf{K}$. Define the mapping:
\begin{equation*}
\mathcal{N} =\nabla G + \Pi_\mathbf{K}.
\end{equation*}
\begin{lemma}\label{reduction LS}
There is an open subset $$\mathcal{U}\subset C^{0,\alpha}_{\tau+2}(S^2T^*N)\,\cap\,F,$$ about $0$ and a map $\Phi : \mathcal{U}\to C^{2,\alpha}_{\tau}(S^2T^*N)\cap E$ with $\Phi(0) = 0$, and $C>0$, so that for any $x,y\in \mathcal{U}$ and $z\in C^{2,\alpha}_{\tau}(S^2T^*N)\,\cap E$ sufficiently small,
\begin{itemize}
\item $\Phi\circ \mathcal{N}(z) = z$ and $\mathcal{N}\circ\Phi(x) = x$,
\item $ \|\Phi(x)\|_{C^{2,\alpha}_{\tau}}\leq C \|x\|_{C^{0,\alpha}_{\tau+2}} $ and $ \|\Phi(x)-\Phi(y)\|_{H^2_{\frac{n}{2}-1}}\leq C \|x-y\|_{L^2_{\frac{n}{2}+1}} $,
\item the function $ f := G\circ \Phi$ is analytic on $\mathcal{U}$. In particular, it is analytic on $\mathbf{K}$.
\end{itemize}
\end{lemma}
\begin{proof}
By assumptions (\ref{item-0}) and (\ref{item-5}), the mapping $\mathcal{N} : C^{2,\alpha}_{\tau}\,\cap\, E\to C^{0,\alpha}_{\tau+2}\,\cap\,F$ is $C^1$ and its Fr\'echet derivative at $0$ is
$$D_0\mathcal{N} = L + \Pi_\mathbf{K}.$$
Note that since $L$ is Fredholm of index $0$ by assumptions (\ref{item-4}) and (\ref{item-5}), it is enough to prove that $D_0\mathcal{N}$ is injective and Fredholm in order to use the implicit function theorem to define $\Phi$. The kernel $\mathbf{K}$ being finite dimensional, the projection $\Pi_{\mathbf{K}}$ is a compact operator, which implies that $D_0\mathcal{N}$ is Fredholm. Now, by assumption (\ref{item-5}), the $L^2$-cokernel of $L$ is $\mathbf{K}$. Therefore, if $L(x) + \Pi_\mathbf{K}(x) = 0$, then, by projecting on $\mathbf{K}^\perp$, we have $L(x) = 0$, hence $x\in \mathbf{K}$ and $\Pi_\mathbf{K}(x) =0$, thus, finally $x = 0$. We can then conclude as in \cite[Lemma 7.5]{Col-Min-Ein-Tan-Con} by using the implicit function theorem, Lemma \ref{th fcts implicites}.
The bound $ \|\Phi(x)\|_{C^{2,\alpha}_{\tau}}\leq C \|x\|_{C^{0,\alpha}_{\tau+2}} $ comes from the integral mean value theorem in Banach spaces and the fact that
\begin{equation}
D_y\Phi = \big(D_{\Phi(y)}\mathcal{N}\big)^{-1}\label{inverse linearisation Phi}
\end{equation}
is continuous and bounded from $\mathcal{U} \subset C^{0,\alpha}_{\tau+2}\,\cap\,F$ to $C^{2,\alpha}_{\tau}$.
The Lipschitz bound $ \|\Phi(x)-\Phi(y)\|_{H^2_{\frac{n}{2}-1}}\leq C \|x-y\|_{L^2_{\frac{n}{2}+1}} $ for sufficiently small $x,y\in \mathcal{U}$ is equivalent to $ \|x-y\|_{H^2_{\frac{n}{2}-1}}\leq C \|\mathcal{N}(x)-\mathcal{N}(y)\|_{L^2_{\frac{n}{2}+1}} $ for sufficiently small $x,y\in \mathcal{O}$. This in turn follows from the bound $ \|x-y\|_{H^2_{\frac{n}{2}-1}}\leq C \|D_0\mathcal{N}(x-y)\|_{L^2_{\frac{n}{2}+1}} $ for sufficiently small $x,y\in \mathcal{O}$, thanks to assumption (\ref{item-0-bis}). Now, by the same reasoning that led us to prove that $\mathcal{N}$ is a local diffeomorphism at $0$ between weighted H\"older spaces, $L+\Pi_{\mathbf{K}}:H^{2}_{\frac{n}{2}-1}\,\cap\,E\rightarrow L^2_{\frac{n}{2}+1}\,\cap\,F$ is an isomorphism of Banach spaces by assumptions (\ref{item-1}) and (\ref{item-2}): this implies the desired lower bound on $\|D_0\mathcal{N}(x-y)\|_{L^2_{\frac{n}{2}+1}}$ since $L+\Pi_{\mathbf{K}}=D_0\mathcal{N}$ on $C^{2,\alpha}_{\tau}\,\cap\,E$.
The analyticity of $f$ on $\mathcal{U}$ (and therefore on $\mathbf{K}$ by assumption (\ref{item-5})) comes from the analyticity of $G$ and that of $\Phi$ ensured by the analytic implicit function theorem stated in Lemma \ref{th fcts implicites}.
\end{proof}
\begin{rk}
In case the $L^2$-cokernel $\mathbf{C}$ and the kernel $\mathbf{K}$ are different, one can instead define $$\mathcal{N} := \Pi_{\mathbf{C}^\perp}\circ \nabla G +\Pi_\mathbf{K},$$
and define $\Phi$ on its image. This however induces some technical difficulties in weighted spaces. Since we do not need it presently we only considered the case when $\mathbf{C} = \mathbf{K}$.
\end{rk}
Let us now adapt \cite[Lemma 7.10]{Col-Min-Ein-Tan-Con} to our slightly different case.
\begin{lemma}\label{control nabla f Pi nabla G}
There exists $C>0$ such that for any $x$ in a small enough neighborhood of $0$ in $C^{2,\alpha}_{\tau}(S^2T^*N)\,\cap\, E$, we have
\begin{equation}
\|\nabla f(\Pi_\mathbf{K}(x))\|_{L^2_{\frac{n}{2}-1}}\leq C\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}.\label{est-nabla-f-nabla-G}
\end{equation}
More generally, if $y_t:=\Pi_{\mathbf{K}}(x)+t\nabla G(x)$, $t\in[0,1]$, for $x$ in a small enough neighborhood of $0$ in $C^{2,\alpha}_{\tau}(S^2T^*N)\,\cap\, E$, then:
\begin{equation}
\|\nabla f(y_t)\|_{L^2_{\frac{n}{2}-1}}\leq C\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}},\label{est-nabla-f-nabla-G-path}
\end{equation}
for some positive constant $C$ independent of $x$ and $t$.
\end{lemma}
\begin{proof}
For $y\in \mathcal{U}$ sufficiently small, since $f = G\circ \Phi$, we have $D_yf(v)=D_{\Phi(y)}G\circ D_{y}\Phi(v)$ for $v\in L^{2}_{\frac{n}{2}+1}$. In particular, for any $v\in L^{2}_{\frac{n}{2}+1}$,
\begin{equation*}
\begin{split}
|D_yf(v)|\leq\,& \|\nabla G(\Phi(y))\|_{L^2_{\frac{n}{2}+1}}\|D_y\Phi(v)\|_{H^2_{\frac{n}{2}-1}}\\
\leq\,&C\|\nabla G(\Phi(y))\|_{L^2_{\frac{n}{2}+1}}\|v\|_{L^2_{\frac{n}{2}+1}},
\end{split}
\end{equation*}
where $C$ is a positive constant independent of $v\in L^{2}_{\frac{n}{2}+1}$. Here, we have used the Lipschitz bound on $\Phi$ established in Lemma \ref{reduction LS}. By definition of the gradient of $f$, one gets:
\begin{equation*}
\|\nabla f(y)\|_{L^2_{\frac{n}{2}-1}}\leq C\|\nabla G\circ\Phi(y)\|_{L^2_{\frac{n}{2}+1}}.
\end{equation*}
In particular, for any $x\in C^{2,\alpha}_{\tau}\,\cap\, E$ sufficiently small, we have
\begin{equation}
\|\nabla f(\Pi_\mathbf{K}(x))\|_{L^2_{\frac{n}{2}-1}}\leq C\|\nabla G\circ\Phi( \Pi_\mathbf{K}(x))\|_{L^2_{\frac{n}{2}+1}}.\label{lovely-inequ-nabla-f-G}
\end{equation}
Now, since $x = \Phi\big(\Pi_\mathbf{K}(x)+\nabla G(x)\big)$, the Lipschitz bound (\ref{lip-bd-nabla-G}) for $\nabla G$ and the one obtained in Lemma \ref{reduction LS} for $\Phi$ yield
\begin{align*}
\|\nabla G(\Phi\circ \Pi_\mathbf{K}(x)) - \nabla G(x)\|_{L^2_{\frac{n}{2}+1}} &= \|\nabla G(\Phi( \Pi_\mathbf{K}(x))) -\nabla G(\Phi(\Pi_\mathbf{K}(x)+\nabla G(x)))\|_{L^2_{\frac{n}{2}+1}}\\
&\leq C \|\Phi( \Pi_\mathbf{K}(x))- \Phi(\Pi_\mathbf{K}(x)+\nabla G(x))\|_{H^2_{\frac{n}{2}-1}}\\
&\leq C \|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}.
\end{align*}
This ends the proof of the desired estimate (\ref{est-nabla-f-nabla-G}) by the triangle inequality.
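In more detail, combining \eqref{lovely-inequ-nabla-f-G} with this last bound gives
\begin{equation*}
\|\nabla f(\Pi_\mathbf{K}(x))\|_{L^2_{\frac{n}{2}-1}}\leq C\left(\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}+\|\nabla G(\Phi\circ\Pi_\mathbf{K}(x))-\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}\right)\leq C(1+C)\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}},
\end{equation*}
which is \eqref{est-nabla-f-nabla-G} after renaming the constant.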
In order to prove (\ref{est-nabla-f-nabla-G-path}), notice that since $\Phi\circ \mathcal{N} = \mathrm{Id}$ and $f = G\circ \Phi$, we have $G = f\circ \mathcal{N}$, which by differentiation gives
\begin{equation}
D_{\Phi(y_t)} G = D_{y_t}f\circ D_{\Phi(y_t)} \mathcal{N}.\label{nablaG}
\end{equation}
Now, since $ D_{\Phi(y_t)}\mathcal{N} $ is invertible with a bounded inverse by Lemma \ref{reduction LS}, we deduce from \eqref{nablaG} that $D_{y_t}f = D_{\Phi(y_t)} G\circ (D_{\Phi(y_t)} \mathcal{N})^{-1}$ and therefore that
\begin{equation}
\|D_{y_t}f\|_1\leq C \|D_{\Phi(y_t)} G\|_2,\label{ineq diff f diff G}
\end{equation}
where the norm $\|.\|_1$ is that of operators from $L^2_{\frac{n}{2}+1}$ to $\mathbb{R}$ and the norm $\|.\|_2$ is that of operators from $H^2_{\frac{n}{2}-1}$ to $\mathbb{R}$. Since, for the $L^2(g_b)$ scalar product, the dual of $L^2_{\frac{n}{2}+1}$ is identified with $L^2_{\frac{n}{2}-1}$ and that of $L^2_{\frac{n}{2}-1}$ with
$L^2_{\frac{n}{2}+1}$, \eqref{ineq diff f diff G} can be rewritten in terms of the $L^2(g_b)$ gradients as:
\begin{equation}
\|\nabla f(y_t)\|_{L^2_{\frac{n}{2}-1}}\leq C\|\nabla G (\Phi(y_t))\|_{L^2_{\frac{n}{2}+1}}.\label{nabla f nablaG}
\end{equation}
By \eqref{nabla f nablaG}, it remains to control $\|\nabla G(\Phi(y_t))\|_{L^2_{\frac{n}{2}+1}}$ from above by $\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}$. For this, we use \eqref{lip-bd-nabla-G}, which yields
\begin{equation*}
\begin{split}
\|\nabla G(\Phi(y_t))-\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}
&\leq C\|\Phi(y_t)-x\|_{H^2_{\frac{n}{2}-1}}\\
&= C \|\Phi(y_t)-\Phi(y_1)\|_{H^2_{\frac{n}{2}-1}}\\
&\leq C\|y_t-y_1\|_{L^2_{\frac{n}{2}+1}}\\
&= C (1-t)\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}.
\end{split}
\end{equation*}
This implies that $\|\nabla G(\Phi(y_t))\|_{L^2_{\frac{n}{2}+1}}\leq (1+C)\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}$ by the triangle inequality and therefore, by \eqref{nabla f nablaG},
\begin{equation}
\|\nabla f(y_t)\|_{L^2_{\frac{n}{2}-1}}\leq C \|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}.\label{nabla f nablaG(x)}
\end{equation}
This ends the proof of the desired estimate.
\end{proof}
Let us now show that an adaptation of \cite[Lemma 7.15]{Col-Min-Ein-Tan-Con} to our situation yields a control in $L^2_{\frac{n}{2}+1}$ only.
\begin{lemma}\label{loja orth noyau 1}
There exists $C>0$ such that for sufficiently small $x\in C^{2,\alpha}_{\tau}(S^2T^*N)\cap E$, we have
\begin{equation}
|G(x)-f(\Pi_\mathbf{K}(x))|\leq C \|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}^2.\label{est G - f Pi}
\end{equation}
\end{lemma}
\begin{proof}
Define for all $t\in [0,1]$,
$$y_t:= \Pi_\mathbf{K}(x) + t\nabla G(x),$$
for which we have $y_0 = \Pi_\mathbf{K}(x)$, $\frac{d}{dt}y_t =\nabla G(x)$ and $\Phi(y_1) = x$, since $y_1=\mathcal{N}(x)$ and $\Phi\circ\mathcal{N}=\mathrm{Id}$.
By integration, we have
\begin{align}
G(x)-f(\Pi_\mathbf{K}(x)) &= f(y_1)-f(y_0) \nonumber\\
&= \int_0^1 \langle \nabla f(y_t) , \nabla G(x) \rangle_{L^2} \,dt,\label{difference-G-fPi}
\end{align}
where $\nabla f(y_t)$ (respectively $\nabla G(x)$) is interpreted as an element of $L^2_{\frac{n}{2}-1}$ (respectively $L^2_{\frac{n}{2}+1}$). Thanks to [\eqref{est-nabla-f-nabla-G-path}, Lemma \ref{control nabla f Pi nabla G}], there exists $C>0$ such that for all $t\in[0,1]$, we have
\begin{equation*}
\|\nabla f (y_t) \|_{L^2_{\frac{n}{2}-1}}\leq C \|\nabla G(x) \|_{L^2_{\frac{n}{2}+1}}.
\end{equation*}
The estimate \eqref{est G - f Pi} then follows from the Cauchy--Schwarz inequality.
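Explicitly, since the dual of $L^2_{\frac{n}{2}+1}$ for the $L^2(g_b)$ scalar product is identified with $L^2_{\frac{n}{2}-1}$, one has
\begin{equation*}
|G(x)-f(\Pi_\mathbf{K}(x))|\leq \int_0^1 \|\nabla f(y_t)\|_{L^2_{\frac{n}{2}-1}}\,\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}\,dt\leq C\|\nabla G(x)\|^2_{L^2_{\frac{n}{2}+1}}.
\end{equation*}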
\end{proof}
We can then conclude exactly as at the end of the proof of \cite[Theorem 7.3]{Col-Min-Ein-Tan-Con}, by using the finite-dimensional \L{}ojasiewicz inequality on $\mathbf{K}$. Note that this is the only step in which the analyticity of the functional is used.
Denote by $f_\mathbf{K}$ the restriction of $f$ to the finite-dimensional space $\mathbf{K}$, which is an analytic function. Let $x\in C^{2,\alpha}_\tau\cap E$ be small enough. Thanks to the estimate \eqref{est-nabla-f-nabla-G} and to the finite-dimensional \L{}ojasiewicz inequality \cite{loj}, we have, for some $0<\theta\leq 1$,
\begin{align}
C^2\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}^2&\geq C\|\nabla f (\Pi_\mathbf{K}(x))\|_{L^2_{\frac{n}{2}-1}}^2\\
&\geq \|\nabla f_{|\mathbf{K}}(\Pi_\mathbf{K}(x))\|_{L^2_{\frac{n}{2}-1}}^2\\
&\geq |f_\mathbf{K}(\Pi_\mathbf{K}(x))-f_\mathbf{K}(0)|^{2-\theta}\\
&=|f(\Pi_\mathbf{K}(x))-G(0)|^{2-\theta},
\end{align}
where $C$ denotes the positive constant of \eqref{est-nabla-f-nabla-G}. Here, we regard $\mathbf{K}$ as equipped with the $L^2_{\frac{n}{2}-1}$-norm (any other norm would only change the constants since the dimension is finite). We can then finally use the estimate \eqref{est G - f Pi} together with the triangle inequality to obtain a general \L{}ojasiewicz inequality:
\begin{equation}
|G(x)-G(0)|^{2-\theta}\leq C\|\nabla G(x)\|^2_{L^2_{\frac{n}{2}+1}},\label{loja L2-n/2+1}
\end{equation}
with $0<\theta\leq 1$. We therefore obtain a \L{}ojasiewicz inequality with an $L^2_{\frac{n}{2}+1}$-norm for the gradient. Moreover, the above constant $\theta$ does not depend on $\alpha$ or $\tau$.
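For completeness, let us detail how \eqref{est G - f Pi} is combined with the previous chain of inequalities: since $1\leq 2-\theta<2$, one has $|a+b|^{2-\theta}\leq 2\left(|a|^{2-\theta}+|b|^{2-\theta}\right)$, so that
\begin{equation*}
|G(x)-G(0)|^{2-\theta}\leq 2|f(\Pi_\mathbf{K}(x))-G(0)|^{2-\theta}+2|G(x)-f(\Pi_\mathbf{K}(x))|^{2-\theta},
\end{equation*}
where the first term is controlled by $C\|\nabla G(x)\|^2_{L^2_{\frac{n}{2}+1}}$ by the above chain of inequalities, and the second by $\big(C\|\nabla G(x)\|^2_{L^2_{\frac{n}{2}+1}}\big)^{2-\theta}\leq C\|\nabla G(x)\|^2_{L^2_{\frac{n}{2}+1}}$ for $x$ small enough, since $2(2-\theta)\geq 2$ and $\|\nabla G(x)\|_{L^2_{\frac{n}{2}+1}}$ is small for small $x$.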
\subsection{Proof of a general \L{}ojasiewicz inequality for $\lambda_{\operatorname{ALE}}$ on ALE metrics}\label{sec-proof-gal-loja}~~\\
Let us now ensure that our functional $G=\lambda_{\operatorname{ALE}}$ satisfies the assumptions of Proposition \ref{Lojasiewicz ineq weighted}. We start by defining the set $E =\ker \mathop{\rm div}\nolimits_{g_b}$. Let us spend some time understanding the image $L_{g_b}(C^{2,\alpha}_{\tau}\cap \ker\mathop{\rm div}\nolimits_{g_b})$, which intuitively is $C^{0,\alpha}_{\tau+2}\cap \ker\mathop{\rm div}\nolimits_{g_b}$. Strictly speaking, however, we cannot a priori regard $C^{0,\alpha}_{\tau+2}\cap \ker\mathop{\rm div}\nolimits_{g_b}$ as a closed subset of $C^{0,\alpha}_{\tau+2}$, since the equation $\mathop{\rm div}\nolimits_{g_b}h=0$ involves one derivative of $h$ and is therefore not well-defined on this space; it has to be understood in the weak sense. Let us make this precise in order to define the set $F:=\mathop{\rm div}\nolimits^*_{g_b}(C^\infty_c(TN))^\perp$.
\begin{lemma}\label{definition E et F pour loja}
Let us define $E := \ker \mathop{\rm div}\nolimits_{g_b}$, $\mathbf{K} := \ker_{L^2} L_{g_b}$ and define $\Pi_\mathbf{K}$ the $L^2(g_b)$-projection on $\mathbf{K}$. Then the image $(L_{g_b} + \Pi_{\mathbf{K}})(C^{2,\alpha}_{\tau}\cap E)$ is $C^{0,\alpha}_{\tau+2}\cap \mathop{\rm div}\nolimits^*_{g_b}(C^\infty_c(TN))^\perp$.
\end{lemma}
\begin{proof}
Consider $h\in C^{2,\alpha}_{\tau}\cap \ker\mathop{\rm div}\nolimits_{g_b}$ and a smooth compactly supported vector field $X$ and denote the $L^2$-adjoint of $\mathop{\rm div}\nolimits_{g_b}$ by $\mathop{\rm div}\nolimits^{\ast}_{g_b}$. Notice that for such a vector field $X$, we have $\mathop{\rm div}\nolimits^*_{g_b}X=-\frac{1}{2}\mathop{\rm \mathscr{L}}\nolimits_X(g_b)$. Now, similarly to the proof of Lemma \ref{lemma-equiv-def-stable}, observe that,
\begin{align}
\langle L_{g_b}h,\mathop{\rm div}\nolimits^*_{g_b}X \rangle_{g_b} &= \langle h,L_{g_b}(\mathop{\rm div}\nolimits^*_{g_b}X) \rangle_{g_b}\\
&=\langle h,\mathop{\rm div}\nolimits^*_{g_b}B_{g_b}\mathop{\rm div}\nolimits^*_{g_b}X \rangle_{g_b}\\
&=\langle \mathop{\rm div}\nolimits_{g_b}h,B_{g_b}\mathop{\rm div}\nolimits^*_{g_b}X \rangle_{g_b} = 0
\end{align}
Since the elements of $\mathbf{K}$ are divergence-free according to Proposition \ref{prop-lic-fred}, we see that the image of $L_{g_b} + \Pi_{\mathbf{K}}$ is included in $C^{0,\alpha}_{\tau+2}\cap \mathop{\rm div}\nolimits^*_{g_b}(C^\infty_c(TN))^\perp$.
Conversely, let $v\in C^{0,\alpha}_{\tau+2}\cap \mathop{\rm div}\nolimits^*_{g_b}(C^\infty_c(TN))^\perp$ and let us show that there exists $h_0\in C^{2,\alpha}_{\tau}\cap \ker\mathop{\rm div}\nolimits_{g_b}$ such that $v=L_{g_b}h_0+\Pi_{\mathbf{K}}h_0$.
Thanks to the Fredholm properties of $L_{g_b}$, see Proposition \ref{prop-lic-fred}, one has $(L_{g_b} + \Pi_{\mathbf{K}})(C^{2,\alpha}_{\tau}) = C^{0,\alpha}_{\tau+2}$.
In particular, there exists $h\in C^{2,\alpha}_{\tau}$ such that $v=L_{g_b}h + \Pi_{\mathbf{K}}h$.
Now, Proposition \ref{prop-decomp-2-tensor} ensures that we have a unique decomposition $h = h_0 + \mathop{\rm div}\nolimits^*_{g_b}V$ for a vector field $V\in C^{3,\alpha}_{\tau-1}$ and a symmetric $2$-tensor $h_0\in C^{2,\alpha}_{\tau}\cap \ker\mathop{\rm div}\nolimits_{g_b}$. Let us prove that $\mathop{\rm div}\nolimits^*_{g_b}V=0$, which will give the expected result.
Since $v\in \mathop{\rm div}\nolimits^*_{g_b}(C^\infty_c(TN))^\perp$ and by density,
$$\langle L_{g_b}h+\Pi_{\mathbf{K}}h,\mathop{\rm div}\nolimits^{*}_{g_b}V\rangle_{L^2}=0.$$
Notice that $\langle \Pi_{\mathbf{K}}h,\mathop{\rm div}\nolimits^{*}_{g_b}V\rangle_{L^2}=0$ independently since $\Pi_{\mathbf{K}}h\in \mathbf{K}$ is divergence-free.
On the other hand, by symmetry of $L_{g_b}$,
\begin{equation}
\begin{split}
0&=\langle L_{g_b}h,\mathop{\rm div}\nolimits^*_{g_b}V\rangle_{L^2}\\
&=\langle h,L_{g_b}\mathop{\rm div}\nolimits^*_{g_b}V\rangle_{L^2}\\
&=\langle h,\mathop{\rm div}\nolimits^*_{g_b}B_{g_b}(\mathop{\rm div}\nolimits^*_{g_b}V)\rangle_{L^2}\\
&=\langle h_0,\mathop{\rm div}\nolimits^*_{g_b}B_{g_b}(\mathop{\rm div}\nolimits^*_{g_b}V)\rangle_{L^2}+\langle \mathop{\rm div}\nolimits^*_{g_b}V,\mathop{\rm div}\nolimits^*_{g_b}B_{g_b}(\mathop{\rm div}\nolimits^*_{g_b}V)\rangle_{L^2}\\
&=\langle \mathop{\rm div}\nolimits^*_{g_b}V,\mathop{\rm div}\nolimits^*_{g_b}B_{g_b}(\mathop{\rm div}\nolimits^*_{g_b}V)\rangle_{L^2}.
\end{split}
\end{equation}
Here, we have used integration by parts and the fact that $h_0$ is divergence-free in the last line. A computation similar to [\eqref{IBP-Bianchi-lemma-loc-stab}, Lemma \ref{lemma-equiv-def-stable}] tells us that $B_{g_b}(\mathop{\rm div}\nolimits^*_{g_b}V)=0=\nabla^{g_b}\mathop{\rm div}\nolimits_{g_b}V$. This in turn implies, by the Bochner formula for vector fields, that $\Delta_{g_b}V=0$ since $g_b$ is Ricci-flat. The maximum principle then shows that $V=0$, which ends the proof of the lemma.
\end{proof}
We now check conditions [(\ref{item-0}), \eqref{lip-bd-nabla-G}, (\ref{item-0-bis})] from Proposition \ref{Lojasiewicz ineq weighted}.
\begin{prop}[Energy estimates]\label{prop-energy-est}
Let $(N^n,g_b)$ be an ALE Ricci-flat metric, asymptotic to $\mathbb{R}^n\slash\Gamma$, for some finite subgroup $\Gamma$ of $SO(n)$ acting freely on $\mathbb{S}^{n-1}$ and let $\tau\in (\frac{n-2}{2},n-2)$. Then there exists a neighborhood $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ of $g_b$ such that the following energy estimate holds true:
\begin{equation}
\|\nabla\lambda_{\operatorname{ALE}}(g_2)-\nabla\lambda_{\operatorname{ALE}}(g_1)\|_{L^2_{\frac{n}{2}+1}}\leq C(n,g_b,\varepsilon)\|g_2-g_1\|_{H^2_{\frac{n}{2}-1}}, \label{energy-est}
\end{equation}
for any metric $g_2$, $g_1$ in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$.
Moreover, the differential of the map $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)\rightarrow \nabla\lambda_{\operatorname{ALE}}(g)\in B_{C^{0,\alpha}_{\tau}}(0,\varepsilon)$ satisfies:
\begin{equation}
\|(D_{g_2}\nabla\lambda_{\operatorname{ALE}}-D_{g_1}\nabla\lambda_{\operatorname{ALE}})(h)\|_{L^2_{\frac{n}{2}+1}}\leq C(n,g_b,\varepsilon)\|g_2-g_1\|_{C^{2,\alpha}_{\tau}}\|h\|_{H^2_{\frac{n}{2}-1}}, \label{energy-est-bis}
\end{equation}
for any metric $g_2$, $g_1$ in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ and $h\in H^2_{\frac{n}{2}-1}$.
\end{prop}
\begin{proof}
Let us fix a neighborhood $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ such that the properties on the potential function $f_g$ established in Proposition \ref{prop-pot-fct} hold true.
Strictly speaking, the symbol $\nabla \lambda_{\operatorname{ALE}}(g)$ means the gradient of $\lambda_{\operatorname{ALE}}$ at $g$ with respect to the Hilbert space $L^2(d\mu_{g_b})$. Thanks to Proposition \ref{lambdaALE analytic}, $\nabla\lambda_{\operatorname{ALE}}(g)$ can be computed in coordinates from the $L^2(e^{-f_g}d\mu_g)$-gradient of $\lambda_{\operatorname{ALE}}$ as follows. Indeed, consider an orthonormal frame with respect to $g_b$ at a point such that $g_{ij}=(1+\lambda_i)\delta_{ij}$ and observe that:
\begin{equation*}
\nabla\lambda_{\operatorname{ALE}}(g)_{ij}=-e^{-f_g}\frac{d\mu_g}{d\mu_{g_b}}(1+\lambda_i)^{-1}(1+\lambda_j)^{-1}\left(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\right)_{ij},
\end{equation*}
which can be schematically written as
\begin{equation}
\nabla\lambda_{\operatorname{ALE}}(g)=-e^{-f_g}\frac{d\mu_g}{d\mu_{g_b}}g^{-1}\ast g^{-1}\ast g_b\ast g_b\ast (\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g).\label{l2-grad-lam-gb}
\end{equation}
Identity (\ref{l2-grad-lam-gb}) then implies that, for two metrics $g_i$, $i=1,2$, in a $C^{2,\alpha}_{\tau}$-neighborhood of $g_b$,
\begin{equation}
\begin{split}
|\nabla\lambda_{\operatorname{ALE}}(g_2)-\nabla\lambda_{\operatorname{ALE}}(g_1)|_{g_b}\leq& C |\mathop{\rm Ric}\nolimits(g_2)+\nabla^{g_2,2}f_{g_2}-\mathop{\rm Ric}\nolimits(g_1)-\nabla^{g_1,2}f_{g_1}|_{g_b}\\
&+C\left(|g_2-g_1|_{g_b}+|f_{g_2}-f_{g_1}|\right)\rho_{g_b}^{-2},
\label{red-case-l2-wei-grad}
\end{split}
\end{equation}
where $C=C(n,g_b,\varepsilon,\tau)$ denotes a positive constant that may vary from line to line. Here we have used the weakened assumption on the decay of the Ricci curvatures $\mathop{\rm Ric}\nolimits(g_i)=O(\rho_{g_b}^{-2-\tau})=O(\rho_{g_b}^{-2})$ together with the decay of the Hessians $\nabla^{g_i,2}f_{g_i}=O(\rho_{g_b}^{-2})$ from Proposition \ref{prop-pot-fct}. In particular, Hardy's inequality from Theorem \ref{thm-min-har-inequ} and Proposition \ref{prop-ene-pot-fct} show that:
\begin{equation}
\begin{split}
\|\nabla\lambda_{\operatorname{ALE}}(g_2)-\nabla\lambda_{\operatorname{ALE}}(g_1)\|_{L^2_{\frac{n}{2}+1}}&\leq C \|\mathop{\rm Ric}\nolimits(g_2)+\nabla^{g_2,2}f_{g_2}-\mathop{\rm Ric}\nolimits(g_1)-\nabla^{g_1,2}f_{g_1}\|_{L^2_{\frac{n}{2}+1}}\\
&\quad+C\left(\|g_2-g_1\|_{H^2_{\frac{n}{2}-1}}+\|\nabla^{g_b}(f_{g_2}-f_{g_1})\|_{L^2_{\frac{n}{2}}}\right)\\
&\leq C \|\mathop{\rm Ric}\nolimits(g_2)+\nabla^{g_2,2}f_{g_2}-\mathop{\rm Ric}\nolimits(g_1)-\nabla^{g_1,2}f_{g_1}\|_{L^2_{\frac{n}{2}+1}}\\
&\quad+C\|g_2-g_1\|_{H^2_{\frac{n}{2}-1}}.\label{prel-est-ener-lam}
\end{split}
\end{equation}
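Here, Hardy's inequality is used in the following form: the zeroth-order term $|f_{g_2}-f_{g_1}|\rho_{g_b}^{-2}$ appearing in \eqref{red-case-l2-wei-grad} satisfies
\begin{equation*}
\left\||f_{g_2}-f_{g_1}|\rho_{g_b}^{-2}\right\|_{L^2_{\frac{n}{2}+1}}\leq C\|\nabla^{g_b}(f_{g_2}-f_{g_1})\|_{L^2_{\frac{n}{2}}},
\end{equation*}
which accounts for the term $\|\nabla^{g_b}(f_{g_2}-f_{g_1})\|_{L^2_{\frac{n}{2}}}$ in the first inequality above.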
We first prove the energy estimate (\ref{energy-est}) in the case where one of the two metrics is $g_b$, for which we use that $\nabla\lambda_{\operatorname{ALE}}(g_b)=0$.
Let us write $h:=g-g_b$ where $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$.
By Lemmata \ref{Ric-lin-lemma-app} and \ref{lem-lin-equ-Ric-first-var}, linearizing the Ricci curvature of $g$ at $g_b$ gives schematically:
\begin{equation}
\begin{split}
-2\mathop{\rm Ric}\nolimits(g)&=-2\mathop{\rm Ric}\nolimits_{g_b}+L_{g_b}h-\mathop{\rm \mathscr{L}}\nolimits_{B_{g_b}(h)}(g_b)+Q(h,\nabla^{g_b}h,\nabla^{g_b,2}h)\\
&=L_{g_b}h-\mathop{\rm \mathscr{L}}\nolimits_{B_{g_b}(h)}(g_b)+Q(h,\nabla^{g_b}h,\nabla^{g_b,2}h),\label{ric-lin-bianchi}
\end{split}
\end{equation}
where $Q(h,\nabla^{g_b}h,\nabla^{g_b,2}h)$ satisfies pointwise
\begin{equation*}
\left|Q(h,\nabla^{g_b}h,\nabla^{g_b,2}h)\right|_{g_b}\leq C(n,g_b,\varepsilon)\left(|\mathop{\rm Rm}\nolimits(g_b)|_{g_b}|h|^2_{g_b}+|\nabla^{g_b}h|_{g_b}^2+|h|_{g_b}|\nabla^{g_b,2}h|_{g_b}\right).
\end{equation*}
Therefore, by using the fact that $g=g_b+h\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$,
\begin{eqnarray}
\|\mathop{\rm Ric}\nolimits(g)\|_{L^2_{\frac{n}{2}+1}}\leq C(n,g_b,\varepsilon)\|h\|_{H^2_{\frac{n}{2}-1}},\label{L^2-a-priori-bound-Ric}
\end{eqnarray}
where we suppressed the dependence of the norm on $S^2T^*N$.\\
Now, let us estimate $\|\nabla^{g,2}f_g\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}$. According to the Bochner formula applied to the smooth metric measure space $\left(N,g,\nabla^gf_g\right)$:
\begin{equation}
\begin{split}
\left(\Delta_g-\langle\nabla^gf_g,\nabla^g\cdot\rangle_g\right)|\nabla^g f_g|_g^2&=2|\nabla^{g,2}f_g|^2_g+2(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g)(\nabla^gf_g,\nabla^gf_g)\\
&+2\left\langle\nabla^g\left(\Delta_gf_g-\langle\nabla^gf_g,\nabla^gf_g\rangle_g\right),\nabla^gf_g\right\rangle_g.\label{bochner-for-hess-f}
\end{split}
\end{equation}
Once (\ref{bochner-for-hess-f}) is multiplied by $\rho_{g_b}^2$, an integration by parts, legitimated by the decay of $f_g$ established in Proposition \ref{prop-pot-fct}, gives:
\begin{equation}
\begin{split}\label{IPP-Monster}
\|\nabla^{g,2}f_g\|&_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2=\|\Delta_{g,f_g}f_g\|^2_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}-\int_N(\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g)(\nabla^gf_g,\nabla^gf_g)\,\rho_{g_b}^2e^{-f_g}d\mu_g\\
&+2\int_N\Delta_{g,f_g}f_g\,\rho_{g_b}\,\langle\nabla^{g}\rho_{g_b},\nabla^gf_g\rangle_g\,e^{-f_g}d\mu_g-\int_N\nabla^{g,2}f_g(\nabla^gf_g,\nabla^g(\rho_{g_b}^2e^{-f_g}))\,d\mu_g\\
&\leq c\||\nabla^gf_g|^2_g+\mathop{\rm R}\nolimits_g\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2+\frac{1}{2}\|\nabla^{g,2}f_g\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2+c\|\nabla^gf_g\|_{L^2(e^{-f_g}d\mu_g)}^2\\
&+c\left(\||\nabla^gf_g|^2_g\|^2_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}+\|\mathop{\rm Ric}\nolimits(g)\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2\right)\\
&\leq\frac{1}{2}\|\nabla^{g,2}f_g\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2\\
&+c\left(\||\nabla^gf_g|^2_g\|^2_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}+\|\nabla^gf_g\|^2_{L^2(e^{-f_g}d\mu_g)}+\|\mathop{\rm Ric}\nolimits(g)\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2\right),
\end{split}
\end{equation}
where we use (\ref{equ-criti-lambda-pot}) together with Young's inequality in the third line and where $c=c(n)$ denotes a positive constant that may vary from line to line. By Proposition \ref{prop-pot-fct}, $\rho_{g_b}|\nabla^gf_g|_g\leq C(n,\varepsilon,g_b)$ pointwise. Consequently, we get:
\begin{equation*}
\|\nabla^{g,2}f_g\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2\leq C(n,\varepsilon,g_b)\left(\|\nabla^gf_g\|^2_{L^2(e^{-f_g}d\mu_g)}+\|\mathop{\rm Ric}\nolimits(g)\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2\right),
\end{equation*}
which in turn implies:
\begin{equation}
\|\nabla^{g,2}f_g\|_{L^2(\rho_{g_b}^2e^{-f_g}d\mu_g)}^2\leq C(n,\varepsilon,g_b)\left(\|\nabla^gf_g\|^2_{L^2(e^{-f_g}d\mu_g)}+\|h\|_{H^2_{\frac{n}{2}-1}}^2\right),\label{intermediate-est-hessian}
\end{equation}
thanks to (\ref{L^2-a-priori-bound-Ric}). Indeed, by Proposition \ref{existence propriete-wg},
\begin{eqnarray}
|w_g-1|=|w_g-w_{g_b}|\leq \frac{1}{2},\quad g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon),\label{c^0-est-pot-fct}
\end{eqnarray}
if $\varepsilon$ is chosen small enough, so that the metrics $w_g\cdot g$ and $g$ are uniformly bi-Lipschitz on $N$.
Finally, it remains to estimate $\|\nabla^gf_g\|_{L^2(e^{-f_g}d\mu_g)}$, or equivalently $\|\nabla^gf_g\|_{L^2}$, from above. Combining (\ref{intermediate-est-hessian}) with [(\ref{grad-est-int-ene}), Proposition \ref{prop-ene-pot-fct}] and invoking (\ref{prel-est-ener-lam}) ends the proof of (\ref{energy-est}) in the case $g_1=g_b$.\\
Let us treat the general case, i.e. let $g_1$ and $g_2$ be two metrics in $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, let $h:=g_2-g_1$ and $g_t:=g_1+(t-1)h$ for $t\in[1,2]$, and let us estimate the difference of $\nabla\lambda_{\operatorname{ALE}}(g_2)$ and $\nabla\lambda_{\operatorname{ALE}}(g_1)$ with the help of Lemmata \ref{Ric-lin-lemma-app} and \ref{lem-lin-equ-Ric-first-var} as follows:
\begin{equation}
\begin{split}\label{first-diff-lambda}
&-2\mathop{\rm Ric}\nolimits(g_2)-\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_2}f_{g_2}}(g_2)+2\mathop{\rm Ric}\nolimits(g_1)+\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_1}f_{g_1}}(g_1)\\
=&L_{g_1}h-\mathop{\rm \mathscr{L}}\nolimits_{B_{g_1}(h)}(g_2)+\int_1^2\frac{\partial}{\partial t}\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}f_{g_t}}(g_t)\,dt+Q(h,\nabla^{g_1}h,\nabla^{g_1,2}h).
\end{split}
\end{equation}
Now, observe that:
\begin{equation}
\begin{split}\label{int-first-der-lie-f}
\int_1^2\frac{\partial}{\partial t}\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}f_{g_t}}(g_t)\,dt&=\int_1^2-\mathop{\rm \mathscr{L}}\nolimits_{h(\nabla^{g_t}f_{g_t})}(g_t)+\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}f_{g_t}}(h)+\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}(\delta_{g_t}f(h))}(g_t)\,dt.
\end{split}
\end{equation}
On the one hand, one has
\begin{equation}
\begin{split}\label{one-first-term-lie}
\int_1^2\left|\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}f_{g_t}}(h)\right|_{g_1}\,dt&\lesssim\int_1^2|\nabla^{g_t}h|_{g_1}|\nabla^{g_t}f_{g_t}|_{g_1}+|h|_{g_1}|\nabla^{g_t,2}f_{g_t}|_{g_1} \,dt\\
&\lesssim \rho_{g_b}^{-1}|\nabla^{g_1}h|_{g_1}\|h\|_{C^{2,\alpha}_{\tau}}+\rho_{g_b}^{-2}|h|_{g_1}\|h\|_{C^{2,\alpha}_{\tau}}.
\end{split}
\end{equation}
Here, the sign $\lesssim$ means that the inequality holds up to a positive constant, uniform in $t\in[1,2]$, which may depend on $n$, $g_b$ and $\varepsilon$.
On the other hand, one has similarly,
\begin{equation}
\begin{split}\label{sec-first-term-lie}
\left|\int_1^2\mathop{\rm \mathscr{L}}\nolimits_{h(\nabla^{g_t}f_{g_t})}(g_t)\,dt\right|_{g_1}&\lesssim\int_1^2|\nabla^{g_t}(h(\nabla^{g_t}f_{g_t}))|_{g_1} \,dt\\
&\lesssim \rho_{g_b}^{-1}|\nabla^{g_1}h|_{g_1}\|h\|_{C^{2,\alpha}_{\tau}}+\rho_{g_b}^{-2}|h|_{g_1}\|h\|_{C^{2,\alpha}_{\tau}}.
\end{split}
\end{equation}
Then by (\ref{first-diff-lambda}) together with (\ref{int-first-der-lie-f}), (\ref{one-first-term-lie}) and (\ref{sec-first-term-lie}), one has pointwise:
\begin{equation*}
\begin{split}
|\mathop{\rm Ric}\nolimits(g_2)+\nabla^{g_2,2}f_{g_2}-&\mathop{\rm Ric}\nolimits(g_1)-\nabla^{g_1,2}f_{g_1}|_{g_1}
\\
&\lesssim|L_{g_1}h|_{g_1}+\left|\mathop{\rm \mathscr{L}}\nolimits_{B_{g_1}(h)}(g_1)\right|_{g_1}+\int_1^2|\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}(\delta_{g_t}f(h))}(g_t)|_{g_1}\,dt\\
&+|\mathop{\rm Rm}\nolimits(g_1)|_{g_1}|h|^2_{g_1}+|\nabla^{g_1}h|_{g_1}^2+|h|_{g_1}|\nabla^{g_1,2}h|_{g_1}\\
&+\rho_{g_b}^{-1}|\nabla^{g_1}h|_{g_1}\|h\|_{C^{2,\alpha}_{\tau}}+\rho_{g_b}^{-2}|h|_{g_1}\|h\|_{C^{2,\alpha}_{\tau}}\\
&\lesssim |L_{g_1}h|_{g_1}+\left|\mathop{\rm \mathscr{L}}\nolimits_{B_{g_1}(h)}(g_1)\right|_{g_1}+\int_1^2|\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}(\delta_{g_t}f(h))}(g_t)|_{g_1}\,dt\\
&+\rho_{g_b}^{-1}|\nabla^{g_1}h|_{g_1}+\rho_{g_b}^{-2}|h|_{g_1},
\end{split}
\end{equation*}
where we have used the fact that $g_i\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, $i=1,2$ in the last line.
Let us notice that by definition of the space $H^2_{\frac{n}{2}-1}$ and that of the linear Bianchi gauge given in \eqref{defn-bianchi-op},
\begin{equation*}
\begin{split}
\|L_{g_1}h\|_{L^2_{\frac{n}{2}+1}}&\lesssim\|\mathop{\rm Rm}\nolimits(g_1)\ast h\|_{L^2_{\frac{n}{2}+1}}+\|\nabla^{g_1}h\|_{L^2}+\|\nabla^{g_1,2}h\|_{L^2_{\frac{n}{2}+1}}\\
&\lesssim \|h\|_{H^2_{\frac{n}{2}-1}},\\
\|\mathop{\rm \mathscr{L}}\nolimits_{B_{g_1}(h)}(g_1)\|_{L^2_{\frac{n}{2}+1}}&\lesssim\left\|\nabla^{g_1}\left(\mathop{\rm div}\nolimits_{g_1}h-\frac{\nabla^{g_1}\mathop{\rm tr}\nolimits_{g_1}h}{2}\right)\right\|_{L^2_{\frac{n}{2}+1}}\\
&\lesssim \|\nabla^{g_1,2}h\|_{L^2_{\frac{n}{2}+1}} \lesssim \|h\|_{H^2_{\frac{n}{2}-1}},
\end{split}
\end{equation*}
which implies that:
\begin{equation}
\begin{split}\label{last-but-not-least}
\|\mathop{\rm Ric}\nolimits(g_2)+\nabla^{g_2,2}f_{g_2}-&\mathop{\rm Ric}\nolimits(g_1)-\nabla^{g_1,2}f_{g_1}\|_{L^2_{\frac{n}{2}+1}}\\
&\lesssim \|h\|_{H^2_{\frac{n}{2}-1}}+\left\|\int_1^2\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}(\delta_{g_t}f(h))}(g_t)\,dt\right\|_{L^2_{\frac{n}{2}+1}}\\
&\lesssim \|h\|_{H^2_{\frac{n}{2}-1}}+\int_1^2\|\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}(\delta_{g_t}f(h))}(g_t)\|_{L^2(\rho_{g_b}^2d\mu_{g_t})}\,dt.
\end{split}
\end{equation}
Here, we have used the Cauchy--Schwarz inequality together with the fact that the metrics $g_t$, $t\in[1,2]$, are uniformly equivalent in the last line.
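In more detail, with $T_t:=\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}(\delta_{g_t}f(h))}(g_t)$, the pointwise Cauchy--Schwarz inequality $\big|\int_1^2T_t\,dt\big|_{g_b}^2\leq\int_1^2|T_t|_{g_b}^2\,dt$ yields, after integrating against $\rho_{g_b}^2\,d\mu_{g_b}$ and using the uniform equivalence of the measures $d\mu_{g_t}$, $t\in[1,2]$, with $d\mu_{g_b}$:
\begin{equation*}
\Big\|\int_1^2T_t\,dt\Big\|^2_{L^2_{\frac{n}{2}+1}}\leq\int_1^2\|T_t\|^2_{L^2_{\frac{n}{2}+1}}\,dt\leq C\sup_{t\in[1,2]}\|T_t\|^2_{L^2(\rho_{g_b}^2d\mu_{g_t})},
\end{equation*}
so that it suffices to bound $\|T_t\|_{L^2(\rho_{g_b}^2d\mu_{g_t})}$ uniformly in $t\in[1,2]$, as is done below.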
According to (\ref{prel-est-ener-lam}) and (\ref{last-but-not-least}), it remains to control $\|\mathop{\rm \mathscr{L}}\nolimits_{\nabla^{g_t}(\delta_{g_t}f(h))}(g_t)\|_{L^2(e^{-f_{g_t}}d\mu_{g_t})}$ from above uniformly in $t\in[1,2]$. By using the Bochner formula for the smooth metric measure space $\left(N^n,g_t,\nabla^{g_t}f_{g_t}\right)$ endowed with the measure $\mu_t:=e^{-f_{g_t}}d\mu_{g_t}$:
\begin{equation*}
\begin{split}
\Delta_{g_t,f_{g_t}}|\nabla^{g_t} \left(\delta_{g_t}f(h)\right)|_{g_t}^2&:=\left(\Delta_{g_t}-\langle\nabla^{g_t}f_{g_t},\nabla^{g_t}\cdot\rangle_{g_t}\right)|\nabla^{g_t} \left(\delta_{g_t}f(h)\right)|_{g_t}^2\\
&=2|\nabla^{g_t,2}\delta_{g_t}f(h)|^2_{g_t}+2\left\langle\nabla^{g_t}\Delta_{g_t,f_{g_t}}\left(\delta_{g_t}f(h)\right),\nabla^{g_t}\left(\delta_{g_t}f(h)\right)\right\rangle_{g_t}\\
&+2(\mathop{\rm Ric}\nolimits(g_t)+\nabla^{g_t,2}f_{g_t})(\nabla^{g_t}\left(\delta_{g_t}f(h)\right),\nabla^{g_t}\left(\delta_{g_t}f(h)\right)).
\end{split}
\end{equation*}
Therefore we proceed in the same way as we did in (\ref{IPP-Monster}) to get:
\begin{equation}
\begin{split}\label{est-hess-first-var-pot-fct-ene}
\|\nabla^{g_t,2}\left(\delta_{g_t}f(h)\right)\|_{L^2(\rho_{g_b}^2d\mu_{t})}&\lesssim\|\Delta_{g_t,f_{g_t}}\left(\delta_{g_t}f(h)\right)\|_{L^2(\rho_{g_b}^2d\mu_{t})}+\|\nabla^{g_t}\left(\delta_{g_t}f(h)\right)\|_{L^2(d\mu_{t})}.
\end{split}
\end{equation}
In order to estimate the right-hand side of the previous inequality in terms of $\|h\|_{H^2_{\frac{n}{2}-1}}$ only, we invoke [(\ref{est-grad-first-der-pot-fct-ene}), (\ref{first-var-pot-fct-ell-equ-gal-case-prop}), Proposition \ref{prop-ene-pot-fct}] together with the triangle inequality.
Coming back to (\ref{est-hess-first-var-pot-fct-ene}), this implies that:
\begin{equation}
\begin{split}
\|\nabla^{g_t,2}\left(\delta_{g_t}f(h)\right)\|_{L^2(\rho_{g_b}^2d\mu_{g_t})}&\lesssim\|\Delta_{g_t,f_{g_t}}\left(\delta_{g_t}f(h)\right)\|_{L^2(\rho_{g_b}^2d\mu_{g_t})}+\|\nabla^{g_t}h\|_{L^2}\\
&\lesssim \|\nabla^{g_1}h\|_{L^2}+\|\nabla^{g_1,2}h\|_{L^2_{\frac{n}{2}+1}}\lesssim \|h\|_{H^2_{\frac{n}{2}-1}}.
\end{split}
\end{equation}
Here we used the fact that $|\nabla^{g_t}h|_{g_t}\lesssim |\nabla^{g_1}h|_{g_1}+|\nabla^{g_1}(g_t-g_1)|_{g_1}|h|_{g_1}$ to get the second inequality since again, $g_i\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, $i=1,2$. This ends the proof of estimate \eqref{energy-est}.\\
The proof of \eqref{energy-est-bis} follows the same lines as that of \eqref{energy-est}, so we only sketch its main steps. According to Proposition \ref{snd-var-gal-lambda}, the differential of the $L^{2}(e^{-f_g}d\mu_g)$-gradient of $\lambda_{\operatorname{ALE}}$, denoted by $\nabla^{L^2(e^{-f_g}d\mu_g)}\lambda_{\operatorname{ALE}}$, at a metric $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ along a variation $h\in H^2_{\frac{n}{2}-1}$ is:
\begin{equation}
\begin{split}\label{for-diff-nabla-lam-wei}
2D_g\left(\nabla^{L^2(e^{-f_g}d\mu_g)}\lambda_{\operatorname{ALE}}\right)(h)&=\Delta_{f_g}h+2\mathop{\rm Rm}\nolimits(g)(h)-\mathop{\rm \mathscr{L}}\nolimits_{B_{f_g}(h)}(g)\\
&\quad+h\circ\mathop{\rm Ric}\nolimits_{f_g}(g)+\mathop{\rm Ric}\nolimits_{f_g}(g)\circ h-2\left(\frac{\mathop{\rm tr}\nolimits_gh}{2}-\delta_gf(h)\right)\mathop{\rm Ric}\nolimits_{f_g}(g).
\end{split}
\end{equation}
As explained in (\ref{l2-grad-lam-gb}) at the beginning of the proof of \eqref{energy-est}, working with either the $L^{2}(e^{-f_g}d\mu_g)$-gradient or the $L^{2}(d\mu_{g_b})$-gradient of $\lambda_{\operatorname{ALE}}$ leads to the same expected estimate. Linearizing (\ref{for-diff-nabla-lam-wei}) applied to $g_2\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ at a metric $g_1\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$ leads to \eqref{energy-est-bis}: indeed, the only difficulty consists in estimating the terms involving $\delta_gf(h)$, or equivalently the volume variation $\frac{\mathop{\rm tr}\nolimits_gh}{2}-\delta_gf(h)$. This is done with the help of Proposition \ref{var-vol-var-ell-eqn-prop}, by linearizing \eqref{var-vol-var-ell-eqn-for} applied to $g_2$ at $g_1$, together with Proposition \ref{prop-ene-pot-fct-bis}.
\end{proof}
We are now in a position to prove the main result of this section:
\begin{theo}[A \L{}ojasiewicz inequality for ALE metrics]\label{theo-loja-ALE}
Let $(N^n,g_b)$, $n\geqslant 4$, be a Ricci-flat ALE metric. Let $\tau\in \left(\frac{n-2}{2},n-2\right)$ and $\alpha\in(0,1)$. Then the functional $\lambda_{\operatorname{ALE}}$ satisfies the following $L^2_{\frac{n}{2}+1}$-\L{}ojasiewicz inequality: there exists $\varepsilon>0$, a constant $C>0$ and $\theta\in(0,1)$ such that for all $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$:
\begin{equation}
|\lambda_{\operatorname{ALE}}(g)|^{2-\theta}\leq C\|\mathop{\rm Ric}\nolimits(g) + \nabla^{g,2}{f_g}\|^2_{L^2_{\frac{n}{2}+1}}. \label{Lojasiewicz lambda ALE l2n/2+1}
\end{equation}
\end{theo}
\begin{proof}
It suffices to prove that the functional $G:=\lambda_{\operatorname{ALE}}$ and its derivatives satisfy the assumptions of Proposition \ref{Lojasiewicz ineq weighted}.
Let us first note that $\lambda_{\operatorname{ALE}}$ is analytic by Proposition \ref{lambdaALE analytic}.
Now, the energy estimates (\ref{lip-bd-nabla-G}) hold true thanks to [\eqref{energy-est}, Proposition \ref{prop-energy-est}].
Moreover, according to [(\ref{first-var-prop}), Proposition \ref{first-var-lambda}] and Proposition \ref{lambdaALE analytic}, the gradient of $\lambda_{\operatorname{ALE}}$ in $L^2(e^{-f_g}d\mu_g)$ is $-(\mathop{\rm Ric}\nolimits(g) + \nabla^{g,2}f_g)$ for $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$. Condition [(\ref{item-0-bis}), Proposition \ref{Lojasiewicz ineq weighted}] is ensured by [\eqref{energy-est-bis}, Proposition \ref{prop-energy-est}].
Next, observe that by Proposition \ref{scaling diffeo tildelambda}, the inequality \eqref{Lojasiewicz lambda ALE l2n/2+1} is invariant under diffeomorphisms of $N$ in the connected component of the identity whose generating vector field lies in a neighborhood of $0_{TN}$ in $C^{3,\alpha}_{\tau-1}(TN)$. For this reason, we invoke Proposition \ref{prop-gauge-div-free} to restrict our space of metrics to $B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)\,\cap\,E$ where $E:= \ker_{C^{2,\alpha}_{\tau}}(\mathop{\rm div}\nolimits_{g_b})$, and we restrict the image to $C^{0,\alpha}_{\tau+2}\,\cap \,F$ with $F:=\mathop{\rm div}\nolimits^*_{g_b}(C^\infty_c(TN))^\perp$ as in Lemma \ref{definition E et F pour loja}. This space of metrics is the space of divergence-free metrics with respect to $g_b$. It is crucial to gauge away the diffeomorphism invariance of $\lambda_{\operatorname{ALE}}$ in order for its Hessian to be Fredholm.
We will denote by $\lambda_{\operatorname{ALE}}^E$ the restriction of $\lambda_{\operatorname{ALE}}$ to $E$, which is also analytic.
Since $\|\nabla \lambda_{\operatorname{ALE}}^E\|_{L^2_{\frac{n}{2}+1}}\leq \|\nabla \lambda_{\operatorname{ALE}}\|_{L^2_{\frac{n}{2}+1}}$, the inequalities go in the right direction, and it is enough to prove the corresponding \L{}ojasiewicz inequality for $\lambda_{\operatorname{ALE}}^E$.
Notice that the linearization of $\nabla \lambda_{\operatorname{ALE}}^E$ at $0\in S^2T^*N$ is half the Lichnerowicz operator $\frac{1}{2}L_{g_b}$ by Proposition \ref{second-var-prop}. Proposition \ref{prop-lic-fred} ensures that $L_{g_b}$ is symmetric and is a bounded operator from $C^{2,\alpha}_{\tau}(S^2T^*N)$ to $C^{0,\alpha}_{\tau+2}(S^2T^*N)$. It is moreover bounded from $H^2_{\frac{n}{2}-1}(S^2T^*N)$ to $L^2_{\frac{n}{2}+1}(S^2T^*N)$. The Fredholmness of $L_{g_b}$ follows from Proposition \ref{prop-lic-fred}. Conditions (\ref{item-2}) and (\ref{item-5}) are met thanks to Proposition \ref{prop-lic-fred}. This ends the proof of Theorem \ref{theo-loja-ALE}.
\end{proof}
Let us finally prove that we obtain the optimal exponent in the integrable situation.
Let $g_b$ be an integrable Ricci-flat ALE metric. In this situation, the idea is to replace the kernel $\ker_{L^2}L_{g_b}$ by the actual zero-set of $\lambda_{\operatorname{ALE}}$ among $C^{2,\alpha}_\tau$ divergence-free perturbations of $g_b$, which is an analytic manifold whose tangent space at $g_b$ is $\ker_{L^2}L_{g_b}$.
\begin{theo}\label{theo-loja-int-opt}
Let $(N^n,g_b)$, $n\geq 4$, be a Ricci-flat ALE metric whose infinitesimal Ricci-flat deformations are integrable. Let $\tau\in(\frac{n-2}{2},n-2)$ and $\alpha\in(0,1)$.
Then there exist $\varepsilon>0$ and a constant $C>0$ such that for all $g\in B_{C^{2,\alpha}_{\tau}}(g_b,\varepsilon)$, the following $L^2_{\frac{n}{2}+1}$-\L{}ojasiewicz inequality holds:
\begin{equation*}
|\lambda_{\operatorname{ALE}}(g)|\leq C \|\mathop{\rm Ric}\nolimits(g)+\nabla^{g,2}f_g\|_{L^2_{\frac{n}{2}+1}}^{2}.
\end{equation*}
In particular, if $n\geq 5$ and $\tau\in(\frac{n}{2},n-2)$, then for any $0<\delta<\frac{2\tau-(n-2)}{2\tau-(n-4)}$, there exists $C>0$ such that for all $g\in B_{C^{2,\alpha}_\tau}(g_b,\varepsilon)$, we have the following $L^2$-\L{}ojasiewicz inequality:
$$ |\lambda_{\operatorname{ALE}}(g)|^{2-\theta}\leq C \|\nabla \lambda_{\operatorname{ALE}}(g)\|_{L^2}^{2}, \quad\theta:=2-\frac{1}{\delta}.$$
\end{theo}
\begin{proof}
Denote $W_{g_b}:= \{\bar{g}_v \;|\; v\in\ker_{L^2}L_{g_b} \text{ small}\}$, where the $\bar{g}_v$ are the metrics of Definition \ref{definition integrable}. The functional $\lambda_{\operatorname{ALE}}$ and its gradient vanish on $W_{g_b}$ since the metrics $\bar{g}_v$ are Ricci-flat, and the \L{}ojasiewicz inequality trivially follows on this space.
Let $g$ be $C^{2,\alpha}_\tau$-close enough to $g_b$. Then, by Proposition \ref{gauge fixing ALE integrable}, there exist a unique $\bar{g}_v\in W_{g_b}$ and a diffeomorphism $\phi:N\to N$ in the connected component of the identity whose infinitesimal generator belongs to $C^{3,\alpha}_{\tau-1}(TN)$ and such that
\begin{itemize}
\item $ \phi^*g - \bar{g}_v \perp_{L^2(\bar{g}_v)}\ker_{L^2(\bar{g}_v)}L_{\bar{g}_v} $, and
\item $\mathop{\rm div}\nolimits_{\bar{g}_v}\phi^*g = 0$.
\end{itemize}
Therefore, up to changing the reference Ricci-flat metric to a metric in $W_{g_b}$, and since $\lambda_{\operatorname{ALE}}$ and the $L^2$-norm of its gradient are invariant under pull-back by $\phi$ thanks to Proposition \ref{scaling diffeo tildelambda}, the situation is reduced to proving a \L{}ojasiewicz inequality on the orthogonal complement of the kernel, which is exactly the statement of Lemma \ref{loja orth noyau 1}.
\end{proof}
\section{Deformation of Ricci-flat ALE metrics, scalar curvature and mass}\label{sec-covid-mass}
Let us now present some applications of the functional $\lambda_{\operatorname{ALE}}$ introduced above and of its properties.
\subsection{Local positive mass theorems}~~\\
We begin with direct consequences of the maximizing properties of $\lambda_{\operatorname{ALE}}$ at Ricci-flat ALE metrics.
\begin{coro}\label{local positive mass}
Let $(N^n,g_b)$ be a Ricci-flat ALE metric which is either \emph{integrable} and \emph{stable}, or a local maximizer of $\lambda_{\operatorname{ALE}}$ with respect to the $C^{2,\alpha}_\tau$-topology. Then any deformation of $g_b$ small enough in $C^{2,\alpha}_\tau$ which has nonnegative and integrable scalar curvature in $C^0_{\tau'}$, for some $\tau'>n$, satisfies
$$m_{\operatorname{ADM}}(g)\geq 0,$$
with equality on Ricci-flat metrics only.
\end{coro}
\begin{proof}
Let $(N^n,g_b)$ be an integrable and stable Ricci-flat ALE metric. By Proposition \ref{local maximum stable integrable}, it is a local maximum of $\lambda_{\operatorname{ALE}}$ in the $C^{2,\alpha}_\tau$-topology (in the case of a local maximizer, this holds by assumption). Therefore, for any metric $g$ sufficiently $C^{2,\alpha}_\tau$-close to $g_b$, we have $\lambda_{\operatorname{ALE}}(g)\leq 0$, with equality only if the metric is Ricci-flat. Since $\mathop{\rm R}\nolimits_g\geq 0$ and $\mathop{\rm R}\nolimits_g\in L^1$, we have $\lambda_{\operatorname{ALE}}^0(g)\geq 0$. This implies that the mass is nonnegative: $$0\geq\lambda_{\operatorname{ALE}}(g) = \lambda_{\operatorname{ALE}}^0(g)-m_{\operatorname{ADM}}(g)\geq -m_{\operatorname{ADM}}(g).$$
Moreover, in case of equality, we necessarily have $\lambda_{\operatorname{ALE}}(g)= 0$, $\lambda_{\operatorname{ALE}}^0(g)= 0$ and $m_{\operatorname{ADM}}(g)=0$, and since the only local maximizers of $\lambda_{\operatorname{ALE}}$ are Ricci-flat, $g$ has to be Ricci-flat.
\end{proof}
\begin{rk}
Corollary \ref{local positive mass} is not a consequence of a previously known positive mass theorem. Actually, the positive mass theorem is known to be false in the ALE context \cite{LeB-Counter-Mass}.
\end{rk}
Finally, notice as in \cite{Hal-Has-Sie} that there are counter-examples to the rigidity part of the positive mass theorem among self-similar solutions to the Ricci flow. Indeed, Feldman, Ilmanen and Knopf \cite{Fel-Ilm-Kno} have constructed complete expanding gradient K\"ahler-Ricci solitons on the total spaces of the tautological line bundles $L^{-k}$, $k>n$, over $\mathbb{CP}^{n-1}$. These solutions on $L^{-k}$ are $U(n)$-invariant and asymptotic to the cone $C(\mathbb{S}^{2n-1}/\mathbb{Z}_k)$ endowed with the Euclidean metric $\frac{1}{2}i\partial\overline{\partial}\, |\cdot|^2$, where $\mathbb{Z}_k$ acts on $\mathbb{C}^n$ diagonally. The curvature tensor of these solitons decays exponentially fast to $0$ at infinity; in particular, these metrics are ALE and their mass vanishes. On the other hand, the scalar curvature of these metrics is positive everywhere.
\begin{rk}
We can define and control $\lambda_{\operatorname{ALE}}^0$ on the example of Feldman-Ilmanen-Knopf, denoted by $g_{\operatorname{FIK}}$. For any $s>0$, we have $\lambda_{\operatorname{ALE}}^0(sg) = s^{\frac{n}{2}- 1}\lambda_{\operatorname{ALE}}^0(g)$ by Lemma \ref{scaling lambdaALE}. Since their example is an expanding soliton, the Ricci flow starting at $g_{\operatorname{FIK}}$ therefore satisfies $\lambda_{\operatorname{ALE}}^0(g_{\operatorname{FIK}}(t)) = (1+ct)^{\frac{n}{2}- 1}\lambda_{\operatorname{ALE}}^0(g_{\operatorname{FIK}})>0$ for some $c>0$. This is in contrast with the compact situation, where a Ricci flow starting at a metric with positive $\lambda$-functional necessarily develops a finite-time singularity.
\end{rk}
\subsection{Global properties on spin manifolds}~~\\
On spin $4$-manifolds, the stability of Ricci-flat ALE metrics as maximizers of $\lambda_{\operatorname{ALE}}$ is ensured globally.
\begin{prop}\label{prop-spin-def-local}
Let $(N^4,g)$ be an ALE metric of order $\tau>1 = \frac{4-2}{2}$ on a spin manifold asymptotic to $\mathbb{R}^4\slash\Gamma$ for $\Gamma\subset SU(2)$.
Assume the scalar curvature $\mathop{\rm R}\nolimits_g$ is integrable and non-negative. Then, we have $$\lambda_{\operatorname{ALE}}(g)\leq 0,$$
with equality if and only if $(N^4,g)$ is a hyperk\"ahler (Ricci-flat) ALE metric.
\end{prop}
\begin{proof}
First of all, Lemma \ref{scaling lambdaALE} ensures the finiteness of $\lambda_{\operatorname{ALE}}(g)$ under such assumptions on the scalar curvature. Witten's formula \cite{wit} for the mass of spin asymptotically Euclidean manifolds, which was extended by Nakajima \cite{nak} to spin ALE metrics with group in $SU(2)$, states that there exists a Dirac spinor $\psi$, asymptotic to a constant spinor of unit norm, for which
\begin{equation}
m_{\operatorname{ADM}}(g)=\int_N(4|\nabla^g \psi|_g^2+\mathop{\rm R}\nolimits_g|\psi|_g^2)\,d\mu_g\label{witten formula}.
\end{equation}
Using $w=|\psi|_g$ as a test function and Kato's inequality $|\nabla^g \psi|_g\geq |\nabla^g |\psi|_g|_g$, we find a lower bound $$m_{\operatorname{ADM}}(g)\geq \lambda_{\operatorname{ALE}}^0(g)$$
similarly to \cite{Has-Per-Fct}.
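Spelling out the two bounds just combined, and assuming (as in the comparison with \cite{Has-Per-Fct}) that $w=|\psi|_g$ is an admissible test function for $\lambda_{\operatorname{ALE}}^0$ since $\psi$ is asymptotic to a unit-norm spinor, the chain of inequalities reads:

```latex
m_{\operatorname{ADM}}(g)
 = \int_N\big(4|\nabla^g \psi|_g^2+\mathop{\rm R}\nolimits_g|\psi|_g^2\big)\,d\mu_g
 \geq \int_N\big(4|\nabla^g w|_g^2+\mathop{\rm R}\nolimits_g\, w^2\big)\,d\mu_g
 \geq \lambda_{\operatorname{ALE}}^0(g),
```

where the first inequality is Kato's inequality and the second one is the definition of $\lambda_{\operatorname{ALE}}^0$ as an infimum over test functions.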
Now, according to \cite[(3.9)]{Cal-Gau-Her}, Dirac spinors satisfy a pointwise \emph{improved} Kato inequality: we have $\sqrt{1-\frac{1}{4}}\,|\nabla^g \psi|_g\geq|\nabla^g |\psi|_g|_g$. The equality $m_{\operatorname{ADM}}(g)= \lambda_{\operatorname{ALE}}^0(g)$ therefore implies that the spinor is parallel. One then concludes that the ALE metric $g$ under consideration is hyperk\"ahler: see \cite[Proof of Theorem 3.3]{nak} for a proof of this fact.
\end{proof}
\begin{rk}
We therefore recover that Ricci-flat ALE metrics on spin manifolds are \emph{stable} (they are actually hyperk\"ahler by \cite{nak}). Recall that it is a folklore conjecture (see \cite[Section $1$, $3)$]{Ban-Kas-Nak} for instance) that all $4$-dimensional simply connected ALE Ricci-flat metrics are hyperk\"ahler. In light of the results of this paper, it is tempting to study the stability of $4$-dimensional Ricci-flat ALE metrics as a first step towards the previous conjecture.
\end{rk}
\newpage
\IEEEPARstart{M}{ultiple} robot systems (\emph{\small MRS}) allow simpler, cheaper, modular robotic units to be reorganized into a group based on the task at hand. Such a system can be as effective as a task-specific, larger, monolithic robot, which may be more expensive and has to be rebuilt according to the task \cite{yang2018grand}. Inspired by nature \cite{parrish1999complexity}, the \emph{MRS} has evolved into a variety of forms, such as multiple mobile robots \cite{parker2016multiple}, multiple manipulators \cite{suarez2018can}, multiple drones \cite{chung2018survey} and multiple underwater robots \cite{branch2019front}, and has been applied in various scenarios such as construction \cite{petersen2019review}, transportation \cite{koung2021cooperative} and rescue \cite{tian2020search}.
Compared with a single robot, the fundamental challenge of the \emph{MRS} lies in the cooperation among robots. Cooperation may occur in several aspects, such as knowledge sharing and physical manipulation. In this paper, we focus on the motion planning of the \emph{MRS} when manipulating bulky objects. In the task shown in Fig. \ref{fig_sim_scene}, the robots need to cooperate to transport a large object and, first of all, to plan a path connecting the start and the goal while satisfying a set of user-specified constraints.
As a combination of a mobile robot and a manipulator, the mobile manipulator (\emph{MM}) inherits its mobility from the mobile robot and its dexterity from the manipulator \cite{khatib1999robots}. In the multiple mobile robots system, the robots are implicitly or explicitly required to maintain a specific formation to improve the efficiency of communication and collaboration \cite{oh2015survey}. In the multiple manipulators system, the manipulators are rigidly connected to achieve robust manipulation of the object \cite{marino2017distributed}. Therefore, both formation and closed-chain constraints arise in the multiple mobile manipulators system. Furthermore, compared with the mobile robot and the manipulator, the \emph{MM} is redundant. This implies that the same task at the end effector can be executed in different ways in the configuration space, which makes it possible to avoid forbidden regions and to optimize the robot configurations \cite{chiaverini2016redundant}.
Closed-chain, formation, redundancy and obstacles are inevitable factors for the motion planner of multiple mobile manipulators. Firstly, the closed-chain constraint implicitly defines a lower-dimensional manifold in the configuration space, hence the probability that a uniform sample will satisfy this constraint is zero \cite{berenson2011constrained, kingston2018sampling, qureshi2021constrained}. Moreover, obstacles in the environment will affect the connectivity of this manifold and bring additional challenges to the problem. Secondly, redundancy means that the motion of the system is under-constrained. Although it provides the chance to achieve extra behaviors, designing meaningful behaviors is nontrivial as they may vary significantly across applications \cite{ancona2017redundancy}\cite{navarro2017framework}. Finally, the influences of the redundancy and the closed-chain constraint on the formation constraint are closely coupled. Redundancy is the basis for formation optimization. For example, thanks to the coordination of the torso and arms, humans can transport an object while adjusting their formation. On the other hand, the closed-chain constraint and the limited reachable workspace of the manipulator greatly reduce the space of formation optimization, which is even worse in cluttered environments.
\subsection{Contribution}
To simultaneously deal with closed-chain and formation of the whole system, redundancy of the mobile manipulator and obstacles in the environment, this paper proposes a motion planning framework with the following innovations.
\begin{itemize}[leftmargin=*]
\item Hierarchical framework: the centralized layer plans the object's motion, and the formation of the system is optimized in the decentralized layer. A unique feature is that the closed-chain and obstacle-avoidance constraints, as well as the lower bound of the formation constraint, can be quickly checked in the centralized layer, which ensures compatibility with the decentralized layer. Moreover, the decentralized layer is distributed, and the redundancy of each robot can be explored independently in real-time. Therefore, the complex planning problem can be simplified without violating the constraints.
\item Diversified and efficient obstacle-avoidance strategies: on the basis of the closed-chain constraint, the system can either adjust the formation to bypass obstacles or adjust the self-motion of the redundant robots to cross obstacles. As a result, the motion planner exhibits excellent performance in complex environments, which is verified by comparing it with the benchmark planners.
\item The framework is unified and can be applied to different numbers of heterogeneous mobile manipulators. Moreover, to the best of our knowledge, this is the first time that six-dimensional object manipulation by multiple mobile manipulators has been achieved in cluttered real-world environments.
\end{itemize}
\subsection{Outline of the Paper}
The outline of this paper is as follows. In Section \ref{section_related_work}, related works on redundancy, closed-chain and formation constraints of the \emph{MRS} will be reviewed. Section \ref{section_problem_definition} will give a formal definition of the motion planning problem in detail. After that, we will introduce the proposed hierarchical framework in Section \ref{section_motion_planner}. The feasibility and superiority of the motion planner will be verified by simulation and real-world experiments in Section \ref{section_exp}. Finally, the paper is concluded in Section \ref{section_conclusion}.
\section{Related Works} \label{section_related_work}
The motion planners of multiple mobile manipulators in the literature are mostly inherited from those of its two subsystems: multiple manipulators and multiple mobile robots. The former focuses on the closed-chain constraint, while the latter emphasizes the formation constraint. In addition, these two classes of algorithms usually do not solve the redundancy explicitly. Therefore, in the following, we will first review the algorithms that deal with the redundancy of a single \emph{MM} and then extend to related works from two perspectives: the closed-chain constraint and the formation constraint.
\subsection{Redundancy of the Mobile Manipulator}
Redundancy of the \emph{MM} can be solved at position-level \cite{ancona2017redundancy}\cite{wiedmeyer2020real} or velocity-level \cite{seraji1998unified}\cite{chiaverini1997singularity}. For the former, redundancy parameters will be designed to restrict the motion of the \emph{MM}. For the latter, redundancy can be dealt with by task-augmentation and task-priority algorithms. Compared with the task-augmentation algorithm \cite{seraji1998unified}, task-priority algorithm \cite{chiaverini1997singularity} is superior when dealing with conflict among different tasks. In this algorithm, the additional task is only satisfied in the null space of the primary task, which gives a higher priority to the primary task. For example, in our previous work \cite{zhang2019task}, the motion of the end effector and the mobile robot were treated separately as the primary task and the additional task. The experimental results showed that the manipulator could grasp the object while the mobile robot was moving.
In the position-based and velocity-based algorithms, redundancy resolution will be transformed into nonlinear optimization problems about redundancy parameters and additional tasks, respectively. To speed up the optimization process and avoid local minimum, capability map ($\emph{CM}$) can be queried to determine the seed \cite{zhang2020novel}\cite{zhang2022cooperative}. $\emph{CM}$ was initially proposed in \cite{zacharias2007capturing}\cite{vahrenkamp2012manipulability} and used to handle the base placement of the \emph{MM} \cite{burget2015stance, chen2018dexterous, xu2020planning, wang2021optimal}. It encodes the distribution of the user-specified constraints and could guide the iterative direction of the optimization algorithm. The effectiveness and superiority of $\emph{CM}$ were verified in our previous work in the human-robot collaboration task \cite{zhang2022cooperative}. However, \emph{CM} has only been used in a single \emph{MM}, and how to combine it with the \emph{MRS} is still an open question. In this paper, we will extend it to multiple mobile manipulators to accelerate the proposed framework.
\subsection{Closed-Chain Constraint}\label{section_related_work_closed_chain}
Closed-chain constraint manifold has its dimension lower than that of the configuration space, hence uniform sampling in the configuration space has zero probability of generating a valid state that satisfies this constraint \cite{berenson2011constrained, kingston2018sampling, qureshi2021constrained}. One solution to this problem is introducing an allowable tolerance \cite{bonilla2017noninteracting}. Therefore, the volume of the satisfying subset grows, and various sampling-based motion planners for the unconstrained problem can be adopted. However, much of the complexity of handling the constraint transfers from the higher-level motion planner to the lower-level motion controller. In the case that multiple robots are rigidly connected, motion deviation caused by constraint relaxation may damage the robots and the transported object.
Another approach to resolve the closed-chain constraint is \emph{Projection} (\emph{PJ}). Several algorithms were developed based on it, such as $\emph{RGD}$ \cite{yakey2001randomized} and $\emph{CBiRRT}$ \cite{berenson2011task}. It iteratively projects a random sample onto the manifold according to the Jacobian-inverse gradient descent operation. Combining the projection operator with the workspace decomposition technique, \cite{zhang2021task} realized the motion planning of three mobile manipulators in a simulated environment. However, the constraint Jacobian is not guaranteed to be invertible all the time, and this approach relies heavily on the gradient descent operation, which is time-consuming for high-dimensional robots. To improve the computational efficiency, \emph{Atlas} (\emph{AT}) \cite{jaillet2017path}, which uses piece-wise tangent spaces to locally approximate the constraint manifold, and \emph{Tangent Bundle} (\emph{TB}) \cite{kim2016tangent}, which is similar to \emph{AT} but with lazy state checking, were proposed. However, to obtain the tangent space, the kernel of the constraint Jacobian has to be computed, which requires costly matrix decompositions and therefore limits the gain in computational efficiency.
In addition, inverse kinematics can be seen as a special projection operator. Compared with the Jacobian-based projection operator, it is more efficient, especially for robots with the analytical inverse kinematic solvers. For example, $\emph{RLG}$, which combines the random sampling technique with inverse kinematics, was developed to deal with the closed-chain constraint \cite{gharbi2008sampling}. $\emph{RLG}$ works well for the non-redundant multiple manipulators system. However, it is difficult to integrate with other constraints simultaneously. Moreover, the kinematic characteristic of the fixed base manipulator is different from the \emph{MM}, hence $\emph{RLG}$ is hard to extend to multiple mobile manipulators directly.
\subsection{Formation Constraint}
The formation constraint is widely studied in the field of multiple mobile robots, and typical algorithms are the leader-follower, behavioral and virtual structure approaches \cite{oh2015survey}. In the domain of multiple mobile manipulators, \cite{tang2018obstacle} defined a virtual structure named system outlined rectangle ($\emph{SOR}$), and the leader-follower approach was applied to plan the motion of the system, where the leader adjusted the width of the $\emph{SOR}$ according to the environment. Similarly, a convex region was defined in \cite{alonso2017multi}, and then the formation control problem was transformed into an optimization problem with respect to the convex region. However, the obstacle-avoidance strategies in \cite{tang2018obstacle, alonso2017multi, jiao2015transportation} are conservative. As long as the virtual structure (e.g., \emph{SOR} or convex region) intersects with obstacles, the system is considered in collision. As a result, $\emph{bypassing}$ becomes the only way to avoid obstacles, no matter how small they are. However, by utilizing the redundancy of the system, $\emph{crossing}$ obstacles is sometimes a more reasonable choice, as is common in human-human collaborative tasks.
Obstacle crossing is challenging and has rarely been studied before. It not only requires $\emph{cooperation}$ among different mobile manipulators but also requires $\emph{coordination}$ between the mobile robot and the manipulator. In our previous work, the traversability of obstacles was studied in a multiple mobile robots system with a deformable sheet \cite{hu2022multi}. The experimental results showed that the system could intelligently bypass or cross obstacles, which greatly increases the success rate of the motion planner in cluttered environments. Therefore, in this paper, we will extend this idea from multiple mobile robots to multiple mobile manipulators.
In summary, the coupling among redundancy, closed-chain, formation and obstacle-avoidance makes the motion planning of multiple mobile manipulators one of the most challenging problems in robotics. Although outstanding works have been done from a single perspective, e.g., task-priority algorithm \cite{chiaverini1997singularity} for redundancy, projection-based frameworks \cite{yakey2001randomized}\cite{berenson2011task}\cite{jaillet2017path}\cite{kim2016tangent} for the closed-chain constraint and virtual structure-based approach \cite{alonso2017multi} for the formation constraint, they are unable to solve the above challenges simultaneously. As a result, the system cannot fully take its advantage brought by multi-robot and redundancy, and is prone to failure in cluttered environments.
\section{PROBLEM DEFINITION} \label{section_problem_definition}
In this section, we derive the kinematic model of the system and define some mathematical notations for subsequent discussion. Moreover, a formal definition of the motion planning problem along with the analysis is presented afterward.
\subsection{Modeling of the Multiple Mobile Manipulators System}
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{figs/kinematic_model.pdf}
\caption{Kinematic model of the multiple mobile manipulators system.}
\label{fig_robot_model}
\end{figure}
Consider multiple mobile manipulators ($\emph{M}^3$) manipulating an object in Fig. \ref{fig_robot_model}. Let $O_wX_wY_wZ_w$, $O_{obj}X_{obj}Y_{obj}Z_{obj}$, $O_{b,i}X_{b,i}Y_{b,i}Z_{b,i}$ and $O_{e,i}X_{e,i}Y_{e,i}Z_{e,i}$ be the frames of the world, the object, the mobile base and the end effector of the $i$th \emph{MM}, where $i = 1,2,...,n$ and $n$ is the number of the robots in the system.
To simplify the model of the holonomic mobile robot, it is treated as an equivalent open chain \cite{zhang2019task}\cite{cheong2010consistent}. Therefore, the configuration of the $i$th mobile robot can be expressed as $\boldsymbol{q}_{b,i}\in \mathbb{R}^{n_{b,i}}$, where $n_{b,i}$ is the number of degrees of freedom ($\emph{DOF}$) of the mobile robot. In most cases $n_{b,i}=3$, as the holonomic mobile robot can move and rotate in all directions on the ground. The configuration of the $i$th manipulator is $\boldsymbol{q}_{a,i}\in \mathbb{R}^{n_{a,i}}$, where $n_{a,i}$ represents its $\emph{DOF}$. Therefore, the configuration space of the $i$th \emph{MM} can be formally defined as $\mathcal{C}_{\sss MM}^i \subseteq \mathbb{R}^{n_{a,i} + n_{b,i}}$, which is the set of configurations of the $i$th \emph{MM} $\boldsymbol{q}_{i} = (\boldsymbol{q}_{b,i}^T, \boldsymbol{q}_{a,i}^T)^T$.
To represent the homogeneous transformation of frame $i$ with respect to frame $j$, we define $\boldsymbol{X}_i^j \in SE(3)$ and its minimum representation $\boldsymbol{t}_i^j = ({\boldsymbol{p}_i^j}^{\sss T},{\boldsymbol{\alpha}_i^j}^{\sss T})^T \in \mathbb{R}^6$, in which ${\boldsymbol{p}_i^j}$ denotes the relative position, and ${\boldsymbol{\alpha}_i^j}$ denotes a minimum description of the orientation. Therefore, the configuration space of the object is defined as $\mathcal{C}_{obj} \subseteq \mathbb{R}^6$, which is the set of the object's poses with respect to the world frame $\boldsymbol{t}_{obj}^w$.
Let the configuration space of the system $\mathcal{C}_{\sss M^3} = \mathcal{C}_{\sss MM}^1 \times ... \times \mathcal{C}_{\sss MM}^i \times ... \times \mathcal{C}_{\sss MM}^n \times \mathcal{C}_{obj} \subseteq \mathbb{R}^{\sum_{i=1}^{n}{(n_{a,i} + n_{b,i})+6}}$ be the set of $\boldsymbol{c} = (\boldsymbol{q}_1^T, ..., \boldsymbol{q}_i^T, ..., \boldsymbol{q}_n^T, {\boldsymbol{t}_{obj}^w}^T)^T$. The forward kinematics of the $i$th \emph{MM} at position-level and velocity-level are defined as Eq. (\ref{eq_kin}) and Eq. (\ref{eq_diff_kin}), respectively.
\begin{equation}
\label{eq_kin}
\boldsymbol{t}{_{e,i}^w} = f_k(\boldsymbol{q}_{i})
\end{equation}
\begin{equation}
\label{eq_diff_kin}
\dot{\boldsymbol{t}}{_{e,i}^w} = \frac{\partial{f_k(\boldsymbol{q}_{i})}} {\partial{\boldsymbol{q}_{i}}} \dot{\boldsymbol{q}}_i
= \boldsymbol{J}_i(\boldsymbol{q}_{i}) \dot{\boldsymbol{q}}_i
\end{equation}
where $f_k(.)$ is the forward kinematic operator, and $\boldsymbol{J}_i(\boldsymbol{q}_{i}) \in \mathbb{R}^{6 \times ({n_{a,i} + n_{b,i}})}$ is the analytical Jacobian matrix of the $i$th \emph{MM}. When a six-$\emph{DOF}$ manipulator is mounted on a mobile robot, $\boldsymbol{J}_i(\boldsymbol{q}_{i})$ has more columns than rows, leading to the redundancy of the \emph{MM}.
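To make the redundancy in Eq. (\ref{eq_diff_kin}) concrete, the following sketch evaluates a numerical Jacobian for a hypothetical planar mobile manipulator (a holonomic base $(x,y,\theta)$ carrying a two-link arm; the link lengths are assumed for illustration and this is not one of the platforms used later) and checks that it has more columns than rows:

```python
import numpy as np

def fk(q):
    """Toy forward kinematics f_k of Eq. (1) in the plane:
    q = (x, y, theta, q1, q2) -> end-effector pose (px, py, phi)."""
    x, y, th, q1, q2 = q
    l1, l2 = 0.5, 0.4                # assumed link lengths
    a1 = th + q1                     # absolute angle of link 1
    a2 = a1 + q2                     # absolute angle of link 2
    px = x + l1 * np.cos(a1) + l2 * np.cos(a2)
    py = y + l1 * np.sin(a1) + l2 * np.sin(a2)
    return np.array([px, py, a2])    # phi coincides with a2

def jacobian(q, eps=1e-6):
    """Central-difference approximation of the analytical Jacobian of Eq. (2)."""
    J = np.zeros((3, len(q)))
    for k in range(len(q)):
        dq = np.zeros(len(q))
        dq[k] = eps
        J[:, k] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([0.0, 0.0, 0.1, 0.3, -0.2])
J = jacobian(q)
print(J.shape)                       # (3, 5): more columns than rows
print(np.linalg.matrix_rank(J))      # 3: full row rank, 2-dim null space
```

The two-dimensional null space of $J$ is exactly the self-motion explored by the redundancy resolution schemes reviewed in Section \ref{section_related_work}.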
\subsection{Closed-Chain Constraint}\label{section_model_ccc}
Given a random sample $\boldsymbol{c} = (\boldsymbol{q}_1^T, ..., \boldsymbol{q}_i^T, ..., \boldsymbol{q}_n^T, {\boldsymbol{t}_{obj}^w}^T)^T$, we define the vector of the end effector's poses as $\boldsymbol{E} = (\boldsymbol{t}{_{e,1}^w}^T,..., \boldsymbol{t}{_{e,i}^w}^T, ..., \boldsymbol{t}{_{e,n}^w}^T)^T \in \mathbb{R}^{6n}$ where $\boldsymbol{t}{_{e,i}^w} = f_k(\boldsymbol{q}_{i})$. The vector of the grasping poses on the object is denoted as $\boldsymbol{G} = (\boldsymbol{t}{_{g,1}^w}^T,..., \boldsymbol{t}{_{g,i}^w}^T, ..., \boldsymbol{t}{_{g,n}^w}^T)^T \in \mathbb{R}^{6n}$, in which $\boldsymbol{t}{_{g,i}^w}$ represents the $i$th grasping pose with respect to the world frame. For convenience, a projection $\pi: \mathcal{C}_{\sss M^3} \rightarrow \mathcal{C}_{obj}$ is defined so that $\pi(\boldsymbol{c}) = \boldsymbol{t}_{obj}^w$. Assuming the manipulated object is rigid, $\boldsymbol{G}$ can be easily derived by the geometric transformation $g: \mathcal{C}_{obj} \rightarrow \mathbb{R}^{6n}$ in Eq. (\ref{eq_grasp}).
\begin{equation}
\label{eq_grasp}
\boldsymbol{G} = g(\pi(\boldsymbol{c}))
\end{equation}
When a rigid grasp exists between $\boldsymbol{t}{_{e,i}^w}$ and $\boldsymbol{t}{_{g,i}^w}$ for all $i = 1,2,...,n$, the closed-chain constraint ($\emph{C}^3$) forms, and it can be formally described by $f_{\sss C^3}: \mathcal{C}_{\sss M^3} \rightarrow \mathbb{R}^{6n}$ in Eq. (\ref{eq_ccc1}).
\begin{equation}
\label{eq_ccc1}
f_{\sss C^3}(\boldsymbol{c}) = \boldsymbol{E} - \boldsymbol{G} =\boldsymbol{0}
\end{equation}
Therefore, the set of configurations that satisfy the closed-chain constraint is denoted as $\mathcal{C}_{\sss C^3} \subseteq \mathcal{C}_{\sss M^3}$ in Eq. (\ref{eq_ccc2}).
\begin{equation}
\label{eq_ccc2}
\mathcal{C}_{\sss C^3} = \{ \boldsymbol{c} | \boldsymbol{c} \in \mathcal{C}_{\sss M^3}, f_{\sss C^3}(\boldsymbol{c}) = \boldsymbol{0} \}
\end{equation}
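The membership test in Eq. (\ref{eq_ccc2}) can be sketched numerically. The snippet below is a planar toy example with assumed grasp offsets (the paper's setting is the full $SE(3)$ case): it stacks the residual $f_{\sss C^3}(\boldsymbol{c}) = \boldsymbol{E}-\boldsymbol{G}$ of Eq. (\ref{eq_ccc1}) for two robots and checks that it vanishes when each end effector coincides with its grasp pose:

```python
import numpy as np

def rot(a):
    """2D rotation matrix."""
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def grasp_poses(t_obj, offsets):
    """g(pi(c)) of Eq. (3): rigid-body transform of the grasp frames,
    with assumed (position, yaw) offsets expressed in the object frame."""
    o, a = t_obj[:2], t_obj[2]
    return [np.hstack([o + rot(a) @ r, a + g]) for r, g in offsets]

def closed_chain_residual(ee_poses, t_obj, offsets):
    """f_{C^3}(c) = E - G of Eq. (4), stacked over the n robots."""
    G = grasp_poses(t_obj, offsets)
    return np.hstack([e - g for e, g in zip(ee_poses, G)])

# two robots grasping opposite sides of the object (illustrative numbers)
offsets = [(np.array([0.6, 0.0]), np.pi),
           (np.array([-0.6, 0.0]), 0.0)]
t_obj = np.array([1.0, 2.0, 0.0])
E = grasp_poses(t_obj, offsets)      # end effectors exactly on the grasps
res = closed_chain_residual(E, t_obj, offsets)
print(np.allclose(res, 0.0))         # True: c lies on C_{C^3}
```

In practice, $\|f_{\sss C^3}(\boldsymbol{c})\|$ below a small tolerance is used as the on-manifold test, since floating-point states never satisfy the constraint exactly.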
\subsection{Formation Constraint}
The fundamental difference between the closed-chain constraint ($\emph{C}^3$) and the formation constraint ($\emph{FC}$) is that $\emph{C}^3$ is ``hard'' but $\emph{FC}$ is ``soft'': ``hard'' constraints have to be satisfied exactly everywhere, while ``soft'' constraints are usually described by cost functions and optimized as much as possible. Therefore, $\emph{FC}$ can be transformed into an optimization problem and formally defined as follows.
\begin{equation}
\begin{aligned}
\label{eq_formation}
&\max_{\boldsymbol{c}} \quad f_{\sss FC}(\boldsymbol{c})\\
&\begin{array}{r@{\quad}r@{}l@{\quad}l}
s.t. &\boldsymbol{c} \in \mathcal{C}_{\sss C^3}\cap \mathcal{C}_{free}
\end{array}
\end{aligned}
\end{equation}
where $\mathcal{C}_{free}$ represents the set of collision-free configurations with appropriate dimensions. $f_{\sss FC}: \mathcal{C}_{\sss C^3} \rightarrow \mathbb{R}$ is the formation metric of the system. Its design philosophy and expression will be illustrated in Section \ref{section_motion_planner-decentralized} in detail.
\subsection{Problem Definition}
In addition to the notations above, we define a mapping $\Pi: \mathcal{C}_{obj} \rightarrow \mathcal{C}_{\sss C^3}$ so that given a pose of the object $\boldsymbol{t}_{obj}^w$, $\Pi(\boldsymbol{t}_{obj}^w) = \{ \boldsymbol{c} | \boldsymbol{c} \in \mathcal{C}_{\sss C^3}\cap \mathcal{C}_{free}, \pi(\boldsymbol{c}) = \boldsymbol{t}_{obj}^w \}$. It is the set of configurations that satisfy the obstacle-avoidance and closed-chain constraints while keeping the object's pose fixed. Therefore, the motion planning of multiple redundant mobile manipulators under closed-chain, formation and obstacle-avoidance constraints is defined as follows.
\textbf{Problem Definition}: given a start configuration $\boldsymbol{c}_{start} \in \mathcal{C}_{\sss C^3}\cap \mathcal{C}_{free}$ and a target pose of the object $\boldsymbol{t}_{obj}^w$, find a path $\tau: [0,1] \rightarrow \mathcal{C}_{\sss C^3}\cap \mathcal{C}_{free}$ such that 1) $\tau(0) = \boldsymbol{c}_{start}$; 2) $\tau(1) \in \Pi(\boldsymbol{t}_{obj}^w)$; and 3) the formation metric $f_{\sss FC}$ is optimized throughout $\tau$.
\subsection{Problem Analysis} \label{section_problem_definition_problemanalysis}
The key to handling the closed-chain constraint is to design efficient samplers, since uniform sampling cannot work directly. As discussed above, \emph{PJ}, \emph{AT} and \emph{TB} are centralized frameworks that iteratively project a random state $\boldsymbol{c} \in \mathcal{C}_{\sss M^3}$ onto a new state $\boldsymbol{c}_{new} \in \mathcal{C}_{\sss C^3}$, which is computationally expensive. Unlike these projection-based frameworks, our motion planner does not calculate $\boldsymbol{q}_i$ explicitly when dealing with the closed-chain constraint. Instead, we define the allowed sampling region for the mobile robot and query the capability map of the manipulator to quickly check whether $\pi(\boldsymbol{c})$ is extensible to a valid composite configuration.
Ideally, one would optimize the formation at each sample and generate a globally optimal path with respect to $f_{\sss FC}$. However, this is hard to achieve both technically and theoretically. On the one hand, most samples are not on the final path connecting $\tau(0)$ and $\tau(1)$, hence optimizing $f_{\sss FC}$ at these samples is unnecessary and time-consuming. On the other hand, convergence to optimality is not guaranteed for existing optimal sampling-based motion planners such as $\emph{RRT}^*$ \cite{karaman2011sampling} when optimizing objectives other than path length.
Therefore, we resolve the complex motion planning problem in a hierarchical way. The centralized layer plans the motion of the object, and the formation optimization is deferred to the decentralized layer. Unlike the decoupled framework in \cite{hekmatfar2014cooperative}, the ``hard'' closed-chain constraint $f_{\sss C^3}$ and the lower bound of the ``soft'' constraint $f_{\sss FC}$ are guaranteed in our centralized layer to avoid conflicts with the decentralized layer. In this way, the complex motion planning problem of multiple mobile manipulators is simplified without decreasing the success rate or violating the constraints.
\section{HIERARCHICAL MOTION PLANNING FRAMEWORK} \label{section_motion_planner}
\subsection{Overview of the Hierarchical Framework}
\begin{figure*}[ht]
\centering
\includegraphics[width=16.4cm]{figs/overview.PDF}
\caption{Overview of the hierarchical motion planner. The centralized layer receives the motion planning request and computes the motion of the object. Collision constraint, closed-chain constraint $f_{\sss C^3}(\boldsymbol{c})$ and the lower bound of the formation constraint $f_{\sss FC}(\boldsymbol{c})$ are checked in this layer to ensure the motion of the object is executable by the physical system. In addition, collision map, which represents the information of the environment, and capability map, which encodes the distribution of the formation constraint, are queried to speed up the motion planner. The decentralized layer, which can be deployed in a distributed way in real-time, consists of the formation optimizer and the task-priority motion controller. The former takes the object's motion as input and optimizes the formation and dexterous configuration of each mobile manipulator. The task-priority motion controller runs the feedback algorithm and sends the desired joint motion command to the lower-level hardware.}
\label{fig_overview}
\end{figure*}
An overview of the hierarchical motion planning framework is shown in Fig. \ref{fig_overview}. The centralized layer receives the motion planning request from the user and plans the motion of the object. The collision constraint, the closed-chain constraint and the lower bound of the formation constraint are checked for each sample to ensure the object's motion is executable by the physical system. Moreover, the collision map and the capability map are queried to accelerate the checking process. The decentralized layer makes full use of the computing resources on each \emph{MM} and can be executed in a distributed way in real-time. It consists of two components: the formation optimizer and the task-priority motion controller. Taking the motion of the object as input, the former optimizes the formation and dexterous configuration of each robot as much as possible. To track the desired motion on a real redundant robot, the task-priority motion controller sends the joint motion commands to the lower-level hardware. The details of the framework are explained as follows. Its feasibility and superiority will be verified in Section \ref{section_exp}.
\subsection{Centralized Layer}\label{section_motion_planner-cenlayer}
\IncMargin{1em}
\begin{algorithm}
\caption{\emph{Object Motion Planner}}\label{alg1}
\SetKwProg{Fn}{Function}{}{}
\Fn{\texttt{\emph{plan}}($\boldsymbol{t}_{start}$, $\boldsymbol{t}_{goal}$, $t_{obj}$)}
{
\If(){\texttt{\emph{validChecking}}($\boldsymbol{t}_{goal}$) \emph{is false}}
{
\KwRet fail\;
}
$\mathcal{T}_s.\texttt{init}(\boldsymbol{t}_{start})$\;
$\mathcal{T}_g.\texttt{init}(\boldsymbol{t}_{goal})$\;
\For{$t\leftarrow 0$ \KwTo $t_{obj}$}
{
$\boldsymbol{t}_{rand}$ $\leftarrow$ \texttt{sampleValidObjConfig}($\mathcal{T}_s$)\;
$\mathcal{T}_s.\texttt{add}(\boldsymbol{t}_{rand})$\;
\If(){\texttt{\emph{connect}}($\mathcal{T}_s, \mathcal{T}_g$) \emph{is success}}
{
\KwRet \texttt{extractPath}($\mathcal{T}_s, \mathcal{T}_g$)\;
}
\texttt{swap}($\mathcal{T}_s, \mathcal{T}_g$)\;
}
\KwRet fail\;
}
\Fn{\texttt{\emph{sampleValidObjConfig}}($\mathcal{T}_s$)}
{
\For{$j\leftarrow 1$ \KwTo maxSample}
{
$\boldsymbol{t}_{rand}$ $\leftarrow$ \texttt{sampleAndExtend}($\mathcal{T}_s$)\;
\If(){\texttt{\emph{validChecking}}($\boldsymbol{t}_{rand}$) \emph{is true}}
{
\KwRet $\boldsymbol{t}_{rand}$\;
}
}
\KwRet none\;
}
\Fn{\texttt{\emph{validChecking}}($\boldsymbol{t}_{rand}$)}
{
\If{$\boldsymbol{t}_{rand}$ \emph{is in collision}}
{
\KwRet false\;
}
\tcp{compute G according to Eq.(\ref{eq_grasp})}
$\boldsymbol{G}$ $\leftarrow$ \texttt{getGraspingPoses}($\boldsymbol{t}_{rand}$)\;
\For{\textbf{\emph{each}} $\boldsymbol{t}_{g,i}^w$ \textbf{\emph{in}} $\boldsymbol{G}$}
{
\tcp{defined in Alg.\ref{alg3}}
\If{\texttt{\emph{sampleInASR}}($\boldsymbol{t}_{g,i}^w$) \emph{is none}}
{
\KwRet false\;
}
}
\KwRet true\;
}
\end{algorithm}\DecMargin{1em}
The sampler is designed as a primitive in the centralized layer. Therefore, working with different sampling-based algorithms, such as $\emph{PRM}$ \cite{kavraki1996probabilistic} and $\emph{EST}$ \cite{hsu1997path}, becomes a trivial task. We will introduce the sampler in the structure of $\emph{RRTConnect}$ algorithm \cite{kuffner2000rrt} in the following.
For brevity of notation, we omit the superscript and subscript of $\boldsymbol{t}_{obj}^w$ in this part. For example, $\boldsymbol{t}_{start}$, $\boldsymbol{t}_{goal}$ and $\boldsymbol{t}_{rand}$ represent the start, goal and random poses of the object with respect to the world frame, respectively. Some key functions in Alg. \ref{alg1} are explained as follows.
\begin{itemize}[leftmargin=*]
\item $\texttt{plan}(\boldsymbol{t}_{start}, \boldsymbol{t}_{goal}, t_{obj})$ generates the path of the object once the planning request and the allowed time are specified. Before planning, the validity of $\boldsymbol{t}_{goal}$ is checked, and two trees are initialized: $\mathcal{T}_s$ starts from $\boldsymbol{t}_{start}$ and $\mathcal{T}_g$ starts from $\boldsymbol{t}_{goal}$. Then a random valid pose $\boldsymbol{t}_{rand}$ is sampled and added to the tree $\mathcal{T}_s$. In each iteration, the two trees try to connect with each other by $\texttt{connect()}$ and extract the valid path by $\texttt{extractPath()}$. On the real robot, the path should be post-processed and time-parameterized to obtain a smooth object motion. If the two trees fail to connect, they are swapped and continue to grow in the next iteration until a path is found or the allowed time $t_{obj}$ is exceeded.
\item $\texttt{sampleValidObjConfig}(\mathcal{T}_s)$ is the sampler of the centralized layer. It generates valid object poses that are connectable to $\mathcal{T}_s$. $\texttt{sampleAndExtend()}$ is a standard operation in sampling-based algorithms: it first generates a random pose by a uniform or Gaussian sampler, then searches for the nearest pose in $\mathcal{T}_s$ and extends it toward $\boldsymbol{t}_{rand}$ by the given step size. If $\boldsymbol{t}_{rand}$ is valid, it is returned.
\item $\texttt{validChecking}(\boldsymbol{t}_{rand})$ checks whether $\boldsymbol{t}_{rand}$ can be extended to a composite configuration $\boldsymbol{c}_{rand} \in \mathcal{C}_{\sss C^3}\cap \mathcal{C}_{free}$ so that $\pi(\boldsymbol{c}_{rand}) = \boldsymbol{t}_{rand}$. Given the pose of the object, the vector of grasping poses $\boldsymbol{G}$ is calculated based on Eq. (\ref{eq_grasp}). For each pose $\boldsymbol{t}_{g,i}^w$ in $\boldsymbol{G}$, the allowed sampling region is sampled to check whether the closed-chain constraint and the lower bound of the formation constraint can be satisfied for the corresponding \emph{MM}. When these conditions are met for all $i = 1,2,...,n$, $\boldsymbol{t}_{rand}$ is regarded as a valid sample. The details of $\texttt{sampleInASR()}$ are introduced in the following.
\end{itemize}
\subsubsection{Allowed Sampling Region}
\begin{figure}[ht]
\centering
\includegraphics[width=8.7cm]{figs/confliction.pdf}
\caption{(a). Although the pose of the object is collision-free, it exceeds the workspace of the robots, and the closed-chain constraint cannot be satisfied. (b). Although a configuration can be found to satisfy the closed-chain constraint, one of the robots must be in an awkward configuration due to the obstacle. Therefore, such samples should be discarded, as they cannot be executed or lead to poor and unstable formations in the decentralized layer.}
\label{fig_poor_formation}
\end{figure}
When planning the motion of the object, in addition to meeting the ``hard'' closed-chain constraint $f_{\sss C^3}(\boldsymbol{c})$, it is necessary to ensure the lower bound of the ``soft'' formation constraint $f_{\sss FC}(\boldsymbol{c})$. Otherwise, the centralized layer may conflict with the decentralized layer. For example, although the object's pose is collision-free in Fig. \ref{fig_poor_formation}(a), it exceeds the workspace of the robots, and the closed-chain constraint cannot be satisfied. Similarly, for the object's pose in Fig. \ref{fig_poor_formation}(b), although a configuration can be found to satisfy the closed-chain constraint, one of the robots must be in an awkward configuration due to obstacles. Therefore, the poses of the object in these cases should be discarded as they are inexecutable or lead to poor and unstable formations in the decentralized layer.
To check the validity of $\boldsymbol{t}_{g,i}^w$, we define the allowed sampling region ($\emph{ASR}$) for each mobile robot in Eq. (\ref{eq_asr}).
\begin{equation}
\label{eq_asr}
\begin{split}
ASR(\boldsymbol{t}_{g,i}^w, & \emph{thres}) = \{ \boldsymbol{q}_{b,i} \ | \ \exists \boldsymbol{q}_{a,i} \ \emph{s.t.} \ f_k(\boldsymbol{q}_{i}) = \boldsymbol{t}_{g,i}^w \\
& \ \emph{and} \ f_{{\sss FC},i}(\boldsymbol{q}_{i}) \geq \emph{thres} \ \emph{and} \ \boldsymbol{q}_{i} \in \mathcal{C}_{free} \}
\end{split}
\end{equation}
where $f_{{\sss FC},i}(\boldsymbol{q}_{i})$ is the formation metric of the $i$th \emph{MM}, and $\emph{thres}$ is the allowed threshold of this metric. $f_{{\sss FC},i}(\boldsymbol{q}_{i})$ is a component of $f_{\sss FC}(\boldsymbol{c})$, and their expressions are defined in Eq. (\ref{eq_formation1}) and Eq. (\ref{eq_formation2}). Therefore, we can check whether there exists $\boldsymbol{q}_{b,i} \in ASR(\boldsymbol{t}_{g,i}^w, \emph{thres})$ to determine the validity of $\boldsymbol{t}_{g,i}^w$.
$\emph{ASR}$ changes with $\boldsymbol{t}_{g,i}^w$ and $\emph{thres}$. Due to the complexity of multiple mobile manipulators, it is usually obtained by sampling. According to the definition of $\emph{ASR}$, given a random sample $\boldsymbol{q}_{b,i}$, $\boldsymbol{q}_{a,i}$ has to be computed by the inverse kinematics of the manipulator, and the following conditions should be checked. We name this method $\emph{IKCL}$ (Inverse Kinematics-based Centralized Layer).
\begin{itemize}[leftmargin=*]
\item closed-chain constraint: $f_k(\boldsymbol{q}_{i}) = \boldsymbol{t}_{g,i}^w$
\item lower bound of the formation constraint: $f_{{\sss FC},i}(\boldsymbol{q}_{i}) \geq \emph{thres}$
\item collision constraint: $\boldsymbol{q}_{i} \in \mathcal{C}_{free}$
\end{itemize}
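A compact sketch of this per-sample validity test follows; the IK solver, formation metric, and collision checker are hypothetical stand-ins for the paper's actual modules.

```python
# Sketch of the IKCL validity test for one base sample q_b.
# solve_ik, formation_metric, and is_collision_free are hypothetical
# stand-ins, not the paper's actual modules.

def ikcl_valid(q_b, t_g_w, thres, solve_ik, formation_metric, is_collision_free):
    """Return the full configuration (q_b, q_a) if q_b is valid, else None."""
    q_a = solve_ik(q_b, t_g_w)          # closed-chain: f_k(q_i) = t_g_w
    if q_a is None:
        return None
    q_i = (q_b, q_a)
    if formation_metric(q_i) < thres:   # lower bound of the formation constraint
        return None
    if not is_collision_free(q_i):      # collision: q_i in C_free
        return None
    return q_i
```

The three early returns mirror the three bullet points: each sample is discarded as soon as one condition fails, so the expensive later checks only run for promising samples.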
Although computing $\boldsymbol{q}_{a,i}$ and checking these conditions for one sample is fast, this module is invoked frequently in $\texttt{sampleInASR()}$; hence reducing its time cost will improve the performance of Alg. \ref{alg1} significantly. In the following, a novel tool named the capability map ($\emph{CM}$) is developed to speed up this module. Querying $\emph{CM}$ avoids computing $\boldsymbol{q}_{a,i}$ and $f_{{\sss FC},i}(\boldsymbol{q}_{i})$ while automatically satisfying $f_k(\boldsymbol{q}_{i}) = \boldsymbol{t}_{g,i}^w$ and $\boldsymbol{q}_{i} \in \mathcal{C}_{free}$. We name this method $\emph{CMCL}$ (Capability Map-based Centralized Layer).
\subsubsection{Capability Map}\label{section_CM}
$\emph{CM}$ was initially proposed to solve the base placement problem when the mobile robot and the manipulator move asynchronously \cite{burget2015stance, chen2018dexterous, xu2020planning, wang2021optimal}. It was improved to guide the coordinated motion planning of the \emph{MM} in our previous works \cite{zhang2020novel}\cite{zhang2022cooperative}. $\emph{CM}$ stores the configurations and/or the poses of the end effector along with the corresponding index. It is built in advance and then queried online to reduce the computational burden. Using offline data structures to accelerate online computation is also common in other robotics fields. For example, \c{S}ucan et al. generated a precomputed database of constraint-satisfying states, in which planning algorithms can sample to obtain valid states quickly \cite{csucan2012motion}. Due to its high efficiency, this approach has been merged into $\emph{MoveIt}$ \cite{coleman2014reducing}, a popular open-source motion planning framework.
\IncMargin{1em}
\begin{algorithm}
\caption{\emph{Construct and Query the Capability Map}}\label{alg2}
\SetKwProg{Fn}{Function}{}{}
\Fn{\texttt{\emph{constructCM}}(discreteRes, thres)}
{
\emph{traMap} $\leftarrow$ \texttt{discreteTra}(\emph{discreteRes})\;
\emph{rotMap} $\leftarrow$ \texttt{discreteRot}(\emph{discreteRes})\;
\While{j $<$ pointNumberInTraMap}
{
\While{k $<$ pointNumberInRotMap}
{
$\boldsymbol{t}_{e,i}^{b,i} \leftarrow$ \texttt{getPose}(\emph{traMap}$_j$, \emph{rotMap}$_k$)\;
\If(){$\boldsymbol{q}_i \leftarrow$ \texttt{\emph{getIK}}($\boldsymbol{t}_{e,i}^{b,i}$) \emph{is false}}
{
\Continue
}
$f_{{\sss FC},i} \leftarrow$ \texttt{getFormationMetric}($\boldsymbol{q}_i$)\;
\If(){\texttt{\emph{selfColl}}($\boldsymbol{q}_{i}$) \emph{or} $f_{{\sss FC},i} < $ thres }
{
\Continue
}
\emph{CM} $\leftarrow$ \texttt{push}($\boldsymbol{t}_{e,i}^{b,i}$, $f_{{\sss FC},i}$)\;
$k \leftarrow k+1$\;
}
$j \leftarrow j+1$\;
}
}
\Fn{\texttt{\emph{queryCM}}($\boldsymbol{t}_{e,i}^{b,i}$)}
{
\If(){$\boldsymbol{t}_{round}$ $\leftarrow$ \texttt{\emph{round}}($\boldsymbol{t}_{e,i}^{b,i}$) \emph{not in \emph{CM}}}
{
\KwRet none\;
}
\KwRet $\emph{CM}.\texttt{at}(\boldsymbol{t}_{round})$\;
}
\end{algorithm}\DecMargin{1em}
The relationship among $O_wX_wY_wZ_w$, $O_{b,i}X_{b,i}Y_{b,i}Z_{b,i}$ and $O_{e,i}X_{e,i}Y_{e,i}Z_{e,i}$ can be expressed as Eq. (\ref{eq_forward_kinematics}).
\begin{equation}
\label{eq_forward_kinematics}
\boldsymbol{X}_{e,i}^{b,i} = (\boldsymbol{X}_{b,i}^{w})^{-1}\boldsymbol{X}_{e,i}^{w}
\end{equation}
In the mobile manipulation task, although $\boldsymbol{X}_{e,i}^{w}$ and $\boldsymbol{X}_{b,i}^{w}$ may change significantly as the mobile robot moves around, $\boldsymbol{X}_{e,i}^{b,i}$ must remain within the reachable workspace of the manipulator. Therefore, the distribution of $\boldsymbol{X}_{e,i}^{b,i}$ and the corresponding collision/formation metrics can be treated as prior information and stored in advance to construct $\emph{CM}$.
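With poses written as $4\times4$ homogeneous transforms (an illustrative assumption), Eq. (\ref{eq_forward_kinematics}) is a single matrix product:

```python
import numpy as np

def relative_ee_pose(X_b_w, X_e_w):
    """X_e^b = (X_b^w)^{-1} X_e^w: end effector pose in the base frame."""
    return np.linalg.inv(X_b_w) @ X_e_w
```

However far the base travels in the world frame, this relative pose stays inside the manipulator's reachable workspace, which is what makes it a suitable key for $\emph{CM}$.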
$\emph{CM}$ can be constructed by configuration space or Cartesian space sampling. The former is faster than the latter, as the inverse kinematics of the manipulator must be solved for each sample in Cartesian space. However, configuration space sampling leads to a nonuniform distribution of end effector poses, which degrades the querying process. Moreover, $\emph{CM}$ is constructed offline only once, hence the build time is acceptable in most cases. Therefore, Cartesian space sampling is preferred in this work. A comprehensive comparison between the two is available in \cite{porges2014reachability}.
In addition, various kinds of helpful information about $\boldsymbol{X}_{e,i}^{b,i}$ and $\boldsymbol{q}_{a,i}$ can be stored in $\emph{CM}$. For example, the dexterity of the manipulator was saved in \cite{vahrenkamp2012manipulability}, and \cite{burget2015stance} stored statically stable configurations for a humanoid robot. In this paper, we are interested in the formation metric and self-collision information of the \emph{MM}, so they are saved in $\emph{CM}$.
Alg. \ref{alg2} shows the pseudo-code to construct and query $\emph{CM}$, and the key functions are explained as follows.
\begin{itemize}[leftmargin=*]
\item $\texttt{constructCM}(\emph{discreteRes, thres})$: the workspace is first discretized according to the given resolution. For each $\boldsymbol{t}_{e,i}^{b,i}$, the inverse kinematics of the manipulator is invoked, and the formation metric $f_{{\sss FC},i}$ is computed based on Eq. (\ref{eq_formation1}). If $\boldsymbol{q}_i$ is not in self-collision and $f_{{\sss FC},i}$ is larger than the given threshold $\emph{thres}$, $\boldsymbol{t}_{e,i}^{b,i}$ and $f_{{\sss FC},i}$ are added to $\emph{CM}$. Poses with low $f_{{\sss FC},i}$ are discarded as they waste storage space and would lead to poor formations in the querying process. $\emph{CM}$ with different $\emph{thres}$ is shown in Fig. \ref{fig_cm_and_asr}(a). Colors represent the mean $f_{{\sss FC},i}$ of all 6$\emph{D}$ poses inside each 3$\emph{D}$ voxel.
\item $\texttt{queryCM}(\boldsymbol{t}_{e,i}^{b,i})$: the discrete $\emph{CM}$ is an approximation of the continuous workspace, so the requested pose may not be in $\emph{CM}$ exactly. Therefore, we round the requested pose based on $\emph{discreteRes}$ to get $\boldsymbol{t}_{round}$. For example, 0.51 is rounded to 0.5 when the resolution is 0.1. When $\boldsymbol{t}_{round}$ is within the reachable workspace of the manipulator and the formation metric is larger than the threshold, the $f_{{\sss FC},i}$ stored at $\boldsymbol{t}_{round}$ is returned.
\end{itemize}
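The construct/query pair above can be sketched as a hash map keyed by the pose rounded to the build resolution; representing a 6$\emph{D}$ pose as a flat tuple is an illustrative simplification.

```python
# Minimal sketch of CM as a dictionary keyed by the discretized pose.
# A pose is a flat tuple here (e.g. x, y, z, roll, pitch, yaw); this is
# an illustrative simplification of the paper's voxel structure.

def round_pose(pose, res):
    """Snap each pose component to the discretization grid."""
    return tuple(round(p / res) * res for p in pose)

class CapabilityMap:
    def __init__(self, res):
        self.res = res
        self.data = {}          # rounded pose -> stored formation metric

    def push(self, pose, f_fc):               # inner step of constructCM
        self.data[round_pose(pose, self.res)] = f_fc

    def query(self, pose):                    # queryCM: round, then look up
        return self.data.get(round_pose(pose, self.res))   # None if absent
```

A query at 0.51 with resolution 0.1 hits the entry stored at 0.5, mirroring the rounding example above; poses outside the stored reachable workspace simply return nothing.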
\begin{figure*}[ht]
\centering
\includegraphics[width=17.5cm]{figs/cm_display.pdf}
\caption{(a). Capability map ($\emph{CM}$) of the mobile manipulator with different formation metric thresholds $\emph{thres}$. Colors represent the mean formation metric of all 6$\emph{D}$ poses inside each 3$\emph{D}$ voxel (from small to large: red, green, cyan, blue). (b). The allowed sampling region ($\emph{ASR}$) of the mobile robot with different $\emph{thres}$. The origin and direction of the arrow represent the position and orientation of the mobile robot, respectively. The color indicates the formation metric when the mobile robot is located at the arrow. As can be seen, when the formation constraint is relaxed (low $\emph{thres}$), there are many poses in $\emph{CM}$ and $\emph{ASR}$, and most of them lead to poor formations. When the formation constraint is strict (high $\emph{thres}$), there are few poses in $\emph{CM}$ and $\emph{ASR}$, and much time is spent sampling a valid one. Therefore, $\emph{thres}$ should be moderate to balance the quality of the formation and the computational time.}
\label{fig_cm_and_asr}
\end{figure*}
After $\emph{CM}$ is constructed and the querying process is ready, let us take a closer look at how to check the validity of $\boldsymbol{q}_{b,i}$ based on it. In $ASR(\boldsymbol{t}_{g,i}^w, \emph{thres})$, $\boldsymbol{t}_{g,i}^w$ is constant and coincides with $\boldsymbol{X}_{e,i}^{w}$ when the closed-chain constraint is satisfied. Given a sample $\boldsymbol{q}_{b,i}$, $\boldsymbol{X}_{e,i}^{b,i}$ can be calculated according to Eq. (\ref{eq_forward_kinematics}). As the formation metric and self-collision information of the \emph{MM} have been computed and stored in advance, they can be retrieved from $\emph{CM}$ to indicate the validity of $\boldsymbol{q}_{b,i}$. Therefore, the inverse kinematics solver of the manipulator and the computation of $f_{{\sss FC},i}$ are avoided.
The pseudo-code of $\texttt{sampleInASR}(\boldsymbol{t}_{g,i}^w, \emph{bounds})$ is shown in Alg. \ref{alg3}. $\emph{bounds}$ represents the lower and upper limits of the mobile robot configuration. It is useful when heuristic information about the $\emph{ASR}$ is known. In $\texttt{sampleInASR()}$ of Alg. \ref{alg1}, $\emph{bounds}$ is set to its default value, the whole movable space of the mobile robot. Its value in the decentralized layer will be explained later in Alg. \ref{alg4}. $\texttt{sampleMobileRobot}(\emph{bounds})$ generates random mobile robot states within $\emph{bounds}$, and then the collision status is checked against the 2$\emph{D}$ collision map of the environment. $\emph{collisionMap}$ is updated continuously in another thread, hence the uncertainty of the environment can be captured and collisions with obstacles are avoided. After obtaining $\boldsymbol{t}_{e,i}^{b,i}$, we invoke $\texttt{queryCM()}$ to retrieve the formation metric stored there. Finally, the pair of the valid sample $\boldsymbol{q}_{b,i}$ and the corresponding $f_{{\sss FC},i}$ is returned.
$\emph{ASR}$ can be built by calling $\texttt{sampleInASR()}$ repeatedly. Fig. \ref{fig_cm_and_asr}(b) shows the $\emph{ASR}$ with different $\emph{thres}$. The origin and direction of the arrow represent the position and orientation of the mobile robot, respectively. The color indicates the formation metric when the mobile robot is located at the arrow. As can be seen, when $\emph{thres}$ is too low, there are many poses in $\emph{ASR}$, and most of them lead to poor formations in the decentralized layer. When $\emph{thres}$ is too high, there are few poses in $\emph{ASR}$, and much time is spent in $\texttt{sampleInASR()}$. Therefore, $\emph{thres}$ should be moderate to balance the quality of the formation and the computational time of the centralized layer.
\IncMargin{1em}
\begin{algorithm}
\caption{\texttt{sampleInASR}($\boldsymbol{t}_{g,i}^w$, $\emph{bounds}$)}\label{alg3}
\For{$j\leftarrow 1$ \KwTo maxSample}
{
$\boldsymbol{q}_{b,i}$ $\leftarrow$ \texttt{sampleMobileRobot}($\emph{bounds}$)\;
\If{\texttt{\emph{collision}}($\boldsymbol{q}_{b,i}$, collisionMap) \emph{is true}}
{
\Continue
}
$\boldsymbol{t}_{e,i}^{b,i}$ $\leftarrow$ \texttt{getTransform}($\boldsymbol{q}_{b,i}, \boldsymbol{t}_{g,i}^w$)\;
$f_{{\sss FC},i}$ $\leftarrow$ \texttt{queryCM}($\boldsymbol{t}_{e,i}^{b,i}$)\;
\If(){$f_{{\sss FC},i}$ \emph{is not none}}
{
\KwRet [$\boldsymbol{q}_{b,i}, f_{{\sss FC},i}$]\;
}
}
\KwRet none\;
\end{algorithm}\DecMargin{1em}
\subsection{Decentralized Layer}\label{section_motion_planner-decentralized}
As shown in Fig. \ref{fig_overview}, the decentralized layer consists of the formation optimizer and the task-priority motion controller. It can be deployed in a distributed way in real-time, so each robot can sense the uncertainty of the environment and adjust its motion in a timely manner.
\subsubsection{Formation Optimizer}
The characteristic of formation control naturally raises the question of which variables to sense and which variables to control to achieve the desired formation \cite{oh2015survey}. In previous works \cite{tang2018obstacle, alonso2017multi, jiao2015transportation}, inter-robot variables, such as the relative positions and orientations among robots, are sensed and controlled. In these works, the system is usually treated as a rigid body and loses the ability to cross obstacles. In contrast, the information between each robot and the object is easier to sense when a rigid grasp exists. Therefore, we measure the dexterity of each robot and control geometric variables relative to the object. In this way, the formation metric of the complex system is decoupled, and the redundancy of each \emph{MM} can be explored independently.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{figs/formation_parameters.pdf}
\caption{Geometric variables represent the relative information between the robots and the object. $P_{O_{b,i}}$ and $P_{O_{obj}}$ are the projection of the $i$th end effector $O_{e,i}$ and the object $O_{obj}$ on the $X_{b,i}O_{b,i}Y_{b,i}$ plane, respectively. $\theta_{eo,i}$ is the angle between the end effector and the object. $\theta_{eb,i}$ is the angle between the end effector and the base. An ideal formation will be formed when $\theta_{eo,i}$ and $\theta_{eb,i}$ are equal to zero for all $i=1,2,...,n$.}
\label{fig_formation_def}
\end{figure}
Various metrics have been proposed to measure the dexterity of an open-chain robot \cite{patel2015manipulator}; the most popular is the manipulability developed by Yoshikawa \cite{yoshikawa1985manipulability}.
\begin{equation}
\label{eq_manipulability1}
\omega(\boldsymbol{q}_i) = \sqrt{\det(\boldsymbol{J}_i(\boldsymbol{q}_i) \boldsymbol{J}_i(\boldsymbol{q}_i)^T)}
\end{equation}
where $\boldsymbol{J}_i(\boldsymbol{q}_i)$ is the analytical Jacobian matrix of the $i$th \emph{MM}. According to \cite{patel2015manipulator}, $\omega(\boldsymbol{q}_{i})$ is an unbounded index and only indicates the relative distance to singularity. Therefore, it is normalized to obtain an absolute metric.
\begin{equation}
\label{eq_normalized_manipulability}
\mu(\boldsymbol{q}_i) = \frac{\omega(\boldsymbol{q}_i)}{\max(\omega(\boldsymbol{q}_i))}
\end{equation}
where ${max(\omega(\boldsymbol{q}_i))}$ is the maximum manipulability of the $i$th \emph{MM}, hence $\mu(\boldsymbol{q}_i)$ is limited within $[0,1]$.
To represent the relative information between the robots and the object, geometric variables are defined in Fig. \ref{fig_formation_def}. $O_{e,i}$ and $O_{obj}$ are the centers of the $i$th end effector and the object, respectively. Their projections on the $X_{b,i}O_{b,i}Y_{b,i}$ plane are represented as $P_{O_{b,i}}$ and $P_{O_{obj}}$, respectively. $\theta_{eo,i}$ is the angle between the $i$th end effector and the object. $\theta_{eb,i}$ represents the angle between the $i$th end effector and the base. An ideal formation is formed when $\theta_{eo,i}$ and $\theta_{eb,i}$ are equal to zero for all $i=1,2,...,n$; as they increase, the quality of the formation decreases. For example, the formation in Fig. \ref{fig_robot_model} is better than that in Fig. \ref{fig_formation_def}. Therefore, we define $r(\boldsymbol{q}_i)$ to measure this relationship between the $i$th \emph{MM} and the object.
\begin{equation}
\label{eq_formation_rela}
r(\boldsymbol{q}_i) = f_r(\theta_{eb,i}) \times f_r(\theta_{eo,i})
\end{equation}
where $f_r(\cdot)$ is a continuous, monotonically decreasing function. It attains its maximum of 1 when its argument is zero and gradually decreases to 0 as the argument increases. Therefore, $r(\boldsymbol{q}_i)$ is limited within $[0,1]$.
Combining $r(\boldsymbol{q}_i)$ with the dexterity of the robot, the formation metric of the $i$th \emph{MM} can be defined as follows.
\begin{equation}
\label{eq_formation1}
f_{{\sss FC},i}(\boldsymbol{q}_i) = \mu(\boldsymbol{q}_i) \times r(\boldsymbol{q}_i)
\end{equation}
It is a trade-off between $\mu(\boldsymbol{q}_i)$ and $r(\boldsymbol{q}_i)$. As can be seen, no inter-robot information is required in the definition of $f_{{\sss FC},i}(\boldsymbol{q}_i)$, hence it can be optimized in a distributed way.
The formation metric of the system can be defined as the minimum $f_{{\sss FC},i}(\boldsymbol{q}_i)$ of all robots. When $f_{{\sss FC},i}(\boldsymbol{q}_i)$ reaches its maximum for all robots, the system forms an ideal formation.
\begin{equation}
\label{eq_formation2}
f_{\sss FC}(\boldsymbol{c}) = \min\{f_{{\sss FC},i}(\boldsymbol{q}_i) \ | \ 1 \leq i \leq n \}
\end{equation}
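The composition of Eqs. (\ref{eq_manipulability1})--(\ref{eq_formation2}) can be sketched as follows; the exponential $f_r$ is only one admissible choice meeting the stated monotonicity requirements, not necessarily the function used in the paper.

```python
import math

import numpy as np

def normalized_manipulability(J, omega_max):
    """mu(q) = sqrt(det(J J^T)) / max manipulability, kept within [0, 1]."""
    omega = math.sqrt(np.linalg.det(J @ J.T))
    return min(omega / omega_max, 1.0)

def f_r(theta, k=2.0):
    """One admissible f_r: continuous, f_r(0) = 1, decreasing toward 0.
    The exponential form and gain k are illustrative assumptions."""
    return math.exp(-k * abs(theta))

def formation_metric_i(J, omega_max, theta_eb, theta_eo):
    """f_FC,i = mu(q_i) * r(q_i), with r = f_r(theta_eb,i) * f_r(theta_eo,i)."""
    return normalized_manipulability(J, omega_max) * f_r(theta_eb) * f_r(theta_eo)

def system_formation_metric(per_robot_metrics):
    """f_FC(c) = min of f_FC,i over all n robots."""
    return min(per_robot_metrics)
```

Note that `formation_metric_i` uses only the robot's own Jacobian and its two object-relative angles, which is what allows each \emph{MM} to optimize it independently.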
\IncMargin{1em}
\begin{algorithm}
\caption{\emph{Formation Optimizer}}\label{alg4}
\SetKwProg{Fn}{Function}{}{}
\Fn{\texttt{\emph{optimize}}($\boldsymbol{t}_{g,i}^w$, $\boldsymbol{q}_{b,i}^{ref}$, $v_{max}$, $t_{opt}$, $\epsilon$)}
{
$\emph{bounds}$ $\leftarrow$ \texttt{getBounds}($\boldsymbol{q}_{b,i}^{ref}$, $v_{max}$, $t_{opt}$)\;
\For{$t \leftarrow 0$ \KwTo $\epsilon$$t_{opt}$}
{
[$\boldsymbol{q}_{b,i}$,$f_{{\sss FC},i}$]$\leftarrow$\texttt{sampleInASR}($\boldsymbol{t}_{g,i}^w$,$\emph{bounds}$)\;
$\boldsymbol{q}_{b,i}^{seed}$ $\leftarrow$ \texttt{update}($\boldsymbol{q}_{b,i}$, $f_{{\sss FC},i}$)\;
}
$\boldsymbol{q}_{b,i}$ $\leftarrow$ \texttt{iterate}($\boldsymbol{q}_{b,i}^{seed}$, $(1-\epsilon)$$t_{opt}$)\;
\KwRet $\boldsymbol{q}_{b,i}$\;
}
\end{algorithm}\DecMargin{1em}
The formation optimizer can be executed offline or online. Generally speaking, an online algorithm is superior to its offline counterpart when dealing with the uncertainty of the environment. However, it relies on an efficient method to handle the constraints. As can be seen from Eq. (\ref{eq_formation1}), a complex nonlinear relationship exists between $f_{{\sss FC},i}$ and $\boldsymbol{q}_i$. Therefore, the convergence rate may be slow, and local minima may affect the optimization process.
To make the optimization process faster, once again, we query the precomputed $\emph{CM}$ to determine the seed. In this way, the optimizer could start with a sub-optimal formation and quickly converge to the optimal solution. The pseudo-code of the formation optimizer is shown in Alg. \ref{alg4}.
Given the grasping pose $\boldsymbol{t}_{g,i}^w$, Alg. \ref{alg4} generates the mobile robot configuration $\boldsymbol{q}_{b,i}$ with the highest $f_{{\sss FC},i}$. $\boldsymbol{q}_{b,i}^{ref}$ is the reference configuration of the mobile robot and can be set to the current state or the state of the last iteration. $v_{max}$ is the maximum speed of the mobile robot, $t_{opt}$ is the maximum time for the optimizer, and $\epsilon \in [0,1]$ is the fraction of that time allowed for searching the seed. $\emph{bounds}$ represents the area that the mobile robot can reach from the reference configuration $\boldsymbol{q}_{b,i}^{ref}$ at the maximum speed $v_{max}$ within the allowed time $t_{opt}$. If $\boldsymbol{q}_{b,i}$ exceeded $\emph{bounds}$, the motion of the mobile robot would violate the maximum velocity constraint $v_{max}$ or the end effector pose constraint $\boldsymbol{t}_{e,i}^w$.
Taking advantage of the precomputed $\emph{CM}$, we sample in $\emph{ASR}$ within the allowed time $\epsilon t_{opt}$ to obtain the seed, and then the optimizer starts from $\boldsymbol{q}_{b,i}^{seed}$ and iterates toward the optimal solution within the remaining time $(1-\epsilon)t_{opt}$. The Nelder-Mead simplex algorithm \cite{nelder1965simplex} is chosen as the solver in $\texttt{iterate()}$. It imposes no differentiability requirement on $f_{{\sss FC},i}$ and shows good performance in practice.
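The seed-then-iterate structure of Alg. \ref{alg4} can be sketched as below; \texttt{sample\_in\_asr} and \texttt{refine} are stand-ins, the latter standing for the Nelder-Mead iteration in $\texttt{iterate()}$.

```python
import time

# Structural sketch of the formation optimizer: a time-budgeted seed
# search followed by local refinement. sample_in_asr and refine are
# hypothetical stand-ins for the paper's modules.

def optimize_base(sample_in_asr, refine, t_opt, eps):
    """Spend eps*t_opt sampling the ASR for the best seed, then refine
    from that seed within the remaining (1 - eps)*t_opt budget."""
    deadline = time.monotonic() + eps * t_opt
    seed, best = None, float("-inf")
    while time.monotonic() < deadline:        # seed search via the CM-backed ASR
        result = sample_in_asr()
        if result is not None:
            q_b, f_fc = result
            if f_fc > best:
                seed, best = q_b, f_fc
    if seed is None:                          # no valid sample found in time
        return None
    return refine(seed, (1.0 - eps) * t_opt)  # e.g. Nelder-Mead from the seed
```

Starting the local solver from the best sampled seed is what mitigates the slow convergence and local minima discussed above.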
\subsubsection{Task-priority Motion Controller}
The centralized layer plans the motion of the object and the end effector $\boldsymbol{t}_{e,i}^w \in \mathbb{R}^6$. The formation optimizer in the decentralized layer generates the optimal motion of the mobile robot $\boldsymbol{t}_{b,i}^w \in \mathbb{R}^{n_{b,i}}$. To send joint motion commands to the lower-level hardware, a closed-loop motion controller should be designed.
Historically, several algorithms have been developed to compute the joint motion commands of redundant robots, such as pseudo-inverse Jacobian-based, Jacobian transpose-based, task-augmentation-based and task-priority-based algorithms \cite{seraji1998unified}\cite{chiaverini1997singularity}. The task-priority-based algorithm outperforms the others when dealing with conflicts among different tasks and is adapted to the mobile manipulator in this paper.
In this algorithm, the motion of the end effector and the mobile robot are treated as the primary task and the additional task, respectively. The formulation of the task-priority motion controller is shown in Eq (\ref{eq_task_priority}) \cite{chiaverini2016redundant}\cite{zhang2019task}.
\begin{equation}
\label{eq_task_priority}
\dot{\boldsymbol{q}}_{i} = \boldsymbol{J}_i^{\dag} \dot{\boldsymbol{t}}_{e,i}^{w} + [\boldsymbol{J}_{b,i} (\boldsymbol{I} - \boldsymbol{J}_i^{\dag} \boldsymbol{J}_i)]^{\dag} (\dot{\boldsymbol{t}}_{b,i}^{w} - \boldsymbol{J}_{b,i} \boldsymbol{J}_i^{\dag} \dot{\boldsymbol{t}}_{e,i}^{w})
\end{equation}
where $\boldsymbol{J}_i^{\dag} = \boldsymbol{J}_i^{T}(\boldsymbol{J}_i\boldsymbol{J}^{T}_i)^{-1}$ is the Moore-Penrose pseudoinverse of the Jacobian matrix $\boldsymbol{J}_i(\boldsymbol{q}_i)$. $\boldsymbol{I}$ is the identity matrix with dimension $(n_{b,i}+n_{a,i}) \times (n_{b,i}+n_{a,i})$. $\boldsymbol{J}_{b,i}$ is the matrix related to the additional task and derived as follows.
\begin{equation}
\boldsymbol{J}_{b,i} = \frac{\partial {\boldsymbol{t}_{b,i}^{w}} } {\partial {\boldsymbol{q}_i}} =
\left( \frac{\partial {\boldsymbol{t}_{b,i}^{w}} } {\partial {\boldsymbol{q}_{b,i}}},
\frac{\partial {\boldsymbol{t}_{b,i}^{w}} } {\partial {\boldsymbol{q}_{a,i}}} \right) =
\left( \boldsymbol{I}_{n_{b,i} \times n_{b,i}},
\boldsymbol{O}_{n_{b,i} \times n_{a,i}} \right) \notag
\end{equation}
In Eq. (\ref{eq_task_priority}), the additional task is only satisfied in the null space of the primary task. When the two tasks are in conflict, the primary task is satisfied with higher priority. More details about this algorithm are available in \cite{chiaverini2016redundant}\cite{chiaverini1997singularity}\cite{zhang2019task}.
The closed-loop version of Eq (\ref{eq_task_priority}) is shown as follows.
\begin{equation}
\label{eq_CLIK}
\dot{\boldsymbol{q}}_{i} = \boldsymbol{J}_i^{\dag} \boldsymbol{\omega}_{e,i} + [\boldsymbol{J}_{b,i} (\boldsymbol{I} - \boldsymbol{J}_i^{\dag} \boldsymbol{J}_i)]^{\dag} (\boldsymbol{\omega}_{b,i} - \boldsymbol{J}_{b,i} \boldsymbol{J}_i^{\dag} \boldsymbol{\omega}_{e,i})
\end{equation}
where $\boldsymbol{\omega}_{e,i} = \boldsymbol{K}_{e,i}\left({^{d}\boldsymbol{t}_{e,i}^{w}}-{^{c}\boldsymbol{t}_{e,i}^{w}}\right)$ is the tracking error between the \emph{desired} and \emph{current} poses of the end effector, and $\boldsymbol{\omega}_{b,i} = \boldsymbol{K}_{b,i}\left({^{d}\boldsymbol{t}_{b,i}^{w}}-{^{c}\boldsymbol{t}_{b,i}^{w}}\right)$ is the tracking error between the \emph{desired} and \emph{current} poses of the mobile robot. $\boldsymbol{K}_{e,i} \in \mathbb{R}^{6\times6}$ and $\boldsymbol{K}_{b,i} \in \mathbb{R}^{n_{b,i}\times n_{b,i}}$ are constant positive-definite gain matrices.
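To make Eq. (\ref{eq_CLIK}) concrete, the sketch below evaluates the control law numerically with NumPy. The dimensions (a 3-DoF base plus a 6-DoF arm) and the random Jacobian are illustrative stand-ins, not the paper's robot models.

```python
import numpy as np

def task_priority_qdot(J, J_b, w_e, w_b):
    """Closed-loop task-priority law: the primary (end-effector) task is
    tracked exactly; the additional (base) task only acts in its null space."""
    J_pinv = np.linalg.pinv(J)               # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector of J
    return J_pinv @ w_e + np.linalg.pinv(J_b @ N) @ (w_b - J_b @ J_pinv @ w_e)

rng = np.random.default_rng(0)
n_b, n_a = 3, 6                              # base / arm DoFs (made up)
J = rng.standard_normal((6, n_b + n_a))      # end-effector Jacobian
J_b = np.hstack([np.eye(n_b), np.zeros((n_b, n_a))])  # [I | O]
w_e = rng.standard_normal(6)                 # end-effector tracking error
w_b = rng.standard_normal(n_b)               # base tracking error
qdot = task_priority_qdot(J, J_b, w_e, w_b)
```

When the two tasks conflict (the base error is not realizable inside the null space of $\boldsymbol{J}_i$), the base task is only approximated while the end-effector task is still tracked exactly, which is precisely the priority behavior described above.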
\section{SIMULATION AND EXPERIMENTAL RESULTS} \label{section_exp}
\subsection{Experimental Setup}
The redundancy of the \emph{MM} is first shown in Section \ref{section_exp_redundancy}. After that, we evaluate the performance of $\emph{CM}$ in Section \ref{section_exp_cm}, which lays a foundation for the following experiments. In Section \ref{section_exp_controller}, the task-priority motion controller and the relationship among $\boldsymbol{t}_{obj}^w$, $\boldsymbol{t}_{g,i}^w$ and $\boldsymbol{q}_i$ are tested. To demonstrate the superiority of the proposed hierarchical motion planner, numerous simulations and real-world experiments are conducted in Section \ref{section_exp_sim} and Section \ref{section_exp_real}, respectively.
Moreover, different numbers of heterogeneous robots are considered in this paper. As shown in Fig. \ref{fig_robot_model}, heterogeneous manipulators, including $\emph{UR5}$\footnote{https://www.universal-robots.com}, $\emph{ZU7}$\footnote{https://www.jaka.com} and $\emph{Jaco}$\footnote{https://www.kinovarobotics.com}, and heterogeneous mobile robots, including a quadruped robot\footnote{https://unitree.com} and an omnidirectional wheeled mobile robot, are tested. All code is implemented in $\emph{C++}$ on top of $\emph{ROS}$ (Robot Operating System) \cite{ros}. $\emph{CM}$ is saved in the point cloud format \cite{rusu20113d}. The computer is an Intel i7-8700 (3.2GHz) with 32GB of memory. The attached video shows the results of the simulations and real-world experiments\footnote{https://youtu.be/c2Lxm15fY-c}.
In the real-world experiments, all robots are equipped with wireless transmission modules on the same local network for efficient communication. A leader-follower approach is adopted, where the leader plans the global path of the object and each follower is equipped with a 6D force-torque sensor and a visual sensor on the end effector to detect its motion relative to the leader. Moreover, each robot carries a laser scanner to sense the static and dynamic obstacles in the environment.
\subsection{Redundancy of the Mobile Manipulator}\label{section_exp_redundancy}
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{figs/jaco-self-motion.png}
\caption{Redundancy of the mobile manipulator. The mobile robot moves on the ground while the end effector keeps still.}
\label{fig_redundancy}
\end{figure}
The redundancy of the \emph{MM} is shown in Fig. \ref{fig_redundancy}. As can be seen, when the end effector keeps still, the robot retains extra \emph{DoFs} to move on the ground. This is called the self-motion of the redundant robot. It corresponds to the null space of the Jacobian matrix $\boldsymbol{J}_i(\boldsymbol{q}_i)$ and to the second term on the right side of Eq. (\ref{eq_task_priority}). Redundancy is the basis for formation optimization, but it also complicates the motion planning problem.
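The self-motion in Fig. \ref{fig_redundancy} is exactly motion inside the null space of $\boldsymbol{J}_i$, which a short NumPy check makes visible; the 9-DoF Jacobian here is a random stand-in for a real mobile manipulator.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((6, 9))          # Jacobian of a hypothetical 9-DoF MM
J_pinv = np.linalg.pinv(J)
N = np.eye(9) - J_pinv @ J               # null-space projector (self-motion)
qdot_self = N @ rng.standard_normal(9)   # arbitrary joint velocity, projected
ee_twist = J @ qdot_self                 # zero up to numerical precision:
                                         # the end effector stays still
```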
\subsection{Performance of the Capability Map}\label{section_exp_cm}
In this part, we evaluate the time cost of checking the validity of $\emph{ASR}$ by querying $\emph{CM}$, and compare the results with those of other inverse kinematic solvers on the heterogeneous robots. The mean time of the checking process for $1 \times 10^3$ samples is shown in Table I.
\begin{table}[ht]
\caption{mean time when checking the validity of \emph{ASR} ($1 \times 10^3$ samples)}
\label{table_cm_querying}
\scriptsize
\begin{center}
\begin{tabular}{c|cccc}
\hline
Robot & $\emph{KDL}$\cite{kdl-url} & $\emph{TRAC-IK}$\cite{2015TRAC} & $\emph{IKFast}$\cite{diankov2010automated} & $\emph{\textbf{CM}(ours)}$ \\ \hline
$\emph{UR5}$-based $\emph{MM (ms)}$ & 4813.4 & 471.3 & 441.7 & 0.26 \\
$\emph{ZU7}$-based $\emph{MM (ms)}$ & 9406.8 & 751.4 & 440.5 & 0.26 \\
$\emph{Jaco}$-based $\emph{MM (ms)}$ & 7843.1 & 3937.0 & 409.7 & 0.27 \\ \hline
\end{tabular}
\end{center}
\end{table}
$\emph{KDL}$ (Kinematics and Dynamics Library) provides the Jacobian-based inverse kinematic solver and is a component of $\emph{Orocos}$ \cite{kdl-url}. $\emph{TRAC-IK}$ \cite{2015TRAC} is built on top of $\emph{KDL}$ and also combines the sequential quadratic nonlinear optimization approach to handle joint limits. $\emph{IKFast}$ is a powerful inverse kinematics solver that can analytically solve different manipulators' kinematic equations \cite{diankov2010automated}. From Table I, we can see that querying $\emph{CM}$ is three to four orders of magnitude faster than invoking the inverse kinematic solvers. Moreover, the querying time of $\emph{CM}$ is stable across robot models, which is a desirable property when planning the motion of heterogeneous robots.
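The speed gap in Table I comes from replacing an iterative inverse kinematic solve with a constant-time lookup. A minimal sketch of such a lookup, assuming a position-only map and made-up cell size and entries (the paper's $\emph{CM}$ covers full 6-D poses):

```python
import math

class CapabilityMap:
    """Toy capability map: a hash set of reachable, discretized
    end-effector positions relative to the base."""

    def __init__(self, cell=0.05):
        self.cell = cell
        self.cells = set()

    def _key(self, p):
        return tuple(int(math.floor(c / self.cell)) for c in p)

    def add(self, p):                 # offline: mark a pose as reachable
        self.cells.add(self._key(p))

    def reachable(self, p):           # online: O(1) validity query
        return self._key(p) in self.cells

cm = CapabilityMap(cell=0.1)
# offline precomputation would sweep the arm's joint space; here we
# simply insert a few sample reachable points for illustration
for p in [(0.4, 0.0, 0.5), (0.45, 0.05, 0.5), (0.3, -0.2, 0.6)]:
    cm.add(p)
```

Querying is a hash lookup, so its cost is independent of the arm's kinematic structure, which matches the stable query times in Table I.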
\subsection{Task-Priority Motion Controller and Multiple Robots System Model}\label{section_exp_controller}
The task-priority motion controller and the relationship among $\boldsymbol{t}_{obj}^w$, $\boldsymbol{t}_{g,i}^w$ and $\boldsymbol{q}_i$ are tested in this part. The system consists of three robots and one object. The orientation (\emph{roll-pitch-yaw}) of the object is required to move along the sinusoidal trajectory while the center (\emph{x-y-z}) keeps stationary. For each object's pose $\boldsymbol{t}_{obj}^w$, the grasping poses vector $\boldsymbol{G}$ will be calculated according to Eq. (\ref{eq_grasp}). Given the grasping pose $\boldsymbol{t}_{g,i}^w$, the formation optimizer plans the optimal mobile robot's pose $\boldsymbol{t}_{b,i}^w$. Finally, the task-priority motion controller receives $\boldsymbol{t}_{g,i}^w$ and $\boldsymbol{t}_{b,i}^w$, and calculates the desired joint velocity $\dot{\boldsymbol{q}}_{i}$ based on Eq. (\ref{eq_CLIK}).
\begin{figure}[ht]
\centering
\includegraphics[width=8.7cm]{figs/three-robot-sin-scene.pdf}
\caption{The orientation (\emph{roll-pitch-yaw}) of the object is required to move along the sinusoidal trajectory while the center (\emph{x-y-z}) of the object, which is represented as the blue point, keeps stationary.}
\label{fig_sin_motion_scene}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{figs/three-robot-sin-data.pdf}
\caption{(a) Trajectory of the object in \emph{roll} direction. (b)-(d) Trajectories of the mobile robot in (\emph{x, y, yaw}) directions. (e)-(j) Trajectories of end effector in (\emph{x, y, z, roll, pitch, yaw}) directions. }
\label{fig_sin_motion_data}
\end{figure}
The snapshots during the task are shown in Fig. \ref{fig_sin_motion_scene}, and the motion of the system is available in the attached video. Fig. \ref{fig_sin_motion_data}(a) shows the desired and actual trajectories of the object in \emph{roll} direction, and the trajectories in \emph{pitch} and \emph{yaw} directions are similar to that in \emph{roll} direction and not shown here. The desired and actual trajectories of the mobile robot and the end effector of \emph{MM1} are shown in Fig. \ref{fig_sin_motion_data}(b)-(j), respectively. The trajectories of \emph{MM2} and \emph{MM3} are similar to that of \emph{MM1} and not shown here. As can be seen from Fig. \ref{fig_sin_motion_data}, the mobile robot and the end effector are able to track the desired trajectories simultaneously. Meanwhile, multiple robots could cooperate with each other and manipulate the object precisely.
\subsection{Performance of the Hierarchical Motion Planner} \label{section_exp_sim}
\subsubsection{Simulation Scene}
The simulation scenes, with increasing numbers of obstacles and robots, are shown in Fig. \ref{fig_sim_scene}(a)-(f). The size of the scenes is $15m \times 25m$. The start and goal states are displayed at the far left and far right, respectively. Snapshots during the tasks are also shown in Fig. \ref{fig_sim_scene}, and the pictorial motion of the tasks is available in the attached video. In these tasks, the start and goal states are assumed to satisfy the closed-chain and collision constraints, and valid paths connecting them exist theoretically. Moreover, the height of some obstacles is lower than the maximum allowed height of the object, which means that obstacles may be crossed by the system.
\subsubsection{Simulation 1} \label{section_exp_sim1}
\begin{figure*}[ht]
\centering
\includegraphics[width=18cm]{figs/scene_sim.pdf}
\caption{(a)-(f). Simulation scenes from simple to complex. The size of the scenes is $15m \times 25m$. Robot models are snapshots during the tasks. The height of some obstacles is lower than the maximum allowed height of the object, hence obstacles may be crossed by the system like the path in (b)-(f).}
\label{fig_sim_scene}
\end{figure*}
\begin{table*}[ht]
\caption{comparison of the proposed hierarchical framework with centralized frameworks (\emph{PJ} \cite{yakey2001randomized}\cite{berenson2011task}, \emph{AT} \cite{jaillet2017path} and \emph{TB} \cite{kim2016tangent}) }
\label{table_planner}
\begin{center}
\scriptsize
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{l|l|cccccccccccc}
\hline
\multirow{2}{*}{Task} & \multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{2}{c|}{$\emph{RRT}$\cite{lavalle1998rapidly}} & \multicolumn{2}{c|}{$\emph{RRTConnect}$\cite{kuffner2000rrt}} & \multicolumn{2}{c|}{$\emph{BKPIECE}$\cite{csucan2009kinodynamic}} & \multicolumn{2}{c|}{$\emph{EST}$\cite{hsu1997path}} & \multicolumn{2}{c|}{$\emph{STRIDE}$\cite{gipson2013resolution}} & \multicolumn{2}{c}{$\emph{PRM}$\cite{kavraki1996probabilistic}} \\ \cline{3-14}
& \multicolumn{1}{c|}{} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}} Time(s)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Time(s)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Time(s)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Time(s)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Time(s)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \begin{tabular}[c]{@{}c@{}}Time(s)\end{tabular} \\ \hline
\multirow{5}{*}{Fig. \ref{fig_sim_scene}(a)} & \emph{PJ}\cite{yakey2001randomized}\cite{berenson2011task} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 9/10 & 7.75$\pm$6.39 & 2/10 & 11.87$\pm$3.80 & 10/10 & 23.48$\pm$2.34 \\
& \emph{AT}\cite{jaillet2017path} & 0/10 & --- & 10/10 & 3.37$\pm$0.55 & 0/10 & --- & 10/10 & 1.78$\pm$1.11 & 7/10 & 6.59$\pm$6.28 & 10/10 & 21.73$\pm$1.53 \\
& \emph{TB}\cite{kim2016tangent} & 3/10 & 17.64$\pm$3.80 & 10/10 & 2.29$\pm$0.43 & 0/10 & --- & 10/10 & 1.58$\pm$1.71 & 7/10 & 3.60$\pm$2.72 & 10/10 & 22.31$\pm$1.55 \\
& $\emph{\textbf{IKCL} (ours)}$ & 10/10 & 7.34$\pm$4.34 & 10/10 & 0.97$\pm$0.10 & 0/10 & --- & 7/10 & 5.20$\pm$3.62 & 4/10 & 3.60$\pm$3.70 & 10/10 & 20.25$\pm$0.19 \\
& $\emph{\textbf{CMCL} (ours)}$ & 10/10 & 1.10$\pm$0.28 & \textbf{10/10} & \textbf{0.74$\pm$0.20} & 9/10 & 14.21$\pm$5.24 & 10/10 & 1.42$\pm$0.66 & 10/10 & 1.36$\pm$0.74 & 10/10 & 20.19$\pm$0.16 \\ \hline
\multirow{5}{*}{Fig. \ref{fig_sim_scene}(b)} & \emph{PJ}\cite{yakey2001randomized}\cite{berenson2011task} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{AT}\cite{jaillet2017path} & 4/10 & 12.85$\pm$4.92 & 8/10 & 4.21$\pm$2.76 & 0/10 & --- & 5/10 & 6.12$\pm$6.49 & 0/10 & --- & 6/10 & 22.90$\pm$1.81 \\
& \emph{TB}\cite{kim2016tangent} & 7/10 & 10.64$\pm$2.40 & 8/10 & 5.71$\pm$5.75 & 0/10 & --- & 5/10 & 4.67$\pm$3.81 & 0/10 & --- & 9/10 & 23.75$\pm$6.42 \\
& $\emph{\textbf{IKCL} (ours)}$ & 10/10 & 0.81$\pm$1.11 & 10/10 & 3.34$\pm$1.56 & 3/10 & 20.01$\pm$2.56 & 10/10 & 12.17$\pm$6.05 & 10/10 & 8.77$\pm$6.56 & 8/10 & 20.20$\pm$0.18 \\
& $\emph{\textbf{CMCL} (ours)}$ & \textbf{10/10} & \textbf{0.18$\pm$0.10} & 10/10 & 1.74$\pm$0.61 & 8/10 & 19.25$\pm$2.93 & 10/10 & 6.35$\pm$4.39 & 10/10 & 6.38$\pm$5.88 & 10/10 & 20.69$\pm$0.05 \\ \hline
\multirow{5}{*}{Fig. \ref{fig_sim_scene}(c)} & \emph{PJ}\cite{yakey2001randomized}\cite{berenson2011task} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{AT}\cite{jaillet2017path} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{TB}\cite{kim2016tangent} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& $\emph{\textbf{IKCL} (ours)}$ & 10/10 & 8.82$\pm$6.09 & 10/10 & 6.99$\pm$5.71 & 0/10 & --- & 8/10 & 14.40$\pm$8.57 & 9/10 & 17.77$\pm$8.37 & 7/10 & 35.32$\pm$5.72 \\
& $\emph{\textbf{CMCL} (ours)}$ & 10/10 & 2.35$\pm$1.10 & \textbf{10/10} & \textbf{1.84$\pm$0.48} & 3/10 & 12.90$\pm$6.06 & 10/10 & 5.39$\pm$2.30 & 10/10 & 8.95$\pm$7.06 & 10/10 & 31.11$\pm$0.24 \\ \hline
\multirow{5}{*}{Fig. \ref{fig_sim_scene}(d)} & \emph{PJ}\cite{yakey2001randomized}\cite{berenson2011task} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{AT}\cite{jaillet2017path} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{TB}\cite{kim2016tangent} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& $\emph{\textbf{IKCL} (ours)}$ & 10/10 & 8.30$\pm$4.48 & 10/10 & 7.98$\pm$7.06 & 0/10 & --- & 7/10 & 16.79$\pm$9.34 & 4/10 & 22.46$\pm$2.99 & 10/10 & 33.47$\pm$1.10 \\
& $\emph{\textbf{CMCL} (ours)}$ & \textbf{10/10} & \textbf{3.24$\pm$1.31} & 10/10 & 3.36$\pm$0.97 & 2/10 & 18.73$\pm$2.37 & 10/10 & 9.15$\pm$3.05 & 9/10 & 7.94$\pm$3.91 & 10/10 & 32.20$\pm$0.45 \\ \hline
\multirow{5}{*}{Fig. \ref{fig_sim_scene}(e)} & \emph{PJ}\cite{yakey2001randomized}\cite{berenson2011task} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{AT}\cite{jaillet2017path} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{TB}\cite{kim2016tangent} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& $\emph{\textbf{IKCL} (ours)}$ & 10/10 & 23.84$\pm$12.28 & 10/10 & 27.09$\pm$12.72 & 0/10 & --- & 0/10 & --- & 1/10 & 46.41$\pm$0 & 6/10 & 54.41$\pm$2.66 \\
& $\emph{\textbf{CMCL} (ours)}$ & \textbf{10/10} & \textbf{5.88$\pm$2.20} & 10/10 & 6.53$\pm$3.15 & 0/10 & --- & 10/10 & 27.47$\pm$9.96 & 5/10 & 28.49$\pm$17.64 & 8/10 & 52.72$\pm$0.78 \\ \hline
\multirow{5}{*}{Fig. \ref{fig_sim_scene}(f)} & \emph{PJ}\cite{yakey2001randomized}\cite{berenson2011task} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{AT}\cite{jaillet2017path} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& \emph{TB}\cite{kim2016tangent} & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
& $\emph{\textbf{IKCL} (ours)}$ & 10/10 & 10.17$\pm$3.89 & 10/10 & 5.73$\pm$1.91 & 1/10 & 47.58$\pm$0 & 10/10 & 26.06$\pm$10.89 & 10/10 & 19.10$\pm$6.51 & 10/10 & 53.95$\pm$1.35 \\
& $\emph{\textbf{CMCL} (ours)}$ & 10/10 & 5.12$\pm$1.74 & \textbf{10/10} & \textbf{4.77$\pm$1.65} & 2/10 & 24.71$\pm$13.34 & 10/10 & 8.25$\pm$2.94 & 10/10 & 12.15$\pm$4.67 & 10/10 & 53.11$\pm$0.87 \\ \hline
\end{tabular}}
\end{center}
\end{table*}
In this simulation, we compare the proposed hierarchical framework with the centralized frameworks $\emph{Projection (PJ)}$ \cite{yakey2001randomized}\cite{berenson2011task}, \emph{Atlas (AT)} \cite{jaillet2017path} and $\emph{Tangent Bundle (TB)}$ \cite{kim2016tangent}. In addition, we compare the performance of $\emph{IKCL}$ with $\emph{CMCL}$ to show the advantage of $\emph{CM}$. As discussed in Section \ref{section_related_work_closed_chain}, \emph{PJ} computes the Jacobian of the system and projects a random sample $\boldsymbol{c}$ onto the manifold with a Newton procedure\footnote{The constrained Jacobian matrix of the system and the iterative procedure of the projection-based framework are available in the Appendix.}. \emph{AT} and \emph{TB} are built on top of \emph{PJ} with some modifications. For a fair comparison, the inverse kinematic solver in $\emph{IKCL}$ is chosen as $\emph{KDL}$, which is based on the Jacobian of a single \emph{MM}.
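The projection step shared by the \emph{PJ}-like baselines can be sketched as follows, with a toy distance constraint standing in for the full closed-chain constraint of the multi-robot system; `project`, the tolerances and the constraint are illustrative.

```python
import numpy as np

def project(c, f, jac, tol=1e-8, max_iter=50):
    """Newton projection of a random sample c onto the manifold f(x) = 0:
    the core step of the projection-based (PJ-style) frameworks."""
    x = np.asarray(c, dtype=float)
    for _ in range(max_iter):
        r = np.atleast_1d(f(x))
        if np.linalg.norm(r) < tol:
            return x
        J = np.atleast_2d(jac(x))
        x = x - np.linalg.pinv(J) @ r    # x <- x - J^+ f(x)
    return None                           # failed to converge

# toy closed-chain-like constraint: keep two "grasp points" in the plane
# at a fixed distance d from each other, x = (p1, p2) in R^4
d = 1.0
f = lambda x: np.array([(x[0] - x[2]) ** 2 + (x[1] - x[3]) ** 2 - d ** 2])
jac = lambda x: np.array([[2 * (x[0] - x[2]),  2 * (x[1] - x[3]),
                           -2 * (x[0] - x[2]), -2 * (x[1] - x[3])]])
x = project(np.array([0.3, 0.1, 0.9, 0.2]), f, jac)
```

Every sample a PJ-style planner draws must pass through this iterative procedure, which is one reason the centralized baselines become slow as the system's Jacobian grows.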
In the tasks shown in Fig. \ref{fig_sim_scene}(a)-(f), the start state $\boldsymbol{t}_{start}$ and goal state $\boldsymbol{t}_{goal}$ of the object are specified by the user. However, \emph{PJ}, \emph{AT} and \emph{TB} are centralized frameworks: not only the state of the object but also the states of the robots must be specified. Therefore, a random goal state $\boldsymbol{c}_{goal}$ is selected for each scene so that $\boldsymbol{c}_{goal} \in \mathcal{C}_{\sss C^3}\cap \mathcal{C}_{free}$ and $\pi(\boldsymbol{c}_{goal}) = \boldsymbol{t}_{goal}$. Moreover, the formation constraint cannot be handled directly in \emph{PJ}, \emph{AT} and \emph{TB}, so only the closed-chain and collision constraints are considered in these centralized frameworks. \emph{PJ}, \emph{AT} and \emph{TB} for the multiple mobile manipulators system are implemented on top of $\emph{OMPL}$ \cite{sucan2012open}, a widely used sampling-based motion planning library.
The performances of different frameworks are shown in Table II. The terms and settings are explained as follows. The maximum allowed planning time $t_{obj}$ in Alg. \ref{alg1} is set to $20s$, $20s$, $30s$, $30s$, $50s$ and $50s$ for the tasks in Fig. \ref{fig_sim_scene}(a)-(f), respectively. Each framework is combined with six random planning algorithms, namely $\emph{RRT}$ \cite{lavalle1998rapidly}, $\emph{RRTConnect}$ \cite{kuffner2000rrt}, $\emph{BKPIECE}$ \cite{csucan2009kinodynamic}, $\emph{EST}$ \cite{hsu1997path}, $\emph{STRIDE}$ \cite{gipson2013resolution} and $\emph{PRM}$ \cite{kavraki1996probabilistic}. Each combination runs 10 times to obtain the statistical results. ``Success/Total'' columns represent the successful and total simulations conducted. ``Time(s)'' columns represent the mean time and the standard deviation of the successful simulations. When all simulations fail, the ``Time(s)'' columns are set to ``---''.
$\emph{PRM}$ constructs a roadmap of the entire environment that can be used for multiple queries. It spends the given time building the roadmap and then searches for a valid path. Due to this post-processing, the time cost of $\emph{PRM}$ is slightly over the maximum allowed time (see the last column in Table II). The other five planning algorithms construct trees for a single query and terminate as soon as a valid path is found. $\emph{BKPIECE}$ relies heavily on the projection evaluator to guide the exploration of the continuous space. However, designing an efficient projection evaluator for the high-dimensional constrained space is nontrivial, hence it shows the worst performance in the simulations. From Fig. \ref{fig_sim_scene} and Table II, we can also draw the following conclusions.
Overall, the performance of \emph{PJ}, \emph{AT} and \emph{TB} increases sequentially on these tasks, which is consistent with the results reported in the literature. They work well when the scenes are simple, such as the task in Fig. \ref{fig_sim_scene}(a). As obstacles increase, however, their performance degrades significantly. Moreover, while debugging the code we found that \emph{PJ}-like frameworks are sensitive to hyper-parameters, such as the constraint tolerance in \emph{PJ} and the maximum radius of the chart validity region in \emph{AT} and \emph{TB}, which puts an extra burden on developers.
From Table II, we can see that $\emph{IKCL}$ outperforms the \emph{PJ}-like frameworks significantly. This is because the proposed framework plans the motion of the object and the robots hierarchically, whereas the \emph{PJ}-like frameworks plan the motion of the whole system simultaneously. Comparing the results of $\emph{IKCL}$ with $\emph{CMCL}$, we can see that $\emph{CM}$ further improves the performance of the proposed motion planner. The combination of these features makes our motion planner outperform the benchmarks significantly. In addition, $\emph{RRTConnect}$ (or $\emph{RRT}$) shows the best performance when working with our framework, hence it is chosen as the default algorithm in the following experiments.
\subsubsection{Simulation 2} \label{section_exp_sim2}
\begin{table*}[ht]
\caption{performance of the decoupled framework \cite{hekmatfar2014cooperative}}
\label{table_planner2}
\begin{center}
\scriptsize
\resizebox{\linewidth}{!}{
\begin{tabular}{c|cccccccccccc}
\hline \multirow{2}{*}{Task} & \multicolumn{2}{c|}{\emph{RRT} \cite{lavalle1998rapidly}} & \multicolumn{2}{c|}{\emph{RRTConnect} \cite{kuffner2000rrt}} & \multicolumn{2}{c|}{\emph{BKPIECE} \cite{csucan2009kinodynamic}} & \multicolumn{2}{c|}{\emph{EST} \cite{hsu1997path}} & \multicolumn{2}{c|}{\emph{STRIDE} \cite{gipson2013resolution}} & \multicolumn{2}{c}{\emph{PRM} \cite{kavraki1996probabilistic}} \\ \cline{2-13}
& \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{Time(s)} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{Time(s)} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{Time(s)} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{Time(s)} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & \multicolumn{1}{c|}{Time(s)} & \begin{tabular}[c]{@{}c@{}}Success/\\ Total\end{tabular} & Time(s) \\ \hline
Fig. \ref{fig_sim_scene}(a) & 10/10 & 0.06$\pm$0.01 & 10/10 & 0.05$\pm$0.02 & 10/10 & 0.77$\pm$0.35 & 10/10 & 0.08$\pm$0.04 & 10/10 & 0.07$\pm$0.03 & 10/10 & 20.01$\pm$0.00 \\
Fig. \ref{fig_sim_scene}(b) & 10/10 & 0.04$\pm$0.01 & 10/10 & 0.05$\pm$0.01 & 10/10 & 0.53$\pm$0.20 & 10/10 & 0.07$\pm$0.03 & 10/10 & 0.09$\pm$0.03 & 10/10 & 20.01$\pm$0.00 \\
Fig. \ref{fig_sim_scene}(c) & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
Fig. \ref{fig_sim_scene}(d) & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
Fig. \ref{fig_sim_scene}(e) & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\
Fig. \ref{fig_sim_scene}(f) & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- & 0/10 & --- \\ \hline
\end{tabular}}
\end{center}
\end{table*}
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{figs/sceneE-decoupled-framework.pdf}
\caption{Robot path planned by the decoupled framework \cite{hekmatfar2014cooperative}. Although the motion of the object is collision-free, \cite{hekmatfar2014cooperative} is unable to find a valid path for the robots that satisfies the collision, closed-chain and formation constraints simultaneously.}
\label{fig_FDHF}
\end{figure}
In this simulation, we compare the proposed hierarchical framework with the decoupled framework. When the closed-chain constraint and the lower bound of the formation constraint are not guaranteed in Fig. \ref{fig_overview}, or $\emph{ASR}$ is not checked in Alg. \ref{alg1} (line 28-33), the centralized layer and the decentralized layer become fully decoupled, resulting in a framework similar to \cite{hekmatfar2014cooperative}. The performance of the decoupled framework in the tasks in Fig. \ref{fig_sim_scene}(a)-(f) is shown in Table III.
When the scenes and the tasks are simple, the centralized layer and the decentralized layer are always compatible, so checking the closed-chain constraint and the lower bound of the formation constraint can be skipped without consequence. Therefore, the decoupled framework performs well in Fig. \ref{fig_sim_scene}(a)-(b). However, obstacles complicate the motion planning problem and bring the two layers into conflict. Therefore, in Fig. \ref{fig_sim_scene}(c)-(f), the performance of the decoupled framework decreases dramatically. A failure case is shown in Fig. \ref{fig_FDHF}: although the motion of the object is collision-free, \cite{hekmatfar2014cooperative} cannot find a valid path for the robots that satisfies the collision, closed-chain and formation constraints simultaneously.
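The compatibility check that distinguishes the proposed framework from the decoupled one can be sketched as follows: the centralized layer accepts an object waypoint only if every robot's $\emph{ASR}$ is nonempty. The toy reachability predicate below stands in for a real $\emph{CM}$ query, and all names are hypothetical.

```python
import math

def waypoint_valid(t_obj, grasp_offsets, cm_reachable, base_candidates):
    """Coupled-layer check (sketch): accept an object pose only if every
    robot has at least one base pose from which its grasp pose is
    reachable, i.e. the robot's ASR is nonempty."""
    for off in grasp_offsets:
        grasp = (t_obj[0] + off[0], t_obj[1] + off[1], t_obj[2] + off[2])
        if not any(cm_reachable(base, grasp) for base in base_candidates):
            return False   # this waypoint would put the layers in conflict
    return True
```

In the fully decoupled framework this check is skipped, so the centralized layer may commit to object poses for which no valid robot configuration exists, which is exactly the failure mode of Fig. \ref{fig_FDHF}.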
\subsubsection{Simulation 3} \label{section_exp_sim3}
In this simulation, we compare the proposed hierarchical framework with algorithms based on the virtual structure. In \cite{alonso2017multi}, the object and the robots are encircled by polygons on the ground. As long as the virtual structure intersects with obstacles, the system is considered in collision, so the obstacle-avoidance strategy is conservative. Take the task in Fig. \ref{fig_sim_scene}(b) as an example. Our motion planner is able to find a shorter path that crosses obstacles. However, that path is infeasible for \cite{alonso2017multi}, which can only find a longer path that bypasses obstacles, as shown in Fig. \ref{fig_sceneB_SOR}. Crossing obstacles is an essential ability for multiple robots systems and is rarely addressed in previous works. It changes the topological structure of the configuration space and increases the success rate of the motion planner in cluttered environments.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{figs/sceneB-virtual-structure.pdf}
\caption{Robot path planned by algorithms based on the virtual structure \cite{alonso2017multi}. The dotted lines represent the virtual polygon that encircles the robots and the object. Compared with the path in Fig. \ref{fig_sim_scene}(b), the proposed motion planner is able to find a shorter path that crosses obstacles, while \cite{alonso2017multi} only find a longer path that bypasses obstacles.}
\label{fig_sceneB_SOR}
\end{figure}
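The difference in obstacle handling can be illustrated with a toy 2-D check. Axis-aligned boxes stand in for the ground-plane polygons of \cite{alonso2017multi}, all function names are illustrative, and the real planner of course performs full 3-D collision checking rather than the single height comparison used here.

```python
def boxes_overlap(a, b):
    """Axis-aligned 2-D overlap test; a box is (xmin, ymin, xmax, ymax)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def virtual_structure_blocked(system_box, obstacle_box):
    # conservative rule of virtual-structure methods: any overlap of the
    # ground-plane footprints is treated as a collision
    return boxes_overlap(system_box, obstacle_box)

def height_aware_blocked(system_box, obstacle_box, obstacle_h, object_h):
    # the object may cross a low obstacle if it is carried above it
    # (assuming the mobile bases themselves can pass beside the obstacle,
    # which the full 3-D collision checker would verify)
    if not boxes_overlap(system_box, obstacle_box):
        return False
    return obstacle_h >= object_h
```

Under the conservative rule, every footprint overlap blocks the path; under the height-aware rule, low obstacles become crossable, which is what changes the topology of the free space.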
\subsection{Real-World Experiments} \label{section_exp_real}
In this part, we extend the proposed motion planner to real-world multiple mobile manipulators. $v_{max}$, $t_{opt}$ and $\epsilon$ in Alg. \ref{alg4} are set to $0.1m/s$, $40ms$ and 0.5, respectively. $\boldsymbol{K}_{b,i}$ and $\boldsymbol{K}_{e,i}$ in the task-priority motion controller are set to $2 \times \boldsymbol{I}_{3 \times 3}$ and $2 \times \boldsymbol{I}_{6 \times 6}$, respectively. $f_r(.)$ in Eq. (\ref{eq_formation_rela}) is chosen as a sixth-order polynomial due to its continuity and simplicity. $\emph{thres}$ is set to 0.4 to couple the centralized and decentralized layers.
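The coefficients of the sixth-order polynomial $f_r(.)$ are not spelled out above; as an illustration of why such a polynomial is convenient, the sketch below solves for a degree-6 ramp that is $C^2$ at both ends, using the seventh coefficient for an extra interior condition. The midpoint value $0.4$ is an arbitrary design choice showing the extra degree of freedom; it is not the paper's actual $f_r(.)$.

```python
import numpy as np

def poly_row(x, d):
    """Row of the linear system: d-th derivative of sum_k a_k x^k at x."""
    row = np.zeros(7)
    for k in range(d, 7):
        coef = 1.0
        for j in range(d):
            coef *= (k - j)          # falling factorial k(k-1)...(k-d+1)
        row[k] = coef * x ** (k - d)
    return row

# p(0)=0, p(1)=1, p'(0)=p'(1)=0, p''(0)=p''(1)=0, p(0.5)=0.4:
# seven conditions pin down the seven coefficients of a degree-6 polynomial.
A = np.array([poly_row(0.0, 0), poly_row(1.0, 0),
              poly_row(0.0, 1), poly_row(1.0, 1),
              poly_row(0.0, 2), poly_row(1.0, 2),
              poly_row(0.5, 0)])
b = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.4])
a = np.linalg.solve(A, b)
p = lambda x: sum(a[k] * x ** k for k in range(7))
```

The vanishing first and second derivatives at both ends give a smooth, jerk-limited transition, which is the continuity property cited when choosing a sixth-order polynomial.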
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{figs/two-robot-scene.pdf}
\caption{Scene of the real-world transportation task by two robots.}
\label{fig_real_two_robots}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=8.7cm]{figs/two-robot-object-data.pdf}
\caption{(a)-(b) Trajectories of the object in the task shown in Fig. \ref{fig_real_two_robots}. (c)-(d) $\mu(\boldsymbol{q})$ and $r(\boldsymbol{q})$ of \emph{MM1} and \emph{MM2} during the task. }
\label{fig_real_two_robots_data}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=8.7cm]{figs/three-robot-scene.pdf}
\caption{Scene of the real-world transportation task by three robots.}
\label{fig_real_three_robots}
\end{figure}
\subsubsection{Experiment 1}
In this experiment, we conduct a cooperative transportation task with two robots in a real-world environment. Obstacles include a pedestrian and two slopes with varying angles and heights. The system should transport the object to the destination while avoiding static and dynamic obstacles. The decentralized layer optimizes the formation of the system in real-time and computes the desired joint motion commands with the task-priority motion controller. Snapshots during the task are shown in Fig. \ref{fig_real_two_robots}. The trajectories of the object, and $\mu(\boldsymbol{q})$ and $r(\boldsymbol{q})$ of \emph{MM1} and \emph{MM2}, are shown in Fig. \ref{fig_real_two_robots_data}. The motion process is explained as follows.
\begin{itemize}[leftmargin=*]
\item At \emph{t=0s}, the system is far away from obstacles. Both \emph{MM1} and \emph{MM2} start with an optimal configuration.
\item From \emph{t=5s} to \emph{t=10s}, \emph{MM1} meets the pedestrian twice. As a redundant robot, it is able to avoid the dynamic obstacle without affecting the transportation task. As a trade-off, $\mu(\boldsymbol{q})$ of \emph{MM1} is sacrificed (Fig. \ref{fig_real_two_robots_data}(c)). However, when the obstacle moves away, $\mu(\boldsymbol{q})$ recovers to a high value.
\item From \emph{t=28s} to \emph{t=35s}, the system adjusts \emph{z} and \emph{roll} of the object to suit the height and direction of \emph{slope1}. Thanks to the real-time formation optimizer, $\mu(\boldsymbol{q})$ and $r(\boldsymbol{q})$ of the two robots remain at a high value during this period.
\item From \emph{t=49s} to \emph{t=55s}, the height of the \emph{MM1}'s and \emph{MM2}'s end effector decreases and increases, respectively, which means that the system adjusts \emph{pitch} of the object to suit the height and direction of \emph{slope2}. During this period, \emph{MM2} gradually approaches the boundary of the workspace in \emph{z} direction, hence $\mu(\boldsymbol{q})$ decreases quickly (Fig. \ref{fig_real_two_robots_data}(d)).
\item From \emph{t=75s} to \emph{t=80s}, the system has crossed all obstacles and then lowers the object to finish the autonomous cooperative transportation task. Finally, the height of the \emph{MM2}'s end effector decreases, and $\mu(\boldsymbol{q})$ returns to a high value.
\end{itemize}
\subsubsection{Experiment 2}
In this experiment, we conduct the cooperative transportation task with three robots in a real-world environment. Obstacles include a pedestrian and a cuboid of a certain height. Other settings are similar to those of the former experiment. Snapshots during the task are shown in Fig. \ref{fig_real_three_robots}, and the pictorial motion is available in the attached video. From this and the former experiment, we can see that multiple robots are able to cooperate with each other to realize the transportation task in cluttered environments. Moreover, each robot can plan an optimal joint configuration and track the desired trajectories precisely.
\begin{table*}[ht]
\caption{Features of different motion planning frameworks when dealing with different constraints}
\label{table_different_frameworks}
\begin{center}
\scriptsize
\resizebox{\linewidth}{!}{
\begin{tabular}{c|ccccc}
\hline
Method & Closed-Chain & Formation & Redundancy & Obstacle-Avoidance & Other Key Features \\ \hline
Centralized Framework \cite{yakey2001randomized}\cite{berenson2011task}\cite{jaillet2017path}\cite{kim2016tangent} & \checkmark & \ding{55} & \ding{55} & bypass and cross & numerical projection; time-consuming \\
Decoupled Framework \cite{hekmatfar2014cooperative} & \checkmark & \ding{55} & \ding{55} & bypass and cross & conflict between different layers \\
Virtual Structure-based \cite{alonso2017multi} & \checkmark & \checkmark & \ding{55} & only bypass & only planar object's motion \\
\textbf{Hierarchical Framework} (\emph{ours}) & \checkmark & \checkmark & \checkmark & bypass and cross & \begin{tabular}[c]{@{}c@{}}time-efficient; compatibility between \\ different layers; spatial object's motion \end{tabular} \\ \hline
\end{tabular}}
\end{center}
\end{table*}
\subsection{Discussions}
As can be seen from Fig. \ref{fig_overview}, the proposed hierarchical framework is partially coupled and partially decoupled. Coupling occurs between the layers: the closed-chain constraint $f_{\sss C^3}(\boldsymbol{c})$ and the lower bound of the formation constraint $f_{\sss FC}(\boldsymbol{c})$ are guaranteed in the centralized layer, and $f_{\sss FC}(\boldsymbol{c})$ is then optimized while obeying $f_{\sss C^3}(\boldsymbol{c})$. Decoupling arises among the robots: the decentralized layer is fully distributed, and the redundancy of each robot can be explored independently in real time.
Compared with the centralized frameworks, which use the numerical projection operator to generate valid samples and plan the motion of the object and the robots simultaneously, the hierarchical feature can simplify the motion planning problem of the high-dimensional system, which is verified in Section \ref{section_exp_sim1}. Compared with the decoupled frameworks, which plan the motion of the object and the robots separately, coupling the two layers can avoid conflict between them and enhance the performance of the motion planner significantly, which is verified in Section \ref{section_exp_sim2}. Compared with the virtual structure-based algorithms, the proposed motion planner exhibits diversified and efficient obstacle-avoidance skills, such as bypassing and crossing, which is also verified in Section \ref{section_exp_sim3}. The features of different frameworks when dealing with the closed-chain, formation, redundancy and obstacle-avoidance constraints are summarized in Table IV.
Another advantage of the proposed motion planner is $\emph{CM}$, whose effects are twofold. On the one hand, it accelerates the centralized layer, which is verified by comparing $\emph{IKCL}$ with $\emph{CMCL}$. On the other hand, it helps the optimization algorithm determine the seed, which speeds up the decentralized layer and effectively avoids local minima. $\emph{CM}$ is essential when the motion planning time is a key metric. However, the disadvantage of $\emph{CM}$ is that extra storage space is needed, and it has to be loaded from the local hard disk when the program starts. In the future, $\emph{CM}$ could be saved on a cloud server and treated as an infrastructure to help boost the motion planner.
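To make the role of $\emph{CM}$ concrete, the sketch below mimics its two uses (validity check and seed lookup) with a coarse pose-indexed table. The grid resolution, key layout and stored fields are illustrative assumptions on our part, not the actual data structure of $\emph{CM}$.

```python
class ConfigurationMap:
    """Toy configuration-map cache: maps a discretized object pose to
    (feasibility flag, seed joint configuration). Resolution and key
    layout are illustrative assumptions, not the paper's actual data."""

    def __init__(self, resolution=0.05):
        self.resolution = resolution
        self.table = {}  # grid key -> (feasibility flag, seed configuration)

    def _key(self, pose):
        # snap a continuous pose to its grid cell
        return tuple(int(round(p / self.resolution)) for p in pose)

    def store(self, pose, feasible, seed=None):
        self.table[self._key(pose)] = (feasible, seed)

    def query(self, pose):
        # O(1) lookup replaces an expensive IK / projection call online
        return self.table.get(self._key(pose), (None, None))

cm = ConfigurationMap()
cm.store([0.10, 0.20, 0.50], True, seed=[0.0, 0.3, -0.1])
feasible, seed = cm.query([0.11, 0.19, 0.51])  # snaps to the same cell
```

A real $\emph{CM}$ would be precomputed offline over the reachable pose set (and, as discussed above, could live on a cloud server), which is what makes the online query essentially free.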
The proposed framework can be applied to different numbers of heterogeneous mobile manipulators. To the best of our knowledge, such complex applications have not been seen before in the robotics community. However, there is still a long way to go to deal with multiple heterogeneous robots. For example, the speed, payload, sensing capability, and kinematic and dynamic characteristics of each robot may be different, hence the planner should allocate proper motion for each robot. In addition, some robots may crash while working, and the motion planner should monitor the status of each robot and prepare emergency measures for such cases.
Another interesting ability for multiple mobile manipulators is avoiding dynamic obstacles. A simple case is shown in the experiment. However, dynamic obstacles may be complex. For example, they may be fast and lock the motion of all robots simultaneously. In such cases, what strategy to take and how to balance the motion of different robots are still open questions.
\section{CONCLUSIONS} \label{section_conclusion}
In this paper, we proposed a hierarchical framework to deal with the motion planning of multiple redundant mobile manipulators. The motion of the object is planned offline in the centralized layer. In the decentralized layer, the redundancy of each robot is explored independently in real time to achieve the desired formation and a dexterous configuration. Closed-chain, obstacle-avoidance and the lower bound of the formation constraints are checked in the centralized layer to ensure that the object's motion is executable in the decentralized layer. The hierarchical framework simplifies the complex motion planning problem while not violating the constraints, and its superiority is verified in the experiments. In addition, a novel tool named $\emph{CM}$ is introduced to boost the performance of the motion planner. The validity of the object's pose in the centralized layer can be checked quickly by looking up $\emph{CM}$, and the seed of the optimization algorithm in the decentralized layer can also be determined by querying it. The proposed framework significantly outperforms the benchmark motion planners and can be applied to different numbers of heterogeneous mobile manipulators, as verified in the simulated and real-world experiments.
\section*{APPENDIX}
\subsection{Projection Operation for the Centralized Frameworks}
\IncMargin{1em}
\begin{algorithm}
\caption{\emph{Projection Through Iteration}}\label{alg_appen}
\SetKwProg{Fn}{Function}{}{}
\Fn{\texttt{\emph{projection}}($\boldsymbol{c}, \varepsilon$)}
{
\tcp{constrained error by Eq.(\ref{eq_ccc1})}
$\Delta\boldsymbol{x}$ $\leftarrow$ $f_{\sss C^3}(\boldsymbol{c})$\;
\While{$\|\Delta\boldsymbol{x}\| > \varepsilon$}
{
$\Delta\boldsymbol{c} \leftarrow \boldsymbol{J}_{\sss C^3}^{\dagger}(\boldsymbol{c})\Delta\boldsymbol{x}$\;
$\boldsymbol{c} \leftarrow \boldsymbol{c} - \Delta\boldsymbol{c}$\;
$\Delta\boldsymbol{x}$ $\leftarrow$ $f_{\sss C^3}(\boldsymbol{c})$\;
}
}
\end{algorithm}\DecMargin{1em}
Projection is the basis of the centralized frameworks \cite{yakey2001randomized}\cite{berenson2011task}\cite{jaillet2017path}\cite{kim2016tangent} in Section \ref{section_exp_sim1}. Alg. \ref{alg_appen} shows the iterative procedure, which uses the pseudo-inverse of the Jacobian matrix of the closed-chain constraint function $f_{\sss C^3}(\boldsymbol{c})$.
Given a random sample $\boldsymbol{c}$, Alg. \ref{alg_appen} computes the constraint error according to Eq.(\ref{eq_ccc1}) and compares it with the maximum allowed error $\varepsilon$. While the error is larger than $\varepsilon$, the pseudo-inverse of the constraint Jacobian matrix $\boldsymbol{J}_{\sss C^3}^{\dagger}(\boldsymbol{c})$ is calculated and the random sample is projected by a gradient descent step.
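The loop of Alg. \ref{alg_appen} can be sketched in a few lines of Python. The unit-circle constraint below is only a toy stand-in for $f_{\sss C^3}(\boldsymbol{c})$, whose actual Jacobian is derived in the next subsection.

```python
import numpy as np

def project(c, f, jac, eps=1e-8, max_iter=100):
    """Iteratively project a sample c onto the manifold f(c) = 0 using the
    Moore-Penrose pseudo-inverse of the constraint Jacobian."""
    dx = f(c)
    for _ in range(max_iter):
        if np.linalg.norm(dx) <= eps:
            break
        c = c - np.linalg.pinv(jac(c)) @ dx  # gradient-descent step
        dx = f(c)
    return c

# toy stand-in for the closed-chain constraint: the unit circle x^2 + y^2 = 1
f = lambda c: np.array([c[0] ** 2 + c[1] ** 2 - 1.0])
jac = lambda c: np.array([[2.0 * c[0], 2.0 * c[1]]])
c = project(np.array([3.0, 4.0]), f, jac)  # converges to (0.6, 0.8)
```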
\subsection{Constrained Jacobian Matrix}
In this part, we derive the constrained Jacobian matrix $\boldsymbol{J}_{\sss C^3}(\boldsymbol{c}) \in \mathbb{R}^{6n \times \sum_{i=1}^{n}{(n_{a,i} + n_{b,i})+6}}$ of the system.
\begin{equation}
\label{eq_app_1}
\boldsymbol{J}_{\sss C^3}(\boldsymbol{c}) = \frac{\partial{f_{\sss C^3}(\boldsymbol{c})}}{\partial{\boldsymbol{c}}}
= \frac{\partial{\boldsymbol{E}} - \partial{\boldsymbol{G}}}{\partial{\boldsymbol{c}}}
\end{equation}
According to the definitions of $\boldsymbol{E}$ and $\boldsymbol{G}$ in Section \ref{section_model_ccc}, $\boldsymbol{E}$ does not depend on $\boldsymbol{t}_{obj}^{w}$, and $\boldsymbol{G}$ does not depend on $\boldsymbol{q}_i$. Therefore, Eq. (\ref{eq_app_1}) can be simplified to Eq. (\ref{eq_app_2}).
\begin{equation}
\label{eq_app_2}
\frac{\partial{\boldsymbol{E}} - \partial{\boldsymbol{G}}}{\partial{\boldsymbol{c}}} = \left( \frac{\partial{\boldsymbol{E}}}{\partial{\boldsymbol{q}_1}}, ...,
\frac{\partial{\boldsymbol{E}}}{\partial{\boldsymbol{q}_i}}, ...,
\frac{\partial{\boldsymbol{E}}}{\partial{\boldsymbol{q}_n}},
-\frac{\partial{\boldsymbol{G}}}{\partial{\boldsymbol{t}_{obj}^w}} \right)
\end{equation}
where $\frac{\partial{\boldsymbol{E}}}{\partial{\boldsymbol{q}_i}}$ and $\frac{\partial{\boldsymbol{G}}}{\partial{\boldsymbol{t}_{obj}^w}}$ are defined in Eq. (\ref{eq_app_3}) and Eq. (\ref{eq_app_4}).
\begin{align}
\label{eq_app_3}
& \frac{\partial{\boldsymbol{E}}}{\partial{\boldsymbol{q}_i}} = \left( \frac{\partial{f_k(\boldsymbol{q}_1)}}{\partial{\boldsymbol{q}_i}}, ...,
\frac{\partial{f_k(\boldsymbol{q}_i)}}{\partial{\boldsymbol{q}_i}}, ...,
\frac{\partial{f_k(\boldsymbol{q}_n)}}{\partial{\boldsymbol{q}_i}} \right)^T \\
\label{eq_app_4}
& \frac{\partial{\boldsymbol{G}}}{\partial{\boldsymbol{t}_{obj}^w}} = \left( \frac{\partial{\boldsymbol{t}_{g,1}^w}}{\partial{\boldsymbol{t}_{obj}^w}}, ...,
\frac{\partial{\boldsymbol{t}_{g,i}^w}}{\partial{\boldsymbol{t}_{obj}^w}}, ...,
\frac{\partial{\boldsymbol{t}_{g,n}^w}}{\partial{\boldsymbol{t}_{obj}^w}} \right)^T
\end{align}
where $\frac{\partial{f_k(\boldsymbol{q}_j)}}{\partial{\boldsymbol{q}_i}}$ is given by Eq. (\ref{eq_app_5}).
\begin{equation}
\label{eq_app_5}
\frac{\partial{f_k(\boldsymbol{q}_j)}}{\partial{\boldsymbol{q}_i}} = \begin{cases}
\boldsymbol{J}_i(\boldsymbol{q}_{i}), & \text{if } i = j \\
\boldsymbol{O}_{6 \times (n_{a,i}+n_{b,i})}, & \text{if } i\neq j
\end{cases}
\end{equation}
where $\boldsymbol{J}_i(\boldsymbol{q}_{i})$ is the analytical Jacobian matrix of the $i$th \emph{MM}.
Suppose the constant homogeneous transformation of frame $O_{g,i}X_{g,i}Y_{g,i}Z_{g,i}$ relative to frame $O_{obj}X_{obj}Y_{obj}Z_{obj}$ is
$$ \boldsymbol{X}_{g,i}^{obj} = \left(
\begin{array}{cc}
\boldsymbol{R}_{g,i}^{obj} & \boldsymbol{p}_{g,i}^{obj} \\
\boldsymbol{O}_{1\times3} & 1 \\
\end{array}
\right).$$
Suppose the velocity of a frame is $\boldsymbol{\xi} = (\boldsymbol{v}^T, \boldsymbol{\omega}^T)^T$, in which $\boldsymbol{v}$ and $\boldsymbol{\omega}$ represent the linear and angular velocity of the frame. According to \cite{caccavale2016cooperative}, Eq. (\ref{eq_app_6}) holds between $\boldsymbol{\xi}_{g,i}^w$ and $\boldsymbol{\xi}_{obj}^w$.
\begin{equation}
\label{eq_app_6}
\boldsymbol{\xi}_{g,i}^w = \left(
\begin{array}{cc}
\boldsymbol{I}_{3\times3} & -S(\boldsymbol{p}_{g,i}^{w}) \\
\boldsymbol{O}_{3\times3} & \boldsymbol{I}_{3\times3} \\
\end{array}
\right)
\boldsymbol{\xi}_{obj}^w
\end{equation}
where $S(.)$ is the skew-symmetric matrix operator. In addition, we use a minimal representation of the orientation in $\boldsymbol{t}$, hence Eq. (\ref{eq_app_7}) holds between $\dot{\boldsymbol{t}}$ and $\boldsymbol{\xi}$.
\begin{equation}
\label{eq_app_7}
\boldsymbol{\xi} = \left(
\begin{array}{cc}
\boldsymbol{I}_{3\times3} & \boldsymbol{O}_{3\times3} \\
\boldsymbol{O}_{3\times3} & B(\boldsymbol{\alpha}) \\
\end{array}
\right)
\dot{\boldsymbol{t}}
\end{equation}
When \emph{roll-pitch-yaw} angle is used to represent the orientation, we have $\boldsymbol{\alpha} = (\phi, \psi, \theta)^T$ and $B(\boldsymbol{\alpha}) = \left(
\begin{array}{ccc}
c_{\psi}c_{\theta} & -s_{\theta} & 0 \\
c_{\psi}s_{\theta} & c_{\theta} & 0 \\
-s_{\psi} & 0 & 1 \\
\end{array}
\right)$,
where $s$ and $c$ represent the sine and cosine operators, respectively.
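As a numerical cross-check of these definitions, $S(\cdot)$ and $B(\boldsymbol{\alpha})$ can be written out directly. The snippet follows the \emph{roll-pitch-yaw} convention above with $\boldsymbol{\alpha}=(\phi,\psi,\theta)^T$, and is only a sketch of the quantities entering Eq. (\ref{eq_app_7}).

```python
import numpy as np

def skew(p):
    """Skew-symmetric operator S(p), so that skew(p) @ v == np.cross(p, v)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def B(alpha):
    """Maps roll-pitch-yaw rates to angular velocity; alpha = (phi, psi, theta).
    Note that the printed B(alpha) does not depend on phi."""
    phi, psi, theta = alpha
    c, s = np.cos, np.sin
    return np.array([[c(psi) * c(theta), -s(theta), 0.0],
                     [c(psi) * s(theta),  c(theta), 0.0],
                     [-s(psi),            0.0,      1.0]])

def twist_from_pose_rate(t_dot, alpha):
    """xi = blkdiag(I, B(alpha)) @ t_dot: the relation between the
    pose-vector rate t_dot and the twist xi = (v, omega)."""
    E = np.block([[np.eye(3), np.zeros((3, 3))],
                  [np.zeros((3, 3)), B(alpha)]])
    return E @ t_dot
```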
Combining Eq. (\ref{eq_app_6}) and Eq. (\ref{eq_app_7}), $\frac{\partial{\boldsymbol{t}_{g,i}^w}}{\partial{\boldsymbol{t}_{obj}^w}}$ can be derived in Eq. (\ref{eq_app_8}).
\begin{equation}
\label{eq_app_8}
\frac{\partial{\boldsymbol{t}_{g,i}^w}}{\partial{\boldsymbol{t}_{obj}^w}} = \boldsymbol{W}_{i} =
\left(
\begin{array}{cc}
\boldsymbol{I}_{3\times3} & -S(\boldsymbol{p}_{g,i}^{w})B(\boldsymbol{\alpha}_{obj}^w) \\
\boldsymbol{O}_{3\times3} & -B^{-1}(\boldsymbol{\alpha}_{g,i}^w)B(\boldsymbol{\alpha}_{obj}^w) \\
\end{array}
\right)
\end{equation}
Combining Eq. (\ref{eq_app_1})-(\ref{eq_app_5}) and Eq. (\ref{eq_app_8}), $\boldsymbol{J}_{\sss C^3}(\boldsymbol{c})$ is finally given in Eq. (\ref{eq_app_9}).
\begin{equation}
\label{eq_app_9}
\boldsymbol{J}_{\sss C^3}(\boldsymbol{c}) = \left(
\begin{array}{cccc}
\boldsymbol{J}_1(\boldsymbol{q}_{1}) & ... & \boldsymbol{O} & -\boldsymbol{W}_{1} \\
\vdots & \ddots & \vdots & \vdots \\
\boldsymbol{O} & ... & \boldsymbol{J}_n(\boldsymbol{q}_{n}) & -\boldsymbol{W}_{n} \\
\end{array}
\right)
\end{equation}
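Numerically, Eq. (\ref{eq_app_9}) is a simple block assembly: block-diagonal per-robot Jacobians plus a stacked last column of $-\boldsymbol{W}_i$ blocks. In the sketch below the per-robot Jacobians and $\boldsymbol{W}_i$ blocks are random placeholders standing in for the actual kinematic quantities.

```python
import numpy as np

def assemble_constraint_jacobian(robot_jacs, W_blocks):
    """Assemble the closed-chain constraint Jacobian: block-diagonal
    per-robot Jacobians on the left, stacked -W_i blocks in the last
    six columns (the object-pose part)."""
    total = sum(J.shape[1] for J in robot_jacs) + 6
    rows, offset = [], 0
    for J, W in zip(robot_jacs, W_blocks):
        row = np.zeros((6, total))
        row[:, offset:offset + J.shape[1]] = J
        row[:, -6:] = -W
        rows.append(row)
        offset += J.shape[1]
    return np.vstack(rows)

rng = np.random.default_rng(0)
jacs = [rng.standard_normal((6, 9)) for _ in range(3)]  # 3 robots, 9 DoF each
Ws = [rng.standard_normal((6, 6)) for _ in range(3)]
J = assemble_constraint_jacobian(jacs, Ws)              # shape (18, 33)
```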
\bibliographystyle{IEEEtran}
\section{Introduction}
The field of Gamma Ray Bursts (GRBs) was surrounded for many
years by a sort of fascinating ``aura'', being for so long
a complete enigma.
Recent years saw a dramatic improvement of our knowledge
about them, especially about their phenomenology, thanks
to the data gathered by the {\it Compton Gamma Ray Observatory (CGRO)}
\cite{fishman1995}, {\it Beppo}SAX \cite{frontera2000},
{\it HETE II} \cite{lamb2004}, {\it Swift} \cite{gehrels2009}
and now {\it Fermi} \cite{atwood2009} satellites.
The theoretical work, especially in the '90s,
set the stage for what is now considered a basic standard model
to explain the bulk of what we see (for reviews see e.g.
\cite{vanparadijs2000},
\cite{meszaros2002},
\cite{zhang2004}, \cite{piran2004}, \cite{meszaros2006}).
According to this standard scenario, a colossal injection of energy in a small
volume lasts for a short time.
The gravitational energy of a solar mass is liberated in a few seconds,
in a volume having a radius of a few Schwarzschild radii.
Black--body temperatures above $10^{10}$ K are then reached, and electron--positron
pairs are produced.
The mixture of photons and matter -- the fireball -- expands due to its
internal pressure accelerating the fireball to relativistic velocities.
The Lorentz factor increases as
$\Gamma\propto R$ ($R$ is the distance from the black hole)
until almost all the internal energy is converted into bulk kinetic motion.
A little ``fossil" radiation remains, but it carries a small fraction
of the initial energy.
There is then the need to reconvert the kinetic energy back to
radiation.
The fact that the spikes of emission during the prompt phase do not
lengthen with time suggests that these episodes occur at the
same distance from the black hole.
Inhomogeneities in the jet, with regions going at different
$\Gamma$--factors, produce shocks internal to the relativistic flow.
These shocks accelerate electrons and enhance magnetic fields.
Synchrotron radiation is then the natural candidate to explain
the radiation of the prompt phase.
But it faces a severe problem: if electrons produce $\sim$MeV synchrotron photons
with a reasonable efficiency, they must inevitably cool in a
time \cite{ghisellini2000}:
\begin{equation}
t_{\rm cool} \, =\, 10^{-7} \, { \epsilon_{\rm e}^3 (\Gamma^\prime -1)^3
(\Gamma/100) \over
\nu_{\rm MeV}^2 (1+U_{\rm r}+U_{\rm B}) (1+z)}\,\,\,\, {\rm s}
\end{equation}
where $U_{\rm r}$ and $U_{\rm B}$ are the radiation and magnetic energy
densities, $\epsilon_{\rm e}$ is the fraction of the dissipated energy
given to electrons, and $\Gamma^\prime$ is the relative Lorentz factor
between two colliding shells.
This time is shorter than any conceivable dynamical time and any
detector exposure time.
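Plugging fiducial numbers into the cooling-time formula above makes the point quantitative. In the sketch below $U_{\rm r}$ and $U_{\rm B}$ are treated as dimensionless ratios, consistent with the normalization of the formula, and all parameter values are purely illustrative.

```python
def t_cool(eps_e=1.0, gamma_rel=2.0, Gamma=100.0, nu_MeV=1.0,
           U_r=0.0, U_B=0.0, z=1.0):
    """Synchrotron cooling time (in seconds) of the formula above.
    U_r and U_B are treated here as dimensionless ratios, and the
    default parameter values are purely illustrative."""
    return (1e-7 * eps_e ** 3 * (gamma_rel - 1.0) ** 3 * (Gamma / 100.0)
            / (nu_MeV ** 2 * (1.0 + U_r + U_B) * (1.0 + z)))

print(t_cool())  # 5e-08 s: far below any dynamical or exposure time
```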
The synchrotron spectrum of a cooling population of relativistic electrons
cannot be harder than $F(\nu)\propto \nu^{-1/2}$, corresponding
to a photon spectrum $\dot N(\nu)\propto \nu^{-3/2}$, while the vast majority
of the observed spectra, below their peaks, is much harder, as illustrated
by Fig. \ref{isto}, showing BATSE (onboard {\it CGRO}) and the recent
{\it Fermi} results \cite{nava2010}.
This slope is substantially softer than the
``synchrotron line of death" \cite{preece1998},
$\dot N(\nu)\propto \nu^{-2/3}$, of a non--cooling electron population
with a low energy cut--off.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[height=8.cm, width=7.5cm]{isto_alpha.ps}
& \includegraphics[height=8.cm, width=7.5cm]{isto_epeak.ps}\\
\end{tabular}
\caption{The distributions of low energy spectral indices
$\alpha$ (left) and peak energy $E_{\rm peak}$ (right)
of BATSE and {\it Fermi}/GBM bursts.
The spectral index $\alpha$ is the photon spectral index of the
spectrum below the peak energy $E_{\rm peak}$.
The vertical line (left panel) shows the cooling limit ($\alpha=-3/2$), while the
dashed line shows the low energy synchrotron slope of a non cooling electron
population with a low energy cut--off ($\alpha=-2/3$).
Adapted from \cite{nava2010}.
}
\label{isto}
\end{figure}
\section{Seeking alternatives}
{\bf Re--acceleration --}
The first obvious possibility coming to mind is that the electrons
are re--accelerated, so that they can remain at the same energy.
In the internal shock scenario this is not possible, since
electrons are kicked to high energies only once.
Another critical problem is about the global energy budget.
If I keep the radiating electrons hot by refilling their energy,
I can do so only for a few (not all) of the electrons present.
In standard conditions, I can do so for only one in a million electrons.
\vskip 0.2 cm
\noindent
{\bf Jitter radiation --}
Small scale changes in direction of the magnetic field can induce the so--called
jitter radiation, similar but not identical to the synchrotron one.
However, if the process is efficient, as it should be, the electrons
cool, and the predicted spectrum is steep.
\vskip 0.2 cm
\noindent
{\bf Self Compton --}
The cooling is very fast anyway \cite{ghisellini2000}, and the predicted first order
self Compton spectrum is even steeper than the synchrotron one:
$F_{\rm SC}(\nu)\propto \nu^{-3/4}$ in the Thomson regime.
If most of the scatterings occur in the
Klein Nishina regime, then the electron distribution may flatten,
and the synchrotron radiation is harder \cite{bosnjak2009},
but this necessarily implies that the self Compton process is dominating,
in the GeV energy band and beyond.
The fact that only a few (\lower.5ex\hbox{\ltsima} 10\%) GRBs are detected by {\it Fermi}/LAT at high energies
then suggests that this process is not a general solution.
\vskip 0.2 cm
\noindent
{\bf External Compton --}
The seeds for the Compton process can be
``fossil" photons remaining from the acceleration phase, or any other
radiation produced externally to the jet.
But the electrons will cool also in this case, if the process is efficient,
making a $\nu^{-1/2}$ spectrum.
\vskip 0.2 cm
\noindent
{\bf Quickly decaying magnetic fields --}
Electrons quickly going away from the acceleration site could emit
in a region of smaller magnetic field, then reducing their
synchrotron losses (and power).
The observed synchrotron spectrum is produced when the
electrons are ``young" and not cooled.
However, once they are out of the magnetised region,
they would inevitably and efficiently cool by self Compton, which is bound
to become the dominant process, with a corresponding
steep spectrum \cite{ghisellini2000}.
\vskip 0.2 cm
\noindent
{\bf Adiabatic losses ---}
The cooling time is much shorter than any conceivable dynamical time
of the entire fireball.
On the other hand, we could have many small regions
expanding quickly enough to make electrons loose energy
by adiabatic, not radiative, losses.
This may also be accompanied by a decreased magnetic field.
Needless to say, this process is by construction very inefficient.
\vskip 0.2 cm
\noindent
{\bf Quasi--thermal Comptonization ---}
Keeping the internal shock idea, but abandoning the requirement
that the shock accelerates electrons only once, we can envisage
a scenario where all electrons present in the emitting region
(i.e. the shell) are maintained hot by some unspecified process
\cite{ghisellini1999}.
The equilibrium between heating and cooling fixes the typical
energy of these electrons.
If the heating rate is simply the available dissipated energy divided by the
interaction time between two shells, one arrives at typical electron
energies that are sub--relativistic.
The main radiative process in this case is quasi--thermal Comptonization,
using as seed photons either the self absorbed synchrotron photons
produced by the electrons themselves, or the ``fossil" photons.
The Comptonization parameter $y$ becomes of the order of 10 or so,
large enough to produce a hard spectrum.
Expansion of the fireball during the Comptonization process
may quench the process itself (expansion introduces a general radial
motion for both photons and electrons; but see \cite{lazzati2009} for
a non expanding case resulting from recollimation).
Furthermore, the typical observed energy peak of the spectrum
could be too high, if the electron ``temperatures" in the comoving frame
are above $\sim 10$ keV or so.
\vskip 0.3 cm
The above ideas have been proposed to occur within the
internal shock scenario.
In the following I will list more radical ideas, that no longer
assume that the dissipation process is due to
internal shocks.
\vskip 0.3 cm
\noindent
{\bf Bulk Compton ---}
The association of long GRBs with supernovae led us
(\cite{lazzati2000}, \cite{ghisellini2000b})
to propose an alternative scenario for the production of the prompt phase
emission, namely to make use of the dense radiation field
produced by the funnel or the progenitor star (that is about to explode)
or by the young and hot remnants (if the supernova explosion
precedes the GRB).
The process can be very efficient, especially for large $\Gamma$--factors.
There is no need of shocks and no need of a transfer of energy from
protons to electrons.
Being so efficient, it is conceivable that the fireball decelerates,
leaving less energy to be dissipated during the afterglow.
This would solve another puzzle concerning GRBs.
One of the problems it faces is that the fireball has a large scattering
optical depth, and so uses a large fraction (if not all) of the seed photons.
Furthermore, variability in this model should correspond to emission by
different shells, but there is a minimum ``refilling time" needed to
replace the scattered seed photons with new ones.
Also, the similarity of the spectra of long and short GRBs \cite{ghirlanda2009}
(if short bursts are not associated to a supernova) makes
the bulk Compton idea questionable.
\vskip 0.2 cm
\noindent
{\bf Deep impacts ---}
The initial fireball must punch a funnel through the progenitor star.
If the opening angle of the jetted fireball is $\theta = 0.1\,{\rm rad} \sim 5^\circ$, then
each one of the two oppositely directed fireballs must push
a mass $M\sim 0.5 (\theta/0.1)^2 (M_*/20\, M_\odot)$ solar masses
out of the progenitor star of mass $M_*$.
Once the funnel is clean, the fireball may still interact with some
material leftover from the previous (``piercing") phase, at a distance
of the same order of the star radius \cite{ghisellini2007}.
Moreover, shear instabilities between the fireball and its cocoon
(while the fireball is moving inside the funnel) may give important dissipation
\cite{thompson2007}, \cite{lazzati2009}.
The efficiency can be large, especially when the fireball
collides with leftover material just outside the star surface,
because it is a collision with matter that is initially almost at rest.
If these collisions occur when the scattering optical depths are large, then
the predicted spectrum has time to thermalize, and then it is a
black--body.
Under the assumption of black--body spectrum and other
specific conditions (even if they appear somewhat ad hoc),
\cite{thompson2007} showed that it is even possible to reproduce the ``Amati"
relation \cite{amati2002}, namely the correlation between the observed energetics
of the prompt radiation phase and the peak energy of the $\nu F_\nu$ spectrum
of the total prompt emission.
The problem with these interesting attempts is the presence of a black--body
component in the prompt phase spectrum.
While some GRBs do have black--body like spectra up to a few seconds
from the trigger \cite{ghirlanda2003}, the vast majority do not.
Fits with a black--body plus a power law can be successfully
applied to many more bursts \cite{ryde2005}, but the resulting power law
is rather soft.
Therefore, even if in the BATSE energy range one obtains a good fit,
the extrapolation of the power law component to lower frequencies results
in a large flux, larger than what is observed when we do have lower frequency data,
as was the case for the few GRB observed both by BATSE and by the Wide Field Camera
(WFC) onboard {\it Beppo}SAX \cite{ghirlanda2007}.
Moreover, for all those cases, a cut--off power law (without the black--body component)
not only is a good fit in the BATSE energy range, but also its extrapolation
to lower frequencies matches perfectly the WFC data.
\vskip 0.2 cm
\noindent
{\bf Reconnection ---}
The fireball could be magnetically dominated, dissipating
part of its magnetic energy through reconnection,
as envisaged in \cite{giannios2008}.
If this kind of energy dissipation lasts for a relatively
long time, then the electrons would be
reaccelerated while cooling, and the dominant radiation
process could be similar to the quasi--thermal Comptonization
mechanism.
The several spikes/pulses present in the light curve of the prompt
emission phase should correspond to different reconnection events.
This idea is attractive, and surely worth investigating further.
The problem with it is that dissipation events with
different properties (i.e. energies, electron content, sizes, durations)
would correspond to ``Christmas tree'' variations, namely each spike/pulse
should behave independently of the others.
There should be no well defined trends in the spectral properties
of the prompt emission.
Instead, these trends are present, and are rather strong.
In fact, both \cite{firmani2009} (for {\it Swift} bursts) and
\cite{ghirlanda2009} (for {\it Fermi}/GBM bursts)
found strong correlations between the peak energy $E_{\rm peak}$
and the luminosity {\it within the prompt emission of single bursts}.
Fig. \ref{090424}
illustrates the point by showing the $E_{\rm peak}$--Luminosity
correlation for GRB 090424.
The slope and normalisation of these correlations are the same as
those found considering different bursts, taking for each
of them the peak luminosity and the (time averaged) $E_{\rm peak}$
(the so called Yonetoku correlation, see \cite{yonetoku2004},
and \cite{ghirlanda2009} for an update).
These trends give solidity and reality to the
spectral--energy correlations found in recent years, demonstrating that
{\it they are not}
the result of selection effects.
\begin{figure}
\includegraphics[height=7cm, width=16cm]{090424.ps}
\caption{The left panel shows the {\it Fermi}/GBM light curve of GRB 090424.
Vertical dashed lines indicate the time bins for
which the spectrum was analysed.
The right panel shows $E_{\rm peak}$ vs luminosity for the different
time bins. The solid and dashed dark lines indicate the slope and normalisation
of the Yonetoku relation found considering different GRBs.
Different symbols indicate the rising and decaying phases of the different pulses.
Adapted from \cite{ghirlanda2009}.
}
\label{090424}
\end{figure}
\section{Conclusion}
We still do not know what is the dominant radiation process
of the prompt phase emission.
\begin{theacknowledgments}
I thank G. Ghirlanda for discussions.
This work was partially funded by a 2007 PRIN--INAF grant.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}
An algebraic function is a quantity $y$ for which there are
polynomials $u_0,\dots,u_d$, not all zero, such that
\[
u_0(x) + u_1(x)y + \cdots + u_d(x)y^d = 0.
\]
A D-finite function is a quantity $y$ for which there are
polynomials $p_0,\dots,p_r$, not all zero, such that
\[
p_0(x)y + p_1(x)y' + \cdots + p_r(x)y^{(r)} = 0.
\]
As recognized by Abel, every algebraic function is also D-finite, and it is not
hard to construct a differential equation from a known polynomial equation.
The other direction is much more difficult, as a given differential equation
may or may not have any algebraic solutions.
The problem to decide for a given differential equation whether it admits only
algebraic solutions has received a lot of attention since the 19th century,
when Schwarz, Klein, Fuchs and others studied the problem for equations
with $r=2$~\cite{gray86}, but even this special case was not fully understood until
Baldassari and Dwork~\cite{baldassari79} gave a complete decision procedure in 1979.
Only a year later, Singer~\cite{singer79} offered an algorithm that applies to equations
of arbitrary order~$r$. His algorithm is, however, only of theoretical interest, as it
relies on solving a nonlinear system of algebraic equations whose number of variables
is determined by a group-theoretic bound involving the term $(49r)^{r^2}$. This is
far from feasible, even for $r=2$. Kovacic's algorithm~\cite{kovacic86} can determine the
presence of algebraic solutions in a more reasonable time for $r=2$, but the problem
remains difficult for $r\geq3$.
If a differential equation has only algebraic solutions, their minimal polynomials
are not difficult to find. One way is to compute a truncated power series solution
of the differential equation and then use linear algebra or Hermite-Pad\'e approximation~\cite{beckermann94}
to find a candidate annihilating polynomial. From the first $N$ terms of a series
solution, we can reliably detect annihilating polynomials of degrees $d_x,d_y$ with
$(d_x+1)(d_y+1)<N$. The correctness of such a candidate can
be checked by computing the differential equation satisfied by the solution of the
candidate equation and comparing it with the input equation. If they do not match,
or if no candidate equation is found, repeat the procedure with a higher truncation
order $N$ and higher degrees $d_x,d_y$. Eventually, the correct minimal polynomial will be found.
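As a toy instance of this guessing procedure, the sketch below recovers the minimal polynomial $1-y+xy^2$ of the Catalan generating function from its first series coefficients. Exact linear algebra is done with sympy, and the degree bounds $d_x=1$, $d_y=2$ are assumed to be known in advance.

```python
from sympy import Matrix, Rational, binomial

N, dx, dy = 10, 1, 2                     # truncation order and degree bounds
cat = [Rational(binomial(2 * n, n), n + 1) for n in range(N)]  # Catalan numbers

def cauchy(a, b):
    # truncated product of two power series
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

one = [Rational(1)] + [Rational(0)] * (N - 1)
powers = [one, cat, cauchy(cat, cat)]    # series of y^0, y^1, y^2

# column (i, j) holds the series coefficients of x^i * y^j
cols = []
for j in range(dy + 1):
    for i in range(dx + 1):
        cols.append([Rational(0)] * i + powers[j][:N - i])

ns = Matrix(cols).T.nullspace()          # u with sum u_ij x^i y^j = O(x^N)
vec = [u / list(ns[0])[-1] for u in ns[0]]
# vec = [1, 0, -1, 0, 0, 1] encodes the candidate 1 - y + x*y^2
```

Verifying such a candidate then proceeds as described above, by recomputing the differential equation satisfied by the algebraic function it defines.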
In Sect.~\ref{sec:expand-search-space} we give an alternative method which can decide
for a given $d_y$ whether all solutions are algebraic with a minimal polynomial of degree
at most~$d_y$, regardless of the degree $d_x$ of the polynomial coefficients of the
minimal polynomial. This method has the advantage that $d_x$ need not be guessed in
advance, but it still requires to guess~$d_y$.
We are thus led to the question how we can detect with a reasonable amount of
computation time that a differential equation has at least one transcendental solution.
There are indeed several things that are worth trying.
For example, if a differential equation has a logarithmic or an exponential singularity,
it cannot have only algebraic solutions.
This test was applied for example in order to prove transcendence of the generating function
for Kreweras walks with interacting boundaries~\cite{bostan21}.
Another popular test is to determine the asymptotic behaviour of the series coefficients
of a solution of the differential equation.
If it is not of the form $\phi^n n^\alpha$ with $\alpha\in\set Q\setminus\{-1,-2,-3,\dots\}$,
this also proves the presence of a transcendental solution~\cite{flajolet09}.
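A crude numerical version of this asymptotic test: if $a_n\sim\phi^n n^\alpha$, then $a_{n+1}a_{n-1}/a_n^2=(1-1/n^2)^\alpha$ up to lower-order terms, so $n^2(1-a_{n+1}a_{n-1}/a_n^2)$ converges to $\alpha$ (with $O(1/n)$ corrections from subleading terms). The sketch below illustrates this on the Catalan numbers, where $\alpha=-3/2$.

```python
from fractions import Fraction
from math import comb

def alpha_estimate(a, n):
    """If a_k ~ phi^k * k^alpha, then a_{n+1}a_{n-1}/a_n^2 = (1 - 1/n^2)^alpha
    up to lower-order terms, so n^2 * (1 - ratio) tends to alpha as n grows."""
    ratio = Fraction(a[n + 1] * a[n - 1], a[n] ** 2)  # exact arithmetic
    return float(n * n * (1 - ratio))

# Catalan numbers: C_n ~ 4^n n^(-3/2) / sqrt(pi), so alpha should be -3/2,
# consistent with an algebraic generating function.
cat = [comb(2 * n, n) // (n + 1) for n in range(1002)]
est = alpha_estimate(cat, 1000)          # close to -1.5
```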
A third possibility is to use arbitrary precision arithmetic~\cite{mezzarobba10a,kauers19c} to compute
eigenvalues of monodromy matrices for the differential equation.
If there is an eigenvalue that is not a root of unity, there must be a transcendental solution.
As a fourth approach, we can investigate the $p$-curvature of the differential equation~\cite{bostan14a,bostan15a}
and resort to a conjecture of Grothendieck according to which the $p$-curvature is zero for
almost all primes~$p$ if and only if the differential equation has only algebraic solutions.
Another idea is to try to prove transcendence via the criterion of Harris and Sibuya~\cite{harris85},
which says that for a D-finite function~$f$, the reciprocal $1/f$ is D-finite as well if and only
if the logarithmic derivative $f'/f$ is algebraic.
Finally, there are powerful criteria for certain special differential equations, e.g., the
criterion of Beukers and Heckman for testing algebraicity of a hypergeometric differential equation~\cite{beukers89}.
All these tests have limitations. The first three tests only provide a sufficient condition
for the existence of transcendental solutions, but there are equations with transcendental
solutions on which all three tests fail. A limitation of the $p$-curvature test is the
quantifier ``almost all'': if we encounter a prime (or several primes) for which the
$p$-curvature is nonzero, this is strong evidence in favor of a transcendental solution,
but there remains a small chance that the prime(s) were just unlucky.
The criterion of Harris and Sibuya reduces the problem of proving that $f'/f$ is transcendental
to the problem of proving that $1/f$ is not D-finite, which is typically more difficult.
In fact, this criterion is more valuable in the other direction: to prove that $1/f$ is
not D-finite, it suffices to prove that $f'/f$ is not algebraic.
The obvious limitation of the criterion of Beukers and Heckman is that it only applies
to hypergeometric functions.
In view of this situation, additional sufficient conditions for transcendental solutions
that can be tested with reasonable computational cost are of interest. Ideally, such
tests should also provide some artifacts that can serve as witness for the existence of
transcendental solutions. We propose the term \emph{transcendence certificate} for such
artifacts. For example, a logarithmic or exponential singularity can be viewed as such
a transcendence certificate. Observe that the algorithms of Kovacic and Singer mentioned
earlier do not provide any transcendence certificates but will just report ``no algebraic
solution'' as output.
The purpose of the present paper is to introduce a transcendence certificate based on
the following classical fact about algebraic functions:
\begin{prop} \cite{vanDerWaerden31,bliss33}
\label{prop:alghaspole}
Every non-constant algebraic function must have at least one pole.
\end{prop}
With our new test, we are able to prove the existence of transcendental solutions
for some equations that have no logarithmic singularities, no series solutions with illegal
coefficient asymptotics, and whose monodromy matrices have just roots of unity as eigenvalues.
\section{Preliminaries}\label{sec:prelim}
Throughout this paper, let $C$ be an algebraically closed field of
characteristic zero, and let $K = C(x)$ denote the field of rational functions
over~$C$. A Puiseux series at~$\xi\in C$ is a series of the form
$c_n(x-\xi)^{n/q} + c_{n+1}(x-\xi)^{(n+1)/q}+\cdots$ with $n\in\set Z$,
$q\in\set N$, and $c_n,c_{n+1},\ldots\in C$; we write $C((\ (x-\xi)^{1/q}\ ))$
for the field of all Puiseux series at~$\xi$ whose exponents have a common
denominator dividing $q\in\set N$. Similarly, a Puiseux series at~$\infty$ is
a series of the form $c_nx^{-n/q} + c_{n+1}x^{-(n+1)/q}+\cdots =
c_n(x^{-1})^{n/q} + c_{n+1}(x^{-1})^{(n+1)/q}+\cdots$; the field of all
Puiseux series at~$\infty$ is denoted by $C((x^{-1/q}))$. In both cases, we
call $n/q$ the \emph{starting exponent} of the series, provided that
$c_n\neq0$.
An algebraic function field $E=K[y]/\<m>$ is a finite-degree field extension of the
rational function field~$K$, where $m$ is an irreducible polynomial
in~$K[y]$. For every $\xi\in C\cup\{\infty\}$, the element~$y\in E$ can be
identified with any of the $\deg_y(m)$ many roots of the minimal
polynomial~$m$ in the field of Puiseux series at~$\xi$; we call them the
expansions of $y$ at~$\xi$.
A Puiseux series is said to be \emph{integral} if its starting exponent is
nonnegative, i.e., if the corresponding function does not have a pole at the
expansion point. The element $y$ of $E$ is called integral at $\xi\in
C\cup\{\infty\}$ if all its Puiseux series expansions at $\xi$ are integral.
In order to extend the definition of integrality to other elements of~$E$,
note that for every expansion $f$ of~$y$ we have a field homomorphism
$h_f\colon E\to C((\ (x-\xi)^{1/q}\ ))$ (or $h_f\colon E\to C((x^{-1/q}))$ if
$\xi=\infty$) which maps $y$ to~$f$. Now $u\in E$ is called integral at $\xi$
if for all expansions~$f$ of~$y$ the series $h_f(u)$ is integral. The element
$u$ is called (globally) \emph{integral} if it is integral at every $\xi\in C$ (but not
necessarily at infinity). The set of all integral elements of~$E$ forms a free
$C[x]$-submodule of~$E$, and a basis of this module is called an
\emph{integral basis} of~$E$. We say that an element of $E$ is \emph{completely integral}
if it is integral at every $\xi\in C\cup\{\infty\}$. According to Proposition~\ref{prop:alghaspole},
the completely integral elements of $E$ are precisely the elements of~$C$.
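For instance, in $E=K[y]/\<y^2-x^3-1>$, the element $y$ is integral: at each of
the three roots $\xi$ of $x^3+1$, its expansions start with exponent~$\frac12$,
and at every other $\xi\in C$ with exponent~$0$. It is not completely integral,
however, because its expansions at $\infty$ start with exponent $-\frac32$; that
is, $y$ has a pole at infinity, in accordance with
Proposition~\ref{prop:alghaspole}.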
Let $D$ denote the usual derivation w.r.t.~$x$, i.e., $D(f)=f'$, which turns
$K=C(x)$ or $E=C(x)[y]/\<m>$ into differential fields. An element $c$ of a
differential field~$F$ is called a constant if $D(c)=0$; these constants
always form a subfield of~$F$. A linear differential operator is an
expression of the form $L=p_0+p_1D + \cdots + p_rD^r$ with $p_0,\dots,p_r\in
K$. If $p_r\neq0$, we call $\ord(L)=r=\deg_D(L)$ the \emph{order} of the
operator. The operator~$L$ is called \emph{monic} if $p_r=1$. The set of all
linear differential operators will be denoted by~$K[D]$; it forms a
non-commutative ring in which the multiplication is governed by the Leibniz
rule $Dx=xD+1$. An operator~$L$ is called \emph{irreducible} if it cannot be
written as $L=L_1\cdot L_2$ with $\ord(L_1)\geq1$ and $\ord(L_2)\geq1$. Every
differential field~$F$ is a $K[D]$-left-module via the action
\[
(p_0+p_1D + \cdots + p_rD^r)\cdot y\ := \ p_0y+p_1D(y)+\cdots+p_rD^r(y).
\]
An element~$y$ of a differential field~$F$ is called a \emph{solution} of an
operator $L\in K[D]$ if $L\cdot y=0$. The set of all solutions of $L$ in a
differential field $F$ is denoted by $V(L)$. It is always a vector space over
the constant field of~$F$ and hence called the \emph{solution space} of~$L$.
If the constant field of $F$ is~$C$, then the dimension of $V(L)$ in $F$ is
bounded by the order of~$L$, but it may be strictly smaller. We say that $L$ has
\emph{only algebraic solutions} if there is a differential field $E=K[y]/\<m>$
such that the solution space $V(L)$ in $E$ has dimension~$\ord(L)$. If $L$ is
an irreducible operator then either all its solutions are algebraic or none of
them (except for the zero solution)~\cite[Prop.~2.5]{singer79}.
If $L=p_0+\cdots+p_rD^r\in K[D]$ is an operator of order~$r$, we call $\xi\in C$
a \emph{singularity} of $L$ if it is a pole of one of the rational functions
$p_0/p_r,\dots,p_{r-1}/p_r$. The point $\infty$ is called a singularity if,
after the substitution $x\mapsto x^{-1}$, the origin~$0$ becomes a singularity.
If $\xi\in C\cup\{\infty\}$ is not a singularity of~$L$, then $L$ has $r$
linearly independent Puiseux series solutions at~$\xi$, and they are all integral.
The notion of integrality for differential operators is defined
in a similar way as discussed above for algebraic field extensions $E=K[y]/\<m>$.
Throughout this paper, we consider only operators which have a basis of Puiseux series solutions at every point $\xi \in C \cup \{\infty\}$.
For such an operator $L\in K[D]$, we have the module $K[D]/\<L>$ where $\<L>$ denotes
the left ideal $\{P\cdot L\mid P\in K[D]\}$.
Note that $K[D]/\<L>$ is not a
ring but only a (left) $K[D]$-module. In this module, the equivalence class
$[1]_L$ has the property $L\cdot [1]_L=[L]_L=[0]_L$, so $[1]_L$ can be
considered as a solution of~$L$ in $K[D]/\<L>$, very much like the element
$y\in E$ is a root of~$m$. As for algebraic function fields, we can
associate $[1]_L\in K[D]/\<L>$ with any solution~$f$ of $L$ in a Puiseux
series field $C((\ (x-\xi)^{1/q}\ ))$ or $C((x^{-1/q}))$. The association of
$[1]_L$ with $f$ extends to $K[D]/\<L>$ by mapping an equivalence class
$[P]_L$ to the series $P\cdot f$. The notions of integrality can now be
defined like before:
\begin{itemize}
\item $[P]_L$ is called (locally) \emph{integral} at some point $\xi\in C\cup\{\infty\}$
if for every Puiseux series solution $f$ of $L$ at $\xi$, the series $P\cdot f$ is integral.
\item $[P]_L$ is called (globally) \emph{integral} if it is locally integral at every point $\xi\in C$
(but not necessarily at $\infty$).
\item $[P]_L$ is called \emph{completely integral} if it is locally integral at every point $\xi\in C\cup\{\infty\}$.
\end{itemize}
Note for the second and third point that it suffices to consider points $\xi$ that
are singularities of $L$ or poles of some of the coefficients of~$P$.
For any fixed $L$ and $P$, these are only finitely many.
Also recall that we restrict our attention to operators $L$ which have a basis of Puiseux series solutions, so that the quantifier \emph{``for all Puiseux series solutions''} in the definitions above is equivalent to \emph{``for all solutions''}.
The set of all integral elements in $K[D]/\<L>$ forms a free $C[x]$-left-module,
and a basis of this module is called an \emph{integral basis} of $K[D]/\<L>$.
An integral basis $\{w_1,\dots,w_r\}$ is called \emph{normal at infinity} if
there are integers $\tau_1,\dots,\tau_r\in\set Z$ such that
$\{x^{\tau_1}w_1,\dots,x^{\tau_r}w_r\}$ is a basis of the
$C(x)_\infty$-left-module of all elements of $K[D]/\<L>$ which are integral at
infinity. Here, $C(x)_\infty$ refers to the ring of all rational functions
$u/v$ with $\deg u\leq\deg v$. Integral bases which are normal at infinity
always exist, and they can be computed~\cite{kauers15b,chen17a}.
Finally, we recall some fundamental facts about operators.
The \emph{adjoint} $L^\ast$ of an operator $L\in K[D]$ is defined in such a way
that for any two operators $L,M\in K[D]$ we have $(L+M)^\ast=L^\ast+M^\ast$ and
$(LM)^\ast=M^\ast L^\ast$. We have $D^\ast=-D$ and $q^\ast=q$ for all $q\in K$.
Moreover, $\ord(L^\ast)=\ord(L)$ for every $L\in K[D]$.
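These identities can be checked mechanically. The following Python sketch (a toy model with integer polynomial coefficients, not taken from the text) implements multiplication in $K[D]$ via the commutation rule $Dx=xD+1$ together with the adjoint, and verifies $(LM)^\ast=M^\ast L^\ast$ on a sample pair.

```python
# An operator is a list of polynomial coefficients indexed by the power of D;
# a polynomial is a list of integer coefficients indexed by the power of x.
# E.g. [[0, 1], [1]] encodes x + D.

def ptrim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def padd(a, b):
    n = max(len(a), len(b))
    return ptrim([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                  for i in range(n)])

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return ptrim(r)

def pdiff(a):
    return ptrim([i * a[i] for i in range(1, len(a))])

def otrim(A):
    while A and not A[-1]:
        A = A[:-1]
    return A

def oadd(A, B):
    n = max(len(A), len(B))
    return otrim([padd(A[i] if i < len(A) else [], B[i] if i < len(B) else [])
                  for i in range(n)])

def oD(A):
    # left multiplication by D:  D (sum p_i D^i) = sum (p_i' D^i + p_i D^{i+1})
    R = [[] for _ in range(len(A) + 1)]
    for i, p in enumerate(A):
        R[i] = padd(R[i], pdiff(p))
        R[i + 1] = padd(R[i + 1], p)
    return otrim(R)

def omul(A, B):
    # A * B, computing D^i B iteratively via oD
    R, Bi = [], [p[:] for p in B]
    for a in A:
        R = oadd(R, [pmul(a, p) for p in Bi])
        Bi = oD(Bi)
    return otrim(R)

def oadjoint(A):
    # (sum p_i D^i)^* = sum (-1)^i D^i p_i   (composition of operators)
    R = []
    for i, p in enumerate(A):
        T = [[(-1) ** i * c for c in p]]
        for _ in range(i):
            T = oD(T)
        R = oadd(R, T)
    return otrim(R)

L = [[0, 1], [1]]          # x + D
M = [[5], [0, 0, 2], [1]]  # 5 + 2x^2 D + D^2
assert oadjoint(omul(L, M)) == omul(oadjoint(M), oadjoint(L))
assert oadjoint(oadjoint(M)) == M   # the adjoint is an involution
```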
The \emph{least common left multiple} of two operators $L,M\in K[D]$,
denoted by $\lclm(L,M)$, is defined as the unique monic operator of lowest order
which has both $L$ and $M$ as right factor. Its key feature is that whenever
$f$ is a solution of $L$ and $g$ is a solution of~$M$, then $f+g$ is a solution
of $\lclm(L,M)$.
For the efficient computation of the least common left multiple, see~\cite{bostan12b}.
There is a similar construction for multiplication.
The \emph{symmetric product} $L\otimes M$ of two operators $L,M\in K[D]$ is
defined as the unique monic operator of lowest order such that whenever $f$
is a solution of $L$ and $g$ is a solution of~$M$, then $fg$ is a solution
of $L\otimes M$ (regardless of the differential field to which $f$ and $g$ belong).
As a special case, the $s$th \emph{symmetric power} of an operator $L\in K[D]$
is defined as $L^{\otimes s}=L\otimes\cdots\otimes L$.
For the efficient computation of the symmetric powers, see~\cite{bronstein97a}.
By construction, we have $V(L)+V(M)\subseteq V(\lclm(L,M))$, and in general, the
inclusion is proper. However, if $\dim V(L)=\ord(L)$ and $\dim V(M)=\ord(M)$,
then we have $V(L)+V(M)=V(\lclm(L,M))$, i.e., the least common left multiple cannot have
any extraneous solutions. Likewise, if $\dim V(L)=\ord(L)$ and $\dim V(M)=\ord(M)$,
the solution space of the symmetric product $L\otimes M$ is generated by all products
$fg$ with $f\in V(L)$ and $g\in V(M)$. These facts were shown by Singer~\cite{singer79}
in the context of complex functions, and again using more abstract machinery in the
book of van der Put and Singer~\cite{put03}.
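The closure properties of the least common left multiple and the symmetric product can be illustrated numerically. In the following sketch (a toy example with constant-coefficient operators chosen for illustration), $f=e^x$ solves $D-1$ and $g=e^{2x}$ solves $D-2$; then $f+g$ solves $\lclm(D-1,D-2)=D^2-3D+2$ and $fg=e^{3x}$ solves $(D-1)\otimes(D-2)=D-3$.

```python
from math import exp

# f = e^x solves L = D - 1;  g = e^{2x} solves M = D - 2.
f = lambda x: exp(x)
g = lambda x: exp(2 * x)
h = lambda x: f(x) + g(x)                 # candidate solution of lclm(L, M)
hp = lambda x: exp(x) + 2 * exp(2 * x)    # h'
hpp = lambda x: exp(x) + 4 * exp(2 * x)   # h''
p = lambda x: f(x) * g(x)                 # candidate solution of L (x) M
pp = lambda x: 3 * exp(3 * x)             # p'

for x in [0.0, 0.3, -0.7, 1.1]:
    assert abs(hpp(x) - 3 * hp(x) + 2 * h(x)) < 1e-9  # lclm annihilates f + g
    assert abs(pp(x) - 3 * p(x)) < 1e-9               # L (x) M annihilates f * g
```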
\section{Pseudoconstants}
Let $L \in K[D]$ be a linear differential operator. As mentioned before, if $L$ has a logarithmic
or exponential singularity, it follows immediately that $L$ does not have only
algebraic solutions, and we may view the singularity as a transcendence certificate.
We continue to exclude this case from consideration, i.e., we continue to assume
that $L$ has no logarithmic or exponential singularity at any point in $C\cup\{\infty\}$.
In other words, we assume that $L$ has a basis of Puiseux series solutions at
every point.
\begin{defi}\label{def:pseudoconstants}
Let $L \in K[D]$, and let $[P]_{L} \in K[D]/\langle L \rangle$.
\begin{enumerate}
\item $[P]_{L}$ is called a \emph{constant} if $D \cdot [P]_{L} = [0]_{L}$;
\item $[P]_{L}$ is called a \emph{pseudoconstant} if $[P]_{L}$ is completely integral
but not a constant.
\end{enumerate}
We will say for short that ``$L$ has a [pseudo]constant'' if $K[D]/\<L>$ contains a [pseudo]constant.
\end{defi}
\begin{prop}
\label{prop:constant}
Let $L \in K[D]$, and let $[P]_{L} \in K[D]/\langle L \rangle$.
Let $E$ be an extension of $K$ such that the solution space $V(L)$ of $L$ in $E$ has dimension~$\ord(L)$.
\begin{enumerate}
\item\label{prop:constant:1} $[P]_{L}$ is a constant if and only if $P \cdot f$ is a constant for every $f\in V(L)$.
\item If $[P]_L$ is a nonzero constant and $\ord(P)<\ord(L)$, then $\ord(P)=\ord(L)-1$.
\item The set of all constants forms a $C$-vector space of dimension at most~$\ord(L)$.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item
Clearly, if $[P]_{L}$ is a constant, then for all $f \in V(L)$, $D\cdot (P
\cdot f) = (D \cdot [P]_{L}) \cdot f = 0$. Conversely, let $r$ be the order
of $L$ and $P$ be the representative of order at most $r-1$ of $[P]_{L}$.
Assume that $P \cdot f$ is a constant for all $f \in V(L)$, i.e., $D \cdot
(P \cdot f) = 0$. This means that $V(L) \subset V(D\cdot P)$. Since $V(D
\cdot P)$ has dimension at most~$r$ and $V(L)$ has dimension~$r$, it follows
that $V(L) = V(D \cdot P)$. This implies that $L$ and $D\cdot P$ are equal
up to an invertible factor in $K$, and therefore that $D \cdot [P]_{L} =
[D\cdot P]_{L} = [0]_{L}$.
\item If $\ord(P)<\ord(L)-1$, then $\ord(DP)<\ord(L)$, so the assumption
$D\cdot[P]_L=[DP]_L=0$ forces $DP=0$, which in turn forces $P=0$ in
contradiction to the assumption that $[P]_L$ is not zero.
\item It is clear that the constants form a $C$-vector space. In order to
prove the bound on the dimension, consider a $P\in K[D]$ with $\ord(P)<\ord(L)$
such that $[P]_L$ is a constant.
Then $D\cdot[P]_L=[DP]_L=0$, so there is a $q\in K$ with $DP=qL$. It is
clear that $q$ is uniquely determined and that the function which maps every
constant $[P]_L$ to the corresponding $q$ is $C$-linear and injective. Now
$DP=qL$ implies $(DP)^\ast=(qL)^\ast$, so $P^\ast D^\ast=L^\ast q^\ast$, so
$-P^\ast D=L^\ast q$. Since $1$ is a solution of the left hand side, it must
be a solution of the right hand side, so $0=(L^\ast q)\cdot1=L^\ast\cdot q$,
so $q\in V(L^\ast)$. We have thus constructed an injective $C$-linear map
from the space of all constants to the solution space of $L^\ast$ in $K$.
Since the dimension of the latter is at most $\ord(L)$, the claim follows.
\qedhere
\end{enumerate}
\end{proof}
If $[P]_{L}$ is a constant, then it is completely integral, but unlike in the case of algebraic
functions, the converse is not true in general. This means that pseudoconstants may exist.
\begin{ex}
Let $L = 3x(x^{2}-1)D^{2} + 2(3x^{2}-1)D$.
All its solutions are integral at every place including infinity,
therefore $[1]_{L}$ is completely integral.
However, $D\cdot [1]_{L} = [D]_{L} \neq [0]_{L}$, so it is not a constant.
Alternatively, one can observe that $L$ has a non-constant solution, and therefore $[1]_{L}$ cannot be a constant.
So $[1]_{L}$ is a pseudoconstant.
\end{ex}
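The example can also be double-checked by hand: writing $y'=u$ and separating variables in $L\cdot y=0$ gives $u=c\,(x^3-x)^{-2/3}$, so the non-constant solutions are antiderivatives of $(x^3-x)^{-2/3}$, with nonnegative local exponents at every point including infinity. The following numeric sanity check (ours, not part of the text) confirms the first-order equation satisfied by~$u$.

```python
def u(x):
    # u = y' for a non-constant solution y of L;  u = (x^3 - x)^(-2/3)
    return (x ** 3 - x) ** (-2.0 / 3.0)

def up(x):
    # u'(x), differentiated by hand
    return -(2.0 / 3.0) * (3 * x ** 2 - 1) * (x ** 3 - x) ** (-5.0 / 3.0)

# L.y = 3x(x^2-1)y'' + 2(3x^2-1)y' vanishes iff 3x(x^2-1)u' + 2(3x^2-1)u = 0;
# sample points are chosen so that x^3 - x > 0 and the real power is defined
for x in [1.5, 2.0, 3.0, -0.5]:
    assert abs(3 * x * (x ** 2 - 1) * up(x) + 2 * (3 * x ** 2 - 1) * u(x)) < 1e-9
```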
In view of Prop.~\ref{prop:alghaspole}, we can regard pseudoconstants as transcendence certificates.
\begin{thm}
Let $L \in K[D]$ be such that there exists a pseudoconstant $[P]_{L} \in K[D]/\langle L \rangle$.
Then $L$ admits at least one transcendental solution.
\end{thm}
\begin{proof}
For a contradiction, assume that $L$ has only algebraic solutions.
Let $E$ be an algebraic extension of $K$ such that the solution space $V(L)$ in $E$ has dimension $\ord(L)$.
Since algebraic functions are closed under application of linear operators, $P\cdot f$ is algebraic for all $f \in V(L)$.
Since $[P]_{L}$ is completely integral, $P \cdot f$ does not have a pole at any $\xi\in C\cup\{\infty\}$.
By Prop.~\ref{prop:alghaspole}, this implies that $P \cdot f$ is constant.
Therefore, by Prop.~\ref{prop:constant}, $[P]_{L}$ is a constant, which is a contradiction.
\end{proof}
\begin{ex}
\label{ex:product_2F1}
Consider the operator
\begin{equation}
\label{eq:2}
\textstyle
L = \bigl(x^{2} - x\bigr) D^{2} + \bigl(\frac{31}{24} x - \frac{5}{6}\bigr) D + \frac{1}{48},
\end{equation}
annihilating the function
$x^{1/6}(x-1)^{13/24}{}_{2}F_{1}\bigl(\frac{7}{8},\frac{5}{6}; \frac{7}{6}; x\bigr)$.
The operator is irreducible, and therefore all its solutions have the same nature.
By Schwarz' classification and closure properties, they must be transcendental, but let us ignore this argument
for the sake of the example.
The singularities of the operator are $0$, $1$ and $\infty$, and a basis of solutions at each singularity is given by
\def\expfrac#1#2{#1/#2}
\begin{align}
\label{eq:7}
y_{0,1} &= \textstyle x^{\expfrac{1}{6}} \Big( 1 + \frac{1}{12}x + \operatorname{O}(x^{2})\Big) \\
y_{0,2} & = \textstyle 1 + \frac{1}{40}x + \operatorname{O}(x^{2}) \\
y_{1,1} &= \textstyle (x{-}1)^{\expfrac{13}{24}} \Big( 1 - \frac{34}{111}(x{-}1) + \operatorname{O}((x{-}1)^{2})\Big) \\
y_{1,2} & = \textstyle 1 - \frac{1}{22}(x{-}1) + \operatorname{O}((x{-}1)^{2}) \\
y_{\infty,1} &= \textstyle (1/x)^{\expfrac{1}{6}} \Big( 1 + \frac{4}{75}\left( 1/x \right) + \operatorname{O}(\left(1/x\right)^{2})\Big) \\
y_{\infty,2} & = \textstyle (1/x)^{\expfrac{7}{8}} \Big( 1 + \frac{7}{184}\left( 1/x \right) + \operatorname{O}(\left( 1/x\right)^{2}) \Big)
\end{align}
Therefore, $[1]_L$ is a pseudoconstant, and thus the operator $L$ has no nonzero algebraic solution.
As noted in the introduction, we could also compute the monodromy matrices of $L$ around $0$, $1$ and $\infty$.
If one of their eigenvalues were not a root of unity, this would give another proof of transcendence.
However, numeric computations suggest that all eigenvalues are roots of unity in this example.
More precisely, the monodromy group around $0$ is generated by two matrices $M_1$ and $M_2$ with
\begin{equation}
\label{eq:11}
M_1^{3} =
\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}
\pm 10^{-17}
\begin{pmatrix}
0 & 0 \\
0 & 7.38 \pm 6.75\mathrm{i}
\end{pmatrix}
\end{equation}
and
\begin{equation}
\label{eq:12}
M_2^{24} =
\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}
\pm 10^{-13}
\begin{pmatrix}
1.45 \pm 1.42\mathrm{i} & 3.44 \pm 3.42\mathrm{i} \\
0.758 \pm 0.757\mathrm{i} & 1.96 \pm 1.96\mathrm{i}
\end{pmatrix}
\end{equation}
At $1$, the monodromy group is generated by two 6th roots of unity, and at $\infty$, by two 24th roots of unity.
\end{ex}
\begin{ex}
\label{ex:pseudoconstant-ord3}
Consider the operator
\begin{align*}
L ={}& (x - 1)^3 x^3 (x + 1)^3 D^{3} \\
& \textstyle + \frac{19}{5} (x - 1)^{2} x^{2} (x+1)^{2} \bigl(x^{2} + \frac{22069}{9576} x - \frac{195}{152}\bigr) D^{2}\\
& \textstyle -\frac{99}{80} (x - 1) x (x + 1) \bigl(x^{4} - \frac{117001919}{37422} x^{3} - \frac{105923}{5346} x^{2} + \frac{16795789}{5346} x + \frac{205}{66}\bigr) D
\\
& \textstyle -\frac{9}{20} x^{6} + \frac{517319279}{68040} x^{5} + \frac{256382531}{27216} x^{4} - \frac{19723513}{4320} x^{3} - \frac{2560752251}{272160} x^{2} - \frac{828238469}{272160} x - \frac{3}{32}.
\end{align*}
This operator has the singularities $0,1,-1,\infty$, with respective initial exponents
\begin{equation}
\label{eq:4}
\begin{array}{rccc}
(0) & -\frac{1}{8} & -\frac{3}{4} & -1 \\[1ex]
(1) & \frac{5}{7} & \frac{4}{9} & -2 \\[1ex]
(-1) & \frac{5171}{630} & \frac{3}{8} & -\frac{2}{3} \\[1ex]
(\infty) & \frac{4}{5} & \frac{3}{4} & -\frac{3}{4}
\end{array}
\end{equation}
The operator is irreducible, and therefore all its solutions have the same nature.
$L$ has the pseudoconstant $[P]_L$, with
\begin{align}
\label{eq:6}
P = {} &
(x +1)^{-6} x^{3} (x - 1)^{2} D^{2} \\
& \textstyle + (x + 1)^{-7} x^{2} (x - 1) \alpha(x) D \\
& + (x + 1)^{-8} x \beta(x),
\end{align}
where $\alpha(x)$ and $\beta(x)$ are certain polynomials of degree 3 and~6 respectively,
with coefficients in $\set Q$. So all the solutions of $L$ are transcendental.
\end{ex}
For operators with at most $3$ singularities, the nature of the solutions and the existence of pseudoconstants are determined by the initial exponents of the solutions.
Indeed, the operator is then uniquely determined up to a scalar factor by its singularities and initial exponents. Changing the position of the singularities is equivalent to applying a rational change of variables by a M\"obius transform, which preserves the nature of the solutions and the pseudoconstants.
This property does not hold for operators with more singularities, as the next example shows.
\begin{ex}
Consider the operator
\begin{align}
\label{eq:10}
L = {} & (x - 2)^{3} (x - 1)^{3} x^{3} D^{3} \\
&\textstyle {} + \frac{19}{5} (x - 2)^{2} (x - 1)^{2} x^{2} \bigl(x^{2} - \frac{16547}{9576} x + \frac{2420}{1197}\bigr) D^{2} \\
& \textstyle {}+ \frac{99}{80} (x - 2) (x - 1) x \bigl(x^{4} + \frac{8816399}{112266} x^{3} - \frac{8566381}{37422} x^{2} + \frac{7980386}{56133} x - \frac{3200}{6237}\bigr) D \\
& \textstyle {} -\frac{9}{20} x^{6} + \frac{5640547}{68040} x^{5} - \frac{20050393}{136080} x^{4} {}- \frac{2904319}{30240} x^{3} + \frac{5167531}{54432} x^{2} + \frac{1144387}{19440} x + \frac{320}{63}.
\end{align}
It has the singularities $0,1,2,\infty$, with respective initial exponents:
\begin{equation}
\label{eq:4b}
\begin{array}{rccc}
(0) & \frac{5}{7} & \frac{4}{9} & -2 \\[1ex]
(1) & \frac{5171}{630} & \frac{3}{8} & -\frac{2}{3} \\[1ex]
(2) & -\frac{1}{8} & -\frac{3}{4} & -1 \\[1ex]
(\infty) & \frac{4}{5} & \frac{3}{4} & -\frac{3}{4}
\end{array}
\end{equation}
The initial exponents are the same as those in Example~\ref{ex:pseudoconstant-ord3}, but the positions of the singularities differ.
Unlike the operator in Example~\ref{ex:pseudoconstant-ord3}, the operator $L$ does not admit a pseudoconstant.
We do not know whether this operator has transcendental solutions.
\end{ex}
There are at least two ways to search for pseudoconstants for a given~$L$.
The first one uses integral bases. It is shown in Lemma~8 of \cite{chen17a} that
a basis of the $C$-vector space of all completely integral elements of $K[D]/\<L>$
is given by $\{\,x^jw_i : i=1,\dots,r; j=0,\dots,\tau_i\,\}$ whenever
$\{w_1,\dots,w_r\}$ is an integral basis that is normal at infinity and
$\tau_1,\dots,\tau_r\in\set Z$ are such that $\{x^{\tau_1}w_1,\dots,x^{\tau_r}w_r\}$
is a local integral basis at infinity. This motivates the following algorithm.
\begin{algo}
\label{algo:pseudoconstants_integral_basis}
Input: $L \in K[D]$
Output: a pseudoconstant of $L$ if there is one, otherwise $\bot$.
\step 10 Compute an integral basis $w_{1},\dots,w_{r}$ of $K[D]/\langle L\rangle$ which
is normal at~$\infty$, and the corresponding $\tau_{1},\dots,\tau_{r} \in \set Z$
\step 20 If there are $i\in\{1,\dots,r\}$ and $j\in\{0,\dots,\tau_i\}$ with
$[Dx^jw_i]_L\neq0$, return such an $x^jw_i$
\step 30 Otherwise, return~$\bot$
\end{algo}
\begin{thm}
Algorithm~\ref{algo:pseudoconstants_integral_basis} is correct.
\end{thm}
\begin{proof}
It is clear that the algorithm is correct if it does not return~$\bot$.
It remains to show that $L$ has no pseudoconstant if the algorithm does return~$\bot$.
In view of the remarks before the algorithm, every completely integral element of
$K[D]/\<L>$, and thus in particular every pseudoconstant, is a $C$-linear combination
of the~$x^jw_i$. But if all the $x^jw_i$ were constants, then, since the constants
also form a $C$-vector space, so would be all their linear combinations. Therefore,
if there are pseudoconstants at all, there must be one among the $x^jw_i$.
\end{proof}
\def\myceil#1{\lceil -#1 \rceil}%
An implementation of Algorithm~\ref{algo:pseudoconstants_integral_basis} is available in the latest version of the SageMath package \texttt{ore\_algebra}\footnote{\url{https://github.com/mkauers/ore_algebra}}.
Alternatively, in an environment where no functionality for computing integral bases is available, we can use linear
algebra to search for pseudoconstants by brute force. This has the advantage of being conceptually
simpler, but the disadvantage that we cannot easily recognize the absence of pseudoconstants.
Let $\xi_{1},\dots,\xi_{m}\in C$ be the singularities of~$L$, and assume that $\infty$ is not a singularity.
At each singularity~$\xi_{i}$, let $\frac{p_{i}}{q} \in \set Q$ be the smallest exponent appearing in
one of the solutions at~$\xi_{i}$.
Let $u = (x-\xi_{1})^{\max(0,\myceil{p_{1}/q})}\cdots (x-\xi_{m})^{\max(0,\myceil{p_{m}/q})}$, so
that $[u]_{L}$ is globally integral.
For each singularity $\xi_{i}$, choose a bound $N_{i} \in \set N$ on the degree of the denominator of a
local integral basis at~$\xi_{i}$, and let $N = N_1+\dots+N_{m}$.
We form the ansatz
\begin{equation}
\label{eq:1}
\frac{(x-\xi_{1})^{\max(0,\myceil{p_{1}/q})}\cdots (x-\xi_{m})^{\max(0,\myceil{p_{m}/q})}}{(x-\xi_{1})^{N_{1}}\cdots (x-\xi_{m})^{N_{m}}} \sum_{j=0}^{r-1} \sum_{i=0}^{N} c_{i,j} x^{i}D^{j}
\end{equation}
with unknowns $c_{i,j}$.
Evaluating it at all solutions at $\xi_{1},\dots,\xi_{m},\infty$ gives series whose coefficients are linear combinations of the unknowns $c_{i,j}$, and setting those coefficients with negative valuations to $0$ yields a system of linear equations to solve.
Each solution is an operator which is completely integral.
However, if no non-zero solution is found, or if all solutions are constants, this is not enough to conclude that the operator does not have a pseudoconstant. It could just mean that the guessed bounds on the denominator were too conservative.
If $L$ does not have a pseudoconstant, we could try to apply some transformation to $L$ that does not change
the nature of the solutions of $L$ but may affect the existence of pseudoconstants.
For example, applying a gauge transform to $L$ does not change the nature of its solutions.
However, gauge transforms do not affect the existence of pseudoconstants either.
Indeed, let $L \in K[D]$ be a linear operator, $M \in K[D]$ be another one and $L'$ be the gauge transform of $L$
such that $V(L') = \{M \cdot f : f \in V(L)\}$. Assume that $[P]_{L'}$ is a pseudoconstant in $K[D]/\<L'>$.
Then $PM \cdot f$ does not have a pole for any $f\in V(L)$, and there exists an $f\in V(L)$ such that $PM\cdot f$
is not a constant. By definition, this implies that $[PM]_{L}$ is a pseudoconstant in $K[D]/\<L>$.
In conclusion, gauge transforms are not strong enough to create pseudoconstants.
We will see next that we may have more success with other operations.
\section{Symmetric powers}
\label{sec:expand-search-space}
Symmetric powers are useful for proving identities among D-finite functions and find
applications in algorithms for factoring operators~\cite{put03}.
They can also be used to decide for a given operator $L$ and a given $d\in\set N$
whether all solutions of $L$ are algebraic functions of degree at most~$d$.
For, if $f$ is an algebraic solution of $L$ with a minimal polynomial $m\in K[y]$ of degree~$d$,
then $m$ has $d$ distinct roots $f_1,\dots,f_d$ in an algebraic closure $\bar K$ of~$K$
and we can write $m=(y-f_1)\cdots(y-f_d)$.
The roots $f_1,\dots,f_d$ of $m$ are conjugates of~$f$, and since $L$ has coefficients in~$K$,
we have $L\cdot\sigma(f)=\sigma(L\cdot f)=0$ for every automorphism $\sigma$ that fixes~$K$.
Therefore, $f_1,\dots,f_d$ are also solutions of~$L$.
For every~$i$, the $i$th coefficient of $m = (y-f_{1})\cdots (y-f_{d})$ is the $(d-i)$th elementary symmetric
polynomial of $f_1,\dots,f_d$ and therefore a solution of $L^{\otimes(d-i)}$.
As the coefficients of $m$ belong to~$K=C(x)$, they must thus show up among the rational solutions
of $L^{\otimes(d-i)}$. This observation motivates the following algorithm.
\begin{algo}\label{alg:algsols}
Input: $L\in C(x)[D]$ and $d\in\set N$.
Output: if all solutions of $L$ are algebraic functions of degree at most $d$, the minimal polynomial of one such
solution; otherwise~$\bot$.
\step 10 for $i=1,\dots,d$, compute the symmetric power $L^{\otimes i}$.
\step 20 for $i=1,\dots,d$, compute basis elements $q_{i,1},\dots,q_{i,N_i}$ of the solution space of $L^{\otimes i}$
in $C(x)$.
\step 30 form an ansatz $y^{d} + \sum_{i=1}^{d} \sum_{j=1}^{N_{i}} c_{i,j}q_{i,j}y^{d-i}$ with undetermined
coefficients $c_{i,j}$
\step 40 substitute a truncated series solution $f$ of $L$ into the ansatz, equate coefficients, and solve the resulting system for the undetermined coefficients $c_{i,j}$.
\step 50 if the system has no solution, return $\bot$.
\step 60 let $m$ be the polynomial corresponding to one of the solutions of the linear system.
\step 70 if all roots of $m$ are solutions of~$L$, return $m$
\step 80 otherwise, go back to step~4 and try again with a higher truncation order.
\end{algo}
Compared to the guess-and-prove approach mentioned in the introduction, the algorithm above has the advantage
that only one of the degrees of the minimal polynomials has to be guessed.
Algorithm~\ref{alg:algsols} indicates that symmetric powers know something about algebraicity of solutions.
The next result points in the same direction.
It says that the symmetric powers of an operator $L$ are larger if $L$ has a transcendental solution.
\begin{thm}
Let $L \in C(x)[D]$.
\begin{enumerate}
\item If $L$ has only algebraic solutions, then $\ord(L^{\otimes s}) = \operatorname{O}(s)$ as $s \to \infty$.
\item If $L$ has at least one transcendental solution and $D^{2}$ is a right factor of $L$, then $\ord(L^{\otimes s}) = \Omega(s^{2})$ for $s \to \infty$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $r$ be the order of~$L$.
\begin{enumerate}
\item Let $f_1,\dots,f_r$ be a basis of~$V(L)$, and let $m_1,\dots,m_r\in C(x)[y]$ be their respective
minimal polynomials. Furthermore, let $I_{\mathrm{rat}} = \{\,p\in C(x)[y_1,\dots,y_r] : p(f_1,\dots,f_r)=0\,\}$ be
the ideal of algebraic relations among $f_1,\dots,f_r$.
Since $m_i(y_i)\in I_{\mathrm{rat}}$, we have $\dim(I_{\mathrm{rat}})=0$.
Therefore, the ideal $I_{\mathrm{pol}}=I_{\mathrm{rat}}\cap C[x][y_1,\dots,y_r]$ has dimension~1.
As eliminating a variable cannot increase the dimension, we find that the ideal
$I_{\mathrm{const}}:=I_{\mathrm{pol}}\cap C[y_1,\dots,y_r]$ has dimension at most~1.
By definition of the dimension, this means that the $C$-vector space
generated in $C[y_1,\dots,y_r]/I_{\mathrm{const}}$ by the power products $y_1^{e_1}\cdots y_r^{e_r}$
with $e_1,\dots,e_r\in\set N$ such that $e_1+\cdots+e_r\leq s$ has dimension $\operatorname{O}(s)$, as $s\to\infty$.
Therefore, the $C$-vector space generated by $f_1^{e_1}\cdots f_r^{e_r}$ with $e_1,\dots,e_r\in\set N$
such that $e_1+\cdots+e_r=s$ has dimension $\operatorname{O}(s)$, as $s\to\infty$.
This space is the solution space of $L^{\otimes s}$, and the order of $L^{\otimes s}$
matches the dimension of this space.
\item Since $D^2$ is a right factor of~$L$, we have $1$ and $x$ among the solutions of~$L$. If there
is also at least one transcendental solution~$f$, then the solution space of $L^{\otimes s}$ contains
all elements $1^{e_1}x^{e_2}f^{e_3}$ with $e_1,e_2,e_3\in\set N$ such that $e_1+e_2+e_3=s$, and the
transcendence of $f$ implies that they are all linearly independent over~$C$.
As these are $\binom{s+2}s=\Omega(s^2)$ many, the claim follows again from
$\dim_C V(L^{\otimes s})=\ord(L^{\otimes s})$. \qedhere
\end{enumerate}
\end{proof}
This theorem provides yet another heuristic test for the existence of transcendental solutions:
simply compute $L^{\otimes s}$ for the first
few $s$ and see how their orders grow. As the theorem only makes a statement for asymptotically large~$s$, looking at specific
values of $s$ will not allow us to make any definite conclusion, but it can provide convincing evidence.
\begin{ex}
Consider the operators
\begin{align}
\label{eq:3}
L_{1} & = \bigl(256x^5-3125\bigr)D^{4} + 3200x^{4} D^{3} + 9840x^{3}D^{2} + 6120 x^{2}D - 504x \\
L_{2} &= \textstyle\lclm\Bigl(D^{2},\bigl(x^{2} - x\bigr) D^{2} + \bigl(\frac{31}{24} x - \frac{5}{6}\bigr) D + \frac{1}{48}\Bigr).
\end{align}
The operator $L_{1}$ is the annihilator of the roots of $y^{5} + xy + 1$ in $K[y]$, so it only has algebraic solutions.
The operator $L_{2}$ is the lclm of the operator from Example~\ref{ex:product_2F1} and $D^{2}$, so it has a transcendental solution and it has $D^{2}$ as a right factor.
The order of the symmetric powers of the operators is growing as follows:
\begin{center}
\upshape
\begin{tabular}[c]{rrrrrrr}
$s$ & 1 & 2 & 3 & 4 & 5 \\
\hline
\rule[-4pt]{0pt}{14pt} $\ord(L_1^{\otimes s})$ & 4 & 9 & 15 & 21 & 27 \\
\hline
\rule[-4pt]{0pt}{14pt} $\ord(L_2^{\otimes s})$ & 4 & 10 & 20 & 35 & 56
\end{tabular}
\end{center}
As predicted by the theorem, for $L_1$ the growth is linear, and for $L_2$ the growth is at least quadratic (in fact cubic).
\end{ex}
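As a small consistency check on the table (and nothing more), the observed orders match simple closed forms read off from the data: the orders for $L_1$ follow the linear pattern $6s-3$ from $s=2$ on, while those for $L_2$ attain $\binom{s+3}{3}$, the number of monomials of degree $s$ in the four basis solutions of an order-4 operator.

```python
from math import comb

# orders of the symmetric powers, transcribed from the table
ord_L1 = {1: 4, 2: 9, 3: 15, 4: 21, 5: 27}
ord_L2 = {1: 4, 2: 10, 3: 20, 4: 35, 5: 56}

# L1: linear growth, matching 6*s - 3 from s = 2 on
linear_ok = all(ord_L1[s] == 6 * s - 3 for s in range(2, 6))

# L2: the number C(s+3, 3) of monomials of degree s in four solutions
cubic_ok = all(ord_L2[s] == comb(s + 3, 3) for s in range(1, 6))

print(linear_ok, cubic_ok)
```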
The assumption on having $D^{2}$ as a right factor in the second part of the theorem cannot be dropped, as can be seen for example with $L=D^{2}-1$, whose solutions are $\exp(x)$ and $\exp(-x)$.
The solution space of $L^{\otimes s}$ is spanned by the terms $\exp(x(i - (s-i)))$ for $i \in \{0,\dots,s\}$, and therefore has dimension $s+1 = \operatorname{O}(s)$.
More generally, for any operator of order $r\leq 2$, the order of $L^{\otimes s}$ is bounded by $\binom{s+r-1}{s} \leq s+1$.
The divisibility condition says that $1$ and $x$ are solutions of~$L$, and in order to have in addition a transcendental solution, the order of $L$ must be at least~3.
If $L$ does not have $D^2$ as a right factor, apply the theorem to $\lclm(L,D^2)$ instead of~$L$.
Note that $L$ has only algebraic solutions if and only if $\lclm(L,D^2)$ has only algebraic solutions.
More generally, if $M$ is any operator that has only algebraic solutions, then $L$ has only algebraic
solutions if and only if $\lclm(L,M)$ has only algebraic solutions. This is because, as remarked at
the end of Sect.~\ref{sec:prelim}, the least common multiple does not have any extraneous solutions.
Nevertheless, as we show next, there is no hope that $\lclm(L,M)$ could have a pseudoconstant
unless $L$ already has one.
\begin{lem}
\label{lem:lclm_pseudoconstants}
Let $L,M \in K[D]$ and $N = \lclm(L,M)$.
If $[P]_{N}$ is a nonzero completely integral element (resp. a pseudoconstant) in $K[D]/\langle N\rangle$, then
at least one of $[P]_{L}$ or $[P]_{M}$ is a non-zero completely integral element (resp. a pseudoconstant) in the
respective module.
\end{lem}
\begin{proof}
Let $[P]_{N}$ be a completely integral element of $K[D]/\langle N\rangle$.
Let $E$ be an extension of $K$ such that $V(N) \subseteq E$ has dimension $\ord(N)$.
Note that by definition of the lclm, both equivalence classes $[P]_{L}$ and $[P]_{M}$ are well-defined.
Since $V(N) = V(L) + V(M)$, both $[P]_{L}$ and $[P]_{M}$ are completely integral.
If $[P]_{N}$ is non-zero, there exists $h \in V(N)$ such that $P\cdot h \neq 0$.
Therefore there exist $f \in V(L)$ and $g \in V(M)$ such that $h = f+g$ and $P\cdot f + P \cdot g \neq 0$.
So at least one of $P \cdot f$ and $P \cdot g$ is nonzero, implying respectively that $[P]_{L}$ or $[P]_{M}$ is nonzero.
The additional property that $P$ is not a constant similarly propagates to at least one of the summands.
\end{proof}
In view of this negative result, it is remarkable that taking symmetric products can produce
pseudoconstants. For example, the function considered in Example~\ref{ex:product_2F1} is a
product of an algebraic function and a hypergeometric function. The operator
annihilating the hypergeometric function alone (without the algebraic multiplier)
does not have a pseudoconstant.
If the given operator $L$ has no pseudoconstants, we can thus ask whether there is an operator
$M$ with only algebraic solutions such that $L\otimes M$ has pseudoconstants.
Of course, as long as nobody tells us how to choose~$M$, this observation is not really helpful.
What we can easily do however is to multiply the solutions of $L$ with each other.
It turns out that this is sometimes sufficient.
\begin{ex}
Consider the operator
\[
\textstyle
L = \bigl(x^2 - x\bigr) D^{2} + \bigl(\frac{49}{6} x - \frac{7}{3}\bigr) D + 12
\]
annihilating the hypergeometric function ${}_{2}F_{1}\bigl(\frac{9}{2},\frac{8}{3}; \frac{7}{3}; x\bigr)$.
The operator does not have a pseudoconstant.
However, the operator $L^{\otimes 2}$ does have a pseudoconstant
\begin{equation}
\label{eq:9}
\alpha(x) D^{2} + \beta(x) D + \gamma(x)
\end{equation}
where $\alpha$, $\beta$ and $\gamma$ are polynomials in $x$, with respective degree $11$, $10$ and $9$.
By Theorem~\ref{thm:sympow_implies_trans} below, this implies that $L$ has at least one transcendental solution.
\end{ex}
\begin{ex}
\label{ex:product_2F1_noproduct}
Consider the operator
\[
\textstyle
L = \bigl(x^2 - x\bigr) D^{2} + \bigl(\frac{65}{24} x - \frac{7}{6}\bigr) D + \frac{35}{48}
\]
annihilating the hypergeometric function ${}_{2}F_{1}\bigl(\frac{7}{8},\frac{5}{6}; \frac{7}{6}; x\bigr)$.
This is the hypergeometric function appearing in Example~\ref{ex:product_2F1}.
The operator does not have a pseudoconstant.
However, the operator $L^{\otimes 5}$ does have the pseudoconstant $[x(x-1)^3]$.
By Theorem~\ref{thm:sympow_implies_trans} below, this implies that all nonzero solutions of $L$ are transcendental.
The exponents of the solutions of $L$ at its singularities are:
\begin{equation}
\label{eq:5}
\begin{array}{rcc}
(0) & -\frac{1}{6} & 0\\[1ex]
(1) & -\frac{13}{24} & 0\\[1ex]
(\infty) & \frac{5}{6} & \frac{7}{8}
\end{array}
\end{equation}
Multiplying all the solutions by $x^{1/6}(x-1)^{13/24}$ allows us to clear the poles at $0$ and $1$, without creating a pole at infinity: the exponents at infinity become $\frac{5}{6}-\frac{1}{6}-\frac{13}{24} = \frac{1}{8}$ and $\frac{7}{8}-\frac{1}{6}-\frac{13}{24} = \frac{1}{6}$, both non-negative.
This confirms the observation in Example~\ref{ex:product_2F1}.
The presence of rational exponents in $x^{1/6}(x-1)^{13/24}$ means that it does not qualify as a pseudoconstant with our definition.
However, considering symmetric powers makes it possible to clear those denominators.
First, observe that the lowest exponents of the solutions of $L^{\otimes s}$ are $-\frac{1}{6}s$ at $0$, $-\frac{13}{24}s$ at $1$ and $\frac{5}{6}s$ at infinity.
We are looking for a pseudoconstant of the form $[x^{a}(x-1)^{b}]$ with $a,b$ integers.
Multiplying by such an element adds $a$ to the exponent at $0$, $b$ to the exponent at $1$, and subtracts $a+b$ from the exponent at infinity.
The complete integrality condition thus translates into the following inequalities:
\begin{eqnarray}
\label{eq:8}
\textstyle 0 \leq -\frac{1}{6}s + a ; &
\textstyle 0 \leq -\frac{13}{24}s + b ; &
\textstyle 0 \leq \frac{5}{6}s - a -b.
\end{eqnarray}
The solutions, for $s$ in $\{1,\dots,6\}$, are represented in Figure~\ref{fig:solutions_alg_mult}.
The smallest value of $s$ for which there is an integer solution is $5$, and we recover the pseudoconstant $[x(x-1)^{3}] = [x^{4}-3x^{3}+3x^{2}-x]$ for $L^{\otimes 5}$.
\end{ex}
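The integer-point search in the example can be carried out mechanically. The following sketch (the function name is ours) transcribes the three inequalities~\eqref{eq:8} and enumerates the solutions for small~$s$:

```python
from math import ceil, floor

def integer_solutions(s):
    """Integer points (a, b) with a >= s/6, b >= 13*s/24, a + b <= 5*s/6."""
    a_min = ceil(s / 6)
    b_min = ceil(13 * s / 24)
    total_max = floor(5 * s / 6)   # upper bound on a + b
    return [(a, b)
            for a in range(a_min, total_max + 1)
            for b in range(b_min, total_max - a + 1)]

for s in range(1, 7):
    print(s, integer_solutions(s))
# the first solution appears at s = 5, namely (a, b) = (1, 3),
# i.e. the pseudoconstant [x (x-1)^3] of the fifth symmetric power
```

In agreement with Figure~\ref{fig:solutions_alg_mult}, the smallest admissible $s$ is~$5$.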
\begin{figure}
\centering
\upshape
\begin{tikzpicture}[x=1cm,y=1cm]
\draw[->] (0,0) -- (3.3,0) node[right] {$a$};
\draw[->] (0,0) -- (0,4.3) node[right] {$b$};
\draw[gray] (0,0) grid[step=1] (3,4);
\foreach \a in {1, 2, 3} {\node[below] at (\a,0) {\a}; }
\foreach \b in {1,...,4} {\node[left] at (0,\b) {\b}; }
\node[below left] at (0,0) {0};
\foreach \s in {1,..., 6} {
\path[draw=blue, fill=blue!50!white, fill opacity=0.5, text=blue, text opacity=1]
(\s*1/6,\s*13/24)
-- (\s*7/24,\s*13/24) node[anchor=base west] {$s=\s$}
-- (\s*1/6,\s*4/6)
-- cycle;
}
\end{tikzpicture}
\caption{Solutions of the system~\eqref{eq:8} for $s$ in $\{1,\dots,6\}$}
\label{fig:solutions_alg_mult}
\end{figure}
\begin{thm}
\label{thm:sympow_implies_trans}
Let $L \in K[D]$ be a differential operator.
Suppose that for some $s\in\set N$ the symmetric power $L^{\otimes s}$ has a pseudoconstant.
Then $L$ has at least one transcendental solution.
\end{thm}
\begin{proof}
The solution space of $L^{\otimes s}$ is spanned by all products of $s$ solutions of $L$.
The existence of a pseudoconstant in $K[D]/\langle L^{\otimes s} \rangle$ proves that at least one solution of $L^{\otimes s}$ is transcendental. Since sums and products of algebraic functions are again algebraic, this is only possible if at least one solution of $L$ is transcendental.
\end{proof}
In other words, a pseudoconstant for $L^{\otimes s}$ can be viewed as a transcendence certificate for~$L$.
As shown by the previous examples, such a certificate may exist even if $L$ itself does not have pseudoconstants.
So it is worthwhile to search for pseudoconstants of symmetric powers.
As shown by the following theorem, we cannot increase our chances to find a pseudoconstant any further by adding
some rational solutions to the solution space of~$L$.
\begin{prop}
\label{prop:lclm_D_pseudoconstants}
Let $M\in K[D]$ be an operator that has only solutions in~$K$, let $L\in K[D]$, and let $s\in\set N$.
If $\lclm(L,M)^{\otimes s}$ has a pseudoconstant then there is a $d\in\{1,\dots,s\}$ such that $L^{\otimes d}$
has a pseudoconstant.
\end{prop}
\begin{proof}
First note that
\[
L_s := \lclm(L,M)^{\otimes s} =
\lclm\bigl(L^{\otimes s}, L^{\otimes (s-1)} \otimes M, \dots, M^{\otimes s}\bigr).
\]
By Lemma~\ref{lem:lclm_pseudoconstants}, if $[P]_{L_s}$ is a pseudoconstant, then there
exists $d\in\{1,\dots,s\}$ such that $[P]_{L^{\otimes d}\otimes M^{\otimes(s-d)}}$ is
also a pseudoconstant.
This means that for every Puiseux series solution $f$ of $L$ at some point $\xi\in C\cup\{\infty\}$
and every solution $r\in C(x)$ of $M$ we have that $P\cdot(r^{s-d}f^d)$ is integral, and
that for at least one $r$ and one~$f$, the quantity $P\cdot(r^{s-d}f^d)$ is not a constant.
Fixing one such solution $r\in C(x)\setminus\{0\}$ of~$M$, it follows that $Pr^{s-d}$ is
a completely integral element of $K[D]/\<L^{\otimes d}>$ and that $[Pr^{s-d}]_{L^{\otimes d}}$
is not a constant. Thus $L^{\otimes d}$ has the pseudoconstant $[Pr^{s-d}]_{L^{\otimes d}}$.
\end{proof}
We have not been able to answer the following question:
\begin{question}\label{q}
Is it true that for every operator $L$ with at least one transcendental solution there exists
an $s\in\set N$ such that $L^{\otimes s}$ has a pseudoconstant?
\end{question}
If the answer to Question~\ref{q} is yes, then this fact in combination
with Alg.~\ref{alg:algsols} would yield a new decision procedure for the existence of transcendental solutions.
We could simply search in parallel for $s=1,2,3,\dots$ for an algebraic solution of $L$ of degree $s$
and a pseudoconstant of $L^{\otimes s}$. Exactly one of these parallel threads would have to terminate
after a finite number of steps.
A natural idea to prove the existence of pseudoconstants of $L^{\otimes s}$
for sufficiently large~$s$ is to show that the linear system emerging from a search for pseudoconstants
via the linear algebra approach has more variables than equations for sufficiently large~$s$.
Unfortunately, this does not seem to be the case.
The following example can perhaps be considered as some piece of empirical evidence that the
answer to Question~\ref{q} is no.
On the other hand, we can show (Prop.~\ref{prop:alg-constants}) that for an operator $L$ with only algebraic
solutions there is always an $s$ such that $L^{\otimes s}$ has a constant (but of course no
pseudoconstant), and this could be considered as some piece of evidence that the answer to
Question~\ref{q} may be yes.
\begin{ex}
Consider the operator
\[
\textstyle
\bigl(x^{2} - x\bigr) D^{2} + \bigl(\frac{164}{15} x - \frac{16}{3}\bigr) D + \frac{1403}{60},
\]
which annihilates the hypergeometric function ${}_{2}F_{1}\bigl(\frac{61}{10},\frac{23}{6}; \frac{16}{3}; x\bigr)$.
Thanks to Schwarz' classification, we know that the operator has no algebraic solutions.
However, an exhaustive search using integral bases could not find a completely integral element
for $L^{\otimes s}$ for any $s\leq 6$, and a heuristic search using linear algebra could not
find one for any $s\leq 30$.
\end{ex}
\begin{lem}\label{lem:ratsolmakesconstant}
Let $M\in K[D]$ and let $q\in K$ be such that $M\cdot q\neq0$.
Then $L:=\lclm(qD-q',M)$ has a nonzero constant.
\end{lem}
\begin{proof}
Note $V(L)=\Span(q)+V(M)$ and $u:=M\cdot q\neq0$. Consider $P:=u^{-1}M$.
Every $f\in V(L)$ can be written as $f=cq+m$ for a $c\in C$ and an $m\in V(M)$.
So $P\cdot f=u^{-1}(M\cdot m+cM\cdot q)=u^{-1}cu=c$.
By Prop.~\ref{prop:constant} part~\ref{prop:constant:1}, it follows that $[P]$ is
a nonzero constant of~$L$.
\end{proof}
\begin{prop}\label{prop:alg-constants}
If $L\in K[D]$ has only algebraic solutions and
$d$ is such that all the solutions of $L$ have a minimal polynomial of degree
at most~$d$, then $L^{\otimes d}$ has a nonzero constant.
\end{prop}
\begin{proof}
Since $L$ has only algebraic solutions, also $L^{\otimes d}$ has only algebraic solutions.
Moreover, $L^{\otimes d}$ has at least one nonzero rational function solution~$q$
(e.g., the product of all the conjugates of some algebraic solution of~$L$).
If $f$ is a solution of $L^{\otimes d}$, then so are all the conjugates of~$f$,
because $L^{\otimes d}$ has coefficients in~$K$.
The solution space of the minimal order annihilating operator of $f$ is generated
by $f$ and its conjugates, and this operator is therefore a right factor of~$L^{\otimes d}$.
Let $f_1$ be a solution of $L^{\otimes d}$ which does not belong to $\Span(q)$,
and let $M_1$ be a minimal order annihilating operator of~$f_1$.
For $n=1,2,\dots$, let $f_n$ be a solution of $L^{\otimes d}$ which does not belong to $\Span(q)+V(M_1)+\cdots+V(M_{n-1})$,
and let $M_n$ be a minimal order annihilating operator of~$f_n$, until we have
$V(L^{\otimes d})=\Span(q)+V(M_1)+\cdots+V(M_n)$.
At this stage, we have
\[
L^{\otimes d}=\lclm(qD-q',\lclm(M_1,\dots,M_n)),
\]
and since $\lclm(M_1,\dots,M_n)\cdot q\neq0$ by the choice of $M_1,\dots,M_n$,
Lemma~\ref{lem:ratsolmakesconstant} applies.
The claim follows.
\end{proof}
\section{Conclusion}
We propose the notion of a \emph{transcendence certificate} for any kind of artifact
whose existence implies that a given differential operator has at least one
transcendental solution. Simple transcendence certificates are logarithmic and
exponential singularities. \emph{Pseudoconstants} introduced in
Def.~\ref{def:pseudoconstants} can also serve as transcendence certificates. We
have given examples of operators that have no logarithmic or exponential
singularities but that do have pseudoconstants.
We have also given examples of operators that have no pseudoconstants even
though they have transcendental solutions. To such operators, we can try to
apply transformations that preserve the existence of transcendental solutions
but may lead to the appearance of pseudoconstants. In particular, as shown in
Sect.~\ref{sec:expand-search-space}, it can happen that an operator $L$ has no
pseudoconstants but some symmetric power $L^{\otimes s}$ of $L$ does. A
pseudoconstant of $L^{\otimes s}$ suffices to certify the existence of a
transcendental solution of~$L$. An open question (Question~\ref{q}) is whether
the existence of transcendental solutions of $L$ implies the existence of an $s$
such that $L^{\otimes s}$ has pseudoconstants. We would be very interested in an
answer to this question.
There are further possibilities to transform an operator with no pseudoconstants
to one that may have some. For example, we could try to exploit that the
composition of a D-finite function with an algebraic function is always D-finite.
If $f$ is D-finite and $g$ is algebraic, then $f\circ g$ is algebraic if and
only if $f$ is algebraic, thus a pseudoconstant for an annihilating operator
of $f\circ g$ could serve as a transcendence certificate for an annihilating
operator of~$f$. Note that unlike the transformations considered in this paper,
the composition can not only remove singularities but also create new ones.
We have not found an example where this process reveals new pseudoconstants.
In another direction, we could try to weaken the requirements of Def.~\ref{def:pseudoconstants}. According
to our definition, $[P]_L$ is a pseudoconstant if \emph{every} local solution $f$ of $L$
is such that $P\cdot f$ has nonnegative valuation. For a transcendence certificate,
it would suffice to have \emph{one} global solution $f$ of $L$ (a complex
function defined on a Riemann surface) which is not constant and has no pole.
If we relax Def.~\ref{def:pseudoconstants} accordingly, it may be that additional
operators would have pseudoconstants. However, we would no longer know how to decide
the existence of pseudoconstants for a given operator.
\bibliographystyle{plain}
\section{Introduction}\label{introduction}
\setcounter{equation}{0}
At the beginning of the Universe we expect that the energy density was high, and it is likely that quantum gravity was important. Since string theory provides a consistent theory of quantum gravity, we must ask what kind of states are expected in string theory under these conditions. Several ideas have been considered, using either string theory or string inspired constructions \cite{many,bran,greene1,greene2}.
Another place where matter gets crushed to high densities is in the formation of a black hole. In the classical picture of a black hole the curvature is low at the horizon, and large at the singularity. If we consider quantum mechanics on such a background geometry we run into the black hole information paradox \cite{hawking}, which implies that unitarity of quantum mechanics is lost.
String theory has made considerable progress in understanding black holes. We can understand the entropy of extremal and near extremal holes \cite{sen,sv,cm}, and obtain Hawking radiation as a unitary process where excited string states decay \cite{dmcompare}. This suggests that string theory will change our naive picture of the black hole geometry and allow information to leak out in the Hawking radiation.
Several computations have suggested a `fuzzball' picture of the black hole interior, where the quantum gravity effects are not confined to the vicinity of the singularity, but instead spread out all through the interior of the horizon. The key effect is `fractionation': when different kinds of branes are bound together they split up into fractional brane units \cite{dmfrac}. We can regard the large entropy of the black hole as a consequence of fractionation: the entropy calculation just counts these fractional brane units with their appropriate spins and fermion zero modes. Fractionation is also responsible for the low energy of Hawking radiation quanta. More qualitatively, we can say that the fuzzball picture of the black hole interior is also a consequence of fractionation; fractional branes are low tension objects that can stretch to horizon scales instead of just a Planck distance from the singularity. The concrete computations leading to the fuzzball picture construct the microstates that account for the entropy. For 2-charge extremal holes we can understand all microstates, and for 3 and 4 charge extremal cases subfamilies respecting one or more U(1) symmetries have been constructed \cite{microstates,micromore}. In each case the microstate is found to be modified in the entire interior of the hole, and there is no horizon. If the fuzzball picture were true it would resolve the information paradox, since information can escape from the surface of the fuzzball, much like it leaves from the surface of a piece of burning coal.
In this paper we wish to ask the question: can we apply our understanding of black holes to say something about the Cosmological singularity? In Fig.\ref{univ}(a) we depict a traditional radiation filled Universe. We know that we can get a larger entropy for the same energy if we put the mass into black holes of sufficiently large radius; we depict this in Fig.\ref{univ}(b). But our Universe does not look like this at all; if black holes had formed at early times they would continue to exist till today (unless they were small enough to have Hawking evaporated by now).
The situation does not change in any material way if we replace the conventional picture of the black hole interior with a `fuzzball' (Fig.\ref{univ}(c)); this affects only the interior of the hole and not gross properties like the classical attraction between holes.
But if the maximal entropy state of a black hole is this quantum fuzz, then perhaps the maximal entropy state of the Universe is given by such a quantum fuzz filling the entire Universe (Fig.\ref{univ}(d)). Using the microscopic expressions for black hole entropy we conjecture an equation of state for this fuzz, and find the evolution of the Universe with this equation of state.
\begin{figure}[htbp]
\centering
\includegraphics[width=2.5in]{radiation.eps} \hspace{1truecm}
\includegraphics[width=2.5in]{black_holes.eps}
\vspace{.5truecm}
\hspace{4.5truecm} (a) \hspace{7truecm} (b)\\
\vspace{.5truecm}
\includegraphics[width=2.5in]{fuzzy_black_holes.eps} \hspace{1truecm}
\includegraphics[width=2.5in]{fuzz.eps}
\vspace{.5truecm}
\hspace{4.5truecm} (c) \hspace{7truecm} (d)
\caption{(a) Radiation filled Universe \quad (b) All matter in black holes \quad (c) Fuzzball picture suggests that interior of horizon is a very quantum domain \quad (d) Quantum fuzz filling the entire Universe.}
\label{univ}
\end{figure}
\section{Fractional brane states and entropy}\label{entropy}
\setcounter{equation}{0}
In this section we will recall some results from the string description of black holes, which will motivate our ansatz of the fractional brane state and its entropy. A more detailed review of these results can be found in
\cite{review}.
We will work with 10+1 dimensional M theory, using on occasion the language of 9+1 dimensional string theory when discussing branes. We will let the 10 space directions of M-theory be compactified to $T^{10}$. We will denote the spacetime dimension as $D$.
Fig.\ref{univ}(a) depicts a Universe filled with radiation. M-theory has massless quanta, so we can certainly achieve such a state. Let us fix the lengths of the sides of the torus, and explore the entropy as a function of the total energy. If the spacetime dimension is $D$ then
\begin{equation}
S~\sim~ E^{D-1\over D}
\label{one}
\end{equation}
Thus if the Universe was filled with massless radiation we would get $S\sim E^{10\over 11}$ for the 11 dimensions of M-theory and $S\sim E^{9\over 10}$ for the 10 dimensions of string theory; in the latter case $x^{11}$ has been compactified to a small length so that quanta along $x^{11}$ are not excited.
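For completeness, here is the standard estimate behind~(\ref{one}): a massless gas at temperature $\theta$ in a fixed spatial volume $V$ of $D$-dimensional spacetime has
\[
E\sim V\theta^{D},\qquad S\sim V\theta^{D-1}
\]
so that eliminating $\theta$ at fixed $V$ gives $S\sim V^{1/D}\,E^{(D-1)/D}$, which is (\ref{one}) up to the volume factor.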
\begin{figure}[htbp]
\centering
\includegraphics[width=2.5in]{strings.eps} \hspace{1truecm}
\includegraphics[width=2.5in]{Intersecting_branes.eps}
\vspace{.5truecm}
\hspace{4.5truecm} (a) \hspace{7truecm} (b)
\caption{(a) A string can wind several times around a compact cycle and carry vibrations \quad (b) In the `brane gas' model branes can wrap cycles and carry vibrations.}
\label{strings}
\end{figure}
\subsection{Two charges}
Since we have extended objects in our theory, we can wrap them around the cycles of the torus.
Consider string theory and let a string be wrapped $n_1$ times around a cycle of the torus; let the length of this cycle be $L$. We can add excitations to this string, which split up into left movers and right movers. First let the string be a heavy `background' object, with the excitations as small vibrations. The excitations form a massless gas in $1+1$ dimensions. The total length of the string is $L_T=n_1 L$. The energy and momentum carried by the left movers is of the form
\begin{equation}
E_L=|P_L|={\pi n_p\over L}={2\pi n_1n_p\over L_T}
\end{equation}
The entropy of the left movers is
\begin{equation}
S_L=2\sqrt{2}\pi\sqrt{n_1 n_p}
\end{equation}
where the dependence $S\sim \sqrt{n_1n_p}$ comes from the way the momentum can be distributed among different harmonics on the string, and the coefficient arises from the fact that there are 8 transverse vibration modes of the string and 8 fermionic superpartners of these modes \cite{sen}.
Note that the $n_p$ units of $P_L$ broke up into $n_1n_p$ `fractional' units of momentum because the string itself was a bound state of $n_1$ singly wrapped strings; this is a simple example of the fractionation mentioned above \cite{dmfrac}.
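The coefficient in the entropy can be recovered from the Cardy formula: the 8 bosonic vibration modes and their 8 fermionic superpartners give central charge $c=8+8\cdot\frac12=12$, while fractionation makes the effective excitation level $N_L=n_1n_p$, so
\[
S_L=2\pi\sqrt{c\,N_L\over 6}=2\pi\sqrt{2\,n_1n_p}=2\sqrt2\,\pi\sqrt{n_1n_p}
\]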
Adding in the right movers we have
\begin{equation}
S=2\sqrt{2}\pi\sqrt{n_1}~(\sqrt{n_p}+\sqrt{\bar n_p})
\label{two}
\end{equation}
Of course we should not really regard the vibrations of the string as small oscillations in general, and to carry out the full computation we note that the total energy $E$ of a string state is given by
\begin{equation}
E^2=(\hat n_1 L T-{2\pi \hat n_p\over L})^2+8\pi T N_L = (\hat n_1 L T+{2\pi \hat n_p\over L})^2+8\pi T N_R
\end{equation}
where $T$ is the tension of the string, $\hat n_1=n_1-\bar n_1, \hat n_p=n_p-\bar n_p$ give the net winding and net momentum carried by the string, and $N_L, N_R$ are the left and right excitation levels. The entropy is
\begin{equation}
S=2\sqrt{2}\pi (\sqrt{N_L}+\sqrt{N_R})
\end{equation}
For vanishing net winding and momentum $\hat n_1=0, ~\hat n_p=0$ we get
\begin{equation}
S=2\sqrt{\pi} ~{E\over \sqrt{ T}}
\label{three}
\end{equation}
This is a faster growth of S than (\ref{one}), and leads to the well known Hagedorn transition.
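The step from the level expansion to (\ref{three}) is a one-line computation: for $\hat n_1=\hat n_p=0$ the two expressions for $E^2$ give $N_L=N_R=E^2/(8\pi T)$, so
\[
S=2\sqrt2\,\pi\,\bigl(\sqrt{N_L}+\sqrt{N_R}\bigr)=4\sqrt2\,\pi\,{E\over\sqrt{8\pi T}}=2\sqrt\pi\,{E\over\sqrt T}
\]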
We can understand the above dependence $S\sim E$ also in the more elementary computation
(\ref{two}). The Universe will have no net string winding, so we will have winding as well as anti-winding modes. On the winding mode we have left movers (momentum) and right movers (anti-momentum), and similarly for the anti-winding modes. We find that the entropy is optimized if we put as much energy into string winding ($(n_1+\bar n_1) L T= 2 n_1 L T= {E\over 2}$) as in the momentum excitations. This gives
\begin{equation}
n_1=\bar n_1\sim E, ~~~n_p=\bar n_p\sim E, ~~~~S\sim \sqrt{n_1n_p}\sim E
\label{four}
\end{equation}
in agreement with (\ref{three}).
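The optimization behind (\ref{four}) is elementary: with $n_1=\bar n_1$ and $n_p=\bar n_p$, we maximize $S\sim\sqrt{n_1n_p}$ subject to the energy budget
\[
E=2n_1\,LT+2n_p\,{2\pi\over L}
\]
The product $n_1n_p$ under a linear constraint is maximized when the two terms are equal,
\[
2n_1LT=2n_p\,{2\pi\over L}={E\over2}\quad\Longrightarrow\quad \sqrt{n_1n_p}={E\over4\sqrt{2\pi T}}\sim{E\over\sqrt T}
\]
consistent with the scaling in (\ref{three}).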
The purpose of carrying out the estimate in the crude form (\ref{four}) is that we wish to talk about fractional branes and antibranes. In the count (\ref{three}) we had a closed loop of an excited string, but we see that we can regard the two sides of this loop as string `winding' and `antiwinding', and the excitations as `momentum' and `anti-momentum'. We have understood the state in terms of two kinds of charges (and their anticharges): windings of the elementary string (NS1) and momentum (P). We will call such states `2-charge' states, and have found that the entropy of 2-charge states grows as $S\sim E$.
If the string has only left excitations but no right excitations then we get an extremal NS1-P state. The entropy is given by setting $\bar n_p=0$ in (\ref{two}), so we have $S=2\sqrt{2}\pi\sqrt{n_1n_p}$. We can use dualities to map this system to other forms. For example we can get D0-D4 -- a bound state of $n_0=n_1$ D0 branes and $n_4=n_p$ D4 branes. A further T-duality along a direction in the D4 gives $D1-D3$, where the $n_1$ D1 branes are perpendicular to the $n_3=n_p$ D3 branes. We depict this in Fig.\ref{fractionated}. Note that each D1 brane gets `broken up' into $n_3$ pieces. Thus there are $n_1n_3$ fractional D1 branes, and their different positions give $\sim n_1n_3$ moduli in a classical description of the branes. Quantizing the wavefunctions on this moduli space will again give the 2-charge entropy $S=2\sqrt{2}\pi\sqrt{n_1n_3}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{fractionated_branes.eps}
\caption{Different kinds of branes `fractionate' each other, giving a large entropy.}
\label{fractionated}
\end{figure}
\subsection{Three charges}
Can we get an entropy that grows with energy faster than $S\sim E$? Let us recall the microscopic description of 3-charge black holes.
Consider type IIB string theory, and let there be 5 compact directions, which we write as $T^4\times S^1$:
\begin{equation}
M_{9,1}~\rightarrow ~ M_{4,1}\times T^4\times S^1
\end{equation}
We will wrap branes on the compact directions, and obtain an object that is a black hole in the 4+1 noncompact directions. The black hole in \cite{sv} was made with charges D1-D5-P, but since we started with the elementary string above we dualize this to get NS1-NS5-P. The NS1 branes are wrapped on $S^1$, the NS5 branes wrap $T^4\times S^1$, and the momentum P runs along $S^1$. The entropy is \cite{sv}
\begin{equation}
S=2\pi\sqrt{n_1n_5n_p}
\end{equation}
Let the mass of a brane of type $i$ be $m_i$. Then the energy of the extremal system is just given by adding the masses of the branes
\begin{equation}
E=n_1m_1+n_5m_5+n_pm_p
\end{equation}
Since the energy of the system is linear in the numbers of branes $n_i$, we find
\begin{equation}
S\sim E^{3\over 2}
\label{ten}
\end{equation}
Thus the 3-charge entropy grows faster with energy than the 2-charge entropy (\ref{three}).
Suppose $n_1, n_5\gg n_p$. Let us add a small amount of extra energy without changing the charges, so that the system is no longer extremal. One finds that the Bekenstein entropy of the near-extremal hole $S(E, Q_i)$ can be reproduced by assuming that we have both momentum and anti-momentum excitations, and that these do not interact \cite{cm}:
\begin{equation}
S=2\pi\sqrt{n_1n_5}(\sqrt{n_p}+\sqrt{\bar n_p})
\label{threenex}
\end{equation}
\begin{equation}
E=n_5m_5+n_1 m_1+(n_p+\bar n_p)m_p
\end{equation}
If only one charge is large, $n_5\gg n_1, n_p$ and the system is slightly off extremality, we find that we can again reproduce the Bekenstein entropy exactly \cite{malda5} by writing
\begin{equation}
S=2\pi\sqrt{n_5}(\sqrt{n_1}+\sqrt{\bar n_1})(\sqrt{n_p}+\sqrt{\bar n_p})
\label{five}
\end{equation}
\begin{equation}
E=n_5m_5+(n_1+\bar n_1) m_1+(n_p+\bar n_p)m_p
\label{six}
\end{equation}
The expression (\ref{five}) needs some explanation. The number $n_5$ is determined by the given NS5 charge for the system, but there are four other numbers: $n_1, \bar n_1, n_p, \bar n_p$. The net NS1 and P charges ($\hat n_1, \hat n_p$) give
\begin{equation}
n_1-\bar n_1=\hat n_1, ~~~~n_p-\bar n_p=\hat n_p
\end{equation}
and a third relation comes from the energy (\ref{six}). This leaves one free parameter, and we should extremize $S$ over this parameter to obtain the entropy. The result then tells us how the energy wants to partition itself into fractional excitations, and is found to exactly reproduce the Bekenstein entropy of the system.
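The one-parameter extremization behind (\ref{five}) is easy to carry out numerically. As an illustration (with made-up values for the masses and the available energy, and zero net NS1 and P charge), a brute-force scan shows that the entropy is maximized when the available energy is split equally between the string and momentum excitations:

```python
import math

def S5(n5, n1, nbar1, npp, nbarp):
    # entropy formula (5)
    return (2 * math.pi * math.sqrt(n5)
            * (math.sqrt(n1) + math.sqrt(nbar1))
            * (math.sqrt(npp) + math.sqrt(nbarp)))

n5 = 100            # large fixed NS5 charge
m1, mp = 1.0, 2.0   # illustrative masses of the light excitations
E_light = 10.0      # energy available beyond the NS5 rest mass

best_S, best_x = -1.0, None
steps = 10000
for k in range(1, steps):
    x = E_light * k / steps          # energy in NS1 + anti-NS1
    n1 = x / (2 * m1)                # n1 = nbar1 (zero net charge)
    npp = (E_light - x) / (2 * mp)   # np = nbarp
    s = S5(n5, n1, n1, npp, npp)
    if s > best_S:
        best_S, best_x = s, x

print(best_x)   # the maximum sits at the equal split E_light / 2
```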
We can make a natural extension of the above formulae for entropy to the case where all charges are comparable, and the system is not close to extremal. This case will therefore include the Schwarzschild hole. We write \cite{hms}
\begin{equation}
S=2\pi(\sqrt{n_5}+\sqrt{\bar n_5})(\sqrt{n_1}+\sqrt{\bar n_1})(\sqrt{n_p}+\sqrt{\bar n_p})
\label{el}
\end{equation}
\begin{equation}
E=(n_5+\bar n_5)m_5+(n_1+\bar n_1) m_1+(n_p+\bar n_p)m_p
\label{seven}
\end{equation}
Again we have the conditions
\begin{equation}
n_5-\bar n_5=\hat n_5, ~~~~n_1-\bar n_1=\hat n_1, ~~~~n_p-\bar n_p=\hat n_p
\end{equation}
with the energy given by (\ref{seven}) which assumes no interaction energy between the branes and antibranes. This time there are 6 parameters and 4 conditions, and we must again extremize $S$ over the remaining 2-parameter family to get the correct $S$. The resulting $S(E, Q_i)$ agrees exactly with the Bekenstein entropy of black holes in 4+1 dimensions, for all values of charges $Q_i$ and energy $E$.
If we scale up all charges and the total energy the same way, we find that for all these entropies arising from using three kinds of charges we get $S\sim E^{3\over 2}$.
\subsection{4 charges}
A similar story is found if we compactify an additional direction, so that we have
\begin{equation}
M_{9,1}~\rightarrow ~ M_{3,1}\times T^4\times S^1\times \tilde S^1
\end{equation}
Now we have 6 directions on which we can wrap objects, and we get black holes in 3+1 noncompact dimensions. We can take the NS1-NS5-P charges that we had above and add a fourth charge: KK monopoles that have $\tilde S^1$ as their non-trivially fibred circle. We can again dualize these four charges to a variety of forms. A form that looks symmetric in the four charges consists of four D3 branes that wrap the 6 cycles of the compact $T^6$ as follows
\begin{equation}
D3_{123}~~~~D3_{145}~~~~D3_{246}~~~~D3_{356}
\end{equation}
Thus any pair of D3 branes shares one common direction.
The entropy is again given by \cite{4charge}
\begin{equation}
S=2\pi(\sqrt{n_1}+\sqrt{\bar n_1})(\sqrt{n_2}+\sqrt{\bar n_2})(\sqrt{n_3}+\sqrt{\bar n_3})(\sqrt{n_4}+\sqrt{\bar n_4})
\label{eight}
\end{equation}
where we extremize over the $n_i, \bar n_i$ subject to
\begin{equation}
n_i-\bar n_i=\hat n_i
\end{equation}
and
\begin{equation}
E=\sum_i (n_i+\bar n_i) m_i
\end{equation}
This entropy agrees exactly with the Bekenstein entropy $S(E,Q_i)$ of a hole in 3+1 dimensions.
By changing the orientation of one of the D3 branes we get a nonsupersymmetric but still extremal system, and the entropy of this system was matched to the Bekenstein entropy recently in \cite{emho}.
Note that the $n_i$ grow linearly with $E$, so the entropy (\ref{eight}) grows with energy as
\begin{equation}
S\sim E^2
\label{nine}
\end{equation}
\subsection{Proposal for entropy in the early Universe}
We have seen that if we take a gas of massless particles we get an entropy $S\sim E^{D-1\over D}$. We can consider excited string states which give $S\sim E$; we have seen that this system can be re-interpreted as a 2-charge system where the two charges fractionate each other and produce the entropy. With three charges we get $S\sim E^{3\over 2}$. With four charges we get $S\sim E^2$.
Now consider the Universe, where we have compactified all the spatial directions to a torus. The traditional big bang picture envisages a radiation filled Universe at early times. In \cite{bran} a string gas was considered, and in \cite{greene1,greene2} this was extended to a `brane gas'. Such gases can give entropy $S\sim E$. But we have seen above that general fractional brane states can give a much higher $S$ at large $E$. We wish to adopt an equation of state for the early Universe that will reflect this
high entropy and thus correspond to the most generic state for given $E$.
We will assume that the entropy has the form
\begin{equation}
S=A'\prod_{i=1}^N (\sqrt{n_i}+\sqrt{\bar n_i})
\label{entropyass}
\end{equation}
Since we will take the net charge of each type to vanish, we have $n_i=\bar n_i$ and we get
\begin{equation}
S=2^N A' \prod_{i=1}^N \sqrt{n_i}\equiv A\prod_{i=1}^N \sqrt{n_i}
\end{equation}
The energy is
\begin{equation}
E=\sum_i m_i (n_i+\bar n_i)=2\sum_i m_i n_i
\end{equation}
Here $m_i$ is the mass of the brane of type $i$
\begin{equation}
m_i=T_p \prod_j L_j
\end{equation}
where $T_p$ is the tension of a p-brane and the product runs over all the spatial directions of the brane.
We assume that the system is in thermal equilibrium, so we will maximize the entropy (\ref{entropyass}) for given $E$.\footnote{To see if the assumption of equilibrium is true, we will have to compute the rate of interactions between fractional branes. This interaction depends on the total number of branes in the bound state. We do not address these issues here, and hope to return to them elsewhere.} To find the state with maximal entropy at a given time $t$, the $L_i$ are held fixed (which fixes the $m_i$), and the total energy is held fixed at $E$. Taking into account this energy constraint we maximize
\begin{equation}
\tilde S=S-\lambda (E_{branes}-E)= A \prod_{i=1}^N \sqrt{n_i}-\lambda( 2 \sum_i m_i n_i-E)
\end{equation}
Extremizing over $n_i$ gives
\begin{equation}
n_k=\bar n_k={E\over 2N m_k}
\end{equation}
Note that the energy is equipartitioned among all types of branes, each type getting energy (there is no sum over $k$)
\begin{equation}
E_k=n_km_k={E\over 2N}
\label{tw}
\end{equation}
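The equipartition result (\ref{tw}) can be checked directly. A minimal sketch (hypothetical values of $E$ and the $m_i$, with $N=2$ types of branes and the constant $A$ set to one) scans the single free parameter and confirms that the maximum of $S$ sits at $n_k=E/(2Nm_k)$:

```python
import math

E = 16.0
m = [1.0, 2.0]                 # masses of the two hypothetical brane types
N = len(m)

def S(n):                      # S = A * prod_i sqrt(n_i), with A = 1
    return math.prod(math.sqrt(x) for x in n)

best_S, n1_star = -1.0, None
for k in range(1, 100000):
    n1 = (E/(2*m[0])) * k/100000          # scan the single free parameter
    n2 = (E - 2*m[0]*n1) / (2*m[1])       # fix n2 by the energy constraint
    if n2 <= 0:
        continue
    s = S([n1, n2])
    if s > best_S:
        best_S, n1_star = s, n1

pred1 = E/(2*N*m[0])           # equipartition prediction, eq. (tw)
pred2 = E/(2*N*m[1])
n2_star = (E - 2*m[0]*n1_star) / (2*m[1])
```

The scan lands on the equipartition point: each brane type carries energy $E/2N$ regardless of its mass.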
\subsection{Stress tensor}
We have seen that the entropy of black holes is reproduced by assuming that the energy gets partitioned optimally between different kinds of branes and antibranes. In this computation the energy is taken to be just additive; i.e. there was no energy of interaction. In \cite{hms} it was shown that with this same assumption of noninteraction between branes we can reproduce the {\it pressures} exerted by the black hole on the various compact cycles. Thus on the one hand we can take the black hole geometry and for compact directions $y_i$ look at the asymptotic fall-off of $g_{y_iy_i}$; this is related to the pressure components $T^i_i$ of the stress tensor in a weak gravity situation. On the other hand we can take the set of branes and antibranes that we obtained by extremizing an expression like (\ref{el}),(\ref{eight}), compute the pressure each brane exerts by itself on the compact directions, and just add these pressures. One again finds exact agreement between the black hole result and the microscopic computation.\footnote{In \cite{hms} the variables compared between the two computations were certain linear combinations of the pressures; for a direct computation of pressures from wrapped branes see for example \cite{cgm}. } We will thus also use a simple sum over the pressures of the branes describing our configuration.
Let us first compute the stress tensor of a single $p$-brane. The action of the brane is
\begin{equation}
S=-T_p~\int ~\sqrt{-g^{ind}} ~d^{p+1}\xi
\end{equation}
where $g^{ind}_{ab}$ is the metric induced on the worldvolume. The stress tensor is given by
\begin{equation}
T_{\mu\nu}=-{2\over \sqrt{-g}}{\delta S\over \delta g^{\mu\nu}}
\end{equation}
Let the length of the direction $x^i$ be $L_i$. Let the brane be wrapped on directions $x^1\dots x^p$. The volume of the brane is $V_p=\prod_{i=1}^p L_i$. The volume of the directions transverse to the brane is $V_{tr}=\prod_{i=p+1}^{D-1} L_i$. The total volume of the torus is $V=V_p V_{tr}$.
The stress tensor has only diagonal components. We find (there is no sum over $k$)
\begin{eqnarray}
T^{(p) k}{}_k=-T_p ~\prod_{i=p+1}^{D-1}\hat \delta(x_i-\bar x_i), ~~~~~~~~~&k&=1, \dots, p\nonumber\\
T^{(p) k}{}_k=0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~&k&=p+1, \dots, (D-1)
\end{eqnarray}
where $\hat\delta$ is the covariant delta function ($\int \hat\delta(x) \sqrt{-g_{xx}}~ dx=1$), and $\bar x_i$ give the position of the $p$-brane in the transverse coordinates.
Now suppose there are $n_p$ branes of this type, smeared uniformly on the transverse directions $x_i, i=p+1\dots (D-1)$. Then we get
\begin{equation}
T^{(p) k}{}_k=-T_p {n_p\over V_{tr}}=-T_p {n_pV_p\over V}= - {E_p\over V}
\end{equation}
where $E_p=n_p T_p V_p$ is the total energy carried by this type of brane. Using (\ref{tw})
we have
\begin{equation}
T^{(p) k}{}_k =- {E\over 2NV}= - {\rho\over 2N}
\end{equation}
where $\rho={E\over V}$ is the energy density. Including the contribution of the corresponding antibrane, we get from this type of brane the pressure
\begin{equation}
p=-{1\over N}\rho
\end{equation}
Now suppose there were $N_i$ types of branes wrapping the direction $x_i$. Then the pressure in the direction $x_i$ will be
\begin{equation}
p_i=-{N_i\over N}\rho\equiv w_i\rho
\label{wncq}
\end{equation}
where we have defined
\begin{equation}
w_i\equiv -{N_i\over N}
\label{wnc}
\end{equation}
Momentum modes P contribute $-1$ to $N_i$ (they have a positive pressure while branes have a negative pressure). Note that the $N_i$ appearing in (\ref{wnc}) counts the {\it types} of branes wrapping different cycles, not the {\it number} of branes along those cycles. An example might make this clearer. Take 11-dimensional M theory; then there are 10 compact spatial directions. Consider the charges
\begin{equation}
M5_{12345}, ~~~~M5_{12367}, ~~~~M5_{14567}, ~~~~P_1
\end{equation}
where the subscripts indicate the directions along which the branes wrap ($P_1$ is momentum along the direction $x_1$ common to all M5 branes). There are 4 kinds of charges; thus $N=4$. Along $x_1$ we have the contribution $+1$ from each of the M5 branes and the contribution $-1$ from P; thus we have $N_1=3-1=2$, and $w_1=-{N_1\over N}=-{1\over 2}$. For $x_2$, we have a contribution $+1$ from the first and second types of M5 branes, so we again get $N_2=2$ and $w_2=-{1\over 2}$. A similar result holds for $x_3, \dots , x_7$. Along $x_8, \dots , x_{10}$ we find no charges, so $N_i=0$ and the corresponding $w_i$ vanish. Thus we get
\begin{equation}
\{w_1, \dots , w_{10}\}\equiv \vec w = \{ -{1\over 2}, -{1\over 2}, -{1\over 2}, -{1\over 2}, -{1\over 2}, -{1\over 2}, -{1\over 2}, 0,0,0\}
\end{equation}
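The counting in this example is mechanical enough to automate. A small sketch (the encoding of charges as sets of wrapped directions is our own bookkeeping, not standard notation) reproduces $\vec w$ for the M5-M5-M5-P configuration above:

```python
def w_vector(charges, d_space=10):
    # charges: list of (kind, wrapped directions); momentum P counts as -1
    N = len(charges)
    w = []
    for i in range(1, d_space + 1):
        Ni = sum((-1 if kind == 'P' else 1)
                 for kind, dirs in charges if i in dirs)
        w.append(-Ni / N)
    return w

charges = [('M5', {1, 2, 3, 4, 5}),
           ('M5', {1, 2, 3, 6, 7}),
           ('M5', {1, 4, 5, 6, 7}),
           ('P',  {1})]
w = w_vector(charges)
```

The result is $w_i=-1/2$ for the seven wrapped directions and $w_i=0$ for the three empty ones, as found above.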
\subsection{Comparison between our approach and brane gas models}\label{branesec}
What new features can strings and branes bring to the early Universe?
In \cite{bran} the idea of `string gas' was examined; in \cite{greene1,greene2} the extension to `brane gases' was considered.
We will also be using branes, and some of our computations will resemble those in brane gas scenarios. But there is a significant difference in the basic idea between our approach and brane gases. Here we outline some points of this difference; this discussion should help us put forth our conjectured picture more clearly.
\bigskip
(a) In a string gas we can wrap strings along arbitrary cycles of the torus. Similarly, in the brane gas of \cite{greene2} M2 branes were wrapped on all cycles $(ij)$ of the torus. But in our computation we need to take a set of branes that are mutually BPS; it is only such sets
that manifest fractionation and the consequent large entropy (\ref{entropyass}). M2 branes are mutually BPS if they share no common directions, M5 branes need to share three common directions,
and an M2 - M5 combination must share one common direction.
\bigskip
(b) In the brane gas model of \cite{greene2} the entropy comes from vibrations living on the surface of the M2 branes. This entropy is proportional to the area of the brane, and at a suitably high temperature the energy cost of each unit area of the brane is balanced by the entropy carried by that area. This gives a Hagedorn phase with $S\sim E$.
In our case the energy $E$ goes to creating certain sets of branes and antibranes that fractionate each other, and thereby create the entropy
(\ref{entropyass}). Even if we have just three kinds of charges, this entropy grows as $S\sim E^{3\over 2}$, much faster than the Hagedorn entropy. Thus at high energy densities we expect to get the fractional brane state rather than a Hagedorn state.
\bigskip
(c) In the brane gas model when two branes intersect they tend to annihilate. Thus sets of branes that do not generically intersect are expected to last for longer times, and govern the long time dynamics of the system.
The situation is quite the opposite in our case. Consider the three charges NS1-NS5-P which we considered for the 3 charge black hole. The NS1 is bound to the NS5, and thus `lives in the plane of the NS5'. The P charges are carried by excitations of the NS1-NS5 bound state along the direction common to the NS1 and NS5. For fractionation to occur, all charges must `see' each other.
One may then wonder why the branes and antibranes do not immediately annihilate to radiation. But we have already seen that if $E$ is high then there is more entropy in the fractional brane state than in radiation, so there should {\it not} be an annihilation to radiation. Let us discuss the annihilation of brane anti-brane pairs in more detail.
If we place a brane and an antibrane together, we get a system that can be described by a tachyon sitting at the top of its potential hill \cite{sent}. Classically the tachyon can sit at the top of the hill indefinitely, while quantum mechanically it will fall to its ground state. If we take a large number $N$ of branes, and one antibrane, then we can describe the branes by their gravity dual. We then find that the antibrane just falls down the throat created by the branes, and no radiation emerges for long times \cite{tachyon}; thus there is no quick annihilation between the branes and the antibrane.
In fact the annihilation process for fractional branes has been well studied in the black hole context, where one finds that decay of brane-antibrane pairs gives Hawking radiation. For the three charge system described by (\ref{threenex}) we can compute the rate of annihilation of $P\bar P$ pairs to radiation, and find that the rate matches Hawking emission exactly in spin dependence and grey body factors \cite{dmcompare}. Similar agreement is found for the 4-charge analogue of (\ref{threenex}) \cite{klebanov} and for the system described by (\ref{five}) \cite{mk}. We can even get the exact emission rate for the general hole described by (\ref{el}),(\ref{eight}), if we boost the neutral hole to add charges; this maps the neutral hole to the near-extremal system used in the above mentioned results \cite{ramadevi}. While boosting in a compact direction is not an exact symmetry of string theory, it may be a good approximation for large charges, and is similar to the idea used in Matrix theory.
All these computations suggest the following picture for black hole microstates. The state has a large number of fractional branes and antibranes, and the potential describing this system has a large number of saddle points which give metastable states. The system slowly drops from one metastable state to a lower energy one, giving
Hawking radiation, which is a process suppressed by powers of $\hbar$ for classical sized black holes. We thus expect that our system on the torus $T^{10}$ will be composed of fractional branes and antibranes with the high entropy (\ref{entropyass}), and branes and antibranes will not annihilate.\footnote{Brane-antibrane models for black holes were also considered in \cite{fractional}.}
\bigskip
In summary, let us take an analogy from nuclear physics. At low energy we see hadrons, but at high density and pressure we get a quark-gluon plasma, where deconfinement has liberated the elementary degrees of freedom to generate the highest possible entropy. At very high energies these elementary constituents are essentially noninteracting quanta. In our case we have a high energy density in the early Universe, and black hole physics suggests that the most entropically favored configuration is one of fractional branes. Black hole computations also suggest that these fractional brane quanta are free to leading order, and that we should find the total energy and pressure by adding the contributions from each brane in the state.
\section{Einstein's equations}
\setcounter{equation}{0}
We take the metric to have the form
\begin{equation}
ds^2=-dt^2+\sum_{i=1}^{D-1}a^{2}_{i}(t)dx_{i}^{2}
\label{metric}
\end{equation}
The coordinates $x_i$ are compactified with period unity ($0\le x_i<1$). The nonvanishing components of the connection are
\begin{equation}
\Gamma^t_{ii}=a_i{\dot a}_i, ~~~\Gamma^i_{ti}={{\dot a}_i\over a_i}
\end{equation}
The relevant components of the Einstein tensor are
\begin{equation}
G^t{}_t=-{1\over 2} (\sum_i{{\dot a}_i\over a_i})^2+{1\over 2} \sum_i {{\dot a}_i^2\over a_i^2}
\end{equation}
\begin{eqnarray}
G^k{}_k&=&{\ddot a_k\over a_k} +{{\dot a}_k\over a_k}(\sum_{i}{{\dot a}_i \over a_i})-{{\dot a}_k^2\over a_k^2}-{1\over 2} [2\sum_i {\ddot a_i\over a_i}+(\sum_i{{\dot a}_i\over a_i})^2-\sum_i {{\dot a}_i^2\over a_i^2}]\nonumber\\
&=&{\ddot a_k\over a_k} +{{\dot a}_k\over a_k}(\sum_{i}{{\dot a}_i \over a_i})-{{\dot a}_k^2\over a_k^2}- \sum_i {\ddot a_i\over a_i}+G^t{}_t
\label{gkkl}
\end{eqnarray}
(There is no sum over $k$ in (\ref{gkkl}).)
The Einstein equations are $G^\mu{}_\nu=8\pi G T^\mu{}_\nu$. The nonvanishing components of the stress tensor are
\begin{equation}
T^t{}_t=-\rho, ~~~~~T^k{}_k=p_k=w_k\rho
\end{equation}
so we get the field equations
\begin{equation}
-{1\over 2} (\sum_i{{\dot a}_i\over a_i})^2+{1\over 2} \sum_i {{\dot a}_i^2\over a_i^2}=-8\pi G\rho
\label{gtt}
\end{equation}
\begin{equation}
{\ddot a_k\over a_k} +{{\dot a}_k\over a_k}(\sum_{i}{{\dot a}_i \over a_i})-{{\dot a}_k^2\over a_k^2}- \sum_i {\ddot a_i\over a_i}=8\pi G (1+w_k)\rho
\label{gkkpre}
\end{equation}
Substituting (\ref{gtt}) in (\ref{gkkpre}) we get
\begin{equation}
{\ddot a_k\over a_k} +{{\dot a}_k\over a_k}(\sum_{i}{{\dot a}_i \over a_i})-{{\dot a}_k^2\over a_k^2}- \sum_i {\ddot a_i\over a_i}=(1+w_k)~[{1\over 2} (\sum_i{{\dot a}_i\over a_i})^2-{1\over 2} \sum_i {{\dot a}_i^2\over a_i^2}]
\label{gkk}
\end{equation}
\section{A Kasner type power law solution}\label{kasner}
\setcounter{equation}{0}
For the empty Universe with toroidal compactification we have the Kasner solutions \cite{kasner}, where the radii grow as powers of $t$. A power law solution was also found for the case of isotropically wrapped branes in \cite{greene1}. We will see that with the equation of state that we have chosen we can get a power law solution for any choice of the $w_i$ which characterize the brane wrappings.
Thus write
\begin{equation}
a_i=\bar a_i ~t^{\beta_i}
\end{equation}
so that
\begin{equation}
{{\dot a}_i\over a_i}={\beta_i\over t}, ~~~~~{\ddot a_i\over a_i}={\beta_i(\beta_i-1)\over t^2}
\end{equation}
Substituting in (\ref{gkk}) gives
\begin{equation}
\beta_k={{1\over 2} (\sum_{i}\beta_i^2)(1-w_k)+{1\over 2}(\sum_{i }\beta_i)^2(1+w_k) -(\sum_i\beta_i)\over [(\sum_{i}\beta_i)-1]}
\end{equation}
We write
\begin{equation}
\sum_i \beta_i=A, ~~~~ \sum_i\beta_i^2=B
\label{sixt}
\end{equation}
Then we have
\begin{equation}
\beta_k=[{{1\over 2} B+{1\over 2} A^2-A\over (A-1)}]-w_k~ [{{1\over 2} B-{1\over 2} A^2\over (A-1)}]
\label{thir}
\end{equation}
Let us define
\begin{equation}
W\equiv \sum_i w_i, ~~~~~~U\equiv\sum_i w_i^2
\end{equation}
We can get two consistency conditions from (\ref{thir}). First we sum over $k$ in (\ref{thir}), getting
\begin{equation}
\sum_k\beta_k=A=(D-1)[{{1\over 2} B+{1\over 2} A^2-A\over (A-1)}]-W~[{{1\over 2} B-{1\over 2} A^2\over (A-1)}]
\label{fourt}
\end{equation}
Next we square the $\beta_k$ and then add:
\begin{equation}
\sum_k\beta_k^2=B=(D-1) [{{1\over 2} B+{1\over 2} A^2-A\over (A-1)}]^2+U[{{1\over 2} B-{1\over 2} A^2\over (A-1)}]^2-2W~[{{1\over 2} B+{1\over 2} A^2-A\over (A-1)}]~[{{1\over 2} B-{1\over 2} A^2\over (A-1)}]
\label{fift}
\end{equation}
One solution to these equations is $A=1, B=1$, which gives the well known vacuum Kasner solutions \cite{kasner}. To find other solutions, note that
eq.(\ref{fourt}) is linear in $B$, and gives
\begin{equation}
B=A~{2(D-2)+A(3-W-D)\over D-1-W}
\end{equation}
Substituting in (\ref{fift}) we get a quadratic equation for $A$. Solving this, we get two additional solutions, one of which is $A=0$. Collecting all these solutions we have the following cases:
\bigskip
(i)
\begin{equation}
A=0, ~~~~B=0
\end{equation}
This gives $\beta_i=0$ for all $i$, and thus corresponds to empty Minkowski space.
\medskip
(ii)
\begin{equation}
A=1, ~~~~B=1
\end{equation}
These are the known vacuum Kasner solutions. Thus there will be no matter, and the different expansions of the different directions give a self-consistent solution of the Einstein equations.
All $\beta_i$ satisfying (\ref{sixt}) with $A=B=1$ give allowed solutions.
\medskip
(iii)
\begin{eqnarray}
A&=&{2(D-1-W)\over (D-1)+(D-2)U-W^2}\nonumber\\
B&=&4~{(D-1)+(D-2)^2U-2W-(D-3)W^2\over [(D-1)+(D-2)U-W^2]^2}
\end{eqnarray}
This gives a solution with a nontrivial stress tensor contributed by branes. From (\ref{thir}) we find
\begin{eqnarray}
\beta_k&=&[{{1\over 2} B+{1\over 2} A^2-A\over (A-1)}]-w_k~ [{{1\over 2} B-{1\over 2} A^2\over (A-1)}]\nonumber\\
&=&[{2(W-1)\over W^2-D(U+1)+2U+1}]-w_k~[{2(D-2)\over W^2-D(U+1)+2U+1}]
\label{betak}
\end{eqnarray}
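As a consistency check, one can verify numerically that the exponents (\ref{betak}) of case (iii) satisfy the defining conditions $\sum_i\beta_i=A$ and $\sum_i\beta_i^2=B$. The sketch below uses a hypothetical choice of the $w_i$: seven directions with $w_i=-1/2$ and three with $w_i=0$.

```python
def case_iii(w):
    D = len(w) + 1                       # D-1 compact spatial directions
    W = sum(w)
    U = sum(x*x for x in w)
    A = 2*(D-1-W) / ((D-1) + (D-2)*U - W**2)
    B = 4*((D-1) + (D-2)**2*U - 2*W - (D-3)*W**2) \
        / ((D-1) + (D-2)*U - W**2)**2
    den = W**2 - D*(U+1) + 2*U + 1
    beta = [2*(W-1)/den - wk*2*(D-2)/den for wk in w]
    return A, B, beta

# seven wrapped directions with w = -1/2 and three empty ones (hypothetical)
A, B, beta = case_iii([-0.5]*7 + [0.0]*3)
```

For this choice one finds $A=2$, with the wrapped directions static ($\beta_k=0$) and the empty directions expanding.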
\section{The equations in the general case}
\setcounter{equation}{0}
Let us write
\begin{equation}
\gamma_i \equiv \f{\dot{a}_i}{a_i}
\label{gdef}
\end{equation}
Thus
\begin{equation}
\f{\ddot{a}_i}{a_i}=\dot{\gamma}_i + \gamma_i^2
\end{equation}
Equation (\ref{gkk}) gives
\begin{equation}
\dot{\gamma}_k + \gamma_k (\sum_i \gamma_i) -\sum_i (\dot{\gamma}_i + \gamma_i^2) ={1\over 2} \left[ (\sum_i \gamma_i)^2 - \sum_i \gamma_i^2 \right] (1+ w_k)
\label{eightt}
\end{equation}
Let us define
\begin{eqnarray}
{\cal {P}}&\equiv& \sum_i \gamma_i\nonumber\\
{\cal {Q}}&\equiv& \sum_i \gamma_i^2\nonumber\\
{\cal {S}}&\equiv & \sum_i w_i \gamma_i
\label{tfour}
\end{eqnarray}
Then (\ref{eightt}) is
\begin{equation}
{d\over dt}{\gamma}_k + \gamma_k {\cal {P}}-{d\over dt}{{\cal {P}}}-{\cal {Q}}={1\over 2}({\cal {P}}^2-{\cal {Q}})(1+w_k)
\label{master}
\end{equation}
Summing (\ref{master}) over $k$ gives
\begin{equation}
-(D-2) {d\over dt}{\cal {P}}+{\cal {P}}^2 -(D-1){\cal {Q}} ={1\over 2}({\cal {P}}^2-{\cal {Q}})(D-1+W)
\label{tone}
\end{equation}
Multiplying (\ref{master}) by $\gamma_k$ and then summing over $k$ gives
\begin{equation}
{1\over 2} {d\over dt}{\cal {Q}}-{\cal {P}}{d\over dt}{\cal {P}} ={1\over 2}({\cal {P}}^2-{\cal {Q}})({\cal {P}}+{\cal {S}})
\label{ttwo}
\end{equation}
Multiplying (\ref{master}) by $w_k$ and then summing over $k$ gives
\begin{equation}
{d\over dt} {\cal {S}}+{\cal {P}}{\cal {S}}-W{d\over dt}{\cal {P}}-W{\cal {Q}}={1\over 2}({\cal {P}}^2-{\cal {Q}})(W+U)
\label{tthree}
\end{equation}
Interestingly, we find that even though there are $D-1$ variables $\gamma_i$, the three moments (\ref{tfour}) form a closed system of three first order equations. We can write (\ref{tone})-(\ref{tthree}) in a more convenient form by defining
\begin{equation}
{\tilde {\cal {Q}} }={\cal {Q}}-{\cal {P}}^2
\end{equation}
Then our three equations become
\begin{eqnarray}
\dot {\cal {P}}~+~{\cal {P}}^2&=&- K_1 {\tilde {\cal {Q}} } \label{tsix}\\
\dot {\tilde {\cal {Q}} }~+~{\cal {P}} {\tilde {\cal {Q}} }&=&-{\cal {S}} {\tilde {\cal {Q}} } \label{tseven}\\
\dot {\cal {S}}~+~{\cal {P}}{\cal {S}}&=&K_2 {\tilde {\cal {Q}} } \label{teight}
\end{eqnarray}
where
\begin{eqnarray}
K_1&=&{(D-1-W)\over 2(D-2)}\label{k1}\\
K_2&=&-{1\over 2}[{1-W\over D-2}~W+U]
\label{k2}
\end{eqnarray}
If ${\cal {P}}, {\cal {Q}}, {\cal {S}}$ are known then we get the $\gamma_i$ from (\ref{master})
\begin{equation}
\dot\gamma_k+\gamma_k {\cal {P}}=-{1\over 2} {\tilde {\cal {Q}} } [{1-W\over D-2}+w_k]
\label{qsix}
\end{equation}
The $a_i$ are then determined by (\ref{gdef}).
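The claim that the moments form a closed system can be tested numerically: if we integrate the $D-1$ equations (\ref{qsix}) for the $\gamma_i$ and, independently, the three equations (\ref{tsix})-(\ref{teight}) for ${\cal {P}}, {\tilde {\cal {Q}} }, {\cal {S}}$, the moments of the $\gamma_i$ should track the reduced system. A sketch (hypothetical $w_i$ and initial data, with a hand-rolled Runge-Kutta integrator) confirms this:

```python
def moments(gam, w):
    P = sum(gam)
    Qt = sum(g*g for g in gam) - P*P     # Qtilde = Q - P^2
    S = sum(wk*g for wk, g in zip(w, gam))
    return P, Qt, S

def rk4(state, rhs, dt):                 # one 4th-order Runge-Kutta step
    k1 = rhs(state)
    k2 = rhs([s + 0.5*dt*k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5*dt*k for s, k in zip(state, k2)])
    k4 = rhs([s + dt*k for s, k in zip(state, k3)])
    return [s + dt*(a + 2*b + 2*c + d)/6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

w = [-1.0/3, -1.0/3, 0.0]                # hypothetical w_i; D = 4
D = len(w) + 1
W, U = sum(w), sum(x*x for x in w)
K1 = (D-1-W) / (2*(D-2))                 # eq. (k1)
K2 = -0.5*((1-W)/(D-2)*W + U)            # eq. (k2)

def rhs_gamma(gam):                      # eq. (qsix) for the individual gamma_k
    P, Qt, _ = moments(gam, w)
    return [-g*P - 0.5*Qt*((1-W)/(D-2) + wk) for g, wk in zip(gam, w)]

def rhs_mom(s):                          # eqs. (tsix)-(teight) for P, Qtilde, S
    P, Qt, S = s
    return [-P*P - K1*Qt, -P*Qt - S*Qt, -P*S + K2*Qt]

gam = [0.5, 0.3, 0.1]                    # (sum gamma)^2 > sum gamma^2: Qtilde < 0
mom = list(moments(gam, w))
for _ in range(1000):
    gam = rk4(gam, rhs_gamma, 0.001)
    mom = rk4(mom, rhs_mom, 0.001)
err = max(abs(a - b) for a, b in zip(moments(gam, w), mom))
```

The two integrations agree to numerical accuracy, so no information beyond the three moments is needed to evolve ${\cal {P}}, {\tilde {\cal {Q}} }, {\cal {S}}$.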
\subsection{A more convenient form of the equations}
The left hand sides of (\ref{tsix})-(\ref{teight}) have a similar form, which suggests that we define an integrating factor. Consider eq.(\ref{teight}). We can write it as
\begin{equation}
{d\over dt} (e^{\int_{t_0}^t {\cal {P}} dt} {\cal {S}} )=K_2 e^{\int_{t_0}^t {\cal {P}} dt} {\tilde {\cal {Q}} }
\label{tnine}
\end{equation}
where $t_0$ is an arbitrary constant that we will take as the initial time where we specify data.
For any quantity $F$ we write
\begin{equation}
\hat F~\equiv~e^{\int_{t_0}^t {\cal {P}} dt}~F
\label{defhat}
\end{equation}
Then (\ref{tnine}) becomes
\begin{equation}
{d\over dt} \hat{\cal{S}}=K_2 \hat{\tilde{\cal{Q}}}
\end{equation}
Similarly, eq.(\ref{tsix}) becomes
\begin{equation}
{d\over dt} \hat{\cal{P}}=-K_1 \hat{\tilde{\cal{Q}}}
\end{equation}
Eq.(\ref{tseven}) becomes
\begin{equation}
{d\over dt} \hat{\tilde{\cal{Q}}} = -{\cal {S}} \hat{\tilde{\cal{Q}}}
\label{qone}
\end{equation}
Thus in this equation there appears the quantity ${\cal {S}}$ and not $\hat{\cal{S}}$. Our goal is to get a closed system of equations in the hatted variables.
To this end we note that for the number unity we can write the hatted symbol
\begin{equation}
\hat{I}~\equiv~ e^{\int_{t_0}^t {\cal {P}} dt}\cdot 1~=~e^{\int_{t_0}^t {\cal {P}} dt}
\end{equation}
Using $\hat I$ we can write (\ref{qone}) as
\begin{equation}
{d\over dt} \hat{\tilde{\cal{Q}}} = -{\hat{\cal{S}}\over\hat{I}} ~\hat{\tilde{\cal{Q}}}
\end{equation}
Note that
\begin{equation}
{d\over dt}\hat{I} = \hat{\cal{P}}
\end{equation}
so we finally do have a closed system of equations in the hatted variables. We collect these equations together for later use
\begin{eqnarray}
{d\over dt} \hat{\cal{P}}&=&-K_1 \hat{\tilde{\cal{Q}}} \label{qtwo}\\
{d\over dt} \hat{\tilde{\cal{Q}}} &=& -{\hat{\cal{S}}\over\hat{I}} ~\hat{\tilde{\cal{Q}}} \label{qthree}\\
{d\over dt} \hat{\cal{S}}&=&K_2 \hat{\tilde{\cal{Q}}} \label{qfour}\\
{d\over dt}\hat{I} &=& \hat{\cal{P}} \label{qfive}
\end{eqnarray}
Eq.(\ref{qsix}) for the $\gamma_i$ can also be written simply in hatted variables
\begin{equation}
{d\over dt} \hat \gamma_k=-\delta_k \hat {\tilde {\cal {Q}} }
\label{qseven}
\end{equation}
where
\begin{equation}
\delta_k={1\over 2} [{1-W\over D-2}+w_k]
\label{qeight}
\end{equation}
\subsection{Integrals of motion}
The hatted version of the basic equations allows us to note some simple integrals of the equations.
From (\ref{qtwo}) and (\ref{qfour}) we find immediately that
\begin{equation}
{d\over dt}~(K_2 \hat{\cal{P}} + K_1 \hat{\cal{S}})=0
\end{equation}
which gives
\begin{equation}
\hat{\cal{S}}=-{K_2\over K_1}~\hat{\cal{P}} +{\rm constant}
\end{equation}
where the constant is determined by initial conditions.
From (\ref{qseven}) and (\ref{qtwo}) we find
\begin{equation}
{d\over dt} \hat \gamma_k=-\delta_k \hat {\tilde {\cal {Q}} } = {\delta_k\over K_1} {d\over dt} \hat{\cal{P}}
\label{mgk}
\end{equation}
which gives
\begin{equation}
\hat \gamma_k={\delta_k\over K_1} \hat{\cal{P}} + F_k
\label{ff}
\end{equation}
where $F_k$ are constants determined by initial conditions.
Note that
\begin{equation}
\sum_k {\delta_k\over K_1}=1
\end{equation}
Since $\sum_k\hat \gamma_k=\hat{\cal{P}}$, we see that we must have
\begin{equation}
\sum_k F_k=0
\end{equation}
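The identity $\sum_k \delta_k/K_1=1$ used here follows from the definitions (\ref{k1}) and (\ref{qeight}); a quick numerical check with randomly chosen (hypothetical) $w_i$ confirms it:

```python
import random

random.seed(0)
w = [random.uniform(-1.0, 1.0) for _ in range(9)]   # hypothetical w_i, D = 10
D = len(w) + 1
W = sum(w)
K1 = (D-1-W) / (2*(D-2))                            # eq. (k1)
delta = [0.5*((1-W)/(D-2) + wk) for wk in w]        # eq. (qeight)
ratio = sum(delta) / K1
```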
\subsection{Physical ranges for parameters}
Note that
\begin{equation}
{\cal {P}}=\sum_i \gamma_i={d\over dt} ~\sum_i \log a_i={d\over dt} ~\log V={\dot V\over V}
\end{equation}
where $V$ is the volume of the spatial torus.
We have to choose a direction of time to call positive, and we can use this freedom to require that at the time $t_0$ where we give initial conditions
\begin{equation}
{\cal {P}}(t=t_0)\ge 0
\label{ppos}
\end{equation}
The integrating factor that converts un-hatted quantities to hatted ones is
\begin{equation}
\hat{I}=e^{\int_{t_0}^t {\cal {P}} dt}={V\over V_0}
\label{ihfa}
\end{equation}
Thus throughout the physical range of evolution we will have
\begin{equation}
\hat{I}> 0
\label{none}
\end{equation}
From (\ref{defhat}) it follows that hatted and un-hatted variables have the same sign.
From (\ref{gtt}) we find that
\begin{equation}
{1\over 2} (\sum_i \gamma_i)^2-{1\over 2} \sum_i \gamma_i^2={1\over 2} ({\cal {P}}^2- {\cal {Q}})=8\pi G \rho
\end{equation}
so the energy density is
\begin{equation}
\rho=-{1\over 16\pi G} {\tilde {\cal {Q}} }
\label{rhoq}
\end{equation}
Our matter is made up of the quanta in string theory, and we have seen that the energy of different kinds of objects will be simply added
to obtain the total energy. Each of these quanta has a positive energy, so we will have
\begin{equation}
\rho > 0, ~~~~ {\tilde {\cal {Q}} } < 0, ~~~\hat{\tilde{\cal{Q}}}< 0
\label{oone}
\end{equation}
The total energy is $E=\rho V$, so $\hat{\tilde{\cal{Q}}}$ is just the total energy up to a (negative) constant
\begin{equation}
\hat{\tilde{\cal{Q}}} = {V\over V_0} {\tilde {\cal {Q}} } = -{16\pi G\over V_0} ~E
\end{equation}
We have found that $p_k=w_k \rho$. We will assume the dominant energy condition, which states that for each direction the pressure satisfies $p_k\le \rho$. (As in the case of energy density, the pressure is obtained by adding the pressure contributed by the different string theory objects in our state, and these satisfy the dominant energy condition.) Thus we have
\begin{equation}
w_i \le 1
\end{equation}
for all $i$. This gives $W \le (D-1)$, or equivalently, $D-1-W \ge 0$. This implies that
\begin{equation}
K_1={(D-1-W)\over 2(D-2)}\ge 0
\label{otwo}
\end{equation}
From (\ref{oone}) and (\ref{otwo}) we find that the RHS of (\ref{qtwo}) is non-negative, so $\hat{\cal{P}}$ is a non-decreasing function of time
\begin{equation}
{d\over dt} \hat{\cal{P}} \ge 0
\label{php}
\end{equation}
From (\ref{ppos}) and (\ref{php}) we see that for all $t\ge t_0$ we will have
\begin{equation}
\hat{\cal{P}}\ge 0
\label{ppos2}
\end{equation}
From (\ref{qfive}) we see that
\begin{equation}
{d\over dt} \hat{I} \ge 0
\end{equation}
so that $\hat{I} ={V\over V_0}$ is a nondecreasing function of time. Thus the Universe will not `recollapse'
to $V\rightarrow 0$.
The variables ${\cal {S}}, \hat{\cal{S}}$ can have either sign, and this sign can change during the evolution.
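The monotonicity statements above can be illustrated by integrating the hatted system (\ref{qtwo})-(\ref{qfive}) directly. In the sketch below (hypothetical constants $K_1, K_2$ and initial data with $\hat{\tilde{\cal{Q}}}<0$, $\hat{I}=1$) both $\hat{\cal{P}}$ and $\hat{I}$ come out non-decreasing, and $\hat{\tilde{\cal{Q}}}$ stays negative:

```python
K1, K2 = 0.9, 0.2                        # hypothetical constants

def rhs(s):
    Ph, Qh, Sh, Ih = s                   # hatted P, Qtilde, S, I
    return [-K1*Qh, -(Sh/Ih)*Qh, K2*Qh, Ph]

def rk4(s, dt):                          # one 4th-order Runge-Kutta step
    k1 = rhs(s)
    k2 = rhs([x + 0.5*dt*y for x, y in zip(s, k1)])
    k3 = rhs([x + 0.5*dt*y for x, y in zip(s, k2)])
    k4 = rhs([x + dt*y for x, y in zip(s, k3)])
    return [x + dt*(a + 2*b + 2*c + d)/6
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

s = [0.4, -0.3, 0.1, 1.0]                # P >= 0, Qtilde < 0, I = 1 at t0
hist = [s]
for _ in range(1000):
    s = rk4(s, 0.001)
    hist.append(s)

P_nondecreasing = all(b[0] >= a[0] for a, b in zip(hist, hist[1:]))
I_nondecreasing = all(b[3] >= a[3] for a, b in zip(hist, hist[1:]))
Q_negative = all(x[1] < 0 for x in hist)
```

This matches the general argument: $\hat{\tilde{\cal{Q}}}$ cannot cross zero, which keeps $\hat{\cal{P}}$ growing, which in turn keeps $\hat{I}={V/V_0}$ growing.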
\section{Solving the equations}
\setcounter{equation}{0}
We observe that in the system (\ref{qtwo})-(\ref{qfive}), three of the equations have $\hat{\tilde{\cal{Q}}}$ on the right hand side. We can divide by $\hat{\tilde{\cal{Q}}}$ and absorb it in the definition of time, by writing
\begin{equation}
{1\over (-\hat{\tilde{\cal{Q}}})} ~{d\over dt} \equiv {d\over d\tau}
\end{equation}
We have put in the negative sign because $\hat{\tilde{\cal{Q}}}$ is negative; with this sign, the variable $\tau$ increases when $t$ increases. Thus the $t, \tau$ variables are related by
\begin{equation}
\tau=\int_{t_0}^t dt' ~(-\hat{\tilde{\cal{Q}}}), ~~~~(t-t_0)=\int_0^\tau ~{d\tau'\over (-\hat{\tilde{\cal{Q}}})}
\label{time}
\end{equation}
where we have chosen the lower limit of $t$ to be the time $t_0$ where we specify initial conditions. The system (\ref{qtwo})-(\ref{qfive}) then gives
\begin{eqnarray}
{d\over d\tau} \hat{\cal{P}} &=& K_1\label{mone}\\
{d\over d\tau}\hat{\tilde{\cal{Q}}} &=& {\hat{\cal{S}}\over \hat{I}} \label{mtwo}\\
{d\over d\tau}\hat{\cal{S}} &=& -K_2\label{mthree}\\
{d\over d\tau}\hat{I} &=& -{\hat{\cal{P}}\over \hat{\tilde{\cal{Q}}}}\label{mfour}
\end{eqnarray}
We can immediately solve (\ref{mone}) and (\ref{mthree}):
\begin{eqnarray}
\hat{\cal{P}}&=&K_1\tau + A_1\label{mfive}\\
\hat{\cal{S}}&=&-K_2\tau + A_2\label{msix}
\end{eqnarray}
where $A_1, A_2$ are constants.
Now (\ref{mtwo}) and (\ref{mfour}) become
\begin{eqnarray}
{d\over d\tau}\hat{\tilde{\cal{Q}}}&=& {\hat{\cal{S}}\over \hat{I}}={(-K_2\tau + A_2)\over \hat{I}}\label{mseven}\\
{d\over d\tau}\hat{I}&=&-{\hat{\cal{P}}\over \hat{\tilde{\cal{Q}}}}= -{(K_1\tau + A_1)\over \hat{\tilde{\cal{Q}}}}\label{meight}
\end{eqnarray}
From these equations we deduce that
\begin{equation}
({d\over d\tau} \hat{\tilde{\cal{Q}}})\hat{I}+\hat{\tilde{\cal{Q}}}({d\over d\tau} \hat{I})= {d\over d\tau}(\hat{\tilde{\cal{Q}}}\hat{I})=-(K_1+K_2)\tau + (A_2-A_1)
\end{equation}
which gives
\begin{equation}
\hat{\tilde{\cal{Q}}}\hat{I}= -(K_1+K_2){\tau^2\over 2}+(A_2-A_1)\tau + A_3
\label{para}
\end{equation}
where $A_3$ is another constant. Solving this equation for $\hat{I}$ and substituting in (\ref{mseven}) gives
\begin{equation}
{d\over d\tau} \hat{\tilde{\cal{Q}}} = { (-K_2 \tau + A_2)\over -(K_1+K_2){\tau^2\over 2}+(A_2-A_1)\tau + A_3} ~ \hat{\tilde{\cal{Q}}}
\end{equation}
or
\begin{equation}
{d\over d\tau} [\log (-\hat{\tilde{\cal{Q}}})]={ (-K_2 \tau + A_2)\over -(K_1+K_2){\tau^2\over 2}+(A_2-A_1)\tau + A_3}
\label{mlog}
\end{equation}
where we have written $(-\hat{\tilde{\cal{Q}}}) $ in the argument of the $\log$ since $\hat{\tilde{\cal{Q}}}$ is negative. The quadratic in the denominator on the right hand side can be written as
\begin{equation}
-(K_1+K_2){\tau^2\over 2}+(A_2-A_1)\tau + A_3= -{(K_1+K_2)\over 2} ~(\tau-r_1) (\tau-r_2)
\label{roots}
\end{equation}
where $r_1, r_2$ are the two roots of the quadratic. We now need to know if these roots are real or complex, and if they are real, then where $\tau$ lies on the real axis with respect to these roots. We will study these roots below, but for now we write the formal solution to (\ref{mlog})
\begin{equation}
(-\hat{\tilde{\cal{Q}}})= A_4 (\tau-r_1)^{-{2 (-r_1 K_2 + A_2)\over ( K_1+K_2)(r_1-r_2)}}(\tau-r_2)^{{2(-r_2 K_2 + A_2)\over (K_1+K_2) ( r_1-r_2)}}
\label{mqth}
\end{equation}
where $A_4$ is a constant.
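For completeness, we note the intermediate step leading from (\ref{mlog}) to (\ref{mqth}): assuming the roots are distinct ($r_1\ne r_2$), the right hand side of (\ref{mlog}) has the partial-fraction decomposition

```latex
\[
{(-K_2 \tau + A_2)\over -{(K_1+K_2)\over 2}(\tau-r_1)(\tau-r_2)}
= -{2 (-r_1 K_2 + A_2)\over (K_1+K_2)(r_1-r_2)}\,{1\over \tau-r_1}
+ {2 (-r_2 K_2 + A_2)\over (K_1+K_2)(r_1-r_2)}\,{1\over \tau-r_2}
\]
```

Integrating each term and exponentiating then reproduces the exponents appearing in (\ref{mqth}).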
Thus all variables $\hat{\cal{P}}, \hat{\tilde{\cal{Q}}}, \hat{\cal{S}}, \hat{I}$ have been expressed algebraically in terms of the time parameter $\tau$. From (\ref{mfive})
we see that up to a suitable choice of origin and a constant scaling, $\tau$ is just the variable $\hat{\cal{P}}$. Thus if we use $\hat{\cal{P}}$ to measure time, then all other variables are given by rational functions of this time. To get back to the physical problem, however, we need to relate $\tau$ to $t$. This is done through (\ref{time})
\begin{equation}
(t-t_0)={1\over A_4}\int_0^\tau (\tau'-r_1)^{{2 (-r_1 K_2 + A_2)\over ( K_1+K_2)(r_1-r_2)}}(\tau'-r_2)^{-{2(-r_2 K_2 + A_2)\over (K_1+K_2) ( r_1-r_2)}} d\tau'
\label{mintegral}
\end{equation}
The integral on the RHS is given by an incomplete Beta function. This function is defined by \cite{grad}
\begin{equation}
B_x(p,q)=\int_0^x s^{p-1} (1-s)^{q-1} ds
\end{equation}
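As an aside, this function is readily available numerically; for example, in Python's SciPy the (unregularized) incomplete Beta function can be recovered from the regularized form, which provides a direct check of the definition above (the parameter values below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, betainc

def B_x(x, p, q):
    """Incomplete Beta function B_x(p, q) = int_0^x s^(p-1) (1-s)^(q-1) ds."""
    # scipy.special.betainc is the *regularized* incomplete Beta I_x(p, q),
    # so multiply by the complete Beta function B(p, q).
    return betainc(p, q, x) * beta(p, q)

x, p, q = 0.3, 1.7, 2.4     # arbitrary test values
direct, _ = quad(lambda s: s**(p - 1) * (1 - s)**(q - 1), 0.0, x)
assert np.isclose(B_x(x, p, q), direct)
```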
The precise expression for (\ref{mintegral}) in terms of the incomplete Beta function will depend on the location of $\tau$ with respect to the roots $r_1, r_2$.
Since the relation between $t$ and $\tau$ is transcendental, we will analyze the solutions qualitatively to see the dynamical behavior that results for different choices of parameters.
\subsection{Different dynamical behaviors}
Consider the integral in (\ref{mintegral}). For what follows we recall eq.(\ref{php}) which says that $\hat{\cal{P}}$ cannot decrease with time. There are three possible cases:
\bigskip
(a) The integral (\ref{mintegral}) diverges at a finite value of $\tau$. Then we reach $t=\infty$ with finite $\tau$. Then from (\ref{mfive}) we see that $\hat{\cal{P}}$ asymptotes to a finite constant.
\bigskip
(b) The integral (\ref{mintegral}) diverges as $\tau\rightarrow\infty$. In this case $\hat{\cal{P}}\rightarrow\infty$ as $t\rightarrow\infty$.
\bigskip
(c) The integral (\ref{mintegral}) converges. In this case we have a divergence $\hat{\cal{P}}\rightarrow\infty$ at a finite time $t$.
\bigskip
\subsection{Dependence on parameters}
We now wish to see which of the above behaviors results for which choices of parameters and initial conditions. Recall that $\hat{I}$ is positive (eq.(\ref{none})) and $\hat{\tilde{\cal{Q}}}$ is negative (eq.(\ref{oone})). Thus the left hand side of (\ref{para}) is negative. Consider the function
\begin{equation}
f(\tau)\equiv (-\hat{I}\hat{\tilde{\cal{Q}}}) = {(K_1+K_2)\over 2}{\tau^2}-(A_2-A_1)\tau - A_3
\label{mf}
\end{equation}
Physically allowed values of the parameters then require
\begin{equation}
f\ge 0
\label{fpos}
\end{equation}
The function $f(\tau)$ describes a parabola. We have two cases:\footnote{Here and in other computations below we consider only generic values of the parameters for simplicity. For example we do not explicitly look at the border $K_1+K_2=0$ between the two cases below; such special cases can be easily worked out explicitly.}
\begin{figure}[htbp]
\centering
\includegraphics[width=2in]{Downward_Parabola.eps}\hspace{1truecm}
\includegraphics[width=2.3in]{Upward_Parabola.eps}
\vspace{.2truecm}
\hspace{4.5truecm} (a) \hspace{7truecm} (b)
\vspace{.5truecm}
\caption{(a) Downward facing parabola for $K_1+K_2<0$ \quad (b) Upward facing parabola for $K_1+K_2>0$. In each case a physical choice of parameters leads to motion along the bold line segment.}
\label{parabolas}
\end{figure}
\subsubsection{$K_1+K_2<0$}
In this case the parabola is concave downwards. From (\ref{fpos}) we see that only the part of $f$ above the $\tau$-axis describes a physical evolution. If the roots $r_1, r_2$ in (\ref{roots}) are real, then the parabola will intersect the $\tau$-axis, and a part of the parabola will lie above this axis (Fig.\ref{parabolas}(a)). If the roots are complex, the parabola will be entirely below the $\tau$-axis.
In this case it is easy to show that $r_1, r_2$ will be real. The discriminant of the polynomial in (\ref{roots}) is
\begin{equation}
\Delta=(A_2-A_1)^2+2 A_3(K_1+K_2)
\end{equation}
Note that $A_3$ is the value of the negative quantity $\hat{\tilde{\cal{Q}}}\hat{I}$ at the initial time $\tau=0$, so
\begin{equation}
A_3<0
\end{equation}
Since $K_1+K_2<0$ in the present case, we find that all terms in $\Delta$ are positive and thus $\Delta\ge 0$. Thus the roots $r_1, r_2$ are real, and the parabola will look as in Fig.\ref{parabolas}(a).
The evolution will take place on the solid part of the parabola in Fig.\ref{parabolas}(a).
Since the evolution ends at a finite value of $\tau$, we find from (\ref{mfive}) that $\hat{\cal{P}}$ asymptotes to a constant, and thus we will be in case (a).
The initial data is given at $\tau=0$, which must be a point between the two roots of the parabola. Thus $r_1<0$ and $r_2>0$. Since $r_1<\tau<r_2$ during the physical evolution the relation (\ref{mintegral}) should be written as
\begin{equation}
(t-t_0)={1\over |A_4|}\int_0^\tau (\tau'-r_1)^{\alpha_1}(r_2-\tau')^{\alpha_2} d\tau'
\label{timea}
\end{equation}
where we have defined
\begin{equation}
\alpha_1={{2 (-r_1 K_2 + A_2)\over ( K_1+K_2)(r_1-r_2)}}, ~~~~\alpha_2={-{2(-r_2 K_2 + A_2)\over (K_1+K_2) ( r_1-r_2)}}
\end{equation}
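Adding the two exponents gives the identity

```latex
\[
\alpha_1+\alpha_2 = -{2K_2\over K_1+K_2}\,, \qquad
\alpha_1+\alpha_2+1 = {K_1-K_2\over K_1+K_2}
\]
```

so the combination $\alpha_1+\alpha_2+1$ is precisely the exponent of the prefactor $(r_2-r_1)$ in the expression below, and (for $K_1+K_2>0$) coincides with the quantity $\mu$ controlling the large-$\tau$ convergence of (\ref{mintegral}).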
We find
\begin{eqnarray}
(t-t_0)&=&{1\over |A_4|}(r_2-r_1)^{{K_1-K_2\over K_1+K_2}}~\left ( B_{\tau-r_1\over r_2-r_1}[\alpha_1+1, \alpha_2+1]~-~B_{-{r_1\over r_2-r_1}}[\alpha_1+1, \alpha_2+1] \right )\nonumber
\end{eqnarray}
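The change of variables $s=(\tau'-r_1)/(r_2-r_1)$ underlying this expression (with the prefactor exponent written as $\alpha_1+\alpha_2+1=(K_1-K_2)/(K_1+K_2)$) can be verified numerically. The sketch below uses arbitrary sample exponents with $\alpha_1,\alpha_2>-1$, since SciPy's regularized incomplete Beta requires positive parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, betainc

def B_x(x, p, q):
    """Unregularized incomplete Beta function B_x(p, q)."""
    return betainc(p, q, x) * beta(p, q)

# Arbitrary sample values with r1 < 0 < tau < r2 and alpha1, alpha2 > -1.
r1, r2, tau = -1.0, 2.0, 1.0
a1, a2 = -0.5, -0.25

direct, _ = quad(lambda u: (u - r1)**a1 * (r2 - u)**a2, 0.0, tau)

s0 = -r1 / (r2 - r1)            # lower limit  (tau' = 0)
s1 = (tau - r1) / (r2 - r1)     # upper limit  (tau' = tau)
via_beta = (r2 - r1)**(a1 + a2 + 1) * (B_x(s1, a1 + 1, a2 + 1)
                                       - B_x(s0, a1 + 1, a2 + 1))
assert np.isclose(direct, via_beta)
```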
In Fig.\ref{casea} we plot graphs for an example that illustrates case (a). We let $w_i=\{ .9, -.9, -.9, -.9, -.9, -.1, -.1, -.1, -.1, -.1\}$. We have set $t_0=2$ and have taken $\gamma_i(t=t_0)=1$, $a_i(t=t_0)=1$ for all $i$. We plot $\hat{\cal{P}}, -\hat{\tilde{\cal{Q}}}$, and one $a_i$ from each set having the same $w_i$. Note that $-\hat{\tilde{\cal{Q}}}$ is proportional to the total energy in the Universe.
\subsubsection{$K_1+K_2>0$}
In this case the parabola is concave upwards. With a little more effort we can again show that for physically allowed initial conditions, the discriminant $\Delta$ is positive, and thus $r_1, r_2$ are real; this computation is done in Appendix (\ref{aa1}). Thus the parabola intersects the $\tau$-axis, as shown in Fig.\ref{parabolas}(b). From (\ref{fpos}) we see that physical motion can take place only on the two segments of the parabola above the $\tau$-axis. Thus there appear to be two possible branches, a `left' branch and a `right' branch. With some effort we can show that for a physical choice of parameters, we cannot be on the left branch; this is done in Appendix (\ref{aa2}).
Thus motion takes place only on the right branch, as indicated by the solid part of the parabola.
We see that the motion will extend to $\tau\rightarrow\infty$; thus from (\ref{mfive}) we see that we will be in case (b) or in case (c). To distinguish between these cases consider the integral (\ref{mintegral}). Since we are on the right branch above the $\tau$-axis we see that $(\tau-r_1)>0$ and $(\tau-r_2)>0$. Thus from (\ref{mqth}) we find $A_4$ to be a real positive constant. In (\ref{mintegral}) we find that the large $\tau$ behavior gives
\begin{equation}
t-t_0\sim \int^\tau d\tau' (\tau')^{-{2K_2\over K_1+K_2}}\sim {(\tau)^{-{2K_2\over K_1+K_2}+1}}
\label{timeb}
\end{equation}
Thus the convergence of this integral is determined by the sign of $\mu\equiv -{2K_2\over K_1+K_2}+1$. Note that we have $K_1+K_2>0$ in the present part of the analysis. So we can equally well ask for the sign of $(K_1+K_2)\mu=K_1-K_2$. We thus have the two cases
\bigskip
(i) $K_1-K_2>0$:\quad In this case the integral (\ref{mintegral}) diverges at $\tau=\infty$, and we have case (b).
\bigskip
(ii) $K_1-K_2<0$:\quad In this case the integral (\ref{mintegral}) converges and we have case (c). We note however in Appendix (\ref{aa3}) that we can have this case only if at least one of the $w_i$ is less than $-1$. It is conventionally assumed that the $w_i$ lie in the range $-1\le w_i\le 1$. The upper limit comes from the dominant energy condition, but there is no strong reason to require the lower limit. For the quanta that we get from string theory, though, we do have $-1\le w_i\le 1$, as can be seen from the definition (\ref{wnc}).
\bigskip
In either of these cases (b),(c)
we are on the `right branch' of the parabola in Fig.\ref{parabolas}(b). Thus the point $\tau=0$ where the initial data is specified lies to the right of the two roots of the
parabola. So $r_1<r_2<0$, and we find from (\ref{mintegral})
\begin{equation}
(t-t_0)={(r_2-r_1)^{\alpha_1+\alpha_2+1}\over |A_4|}
\left (
B_{\tau-r_2\over \tau-r_1}[\alpha_2+1, -\alpha_1-\alpha_2-1]-
B_{r_2\over r_1}[\alpha_2+1, -\alpha_1-\alpha_2-1] \right )
\end{equation}
In Fig.\ref{caseb} we plot graphs for an example that illustrates case (b). We have taken $w_i=-.2$ for all $i$. We have set $t_0=2$ and have taken $\gamma_i(t=t_0)=1$, $a_i(t=t_0)=1$ for all $i$. We plot $\hat{\cal{P}}, -\hat{\tilde{\cal{Q}}}$, and $a_1$.
\begin{figure}[htbp]
\centering
\includegraphics[width=2.5in]{CaseA_P.eps} \hspace{1truecm}
\includegraphics[width=2.5in]{CaseA_Q.eps}\\
\vspace{.5truecm}
\includegraphics[width=2in]{CaseA_a1.eps}\hspace{.1truecm}
\includegraphics[width=2in]{CaseA_a2.eps} \hspace{.1truecm}
\includegraphics[width=2in]{CaseA_a6.eps}
\vspace{.5truecm}
\caption{Plots of $\hat{\cal{P}}, -\hat{\tilde{\cal{Q}}}$ and a selection of $a_i$ for $w_i=\{ .9, -.9, -.9, -.9, -.9, -.1, -.1, -.1, -.1, -.1\}$, a set that gives $K_1+K_2<0$ and illustrates case (a) behavior. We have taken $\gamma_i(2)=a_i(2)=1$ for all $i$. We see that $\hat{\cal{P}}$ asymptotes to a constant.}
\label{casea}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=2in]{CaseB_P.eps} \hspace{.1truecm}
\includegraphics[width=2in]{CaseB_Q.eps} \hspace{.1truecm}\includegraphics[width=2in]{CaseB_a1.eps}
\vspace{.5truecm}
\caption{Plots of $\hat{\cal{P}}, \hat{\tilde{\cal{Q}}}, a_1$ for the choice $w_i=-.2$ for all $i$. This gives $K_1+K_2>0$, $K_1>K_2$, and thus case (b) behavior. $\hat{\cal{P}}$ grows without bound. }
\label{caseb}
\end{figure}
\subsection{Solving for $\gamma_i, a_i$}
From (\ref{mgk}) we find
\begin{equation}
(-{1\over \hat{\tilde{\cal{Q}}}}) ~ {d\over dt} \hat\gamma_k ={d\over d\tau}\hat\gamma_k= \delta_k
\end{equation}
which gives
\begin{equation}
\hat\gamma_k=\delta_k\tau+f_k
\label{forapp2}
\end{equation}
where $f_k$ are constants. Since
\begin{equation}
\sum_k\hat\gamma_k=\hat{I}\sum_k\gamma_k=\hat{I} P=\hat{\cal{P}}
\end{equation}
we have one relation between the $f_k$
\begin{equation}
\sum_k f_k = \hat{\cal{P}}(\tau=0)=A_1
\end{equation}
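Comparing the coefficients of $\tau$ in the same relation, using (\ref{mfive}), gives the companion constraint

```latex
\[
\sum_k \delta_k = K_1
\]
```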
Now note that
\begin{equation}
{d\over dt} (\log a_k) ={{\dot a}_k\over a_k}=\gamma_k={\hat\gamma_k\over \hat{I}}
={\delta_k\tau+f_k\over \hat{I}}
\end{equation}
Thus
\begin{equation}
{d\over d\tau} (\log a_k)=(-{1\over \hat{\tilde{\cal{Q}}}}) ~ {d\over dt} (\log a_k)=-{(\delta_k\tau+f_k)\over \hat{\tilde{\cal{Q}}}\hat{I}}= -[ {(\delta_k\tau+ f_k)\over -{(K_1+K_2)\over 2} ~(\tau-r_1) (\tau-r_2)}]
\end{equation}
where we have used (\ref{para}),(\ref{roots}). This gives
\begin{equation}
a_k=C_k ~(\tau-r_1)^{{2(\delta_k r_1 + f_k)\over (K_1+K_2)(r_1-r_2)}}(\tau-r_2)^{-{2(\delta_k r_2+f_k)\over (K_1+K_2)(r_1-r_2)}}
\label{aevolve}
\end{equation}
where $C_k$ are constants.
\subsection{Evolution as a function of $t$}
We have seen above that all variables can be expressed as algebraic functions in terms of the time parameter $\tau$. But in the metric (\ref{metric}) the natural parameter is $t$, and the relation between $\tau$ and $t$ was given through the incomplete Beta function, so this relation is not easy to picture. We can, however, find the evolution as a function of $t$ at late times, where the relation between $\tau$ and $t$ simplifies.
\subsubsection{Case (a) at large $t$}
From Fig.\ref{parabolas}(a) we see that the evolution takes us towards the root $\tau=r_2$. In Appendix (\ref{aa4}) we show that
\begin{equation}
\alpha_2+1<0
\label{alpharel}
\end{equation}
Then we see from (\ref{timea}) that $t$ diverges as $\tau\rightarrow r_2$
\begin{equation}
t\sim (r_2-\tau)^{\alpha_2+1}
\end{equation}
and from (\ref{aevolve}) we get
\begin{equation}
a_k\sim t^{{2(\delta_k r_2+f_k)\over 2(-r_2 K_2 +A_2)-(K_1+K_2)(r_1-r_2)}}
\label{powert}
\end{equation}
Thus the late time behavior of the $a_k$ depends on the initial conditions that specify parameters like $f_k$ and $A_1,A_2,A_3$ which give the roots $r_1, r_2$. In Appendix (\ref{aa4}) we observe that the power of $t$ in (\ref{powert}) can be written in a simpler form
\begin{equation}
a_k\sim t^{{\gamma_k(\tau_2)\over {\cal {P}}(\tau_2)}}
\label{forapp}
\end{equation}
\subsubsection{Case (b) at large $t$}
In this case we have $t\rightarrow\infty$ as $\tau\rightarrow\infty$. Let us examine this large $t$ behavior. From (\ref{timeb}) we get
\begin{equation}
\tau \sim t^{{K_1+K_2\over K_1-K_2}}
\end{equation}
From (\ref{aevolve}) we find
\begin{equation}
a_k \sim t^{{2\delta_k\over K_1-K_2}}\sim t^{\beta_k}
\end{equation}
where we have used the expression (\ref{qeight}) for $\delta_k$ and the expressions (\ref{k1}),(\ref{k2}) for $K_1, K_2$ to recognize that the power of $t$ is the same as the power $\beta_k$ appearing in (\ref{betak}) for the
Kasner type solution. Thus we see that evolution for case (b) asymptotes at late times to the Kasner type solution for the given $w_k$.\footnote{In \cite{greene2} it was also observed (from numerical computations) that there was a kind of `attractor mechanism', so that solutions with generic initial data became similar to each other at late times.}
\subsubsection{Case (c) for $t\rightarrow t_f$}
In this case the $\tau$ integral converges
\begin{equation}
{1\over |A_4|}\int_0^\infty (\tau-r_1)^{{2 (-r_1 K_2 + A_2)\over ( K_1+K_2)(r_1-r_2)}}(\tau-r_2)^{-{2(-r_2 K_2 + A_2)\over (K_1+K_2) ( r_1-r_2)}}\, d\tau\equiv t_f-t_0
\end{equation}
To find the behavior as $t\rightarrow t_f$ we note from (\ref{mintegral}) that in this limit $\tau$ is large and we have
\begin{equation}
\tau\sim (t_f-t)^{K_1+K_2\over K_1-K_2}
\end{equation}
From (\ref{aevolve}) we get
\begin{equation}
a_k\sim (t_f-t)^{{2\delta_k\over K_1-K_2}} \sim (t_f-t)^{\beta_k}
\end{equation}
where we have noted that the powers of $(t_f-t)$ appearing here are the same as those that appear in the Kasner type power law solution $a_k\sim t^{\beta_k}$.
\section{Discussion}
\setcounter{equation}{0}
We have assumed an equation of state suggested by black holes; the energy goes to creating `fractional branes' which have a high entropy and can thus dominate over other kinds of matter. If we assume that the state of the Universe is always the maximal entropy state for the given energy $E$, then we get an equation of state $p_i=w_i\rho$, for which the dynamics can be solved analytically.
We discussed some of the ideas behind the fractional brane state in section (\ref{branesec}). If we have a few branes in a given volume of space, then these branes will tend to annihilate to massless quanta. But if we increase the energy $E$ in the given volume to very large values, then it becomes entropically favorable to produce a large number of suitable sets of branes and anti-branes. Branes in such a set are mutually BPS, and `fractionate' each other, producing an entropy that grows more rapidly with $E$ than the entropy of radiation or of a Hagedorn type string or brane gas. As we discussed in section (\ref{branesec}), black hole results suggest that at these densities the fractional brane quanta behave as essentially free objects. Thus the energy $E$ and pressures $p_i$ for the state are given by just adding the contributions from the branes and antibranes. The fractional branes do not easily find each other and annihilate; the rate of this annihilation
can be computed for black holes where it reproduces exactly the rate of Hawking radiation.
If we follow the state of the present Universe backwards in time, the energy density keeps increasing. It has been postulated that for sufficiently early times we reach a Hagedorn phase of strings. This phase has no pressure, but does have energy; thus as we go further back in time $E$ does not change ($dE=-PdV$) but ${\dot a}\ne 0$, so the density grows still further. This suggests that at sufficiently early times the fractional brane state should be present.
We have not analyzed the astrophysical implications of the evolution we find; there are many issues to be addressed and we hope to carry out a detailed study of these elsewhere. Here we outline some of these issues and raise some relevant questions.
We have not taken any specific choice of the $w_i$; rather we solved the problem for an arbitrary set of $w_i$. Which choice of $w_i$ gives the largest possible entropy for M theory on $T^{10}$? We can find large sets of mutually BPS branes with 10 compact directions, but we need to prove that choosing branes and antibranes of these varieties will indeed give the entropy (\ref{entropyass}). We thus need to generalize the brane constructions that have worked so well for black holes and understand the entropy of high density states in string theory.
For generic choices of $w_i$ we get a power law expansion, and for many choices the power is too low to give us some kind of `inflation'. Note that in inflation we find the inflaton in a low entropy state, while in the string/brane gas approach we seek the {\it maximal} entropy state for the given energy. In this sense we are closer to the latter approach; we look for a maximal entropy state but observe that the Hagedorn gas is not the highest entropy state in string theory.
Inflation gives a reason for the sky to look homogeneous on large length scales. But there is a different source of long distance correlations that can arise with the fractional brane gas. In black holes, semiclassical analysis suggests that quantum gravity effects are confined to the Planck or string length, but because of fractionation we find that the brane bound state has a size that grows with the number of quanta in the state, and thus we get nonlocal quantum effects all across the interior of the horizon \cite{review1}. So in our cosmological fractional brane state we can also expect quantum correlations across macroscopic distances.
What is the fate of the fractional branes as the Universe expands?
Consider the 3-charge entropy (\ref{el}) and the 4-charge entropy (\ref{eight}). For a given energy $E$, is it more advantageous to create three kinds of charges or four? We see that if all the $n_i$ exceed unity, then the 4-charge entropy is higher, but if one of them drops below unity, then the 3-charge entropy will be higher. Such a transition was studied in \cite{emission,cgm} where it gave a microscopic description of the black hole -- black string transition. In our present problem the mass of each type of brane changes as the $a_i$ evolve, and it is possible that some $n_i$ drops below unity. In that case we would have to continue the evolution with a different set of branes and thus a different set of $w_i$.
Clearly it is crucial to know how many branes form the fractional brane state; this determines the $n_i$. Thus if the Universe was infinite and all branes were linked up to form the fractional brane state, then each $n_i$ would be infinite and the entropy per unit volume would diverge. (Note that doubling the volume more than doubles the entropy if three or more types of charges are involved.) It is clearly entropically advantageous for more and more of the matter to be linked up in the fractional brane state, but when the Universe starts the islands of matter so linked are presumably small; they might then grow rapidly as different islands come into causal contact.
Note that the {\it density} of matter in a fractional brane state need not be high as long as enough {\it total} matter is linked up in the state. For example we can make a black hole with arbitrarily low density matter as long as there is enough total energy $E$ in the ball of matter. Any fractional brane matter left over today could show up as a dark component with its own dynamics.
All these are interesting questions, and we regard the present work as just a first pass on the problem with these ideas; we hope to return to the above issues elsewhere.
\section*{Acknowledgements}
We thank S. Das, R. Furnstahl, S. Giusto, D. Kabat and J. Michelson for many helpful comments.
This work was supported in part by DOE grant DE-FG02-91ER-40690.
\section{Introduction}
Matrix/tensor completion is inherently ill-posed and thus demands additional constraints to ensure the existence of a nonempty and nontrivial family of solutions. Historically the focus has been on {\em the matrix} as a mathematical object rather than on properties that should be preserved based on the geometry or other structural constraints implicit to the application. Consequently, the conventional means for defining matrix completion is to formulate it as an optimization problem such as minimizing a specified norm of {\em the matrix}. However, at a minimum the choice should be consistent with properties required by the application. For example, if the geometric structure of the application's solution space is understood to be rotation-invariant, then clearly a unitary-invariant norm is not inappropriate. On the other hand, if the application involves variables with incommensurate units of measure (e.g., length and pressure variables defined in arbitrary metric or imperial units), then minimizing a unitary-invariant norm is meaningless because solutions will be dependent on the arbitrary choice of units made for the application's state variables.
We take the position that a well-posed formulation of the matrix/tensor completion problem should derive from properties of the solution space that must be enforced or conserved. In particular, we assume that the vast majority of nontrivial real-world problems involve some number of known and unknown variables with incommensurate units. Therefore the solution for a given problem must be consistent with respect to those units in the sense that arbitrary relative linear scalings of them, e.g., changing from meters to centimeters for lengths, should yield the same unique solution {\em but in the new units}. We refer to this as a {\em unit-consistent} (UC) solution. To intuitively appreciate the significance of this perspective, consider the alternative in which a change from millimeters to centimeters yields a fundamentally different solution. Which solution is ``better''? If some meta-criterion selects one over the other, then does that mean another choice of units might yield an even better solution?
In many areas of applied mathematics and engineering, the default reflex when faced with an ill-posed problem (e.g., an underdetermined set of equations) is to apply a toolbox LMSE method to define a unique solution. This urge is so strong as to go almost unnoticed, as if there is no need to reflect on whether the resulting solution is meaningful in terms of the given application. It could be argued that a slightly more sophisticated urge would be to try to identify a formulation of the ill-posed problem so as to minimize the computational complexity of the algorithm needed to solve that formulation. Such an approach almost presupposes that there is no particular ``best'' solution from the perspective of the application, so computational efficiency becomes the principal consideration.
For some practical applications it could be the case that computational considerations must supersede all else because {\em some} solution is better than {\em no} solution. This may motivate a rank-minimization formulation of the matrix completion problem as a means for producing a decomposable (factorable) result that can reduce problem complexity for subsequent operations \cite{szl,hardt}. But will the resulting solutions be unitary-invariant or unit-consistent? The application may demand one of these or some other property or properties be enforced, and identifying which is appropriate should precede consideration of possible problem formulations if only to characterize limitations of the ultimately chosen solution method.
The first contribution of this paper is a fully general, computationally efficient method for transforming a given tensor to a scale-invariant canonical form. This form ensures that whatever operation is applied to it will be scale invariant. For example, a matrix function\footnote{Examples include scale-invariant or unit-consistent PCA, MDS, neural networks and machine learning methods, etc., to replace conventional least-squares or other norm-specific criteria. See \cite{BoZ1,BoZ3} for use of this in the context of robot-control applications.} $f(A)$, can be made scale-invariant as $f(\mathcal{S}(A))$, where the function $\mathcal{S}$ ensures $A=D\cdot \mathcal{S}(DAE)\cdot E$ and $\mathcal{S}(DAE)$=$\mathcal{S}(A)$ is uniquely determined and holds for all strictly positive diagonal matrices $D$ and $E$. Alternatively, the operation can be made unit-consistent as $D\cdot f(\mathcal{S}(A))\cdot E$. In summary, the unique canonical scaling $\mathcal{S}(A)$ is invariant with respect to nonsingular diagonal scalings of its matrix argument, and this canonical scaling generalizes to arbitrary $d$-dimensional tensors.
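For a complete matrix with strictly positive entries, the canonical scaling can be sketched in closed form: working with logarithms, subtracting row means and then column means of the log-entries yields the unique balanced form, and exponentiating recovers $\mathcal{S}(A)$. The following illustrative sketch (ours, not the paper's implementation) checks the defining properties numerically:

```python
import numpy as np

def canonical_scale(A):
    """Two-pass canonical scaling of a complete, strictly positive matrix.

    Returns (d, S, e) with A = diag(d) @ S @ diag(e), where every row and
    every column of S has geometric mean (hence product) equal to 1.
    """
    L = np.log(A)
    r = L.mean(axis=1, keepdims=True)        # row means of log-entries
    c = (L - r).mean(axis=0, keepdims=True)  # then column means
    S = np.exp(L - r - c)
    return np.exp(r).ravel(), S, np.exp(c).ravel()

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(4, 5))
d, S, e = canonical_scale(A)

# Row/column products of S are 1, and the factorization reproduces A.
assert np.allclose(S.prod(axis=0), 1.0) and np.allclose(S.prod(axis=1), 1.0)
assert np.allclose(np.diag(d) @ S @ np.diag(e), A)

# Scale invariance: S(D A E) = S(A) for positive diagonal rescalings.
D = np.diag(rng.uniform(0.1, 10.0, 4))
E = np.diag(rng.uniform(0.1, 10.0, 5))
assert np.allclose(canonical_scale(D @ A @ E)[1], S)
```

For matrices with missing entries, the same balancing must be done iteratively over the known entries only, as sketched in the next section.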
The second contribution of this paper is a tensor completion algorithm based on this unit-scale invariant canonical form. We argue that human/subjective variables that are presumed to be unknowable but critical to effective recommender system (RS) solutions can be effectively understood as a lack of knowledge of units that are implicitly applied when humans rate products, e.g., that humans implicitly rank using subjective units that may depend on the product (or its class or attributes). Evidence for this conclusion comes from the fact that a unit-consistency constraint on its own is sufficient to perform comparably to state-of-the-art specialized RS systems according to crude/arbitrary metrics such as RMSE. Potentially more important from an analytic perspective, the UC constraint alone also determines a unique solution (assuming full support is provided by the given entries), and from it we are able to prove a {\em consensus ordering} theorem that any admissible RS solution should be expected to satisfy: if all users agree on a rank-ordering of a set of products, then recommendations (entry completions) will also satisfy that ordering.
The complexity to transform to scale-invariant form is $\Omega(n)$, where $n$ is the number of known entries. However, our most general formulation of the RS problem for tensors can involve constructions for which each user is represented as a vector of attributes, and products are similarly generalized, and each of these attributes can in principle be recursively expanded to capture whatever relationships are deemed to be of predictive interest. In practice one can expect $n$ to be relatively manageable because the collection of information is practically bounded even if the implicit index space becomes exorbitant, but the $O(1)$ per-entry completion complexity, which assumes fixed $d$, can be compromised because we permit predictions to be defined across arbitrary $k$-dimensional subtensors, i.e., our generalization can allow for formulations with $O(\binom{d}{k})$ entry-completion complexities. Fortunately, RS applications of interest will typically involve formulations for which $k=d-1$, thus preserving the optimal complexities achieved in the matrix case to arbitrary $d$-dimensional tensors.
The format of the paper is as follows. We begin by introducing the recommender system problem and arguing that unit consistency is a necessary property. We then formally define a general canonical scaling algorithm (CSA) for tensors, followed by the tensor completion algorithm based on that canonical scaling, which provide a direct completion method for recommender-system applications. We then prove general UC properties of the completion algorithm, e.g., relating to uniqueness, followed by proofs of properties that are of specific relevance to recommender systems. Notably, we prove the consensus-ordering theorem, which essentially says that if all users agree on an ordering-by-preference for a set of products, the estimated ratings (recommendations) obtained from the completion algorithm will respect that ordering. We also generalize the interpretation of the {\em consensus-ordering} theorem to apply with respect to user attributes and/or product attributes and/or the tensor extension of any other state variable. We conclude with a discussion of results and their implications for generalized recommender systems.
\section{Recommender Systems}
A motivating application for our unit-consistent (UC) framework is recommender systems in which a table (matrix) of user ratings of products is used to estimate values for unfilled entries, i.e., predict ratings for particular products not rated by particular users. For a ratings matrix $A$ with users indexed by $i$ and products by $j$, we denote the rating of user $i$ on product $j$ as $A_{ij} = A(i,j)$ for $(i,j) \in \known{A}$. Define the list of entries in $\known{A}$ as $A_{r}$ and the list of entries in $\unknown{A}$ as $A_{nr}$ (the {\em absent} entries). A recommendation process $RS$ uses the entries of $A_r$ to approximate the entries of $A_{nr}$. Our goal is to provide a formulation of $RS$ such that the resulting entries filled in $A_{nr}$ satisfy unit-consistency and consensus-ordering constraints.
Our approach involves taking the matrix of user-product ratings and transforming it to a scale-invariant canonical form by applying a left and right diagonal scaling so that the product of the known (filled) entries in each row and column is unity. This scaled form is provably unique. Each unfilled entry is then replaced with 1, thus preserving the canonical form, and the inverse of the original scaling is applied to obtain the completed form of the original matrix. This algorithm preserves unit consistency because the canonical form is invariant with respect to positive scalings of the rows and columns of the given matrix.
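A minimal sketch of this completion procedure for the matrix case is given below (function and variable names, iteration counts, and the example data are ours, for illustration only). It assumes strictly positive known ratings and that every row and column contains at least one known entry; the canonical scaling is computed by alternately balancing the known log-entries of rows and columns:

```python
import numpy as np

def uc_complete(A, mask, iters=500):
    """Sketch of unit-consistent completion (matrix case).

    A    : ratings matrix with positive known entries
    mask : boolean array, True where A is known; every row and column
           is assumed to contain at least one known entry.
    """
    m, n = A.shape
    L = np.zeros((m, n))
    L[mask] = np.log(A[mask])
    r, c = np.zeros(m), np.zeros(n)
    # Alternately balance rows and columns so that the known entries of the
    # canonical form have zero log-mean (geometric mean 1) in every row/column.
    for _ in range(iters):
        R = L - r[:, None] - c[None, :]
        r += np.array([R[i, mask[i]].mean() for i in range(m)])
        R = L - r[:, None] - c[None, :]
        c += np.array([R[mask[:, j], j].mean() for j in range(n)])
    S = L - r[:, None] - c[None, :]
    S[~mask] = 0.0                # unknown entries get canonical value 1
    return np.exp(S + r[:, None] + c[None, :])

# Hypothetical example: 4 users x 4 products, two missing ratings.
A = np.array([[5.0, 4.0, 1.0, 2.0],
              [4.0, 3.0, 1.0, 0.0],   # 0.0 = placeholder, not a rating
              [2.0, 0.0, 0.5, 1.0],
              [5.0, 5.0, 2.0, 3.0]])
mask = A > 0
F = uc_complete(A, mask)
assert np.allclose(F[mask], A[mask])   # known entries are preserved

# Unit consistency: rescaling any user's or product's ratings rescales the
# completed entries identically.
rng = np.random.default_rng(1)
d = rng.uniform(0.5, 2.0, 4); e = rng.uniform(0.5, 2.0, 4)
F2 = uc_complete(d[:, None] * A * e[None, :], mask)
assert np.allclose(F2, d[:, None] * F * e[None, :])
```

Each missing entry is filled with $\exp(r_i+c_j)$, i.e., the product of the inverse row and column scalings, and the final assertion demonstrates the unit-consistency property discussed below.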
To appreciate the need for RS unit consistency, consider a user Alice who rates products in terms of a personal ``unit of quality'' that derives in unknowable ways from various aspects of her personality and experience \cite{urec}. Suppose the RS suggests a rating of $x$ for a particular film she has not yet rated. Now consider an alternative scenario in which Alice's personal ``unit of quality'' is arbitrarily scaled by a factor of $1.05$, i.e., all of her ratings become $5\%$ larger. In this scenario we should expect the RS to suggest a rating of $1.05\times x$, i.e., the same value but now consistent with her ratings using the new unit of measure. If it does not, then the RS is not unit consistent.
To further impress the need for unit consistency, suppose in the example above that the original RS rating of $x$ for the new film is the same value Alice gave as a rating for a previous film she had seen. In other words, the RS is implicitly suggesting that she will like the new film almost identically to that previous film. If, however, the RS is not unit consistent, then the simple scaling of all of Alice's rating by $5\%$ -- as described in the alternative scenario above -- would then cause the RS-suggested rating for the new film to no longer be same as the previous film. In other words, the RS-predicted relative ranking of the new film among the films that Alice has previously rated could change. Of course, unit consistency would, by definition, ensure that the predicted ranking of the new film among Alice's rated films is unchanged because the new rating would be scaled consistently, e.g., the original RS rating of $x$ for the film would become $1.05\times x$ in the alternative scenario in which all of Alice's ratings are scaled by $1.05$.
A different perspective on the need for unit consistency is to consider the influence of Alice's ratings on the recommendations made by the RS to other users. Unit consistency implies that a fixed scaling of all of Alice's ratings, e.g., by a factor of $1.05$, will have no effect on RS-suggested ratings for other users. This is because ratings are invariant with respect to scale factors applied separately to any row or column of the ratings matrix. This invariance also ensures that no individual user is advantaged or penalized in terms of their influence on system ratings by the magnitude of the personal ``unit of quality'' they use (implicitly) to produce their ratings.
By contrast, an approach that optimizes RS ratings to minimize a measure of squared error -- {\em hence cannot provide unit consistency} -- will implicitly apply more weight to larger-magnitude user ratings than to smaller ones. This means that users who tend to be more reserved in their ratings will have less influence on the system's ratings than users who tend to give higher ratings on average by some factor. Returning to the earlier example, the scaling of Alice's ratings by $5\%$ in a system that minimizes squared error would have the effect of giving her ranked ordering of films more influence on the system's ratings than it had before. This potential source for manipulation of the RS is avoided under the unit-consistency constraint.
There are, of course, opportunities for users to {\em selectively influence} any RS because its suggested ratings are necessarily derived from user-provided ratings. In the case of a unit-consistent RS, for example, if a user feels that film A is 5\% better than film B, and film B is 3\% better than film C, then the influence of these relative preferences on system ratings cannot be increased or decreased simply by scaling the ratings, e.g., scaling so that film A has the highest rating. The user can only affect the system's ratings for other users by changing his relative ratings for the three films, e.g., by giving them all the highest possible rating. Doing so will not change the {\em magnitude} of the user's influence on system ratings; it will only cause the system to believe he equally likes all three films. In other words, he cannot artificially bias a unit-consistent RS toward ratings that are more aligned with his personal relative preferences among the three films; he can only affect what the system assumes those relative preferences to be. In this sense, unit consistency provides a natural and intuitive form of both robustness and fairness.
Previous works (\cite{Recht,Boaz,Aaron}) have exploited unitary invariance via the SVD to achieve state-of-the-art probabilistic bounds on the minimum number of entries sufficient for retrieval. Other methods exploit the SVD indirectly via the Moore-Penrose pseudoinverse to achieve unitary and global-scale invariance. These include collaborative filtering (\cite{Daniel}), the parameter-decrease method for matrix factorization (\cite{Guangxiang}), SIFT for recommendation (\cite{Lowe}, \cite{Gihwi}), and social choice theory for recommendation (\cite{David}). Unfortunately, unitary (e.g., rotational) invariance is inherently and unavoidably sensitive to the choice of units on key variables. Consequently, we argue that such methods are inadmissible for nonspatial applications (i.e., where preservation of Euclidean distances between objects has no meaningful relevance), of which recommender systems represent a prime example.
In the following sections, we formally describe our framework and its properties relating to recommender systems. In Section \ref{empirical} we provide empirical results suggesting that unit consistency alone is sufficient to provide performance competitive with state-of-the-art methods on standard benchmark datasets.
\section{Canonical Scaling}
\label{CSA_section}
In this section we present our general canonical scaling algorithm (CSA) and formally establish both its correctness and the uniqueness of its result for the canonical scaling problem (CSP) of a $d$-dimensional tensor $A$. First we define our notation:
\begin{enumerate}
\item $A\in \mathbb{R}_{\mbox{\tiny >$\! 0$}}^{n_1\times ... \times n_d}$ is a $d$-dimensional tensor with fixed dimensional extents $n_1, ...\,, n_d$. (The restriction to strictly positive, as opposed to nonnegative, entries is unnecessary for canonical scaling but will prove convenient when considering recommender-system applications of the tensor completion algorithm.)
\item $\Vec{\alpha} = \{\alpha_1, ...\,, \alpha_d\} \in \mathbb{Z}^d$ is a $d$-dimensional vector that specifies an entry of $A$ as $A(\Vec{\alpha})$.
\item $A_i$ is the $i^{th}$ element of the ordered set of all $k$-dimensional subtensors of $A$, where each subtensor is equal to $A$ but with a distinct subset of the $d-k$ extents restricted to $1$.
\item $S_k$ is a strictly positive-valued vector of length equal to the number of $k$-dimensional subtensors of $A$. Then $S_{k, i}$ is the $i^{th}$ element of $S_k$.
\item $\known{A} = \{\vec{\alpha} \mid A(\vec{\alpha}) \text{ is known}\}$ is the set of known/defined entries of tensor $A$; and its complement, $\unknown{A}$, is the set of absent/missing entries of $A$.
\item Integer $N$ $\doteq$ $|\known{A}|$ + $|\unknown{A}|$ = $|A|$ is the cardinality of the total index space of $A$.
\item Integer $n$ $\doteq$ $|\known{A}|$ is the cardinality of known entries of $A$.
\item Integer $k<d$ denotes the dimensionality of a given subtensor.
\item $[m] = \{1, 2, ...\,, m\}$ is an index set of the first $m$ natural numbers.
\end{enumerate}
\noindent The following are two key definitions.
\begin{definition} - $\textup{\bf Sets of subtensors:}$ Let $V_i = \mathbb{R}^{n_i}$ and assume a $d$-dimensional tensor $A \in V_1 \times V_2 \times \cdots \times V_d$ and a positive integer $1 \leq k < d$. From the set of dimensional indices $[d] = \{1,\dots, d\}$ we can form all $\binom{d}{k}$ possible tuples of $k$ dimensional indices. Specifically, for any tuple $\{i_1, \cdots, i_k\} \subseteq [d]$, we enumerate all $k$-dimensional subtensors of $A$ in dimensions $V_{i_1} \times \cdots \times V_{i_{k}}$ to obtain the set of subtensors $(i)$. The union of these sets is called the ordered set of all $k$-dimensional subtensors of $A$, denoted $\mathcal{A}$, with finite cardinality $|\mathcal{A}|$ and elements labeled $A_i$. In formula notation:
\begin{equation}
\mathcal{A} = \{A_i, 1 \leq i \leq |\mathcal{A}|\} = \bigcup_{i=1}^{\binom{d}{k}} (i)
\end{equation}
\end{definition}
\begin{definition} \label{scaleDef} - $\textup{\bf Scaling $k$-dimensional subtensors of a tensor:}$
For $d$-dimensional tensor $A$, the product $A'=S_k *_k A$ is defined as a scaling of each $k$-dimensional subtensor $A_i$ of $A$ to $A'_i$ of $A'$ as $A'_i = S_{k, i}\cdot A_i$ or, equivalently, for each $\vec{\alpha} \in \upsigma(A)$
\begin{equation}
A'(\Vec{\alpha}) ~~ \equiv ~~
A(\Vec{\alpha}) ~\cdot \prod\limits_{i \text{:} \Vec{\alpha} \in A_i}\hspace{-2pt} \coeffsubtensorScaled_{k,i} ~.
\end{equation}
\label{scalingbyHadamard}
\end{definition}
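To make Definition \ref{scaleDef} concrete, the following small sketch (our own illustration, not part of the formal development) applies one scalar per $2$-dimensional slice of a $3$-dimensional array, i.e., $d=3$ and $k=2$, so that each entry is multiplied by the scalars of all slices containing it:

```python
import numpy as np

def scale_subtensors(A, S):
    """S maps (fixed_axis, index) -> scalar, one per 2-D slice of a 3-D
    tensor; each entry is multiplied by the scalar of every slice
    containing it (the *_k operation of Definition 2 for k = 2)."""
    out = A.astype(float).copy()
    for ax in range(A.ndim):
        for i in range(A.shape[ax]):
            idx = [slice(None)] * A.ndim
            idx[ax] = i
            out[tuple(idx)] *= S[(ax, i)]
    return out

A = np.ones((2, 2, 2))
S = {(ax, i): 2.0 for ax in range(3) for i in range(2)}
B = scale_subtensors(A, S)
# every entry lies in exactly one slice per axis, hence is scaled by 2^3
assert np.allclose(B, 8.0)
```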
The structured scaling of Definition \ref{scaleDef} is a fundamental component of the solution described in the next section for the {\em canonical scaling problem}, which requires the transformation of a given nonnegative tensor to a unique scale-invariant canonical form.
\subsection{Canonical scaling problem}
\label{CSA_subsection}
We now formulate the canonical scaling problem (CSP) for arbitrary scaling of $k$-dimensional subtensors of a given nonnegative tensor $A$, which may be obtained by replacing the elements of a given tensor with their magnitudes\footnote{The decomposition of a given tensor into the Hadamard product of a nonnegative scaling tensor and a tensor with unit-magnitude negative or complex (or quaternion, octonion, or other basis form) entries can be thought of as a scale/unit-consistent analog of a polar decomposition. In direct analogy to the positive-definite component of the matrix polar decomposition, the nonnegative scaling tensor developed in this section is unique.}. Using Definition \ref{scalingbyHadamard}, we have
\\\\
\textbf{Canonical Scaling Problem (CSP):} Find the $d$-dimensional tensor $A' \in \mathbb{R}^{n_1\times \cdots \times n_d}_{> 0}$ and a positive vector $\coeffsubtensorScaled_k$ such that $A' = \coeffsubtensorScaled_k *_k A$ and the product of the known entries of each $k$-dimensional subtensor $A'_i$ is $1$.
\\\\
The CSP formulation can be transformed to the following equivalent problem by taking logarithms of known entries and replacing the unit-product constraint with a zero-sum constraint.
\\\\
\textbf{Log Canonical Scaling Problem (LCSP):} Find the $d$-dimensional tensor $a'$ and a vector $\coeffsubtensorscaled_k$ such that $
a'(\Vec{\alpha}) \equiv
a(\Vec{\alpha}) + \sum\limits_{i : \Vec{\alpha} \in A_i} \coeffsubtensorscaled_{k,i}, \quad \forall \vec{\alpha} \in \upsigma(A)$,
and such that the sum of the known entries of each $k$-dimensional subtensor $a'_i$ equals zero.
\\\\
In previous work \cite{CSA}, we established, based on \cite{KT} and \cite{RZalgorithm}, that the LCSP is equivalent to the following \textbf{Convex Optimization Problem (COP)}:
\begin{equation}
\underset{x \in \mathbb{R}^{n_1 \times \cdots \times n_d}}{\text{minimize }} 2^{-1}\sum\limits_{\Vec{\alpha} \in \upsigma(A)}\hspace{-3pt}(x\left(\Vec{\alpha}\right) - a\left(\Vec{\alpha}\right))^2 \quad \text{subject to} \quad \sum\limits_{\Vec{\alpha} \in \upsigma(A_i)}\hspace{-5pt} x(\Vec{\alpha}) = 0, \quad \forall \text{ subtensor } A_i.
\end{equation}
\subsubsection{Uniqueness}
In the proofs of \cite{CSA}, we show that the equivalence between \textbf{COP} and \textbf{CSP} guarantees a feasible solution, hence the existence of a tensor $A'$ that satisfies \textbf{CSP} and has $\upsigma(A) = \upsigma(A')$. Specifically, we show that the COP is an instance of the following abstract problem: given $u \in \mathbb{R}^p$, $b \in \mathbb{R}^q$, and $C\in \mathbb{R}^{q\times p}$, find a vector $u' \in \mathbb{R}^p$ and $\omega \in \mathbb{R}^q$ such that
\begin{equation}
u'^{T}=u^T + \omega^T C
\end{equation}
and
\begin{equation}
C u' = b
\end{equation}
It is proven in \cite{RZalgorithm} that this program is equivalent to the following optimization problem, for which the properties of uniqueness and existence can be established:
\begin{equation}
\underset{x \in \mathbb{R}^p}{\text{minimize }} ~ 2^{-1} \sum^p_{j=1} (x_j-u_j)^2 \quad \text{subject to} \quad Cx = b.
\end{equation}
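The minimizer of this equality-constrained least-squares problem is the orthogonal projection of $u$ onto the affine set $\{x \mid Cx = b\}$, with closed form $x = u - C^{T}(CC^{T})^{+}(Cu - b)$. The following quick numerical sanity check (our own, with randomly generated data) verifies feasibility and optimality:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 6))   # q = 3 constraints, p = 6 unknowns
u = rng.standard_normal(6)
b = rng.standard_normal(3)

# Closed-form projection of u onto {x : Cx = b}
x = u - C.T @ np.linalg.pinv(C @ C.T) @ (C @ u - b)
assert np.allclose(C @ x, b)

# Any other feasible point (e.g., the least-norm solution) is no closer to u
w = np.linalg.lstsq(C, b, rcond=None)[0]
assert np.allclose(C @ w, b)
assert np.linalg.norm(x - u) <= np.linalg.norm(w - u) + 1e-9
```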
Thus, the convergence theorem for the \textbf{CSP} follows from the aforementioned optimization problem, with the specific proofs given in our previous work \cite{CSA}. Moreover, if there exists a solution $x$ such that $Cx = b$ and a vector $\omega$ such that $C\omega = 0$, then $x + \omega$ is another solution. This leads to the following theorem (formally proven in \cite{CSA}) regarding solution uniqueness:
\begin{theorem}{(Uniqueness of $A'$)} There exists at most one tensor $A'$ for which there exists a strictly positive vector $\coeffsubtensorScaled_k$ such that the solution $\left(A',\coeffsubtensorScaled_k \right)$ satisfies \textbf{CSP}. Furthermore, if a positive vector $T_k$ satisfies
\begin{equation}
\prod\limits_{i: \Vec{\alpha} \in A_i}\hspace{-4pt} T_{k, i} ~= ~1 \qquad \forall \Vec{\alpha} \in \upsigma(A),
\end{equation} then $\left(A', S_k \circ T_k\right)$ is also a solution, where $\circ$ is the Hadamard product.
\label{uniquess_A'}
\end{theorem}{}
\subsubsection{CSA for tensor}
We now present Algorithm 1 for obtaining the unique scaled tensor $A'$=$\mbox{CSA}(A,k)$ and its associated scaling vector $\coeffsubtensorScaled_k$.
\begin{algorithm}[H]
\DontPrintSemicolon
\KwInput{$d$-dimensional tensor \textit{A}.}
\KwOutput{$A'$ and scaling vector $\coeffsubtensorScaled_k$.}
\SetKwFunction{FMain}{CSA}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$A, k$}}{
\textbf{- \texttt{Step 1}: Iterative step over constraints}: Initialize $count \leftarrow 0$ and the variance accumulator $v\leftarrow 0$, and let $\coeffsubtensorscaled_k$ be a zero vector of conformant length. Let $a$ be the elementwise logarithm of $A$, i.e., all known entries are replaced with their logs.
\\
\For{each subtensor $A_i$ with index $i$}{
\begin{equation}
\begin{aligned}
\rho_i &= -\left[ |\upsigma(A_i)|\right]^{-1}
\sum\limits_{\Vec{\alpha} \in \upsigma(A_i)}\hspace{-4pt} a(\Vec{\alpha})
\\
a(\Vec{\alpha}) & \leftarrow a(\Vec{\alpha}) + \rho_{i},\, v\leftarrow v+\rho_i^2, \quad \text{for} \quad \Vec{\alpha} \in \upsigma(A_i)
\\
\coeffsubtensorscaled_{k,i}&\leftarrow \coeffsubtensorscaled_{k,i} + \rho_{i}
\end{aligned}
\end{equation}
}
\textbf{- \texttt{Step 2}: Convergence}: If $v$ is less than a selected threshold $\epsilon$, then exit the loop. Otherwise, reset $v \leftarrow 0$, set $count \leftarrow count+1$, and return to Step 1.
\KwRet $A' = \exp(a)$ and $\coeffsubtensorScaled_k = \exp(\coeffsubtensorscaled_k)$.\;}
\caption{CSA for tensor}
\end{algorithm}
The time complexity of this algorithm is $O\left(|\upsigma(A)| \right)$ per iteration and is unaffected by the final step of converting back from the log-space solution to the desired solution as $A'(\vec{\alpha}) = \exp(a(\Vec{\alpha}))$, which is also $O\left(|\upsigma(A)| \right)$.
A detailed examination of the contributions of each of the set of subtensors $(i)$ to this time complexity can be found in \cite{CSA}.
In the case of $d$=$2$, when $A$ is a matrix, i.e., $\mbox{CSA}(A, k$=$1)$, the set of $k$=$1$ subtensors is simply the set of rows and columns. This is a specialized instance of a matrix problem studied in \cite{RZalgorithm}, in which rows and columns are explicitly distinguished for the problem of scaling the line products of a matrix to chosen positive values (for which a solution is not guaranteed to exist except in the case we use, with all scaling values equal to $1$). From our generalized tensor formulation, however, it can be seen that such a distinction is unnecessary.
\section{Tensor Completion Algorithm}
The CSA process provides the basis for the following tensor completion algorithm.
\begin{algorithm}[H]
\DontPrintSemicolon
\KwInput{$d$-dimensional tensor A and $k$.}
\KwOutput{$A'$}
\SetKwFunction{FMain}{TCA}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$A, k$}}{
\textbf{- \texttt{Step 1}: CSA process.
}
\\
$S_k \leftarrow \mbox{CSA}(A, k)$
\textbf{- \texttt{Step 2}: Tensor completion process}:\;
$A' = A$
\\
\For{$\Vec{\alpha} \in \unknown{A}$}{
$ A'(\Vec{\alpha}) \leftarrow \prod_{i: \vec{\alpha} \in A_i} S_{k,i}^{-1}$ .\;
}
\KwRet $A'$.\;}
\caption{Tensor completion algorithm (TCA)}
\end{algorithm}
\subsection{Uniqueness and Full Support}
Although the canonically scaled tensor is unique, the scaling vectors may not be unless there are sufficiently many known entries to provide {\em full support}, which is now defined.
\begin{definition} For two vectors $\Vec{\alpha}, \Vec{\alpha}' \in \mathbb{Z}^d_{>0}$ such that ${\alpha}_j \neq {\alpha}'_j$ for all $j \leq d$, define the hypercube ordered set of $2^d-1$ vectors with respect to $\Vec{\alpha}$ as $H(\Vec{\alpha}, \Vec{\alpha}') = \{\Vec{\beta}_i \neq \Vec{\alpha}, 1\leq i\leq 2^d-1\}$, where the $j$-th component $\beta_{ij}$ of each $\Vec{\beta}_i$ equals either $\alpha_j$ or $\alpha'_j$, for all $j\leq d$. A positive tensor $A$ is then defined as {\em fully supported} if for every entry $\Vec{\alpha} \in \unknown{A}$ there exists a vector $\Vec{\alpha}'$ such that ${\alpha}_j \neq {\alpha}'_j$ for all $j \leq d$ and $\Vec{\beta}_i \in \known{A}$ for all $\Vec{\beta}_i \in H(\Vec{\alpha}, \Vec{\alpha}')$.
\label{fully-supported-tensor}
\end{definition}
Definition \ref{fully-supported-tensor} essentially says that every unknown entry forms a vertex of a $d$-dimensional hypercube with known entries at the remaining vertices. In the $d=2$ matrix case, for example, an unknown entry $(i,j)$ must have known entries at ($i$+$p$,\,$j$), ($i$,\,$j$+$q$), and ($i$+$p$,\,$j$+$q$) for some $p\neq 0$ and $q\neq 0$. In this case, $\vec{\alpha}$ = $(i,j)$ and $\vec{\alpha}'$ = ($i$+$p$,\,$j$+$q$). Thus, full support can be expected to hold even with very high sparsity. From this point forward we will assume full support unless otherwise stated.
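Definition \ref{fully-supported-tensor} can be checked directly by brute force on small examples. The following checker (our own illustration; its cost is exponential in $d$, so it is intended only for small test cases) operates on a Boolean mask of known entries:

```python
import numpy as np
from itertools import product

def fully_supported(mask):
    """mask[alpha] is True iff entry alpha is known."""
    d = mask.ndim
    for alpha in zip(*np.nonzero(~mask)):
        found = False
        for alpha2 in product(*(range(n) for n in mask.shape)):
            if any(a == b for a, b in zip(alpha, alpha2)):
                continue  # alpha' must differ from alpha in every coordinate
            # the remaining 2^d - 1 hypercube vertices must all be known
            if all(mask[tuple(alpha2[i] if pick[i] else alpha[i]
                              for i in range(d))]
                   for pick in product((0, 1), repeat=d) if any(pick)):
                found = True
                break
        if not found:
            return False
    return True
```

In the $d=2$ case above, a missing $(i,j)$ is supported precisely when some $(i{+}p,\,j)$, $(i,\,j{+}q)$, and $(i{+}p,\,j{+}q)$ are all known.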
\subsubsection{Completion Uniqueness}
To facilitate the subsequent theorem and proof, we introduce the following definition.
\begin{definition} Let $\Vec{\alpha}$ be a coordinate vector of tensor $A$. A vector $\Vec{v} \in \mathbb{Z}^{k}_{>0}$ for $k\leq d$ is a $k$-dimensional subvector of $\Vec{\alpha}$ if $\Vec{v} = (\alpha_{\gamma_1}, \dots, \alpha_{\gamma_k})$ for some $\{\gamma_1, \gamma_2, \dots, \gamma_k\} \subseteq [d]$. We denote the set of $k$-dimensional subvectors of $\Vec{\alpha}$ as $V_k(\vec{\alpha})$.
\label{uniqueness_def}
\end{definition}
\begin{observation}
For any $\vec{\alpha}$, each $k$-dimensional subvector $\Vec{v}$ is in 1-to-1 correspondence with a unique $k$-dimensional subtensor of $A$ containing $\vec{\alpha}$.
\end{observation}
Using this definition, we obtain the following theorem regarding uniqueness of the recommendation/entry-completion result.
\begin{theorem} \textbf{(Uniqueness)}
The result from $\mbox{TCA} (A, k)$ is uniquely determined even if there are distinct sets of scaling vectors $\{\coeffsubtensorScaled_k\}$ that yield the same, unique,
$\mbox{CSA}(A, k)$.
\label{uniqueness_proof}
\end{theorem}
\begin{proof}
For $A' = \mbox{TCA}(A, k)$, since entry $ \Vec{\alpha} \in \known{A}$ has $A'(\Vec{\alpha}) = A(\Vec{\alpha})$
by the uniqueness theorem \ref{uniquess_A'}, we need only prove uniqueness of any completion $\vec{\alpha} \in \unknown{A}$.
From the uniqueness result of Theorem \ref{uniquess_A'}, and assuming that $\mbox{TCA}(A, k)$ admits two distinct scaling vectors $\coeffsubtensorScaled_k$ and $\coeffsubtensorScaled_k '$, we note that each entry $\coeffsubtensorScaled_{k,i}$ of $\coeffsubtensorScaled_k$ corresponds to a unique $k$-dimensional subtensor of $A$.
From Definition \ref{fully-supported-tensor}, there exists a vector $\Vec{\alpha}'$ such that ${\alpha}_j \neq {\alpha}'_j$ for all $j \leq d$ and a hypercube set $H(\Vec{\alpha}, \Vec{\alpha}')$ of vectors $\Vec{\beta}_i \in \known{A}$. From Theorem \ref{uniquess_A'}, the scaling vector $\coeffsubtensorScaled_k'$ equals $\coeffsubtensorScaled_k \circ T_k$, where for $\vec{v} \in V_k(\vec{\beta}_i)$ we write the coefficient associated with $\vec{v}$ as $T_{\vec{v}} \doteq T_{k,j}$, with index $j$ denoting the $j$-th $k$-dimensional subtensor $A_j$ of $A$ for which $\vec{\beta}_i \in A_j$, and thus
\begin{equation}
\prod_{\substack{\vec{v}\in V_k(\vec{\beta}_i)}} \hspace{-8pt} T_{\vec{v}} ~~ = ~1 ~.
\label{prod_T_relate_i}
\end{equation}
We now show that
$A'(\Vec{\alpha}) = \prod_{i:\, \vec{\alpha} \in A_i} S_{k,i}^{-1}$
is unique which, using the identification $\prod_{i:\, \vec{\alpha} \in A_i} S_{k,i} = \prod_{\substack{\vec{v}\in V_k(\vec{\alpha})}} S_{\vec{v}}$, is equivalent to
\begin{equation}
\prod_{\substack{\vec{v}\in V_k(\vec{\alpha})}}\hspace{-8pt} S_{\vec{v}} ~~ =
\prod_{\substack{\vec{v}\in V_k(\vec{\alpha}')}}\hspace{-8pt} S_{\vec{v}}
\end{equation}
or equivalently from theorem \ref{uniquess_A'}
\begin{equation}
\prod_{\substack{\vec{v}\in V_k(\vec{\alpha})}}\hspace{-8pt} T_{\vec{v}}
~\, = ~1 ~.
\end{equation}
Without loss of generality, we consider the case $d \equiv 0 \pmod 2$; the other case can be proved similarly. Denoting $\Vec{\beta}_{2^d-1} = \Vec{\alpha}'$ and $\Vec{\beta}_{0} = \Vec{\alpha}$, we divide the $2^d-2$ remaining vectors $\Vec{\beta}_i$, $1 \leq i \leq 2^d-2$, into two groups: if the number of coordinates of $\vec{\beta}_i$ that match those of $\vec{\alpha}$ equals $j$ modulo 2, then $\Vec{\beta}_i$ is assigned to $G_j$. This yields the two groups $G_0$ and $G_1$. For $G \in \{G_0, G_1\}$, let $V(G) = \bigcup_{\vec{\beta}_i \in G} V_k(\Vec{\beta}_i)$, let $V_G(\vec{v}) = \{\vec{\beta}_i \in G \mid \vec{v} \in V_k(\vec{\beta}_i) \}$, and let $|V_G(\vec{v})|$ denote the latter set's cardinality. Then from equation (\ref{prod_T_relate_i})
\begin{equation}
\prod_{\vec{\beta}_i \in G}~\prod_{\substack{\vec{v}\in V_k(\vec{\beta}_i)}}\hspace{-4pt} T_{\vec{v}} ~~ = ~ 1 ~~\Leftrightarrow ~ \prod_{\vec{v} \in V(G)}\hspace{-2pt} T_{\vec{v}}^{|V_G(\vec{v})|} ~~=~ 1 ~.
\end{equation}
Consider $\vec{v} \in \bigcup_{i=1}^{2^d-2} V_k(\Vec{\beta}_i)$ and suppose $\vec{v}$ has $l_i$ coordinates that match those of $\Vec{\alpha}$, so that the remaining $k-l_i$ elements of $\vec{v}$ match those of $\Vec{\alpha}'$. Consider first the case $0 < l_i < k$. For each $\vec{v}$, we form $\vec{\beta}_i$ by choosing an additional $m \leq d-k$ elements among the $d-k$ remaining elements of $\Vec{\alpha}$, so that $m+l_i$ is the number of coordinates of $\Vec{\beta}_i$ that match $\Vec{\alpha}$. For $j\in \{0,1\}$, if $m +l_i \equiv j \pmod 2$, then $\vec{v}$ forms $\binom{d-k}{m}$ vectors $\vec{\beta}_i$ belonging to $G_j$. Summing over $0 \leq m \leq d - k$ gives $|V_{G_0}(\vec{v})| = \sum_{m +l_i \equiv 0 \,(\text{mod } 2)} \binom{d-k}{m} = 2^{d-k-1}$ and $|V_{G_1}(\vec{v})| = \sum_{ m +l_i \equiv 1 \,(\text{mod } 2)} \binom{d-k}{m} = 2^{d-k-1}$. Thus,
\begin{equation}
\dfrac{T_{\vec{v}}^{|V_{G_1}(\vec{v})|}}{ T_{\vec{v}}^{|V_{G_0}(\vec{v})|}} ~=~ 1 ~.
\end{equation}
If $l_i = k$, we encounter the vector $\vec{\beta}_i = \vec{\alpha}$ when forming $\vec{\beta}_i$. If $l_i = 0$, we encounter the vector $\vec{\beta}_i = \vec{\alpha}'$ when forming $\vec{\beta}_i$. Since we omit $\vec{\alpha}$ and $\vec{\alpha}'$ from $G_0$, $|V_{G_0}(\vec{v})| = 2^{d-k-1} - 1$ and $|V_{G_1}(\vec{v})| = 2^{d-k-1}$ in either case of $l_i$. Thus,
\begin{equation} \dfrac{T_{\vec{v}}^{|V_{G_1}(\vec{v})|}}{ T_{\vec{v}}^{|V_{G_0}(\vec{v})|}} ~=~ T_{\vec{v}}
\end{equation}
and therefore
\begin{equation}
\dfrac{\prod_{\vec{v} \in V(G_1)} T_{\vec{v}}^{|V_{G_1}(\vec{v})|}} {\prod_{\vec{v} \in V(G_0)} T_{\vec{v}}^{|V_{G_0}(\vec{v})|}} = 1 ~\Rightarrow \prod_{\substack{\vec{v}\in V_k(\vec{\alpha})}}\hspace{-8pt} T_{\vec{v}}
\prod_{\substack{\vec{v}\in V_k(\vec{\alpha}')}}\hspace{-8pt} T_{\vec{v}}\,= 1 ~\Rightarrow \prod_{\substack{\vec{v}\in V_k(\vec{\alpha})}}\hspace{-8pt} T_{\vec{v}} ~= 1 ~.
\end{equation}
This equality implies that $A'$ is unchanged, and thus uniquely determined, regardless of whether there exist distinct scaling vectors $\coeffsubtensorScaled_k$ and $\coeffsubtensorScaled_k'$.
\end{proof}
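In the matrix case, the gauge freedom permitted by Theorem \ref{uniquess_A'} is easy to see directly: for a connected pattern of known entries, the condition $T_{\textup{row},i}\, T_{\textup{col},j} = 1$ forces $T_{\textup{row}} \equiv t$ and $T_{\textup{col}} \equiv 1/t$ for some $t > 0$, and the completed value $1/(R_i C_j)$ of a missing entry is unchanged. A short numerical spot-check (ours):

```python
import numpy as np

# Row/column scalings R, C and a gauge factor t: (t*R, C/t) satisfies the
# same unit-product constraints, and the completed values 1/(R_i * C_j)
# of missing entries are identical under the gauge change.
R = np.array([2.0, 0.5, 1.25])
C = np.array([4.0, 0.8])
t = 7.0
fill = 1.0 / np.outer(R, C)
fill_gauged = 1.0 / np.outer(t * R, C / t)
assert np.allclose(fill, fill_gauged)
```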
\subsection{Unit Consistency}
$A'$=$\mbox{CSA}(A,k)$ guarantees scale invariance with respect to every $k$-dimensional subtensor of $A'$. It then remains to show that the completion result from $\mbox{TCA}(A,k)$ is unit consistent.
\begin{theorem} \textbf{(Unit-consistency)}
Given a tensor $A$ and an arbitrary conformant positive scaling vector $C_{k}$, $C_{k} *_k \mbox{TCA}(A, k)\,=\,\mbox{TCA}(C_{k} *_k A, k)$,
where $C_k$ scales all $k$-dimensional subtensors of $A$ (with operator $*_k$ as defined in definition \ref{scaleDef}).
\label{unit-consistecy}
\end{theorem}
\begin{proof}
Let $S_k$ be the scaling vector obtained from $\mbox{CSA}(A, k)$, with $A'$ the resulting canonical tensor. It can be shown that $\mbox{CSA}(C_k *_k A, k)$ yields the same canonical tensor $A'$ for all $A$. With a slight abuse of notation, we assume all unknown entries of $A'$ are assigned the value $1$, i.e., $A'(\vec{\alpha}) = 1$ for $\vec{\alpha} \in \unknown{A}$. The complete TCA process can then be expressed as $\mbox{TCA}(A, k) = S_{k}^{(-1)} *_k A'$,
where $S_{k}^{(-1)} = \{S_{k, i}^{-1}\}$ is the elementwise inverse of $S_k$. Now, using the uniqueness result of Theorem \ref{uniqueness_proof}, we can subsume the scaling vector $C_k$ into $S_k$ and deduce that
\begin{equation}
C_{k} *_k \mbox{TCA}(A, k) = (S_{k}^{(-1)} \circ C_{k}) *_k A'
= \mbox{TCA}(C_{k} *_k A, k)
\end{equation}
\end{proof}
The time complexity for $\mbox{CSA}(A, k)$ is clearly $O\left(|\upsigma(A)| \right)$ per iteration, and the same complexity applies to the TCA completion step. The details are given in \cite{CSA}; in brief, the complexity is obtained by summing the contributions of each of the sets of subtensors $(i)$.
\subsection{Consensus-Ordering Consistency}
When $k\,$=$\,d$$-$$1$, TCA can be shown to respect an ordering relationship of the $(d$$-$$1)$-dimensional subtensors of a given $d$-dimensional tensor $A$.
\begin{definition}
Given a tensor $A$, we define a permutation of a list of indices $D=\{d_1, \dots, d_{|D|}\} \subseteq [n_d]$ as $\gamma_D =\{\gamma(d_1), \dots, \gamma \left(d_{|D|}\right)\}$. We say a vector $\Vec{\alpha} \in \mathbb{Z}^{d-1}_{>0}$ preserves/follows ordering $\gamma_D$ in tensor $A$ if
\begin{enumerate}
\item{(Known terms condition)}: $A(\vec{\alpha}, \gamma(d_l)) > 0$ for all $1 \leq l \leq |D|$.
\item{(Ordering condition)}: $A(\Vec{\alpha}, \gamma(d_1)) < \dots < A(\Vec{\alpha}, \gamma(d_l)) < \dots < A(\Vec{\alpha}, \gamma(d_{|D|}))$ for $1 < l < |D|$.
\end{enumerate}
\label{consensus}
\end{definition}
Note that the tensor $A'$ from $\mbox{TCA}(A,d-1)$ also has the vector $\Vec{\alpha}$ following the same ordering $\gamma_D =\{\gamma(d_1), \dots, \gamma \left(d_{|D|}\right)\}$ as in tensor $A$, since known entries are preserved. We have the following theorem:
\begin{theorem} (Consensus-ordering)
Given a fully-supported tensor $A$, let $A' = \mbox{TCA}(A, d-1)$, let $D=\{d_1, \dots, d_{|D|}\} \subseteq [n_d]$ with permutation $\gamma_D =\{\gamma(d_1), \dots, \gamma \left(d_{|D|}\right)\}$, and define the set of known vectors on the subtensor with label $l \in D$ as $NZ_l = \{\Vec{\alpha} \in \mathbb{Z}^{d-1}_{>0} \mid (\Vec{\alpha}, l) \in \known{A}\}$, with $NZ_l = NZ$ assumed fixed for all $l \in D$. Given that each vector in $NZ$ follows ordering $\gamma_D$, any new vector $\Vec{\alpha}' \in NZ^C$, where $NZ^C$ is the complement of $NZ$, follows the ordering $\gamma_D$ in tensor $A'$, and the completion result in $A'$ is unique.
\label{consensus-ordering}
\end{theorem}
\begin{proof}
For any vector $\vec{\alpha} =(\alpha_1, \dots, \alpha_{d-1})$, the ordering condition from definition $\ref{consensus}$ gives:
\begin{equation}
A(\Vec{\alpha}, \gamma(d_1)) < \dots < A(\Vec{\alpha}, \gamma(d_l)) < \dots < A(\Vec{\alpha}, \gamma(d_{|D|}))
\end{equation}
Given that $A'$ from $\mbox{TCA}$ is normalized via $\mbox{CSA}$, and since the dimension is fixed at $k = d-1$, we can without loss of generality drop the subscript $k$ from the notation of Section 2: after the TCA process we write $S \equiv S_{d-1}$, with entries $S_i \equiv S_{d-1, i}$, so that
\begin{align*}
A(\Vec{\alpha}, \gamma(d_l)) = \prod_{i=1}^{d-1} S_{\alpha_i}^{-1}\cdot S_{\gamma(d_l)}^{-1} \cdot A'(\Vec{\alpha}, \gamma(d_l)) \doteq S^{\Vec{\alpha}} \cdot S_{\gamma(d_l)}^{-1} \cdot A'(\Vec{\alpha}, \gamma(d_l))
\end{align*}
Substituting into the above inequality, we have:
\begin{equation}
A'(\Vec{\alpha}, \gamma(d_1))\!\cdot\! S_{\gamma(d_1)}^{-1}\!
<\!
\dots\! <\! A'(\Vec{\alpha}, \gamma(d_l))\! \cdot\! S_{\gamma(d_l)}^{-1}\!
<\! \dots\! <\! A'(\Vec{\alpha}, \gamma(d_{|D|}))\! \cdot\! S_{\gamma(d_{|D|})}^{-1}
\label{ordering}
\end{equation}
Using the known-terms condition from Definition $\ref{consensus}$, the assumption that $A(\Vec{\alpha}', p)$ is unknown for all $p \in [n_d]$ and $\Vec{\alpha}' \in NZ^C$\hspace{-4pt}, and the CSA normalization of the $(d-1)$-dimensional subtensor in direction $\gamma(d_l)$:
\begin{equation}
\prod_{\Vec{\alpha} \in NZ}\hspace{-2pt} A'(\Vec{\alpha},\gamma(d_l)) ~=~ 1~.
\label{normalized_proof}
\end{equation}
Substituting (\ref{normalized_proof}) into (\ref{ordering}) gives
\begin{equation}
S_{\gamma(d_1)}^{-1} < \dots < S_{\gamma(d_l)}^{-1} < \dots < S_{\gamma(d_{|D|})}^{-1}.
\label{2.2.11}
\end{equation}
For any vector $\vec{\alpha}' \in NZ^C$\hspace{-4pt}, the following entry is uniquely determined from theorem \ref{uniqueness_proof},
\begin{equation}
A'(\Vec{\alpha}', \gamma(d_l)) ~=~
S^{\Vec{\alpha}'} \cdot S_{\gamma(d_l)}^{-1},
\end{equation}
and we therefore deduce that
\begin{equation}
A'(\Vec{\alpha}', \gamma(d_1))
<
\dots < A'(\Vec{\alpha}', \gamma(d_l)) < \dots < A'(\Vec{\alpha}', \gamma(d_{|D|})),
\end{equation}
thus completing the proof.
\end{proof}
\section{UC Completion for Recommender Systems}
In this section we examine applications of UC completion in the area of recommender systems. Because of its particular relevance to matrix-formulated recommender-system problems, we briefly
discuss the special case of $d=2$, $k=1$, for a given $m\times n$ matrix\footnote{Because all results in this paper are transposition consistent, we implicitly assume without loss of generality that $n\geq m$ purely to be consistent with our general use of $n$ as the variable that functionally determines the time and space complexity of our algorithms.} $A\in \mathbb{R}_{> 0}^{m\times n}$ with full support. For notational convenience, we define the matrix completion function $\mbox{MCA}(A)$ as a special case of TCA:
\begin{equation}
\mbox{MCA}(A) ~\equiv ~ \mbox{TCA}(A,1) ~~ \mbox{for $d=2$}\, .
\end{equation}
In this case, the unit-consistency property can be expressed as $R\cdot \mbox{MCA}(A)\cdot C = \mbox{MCA}(R\cdot A\cdot C)$ for positive diagonal matrices $R$ and $C$, i.e., $R\cdot \mbox{MCA}(A)\cdot C = \mbox{MCA}(RAC)$ in terms of matrix multiplication. The time and space
complexity for $\mbox{MCA}(A)$ is $O(|\upsigma (A)|)$.
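The identity $R\cdot \mbox{MCA}(A)\cdot C = \mbox{MCA}(RAC)$ can be spot-checked numerically. The following compact log-domain implementation is our own illustrative sketch (a $0$ entry encodes a missing rating):

```python
import numpy as np

def mca(A, tol=1e-18, max_iter=5_000):
    """Compact log-domain MCA sketch: canonical row/column scaling, then
    each missing entry (encoded as 0) is filled as 1/(R[i]*C[j])."""
    mask = A > 0
    a = np.where(mask, np.log(np.where(mask, A, 1.0)), 0.0)
    r = np.zeros(A.shape[0]); c = np.zeros(A.shape[1])
    for _ in range(max_iter):
        v = 0.0
        for i in range(A.shape[0]):
            m = mask[i]
            if m.any():
                rho = -a[i, m].mean(); a[i, m] += rho; r[i] += rho; v += rho * rho
        for j in range(A.shape[1]):
            m = mask[:, j]
            if m.any():
                rho = -a[m, j].mean(); a[m, j] += rho; c[j] += rho; v += rho * rho
        if v < tol:
            break
    return np.where(mask, A, np.exp(-np.add.outer(r, c)))

A = np.array([[4.0, 5.0, 8.0],
              [2.0, 0.0, 4.0],
              [1.0, 2.0, 0.0]])          # 0 encodes a missing rating
R = np.diag([1.05, 2.0, 0.5])            # per-user "unit of quality" scalings
C = np.diag([3.0, 1.0, 0.25])            # per-product scalings
assert np.allclose(R @ mca(A) @ C, mca(R @ A @ C))
```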
Theorem \ref{consensus-ordering} and Definition \ref{consensus} allow us to characterize the ordering behavior of the recommender system through the following definition.
\begin{definition}
Denote $RS(\Vec{\alpha}) = A'(\Vec{\alpha})$ as the recommendation result for vector position $\Vec{\alpha}$ from tensor A with $A' = \mbox{TCA}(A, d-1)$.
\end{definition}
\subsection{Consensus-ordering}
Following Definition \ref{consensus} and Theorem \ref{consensus-ordering}, we formally state the consensus-ordering property in the context of matrices, $3$-dimensional tensors, $4$-dimensional tensors, and $d$-dimensional tensors.
\begin{corollary} (Consensus-ordering for matrix)
Given a matrix $A$, $\mbox{MCA}(A)$, a set of products $P \subseteq [n]$, and a set of users $U \subseteq [m]$, we have the following statements:
\begin{enumerate}
\item If all users $u \in U$ follow ordering $\gamma_P$, then the recommendation result $RS(u', p)$ for any user-product pair $(u', p) \in U^C \times P$ is unique and follows ordering $\gamma_P$.
\item If all products $p \in P$ follow ordering $\gamma_U$, then the recommendation result $RS(u, p')$ for any user-product pair $(u, p') \in U\times P^C$ is unique and follows ordering $\gamma_U$.
\end{enumerate}
\end{corollary}
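As a numerical illustration (ours) of statement 1: if three users all rate the same three products in the same increasing order, the completed row for a new user who has rated only a fourth product preserves that order. Again, $0$ encodes a missing rating and the implementation is an illustrative sketch:

```python
import numpy as np

def mca(A, tol=1e-18, max_iter=5_000):
    """Compact log-domain matrix completion: canonical row/column scaling,
    then missing entries filled from the inverse scalings."""
    mask = A > 0
    a = np.where(mask, np.log(np.where(mask, A, 1.0)), 0.0)
    r = np.zeros(A.shape[0]); c = np.zeros(A.shape[1])
    for _ in range(max_iter):
        v = 0.0
        for i in range(A.shape[0]):
            m = mask[i]
            if m.any():
                rho = -a[i, m].mean(); a[i, m] += rho; r[i] += rho; v += rho * rho
        for j in range(A.shape[1]):
            m = mask[:, j]
            if m.any():
                rho = -a[m, j].mean(); a[m, j] += rho; c[j] += rho; v += rho * rho
        if v < tol:
            break
    return np.where(mask, A, np.exp(-np.add.outer(r, c)))

# three users rank the first three products identically; a new user has
# rated only the fourth product
A = np.array([[1.0, 2.0, 5.0, 3.0],
              [2.0, 3.0, 9.0, 0.0],
              [0.5, 1.5, 2.0, 0.0],
              [0.0, 0.0, 0.0, 7.0]])
B = mca(A)
assert B[3, 0] < B[3, 1] < B[3, 2]   # the new user's completed row agrees
```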
For $3$-dimensional tensors, we have the following interpretation.
\begin{corollary} (Consensus ordering for $3$-dimensional tensor) Given a $3$-dimensional tensor $A \in \mathbb{R}^{n_1\times n_2\times n_3}_{>0}$, let the first, second, and third dimensions be users, attributes, and products, respectively, with subsets of users, attributes, and products $U \subseteq [n_1], \Gamma \subseteq [n_2],$ and $P \subseteq [n_3]$. Define on the slice of user $u$ the set of known attribute-product index vectors $NZ_u = \{(\alpha, p) \in [n_2] \times [n_3] \mid (u, \alpha, p) \in \known{A}\}$. Similarly, define the set of known entries for the slice of attribute $\alpha$ as $NZ_{\alpha}$ and that for the slice of product $p$ as $NZ_{p}$. For the recommendation result from $\mbox{TCA}(A, 2)$ and a fixed set $NZ$, we have the following statements:
\begin{enumerate}
\item Assume that $NZ_u = NZ$ for all $u \in U$. If the ratings of every attribute-product vector $(\alpha, p)\in NZ$ follow ordering $\gamma_U$, then the recommendation result $RS(u, \alpha', p')$ for any attribute-product vector $(\alpha', p')\in NZ^C$ and each user $u \in U$ is unique and also follows ordering $\gamma_U$.
\item Assume that $NZ_p = NZ$ for all $p \in P$. If the ratings of every user-attribute vector $(u, \alpha)\in NZ$ follow ordering $\gamma_P$, then the recommendation result $RS(u', \alpha', p)$ for any user-attribute vector $(u', \alpha')\in NZ^C$ and each product $p\in P$ is unique and follows ordering $\gamma_P$.
\item Assume that $NZ_{\alpha} = NZ$ for all $\alpha \in \Gamma$. If the scores of every user-product vector $(u, p)\in NZ$ follow ordering $\gamma_{\Gamma}$, then the recommendation result $RS(u', \alpha, p')$ for any user-product vector $(u', p')\in NZ^C$ and each attribute $\alpha \in \Gamma$ is unique and follows ordering $\gamma_{\Gamma}$.
\end{enumerate}
\label{3d}
\end{corollary}
The strongest part of this corollary is the symmetry with respect to the attributes, regardless of whether the attributes all come from the users or from the products. The cases from the matrix up to the 4-dimensional tensor provide the intuition for reasoning about orderings with respect to the choice of spatial label on different coordinates.
\\\\
For the $d$-dimensional tensor, we denote a user by $u$, a product by $p$, the vector of a user's attributes by $\Vec{\alpha}_u$, and the vector of a product's attributes by $\Vec{\alpha}_p$. We can generalize all the above cases to a $d$-dimensional scaling with the following corollary, assuming the only features we focus on are the user, the user's attributes, the product's attributes, and the product, following theorem \ref{consensus-ordering}.
\begin{corollary} (Consensus-ordering for $d$-dimensional tensor)
Let $A \in \mathbb{R}^{n_1\times \dots \times n_d}$ be a $d$-dimensional tensor and let $A' = \mbox{TCA}(A, d-1)$ be the recommendation outcome. Denote by $D \subseteq [n_d]$ a subset of indices in the $d$-th dimension, and by $NZ_l = \{\Vec{\alpha} \in [n_1]\times\dots \times [n_{d-1}] \mid (\Vec{\alpha}, l) \in \known{A}\}$ the set of vectors indexing the known entries of the subtensor labeled by $l \in [n_d]$. Moreover, denote a user by $u$, a product by $p$, the vector of a user's attributes by $\Vec{\alpha}_u$, and the vector of a product's attributes by $\Vec{\alpha}_p$. Assuming that $NZ_l = NZ$ for all $l \in D$ and that every vector in $NZ$ follows ordering $\gamma_D$, we have the following cases:
\begin{enumerate}
\item The set of user-attributes vectors $ \{\Vec{\alpha} = (u, \Vec{\alpha}_u, \Vec{\alpha}_p) \}$ and the set of products $D=\{p_1, \dots, p_{|D|}\}$.
\item The set of product-attributes vectors $ \{\Vec{\alpha} = (\Vec{\alpha}_u, \Vec{\alpha}_p, p) \}$ and the set of users $D=\{u_1, \dots, u_{|D|}\}$.
\item The set of user-attributes-product vectors $ \{\Vec{\alpha} = (u, \Vec{\alpha}_u, \Vec{\alpha}_p, p) \}$ and the set of user's last attribute $D=\{\alpha_{u_1}, \dots, \alpha_{u_{|D|}}\}$.
\item The set of user-attributes-product vectors $ \{\Vec{\alpha} = (u, \Vec{\alpha}_u, \Vec{\alpha}_p, p)\}$ and the set of product's last attribute $D=\{\alpha_{p_1}, \dots, \alpha_{p_{|D|}}\}$.
\end{enumerate}
Then, in each case, any new recommendation result $RS(\Vec{\alpha}', l)$ for a vector $(\Vec{\alpha}', l) \in NZ^C\times D$ is unique and follows ordering $\gamma_D$, where $D$ is as defined above.
\end{corollary}
This corollary sums up how the consensus-ordering property with respect to the same index label in the original tensor is preserved in the output tensor. As long as there is unanimity among the attributes on the first $d-1$ dimensions with respect to the last coordinate, the ordering follows directly from theorem \ref{consensus-ordering}.
Both the consensus-ordering and unit-consistency properties generalize to higher dimensions. For this paper, we can only deduce statements in the case of $(d-1)$-scaling, assuming that any vector relationship, such as user-attribute-product, lies on the first $(d-1)$ dimensions of the $d$-dimensional tensor and is uniformly unanimous over the subset of indices on the last coordinate. If all users and products share at least one coordinate, e.g., an attribute, among the list of spatial indexing attributes, then the ordering statement for that attribute coordinate extends to any new user, and the attribute vector follows the permutation index from that indexing list. The same argument applies if we interchange the relationships of the rating scores among users, products, and the different attributes.
\subsubsection{Time complexity}
For the tensor completion process, the time complexity of answering a user's query is determined by the number of diagonal matrices needed to give predictions for a user in tensor $A$: we only need to choose one term in each such diagonal matrix and multiply them all together. As a recommender query takes only $O(d)$ time for a user, we can omit this term in the complexity analysis. As the recommendation process is $\mbox{TCA}(A, d-1)$, we have the following theorem:
\begin{theorem} \textbf{(Time complexity)}
The time complexity for the RS procedure on tensor $A$ is $O(|\upsigma(A)|)$.
\end{theorem}
\begin{proof}
The $\mbox{TCA}(A, d-1)$ process has time complexity $O\left(|\upsigma(A)|\right)$.
\end{proof}
For $d=2$, the time complexity of the RS procedure is $O(|\upsigma(A)|)$.
\begin{corollary} \textbf{(Time complexity)}
The time complexity for the recommendation procedure on matrix $A$ is $O(|\upsigma(A)|)$.
\end{corollary}
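To make the $O(d)$ query step concrete, the following sketch multiplies one diagonal term per dimension, as described above; the data layout and the function name are illustrative assumptions, not the paper's implementation.

```python
def rs_query(scaling_vectors, index):
    """Hypothetical O(d) query: pick one diagonal scaling term per
    dimension and multiply them together.

    scaling_vectors : list of d lists; scaling_vectors[k][i] is the
                      assumed diagonal term for index i in dimension k
    index           : tuple (i_1, ..., i_d) identifying the entry to predict
    """
    pred = 1.0
    for dim, i in enumerate(index):  # d multiplications -> O(d)
        pred *= scaling_vectors[dim][i]
    return pred
```

With per-dimension terms `[[2.0, 3.0], [5.0]]`, the query for index `(1, 0)` multiplies `3.0 * 5.0`.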
\section{Empirical Corroboration of\\ Performance Quality}
\label{empirical}
In this section we provide empirical results comparing our approach to state-of-the-art methods on standard datasets using the standard measure of performance: root mean-squared error (RMSE)\footnote{Thanks to Nikki Hotrabhavananda and Will Starms for obtaining comparison results provided in this section.}. We argue that RMSE, as a measure of squared error, is a poor choice because it is strongly scale-sensitive: e.g., users who give consistently higher ratings (larger magnitude) will have a greater influence on system recommendations than users who tend to give more moderate or lower ratings. However, because we believe that our unit-consistent (UC) approach is the only one that satisfies necessary conditions that are fundamental to the problem, we expect it to perform comparably to or better than other methods according to {\em any} metric.
Our first set of comparison results is for the MovieLens-1M benchmark dataset, shown in Figure \ref{fig:movieLens}. The results show that our UC method and the top-performing state-of-the-art methods are almost indistinguishable according to RMSE. This is significant because the competing methods are implicitly or explicitly designed to minimize squared error. Our UC approach, by contrast, is not at all designed to minimize squared error, and yet it performs nearly identically to methods that are tailored to minimize this measure.
Our second set of comparison results is for the Jester-2 benchmark dataset, shown in Figure \ref{fig:jester}. As can be seen, the relative order of performance is very different for this dataset compared to the previous one, which tends to suggest that RMSE is ineffective for robustly characterizing relative performance across different datasets. Again, the UC method performs comparably to state-of-the-art methods. If the conclusions of our analyses are correct, we should expect the UC approach to be similarly competitive with methods that are tailored to minimize other measures of error -- {\em even according to those measures for which they are tailored to minimize}.
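The scale-sensitivity of RMSE claimed above can be illustrated with a toy computation; the numbers below are invented for illustration and are unrelated to the benchmark datasets.

```python
import math

def rmse(pred, true):
    """Root mean-squared error between two rating lists."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

# Two users with identical relative preferences at different scales,
# both predicted with the same 10% relative error per item.
true_low,  pred_low  = [1.0, 2.0, 3.0], [1.1, 2.2, 3.3]   # moderate rater
true_high, pred_high = [2.0, 4.0, 6.0], [2.2, 4.4, 6.6]   # same ranking, doubled magnitude
```

Despite identical relative errors, the high-magnitude rater contributes exactly twice the RMSE of the moderate rater, so squared-error aggregation weights such users more heavily.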
\begin{figure}
\hspace*{-1cm}
\begin{tikzpicture}
\begin{axis}[
nodes near coords, x tick label style={rotate=45, anchor=east},
width = 1*\textwidth,
height = 7cm,
major x tick style = transparent,
ybar=1*\pgflinewidth,
bar width=20pt,
ymajorgrids = true,
ylabel = {RMSE},
symbolic x coords={SVD++, SVD, Slope One, CoClustering, NMF, Unit-Consistent (ours), NormalPredictor},
xtick = data,
scaled y ticks = false,
ymin=0, ymax=1.8,
nodes near coords = \rotatebox{90}{{\pgfmathprintnumber[fixed zerofill, precision=2]{\pgfplotspointmeta}}},
legend cell align=left,
legend style={
at={(1,1.05)},
anchor=south east,
column sep=1ex
}
]
\addplot[style={barBlack,fill=barGrey,mark=none}]
coordinates {(SVD++,0.8551)
(SVD, 0.8667)
(Slope One,0.9059)
(CoClustering, 0.9125)
(NMF,0.9166)
(Unit-Consistent (ours),0.9346)
(NormalPredictor,1.5047)};
\end{axis}
\end{tikzpicture}
\caption{The UC approach yields results comparable to state-of-the-art on the standard MovieLens-1m benchmark dataset according to RMSE. SVD and other unitarily-invariant optimization methods implicitly minimize RMSE, which is an essentially arbitrary measure of effectiveness. The fact that UC performs nearly identically according to this measure suggests that it is capturing something fundamental about the problem.}
\label{fig:movieLens}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
nodes near coords, x tick label style={rotate=45, anchor=east},
width = 1*\textwidth,
height = 7cm,
major x tick style = transparent,
ybar=1*\pgflinewidth,
bar width=20pt,
ymajorgrids = true,
ylabel = {RMSE},
symbolic x coords={CoClustering, SVD, SVD++, Unit-Consistent (ours), NMF, Slope One, NormalPredictor},
xtick = data,
scaled y ticks = false,
ymin=0, ymax=9,
nodes near coords = \rotatebox{90}{{\pgfmathprintnumber[fixed zerofill, precision=2]{\pgfplotspointmeta}}},
legend cell align=left,
legend style={
at={(1,1.05)},
anchor=south east,
column sep=1ex
}
]
\addplot[style={barBlack,fill=barGrey,mark=none}]
coordinates {(CoClustering, 4.28)
(SVD,4.53)
(SVD++, 4.8872)
(Unit-Consistent (ours),5.55)
(NMF,7.04)
(Slope One,7.23)
(NormalPredictor,7.27)};
\end{axis}
\end{tikzpicture}
\caption{The UC approach also yields results comparable to state-of-the-art on the standard Jester-2 benchmark dataset. Variations in performance for the various methods across the MovieLens, Jester-2, and other standard datasets suggest that RMSE is a relatively low-accuracy measure for this problem.
Despite the large variation in results from one benchmark dataset to another, the UC method consistently performs comparably to or better than state-of-the-art methods that are tailored to minimize RMSE.}
\label{fig:jester}
\end{figure}
The most significant conclusion to be drawn is that the results presented in Figures \ref{fig:movieLens} and \ref{fig:jester} are consistent with our hypothesis that the UC method should yield near-optimal performance according to any chosen measure of error. The results provided in this section are clearly too limited to be fully convincing, so we look forward to results from a more comprehensive battery of future tests involving larger and more diverse datasets.
\section{Discussion and Future Work}
We have discussed that it is possible to transform a matrix to a unique scale-canonical form in which the product of nonzero elements in each row and column is 1. We showed that this provides a means for performing unit-consistent matrix completion by replacing missing entries in the scale-canonical form with 1s, and then applying the inverse scaling to obtain the completed matrix. We then showed that this completion algorithm can be generalized for unit-consistent tensor completion.
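The completion procedure summarized above can be sketched in code. The alternating row/column rebalancing used here to reach the scale-canonical form (in log space, "product of known entries equals 1" becomes "the log-means are 0") is an assumed implementation strategy for illustration, not necessarily the authors' exact algorithm.

```python
import numpy as np

def unit_consistent_complete(A, mask, sweeps=200):
    """Sketch: scale A to canonical form, fill missing entries with 1,
    then undo the scaling.

    A    : matrix of positive ratings (values at unknown entries are ignored)
    mask : boolean matrix, True where the rating is known
    """
    A = np.asarray(A, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    m, n = A.shape
    # log space: product-of-known-entries = 1 per row/column <=> log-mean = 0
    logA = np.where(mask, np.log(np.where(mask, A, 1.0)), 0.0)
    r, c = np.zeros(m), np.zeros(n)
    for _ in range(sweeps):
        # subtract the mean of the known log-entries in each row, then column,
        # accumulating the subtracted amounts so the scaling can be inverted
        rm = np.array([logA[i, mask[i]].mean() if mask[i].any() else 0.0
                       for i in range(m)])
        logA -= np.where(mask, rm[:, None], 0.0)
        r += rm
        cm = np.array([logA[mask[:, j], j].mean() if mask[:, j].any() else 0.0
                       for j in range(n)])
        logA -= np.where(mask, cm[None, :], 0.0)
        c += cm
    # missing entries become 1 (log 0) in canonical form; invert the scaling
    return np.exp(np.where(mask, logA, 0.0) + r[:, None] + c[None, :])
```

On a rank-one example with known entries $A_{11}=3$, $A_{12}=4$, $A_{21}=6$, the unit-consistent prediction for the missing entry is $6\cdot 4/3 = 8$, and known entries are recovered exactly.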
We have argued that black-box AI systems, or generic methods to optimize an arbitrary metric (e.g., RMSE), are unnecessary for recommender system applications because a single criterion, unit consistency, is entirely sufficient to efficiently obtain unique solutions that have provably rigorous -- and intuitively expected -- properties. We have provided empirical evidence to support our hypothesis that our approach should be competitive with state-of-the-art alternatives according to any chosen metric -- {\em even methods that are explicitly designed to optimize that chosen metric}.
It can be verified that the intermediate logspace solution we employed in our matrix and tensor scaling steps can be applied directly to achieve {\em Translation Consistent} (TC) matrix/tensor completions. Future work will more thoroughly examine its various properties and the problems to which it may be applicable.
Finally, our philosophical approach to the recommender system problem was to identify properties that should be expected of any {\em reasonable} solution. We identified unit consistency as being such a property, and we showed that it alone is sufficient to yield a unique solution without need for generic optimization and/or heuristic AI methods. This is surprising because the problem seems at first glance to demand unknowable information deriving from individual human psychology, thus appearing to be unavoidably within the domain of AI. This example shows that recognition of a few constraints on the solution space can sometimes replace profound mystery with simple clarity.
\section{Acknowledgements}
The first author would like to thank the Electrical Engineering and Computer Science (EECS) Department of the University of Missouri-Columbia for an Undergraduate Research Stipend that helped to support this work.
\bibliographystyle{plain}
\section{Introduction}
Young star clusters, with typical masses of $M_{\rm cl}=10^{3-6}~{\rm M}_\odot$, indicate recent or ongoing violent star formation; their formation is often triggered by mergers and close encounters between galaxies. Only a fraction of these young massive star clusters evolve into old globular clusters, while the majority ($60-90\%$) will dissolve into the field star population within about 30\,Myr (e.g., de Grijs \& Parmentier 2007). In order to understand the formation and fate of these clusters, it is important to study them in detail, and obtain good estimates of the mass, stellar content, dynamics, and binary population.
The dynamical mass for a cluster in virial equilibrium, consisting of single, equal-mass stars, is given by:
\begin{equation}
M_{\rm dyn} = \eta \, \frac{ R_{\rm hm} \sigma_{\rm los}^2 }{G}
\end{equation}
(Spitzer 1987), where $R_{\rm hm}$ is the (projected) half-mass radius, $\sigma_{\rm los}$ the measured line-of-sight velocity dispersion, and $\eta \approx 9.75$. For unresolved clusters, $\sigma_{\rm los}$ is usually derived from spectral-line analysis, neglecting the presence of binaries. However, observations have shown that the majority of stars form in binary or multiple systems (e.g., Duquennoy \& Mayor 1991; Kouwenhoven et al. 2005, 2007; Kobulnicky et al. 2007). When binaries are present, $\sigma_{\rm los}$ includes not only the centre-of-mass motion of the binaries in the cluster potential, but also the velocity component of their internal orbital motion. This results in an overestimation of the velocity dispersion, and hence of $M_{\rm dyn}$.
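A minimal numerical sketch of equation (1); the value of $G$ in these cluster-friendly units is a standard constant added here as an assumption for illustration.

```python
G_PC = 4.302e-3   # G in pc * (km/s)^2 / M_sun (standard value; assumption here)
ETA = 9.75        # structure constant from equation (1)

def dynamical_mass(r_hm_pc, sigma_los_kms):
    """Evaluate equation (1): M_dyn = eta * R_hm * sigma_los^2 / G,
    returning solar masses for R_hm in pc and sigma_los in km/s."""
    return ETA * r_hm_pc * sigma_los_kms ** 2 / G_PC
```

For example, $R_{\rm hm}=1$\,pc and $\sigma_{\rm los}=5$\,km\,s$^{-1}$ give $M_{\rm dyn}\sim 5.7\times 10^4~{\rm M}_\odot$; the linear dependence on $R_{\rm hm}$ is immediate.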
\section{The effect of binaries on the dynamical mass $M_{\rm dyn}$ of a star cluster}
The systematic error introduced by the single-star assumption depends on the properties of the star cluster and of the binary population. This effect is most easily seen in the extreme case when $\sigma_{\rm los}$ is dominated by the orbital motion of binaries (which is the case for most Galactic OB associations). In this binary-dominated case, the measured $\sigma_{\rm los}$ is independent of $R_{\rm hm}$ and the true cluster mass, $M_{\rm cl}$. The inferred $M_{\rm dyn}$ from equation~(1) is then proportional to $R_{\rm hm}$. The dynamical mass overestimation is therefore $M_{\rm dyn}/M_{\rm cl} \propto R_{\rm hm}/M_{\rm cl}$. Sparse clusters are thus most sensitive to binaries; the systematic error in $M_{\rm dyn}$ could be as influential as that from the assumption of virial equilibrium. The cluster structure, stellar mass function and the presence of mass segregation also affect $M_{\rm dyn}$, but these are of much less importance (Kouwenhoven \& de Grijs, in prep.).
Whether or not the binaries affect $M_{\rm dyn}$ additionally depends on the properties of the binary population. Obviously, the most important parameters are the binary fraction (which determines the relative weight between singles and binaries when measuring $\sigma_{\rm los}$) and the semi-major axis, $a$, or period distribution (tight binaries have a larger orbital velocity component). As in a binary orbit $v_{\rm orb} \propto a^{-1/2}$, we have for the binary-dominated case $M_{\rm dyn} \propto \sigma_{\rm los}^2 \propto a^{-1}$. The distributions over eccentricity and mass ratio play a significantly smaller role than the binary fraction and semi-major axis distribution (Kouwenhoven \& de Grijs, in prep.).
Finally, we wish to stress that further systematic errors are introduced by observational selection effects. Firstly, $\sigma_{\rm los}$ is often derived from spectral lines of red giants; the velocities of these objects may or may not be representative of the cluster as a whole. Secondly, measurements in the cluster centre and near the tidal limit will result in dynamical masses differing by $\sim 50\%$; caution should therefore be exercised when interpreting the observational results.
\section{When is binarity important?}
The effect of binaries on the dynamical mass determination of a star cluster is important if the typical orbital velocity of a binary component is of order, or larger than, the velocity dispersion of the particles (single/binary) in the potential of the cluster. Our simulations indicate that, for example, the dynamical mass is overestimated by 70\% for $\sigma_{\rm los} = 1$\,km\,s$^{-1}$, 50\% for 2\,km\,s$^{-1}$, 20\% for 5\,km\,s$^{-1}$, and 5\% for 10\,km\,s$^{-1}$. Due to spectral resolution and stellar atmospheric turbulence, most {\em measured} velocity dispersions are $\sigma_{\rm los} \geqslant 5$\,km\,s$^{-1}$. Most of the known dynamical masses of massive star clusters are therefore only mildly affected by the presence of binaries. However, for low-mass star clusters, the spectroscopic velocity dispersion may result in an overestimation of the dynamical mass by a factor of two or more.
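For illustration only, the quoted simulation values can be turned into a rough interpolating function; the piecewise-linear interpolation between the four quoted points is our assumption, not part of the simulations.

```python
# Quoted simulation results: (sigma_los in km/s, fractional overestimate of M_dyn)
POINTS = [(1.0, 0.70), (2.0, 0.50), (5.0, 0.20), (10.0, 0.05)]

def overestimate(sigma_los):
    """Piecewise-linear interpolation between the quoted values
    (the interpolation itself is an illustrative assumption)."""
    if sigma_los <= POINTS[0][0]:
        return POINTS[0][1]
    for (s0, f0), (s1, f1) in zip(POINTS, POINTS[1:]):
        if sigma_los <= s1:
            return f0 + (f1 - f0) * (sigma_los - s0) / (s1 - s0)
    return POINTS[-1][1]
```

The trend makes the qualitative point above concrete: the bias falls steeply once the cluster's internal velocity dispersion exceeds typical binary orbital velocities.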
\section{Introduction}
Starting from a modular form $f \in S_k^{\text{new}}(\Gamma_0(N))$ with $k \geq 4$, we consider the two-dimensional $p$-adic Galois representation
$
V_f : G_{\mathbb{Q}} \longrightarrow \GL_2(F)
$
associated with $f$, where $G_{\mathbb{Q}}$ is the absolute Galois group $\Gal(\overline{\mathbb{Q}}/\mathbb{Q})$ of $\mathbb{Q}$ and $F$ is a $p$-adic field containing the Fourier coefficient of $f$. We are interested in studying the representation
\[V_{f,\chi} := V_f(k/2) \otimes \chi,\]
defined as the twist of the self-dual Tate twist $V_f(k/2)$ of $V_f$ by a Galois character $\chi$.
In \cite{BDP}, Bertolini, Darmon and Prasanna introduced a distinguished collection of algebraic cycles, called \emph{generalized Heegner cycles}, coming from graphs of isogenies between elliptic curves, lying in the product of the Kuga--Sato variety with a power of a fixed elliptic curve.
Later, Castella and Hsieh constructed in \cite{CH} Euler systems for generalized Heegner cycles; they proved, among other results, a theorem that establishes, under suitable hypotheses, the vanishing of the Selmer group $\text{Sel}(K,V_{f,\chi})$ associated with the representation $V_{f,\chi}$ as a $G_K:= \Gal(\overline{\mathbb{Q}}/K)$-representation, where $K$ is an imaginary quadratic field satisfying the so-called \emph{Heegner hypothesis} relative to $N$:
\begin{itemize}
\item all the primes dividing $N$ split in $K$.
\end{itemize}
This proves the Bloch--Kato conjecture in this case:
\begin{displaymath}
\dim_F \text{Sel}(K,V_{f,\chi}) = \text{ord}_{s=k/2} L(f,\chi,s) = 0.
\end{displaymath}
Building upon results from \cite{BDP}, the proof by Castella--Hsieh is based on a link between generalized Heegner cycles and a certain $p$-adic $L$-function attached to $f$, and on a generalization of Kolyvagin's method. We emphasize that the Heegner hypothesis is essential in \cite{CH}.
\subsubsection*{The quaternionic setting: relaxing the Heegner hypothesis}
What happens if we want to weaken the Heegner hypothesis? More explicitly, we would like to generalize the work of Castella and Hsieh to the case of an imaginary quadratic field $K$ that does not satisfy the classical Heegner hypothesis, but instead satisfies the following \emph{generalized Heegner hypothesis} relative to the level $N$ of the modular form $f$:
\begin{itemize}
\item no prime factor of $N$ ramifies in $K$; if a prime $\ell$ is inert in $K$, then $\ell^2$ does not divide $N$; and the number of prime factors of $N$ that are inert in $K$ is \emph{even}.
\end{itemize}
In this setting, we cannot work with Kuga--Sato varieties over classical modular curves, as we are not able to construct Heegner cycles on these varieties without the Heegner hypothesis. The right substitutes for modular curves in this context are Shimura curves, so it is natural to work with Kuga--Sato varieties fibered over Shimura curves.
In \cite{Brooks}, Brooks introduced a collection of generalized Heegner cycles on a Kuga--Sato variety over a Shimura curve $Sh$, coming from graphs of isogenies between abelian surfaces. The curve $Sh$ has the form of a quotient of the complex upper half plane under the action of a group that is determined by an order in an indefinite quaternion algebra over $\mathbb{Q}$ and parametrizes abelian surfaces.
Brooks proved results that generalize (some of) those in \cite{BDP} to this quaternionic setting.
Building on the work of Brooks, our goal is to generalize to a quaternionic context the key result of \cite{CH} relating their $p$-adic $L$-function to the system of generalized Heegner classes. As said before, this is a crucial point for the proof of the vanishing of the Selmer group. We construct a system of generalized Heegner cycles on the Kuga--Sato variety over the Shimura curve $Sh$ and a $p$-adic $L$-function defined as a $p$-adic measure given as a sum of values of a variation of $f$, as a modular form over our Shimura curve, at certain CM abelian surfaces. With these ingredients at hand, we will prove results on the Selmer group $\Sel(K, V_{f,\chi})$, generalizing several of Castella--Hsieh's results.
It is worth remarking that we expect the results of this paper to play a key role in the proof of a generalization of Castella's specialization results (\cite{Cas}) for Howard's big Heegner points in Hida families (\cite{How}) to the quaternionic big Heegner points introduced by Longo and Vigni (\cite{LV}). We plan to address this question in a future project.
\subsubsection*{Main results}
First of all, we fix some notation. Let $f \in S^{\text{new}}_{k}(\Gamma_0(N))$ be a newform of weight $k=2r+2 \geq 4$ and level $N$. Fix an odd prime $p \nmid N$ and a field embedding
$i_p : \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}_p$, where $\mathbb{C}_p$ is the completion of the chosen algebraic closure of $\mathbb{Q}_p$. Let
$F$ be a finite extension of $\mathbb{Q}_p$ containing the image of the Fourier coefficients of $f$ under $i_p$ and let $K$ be an imaginary
quadratic field of discriminant $D_K$ and ring of integers $\mathcal O_K$ in which $p$ splits as $p\mathcal O_K = \mathfrak{p}\overline{\mathfrak{p}}$, with $\mathfrak{p}$ determined by $i_p$. Let $\chi : \Gal(K_{c_0p^{\infty}}/K) \rightarrow \mathcal{O}^{\times}_F$ be a locally algebraic anticyclotomic character of infinity type $(j, -j)$ and conductor $c_0p^s\mathcal{O}_K$ (see section \ref{char}). Denote by $V_{f,\chi} := V_f(k/2) \otimes \chi$ the twist of $V_{f}(k/2)$ by $\chi$ seen as a representation of $\Gal(\overline{\mathbb{Q}}/K)$, by $L(f,\chi,s)$ the associated Rankin $L$-series and by $\Sel(K,V_{f,\chi})$ the Bloch--Kato Selmer group associated with $V_{f,\chi}$ and $K$. Assume that:
\begin{itemize}[noitemsep]
\item[1.] $p \nmid 2N\phi(N^+)$ (where $\phi$ is Euler's function);
\item[2.] $c_0$ is prime to $N$, i.e., the conductor of $\chi$ is prime to $N$;
\item[3.] either $D_K > 3$ is odd or $8 \mid D_K$;
\item[4.] $p = \mathfrak{p} \overline{\mathfrak{p}}$ splits in $K$.
\end{itemize}
Moreover, assume that $K$ satisfies the generalized Heegner hypothesis relative to $N$ as described above, and factor $N$ as $N = N^+N^-$, where $N^+$ is a product of primes that split in $K$ and $N^-$ is a (necessarily square-free) product of an \emph{even} number of primes that are inert in $K$.
Our first theorem on Selmer groups, which corresponds to Theorem \ref{selmer-1}, is a vanishing result.
\begin{teoA}
\label{A}
If $f$ is $p$-ordinary and $L(f,\chi,k/2)\neq 0$, then
\begin{displaymath}
\dim_F\Sel(K,V_{f,\chi}) = 0.
\end{displaymath}
\end{teoA}
Denote by $\epsilon(V_{f,\chi})$ the sign of the functional equation of $L(f,\chi,s)$. Our second theorem on Selmer groups, which corresponds to Theorem \ref{selmer-2}, is a one-dimensionality result.
\begin{teoB}
\label{B}
If $\epsilon(V_{f,\chi})=-1$ and $z_{\chi} \neq 0$, then
\begin{displaymath}
\Sel(K,V_{f,\chi}) = F\cdot z_{\chi}.
\end{displaymath}
\end{teoB}
In the statement above, $z_{\chi}$ is a suitable cohomology class in $H^1(K,V_{f,\chi})$ that comes from an Euler system of generalized Heegner classes.
Let us conclude this introduction by briefly sketching the structure of the paper.
In \S 2 we
introduce the Shimura curve $Sh$ we will work with and also the notions of modular forms and $p$-adic modular forms over Shimura curves.
In \S 3 we review Serre--Tate theory and study deformations of abelian surfaces, which we will use to get power series expansions at ordinary CM points for modular forms over Shimura curves (for which $q$-expansions are not available).
In \S 4 we define an analytic anticyclotomic $p$-adic $L$-function $\LL_{f, \psi}$ as a measure on the Galois group $\Gal(K_{p^{\infty}}/K)$ with values in the ring of Witt vectors $\W=W(\overline{\mathbb{F}}_p)$, where $K_{p^{\infty}}= \cup_{n} K_{p^n}$ for $K_{p^n}$ the ring class field of conductor $p^{n}$ of $K$, $\overline{\mathbb{F}}_p$ is an algebraic closure of the field $\mathbb{F}_p$ with $p$ elements and $\psi$ is an anticyclotomic Hecke character of infinity type $(k/2,-k/2)$ and conductor $c_0\mathcal{O}_K$ with $(c_0, pN^+) = 1$. We close the chapter with an interpolation formula for our $p$-adic $L$-function, which we will use later to obtain the reciprocity law of \S 7, relating the value of $\LL_{f, \psi}$ at $\phi$ to the central critical value $L(f,\chi,k/2)$, where $\phi$ is an anticyclotomic Hecke character of infinity type $(n,-n)$ with $n \geq 0$ and $p$-power conductor such that $\chi = \psi \phi$.
In \S 5, following Brooks, we introduce a family of generalized Heegner cycles on the generalized Kuga--Sato variety over our Shimura curve $Sh$. More precisely, these cycles live in a Chow group of the generalized Kuga--Sato variety $\mathcal{X}_r = \mathcal{A}^r \times A^r$, where $\mathcal{A}$ is the universal object of the fine moduli problem associated with $Sh$ and $A$ is a fixed abelian surface with CM by $K$. Then we apply a $p$-adic Abel--Jacobi map to obtain cohomology classes from generalized Heegner cycles. In this way, we construct a system of generalized Heegner classes associated with $f$ and $\chi$, and indexed by fractional ideals of $K$, for which we prove compatibility properties.
In \S 6 we establish a relation between values of $\LL_{f, \psi}$ at Galois characters $\phi$ of infinity type $(-k/2-j,k/2+j)$ and Bloch--Kato logarithms of generalized Heegner cycles associated with $\chi$ of infinity type $(j,-j)$, with $ -k/2 < j < k/2$, where $\chi = \psi^{-1} \phi^{-1}$. This relation, for which we refer to Theorem \ref{GZ}, has the form
\[
\LL_{f,\psi}(\phi) = \text{(something)} \cdot \braket{\log_{\mathfrak{p}}(z_{\chi}), *},
\]
where ``something'' is an explicit non-zero coefficient whose precise form is less important than the main terms in the formula. The key ingredient to establish this Gross--Zagier type formula is the work of Brooks: we link our $p$-adic $L$-function to the differential operator $\theta = t \frac{d}{dt}$ on the Serre--Tate coordinates and then we use Brooks's results to obtain a formula suitably relating $\theta$ to our generalized Heegner cycles.
Finally, in \S 7 we use the previous formula and the interpolation property to establish, under a $p$-ordinarity assumption on $f$, a reciprocity law relating the analytic $p$-adic $L$-function $\LL_{f,\psi}$ to an algebraic $p$-adic $L$-function obtained as a sort of image of an Iwasawa cohomology class $\boldsymbol{z}_f \in H^1\bigl(K_{p^{\infty}},V_f(k/2)\bigr)$, obtained as an inverse limit of generalized Heegner classes, under a big logarithm map.
This reciprocity law and the construction of an anticyclotomic Euler system associated with generalized Heegner classes, combined with an extension of Kolyvagin's method for anticyclotomic Euler systems developed in \cite{CH}, lead to the proof of Theorem A. The proof of Theorem B rests instead on the extension of Kolyvagin's method applied to another anticyclotomic Euler system associated with generalized Heegner classes.
\subsubsection*{Acknowledgements}
The contents of this paper are part of my PhD thesis at Universit\`a degli Studi di Genova. I am very grateful to my advisor, Stefano Vigni, for suggesting this problem, for his support and encouragement and for making every
meeting a pleasant and friendly time. I especially thank him for his patience, his enthusiasm and, above all, for the faith he always had in me.
I would also like to thank Francesc Castella, Ming-Lun Hsieh and Matteo Longo for enlightening conversations and correspondence.
\subsubsection*{Notation}
If $F$ is a number field or a local field, whose ring of integers will be denoted by $\mathcal O_F$, we fix an algebraic closure $\overline{F}$ of $F$ and write $G_F$ for the absolute Galois group $\Gal(\overline{F}/F)$ of $F$.
We denote by $\mathbb{A}_F$ the adele ring of a number field $F$ and by $\hat{F}$ the ring of finite adeles of $F$.
For any prime number $p$, we fix an immersion $i_p : \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}_p$, where $\mathbb{C}_p$ is the completion of the chosen algebraic closure of $\mathbb{Q}_p$.
For an imaginary quadratic field $K$ and an integer $n\geq1$, we denote by $K_n$ the ring class field of $K$ of conductor $n$; in particular, $K_1$ is the Hilbert class field of $K$. We denote also by $K^{\text{ab}}$ the maximal abelian extension of $K$.
For an integer $n\geq1$ we write $\zeta_{n}$ for the primitive $n$-th root of unity $e^{\frac{2\pi i}{n}}\in\mathbb C^\times$ and we denote by $\chi_{\cyc}$ the $p$-adic cyclotomic character.
As usual, the notation for pull-back will be a high asterisk $^{*}$ while the notation for push-forward will be a down asterisk $_{*}$. Finally, unadorned tensor products are always taken over $\mathbb{Z}$.
\section{Shimura curves, CM points and modular forms} \label{shimura-chapter}
In this section we introduce the Shimura curves, attached to quaternion algebras over $\mathbb Q$, we will work with, we construct certain CM points on Shimura curves and we define modular forms and $p$-adic modular forms on Shimura curves.
\subsection{Shimura curves}
Let $K$ be an imaginary quadratic field of discriminant $D_K$ and consider a positive integer $N$, which will be the level of our modular form, such that $(N,D_K)=1$. Suppose that $K$ satisfies the {\bf generalized Heegner hypothesis} relative to $N$:
\begin{itemize}
\item no prime factor of $N$ ramifies in $K$; if a prime $\ell$ is inert in $K$, then $\ell^2$ does not divide $N$; and the number of prime factors of $N$ that are inert in $K$ is \emph{even}.
\end{itemize}
Factor $N$ as a product $N=N^+ N^-$ where $N^+$ is a product of primes that split in $K$ and $N^-$ is a (necessarily square-free)
product of (an even number of) primes that are inert in $K$.
Let $B$ be the indefinite rational quaternion algebra over $\mathbb{Q}$ of discriminant $D=N^-$ and fix a prime $p\nmid N$ that splits in $K$ and $B$.
Fix isomorphisms $\Phi_{\ell} : B_{\ell} \cong M_2(\mathbb{Q}_{\ell})$ for each prime $\ell \nmid D$ and
denote by $\mathcal{O}_B$ a maximal order of $B$ such that each $\Phi_{\ell}$ induces an isomorphism $\mathcal{O}_B \otimes \mathbb{Z}_{\ell} \cong M_2(\mathbb{Z}_{\ell})$.
Fix also an isomorphism $\Phi_{\infty}: B \otimes \mathbb{R} \cong M_2(\mathbb{R})$.
Consider the map
\[
\pi_{N^+} : \hat{\mathcal{O}}_{B}^{\times} \twoheadrightarrow\prod_{\ell \mid N^+} (\mathcal{O}_B \otimes \mathbb{Z}_{\ell})^{\times} \cong \prod_{\ell \mid N^+} \GL_2(\mathbb{Z}_{\ell}) \twoheadrightarrow \GL_2(\mathbb{Z}/N^+\mathbb{Z}).
\]
Denote by $\hat{\Gamma}_{1,N^+}$ the open compact subgroup of $\hat{\mathcal{O}}_{B}^{\times}$ composed of the elements $b \in \hat{\mathcal{O}}_{B}^{\times}$ such that
$\pi_{N^+}(b) \in \left\lbrace \left(\begin{smallmatrix}
* & * \\ 0 & 1
\end{smallmatrix} \right) \in \GL_2(\mathbb{Z}/N^+\mathbb{Z})
\right\rbrace.$
Consider
the space of double cosets
\[
X_{N^+} := B^{\times}
\backslash \mathcal{H}^{\pm} \times \hat{B}^{\times} / \hat{\Gamma}_{1,N^+},
\]
where $\mathcal{H}^{\pm}= \mathbb{C} - \mathbb{R}$ is the disjoint union of the upper and lower complex half planes.
Here $\hat{\Gamma}_{1,N^+}$ acts naturally on the right on $\hat{B}^{\times}$ by right multiplication, while $B^{\times}$ acts on the left on $\hat{B}^{\times}$ through the diagonal embedding $B \hookrightarrow\hat{B}$ and on $\mathcal{H}^{\pm}$ under the fixed isomorphism $\Phi_{\infty}$ by the usual linear fractional transformations
\[
\left( \begin{smallmatrix}
a & b \\ c & d
\end{smallmatrix} \right) \cdot \tau := \frac{a\tau +b}{c\tau + d}.
\]
This is the Shimura curve associated with the Shimura datum given by the algebraic group $G$ over $\mathbb{Q}$ with $G(\mathbb{Q}) = B^{\times}$, the Hermitian symmetric domain $X= \mathcal{H}^{\pm}$ and the compact open subgroup $\hat{\Gamma}_{1,N^+}$. For more details, see \cite{Mi90}.
Because $B$ is indefinite, there is a bijection
\[
X_{N^+} \cong \mathcal{H}/\Gamma_{1,N^+},
\]
where $\mathcal{H}$ is the classical upper half plane and $\Gamma_{1,N^+}$ is the subgroup of matrices in
$\Phi_{\infty}((\hat{\Gamma}_{1,N^+} \cap B)^{\times})$ of determinant $1$ (\cite[\S 1.3]{HMT}). This bijection endows $X_{N^+}$ with a Riemann surface structure and gives, as a consequence, an analytic description of $X_{N^+}$.
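This bijection is a standard consequence of strong approximation; we sketch the argument, writing $B^{\times}_{+}$ for the group of elements of $B^{\times}$ with positive reduced norm and $\mathrm{nrd}$ for the reduced norm (notation used only here). Since $B$ is indefinite, the group of norm-one elements of $B$ satisfies strong approximation; combining this with $\mathrm{nrd}(\hat{\Gamma}_{1,N^+}) = \hat{\mathbb{Z}}^{\times}$ and $\mathbb{A}_f^{\times} = \mathbb{Q}^{\times}_{>0}\hat{\mathbb{Z}}^{\times}$ yields $\hat{B}^{\times} = B^{\times}_{+}\hat{\Gamma}_{1,N^+}$, whence
\[
B^{\times} \backslash \mathcal{H}^{\pm} \times \hat{B}^{\times} / \hat{\Gamma}_{1,N^+} \cong B^{\times}_{+} \backslash \mathcal{H} \times \hat{B}^{\times} / \hat{\Gamma}_{1,N^+} \cong \bigl(B^{\times}_{+} \cap \hat{\Gamma}_{1,N^+}\bigr)\backslash \mathcal{H} = \mathcal{H}/\Gamma_{1,N^+},
\]
the last equality because an element of $B^{\times}_{+} \cap \hat{\Gamma}_{1,N^+}$ has reduced norm in $\mathbb{Q}^{\times}_{>0} \cap \hat{\mathbb{Z}}^{\times} = \{1\}$.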
The coset space $X_{N^+}$ admits a model over $\mathbb{Q}$, which is the fine moduli scheme classifying abelian surfaces with quaternionic multiplication by $\mathcal{O}_B$ and certain level structures.
\begin{defi}
Let $S$ be a $\mathbb{Z}[1/D]$-scheme. An \textbf{abelian surface with quaternionic multiplication by $\mathcal{O}_B$} (abelian surface with QM, for short) \textbf{over $S$} is a pair $(A,i)$ where
\begin{enumerate}[noitemsep]
\item $A$ is an abelian scheme $A/S$ of relative dimension 2;
\item $i$ is an optimal inclusion $i: \mathcal{O}_B \hookrightarrow \End_S(A)$ giving an action of $\mathcal{O}_B$ on $A$.
\end{enumerate}
A morphism of abelian surfaces with QM is a morphism of abelian surfaces that respects the action of $\mathcal{O}_B$.
\end{defi}
Abelian surfaces with quaternionic multiplication are often called \emph{false elliptic curves}.
\begin{defi}
Let $N^+ >0$ be an integer prime to $D$.
A \textbf{level $V_1(N^+)$-structure}, or an arithmetic level $N^+$ structure, on a QM abelian surface $(A, i)$ is an inclusion
\[
\boldsymbol{\mu}_{N^+} \times \boldsymbol{\mu}_{N^+} \mbox{\;$\lhook\joinrel\longrightarrow$\;} A[N^+]
\]
of group schemes over $S$, commuting with the action of $\mathcal{O}_B$, where $\boldsymbol{\mu}_{N^+}$ denotes the group scheme of $N^+$-th roots of unity. The action of $\mathcal{O}_B$ on the left hand side is via the isomorphism $\mathcal{O}_B \otimes \mathbb{Z}/N^+\mathbb{Z} \cong M_2(\mathbb{Z}/N^+\mathbb{Z})$ induced by the chosen isomorphisms $\Phi_{\ell} : B_{\ell} \cong M_2(\mathbb{Q}_{\ell})$, through which one has $\mathcal{O}_B \otimes \mathbb{Z}_{\ell} \cong M_2(\mathbb{Z}_{\ell})$ for each $\ell \mid N^+$.
\end{defi}
A morphism of QM abelian surfaces with $V_1(N^+)$-level structure is a morphism of QM abelian surfaces that respects the level structures.
If $A$ is an abelian surface over an algebraically closed field $k$, a $V_1(N^+)$-level structure can be thought
of as an orbit of full level $N^+$ structures, i.e., isomorphisms
$
\mathcal{O}_B \otimes \mathbb{Z}/N^+\mathbb{Z} \cong A[N^+]
$
commuting with the action of $\mathcal{O}_B$, under the natural action of the subgroup $\bigl\{\left(\begin{smallmatrix}
* & * \\ 0 & 1
\end{smallmatrix} \right) \in \GL_2(\mathbb{Z}/N^+\mathbb{Z})
\bigr\}$ of $\GL_2(\mathbb{Z}/N^+\mathbb{Z})$. See \cite[\S 2.2]{Brooks} for details.
The moduli problem of QM abelian surfaces with $V_1(N^+)$-level structure is representable, as asserted by
\begin{teor} \label{moduli-thm}
For $N^+ > 3$, the moduli problem that assigns to a $\mathbb{Z}[1/DN^+]$-scheme $S$ the set of isomorphism classes of QM abelian surfaces over $S$ with $V_1(N^+)$-level
structure is representable by a smooth proper $\mathbb{Z}[1/DN^+]$-scheme $X$.
\end{teor}
For details, see \cite[\S 2.2. and \S 2.3]{Brooks}, \cite[\S 2]{Buz} or \cite[\S 2 and \S 3]{Kas}.
The complex points of $X$ are naturally identified with the compact Riemann surface $X_{N^+} \cong \mathcal{H}/\Gamma_{1,N^+}$.
The $\mathbb{Z}[1/DN^+]$-scheme $X$ from Theorem \ref{moduli-thm} is called the \textbf{Shimura curve of level $V_1(N^+)$} associated to the indefinite quaternion algebra $B$ and we will denote it by $X_{N^+}$, using the same notation for the scheme, the Riemann surface and the double coset space.
We denote by $\pi: \mathcal{A} \rightarrow X_{N^+}$ the universal object of the moduli problem, i.e. the \emph{universal QM abelian surface} over $X_{N^+}$.
For each geometric point $x : \Spec(L) \rightarrow X_{N^+}$, the fiber $\mathcal{A}_x := \mathcal{A} \times_x \Spec(L)$ is an abelian surface with QM by $\mathcal{O}_B$ and $V_1(N^+)$-level structure defined over $L$, representing the isomorphism class that corresponds to the point $x$.
\subsection{Hecke operators} \label{Heckeop}
The Shimura curve $X_{N^+}$ comes equipped with a ring of Hecke correspondences, which can be introduced by using the adelic description of $X_{N^+}$. See, for example, \cite[\S 1.5]{HMT}; here the construction is for the Shimura curve relative to level structures ``of type $\Gamma_0$'', but it can be done also in our case.
In terms of abelian surfaces, the Hecke operators act in the following way.
For a prime $\ell$, a QM abelian surface $A$ over a field $k$ of characteristic prime to $\ell$ has $\ell + 1$ cyclic $\mathcal{O}_B$-submodules annihilated by $\ell$. Denote them by $C_0, \dots , C_{\ell}$ and consider the isogenies $\psi_i : A \twoheadrightarrow A/C_i$ of QM abelian surfaces. If $\nu_A$ is a $V_1(N^+)$-level structure on $A$ and $\ell \nmid N^+$, then $\psi_i$ induces a $V_1(N^+)$-level structure $\psi_i \circ \nu_A$ on $A/C_i$. If $\ell \nmid N^+D$, the ``good'' Hecke operator $T_{\ell}$ can be described by
\[
T_{\ell} (A,\iota_A,\nu_A)= \sum_{i=0}^{\ell} (A/C_i,\iota_i,\nu_i).
\]
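The count of $\ell+1$ submodules can be made explicit. Since $\ell \nmid D$, the isomorphism $\Phi_{\ell}$ identifies $\mathcal{O}_B \otimes \mathbb{Z}/\ell\mathbb{Z}$ with $M_2(\mathbb{Z}/\ell\mathbb{Z})$, and by Morita equivalence the $\mathcal{O}_B$-submodules of $A[\ell]$ correspond to the subgroups of $eA[\ell] \cong (\mathbb{Z}/\ell\mathbb{Z})^2$, where $e = \left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right)$; the cyclic ones correspond to the lines in $eA[\ell]$, which are parametrized by
\[
\mathbb{P}^1(\mathbb{F}_{\ell}), \qquad \#\mathbb{P}^1(\mathbb{F}_{\ell}) = \frac{\ell^2-1}{\ell-1} = \ell + 1.
\]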
For more details, see \S \ref{K-S} and \cite[\S 3.6]{Brooks}.
\subsection{Igusa tower}
We are interested in working with $p$-adic modular forms over our Shimura curve, which are defined analogously to Katz's generalized $p$-adic modular forms. Therefore we want to work on a cover of the ordinary locus of the Shimura curve.
Fix a prime $p \nmid N^+D$. Since ${X_{N^+}}$ is a scheme over $\mathbb{Z}[1/N^+D]$, it can be viewed as a scheme over $\mathbb{Z}_{(p)}$.
For simplicity, denote by $Sh$ the curve
${X_{N^+}}_{/\mathbb{Z}_{(p)}}$. Since $Sh$ is a fine moduli scheme for QM abelian surfaces over $\mathbb{Z}_{(p)}$-schemes with level structures, there is a universal abelian surface $\mathcal{A} \rightarrow Sh$, which is the one associated with $X_{N^+}$ but tensored with $\mathbb{Z}_{(p)}$ over $\mathbb{Z}[1/DN^+]$.
Recall that a QM abelian surface $A$ over a field $k$ of characteristic $p$ is said to be \emph{ordinary} if $A[p](\overline{k}) \cong (\mathbb{Z}/p\mathbb{Z})^{2}$, and \emph{supersingular} otherwise. Indeed, a QM abelian surface in characteristic $p$ is either ordinary or supersingular; equivalently, it is isogenous either to a product of ordinary elliptic curves or to a product of supersingular elliptic curves, respectively.
Consider the \emph{ordinary locus} $Sh^{\ord}$ of $Sh$, i.e., the locus where the Hasse invariant does not vanish. This is the scheme obtained by removing from the fiber of $Sh$ at $p$ the supersingular points, i.e., the points corresponding in the moduli interpretation to abelian surfaces with supersingular reduction modulo $p$. See \cite{Kas} for details about the ordinary locus and the Hasse invariant.
Let $\mathcal{A}^{\ord} \rightarrow Sh^{\ord}$ be the universal ordinary QM abelian surface over $Sh^{\ord}$, that is the fiber product $\mathcal{A}^{\ord}= \mathcal{A} \times_{Sh} Sh^{\ord}$.
Consider the functor $ I_n : \Sch_{/{Sh^{\ord}}} \rightarrow \Sets$ that takes an $Sh^{\ord}$-scheme $S$ to the set of closed immersions $\boldsymbol{\mu}_{p^n} \times \boldsymbol{\mu}_{p^n} \hookrightarrow \mathcal{A}^{\ord}[p^n]$ of finite flat group schemes over $S$ respecting the $\mathcal{O}_B$-action. This functor is representable by a scheme ${I_n}_{/Sh^{\ord}}$. Then ${I_n}_{/\mathbb{Z}_{(p)}}$ classifies quadruples $(A,\iota,\nu_{N^+},\nu_{p^n})$, where $A$ is an abelian surface, $\iota$ a quaternionic multiplication, $\nu_{N^+}$ a $V_1(N^+)$-level structure and $\nu_{p^n}$ an $\mathcal{O}_B$-immersion $\boldsymbol{\mu}_{p^n} \times \boldsymbol{\mu}_{p^n} \hookrightarrow A[p^n]$.
There is a tower
\[
\cdots \longrightarrow I_{n+1} \longrightarrow I_n \longrightarrow I_{n-1} \longrightarrow \cdots.
\]
Consider the formal scheme $I_{/\mathbb{Z}_{(p)}}:= \varprojlim_n {I_n}_{/\mathbb{Z}_{(p)}}$.
This formal scheme parametrizes compatible sequences of isomorphism classes of quadruples $(A,\iota,\nu_{N^+},\nu_{p^n})$, where $A$ is an ordinary abelian surface, $\iota$ a quaternionic multiplication and $\nu_{N^+}$, $\nu_{p^n}$ a $V_1(N^+)$- and a $V_1(p^n)$-level structure, respectively.
But a sequence of compatible $V_1(p^n)$-level structures is the same as a $V_1(p^{\infty})$-level structure, that is, an immersion $\nu_{p^{\infty}} : \boldsymbol{\mu}_{p^{\infty}} \times \boldsymbol{\mu}_{p^{\infty}} \hookrightarrow A[p^{\infty}]$.
Therefore this tower
parametrizes isomorphism classes of quadruples $(A,\iota,\nu_{N^+},\nu_{p^{\infty}})$.
There is a bijection
$
I(\mathbb{C}) \cong \varprojlim_n X_{N^+p^n}(\mathbb{C}),
$
between complex points of $I$ and compatible sequences $\{ x_n \}_n$ of complex points $x_n = (A,i,\nu_{N^+},\nu_{p^n}) \in X_{N^+p^n}(\mathbb{C})$.
See \cite{Hi04} for details about Igusa schemes in the case of modular curves (\S 6.2.12) and, more generally, for Shimura varieties (Ch.~8). See also \cite{Hi09}.
\subsection{CM points on Shimura curves}
In this section we will construct a collection of CM points in our Shimura curves, i.e., points that correspond to abelian surfaces with complex multiplication, indexed by fractional ideals of orders in an imaginary quadratic field.
We denote again by $Sh$ the curve $X_{N^+}$ seen as a scheme over $\mathbb{Z}_{(p)}$ and by $\mathcal{A} \rightarrow Sh$ the universal abelian surface over $Sh$.
\subsubsection{Abelian surfaces with QM and CM over $\mathbb{C}$}
\begin{teor}
Let $(A,i)$ be an abelian surface with QM by $\mathcal{O}_B$ over $\mathbb{C}$. Then either
\begin{enumerate}[noitemsep]
\item $A$ is simple and $\End^0(A) := \End(A) \otimes \mathbb{Q} = B$, or
\item $A$ is not simple, $A \sim E^2$ is isogenous to the square of an elliptic curve $E$ with CM by an imaginary quadratic field $K$ which embeds in $B$, and $\End^0(A) \cong M_2(K)$.
\end{enumerate}
\end{teor}
We are mainly interested in the second case of the previous theorem. Abelian surfaces with QM satisfying that second condition are said to have \textbf{complex multiplication} (CM for short) by $K$.
Suppose that $(A,i)$ is an abelian surface over $\mathbb{C}$ with QM by $\mathcal{O}_B$ and $V_1(N^+p^n)$-level structure. Then the ring
\[
\End_{\mathcal{O}_B}(A) := \bigl\lbrace f \in \End(A) \mid f \circ i(b) = i(b) \circ f\ \text{for all}\ b \in \mathcal{O}_B \bigr\rbrace
\]
is either $\mathbb{Z}$ or an order in an imaginary quadratic field $K$. If $K$ is an imaginary quadratic field and $\End_{\mathcal{O}_B}(A) = \mathcal{O}_c$, where $\mathcal{O}_c$ is the order of conductor $c$ in $\mathcal{O}_K$, then $A$ is said to have
complex multiplication
by $\mathcal{O}_c$
and the point $P=[(A,i)] \in X_{N^+p^n}(\mathbb{C})$ is said to be a \textbf{CM point of conductor $c$}.
\subsubsection{Products of CM elliptic curves} \label{prodell}
Start with an elliptic curve $E$ over $\mathbb{C}$ with complex multiplication by $\mathcal{O}_K$, say $E:= \mathbb{C} / \mathcal{O}_K$. Consider on $E$ a $\Gamma_1(M)^{\text{arit}}$-level structure given by a morphism
\[
\mu_{M} : \boldsymbol{\mu}_{M} \mbox{\;$\lhook\joinrel\longrightarrow$\;} E[M],
\]
where $M>3$ is an integer prime to $D$.
Consider now the self-product $A := E \times E$, which is an abelian surface over $\mathbb{C}$; its endomorphism ring is $\End(A) \cong M_2(\End(E)) \cong M_2(\mathcal{O}_K)$.
Since $K$ splits $B$, because of the generalized Heegner hypothesis, we can embed $K$ in $B$ and choose a basis $\left\lbrace b_1, b_2 \right\rbrace $ of $B$ over $K$ with $b_1, b_2 \in \mathcal{O}_B$. Then we have an immersion $B \hookrightarrow M_2(K) = M_2(\End^0(E)) = \End^0(A)$ such that $\mathcal{O}_B \hookrightarrow M_2(\mathcal{O}_K) = M_2(\End(E)) = \End(A)$. See \cite[\S 2]{Mi79}. Hence $ \iota : \mathcal{O}_B \hookrightarrow \End(A)$ is a quaternionic multiplication for $A$.
Consider the isomorphism $i_K : B \otimes_{\mathbb{Q}} K \cong M_2(K)$ induced by $\iota$ and put
\[
e:= i_K^{-1}\left( \left( \begin{smallmatrix}
1 & 0 \\ 0 & 0
\end{smallmatrix}
\right) \right) \in B \otimes_{\mathbb{Q}} K,
\]
which is an idempotent such that $e^*=e$. Then the decomposition of $A$ is induced by
\[
A = eA \oplus (1-e)A = \left( \begin{smallmatrix}
1 & 0 \\ 0 & 0
\end{smallmatrix}
\right)
A \oplus \left( \begin{smallmatrix}
0 & 0 \\ 0 & 1
\end{smallmatrix}
\right)
A \cong E \times E,
\]
where multiplication by $\alpha := \left( \begin{smallmatrix}
0 & 1 \\ 1 & 0
\end{smallmatrix}
\right)$ gives an isomorphism $eA \stackrel{\cdot \alpha}{\cong} (1-e)A$.
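The fact that multiplication by $\alpha$ interchanges the two summands is an elementary matrix computation:
\[
\alpha e =
\left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)
\left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right)
=
\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)
=
\left(\begin{smallmatrix} 0 & 0 \\ 0 & 1 \end{smallmatrix}\right)
\left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)
= (1-e)\alpha,
\]
so multiplication by $\alpha$ maps $eA$ into $(1-e)A$; since $\alpha^2 = 1$, it is an isomorphism $eA \cong (1-e)A$, with inverse again given by $\alpha$.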
So $A[M] = eA[M] \oplus (1-e)A[M]$. The choice of a level structure on $eA[M] = E[M]$ induces a $V_1(M)$-level structure on $A[M]$, because of the required compatibility of the level structure with the action of $\mathcal{O}_B$ (and consequently of $B$). See also the last lines of \cite[\S 2.2]{Brooks}. Thus, the fixed level structure $\mu_{M}$ on $E$ induces a $V_1(M)$-level structure
\[
\nu_{M} : \boldsymbol{\mu}_{M} \times \boldsymbol{\mu}_{M} \mbox{\;$\lhook\joinrel\longrightarrow$\;} A[M]
\]
on $A$.
Hence, starting from a $\Gamma_1(N^+p^{n})^{\text{arit}}$-level structure on $E$, we obtain a quadruple $(A,\iota,\nu_{N^+}, \nu_{p^{n}})$ which determines a CM point in $X_{N^+p^n}(\mathbb{C})$.
Starting from a $\Gamma_1(N^+p^{\infty})^{\text{arit}}$-level structure on $E$, we obtain a quadruple $(A,\iota,\nu_{N^+}, \nu_{p^{\infty}})$ which can be seen as a compatible sequence of CM points in the Shimura tower $X_{N^+p^n}$.
Start now with the elliptic curve $E_c = \mathbb{C} / \mathcal{O}_c$ over $\mathbb{C}$ with complex multiplication by an order $\mathcal{O}_c$ of $K$, with $(c,N)=1$.
The isogeny
\[
\begin{split}
E = \mathbb{C} / \mathcal{O}_K &\mbox{\;$\relbar\joinrel\twoheadrightarrow$\;} E_c =\mathbb{C} / \mathcal{O}_c\\
\overline{z} &\longmapsto \overline{c z},
\end{split}
\]
induces an isogeny
\[
\phi_c : A = E \times E \mbox{\;$\relbar\joinrel\twoheadrightarrow$\;} A_c:= E_c \times E_c
\]
of complex abelian surfaces.
Take on $A_c$ the quaternionic multiplication $\iota_c : \mathcal{O}_B \hookrightarrow \End(A_c)$ determined by compatibility with $\phi_c$:
\[
\iota_c(b) (\phi_c(a)) = \phi_c(\iota(b) a),
\]
for all $b \in \mathcal{O}_B$ and $a \in A$.
As before, a $\Gamma_1(M)^{\text{arit}}$-level structure $\mu_{c,M}$ on $E_c$, with $M$ prime to $D$, induces a $V_1(M)$-level structure $\nu_{c,M}$ on $A_c$.
Therefore, starting from a $\Gamma_1(N^+p^{n})^{\text{arit}}$-level structure on $E_c$, we obtain a quadruple $(A_c,\iota_c,\nu_{N^+}, \nu_{p^{n}})$ which determines a CM point of conductor $c$ in $X_{N^+p^n}(\mathbb{C})$.
Starting from a $\Gamma_1(N^+p^{\infty})^{\text{arit}}$-level structure on $E_c$, we obtain a quadruple $(A_c,\iota_c,\nu_{c,N^+}, \nu_{c,p^{\infty}})$ which can be seen as a compatible sequence of CM points in the Shimura tower $X_{N^+p^n}$.
Note that the isogeny $\phi_c$ does not necessarily respect the chosen level structures if $p$ divides $c$.
\subsubsection{The action of $\Pic(\mathcal{O}_c)$}
Denote by $\Pic(\mathcal{O}_c)$ the Picard group of the order $\mathcal{O}_c$ of conductor $c$ of an imaginary quadratic field $K$, that is
\[
\Pic(\mathcal{O}_c) = K^{\times} \backslash \hat{K}^{\times} / \hat{\mathcal{O}}_c^{\times} = I_c(\mathcal{O}_c) / P_c(\mathcal{O}_c),
\]
where
$I_c(\mathcal{O}_c)$ is the group of fractional ideals of $\mathcal{O}_c$ coprime to $c$ and $P_c(\mathcal{O}_c)$ is the subgroup of $I_c(\mathcal{O}_c)$ of principal fractional ideals.
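For orientation, we recall the classical formula (see, e.g., Cox's book \emph{Primes of the form $x^2+ny^2$}, Theorem 7.24) relating the order of $\Pic(\mathcal{O}_c)$ to the class number $h_K$ of $K$:
\[
\#\Pic(\mathcal{O}_c) = \frac{h_K\, c}{[\mathcal{O}_K^{\times} : \mathcal{O}_c^{\times}]} \prod_{\ell \mid c} \left( 1 - \left( \frac{D_K}{\ell} \right) \frac{1}{\ell} \right),
\]
where the product runs over the primes $\ell$ dividing $c$ and $\left( \frac{D_K}{\ell} \right)$ denotes the Kronecker symbol.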
Consider a quadruple $(A,\iota,\nu_{N^+},\nu_{p^{\infty}})$ where $(A,\iota)$ is a QM abelian surface with CM by $\mathcal{O}_c$. There is an action of $\Pic(\mathcal{O}_c)$ on the isomorphism classes of these quadruples, defined by
\[
\mathfrak{a} \star (A,\iota,\nu_{N^+},\nu_{p^{\infty}}) := (A_{\mathfrak{a}},\iota_{\mathfrak{a}},\nu_{\mathfrak{a},N^+},\nu_{\mathfrak{a},{p^{\infty}}}),
\]
where the representative $\mathfrak{a}$ is chosen to be integral and prime to $N^+pc$.
Here $A_{\mathfrak{a}} := A/A[\mathfrak{a}]$, where $A[\mathfrak{a}]$ is the subgroup of the elements of $A$ that are killed by all the endomorphisms in $\mathfrak{a}$.
The quaternionic multiplication $\iota_{\mathfrak{a}}$ and the level structures $\nu_{\mathfrak{a},N^+}, \nu_{\mathfrak{a},p^{\infty}}$ are induced by the ones of $A$. Denote by $\varphi_{\mathfrak{a}}$ the quotient isogeny $A \twoheadrightarrow A/ A[\mathfrak{a}]$, which is an isogeny of degree $N(\mathfrak{a})^2 = (\# \mathcal{O}_c/\mathfrak{a})^2$, hence of degree prime to $N^+p$; define
\[
\iota_{\mathfrak{a}} : \mathcal{O}_B
\mbox{\;$\lhook\joinrel\longrightarrow$\;} \End(A_{\mathfrak{a}}), \quad
b
\longmapsto \Big(\varphi_{\mathfrak{a}}(x) \mapsto \varphi_{\mathfrak{a}}\bigl(\iota(b)(x)\bigr)\Big)
\]
and
\[
\nu_{\mathfrak{a},N^+} : \boldsymbol{\mu}_{N^+} \times \boldsymbol{\mu}_{N^+}
\stackrel{\nu_{N^+}}{\lhook\joinrel\longrightarrow} A[N^+] \stackrel{\varphi_{\mathfrak{a}}}{\cong} A_{\mathfrak{a}}[N^+], \quad
\nu_{\mathfrak{a},p^{\infty}} : \boldsymbol{\mu}_{p^{\infty}} \times \boldsymbol{\mu}_{p^{\infty}}
\stackrel{\nu_{p^{\infty}}}{\lhook\joinrel\longrightarrow} A[p^{\infty}] \stackrel{\varphi_{\mathfrak{a}}}{\cong} A_{\mathfrak{a}}[p^{\infty}].
\]
See \cite[\S 2.5]{Brooks}.
If $\sigma_{\mathfrak{a}}\in \Gal(K_c/K)$, where $K_c$ is the ring class field of $K$ of conductor $c$, corresponds to $\mathfrak{a} \in \Pic(\mathcal{O}_c)$ through the classical Artin reciprocity map, then by Shimura's reciprocity law there is an equality
\[
(A,\iota,\nu_{N^+},\nu_{p^{\infty}})^{\sigma_{\mathfrak{a}}} =
\mathfrak{a} \star (A,\iota,\nu_{N^+},\nu_{p^{\infty}}).
\]
See, again, \cite[\S 2.5]{Brooks}.
\subsubsection{Construction of CM points} \label{CMpoints}
We want to introduce CM points indexed by ideals of orders of $K$.
Take an element $[\mathfrak{a}] \in \Pic(\mathcal{O}_c)$ and choose the representative $\mathfrak{a}$ to be integral and prime to $N^+pc$.
Consider the elliptic curve $E_{\mathfrak{a}} := \mathbb{C}/\mathfrak{a}^{-1}$ with the $\Gamma_1(N^+p^{\infty})^{\text{arith}}$-level structure defined in \cite[\S 2.3 and \S 2.4]{CH}. Put $A_{\mathfrak{a}}:= E_{\mathfrak{a}} \times E_{\mathfrak{a}}$, which, by the theory of complex multiplication, is an abelian surface defined over the ring class field $K_c$ of $K$ of conductor $c$. The abelian surface $A_{\mathfrak a}$ has QM by $\mathcal{O}_B$ and we can consider the quadruple
\[ x(\mathfrak{a}) :=(A_{\mathfrak{a}},i_{\mathfrak{a}},\nu_{\mathfrak{a},N^+},\nu_{\mathfrak{a},p^{\infty}}), \]
where $\nu_{\mathfrak{a},N^+},\nu_{\mathfrak{a},p^{\infty}}$ are the level structures induced by the ones of $E_{\mathfrak{a}}$ (as in \S \ref{prodell}).
We write $x(c) =(A_{c},i_c,\nu_{c,N^+},\nu_{c,p^{\infty}})$ when $\mathfrak{a}=\mathcal{O}_c$.
We have already used this notation in the previous section for the action of the Picard group on a quadruple; the reason is that there is an equality
\[
\mathfrak{a} \star (A_{c},i_c,\nu_{c,N^+},\nu_{c,p^{\infty}}) = (A_{\mathfrak{a}},i_{\mathfrak{a}},\nu_{\mathfrak{a},N^+},\nu_{\mathfrak{a},p^{\infty}}).
\]
Indeed, $\mathfrak{a} \star A_c = A_c/A_c[\mathfrak{a}] \cong E_c/E_c[\mathfrak{a}] \times E_c/E_c[\mathfrak{a}]$ because $\mathfrak{a} \subseteq \mathcal{O}_c \hookrightarrow M_2(\mathcal{O}_c)$ acts diagonally, and the level structures are induced by the isogeny $\varphi_{\mathfrak{a}} : A_c \twoheadrightarrow A_c/ A_c[\mathfrak{a}]$, which is the product of two copies of the isogeny $E_c \twoheadrightarrow E_c/ E_c[\mathfrak{a}]$.
\subsection{Modular forms on Shimura curves}
We recall here the definitions and some properties of modular forms and $p$-adic modular forms on Shimura curves. The references are \cite{Kas}, \cite{Brooks}, \cite{EdVP}, \cite{Hi04}.
\subsubsection{Geometric modular forms on Shimura curves}\label{geom-mod-forms}
We will need integrality conditions only at $p$, so we define modular forms over algebras $R$ over the localization $\mathbb{Z}_{(p)}$ of $\mathbb{Z}$ at the prime ideal generated by $p$.
Let $(\pi:A \rightarrow \Spec(R), \iota)$ be a QM abelian surface over a $\mathbb{Z}_{(p)}$-algebra $R$. Then $\pi_* \Omega_{A/R}$, where $\Omega_{A/R}$ is the bundle of relative differentials, inherits an action of $\mathcal{O}_B$, which, tensored with the scalar action of $\mathbb{Z}_p$, gives an action of $\mathcal{O}_B \otimes \mathbb{Z}_p \cong M_2(\mathbb{Z}_p)$ on $\pi_* \Omega_{A/R}$. Write $\underline{\omega}_{A/R}$ for $e\pi_* \Omega_{A/R}$, where $e$ denotes the idempotent corresponding to $\left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right)$ under this identification. If $\mathcal{A} \rightarrow Sh$ is the universal QM abelian surface, then $\mathcal{A} \otimes R \rightarrow Sh \otimes R$ is the universal object for $Sh \otimes R$. In the particular case $ \pi : \mathcal{A} \otimes R \rightarrow Sh \otimes R$ of the universal QM abelian surface, we simply write $\underline{\omega}_R$ for $e\pi_* \Omega_{\mathcal{A} \otimes R/Sh \otimes R}$.
In analogy with the case of elliptic modular forms (see, for example, \cite[\S 1]{BDP}, in particular equation $(1.1.1)$), we give a geometric definition \`a la Katz for modular forms on $Sh$ over a $\mathbb{Z}_{(p)}$-algebra $R$.
For a nice exposition of Katz modular forms in the case of modular curves, see \cite[Chapter 1]{Go}. The geometric definition for modular forms on Shimura curves is due to Kassaei, see \cite[\S 4.1]{Kas}. We closely follow \cite{Brooks}.
\begin{defi}
A \textbf{modular form with respect to $B$ of weight $k \in \mathbb{Z}$ and level $V_1(N^+)$ over $R$} is a global section of $\underline{\omega}^{\otimes_k}_R$, i.e., an element of $H^0(Sh \otimes R,\underline{\omega}^{\otimes_k}_R)$.
We denote by $M_k(Sh,R)$ the space of modular forms with respect to $B$, of weight $k \in \mathbb{Z}$ and level $V_1(N^+)$ over $R$.
\end{defi}
Alternatively, one can define modular forms in the following ways.
\begin{defi}
Let $R_0$ be an $R$-algebra. A {\bf test object} is a quadruple $(A/R_0, \iota,\nu,\omega)$ consisting of a QM abelian surface $(A,\iota)$ over $R_0$, a $V_1(N^+)$-level structure $\nu$ on $A$, and a non-vanishing global section $\omega$ of $\underline{\omega}_{A/R_0}$.
Two test objects $(A/R_0, \iota, \nu,\omega)$ and $(A'/R_0,\iota',\nu',\omega')$ over $R_0$ are {\bf isomorphic} if there is an isomorphism $(A/R_0,\iota,\nu) \rightarrow (A'/R_0,\iota',\nu')$, of QM abelian surfaces with $V_1(N^+)$-level structure, pulling $\omega'$ back to $\omega$.
A {\bf modular form of weight $k$ and level $V_1(N^+)$ over $R$} is a rule $F$ that assigns to every isomorphism class of test objects $(A/R_0,\iota, \nu,\omega)$ over an $R$-algebra $R_0$ a value $F(A/R_0,\iota, \nu,\omega) \in R_0$ such that
\begin{itemize}[leftmargin=*]
\item (compatibility with base change) if $\varphi:R_0 \rightarrow R'_0$ is a map of $R$-algebras, with induced morphism $A \otimes_{\varphi} R'_0 \rightarrow A$, then
$
F\big((A/R_0, \iota,\nu) \otimes_{\varphi} R'_0 ,\varphi^*(\omega)\big) = \varphi\big(F(A/R_0,\iota, \nu,\omega)\big);
$
\item (weight condition) for any $\lambda\in R_0$, one has
$
F(A/R_0,\iota,\nu,\lambda\omega) =\lambda^{-k}F(A/R_0,\iota, \nu,\omega).
$
\end{itemize}
\end{defi}
\begin{defi}
A {\bf modular form of weight $k$ and level $V_1(N^+)$ over $R$} is a rule $G$ that, for any $R$-algebra $R_0$, assigns to any isomorphism class of QM abelian surfaces over $R_0$ with $V_1(N^+)$-level structure $(A/R_0,\iota, \nu)$, a translation-invariant section of $\underline{\omega}^{\otimes_k}_{A/R_0}$, subject to the following base-change axiom: if $\varphi: R_0 \rightarrow R'_0$ is a map of $R$-algebras one has
\[
G((A/R_0,\iota, \nu) \otimes_{\varphi} R'_0) = \varphi^* G(A/R_0, \iota, \nu).
\]
\end{defi}
Given a modular form as in the third definition, we get a modular form as in the first definition by taking the section assigned to the universal QM abelian surface with level structure $\mathcal{A}\otimes R \rightarrow Sh \otimes R$.
This is an equivalence because $\mathcal{A} \otimes R$ is universal. The last two definitions are related by \[G(A,\iota, \nu) = F(A,\iota,\nu,\omega) \omega^{\otimes_k},\] where $\omega$ is any translation-invariant global section. This expression is independent of the choice of $\omega$.
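The independence of the choice of $\omega$ is a direct consequence of the weight condition: replacing $\omega$ by $\lambda\omega$ for a unit $\lambda \in R_0^{\times}$ gives
\[
F(A,\iota,\nu,\lambda\omega)\,(\lambda\omega)^{\otimes_k} = \lambda^{-k}F(A,\iota,\nu,\omega)\,\lambda^{k}\,\omega^{\otimes_k} = F(A,\iota,\nu,\omega)\,\omega^{\otimes_k}.
\]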
\subsubsection{$p$-adic modular forms on Shimura curves} \label{pmodform}
Let $R$ be a $p$-adic ring (by a $p$-adic ring we mean a $\mathbb{Z}_p$-algebra that is complete and separated with respect to the $p$-adic topology). Define the space $V_p(N^+, R)$ of \textbf{$p$-adic modular forms} of level $V_1(N^+)$ by
\[
V_p(N^+, R)
:= \varprojlim_m H^0 \Bigl(\varprojlim_n I_n \otimes R/p^mR,\ \mathcal{O}_{\varprojlim_n I_n \otimes R/p^mR}\Bigr)
\cong \varprojlim_m \varinjlim_n H^0(I_n \otimes R/p^mR, \mathcal{O}_{I_n \otimes R/p^mR}),
\]
where $\mathcal{O}$ is the structure sheaf. When $n =0$
one can take in the limit the coordinate ring of the affine scheme obtained from $Sh \otimes R/p^mR$ by deleting the supersingular points, that is $H^0((Sh \otimes R/p^mR)^{\ord}, \mathcal{O}_{(Sh \otimes R/p^mR)^{\ord}})$. If $m=0$, we take $H^0({I_n}_{/R}, \mathcal{O}_{{I_n}_{/R}})$.
Thus elements in $V_p (N^+, R)$ are formal functions on the tower $I_n$, i.e., $f \in V_p (N^+, R)$ is a rule that assigns to
each quadruple $(A,\iota,\nu_{N^+}, \nu_{p^{\infty}})$, where $(A,\iota,\nu_{N^+}, \nu_{p^{\infty}})$ is a QM abelian surface over an $R$-algebra $R_0$ with $V_1(N^+p^{\infty})$-level structure, a value $f(A,\iota,\nu_{N^+}, \nu_{p^{\infty}}) \in R_0$, which depends only on the isomorphism class and is compatible with base change. We say that a $p$-adic modular form $f$ is of weight $k \in \mathbb{Z}_p$ if, for every $u \in \mathbb{Z}^{\times}_p$, we have
\[
f (A,\iota,\nu_{N^+}, \nu_{p^{\infty}}) = u^{-k} f (A,\iota,\nu_{N^+}, \nu_{p^{\infty}}u),
\]
where $(A,\iota,\nu_{N^+}, \nu_{p^{\infty}})$ is a QM abelian surface over an $R$-algebra with $V_1(N^+p^{\infty})$-level structure.
If $f$ is a modular form with respect to $B$ of weight $k$ and level $V_1(N^+)$ over $R$ as in \S~\ref{geom-mod-forms}, then we can view it as a $p$-adic modular form $\hat{f}$ as follows.
The $V_1(N^+p^{\infty})$-level structure on $A/R_0$ determines a point $P \in eT_pA_0^t(k)$, where $A_0$ is the reduction $\text{mod}\ p$ of $A$.
A point $P \in eT_pA_0^t(k)$ determines a differential $\omega_P \in \underline{\omega}_{A/R_0}$.
Indeed, there is an isomorphism
\[
T_p(A_0^t) \cong \Hom_{\mathbb{Z}_p}(\hat{A},\hat{\mathbb{G}}_m).
\]
So, taking the homomorphism $\alpha_P$ corresponding to the point $P$, one can consider the pull-back $\omega_P := \alpha_P^*(dT/T) \in \underline{\omega}_{\hat{A}/R_0} = \underline{\omega}_{A/R_0}$ of the standard differential $dT/T$ of $\hat{\mathbb{G}}_m$.
See \cite[\S 3.3]{Ka} or the proof of \cite[Lemma 4.2]{Brooks} for details.
One can define
\[
\hat{f}(A,\iota,\nu_{N^+}, \nu_{p^{\infty}}) := f(A,\iota,\nu_{N^+}, \omega_P).
\]
It follows from the definition that if $f$ is a geometric modular form of weight $k$ and level $V_1(N^+)$, then $\hat{f}$ is a $p$-adic modular form of weight $k$ and level $V_1(N^+)$.
\subsubsection{Jacquet--Langlands correspondence}
The Jacquet--Langlands correspondence establishes a Hecke-equivariant bijection between automorphic
forms on $\GL_2$ and automorphic
forms on multiplicative groups of quaternion algebras. In our setting, this can be stated as a correspondence between classical modular forms and quaternionic modular forms.
\begin{teor}[Jacquet--Langlands]
There is a canonical (up to
scaling) isomorphism
\[
S_k(\Gamma_1(N), \mathbb{C})^{D\emph{-new}} \xrightarrow[\emph{JL}]{\cong} M_k(Sh,\mathbb{C}),
\]
where $\Gamma_1(N)$ is the standard congruence group
$
\Gamma_1(N) := \left\lbrace A \in \SL_2(\mathbb{Z}) \mid A \equiv \left( \begin{smallmatrix}
1 & * \\ 0 & 1
\end{smallmatrix}
\right) \mod N \right\rbrace,
$
and $S_k(\Gamma_1(N), \mathbb{C})^{D\emph{-new}}$ is the space of classical cusp forms of weight $k$ with respect to $\Gamma_1(N)$ that are new at $D$.
This bijection is compatible with the Hecke action and the Atkin--Lehner involutions on each
of the spaces.
\end{teor}
In particular, to each eigenform $f \in S_k (\Gamma_1(N),\mathbb{C}) ^{D\text{-new}}$
corresponds a unique (up to scaling) quaternionic form $f_B = \text{JL}(f)\in M_k(Sh,\mathbb{C})$ having the same Hecke eigenvalues as $f$ for the good Hecke operators $T_{\ell}$ for $(\ell,D)=1$ and the Atkin--Lehner involutions.
One can, however, normalize $f_B$.
More precisely, if we start from an eigenform $f \in S_k (\Gamma_1(N),\mathbb{C}) ^{D\text{-new}}$ with Nebentypus $\varepsilon_f$ with respect to the action of $\Gamma_0(N)$, where $\Gamma_0(N) := \left\lbrace A \in \SL_2(\mathbb{Z}) \mid A \equiv \left( \begin{smallmatrix}
* & * \\ 0 & *
\end{smallmatrix}
\right) \mod N \right\rbrace$, the Jacquet--Langlands correspondence asserts the existence of a holomorphic
function $f_B$ on the upper half plane, determined only up to a scalar multiple, such that $f_B$ is a modular form for the discrete subgroup $\Gamma_{1,N^+}$ of $\GL_2(\mathbb{R})$, of weight $k$, with the same eigenvalues as $f$ for the good Hecke operators and with Nebentypus $\varepsilon_f$ for the action of $\Gamma_{0,N^+}$. Here $\hat{\Gamma}_{0,N^+}$ is the open compact subgroup of $\hat{\mathcal{O}}_{B}^{\times}$ consisting of the elements $b \in \hat{\mathcal{O}}_{B}^{\times}$ such that
$\pi_{N^+}(b) \in \left\lbrace \left(\begin{smallmatrix}
* & * \\ 0 & *
\end{smallmatrix} \right) \in \GL_2(\mathbb{Z}/N^+\mathbb{Z})
\right\rbrace$, and $\Gamma_{0,N^+}$ is the subgroup of matrices in
$\Phi_{\infty}((\hat{\Gamma}_{0,N^+} \cap B)^{\times})$ of determinant $1$.
In particular, if we start from a classical modular form for $\Gamma_0(N)$ we obtain a quaternionic modular form with trivial Nebentypus with respect to the action of $\Gamma_{0,N^+}$.
The function $f_B$ gives rise canonically, as in \cite[\S 2.7]{Brooks}, to a modular form in the sense of the geometric definition seen before, i.e., to a section of $\underline{\omega}_{\mathbb{C}} = e\pi_*\Omega_{\mathcal{A}\otimes\mathbb{C}/ Sh \otimes \mathbb{C}}$. If $f \in S_k (\Gamma_1(N),F) ^{D\text{-new}}$, i.e. its Fourier coefficients lie in the ring of integers $\mathcal{O}_F$ of a number field $F$, then the choice of $f_B$ is ambiguous up to multiplication by a unit in $\mathcal{O}_F[1/N]$. See again \cite[\S 2.7]{Brooks}.
\section{Deformation theory and $t$-expansions for modular forms}
In order to associate with modular forms over Shimura curves power series expansions at CM points, we are interested in deformation theory. In particular, Serre--Tate deformation theory provides us with a way to do this. Thus, in this section we will study the deformation theory of QM abelian surfaces, which is closely related to the deformation theory of elliptic curves, as is well explained in \cite{Buz}. Then we will define power series expansions for modular forms on Shimura curves.
\subsection{Serre--Tate deformation theory}
Following \cite{Ka}, we introduce the Serre--Tate deformation theory for ordinary abelian varieties, which provides a way to attach power series expansions to modular forms on Shimura curves, replacing classical $q$-expansions for ``elliptic'' modular forms that are not available in our case.
Fix an algebraically closed field $k$ of characteristic
$p > 0$ (for our goals, we can take $k = \overline{\mathbb{F}}_p$) and consider an \emph{ordinary} abelian variety $A$ over $k$. Recall that an abelian variety $A$ over $k$ is said to be ordinary if $A[p](k) \cong (\mathbb{Z}/p\mathbb{Z})^{\dim(A)}$. Let $A^t$ be the dual abelian
variety, which is isogenous to $A$ and hence ordinary too. Consider the Tate modules
$
T_pA := \varprojlim_n A[p^n](k),\ T_pA^t := \varprojlim_n A^t[p^n](k)
$
of $A$ and $A^t$. Because of the ordinarity assumption on $A$, $T_pA$ and $T_pA^t$ are free $\mathbb{Z}_p$-modules of rank $g: = \dim A = \dim A^t$.
\begin{defi}
If $R$ is an artinian local ring with maximal ideal $\mathfrak{m}_R$ and residue field $k$, a \textbf{deformation} of $A$ to $R$ is
an abelian scheme $\mathcal{A}$ over $R$
together with an identification $\mathcal{A} \times_R k \cong A$.
\end{defi}
Following a construction due to Serre and Tate, we attach to such a deformation a $\mathbb{Z}_p$-bilinear form
\[
q(\mathcal{A}/R;-,-) : T_pA \times T_pA^t \longrightarrow \widehat{\mathbb{G}}_m(R) = 1 + \mathfrak{m}_R,
\]
where $\widehat{\mathbb{G}}_m := \Spf\bigl(k[T,S]/(TS-1)\bigr)$ is the completion of the multiplicative group scheme $\mathbb{G}_m:=\Spec\bigl(k[T,S]/(TS-1)\bigr)$ over $k$.
This bilinear map is constructed from the Weil pairings
\[
e_{p^n} : A[p^n] \times A^t[p^n] \longrightarrow \boldsymbol{\mu}_{p^n}
\]
of $k$-group schemes, as defined in \cite{Oda}. These pairings come from Cartier duality for the $p$-divisible groups $A[p^{\infty}]$ and $A^t[p^{\infty}]$ (duality of abelian schemes is compatible with Cartier duality).
Here $\boldsymbol{\mu}_{p^n}$ is $\Spec\bigl(k[T]/(T^{p^n}-1)\bigr)$,
the $k$-group scheme of $p^n$-th roots of unity, which can be seen inside $\mathbb{G}_m$ through the map $k[T,S]/(TS-1) \rightarrow k[T]/(T^{p^n}-1)$ defined by $T \mapsto T$ and $S \mapsto T^{p^n-1}$. For each $k$-algebra $R$, $\boldsymbol{\mu}_{p^n}(R)$ corresponds to the $p^n$-torsion of $\mathbb{G}_m(R)$.
For the convenience of the reader, we sketch here the construction of the bilinear map $q(\mathcal{A}/R;-,-)$, because we will use it later. Choose an integer $n\geq0$ such that $\mathfrak{m}_R^n=0$. Since $p \in \mathfrak{m}_R$, $\widehat{\mathcal{A}}(R) := \ker \bigl( \mathcal{A}(R) \rightarrow \mathcal{A}(k) = A(k) \bigr)$ is killed by $p^n$.
Let $P\in A(k)$; for any lift $\tilde{P} \in \mathcal{A}(R)$ of $P$, since $\widehat{\mathcal{A}}(R)$ is killed by $p^n$, we have that $p^n\tilde{P}$ is independent of the choice of the lift $\tilde{P}$. The existence of a lift $\tilde{P} \in \mathcal{A}(R)$ of $P\in A(k)$ is guaranteed by the smoothness of $\mathcal{A}/R$ (\cite[Corollary 2.13]{Liu}).
Therefore we obtain a map $A(k) \stackrel{``p^n"}{\longrightarrow} \mathcal{A}(R)$. If we take $P \in A[p^n](k)$, then $`` p^n"P \in \widehat{\mathcal{A}}(R)$, so we get
\[
`` p^n" : A[p^n] \longrightarrow \widehat{\mathcal{A}}(R).
\]
Because of the compatibility of the maps $`` p^n"$ when $n\gg0$, we obtain a homomorphism
\[
`` p^n" : T_pA \mbox{\;$\relbar\joinrel\twoheadrightarrow$\;} A[p^n](k) \stackrel{`` p^n"}{\longrightarrow} \widehat{\mathcal{A}}(R)
\]
that is independent of $n$.
Now, restricting the Weil pairings
\[
e_{p^n} : \widehat{A}[p^n] \times A^t[p^n] \longrightarrow \boldsymbol{\mu}_{p^n}
\]
for every $n \geq 1$, we obtain a perfect pairing, and then an isomorphism
\[
\widehat{A}[p^n] \stackrel{\cong}{\longrightarrow} \Hom_{\mathbb{Z}_p}\bigl(A^t[p^n],\boldsymbol{\mu}_{p^n}\bigr)
\]
of $k$-group-schemes. Because of the compatibility of the pairings with respect to $n$, passing to the limit, we deduce an isomorphism
\[
\widehat{A} \stackrel{\cong}{\longrightarrow} \Hom_{\mathbb{Z}_p}\bigl(T_pA^t,\widehat{\mathbb{G}}_{m}\bigr)
\]
of formal groups over $k$.
Since $R$ is artinian, the $p$-divisible group $\mathcal{A}[p^{\infty}]$ has a canonical structure of an extension
\[
0 \longrightarrow \widehat{\mathcal{A}} \longrightarrow \mathcal{A}[p^{\infty}] \longrightarrow T_pA \otimes (\mathbb{Q}_p/\mathbb{Z}_p) \longrightarrow 0
\]
of the constant $p$-divisible group $T_pA \otimes (\mathbb{Q}_p/\mathbb{Z}_p)$ over $R$ by $\widehat{\mathcal{A}}$, which is the unique toroidal formal group over $R$ lifting $\widehat{A}$.
Then the preceding two isomorphisms extend uniquely to isomorphisms of $R$-group schemes
\[
\widehat{\mathcal{A}}[p^n](R) \stackrel{\cong}{\longrightarrow} \Hom_{\mathbb{Z}_p}\bigl(A^t[p^n],\boldsymbol{\mu}_{p^n}\bigr)
\]
and
\[
\widehat{\mathcal{A}}(R) \stackrel{\cong}{\longrightarrow} \Hom_{\mathbb{Z}_p}\bigl(T_pA^t,\widehat{\mathbb{G}}_{m}\bigr)
\]
(see the proof of \cite[Theorem 2.1]{Ka}), giving pairings
\[
e_{p^n,\mathcal{A}} : \widehat{\mathcal{A}}[p^n](R) \times A^t[p^n] \longrightarrow \boldsymbol{\mu}_{p^n},
\]
and
\[
e_{\mathcal{A}} : \widehat{\mathcal{A}}(R) \times T_pA^t \longrightarrow \widehat{\mathbb{G}}_{m}.
\]
Finally, the map $q(\mathcal{A}/R;-,-)$ is defined by
\[
q\bigl(\mathcal{A}/R;P,Q^t\bigr) := e_{\mathcal{A}}\bigl(``p^n"P,Q^t\bigr),
\]
for $P \in T_pA$ and $Q^t \in T_pA^t$.
\begin{teor}[Serre--Tate]
With notation as above, the construction
\[
\mathcal{A}/R \longmapsto q(\mathcal{A}/R;-,-) \in \Hom_{\mathbb{Z}_p}\bigl(T_pA\otimes T_p A^t, \widehat{\mathbb{G}}_m(R)\bigr)
\]
establishes a bijection
\[
\left\lbrace
\begin{aligned}
\emph{isomorphism}\ &\emph{classes of}\ \\
\emph{deformations of}&\ A/k \ \emph{to}\ R
\end{aligned}
\right\rbrace
\stackrel{\cong}{\longrightarrow}
\Hom_{\mathbb{Z}_p}\big(T_pA(k) \otimes T_p A^t(k), \widehat{\mathbb{G}}_m(R)\big).
\]
Furthermore, this correspondence is functorial in $R$, i.e., if $\mathscr{F}$ is the deformation functor from the category $\mathscr{C}$ of artinian local rings with residue field $k$ to the category of sets given by
\[
\mathscr{F} : R \longmapsto \mathscr{F}(R):=\bigl\{ \text{isomorphism classes of deformations of}\ A/k\ \text{to}\ R\bigr\},
\]
then there is an isomorphism of functors
\[
\mathscr{F} \cong \Hom_{\mathbb{Z}_p}\bigl(T_pA \otimes T_p A^t, \widehat{\mathbb{G}}_m\bigr).
\]
\end{teor}
\begin{proof} This is \cite[Theorem 2.1, 1) and 2)]{Ka}. \end{proof}
The proof of the theorem rests on the fact that there is an equivalence
\[
\left\lbrace
\begin{aligned}
\text{isomorphism}\ &\text{classes of}\ \\
\text{deformations}\ \mathcal{A}&/R \ \text{of}\ A/k
\end{aligned}
\right\rbrace
\stackrel{\cong}{\longrightarrow}
\left\lbrace
\begin{aligned}
\text{isomorphism}\ &\text{classes of}\ \\
\text{deformations}\ \mathcal{A}[p^{\infty}]&/R \ \text{of}\ A[p^{\infty}]/k
\end{aligned}
\right\rbrace,
\]
so deforming an ordinary abelian variety $A/k$ is the same as deforming its $p$-divisible group $A[p^{\infty}]$.
Taking inverse limits, we can replace the category of artinian local rings with the category of complete noetherian local rings in the preceding discussion. We can do this because of the compatibility of these correspondences with inverse limits and of the fact that every complete noetherian local ring is the inverse limit of artinian local rings (if $R$ is a complete noetherian local ring with maximal ideal $\mathfrak{m}$, then $R \cong \varprojlim_n R/\mathfrak{m}^n$). However, the procedure for computing the pairings $q(\mathcal{A}/R;-,-)$ only makes sense for artinian local rings.
Passing to complete noetherian local rings is useful because the deformation functor is not representable by an artinian local ring in $\mathscr{C}$ but is pro-representable by a complete noetherian local ring.
Namely, the deformation functor $\mathscr{F}$ is pro-represented by a complete local noetherian ring $\mathcal{R}^u$ that is non-canonically isomorphic to the power series ring $\W[[T_{ij}, 1 \leq i,j \leq g]]$, where $\W := W(k)$ is the ring of Witt vectors over $k$. Therefore, the functor $\mathscr{F}$ can be seen as a formal scheme $\Spf(\mathcal{R}^u)$.
Denote by $\widehat{\mathcal{A}}^u/\Spf(\mathcal{R}^u)$ the universal formal deformation of $A/k$, i.e., the formal element of $\mathscr{F}$ corresponding to the identity in $\Hom_{\widehat{\mathscr{C}}}(\mathcal{R}^u, \mathcal{R}^u)$.
Given elements $ P \in T_pA$, $P^t \in T_pA^t$, there is a map
\[
\begin{split}
\mathscr{F} &\longrightarrow \widehat{\mathbb{G}}_m\\[1mm]
\mathcal{A}/R &\longmapsto q(\mathcal{A}/R; P, P^t).
\end{split}
\]
If we pick $\mathbb{Z}_p$-bases $\{P_1,\dots,P_g\}$ and $\{P_1^t,\dots,P^t_g\}$ of $T_pA(k)$ and $T_pA^t(k)$, respectively, then we have $g^2$ functions
\[
\begin{split}
t_{ij} : \mathscr{F} &\longrightarrow \widehat{\mathbb{G}}_m\\[1mm]
\mathcal{A}/R &\longmapsto q(\mathcal{A}/R; P_i, P_j^t)
\end{split}
\]
called \textbf{Serre--Tate coordinates} and $g^2$ elements $t_{ij}(\widehat{\mathcal{A}}^u/\mathcal{R}^u) \in \mathcal{R}^u$.
Writing $T_{ij} := t_{ij} -1$, there is a ring isomorphism
\[
\mathcal{R}^u \cong \W[[T_{ij}, 1 \leq i,j \leq g]].
\]
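For example, under this isomorphism the origin, i.e., the point where $T_{ij} = 0$ for all $i,j$, so that $q(\mathcal{A}/R;-,-) \equiv 1$, corresponds to the \emph{canonical lift} of $A$ to $R$; by Proposition \ref{liftmorph} below, every endomorphism of $A$ lifts (uniquely) to the canonical lift.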
We conclude with the following
\begin{prop} \label{liftmorph}
Let $ f : A \rightarrow B$ be a morphism of ordinary abelian varieties over $k$, let $ f^t : B^t \rightarrow A^t$ be the dual morphism of $f$ and let $\mathcal{A}/R$ and $\mathcal{B}/R$ be deformations of $A/k$ and $B/k$ to $R$. Then $f$ lifts to a morphism $ F : \mathcal{A} \rightarrow \mathcal{B}$ of deformations if and only if
\[
q\bigl(\mathcal{A}/R;P,f^t(Q^t)\bigr) = q\bigl(\mathcal{B}/R;f(P),Q^t\bigr)
\]
for every $P \in T_pA(k)$ and $Q^t \in T_pB^t(k)$. Furthermore, if a lifting exists, then it is unique.
\end{prop}
\begin{proof} This is \cite[Theorem 2.1, 4)]{Ka}. \end{proof}
\subsection{Serre--Tate coordinates for Shimura curves}
Take now an \emph{ordinary} QM abelian surface $A$ over $k$ with a $V_1(N^+)$-level structure. We want to deform our abelian surface not only as an abelian surface but also with its structures. Thus, we consider the subfunctor
$ \mathcal{M}$ of $\mathscr{F}= \Spf(\mathcal{R}^u)$ which sends an artinian local ring $R$ with residue field $k$ to the set of deformations of $A$ to $R$, where by deformation of $A$ to $R$ we mean a deformation $\mathcal{A}$ of $A$ to $R$ together with an embedding
$\mathcal{O}_B \hookrightarrow \End_{R}(\mathcal{A})$ deforming the given embedding $\mathcal{O}_B \hookrightarrow \End_k(A)$ and a $V_1(N^+)$-level structure on $\mathcal{A}$ deforming the given $V_1(N^+)$-level structure on $A$.
The $V_1(N^+)$-level structure automatically lifts uniquely, as $\mathcal{A}[N^+]$ is \'etale over $R$, so we can ignore it in our discussion.
Consider the idempotent $e$ that acts as $\left( \begin{smallmatrix}
1 & 0 \\ 0 & 0
\end{smallmatrix} \right) \in M_2(\mathbb{Z}_p)$ on $T_pA$ ($i_K$ and $\Phi_p$ can be chosen to be compatible, by the choice of $\mathfrak{p}$ over $p=\mathfrak{p}\overline{\mathfrak{p}}$ split in $K$). We can find a
$\mathbb{Z}_p$-basis $\left\lbrace P_1,P_2\right\rbrace$ of $T_pA$ such that $eP_1 = P_1$ and $eP_2 = 0$; indeed, $T_pA = eT_pA\oplus (1-e)T_pA$. Then $P_1^t \in (eT_pA)^t$.
\begin{prop} \label{defthm}
The subfunctor $\mathcal{M}$ of $\mathscr{F}$ is pro-representable by a ring $\mathcal{R}^f$ that is a quotient of $\mathcal{R}^u$. In fact, $\mathcal{R}^f$ is the quotient of $\mathcal{R}^u$ by the closed ideal generated by the relations
\[
q\bigl(\widehat{\mathcal{A}}^u/\mathcal{R}^u;bP,Q^t\bigr) =q\bigl(\widehat{\mathcal{A}}^u/\mathcal{R}^u; P,b^{*}Q^t\bigr)
\]
for any $b\in B, P \in T_pA, Q^t\in T_pA^t$. Furthermore, there is an isomorphism
\[
\mathcal{R}^f \cong \W[[T_{11}]],
\]
where $T_{11}=t_{11} -1$ and $t_{11}$ corresponds to $q\bigl(\widehat{\mathcal{A}}^u/\mathcal{R}^u;P_1,P_1^t\bigr)$.
\end{prop}
\begin{proof}
This is a consequence of Proposition \ref{liftmorph}. For details, see \cite[Proposition 4.5]{Brooks} and \cite[Proposition 3.3]{Mo}.
\end{proof}
Thus, deformations of the QM abelian surface $A/k$ depend only on the $e$-component $eT_pA$ of $T_pA$.
Since $\mathcal{M}$ is the deformation functor associated with $Sh^{\ord}_{\mid_{\W}}$, i.e., the ordinary part of $Sh_{\mid_{\W}}$, and with the point $x \in Sh^{\ord}(k)$ corresponding to the fixed ordinary QM abelian surface $A/k$ with $V_1(N^+)$-level structure, it follows that $\mathcal{M}$ is the formal completion $\widehat{Sh}^{\ord}_x$ of $Sh^{\ord}_{\mid_{\W}}$ at $x$ and so it is the formal spectrum $\Spf(\widehat{\mathcal{O}}_{Sh^{\ord},x})$, where $\mathcal{O}_{Sh^{\ord},x}$ is the local ring of $Sh^{\ord}$ at $x$.
\subsection{Deformations of QM abelian surfaces} \label{defQM}
In the case of QM abelian surfaces, the coordinate ring of the deformation functor has only one coordinate, obtained by choosing a point $P \in T_pA$ such that $eP=P$, as we saw in the previous section. In the case of elliptic curves there is likewise only one coordinate, obtained by choosing a point $P \in T_pE$. In fact, there is a close link between deformations of QM abelian surfaces and deformations of elliptic curves.
Take an ordinary QM abelian surface $A / k$ where $k$ is again $\overline{\mathbb{F}}_p$. Then, as already pointed out, its deformation theory is equivalent to the deformation theory of the $p$-divisible group $A[p^{\infty}]$ (see the proof of \cite[Theorem 2.1]{Ka}).
The $p$-divisible group $A[p^{\infty}]$ attached to $A$ inherits an action of $\mathcal{O}_B$ and hence of $\mathcal{O}_B \otimes \mathbb{Z}_p$, which is identified with $M_2(\mathbb{Z}_p)$ via the fixed isomorphism $\Phi_p$.
If we set $e = \left( \begin{smallmatrix}
1 & 0 \\ 0 & 0
\end{smallmatrix}
\right) \in M_2(\mathbb{Z}_p)$, acting on $A[p^{\infty}]$ through the identification above, then $A[p^{\infty}]$ splits as $eA[p^{\infty}] \oplus (1-e)A[p^{\infty}]$. Moreover, $eA[p^{\infty}]$ and $(1-e)A[p^{\infty}]$ are isomorphic via multiplication by $\left( \begin{smallmatrix}
0 & 1 \\ 1 & 0
\end{smallmatrix}
\right)$.
Since $A$ is ordinary, there is an isomorphism $A[p^{\infty}] \cong E[p^{\infty}]^2$ for $E/k$ an ordinary elliptic curve with $E[p^{\infty}] \cong eA[p^{\infty}]$ (see \cite[Corollary 4.6]{Buz}). Following \cite{Buz}, we want to recover the deformation theory of $A$ from the deformation theory of an elliptic curve.
Deforming $A[p^{\infty}]$ with its $\mathcal{O}_B$-action is the same as deforming $A[p^{\infty}]$ with its $M_2(\mathbb{Z}_p) \cong \mathcal{O}_B \otimes \mathbb{Z}_p$-action. According to Proposition \ref{defthm}, this is equivalent to deforming $eA[p^{\infty}]$; therefore, the deformation theory of $A/k$ (or of $A[p^{\infty}]$) is equivalent to the deformation theory of $E/k$ (or of $E[p^{\infty}]$).
We want to relate the bilinear map $q_{\mathcal{A}}$, associated with a deformation $\mathcal{A}/R$ of a QM abelian surface $A/k$, to the map $q_\mathcal{E}$ associated with the deformation $\mathcal{E}/R$, corresponding to $\mathcal{A}/R$, of an elliptic curve $E/k$, when there is an isomorphism of $p$-divisible groups
\[
\alpha : eA[p^{\infty}] \stackrel{\cong}{\longrightarrow}E[p^{\infty}]
\]
over $k$. So we start by comparing the Weil pairings. Since the Weil pairing comes from Cartier duality for $p$-divisible groups, there is a commutative diagram for the Weil pairings
\[
\begin{tikzcd}
eA[p^n] \times (eA[p^n])^t \arrow[r, "e_{p^n,A}"] \arrow[d, "\alpha_n \times (\alpha_n^{t})^{-1}"]& \boldsymbol{\mu}_{p^n} \arrow[d, "="] \\
E[p^n] \times E^t[p^n] \arrow[r, "e_{p^n,E}"] & \boldsymbol{\mu}_{p^n} \arrow[u]
\end{tikzcd}
\]
where $\alpha_n$ is the $n$-component of $\alpha$ and the first line in the diagram is the restriction of the Weil pairing associated with $A$ to $eA[p^n] \times (eA[p^n])^t$ (Cartier duality is compatible with duality of abelian schemes, so $(eA[p^n])^t \hookrightarrow (A[p^n])^t \cong A^t[p^n]$ and $(eA[p^n])^t \cong (E[p^n])^t \cong E^t[p^n]$). This means that for each $P\in eA[p^n](k)$ and $Q^t \in E^t[p^n](k)$, we have
\[
e_{p^n,A}\bigl(P,\alpha_n^t(Q^t)\bigr) = e_{p^n,E}\bigl(\alpha_n(P),Q^t\bigr).
\]
The same is true when we take inverse limits.
Considering the completions at the origin and restricting the pairings, we obtain
\[
\begin{tikzcd}
e\widehat{A}[p^n] \times (eA[p^n])^t \arrow[d, "\cong"] \arrow[r, "e_{p^n,A}"] & \boldsymbol{\mu}_{p^n} \arrow[dd, "="] \\
\widehat{eA}[p^n] \times (eA[p^n])^t \arrow[d, "\alpha_n \times (\alpha_n^{t})^{-1}"]& &\\
\widehat{E}[p^n] \times E^t[p^n] \arrow[r, "e_{p^n,E}"] & \boldsymbol{\mu}_{p^n} \arrow[uu]
\end{tikzcd}
\]
because the functor $G \mapsto \widehat{G}$ sending a $p$-divisible group to its completion at the origin is exact and the connected-\'etale sequence is functorial. Then passing to the limits yields pairings between Tate modules and the commutative diagram
\[
\begin{tikzcd}
e\widehat{A}(k) \times (eT_pA)^t \arrow[d, "\alpha \times (\alpha^t)^{-1}"] \arrow[r, "E_{A}"] & \widehat{\mathbb{G}}_m \arrow[d, "="] \\
\widehat{E}(k) \times T_pE^t \arrow[r, "E_{E}"] & \widehat{\mathbb{G}}_m. \arrow[u]
\end{tikzcd}
\]
When we extend these pairings to $e\widehat{\mathcal{A}}$ and $\widehat{\mathcal{E}}$, everything works well because the extension structure of $p$-divisible groups is functorial and because we are deforming the action of $\mathcal{O}_B$, hence the action of $e$, so that $e\mathcal{A}[p^{\infty}] \cong \mathcal{E}[p^{\infty}]$.
Observe that everything works fine for the $`` p^n"$ maps as well, and there is a commutative diagram
\[
\begin{tikzcd}
T_pE \arrow[r, two heads] \arrow[d, "\cong"] &E[p^n] \arrow[d, "\cong"] \arrow[r, "{`` p^n"}"] & \widehat{\mathcal{E}}(R) \arrow[d, "\cong"] \\
eT_pA \arrow[r, two heads] &eA[p^n] \arrow[r, "{`` p^n"}"] & e\widehat{\mathcal{A}}(R),
\end{tikzcd}
\]
again because we are deforming also the action of $\mathcal{O}_B$ and so the action of $e$.
In conclusion, computing the bilinear map on $eT_pA \times (eT_pA)^t$ is the same as computing it on $T_pE \times T_pE^t$, that is,
\[
q(\mathcal{A};P,Q^t) = q\bigl(\mathcal{E};\alpha(P),(\alpha^t)^{-1}(Q^t)\bigr),
\]
for all $P \in eT_pA$ and $Q^t \in (eT_pA)^t \subseteq T_pA^t$.
\subsection{Deformation at points of $I$}
If $A/k$ is an ordinary QM abelian surface with $A[p] \cong E[p]^2$ as $\mathcal{O}_B$-group schemes, where here $\mathcal{O}_B$ acts via the natural action of $\mathcal{O}_B \otimes \mathbb{F}_p \cong M_2(\mathbb{F}_p)$ on $E[p]^2$, then there is an induced isomorphism between the set of $V_1(p)$-level structures on $A$ and the set of $\Gamma_1^{\text{arith}}(p)$-level structures
on $E$.
In the same way, when $A[p^{\infty}] \cong E[p^{\infty}]^2$ there is a bijection between the set of $V_1(p^{\infty})$-level structures on $A$ and the set of $\Gamma_1^{\text{arith}}(p^{\infty})$-level structures
on $E$. Thus, the deformation theory of a $k$-point in $I_n(k)$, or in the Igusa tower, is equivalent to the deformation theory of the associated elliptic curve viewed as a $k$-point of the scheme parameterizing elliptic curves with $\Gamma_1^{\text{arith}}(N^+p^n)$- or $\Gamma_1^{\text{arith}}(N^+p^{\infty})$-level structures.
In light of what we have seen in the previous section, we can use this equivalence to compute Serre--Tate coordinates.
\subsection{$t$-expansions for modular forms}
Let us start from an $\overline{\mathbb{F}}_p$-point $ x $ in the Igusa tower, i.e., the isomorphism class of a quadruple $(A/{\overline{\mathbb{F}}_p}, \iota, \nu_{N^+},\nu_{p^{\infty}})$.
Then the $V_1(p^{\infty})$-level structure on $A_{| \overline{\mathbb{F}}_p}$ determines a point $P^{t} \in (eT_pA)^t$ (cf. \cite[\S 3.1]{CH}). Take $P \in eT_pA$ corresponding to $P^t$ via the principal polarization. We fix the Serre--Tate coordinate $t_x$ around $x$ to be
\[
t_x := q(-;P,P^t).
\]
Denote by $\bigl({\bm{\mathcal{A}}}/\W[[T]], \bm{\iota}, \bm{\nu}_{N^+},\bm{\nu}_{p^{\infty}}\bigr)$ the universal deformation of $x$ and note that we can evaluate every $p$-adic modular form $f\in V_p(N^+,\W)$ at $\bigl({\bm{\mathcal{A}}}/\W[[T]], \bm{\iota}, \bm{\nu}_{N^+},\bm{\nu}_{p^{\infty}}\bigr)$. We call
\[
f(t_x) := f\bigl({\bm{\mathcal{A}}}/\W[[T]], \bm{\iota}, \bm{\nu}_{N^+},\bm{\nu}_{p^{\infty}}\bigr) \in \W[[T]],
\]
where $T:=t_x-1$, the \textbf{$t$-expansion of $f$ at $x$}.
\subsection{On Serre--Tate coordinates at CM points}
In this section we want to obtain a result analogous to \cite[Lemma 3.2]{CH} in our setting.
If $\mathfrak{a}$ is a prime-to-$cpN$ fractional ideal of $\mathcal{O}_c$ with $p \nmid c$, then $A_{\mathfrak{a}}$ has a model defined over $\mathcal{V} := \W \cap K^{\ab}$.
Here
$x(\mathfrak{a}) =(A_{\mathfrak{a}},\iota_{\mathfrak{a}},\nu_{\mathfrak{a},N^+}, \nu_{\mathfrak{a},p^{\infty}}) \in I(\mathcal{V})$ is as defined in \S \ref{CMpoints}.
Denote by $t$ the Serre--Tate coordinate around $\overline{x}(\mathfrak{a}) := x(\mathfrak{a}) \otimes_{\mathcal{V}} \overline{\mathbb{F}}_p$.
For $u= 1, \dots, p^n-1$ with $(u,p)=1$, set
\[
x(\mathfrak{a}) \star \alpha(u/p^n) := x(cp^n)^{\text{rec}_K(a^{-1}u_{\mathfrak{p}}p_{\mathfrak{p}}^{-n})} \cdot u \in I(\mathcal{V}),
\]
where $\text{rec}_K : K^{\times} \backslash \hat{K}^{\times} \rightarrow \Gal(K^{\ab}/K)$ is the geometrically normalized reciprocity law map, $a \in \hat{K}^{(cp)\times}$ is such that $\mathfrak{a} = a \hat{\mathcal{O}}_c \cap K$ and, for $b \in \mathbb{Z}_p^{\times}$, we write $b_{\mathfrak{p}}$ for its image in $\hat{K}^{\times}$ under the inclusions $\mathbb{Z}_p^{\times} \subseteq K_{\mathfrak{p}}^{\times} \subseteq \hat{K}^{\times}$.
\begin{lem} \label{3.2}
With notation as above, one has that $\big(x(\mathfrak{a}) \star \alpha(u/p^n) \big)\otimes_{\mathcal{V}} \overline{\mathbb{F}}_p= \overline{x}(\mathfrak{a})$ and $t\big(x(\mathfrak{a}) \star \alpha(u/p^n)\big) = \zeta_{p^n}^{-uN(\mathfrak{a})^{-1}\sqrt{-D_K}^{-1}}$.
\end{lem}
\begin{proof}
The $p$-divisible group $eA_x[p^{\infty}]$, with $A_x$ the QM abelian surface corresponding to $x = x(\mathfrak{a}) \star \alpha(u/p^n)$, is exactly the $p$-divisible group associated with the point $x_{\mathfrak{a}} \star \mathbf{n}(up^{-n})$ considered in \cite[Lemma 3.2]{CH} (cf. \cite[\S 4.5]{CH}). Hence, $x$ in the deformation space of $\overline{x}(\mathfrak{a})$ corresponds to $x_{\mathfrak{a}} \star \mathbf{n}(up^{-n})$ in the deformation space of $x_{\mathfrak{a}}\otimes_{\mathcal{V}} \overline{\mathbb{F}}_p$, where $x_{\mathfrak{a}}$ is the CM point defined in \cite[\S 2.4]{CH}.
Set $\overline{A}_{\mathfrak{a}}:= A_{\mathfrak{a}} \otimes_{\mathcal{V}} \overline{\mathbb{F}}_p$ and $\overline{E}_{\mathfrak{a}}:= E_{\mathfrak{a}} \otimes_{\mathcal{V}} \overline{\mathbb{F}}_p$, with $E_{\mathfrak{a}}$ the elliptic curve corresponding to the CM point $x_{\mathfrak{a}}$. Since the point $P^t \in (eT_p\overline{A}_{\mathfrak{a}})^t$ that is determined by the $V_1(p^{\infty})$-level structure is the same as the point that is determined by the $\Gamma_1^{\text{arith}}(p^{\infty})$-level structure on $T_p\overline{E}_{\mathfrak{a}}^t$, the claim follows from the computations of \S 2.4 and \cite[Lemma 3.2]{CH}.
\end{proof}
\section{Anticyclotomic $p$-adic $L$-functions}
In this section we will define our $p$-adic $L$-function as a measure on $\Gal(K_{p^{\infty}}/K)$ with values in $\W$, which is again the ring of Witt vectors $W(\overline{\mathbb{F}}_p)$, i.e., the ring of integers of the completion of the maximal unramified extension $\mathbb{Q}_p^{\text{ur}}$ of $\mathbb{Q}_p$.
\subsection{Measures on $\mathbb{Z}_p$} \label{meas}
Recall that a \emph{$p$-adic measure} on $\mathbb{Z}_p$ with values in $\W$ is a $\W$-linear function $\mu : \mathcal{C}(\mathbb{Z}_p, \W) \rightarrow \W$ such that there exists a constant $B \geq 0$ with
$
\bigl|\mu(\varphi)\bigr|_p \leq B |\varphi|_p
$
for each $\varphi \in \mathcal{C}(\mathbb{Z}_p, \W)$, where $|\varphi|_p := \sup_{x\in\mathbb{Z}_p}\bigl|\varphi(x)\bigr|_p$.
Here $\mathcal{C}(\mathbb{Z}_p, \W)$ denotes the space of continuous functions from $\mathbb{Z}_p$ to $\W$.
We will write $\int_{\mathbb{Z}_p} \varphi d\mu := \mu(\varphi)$ for the value of a measure $\mu$ on a continuous function $\varphi$. For more details, the reader is referred to \cite[Chapter 3]{Hi93}.
We denote by $M(\mathbb{Z}_p,\W)$ the space of $p$-adic measures on $\mathbb{Z}_p$ with values in $\W$. When equipped with the norm
$
|\mu|_p:=\sup_{|\varphi|_p =1}\bigl|\mu(\varphi)\bigr|_p,
$
the space $M(\mathbb{Z}_p,\W)$ is a $p$-adic Banach $\W$-module.
Recall that there is an isomorphism
\[
\begin{split}
M(\mathbb{Z}_p,\W) &\stackrel{\cong}{\longrightarrow} \W[[T]]\\
\mu &\longmapsto \Phi_{\mu}
\end{split}
\]
where
\[
\Phi_{\mu}(t) := \sum_{n=0}^{\infty} \left(\int_{\mathbb{Z}_p}\binom{x}{n} d \mu \right) T^n \in \W[[T]],\quad\text{with}\ T:=t-1,
\]
and that
\begin{displaymath}
\int_{\mathbb{Z}_p}z^x d \mu = \sum_{n=0}^{\infty} \int_{\mathbb{Z}_p} \binom{x}{n} (z-1)^n d\mu= \Phi_{\mu}(z)\ \text{for}\ z \in \W\ \text{with}\ |z-1|_p <1.
\end{displaymath}
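For instance, for the Dirac measure $\delta_a$ at a point $a \in \mathbb{Z}_p$, given by $\delta_a(\varphi) := \varphi(a)$, one finds
\[
\Phi_{\delta_a}(t) = \sum_{n=0}^{\infty} \binom{a}{n}(t-1)^n = t^a,
\]
in agreement with $\int_{\mathbb{Z}_p}z^x d \delta_a = z^a$.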
The space of $p$-adic measures $M(\mathbb{Z}_p,\W)$ is naturally a $\mathcal{C}(\mathbb{Z}_p,\W)$-module in the following way: for $\phi \in \mathcal{C}(\mathbb{Z}_p,\W)$ and $\mu \in M(\mathbb{Z}_p,\W)$, we set
$
\int_{\mathbb{Z}_p}\varphi d \phi\cdot\mu := \int_{\mathbb{Z}_p}\varphi\phi d\mu
$
for any $\varphi \in \mathcal{C}(\mathbb{Z}_p,\W)$. Furthermore, for $\phi \in \mathcal{C}(\mathbb{Z}_p,\W)$ and $\mu \in M(\mathbb{Z}_p,\W)$, we write
\[
[\phi]\Phi_{\mu}(t) := \Phi_{\phi\mu}(t) = \int_{\mathbb{Z}_p}\phi(x)t^x d \mu \in \W[[t-1]].
\]
Note that for $m \geq 0$
\begin{equation}\label{x^m}
[x^m]\Phi_{\mu}(t) = \Phi_{x^m\mu}(t) = \left( t\frac{d}{dt} \right) ^m \Phi_{\mu}(t).
\end{equation}
If we consider a locally constant function $\phi \in \mathcal{C}(\mathbb{Z}_p,\W)$ that factors through $\mathbb{Z}_p/p^n\mathbb{Z}_p$, then
\begin{equation}\label{twistphi}
[\phi]\Phi_{\mu} (t) = \Phi_{\phi\mu} (t) = p^{-n} \sum_{b \in \mathbb{Z}/p^n\mathbb{Z}} \phi(b) \sum_{\zeta \in \boldsymbol{\mu}_{p^n}} \zeta^{-b} \Phi_{\mu}(\zeta t) \in \W[[t-1]]
\end{equation}
for $\mu \in M(\mathbb{Z}_p,\W)$ (see \cite[\S3.5]{Hi93}). Observe that the notation $[\phi]\Phi_{\mu}$ coincides with that in \cite[(8.1)]{Bra} and \cite[\S 3.5]{Hi93}, and corresponds to $ \Phi_{\mu} \otimes \phi$ in \cite[\S 3.1]{CH}.
Furthermore, there is an equality
\begin{equation} \label{x^mtheta}
\int_{\mathbb{Z}_p} \phi(x)x^m d\mu = \left( t\frac{d}{dt} \right) ^m {([\phi]\Phi_{\mu})|}_{t=1}.
\end{equation}
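As a consistency check, take $\mu = \delta_a$ the Dirac measure at a point $a \in \mathbb{Z}_p$, so that $\Phi_{\delta_a}(t) = t^a$: then $\left( t\frac{d}{dt} \right)^m \Phi_{\delta_a}(t) = a^m t^a = \Phi_{x^m\delta_a}(t)$, as predicted by \eqref{x^m}, while $[\phi]\Phi_{\delta_a}(t) = \phi(a)t^a$ gives $\left( t\frac{d}{dt} \right)^m {([\phi]\Phi_{\delta_a})|}_{t=1} = \phi(a)a^m = \int_{\mathbb{Z}_p}\phi(x)x^m d\delta_a$, as predicted by \eqref{x^mtheta}.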
\subsection{Measures on $\Gal(K_{c_0p^{\infty}}/K)$}
Let ${\mathfrak{a}_1,\dots,\mathfrak{a}_H}$ be a complete set of representatives for $\Pic(\mathcal O_{c_0})$.
As in \cite[\S 8.2]{Bra}, there is an explicit coset decomposition
$
\Gal(K_{c_0p^{\infty}}/K) = \Pic\mathcal O_{c_0p^\infty} = \bigsqcup_{j=1}^H \mathfrak{a}_j^{-1} \mathbb{Z}_p^{\times},
$
that allows us to construct a $\W$-valued measure $\mu$ on $\Pic\mathcal O_{c_0p^\infty}$ by constructing $H$ distinct $\W$-valued measures $\mu_{\mathfrak{a}_j}$ on $\mathbb{Z}_p^{\times}$, so that for every continuous function $\varphi : \Pic\mathcal O_{c_0p^\infty} \rightarrow \W$
we have
\[
\int_{\Pic\mathcal O_{c_0p^\infty}} \varphi d\mu = \sum_{\mathfrak{a} \in \Pic\mathcal O_{c_0}} \int_{\mathbb{Z}_p^{\times}} \varphi \mid [\mathfrak{a}] d\mu_{\mathfrak{a}},
\]
where $\varphi \mid [\mathfrak{a}]$ is $\varphi$ restricted to $\mathfrak{a}^{-1}\mathbb{Z}_p^{\times}$.
Therefore, a measure $\mu$ on $\Gal(K_{c_0p^{\infty}}/K)$ is equivalent to a collection $\left\lbrace \mu_{\mathfrak{a}} \right\rbrace_{\mathfrak{a} \in \Pic(\mathcal{O}_{c_0})}$ of $H$ measures on $\mathbb{Z}_p^{\times}$.
\subsection{Measure associated with a modular form}
Let $g$ be a $p$-adic modular form on $Sh$ over $\W$ and let $\mathfrak{a} \in \Pic(\mathcal{O}_{c_0})$.
Define a $\W$-valued measure $\mu_{g,\mathfrak{a}}$ on $\mathbb{Z}_p$ by
\[
\int_{\mathbb{Z}_p} t^x d\mu_{g,\mathfrak{a}} = g(t_{\mathfrak{a}}) \in \W[[t_{\mathfrak{a}}]],
\]
where $t_{\mathfrak{a}}$ is the Serre--Tate coordinate around $x(\mathfrak{a}) \otimes_{\W} \overline{\mathbb{F}}_p$ and $x(\mathfrak{a})$ is as defined in Section \ref{CMpoints}. Indeed, if $\mathfrak{a}$ is a prime-to-$pN$ fractional ideal of $\mathcal{O}_c$ and $p \nmid c$, then $x(\mathfrak{a})$ has a model defined over $\mathcal{V} := \W \cap K^{\ab}$. If the measures $\mu_{g,\mathfrak{a}}$, for $\mathfrak{a} \in \Pic(\mathcal{O}_{c_0})$, are supported on $\mathbb{Z}_p^{\times}$, then we can put them together to obtain a measure $\mu_g$ on $\Gal(K_{c_0p^{\infty}}/K)$.
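Note that, applying \eqref{twistphi} to the locally constant function $\phi = \mathbf{1}_{p\mathbb{Z}_p}$, one sees that $\mu_{g,\mathfrak{a}}$ is supported on $\mathbb{Z}_p^{\times}$ if and only if
\[
\sum_{\zeta \in \boldsymbol{\mu}_p} g(\zeta t_{\mathfrak{a}}) = 0,
\]
which is one motivation for introducing the $p$-depletion in the next subsection.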
\subsection{$p$-depletion of a modular form}
In order to obtain measures supported on $\mathbb{Z}_p^{\times}$, now we introduce the $p$-depletion of a modular form. We follow \cite[\S 3.6]{Brooks}.
Recall the operators $U$ and $V$. Take a QM abelian surface $A$ with ordinary reduction over a $p$-adic field $L$. Then there is a unique $p$-torsion cyclic $\mathcal{O}_B$-submodule $C$ of $A$ that reduces
mod $p$ to the kernel of the Frobenius morphism; this is the canonical subgroup (cf. \cite[Theorem 1.1]{Kas}).
Denote by $\phi_i: A \rightarrow A/C_i$, for $i = 0, \dots, p$, the distinct $p$-isogenies of QM abelian surfaces with source $A$, ordered in such a way that $C_0$ is the canonical subgroup of $A$.
If $t : \boldsymbol{\mu}_{N^+} \times \boldsymbol{\mu}_{N^+} \hookrightarrow A[N^+]$ is a $V_1(N^+)$-level structure on $A$, then, since $p \nmid N^+$, $t_i = \phi_i \circ t$ is a $V_1(N^+)$-level
structure on $A/C_i$. Also if $\omega$ is a one-form on $A$, then there is a unique one-form $\omega_i$ on
$A/C_i$ such that $\phi_i ^* \omega_i = \omega$.
If $g$ is a modular form, we can define another modular form $g\mid V$ by
\[
g\mid V (A,t, \omega) := g\bigl(A / C_0, \tfrac{1}{p}\, t_0, p\, \omega_0\bigr)
\]
and also a modular form $g \mid U$ by
\[
g\mid U (A,t, \omega) := \sum_{i=1}^{p} g(A / C_i, t_i, \omega_i).
\]
If $[p]$ is the operator on modular forms that is given by
$g \mid [p](A, t, \omega) = g\bigl(A, pt, \tfrac{1}{p}\,\omega\bigr)$, then $U$ and $V$ are related to the usual $T_p$ operator by
\[
T_p = U + \tfrac{1}{p}\, [p]V.
\]
Furthermore, one has $VU = \id$ and the operators $UV$ and $VU - UV$ are idempotent. The \textbf{$p$-depletion} of a modular form $g$ is defined to be
\[
g^{(p)} := g \mid (\id - UV) = g \mid \bigl(\id - T_p V + \tfrac{1}{p}\, [p]V^2\bigr).
\]
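To orient the reader: in the classical elliptic setting, with the usual normalizations in which $V$ and $U$ act on $q$-expansions by $\sum_n a_n q^n \mapsto \sum_n a_n q^{pn}$ and $\sum_n a_n q^n \mapsto \sum_n a_{pn} q^n$, respectively, one has $g \mid UV = \sum_{p \mid n} a_n q^n$, so that the $p$-depletion simply removes the Fourier coefficients of index divisible by $p$:
\[
g^{(p)} = \sum_{p \nmid n} a_n q^n.
\]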
\subsection{Hecke characters and $p$-adic Galois characters} \label{char}
Let $K$ be our imaginary quadratic field
and let $m$, $n$ be integers.
\begin{defi}
A \textbf{Hecke character of $K$ of infinity type $(m,n)$} is a continuous homomorphism
\[
\chi : K^{\times} \backslash \mathbb{A}_K^{\times} \longrightarrow \mathbb{C}^{\times}
\]
satisfying
$
\chi(x \cdot z_{\infty}) = \chi(x) z_{\infty}^m\overline{z}_{\infty}^n,
$
for every $z_{\infty} \in K_{\infty}^{\times}$ and $x \in \hat{K}^{\times}$.
\end{defi}
In particular, the infinite component of $\chi$ is given by $\chi_{\infty}(z) = z^m \overline{z}^n$.
For each prime $\mathfrak{q}$ of $K$, denote by $\chi_{\mathfrak{q}} : K_{\mathfrak{q}}^{\times} \rightarrow \mathbb{C}^{\times}$ the $\mathfrak{q}$-component of $\chi$. The \emph{conductor} of $\chi$ is the largest integral ideal $\mathfrak{c}_f$ of $K$ such that $\chi_{\mathfrak{q}}(u) = 1$ for each element $u \in 1 + \mathfrak{c}_f \mathcal{O}_{K,\mathfrak{q}}$. As is well known, one can identify a Hecke character $\chi$ with a character on fractional ideals of $\mathcal{O}_K$ prime to $\mathfrak{c}_f$ via the formula
$
\chi(\mathfrak{a}) = \prod_{\mathfrak{q} \mid \mathfrak{a}\ \text{prime}} \chi_{\mathfrak{q}} (\pi_{\mathfrak{q}})^{v_{\mathfrak{q}}(\mathfrak{a})},
$
with $\pi_{\mathfrak{q}}$ a uniformizer at $\mathfrak{q}$; the formula is independent of the choice of
the uniformizer.
So, if $\chi$ has conductor $\mathfrak{c}$ and $\mathfrak{a}$ is any fractional ideal prime to $\mathfrak{c}$, we write $\chi(\mathfrak{a})$ for $\chi(a)$, where $a\in \mathbb{A}_K^{\times}$ is an idele with $a\hat{\mathcal{O}}_K \cap K = \mathfrak{a}$ and $a_{\mathfrak{q}} = 1$ for all $\mathfrak{q} \mid \mathfrak{c}$.
A Hecke character $\chi$ is called \textbf{anticyclotomic} if $\chi$ is trivial on $\mathbb{A}_{\mathbb{Q}}^{\times}$.
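Observe that the anticyclotomic condition constrains the infinity type: if $\chi$ is anticyclotomic of infinity type $(m,n)$, then evaluating $\chi$ on the ideles with component $z \in \mathbb{R}_{>0} \subseteq K_{\infty}^{\times}$ at infinity and $1$ elsewhere, which lie in $\mathbb{A}_{\mathbb{Q}}^{\times}$, gives
\[
1 = \chi_{\infty}(z) = z^m \overline{z}^{\,n} = z^{m+n} \quad \text{for all } z \in \mathbb{R}_{>0},
\]
hence $n = -m$. This explains why all the anticyclotomic characters appearing below have infinity type of the form $(j,-j)$.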
The \textbf{$p$-adic avatar} $\hat{\chi} : K^{\times} \backslash
\hat{K}^{\times} \rightarrow \mathbb{C}_p^{\times}$ of a Hecke character $\chi$ of infinity type $(m, n)$ is defined by
\[
\hat{\chi} (x) = \chi(x) x_{\mathfrak{p}}^m x_{\overline{\mathfrak{p}}}^n
\]
with $x \in \hat{K}^{\times}$ and $\mathfrak{p}$ the chosen prime above $p$ which splits in $K$. Every $p$-adic Galois character $\rho: G_K \rightarrow \mathbb{C}_p^{\times}$ can be seen as a $p$-adic character $ K^{\times} \backslash \hat{K}^{\times} \rightarrow \mathbb{C}_p^{\times}$ via the geometrically normalized reciprocity law map $\text{rec}_K : K^{\times} \backslash \hat{K}^{\times} \rightarrow \Gal(K^{\ab}/K)$.
A $p$-adic Galois character
is said to be \textbf{locally algebraic} if it is the $p$-adic avatar of some Hecke character. A locally algebraic
character is called of infinity type $(m, n)$ if the associated Hecke character is of infinity type
$(m, n)$, and its conductor is the conductor of the associated Hecke character.
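As a consistency check on the definition of the $p$-adic avatar, note that $\hat{\chi}$ is indeed trivial on the diagonal image of $K^{\times}$: writing a principal idele as $a = a_{\infty} \cdot a_f$ with $a \in K^{\times}$ and $a_f \in \hat{K}^{\times}$, the relation $1 = \chi(a) = \chi(a_f)\, a^m \overline{a}^{\,n}$ gives $\chi(a_f) = a^{-m}\overline{a}^{\,-n}$, so that
\[
\hat{\chi}(a_f) = \chi(a_f)\, a_{\mathfrak{p}}^{m}\, a_{\overline{\mathfrak{p}}}^{\,n} = a^{-m}\overline{a}^{\,-n} \cdot i_p(a)^{m}\, i_p(\overline{a})^{\,n} = 1,
\]
once the components $a_{\mathfrak{p}}$ and $a_{\overline{\mathfrak{p}}}$ are identified with $i_p(a)$ and $i_p(\overline{a})$ via the fixed embedding $i_p : \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}_p$.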
\subsection{Construction of a measure}
Consider now our modular form $f \in S_k^{\text{new}}(\Gamma_0(N))$, with $k\geq 4$, and let
$F/\mathbb{Q}_p$ be a finite extension containing the image of the Fourier coefficients of $f$. Via the Jacquet--Langlands correspondence we can see $f$ as a modular form in $M_k(Sh, \mathcal{O}_F)$. Take the $p$-depletion $f^{(p)}$ of $f$ and then consider it as a $p$-adic modular form $\hat{f}^{(p)}$ in $V_p(N^+,\mathcal{O}_F)$ of weight $k$.
Fix an anticyclotomic Hecke character $\psi$ of infinity type $(k/2, -k/2)$, and let $c_0\mathcal{O}_K$ be the prime to $p$ part of the conductor of $\psi$.
Take a finite extension of $\W$ obtained by adjoining the values of the Hecke character $\psi$ and, with an abuse of notation, still denote it by $\W$. Let $\hat{\psi}$ be the $p$-adic avatar of $\psi$. The $\W$-valued measures $\mu_{\hat{f}^{(p)},\mathfrak{a}}$ are given by
\[
\psi(\mathfrak{a})N(\mathfrak{a})^{-k/2} \psi_{\mathfrak{p}} \mu_{\hat{f}^{(p)}_{\mathfrak{a}}},\]
where $\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}$ is defined by
\[
\int_{\mathbb{Z}_p} t^x \, d\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}(x) = \hat{f}^{(p)}_{\mathfrak{a}}(t_{\mathfrak{a}}) := \hat{f}^{(p)} (t_{\mathfrak{a}}^{N(\mathfrak{a}^{-1})\sqrt{(-D_K)}^{-1}}) \in \W[[t_{\mathfrak{a}}-1]],
\]
and $t_{\mathfrak{a}}$ is the Serre--Tate coordinate around $ x(\mathfrak{a}) \otimes_{\W} \overline{\mathbb{F}}_p$.
\begin{rmk}
The measure $\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}$ associated with $\hat{f}^{(p)}_{\mathfrak{a}}$ is supported on $\mathbb{Z}_p^{\times}$.
Indeed, $UV \hat{f}^{(p)} =0$ because $VU=\id$.
Since $UV$ acts on the expansion in Serre--Tate coordinates as
$UV g(t) = \frac{1}{p}\sum_{\zeta \in \boldsymbol{\mu}_{p}} g(\zeta t)$ (see \cite[Proposition 4.17]{Brooks}), taking $\phi = \boldsymbol{1}_{\mathbb{Z}_p^{\times}}$ to be the characteristic function of $\mathbb{Z}_p^{\times}$ and using \eqref{twistphi} yields
$
[\phi]g(t)
= p^{-1} \sum_{b \in \mathbb{Z}/p\mathbb{Z}} \phi(b) \sum_{\zeta \in \boldsymbol{\mu}_{p}} \zeta^{-b} g(\zeta t)
= [\boldsymbol{1}_{\mathbb{Z}_p}]g(t) - p^{-1} \sum_{\zeta \in \boldsymbol{\mu}_{p}} g(\zeta t)
= g(t) - UVg(t).
$
Applying this to $g = \hat{f}^{(p)}_{\mathfrak{a}}$, the vanishing $UV \hat{f}^{(p)} = 0$ gives $[\boldsymbol{1}_{\mathbb{Z}_p^{\times}}]\hat{f}^{(p)}_{\mathfrak{a}} = \hat{f}^{(p)}_{\mathfrak{a}}$, which means precisely that $\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}$ is supported on $\mathbb{Z}_p^{\times}$. Hence, $\mu_{\hat{f}^{(p)},\mathfrak{a}}$ is supported on $\mathbb{Z}_p^{\times}$ as well.
\end{rmk}
\begin{defi} \label{Lfct}
Let $\psi$ denote an anticyclotomic Hecke character of infinity type $(k/2, -k/2)$ and conductor $c_0\mathcal{O}_K$ with $(c_0,pN^+)=1$. The {\bf measure $\LL_{f,\psi}$ associated with $f$ and $\psi$} is the $\W$-valued measure given by
\[
\LL_{f,\psi} (\varphi) = \sum_{\mathfrak{a} \in \Pic(\mathcal O_{c_0})} \int_{\mathbb{Z}_p^{\times}} \varphi \big|[\mathfrak{a}]d\mu_{\hat{f}^{(p)},\mathfrak{a}}
\]
for any continuous function $\varphi:\Gal(K_{c_0p^{\infty}}/K)\rightarrow\W$.
\end{defi}
Therefore
\[
\begin{split}
\LL_{f,\psi}(\varphi) &= \sum_{\mathfrak{a} \in \Pic(\mathcal O_{c_0})} \psi(\mathfrak{a})N(\mathfrak{a})^{-k/2} \int_{\mathbb{Z}_p^{\times}} \psi_{\mathfrak{p}}\varphi \big|[\mathfrak{a}]\,d\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}
= \sum_{\mathfrak{a} \in \Pic(\mathcal O_{c_0})} \psi(\mathfrak{a})N(\mathfrak{a})^{-k/2} \Phi_{(\psi_{\mathfrak{p}}\varphi \mid[\mathfrak{a}])\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}} \Big|_{t_{\mathfrak{a}}=1} \\
&= \sum_{\mathfrak{a} \in \Pic(\mathcal O_{c_0})} \psi(\mathfrak{a})N(\mathfrak{a})^{-k/2} [\psi_{\mathfrak{p}}\varphi \big|[\mathfrak{a}]]\hat{f}^{(p)}_{\mathfrak{a}}(t_{\mathfrak{a}}) \Big|_{t_{\mathfrak{a}}=1}
= \sum_{\mathfrak{a} \in \Pic(\mathcal O_{c_0})} \psi(\mathfrak{a})N(\mathfrak{a})^{-k/2} [\psi_{\mathfrak{p}}\varphi \big|[\mathfrak{a}]]\hat{f}^{(p)}_{\mathfrak{a}}\bigl(x(\mathfrak{a})\bigr).
\end{split}
\]
Notice that the second equality holds because $\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}$ is supported on $\mathbb{Z}_p^{\times}$. Indeed, for a measure $\mu$ supported on $\mathbb{Z}_p^{\times}$ one has
$
\int_{\mathbb{Z}_p^{\times}} 1 d \mu =
\int_{\mathbb{Z}_p} 1 d \mu =\Phi_{\mu} \mid_{t=1}.
$
We are also using the fact that $x(\mathfrak{a})$ is the canonical lifting of $x(\mathfrak{a}) \otimes_{\W} \overline{\mathbb{F}}_p$, that is $t_{\mathfrak{a}}\bigl(x(\mathfrak{a})\bigr)=1$, where $t_{\mathfrak{a}}$ is again the Serre--Tate coordinate around $x(\mathfrak{a}) \otimes_{\W} \overline{\mathbb{F}}_p$.
\begin{rmk}
We are interested in evaluating $\LL_{f,\psi}$ at continuous functions that factor through $\Gal(K_{p^{\infty}}/K)$. In other words, we will view $\LL_{f,\psi}$ as a measure on $\Gal(K_{p^{\infty}}/K)$.
\end{rmk}
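For later computations it is convenient to recall the standard dictionary between measures on $\mathbb{Z}_p$ and power series: if $\mu$ is a measure with associated power series $\Phi_{\mu}(t) = \int_{\mathbb{Z}_p} t^x \, d\mu(x) \in \W[[t-1]]$, then the moments of $\mu$ are recovered by repeatedly applying the operator $t\frac{d}{dt}$ and evaluating at $t=1$:
\[
\int_{\mathbb{Z}_p} x^j \, d\mu(x) = \Bigl(t\frac{d}{dt}\Bigr)^{\!j} \Phi_{\mu}(t)\Big|_{t=1} \qquad \text{for every integer } j \geq 0,
\]
as one checks on the functions $x \mapsto t^x$, since $t\frac{d}{dt}\,t^x = x\,t^x$.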
Now we state a result that we will use later.
Let $g\in V_p(N^+,\W)$ be a $p$-adic modular form and let $g_{\mathfrak{a}}$ be defined as above, so that $g_{\mathfrak{a}}(t) = g\bigl(t^{N(\mathfrak{a})^{-1}\sqrt{-D_K}^{-1}}\bigr)$ with $t$ the Serre--Tate coordinate around $x(\mathfrak{a}) \otimes_{\W} \overline{\mathbb{F}}_p$.
\begin{prop} \label{3.3}
If $g\in V_p(N^+,\W)$ and $\phi : (\mathbb{Z}/p^n\mathbb{Z})^{\times} \rightarrow \mathbb{C}^{\times}$ is a primitive Dirichlet character, then
\[
[\phi]g_{\mathfrak{a}}(x(\mathfrak{a})) = p^{-n} G(\phi) \sum_{u \in (\mathbb{Z}/p^n\mathbb{Z})^{\times}} \phi^{-1}(u) g(x(\mathfrak{a}) \star \alpha(u/p^n)),
\]
where $G(\phi) = \sum_{v \in (\mathbb{Z}/p^n\mathbb{Z})^{\times}} \phi(v) \zeta_{p^n}^v$ is the Gauss sum of $\phi$.
\end{prop}
\begin{proof}
The statement follows by applying (\ref{twistphi}) and Lemma \ref{3.2}.
\end{proof}
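We also record a standard property of Gauss sums that is useful when manipulating the expression in Proposition \ref{3.3}: for a primitive Dirichlet character $\phi$ modulo $p^n$ one has
\[
G(\phi)\, G(\phi^{-1}) = \phi(-1)\, p^{n},
\]
so in particular $G(\phi) \neq 0$ and $|G(\phi)| = p^{n/2}$ in $\mathbb{C}$.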
\subsection{Interpolation properties} \label{interp}
Working on $\hat{B}$ instead of $\GL_2(\hat{\mathbb{Q}})$ and adapting the computations from \cite{Hs}, one can obtain an analogue of \cite[Theorem 3.4]{Hs} in our quaternionic setting and use it, as in \cite{CH}, to get an interpolation formula for our $p$-adic $L$-function evaluated at Galois characters that are $p$-adic avatars of anticyclotomic Hecke characters of infinity type $(n,-n)$ with $n \geq 0$. In particular, one can relate our $p$-adic $L$-function to the Rankin--Selberg $L$-function associated with $f$ and some anticyclotomic Hecke character $\chi$ of infinity type $(k/2+n,k/2-n)$ with $n \geq 0$, i.e., the $L$-function associated with the $G_K$-representation $V_{f,\chi}= V_f(k/2) \otimes \chi$.
In the statement of the following theorem, $\psi$ is, as usual, an anticyclotomic Hecke character of infinity type $(k/2, -k/2)$ and conductor $c_0\mathcal{O}_K$ with $(c_0,pN^+)=1$.
\begin{teor} \label{interpolation-thm}
Let $\hat{\phi}$ be the $p$-adic avatar of an anticyclotomic Hecke character $\phi$ of infinity type $(n,-n)$ with $n \geq 0$ and $p$-power conductor. Then there exists a non-zero constant $C(f, \psi, \phi, K)$ depending on $f$, $\psi$, $\phi$, $K$ such that $C(f, \psi,\phi, K) \cdot L(f, \psi\phi, k/2)$ is an algebraic number and
\[
\left(\LL_{f,\psi}(\hat{\phi})\right)^2 = C(f, \psi,\phi, K) \cdot L(f, \psi\phi, k/2),
\]
where the equality holds via the fixed embedding $i_p : \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}_p$.
\end{teor}
A proof of this result will appear in forthcoming work.
\section{Generalized Heegner cycles}
Recall that $K$ is an imaginary quadratic field, $N$ is a positive integer with a factorization $N = N^+ N^-$
where $N^+$ is a product of primes that are split in $K$ and $N^-$ is a square-free
product of an even number of primes that are inert in $K$, $B$ is again our indefinite rational quaternion algebra of discriminant $D=N^-$ and $p$ is an odd prime that splits in $K$ and $B$ and such that $(p,N)=1$.
Consider then our Shimura curve $Sh$, our fixed modular form $f$ in $S_k^{\text{new}}(\Gamma_0(N))$ of weight $k = 2r+2 \geq 4$, which can be seen by Jacquet--Langlands as a modular form on $Sh$, and the $r$-fold fiber product $\mathcal{A}^r$ of the universal QM abelian surface over $Sh$ with itself.
Following the work of Brooks, \cite{Brooks}, we want to define generalized Heegner cycles associated with $f$ lying over a Kuga--Sato variety over $Sh$.
Indeed, these cycles will live in the Chow groups of the generalized Kuga--Sato variety $\mathcal{X}_{r} = \mathcal{A}^{r} \times A^r$, where $A$ will be a fixed QM abelian surface.
Then, to obtain cohomology classes from the generalized Heegner cycles, we will apply the $p$-adic Abel--Jacobi map.
We will construct in this way a system of generalized Heegner classes indexed by fractional ideals of $K$.
\subsection{Kuga--Sato varieties} \label{K-S}
Consider the $r$-fold fiber product $\mathcal{A}^r$ of the universal QM abelian surface $\mathcal{A}$ with itself over $Sh$, which is called the \textbf{$r^{\text{th}}$-Kuga--Sato variety} over $Sh$.
We define the action of the Hecke operator $T_{\ell}$, for $\ell \nmid N^+D$ on the
Kuga--Sato variety $\mathcal{A}^r$
as follows.
Recall the interpretation of the Hecke operator $T_{\ell}$ on $Sh$ as correspondence.
Let $Sh(\ell)$ be the Shimura curve classifying quadruples $(A,\iota,\nu_{N^+},C)$, where
$(A,\iota,\nu_{N^+})$ is a QM abelian surface with $V_1(N^+)$-level structure endowed with a subgroup $C$ of $A[\ell]$ that is stable under the action of $\mathcal{O}_B$ and cyclic as an $\mathcal{O}_B$-module. The group $A[\ell]$ contains exactly $\ell + 1$ such $\mathcal{O}_B$-submodules, each of order $\ell^2$.
Consider the natural forgetful morphism of Shimura curves $\alpha: Sh(\ell) \rightarrow Sh$ and the morphism $\beta: Sh(\ell) \rightarrow Sh$ given by $(A,\iota,\nu_{N^+},C) \mapsto (A/C,\iota_{\psi_C},\nu_{N^+,\psi_C})$, where $\iota_{\psi_C}$, $\nu_{N^+,\psi_C}$ are induced by the isogeny $\psi_C : A \twoheadrightarrow A/C$. Then $T_{\ell}$ is defined by the correspondence
\[
\begin{tikzcd}
& Sh(\ell) \arrow[dr, "\beta"] \ar[ld, "\alpha", '] &
\\
Sh & & Sh,
\end{tikzcd}
\]
which means that $T_{\ell} = \alpha^* \circ \beta_*$, i.e., $T_{\ell} (x) = \sum_{y \in \alpha^{-1}(x)} \beta(y)$. In other words, we recover the definition given in \S \ref{shimura-chapter}.
Now take the fiber product $\mathcal{A}_{\ell} := \mathcal{A} \times_{Sh} Sh(\ell)$, which is the universal QM abelian surface over $Sh(\ell)$, equipped with a subgroup scheme $\mathcal{C}$ of $\mathcal{A}[\ell]$ that is an $\mathcal{O}_B$-module of order $\ell^2$. Consider the quotient $\mathcal{Q} := \mathcal{A}_{\ell}/\mathcal{C}$ with induced QM and level structure and the fiber products $\mathcal{A}_{\ell}^r$ and $\mathcal{Q}^r$ over $Sh(\ell)$. The action of the Hecke operator $T_{\ell}$ on the Kuga--Sato variety $\mathcal{A}^r$ is defined by
the commutative diagram
\[
\begin{tikzcd}
\mathcal{A}^r \arrow[d] & \mathcal{A}_{\ell}^r \arrow[dr] \arrow[rr, "\psi"] \arrow[l, "p_1", '] && \mathcal{Q}^r \arrow[ld] \arrow[r, "p_2"]& \mathcal{A}^r \arrow[d]
\\
Sh && Sh(\ell) \arrow[ll, "\alpha", '] \arrow[rr, "\beta"] && Sh,
\end{tikzcd}
\]
where the two squares are cartesian, by the formula
\[ T_{\ell} = {p_1}_* \circ \psi^* \circ p_2^*. \]
Write $\overline{\mathcal{A}}^r$ for the base change of $\mathcal{A}^r$ to $\overline{\mathbb Q}$. The correspondence $T_\ell$ just defined induces an endomorphism of the \'etale cohomology groups $H_{\et}^*(\overline{\mathcal{A}}^r, -)$, which will still be denoted by $T_{\ell}$.
The reader is advised to compare with \cite{Sch} for the definition of Hecke operators on Kuga--Sato varieties over modular curves and with \cite{EdVP} for the case of Shimura curves relative to ``$\Gamma_0$-type'' level structures.
\subsection{Generalized Kuga--Sato varieties}
Fix the QM abelian surface $A$ with $V_1(N^+)$-level structure and CM by $\mathcal{O}_K$ defined in \eqref{CMpoints}. Thanks to the assumption that $p$ splits in $K$, the surface $A$ is ordinary at
$p$. Our {\bf generalized Kuga--Sato variety} is the product $\mathcal{X}_{r} := \mathcal{A}^r \times A^r$. This enlarged Kuga--Sato variety will be the space where our arithmetic cycles will live. As a piece of notation, we shall write $\overline{\mathcal{X}}_{r}$ for the base change of $\mathcal{X}_{r}$ to $\overline{\mathbb Q}$.
The usual Hecke correspondence $T_{\ell}$ for a prime $\ell \nmid N^+D$ on the Kuga--Sato variety $\mathcal{A}^r$ induces a Hecke correspondence $T_{\ell} \times \id$ on $\mathcal{X}_{r}$, which will still be denoted by $T_{\ell}$.
\subsection{Projectors on Kuga--Sato varieties} \label{projectors-sec}
We will define our algebraic cycles as graphs of morphisms of QM abelian surfaces. In order to make them homologically trivial, we will need to modify them by certain projectors associated with the generalized Kuga--Sato variety. Consider the projectors $P \in \text{Corr}_{Sh}(\mathcal{A}^r)$ and $\varepsilon_A \in \text{Corr}(A^r)$ defined in \cite[\S 6.1]{Brooks}.
Then
\[
PH^*_{\et}(\overline{\mathcal{A}}^r,\mathbb{Z}_p) \subseteq H^{k-1}_{\et}(\overline{\mathcal{A}}^r,\mathbb{Z}_p)
\]
and
\[
\begin{split}
&\varepsilon_A H^i_{\et}(\overline{A}^{r},\mathbb{Z}_p) = 0\quad \text{if}\ i\neq k-2,\\[1mm]
&\varepsilon_A H^{k-2}_{\et}(\overline{A}^{r},\mathbb{Z}_p)= \text{Sym}^{2r}eH^1(\overline{A},\mathbb{Z}_p).
\end{split}
\]
Consider the variety $\mathcal{X}_r$ together with the projector $\varepsilon = P \varepsilon_A \in \text{Corr}_{Sh}(\mathcal{X}_r)$.
Thanks to properties of the projectors, one has
\begin{equation} \label{eH}
\begin{split}
&\varepsilon H^{i}_{\et}\bigl(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p\bigl) = 0 \quad \text{if}\ i\neq 2k-3, \\[1mm]
&\varepsilon H^{2k-3}_{\et}\bigl(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p\bigr) \cong
P H^{k-1}_{\et}\bigl(\overline{\mathcal{A}}^{r},\mathbb{Z}_p\bigr) \otimes \text{Sym}^{2r}eH^1_{\et}\bigl(\overline{A},\mathbb{Z}_p\bigr).
\end{split}
\end{equation}
We can prove these relations using the K\"unneth decomposition. More precisely, the K\"unneth decomposition for $\mathcal{X}_{r} = \mathcal{A}^{r} \times A^{r}$ reads
\[
H^{i}_{\et}\bigl(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p\bigr) \cong
\bigoplus_{n+s=i} H^{n}_{\et}\bigl(\overline{\mathcal{A}}^{r},\mathbb{Z}_p\bigr) \otimes H^s_{\et}\bigl(\overline{A}^{r},\mathbb{Z}_p\bigr).
\]
Indeed, the K\"unneth exact sequence for $\mathcal{A}_{r}$ and $A^{r}$ (for which we refer, for example, to \cite[Theorem 8.21]{Mi80} or \cite[\S 22]{LEC}) is
\[
\begin{split}
0 \longrightarrow \bigoplus_{n+s=i} H^{n}_{\et}\bigl(\overline{\mathcal{A}}^{r},\mathbb{Z}_p\bigr) &\otimes H^s_{\et}\bigl(\overline{A}^{r},\mathbb{Z}_p\bigr) \longrightarrow H^i_{\et}\bigl(\overline{\mathcal{X}}_{r},\mathbb{Z}_p\bigr) \longrightarrow \\
&\longrightarrow \bigoplus_{n+s=i+1} \text{Tor}^{\mathbb{Z}_p}_1\Bigl(H^{n}_{\et}\bigl(\overline{\mathcal{A}}^{r},\mathbb{Z}_p\bigr), H^s_{\et}\bigl(\overline{A}^{r},\mathbb{Z}_p\bigr)\Bigr) \longrightarrow 0.
\end{split}
\]
Since $A$ is an abelian variety, by \cite[Theorem 15.1]{AV} (or \cite[Theorem 12.1]{AV1}), the $p$-adic cohomology of $\overline{A}^{r}$ is a free $\mathbb{Z}_p$-module, so the last term of the sequence above is $0$. As a consequence, we obtain the desired K\"unneth decomposition.
Now we want to apply the projector $\varepsilon$ and the twists. Since
$PH^{i}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p) = 0$ for $i \neq k-1$
and
$\varepsilon_A H^i_{\et}(\overline{A}^{r},\mathbb{Z}_p) = 0$ for $i\neq k-2$,
we deduce that
\[
\varepsilon H^{i}_{\et}(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p) = 0 \quad \text{for}\ i\neq 2k-3,
\]
as in this case all the terms on the right hand side of the K\"unneth decomposition vanish after applying the projectors. For $i=2k-3$ the only term in the sum in the right hand side of the K\"unneth decomposition that does not vanish after the application of the projectors is
$P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p) \otimes \varepsilon_A H^{k-2}_{\et}(\overline{A}^{r},\mathbb{Z}_p)$, hence
\[
\varepsilon H^{2k-3}_{\et}(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p) =
P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p) \otimes \text{Sym}^{2r}eH^1_{\et}(\overline{A},\mathbb{Z}_p),
\]
as desired.
\subsection{Galois representations and Kuga--Sato varieties over Shimura curves}
Let $V_f$ be the $2$-dimensional Galois representation attached to $f \in S_k^{\text{new}}(\Gamma_0(N))$ by Deligne (\cite{Del}) and let $V_f(k/2)$ be the self-dual twist of $V_f$. As explained, for example, in \cite[\S 2 and \S 3]{Nek92}, $V_f(k/2)$ can be realized as a direct summand of the $(k-1)$-st $p$-adic cohomology group of the Kuga--Sato variety over a certain modular curve.
Let $\phi$ be Euler's function and observe that the index of $\Gamma_{1,N^+}$ in $\Gamma_{0,N^+}$ divides $\phi(N^+)$ (see \cite[p. 4184]{Brooks}). From now on, we work under the following
\begin{assumption} \label{main-assumption}
$p\nmid N\phi(N^+)$.
\end{assumption}
Consider now the $r$-th Kuga--Sato variety $\mathcal{A}^{r}$ over the Shimura curve $Sh$. A similar construction can be performed to obtain the representation $V_f(k/2)$ from the \'etale cohomology group $H^{k-1}_{\et}(\overline{\mathcal{A}^{r}},\mathbb{Q}_p)(k/2)$ of $\mathcal{A}^{r}$.
Namely, let $F$ be the finite extension of $\mathbb{Q}_p$ generated by the Fourier coefficients of $f$, whose valuation ring will be denoted by $\mathcal O_F$. Moreover, let $P$ be the projector from \S \ref{projectors-sec}. Under Assumption \ref{main-assumption}, one can define a Galois-equivariant surjection
\[
PH^{k-1}_{\et}\bigl(\overline{\mathcal{A}^{r}},\mathbb{Z}_p\bigr)(k/2) \mbox{\;$\relbar\joinrel\twoheadrightarrow$\;} T,
\]
where $T$ is a suitable Galois-stable $\mathcal{O}_F$-lattice inside the $F$-vector space $V_f (k/2)$, whose definition can be found, for example, in \cite{Nek92} and \cite{Ota} (see also \cite{Nek93}). This can be done by adapting the arguments in \cite[\S 5 and Appendix 10.1]{IS} to our setting, which coincides with that of \cite{Brooks}. Since the modifications required are straightforward, we leave them to the interested reader. See also \cite[\S 3.3]{EdVP} for further details.
\subsection{The $p$-adic Abel--Jacobi map}
Recall that to a smooth projective variety $X$ of dimension $d$ over a field $L$ of characteristic $0$ one can associate the \emph{$p$-adic \'etale Abel--Jacobi map}
\[
\AJ_{p,L}: {\CH^s(X/L)}_0 \longrightarrow
H^1\Bigl(L,H^{2s-1}_{\et}(\overline{X},\mathbb{Z}_p)(s)\Bigr),
\]
where
$\CH^s(X/L)_0:= \ker(\text{cl}_{X})$ is
the group of homologically trivial cycles of codimension $s$ on $X$ defined over $L$ modulo rational equivalence, i.e. the kernel of the cycle map
$
\text{cl}_X : \CH^s(X/L) \rightarrow H_{\et}^{2s}(\overline{X},\mathbb{Z}_p(s))^{G_L},
$
with $\mathbb{Z}_p(s)$ the $s$-th Tate twist of the $p$-adic sheaf $\mathbb{Z}_p$ and $G_L$ the absolute Galois group $\Gal(\overline{L}/L)$. See
\cite[Chapter VI, \S 9]{Mi80} or \cite[\S 23]{LEC} for the definition of the cycle map and \cite[\S 3]{BDP}, for example, for the construction of the $p$-adic Abel--Jacobi map.
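In our situation the degrees match up as follows. Since $k = 2r+2$, the fibers of $\mathcal{A}^r \rightarrow Sh$ have dimension $2r$, so $\dim \mathcal{A}^r = 2r+1$ and $\dim \mathcal{X}_{r} = (2r+1) + 2r = 4r+1 = 2k-3$. Taking $X = \mathcal{X}_{r}$ and $s = k-1$ in the map above therefore gives
\[
\AJ_{p,L}: {\CH^{k-1}(\mathcal{X}_{r}/L)}_0 \longrightarrow H^1\Bigl(L,H^{2k-3}_{\et}(\overline{\mathcal{X}}_{r},\mathbb{Z}_p)(k-1)\Bigr),
\]
which is the form of the Abel--Jacobi map used below.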
\subsection{Generalized Heegner cycles} \label{GHcycles}
For any morphism $\varphi: (A,i,\nu_{N^+}) \rightarrow (A',i',\nu'_{N^+})$ of abelian surfaces with QM by $\mathcal{O}_B$ and $V_1(N^+)$-level structure over a field $L\supseteq \mathbb{Q}$, we can consider its graph $\Gamma_{\varphi} \subseteq A \times A'$.
Consider the point $x$ in $Sh(L)$ corresponding to the class of $ (A',i',\nu'_{N^+})$.
There is an embedding $A' = \mathcal{A}_{x} \hookrightarrow \mathcal{A}$ of the fiber of $\mathcal{A}$ above $x$ in $\mathcal{A}$ that induces an embedding $ i_{x} : (A')^{r} \hookrightarrow \mathcal{A}^{r}$. Via this embedding, we can view the $r$-th power of the graph of $\varphi$ inside $\mathcal{A}^{r} \times A^r $:
\[
\Gamma_{\varphi}^{r} \subseteq A^r \times A'^r = A'^r \times A^r \stackrel{i_x \times \id}{\hookrightarrow} \mathcal{A}^{r} \times A^r = \mathcal{X}_{r}.
\]
We define the \textbf{generalized Heegner cycle} $\Delta_{\varphi}$ associated with $\varphi$ as
\[
\Delta_{\varphi} := \varepsilon \Gamma_{\varphi}^{r} \in \text{CH}^{k-1}(\mathcal{X}_{r}/L),
\]
where $\varepsilon = P \varepsilon_A \in \text{Corr}_{Sh}(\mathcal{X}_{r})$ is the projector from \S \ref{projectors-sec} and $L$ is a field
such that $ (A',i',\nu'_{N^+})$ and $\varphi$ are defined over $L$.
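As a dimension check, $\Gamma_{\varphi} \subseteq A \times A'$ is a surface, so
\[
\operatorname{codim}_{\mathcal{X}_{r}} \Gamma_{\varphi}^{r} = \dim \mathcal{X}_{r} - \dim \Gamma_{\varphi}^{r} = (4r+1) - 2r = 2r+1 = k-1,
\]
consistent with the fact that $\Delta_{\varphi}$ lies in $\text{CH}^{k-1}(\mathcal{X}_{r}/L)$.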
One needs to apply the projector $\varepsilon$ to make the cycle $\Delta_{\varphi}$ homologically trivial; in other words, we want the image of $\Delta_{\varphi}$ under the cycle map
$
\cl_{\mathcal{X}_{r}}: \text{CH}^{k-1}(\mathcal{X}_{r}/L) \rightarrow H^{2k-2}_{\et}(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p(k-1))
$
to be trivial, so that the Abel--Jacobi map can be applied.
Indeed, the cycle $\Delta_{\varphi}$ is homologically trivial, because, thanks to equation (\ref{eH}), one has $\varepsilon H^{2k-2}_{\et}(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p(k-1)) = 0$.
Therefore, we have that $\varepsilon \text{CH}^{k-1}(\mathcal{X}_{r}/L) \subseteq \text{CH}^{k-1}(\mathcal{X}_{r}/L)_{0}$
and we can consider the image of this cycle under the $p$-adic Abel--Jacobi map
\[
\text{AJ}_{p,L} : \text{CH}^{k-1}(\mathcal{X}_{r}/L)_{0} \longrightarrow H^1\big(L,H^{2k-3}_{\et}(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p(k-1))\big).
\]
Applying the projector $\varepsilon$ in the construction of the Abel--Jacobi map (as in \cite[\S 3.1]{BDP} for Kuga--Sato varieties over modular curves) we obtain a $p$-adic Abel--Jacobi map
\[
\text{AJ}_{p,L} : \text{CH}^{k-1}(\mathcal{X}_{r}/L) \longrightarrow H^1\big(L,\varepsilon H^{2k-3}_{\et}(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p(k-1))\big).
\]
Then, considering the twist in (\ref{eH}), one has
\[
\varepsilon H^{2k-3}_{\et}(\overline{\mathcal{X}}_{r}, \mathbb{Z}_p(k-1)) =
P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p(k/2)) \otimes \text{Sym}^{2r}eH^1_{\et}(\overline{A},\mathbb{Z}_p)(r),
\]
so in the following we will see the Abel--Jacobi map as taking values in
\[
H^1\big(L,P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p)(k/2) \otimes \text{Sym}^{2r}eH^1_{\et}(\overline{A},\mathbb{Z}_p)(r)\big).
\]
\subsection{A distinguished family of generalized Heegner cycles}
With notation as in \S \ref{CMpoints}, start with the fixed QM abelian surface $A$. For any integer $c$ prime to $N^+$, take the multiplication-by-$c$ isogeny
\[
(A,i,\nu_{N^+}) \stackrel{\phi_c}{\longrightarrow} (A_c,i_c,\nu_{c,N^+}),
\]
which is an isogeny of QM abelian surfaces with $V_1(N^+)$-level structures.
For each class $[\mathfrak{a}]$ in $\Pic \mathcal{O}_c$, where the representative $\mathfrak{a}$ is chosen to be integral and prime to $N^+pc$,
consider the isogeny
\[
\phi_{\mathfrak{a}} : A \longrightarrow A_{\mathfrak{a}},
\]
defined as the composition
\[
(A,i,\nu_{N^+}) \stackrel{\phi_c}{\longrightarrow} (A_c,i_c,\nu_{c,N^+}) \stackrel{\varphi_{\mathfrak{a}}}{\longrightarrow} (A_{\mathfrak{a}},i_{\mathfrak{a}},\nu_{\mathfrak{a},N^+}),
\]
and then the cycle
\[
\Delta_{\phi_{\mathfrak{a}}} \in \text{CH}^{k-1}(\mathcal{X}_{r}/K_c).
\]
In fact, both $(A_{\mathfrak{a}},i_{\mathfrak{a}},\nu_{\mathfrak{a},N^+})$ and the isogeny $\phi_{\mathfrak{a}}$ are defined over the ring class field $K_c$ of $K$ of conductor $c$.
\subsection{Generalized Heegner classes} \label{GHclasses}
For any integer $c$ prime to $N^+$, consider the Abel--Jacobi map
\[
\text{AJ}_{p,K_c} : \text{CH}^{k-1}(\mathcal{X}_{r}/K_c)_0 \longrightarrow H^1\bigl(K_c,P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p)(k/2) \otimes \text{Sym}^{2r}eH^1_{\et}(\overline{A},\mathbb{Z}_p)(r)\bigr).
\]
Since $V_{f}(k/2)$ can be realized as a quotient of $P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p(k/2))$, we can view the Abel--Jacobi map as a map
\[ \text{CH}^{k-1}(\mathcal{X}_{r}/K_c)_{0} \longrightarrow H^1(K_c,T \otimes \text{Sym}^{2r}eH^1_{\et}(\overline{A},\mathbb{Z}_p)(r)),
\]
where $T$ is the Galois stable lattice in $V_f(k/2)$.
Since
$
eH^1_{\et}(\overline{A},\mathbb{Z}_p) \cong H^1_{\et}(\overline{E},\mathbb{Z}_p),
$
we obtain a map
\[
\Phi_{K_c} : \text{CH}^{k-1}(\mathcal{X}_{r}/K_c) \longrightarrow H^1\bigl(K_c,T \otimes \text{Sym}^{2r}H^1_{\et}(\overline{E},\mathbb{Z}_p)(r)\bigr) \longrightarrow H^1\bigl(K_c,T \otimes S(E)\bigr),
\]
where, as in \cite[\S 4.2]{CH}, we set $S(E) := \text{Sym}^{2r}T_p(\overline{E})(-r)$.
We define the \textbf{generalized Heegner class} $z_{\mathfrak{a}}$, associated with an ideal $\mathfrak{a}$ of $\mathcal{O}_c$, to be
\[
z_{\mathfrak{a}} := \Phi_{K_c}(\Delta_{\phi_{\mathfrak{a}}}).
\]
\subsection{$\chi$-components} \label{chi-comp}
As in \cite{CH}, we want to define ``$\chi$-components'' of the generalized Heegner classes and construct classes $z_{c,\chi} \in H^1(K_c, T \otimes \chi)$, for $\chi$ an anticyclotomic Galois character. For this we will use the factor $S(E) = \text{Sym}^{2r}T_p(\overline{E})(-r)$ appearing in the target of the Abel--Jacobi map and follow the same strategy as \cite{CH}.
So, closely following \cite[\S 4.4]{CH}, let us start with a positive integer $c_0$ prime to $pN^+$, and let
$\chi: \Gal(K_{c_0p^{\infty}}/K) \rightarrow \mathcal{O}_F^{\times}$ (possibly enlarging $F$ so that $\Ima(\chi) \subseteq \mathcal{O}_F^{\times}$)
be a locally algebraic anticyclotomic character of infinity type $(j, -j)$ with
$-k/2 < j < k/2$ and conductor $c_0p^s\mathcal{O}_K$.
Recall that $E = \mathbb{C}/\mathcal{O}_K$ (cf. \S 1.5); note that $E$ is denoted by $A$ in \cite{CH}. Consider the abelian variety
$W_{/K} := \Res_{K_1/K}E$,
obtained by Weil restriction of $E$ from $K_1$ to $K$, defined as the product
\[
W = \prod_{\sigma \in \Gal(K_1/K)} E^{\sigma},
\]
where $E^{\sigma}$ is the curve determined by the polynomials obtained by applying $\sigma$ to the coefficients of the polynomials defining $E$.
The Weil restriction $W$ is again a CM abelian variety, but over $K$ and of dimension $[K_1:K]$. The endomorphism ring of $W$, $M := \End_K(W) \otimes \mathbb{Q}$, is a product of CM fields over $K$ and $\dim W = [M : K] = [K_1 : K]$ (see \cite[\S 1]{Rub} and \cite{Wi} for a general introduction to the Weil restriction of abelian varieties). Since
\[
T_p(W) = \prod_{\sigma \in \Gal(K_1/K)} T_p(E^{\sigma}) = \text{Ind}^{G_K}_{G_{K_1}}T_p(E),
\]
viewing Tate modules as Galois representations, where $\text{Ind}$ is the induced representation, we have an inclusion $T_p(E) \hookrightarrow T_p(W)$.
Define the $G_K$-module
\[
S(W):= \Sym^{2r} T_p(W)(-r) \otimes_{\mathbb{Z}_p} \mathcal{O}_F = \text{Ind}^{G_K}_{G_{K_1}} S(E) \otimes_{\mathbb{Z}_p} \mathcal{O}_F.
\]
By the discussion in \cite[\S 4.4]{CH}, there exists a finite order anticyclotomic character
$\chi_t$ such that $\chi$ can be realized as a direct summand of $S(W) \otimes \chi_t$ as $G_K$-modules. Denote by
\[
e_{\chi}: S(W) \otimes \chi_t \mbox{\;$\relbar\joinrel\twoheadrightarrow$\;} \chi
\]
the corresponding $G_K$-equivariant projection. The character $\chi_t$ is unique up to multiplication by a character of $\Gal(K_1/K)$ and it has the same conductor as $\chi$.
Since $T_p(E) \hookrightarrow T_p(W)$ and then $S(E) \hookrightarrow S(W)$, we can see the classes $z_{\mathfrak{a}}$ as elements of $H^1(K_c,T \otimes S(W))$, where $\mathfrak{a}$ is a fractional ideal of $\mathcal{O}_c$ with $c$ divisible by $c_0p^s$.
For each integer $c$ divisible by the conductor $c_0p^s$ of $\chi$, let \[z_{c} \otimes \chi_t \in H^1(K_c,T \otimes S(W) \otimes \chi_t)\] denote the image of $z_c$ under the map $H^1(K_c,T \otimes S(W)) \rightarrow H^1(K_c,T \otimes S(W) \otimes \chi_t)$, and define
\begin{equation} \label{chicomp}
z_{c,\chi} := (\id \otimes e_{\chi}) (z_{c} \otimes \chi_t) \in H^1(K_c,T \otimes \chi),
\end{equation}
the \textbf{$\chi$-component of the class $z_c$}.
\subsection{Compatibility properties}
First we study compatibility properties of the generalized Heegner classes defined in \S \ref{GHclasses}, examining the action of the Hecke operators and proving results analogous to
\cite[Lemma 4.3 and Proposition 4.4]{CH}.
Let $I(D_K)$ denote the group of fractional ideals of $K$ that are prime to $D_K$ and let
$\tilde{\kappa}_E : I(D_K) \rightarrow M^{\times}$
be the CM character associated to $W$ (as in \cite[\S 4.4]{CH}).
Denote by $\kappa_E : G_K \rightarrow\mathcal{O}_F^{\times}$ the $p$-adic avatar of $\tilde{\kappa}_E$, possibly enlarging $F$ so that $M \subseteq F$.
\begin{prop}
Let $\mathfrak{a}$, $\mathfrak{b}$ be fractional ideals in $\mathcal{O}_c$ prime to $cN^+D_K$. Suppose that $\mathfrak{a}\mathcal{O}_K$ is trivial in $\Pic \mathcal{O}_K$ and put $\alpha:= \tilde{\kappa}_E(\mathfrak{a}) \in K^{\times}$.
Then
\[
(\id \times \alpha)^*\Delta_{\phi_{\mathfrak{b}}}^{\sigma_{\mathfrak{a}}} = \Delta_{\phi_{\mathfrak{a}\mathfrak{b}}},
\]
where $\sigma_{\mathfrak{a}} \in \Gal(K^{ab}/K_1)$ corresponds to $\mathfrak{a}$ through the Artin reciprocity map.
\end{prop}
\begin{proof}
Recall that the QM abelian surfaces $A$ and $A_{\mathfrak{a}}$ are, respectively, the self-products $E \times E$ and $E_{\mathfrak{a}} \times E_{\mathfrak{a}}$ of elliptic curves, for each fractional ideal $\mathfrak{a}$.
Note that the isogenies $\phi_{\mathfrak{a}} : A \twoheadrightarrow A_{\mathfrak{a}}$ of QM abelian surfaces are the self-products of the isogenies $E \twoheadrightarrow E_{\mathfrak{a}}$, denoted by $\varphi_{\mathfrak{a}}$ by Castella and Hsieh and used by them to define their Heegner cycles (cf. \cite[\S 4.1]{CH}); moreover, $\alpha \in K^{\times}$ acts on $A$ (and $A_{\mathfrak{a}}$) as multiplication by the matrix $\left( \begin{smallmatrix}
\alpha & 0 \\ 0 & \alpha
\end{smallmatrix}\right)$, hence as multiplication by $\alpha$ on each component $E$ (and $E_{\mathfrak{a}}$). Because of this, the proof of \cite[Lemma 4.3]{CH} works also in our case, arguing component by component. So we obtain that
\[
(\id \times \alpha)^*\Gamma_{\mathfrak{b}}^{\sigma_{\mathfrak{a}}} = (\alpha \times \id)_*\Gamma_{\mathfrak{b}}^{\sigma_{\mathfrak{a}}} = \Gamma_{\alpha \circ \phi_{\mathfrak{b}}^{\sigma_{\mathfrak{a}}}} = \Gamma_{\phi_{\mathfrak{a}\mathfrak{b}}} = \Gamma_{\mathfrak{a}\mathfrak{b}}.
\]
Since $x(\mathfrak{b})^{\sigma_{\mathfrak{a}}} = (A_{\mathfrak{b}},i_{\mathfrak{b}},\nu_{\mathfrak{b},N^+})^{\sigma_{\mathfrak{a}}} = \mathfrak{a} \star (A_{\mathfrak{b}},i_{\mathfrak{b}},\nu_{\mathfrak{b},N^+}) = (A_{\mathfrak{ab}},i_{\mathfrak{ab}},\nu_{\mathfrak{ab},N^+}) = x(\mathfrak{ab})$ as points of $Sh$, the immersions $i_{x(\mathfrak{b})^{\sigma_{\mathfrak{a}}}}$ and $i_{x(\mathfrak{ab})}$ are equal; thus, taking $r$-th powers and applying the projector $\varepsilon$, we obtain
\[
(\id \times \alpha)^*\Delta_{\mathfrak{b}}^{\sigma_{\mathfrak{a}}} = \Delta_{\mathfrak{a}\mathfrak{b}},
\]
and we are done.
\end{proof}
\begin{prop} \label{4.4}
Suppose that $p \nmid c$. For all $ n > 1$, one has
\[
T_p z_{cp^{n-1}} = p^{k-2} z_{cp^{n-2}} + \Cor_{K_{cp^n}/K_{cp^{n-1}}}(z_{cp^n}).
\]
For $\ell \nmid c$ that is inert in $K$, one has
\[
T_{\ell}z_c = \Cor_{K_{c\ell}/K_c}(z_{c\ell}).
\]
\end{prop}
\begin{proof}
The operator $T_{p}$ acts on the cycle $\Delta_{\phi_{cp^{n-1}}}$ coming from the isogeny $\phi_{cp^{n-1}}: A \rightarrow A_{cp^{n-1}}$ of QM abelian surfaces with $V_1(N^+)$-level structure and CM by $K$ in the following way
\[
T_{p} \Delta_{\phi_{cp^{n-1}}} = \sum_{i=1}^{p+1} \Delta_{\phi_i},
\]
where the isogenies $\phi_i: A_i \rightarrow A_{cp^{n-1}}$ are $p^2$-isogenies of QM abelian surfaces with $V_1(N^+)$-level structures.
These isogenies correspond to the $p + 1$ sublattices of $\mathcal{O}_{cp^{n-1}} \times \mathcal{O}_{cp^{n-1}}$ that are invariant under the action of $\mathcal{O}_B$.
Since such a sublattice $L$ is determined by $eL$, one can work with sublattices of $\mathcal{O}_{cp^{n-1}}$ of index $p$.
Therefore one can rearrange the computation in the proof of \cite[Proposition 4.4]{CH} to obtain the formula in the statement, using an analogue of \cite[Lemma 4.2]{CH} in this case.
The second part of the proposition can be proved analogously.
\end{proof}
The following result is analogous to \cite[Proposition 4.5]{CH}.
\begin{prop} \label{4.5}
Let $\mathfrak{a}$ be a fractional ideal of $\mathcal{O}_c$ prime to $cN^+D_K$. Then
\[
\chi_t(\sigma_{\mathfrak{a}}) (\id \otimes e_{\chi}) z_{c}^{\sigma_{\mathfrak{a}}} = \chi(\sigma_{\mathfrak{a}}) \chi_{\cyc}^{-r}(\sigma_{\mathfrak{a}}) (\id \otimes e_{\chi}) z_{\mathfrak{a}},
\]
where $\chi_{\cyc}$ is the $p$-adic cyclotomic character.
\end{prop}
\begin{proof}
Denote by $\sigma_{\mathfrak{a}} \in \Gal(K_c/K)$ the image of $\mathfrak{a}$ under the classical Artin reciprocity map.
We have
\[
(\text{id} \times \varphi_{\mathfrak{a}\mathcal{O}_K})_* \Gamma_{\phi_{\mathfrak{a}}}
= (\text{id} \times \varphi_{\mathfrak{a}\mathcal{O}_K})_* \bigl\{(\phi_{\mathfrak{a}}(z),z) \mid z \in A \bigr\}
= \bigl\{ (\varphi_{\mathfrak{a}} \phi_c(z),\varphi_{\mathfrak{a}\mathcal{O}_K}z) \mid z \in A \bigr\}
= \Gamma_{\phi_c}^{\sigma_{\mathfrak{a}}},
\]
as $\sigma_{\mathfrak{a}}(z) = \varphi_{\mathfrak{a}\mathcal{O}_K}(z) $ for any $z \in A(\mathbb{C})$ and $\sigma_{\mathfrak{a}}(z) = \varphi_{\mathfrak{a}}(z) $ for any $z \in A_c(\mathbb{C})$.
Because $x_c^{\sigma_{\mathfrak{a}}} = x_{\mathfrak{a}}$, where $x_{\mathfrak{a}} = (A_{\mathfrak{a}},i_{\mathfrak{a}},\nu_{\mathfrak{a},N^+})$ and $x_c=x_{\mathcal{O}_c}$, one has $i_{x_c^{\sigma_{\mathfrak{a}}}} = i_{x_{\mathfrak{a}}}$. Hence, applying $\varepsilon$ to $\big((\text{id} \times \varphi_{\mathfrak{a}\mathcal{O}_K})_* \Gamma_{\phi_{\mathfrak{a}}} \big)^r= \big(\Gamma_{\phi_c}^{\sigma_{\mathfrak{a}}}\big)^r$, we obtain
\[
(\text{id} \times \varphi_{\mathfrak{a}\mathcal{O}_K})_* \Delta_{\phi_{\mathfrak{a}}} = \Delta_{\phi_c}^{\sigma_{\mathfrak{a}}}.
\]
Then
\[
z_{c}^{\sigma_{\mathfrak{a}}} = \Phi_{K_c}(\Delta_{\phi_c}^{\sigma_{\mathfrak{a}}}) = \Phi_{K_c}((\text{id} \times \varphi_{\mathfrak{a}\mathcal{O}_K})_* \Delta_{\phi_{\mathfrak{a}}})= (\text{id} \times \varphi_{\mathfrak{a}\mathcal{O}_K})_* \Phi_{K_c}(\Delta_{\phi_{\mathfrak{a}}})
= (\text{id} \times \varphi_{\mathfrak{a}\mathcal{O}_K})_* z_{\mathfrak{a}}.
\]
Observe that $\varphi_{\mathfrak{a}\mathcal{O}_K}$ acts on $\text{Sym}^{2r}eT_p(\overline{A}) \cong \text{Sym}^{2r}T_p(\overline{E})$ as its first component $\lambda_{\mathfrak{a}\mathcal{O}_K} : E \rightarrow E/E[\mathfrak{a}\mathcal{O}_K]=E_{\mathfrak{a}\mathcal{O}_K}$. From the proof of \cite[Proposition 4.5]{CH}, we know that $\lambda_{\mathfrak{a}\mathcal{O}_K}$ acts on $\text{Sym}^{2r}H^1_{\et}(\overline{W},\mathbb{Z}_p)$
as the push-forward $[\tilde{\kappa}_E(\mathfrak{a}\mathcal{O}_K)]_*$ of $\tilde{\kappa}_E(\mathfrak{a}\mathcal{O}_K) \in \End(W)$, which in turn induces the Galois action $\sigma_{\mathfrak{a}}$.
Since $e_{\chi}$ commutes with the action of $G_K$, there are equalities
\[
\begin{split}
\chi_{\cyc}^{r}(\sigma_{\mathfrak{a}}) \chi_{t}(\sigma_{\mathfrak{a}}) e_{\chi}\big((\sigma_{\mathfrak{a}} \otimes \id \otimes \id) \cdot y \big) &= e_{\chi}\big((\sigma_{\mathfrak{a}} \otimes \chi_{\cyc}^{r}(\sigma_{\mathfrak{a}}) \otimes \chi_{t}(\sigma_{\mathfrak{a}})) (y)\big)\\ &= e_{\chi}(\sigma_{\mathfrak{a}} \cdot y) = \chi(\sigma_{\mathfrak{a}})e_{\chi}(y),
\end{split}
\]
for any $y = y \otimes 1 \otimes 1 \in S(W) \otimes \chi_t =
\text{Sym}^{2r}H^1_{\et}(\overline{W},\mathbb{Z}_p) \otimes \chi_{\cyc}^{r} \otimes \chi_t$.
Therefore, viewing $z_{c}^{\sigma_{\mathfrak{a}}}, z_{\mathfrak{a}} \in H^1(K_c,T \otimes S(E)) \subseteq H^1(K_c,T \otimes S(W))$ as in \S \ref{chi-comp} and letting $z_c^{\sigma_{\mathfrak{a}}} := z_c^{\sigma_{\mathfrak{a}}} \otimes 1$, $z_{\mathfrak{a}} := z_{\mathfrak{a}} \otimes 1 \in H^1(K_c,T \otimes S(W) \otimes \chi_t)$, we get
\[
\begin{split}
(\text{id} \otimes e_{\chi}) z_c^{\sigma_{\mathfrak{a}}} &= (\text{id} \otimes e_{\chi}) (\text{id} \otimes [\tilde{\kappa}_E(\mathfrak{a}\mathcal{O}_K)]_*) z_{\mathfrak{a}}
\\
&= \chi_{\cyc}^{-r}(\sigma_{\mathfrak{a}}) \chi_{t}^{-1}(\sigma_{\mathfrak{a}}) (\text{id} \otimes e_{\chi}) (\text{id} \otimes \sigma_{\mathfrak{a}}) z_{\mathfrak{a}}
\\
&= \chi_{\cyc}^{-r}(\sigma_{\mathfrak{a}}) \chi_{t}^{-1}(\sigma_{\mathfrak{a}}) \chi(\sigma_{\mathfrak{a}})(\text{id} \otimes e_{\chi}) z_{\mathfrak{a}},
\end{split}
\]
which completes the proof.
\end{proof}
Finally, we conclude with two more propositions. Since the proofs of the analogous results in \cite{CH} can be adapted to our setting, as was done for the previous results, we omit the details.
\begin{prop} \label{4.6}
Let $\tau$ be the complex conjugation. Then
\[
z_{c,\chi}^{\tau} = w_f \cdot \chi(\sigma) \cdot (z_{c,\chi^{-1}})^{\sigma},
\]
for some $\sigma \in \Gal(K_c/K)$, with $w_f = \pm 1$ the Atkin--Lehner eigenvalue of $f$.
\end{prop}
\begin{proof} Proceed as in the proof of \cite[Lemma 4.6]{CH}. \end{proof}
\begin{prop} \label{4.7}
Let $\ell$ be a prime inert in $K$ such that $\ell \nmid cND_K$. Let $\overline{\lambda}$ be a prime of $\overline{\mathbb{Q}}$ above $\ell$. Denote by $K_{c\ell,\lambda}$ and $K_{c,\lambda}$ the completions of $K_{c\ell}$ and $K_c$, respectively, at the prime above $\ell$ determined by $\overline{\lambda}$, and write $\loc_{\lambda}$ for the localization map. Then
\[
\loc_{\lambda}(z_{c\ell,\chi}) = \Res_{K_{c\ell,\lambda}/K_{c,\lambda}}\bigl(\loc_{\lambda}(z_{c,\chi})^{\frob_{\ell}}\bigr).
\]
\end{prop}
\begin{proof} Proceed as in the proof of \cite[Lemma 4.7]{CH}. \end{proof}
\section{A $p$-adic Gross--Zagier formula}
We want to relate our $p$-adic $L$-function to generalized Heegner cycles. More precisely, we want our $p$-adic $L$-function to satisfy a $p$-adic Gross--Zagier formula relating its values to Bloch--Kato logarithms of the generalized Heegner cycles associated with characters $\chi : \Gal(K_{p_{\infty}}/K) \rightarrow \mathcal{O}_{\mathbb{C}_p}^{\times}$ of infinity type $(j,-j)$ with $ -k/2 < j < k/2$; that is, we look for a formula of the shape
\[
\LL_f(\psi)(\hat{\phi}) = (\text{something}) \cdot \braket{\log(z_{\chi}), *},
\]
where $\hat{\phi} : \Gal(K_{p_{\infty}}/K) \rightarrow \mathcal{O}_{\mathbb{C}_p}^{\times}$ is the $p$-adic avatar of a Hecke character $\phi$ of infinity type $(-k/2-j,k/2+j)$ and $\chi = \hat{\psi}^{-1} \hat{\phi}^{-1}$.
To establish such a Gross--Zagier formula, we will link our $p$-adic $L$-function to the differential operator $\theta = t \frac{d}{dt}$ on Serre--Tate coordinates and then use results of Brooks to obtain a key formula relating the values at CM points of $\theta$ applied to the modular form to our generalized Heegner cycles.
\subsection{The Bloch--Kato logarithm map}\label{BKlog}
Let $F,L$ be finite extensions of $\mathbb{Q}_p$ and let $V$ be an $F$-vector space with a continuous linear $G_L := \Gal(\overline{L}/L)$-action.
Set $DR_L(V) := H^0(L,B_{\dR}\otimes V) = (B_{\dR} \otimes V)^{G_L}$, where $B_{\dR}$ is Fontaine's ring of de Rham periods.
If $V$ is a de Rham representation (i.e., $\dim_{F} V = \dim_L DR_L(V)$), then we can consider the Bloch--Kato exponential map
\[
\exp_{BK}: \frac{DR_L(V)}{\Fil^0DR_L(V)} \stackrel{\cong}{\longrightarrow} H^1_f(L,V),
\]
that is the connecting homomorphism of the long exact sequence in cohomology coming from the short exact sequence
\[
0 \longrightarrow V \longrightarrow \bigl(B_{\crys}^{\phi=1} \otimes V\bigr) \oplus \bigl(\Fil^0 B_{\dR} \otimes V\bigr) \longrightarrow B_{\dR} \otimes V \longrightarrow 0.
\]
See \cite[Definition 3.10, Corollary 3.8.4, Proposition 1.17]{BK}. Here $H^1_f(L,V)$ is the Bloch--Kato finite part
\[
H^1_f(L,V) := \ker\Big(H^1(L,V) \longrightarrow H^1(L,V \otimes B_{\crys})\Big),
\]
where $B_{\crys}$ is Fontaine's crystalline period ring.
Consider now the inverse of this map, the Bloch--Kato logarithm map
\[
\log_{BK}: H^1_f(L,V) \stackrel{\cong}{\longrightarrow} \frac{DR_L(V)}{\Fil^0DR_L(V)}.
\]
Since the long exact sequence in cohomology is functorial, the Bloch--Kato logarithm map is functorial as well, i.e., for any $F$-linear and $G_L$-equivariant morphism $V \rightarrow V'$ there is a commutative square
\[
\begin{tikzcd}
H^1_f(L,V) \arrow[r, "\log_{BK}"] \arrow[d] & \frac{DR_L(V)}{\Fil^0DR_L(V)} \arrow[d]\\
H^1_f(L,V') \arrow[r, "\log_{BK}"] & \frac{DR_L(V')}{\Fil^0DR_L(V')}.
\end{tikzcd}
\]
Denote by $V^*:=\Hom_F(V,F)$ the dual of $V$ and consider the perfect de Rham pairing
\[
\braket{-,-} : DR_L(V) \times DR_L\bigl(V^*(1)\bigr) \longrightarrow \mathbb{C}_p.
\]
Then we can consider the logarithm $\log_{BK}$ as a map
\begin{equation}\label{logdef}
\log_{BK}: H^1_f(L,V) \stackrel{\cong}{\longrightarrow} \frac{DR_L(V)}{\Fil^0DR_L(V)} \cong \Bigl(\Fil^{0} DR_L\bigl(V^*(1)\bigr)\Bigr)^{\vee},
\end{equation}
which is again functorial.
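A classical special case, recalled here only for orientation, is $V = \mathbb{Q}_p(1)$: Kummer theory identifies $H^1_f(L,\mathbb{Q}_p(1))$ with $\mathcal{O}_L^{\times} \widehat{\otimes}\, \mathbb{Q}_p$, one has $DR_L(\mathbb{Q}_p(1))/\Fil^0 DR_L(\mathbb{Q}_p(1)) \cong L$, and under these identifications $\log_{BK}$ is induced by the usual $p$-adic logarithm on $\mathcal{O}_L^{\times}$ (cf. \cite{BK}), which explains the terminology.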
\subsection{The operator $\theta$} \label{theta}
Recall the differential operator $\theta := t \frac{d}{dt}$ on the power series ring $W[[t-1]]$. For a negative exponent $j$, one can define
\[
\theta^j:= \lim_{i \to \infty} \theta^{j+(p-1)p^i}.
\]
Indeed, this limit makes sense because of \cite[Proposition 4.18]{Brooks}, which implies that $\theta^{j+(p-1)p^m}F \equiv \theta^{j+(p-1)p^n}F \mod p^{n+1}$ for $m\geq n\gg0$ and $F \in W[[t-1]]$, so $\bigl| \theta^{j+(p-1)p^m}F - \theta^{j+(p-1)p^n}F \bigr|_p \leq 1/p^{n+1}$.
For a positive integer $m$ we know from equation (\ref{x^m}) that
\[[x^m]F = \theta^m F.\]
The same formula can be obtained by direct computation also for $m$ negative. Furthermore, for any locally constant function $\phi \in \mathcal{C}(\mathbb{Z}_p,\W)$, the formula
\begin{equation} \label{x^mtheta_neg}
\int_{\mathbb{Z}_p^{\times}} \phi(x)x^m dF = {\theta ^m ([\phi]F)|} _{t=1}
\end{equation}
holds also for a negative integer $m$.
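To illustrate why the limit defining $\theta^j$ for negative $j$ is reasonable, consider (as a guiding computation) the action of $\theta$ on the monomials $t^a := \sum_{i\geq 0}\binom{a}{i}(t-1)^i \in W[[t-1]]$ for $a \in \mathbb{Z}_p$. One has $\theta t^a = a t^a$, hence $\theta^{j+(p-1)p^i} t^a = a^{j+(p-1)p^i} t^a$; if $a \in \mathbb{Z}_p^{\times}$, then $a^{(p-1)p^i} \to 1$ $p$-adically, so
\[
\theta^{j} t^a = \lim_{i\to\infty} a^{j+(p-1)p^i}\, t^a = a^{j} t^a
\]
also for negative $j$; in particular, $\theta^{j}$ inverts $\theta^{-j}$ on such monomials.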
\subsection{CM periods and de Rham cohomology classes}\label{6.3}
Recall that $A$ has a model defined over $\mathcal{V} := \W \cap K^{\ab}$.
Fix a non-vanishing global section $\omega_A$ of the line bundle $e\Omega_{A/\mathcal{V}}$ on $A$, defined over $\mathcal{V}$.
Define a $p$-adic period $\Omega_p \in \mathbb{C}_p$ by the rule
\[
\omega_{A} = \Omega_p \hat{\omega}_{A}
\]
where $\hat{\omega}_A$ denotes the section $\omega_P$ determined for $A$ as in the last lines of \S \ref{pmodform} (which depends on the $p^{\infty}$-level structure on $A$).
Now, take a finite extension $L$ of $\mathbb{Q}_p$ containing
$K_1$ and recall the fixed non-vanishing differential $\omega_{A}$ in $eH^0(A,\Omega_{A/L})$. We can see it as an element of $eH^1_{\dR}(A/L)$. This determines another element $\eta_A$, as in the last lines of \cite[\S 2.8]{Brooks}, so that the elements $\omega_A^i \eta_A^{2r-i}$ for $i= 0, \dots, 2r$ form a basis of $\Sym^{2r}eH^1_{\dR}(A/L)$.
We can arrange $\omega_A$ and $\eta_A$ in such a way that they correspond to the elements $\omega_E, \eta_E \in H^1_{\dR}(E/L)$ defined in \cite[\S 4.5]{CH}, through the isomorphism $eH^1_{\dR}(A/L) \cong H^1_{\dR}(E/L)$, keeping in mind that the elliptic curve $E$ is denoted by $A$ in \cite{CH}.
\subsection{A key formula} \label{keyform}
First we want to prove a formula relating the values at CM points of the operator $\theta$ applied to the modular form to our generalized Heegner cycles.
The proof of this formula is the same as the proof of \cite[(4.9)]{CH} but with \cite[Proposition 3.24 and Lemmas 3.23, 3.22]{BDP} replaced by \cite[Theorem 7.3, Lemma 8.6 and Proposition 7.4]{Brooks}.
Consider the statement of \cite[Proposition 7.4]{Brooks}. In that statement, $F$ is a finite extension of $\mathbb{Q}_p$, containing
$K_1$, such that the cycle $\Delta_{\varphi}$ associated with an isogeny $\varphi$ (as in \ref{GHcycles}) is defined over $F$; moreover, $\omega_f$ is defined as in \cite[\S 2.7]{Brooks} and can be seen as an element of $\Fil^{k-1}PH^{k-1}_{\dR}(\mathcal{A}^r/F)$ (as in \cite{Brooks}; see also \cite[Corollary 2.3]{BDP}).
Note that the map $\AJ_F$ in \cite[Proposition 7.4]{Brooks} is the Abel--Jacobi map
\[
\AJ_F : CH^{k-1}(\mathcal{X}_r/F)_{0} \longrightarrow \Bigl(\Fil^{k-1}\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F)\Bigr)^{\vee}
\]
defined in \cite[\S 6.3]{Brooks}, which is the composition of the usual Abel--Jacobi map
\[
CH^{k-1}(\mathcal{X}_r/F)_{0} \longrightarrow H^1_f\Bigl(F,\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1)\Bigr)
\]
with the Bloch--Kato logarithm map
\[
\log_{\BK}:H^1_f\Bigl(F,\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1)\Bigr) \longrightarrow
\frac{DR_F\Bigl(\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1)\Bigr)}{\Fil^0\Bigl(DR_F\bigl(\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1)\bigr)\Bigr)}
\]
for the Galois representation $V = \varepsilon H^{4r+1}_{\et}\bigl(\overline{\mathcal{X}}_r,\mathbb{Q}_p\bigr)(k-1)$.
Actually, the image of the $p$-adic Abel--Jacobi map is contained in the subgroup \[H^1_f(F,\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1))\] of $H^1(F,\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1))$, see \cite[Theorem 3.1]{Nek00}.
Since the comparison isomorphism
\[
\Phi:DR_F\bigl(\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)\bigr) \stackrel{\cong}\longrightarrow \varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F),
\]
for which we refer to
\cite{Fa}, is compatible with the filtrations, and since the Tate twist shifts the filtration, there
is a functorial isomorphism
\[
\Phi:\frac{DR_F\bigl(\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1)\bigr)}{\Fil^0 DR_F\bigl(\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1)\bigr)} \stackrel\cong\longrightarrow \frac{\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F)}{\Fil^{k-1}\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F)}.
\]
On the other hand, by Poincar\'e duality, there is an isomorphism
\begin{equation} \label{poincare-eq}
\frac{\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F)}{\Fil^{k-1}\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F)} \cong \Fil^{k-1} \varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F) ^{\vee}.
\end{equation}
Thus, writing $V := \varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Q}_p)(k-1)$ in this case, we can view $\log_{BK}$ as a map
\[
\log_{BK}: H^1_f(F,V) \stackrel{\cong}{\longrightarrow} \frac{DR_F(V)}{\Fil^0DR_F(V)} \cong \Fil^{k-1} \varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F) ^{\vee}.
\]
For more details, see \cite[\S 6.3]{Brooks}.
Recall the elements $\omega_A^i \eta_A^{2r-i}\in \Sym^{2r}eH^1_{\dR}(A/F)$ for $i= 0, \dots, 2r$. Then
\[
\omega_f \otimes \omega_A^i \eta_A^{2r-i} \in \Fil^{r+1}P H^{k-1}_{\dR}(\mathcal{A}^r/F) \otimes \Sym^{2r} eH^1_{\dR}(A/F) \cong \Fil^{k-1}\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F).
\]
See \cite{Brooks}, or \cite[pp. 1060--1062]{BDP} for an explanation in the case of modular curves.
\begin{rmk}
We can restate \cite[Proposition 7.4]{Brooks} as
\[
\big\langle\log_{BK}(\xi_{\varphi}),\omega_f \otimes \omega_A^i \eta_A^{2r-i}\big\rangle = d^i G_i(A',t',\omega'),
\]
where $\xi_{\varphi}$ is the image of $\Delta_{\varphi}$ under the usual Abel--Jacobi map.
Indeed, isomorphism \eqref{poincare-eq} sends $\log_{BK}(\xi_{\varphi}) \in \frac{\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F)}{\Fil^{k-1}\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/F)}$ to $\big\langle\log_{BK}(\xi_{\varphi}),-\big\rangle$.
\end{rmk}
\begin{rmk} \label{log}
We want to apply the Bloch--Kato logarithm to the localization at $\mathfrak{p}$ of the generalized Heegner classes $z_{\mathfrak{a}}$ (defined in \S \ref{GHclasses}), regarded as elements of $H^1(K_c, T \otimes \chi)$ via the map $\id \otimes e_{\chi}$ (defined in \S \ref{chi-comp}), and also to the classes $z_{\chi} \in H^1(K,T \otimes \chi)$ that will be defined in \eqref{z.chi}. Recall the choice of the prime $\mathfrak{p}$ of $K$ over $p$. If $M$ is a finite extension of $K$, then we write $M_{\mathfrak{p}}$ for the completion of $M$ at the prime over $\mathfrak{p}$ determined by the fixed embedding $i_p: \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}_p$ and denote by $\loc_{\mathfrak{p}}$ the localization map
\[
\loc_{\mathfrak{p}} : H^1(M,V) \longrightarrow H^1(M_{\mathfrak{p}}, V),
\]
for any representation $V$ of $G_M$.
Thus, denote by $\log_{\mathfrak{p}}$ the composition of the localization at $\mathfrak{p}$ with $\log_{BK}$, so that $\log_{\mathfrak{p}}(z_{\mathfrak{a}})$ (respectively, $\log_{\mathfrak{p}}(z_{\chi})$) is the image of the localization $ \loc_{\mathfrak{p}}(z_{\mathfrak{a}}) \in H^1_f(K_{c,\mathfrak{p}},T \otimes \chi)$ (respectively, of the localization $ \loc_{\mathfrak{p}}(z_{\chi}) \in H^1_f(K_{\mathfrak{p}},T \otimes \chi)$) via the Bloch--Kato logarithm.
Since $V_f$ can be realized as a quotient of $P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Z}_p)$, by the comparison isomorphism recalled above we get maps
\[
\begin{split}
&P H^{k-1}_{\dR}(\mathcal{A}^r/K_{c,\mathfrak{p}}) \cong \text{DR}_{K_{c,\mathfrak{p}}}\bigl(P H^{k-1}_{\et}(\overline{\mathcal{A}}^{r},\mathbb{Q}_p)\bigr) \longrightarrow \text{DR}_{K_{c,\mathfrak{p}}}(V_f),\\
&\Sym^{2r}eH^{1}_{\dR}(A/K_{c,\mathfrak{p}}) \cong \text{DR}_{K_{c,\mathfrak{p}}}\bigl(\Sym^{2r}eH^{1}_{\et}(\overline{A},\mathbb{Q}_p)\bigr).
\end{split}
\]
Then, we can view the elements $\omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1}$ as elements of $\Fil^{k-1}\text{DR}_{K_{c,\mathfrak{p}}}\bigl(V_f\otimes \Sym^{2r}eH^1_{\et}(\overline{A},\mathbb{Q}_p)\bigr)$, which we will denote in the same way.
Let $t\in B_{\dR}$ denote Fontaine's $p$-adic analogue of $2\pi i$ associated with the compatible sequence $\{i_p(\zeta_{p^n})\}$.
Hence, the elements $\omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1} \otimes t^{1-k}$ live in $\Fil^0\text{DR}_{K_{c,\mathfrak{p}}}\bigl(V_f\otimes \Sym^{2r}eH^1_{\et}(\overline{A},\mathbb{Q}_p)(k-1)\bigr)$, so we can view them in $\Fil^0\text{DR}_{K_{c,\mathfrak{p}}}\bigl(V_f(k/2)\otimes \chi\bigr)$.
Because of the functoriality of the logarithm and because the comparison isomorphism preserves the dualities, there is an equality
\[
\big\langle\log_{\mathfrak{p}}(z_{\mathfrak{a}}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1} \otimes t^{1-k}\big\rangle = \big\langle\log_{\mathfrak{p}}(\xi_{\phi_{\mathfrak{a}}}),\omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1}\big\rangle,
\]
where $\xi_{\phi_{\mathfrak{a}}}$ is the image of $\Delta_{\mathfrak{a}}$ through the usual Abel--Jacobi map $\AJ_{p,K_{c}}$, $\omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1} \otimes t^{1-k} \in \Fil^0\text{DR}_{K_{c,\mathfrak{p}}}\bigl(V_f(k/2)\otimes \chi\bigr)$, $\omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1} \in \Fil^{k-1} \varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/K_{c,\mathfrak{p}})$,
$\log_{\mathfrak{p}}(z_{\mathfrak{a}})$ is viewed in $\text{DR}_{K_{c,\mathfrak{p}}}\bigl(V_f(k/2)\otimes \chi\bigr)/\Fil^0\text{DR}_{K_{c,\mathfrak{p}}}\bigl(V_f(k/2)\otimes \chi\bigr)$ and $\log_{\mathfrak{p}}(\xi_{\phi_{\mathfrak{a}}})$ is viewed, as in \cite{Brooks}, in $\varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/K_{c,\mathfrak{p}})/\Fil^{k-1} \varepsilon H^{4r+1}_{\dR}(\mathcal{X}_r/K_{c,\mathfrak{p}})$.
\end{rmk}
Evaluating the $p$-adic $L$-function at the $p$-adic avatar of an anticyclotomic Hecke character of infinity type
$(k/2+j, -k/2-j)$ with $ -k/2<j<k/2$ and conductor $p^n\mathcal{O}_K$ with $n \geq 1$, we get $\theta^{-j-k/2}\hat{f}^{(p)} (x(c_0p^n)^{\sigma_{\mathfrak{a}}})$, where $x(c_0p^n)$ is the CM point defined in \S \ref{CMpoints}. We now want to relate this expression to the images of certain Heegner classes under the Abel--Jacobi map.
\begin{lem} \label{keylem}
Set $z^{(p)}_{\mathfrak{a}} := z_{\mathfrak{a}} - a_p p^{2j-1} z_{\mathfrak{a}\mathcal{O}_{c_0p^{n-1}}} + p^{4j+k-3} z_{\mathfrak{a}\mathcal{O}_{c_0p^{n-2}}}$. Then
\[
\begin{split}
\theta^{-j-k/2}\hat{f}^{(p)} (x(c_0p^n)^{\sigma_{\mathfrak{a}}}) = \frac{(c_0p^nN(\mathfrak{a}))^{-j-k/2+1}}{\Omega_p^{2j}(j+k/2-1)!}\cdot \big\langle\log_{\mathfrak{p}}(z^{(p)}_{\mathfrak{a}}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1} \otimes t^{1-k}\big\rangle.
\end{split}
\]
\end{lem}
\begin{proof}
We have
\[
\begin{split}
\theta^{-j-k/2}\hat{f}^{(p)}\bigl(x(c_0p^n)^{\sigma_{\mathfrak{a}}}\bigr) &= \theta^{-j-k/2}f^{(p)} (A_{c_0p^n}, \iota_{c_0p^n}, \nu_{N^+,c_0p^n}, \hat{\omega}_{c_0p^n})^{\sigma_{\mathfrak{a}}}\\
&= \left(\frac{1}{\Omega_p}\right)^{2j} \theta^{-j-k/2}f^{(p)} \bigl(\mathfrak{a} \star (A_{c_0p^n}, \iota_{c_0p^n}, \nu_{N^+,c_0p^n}, \omega_{c_0p^n})\bigr),
\end{split}
\]
as the form $\theta^{-j-k/2}f^{(p)}$ has weight $-2j$ and, by definition, $\hat{\omega}_{c_0p^n} = (1/\Omega_p) \omega_{c_0p^n}$, with $\omega_{c_0p^n}$ induced by $\omega_A$. Let $\tilde{g}_{j+k/2-1}^{(p)}$ be the $(j+k/2-1)$-st component of the Coleman primitive of $\omega_{f^{(p)}}$. Applying
\cite[Theorem 7.3]{Brooks} yields
\[
\begin{split}
\theta^{-j-k/2}\hat{f}^{(p)} (x(c_0p^n)^{\sigma_{\mathfrak{a}}}) &= \frac{1}{\Omega_p^{2j}(j+k/2-1)!} \cdot\tilde{g}_{j+k/2-1}^{(p)} \bigl(\mathfrak{a} \star (A_{c_0p^n}, \iota_{c_0p^n}, \nu_{N^+,c_0p^n}, \omega_{c_0p^n})\bigr).
\end{split}
\]
Writing $x_{\mathfrak{a}}$ for $\mathfrak{a} \star(A_{c_0p^n}, \iota_{c_0p^n}, \nu_{N^+,c_0p^n}, \omega_{c_0p^n}) = (A_{\mathfrak{a}}, \iota_{\mathfrak{a}}, \nu_{N^+,\mathfrak{a}}, \omega_{\mathfrak{a}})$, by
\cite[Lemma 8.6]{Brooks} one has
\[
\begin{split}
\theta^{-j-k/2}\hat{f}^{(p)}\bigl(x(c_0p^n)^{\sigma_{\mathfrak{a}}}\bigr) &= \frac{1}{\Omega_p^{2j}(j+k/2-1)!} \ \Big[ \ \tilde{g}_{j+k/2-1} (x_{\mathfrak{a}}) \\ &- \frac{a_pp^{-j-k/2}}{p^{-2j}} \ \tilde{g}_{j+k/2-1} (x_{\mathfrak{a}\mathcal{O}_{c_0p^{n-1}}}) \\ &+ \frac{1}{p^{-2j +1}} \ \tilde{g}_{j+k/2-1} (x_{\mathfrak{a}\mathcal{O}_{c_0p^{n-2}}}) \Big].
\end{split}
\]
Then, applying
\cite[Proposition 7.4]{Brooks}, and keeping Remark \ref{log} in mind, we get
\[
\begin{split}
&\theta^{-j-k/2}\hat{f}^{(p)}\bigl(x(c_0p^n)^{\sigma_{\mathfrak{a}}}\bigr) =\\
&=\frac{1}{\Omega_p^{2j}(j+k/2-1)!} \cdot \Big[ \ \frac{1}{(c_0p^{n}N(\mathfrak{a}))^{j+k/2-1}} \big\langle\log_{\mathfrak{p}}(z_{\mathfrak{a}}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1}\otimes t^{1-k}\big\rangle\\
&-\frac{a_p p^{j-k/2}}{(c_0p^{n-1}N(\mathfrak{a}))^{j+k/2-1}} \cdot \big\langle\log_{\mathfrak{p}}(z_{\mathfrak{a}\mathcal{O}_{c_0p^{n-1}}}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1}\otimes t^{1-k}\big\rangle \\
&+\frac{p^{2j-1}}{(c_0p^{n-2}N(\mathfrak{a}))^{j+k/2-1}} \cdot \big\langle\log_{\mathfrak{p}}(z_{\mathfrak{a}\mathcal{O}_{c_0p^{n-2}}}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1}\otimes t^{1-k}\big\rangle \Big].
\end{split}
\]
Finally, if we set
\[
z^{(p)}_{\mathfrak{a}} := z_{\mathfrak{a}} - a_p p^{2j-1} z_{\mathfrak{a}\mathcal{O}_{c_0p^{n-1}}} + p^{4j+k-3} z_{\mathfrak{a}\mathcal{O}_{c_0p^{n-2}}},
\]
then
we obtain
\[
\begin{split}
\theta^{-j-k/2}\hat{f}^{(p)}\bigl(x(c_0p^n)^{\sigma_{\mathfrak{a}}}\bigr) = \frac{(c_0p^nN(\mathfrak{a}))^{-j-k/2+1}}{\Omega_p^{2j}(j+k/2-1)!}\cdot\big\langle\log_{\mathfrak{p}}(z^{(p)}_{\mathfrak{a}}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1} \otimes t^{1-k}\big\rangle,
\end{split}
\]
as was to be shown. \end{proof}
\subsection{A $p$-adic Gross--Zagier formula}
In this section we finally state and prove the $p$-adic Gross--Zagier formula that we are interested in, which relates our $p$-adic $L$-function to the Bloch--Kato logarithm of generalized Heegner classes.
First, we define a cohomology class $z_{\chi} \in H^1(K,T \otimes \chi)$ associated with $f$ and $\chi$, which will be linked with the $p$-adic $L$-function by the $p$-adic Gross--Zagier type formula. Recall that $f\in S_k^{\text{new}}(\Gamma_0(N))$ is our fixed modular form and
$\chi: \Gal(K_{c_0p^{\infty}}/K) \rightarrow \mathcal{O}_F^{\times}$
is a locally algebraic anticyclotomic character of infinity type $(j,-j)$ with $-k/2 < j < k/2$ and conductor $c_0p^n\mathcal{O}_K$, where $c_0$ is prime to $pN^+$. Put
\begin{equation} \label{z.chi}
\begin{split}
z_{\chi} :&= \Cor_{K_{c_0p^n}/K}\, z_{c_0p^n,\chi}
= \sum_{\sigma \in \Gal(K_{c_0p^n}/K)} \sigma \cdot (\id \otimes e_{\chi})(z_{c_0p^n}\otimes \chi_t)\\
&= \sum_{\sigma \in \Gal(K_{c_0p^n}/K)} (\id \otimes e_{\chi})\bigl( \chi_t(\sigma) z_{c_0p^n}^{\sigma}\bigr)
= \sum_{\mathfrak{a} \in \Pic\mathcal{O}_{c_0p^n}} (\id \otimes e_{\chi})\bigl( \chi_t(\sigma_{\mathfrak{a}}) z_{c_0p^n}^{\sigma_{\mathfrak{a}}}\bigr)\\
&= \sum_{\mathfrak{a} \in \Pic\mathcal{O}_{c_0p^n}} \chi \chi_{\cyc}^{-r}(\sigma_{\mathfrak{a}})(\id \otimes e_{\chi})(z_{\mathfrak{a}}),
\end{split}
\end{equation}
where the last equality holds by Proposition \ref{4.5}.
In the statement of the following theorem, it is convenient to use the symbol $\doteq$ to indicate that the claimed equality holds up to an explicit non-zero multiplicative factor that is less relevant than the main terms. Recall that $p\mathcal O_K= \mathfrak{p} \overline{\mathfrak{p}}$ with $ \mathfrak{p}$ and $\overline{\mathfrak{p}}$ distinct maximal ideals of $\mathcal O_K$.
\begin{teor} \label{GZ}
Let $\psi$ be an anticyclotomic Hecke character of infinity type $(k/2, -k/2)$ and conductor $c_0\mathcal{O}_K$ with $(c_0,Np) = 1$. If $\hat{\phi} : \Gal(K_{c_0p^{\infty}}/K) \rightarrow \mathcal{O}_{\mathbb{C}_{p}}^{\times}$ is the $p$-adic avatar of an anticyclotomic Hecke character $\phi$ of infinity type
$(k/2+j, -k/2-j)$ with $ -k/2<j<k/2$ and conductor $p^n\mathcal{O}_K$, $n \geq 1$, then
\begin{displaymath}
\frac{\LL_{\mathfrak{p},\psi}(f)(\hat{\phi}^{-1})}{\Omega_p^{*}}\doteq
\big\langle\log_\mathfrak{p}(z_{\chi}),\omega_{f}\otimes \omega_A^{k/2 + j -1}\eta_A^{k/2 -j -1} \otimes t^{1-k}\big\rangle,
\end{displaymath}
where, as before, $\chi := \hat{\psi}^{-1}\hat{\phi}$ and $ \log_{\mathfrak{p}} := \log_{BK} \circ \loc_{\mathfrak{p}}$.
\end{teor}
\begin{proof}
By definition of $\LL_{f,\psi}$ (cf. Definition \ref{Lfct}), one has
\[
\LL_{f,\psi}(\hat{\phi}^{-1})
= \sum_{\Pic\mathcal{O}_{c_0}} \psi(\mathfrak{a})N(\mathfrak{a})^{-k/2} \int_{\mathbb{Z}_p^{\times}} \psi_{\mathfrak{p}}\hat{\phi}^{-1} \big|[\mathfrak{a}]d\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}.
\]
Here $\hat{\phi}^{-1} \big|[\mathfrak{a}] : \mathbb{Z}_p^{\times} \rightarrow \mathbb{C}_p^{\times}$ is given by $\bigl(\hat{\phi}^{-1}\big|[\mathfrak{a}]\bigr)(x):= \hat{\phi}^{-1}(x) \hat{\phi}^{-1}(\mathfrak{a})$, where $x$ is viewed as an element of $\hat{K}^{\times}$ via the chosen embedding $\mathbb{Z}_p^{\times} \cong \mathcal{O}_{K,\mathfrak{p}} \hookrightarrow K_{\mathfrak{p}}^{\times} \hookrightarrow \hat{K}^{\times}$. It follows that
\[
\bigl(\hat{\phi}^{-1}\big|[\mathfrak{a}]\bigr)(x)= \phi^{-1}(x)x_{\mathfrak{p}}^{-k/2-j}x_{\overline{\mathfrak{p}}}^{k/2+j} \phi^{-1}(a) = \phi^{-1}_{\mathfrak{p}}(x)x^{-k/2-j} \phi^{-1}(a),
\]
as $a \in \hat{K}^{(c_0p)\times}$ satisfies $a\hat{\mathcal{O}}_K\cap K = \mathfrak{a}$. Therefore
\[
\begin{split}
\LL_{f,\psi}(\hat{\phi}^{-1})
&= \sum_{\Pic(\mathcal{O}_{c_0})} \psi(\mathfrak{a})\phi^{-1}(\mathfrak{a})N(\mathfrak{a})^{-k/2} \int_{\mathbb{Z}_p^{\times}} \psi_{\mathfrak{p}}\phi^{-1}_{\mathfrak{p}}(x)x^{-k/2-j} d\mu_{\hat{f}^{(p)}_{\mathfrak{a}}}.
\end{split}
\]
Then, in light of (\ref{x^mtheta_neg}), we can bring out the differential operator $\theta = t_{\mathfrak{a}} \frac{d}{dt_{\mathfrak{a}}}$ from the integral of $x^{-k/2-j}$ and, using the fact that $x(\mathfrak{a})$ is the canonical lifting of $x(\mathfrak{a})\otimes_{\W} \overline{\mathbb{F}}_p$, we get
\[
\begin{split}
\LL_{f,\psi}(\hat{\phi}^{-1}) = \sum_{\Pic\mathcal{O}_{c_0}} \psi(\mathfrak{a})\phi^{-1}(\mathfrak{a})N(\mathfrak{a})^{-k/2} \theta^{-k/2-j} [\psi_{\mathfrak{p}}\phi^{-1}_{\mathfrak{p}}]\hat{f}^{(p)}_{\mathfrak{a}}\bigl(t_{\mathfrak{a}}(x_{\mathfrak{a}})\bigr),
\end{split}
\]
where $t_{\mathfrak{a}}$ is the Serre--Tate coordinate around the reduction $x(\mathfrak{a})\otimes_{\W} \overline{\mathbb{F}}_p$.
Setting $\xi := \psi^{-1} \phi$, then we have
\[
\begin{split}
\LL_{f,\psi}(\hat{\phi}^{-1}) &=\sum_{\Pic(\mathcal{O}_{c_0})} \psi(\mathfrak{a})\phi^{-1}(\mathfrak{a})N(\mathfrak{a})^{-k/2} [\psi_{\mathfrak{p}}\phi^{-1}_{\mathfrak{p}}]\theta^{-k/2-j}\hat{f}^{(p)}_{\mathfrak{a}}\bigl(t_{\mathfrak{a}}(x_{\mathfrak{a}})\bigr)\\
&= \sum_{\Pic(\mathcal{O}_{c_0})} \xi^{-1}(\mathfrak{a})N(\mathfrak{a})^{-k/2} [\xi^{-1}_{\mathfrak{p}}]\theta^{-k/2-j}\hat{f}^{(p)}_{\mathfrak{a}}\bigl(t_{\mathfrak{a}}(x_{\mathfrak{a}})\bigr).
\end{split}
\]
Note that, since $\xi^{-1}$ is a character of conductor $c_0p^n\mathcal{O}_K$, $\xi_{\mathfrak{p}}^{-1}$ is a primitive Dirichlet character mod $p^n$ via the isomorphisms $(\mathbb{Z}/p^n\mathbb{Z})^{\times} \cong(\mathcal{O}_{K,\mathfrak{p}}/\mathfrak{p}^n)^{\times} \cong \mathcal{O}_{K,\mathfrak{p}}^{\times}/(1 + \mathfrak{p}^n)$.
By Proposition \ref{3.3}, since $(N(\mathfrak{a})\sqrt{-D_K})^{k/2+j}(\theta^{-j-k/2}\hat{f}^{(p)})_{\mathfrak{a}} = \theta^{-j-k/2}\hat{f}^{(p)}_{\mathfrak{a}}$, we obtain
\[
\begin{split}
\LL_{f,\psi}(\hat{\phi}^{-1}) &=\sum_{\Pic(\mathcal{O}_{c_0})} \xi^{-1}(\mathfrak{a})N(\mathfrak{a})^{-k/2}
\bigl(N(\mathfrak{a})\sqrt{-D_K}\bigr)^{k/2+j} p^{-n}G(\xi_{\mathfrak{p}}^{-1})\\
&\cdot\sum_{u\in (\mathbb{Z}/p^n\mathbb{Z})^{\times}} \xi_{\mathfrak{p}}(u) \theta^{-k/2-j} \hat{f}^{(p)}\bigl(x_{\mathfrak{a}}\star \alpha(u/p^n)\bigr).\\
\end{split}
\]
For positive exponents one obtains $\bigl(N(\mathfrak{a})\sqrt{-D_K}\bigr)^{-m}(\theta^{m}\hat{f}^{(p)})_{\mathfrak{a}} = \theta^{m}\hat{f}^{(p)}_{\mathfrak{a}}$ by an easy computation; because $\theta^{m} = \lim_{i \to \infty} \theta^{m+(p-1)p^i}$, one has the same formula for negative exponents.
Now, as in the proof of \cite[Theorem 4.9]{CH} and applying Lemma \ref{keylem}, we obtain
\[
\begin{split}
\frac{\LL_{f,\psi}(\hat{\phi}^{-1})}{\Omega_p^{-2j}} &=\frac{c_0^{-j-k/2+1}(\sqrt{-D_K})^{k/2+j} p^{n(-j-k/2)}G(\xi_{\mathfrak{p}}^{-1})\chi_{\mathfrak{p}}(p^n)}{(j+k/2-1)!}\\
&\cdot \sum_{\Pic(\mathcal{O}_{c_0p^n})} \chi \chi_{\cyc}^{1-k/2}(\sigma_{\mathfrak{a}})
\big\langle\log_{\mathfrak{p}}(z^{(p)}_{\mathfrak{a}}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1}\otimes t^{1-k}\big\rangle.
\end{split}
\]
Finally, by the argument in \cite[Theorem 4.9]{CH} and by (\ref{z.chi}), we get
\[
\frac{\LL_{f,\psi}(\hat{\phi}^{-1})}{\Omega_p^{-2j}}= C'\cdot
\big\langle\log_{\mathfrak{p}}(z_{\chi}), \omega_f \otimes \omega_A^{j+k/2-1}\eta_A^{k/2 - j -1} \otimes t^{1-k}\big\rangle,
\]
with
\[
C':=\frac{c_0^{-j-k/2+1}(\sqrt{-D_K})^{k/2+j} p^{n(-j-k/2)}G(\xi_{\mathfrak{p}}^{-1})\chi_{\mathfrak{p}}(p^n)}{(j+k/2-1)!},
\]
and the theorem is proved.
\end{proof}
\section{Reciprocity law and Selmer groups}
In this last section we want to extend to our setting the reciprocity law of \cite[Theorem 5.7]{CH}, relaxing the Heegner hypothesis and making use of generalized Heegner cycles on generalized Kuga--Sato varieties over Shimura curves and our $p$-adic $L$-function. This result will be important to prove (under certain assumptions) the vanishing of the Selmer group associated with the twisted representation $V_{f,\chi} := V_f(k/2) \otimes \chi$.
\subsection{The algebraic anticyclotomic $p$-adic $L$-function $\mathcal{L}_{\mathfrak{p},\psi}$}
We now construct an algebraic $p$-adic $L$-function as a sort of image of some Iwasawa cohomology class, coming from generalized Heegner classes, under a big logarithm map.
Assume for this section that our modular form $f\in S_k^{new}(\Gamma_0(N))$ is $p$-ordinary, i.e., that the $p$-th Fourier coefficient $a_p$ is a unit of $\mathcal{O}_F$.
\subsubsection{Perrin-Riou's big logarithm}
Let $G$ be a commutative compact $p$-adic Lie group and $L$ a complete discretely valued extension of $\mathbb{Q}_p$.
Recall that a \emph{$p$-adic Lie group} is a group $G$ endowed with a structure of a manifold over $\mathbb{Q}_p$ such that the group operation is locally analytic. For the definitions of a manifold over $\mathbb{Q}_p$ and of a locally analytic map, the reader is referred to \cite{Schneider}.
Consider the noetherian topological
$\mathcal{O}_L$-algebra $\mathcal{O}_L \llbracket G \rrbracket$; if $L/\mathbb{Q}_p$ is a finite extension, then this algebra is compact. Put $\Lambda_{L}(G) := L \otimes_{\mathcal{O}_L} \mathcal{O}_L \llbracket G \rrbracket$, which is also noetherian; it is isomorphic to
the continuous dual of the space $C(G, L)$ of continuous $L$-valued functions on $G$ (cf. \cite[\S 2.2]{LZ14}).
Now let
$\mathcal{H}_{L}(G)$ denote the space of $L$-valued locally analytic distributions on $G$, i.e., the continuous
dual of the space $C^{la}(G, L)$ of $L$-valued locally analytic functions on $G$.
There is an injective algebra homomorphism \[\Lambda_L(G) \mbox{\;$\lhook\joinrel\longrightarrow$\;} \mathcal{H}_L(G)\]
(see \cite[Proposition 2.2.7]{Eme}), dual to the dense inclusion $C^{la}(G, L) \hookrightarrow C(G, L)$. We
endow $\mathcal{H}_L(G)$ with its natural topology as an inverse limit of Banach spaces, with
respect to which the map $\Lambda_L(G) \hookrightarrow \mathcal{H}_L(G)$ is continuous.
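To make these objects concrete, consider the basic case $G \cong \mathbb{Z}_p$; the description below is standard (it goes back to Amice) and is included only for orientation. The Amice transform gives an isomorphism
\[
\Lambda_L(\mathbb{Z}_p) \cong L \otimes_{\mathcal{O}_L} \mathcal{O}_L\llbracket T \rrbracket, \qquad [1] \longmapsto 1+T,
\]
under which $\mathcal{H}_L(\mathbb{Z}_p)$ is identified with the ring of power series in $L\llbracket T \rrbracket$ that converge on the open unit disc; the inclusion $\Lambda_L(\mathbb{Z}_p) \hookrightarrow \mathcal{H}_L(\mathbb{Z}_p)$ then corresponds to the inclusion of power series with bounded coefficients into convergent ones.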
If $L$ is a finite unramified extension of $\mathbb{Q}_p$ and $G$ is the Galois group of a $p$-adic Lie extension $L_{\infty}=\cup_n L_n$ with $L_n/L$ finite and Galois, then define the Iwasawa cohomology group
\begin{displaymath}
H^1_{\Iw}(L_{\infty},V):=\Bigl(\varprojlim_n H^1(L_n,T)\Bigr)\otimes_{\mathbb{Z}_p} \mathbb{Q}_p,
\end{displaymath}
where $V$ is a $p$-adic $G_L$-representation and $T$ is a Galois-stable lattice in $V$. The definition is independent of the choice of $T$: any two Galois-stable lattices are commensurable, so the corresponding inverse limits become canonically isomorphic after tensoring with $\mathbb{Q}_p$.
Now let $\hat{F}^{\text{ur}}$ denote the composite of $\hat{\mathbb{Q}}_p^{\text{ur}}$ with a finite extension $F$ of $\mathbb{Q}_p$.
Suppose that $V$ is a crystalline $F$-representation of $G_L$ with non-negative Hodge--Tate weights (for us the $p$-adic cyclotomic character has Hodge--Tate weight $+1$) and that $V$ has no quotient isomorphic to the trivial representation. Let $\mathfrak{F}$ be a relative height one Lubin--Tate formal group over $\mathcal{O}_L/\mathbb{Z}_p$
and let $\Gamma := \Gal(L(\mathfrak{F}_{p^{\infty}})/L) \cong \mathbb{Z}_p^{\times}$.
Let $B_{\crys}$ be Fontaine's crystalline period ring and put $\mathbf{D}_{\crys,L}(V):= (V \otimes_{\mathbb{Q}_p} B_{\crys})^{G_L}$. Assume that $V^{G_{L(\mathfrak{F}_{p^\infty})}} = 0$.
\begin{teor} \label{5.1}
There exists a $\mathbb{Z}_p\llbracket \Gamma \rrbracket$-linear map
\begin{displaymath}
\mathcal{L}_V : H^1_{\Iw}(L(\mathfrak{F}_{p^{\infty}}),V) \longrightarrow \mathcal{H}_{\hat{F}^{ur}}(\Gamma) \otimes_L \mathbf{D}_{\crys,L}(V)
\end{displaymath}
such that for any $\mathbf{z} \in H^1_{\Iw}(L(\mathfrak{F}_{p^{\infty}}),V)$ and any locally algebraic character $\chi : \Gamma \rightarrow \overline{\mathbb{Q}}_p^{\times}$ of Hodge--Tate weight $j$ and conductor $p^n$ there is an equality
\begin{displaymath}
\mathcal{L}_V(\mathbf{z})(\chi)= \varepsilon(\chi^{-1}) \cdot \frac{\Phi^n P(\chi^{-1},\Phi)}{P(\chi,p^{-1}\Phi^{-1})} \cdot
\begin{cases}
\frac{(-1)^{-j-1}}{(-j-1)!} \cdot \log_{L,V \otimes \chi^{-1}} (\mathbf{z^{\chi^{-1}}})\otimes t^{-j} & \text{if}\ j<0, \\[2mm]
j! \cdot \exp^*_{L,(V \otimes \chi^{-1})^*(1)} (\mathbf{z^{\chi^{-1}}})\otimes t^{-j} & \text{if}\ j \geq 0,
\end{cases}
\end{displaymath}
where
\begin{itemize}
\item $\varepsilon(\chi^{-1})$ and $P(\chi^{\pm1},-)$ are the $\varepsilon$-factor and the $L$-factor (see \cite[p. 8]{LZ14});
\item $\Phi$ denotes the crystalline Frobenius operator on $\overline{\mathbb{Q}}_p \otimes_L \mathbf{D}_{\crys,L}(V)$ acting trivially on the first factor;
\item $\mathbf{z^{\chi^{-1}}} \in H^1(L,V \otimes \chi^{-1})$ is the specialization of $\mathbf{z}$ at $\chi^{-1}$.
\end{itemize}
\end{teor}
\begin{proof} This is \cite[Theorem 5.1]{CH}. \end{proof}
In the statement of this result, $\exp^*_{L,(V \otimes \chi^{-1})^*(1)}$ is the dual exponential map
\[
\exp^*_{L,(V \otimes \chi^{-1})^*(1)}: H^1(L,V \otimes \chi^{-1}) \longrightarrow \Fil^0DR_{L}(V \otimes \chi^{-1}),
\]
the adjoint of the Bloch--Kato exponential map of $(V \otimes \chi^{-1})^*(1)$.
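For the reader's convenience, we recall the standard characterization of the dual exponential from Bloch--Kato theory; we state it only as background. For a de Rham representation $W$ of $G_L$, the Bloch--Kato exponential $\exp_{L,W} : DR_L(W)/\Fil^0DR_L(W) \rightarrow H^1(L,W)$ is the connecting map arising from the fundamental exact sequence
\[
0 \longrightarrow \mathbb{Q}_p \longrightarrow B_{\crys}^{\varphi = 1} \oplus B_{\dR}^+ \longrightarrow B_{\dR} \longrightarrow 0,
\]
and the dual exponential $\exp^*_{L,W^*(1)} : H^1(L,W) \rightarrow \Fil^0DR_L(W)$ is its adjoint: for all $x \in H^1(L,W)$ and $y \in DR_L(W^*(1))/\Fil^0DR_L(W^*(1))$ one has
\[
\big\langle \exp^*_{L,W^*(1)}(x), y \big\rangle_{\dR} = \big\langle x, \exp_{L,W^*(1)}(y) \big\rangle_{L},
\]
where the left-hand pairing is induced by de Rham duality and the right-hand one is the local Tate pairing. Here this applies with $W = V \otimes \chi^{-1}$.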
We will apply Theorem \ref{5.1} to some representation $\mathscr{F}^+V$ attached to a twist of $V_f(k/2)$ to obtain a map $\mathcal{L}_{\mathscr{F}^+V}$.
Let $\psi$ be an anticyclotomic Hecke character of infinity type $(k/2,-k/2)$ and conductor $c\mathcal{O}_K$ with $p \nmid c$ and let $\hat{\psi}: \Gal(K_{cp^{\infty}}/K) \rightarrow \mathbb{C}_p^{\times}$ be its $p$-adic avatar. Let $F$ be a finite extension of $\mathbb{Q}_p$ containing the Fourier coefficients of $f$ and the image of $\hat{\psi}$, so $\hat{\psi} : \Gal(K_{cp^{\infty}}/K) \rightarrow \mathcal{O}_F^{\times}$.
Since $p \nmid N$, if $V_f$ is the $F$-linear Galois representation of $G_{\mathbb{Q}}$ associated with $f$, then $V_f\mid_{G_{\mathbb{Q}_p}}$ is crystalline.
Because $f$ is $p$-ordinary, there is an exact sequence of $G_{\mathbb{Q}_p}$-modules
\begin{displaymath}
0 \longrightarrow \mathscr{F}^+V_f \longrightarrow V_f \longrightarrow \mathscr{F}^-V_f \longrightarrow 0
\end{displaymath}
with $\mathscr{F}^{\pm}V_f \cong F$ and $\mathscr{F}^+V_f$ unramified (see \cite[Theorem 2.1.4]{Wiles} or \cite[\S 12.5.3]{NekSC}).
Recall that $T \subseteq V_f(k/2)$ is a Galois-stable lattice; set
\begin{displaymath}
\begin{split}
\mathscr{F}^+T &:= \mathscr{F}^+V_f(k/2) \cap T;\\
V &:= V_f(k/2) \otimes \hat{\psi}_{\mathfrak{p}}^{-1},\quad \text{where}\ \hat{\psi}_{\mathfrak{p}} := \hat{\psi} \mid_{K_{\mathfrak{p}}};\\
\mathscr{F}^{\pm}V &:= \mathscr{F}^{\pm}V_f(k/2) \otimes \hat{\psi}_{\mathfrak{p}}^{-1}.
\end{split}
\end{displaymath}
Consider the dual representation $V^* := \Hom_F(V,F)$ of $V$ and, with notation as above, define $\mathscr{F}^{\pm}V^* := \Hom_F(\mathscr{F}^{\mp}V, F)$.
Let $L_{\infty}/L$ be the $\mathfrak{p}$-adic completion of $K_{cp^{\infty}}/K_c$.
The big logarithm $\mathcal{L}_{\mathscr{F}^+V}$, obtained applying Theorem \ref{5.1} to $\mathscr{F}^+V$ as a representation of $G_L$, is a map
\begin{displaymath}
\mathcal{L}_{\mathscr{F}^+V} : H^1_{\Iw}\bigl(L(\mathfrak{F}_{p^{\infty}}),\mathscr{F}^+V\bigr) \longrightarrow \mathcal{H}_{\hat{F}^{\text{ur}}}\bigl(\Gal(L(\mathfrak{F}_{p^{\infty}})/L)\bigr) \otimes_L \mathbf{D}_{\crys,L}(\mathscr{F}^+V).
\end{displaymath}
Since $L_{\infty} \subseteq L(\mathfrak{F}_{p^{\infty}})$, we can restrict $\mathcal{L}_{\mathscr{F}^+V}$ to the Galois group $\Gamma:= \Gal(L_{\infty}/L) \cong \Gal(K_{cp^{\infty}}/K_c)$ to obtain a map
\begin{displaymath}
H^1_{\Iw}(L_{\infty},\mathscr{F}^+V) \longrightarrow \mathcal{H}_{\hat{F}^{\text{ur}}}(\Gamma) \otimes_L \mathbf{D}_{\crys,L}(\mathscr{F}^+V).
\end{displaymath}
Recall the element $\omega_f \in DR_L(V_f)$ attached to $f$ as in \S \ref{keyform}.
Let $t\in B_{\dR}$ denote again Fontaine's $p$-adic analogue of $2\pi i$ associated with the compatible sequence $\{i_p(\zeta_{p^n})\}$.
Define the class
\begin{displaymath}
\omega_{f,\psi} := \omega_f \otimes t^{-k} \otimes \omega_{\psi} \in \mathbf{D}_{\crys,L}(V^*(1)),
\end{displaymath}
where $\omega_{\psi}\in\mathbf{D}_{\crys,L}(\hat{\psi}_{\mathfrak{p}}(-k/2))$ is as in \cite[\S 5.3]{CH}.
Denote again by $\omega_{f,\psi}$ its image under the projection
$\mathbf{D}_{\crys,L}(V^*(1)) \twoheadrightarrow \mathbf{D}_{\crys,L}(\mathscr{F}^-V^*(1))$.
There is a pairing
\begin{displaymath}
\braket{-,-} : \mathcal{H}_{\hat{F}^{\text{ur}}}(\Gamma) \otimes_L \mathbf{D}_{\crys, L}(\mathscr{F}^+V) \times \mathbf{D}_{\crys, L}(\mathscr{F}^-V^*(1)) \longrightarrow \mathcal{H}_{\hat{F}^{\text{ur}}}(\Gamma).
\end{displaymath}
Recall that $\mathbf{D}_{\crys, L}(\mathscr{F}^+V) = (B_{\crys} \otimes_{\mathbb{Q}_p} \mathscr{F^+}V)^{G_{L}}$ and $\mathbf{D}_{\crys, L}(\mathscr{F}^-V^*(1)) = (B_{\crys} \otimes_{\mathbb{Q}_p} \mathscr{F^-}V^*(1))^{G_{L}}$.
Finally, the composition of $\mathcal{L}_{\mathscr{F}^+V}$ with the map
\begin{displaymath}
\braket{-, \omega_{f,\psi}}: \mathcal{H}_{\hat{F}^{\text{ur}}}(\Gamma) \otimes_L \mathbf{D}_{\crys, L}(\mathscr{F}^+V) \longrightarrow \mathcal{H}_{\hat{F}^{\text{ur}}}(\Gamma)
\end{displaymath}
has image contained in the Iwasawa algebra $\Lambda_{\hat{F}^{\text{ur}}}(\Gamma) := \mathcal{O}_{\hat{F}^{\text{ur}}}\llbracket \Gamma \rrbracket \otimes \hat{F}^{\text{ur}}$. For details, see \cite[Lemma 5.5]{CH}.
\subsubsection{Iwasawa classes associated with generalized Heegner classes}
Consider the Iwasawa cohomology group
\begin{displaymath}
H^1_{\Iw}(K_{cp^{\infty}},T):=\biggl(\varprojlim_n H^1(\Gal(K'/K_{cp^n}),T)\biggr)\otimes_{\mathbb{Z}_p} \mathbb{Q}_p,
\end{displaymath}
where $K'$ is the maximal extension of $K$ unramified outside the primes above $pNc$ (the representation $T$ is unramified outside the primes above $pN$). Let $\alpha$ denote the root of the Hecke polynomial $x^2 -a_px+p^{k-1}$ that is a $p$-adic unit. For each fractional ideal $\mathfrak{a}$ of $\mathcal{O}_c$ prime to $cNpD_K$, recall the cohomology class $z_{\mathfrak{a}}$ introduced in \S \ref{GHclasses} and define the class
\[
z_{\mathfrak{a},\alpha} := \begin{cases}
z_{\mathfrak{a}} - \frac{p^{k-2}}{\alpha} \cdot z_{\mathfrak{a}\mathcal{O}_{c/p}} \ &\text{if}\ p\,|\,c\\[3mm]
\frac{1}{\# \mathcal{O}_c^{\times}} \left( 1 - \frac{p^{k/2 -1}}{\alpha} \sigma_{\mathfrak{p}}\right) \left( 1 - \frac{p^{k/2 -1}}{\alpha} \sigma_{\overline{\mathfrak{p}}}\right) \cdot z_{\mathfrak{a}}\ &\text{if}\ p \nmid c,
\end{cases}
\]
which lives in $H^1\bigl(K_c,T \otimes S(E)\bigr)$. Here, $\sigma_{\mathfrak{p}}, \sigma_{\overline{\mathfrak{p}}} \in \Gal(K_c/K)$ are the Frobenius elements of $\mathfrak{p}$ and $\overline{\mathfrak{p}}$. By Proposition \ref{4.4}, one knows that
\[
\Cor_{K_{cp}/K_c}(z_{cp, \alpha}) = \alpha \cdot z_{c,\alpha}.
\]
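Note that the $p$-ordinarity of $f$ guarantees that such a root $\alpha$ exists and is unique; the following standard Hensel's lemma argument justifies this. Since $a_p \in \mathcal{O}_F^{\times}$ and $k \geq 2$, the reduction of the Hecke polynomial modulo the maximal ideal of $\mathcal{O}_F$ is $x(x - \overline{a}_p)$, which has distinct roots, so Hensel's lemma yields a factorization
\[
x^2 - a_px + p^{k-1} = (x - \alpha)(x - \beta) \quad \text{in}\ \mathcal{O}_F[x],
\]
with $\alpha \equiv a_p$ modulo the maximal ideal of $\mathcal{O}_F$ and $v_p(\beta) = k-1$; hence $\alpha$ is the unique unit root.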
Now consider the projection $e_{\chi}$ for $\chi = \boldsymbol{1}$ and write $z_{\mathfrak{a}, \alpha,\boldsymbol{1}}$ for the image of $z_{\mathfrak{a},\alpha}$ under $\id \otimes e_{\boldsymbol{1}} : H^1\bigl(K_c,T \otimes S(E)\bigr) \rightarrow H^1(K_c,T)$.
Thus, it makes sense to consider the element
\[
\boldsymbol{z}_{c, \alpha} := \varprojlim_n \alpha^{-n} z_{cp^n,\alpha, \boldsymbol{1}}
\]
in the Iwasawa cohomology group $H^1_{\Iw}(K_{cp^{\infty}},T)$.
There is an isomorphism
\begin{displaymath}
H^1_{\Iw}(K_{cp^{\infty}}, T) \cong H^1\bigl(K_c, T \otimes \mathcal{O}_F\llbracket \Gamma \rrbracket\bigr),
\end{displaymath}
where $\Gamma := \Gal(K_{cp^{\infty}}/K_c)$ (see the proof of \cite[Proposition 2.4.2]{LZ16}).
Put $\tilde{\Gamma}_{c} := \Gal(K_{cp^{\infty}}/K)$ and consider the map
\begin{displaymath}
H^1_{\Iw}(K_{cp^{\infty}}, T) \cong H^1\bigl(K_c, T \otimes \mathcal{O}_F\llbracket \Gamma \rrbracket\bigr) \longrightarrow H^1\bigl(K_c, T \otimes \mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket\bigr);
\end{displaymath}
we can view the classes $\boldsymbol{z}_{c,\alpha}$ as elements of $H^1\bigl(K_c, T \otimes \mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket\bigr)$. Then set
\begin{equation} \label{zf}
\boldsymbol{z}_f := \Cor_{K_c/K}(\boldsymbol{z}_{c,\alpha}) \in H^1\bigl(K, T \otimes \mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket\bigr),
\end{equation}
where the subscript $f$ is meant as a reminder that this class, like the others defined so far, depends on $f$.
For any character $\chi: \tilde{\Gamma}_{c} \rightarrow \mathcal{O}_{\mathbb{C}_p}^{\times}$, we can consider the twist $\boldsymbol{z}_f ^{\chi} \in H^1(K,T \otimes \chi)$ of $\boldsymbol{z}_f$ through the $\chi$-specialization map
\[
H^1\bigl(K,T \otimes \mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket\bigr) \longrightarrow H^1\Bigl(K,T \otimes \mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket \otimes_{\mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket} \chi\Bigr) = H^1(K,T \otimes \chi),
\]
where $\chi$ is extended to $\chi: \mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket \rightarrow \mathcal{O}_{F}$ in the obvious way, possibly enlarging $F$ by adding the image of $\chi$.
Suppose that $\chi$ is non-trivial, of finite order and with conductor $cp^n$; then
\begin{equation} \label{zchi}
\boldsymbol{z}_f^{\chi} = \alpha^{-n} z_{\chi},
\end{equation}
where $z_{\chi} \in H^1(K,T \otimes \chi)$ is as in (\ref{z.chi}). See \cite[Lemma 5.4]{CH} for details.
\subsubsection{The algebraic anticyclotomic $p$-adic $L$-function} \label{algL}
We want to apply the logarithm map $\mathcal{L}_{\mathscr{F}^+V}$ to the localization at $\mathfrak{p}$ of the classes $\boldsymbol{z}_{c,\alpha} \otimes \hat{\psi}^{-1}$, so we need to check that these classes actually lie in $H^1_{\Iw}(L_{\infty},\mathscr{F}^+V) = H^1_{\Iw}(K_{cp^{\infty},\mathfrak{p}},\mathscr{F}^+V) \hookrightarrow H^1_{\Iw}(K_{cp^{\infty},\mathfrak{p}},V)$.
Similarly to what we said in \S \ref{keyform}, by \cite{Nek00} one knows that $z_{\mathfrak{a}}$ lies in the Bloch--Kato Selmer group $\Sel(K_{c}, T \otimes S(E))$; indeed, $z_{\mathfrak{a}}$ is the image, under a morphism of $G_{K_{c}}$-modules, of a cohomology class in $\Sel(K_c,\varepsilon H^{4r+1}_{\et}(\overline{\mathcal{X}}_r,\mathbb{Z}_p)(k-1))$, namely the Abel--Jacobi image of the generalized Heegner cycle $\Delta_{\mathfrak{a}}$. Recall that the Bloch--Kato Selmer group $\Sel(F,M)$ of a $G_F$-representation $M$, where $F$ is a number field, is the subspace of elements $x \in H^1(G_F,M)$ such that, for every finite place $v$ of $F$, the localization $\loc_v(x)$ lies in $H^1_f(F_v,M)$. See \S \ref{Sel} for more precise definitions.
Thus, $z_{cp^n,\alpha,\mathbf{1}} \in \Sel(K_{cp^n},T)$ as well, so $\loc_{\mathfrak{p}}(z_{cp^n,\alpha,\mathbf{1}}) \in H^1_f(K_{cp^n,\mathfrak{p}},T)$.
But $H^1_f(K_{cp^n,\mathfrak{p}},T)$ is identified with the image of the map $H^1(K_{cp^n,\mathfrak{p}},\mathscr{F}^+T) \rightarrow H^1(K_{cp^n,\mathfrak{p}},T)$ (cf. \cite[\S 5.5]{CH}), so we can view $\loc_{\mathfrak{p}}(z_{cp^n,\alpha,\mathbf{1}}) \in H^1(K_{cp^n,\mathfrak{p}},\mathscr{F}^+T)$.
Since $\boldsymbol{z}_{c,\alpha}$ is the inverse limit of the classes $\alpha^{-n} z_{cp^n,\alpha,\mathbf{1}}$, one has
\begin{displaymath}
\loc_{\mathfrak{p}}(\boldsymbol{z}_{c,\alpha}) \in
H^1_{\Iw}(L_{\infty},\mathscr{F}^+T).
\end{displaymath}
We conclude that
\begin{displaymath}
\loc_{\mathfrak{p}}(\boldsymbol{z}_{c,\alpha}\otimes\hat{\psi}^{-1}) \in H^1_{\Iw}(L_{\infty},\mathscr{F}^+V).
\end{displaymath}
Now, using notation similar to that in \cite{CH}, we can define
\begin{displaymath}
\begin{split}
\mathcal{L}^*(\boldsymbol{z}_f \otimes \hat{\psi}^{-1}) &:= \Cor_{K_c/K}\bigl(\mathcal{L}_{\mathscr{F}^+V}(\loc_{\mathfrak{p}}(\boldsymbol{z}_{c,\alpha}\otimes\hat{\psi}^{-1}))\bigr)\\
&= \sum_{\sigma \in \tilde{\Gamma}_c/\Gamma} \mathcal{L}_{\mathscr{F}^+V}(\loc_{\mathfrak{p}}(\boldsymbol{z}^{\sigma}_{c,\alpha}\otimes\hat{\psi}^{-1}))\hat{\psi}(\sigma^{-1})
\in \mathbf{D}_{\crys,L}(\mathscr{F}^+V) \otimes \Lambda_{\hat{F}^{\text{ur}}}(\tilde{\Gamma}_c),
\end{split}
\end{displaymath}
where $\Lambda_{\hat{F}^{\text{ur}}}(\tilde{\Gamma}_c) = \mathcal{O}_{\hat{F}^{\text{ur}}}\llbracket \tilde{\Gamma}_c \rrbracket \otimes \hat{F}^{\text{ur}}$.
Finally, consider the restriction
\begin{equation}
\mathcal{L}_{\psi}(\boldsymbol{z}_f) := \Res_{K_{p^{\infty}}}(\mathcal{L}^*(\boldsymbol{z}_f \otimes \hat{\psi}^{-1})) \in \mathbf{D}_{\crys, L}(\mathscr{F}^+V) \otimes \Lambda_{\hat{F}^{\text{ur}}}(\tilde{\Gamma}),
\end{equation}
where $\Res_{K_{p^{\infty}}}$ is induced by the restriction map $\tilde{\Gamma}_c = \Gal(K_{cp^{\infty}}/K) \rightarrow \tilde{\Gamma} = \Gal(K_{p^{\infty}}/K)$.
\subsection{Reciprocity law}
We start by giving a sketch of the proof of the following theorem, which is analogous to that of \cite[Theorem 5.7]{CH}.
\begin{teor} \label{5.7}
Let $\psi : K^{\times} / \mathbb{A}^{\times}_K \rightarrow \mathbb{C}^{\times}$ be an anticyclotomic Hecke character of infinity type $(k/2, -k/2)$ and conductor $c\mathcal{O}_K$ with $ p \nmid c$ and suppose that $f$ is $p$-ordinary.
Then
\begin{displaymath}
\big\langle\mathcal{L}_{\psi}(\boldsymbol{z}_f), \omega_f \otimes t^{-k}\big\rangle = -c^{k/2-1}(\sqrt{-D_K})^{-k/2} \cdot \LL_{f,\psi} \cdot \sigma_{-1,\mathfrak{p}} \in \Lambda_{\hat{F}^{\emph{ur}}}(\tilde{\Gamma}),
\end{displaymath}
where $\sigma_{-1,\mathfrak{p}}:= \emph{rec}_{\mathfrak{p}}(-1)|_{K_{p^{\infty}}} \in \tilde{\Gamma}$ is an element of order $2$.
\end{teor}
\begin{proof}[Sketch of proof]
For any $n > 1$, let $\hat{\phi} : \Gal(K_{p^{\infty}}/K) \rightarrow \mathbb{C}_{p}^{\times}$ be the $p$-adic avatar of a Hecke character $\phi$ of infinity type
$(k/2, -k/2)$ and conductor $p^n$. Moreover, define the finite order character $\chi := \hat{\psi}^{-1}\hat{\phi}$.
Recall that, by (\ref{zchi}), we have
\begin{displaymath}
\boldsymbol{z}^{\chi} = \alpha^{-n} \cdot z_{\chi}.
\end{displaymath}
Now, since our $\omega_A$ and $\eta_A$ are chosen, in \S \ref{6.3}, to be compatible with those of \cite{CH}, so that $\omega_A\eta_A = t$ (see \cite[\S 5.3]{CH}), applying Theorem \ref{GZ} with $j = 0$ together with \eqref{zchi} expresses $\LL_{f,\psi}(\hat{\phi}^{-1})$ in terms of $\big\langle\log_\mathfrak{p}(\boldsymbol{z}_{f}^{\chi}) \otimes t^{k/2},\omega_{f}\otimes t^{-k}\big\rangle$. Performing the same computation as in the proof of \cite[Theorem 5.7]{CH} and applying Theorem \ref{5.1} to $\big\langle\mathcal{L}_{\psi}(\boldsymbol{z}_f), \omega_f \otimes t^{-k}\big\rangle$, we obtain the formula of the statement evaluated at $\hat{\phi}^{-1}$ for any $p$-adic avatar $\hat{\phi}$ as above. By an argument formally identical to the one at the end of the proof of \cite[Theorem 5.7]{CH}, one gets the desired equality.
\end{proof}
Now we state the reciprocity law that is the counterpart of \cite[Corollary 5.8]{CH}.
\begin{teor}\label{rec}
Let $\chi : \Gal(K_{p^{\infty}}/K) \rightarrow \mathcal{O}_F^{\times}$ be a locally algebraic $p$-adic Galois character of infinity type $(j, -j)$ with $j \geq k/2$ and conductor $cp^n\mathcal{O}_K$ with $ p \nmid c$ and suppose that $f$ is $p$-ordinary. Then
\[
\braket{\exp^*(\loc_{\mathfrak{p}}(\boldsymbol{z}^{\chi^{-1}})), \omega_f \otimes \omega_A^{-k/2-j} \eta_A^{-k/2+j}}^2 = D(f, \psi, \chi\psi^{-1}, K) \cdot L(f, \chi, k/2),
\]
where $D(f, \psi, \chi\psi^{-1}, K)$ is a non-zero constant depending on $f$, $\chi$, $K$ and $\psi$, with $\psi$ a Hecke character of infinity type $(k/2, -k/2)$ and conductor $c$.
\end{teor}
\begin{proof}[Sketch of proof]
Let $\hat{\psi} : \Gal(K_{p^{\infty}}/K) \rightarrow \mathbb{C}_{p}^{\times}$ be the $p$-adic avatar of a Hecke character $\psi$ of infinity type $(k/2, -k/2)$ and conductor $c$, so that $\hat{\phi} := \chi \hat{\psi}^{-1}$ is a locally algebraic character of infinity type $(j-k/2, -j + k/2)$ and conductor $p^n$.
The proof proceeds by using Theorem \ref{5.1} to extract the expression $\big\langle\exp^*(\loc_{\mathfrak{p}}(\boldsymbol{z}^{\chi^{-1}})), \omega_f \otimes \omega_A^{{-k/2-j}} \eta_A^{-k/2 + j }\big\rangle^2$ from $\braket{\mathcal{L}_{\mathfrak{p},\psi}(\boldsymbol{z}), \omega_f \otimes t^{-k}}(\hat{\phi})$. Now one can take the square and apply Theorem \ref{5.7} to recover the square of the $p$-adic $L$-function $\LL_{f,\psi}(\hat{\phi})$, and then use the interpolation formula of Theorem \ref{interpolation-thm} to obtain the statement. The constant $D(f,\psi, \chi\psi^{-1},K)$ turns out to be
\[
D(f,\psi, \chi\psi^{-1},K) =
\alpha^{-2n} \epsilon(0,\phi_{\mathfrak{p}}^{-1}\psi_{\mathfrak{p}}^{-1})^{-2} p^{nk} \bigl((j-k/2+1)!\bigr)^{-2}
\Omega^{-4j} c^{k-2} \sqrt{-D_K}^{-k}\, C(f,\psi, \chi\psi^{-1},K),
\]
where $\Omega \in \W^{\times}$ is the constant denoted with $\Omega_p$ in the proof of \cite[Corollary 5.8]{CH}.
For the details of the computation, see the proof of \cite[Corollary 5.8]{CH}.
\end{proof}
\subsection{The anticyclotomic Euler system method}
In this section we apply the Kolyvagin-type method developed in \cite[Section 7]{CH} to our system of Heegner classes, in order to deduce results on the Selmer group of the representation $V_{f,\chi} := V_f(k/2)\mid_{G_K} \otimes \chi$, with $\chi : \Gal(K_{c_0p^{\infty}}/K) \rightarrow \mathcal{O}_F^{\times}$ a locally algebraic $p$-adic Galois character of infinity type $(j, -j)$ and conductor $c_0p^s\mathcal{O}_K$ with $c=c_0p^s$ and $ (pN,c_0)=1$.
First of all, we introduce the objects and the properties of the Kolyvagin method employed in \cite{CH}. Then, we will apply it to our system of generalized Heegner cycles and, finally, we will deduce results on Selmer groups. As will be clear, we follow \cite{CH} closely.
\subsubsection{Anticyclotomic Euler systems}
Let $\mathcal{G}_n := \Gal(K_n/K)$ and let $H^{1}(K_n,-)$ denote the cohomology group with respect to $\Gal(K^{\Sigma_n}/K_n)$, where $\Sigma_n$ is the finite set containing the prime factors of $pNc_0n$ and $K^{\Sigma_n}$ is the maximal extension of $K$ unramified outside the primes above $\Sigma_n$.
By \cite[Proposition 3.1]{Nek92}, there is a $G_{\mathbb{Q}}$-equivariant $\mathcal{O}_F$-linear perfect pairing
\begin{displaymath}
\braket{-,-} : T \times T \longrightarrow \mathcal{O}_F(1)
\end{displaymath}
that induces for each local field $L$ the local Tate pairing
\begin{equation}
\braket{-,-}_L : H^1(L,T) \times H^1(L,T) \longrightarrow \mathcal{O}_F.
\label{Tate-pairing}
\end{equation}
Here $T$ is the $G_{\mathbb{Q}}$-stable $\mathcal{O}_F$-lattice inside $V_f(k/2)$ that was fixed before. Let $\varpi$ be a uniformizer of $\mathcal{O}_F$ and let $ \mathbb{F} := \mathcal{O}_F/(\varpi)$ be the residue field. For every integer $M\geq1$, set $T_M:= T / \varpi^MT$.
For us, $\ell$ will always denote a prime inert in $K$ and $\lambda$ will be the unique prime of $K$ above $\ell$; denote by $\frob_{\ell}$ the Frobenius element of $\lambda$ in $G_K$. Let $H^1_f(K_{\lambda},-)$ be the finite part of $H^1(K_{\lambda},-)$, where $K_{\lambda}$ is the completion of $K$ at $\lambda$, and denote by $\loc_{\ell} : H^1(K,-) \rightarrow H^1(K_{\lambda},-)$ the localization map at $\ell$.
Let $\mathcal{S}$ be the set of square-free products of primes $\ell$ inert in $K$ with $\ell \nmid 2pNc_0$. Let $\tau$ denote complex conjugation.
\begin{defi}
An \textbf{anticyclotomic Euler system} for $T$ and $\chi$ is a collection $\mathbf{c} = \left\lbrace c_n\right\rbrace _{n \in \mathcal{S}}$ of classes $c_n \in H^1(K_{nc}, T \otimes \chi^{-1})$ such that for any $ n = m \ell \in \mathcal{S}$ the following properties hold:
\begin{enumerate}[noitemsep]
\item $\Cor_{K_{nc}/K_{mc}}(c_n) = a_{\ell}(f) \cdot c_m$;
\item $\loc_{\ell}(c_n) = \Res_{K_{mc,\lambda}/K_{nc,\lambda}}(\loc_{\ell}(c_m)^{\frob_{\ell}})$;
\item if $\chi^2 = 1$ then $c_n^{\tau} = w_f \cdot \chi(\sigma) \cdot c_n^{\sigma}$ for some $\sigma \in \Gal(K_{nc}/K)$,
\end{enumerate}
where $w_f \in \left\lbrace \pm 1 \right\rbrace $ is the Atkin--Lehner eigenvalue of $f$.
\end{defi}
\subsubsection{Kolyvagin's derivative classes}
Define the constant $\beta$ as in \cite[(7.2)]{CH}. For any integer $M\geq1$, denote by
$\mathcal{S}_M \subseteq \mathcal{S}$ the set of square-free products of primes $\ell$ such that
\begin{enumerate}[noitemsep]
\item $\ell$ is inert in $K$;
\item $\ell \nmid 2c_0Np$;
\item $\varpi^M$ divides both $\ell+1$ and $a_{\ell}(f)$;
\item $\varpi^{M+\beta+1} \nmid \ell + 1 \pm a_{\ell}(f) \ell^{1-k/2}$.
\end{enumerate}
A prime number satisfying all these conditions is called an \emph{$M$-admissible (Kolyvagin) prime}. By using the \v{C}ebotarev density theorem, it can be checked that there exist infinitely many $M$-admissible primes.
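Although we refer to \cite[\S 7]{CH} and \cite{Nek92} for the full argument, let us sketch the standard Kolyvagin-style reasoning behind this \v{C}ebotarev count. If the Frobenius class of $\ell$ in $\Gal(K(T_M,\mu_{p^M})/\mathbb{Q})$ coincides with the class of complex conjugation $\tau$, then $\ell$ is inert in $K$ (as $\tau$ is non-trivial on $K$) and
\[
\ell \equiv -1 \pmod{p^M}, \qquad a_{\ell}(f) \equiv \operatorname{tr}(\tau \mid T_M) = 0 \pmod{\varpi^M};
\]
the first congruence holds because $\tau$ inverts the roots of unity in $\mu_{p^M}$ while $\frob_{\ell}$ raises them to the $\ell$-th power, and the second because $\frob_{\ell}$ acts on $T_M$ with characteristic polynomial $x^2 - a_{\ell}(f)x + \ell^{k-1}$, while $\tau$ acts as an involution with eigenvalues $+1$ and $-1$, hence with trace $0$. Condition (4) is then arranged by a finer choice of the Frobenius class, as in \emph{loc.\ cit.}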
Put $G_n := \Gal(K_n/K_1) \cong \Gal(K_{nc}/K_c) \subseteq \mathcal{G}_{nc}$. Let $n \in \mathcal{S}_M$; since $n$ is square-free, there is a splitting $ G_n = \prod_{\ell \mid n} G_{\ell}$. Moreover, each $\ell\,|\,n$ is inert in $K$, so the group $G_{\ell} \cong \Gal(K_{\ell}/K_1)$ is cyclic of order $\ell + 1$. Fix a generator $\sigma_{\ell}$ for each $G_{\ell}$ and put
\begin{displaymath}
\begin{split}
D_{\ell} &:= \sum_{i=1}^{\ell} i \sigma_{\ell}^i \in \mathbb{Z}[G_{\ell}],\\
D_n &:= \prod_{\ell \mid n} D_{\ell} \in \mathbb{Z}[G_n] \subseteq \mathcal{O}_F[\mathcal{G}_{nc}].
\end{split}
\end{displaymath}
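The point of the operators $D_{\ell}$ is the following telescoping identity in $\mathbb{Z}[G_{\ell}]$, which is checked by a direct computation using $\sigma_{\ell}^{\ell+1} = 1$:
\[
(\sigma_{\ell} - 1)D_{\ell} = (\ell + 1) - N_{\ell}, \qquad N_{\ell} := \sum_{i=0}^{\ell} \sigma_{\ell}^{i}.
\]
Writing $n = m\ell$, the norm $N_{\ell}$ acts on $c_n$ as $\Res_{K_{mc}/K_{nc}} \circ \Cor_{K_{nc}/K_{mc}}$, hence through multiplication by $a_{\ell}(f) \equiv 0 \pmod{\varpi^M}$ by property (1) of an anticyclotomic Euler system; since $\varpi^M$ also divides $\ell+1$, the class $D_nc_n$ becomes $\mathcal{G}_n$-invariant after reduction modulo $\varpi^M$. This invariance is what allows the derivative classes to be descended below.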
Now we choose a positive integer $M'$ such that $p^{M'}$ annihilates
\begin{enumerate}[noitemsep]
\item the kernel and the cokernel of $\Res_{K/K_n} : H^1(K,T_M \otimes \chi^{-1}) \rightarrow H^1(K_n,T_M \otimes \chi^{-1})^{\mathcal{G}_n}$ for all $n,M \in \mathbb{Z}_+$;
\item the local cohomology groups $H^1(K_v,T_M \otimes \chi^{-1})$ for all places $v \mid c_0N$.
\end{enumerate}
One can prove that such an integer exists as in \cite[Proposition 6.3, Corollary 6.4 and Lemma 10.1]{Nek92}.
Consider an anticyclotomic Euler system $\mathbf{c} = \left\lbrace c_n \right\rbrace$ for $T$ and $\chi$. Denote by $\Red_M$ the reduction $H^1(-,T \otimes \chi^{-1}) \rightarrow H^1(-, T_M \otimes \chi^{-1})$.
For $n \in \mathcal{S}_M$, we want to apply the derivative operators $D_n$ to the classes $c_n$.
For each $n \in \mathcal{S}_M$ there is a unique
class $\mathcal{D}_M(n) \in H^1(K_c,T_M \otimes \chi^{-1})$ such that
\begin{displaymath}
\Res_{K/K_{n}}(\mathcal{D}_M(n)) = p^{3M'} \Red_M(D_n c_n),
\end{displaymath}
because of the properties of $M'$.
Define the derivative class by
\begin{equation} \label{P1}
P_{M,\chi^{-1}}(n) := \Cor_{K_c/K}(\mathcal{D}_M(n)) \in H^1(K,T_M \otimes \chi^{-1}).
\end{equation}
\subsubsection{Local conditions}
Now we introduce Selmer groups by imposing local conditions at $p$ on the cohomology classes, through the choice of subspaces of
$H^1(K_{\mathfrak{p}},V_f(k/2) \otimes \chi^{-1}) \oplus H^1(K_{\overline{\mathfrak{p}}},V_f(k/2) \otimes \chi^{-1})$.
Recall that $ p = \mathfrak{p} \overline{\mathfrak{p}}$ splits in $K$.
Let $\mathcal{F} \subseteq
H^1(K_{\mathfrak{p}},V_f(k/2) \otimes \chi^{-1}) \oplus H^1(K_{\overline{\mathfrak{p}}},V_f(k/2) \otimes \chi^{-1})$
be an $F$-subspace and let $\mathcal{F}^*\subseteq
H^1(K_{\mathfrak{p}},V_f(k/2) \otimes \chi) \oplus H^1(K_{\overline{\mathfrak{p}}},V_f(k/2) \otimes \chi)$ be the orthogonal complement of $\mathcal{F}$ with respect to the local Tate pairing of equation (\ref{Tate-pairing}). Assume that $\mathcal{F}^*=\mathcal{F}$ if $\chi^2=1$.
Define $\mathcal{F}_T \subseteq
H^1(K_{\mathfrak{p}},V_f(k/2) \otimes \chi^{-1}) \oplus H^1(K_{\overline{\mathfrak{p}}},V_f(k/2) \otimes \chi^{-1})$
to be the $F$-subspace obtained as the inverse image of $\mathcal{F}$ under the
direct sum of the maps
$H^1(K_{\mathfrak{p}},T \otimes \chi^{-1}) \rightarrow H^1(K_{\mathfrak{p}},V_f(k/2) \otimes \chi^{-1})$, $H^1(K_{\overline{\mathfrak{p}}},T \otimes \chi^{-1}) \rightarrow H^1(K_{\overline{\mathfrak{p}}},V_f(k/2) \otimes \chi^{-1})$
and $\mathcal{F}_M \subseteq
H^1(K_{\mathfrak{p}},T_M \otimes \chi^{-1}) \oplus H^1(K_{\overline{\mathfrak{p}}},T_M \otimes \chi^{-1})$
as the image of $\mathcal{F}_T$ through the reduction map. Put
$Y_M := T_M \otimes \chi^{-1}$.
Now define
\[
\Sel^{(n)}_{\mathcal{F}}(K,Y_M) := \Large \left\lbrace x \in H^1(K,Y_M) \mid
\normalsize
\begin{aligned}
\displaystyle
&\loc_v(x) \in H^1_f(K_v,Y_M) && \text{if}\ v \nmid pn\\
&\loc_p(x) \in \mathcal{F}_M && \text{if}\ p \nmid n
\end{aligned}
\Large\right\rbrace,
\]
where $\loc_p = \loc_{\mathfrak{p}} \oplus \loc_{\overline{\mathfrak{p}}}$.
Note that if $p \mid n$ then the choice of $\mathcal{F}_M$ is irrelevant.
If $n=1$ we abbreviate $\Sel_{\mathcal{F}}(K,Y_M) := \Sel^{(1)}_{\mathcal{F}}(K,Y_M)$. Define then
\begin{displaymath}
\Sel_{\mathcal{F}}(K, T \otimes \chi^{-1}) := \varinjlim_M \Sel_\mathcal{F}(K,Y_M).
\end{displaymath}
If $\mathbf{c} = \left\lbrace c_n\right\rbrace _{n \in \mathcal{S}}$ is an anticyclotomic Euler system for $T$ and $\chi$, let
\begin{displaymath}
\mathbf{c}_K := \Cor_{K_c/K}(c_1) \in H^1(K,T \otimes \chi^{-1}).
\end{displaymath}
Then
\begin{displaymath}
P_{M,\chi^{-1}}(1) = p^{3M'} \Red_{M}(\mathbf{c}_K),
\end{displaymath}
since the square
\\
\[
\begin{tikzcd}
H^1(K_c,T \otimes \chi^{-1}) \arrow[r, "\Cor_{K_c/K}"] \arrow[d, "\Red_M"]
& H^1(K, T \otimes \chi^{-1}) \arrow[d, "\Red_M"]
\\
H^1(K_c, T_M \otimes \chi^{-1}) \arrow[r, "\Cor_{K_c/K}"]
& H^1(K, T_M \otimes \chi^{-1})
\end{tikzcd}
\]
\\
is commutative.
Using \cite[Proposition 10.2]{Nek92} we obtain that
\[P_{M,\chi^{-1}}(n) \in \Sel_{\mathcal{F}}^{(np)}(K,Y_M).\]
Note that $p\,|\,pn$, so this holds for any $\mathcal{F}$.
\begin{defi}
An anticyclotomic Euler system $\mathbf{c} = \left\lbrace c_n \right\rbrace_{n \in \mathcal{S}}$ for $T$ and $\chi$ has \textbf{local condition $\mathcal{F}$} if it satisfies
\begin{itemize}
\item[4.] $\mathbf{c}_K \in \Sel_{\mathcal{F}}(K,T \otimes \chi^{-1})$ and $\mathbf{c}_K^{\tau} \in \Sel_{\mathcal{F^*}}(K,T \otimes \chi)$, that is $\loc_{p}(\mathbf{c}_K) \in \mathcal{F}_T$ and $\loc_{p}(\mathbf{c}_K^{\tau}) \in \mathcal{F}^*_T$;
\item[5.] for every $M$ and $n \in \mathcal{S}_M$, one has \[P_{M,\chi^{-1}}(n) \in \Sel_{\mathcal{F}}^{(n)}(K,T_M \otimes \chi^{-1}),\]
that is, $\loc_p(P_{M,\chi^{-1}}(n)) \in \mathcal{F}_M$
(these two conditions are equivalent because $P_{M,\chi^{-1}}(n) \in \Sel_{\mathcal{F}}^{(np)}(K,Y_M)$).
\end{itemize}
\end{defi}
Now we state an important technical result.
\begin{teor} \label{keyth}
Let $\mathbf{c} = \left\lbrace c_n \right\rbrace_{n \in \mathcal{S}}$ be an anticyclotomic Euler system for $T$ and $\chi$ with local condition $\mathcal{F}$. If $\mathbf{c}_K \neq 0$, then
\begin{displaymath}
\Sel_{\mathcal{F}^*}(K,V \otimes \chi) = F \cdot \mathbf{c}_K^{\tau}.
\end{displaymath}
\end{teor}
\begin{proof} This is \cite[Theorem 7.3]{CH}. \end{proof}
\subsubsection{Construction of Euler systems for generalized Heegner cycles}
We keep notation and assumptions introduced at the beginning of this section, but now we assume that
\begin{itemize}
\item $\chi$ has infinity type $(j,-j)$ with $j \geq k/2$;
\item $f$ is ordinary at $p$.
\end{itemize}
Recall the cohomology class $\boldsymbol{z}_f \in H^1\bigl(K, T \otimes \mathcal{O}_F\llbracket \tilde{\Gamma}_{c} \rrbracket\bigr)$ defined in (\ref{zf}) and consider its $\chi$-specialization $\boldsymbol{z}_f^{\chi} \in H^1(K,T \otimes \chi)$.
Let us consider, for $v = \mathfrak{p},\overline{\mathfrak{p}}$, the subspace $\mathcal{L}_v$ of $H^1(K_v,V \otimes \chi)$ spanned by $\loc_v(\boldsymbol{z}_f^{\chi})$ and put
\[
\mathcal{L}_{v,T} := \mathcal{L}_v \cap H^1(K_v,T \otimes \chi).
\]
Set $\mathcal{L}^* := \mathcal{L}_{\mathfrak{p}}^* \oplus \mathcal{L}_{\overline{\mathfrak{p}}}^*$.
Choose $M'$ large enough so that $p^{M'}H^1(K_v,T)_{\text{tor}}=0$ for $ v= \mathfrak{p},\overline{\mathfrak{p}}$.
Recall the cohomology classes $\boldsymbol{z}_{m,\alpha} \in H^1_{\Iw}(K_{mp^{\infty}},T)$ and for $n \in \mathcal{S}$ set
\begin{displaymath}
c_n := \boldsymbol{z}_{cn,\alpha}^{\chi^{-1}} \in H^1(K_{nc}, T \otimes \chi^{-1}),
\end{displaymath}
where $\boldsymbol{z}_{cn,\alpha}^{\chi^{-1}}$ is the specialization at $\chi^{-1}$ obtained via the map
\[
H^1_{\Iw}(K_{ncp^{\infty}},T) \longrightarrow
H^1(K_{nc},T \otimes \chi^{-1}).
\]
Finally, define
\begin{equation}
\mathbf{c}:= \left\lbrace c_n \right\rbrace_{n \in \mathcal{S}} = \left\lbrace \boldsymbol{z}_{cn,\alpha}^{\chi^{-1}}\right\rbrace _{n \in \mathcal{S}}.
\end{equation}
We would like to prove that this collection of cohomology classes is an anticyclotomic Euler system with local condition $\mathcal{L}^*$.
\begin{prop} \label{eulsys}
The collection $\mathbf{c}:= \left\lbrace c_n \right\rbrace_{n \in \mathcal{S}}$ is an anticyclotomic Euler system for $T$ and $\chi$ with local condition $\mathcal{L}^*$. Moreover, $\mathbf{c}_K = \boldsymbol{z}_f^{\chi^{-1}}$.
\end{prop}
\begin{proof}
Assuming for a moment that $\mathbf{c}$ is an anticyclotomic Euler system, the last assertion follows:
\[
\mathbf{c}_K = \Cor_{K_c/K}(c_1) = \Cor_{K_c/K}(\boldsymbol{z}_{c,\alpha}^{\chi^{-1}}) =\Cor_{K_c/K}(\boldsymbol{z}_{c,\alpha})^{\chi^{-1}} = \boldsymbol{z}_f^{\chi^{-1}}.
\]
Analogously to (\ref{zchi}), we obtain
\begin{displaymath}
\boldsymbol{z}_{n,\alpha}^{\psi} = \alpha^{-t} \cdot z_{n,\psi}
\end{displaymath}
for each non-trivial finite order character $\psi : \Gal(K_{c_0 p^{\infty}}/K_{c_0}) \rightarrow \mathcal{O}_{\mathbb{C}_p}^{\times}$ of conductor $c=c_0p^s$.
By Propositions \ref{4.4}, \ref{4.6}, \ref{4.7}, we know that
\begin{enumerate}[noitemsep]
\item $\Cor_{K_{nc}/K_{mc}}(z_{nc,\psi}) = a_{\ell}(f) \cdot z_{mc,\psi}$;
\item $\loc_{\ell}(z_{nc,\psi}) = \Res_{K_{mc,\lambda}/K_{nc,\lambda}}(\loc_{\ell}(z_{mc,\psi})^{\frob_{\ell}})$;
\item $z_{nc,\psi}^{\tau} = w_f \cdot \psi(\sigma) \cdot z_{nc,\psi^{-1}}^{\sigma}$ for some $\sigma \in \Gal(K_{nc}/K)$.
\end{enumerate}
Upon taking $\psi=\mathbf{1}$, we deduce these properties for the classes $\boldsymbol{z}_{nc,\alpha}$ and then for $\boldsymbol{z}_{nc,\alpha}^{\chi^{-1}}$ by specializing the relations at $\chi^{-1}$. This proves that $\mathbf{c}$ is an anticyclotomic Euler system. The last thing we need to show is that $\mathbf{c}$ has local condition $\mathcal{L}^*$, which can be checked as in the proof of \cite[Proposition 7.8]{CH}.
\end{proof}
\begin{prop} \label{loc}
If $\loc_\mathfrak{p}\Bigl(\boldsymbol{z}_f^{\chi^{-1}}\Bigr) \neq 0$, then $\Sel\bigl(K,V_f(k/2) \otimes \chi\bigr) =0$.
\end{prop}
\begin{proof}
The proof is completely analogous to that of \cite[Theorem 7.9]{CH}, so we only briefly sketch the arguments. For each choice of subspaces $\mathcal{F}_v \subset H^1\bigl(K_v,V_f(k/2) \otimes \chi\bigr)$ with $v = \mathfrak{p}, \overline{\mathfrak{p}}$, consider the ``generalized Selmer group'' given by
\begin{displaymath}
H^1_{\mathcal{F}_{\mathfrak{p}}, \mathcal{F}_{\overline{\mathfrak{p}}}}\bigl(K, V_f(k/2) \otimes \chi\bigr) :=
\Large \left\lbrace x \in H^1(K, V_f(k/2) \otimes \chi) \;\,\bigg|\;\,
\normalsize
\begin{aligned}
&\loc_v(x) \in H^1_f(K_v, V_f(k/2) \otimes \chi) && \text{if}\ v \nmid p\\
&\loc_v(x) \in \mathcal{F}_v && \text{if}\ v \,|\, p
\end{aligned}
\Large \right\rbrace.
\end{displaymath}
Thanks to Proposition \ref{eulsys}, we know that $\mathbf{c}$ is an anticyclotomic Euler system for $T$ and $\chi$ with local condition $\mathcal{L}^*$ such that $\mathbf{c}_K = \boldsymbol{z}_f^{\chi^{-1}}$. Since $\loc_{\mathfrak{p}}(\boldsymbol{z}_f^{\chi^{-1}}) \neq 0$, it follows that $\boldsymbol{z}_f^{\chi^{-1}} \neq 0$, so Theorem \ref{keyth} ensures that \[H^1_{\mathcal{L}_{\mathfrak{p}},\mathcal{L}_{\overline{\mathfrak{p}}}}\bigl(K, V_f(k/2) \otimes \chi\bigr) = \Sel_{\mathcal{L}}\bigl(K,V_f(k/2) \otimes \chi\bigr) = F \cdot (\boldsymbol{z}_f^{\chi^{-1}})^{\tau} = F \cdot \boldsymbol{z}_f^{\chi}.\]
We have that
\begin{displaymath}
H^1_{\mathcal{L}_{\mathfrak{p}},0}\bigl(K, V_f(k/2) \otimes \chi\bigr) \subseteq H^1_{\mathcal{L}_{\mathfrak{p}},\mathcal{L}_{\overline{\mathfrak{p}}}}\bigl(K, V_f(k/2) \otimes \chi\bigr) = F \cdot \boldsymbol{z}_f^{\chi}.
\end{displaymath}
Since $\loc_{\overline{\mathfrak{p}}}(\boldsymbol{z}_f^{\chi})^{\tau} = \loc_{\mathfrak{p}}(\boldsymbol{z}_f^{\chi^{-1}})$, also $\loc_{\overline{\mathfrak{p}}}(\boldsymbol{z}_f^{\chi})^{\tau} \neq 0$, hence $\boldsymbol{z}_f^{\chi} \notin H^1_{\mathcal{L}_{\mathfrak{p}},0}(K, V_f(k/2) \otimes \chi)$. It follows that
\begin{displaymath}
H^1_{\mathcal{L}_{\mathfrak{p}},0}(K, V_f(k/2) \otimes \chi) = 0.
\end{displaymath}
There is a Poitou--Tate exact sequence
\begin{displaymath}
\begin{split}
0 &\longrightarrow H^1_{0,\emptyset}(K,V_f(k/2)\otimes \chi^{-1}) \longrightarrow H^1_{\mathcal{L}_{\mathfrak{p}}^*,\emptyset}(K,V_f(k/2)\otimes \chi^{-1}) \xrightarrow{\loc_{\mathfrak{p}}} \mathcal{L}_{\mathfrak{p}}^*\\ &\longrightarrow H^1_{\emptyset, 0}(K,V_f(k/2) \otimes \chi)^{\vee} \longrightarrow H^1_{\mathcal{L}_{\mathfrak{p}},0}(K,V_f(k/2)\otimes \chi)^{\vee} \longrightarrow 0,
\end{split}
\end{displaymath}
where $\emptyset$ indicates that we are imposing no condition.
From this sequence and the vanishing of $H^1_{\mathcal{L}_{\mathfrak{p}},0}\bigl(K,V_f(k/2)\otimes \chi\bigr)$, one deduces that
$H^1_{\emptyset,0}\bigl(K,V_f(k/2) \otimes \chi\bigr) = 0$.
Since, by \cite[(6.2)]{CH}, there is an equality
\begin{displaymath}
H^1_f\bigl(K_v,V_{f}(k/2) \otimes \chi\bigr) = \begin{cases}
H^1\bigl(K_v,V_{f}(k/2) \otimes \chi\bigr) &\text{if $v= \mathfrak{p}$}\\[2mm]
\left\lbrace 0 \right\rbrace &\text{if $v =\overline{\mathfrak{p}}$},
\end{cases}
\end{displaymath}
we conclude that $\Sel\bigl(K,V_f(k/2) \otimes \chi\bigr) = H^1_{\emptyset,0}\bigl(K,V_f(k/2) \otimes \chi\bigr) = 0$.
\end{proof}
Now we construct another anticyclotomic Euler system associated with our generalized Heegner cycles. In the remainder of this subsection, we assume that
\begin{itemize}
\item $\chi$ has infinity type $(j,-j)$ with $-k/2 < j < k/2$.
\end{itemize}
Notice that this time we do not need to assume that $f$ is ordinary at $p$.
Recall the cohomology classes $z_{n,\chi^{-1}} \in H^1(K_n, T \otimes \chi^{-1})$ defined in (\ref{chicomp}).
For $n \in \mathcal{S}$, set
\begin{displaymath}
c'_n := z_{cn,\chi^{-1}} \in H^1(K_{nc}, T \otimes \chi^{-1}),
\end{displaymath}
and define
\begin{equation}
\mathbf{c}':= \left\lbrace c'_n \right\rbrace_{n \in \mathcal{S}} = \left\lbrace z_{cn,\chi^{-1}}\right\rbrace_{n \in \mathcal{S}}.
\end{equation}
Denote by $\mathcal{L}_{BK}$ the direct sum of the Bloch--Kato finite subspaces
\[
\mathcal{L}_{BK} := H^1_f\bigl(K_{\mathfrak{p}},V_f(k/2) \otimes \chi^{-1}\bigr) \oplus H^1_f\bigl(K_{\overline{\mathfrak{p}}},V_f(k/2) \otimes \chi^{-1}\bigr).
\]
We would like to prove that this collection of cohomology classes is an anticyclotomic Euler system with local condition $\mathcal{L}_{BK}$.
\begin{prop} \label{c'eulsys}
The collection $\mathbf{c}':= \left\lbrace c'_n \right\rbrace_{n \in \mathcal{S}}$ is an anticyclotomic Euler system for $T$ and $\chi$ with local condition $\mathcal{L}_{BK}$. Moreover, $\mathbf{c}'_K = z_{\chi^{-1}}$.
\end{prop}
\begin{proof}
Assuming for a moment that $\mathbf{c}'$ is an anticyclotomic Euler system, \eqref{z.chi} implies that
\[
\mathbf{c}'_K = \Cor_{K_c/K}(c'_1) = \Cor_{K_c/K}(z_{c,\chi^{-1}}) = z_{\chi^{-1}}.
\]
By Propositions \ref{4.4}, \ref{4.6}, \ref{4.7}, we know that
\begin{enumerate}[noitemsep]
\item $\Cor_{K_{nc}/K_{mc}}(z_{nc,\chi^{-1}}) = a_{\ell}(f) \cdot z_{mc,\chi^{-1}}$;
\item $\loc_{\ell}(z_{nc,\chi^{-1}}) = \Res_{K_{mc,\lambda}/K_{nc,\lambda}}(\loc_{\ell}(z_{mc,\chi^{-1}})^{\frob_{\ell}})$;
\item $z_{nc,\chi^{-1}}^{\tau} = w_f \cdot \psi(\sigma) \cdot z_{nc,\chi}^{\sigma}$ for some $\sigma \in \Gal(K_{nc}/K)$.
\end{enumerate}
It follows that $\mathbf{c}'$ is an anticyclotomic Euler system.
The last thing we need to show is that $\mathbf{c}'$ has local condition $\mathcal{L}_{BK}$. In analogy to what was remarked at the beginning of \S \ref{algL}, the results in \cite{Nek00} ensure that
\[
z_{nc,\chi^{-1}} \in \Sel(K_{nc},T \otimes \chi^{-1}),\quad z_{\chi^{-1}} \in \Sel(K,T \otimes \chi^{-1}) = \Sel_{\mathcal{L}_{BK}}(K,T \otimes \chi^{-1}).
\]
Since $\tau$ induces an isomorphism
\[
H^1_f(K_{\mathfrak{p}},T \otimes \chi^{-1}) \oplus H^1_f(K_{\overline{\mathfrak{p}}},T \otimes \chi^{-1}) \cong H^1_f(K_{\mathfrak{p}},T \otimes \chi) \oplus H^1_f(K_{\overline{\mathfrak{p}}},T \otimes \chi),
\]
we deduce that also ${\mathbf{c}'_K}^{\tau} = z_{\chi^{-1}}^{\tau} \in \Sel_{\mathcal{L}_{BK}^*}(K,T \otimes \chi)$. Furthermore, one has
\[
\loc_v\Bigl(\Res_{K_c/K_{nc}}\bigl(\mathcal{D}_M(n)\bigr)\Bigr) = \loc_v\Bigl(p^{3M'} \Red_M\bigl(D_n z_{nc,\chi^{-1}}\bigr)\Bigr) \in H^1_f(K_{nc,v}, T_M \otimes \chi^{-1})
\]
for $v = \mathfrak{p}, \overline{\mathfrak{p}}$. In light of \cite[Lemma 7.5]{CH}, it follows that
\[
\loc_v\bigl(\mathcal{D}_M(n)\bigr)\in H^1_f(K_{c,v},T_M \otimes \chi^{-1}),
\]
which implies that $\loc_p\bigl(P_{M,\chi^{-1}}(n)\bigr) \in \mathcal{L}_{BK,M}$.
\end{proof}
\subsection{Results on Selmer groups} \label{Sel}
In this final section, we use the anticyclotomic Euler system method to deduce results on the Selmer group of $V_{f,\chi}$. First of all, we recall notation and assumptions.
As usual, $f \in S^{\text{new}}_{k}(\Gamma_0(N))$ is our newform of weight $k=2r+2 \geq 4$ and level $N$, $F$ is a finite
extension of $\mathbb{Q}_p$ containing the Fourier coefficients of $f$, $\chi : \Gal(K_{c_0p^{\infty}}/K) \rightarrow \mathcal{O}_F^{\times}$ is a locally algebraic anticyclotomic character of infinity type $(j, -j)$ and conductor $c_0p^s\mathcal{O}_K$ with $(c_0,pN)=1$, $V_{f,\chi} := V_f(k/2)|_{G_K} \otimes \chi$ is the twist of $V_{f}(k/2)$ by $\chi$, $L(f,\chi,s)$ is the associated Rankin $L$-series and $\Sel(K,V_{f,\chi})$ is the Bloch--Kato Selmer group of $V_{f,\chi}$ over $K$. Assume that:
\begin{itemize}[noitemsep]
\item[1.] $p \nmid 2N \phi(N^+)$;
\item[2.] $c_0$ is prime to $N$;
\item[3.] either $D_K > 3$ is odd or $8 \mid D_K$;
\item[4.] $p = \mathfrak{p} \overline{\mathfrak{p}}$ splits in $K$;
\item[5.] $N = N^+N^-$, where $N^+$ is a product of primes that split in $K$ and $N^-$ is a square-free product of an \emph{even} number of primes that are inert in $K$.
\end{itemize}
As before, the last condition can be expressed by saying that $K$ satisfies a \emph{generalized Heegner hypothesis} relative to $N$.
Recall now the definition of the Bloch--Kato Selmer group.
If $v$ is a place of $K$ such that $v \nmid p$, we consider the inertia group $I_{K_v} \subseteq G_{K_v}$. The unramified subgroup of $H^1(K_v,V_{f,\chi})$ is defined by
\[
H^1_{\text{ur}}(K_v,V_{f,\chi}) := \ker \Bigl( H^1(K_v,V_{f,\chi}) \longrightarrow H^1(I_{K_v}, V_{f,\chi})\Bigr).
\]
Set $H^1_f(K_v,V_{f,\chi}):= H^1_{\text{ur}}(K_v,V_{f,\chi})$.
If $v$ is a place of $K$ such that $v\,|\,p$, then we set
\[
H^1_{f}(K_v,V_{f,\chi}) := \ker \Bigl( H^1(K_v,V_{f,\chi}) \longrightarrow H^1(K_v, V_{f,\chi} \otimes_{\mathbb{Q}_p}\mathbf{B}_{\cris})\Bigr).
\]
The {\bf global Bloch--Kato Selmer group} $\Sel(K,V_{f,\chi})$ of $V_{f,\chi}$ over $K$ is the subspace of $H^1(K, V_{f,\chi})$ given by
\begin{displaymath}
\Sel(K, V_{f,\chi}) :=
\Large \left\lbrace x \in H^1(K, V_{f,\chi} ) \;\,\bigg|\;\,
\normalsize
\begin{aligned}
&\loc_v(x) \in H^1_{\text{ur}}(K_v,V_{f,\chi}) && \text{if}\ v \nmid p\\
&\loc_v(x) \in H^1_{f}(K_v,V_{f,\chi}) && \text{if}\ v \mid p
\end{aligned}
\Large \right\rbrace.
\end{displaymath}
Now we can prove our theorems on Selmer groups, the first of which is a vanishing result.
\begin{teor}
\label{selmer-1}
Suppose that $f$ is $p$-ordinary. If $L(f,\chi,k/2)\neq0$, then
$
\Sel(K,V_{f,\chi}) = 0.
$
\end{teor}
\begin{proof}
Let $\epsilon(V_{f,\chi})\in\{\pm1\}$ be the sign of the functional equation for $L(f,\chi,s)$. Then
\[
\epsilon(V_{f,\chi}) = -1\; \Longleftrightarrow\; -k/2 < j < k/2.
\]
Indeed, $\epsilon(V_{f,\chi})$ is a product
\begin{displaymath}
\epsilon(V_{f,\chi}) = \prod_v \epsilon(1/2, \pi_{K_v} \otimes \chi_v)
\end{displaymath}
of local signs, where $\pi_K$ is the base change to $K$ of the automorphic representation of $\GL_2(\mathbb{A}_{\mathbb{Q}})$ associated with $f$, $\pi_{K_v}$ are the local factors of $\pi_K$ and the local $\epsilon$-factors $\epsilon(1/2, \pi_{K_v} \otimes \chi_v)$ are defined as follows.
If $F$ is a finite extension of $\mathbb{Q}_{\ell}$ and $\pi'$ is an irreducible representation of $\GL_n(F)$, then
\begin{displaymath}
\epsilon(s,\pi') := \epsilon(s,\pi', \psi_F),
\end{displaymath}
where $\psi_F= \psi \circ \text{Tr}_{F/\mathbb{Q}_{\ell}}$ is the standard additive character (see, e.g., \cite{Sch02} for the definition). It follows that
\[\epsilon(V_{f,\chi}) = \prod_v \epsilon(1/2, \pi_{K_v} \otimes \chi_v, \psi_{K_v}).\]
Each of these local factors is equal either to $1$ or to $-1$. Because $N^-$ is a product of an even number of inert primes, the product of the local factors at the finite places is equal to $1$. Thus, the global $\epsilon$-factor depends only on the infinite part.
Furthermore, by \cite[(3.2.5)]{Tate}, one has
\[
\epsilon(\tfrac{1}{2},\pi_{\infty} \otimes \chi_{\infty}, \psi_{K_{\infty}}) = \epsilon(\tfrac{1}{2},\mu^{(\frac{k}{2}-\frac{1}{2}+j)},\psi_{K_{\infty}})\,\epsilon(\tfrac{1}{2},\mu^{(-\frac{k}{2}+\frac{1}{2}+j)},\psi_{K_{\infty}}) = i^{|k-1+2j| + |1-k+2j|},
\]
where $\mu : z \in \mathbb{C}^{\times} \mapsto \frac{z}{\overline{z}} \in \mathbb{C}^{\times}$.
Hence, one can check that
\begin{displaymath}
\begin{split}
\epsilon(V_{f,\chi}) = -1 &\;\Longleftrightarrow\; -k/2 < j < k/2,\\[1mm]
\epsilon(V_{f,\chi}) = +1 &\;\Longleftrightarrow\; j \leq -k/2 \ \text{or}\ j \geq k/2.
\end{split}
\end{displaymath}
Since $L(f,\chi,k/2) \neq 0$, we know that $\epsilon(V_{f,\chi}) = +1$, therefore either $ j \leq -k/2$ or $j \geq k/2$.
As before, let $\tau$ be complex conjugation; set $\chi^{\tau}(g) := \chi(\tau g\tau)$. There is an equality of $L$-functions $L(f,\chi,k/2) = L(f,\chi^{\tau},k/2)$ and the action of $\tau$ induces an isomorphism $\Sel(K,V_{f,\chi}) \cong \Sel(K,V_{f,\chi^{\tau}})$. This shows that we can assume $j \geq k/2$.
Finally, since $L(f,\chi,k/2) \neq 0$, by Theorem \ref{rec} we know that $\loc_{\mathfrak{p}}\bigl(\boldsymbol{z}_f^{\chi^{-1}}\bigr) \neq 0$, and then Proposition \ref{loc} gives $\Sel(K,V_{f,\chi})=0$.
\end{proof}
Our second theorem on Selmer groups gives a one-dimensionality result.
\begin{teor}
\label{selmer-2}
If $\epsilon(V_{f,\chi})=-1$ and $z_{\chi} \neq 0$, then
$
\Sel(K,V_{f,\chi}) = F z_{\chi}.
$
\end{teor}
\begin{proof}
Since $z_{\chi} \neq 0$, also $z_{\chi^{-1}} \neq 0$. Because $\epsilon(V_{f,\chi})=-1$, we know that $-k/2 < j < k/2$. Then, by Proposition \ref{c'eulsys}, the collection $\mathbf{c}'$ is an anticyclotomic Euler system for $T$ and $\chi$ with local condition $\mathcal{L}_{BK}$. Since $\mathbf{c}'_K=z_{\chi^{-1}}$, applying Theorem \ref{keyth}, we obtain that $\Sel(K,V_{f,\chi}) = \Sel_{\mathcal{L}_{BK}^*}(K,V_{f,\chi}) = F z_{\chi}$.
\end{proof}
\section{Introduction}
Matrix completion (MC) \cite{mnih2008probabilistic, kalofolias2014matrix,dziugaite2015neural} is one of the most important problems in modern recommender systems, using past user-item interactions to predict future user ratings or purchases. Specifically, given a partially observed user-item historical rating matrix whose entries represent the ratings of users on items, MC aims to predict the missing entries (unobserved or future potential ratings) in the matrix based on the observed ones. The most common paradigm of MC is to factorize the rating matrix into the product of low-dimensional latent embeddings of rows (users) and columns (items), and then predict the missing entries from these latent embeddings. Traditional matrix completion methods \cite{candes2009exact, dziugaite2015neural} have achieved great success in the past. However, these methods mainly learn latent user (or item) representations yet largely neglect an explicit encoding of the collaborative signal that reveals the behavioral similarity between users \cite{wang2019neural}. These signals are crucial for predicting the missing ratings in the rating matrix, but hard to exploit, since they are hidden in user-item interactions \cite{he2020lightgcn}.
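To make the factorization paradigm concrete, the following self-contained sketch (toy data and hyperparameters are illustrative, not taken from any cited method) fits user and item factors to the observed entries by stochastic gradient descent and completes a missing entry as an inner product:

```python
def mf_predict(P, Q, i, j):
    # Predicted rating = inner product of user factor P[i] and item factor Q[j].
    return sum(p * q for p, q in zip(P[i], Q[j]))

def sgd_epoch(P, Q, observed, lr=0.05, reg=0.01):
    # One pass of SGD over observed (user, item, rating) triples, minimizing
    # the regularized squared reconstruction error.
    for i, j, r in observed:
        err = r - mf_predict(P, Q, i, j)
        for k in range(len(P[i])):
            pik, qjk = P[i][k], Q[j][k]
            P[i][k] += lr * (err * qjk - reg * pik)
            Q[j][k] += lr * (err * pik - reg * qjk)

# Toy 2-user / 2-item rating matrix; entry (1, 0) is missing.
observed = [(0, 0, 5.0), (0, 1, 1.0), (1, 1, 4.0)]
P = [[0.1, 0.2], [0.2, 0.1]]  # user factors
Q = [[0.1, 0.1], [0.2, 0.2]]  # item factors
for _ in range(1000):
    sgd_epoch(P, Q, observed)
pred = mf_predict(P, Q, 1, 0)  # completed value for the missing entry
```

In practice, MC methods differ mainly in how the latent factors are parameterized and trained; the GNN-based methods discussed below replace this direct factorization with graph-based encoders.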
Recently, many works \cite{10.5555/3294996.3295127,berg2017graph,zhang2019inductive, you2020handling} have studied using a GNN to distill collaborative signals from the \emph{user-item interaction graph}. Specifically, matrix completion is formulated as link prediction, where the rating matrix is viewed as a bipartite graph, with users (or items) as nodes and observed ratings/interactions as links.
The goal of GNN-based matrix completion methods is to predict the potential or missing links connecting any pair of nodes in this graph. Graph Autoencoder (GAE) \cite{kipf2016variational} is a popular GNN-based link prediction method, where a GNN is first applied to the entire network to learn node-specific representations. Then the representations of the target nodes are aggregated to predict the target link. Many GNN-based matrix completion methods directly apply GAE to the rating graph to predict potential ratings such as GC-MC and NMTR \cite{berg2017graph, gao2019neural}. By exploiting the structure of the bipartite user-item graph, the node-specific representations learned by GAE, which represents user-specific preferences or item attributes, are more expressive than the patterns learned by the traditional matrix completion methods for personalized recommendation.
\begin{table}[!t]
\centering
\begin{tabular}{c|ccc}
\toprule
& GAE-based models & IGMC & IMC-GAE (ours)\\
\midrule
Specific & \checkmark & $\times$ & \checkmark\\
Local & $\times$ & \checkmark & \checkmark \\
Efficient & \checkmark & $\times$ & \checkmark\\
Inductive & $\times$ & \checkmark & \checkmark\\
\bottomrule
\end{tabular}
\vspace{1em}
\caption{We compare the GNN-based matrix methods from different aspects: 1) whether they learn node-specific representations for personalized recommendation (denoted as Specific), 2) whether they learn local graph patterns (denoted as Local), (3) whether they are efficient matrix completion methods (denoted as Efficient), (4) whether they are inductive matrix completion methods (denoted as Inductive)}
\vspace{-2.8em}
\label{tab:comparison}
\end{table}
Despite its effectiveness, there remain two main challenges in applying GAE-based matrix completion to real recommender systems. The first challenge stems from a key observation in real-world scenarios: a large number of users or items in a real recommender system have few historical ratings. This requires a model to predict potential ratings in a sparse rating matrix. However, GAE-based models usually fail in this situation, since a sparse rating matrix contains too few historical ratings for GAE-based models to train node-specific representations for personalized recommendation \cite{zhang2020revisiting}. The second challenge is applying GAE-based models to real recommender systems for large-scale recommendation. In real recommender systems, new users (or items) keep emerging that are not exposed to the model during training. This requires the model to be \emph{inductive}, i.e., a model trained on one group of users (or items) should adapt to new groups. However, previous GAE-based models are all transductive, so the learned node representations cannot be generalized to users (or items) unseen during training \cite{zhang2019inductive}.
The following question arises: can we have a GAE-based model that not only guarantees good performance on a sparse rating matrix but also enables inductive learning? In fact, using GAE to simultaneously satisfy these two requirements for matrix completion is a non-trivial challenge when high-quality user (or item) features are unavailable. The one-hot node indices (together with learnable node-specific embeddings) in the GAE-based model give the model maximum capacity for learning distinct user preferences (or item attributes) from historical ratings. On the other hand, learning distinct user preferences (or item attributes) in GAE also requires adequate rating samples from the rating matrix. Accordingly, without adequate rating samples in a sparse rating matrix, it is hard for GAE to obtain satisfactory performance. Moreover, GAE lacks representations for nodes unseen during training, and therefore cannot predict the potential ratings in a new rating matrix, which makes inductive learning impossible. To overcome these two challenges, \citet{zhang2019inductive} propose inductive matrix completion based on GNN (IGMC). To predict a potential link (i.e., rating), IGMC first extracts a 1-hop subgraph around the target link and relabels the nodes w.r.t.\ their distances to the target nodes. A GNN is then applied to each subgraph to learn local graph patterns that can be generalized to unseen graphs. By learning local graph patterns, IGMC performs better on sparse rating matrices and enables inductive matrix completion. However, extracting subgraphs in both the training and inference processes is time-consuming for real recommendation. Moreover, the performance degradation of IGMC on dense rating matrices also hinders us from applying it to real recommender systems.
In this paper, we propose an inductive matrix completion method using GAE (IMC-GAE) that achieves efficient and inductive learning for matrix completion, and meanwhile obtains good performance on both sparse and dense rating matrices. As summarized in Table \ref{tab:comparison}, IMC-GAE combines the advantages of both the GAE-based models and IGMC, using GAE to learn both node-specific representations for personalized recommendation and local graph patterns for inductive matrix completion. Specifically, we incorporate two informative node features into IMC-GAE to represent two types of user-item interactions and design a layer-wise node dropout scheme in GAE to learn local graph patterns for inductive matrix completion.
In summary, this work makes the following main contributions:
\begin{itemize}[leftmargin=*]
\item (Sec. \ref{sub_sec_1}) To better understand local graph patterns, we conduct a quantitative analysis on five real datasets. Based on this quantitative analysis, we have multiple observations that reveal the properties of local graph patterns in matrix completion. It motivates us to design our model, IMC-GAE.
\item (Sec. \ref{sub_sec_2}) We design two informative features, the identical feature and the role-aware feature, for the model to learn the expressive graph patterns. Moreover, these graph patterns can be easily generalized to unseen graphs.
\item (Sec. \ref{sub_sec_3}) We design a layer-wise node dropout schema that drops out more nodes in the higher layers. With the layer-wise node dropout, link representation in our model contains more node information in a 1-hop local graph around the target link. Accordingly, our model is able to learn local graph patterns associated with the target link, which enhances the capability of the inductive learning of our model.
\item (Sec. \ref{sub_sec_4}) To illustrate the effectiveness of the proposed IMC-GAE, we conduct empirical studies on five benchmark datasets. Extensive results demonstrate the state-of-the-art performance of IMC-GAE and its effectiveness in learning both local graph patterns and node-specific representations.
\end{itemize}
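The layer-wise node dropout scheme above can be sketched in a few lines (the linear schedule and rates below are illustrative hyperparameters, not the paper's exact settings):

```python
import random

def layerwise_node_dropout(num_nodes, num_layers, base_p=0.1, step=0.1, seed=0):
    # Sample a keep-mask per GNN layer, with the dropout probability growing
    # linearly with depth, so higher layers drop more nodes.
    rng = random.Random(seed)
    masks = []
    for layer in range(num_layers):
        p_drop = min(base_p + step * layer, 1.0)
        masks.append([rng.random() >= p_drop for _ in range(num_nodes)])
    return masks

masks = layerwise_node_dropout(num_nodes=1000, num_layers=4)
kept = [sum(m) for m in masks]  # kept-node counts shrink with depth on average
```

Because higher layers keep fewer nodes, the messages reaching a target link are dominated by its 1-hop neighborhood, which is what lets the model focus on local graph patterns.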
\begin{figure*}[!tp]
\centering\includegraphics[width=\textwidth]{Overview_MLRL.png}
\caption{Model overview. The rating matrix is formulated as a bipartite user-item graph, in which the nodes represent users (or items) and the links represent the corresponding ratings.
The input features of each node in this graph consist of the identical feature, the role-aware feature, and the one-hot index feature.
The encoder of our model has multiple layers (e.g., Layer 1), each with multiple rating subgraphs (e.g., Rating 1).
As more layers are stacked, the node dropout probability increases, which is referred to as layer-wise node dropout.
The model aggregates, by a weighted sum over all layers, the latent embedding of each node learned from the one-hot index feature and the structure embedding learned from the role-aware and identical features.
At last, we reconstruct the links by a bilinear decoder. In this way, the output of our model contains the information of both the latent link representation and the structure representation.}
\label{fig:Overview}
\end{figure*}
\vspace{-8pt}
\section{Related Works}
In this section, we briefly review existing works on GAE-based matrix completion methods and inductive matrix completion methods based on GNNs, which are most relevant to this work. We highlight their differences from IMC-GAE, and illustrate how we combine their advantages to build a more effective model for real recommendation.
\subsection{GAE-based matrix completion}
The majority of GNN-based matrix completion methods are based on Graph Autoencoder (GAE) \cite{kipf2016variational}, which applies a GNN to the entire network to learn a representation for each node. The representations of the user and item nodes are aggregated to predict potential ratings. For example, \citet{10.5555/3294996.3295127} propose a multi-graph CNN model to extract user and item latent features from their nearest-neighbor networks. \citet{berg2017graph} propose graph convolutional matrix completion (GC-MC), which uses one-hot encodings of node IDs as initial node features, learns specific node representations by applying a GNN encoder to the bipartite user-item graph, and reconstructs the rating links by a GNN decoder. To the best of our knowledge, our method is the first inductive GAE-based matrix completion method that achieves good performance on both sparse and dense rating matrices.
\subsection{Inductive GNN-based matrix completion methods}
There are mainly two types of GNN-based matrix completion methods that are applicable to inductive settings. The first type attempts to handle inductive matrix completion without using node content, such as IGMC \cite{zhang2019inductive}. IGMC first extracts enclosing subgraphs around target links, then relabels the nodes in each subgraph according to their distances to the source and target nodes, and finally applies a GNN to each subgraph to learn a link representation for link prediction. By applying a GNN to these enclosing subgraphs, IGMC learns local graph patterns that easily generalize to users (or items) unseen during training. Moreover, local graph patterns help IGMC obtain better performance than the GAE-based models on sparse rating matrices. However, applying IGMC to real recommender systems raises two crucial challenges. First, IGMC replaces nodes' one-hot index embeddings with local structure features, which cannot capture diverse user preferences and item attributes for personalized recommendation. Second, IGMC extracts subgraphs around target links during both the training and inference processes, which is time-consuming for large-scale recommendation. In contrast, IMC-GAE maintains the ability to give node-specific representations, which is important in personalized recommendation for users with historical ratings. In addition, instead of extracting subgraphs and relabeling each node, we incorporate two informative features into the input features of each node and design a layer-wise node dropout scheme in IMC-GAE to help the GAE learn local graph patterns. By using GAE to learn local graph patterns, the inference process of IMC-GAE becomes efficient and inductive.
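For reference, IGMC's extraction step can be sketched in a few lines (a minimal 1-hop version on hypothetical toy triples; the distance-based relabeling is omitted):

```python
def enclosing_subgraph(edges, u, v):
    # 1-hop enclosing subgraph around the target (user u, item v) pair:
    # the two target nodes plus their direct neighbors, together with the
    # edges induced between them.
    users = {u} | {i for i, j, _ in edges if j == v}  # u and v's user neighbors
    items = {v} | {j for i, j, _ in edges if i == u}  # v and u's item neighbors
    return [(i, j, r) for i, j, r in edges if i in users and j in items]

# Observed (user, item, rating) triples; the pair (1, 0) is the target.
edges = [(0, 0, 5), (1, 1, 3), (2, 2, 4), (0, 2, 1)]
subgraph = enclosing_subgraph(edges, u=1, v=0)  # [(0, 0, 5), (1, 1, 3)]
```

Doing this for every target link at inference time is what makes IGMC costly compared with a single encoder pass over the whole graph.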
The other type of inductive GNN-based matrix completion method is the content-based model, such as PinSage \cite{ying2018graph}, which uses node content as initial node features. Although being inductive and successful in real recommender systems, content-based models rely heavily on the rich content of each node, which is not easily accessible in most real recommender systems. In comparison, our model is inductive and does not rely on any node content.
\section{Method}
As aforementioned, matrix completion has been formulated as the link prediction problem on a bipartite user-item graph in recent GNN-based matrix completion methods.
Specifically, we consider a rating matrix $M$ of shape $N_u \times N_v$, where $N_u$ is the number of users and $N_v$ is the number of items. Some entries in this matrix are observed and the others are missing. An observed entry $M_{ij}$ is a historical rating from user $i$ to item $j$.
The task of matrix completion is to predict the value of missing entries.
GNN-based matrix completion views the matrix as a bipartite graph and predicts the missing links in this graph.
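The graph formulation above amounts to the following conversion (a minimal sketch; `None` marks missing entries):

```python
def rating_matrix_to_bipartite(M):
    # Convert a partially observed rating matrix (None = missing) into typed
    # bipartite edges (user, item, rating); the missing entries become the
    # links to predict.
    edges, targets = [], []
    for i, row in enumerate(M):
        for j, r in enumerate(row):
            if r is None:
                targets.append((i, j))
            else:
                edges.append((i, j, r))
    return edges, targets

M = [[5, None, 1],
     [None, 4, None]]
edges, targets = rating_matrix_to_bipartite(M)
# edges == [(0, 0, 5), (0, 2, 1), (1, 1, 4)]; targets are the missing pairs.
```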
In this section, we first present some findings on multiple real-world datasets, which reveal the properties of local graph patterns in both sparse and dense rating matrices.
Based on these observations, we then elaborate on how the proposed learning algorithm, IMC-GAE, integrates the GAE-based model and IGMC to obtain a more effective model for real recommender systems. An overview of IMC-GAE is shown in Figure \ref{fig:Overview}. Specifically, IMC-GAE is a GAE-based model consisting of three major components: 1) an embedding layer whose input features consist of the one-hot index of nodes, the identical feature, and the role-aware feature, 2) a relational GCN encoder, and 3) a bilinear decoder that combines the representations of the target nodes to reconstruct link representations.
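As a sketch of the decoder component, one common bilinear form (used, e.g., in GC-MC; IMC-GAE's exact parameterization may differ) scores each rating class $r$ as $u^{\top} Q_r v$ and softmaxes over the classes:

```python
import math

def bilinear_decode(u, v, Q):
    # Score rating class r as u^T Q_r v, then softmax over the classes and
    # return the class probabilities and the expected rating (classes are
    # assumed to correspond to ratings 1..R).
    scores = [sum(u[a] * sum(Qr[a][b] * v[b] for b in range(len(v)))
                  for a in range(len(u)))
              for Qr in Q]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    return probs, sum((r + 1) * p for r, p in enumerate(probs))

u, v = [1.0, 0.0], [0.0, 1.0]        # toy user/item embeddings
Q = [[[1.0, 0.0], [0.0, 1.0]],       # Q_1: parameters for rating class 1
     [[0.0, 1.0], [1.0, 0.0]]]       # Q_2: parameters for rating class 2
probs, expected = bilinear_decode(u, v, Q)  # expected rating lies in (1, 2)
```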
\begin{table}
\caption{Quantitative analysis on five datasets: density and the PCC between the true ratings and four heuristic scores.}
\label{tab:QAD}
\begin{tabular}{lccccc}
\toprule
\textbf{Dataset} & \textbf{density} &\textbf{AUR}&\textbf{AIR}&\textbf{MCR}&\textbf{SCF} \\
\midrule
YahooMusic & < 0.0001 & 0.1915 & 0.0745 & 0.3585 & 0.4713\\
Flixster & 0.0029 & 0.4705 & 0.1289 & 0.4362 & 0.5008\\
Douban & 0.0152 & 0.3672 & 0.5033 & 0.4537 & 0.4735\\
ML-1M & 0.0447 & 0.3771 & 0.4812 & 0.4151 & 0.5659\\
ML-100K & 0.0630 & 0.3826 & 0.4177 & 0.3815 & 0.5006\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Understanding local graph patterns}
\label{sub_sec_1}
In previous works, handcrafted heuristics defined on the local graph around the target link (i.e., \textbf{local graph patterns}) were designed for link prediction on graphs \cite{liben2007link}. IGMC was the first to adopt a labeling trick in GNN-based matrix completion that automatically learns suitable graph patterns from the local graph.
These local graph patterns can be easily generalized to new local graphs or unseen links.
To develop a better understanding of local graph patterns in matrix completion, we conduct a quantitative data exploration on five real-world datasets, whose densities range from less than $0.0001$ to $0.063$. In particular, we examine the Pearson correlation coefficient (PCC) between the true ratings and four heuristic scores \cite{liben2007link, zeng2021graph}: the average user rating (AUR), the average item rating (AIR), the most common rating between the source and target nodes (MCR), and a simple collaborative signal (SCF) on the five datasets. Specifically, in SCF we find the user node that has the most common neighbors with the source node, called the \emph{guider}, and predict the link based on the rating that the guider gives to the target item node. From Table \ref{tab:QAD}, we make the following observations:
\begin{itemize}[leftmargin=*]
\item The PCCs between the true ratings and four heuristic scores in five datasets are all positive, which indicates that the true ratings are correlated with these four heuristic scores in each dataset. Furthermore, it suggests that local graph patterns are effective to predict the missing ratings in matrix completion.
\item The PCCs between the true ratings and the four heuristic scores are all smaller than $0.6$, which indicates that a single local graph pattern is not enough to predict the missing ratings. It suggests that the model needs to learn more complex local graph patterns from the rating matrix, or specific node representations for personalized recommendation, to obtain better performance.
\item Among the four heuristic scores, AUR and AIR are simple statistics that only depend on one type of user-item interaction (i.e., the interactions with the target user or the interactions with the target item), while MCR is a statistic depending on both types of interactions. We find that the performance of MCR is more stable across different datasets than that of AUR (or AIR). It suggests that MCR is effective in both sparse and dense rating matrices. Moreover, stable local graph patterns like MCR are effective across different datasets, which makes inductive matrix completion possible. Furthermore, SCF considers all the interactions with the target nodes and their neighbors within 1 hop, and outperforms MCR on all datasets. It suggests that local graph patterns which consider more user-item interactions may be more powerful.
\end{itemize}
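As a concrete illustration, the heuristic scores above can be computed directly from observed (user, item, rating) triples. The following is a minimal sketch on toy data; the triples and function names are ours, not taken from the paper's released code:

```python
import numpy as np

# Toy observed ratings as (user, item, rating) triples -- illustrative data only.
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 2), (2, 1, 5), (2, 2, 1)]

def aur(u):
    """Average user rating: mean of all ratings given by user u."""
    vals = [r for (uu, _, r) in ratings if uu == u]
    return sum(vals) / len(vals)

def air(i):
    """Average item rating: mean of all ratings received by item i."""
    vals = [r for (_, ii, r) in ratings if ii == i]
    return sum(vals) / len(vals)

def mcr(u, i):
    """Most common rating among the ratings touching user u or item i."""
    vals = [r for (uu, ii, r) in ratings if uu == u or ii == i]
    return max(set(vals), key=vals.count)

# Pearson correlation between the true ratings and one heuristic score (AUR).
truth = np.array([r for (_, _, r) in ratings], dtype=float)
score = np.array([aur(u) for (u, _, _) in ratings])
pcc = np.corrcoef(truth, score)[0, 1]
```

On real datasets the same correlation is computed per heuristic score, which is how Table \ref{tab:QAD} can be reproduced in spirit.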
\subsection{Input features}
\label{sub_sec_2}
Motivated by our earlier findings, we now introduce two input features, the identical feature and the role-aware feature, for the GAE-based model to learn local graph patterns. The identical feature is a single shared index, which helps the GNN aggregate one-hop user-item interactions (user-to-item or item-to-user) through the message passing function. It aims to represent simple local heuristic scores such as AIR or AUR, which the quantitative analysis above showed to be effective for predicting potential ratings.
To model two-hop user-item interactions, we design the second structure feature, the role-aware feature, which uses two extra indexes to distinguish users from items in the input space of the model. It allows the model to distinguish user nodes from item nodes, and therefore to distinguish user-to-item interactions from item-to-user interactions. Furthermore, after the user-item interactions around the target link are aggregated by the message passing function, the model can distinguish the user-item interactions from 1-hop neighbors from those from 2-hop neighbors. By distinguishing these two types of user-item interactions, the model is capable of learning more complicated and powerful local graph patterns such as the aforementioned MCR or SCF.
Furthermore, the model needs more expressive patterns for personalized recommendation. Accordingly, we incorporate the one-hot index into the input space of IMC-GAE, as in previous GAE-based models that learn specific node representations for personalized recommendation. Altogether, IMC-GAE adopts the two informative features and the one-hot index feature: the former help GAE learn the structure link representation, and the latter helps it learn the latent link representation. The structure link representation captures local graph patterns around the target link, and the latent link representation captures the user-specific preference for the item.
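The three kinds of input indexes described above can be sketched as follows. The function and the concrete index values are illustrative assumptions, not the released implementation:

```python
# Hypothetical construction of the three input indexes for a bipartite
# user-item graph with n_users users and n_items items.
def build_input_indices(n_users, n_items):
    n = n_users + n_items
    identical = [0] * n                    # identical feature: one shared index
    role = [1] * n_users + [2] * n_items   # role-aware feature: user vs. item
    one_hot = list(range(n))               # one-hot index: a per-node id
    return identical, role, one_hot
```

Each index selects an embedding in the embedding layer; the identical and role-aware embeddings are shared across nodes, while the one-hot index selects a node-specific latent embedding.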
\subsection{GNN encoder on heterogeneous graph}
In our paper, matrix completion is formulated as a link prediction problem on a heterogeneous graph. In this graph, rating edges of the same type are collected into a rating subgraph (e.g., if the graph contains four rating types, there are four rating subgraphs). Correspondingly, each rating subgraph contains a copy of all the nodes. IMC-GAE then applies a node-level GNN encoder to these subgraphs to learn a distinct representation for each node in each subgraph. Our GNN encoder has three components: 1) an embedding layer, 2) a message passing layer, and 3) an accumulation layer.
\subsubsection{Embedding layer.}
In each rating subgraph, the representation of each node consists of three different embeddings: an identical node embedding $u_t$, a role-aware embedding $r_t$, and a rating embedding $l_t$. We assume there are $T$ rating types in the rating matrix, so our model has $T$ rating subgraphs. With three embeddings in each rating subgraph, each node would have $3 \times T$ embeddings in IMC-GAE.
In order to reduce the number of parameters while allowing for more robust pattern learning, we use the same identical node embedding and role-aware embedding in each rating subgraph.
Therefore, $T + 2$ embeddings are used to represent a node across the $T$ rating subgraphs. Moreover, we concatenate (denoted by $Concat(\cdot)$) these three embeddings in the embedding layer, whose output is
\begin{equation}
\label{eq:ini}
x_t^0[i] = Concat(u_t[i], r_t[i], l_t[i]),
\end{equation}
where $x_t^0[i]$ denotes node $i$'s embedding vector in the $t$-th rating subgraph. The node embedding vectors are the input of the message passing layer.
\subsubsection{Message passing layer.} In IMC-GAE, we adopt a standard GCN message passing layer to perform local graph convolution, which has the following form:
\begin{equation}
x_t^{l+1}[i] = \sum_{j \in \mathcal{N}_t(i)} \frac{1}{\sqrt{|\mathcal{N}_t(i)|\cdot |\mathcal{N}_t(j)|}}x_t^l[j]
\end{equation}
where $x_t^{l+1}[i]$ denotes node $i$'s feature vector at layer $l+1$ in the $t$-th rating subgraph.
We choose symmetric normalization as the degree normalization factor in our message passing layer, where $|\mathcal{N}_t(i)|$ denotes the number of neighbors of node $i$ in the $t$-th rating subgraph.
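The symmetrically normalized propagation rule above can be sketched in NumPy as a dense matrix product. This is a toy illustration only; the actual model uses a sparse implementation on top of DGL:

```python
import numpy as np

def gcn_propagate(adj, x):
    """One symmetric-normalized GCN message passing step:
    x'[i] = sum_{j in N(i)} x[j] / sqrt(|N(i)| * |N(j)|)."""
    deg = adj.sum(axis=1)
    # Guard against isolated nodes (degree zero) to avoid division by zero.
    deg_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1)), 0.0)
    norm_adj = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
    return norm_adj @ x
```

Applied within each rating subgraph, this propagates node features only along edges of that rating type.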
\subsubsection{Accumulation layer.}
In each $t$-th rating subgraph, we stack $L$ message passing layers with ReLU activations \cite{agarap2018deep} between consecutive layers. Following \cite{he2020lightgcn}, node $i$'s feature vectors from different layers are combined by a weighted sum into its final representation $h_t[i]$ in the $t$-th rating subgraph,
\begin{equation}
h_t[i] = \sum_{0 \leq l \leq L}\frac{1}{l+1}x_t^l[i]
\end{equation}
Then we accumulate node $i$'s final representations $h_t[i]$ from all $T$ rating subgraphs into a single vector with a sum operator,
\begin{equation}
h[i] = \sum_{t\in T} h_t[i]
\end{equation}
To obtain the final representation of a user or item node, we transform the intermediate output $h[i]$ with a linear operator,
\begin{equation}
n[i] = \tanh(Wh[i])
\end{equation}
The parameter matrix $W$ is shared between user nodes and item nodes, because the model is trained without side information about the nodes.
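The accumulation steps above (weighted layer combination, summation over subgraphs, and the shared linear map with $\tanh$) can be sketched as follows; shapes and names are illustrative:

```python
import numpy as np

def accumulate(layer_outputs_per_subgraph, W):
    """layer_outputs_per_subgraph: for each rating subgraph t, a list over
    layers l of (n, d) arrays x_t^l.  Layers are combined with weight
    1/(l+1), subgraph representations are summed, then a shared linear map
    with tanh produces the final node representations n[i]."""
    h = 0.0
    for layer_outputs in layer_outputs_per_subgraph:      # over subgraphs t
        h_t = sum(x / (l + 1) for l, x in enumerate(layer_outputs))
        h = h + h_t                                       # sum over subgraphs
    return np.tanh(h @ W.T)                               # shared W for all nodes
```

Sharing a single `W` across user and item nodes mirrors the design choice stated above.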
\subsection{Bilinear decoder}
In IMC-GAE, following \cite{berg2017graph}, we use a bilinear decoder to reconstruct links in the user-item graph and treat each rating level as a separate class. Given the final representations $n[i]$ of user $i$ and $n[j]$ of item $j$, we use a bilinear operator to produce the final link representation $e_t[i, j]$ in the $t$-th rating subgraph,
\begin{equation}
e_t[i, j] = n[i]^TW_tn[j],
\end{equation}
where $W_t$ is a learnable parameter matrix. Thus, we can estimate the final rating score as,
\begin{equation}
r[i, j] = \sum_{t \in T} t S_t(\mathbf{e(i, j)}),
\end{equation}
where $\mathbf{e(i, j)}$ is the vector that concatenates the final link representations of user $i$ and item $j$ over all $T$ rating subgraphs, and $S_t$ is the softmax probability on the $t$-th dimension of the $\mathbf{e(i, j)}$ vector.
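A toy sketch of the bilinear decoder and the expected-rating computation follows. The names are ours, and the real decoder's matrices $W_t$ are trained end-to-end rather than fixed:

```python
import numpy as np

def predict_rating(n_i, n_j, W_list, rating_values):
    """Bilinear decoder sketch: score e_t = n_i^T W_t n_j for each rating
    level t, softmax over levels, then expected rating sum_t t * p_t."""
    e = np.array([n_i @ W_t @ n_j for W_t in W_list])
    p = np.exp(e - e.max())        # numerically stable softmax
    p = p / p.sum()
    return float(np.dot(rating_values, p))
```

With $T$ rating levels, `rating_values` holds the actual rating value of each level (e.g., `[1, 2, 3, 4, 5]`), matching the weighted sum $\sum_t t\,S_t$ above.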
\begin{figure}[tp]
\centering\includegraphics[width=3.0in]{NodeDropout.png}
\caption{Layer-wise Node Dropout. In this subgraph extracted from ML-100k, red nodes indicate target nodes; blue nodes indicate the 1-hop neighbors of the target nodes; white nodes indicate the 2-hop neighbors of the target nodes.}
\label{fig:NodeDropout}
\end{figure}
\subsection{Layer-wise node dropout}
\label{sub_sec_3}
The layer-wise node dropout is inspired by the node dropout in \cite{berg2017graph}, and aims to help the model grasp local graph patterns that generalize better to unobserved ratings.
In previous works, GAE-based models adopt a node dropout scheme that randomly drops all outgoing messages of a particular node with a probability $p$. In contrast, our method adopts different node dropout probabilities in different layers, which we call layer-wise node dropout. Specifically, the dropout probability at the $l$-th layer is $p_l = p_0 - l \theta$,
where $p_0$ is the initial node dropout probability and $\theta$ is a decay hyperparameter.
In our paper, layer-wise node dropout facilitates node representation learning due to the following two reasons.
The first reason is the same as in \cite{berg2017graph}: node dropout helps overcome the over-smoothing problem in GNN representations and improves the generalization ability of our model.
The second reason is that it helps the model learn local graph patterns that consider more user-item interactions in the 1-hop subgraph around the target link. As shown in Figure \ref{fig:NodeDropout}, the target nodes are node $0$ and node $1$. In previous GAE-based models, the representations of node $0$ and node $1$ at the second layer aggregate too much node information from beyond the 1-hop subgraph around them (e.g., node $4$, node $5$, node $8$ and node $9$ in the example), which prevents the model from learning graph patterns from the user-item interactions around the target nodes.
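The schedule $p_l = p_0 - l\theta$ is straightforward to implement. A small helper follows; clipping the result to a valid probability range is our own assumption for robustness:

```python
def layerwise_dropout_probs(p0, theta, num_layers):
    """Node dropout probability per layer: p_l = p0 - l * theta,
    clipped to [0, 1] so deeper layers never get an invalid probability."""
    return [min(max(p0 - l * theta, 0.0), 1.0) for l in range(num_layers)]
```

For example, with the searched values $p_0 = 0.3$ and $\theta = 0.1$, a 3-layer encoder would drop nodes with probabilities 0.3, 0.2, and 0.1 at layers 0, 1, and 2.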
\begin{figure*}[tp]
\centering\includegraphics[width=5.5in]{Local.png}
\caption{We compare local graph patterns learning in IMC-GAE with that in GAE and IGMC in two cases. In the first case, the model is to infer the link between node $1$ and node $5$ in original graph. In the second case, the model is to infer the link between node $6$ and node $0$ in a new graph.}
\label{fig:Discuss}
\end{figure*}
\subsection{Model training}
\subsubsection{Loss function.} We minimize the cross-entropy loss (denoted by $CE$) between the predictions and the ground-truth ratings,
\begin{equation}
\mathcal{L} = \frac{1}{|\{(i, j) \mid \Omega_{i, j} = 1\}|}\sum_{(i, j):\Omega_{i, j} = 1} CE(r[i, j], \hat{r}[i, j]),
\end{equation}
where $r[i, j]$ and $\hat{r}[i, j]$ denote the predicted rating and the true rating of $(i, j)$, respectively, and the 0/1 matrix $\Omega$ serves as a mask for unobserved ratings in the rating matrix $M$.
\subsubsection{Node representation regularization.}
It is inspired by the adjacent rating regularization in \cite{zhang2019inductive}.
Since rating types are ordered in matrix completion (e.g., rating 5 is greater than rating 4, and rating 4 is greater than rating 3), we need to account for the magnitude of ratings. Accordingly, we propose node representation regularization (NRR), which encourages each node to have similar representations in adjacent rating subgraphs.
Specifically, we assume that the representation of the $i$-th node in the $t$-th rating subgraph is $h_t[i]$, where $1 \leq t \leq T$.
Then, the NRR regularizer is,
\begin{equation}
\mathcal{L}_{NRR} = -\sum_{1 \leq t < T}\sum_{1 \leq i \leq N} Cos(h_t[i], h_{t+1}[i]),
\end{equation}
where $Cos$ denotes the cosine similarity between two vectors and
$N$ is the total number of users and items in the matrix. Finally, we combine the two losses into the final loss function,
\begin{equation}
\label{eq:loss}
\mathcal{L}_{f} = \mathcal{L} + \lambda \mathcal{L}_{NRR},
\end{equation}
where $\lambda$ is a hyperparameter that trades off the two losses.
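A minimal sketch of the combined objective, with the NRR term computed as the negative cosine similarity between adjacent rating subgraphs (toy NumPy code, not the training implementation):

```python
import numpy as np

def cross_entropy(probs, label):
    """CE for one link: negative log-probability of the true rating class."""
    return -np.log(probs[label])

def nrr(h):
    """NRR regularizer. h has shape (T, N, d): node representations per
    rating subgraph.  Sums -cos(h_t[i], h_{t+1}[i]) over adjacent subgraphs."""
    total = 0.0
    for t in range(h.shape[0] - 1):
        a, b = h[t], h[t + 1]
        cos = (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
        total += -cos.sum()
    return total

def final_loss(probs_list, labels, h, lam):
    """Mean cross-entropy over observed links plus lambda * NRR."""
    ce = np.mean([cross_entropy(p, y) for p, y in zip(probs_list, labels)])
    return ce + lam * nrr(h)
```

Identical representations across adjacent subgraphs make every cosine term 1, so the regularizer is minimized exactly when adjacent subgraphs agree, matching its stated purpose.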
\subsection{Inductive learning}
In IMC-GAE, the inductive link representation for unseen nodes has two parts: the inductive structure representation, learned from the identical feature and the role-aware feature, and the inductive latent representation, learned from the one-hot index of the node.
For the inductive structure representation, we simply leverage message passing, propagating the learned structure representations from neighbors to the target nodes.
For the inductive latent representation, we also first accumulate the latent representations of the target nodes' neighbors. However, some of these neighbors may be unseen during training and therefore lack latent representations.
In our method, we represent each unseen node by the average latent representation of the seen nodes in each rating subgraph,
\vspace{-5pt}
\begin{equation}
l_t[i] = \sum_{ j \in {\mathcal{I}}} \frac{1}{|{\mathcal{I}}|} l_t[j],
\end{equation}
where $l_t[i]$ is the initial latent representation of the $i$-th node in the $t$-th rating subgraph (see equation \ref{eq:ini}) and $\mathcal{I}$ is the set of nodes seen during training. This is a simple but effective method, as demonstrated in the experiments below.
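The averaging rule for unseen nodes can be sketched as follows (a toy NumPy illustration of the equation above; the function name is ours):

```python
import numpy as np

def init_unseen_latent(l_t, seen_idx):
    """In one rating subgraph, assign every unseen node the mean latent
    representation of the seen nodes: l_t[i] = (1/|I|) * sum_{j in I} l_t[j]."""
    out = l_t.copy()
    mean_seen = l_t[seen_idx].mean(axis=0)
    unseen = [i for i in range(len(l_t)) if i not in set(seen_idx)]
    out[unseen] = mean_seen
    return out
```

Seen nodes keep their trained embeddings; only the unseen rows are overwritten, so no retraining is required at inference time.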
\section{Discussing local graph patterns}
To shed more light on local graph pattern learning in GNN-based models, we compare GAE-based matrix completion, IGMC, and IMC-GAE through a typical example in Figure \ref{fig:Discuss}. Here, we assume the ratings in our example are in $\{1, -1\}$ (like, denoted by a bold black line, and dislike, denoted by a bold coffee-colored line). Solid lines are observed ratings used for training and dashed lines are test ratings. In the training case, users $1$ and $2$ both like item $3$ and both dislike item $4$. This indicates that user $1$ may have a similar taste to user $2$, which is a common local graph pattern in matrix completion. Furthermore, since user $2$ dislikes item $5$, we infer that user $1$ may also dislike item $5$ based on this "similar taste" pattern.
When trained with the observed rating between user $2$ and item $4$, IGMC first extracts the 1-hop local graph around user $2$ and item $4$, and relabels user $1$, user $2$, item $3$, item $4$ and item $5$ as indexes $2$, $0$, $3$, $1$ and $3$, respectively. It then applies a GNN to the local graph, with the new node labels as input features. Without introducing user-item interactions beyond the 1-hop local graph around the target nodes, the "similar taste" pattern is easily learned by the model. Previous GAE-based models, in contrast, apply the GNN to the entire graph. Accordingly, when trained with the observed rating between user $2$ and item $4$, the representations of user $2$ and item $4$ aggregate many node embeddings from beyond their 1-hop local graph, which makes it hard for the model to focus on the interactions in this local graph, so it fails to learn the "similar taste" pattern. To solve this problem, IMC-GAE designs the layer-wise node dropout scheme, which prevents the GAE-based model from aggregating too many embeddings of nodes beyond the 1-hop local graph into the target node representations. Although the target node representations still aggregate a few node representations from beyond the local graph, the model is capable of learning the "similar taste" pattern between users $1$ and $2$. Furthermore, given a new graph in Figure \ref{fig:Discuss} with the same structure as the original graph, previous GAE-based models must be retrained to infer the missing rating between user $8$ and item $9$. In contrast,
with the labeling trick in IGMC and the inductive structure representation in IMC-GAE, the models encode the "similar taste" pattern into a structural link representation, which generalizes to the new graph.
Despite the effectiveness of local graph pattern learning in IGMC, the labeling trick introduces extra computational complexity. The reason is that for every rating $(u_j, i_k)$ to predict, IGMC needs to relabel the graph according to $(u_j, i_k)$. The same node $u_j$ is labeled differently depending on the target link and is given a different node representation by the GNN when it appears in different links' labeled graphs. This differs from previous GAE-based models and IMC-GAE, which do not relabel the graph and give each node a single embedding vector. For a graph with $n$ nodes and $m$ ratings to predict, a GAE-based model needs to apply the GNN $\mathcal{O}(n)$ times to compute an embedding for each node, while IGMC needs to apply the GNN $\mathcal{O}(m)$ times for all ratings. When $m \gg n$, IGMC has worse time complexity than GAE-based models, which makes it less suitable for real-world recommendation.
\section{Experiments}
\label{sub_sec_4}
We perform experiments on five datasets to evaluate our proposed method. We aim to answer the following research questions:
\begin{itemize}[leftmargin=*]
\item \textbf{RQ1:} How does IMC-GAE perform compared with state-of-the-art matrix completion methods when facing both sparse and dense rating matrices?
\item \textbf{RQ2:} How do different hyper-parameter settings (e.g., depth of layers, weighted layer combination, and node representation regularization (NRR)) affect IMC-GAE?
\item \textbf{RQ3:} How does the local graph patterns learning in IMC-GAE benefit from two informative features and layer-wise node dropout scheme respectively?
\item \textbf{RQ4:} How does IMC-GAE perform on few-shot or even unseen users (or items) compared with GAE-based models and IGMC?
\end{itemize}
\subsection{Datasets description}
To evaluate the effectiveness of IMC-GAE, we conduct experiments on five common matrix completion datasets: Flixster \cite{jamali2010matrix}, Douban \cite{ma2011recommender}, YahooMusic \cite{dror2012yahoo}, MovieLens-100K \cite{miller2003movielens} and MovieLens-1M \cite{miller2003movielens}, which are publicly accessible and vary in domain, size, and sparsity. Flixster, Douban, and YahooMusic are preprocessed subsets of the original datasets provided by \cite{monti2017geometric}. These datasets contain rating sub-matrices of only 3000 users and 3000 items, which we regard as sparse rating matrices in real recommendation. MovieLens-100K and MovieLens-1M are widely used datasets for evaluating many recommender tasks, which we regard as dense rating matrices. For ML-100k, we train and evaluate on the canonical u1.base/u1.test train/test split. For ML-1M, we randomly split the ratings into 90\%/10\% train/test sets. For Flixster, Douban, and YahooMusic, we use the splits provided by \cite{monti2017geometric}.
\begin{table}
\caption{RMSE of different algorithms on Flixster, Douban and YahooMusic.}
\label{tab:rmse_spare}
\begin{tabular}{lccc}
\toprule
\textbf{Model} &\textbf{Flixster}&\textbf{Douban}&\textbf{YahooMusic}\\
\midrule
IGC-MC & 0.999 & 0.990 & 21.3\\
F-EAE & 0.908 & 0.738 & 20.0\\
PinSage & 0.954 & 0.739 & 22.9\\
IGMC &\textbf{0.872} & \textbf{0.721} & 19.1\\
\midrule
GRALS & 1.245 & 0.883 & 38.0\\
sRGCNN & 0.926 & 0.801 & 22.4\\
GC-MC & 0.917 & 0.734 & 20.5\\
IMC-GAE (ours) & 0.884 & \textbf{0.721} & \textbf{18.7}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{RMSE test results on MovieLens-100K (left) and MovieLens-1M (right).}
\label{tab:rmse_dense}
\begin{tabular}{lc|lc}
\toprule
\textbf{Model}&\textbf{ML-100K}&\textbf{Model}&\textbf{ML-1M}\\
\midrule
F-EAE & 0.920 & F-EAE & 0.860\\
PinSage & 0.951 & PinSage & 0.906\\
IGMC & 0.905 & IGMC & 0.857\\
\midrule
MC & 0.973 & PMF & 0.883\\
IMC & 1.653 & I-RBM & 0.854\\
GMC & 0.996 & NNMF & 0.843\\
GRALS & 0.945 & I-AutoRec & 0.831\\
sRGCNN & 0.929 & CF-NADE & \textbf{0.829}\\
GC-MC & 0.905 & GC-MC & 0.832\\
NMTR & 0.911 & NMTR & 0.834 \\
IMC-GAE (ours) & \textbf{0.897} & IMC-GAE (ours) & \textbf{0.829}\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Experimental Settings}
\subsubsection{Baselines.}
To demonstrate the effectiveness of IMC-GAE, we compare it with the following methods:
\begin{itemize}[leftmargin=*]
\item \textbf{Traditional methods.} Matrix completion (MC) \cite{candes2009exact}, inductive matrix completion (IMC) \cite{jain2013provable}, geometric matrix completion (GMC) \cite{kalofolias2014matrix}, PMF \cite{mnih2007probabilistic}, I-RBM \cite{salakhutdinov2007restricted}, NNMF \cite{dziugaite2015neural}, I-AutoRec \cite{sedhain2015autorec} and CF-NADE \cite{zheng2016neural} are traditional matrix completion methods, which use the user-item ratings (or interactions) only as the target values of their objective functions.
\item \textbf{GAE-based methods.} sRGCNN \cite{10.5555/3294996.3295127}, NMTR \cite{gao2019neural}, GC-MC \cite{berg2017graph} are GAE-based matrix completion methods, which use one-hot index as the initial feature of each node.
\item \textbf{IGMC.} IGMC \cite{zhang2019inductive} is an inductive matrix completion method, which learns local graph patterns to generalize to new local graphs for inductive learning.
\item \textbf{Content-based GNN methods.} Content-based matrix completion methods are inductive GNN-based methods that adopt side information as the initial features of each node; they include PinSage \cite{ying2018graph} and IGC-MC \cite{berg2017graph}. PinSage was originally designed to predict related pins and is adapted to rating prediction here. IGC-MC is a content-based variant of GC-MC, which uses content features instead of the one-hot encoding of node IDs as its input features.
\item \textbf{Other GNN methods.} GRALS \cite{rao2015collaborative} is a graph regularized matrix completion algorithm and F-EAE \cite{hartford2018deep} uses exchangeable matrix layers to perform inductive matrix completion without using content.
\end{itemize}
In addition, given different datasets, we compare IMC-GAE with different baseline methods under RMSE in Table \ref{tab:rmse_spare}. RMSE is a common evaluation metric in matrix completion \cite{zhang2019inductive, berg2017graph}. The baseline results are taken from \cite{zhang2019inductive}.
\subsubsection{Hyperparameter Settings.} We implement our model based on DGL \cite{wang2019deep} and use the Adam optimizer. We apply a grid search for hyperparameters, the number of layers is searched in $\{1, 2, ... , 5\}$, the $\lambda$ in equation \ref{eq:loss} is searched in $\{4e^{-5}, 4e^{-4}, 4e^{-3}, 4e^{-2}\}$, the embedding size of each vector in embedding layer is chosen from $\{90, 120,..., 1800\}$ and the embedding size of each vector in bilinear decoder is searched in $\{30, 40, ... , 80\}$. Besides, the initial node dropout probability is tuned in $\{0.1, 0.2, 0.3\}$ and the decay ratio $\theta$ is tuned in $\{0.05, 0.1, 0.2\}$.
All implementation codes can be found at \url{https://github.com/swtheing/IMC-GAE}.
\subsection{Performance comparison (RQ1)}
We start by comparing our proposed IMC-GAE with baselines on five benchmark datasets and then explore how the combination of the local graph patterns learning and specific node representations improves the performance in matrix completion.
For Flixster, Douban, and YahooMusic, we compare our proposed model with GRALS, sRGCNN, GC-MC, F-EAE, PinSage, IGC-MC and IGMC. The results are shown in Table \ref{tab:rmse_spare}. Our model achieves the smallest RMSEs on the Douban and YahooMusic datasets, and is only slightly worse than IGMC on the Flixster dataset. Furthermore, as a GAE-based model, our method significantly outperforms all the GAE-based baselines (sRGCNN and GC-MC), which highlights the successful designs (the two informative features and the layer-wise node dropout scheme) of our model.
For ML-100k, we compare IMC-GAE with MC, IMC and GMC, as well as GRALS, sRGCNN, GC-MC, F-EAE, PinSage, NMTR and IGMC. For ML-1M, besides the baselines GC-MC, F-EAE, PinSage, NMTR and IGMC, we further include PMF, I-RBM, NNMF, I-AutoRec, and CF-NADE. Our model achieves the smallest RMSEs on these datasets without using any content, outperforming or matching all the compared baselines, regardless of whether they are GAE-based models.
Altogether, our model outperforms all GAE-based models on all datasets. This demonstrates that the local graph patterns learned by our model truly help it infer the missing ratings in both sparse and dense rating matrices.
\subsection{Study of IMC-GAE (RQ2)}
As the GNN encoder plays a pivotal role in IMC-GAE, we investigate its impact on the performance. We start by exploring the influence of layer numbers. We then study how the weighted layer combination and NRR affect the performance.
\begin{table}
\caption{Effect of layer numbers in GNN encoder}
\label{tab:Layers}
\begin{tabular}{lccc}
\toprule
&\textbf{Douban}&\textbf{YahooMusic} & \textbf{ML-100k}\\
\midrule
IMC-GAE-1 & 0.728 & 18.803 & 0.900 \\
IMC-GAE-2 & 0.725 & 18.793 & \textbf{0.897} \\
IMC-GAE-3 & 0.722 & \textbf{18.702} & 0.897 \\
IMC-GAE-4 & 0.723 & 20.343 & 0.901 \\
IMC-GAE-5 & \textbf{0.721} & 18.785 & 0.897 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Effect of weighted layer combination and NRR}
\label{tab:Ablation}
\begin{tabular}{lccc}
\toprule
&\textbf{Douban}&\textbf{YahooMusic}& \textbf{ML-100k}\\
\midrule
IMC-GAE(Original) & \textbf{0.721}& \textbf{18.7}& \textbf{0.897}\\
IMC-GAE(no NRR) & 0.722& 18.8& 0.900\\
IMC-GAE(Sum) & 0.727 & 19.2& 0.905\\
IMC-GAE(Concat) & 0.723& 18.9& 0.903\\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Effect of Layer Numbers.} To investigate whether IMC-GAE can benefit from multiple layers in the GNN encoder, we vary the model depth. In particular, we search the layer numbers in the range of $\{1, 2, 3, 4, 5\}$. Table \ref{tab:Layers} summarizes the experimental results, wherein IMC-GAE-$i$ denotes the model with $i$ embedding propagation layers. Analyzing Table \ref{tab:Layers}, we have the following observations:
\begin{itemize}[leftmargin=*]
\item Increasing the depth of IMC-GAE substantially enhances performance. In particular, IMC-GAE-2 achieves consistent improvement across the board over IMC-GAE-1, which considers 1-hop neighbors only. We attribute the improvement to the effective modeling of local graph structure: the structure features and layer-wise node dropout help the model grasp effective patterns from the local graph around the target nodes.
\item When further stacking propagation layers on top of IMC-GAE-2, we find that IMC-GAE-3 leads to performance degradation on ML-100k but performance improvement on Douban and YahooMusic. The degradation might occur because deeper layers introduce noise into the latent link representation. More specifically, the deeper layers (e.g., the third layer in IMC-GAE-3) lose the original graph connectivity, which makes it harder for the model to learn the latent link representation from the neighbors. Meanwhile, the improvements on the other two datasets verify that local graph patterns beyond 1-hop neighbors still improve the performance of the model on sparse rating matrices.
\end{itemize}
\subsubsection{Effect of weighted layer combination and NRR} Different from prior works \cite{berg2017graph,zhang2019inductive}, we adopt a weighted sum operator for layer combination instead of a sum or concatenation operator, and use NRR to encourage each node to have similar representations in adjacent rating subgraphs. Table \ref{tab:Ablation} shows the results of the ablation experiments, from which we make the following observations. First, the weighted sum combination improves over the sum and concatenation operators on both sparse and dense rating matrices. This might be because the sum and concatenation operators do not assign lower importance to node representations from deeper layers, which introduces more noise into the node representations. Second, disabling NRR results in a performance drop on all three datasets, demonstrating that NRR is an effective way to regularize the model.
\begin{table}
\caption{Ablation study on three datasets, where IMC-GAE-R indicates IMC-GAE trained with only the role-aware feature; IMC-GAE-I indicates IMC-GAE trained with only the identical feature; IMC-GAE-OD indicates IMC-GAE trained with the original node dropout scheme.}
\label{tab:structure_cmp}
\begin{tabular}{lccc}
\toprule
\textbf{Model}&\textbf{Douban}&\textbf{ML-100K}&\textbf{ML-1M}\\
\midrule
IMC-GAE & \textbf{0.721} & \textbf{0.897} & \textbf{0.829}\\
IMC-GAE-R & 0.734 & 0.912 & 0.868\\
IMC-GAE-I & 0.738 & 0.924 & 0.912\\
IMC-GAE-OD & 0.727 & 0.905 & 0.834 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Inference time~(s) of IMC-GAE, IGMC and GC-MC on Douban, MovieLens-100K, and MovieLens-1M.}
\label{tab:infer}
\begin{tabular}{lccc}
\toprule
\textbf{Model}&\textbf{Douban}&\textbf{ML-100K}&\textbf{ML-1M}\\
\midrule
IGMC & 9.255 & 33.041 & 122.042\\
GC-MC & \textbf{0.011} & \textbf{0.011} & \textbf{0.025}\\
IMC-GAE (ours) & 0.060 & 0.0382 & 0.067\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Study of the local graph patterns learning (RQ3)}
\subsubsection{Performance Comparison.} In this section, we study how the identical feature, the role-aware feature, and the layer-wise node dropout scheme affect local graph pattern learning in IMC-GAE, and how local graph pattern learning affects the performance of IMC-GAE on both sparse and dense rating matrices. To this end, we compare the original IMC-GAE with IMC-GAE trained with only the identical feature, IMC-GAE trained with only the role-aware feature, and IMC-GAE with the normal node dropout scheme on three datasets.
From the results shown in Table \ref{tab:structure_cmp}, we draw the following conclusions:
\begin{itemize}[leftmargin=*]
\item With only the role-aware feature or only the identical feature for training, the model still obtains performance competitive with the full IMC-GAE. This demonstrates that local graph pattern learning in IMC-GAE is effective in both sparse and dense rating matrices.
\item The full IMC-GAE outperforms the other three variants on all datasets, which demonstrates that each design for local graph pattern learning (i.e., the role-aware feature, the identical feature, and the layer-wise node dropout scheme) is essential and helps GAE learn a series of effective local graph patterns.
\end{itemize}
\subsubsection{Inference Time Comparison.} We compare the inference time of IMC-GAE with IGMC and GC-MC (a typical GAE-based model) on three datasets. Specifically, we infer 20\% of the samples in each dataset and conduct the experiment on a GN7 instance on Tencent Cloud, equipped with 4 Tesla T4 GPUs. We repeat this experiment five times and report the average inference time of each model in Table \ref{tab:infer}. The results show that the inference time of IMC-GAE is slightly longer than that of GC-MC but significantly shorter than that of IGMC.
\begin{figure}[tp]
\centering\includegraphics[width=3.0in]{sparsity.png}
\caption{ML-1M results under different sparsity ratios.}
\label{fig:sparse}
\end{figure}
\subsection{Study of IMC-GAE on sparse data (RQ4)}
To investigate the model performance on few-shot or even unseen users (or items), we test the model on datasets with different sparsity levels of the rating matrix \cite{berg2017graph, zhang2019inductive}. We construct several sparse datasets by using 100\%, 20\%, 10\%, 5\%, 1\% and 0.1\% of the training ratings in MovieLens-1M, and then compare the test RMSEs of our method with GC-MC and IGMC, a typical GAE-based model and an inductive GNN model, respectively. As shown in Figure \ref{fig:sparse}, we make two observations:
\begin{itemize}[leftmargin=*]
\item As the dataset becomes sparser, the performance of all models drops, but the drop rate of our model is much smaller than that of GC-MC.
\item The performance of our model with 100\%, 20\%, and 10\% of the training ratings is better than that of IGMC, but worse on the sparser splits.
\end{itemize}
From these observations, we find that the way we adopt GAE to learn local graph patterns truly improves its inductive learning ability. However, the performance of IMC-GAE on the three sparsest datasets is worse than that of IGMC. This suggests that the local graph patterns learned by IMC-GAE do not generalize as well as those learned by IGMC to sparser datasets containing more few-shot or unseen users (or items).
\section{Conclusion}
In this paper, we propose Inductive Matrix Completion using Graph Autoencoder (IMC-GAE), which uses a GAE to learn both local graph patterns for inductive matrix completion and specific node representations for personalized recommendation. Extensive experiments on real-world datasets demonstrate the rationality and effectiveness of the way IMC-GAE learns local graph patterns. This work represents an initial attempt to exploit local structural knowledge in GAE-based matrix completion, making it more suitable for real-world recommender systems.
\clearpage
\balance
\bibliographystyle{ACM-Reference-Format}
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf,authordraft]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what’s in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what’s in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
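For instance, a double-blind conference submission with a printed
submission ID might begin with the following (the ID shown is a
placeholder):
\begin{verbatim}
\documentclass[sigconf,anonymous,review]{acmart}
\acmSubmissionID{123-A56-BU3}
\end{verbatim}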
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately ---
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
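For instance, a title with a short version and a subtitle might be
defined as follows (the title text here is purely illustrative):
\begin{verbatim}
\title[Woodpecker Taxonomy]{An Exhaustive Taxonomy of North American Woodpeckers}
\subtitle{A Field Guide}
\end{verbatim}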
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
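For example, a complete definition of two authors who share one
affiliation, with a shared author note, might look like this (the
names, e-mail addresses, and affiliation are illustrative):
\begin{verbatim}
\author{Ben Trovato}
\authornote{Both authors contributed equally to this research.}
\email{[email protected]}
\affiliation{%
  \institution{Institute for Clarity in Documentation}
  \city{Dublin}
  \country{Ireland}}

\author{G.K.M. Tobin}
\authornotemark[1]
\email{[email protected]}
\affiliation{%
  \institution{Institute for Clarity in Documentation}
  \city{Dublin}
  \country{Ireland}}
\end{verbatim}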
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
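As a sketch, the copied commands typically resemble the following;
every value below is a placeholder, and you must use exactly the
commands and parameters provided on your own rights form:
\begin{verbatim}
\setcopyright{acmlicensed}
\copyrightyear{2018}
\acmYear{2018}
\acmDOI{10.1145/1122445.1122456}
\acmConference[Woodstock '18]{Woodstock '18: ACM Symposium
  on Neural Gaze Detection}{June 03--05, 2018}{Woodstock, NY}
\acmISBN{978-1-4503-9999-9/18/06}
\end{verbatim}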
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
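For illustration, the generated CCS commands and a user-defined
keyword list might look as follows (the concept and keywords shown
are examples only):
\begin{verbatim}
\begin{CCSXML}
<ccs2012>
 <concept>
  <concept_id>10010520.10010553.10010562</concept_id>
  <concept_desc>Computer systems organization~Embedded systems</concept_desc>
  <concept_significance>500</concept_significance>
 </concept>
</ccs2012>
\end{CCSXML}

\ccsdesc[500]{Computer systems organization~Embedded systems}

\keywords{datasets, neural networks, gaze detection, text tagging}
\end{verbatim}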
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
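For instance:
\begin{verbatim}
\section{Methodology}
\subsection{Data Collection}
\subsubsection{Survey Instrument}
\end{verbatim}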
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what's in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption --- their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
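A typical entry following these guidelines might read:
\begin{verbatim}
@article{Knuth:1984,
  author  = {Donald E. Knuth},
  title   = {Literate Programming},
  journal = {The Computer Journal},
  year    = {1984},
  volume  = {27},
  number  = {2},
  pages   = {97--111},
  doi     = {10.1093/comjnl/27.2.97}
}
\end{verbatim}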
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|''
suffix, of the \BibTeX\ file.
\section{Introduction}
\label{sec:intro}
Over the last decade, large-volume cosmological simulations have matured into one of the key tools in the study of galaxy formation \citep[][see also the review of \citealt{somerville_15}]{schaye_10, schaye_15, dubois_14, vogelsberger_14_illustris, vogelsberger_14_nature, dave_16, dave_19}. However, such simulations achieve their large volumes at the cost of relatively low resolution, meaning that many physical processes need to be implemented through sub-grid prescriptions, which are inevitably based on a significant number of free parameters \citep[e.g.,][]{schaye_15, pillepich_18_tng}. These parameters must be calibrated to reproduce the properties of galaxies in the real Universe, such as the stellar mass function at low redshift. Galaxy properties that are not used in the tuning are then the simulations' predictions, and need to be carefully compared to observations to check whether the physical models create realistic galaxy populations.
In this work, we focus on the IllustrisTNG simulation suite \citep{marinacci_18, naiman_18, nelson_18_color, pillepich_18, springel_18}, though we also briefly compare to the original Illustris-1 simulation \citep{vogelsberger_14_illustris, vogelsberger_14_nature, genel_14}. Many comparisons between the Illustris or IllustrisTNG simulations and observations have already been undertaken, including topics as wide-ranging as galaxy colours \citep{sales_15, nelson_18_color}, sizes \citep{genel_18}, the mass-metallicity relation \citep{torrey_18, torrey_19}, morphology \citep{snyder_15, rodriguezgomez_18, tacchella_19}, star formation \citep{sparre_15, diemer_17_sfh, donnari_19}, merger rates \citep{rodriguezgomez_15}, black holes \citep{weinberger_18, habouzit_19}, and the abundance of shell galaxies \citep{pop_18}.
However, one category of observations is noticeably under-represented in this list: the abundance, phase, and structure of gas in galaxies. For the original Illustris model, \citet{vogelsberger_14_nature} showed that the neutral fraction was broadly compatible with the trends seen in the Arecibo Legacy Fast ALFA (ALFALFA) survey. For the IllustrisTNG model, the total gas fraction in large groups and clusters was used as a constraint \citep[][see also \citealt{vogelsberger_18} and \citealt{barnes_18}]{pillepich_18_tng}. In addition, there have been studies of the larger-scale gas distribution around galaxies, namely of absorption lines due to the circum-galactic medium \citep{suresh_17, nelson_18_cgm}, the abundance of damped Lyman-alpha systems (DLAs) and the Lyman-alpha forest \citep{bird_14, gurvich_17}, hot gaseous haloes \citep{bogdan_15}, and other gas-related phenomena such as jellyfish galaxies \citep{yun_19_jellyfish}.
Nevertheless, we are still missing a complete census of the bulk of the observable gas in IllustrisTNG galaxies, namely, of atomic and molecular hydrogen (denoted ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace hereafter). The molecular (or simply, cold) component is of particular interest because it is thought to provide the fuel for star formation. While the IllustrisTNG simulations reproduce numerous properties of the observed stellar population, star formation is governed by an equilibrium interstellar-medium (ISM) model that does not directly predict the abundance of ${\rm H}_2$\xspace \citep{springel_03}. Thus, a detailed comparison of the gas and stellar properties of galaxies can tell us whether the relationship between gas and star formation is physically correct, or whether it emerges due to the tuning of the ISM model. There are, however, significant obstacles to ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace comparisons on both the observational and theoretical sides.
Observationally, ${\rm H}$\,{\sc i}\xspace in galaxies is detected either by the 21-cm emission from its spin-flip transition or by absorption in quasar or radio galaxy spectra \citep{zwaan_05, prochaska_05, allison_19}. The former technique has provided tight constraints on the local ${\rm H}$\,{\sc i}\xspace abundance and its distribution in thousands of nearby galaxies \citep{martin_10, haynes_18}. At higher redshift, quasar sightlines provide rough constraints on the overall abundance of ${\rm H}$\,{\sc i}\xspace, but we are lacking detailed information about ${\rm H}$\,{\sc i}\xspace in galaxies \citep[e.g.,][]{zafar_13}. ${\rm H}_2$\xspace is, arguably, even more difficult to observe because the molecule lacks a permanent dipole moment \citep{draine_11}. Instead of observing ${\rm H}_2$\xspace directly, we rely on tracers such as CO, which are subject to uncertain conversion factors \citep{bolatto_13}. Nevertheless, molecular observations are now being pushed to high redshifts and high resolution by instruments such as ALMA, offering a chance to study gas in high-redshift galaxies in great detail \citep[e.g.,][]{walter_16}.
From a theoretical perspective, ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace comparisons are challenging because the IllustrisTNG simulations predict the fraction of all gas that is in neutral hydrogen, but not whether this gas is atomic or molecular. The \hi/${\rm H}_2$\xspace transition is governed by the balance between ${\rm H}_2$\xspace formation on dust grains and destruction by Lyman-Werner band ultraviolet (UV) photons, leading to complex dependencies on the density structure, star formation rate (SFR), and metallicity of the ISM \citep[e.g.,][]{spitzer_74, black_76, sternberg_89, elmegreen_89, elmegreen_93, krumholz_09_kmt1, draine_11}. These dependencies have been captured in a number of models, including observational correlations, calibrations from simulations, and analytic models \citep[e.g.,][]{blitz_06, gnedin_11, krumholz_13}. Such formulae can then be used to roughly constrain the molecular fraction in large-volume cosmological simulations in post-processing \citep[e.g.,][]{duffy_12}. \citet{lagos_15} applied this technique to the EAGLE simulation and showed that it reproduces the observed overall ${\rm H}_2$\xspace abundance, mass function, gas fraction, and depletion time. Based on similar modelling, \citet{bahe_16} investigated the abundance, size, and structure of ${\rm H}$\,{\sc i}\xspace discs in EAGLE and found them to be broadly compatible with observations, though their conclusions depended somewhat on the chosen \hi/${\rm H}_2$\xspace model \citep[for related results, see][]{marasco_16, lagos_16, crain_17, marinacci_17}.
In \citet[][hereafter \citetalias{diemer_18_hih2}]{diemer_18_hih2}, we undertook a systematic study of models that predict the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace fractions and applied them to the IllustrisTNG simulations. We extended the cell-by-cell modelling of \citet{lagos_15} by introducing a projection-based method and found the predictions of most models to be broadly compatible, as long as care is taken when computing the intermediate quantities they rely on. \citet{stevens_19_hi} used very similar modelling to investigate the dependence of ${\rm H}$\,{\sc i}\xspace on environment, carefully mock-observing the simulation to take observational systematics into account. \citet{popping_19} used a version of the \hi/${\rm H}_2$\xspace modelling that, among other differences, assumes a simpler approach to the UV field, and compared the high-redshift abundance of ${\rm H}_2$\xspace to ALMA observations \citep[see also][]{popping_14}.
In this work, we compare the results of \citetalias{diemer_18_hih2} to observations of galaxies at low redshift. We restrict ourselves to six observational metrics: the overall abundance of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace, their mass functions, gas fractions as a function of stellar mass, the correlation between ${\rm H}_2$\xspace and SFR, the spatial distribution of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace within galaxies, and correlations with morphology. Whenever possible, we carefully consider the observational systematics of the respective observational samples, attempting to faithfully mock-observe our simulated galaxies. All \hi/${\rm H}_2$\xspace data used in this paper are publicly available as part of the IllustrisTNG data release \citep[][\href{http://www.tng-project.org/data/}{tng-project.org/data}]{nelson_19_datarelease}.
The paper is structured as follows. We give an overview of all observational data used in Section~\ref{sec:obs}. We briefly review the simulation data in Section~\ref{sec:sim} (referring the reader to \citetalias{diemer_18_hih2} for details) and discuss our techniques for matching simulated galaxies to observations. We present the results of our comparisons in Section~\ref{sec:results}, including a brief review of the original Illustris-1 simulation. We further discuss particular successes and tensions in Section~\ref{sec:discussion} before summarizing our conclusions in Section~\ref{sec:conclusion}. We describe our experiments with morphology in Appendix~\ref{sec:app:morphology} and provide detailed tests of our methodology in Appendix~\ref{sec:app:apertures}.
We follow the notation of \citetalias{diemer_18_hih2}, denoting the masses of all gas as $M_{\rm gas}$, of all hydrogen as $M_{\rm H}$, of atomic hydrogen as $M_{\rm HI}$, of molecular hydrogen as $M_{\rm H_2}$, and of neutral hydrogen as $M_{\rm HI+H_2}$. The same subscripts are used for volumetric mass densities (e.g., $\rho_{\rm H}$), surface densities (e.g., $\Sigma_{\rm H}$), number densities (e.g., $n_{\rm H}$), and column densities (e.g., $N_{\rm H}$). The neutral fraction, $f_{\rm HI+H_2}$, refers to the fraction of all gas (including helium and metals) that is in neutral hydrogen.
\section{Observational Data}
\label{sec:obs}
In this section, we give an overview of the observational data used in our comparison. Each subsection discusses one of our six observational metrics.
\subsection{Overall Abundance of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace}
\label{sec:obs:omega}
As a first-order check on the abundance of neutral gas in IllustrisTNG, we use observations of the total ${\rm H}$\,{\sc i}\xspace abundance as a function of redshift (though, for the remainder of the paper, we will mostly be concerned with $z \approx 0$). Our data compilation is largely based on that of \citet{rhee_18}, with a few sources added and removed. In particular, we use data from the blind ALFALFA 21-cm survey and similar observations \citep{zwaan_05, martin_10, freudling_11, hoppmann_15, obuljen_18}, from 21-cm observations of low-redshift galaxy samples \citep{lah_07, braun_12, delhaize_13, rhee_13, rhee_16, rhee_18}, and from quasar absorption lines studies that constrain the number of DLAs and Lyman-limit systems (LLSs) at high redshift \citep{prochaska_05, rao_06, rao_17, noterdaeme_09, noterdaeme_12, zafar_13, neeleman_16, bird_17}.
The ${\rm H}_2$\xspace abundance of the Universe at low redshift cannot currently be determined from blind surveys because the emission of the CO tracer is simply too weak. Similar caveats affect all ${\rm H}_2$\xspace-related comparisons in this paper: ${\rm H}_2$\xspace is never observed directly, and the relationship between ${\rm H}_2$\xspace and CO is probably much more complicated than suggested by constant conversion factors \citep{wolfire_10, glover_11, leroy_11, narayanan_11, narayanan_12, feldmann_12, bolatto_13}. Thus, all ${\rm H}_2$\xspace abundances are to be interpreted as estimates that are likely accurate to factors of a few.
Subject to this caveat, we compare our simulations to low-redshift constraints based on targeted CO observations of galaxies \citep{keres_03, obreschkow_09a, saintonge_17}. While the first blind surveys are beginning to probe the abundance of ${\rm H}_2$\xspace at high redshift \citep{walter_16, decarli_16_aspecs1, pavesi_18, riechers_19}, we do not use their results in this study because the small volumes probed lead to significant sample variance and other complications (see \citet{popping_19} for a detailed comparison). In recent years, the ${\rm H}_2$\xspace abundance is increasingly being inferred from the dust continuum emission, e.g., as measured by ALMA, leading to different systematics \citep{scoville_17, tacconi_18}. However, such measurements are currently possible only at high redshift, and are thus not the focus of this paper.
\subsection{Mass Functions}
\label{sec:obs:mfunc}
The ${\rm H}$\,{\sc i}\xspace mass function has been measured to exquisite accuracy by ALFALFA \citep{giovanelli_05, martin_10, haynes_11, huang_12, jones_18, haynes_18}. The final $\alpha$.100 catalogue contains about \num{30000} sources below $z = 0.06$. Our primary constraint for the ${\rm H}_2$\xspace mass function is the work of \citet{boselli_14_hrs2}, specifically their estimate based on a fixed CO-to-${\rm H}_2$\xspace conversion factor (which we convert to $\alpha_{\rm CO} = 2.0 {\rm M}_{\odot} / ({\rm K\ km/s\ pc}^2)$ as in \citealt{popping_19}). The underlying data are taken from the {\it Herschel} Reference Survey, a sample of 323 galaxies observed with the SPIRE instrument \citep{boselli_10}. The galaxies in this survey are local, with distances between 15 and 25 Mpc. For comparison, we also show the ${\rm H}_2$\xspace mass function of \citet{obreschkow_09a}, which is based on \citet{keres_03}. The underlying dataset is the FCRAO survey \citep{young_95}, which measured the CO emission for $300$ local galaxies. However, due to the range of distances and observing strategies, this survey is somewhat difficult to mock-observe.
\subsection{Gas Fractions}
\label{sec:obs:fraction}
By gas fractions, we mean the ratio of the mass in ${\rm H}$\,{\sc i}\xspace or ${\rm H}_2$\xspace to the stellar mass. These ratios are commonly constrained observationally and help us determine whether IllustrisTNG produces an appropriate amount of gas in galaxies of a certain stellar mass. The scatter in the gas fractions is known to be sizeable, meaning that both the median fraction and the distribution around the median are of interest. However, comparing simulations to observed gas fractions is non-trivial because of numerous survey-specific aspects such as the galaxy selection, aperture (or beam), and other assumptions used in the processing of the observations. \citet{stevens_19_hi} recently demonstrated that the total ${\rm H}$\,{\sc i}\xspace fractions in IllustrisTNG can differ significantly from mock observations that take the observational systematics into account.
To avoid having to mock-observe many heterogeneous surveys, we rely on the recent compilation of \citet[][hereafter \citetalias{calette_18}]{calette_18}. They quantify the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace fractions as a function of stellar mass, including fitting functions for the mass-dependent distribution of gas fractions. \citetalias{calette_18} amalgamate ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace masses from a large range of radio observations that have been combined with stellar luminosities or masses from optical or infrared observations. In particular, their ${\rm H}$\,{\sc i}\xspace data includes the Updated Nearby Galaxy Catalogue \citep{karachentsev_13}, GASS \citep{catinella_13, catinella_18}, the {\it Herschel} Reference Survey \citep{boselli_10, boselli_14_hrs1, boselli_14_hrs2, boselli_14_hrs3}, ATLAS$^{\rm 3D}$ \citep{serra_12}, the Nearby Field Galaxy Survey \citep{jansen_00_a, jansen_00_b, kannappan_13}, THINGS \citep{leroy_08}, ALFALFA \citep{huang_12_dwarfs}, UNAM-KIAS \citep{hernandeztoledo_10}, AMIGA \citep{lisenfeld_11}, and a number of other compilations \citep{geha_06, stark_13, bradford_15}. Their ${\rm H}_2$\xspace data is taken from the {\it Herschel} Reference Survey, COLD GASS \citep{saintonge_11_coldgass}, ATLAS$^{\rm 3D}$, HERACLES \citep{leroy_08}, ALLSMOG \citep{bothwell_14}, EGNoG \citep{bauermeister_13}, and \citet{stark_13}. These data sources are discussed in detail in Appendices A and B of \citetalias{calette_18}.
Using the \citetalias{calette_18} compilation offers a number of advantages. First, \citetalias{calette_18} homogenize the survey data to a \citet{chabrier_03} stellar initial mass function (IMF), the same IMF used in IllustrisTNG. Second, numerous survey biases and upper limits are taken into account, the latter via a Kaplan-Meier estimator. For example, surveys can easily be biased towards gas-rich galaxies (see Fig.~1 in \citetalias{calette_18} for an example of a fitting function that overestimates the full, debiased data set). Third, all CO observations are converted to ${\rm H}_2$\xspace masses based on the same mass-dependent $x_{\rm CO}$ factor.
The \citetalias{calette_18} compilation does, however, introduce one complication: they split the galaxy sample into early and late-type galaxies (ETGs and LTGs, respectively). This distinction is sensible because the gas content of galaxies strongly correlates with their morphological type \citep{kannappan_13, boselli_14_hrs3}. Thus, quantifying the median gas fraction of all galaxies introduces additional scatter compared to the relations split by morphology. In Section~\ref{sec:sim:morphology} and Appendix~\ref{sec:app:morphology}, we experiment with different morphological selections, but find no criterion that reliably creates late-type and early-type samples with the properties expected from observations. To avoid the resulting difficulties in interpretation, we reconstruct the distribution of gas fractions in the overall galaxy sample. \citetalias{calette_18} provide fitting functions for the distribution of gas fractions of early and late types as a function of stellar mass (their equations 3 and 7), as well as the ETG fraction as a function of stellar mass. We add the distributions of the LTG and ETG samples accordingly. For the ETG sample, \citetalias{calette_18} give a mass-dependent cutoff gas fraction below which the distribution cannot be reliably inferred because the majority of observations are upper limits. We separately constrain the properties of those galaxies with gas fractions above the limit as well as the fraction of galaxies below the limit. We use a slightly updated version of the \citetalias{calette_18} best-fit parameters.
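Schematically, denoting the early-type fraction at stellar mass $M_\ast$ by $f_{\rm ETG}(M_\ast)$, the reconstruction described above amounts to the mixture
\begin{equation}
P(R \mid M_\ast) = f_{\rm ETG}(M_\ast)\, P_{\rm ETG}(R \mid M_\ast) + \left[ 1 - f_{\rm ETG}(M_\ast) \right] P_{\rm LTG}(R \mid M_\ast) \,,
\end{equation}
where $R$ denotes the ${\rm H}$\,{\sc i}\xspace or ${\rm H}_2$\xspace gas fraction and $P_{\rm ETG}$ and $P_{\rm LTG}$ are the \citetalias{calette_18} fitting functions for the two morphological samples (the notation here is ours, chosen for brevity).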
\citetalias{calette_18} find that, once they split into LTGs and ETGs and homogenize the data, the different samples are broadly consistent, except for certain datasets where strong biases are expected. Those are collected in a ``bronze'' sample, which we do not use in our analysis. \citetalias{calette_18} show that their gas fractions are compatible with the observed mass functions and that the ${\rm H}$\,{\sc i}\xspace fractions measured in individual LTGs are compatible with the stacked ALFALFA analysis of \citet{brown_15}, indicating that the galaxy sample is not missing any major contributions to the overall ${\rm H}$\,{\sc i}\xspace abundance. Given that some data sources were selected from optical samples such as SDSS, there may be a bias against satellites close to their central. For example, galaxies in the SDSS-I and II spectroscopic surveys could not be closer than 55'' due to fibre collisions \citep[e.g.,][]{zehavi_02, guo_12}. We will return to this difficulty when discussing the gas fractions of satellites in Sections~\ref{sec:results:fraction} and \ref{sec:discussion:gasfree}. \citetalias{calette_18} assume $h = 0.7$ while IllustrisTNG uses $h = 0.6774$; we rescale masses wherever appropriate.
\subsection{Correlations with Star Formation Rate}
\label{sec:obs:sfr}
For our comparison of ${\rm H}_2$\xspace mass and SFRs, we use the xCOLD GASS survey \citep{saintonge_11_coldgass, saintonge_17}. This dataset is composed of two surveys, COLD GASS and xCOLD GASS-low, which combine to cover all stellar masses above $10^9 {\rm M}_{\odot}$. The CO observations were taken with the IRAM telescope, while the stellar properties are taken from SDSS and other surveys. It is well established that the depletion time, $t_{\rm depl} \equiv M_{\rm H_2} / {\rm SFR}$, is roughly constant among star-forming galaxies at fixed redshift but evolves with redshift \citep{leroy_08, leroy_13, bigiel_08, bigiel_11, tacconi_10, tacconi_13, tacconi_18, daddi_10_laws, genzel_10, genzel_15, saintonge_11_depletion, saintonge_11_coldgass, saintonge_17}. We compare IllustrisTNG to the parameterization of \citet{tacconi_18}, $t_{\rm depl} \approx 1\ {\rm Gyr} \times (1 + z)^{-0.57}$.
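This scaling is simple enough to state as code; the following sketch (the function name and the example SFR are ours) makes the implied ${\rm H}_2$\xspace masses explicit:

```python
def t_depl_gyr(z):
    """Molecular depletion time in Gyr from the Tacconi et al. (2018)
    parameterization quoted above: t_depl ~ 1 Gyr * (1 + z)^-0.57."""
    return 1.0 * (1.0 + z) ** (-0.57)

# implied H2 mass for an assumed SFR of 2 Msun/yr at z = 0:
m_h2 = t_depl_gyr(0.0) * 1e9 * 2.0  # 2e9 Msun
```

At $z = 2$, the same SFR implies roughly half the molecular mass, since $t_{\rm depl}(2) \approx 0.53$ Gyr.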
\subsection{Spatial Distribution of Gas}
\label{sec:obs:sizes}
One of the most well-constrained indicators of the spatial distribution of gas in galaxies is the ${\rm H}$\,{\sc i}\xspace radius, $R_{\rm HI}$, commonly defined as the radius where the surface density falls below $\Sigma_{\rm HI} = 1 {\rm M}_{\odot}/{\rm pc}^2$. This radius forms a tight power-law relationship with $M_{\rm HI}$, at least in the local galaxy samples where it has been observed in resolved 21-cm observations \citep{broeils_94, broeils_97, verheijen_01_masssize, wang_13_bluedisks, wang_14_hi, martinsson_16, ponomareva_16, wang_16_hi}. We use the data compilation of \citet{wang_16_hi}, who combine $R_{\rm HI}$--$M_{\rm HI}$ data from about 500 galaxies measured by 15 different observational projects. For comparison, we also show the similar relation of \citet{lelli_16}. The measured ${\rm H}$\,{\sc i}\xspace sizes correspond to a face-on orientation.
The resolved ${\rm H}$\,{\sc i}\xspace observations used here do not represent a volume-limited sample but are typically selected to include mostly ${\rm H}$\,{\sc i}\xspace-rich targets. However, all morphological types are present in the surveys and the gas fractions roughly match the averages observed in volume-limited samples, indicating that there are no strong biases in the selection \citep{lelli_16}. Moreover, the $R_{\rm HI}$--$M_{\rm HI}$ relation is extremely robust in that it does not vary as a function of stellar brightness or stellar diameter \citep{wang_16_hi, lelli_16}.
Going beyond $R_{\rm HI}$, we also investigate the radial dependence of the ${\rm H}$\,{\sc i}\xspace surface density. For this purpose, we use the ${\rm H}$\,{\sc i}\xspace profiles from the Bluedisk survey \citep{wang_13_bluedisks, wang_14_hi}. They initially selected 25 galaxies as a control sample and 25 galaxies as a ${\rm H}$\,{\sc i}\xspace-rich sample, but the surface density profiles (with radii scaled to $R_{\rm HI}$) turn out to be almost identical. The Bluedisk galaxies have $10^{10} {\rm M}_{\odot} < M_* < 10^{11} {\rm M}_{\odot}$ and lie on the star-forming main sequence, with SFRs between $0.5$ and $10\ {\rm M}_{\odot}/{\rm yr}$ \citep{cormier_16}.
Finally, we wish to quantify the spatial distribution of ${\rm H}_2$\xspace, but there are a number of issues. First, the observational sample sizes are limited by the difficulty of obtaining spatially resolved CO observations. Second, the radial profiles of ${\rm H}_2$\xspace appear to be diverse \citep[e.g.,][]{young_82, regan_01, leroy_08, leroy_09}. Third, uncertain (and perhaps radially varying) CO-to-${\rm H}_2$\xspace conversion factors complicate the interpretation of observations \citep[][and references therein]{bolatto_13}. Given these difficulties, we defer a detailed comparison with CO profiles and maps to future work. Instead, we restrict ourselves to a comparison of the observed half-mass radii of CO and stars from the EDGE-CALIFA survey \citep{bolatto_17}. This dataset adds spatially resolved CARMA CO observations to a subset of the optical integral field unit spectroscopy from CALIFA \citep{sanchez_12, walcher_14}. We note that there are numerous other ways to quantify the CO spatial scale, but some of them use relatively high surface densities that are difficult to reproduce in our simulations. For example, \citet{davis_13_extent} measured $R_{\rm CO}$ as the radius where the surface density of ${\rm H}_2$\xspace reaches $15 \, {\rm M}_{\odot} / {\rm pc}^2$, corresponding to their $3 \sigma$ detection limit. This surface density is never reached by two-thirds of TNG100 galaxies with $M_{*} > 10^{10} {\rm M}_{\odot}$. As a result, \citet{davis_13_extent} find very small values for $R_{\rm CO}$ that are close to the force resolution scale used in IllustrisTNG.
\section{Simulation Data}
\label{sec:sim}
Our modelling of the \hi/${\rm H}_2$\xspace transition and the relevant details of the IllustrisTNG simulations were discussed at length in \citetalias{diemer_18_hih2}. Here we briefly review the aspects that are most pertinent to this work and refer the reader to \citetalias{diemer_18_hih2} for details.
\subsection{The IllustrisTNG Simulations}
\label{sec:sim:illustris}
The IllustrisTNG suite of cosmological magneto-hydrodynamical simulations was run using the moving-mesh code \textsc{Arepo} \citep{springel_10}. In this paper, we use the highest-resolution versions of the $100$ and $300$ Mpc box sizes, hereafter referred to as TNG100 and TNG300 \citep{marinacci_18, naiman_18, nelson_18_color, pillepich_18, springel_18}. As the TNG100 simulation has significantly higher spatial and mass resolution than TNG300, we interpret differences in their galaxy properties to be resolution effects, i.e., we trust the TNG100 results over TNG300 (see also \citetalias{diemer_18_hih2}). We use a lower-resolution version of TNG100, TNG100-2, to quantify the convergence with resolution.
IllustrisTNG adopts the \citet{planck_16} cosmology: $\Omega_{\rm m} = 0.3089$, $\Omega_{\rm b} = 0.0486$, $h = 0.6774$, and $\sigma_8 = 0.8159$; the same cosmology is assumed throughout the paper. The simulations follow a large range of physical processes, including prescriptions for gas cooling, star formation, stellar winds, metal enrichment, supernovae, black hole growth, and active galactic nuclei \citep{weinberger_17, pillepich_18_tng}. This so-called TNG model builds on the original Illustris model \citep{vogelsberger_13, vogelsberger_14_illustris, vogelsberger_14_nature, genel_14, torrey_14, sijacki_15}. We briefly compare our results to the original simulation in Section~\ref{sec:results:orig}. Dark matter haloes and their galaxies were identified using the \textsc{Subfind} algorithm \citep{springel_01_subfind}.
\subsection{Modelling the {\rm \hi/${\rm H}_2$\xspace} Fractions}
\label{sec:sim:hih2}
In \citetalias{diemer_18_hih2}, we considered the \hi/${\rm H}_2$\xspace models of \citet{leroy_08}, \citet{gnedin_11}, \citet{krumholz_13}, \citet{gnedin_14}, and \citet{sternberg_14} (hereafter \citetalias{leroy_08}\xspace, \citetalias{gnedin_11}\xspace, \citetalias{krumholz_13}\xspace, \citetalias{gnedin_14}\xspace, and \citetalias{sternberg_14}\xspace, respectively). These models represent three separate categories: (i) observational correlations with the mid-plane pressure of galaxies (\citetalias{leroy_08}\xspace), (ii) fitting functions trained on high-resolution simulations with radiative transfer and chemical networks (\citetalias{gnedin_11}\xspace, \citetalias{gnedin_14}\xspace), and (iii) analytical models of simplified, individual clouds that were modified to represent an average over a larger ISM region (\citetalias{krumholz_13}\xspace, \citetalias{sternberg_14}\xspace). Models in the latter two categories require the Lyman-Werner band UV field strength as an input parameter; we thus modelled the UV field by propagating light from young stars in an optically thin fashion.
Conventionally, \hi/${\rm H}_2$\xspace models have been computed cell by cell, i.e., by assigning a molecular fraction to each gas element. The main issue with this method is that all models listed above rely on surface densities rather than volume densities. For a 3D gas cell, such surface densities are commonly estimated by multiplying with the Jeans length, roughly representing the size of a self-gravitating system in equilibrium \citep[e.g.,][]{schaye_01}. In \citetalias{diemer_18_hih2}, we showed that this approximation does not, in general, reproduce the actual surface density of our galaxies. We proposed an alternative way to compute the models in projection, where each galaxy is rotated into a face-on position, projected onto a 2D map, and the model equations are solved in two dimensions.
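For reference, the cell-by-cell Jeans estimate takes the following form (a minimal sketch with assumed inputs; we use the common expression $\lambda_{\rm J} = c_s \sqrt{\pi / (G \rho)}$ for the Jeans length):

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def sigma_jeans(rho, c_s):
    """Approximate a surface density for a 3D gas cell as
    Sigma ~ rho * lambda_J, with lambda_J = c_s * sqrt(pi / (G * rho)).
    Inputs: rho in Msun/kpc^3, c_s in km/s; returns Msun/kpc^2."""
    lambda_j = c_s * math.sqrt(math.pi / (G * rho))
    return rho * lambda_j

# e.g. rho = 1e7 Msun/kpc^3 (~0.4 atoms/cm^3) and c_s = 10 km/s give
# lambda_J ~ 2.7 kpc and Sigma ~ 2.7e7 Msun/kpc^2 ~ 27 Msun/pc^2
```

As discussed above, this estimate need not reproduce the actual projected surface density of a galaxy, which motivates the projected computation.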
In this work, we remain agnostic as to which method and which \hi/${\rm H}_2$\xspace model is most accurate and view their spread as a systematic uncertainty. We consider both the cell-by-cell and projected versions, with the exception of the cell-by-cell \citetalias{leroy_08}\xspace model, which was shown to be unphysical \citepalias{diemer_18_hih2}. Thus, the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace masses of each galaxy are estimated by a total of nine different methods. In the forthcoming figures, we shade the region between the models but do not show the individual models to avoid crowding. The differences between the model predictions are shown in the corresponding figures in \citetalias{diemer_18_hih2}.
\subsection{Galaxy Selection}
\label{sec:sim:selection}
In \citetalias{diemer_18_hih2}, we computed the \hi/${\rm H}_2$\xspace fractions of all TNG100 galaxies with either $M_* > 2 \times 10^8 {\rm M}_{\odot}$ or $M_{\rm gas} > 2 \times 10^8 {\rm M}_{\odot}$. We use the same mass limits in this work, but increase them to $M_* > 5 \times 10^{10} {\rm M}_{\odot}$ and $M_{\rm gas} > 5 \times 10^9 {\rm M}_{\odot}$ for TNG300. These selections result in stellar-mass-selected samples of \num{43213} and \num{22109} galaxies for TNG100 and TNG300, respectively, and \num{140923} and \num{449176} galaxies for the gas-mass-selected samples. We caution that our mass cut-offs in TNG100 are chosen somewhat aggressively, as they correspond to about $200$ stellar-population particles or gas cells.
We do not cut the galaxy sample any further at this point, though we will later impose additional cuts to mimic the galaxy selection of various observational datasets. We include both central and satellite galaxies because they are not generally separated in observations. As a result, our sample contains galaxies from different environments such as field galaxies and cluster members. It is now well established that the ${\rm H}$\,{\sc i}\xspace fractions tend to be significantly lower in the latter, a trend partially captured in the morphology-density relation where ETGs tend to both live in denser environments and contain less gas \citep{haynes_84, cortese_11, catinella_13, boselli_14_hrs3, stevens_19_hi}. Similarly, different gas fractions have been observed in the satellite population by splitting observed samples using group catalogues \citep{brown_17}. However, \citetalias{calette_18} do not split their galaxy sample by any isolation criterion, and the mass functions also include both satellites and centrals. Moreover, the IllustrisTNG simulations have been shown to recover the various trends with environment \citep{stevens_19_hi}.
Finally, we do not cut out objects with unusual ratios of their gas, stellar, and dark matter content. For example, IllustrisTNG contains some objects with barely any dark halo which can be traced to non-cosmological formation mechanisms \citep{nelson_19_datarelease}. We find that such objects contribute to a population of gas-free galaxies that we discuss further in Sections~\ref{sec:results:fraction} and \ref{sec:discussion:gasfree}.
\subsection{Mock-observing Masses, Sizes, and SFRs}
\label{sec:sim:apertures}
The most natural definition for the masses of stars or gas in simulated galaxies is to include all particles or gas cells that are gravitationally bound to the respective halo or subhalo. When comparing to observations, however, this definition rarely applies because the observed data are usually based on a finite beam size or aperture, which can either reduce the measured gas mass due to partial coverage or increase it due to confusion with other objects. Such effects can lead to significant differences in the inferred gas masses \citep[e.g.,][]{stevens_19_hi}.
We compute all aperture masses from linearly interpolated, cumulative mass profiles of the respective species. The profiles contain only matter bound to the respective subhalo and are binned in $50$ linearly spaced radii at fixed fractions of the outer radius of our 2D maps (equation~4 in \citetalias{diemer_18_hih2}). If the desired aperture is larger than the largest radial bin, we use the cumulative mass in that final bin. For any quantity that is defined cell-by-cell or particle-by-particle (e.g., stellar mass or gas masses), we could use a spherically averaged (3D) or a projected (2D) profile. For disc-like, anisotropic systems, this choice could make a difference in the computed masses. We choose to compute all masses and sizes from projected profiles to be consistent with those quantities that are computed in projection in the first place (Section~\ref{sec:sim:hih2}). In such cases, we cannot use 3D profiles because the information about the 3D positions has been lost. We have tested the difference between using 2D and 3D profiles and find it to be insignificant, with no clearly discernible trend in the inferred masses.
We compute sizes in a similar way, namely, we linearly interpolate the projected mass profile to find a density threshold (e.g., $\Sigma_{\rm HI} = 1 {\rm M}_{\odot} / {\rm pc}^2$) or a fraction of the integrated mass (e.g., the half-mass radius). This method is compatible with the observational samples to which we compare, where sizes correspond to a face-on orientation (Section~\ref{sec:obs:sizes}).
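The procedure of the last two paragraphs amounts to simple interpolation on the binned, projected, bound-mass profiles; a sketch (array names and conventions are ours):

```python
import numpy as np

def aperture_mass(r_bins, m_cum, r_ap):
    """Mass within r_ap, linearly interpolating the cumulative profile;
    beyond the outermost bin we fall back to the total profiled mass."""
    if r_ap >= r_bins[-1]:
        return m_cum[-1]
    return np.interp(r_ap, r_bins, m_cum)

def half_mass_radius(r_bins, m_cum):
    """Radius enclosing half of the total profiled mass."""
    return np.interp(0.5 * m_cum[-1], m_cum, r_bins)

def threshold_radius(r_bins, m_cum, sigma_thresh):
    """Outermost radius where the annular surface density still exceeds
    sigma_thresh (e.g. 1 Msun/pc^2 for R_HI)."""
    area = np.pi * r_bins ** 2
    sigma = np.diff(m_cum, prepend=0.0) / np.diff(area, prepend=0.0)
    above = np.nonzero(sigma > sigma_thresh)[0]
    return r_bins[above[-1]] if above.size > 0 else 0.0
```

The interpolation relies on the cumulative profile being monotonic, which holds by construction.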
\subsubsection{Stellar Masses and Sizes}
\label{sec:sim:apertures:star}
Optical surveys used to measure stellar masses tend to underestimate the true extent of the stellar distribution \citep[e.g.,][]{bernardi_13, kravtsov_14}. To mimic such effects, we define $M_*$ as the stellar mass within a fixed aperture of $30 \>{\rm kpc}$. \citet{schaye_15} demonstrated that this definition tracks the Petrosian radii used in observations \citep[see also][for an investigation of various definitions in IllustrisTNG]{pillepich_18}. A fixed $30$ kpc aperture also matches the BaryMP technique of \citet{stevens_14} to about 20\% accuracy (with larger deviations at $M_* \gtrsim 10^{11} {\rm M}_{\odot}$).
In extremely rare cases ($0.003$\% of galaxies in TNG100, $0.02$\% in TNG300), the radius of the innermost bin of the stellar mass profile is larger than $30 \>{\rm kpc}$. In those cases, we linearly interpolate the logarithmic value of the first bin from zero. We find that the details of this procedure have no appreciable impact on our results. We have also tested that increasing the number of bins by more than two-fold has less than a one-percent effect on the computed stellar masses.
Besides the bias related to the size of a galaxy, stellar masses suffer from significant systematic uncertainties such as the assumed IMF and the modelling of the stellar light profiles. For example, while \citetalias{calette_18} homogenize their data to the same IMF, modelling uncertainties largely remain (see their Section~2.1.3). Thus, we add a log-normal scatter of $0.2$ dex to the stellar masses from IllustrisTNG \citep{marchesini_09, mancini_11, bower_12, mitchell_13}. When comparing gas fractions at fixed stellar mass, this has two main effects. First, features in the relations are smoothed out, which can hide trends such as rapid changes due to the feedback implementation (see the discussion in \citealt{stevens_19_hi} and \citealt{nelson_19_datarelease}). Second, as the gas fractions typically fall with stellar mass, Eddington bias can shift the gas fractions upwards \citep{eddington_13}.
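The perturbation itself is a one-line operation (a sketch; the seed and function name are ours):

```python
import numpy as np

def add_mass_scatter(m_star, sigma_dex=0.2, seed=42):
    """Add log-normal scatter of sigma_dex (here 0.2 dex) to stellar
    masses, mimicking observational mass uncertainties."""
    rng = np.random.default_rng(seed)
    return m_star * 10.0 ** rng.normal(0.0, sigma_dex, size=m_star.shape)
```

Because the gas-fraction relations fall with $M_*$ and low-mass galaxies are more numerous, more galaxies scatter upward into a given observed-mass bin than downward, which raises the median gas fraction at fixed observed mass.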
\subsubsection{Star Formation Rates}
\label{sec:sim:apertures:sfr}
We compare our ${\rm H}_2$\xspace masses and SFRs to the xCOLD GASS survey, which combines measurements of ${\rm H}$\,{\sc i}\xspace and CO properties of galaxies with optical surveys \citep{saintonge_17}. Their SFRs are derived using a combination of UV and infrared (IR) luminosities \citep{janowiecki_17}. While UV tracers such as the H$\alpha$ line are indicative of star formation on short timescales, IR tracers are sensitive to timescales of a few hundred Myr; their combination typically traces variations over roughly $50$--$150$ Myr \citep[e.g.,][]{caplar_19}.
We use the instantaneous SFRs given by the simulation, but caution that there may be a slight systematic offset \citep{donnari_19}. To put a simplistic upper limit on this offset, we have compared the instantaneous SFRs of IllustrisTNG galaxies, defined as the sum of the SFRs of a galaxy's gas cells, to an average over the stars formed within about 200 Myr. Given the timescales discussed above, this represents a conservative estimate of any differences. For the relevant stellar mass and redshifts, $z = 0$ and $z = 2$, we find that the median instantaneous SFRs of TNG100 galaxies differ from the averaged ones by about 3\%, a negligible offset. There is, however, a scatter of $0.15$ dex at $z = 0$ and $0.6$ dex at $z = 2$ to which we return when interpreting our results in Section~\ref{sec:results:sfr}.
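Our comparison average can be sketched as follows (names are ours; in practice one would use the initial, at-formation masses of the star particles to avoid mass-loss corrections):

```python
import numpy as np

def sfr_time_averaged(m_init, t_form_gyr, t_now_gyr, window_gyr=0.2):
    """SFR averaged over the last `window_gyr` (here ~200 Myr): total
    initial mass of stars formed in the window, divided by its length."""
    recent = t_form_gyr > (t_now_gyr - window_gyr)
    return np.sum(m_init[recent]) / (window_gyr * 1e9)  # Msun / yr
```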
\subsubsection{${\rm H}$\,{\sc i}\xspace Masses and Sizes}
\label{sec:sim:apertures:hi}
The majority of the ${\rm H}$\,{\sc i}\xspace data described in Section~\ref{sec:obs} are based on Arecibo observations. The beam width of the Arecibo telescope at 21 cm is about $3.4$' \citep[e.g.,][]{jones_18}, corresponding to a physical aperture that depends on redshift. For example, at the median redshift of the GASS survey, $z = 0.037$, $3.4$' corresponds to roughly $140 \>{\rm kpc}$, leading \citet{bahe_16} to choose a fixed aperture radius of $70 \>{\rm kpc}$. The median redshift of the ALFALFA sample is slightly lower, which would demand a slightly smaller aperture. Moreover, only ${\rm H}$\,{\sc i}\xspace within a certain range of relative velocity would be registered in the 21-cm band. However, \citet{bahe_16} found that a spherical aperture of $70 \>{\rm kpc}$ corresponds very well to a more complex procedure that takes the velocity of the gas into account (see their Appendix~A). We thus follow their suggestion and use a circular $70 \>{\rm kpc}$ aperture in face-on projection for all our ${\rm H}$\,{\sc i}\xspace masses. As Arecibo is a single-dish telescope, we apply a Gaussian beam instead of a radial cut-off, i.e., we weight the projected ${\rm H}$\,{\sc i}\xspace density profile by a Gaussian with $\sigma = 70$ kpc. The differences are negligible at all stellar masses except at $M_* > 10^{11}{\rm M}_{\odot}$ where using the Gaussian slightly increases the ${\rm H}$\,{\sc i}\xspace mass. We further discuss the impact of the chosen aperture in Appendix~\ref{sec:app:apertures}.
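With the projected profile in hand, the Gaussian-weighted mass is a one-dimensional integral (a sketch; units and array names are assumptions):

```python
import numpy as np

def hi_mass_beam(r_kpc, sigma_hi, sigma_beam_kpc=70.0):
    """HI mass in face-on projection, weighting the surface density
    profile by a Gaussian beam: M = int 2 pi r Sigma(r) w(r) dr with
    w(r) = exp(-r^2 / (2 sigma_beam^2)). Trapezoidal integration;
    sigma_hi in Msun/kpc^2, radii in kpc."""
    w = np.exp(-0.5 * (r_kpc / sigma_beam_kpc) ** 2)
    f = 2.0 * np.pi * r_kpc * sigma_hi * w
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r_kpc))
```

For a very large beam this reduces to the total projected mass, while a $\sigma = 70$ kpc beam down-weights the profile only in the outermost regions, consistent with the small differences noted above.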
One observational complication that is not taken into account is blending: galaxies sometimes lie behind each other and close enough in velocity space to be confused into a single detection, increasing the measured ${\rm H}$\,{\sc i}\xspace fraction and decreasing the number of objects. However, \citet{jones_15} investigated this effect for ALFALFA and found that it would change the ALFALFA mass function by less than $3 \sigma$, a small shift given the extremely tight error bars. In Appendix~\ref{sec:app:apertures}, we compare our ${\rm H}$\,{\sc i}\xspace masses to those of \citet{stevens_19_hi} who take blending into account and find that the differences do not appreciably change our results. For our ${\rm H}_2$\xspace masses, blending effects are negligible due to the small spatial extent of ${\rm H}_2$\xspace and due to the optical galaxy selection (Sections~\ref{sec:obs:fraction} and \ref{sec:sim:apertures:h2}).
Finally, in Section~\ref{sec:results:size}, we use the ${\rm H}$\,{\sc i}\xspace radius $R_{\rm HI}$, defined as the radius where the surface density of ${\rm H}$\,{\sc i}\xspace falls below $1 {\rm M}_{\odot} / {\rm pc}^2$. We measure this radius from projected one-dimensional density profiles because the observational measurements are face-on equivalents, i.e., they are corrected for inclination. As discussed in Section~\ref{sec:obs:sizes}, the datasets are heterogeneous but are generally very well resolved and include the entire ${\rm H}$\,{\sc i}\xspace disc. Thus, we do not impose an aperture when measuring $M_{\rm HI}$ for use in the $R_{\rm HI}$--$M_{\rm HI}$ relation.
\subsubsection{${\rm H}_2$\xspace Masses and Sizes}
\label{sec:sim:apertures:h2}
For ${\rm H}_2$\xspace, the chosen aperture turns out to be much more important than for ${\rm H}$\,{\sc i}\xspace. The data used in \citetalias{calette_18} are dominated by the xCOLD GASS survey, which consists of $532$ measurements of the CO luminosity taken with the IRAM 30 m telescope \citep{saintonge_17}. The 3 mm channel (which contains the CO 1--0 transition) has a field of view of 22''. Over the survey's redshift range, $0.01 < z < 0.05$, this corresponds to between $4.6$ and $20$ kpc. Based on the upper end of this redshift range, where the high-mass galaxies of the survey lie, we use $20$ kpc as an approximation for the typical beam of IRAM. The instrument used is a single-pixel bolometer, meaning that the beam is Gaussian in shape. The quoted beam size corresponds to the full width at half maximum, or $2.355 \sigma$, meaning that the standard deviation of our Gaussian aperture is $\sigma \approx 8.5$ kpc. \citet{saintonge_17} apply a correction factor for inclination which increases the measured CO luminosity by a median factor of $1.17$ \citep[see also][]{saintonge_12}. Since we mock-observe galaxies in face-on orientation, this correction should roughly match our procedure.
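The aperture arithmetic can be approximated at low redshift as follows (a sketch; we use the crude distance $D \approx cz/H_0$, which somewhat overestimates the proper angular-diameter distance, so the numbers bracket rather than exactly reproduce the values adopted above):

```python
import math

C_KM_S = 299792.458
H0 = 67.74                       # TNG value; km/s/Mpc
ARCSEC_RAD = math.pi / (180.0 * 3600.0)

def beam_kpc(theta_arcsec, z):
    """Physical size subtended by theta_arcsec at low redshift,
    using D ~ c z / H0 (ignores cosmological corrections)."""
    d_mpc = C_KM_S * z / H0
    return theta_arcsec * ARCSEC_RAD * d_mpc * 1e3

fwhm_kpc = beam_kpc(22.0, 0.05)  # ~24 kpc with this crude distance
sigma_kpc = 20.0 / 2.355         # sigma ~ 8.5 kpc for a 20 kpc FWHM
```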
The {\it Herschel} Reference Survey observations that underlie our ${\rm H}_2$\xspace mass function use a rather different instrument, SPIRE \citep{boselli_10}. The SPIRE field of view is 4' by 8', which we approximate by a single aperture of 6'. The survey sources lie between 15 and 25 Mpc; we take 20 Mpc as a typical distance. At this separation, 6' corresponds to $36$ kpc or a radial aperture of $18$ kpc. Unlike the single-beam IRAM observations, SPIRE maps its field of view with a resolution of 30''. Thus, we use a hard cutoff radius of $18$ kpc rather than a Gaussian aperture, though the difference is insignificant.
Returning to the IRAM-based observations used in the gas fraction measurements, we might worry that the ${\rm H}_2$\xspace mass of our sample could be overestimated because a fraction of the galaxies lies at lower redshift and experiences even smaller apertures. However, there are also reasons to fear that our measurement might underestimate the observations because of the spatial extent of ${\rm H}_2$\xspace in IllustrisTNG. In Section~\ref{sec:results:size:h2}, we demonstrate that the ${\rm H}_2$\xspace in our simulated galaxies tends to be more spread out than suggested by CO observations, both in relation to the stellar component and in absolute units. This disagreement raises the question of whether we wish to mock-observe exactly what a telescope would see given the simulation data, or whether we are interested in quantifying the overall ${\rm H}_2$\xspace mass of the simulated galaxies. For the purposes of gas fractions and mass functions, we are interested in the latter. Thus, our results should probably be seen as a lower limit on the ${\rm H}_2$\xspace mass one would observe given the simulations. We quantitatively investigate the effect of the ${\rm H}_2$\xspace aperture in Appendix~\ref{sec:app:apertures}.
\subsection{Separation into Early and Late Types}
\label{sec:sim:morphology}
As discussed in Section~\ref{sec:obs:fraction}, we base our comparisons of gas fractions on the observational compilation of \citetalias{calette_18}, who split galaxies into LTGs and ETGs. This distinction is physically meaningful because the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace fractions of the two groups differ significantly, with ETGs containing less gas than LTGs at fixed stellar mass. To assess whether IllustrisTNG galaxies match this trend, we split our sample into LTGs and ETGs based on the concentration of stellar mass, $C_{80,20} > 4.9$. This split reproduces the \citetalias{calette_18} ETG fraction for both centrals and satellites. Here, we define concentration as $C_{80,20} \equiv 5 \log_{10} (r_{80} / r_{20})$, where $r_{80}$ and $r_{20}$ are the radii enclosing 80\% and 20\% of the gravitationally bound stellar mass, respectively. This parameter (observationally defined through stellar light rather than mass) is known to strongly correlate with the morphological type of galaxies \citep{kent_85, bershady_00, lotz_04}. The concentrations in IllustrisTNG appear to match observations reasonably well \citep{rodriguezgomez_18}, especially when a scatter in stellar mass is applied. In Appendix~\ref{sec:app:morphology}, we describe our experiments with morphology and investigate the correlation between various morphological indicators and other galaxy properties.
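The concentration measurement reduces to interpolating the cumulative bound stellar mass profile (a sketch; names are ours, the $C_{80,20} > 4.9$ threshold is the split quoted above):

```python
import numpy as np

def concentration_c8020(r_bins, m_cum_star):
    """C_80,20 = 5 log10(r80 / r20), with r80 and r20 interpolated from
    the cumulative, gravitationally bound stellar mass profile."""
    m_tot = m_cum_star[-1]
    r20 = np.interp(0.2 * m_tot, m_cum_star, r_bins)
    r80 = np.interp(0.8 * m_tot, m_cum_star, r_bins)
    return 5.0 * np.log10(r80 / r20)

def is_etg(c8020):
    """Early-type classification used in our LTG/ETG split."""
    return c8020 > 4.9
```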
\section{Results}
\label{sec:results}
\begin{figure}
\centering
\includegraphics[trim = 3mm 4mm 1mm 0mm, clip, scale=0.7]{./omega.pdf}
\caption{Overall abundance of ${\rm H}$\,{\sc i}\xspace (top) and ${\rm H}_2$\xspace (bottom) as a function of redshift. As in all following figures, the shaded blue region encloses the curves due to the different \hi/${\rm H}_2$\xspace models. The TNG100 result represents a lower limit because it does not include galaxies outside of our selection (${\rm max}(M_{\rm gas}, M_*) \geq 2 \times 10^8 {\rm M}_{\odot}$, Section~\ref{sec:sim:selection}) or gas that is not associated with any subhalo. However, the dashed lines in the top panel demonstrate that using all gas particles gives similar results \citep{villaescusanavarro_18}. We expect the amount of ${\rm H}_2$\xspace outside galaxies to be negligible. See Section~\ref{sec:obs:omega} for details on the ${\rm H}$\,{\sc i}\xspace data compilation.}
\label{fig:omega}
\end{figure}
We are now ready to compare the gas content of IllustrisTNG galaxies to observations. We split this section into six observational metrics, namely the overall abundance of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace (Section~\ref{sec:results:omega}), mass functions (Section~\ref{sec:results:mfunc}), gas fractions (Section~\ref{sec:results:fraction}), correlations with SFR (Section~\ref{sec:results:sfr}), the spatial distribution of gas (Section~\ref{sec:results:size}), and the correlation between neutral gas content and morphology (Section~\ref{sec:results:morphology}). Throughout the paper, we adhere to a consistent plotting scheme where blue lines and shapes correspond to TNG100, orange to TNG300, and gray or black elements indicate observational data. Wherever we show \hi/${\rm H}_2$\xspace-related quantities, we indicate the variation between the nine different models as a dark shaded area. We omit the individual model lines to avoid confusion and refer the interested reader to \citetalias{diemer_18_hih2} where the detailed differences between the models are shown. Where applicable, the 68\% scatter is shown as a lighter shaded area of the same colour. In this case, it would not make sense to show the maximum scatter of the models; instead, we take the mean of the 16th and 84th percentiles of all nine models. We focus on median rather than mean quantities to avoid issues with outliers and galaxies that contain no or very little gas. To avoid overcrowding the figures in this section, we do not show results from the original Illustris-1 simulation or from the lower-resolution TNG100-2 simulation. However, we provide full sets of figures comparing TNG100 to Illustris-1 and to TNG100-2 at \href{http://www.benediktdiemer.com/data/}{benediktdiemer.com/data}. We briefly review the results for Illustris-1 in Section~\ref{sec:results:orig}.
\subsection{Overall Abundance of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace}
\label{sec:results:omega}
To set a baseline expectation for the match between the neutral gas content in IllustrisTNG galaxies and the real Universe, we first consider the overall abundance of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace as a function of redshift as shown in Fig.~\ref{fig:omega}. These abundances, denoted $\Omega_{\rm HI}$ and $\Omega_{\rm H_2}$, are defined with respect to the critical density of the Universe at $z = 0$. The TNG100 result is to be understood as a lower limit because it does not include the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace in galaxies that were not included in our sample selection (Section~\ref{sec:sim:selection}). For this reason, we do not show TNG300 because the much higher stellar and gas mass limits would mean that significant gas reservoirs would be excluded. Furthermore, our calculation does not include any neutral gas not bound to haloes and subhaloes, although such gas should be negligible because virtually all non-bound gas is ionized by the UV background (hereafter UVB). We assess the importance of these effects by comparing to the $\Omega_{\rm HI}$ calculation of \citet{villaescusanavarro_18}. While their method used a simpler prescription for the ${\rm H}_2$\xspace fraction, it included all gas cells in the simulation. The small difference to our galaxy sample indicates that the vast majority of ${\rm H}$\,{\sc i}\xspace resides within or near galaxies, at least below $z \approx 2$. We caution that the simulation results are not fully converged, especially at high redshift (Fig. 26 in \citealt{villaescusanavarro_18}). Moreover, the \hi/${\rm H}_2$\xspace separation becomes less certain at high $z$ because all our models were calibrated at low $z$, as evident from their increasing dispersion.
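For concreteness, the abundance calculation is (a sketch; masses assumed in ${\rm M}_\odot$ and the box side in comoving Mpc, e.g. $\approx 110.7$ Mpc for the $75\,{\rm Mpc}/h$ TNG100 box):

```python
import numpy as np

RHO_CRIT0 = 2.775e11  # critical density at z = 0 in h^2 Msun / Mpc^3

def omega_species(masses_msun, box_side_mpc, h=0.6774):
    """Cosmic abundance of a gas species: total mass in the box divided
    by the z = 0 critical density times the comoving box volume."""
    rho_crit = RHO_CRIT0 * h ** 2
    return np.sum(masses_msun) / (rho_crit * box_side_mpc ** 3)
```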
As this paper focuses on observational comparisons at $z = 0$, we are most interested in low-redshift constraints on $\Omega_{\rm HI}$ and $\Omega_{\rm H_2}$. We assume that the observational data represent the total amount of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace in the Universe, with no severe effects due to explicit or implicit selection functions \citep[e.g.,][]{obreschkow_13}. As shown in the top panel of Fig.~\ref{fig:omega}, observations agree that $\Omega_{\rm HI}$ falls to about $4 \times 10^{-4}$ at $z = 0$, about half of the ${\rm H}$\,{\sc i}\xspace we find in TNG100 (regardless of whether galaxies or all gas are used). Thus, we expect the ${\rm H}$\,{\sc i}\xspace mass function and gas fractions to also overestimate observations. For ${\rm H}_2$\xspace, the picture is similar (bottom panel of Fig.~\ref{fig:omega}). Here we expect virtually no molecular gas outside of galaxies \citep[see also][]{lagos_15}, though some galaxies might fall below our mass selection thresholds. Moreover, $\Omega_{\rm H_2}$ depends more strongly on the \hi/${\rm H}_2$\xspace model than $\Omega_{\rm HI}$. Despite these caveats, the ${\rm H}_2$\xspace abundance in TNG100 appears to be somewhat higher than observed. However, some \hi/${\rm H}_2$\xspace models fall within the observational uncertainty, especially given the additional systematic uncertainties due to the CO-to-${\rm H}_2$\xspace conversion factor and due to sample selection.
The redshift evolution of $\Omega_{\rm HI}$ is characterised by relatively weak changes between $z = 4$ and $z \approx 1$. Given the large uncertainties on the data, this result does not present a conflict with observations, particularly since we have not taken into account potential selection effects. At $z < 0.5$, however, $\Omega_{\rm HI}$ is observed to sharply decline, whereas the ${\rm H}$\,{\sc i}\xspace abundance in TNG100 increases. In contrast, $\Omega_{\rm H_2}$ evolves by a factor of $\approx 2$--$3$. Its peak at $z \approx 2$ is close to the peak of the cosmic SFR density \citep[e.g.,][]{madau_14}, consistent with a picture where the differences between $\Omega_{\rm HI}$ and the cosmic SFR are due to a changing ${\rm H}$\,{\sc i}\xspace-to-${\rm H}_2$\xspace conversion efficiency, driven primarily by a combination of gas column density and metallicity \citep[e.g.,][]{obreschkow_09a, lagos_14}. The bottom panel of Fig.~\ref{fig:omega} also shows the evolution of $\Omega_{\rm H_2}$ in EAGLE according to \citet{lagos_15}. Their ${\rm H}_2$\xspace abundance rises to a higher level around $z = 1$ but falls to a slightly lower level at $z = 0$. With rapidly improving CO data at high redshift, this evolution will become a useful constraint on galaxy formation models. In Fig.~\ref{fig:omega}, we have omitted high-$z$ CO surveys because their small volumes and selection functions complicate the interpretation. We refer the reader to \citet{popping_19} for such a comparison.
In summary, TNG100 seems to exhibit a reasonable redshift evolution of $\Omega_{\rm HI}$ and $\Omega_{\rm H_2}$, but contains about twice as much neutral gas as observed at $z = 0$. We further discuss this tension in Section~\ref{sec:discussion:excess}.
\subsection{Mass Functions}
\label{sec:results:mfunc}
\begin{figure*}
\centering
\includegraphics[trim = 3mm 11mm 0mm 2mm, clip, scale=0.73]{./mfunc_hi.pdf}
\includegraphics[trim = 24mm 11mm 2mm 2mm, clip, scale=0.73]{./mfunc_h2.pdf}
\caption{Mass functions of atomic (left) and molecular (right) hydrogen at $z = 0$. The gray lines and points correspond to various observational measurements (Section~\ref{sec:obs:mfunc}), namely the ALFALFA ${\rm H}$\,{\sc i}\xspace mass function of \citet{jones_18} and the ${\rm H}_2$\xspace mass function of \citet{boselli_14_hrs2}. The simulation data were mock-observed to match their respective apertures. The ${\rm H}_2$\xspace data of \citet{obreschkow_09a} are shown only for comparison. TNG100 and TNG300 are relatively well converged and generally match the data. At low masses, however, the ${\rm H}$\,{\sc i}\xspace mass function is overestimated by a factor 2--3. Given the 1-$\sigma$ observational uncertainties in the normalization and low-mass slope (shown as a shaded area and as thin gray lines, respectively), the disagreement is statistically significant. Any disagreement with the observed ${\rm H}_2$\xspace mass functions is not significant given the systematic uncertainties in our modelling and in the CO-to-${\rm H}_2$\xspace conversion.}
\label{fig:mfunc}
\end{figure*}
Fig.~\ref{fig:mfunc} shows the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace mass functions from TNG100 and TNG300 as well as a number of observational constraints. Overall, the simulations match the data fairly well, especially at the high-mass end. The ${\rm H}$\,{\sc i}\xspace mass function is compared to the extremely well-measured mass function from the ALFALFA survey \citep{jones_18}. The shaded uncertainty region around the ALFALFA mass function represents the systematic error on their normalization due to the uncertainty in the overall flux calibration, while the thin lines show the mass function with low-mass slopes that are one $\sigma$ lower and higher than the fiducial value. We cannot combine these uncertainties because \citet{jones_18} do not provide covariances. There is no discernible tension above $M_{\rm HI} \lower.7ex\hbox{\gtsima} 5 \times 10^9 {\rm M}_{\odot}$, and the TNG100 and TNG300 results agree reasonably well. Between masses of about $3 \times 10^8 {\rm M}_{\odot}$ and $5 \times 10^9 {\rm M}_{\odot}$, however, IllustrisTNG contains about $2$--$3$ times too many objects. This prediction is robust to the choice of \hi/${\rm H}_2$\xspace model because the ${\rm H}_2$\xspace fraction is low, meaning that all \hi/${\rm H}_2$\xspace models predict essentially the same amount of ${\rm H}$\,{\sc i}\xspace. Moreover, the choice of aperture has very little impact on ${\rm H}$\,{\sc i}\xspace, particularly in this mass range (Fig.~\ref{fig:gasfrac_aperture}). Given the results shown in Fig.~\ref{fig:omega}, the disagreement is hardly surprising: since $\Omega_{\rm HI}$ exceeds the $z = 0$ measurement from ALFALFA \citep{obuljen_18}, the mass function must also exceed its ALFALFA counterpart at some masses.
\citet{crain_17} showed that a similar excess in the EAGLE ${\rm H}$\,{\sc i}\xspace mass function is largely due to an unphysical, resolution-dependent population of objects that contain very little dark matter or stars. We have checked that this is not the case in IllustrisTNG, i.e., that the galaxies around $M_{\rm HI} \approx 10^9 {\rm M}_{\odot}$ do not exhibit any obvious unphysical characteristics. Comparing to a lower-resolution version of TNG100, however, we do find a similar excess in the ${\rm H}$\,{\sc i}\xspace mass function that is shifted to higher masses. A comparison with the upcoming TNG50 simulation \citep{nelson_19_outflows, pillepich_19_tng50} will reveal whether residual resolution effects change the TNG100 ${\rm H}$\,{\sc i}\xspace mass function.
While the ${\rm H}$\,{\sc i}\xspace mass function is more tightly constrained observationally, the ${\rm H}_2$\xspace mass function provides a more interesting point of comparison for our simulation because ${\rm H}_2$\xspace makes up a smaller fraction of the neutral gas, meaning that the form of the ${\rm H}_2$\xspace mass function is less directly tied to the overall neutral gas content. The right panel of Fig.~\ref{fig:mfunc} compares the ${\rm H}_2$\xspace mass functions in TNG100 and TNG300 to the observations of \citet{obreschkow_09a} and \citet{boselli_14_hrs2}. The aperture chosen for the simulated galaxies was matched to the \citet{boselli_14_hrs2} observations (Section~\ref{sec:sim:apertures:h2}); the \citet{obreschkow_09a} points are shown to highlight the systematic observational uncertainty on the ${\rm H}_2$\xspace mass function. As with the ${\rm H}$\,{\sc i}\xspace mass function, TNG100 and TNG300 are well converged and agree with the observations at the high-mass and low-mass ends. Any apparent disagreements with the observed data are not truly significant, given that our \hi/${\rm H}_2$\xspace modelling is uncertain to at least a factor of two \citepalias{diemer_18_hih2} and that the observations rely on an uncertain CO-to-${\rm H}_2$\xspace conversion factor \citep[see also][]{popping_19}.
\subsection{Gas Fractions}
\label{sec:results:fraction}
We now turn to the connection between the stellar and gaseous content of galaxies in IllustrisTNG, quantified by the fractions of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace mass with respect to stellar mass (hereafter simply referred to as ``gas fractions'' or $f_{\rm gas}$). Fig.~\ref{fig:fraction} shows the gas fractions of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace as a function of stellar mass. Only stellar mass bins with at least $50$ galaxies are shown. When comparing gas fractions, simple means or medians are not appropriate measures of the respective distributions because a number of galaxies will contain zero gas cells, or at least lie below some observational detection threshold (for example, the sensitivity limit of radio telescopes). For this reason, we separately consider two quantities: the fraction of galaxies below some threshold gas fraction, and the distribution of gas fractions above this limit.
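Explicitly, the gas fractions used throughout are normalised by stellar mass rather than by the total baryonic mass,

```latex
\begin{equation}
f_{\rm HI} \equiv \frac{M_{\rm HI}}{M_{*}} \,, \qquad
f_{\rm H_2} \equiv \frac{M_{\rm H_2}}{M_{*}} \,,
\end{equation}
```

meaning that, unlike definitions normalised by $M_{*} + M_{\rm gas}$, these fractions can exceed unity for very gas-rich galaxies.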
In defining the threshold gas fraction, we follow \citetalias{calette_18} who introduce a mass-dependent lower limit to the distributions for ETGs (dashed lines in Fig.~\ref{fig:fraction}). As we are combining the ETG and LTG samples in Fig.~\ref{fig:fraction}, we apply the same threshold to both ETGs and LTGs. \citetalias{calette_18} infer the distribution of gas fractions using Kaplan-Meier estimation, taking into account upper limits, and parameterize their results as separate fitting functions for the distribution of LTG and ETG gas fractions. We reconstruct the distribution of the full sample by summing the LTG and ETG contributions given the ETG fraction at each stellar mass (Fig.~\ref{fig:etgltg}). For both the simulated and observed distributions, we now count the fraction of galaxies that lie below the threshold, $f_{\rm low}$, and compute the median and 68\% scatter of those gas fractions that lie above the threshold. The exact value of the threshold is unimportant because it is applied to both simulation and observations. We note that $f_{\rm low}$ is not the fraction of galaxies without a gas detection, but rather the fraction of galaxies whose gas fraction falls below the threshold that is imposed a posteriori. Thus, we do not need to worry about whether a simulated galaxy would or would not have been detected by a given observational survey. If its gas fraction lies below the threshold, it is registered as such by \citetalias{calette_18}. The individual observed data points shown in Fig.~\ref{fig:fraction} highlight the importance of applying an estimator that takes into account upper limits and selection effects.
\begin{figure*}
\centering
\includegraphics[trim = 1mm 9mm 0mm 0mm, clip, scale=0.68]{./mstar_fstar_split_hi_nosplit.pdf}
\includegraphics[trim = 24mm 9mm 1mm 0mm, clip, scale=0.68]{./mstar_fstar_split_h2_nosplit.pdf}
\caption{Gas fractions of atomic (left) and molecular hydrogen (right) as a function of stellar mass at $z = 0$. The gray lines and shaded areas show the inferred median gas fractions and 68\% scatter according to the compilation of \citetalias{calette_18}. These values refer only to galaxies whose gas fractions lie above the dashed lines. The distributions of lower gas fractions could not be reliably determined by \citetalias{calette_18}; instead, their fractional contribution is shown in the bottom panels. As a result, the median gas fractions in the top panels can appear different from those shown in \citetalias{calette_18}, especially at high stellar masses where many galaxies fall below the threshold. The simulation data are analysed in the same way, i.e., by separately counting galaxies that fall below the lower limits. The darker shaded areas show the region covered by all nine \hi/${\rm H}_2$\xspace models used and can be interpreted as a systematic uncertainty. The lighter shaded area indicates the mean 68\% scatter, i.e., the mean of the scatter contours according to the different models. The detections and upper limits used in the \citetalias{calette_18} analysis are shown as gray dots and arrows (combining both their LTG and ETG samples). See Section~\ref{sec:results:fraction} for a detailed discussion of these results.}
\label{fig:fraction}
\end{figure*}
We first consider the distribution of $f_{\rm gas}$ above the threshold. Overall, the ${\rm H}$\,{\sc i}\xspace fraction matches the results of \citetalias{calette_18} well, including a realistic 68\% scatter. The differences between the \hi/${\rm H}_2$\xspace models are relatively small for ${\rm H}$\,{\sc i}\xspace. While the median dips below the observed median by a factor of about two at intermediate stellar masses, it is unclear whether this difference is significant. TNG300 exhibits systematically lower gas abundances than TNG100. This undesirable resolution dependence could be resolved with a rescaling procedure \citep[see, e.g.,][for stellar masses]{pillepich_18}, but we do not undertake such an exercise here.
The median ${\rm H}_2$\xspace fraction is almost perfectly matched at stellar masses below $2 \times 10^{10} {\rm M}_{\odot}$; at the highest masses, it falls below the observations by a factor of about $4$--$10$ depending on the \hi/${\rm H}_2$\xspace model. However, it is not clear how significant this difference is, for a number of reasons. First, as discussed in Section~\ref{sec:obs:omega}, the observations are systematically uncertain due to the poorly known CO-to-${\rm H}_2$\xspace conversion factor. Second, the sparsity of observations above $2 \times 10^{11} {\rm M}_{\odot}$ means that the distribution inferred by \citetalias{calette_18} is less reliable than at lower masses. Finally, the estimated ${\rm H}_2$\xspace fraction becomes more model-dependent at high stellar masses (see also \citetalias{diemer_18_hih2}). Part of this effect is due to the different spatial distributions predicted by the models, which, given a limited observational aperture, can cut out significant fractions of the total ${\rm H}_2$\xspace mass and lead to the sharp drop in the ${\rm H}_2$\xspace fraction at high masses. In summary, there may be some tension between the observed and simulated ${\rm H}_2$\xspace fractions at the high-mass end, but it is hard to quantify this tension exactly.
We now turn to the fraction of galaxies with $f_{\rm gas}$ below the lower limit. According to \citetalias{calette_18}, this fraction increases with stellar mass, reaching about $50\%$ of both the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace distributions at the highest stellar masses (bottom panels of Fig.~\ref{fig:fraction}). The IllustrisTNG galaxies do not follow this trend. Especially in the ${\rm H}$\,{\sc i}\xspace distribution, the fraction of low-gas galaxies decreases slightly at high masses. Most notably, the simulations contain a population of at least 25\% of galaxies below the limit at all stellar masses. Virtually all of those, about 85--90\% depending on the \hi/${\rm H}_2$\xspace model, are satellites. A small fraction of these satellites contain no gas cells at all, presumably because their gas was stripped. The high satellite fraction among the low-gas galaxies can perhaps partially explain the disagreement with \citetalias{calette_18} because some observational samples are likely biased against close satellite pairs due to effects such as fibre collisions. However, in Section~\ref{sec:discussion:gasfree} we discuss this issue further and conclude that fibre collisions cannot explain the bulk of the excess of galaxies with low gas fractions. Similarly, blending can assign some ${\rm H}$\,{\sc i}\xspace mass to gas-free galaxies, but this effect impacts only a few percent of our galaxies (Appendix~\ref{sec:app:apertures}).
We conclude that the gas fractions in IllustrisTNG broadly match the observations, but that there is an excess of satellites with very low gas fractions. By considering LTGs and ETGs as one combined sample, we have sidestepped the question of how the gas fractions correlate with the morphological type of galaxies; we return to this question in Section~\ref{sec:results:morphology}.
\subsection{Correlations with Star Formation Rate}
\label{sec:results:sfr}
\begin{figure}
\centering
\includegraphics[trim = 7mm 28mm 6mm 2mm, clip, scale=0.72]{./sfr_h2_z0.pdf}
\includegraphics[trim = 7mm 13mm 6mm 5mm, clip, scale=0.72]{./sfr_h2_z2.pdf}
\caption{Relation between molecular gas mass and SFR at $z = 0$ (top) and $z = 2$ (bottom). In the top panel, the gray points show individual galaxies from xCOLD GASS \citep{saintonge_17}, the solid line and shaded region show their median and 68\% scatter. IllustrisTNG matches the median relation well, to better than a factor of two at all masses. In TNG300, the scatter is smaller than the variation due to the \hi/${\rm H}_2$\xspace models. The relation closely approximates a constant depletion time as parameterized by \citet[][dashed gray lines]{tacconi_18}.}
\label{fig:sfr}
\end{figure}
Given the reasonable gas fractions and the generally realistic star formation activity in IllustrisTNG \citep{donnari_19}, we expect the molecular gas reservoirs of galaxies to correlate tightly with their SFR. This trend is well established observationally (Fig.~\ref{fig:sfr}). We note that the $M_{\rm H_2}$--SFR correlation is not quite the same as the Kennicutt-Schmidt relation \citep{schmidt_59, kennicutt_98} because we have not normalised the mass and SFR by area.
The top panel of Fig.~\ref{fig:sfr} shows the median SFR as a function of $M_{\rm H_2}$ for both observations and simulations at $z = 0$. The xCOLD GASS galaxy sample is representative in the sense that it was randomly selected from a particular stellar mass range \citep{saintonge_12, saintonge_17}, and that galaxies were observed until a certain ${\rm H}_2$\xspace fraction was detected or a certain upper limit was reached. In particular, galaxies in the COLD GASS sample ($M_{*} > 10^{10} {\rm M}_{\odot}$) were guaranteed to be detected in CO if $M_{\rm H_2} / M_{*} > 1.5\%$, while galaxies in the lower-mass COLD GASS-low sample ($10^9 < M_{*} < 10^{10} {\rm M}_{\odot}$) were detected if $M_{\rm H_2} / M_{*} > 2.5\%$. To match this selection, we have chosen all simulated galaxies in the respective stellar mass ranges that reach the limiting ${\rm H}_2$\xspace mass. Given the mixed sources of the SFR measurements, there is no clear detection limit on the SFR \citep{saintonge_17}. We take the lowest SFR in the sample, $10^{-2.8} {\rm M}_{\odot}/{\rm yr}$, as a guideline and neglect all IllustrisTNG galaxies whose SFR falls below this value. In practice, this cut makes no discernible difference.
The median trend at $z = 0$ is matched well by IllustrisTNG, to a factor of two or better at all masses. For TNG300, the situation is less clear because the data become somewhat sparse at the highest masses. The scatter in TNG100 appears to be slightly smaller than observed, but the observed scatter constitutes an upper limit because it does not take into account the sizeable error bars on the individual SFR measurements. Moreover, the observed SFRs correspond to an average over a few hundred Myr and might therefore be subject to added scatter compared to the instantaneous SFRs used for the simulation data (Section~\ref{sec:sim:apertures:sfr}).
An alternative measure by which to understand the relation between gas mass and star formation is the depletion time, $t_{\rm depl} \equiv M_{\rm H_2} / {\rm SFR}$, i.e., the time over which a gas reservoir would be exhausted given the current SFR. Observationally, the depletion time is thought to be more or less independent of mass but to evolve with redshift. The dashed lines in Fig.~\ref{fig:sfr} correspond to a constant depletion time as parameterized by \citet{tacconi_18}, $t_{\rm depl} \approx 1\ {\rm Gyr} \times (1 + z)^{-0.57}$. IllustrisTNG reproduces this relation between $z = 0$ and $z = 2$, although the highest-mass galaxies have higher star formation rates than expected by a factor of about two. The comparison to \citet{tacconi_18} should be taken as approximate because we have not matched our galaxy selection to theirs, which includes only galaxies on the star-forming main sequence. However, we have checked that the $M_{\rm H_2}$--SFR connection does not significantly differ between ETGs and LTGs (defined as described in Section~\ref{sec:sim:morphology}).
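Written out, the dashed lines in Fig.~\ref{fig:sfr} thus correspond to

```latex
\begin{equation}
{\rm SFR} = \frac{M_{\rm H_2}}{t_{\rm depl}(z)} \,, \qquad
t_{\rm depl}(z) \approx 1\,{\rm Gyr} \times (1+z)^{-0.57} \,,
\end{equation}
```

i.e., $t_{\rm depl} \approx 1$ Gyr at $z = 0$ and $t_{\rm depl} \approx 3^{-0.57}\,{\rm Gyr} \approx 0.5$ Gyr at $z = 2$, so that a given molecular reservoir sustains a roughly twice higher SFR at $z = 2$ than at $z = 0$.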
\citet{lagos_15} reported a similar result for the EAGLE simulation, namely a roughly constant depletion time of $t_{\rm depl} \approx 10^9\ {\rm yr}$ at $z = 0$. They argued that an overly tight $M_{\rm H_2}$--SFR relation is perhaps to be expected because star formation in simulations is computed directly from the gas density, and is thus tightly coupled to the cold gas density in the ISM models.
\subsection{Spatial Distribution}
\label{sec:results:size}
Having investigated how atomic and molecular gas is distributed between galaxies of different stellar mass and SFR, we now turn to the distribution of gas within galaxies.
\subsubsection{The ${\rm H}$\,{\sc i}\xspace Mass-size Relation}
\label{sec:results:size:hi}
\begin{figure}
\centering
\includegraphics[trim = 5mm 8mm 6mm 2mm, clip, scale=0.72]{./size_hi_r1.pdf}
\caption{The ${\rm H}$\,{\sc i}\xspace mass-size relation at $z = 0$, where $R_{\rm HI}$ is defined as the radius where $\Sigma_{\rm HI}$ falls below $1 {\rm M}_{\odot}/{\rm pc}^2$. The gray points show the data compilation of \citet{wang_16_hi}, and the solid gray line shows their best linear fit. For comparison, the dashed line shows the fit from \citet{lelli_16}.}
\label{fig:hi_mass_size}
\end{figure}
Fig.~\ref{fig:hi_mass_size} shows the ${\rm H}$\,{\sc i}\xspace mass-size relation as measured in TNG100 and TNG300, and in the observational compilations of \citet{wang_16_hi} and \citet{lelli_16}. Observationally, the relation is extremely tight, with only $0.06$ dex scatter (15\%, shaded gray). Its slope is close to $0.5$, the value one would obtain for a disc of constant surface density ($0.506$ in \citealt{wang_16_hi}, $0.535$ in \citealt{lelli_16}). Both TNG100 and TNG300 match the slope of the relation almost perfectly, but predict slightly larger ${\rm H}$\,{\sc i}\xspace radii than observed, about 14\% above the measured relation. The \hi/${\rm H}_2$\xspace models agree to better than 13\% in all bins for TNG100 and to better than 20\% for TNG300, a much better-defined prediction than for the gas fractions or mass functions. Interestingly, TNG100 and TNG300 are almost perfectly converged in this metric, unlike in any other quantity we consider. We discuss the physical reasons for the tight relation and the good agreement in Section~\ref{sec:discussion:hisize}.
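The expected slope of $0.5$ follows from a simple toy model: for a disc with constant mean ${\rm H}$\,{\sc i}\xspace surface density $\bar{\Sigma}$ inside $R_{\rm HI}$,

```latex
\begin{equation}
M_{\rm HI} = \pi \bar{\Sigma} R_{\rm HI}^{2}
\quad \Longrightarrow \quad
\log_{10} R_{\rm HI} = 0.5 \log_{10} M_{\rm HI} - 0.5 \log_{10} \left( \pi \bar{\Sigma} \right) ,
\end{equation}
```

so that the small scatter and near-$0.5$ slope of the observed relation correspond to a nearly universal mean ${\rm H}$\,{\sc i}\xspace surface density within $R_{\rm HI}$.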
\subsubsection{${\rm H}$\,{\sc i}\xspace Surface Density Profiles}
\label{sec:results:size:hiprofiles}
\begin{figure}
\centering
\includegraphics[trim = 2mm 24mm 6mm 2mm, clip, scale=0.69]{./prof_hi_control.pdf}
\includegraphics[trim = 2mm 8mm 6mm 3mm, clip, scale=0.69]{./prof_hi_rich.pdf}
\caption{Median radial profiles of the ${\rm H}$\,{\sc i}\xspace surface density. The light shaded areas show the 68\% scatter. As the profiles are scaled to $R_{\rm HI}$, they overlap at $\Sigma_{\rm HI} = 1 {\rm M}_{\odot}/{\rm pc^2}$ by definition. The gray lines show individual galaxies and the median for the Bluedisk sample of \citet{wang_14_hi}. The top panel shows their control sample ($10^{9.1} < M_{\rm HI} < 10^{9.8} {\rm M}_{\odot}$), the bottom panel their ${\rm H}$\,{\sc i}\xspace-rich sample ($M_{\rm HI} > 10^{9.8} {\rm M}_{\odot}$).}
\label{fig:prof_hi}
\end{figure}
Having established that $R_{\rm HI}$ matches observations well, we proceed to the radial distribution of ${\rm H}$\,{\sc i}\xspace. Fig.~\ref{fig:prof_hi} shows the radial ${\rm H}$\,{\sc i}\xspace profiles from the Bluedisk survey \citep{wang_13_bluedisks, wang_14_hi}. Their sample was designed to contain an ${\rm H}$\,{\sc i}\xspace-rich and a control subset (Section~\ref{sec:obs:sizes}). To mimic their selection, we choose all galaxies with $10^{10} < M_{*} < 10^{11} {\rm M}_{\odot}$, $0.5 < {\rm SFR} < 10\ {\rm M}_{\odot}/{\rm yr}$, and a stellar half-mass radius of at least $3$ kpc (though the latter cut makes virtually no difference). For the control sample, we select galaxies with $10^{9.1} {\rm M}_{\odot} < M_{\rm HI} < 10^{9.8} {\rm M}_{\odot}$, for the ${\rm H}$\,{\sc i}\xspace-rich sample we apply a minimum of $M_{\rm HI} > 10^{9.8} {\rm M}_{\odot}$ (according to the respective \hi/${\rm H}_2$\xspace model). \citet{bahe_16} carefully demonstrated that these relatively simple cuts match the Bluedisk selection closely.
Observationally, the median ${\rm H}$\,{\sc i}\xspace profiles for the two samples turn out to be almost indistinguishable, but the ${\rm H}$\,{\sc i}\xspace-rich IllustrisTNG galaxies do exhibit somewhat different profiles from the control galaxies. The median profiles of the control sample match the data reasonably well, though TNG100 galaxies exhibit a slight deficit at small radii. In both TNG100 and TNG300, the ${\rm H}$\,{\sc i}\xspace profiles in the control sample are perhaps a little too extended at large radii, though the difference is well within the scatter of the observations. In the more massive ${\rm H}$\,{\sc i}\xspace-rich sample, the median outer profiles are matched extremely well but the simulations show clear evidence of central ${\rm H}$\,{\sc i}\xspace holes: up to a factor of five deficit of ${\rm H}$\,{\sc i}\xspace at the centre. This trend is more pronounced in TNG100, but both TNG100 and TNG300 show large scatter at the smallest radii. The overall features of the profiles are independent of the \hi/${\rm H}_2$\xspace model, which is expected at large radii where ${\rm H}$\,{\sc i}\xspace dominates over ${\rm H}_2$\xspace, but not necessarily at small radii where the molecular fraction could be high.
${\rm H}$\,{\sc i}\xspace holes have also been observed in some galaxies \citep[e.g.][]{boomsma_08} but are not prevalent in the Bluedisk sample. We note that the holes in IllustrisTNG galaxies are not likely to be a resolution effect. First, they are more pronounced in the higher-resolution TNG100 than in TNG300. Second, they are more pronounced in the ${\rm H}$\,{\sc i}\xspace-rich sample, which contains the more massive, and thus more extended, ${\rm H}$\,{\sc i}\xspace discs. According to the median ${\rm H}$\,{\sc i}\xspace mass-size relation, $R_{\rm HI} \approx 30$ kpc at the lower-mass end of this sample, about $40$ times the force resolution length in TNG100. Finally, the holes are not unique to either volumetric or projected models but are a robust prediction of both.
The question then arises of what causes the central ${\rm H}$\,{\sc i}\xspace holes. There are two possible processes that can create them. First, gas at the centre can be ionized or ejected, for example due to feedback from active galactic nuclei (AGN) in large galaxies \citep{weinberger_18}. Second, the inner part of the gas disc can have high enough surface densities to almost entirely transition to ${\rm H}_2$\xspace. We find evidence for the former mechanism \citep[see also][]{nelson_19_outflows}. First, the prevalence of ${\rm H}$\,{\sc i}\xspace holes increases with mass, a trend that would be expected for AGN feedback. Second, the median neutral fraction, $f_{\rm HI+H_2}$, drops towards the centre of massive galaxies, indicating that the majority of the gas is ionized in many objects. Third, the prevalence of ${\rm H}$\,{\sc i}\xspace holes depends on the star formation activity: galaxies with a specific SFR (sSFR) above $10^{-10}/{\rm yr}$ exhibit barely any ${\rm H}$\,{\sc i}\xspace holes, whereas the vast majority of galaxies below an sSFR of $10^{-11}/{\rm yr}$ do exhibit them. Thus, ${\rm H}$\,{\sc i}\xspace holes do not seem to be connected to a dominance of ${\rm H}_2$\xspace over ${\rm H}$\,{\sc i}\xspace. The control and ${\rm H}$\,{\sc i}\xspace-rich samples select galaxies with slightly different SFRs, which explains the different median profiles in Fig.~\ref{fig:prof_hi}.
\citet{bahe_16} analysed ${\rm H}$\,{\sc i}\xspace profiles in the EAGLE simulation and reported that a significant fraction of their galaxies exhibit clear ${\rm H}$\,{\sc i}\xspace holes, with holes becoming more prevalent towards high ${\rm H}$\,{\sc i}\xspace masses (their figure 5). The holes in EAGLE, however, appear not only at the centre but also throughout the disc, and are traced to their particular feedback implementation \citep{bahe_16}.
\subsubsection{The Extent of the ${\rm H}_2$\xspace Distribution}
\label{sec:results:size:h2}
\begin{figure}
\centering
\includegraphics[trim = 2mm 8mm 4mm 2mm, clip, scale=0.69]{./sizehist_rh2_half_rstar_half.pdf}
\caption{Distribution of the ratio of ${\rm H}_2$\xspace and stellar half-mass radii, a crude test of the extent of ${\rm H}_2$\xspace compared to the optical size of the galaxy. The gray histogram shows the distribution of CO-to-stellar half-mass radii from the EDGE-CALIFA survey \citep{bolatto_17}. Their best-fit CO-to-stellar scale ratio is unity (dashed line), whereas IllustrisTNG galaxies exhibit somewhat larger median ratios and large scatter (the thin vertical lines bracket the medians according to the different \hi/${\rm H}_2$\xspace models).}
\label{fig:h2_size}
\end{figure}
\begin{figure*}
\centering
\includegraphics[trim = 1mm 9mm 0mm 0mm, clip, scale=0.62]{./mstar_fstar_split_hi_split_concentration.pdf}
\includegraphics[trim = 24mm 9mm 1mm 0mm, clip, scale=0.62]{./mstar_fstar_split_h2_split_concentration.pdf}
\caption{Same as Fig.~\ref{fig:fraction}, but with the galaxy sample split into LTGs (top row) and ETGs (bottom row) by the concentration of their stellar mass. The \citetalias{calette_18} constraints are derived directly from their best-fit distributions. The match to observations is somewhat worse than for the overall sample, highlighting that the gas content of IllustrisTNG galaxies does not correlate with morphology as observed in the real Universe.}
\label{fig:gasfrac_c}
\end{figure*}
As discussed in Section~\ref{sec:obs:sizes}, there are a number of factors that complicate the interpretation of observed molecular profiles. Thus, we restrict our comparison to a simple measure of the relative extent of the molecular and stellar distributions: the ratio of ${\rm H}_2$\xspace and stellar half-mass radii, shown in Fig.~\ref{fig:h2_size}. The gray histogram shows the ratio of CO and stellar half-mass radii from the EDGE-CALIFA survey \citep[][see their figure 12]{bolatto_17}. The CALIFA selection function is based mostly on the optical radii of galaxies \citep{walcher_14}, but the EDGE selection complicates this picture further. To mimic the overall selection, we choose all simulated galaxies that fall within the ranges of galaxy properties observed in the final EDGE-CALIFA sample, namely, a projected stellar half-mass radius between $1.3$ and $8.8$ kpc, $10^{9.5} {\rm M}_{\odot} < M_{*} < 10^{12} {\rm M}_{\odot}$, $M_{\rm H_2} > 10^8 {\rm M}_{\odot}$, and an SFR between $0.01$ and $100\ {\rm M}_{\odot} / {\rm yr}$. We have verified that our results are not sensitive to the exact values of any of these limits.
\citet{bolatto_17} find a relatively tight relation between the CO and stellar half-mass radii, with a best-fit ratio of 1.00 (dashed line in Fig.~\ref{fig:h2_size}). In agreement with these observations, a large fraction of IllustrisTNG galaxies exhibit radius ratios around unity, but there is also a significant tail towards large ratios. These galaxies have an ${\rm H}_2$\xspace distribution that is much more extended than the stellar distribution, whereas no such objects were observed in EDGE-CALIFA. As a result, the median ratio is closer to $1.3$ in the simulations, though with a large dependence on the \hi/${\rm H}_2$\xspace model. The median ratio tends to increase towards higher ${\rm H}_2$\xspace masses. The stellar sizes in IllustrisTNG match observations well \citep{genel_18}, meaning that the disagreements are likely due to the extent of the ${\rm H}_2$\xspace distribution. The ${\rm H}_2$\xspace radii appear well converged, with only small differences between TNG100, TNG300, and the lower-resolution version TNG100-2. Splitting the sample into LTGs and ETGs as described in Section~\ref{sec:sim:morphology} has only a modest effect on the results.
While half-mass radii have the advantage of being simple to measure, observed CO and stellar profiles are often fit with an exponential \citep[e.g.,][]{leroy_08}. This procedure tends to give very similar results: when comparing the fitted scale lengths of the CO and stellar profiles, \citet{bolatto_17} find a best-fit relation of $l_{\rm CO} = 0.89\, l_*$, while \citet{leroy_08} find $l_{\rm CO} \approx (0.9 \pm 0.2)\, l_*$. We thus conclude that our comparison is robust to the exact definition of the scale radii.
There are, however, a number of other caveats. For example, the CO-to-${\rm H}_2$\xspace conversion factor could depend on radius, leading to CO profiles that would systematically differ from ${\rm H}_2$\xspace profiles \citep[][and references therein]{bolatto_13}. Moreover, the simple CO-to-stellar half-mass ratio shown in Fig.~\ref{fig:h2_size} may paint too simplistic a picture because the CO surface density profiles of galaxies have been found to be diverse \citep[e.g.,][]{young_91, regan_01, leroy_08, leroy_09}. For example, CO emission at large radii has been seen at high redshift \citep{cicone_15, ginolfi_17, dannerbauer_17}. While these complications mean that we cannot draw strong conclusions from Fig.~\ref{fig:h2_size}, there is another piece of evidence that our simulated ${\rm H}_2$\xspace distributions are too extended: we showed that IllustrisTNG does not quite match the Kennicutt-Schmidt relation at the high-surface-density end, with low $\Sigma_{\rm H_2}$ at fixed SFR (Fig.~3 in \citetalias{diemer_18_hih2}). This mismatch indicates that star formation in the simulation happens at lower surface densities than observed, which could well be a resolution effect.
In summary, we find tentative evidence that the spatial distribution of ${\rm H}_2$\xspace in our modelling is somewhat more extended than in observations. We leave a more detailed comparison for future work.
\subsection{Correlation with Morphology}
\label{sec:results:morphology}
When considering gas fractions as a function of stellar mass in Section~\ref{sec:results:fraction}, we combined the \citetalias{calette_18} results for LTGs and ETGs to recover the distribution of the overall galaxy sample. Considering the types separately, however, tightens the scatter on the relations and allows us to check whether the gas content of IllustrisTNG galaxies correlates with their stellar morphology as expected. To this end, Fig.~\ref{fig:gasfrac_c} shows the same comparison of gas fractions as Fig.~\ref{fig:fraction}, but split into LTGs (top row) and ETGs (bottom row) according to stellar mass concentration (Section~\ref{sec:sim:morphology} and Appendix~\ref{sec:app:morphology}).
This comparison reveals a number of issues. First, the fractions of galaxies below the $f_{\rm gas}$ threshold trends the wrong way: a significant number of LTGs have no gas when almost no such objects are observed, whereas there are not enough low-gas ETGs. Clearly, a lack of neutral gas does not correlate well with the shape of the stellar distribution as captured by concentration. Second, the median ${\rm H}$\,{\sc i}\xspace fractions in the ETG sample are too high. While we found that the median gas fractions and scatter of the overall galaxy sample are in good agreement with data, we conclude that these gas fractions are not always assigned to galaxy samples with the expected morphological properties.
However, when splitting the sample by colour, we find much better agreement with the observed gas fractions than in Fig.~\ref{fig:gasfrac_c}. This conclusion is in line with the work of \citet{rodriguezgomez_18}, who found that galaxy morphology and colour in IllustrisTNG do not correlate quite as observed. In Appendix~\ref{sec:app:morphology}, we further discuss the connection between gas content, various morphological indicators, and colour. We conclude that the gas content of galaxies correlates much more strongly with colour than with morphology in IllustrisTNG.
\subsection{Results for the Original Illustris Simulation}
\label{sec:results:orig}
The IllustrisTNG physics model represents a major improvement over the original Illustris model \citep{pillepich_18_tng}. However, it is still informative to evaluate the original simulation with regard to its ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace content. For this purpose, we provide Illustris-1 versions of Figs.~\ref{fig:omega} through \ref{fig:h2_size} at \href{http://www.benediktdiemer.com/data/}{benediktdiemer.com/data}. In this section, we briefly describe the main conclusions. We omit Fig.~\ref{fig:gasfrac_c} from the set and from our discussion because our chosen morphological indicator, $C_{80,20}$, takes on such different values in Illustris-1 that it cannot be used for a sensible morphological classification.
Most notably, $\Omega_{\rm HI}$ and $\Omega_{\rm H_2}$ are massively overestimated in Illustris-1 at $z = 0$, by almost an order of magnitude. This neutral gas excess also manifests itself in the ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace mass functions, which are similarly overestimated by up to an order of magnitude at some masses. It follows that the gas fractions also exceed observations, by a factor of about five at the low-mass end. They decrease strongly towards high masses, falling below the \citetalias{calette_18} relations at $M_* > 10^{11} {\rm M}_{\odot}$. This trend is also apparent in Fig.~3a of \citet{vogelsberger_14_nature}, but the \citet{huang_12} ALFALFA gas fractions shown there are somewhat higher than those inferred by \citetalias{calette_18}, leading to a seemingly better agreement at the low-mass end. Interestingly, Illustris-1 matches the fraction of galaxies below the threshold better than IllustrisTNG as it contains a much smaller population of very gas-poor objects and follows the expected trends with stellar mass. Given the excess of both ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace, it is unsurprising that the correlation between $M_{\rm H_2}$ and SFR at $z = 0$ is similarly off, with a large and non-constant depletion time. At $z = 2$, however, Illustris-1 does roughly match the constant depletion time found in IllustrisTNG.
Despite these significant disagreements, the ${\rm H}$\,{\sc i}\xspace mass-size relation turns out to be robust once again: while the relation is slightly shallower in Illustris-1, its normalization is matched at $M_{\rm HI} \approx 10^9 {\rm M}_{\odot}$. This agreement is somewhat unexpected, given that the stellar sizes of galaxies were about twice too large below $M_{*} \lower.7ex\hbox{\ltsima} 10^{10.5} {\rm M}_{\odot}$ in Illustris-1, an observable that is much improved in TNG100 \citep{pillepich_18_tng, genel_18}. Thus, the improvement in stellar sizes was not driven by a change in the distribution of neutral gas. This conclusion is confirmed by the ${\rm H}$\,{\sc i}\xspace profiles, which more or less agree between Illustris-1 and IllustrisTNG. Conversely, the median ${\rm H}_2$\xspace half-mass radius in Illustris-1 is significantly larger than in IllustrisTNG.
In summary, Illustris-1 galaxies contain a significant excess of neutral gas at $z = 0$, leading to offsets in all correlations with other galaxy properties. However, the properties of the excessively large ${\rm H}$\,{\sc i}\xspace discs are internally consistent. Our findings highlight the enormous improvements of the IllustrisTNG galaxy formation model compared to the original model.
\section{Discussion}
\label{sec:discussion}
In this section, we discuss the physics behind the most striking agreements and tensions with observations that we have discovered. We also consider anecdotal evidence from individual galaxies such as the Milky Way.
\subsection{Is There an Excess of Neutral Gas in IllustrisTNG?}
\label{sec:discussion:excess}
Perhaps the most significant tension with observations is that IllustrisTNG appears to contain about twice as much neutral gas at $z = 0$ as expected observationally, despite efficient stellar and AGN feedback. This conclusion is entirely independent of the \hi/${\rm H}_2$\xspace modelling, which takes the neutral gas abundance as an input from the simulation. In non-star forming cells (below the density threshold of $n_{\rm H} = 0.106\ {\rm cm}^{-3}$), the neutral fraction is determined numerically by the balance between cooling, the photoionization rate due to the UVB \citep{fauchergiguere_09} and due to nearby AGN, and self-shielding according to a fitting function \citep[][see \citealt{vogelsberger_13} for details]{rahmati_13}. In cells above the density threshold, the \citet{springel_03} two-phase ISM model governs the gas physics. There, we assume that all gas in cold clouds is entirely neutral whereas the hot phase is entirely ionized, leading to a neutral fraction of about 90\%. If the fraction of star-forming gas was drastically different, we would expect corresponding changes in the stellar properties of IllustrisTNG galaxies, which more or less agree with observations. Given these considerations, and assuming that the observations are accurate, there are a number of possible reasons why the neutral fraction might be overestimated in star-forming cells, in non-star-forming cells, or in both.
In reality, some part of the cold-cloud phase in the \citet{springel_03} model could be ionized due to radiation from young stars \citep[][see also Section~4.3 of \citetalias{diemer_18_hih2}]{rahmati_13_localradiation}. However, only about 30\% of the total neutral gas in TNG100 stems from star-forming cells, meaning that even erasing all neutral gas from them would not reduce the overall neutral abundance by a factor of two.
In non-star-forming gas, the neutral fraction could be lowered if the gas was heated or experienced stronger photo-ionization. For example, stronger feedback could remove some gas that is dense enough to self-shield and contain a significant neutral component. Another explanation could be the strength of the UVB: if the model of \citet{fauchergiguere_09} underestimated the true background significantly, the neutral fraction would be artificially increased, perhaps without significantly increasing the star formation activity. We have tested this mechanism using a smaller test simulation that uses the \citet{puchwein_19} UVB instead of \citet{fauchergiguere_09}. In the test simulation, the neutral gas abundance is reduced by about 10\%, indicating that the UVB would need to change somewhat drastically to reduce the neutral gas masses by a factor of two.
While none of these effects seem likely to account for the entire disagreement in the neutral gas abundance, several of them could conspire. We note that the disagreement in $\Omega_{\rm HI}$ is driven largely by an upturn between $z = 1$ and $z = 0$ that is not apparent in observations (Fig.~\ref{fig:omega}). The upturn is partially caused by a decrease in the ${\rm H}_2$\xspace abundance over the same time interval, but the combined neutral abundance does increase in TNG100. An understanding of the reasons for this trend might shed light on the disagreement in $\Omega_{\rm HI}$, an investigation we leave for future work.
\subsection{What is the Origin of Gas-free Galaxies in IllustrisTNG?}
\label{sec:discussion:gasfree}
While investigating ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace fractions in Section~\ref{sec:results:fraction}, we discovered that IllustrisTNG contains a significant population of satellites whose gas fractions lie below the detection threshold. Upon closer inspection, some of those galaxies turn out not to contain any gas at all, that is, the halo finder associates no gas cells with them despite their significant stellar masses. Such gas-free galaxies account for 7\% of our TNG100 sample ($M_* > 2 \times 10^8 {\rm M}_{\odot}$) and are found across the entire range of stellar mass. While gas-free galaxies account only partially for the excess of galaxies below the limit in Fig.~\ref{fig:fraction}, they are, at first sight, a strange population that we wish to investigate further.
The vast majority of gas-free galaxies, 94\%, are satellites. They can occupy (sub-)halos as massive as $10^{12} {\rm M}_{\odot}$, but live in dark matter halos that are, on average, about $0.5$ dex less massive than those of their gaseous counterparts. About 20\% of them contain no dark matter at all, meaning that they are purely stellar clumps. Those objects are mostly tagged as possible halo finder artefacts \citep[][see also \citealt{genel_18} and \citealt{pillepich_18}]{nelson_19_datarelease}. Conversely, out of those gas-free galaxies that do have dark halos, only 3\% are tagged. While the average satellite in our sample resides at a median radius of $0.8 R_{\rm 200c}$ (where $R_{\rm 200c}$ refers to the halo radius of the host halo), the gas-free satellites live closer to their hosts, at a median distance of $0.55 R_{\rm 200c}$. We find virtually the same distribution of distances for the larger population of satellites that fall into the low-gas category in our comparison of gas fractions (Fig.~\ref{fig:fraction}, see also \citealt{marasco_16}). From a visual inspection of the stellar distribution around some gas-free satellites, we conclude that some of them are relatively isolated, but that their majority lives in crowded environments, as expected from their radial distribution.
These properties constitute strong evidence that the majority of gas-free galaxies have been stripped of their gas and most of their dark matter by a larger host halo \citep[see, e.g.,][for related investigations in IllustrisTNG]{yun_19_jellyfish, stevens_19_hi}. The small fraction of gas-free galaxies that are identified as centrals at $z = 0$ may well have had a significant encounter in the past. Interestingly, the stellar size distributions of gas-free galaxies are compatible with those of the overall sample. Thus, it appears that the stellar population is relatively immune to the processes that so efficiently strip dark matter and gas from galaxies \citep[cf.][]{niemiec_19}, presumably because it is much more concentrated than the dark and gaseous components.
In terms of our comparison to observations, the question is whether low-gas satellites would be included in the observational samples used in the \citetalias{calette_18} compilation. The answer depends somewhat on whether we consider ${\rm H}$\,{\sc i}\xspace or ${\rm H}_2$\xspace, and on the stellar mass range. At low masses, where we see a strong excess in the fraction of galaxies with low ${\rm H}$\,{\sc i}\xspace mass, the ${\rm H}$\,{\sc i}\xspace fractions used in \citetalias{calette_18} are dominated by the Updated Nearby Galaxy Catalogue of \citet{karachentsev_13}, which explicitly includes objects in crowded environments. For the ALFALFA-based measurements, we expect no explicit bias against satellites as ALFALFA is a blind survey. The only considerable bias against low-gas satellites could stem from blending, where satellites are assigned additional ${\rm H}$\,{\sc i}\xspace mass due to confusion with other sources (most likely, their central galaxy). However, we show in Appendix~\ref{sec:app:apertures} that blending is a relatively minor effect in our sample.
As there are currently no blind CO surveys at low redshift, many of the ${\rm H}_2$\xspace fraction samples are based on an optical selection using SDSS, meaning that those samples might be biased against close pairs due to fibre collisions \citep{zehavi_02}. However, the median distance of satellites to their host in our TNG100 sample is 140 kpc, which would correspond to the 55'' fibre collision limit at a redshift of $z \approx 0.14$. Since samples such as GASS and COLD GASS have much lower median redshifts, the bulk of their satellite population should not be affected, and there are no further cuts on the proximity to another galaxy \citep{catinella_10, catinella_13, saintonge_11_coldgass, saintonge_17}. Thus, the impact of fibre collisions remains limited.
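As a rough consistency check (our arithmetic, assuming a standard $\Lambda$CDM cosmology), the small-angle relation gives the angular diameter distance at which a 140 kpc separation subtends the 55'' fibre-collision limit:
\[
D_{\rm A} = \frac{d}{\theta} = \frac{140\ {\rm kpc}}{55'' \times 4.85 \times 10^{-6}\ {\rm rad}/''} \approx 0.5\ {\rm Gpc},
\]
which is indeed reached near $z \approx 0.14$.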
In conclusion, IllustrisTNG hosts a population of satellites with very little or no neutral gas, and there is no clear reason why the bulk of such galaxies should not be included in observational samples. Excessive stripping is not likely to be responsible, as \citet{stevens_19_hi} showed that the reduction in ${\rm H}$\,{\sc i}\xspace mass in TNG100 is compatible with observations \citep{brown_17}. We conclude that, despite the overall excess in neutral gas discussed in the previous section, too many galaxies in IllustrisTNG are left with small amounts of neutral gas, perhaps due to excessive feedback. Such issues provide motivation to push observations of the ${\rm H}$\,{\sc i}\xspace content of galaxies to lower gas fractions, for example using new telescopes such as FAST \citep{nan_11}.
\subsection{What Sets the ${\rm H}$\,{\sc i}\xspace Mass-size Relation?}
\label{sec:discussion:hisize}
In Section~\ref{sec:results:size:hi}, we found that IllustrisTNG matches the observed ${\rm H}$\,{\sc i}\xspace mass-size relation with surprising accuracy. The \hi/${\rm H}_2$\xspace models agree almost exactly, an indication that the ${\rm H}$\,{\sc i}\xspace radius is generally not set by the \hi/${\rm H}_2$\xspace transition (see \citealt{wang_16_hi} for observational arguments to the same effect). Instead, one could imagine that ${\rm H}$\,{\sc i}\xspace sizes were driven by the transition from ionized to neutral gas, and thus by the interplay of the UVB and self-shielding. However, we find that the radial profiles of the neutral fraction fall smoothly, with median neutral fractions between 30\% and 60\% at $R_{\rm HI}$ (depending on stellar mass). There is no sharp break in the neutral gas profiles that would set $R_{\rm HI}$. We have also tested the effect of the UVB explicitly, but the ${\rm H}$\,{\sc i}\xspace mass-size relation does not change in a simulation that uses the \citet{puchwein_19} UVB prescription.
These considerations hint that the tight ${\rm H}$\,{\sc i}\xspace mass-size relation arises because the ${\rm H}$\,{\sc i}\xspace density profiles take on a universal form. The mass-size slope of roughly $0.5$ corresponds to a fixed surface density. Indeed, the surface densities approach more or less constant values at small radii (Fig.~\ref{fig:prof_hi}). Thus, we might speculate that the normalization of the mass-size relation is set by the critical surface density where ${\rm H}$\,{\sc i}\xspace transitions to ${\rm H}_2$\xspace \citep[e.g.][]{bigiel_08}. However, changing this surface density by scaling the UV field in our modelling (see \citetalias{diemer_18_hih2}) does not modify the mass-size relation at all.
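The correspondence between a slope of $0.5$ and a fixed surface density follows from a one-line scaling argument (our notation): for a characteristic surface density $\bar{\Sigma}_{\rm HI}$,
\[
M_{\rm HI} \approx \pi \bar{\Sigma}_{\rm HI} R_{\rm HI}^2
\quad \Longrightarrow \quad
\log R_{\rm HI} = 0.5 \log M_{\rm HI} - 0.5 \log \left( \pi \bar{\Sigma}_{\rm HI} \right),
\]
so a constant $\bar{\Sigma}_{\rm HI}$ yields a slope of exactly $0.5$.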
We have also investigated the evolution of the ${\rm H}$\,{\sc i}\xspace mass-size relation with redshift. Between $z = 0$ and $z = 4$, the relation approximately maintains its slope, but the normalization (the ${\rm H}$\,{\sc i}\xspace size) decreases with redshift and the scatter increases slightly. In particular, while the TNG100 median relation lies 14\% above the \citet{wang_14_hi} relation at $z = 0$, it agrees to better than 1\% at $z = 1$ and lies below the relation by 9\% and 11\% at $z = 2$ and 4, respectively (see \citealt{obreschkow_09a} for similar simulation results). This evolution constitutes a prediction for future observations of the ${\rm H}$\,{\sc i}\xspace mass-size relation at high redshift.
In summary, the ${\rm H}$\,{\sc i}\xspace mass-size relation appears to be extremely robust in IllustrisTNG \citep[see also][]{marinacci_17}. The physical reasons for the tight ${\rm H}$\,{\sc i}\xspace mass-size relation will be further investigated in a forthcoming paper (A. Stevens et al. 2019, in preparation).
\subsection{Anecdotal Evidence from Individual Galaxies}
\label{sec:discussion:individual}
The molecular fraction in the Milky Way is observationally known to be about $M_{\rm H_2} / M_{\rm H} \approx 10^{9} / 10^{10} = 0.1$ \citep[e.g.,][see also \citealt{rachford_02}]{tumlinson_02}. When we select Milky Way analogues with an SFR between $0.5$ and $2 {\rm M}_{\odot}/{\>{\rm yr}}$ and a stellar mass around $5 \times 10^{10} {\rm M}_{\odot}$, the \hi/${\rm H}_2$\xspace models predict a median ${\rm H}_2$\xspace fraction between $0.3$ and $0.4$. A value of $0.1$ is outside the 68\% scatter of all models, but there are, of course, some galaxies with such relatively low molecular fractions. Moreover, in \citetalias{diemer_18_hih2} we emphasized that our \hi/${\rm H}_2$\xspace modelling is systematically uncertain to factors of $2$--$3$.
\citet{zhu_18_malin} recently highlighted a particular galaxy in TNG100 to be an analogue of the massive, low surface brightness gas disc Malin 1 \citep{bothun_87}. \citet{braine_00} found no CO emission in Malin 1, translating into an upper limit of $M_{\rm H_2} / M_{\rm H} < 0.03$, given that Malin 1 contains a large ${\rm H}$\,{\sc i}\xspace mass of $M_{\rm HI} = 4 \times 10^{10} {\rm M}_{\odot}$ \citep{pickering_97}. The Malin analogue in TNG100 has a total neutral hydrogen mass of $1.4 \times 10^{11} {\rm M}_{\odot}$, about three times larger than observationally measured. Like in the real Malin galaxy, this gas is distributed in a massive, low surface density disc. Our modelling predicts similarly low molecular fractions between $0.03$ and $0.1$.
\section{Conclusions}
\label{sec:conclusion}
We have compared the atomic and molecular gas content of galaxies in the Illustris TNG100 and TNG300 simulations to observational data at low redshift. While we largely find good agreement, we uncover a few points of tension. Our main conclusions do not depend on the \hi/${\rm H}_2$\xspace model used. They are as follows:
\begin{enumerate}
\item The cosmic abundance of ${\rm H}$\,{\sc i}\xspace at $z = 0$ is about $8 \times 10^{-4}$ in units of the critical density, roughly twice the abundance measured by 21-cm surveys. The cosmic abundance of ${\rm H}_2$\xspace also appears to be slightly higher than observations at $z = 0$, but this disagreement is not conclusive due to selection biases and due to the uncertain CO-to-${\rm H}_2$\xspace conversion factor.
\item The ${\rm H}$\,{\sc i}\xspace mass function generally agrees well with observations, but overestimates the observed counts around $10^9 {\rm M}_{\odot}$. The ${\rm H}_2$\xspace mass function exhibits no significant disagreements with observations.
\item The median ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace fractions and their scatter are generally well matched by the simulations. While the ${\rm H}_2$\xspace fraction is somewhat low at the highest stellar masses, the uncertainties on the corresponding observations are significant. However, IllustrisTNG produces a sizeable population of satellite galaxies (about 25\% at some stellar masses) that have little or no neutral gas, significantly more than observationally inferred.
\item The relation between ${\rm H}_2$\xspace mass and SFR is matched to better than a factor of two in IllustrisTNG, with a constant depletion time of 1 Gyr at $z = 0$. The trend of decreasing depletion time at higher redshift is also reproduced accurately.
\item The observed ${\rm H}$\,{\sc i}\xspace mass-size relation is matched to 14\% accuracy in IllustrisTNG, and is relatively independent of the \hi/${\rm H}_2$\xspace modelling. The radial profiles of ${\rm H}$\,{\sc i}\xspace surface density also agree well, though a fraction of massive galaxies exhibit excessive central ${\rm H}$\,{\sc i}\xspace holes. We find indications that the spatial extent of the ${\rm H}_2$\xspace distribution is somewhat larger than observed, though with significant scatter.
\item The neutral gas content of simulated galaxies does not show the expected correlations with morphology. As a result, the gas fractions of simulated ETGs are too high, though this statement strongly depends on how morphology is defined.
\item The original Illustris-1 simulation overestimates the neutral gas content of galaxies at $z = 0$ by up to an order of magnitude, highlighting that IllustrisTNG represents an enormous improvement in the galaxy formation model.
\end{enumerate}
It is worth noting that our \hi/${\rm H}_2$\xspace results are almost entirely a prediction of the IllustrisTNG simulations. While the model parameters were calibrated to give reasonable gas fractions in cluster halos \citep{pillepich_18_tng}, it is far from obvious that such tuning would automatically result in the correct amount of neutral gas in galaxies over a large range of masses. Similarly, the star formation and feedback models used in the simulations were calibrated to form realistic amounts of stars in halos of a given mass, but it is by no means guaranteed that those stars would form from the right amount of neutral gas. Splitting the gas into ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace introduces additional physical demands on the simulation, for example, because ${\rm H}_2$\xspace forms in high-density regions. Thus, our \hi/${\rm H}_2$\xspace results represent an impressively accurate set of predictions.
However, we have also discovered some areas of tension with observations that can help to inform future generations of cosmological simulations. For example, the sub-grid star formation model of \citet{springel_03} is currently based on a relatively simplistic picture of the temperature and density structure of the ISM, which determines the abundance of cold molecular clouds. Ideally, the ISM model would directly predict the neutral and molecular fractions, rendering post-processing unnecessary. Such a model would be strongly constrained by some of the data we considered in this paper, for instance, by the ${\rm H}_2$\xspace-SFR relation and the spatial distribution of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace. Another area where improved observations could constrain simulation physics is the distribution of ${\rm H}$\,{\sc i}\xspace and ${\rm H}_2$\xspace masses at the gas-poor end. As discussed in Section~\ref{sec:discussion:gasfree}, the gas fraction is sensitive to a number of physical effects besides star formation and feedback, including the stripping of gas from satellites.
We have left many promising avenues for future work. Most notably, we have focused almost entirely on the local Universe, while there is a rapidly growing body of observations of the gas properties and scaling relations at higher redshift \citep[e.g.,][]{tacconi_13, tacconi_18, genzel_15, freundlich_19}. One particularly interesting question is how molecular gas reservoirs are related to star formation and quenching, and whether their relation changes over redshift. For example, \citet{suess_17} recently reported ALMA detections of two galaxies at $z \approx 0.7$ that host massive reservoirs of molecular gas but exhibit a low SFR. An analysis of similar galaxies in IllustrisTNG could perhaps explain their low star formation efficiency. Finally, we have refrained from a detailed comparison of the spatial distribution of molecular gas, which is increasingly being constrained observationally \citep[e.g.,][]{cormier_16, cormier_18, bolatto_17}.
We emphasize that all simulation data used in this work are publicly available as part of the IllustrisTNG data release \citep{nelson_19_datarelease}. We encourage the community to undertake further analyses and data comparisons.
\section*{Acknowledgements}
We are grateful to Yannick Bah{\'e}, Alberto Bolatto, Martin Bureau, Andreas Burkert, John Forbes, Jonathan Freundlich, Hong Guo, Federico Lelli, Gerg{\"o} Popping, Greg Snyder, Paul Torrey, and Rainer Weinberger for productive discussions. We thank the referee, Romeel Dav{\'e}, for his constructive suggestions. This research made extensive use of the python packages \textsc{NumPy} \citep{code_numpy}, \textsc{SciPy} \citep{code_scipy}, \textsc{Matplotlib} \citep{code_matplotlib}, and \textsc{Colossus}\xspace \citep{diemer_18_colossus}. Support for Program number HST-HF2-51406.001-A was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. ARC acknowledges support from a CONACyT graduate fellowship. MV acknowledges support through an MIT RSC award, a Kavli Research Investment Fund, NASA ATP grant NNX17AG29G, and NSF grants AST-1814053 and AST-1814259.
\bibliographystyle{mnras}
\iflocal
\section{Conclusion}
In this paper, we proposed DPA, a transfer learning framework with discriminative pre-training tasks for academic performance prediction.
Our experimental results showed the effectiveness of DPA for the label-scarce academic performance prediction task over the previous state-of-the-art generative pre-training method.
Avenues of future research include investigating more effective pre-training tasks for academic performance prediction and pre-train/fine-tune relations in AIEd.
\section{Experiments}
\subsection{Dataset}
The pre-training dataset consists of student interaction logs.
The statistics of the dataset are summarized in Table \ref{tab:statistics_pre-training}.
We exclude student interaction logs shorter than 15 interactions to reduce noise from students who used \emph{Santa} only to try it out.
Since most of the values of \emph{elapsed\_time}, \emph{exp\_time}, and \emph{inactive\_time} are concentrated at the low end of their distributions, as shown in Figure \ref{fig:data_distribution}, we set their maxima to 300, 300, and 86400 seconds, respectively, and cap any larger values at the maximum.
We further normalize the values so that they are between 0 and 1 by dividing them by the maximum.
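The capping and normalization step can be sketched as follows (feature names and caps follow the text; the implementation itself is our assumption):

```python
import numpy as np

# Caps from the text: 300 s, 300 s, and 86400 s, respectively.
CAPS = {"elapsed_time": 300.0, "exp_time": 300.0, "inactive_time": 86400.0}

def normalize_timing_features(features):
    """Clip each timing feature at its cap, then scale to [0, 1]."""
    out = {}
    for name, values in features.items():
        cap = CAPS[name]
        out[name] = np.clip(np.asarray(values, dtype=float), 0.0, cap) / cap
    return out

feats = normalize_timing_features({
    "elapsed_time": [12.0, 450.0],        # 450 s exceeds the 300 s cap
    "inactive_time": [86400.0, 100.0],
})
print(feats["elapsed_time"])  # [0.04 1.  ]
```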
\begin{table}[ht]
\caption{Statistics of pre-training dataset.}
\centering
\begin{tabular}{l|l}
\toprule
Statistics & Value \\
\toprule
Number of students & 436847 \\
Number of interactions & 135884952 \\
Minimum length of interactions & 15 \\
Maximum length of interactions & 76379 \\
Mean length of interactions & 311.06 \\
Median length of interactions & 50 \\
Correct response ratio & 0.66 \\
Timely response ratio & 0.72 \\
\bottomrule
\end{tabular}
\label{tab:statistics_pre-training}
\end{table}
\begin{table}[ht]
\caption{Statistics of fine-tuning dataset.}
\centering
\begin{tabular}{l|l}
\toprule
Statistics & Value \\
\toprule
Number of students & 6814 \\
Number of score labels & 11212 \\
Minimum length of interactions before the test & 10 \\
Maximum length of interactions before the test & 39040 \\
Mean length of interactions before the test & 1408.31 \\
Median length of interactions before the test & 736 \\
\bottomrule
\end{tabular}
\label{tab:statistics_fine-tuning}
\end{table}
The fine-tuning dataset consists of test scores and student interaction logs before the test.
Table \ref{tab:statistics_fine-tuning} summarizes the statistics of the dataset.
The number of score labels is far less than that of interactions in the pre-training dataset, which leads to the label-scarcity problem in academic test performance prediction.
The minimum length of interactions before the test is 10 because \emph{Santa} required students to solve at least 10 exercises before taking the test.
The mean and median length of interactions before the test are longer than those in the pre-training dataset since students who decide to report their scores tend to be more serious about studying with \emph{Santa}.
As shown in Figure \ref{fig:data_distribution}, most of the scores are distributed over the range from 700 to 900, and there are very few scores in the below 200 area.
This reflects the population of students using \emph{Santa}, whose initial scores usually fall in the 600 to 700 range and whose goal scores are typically above 800.
Since the score labels are few in number, we perform 5-fold cross-validation by dividing the fine-tuning dataset into 5 splits and using 3/5, 1/5, and 1/5 of the dataset for training, validation, and test, respectively.
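A minimal sketch of this splitting scheme (not the authors' released code; the rotating fold assignment is our assumption):

```python
import numpy as np

# Each round holds out one fold for validation and one for test, training on
# the remaining three (3/5, 1/5, 1/5 of the score labels).
def five_fold_splits(n_samples, seed=0):
    folds = np.array_split(np.random.default_rng(seed).permutation(n_samples), 5)
    for k in range(5):
        test_idx = folds[k]
        val_idx = folds[(k + 1) % 5]
        train_idx = np.concatenate(
            [folds[i] for i in range(5) if i not in (k, (k + 1) % 5)])
        yield train_idx, val_idx, test_idx

for tr, va, te in five_fold_splits(11212):   # number of score labels
    print(len(tr), len(va), len(te))
```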
\subsection{Training Details and Hyperparameters}
We use the Mean Absolute Error (MAE) as the metric for academic test performance prediction.
The list of hyperparameters and their values are described in Table \ref{tab:hyperparameters}.
At each pre-training evaluation point, the pre-trained model is fine-tuned and evaluated on the validation set, yielding one validation result per pre-training evaluation.
Then, we select the model with the best validation result and report an evaluation result of the model on the test set.
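This selection protocol can be sketched as follows (checkpoint names and MAE values are hypothetical):

```python
import numpy as np

# MAE as the score metric, and selection of the checkpoint with the best
# (lowest) validation MAE across all pre-training evaluations.
def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

val_results = {"ckpt_05": 53.2, "ckpt_10": 50.7, "ckpt_15": 51.9}
best_ckpt = min(val_results, key=val_results.get)
print(best_ckpt)               # ckpt_10
print(mae([800, 750], [780, 760]))  # 15.0
```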
\begin{table}[ht!]
\caption{Pre-train/fine-tune hyperparameters.}
\centering
\begin{tabular}{l|l}
\toprule
Hyperparameter & Value \\
\toprule
Attention window size & 1024 \\
Masked interaction ratio & 0.6 \\
$\lambda$ & 1 \\
\midrule
\multicolumn{1}{l|}{\textbf{Embedding}} \\
Interaction embedding dimension & 256 \\
Axial positional embedding shape & [32, 32] \\
Axial positional embedding dimension & [64, 192] \\
\midrule
\multicolumn{1}{l|}{\textbf{Generator}} \\
Number of reversible layers & 4 \\
Hidden layer dimension & 64 \\
Hidden activation function & GELU \cite{hendrycks2016gaussian} \\
Hidden layer dropout probability & 0.1 \\
Number of attention heads & 2 \\
Attention head dimension & 64 \\
Attention dropout probability & 0.1 \\
Feed-forward intermediate layer dimension & 256 \\
\midrule
\multicolumn{1}{l|}{\textbf{Discriminator}} \\
Number of reversible layers & 4 \\
Hidden layer dimension & 256 \\
Hidden activation function & GELU \\
Hidden layer dropout probability & 0.1 \\
Number of attention heads & 8 \\
Attention head dimension & 64 \\
Attention dropout probability & 0.1 \\
Feed-forward intermediate layer dimension & 1024 \\
\midrule
\multicolumn{1}{l|}{\textbf{FAVOR+}} \\
Number of random features & 256 \\
Random features redrawing interval & 1000 \\
\midrule
\multicolumn{1}{l|}{\textbf{Optimization}} \\
Optimizer & Adam \cite{kingma2014adam} \\
Adam $\beta_1$ & 0.9 \\
Adam $\beta_2$ & 0.98 \\
Adam $\epsilon$ & 1e-09 \\
Scheduler & Noam \cite{vaswani2017attention} \\
Noam warm-up steps & 4000 \\
\midrule
\multicolumn{1}{l|}{\textbf{Pre-training}} \\
Batch size & 64 \\
Batch update steps before each evaluation & 5000 \\
Number of evaluations & 40 \\
\midrule
\multicolumn{1}{l|}{\textbf{Fine-tuning}} \\
Batch size & 64 \\
Batch update steps before each evaluation & 10 \\
Patience & 30 \\
\bottomrule
\end{tabular}
\label{tab:hyperparameters}
\end{table}
\subsection{Effects of Generator's Pre-training Tasks}
There are multiple interactive features that can be masked in each token of the interaction sequence, which raises the questions of how to construct the set of masked interactive features and, accordingly, which pre-training task for the generator is most effective for academic test performance prediction.
By default, all interactive features listed in Section \ref{sec:transfer_learning} are taken as inputs for both the generator and discriminator.
However, if \emph{response} (or \emph{elapsed\_time}) is masked, \emph{correctness} (or \emph{timeliness}) is excluded from the inputs and vice versa since there is an overlap of information that the features represent.
For example, when both \emph{response} and \emph{correctness} are taken as inputs, and \emph{correctness} is masked, the generator can predict the masked \emph{correctness} by only looking at \emph{eid} and \emph{response} without considering other interactions, which leads to poor pre-training.
The results are described in Table \ref{tab:comparison_pre-training_tasks}.
\begin{table}[h]
\centering
\caption{Comparison between different pre-training tasks.}
\begin{tabular}{l|l}
\toprule
Pre-training task & MAE \\
\toprule
\emph{response} & $\textbf{50.65}\pm1.26$ \\
\emph{response} + \emph{elapsed\_time} & $54.86\pm1.64$ \\
\emph{response} + \emph{timeliness} & $52.91\pm1.38$ \\
\emph{response} + \emph{exp\_time} & $57.54\pm1.47$ \\
\emph{response} + \emph{inactive\_time} & $60.69\pm1.74$ \\
\emph{correctness} & $51.36\pm0.97$ \\
\emph{correctness} + \emph{elapsed\_time} & $53.36\pm1.43$ \\
\emph{correctness} + \emph{timeliness} & $52.60\pm1.20$ \\
\emph{correctness} + \emph{exp\_time} & $54.36\pm1.62$ \\
\emph{correctness} + \emph{inactive\_time} & $55.04\pm1.58$ \\
\emph{response} + \emph{correctness} & $51.13\pm1.60$ \\
\emph{response} + \emph{correctness} + \emph{elapsed\_time} & $52.15\pm1.43$ \\
\emph{response} + \emph{correctness} + \emph{timeliness} & $53.05\pm1.81$ \\
\emph{response} + \emph{correctness}+ \emph{exp\_time} & $53.09\pm1.25$ \\
\emph{response} + \emph{correctness} + \emph{inactive\_time} & $56.41\pm1.72$ \\
\bottomrule
\end{tabular}
\label{tab:comparison_pre-training_tasks}
\end{table}
The best result was obtained under the pre-training task of predicting \emph{response} alone, which is slightly better than predicting \emph{correctness} alone or predicting both \emph{response} and \emph{correctness}.
Predicting correctness of student response is an important task in AIEd as can be seen from the large volume of studies about Knowledge Tracing.
Also, \cite{choi2020assessment} empirically showed that student response correctness is the most pedagogical interactive feature for academic test performance prediction.
However, rather than pre-training a model to predict whether a student correctly responded to a given exercise, the pre-training task of predicting the student response itself injects more fine-grained information into the model, which leads to more effective pre-training for academic test performance prediction.
Interestingly, inferior results were obtained when \emph{elapsed\_time} or \emph{timeliness} was predicted in pre-training, despite the benefits their information brings to several AIEd tasks \cite{feng2009addressing,zhang2017incorporating,shin2020saint+}.
We hypothesize that \emph{elapsed\_time} and \emph{timeliness} may introduce irrelevant noise and thus steer the model in a direction inappropriate for academic test performance prediction.
In the case of \emph{exp\_time} and \emph{inactive\_time}, we observed that the generator failed to learn to predict their values when only given the interactive features listed in Section \ref{sec:transfer_learning}, which leads to unstable pre-training.
From these observations, in the following subsections, we conduct experimental studies based on the pre-training task of predicting \emph{response} alone.
\subsection{DPA vs. Baseline Methods}
\begin{table}[h]
\centering
\caption{Comparison of DPA with baseline methods.}
\begin{tabular}{l|l|l}
\toprule
Pre-training method & Fine-tuning model & MAE \\
\toprule
No pre-training & MLP & $82.89\pm3.23$ \\
& BiLSTM & $84.05\pm2.06$ \\
& Transformer encoder & $107.06\pm2.52$ \\
& Performer encoder & $81.76\pm1.24$ \\
\midrule
AE & MLP & $79.46\pm1.15$ \\
& BiLSTM & $85.64\pm1.89$ \\
& Transformer encoder & $75.13\pm3.10$ \\
& Performer encoder & $64.80\pm1.43$ \\
\midrule
AM & MLP & $77.17\pm2.14$ \\
& BiLSTM & $58.16\pm1.28$ \\
& Transformer encoder & $57.16\pm2.08$ \\
& Performer encoder & $52.79\pm1.39$ \\
\midrule
DPA & MLP & $77.24\pm1.59$ \\
& BiLSTM & $57.59\pm1.76$ \\
& Transformer encoder & $55.99\pm1.62$ \\
& Performer encoder & $\textbf{50.65}\pm1.26$ \\
\bottomrule
\end{tabular}
\label{tab:comparison_baselines}
\end{table}
We compare DPA with the following pre-training methods:
\begin{itemize}
\item No pre-training: We train the fine-tuning models only on the fine-tuning dataset.
\item Autoencoding: Autoencoding (AE) is a generative pre-training method widely used across different domains of machine learning including AIEd \cite{guo2015predicting,ding2019transfer}.
Given an unmasked interaction sequence, AE pre-trains a model to reconstruct the input interaction sequence.
\item Assessment Modeling: Assessment Modeling (AM) \cite{choi2020assessment} is the previous state-of-the-art generative pre-training method for academic test performance prediction.
In AM, a model takes a masked interaction sequence as an input and is pre-trained to predict masked features.
AM is exactly the same as fine-tuning the pre-trained generator in DPA.
\end{itemize}
Also, we investigate whether DPA is effective with the following different fine-tuning models:
\begin{itemize}
\item MLP: Multi-Layer Perceptron (MLP) is stacks of simple fully-connected layers.
Given an interaction sequence, interaction embedding vectors of all time-steps are summed together to compute a fixed-dimensional vector which is fed to a series of the fully-connected layers.
\item BiLSTM: Bi-directional Long Short-Term Memory (BiLSTM) is a model widely used for time series data prediction tasks.
The global max pooling layer is applied on top of the BiLSTM layer to obtain a fixed-dimensional intermediate representation from an input sequence of varying length.
\item Transformer Encoder: Transformer Encoder is a series of several identical layers composed of a multi-head self-attention layer with the softmax attention kernel and a point-wise feed-forward layer.
We set the Transformer encoder’s attention window size to 512 because training it with an attention window size of 1024 runs out of GPU memory on our single-GPU machine.
\end{itemize}
As described in Table \ref{tab:comparison_baselines}, transferring the pre-trained knowledge brings better results in most cases, and the best result is obtained from DPA.
In particular, when the Performer encoder, the best-performing fine-tuning model, is used, DPA reduces MAE by 4.05\%, 21.84\%, and 38.05\% compared to AM, AE, and No pre-training, respectively.
Among the baseline pre-training methods other than No pre-training, the worst result is obtained from AE because the pre-training task of AE is much easier than those of AM and DPA.
We observed that the loss curve of AE converged to near zero within the first pre-training evaluation.
\subsection{Robustness to Increased Label-scarcity}
Since the motivation behind our proposal of DPA is the label-scarcity problem, we investigate how MAE changes at varying degrees of label-scarcity.
Figure \ref{fig:label_scarcity} and Table \ref{tab:comparison_label_scarcity} describe the results when using 1/2, 1/4, and 1/8 of the total number of fine-tuning training samples.
In all degrees of label-scarcity, DPA consistently outperforms AM.
Also, DPA fine-tuned on 1/2, 1/4, and 1/8 of the dataset outperforms AM fine-tuned on the entire dataset, 1/2, and 1/4 of the dataset, respectively, which shows that DPA is more robust to label-scarcity than AM.
The gap between No pre-training and the two pre-training methods widens as labels become scarcer.
Furthermore, the other two pre-training methods fine-tuned on 1/8 of the dataset outperform No pre-training fine-tuned on the entire dataset.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel={Ratio of fine-tuning training samples (N)},
ylabel={MAE},
xmin=0, xmax=1.1,
ymin=50, ymax=95,
xtick={1/8, 1/4, 1/2, 1},
ytick={50, 55, 60, 65, 70, 75, 80, 85, 90, 95},
legend pos=north east,
ymajorgrids=true,
grid style=dashed,
]
\addplot[
color=black,
mark=square,
]
coordinates {
(1/8, 94.21) (1/4, 89.01) (1/2, 85.37) (1, 81.76)
};
\addlegendentry{No pre-training}
\addplot[
color=blue,
mark=square,
]
coordinates {
(1/8, 60.22) (1/4, 57.08) (1/2, 54.29) (1, 52.79)
};
\addlegendentry{AM}
\addplot[
color=red,
mark=square,
]
coordinates {
(1/8, 55.90) (1/4, 53.46) (1/2, 51.38) (1, 50.65)
};
\addlegendentry{DPA}
\end{axis}
\end{tikzpicture}
\caption{The black, blue, and red lines represent MAEs for No pre-training, AM, and DPA, respectively, when the number of fine-tuning training samples becomes 1/2, 1/4, and 1/8 of the entire dataset.}
\label{fig:label_scarcity}
\end{figure}
\begin{table}[t]
\centering
\caption{Comparison of DPA with AM and No pre-training at varying degrees of label-scarcity.}
\begin{tabular}{l|l|l|l}
\toprule
N & No pre-training & AM & DPA \\
\toprule
1/8 & $94.21\pm8.40$ & $60.22\pm1.86$ & $55.90\pm1.97$ \\
1/4 & $89.01\pm2.14$ & $57.08\pm1.75$ & $53.46\pm1.45$ \\
1/2 & $85.37\pm1.15$ & $54.29\pm1.50$ & $51.38\pm1.16$ \\
Full & $81.76\pm1.24$ & $52.79\pm1.39$ & $50.65\pm1.26$ \\
\bottomrule
\end{tabular}
\label{tab:comparison_label_scarcity}
\end{table}
\subsection{Analysis of Predictions by Score Distribution}
Figure \ref{fig:data_distribution} shows that score labels are mainly distributed over specific ranges.
We investigate how this biased distribution of score labels affects model predictions.
The results are described in Figure \ref{fig:score_mae}.
As expected, DPA severely underperforms when the score labels are below 200.
Although these are natural results from a machine learning perspective, this is a serious problem from the perspective of an educational service because students whose scores are lower than 200 receive inaccurate diagnoses of their academic status.
It is also against the equity of education since not all students can receive the same level of educational service.
There may be various research directions to solve this problem, such as generating pseudo labels, measuring prediction uncertainties, or even collecting more score labels.
However, we do not pursue these directions further and leave them as future work.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figs/pred_score.png}
\caption{MAEs by score distribution.}
\label{fig:score_mae}
\end{figure}
\subsection{Analysis of DPA}
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{figs/ablation.pdf}
\caption{Graphical descriptions of DPA, DPA60, AM, RAM, and AAM.}
\label{fig:ablation}
\end{figure*}
The previous experimental results showed that DPA makes better predictions and is more robust to label-scarcity than AM.
However, where the gains from DPA are coming from is not obvious.
We investigate what makes DPA outperform AM by comparing DPA and AM with the following set of ablation pre-training methods:
\begin{itemize}
\item DPA60: DPA60 is the same as DPA except that the discriminator loss tokens come only from the masked interactions.
Since we set the masked interaction ratio to 60\%, DPA60 pre-trains the discriminator with 40\% fewer loss tokens than DPA.
\item RAM: Unlike DPA where there are the generator and the discriminator, in Replaced Assessment Modeling (RAM), there are two generators of different sizes, a small and large generator.
The small generator is pre-trained in the same way as AM.
The large generator takes a replaced interaction sequence which is generated by replacing the masked features with the small generator’s outputs as an input, and is pre-trained to predict the masked features.
After the pre-training, we throw away the small generator and fine-tune the large generator.
The sizes of the small and large generator are the same as those of DPA’s generator and discriminator, respectively.
\item AAM: All-tokens Assessment Modeling (AAM) is the same as RAM except the large generator is pre-trained to predict all features rather than just predicting the masked features.
\end{itemize}
Figure \ref{fig:ablation} depicts a graphical description of each pre-training method.
The results are described in Table \ref{tab:comparison_ablation}.
In the following subsections, we analyze the results in aspects of pre-train/fine-tune discrepancy due to the mask token, discriminative vs. generative pre-training, and sample efficiency.
\begin{table}[t]
\centering
\caption{Comparison of DPA and AM with other ablation pre-training methods.}
\begin{tabular}{l|l}
\toprule
Pre-training method & MAE \\
\toprule
AM & $52.79\pm1.39$ \\
RAM & $52.64\pm1.21$ \\
DPA60 & $51.87\pm1.63$ \\
AAM & $51.07\pm1.23$ \\
DPA & $50.65\pm1.26$ \\
\bottomrule
\end{tabular}
\label{tab:comparison_ablation}
\end{table}
\subsubsection{Pre-train/Fine-tune Discrepancy Due to Mask Token}
During pre-training, the generator in AM sees the mask token which does not appear in fine-tuning, leading to pre-train/fine-tune discrepancy.
\cite{yang2019xlnet,clark2020electra} also raised the same issue found in the MLM pre-training method proposed in \cite{devlin2018bert}.
Since the discriminator in DPA does not see the mask token in either pre-training or fine-tuning, DPA does not suffer from the pre-train/fine-tune discrepancy, so we examine how much gain is obtainable by removing this discrepancy.
Comparing RAM and AM, removing the pre-train/fine-tune discrepancy slightly reduces MAE by 0.28\%.
\subsubsection{Discriminative vs. Generative Pre-training}
The discriminator in DPA and the generator in AM are pre-trained with different objectives.
For each token to be predicted, the discriminator's pre-training loss comes from the discrimination error between the discriminator's output and whether the token is original.
On the other hand, the generator is pre-trained with the generation error between the generator's output and the identity of the token to be predicted.
Comparing DPA with AAM, and DPA60 with RAM, the discriminative pre-training objective reduces MAE by 0.82\% and 1.46\%, respectively, over the generative pre-training objective.
\subsubsection{Sample Efficiency}
The loss tokens for pre-training the discriminator in DPA come from all interactions in the input interaction sequence.
However, the generator in AM is pre-trained with loss tokens only coming from the masked interactions.
Considering the masked interaction ratio is 60\%, DPA is more sample efficient than AM since the discriminator is pre-trained with 40\% more loss tokens than the generator.
Comparing DPA with DPA60, and AAM with RAM, sample efficient pre-training brings reduction in MAE by 2.35\% and 2.89\%, respectively, showing that the sample efficiency contributed most to the improvement from AM to DPA.
\section{Introduction}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figs/label_scarce.pdf}
\caption{Interactive features, such as student response and elapsed time for the response, are automatically recorded to the database whenever a student interacts with ITS.
On the other hand, more complicated steps are necessary to obtain a test score: a student should take the test in the designated test center, receive the test score, and report the score to ITS.}
\label{fig:label_scarce}
\end{figure}
Predicting a student’s future academic performance is a fundamental task for developing a modern Intelligent Tutoring System (ITS), which aims to provide a personalized learning experience by supporting the educational needs of each individual.
However, labels for academic performance, such as test scores, are often scarce since they are external to ITS.
For example, as shown in Figure \ref{fig:label_scarce}, test scores are not automatically collected inside of ITS.
Obtaining a test score requires a student to take the test in the designated test center, receive the score, and report the score to ITS.
Transfer learning is a commonly taken approach to address such label-scarcity problems across different domains of machine learning.
In this framework, a model is first pre-trained to optimize auxiliary objectives with abundant data, and then fine-tuned on the task of interest.
In Artificial Intelligence in Education (AIEd) community, \cite{choi2020assessment} introduced Assessment Modeling (AM), a set of pre-training tasks for label-scarce educational problems including academic performance prediction.
AM proposed a pre-training method where first, a masked interaction sequence is generated by replacing a set of interactive features which can serve as criteria for pedagogical evaluation with artificial mask tokens.
Then, given the masked interaction sequence, a model is pre-trained to predict the masked interactive features.
The idea was borrowed from the Masked Language Modeling (MLM) pre-training method proposed in \cite{devlin2018bert}.
In the MLM pre-training method, given a masked word sequence where some words in the sequence are replaced with an artificial mask token, a model is pre-trained to predict the masked words.
However, recently, \cite{clark2020electra} pointed out that the MLM pre-training method has poor sample efficiency and suffers from pre-train/fine-tune discrepancy due to the artificial mask token, and proposed a new discriminative pre-training method.
Since these problems are also inherent in AM, potential gains can be expected when the discriminative pre-training method is applied to academic performance prediction.
To this end, we propose DPA, a transfer learning framework with \textbf{D}iscriminative \textbf{P}re-training tasks for \textbf{A}cademic performance prediction.
There are two models in DPA: a generator and a discriminator.
In DPA’s pre-training phase, the generator is trained to predict the masked interactive features in the same way as AM.
Then, given a replaced interaction sequence which is generated by replacing the masked features with the generator’s outputs, the discriminator is trained to predict whether each token in the sequence is the same as the one in the original interaction sequence.
After the pre-training, the generator is thrown away and only the discriminator is fine-tuned on academic performance prediction.
Also, we investigate diverse pre-training tasks for the generator and show that pre-training the generator to predict a student’s response is more effective than pre-training it to predict the correctness and timeliness of the response, which were considered the most pedagogical interactive features in AM.
Extensive experimental studies conducted on a real-world dataset collected from a multi-platform ITS application show that DPA outperforms AM with a 4.05\% reduction in mean absolute error and is more robust as the degree of label-scarcity increases.
Through a series of ablation experiments, we show that DPA’s sample efficient pre-training contributed most to the improvement from AM to DPA.
\section{Proposed Method}
Figure \ref{fig:overall_architecture} depicts our proposed method.
There are two models in DPA: a generator and a discriminator.
In pre-training phase, given a sequence of interactions $I = [I_1, \dots, I_T]$, where each interaction $I_t = \{f^1_t, \dots, f^k_t\}$ is a set of interactive features $f^i_t$, such as \emph{eid}, \emph{part}, and \emph{response}, a masked interaction sequence $I^M = [I^M_1, \dots, I^M_T]$ is generated by first randomly selecting a set of positions to mask $M = \{M_1, \dots, M_m\}$ $(m < T)$, and for the masked position $M_i$, masking out a fixed set of features $\{f^1_{M_i}, \dots, f^n_{M_i}\}$ $(n < k)$.
For instance, in Figure \ref{fig:overall_architecture}, if the original interaction sequence is [(e419, part4, b), (e23, part3, c), (e4324, part3, a), (e5233, part1, a)] where each token in the sequence is a set of \emph{eid}, \emph{part}, and \emph{response}, a masked interaction sequence where $M = \{2, 3\}$ and \emph{response} as a masked feature is [(e419, part4, b), (e23, part3, \emph{mask}), (e4324, part3, \emph{mask}), (e5233, part1, a)].
Then, the generator takes the masked interaction sequence $I^M$ as an input, and outputs predicted values $O^G_{ij}$ for the masked features $f^j_{M_i}$.
After that, a replaced interaction sequence $I^R = [I^R_1, \dots, I^R_T]$ is generated by replacing the masked features $f^j_{M_i}$ with the generator's predictions $O^G_{ij}$.
In Figure \ref{fig:overall_architecture}, since the generator's outputs for the masked features are ‘b’ and ‘a’, a replaced interaction sequence is [(e419, part4, b), (e23, part3, b), (e4324, part3, a), (e5233, part1, a)].
Then, the discriminator takes the replaced interaction sequence $I^R$ as an input, and predicts whether each token in the sequence is the same as the one in the original interaction sequence (original) or not (replaced).
After the pre-training, we throw away the generator and fine-tune the pre-trained discriminator on academic test performance prediction.
We provide detailed explanations of each component in the generator and the discriminator, and training objective functions in the following subsections.
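As a rough sketch of the masking and replacement steps described above (the dictionary representation of interactions and all function names here are our own illustrative choices, not DPA's actual implementation):

```python
import random

def make_masked_sequence(interactions, masked_feature="response",
                         mask_ratio=0.6, rng=random):
    """Build I^M by masking `masked_feature` at m randomly chosen positions M."""
    T = len(interactions)
    m = max(1, int(mask_ratio * T))
    M = set(rng.sample(range(T), m))
    masked = [dict(it, **{masked_feature: "mask"}) if t in M else dict(it)
              for t, it in enumerate(interactions)]
    return masked, M

def make_replaced_sequence(interactions, M, generator_outputs,
                           masked_feature="response"):
    """Build I^R by filling masked positions with the generator's outputs,
    plus per-token original (1) / replaced (0) labels for the discriminator."""
    replaced = [dict(it) for it in interactions]
    for pos, pred in zip(sorted(M), generator_outputs):
        replaced[pos][masked_feature] = pred
    labels = [int(r == o) for r, o in zip(replaced, interactions)]
    return replaced, labels
```

On the running example, masking the \emph{response} of the second and third interactions and filling them with the generator outputs ‘b’ and ‘a’ yields the discriminator labels original, replaced, original, original.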
\subsection{Interaction Embeddings}
The embedding layer produces a sequence of interaction embedding vectors by mapping each interactive feature to an appropriate embedding vector.
We take two different approaches to embed the interactive features depending on whether they are categorical (\emph{eid}, \emph{part}, \emph{response}, \emph{correctness}, and \emph{timeliness}) or continuous (\emph{elapsed\_time}, \emph{exp\_time}, and \emph{inactive\_time}) variables.
If an interactive feature is a categorical variable, we assign unique latent vectors to possible values of the feature including special values for mask (\emph{mask}) and classification (\emph{cls}).
Taking \emph{response} as an example, there is an embedding matrix $E_{response} \in \mathbb{R}^{6 \times d_{emb}}$ where each row vector is assigned to one of ‘a’, ‘b’, ‘c’, ‘d’, \emph{mask}, and \emph{cls}.
If an interactive feature is a continuous variable, we assign a single latent vector to the feature.
Then, an embedding vector for the feature is computed by multiplying the latent vector and a value of the feature.
For instance, we compute an embedding vector for \emph{elapsed\_time} as $et * E_{elapsed\_time}$, where $et$ is a specific value and $E_{elapsed\_time} \in \mathbb{R}^{d_{emb}}$ is a latent vector assigned to \emph{elapsed\_time}.
Also, mask and classification for the continuous interactive features are indicated by setting their values to -1 and 0, respectively.
In addition to embeddings for interactive features, positional embeddings are incorporated into Transformer-based models \cite{vaswani2017attention} to account for the chronological order of each token.
Rather than using conventional positional embeddings, which store an embedding vector for every possible position, we adopt axial positional embeddings \cite{kitaev2020reformer} to further reduce memory usage.
The final interaction embedding vector of dimension $d_{emb}$ for each time-step is the sum of all embedding vectors in the time-step.
The interaction embedding layer is shared by both the generator and the discriminator.
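A minimal NumPy sketch of the two embedding schemes, using toy dimensions and random stand-ins for the learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb = 8  # toy dimension; the paper uses 256

# Categorical feature: one learned row per value, including `mask` and `cls`.
E_response = rng.normal(size=(6, d_emb))
response_vocab = {"a": 0, "b": 1, "c": 2, "d": 3, "mask": 4, "cls": 5}

# Continuous feature: a single latent vector scaled by the feature value
# (values of -1 and 0 indicate mask and classification, respectively).
E_elapsed_time = rng.normal(size=(d_emb,))

def embed_interaction(response, elapsed_time):
    """Sum the per-feature embedding vectors into one interaction embedding."""
    return E_response[response_vocab[response]] + elapsed_time * E_elapsed_time
```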
\subsection{Performer Encoder}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{figs/performer_encoder.pdf}
\caption{The reversible layer in the Performer encoder is composed of the FAVOR+-based multi-head attention layer and the point-wise feed-forward layer.}
\label{fig:performer_encoder}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{figs/data_dist.pdf}
\caption{Distributions of \emph{elapsed\_time}, \emph{exp\_time}, \emph{inactive\_time}, and score labels.}
\label{fig:data_distribution}
\end{figure*}
Since its successful debut in Natural Language Processing (NLP) community, Transformer’s attention mechanism has become a common recipe adopted across different domains of machine learning including speech processing \cite{li2019neural}, computer vision \cite{chen2020generative,dosovitskiy2020image}, and AIEd \cite{pandey2019self,choi2020towards,ghosh2020context,shin2020saint+}.
Compared to Recurrent Neural Network (RNN) family models, Transformer’s attention mechanism has benefits of capturing longer-range dependencies and allowing parallel training, which enables the model to achieve better performance with less training time.
However, despite these advantages, the time and memory complexities of computing the attention grow quadratically with respect to input sequence length, requiring demanding computing resources for training the model on long sequences.
For instance, if $L$ is input sequence length and $d$ is dimension of query, key, and value vectors, Transformer’s attention is computed as follows:
\begin{align*}
\text{Attention}(Q, K, V) = \text{softmax}\Big(\frac{QK^\top}{\sqrt{d}}\Big)V,
\end{align*}
where $Q, K, V \in \mathbb{R}^{L \times d}$.
The time and memory complexities for computing $QK^\top$ in the above equation are $O(L^2d)$ and $O(L^2)$, respectively.
Therefore, the cost for training Transformer becomes prohibitive with large $L$, preventing training the model even on a single GPU.
The problem of improving the efficiency of Transformer's attention mechanism is a common concern of machine learning community.
Recent studies have proposed several methods to reduce the computing complexities lower than the quadratic degree with respect to input sequence length \cite{kitaev2020reformer,wang2020linformer,katharopoulos2020transformers,tay2020sparse,choromanski2020rethinking}.
In this paper, we adopt Performer \cite{choromanski2020rethinking} since it uses reasonable memory and makes a better trade-off between speed and performance \cite{tay2020long}.
Performer approximates attention kernels through Fast Attention Via positive Orthogonal Random features (FAVOR+) approach.
The R+-part in FAVOR+ computes $Q’$ and $K’$ by applying random feature map $\phi: \mathbb{R}^d \rightarrow \mathbb{R}^r_+$ to each query vector $q$ and key vector $k$ in $Q$ and $K$, respectively.
\begin{align*}
Q' = [\phi(q_1), \dots, \phi(q_L)],\quad K' = [\phi(k_1), \dots, \phi(k_L)] \in \mathbb{R}^{L \times r}.
\end{align*}
The FA-part leads to efficient attention mechanism computed as follows:
\begin{align*}
\text{FAVOR+}(Q, K, V) &= D^{-1}(Q'((K')^\top V)) \\
D &= diag(Q'((K')^\top 1_L)),
\end{align*}
where $1_L \in \mathbb{R}^L$ is the all-ones vector of length $L$ and $diag(\cdot)$ denotes the diagonal matrix with the input vector on its diagonal.
Unlike Transformer’s attention mechanism, which computes $QK^\top$, applies the softmax, and multiplies the result by $V$, FAVOR+ first computes $(K')^\top V$ and then multiplies the result by $Q'$, reducing the quadratic time and memory complexities to linear ones with respect to input sequence length.
Also, FAVOR+ can model most attention kernels used in practice.
If $\phi$ is a function given as below, FAVOR+ approximates the softmax attention kernel used in Transformer's attention mechanism.
\begin{align*}
\phi(x) = \mathbb{E}_{w \sim \mathbb{N}(0,I_d)}\left[\exp{\big(w^\top x - \frac{\norm{x}^2}{2}\big)}\right].
\end{align*}
Furthermore, the authors proposed a generalized attention kernel, an attempt to model a kernelizable attention mechanism beyond the commonly used softmax attention kernel, by defining $\phi$ as below.
\begin{align*}
\phi(x) = \mathbb{E}_{w \sim \mathbb{N}(0,I_d)}\left[\text{ReLU}(w^\top x)\right],
\end{align*}
where ReLU is the widely used Rectified Linear Unit function.
In this paper, we use the generalized attention kernel instead of the softmax attention kernel since the former showed better performance in \cite{choromanski2020rethinking} and our preliminary experiments.
Lastly, the O-part further reduces the variance of the estimator by making different random samples $w$s to be orthogonal to each other through the standard Gram-Schmidt orthogonalization process.
For more theoretical details of FAVOR+, please refer to \cite{choromanski2020rethinking}.
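A compact NumPy sketch of the FAVOR+ computation order with the ReLU (generalized attention) feature map; the feature normalization and the orthogonalization of the random samples $w$ are omitted, so this is an un-normalized illustration rather than the full estimator:

```python
import numpy as np

def relu_features(X, W):
    """phi(x) = ReLU(Wx) applied row-wise: (L, d) -> (L, r)."""
    return np.maximum(X @ W.T, 0.0)

def favor_plus(Q, K, V, W, eps=1e-6):
    """D^{-1} (Q' ((K')^T V)): computing (K')^T V first makes the cost
    O(L * r * d) in time and O(L * r) in memory instead of quadratic in L."""
    Qp, Kp = relu_features(Q, W), relu_features(K, W)
    numer = Qp @ (Kp.T @ V)              # (L, r) @ (r, d) -> (L, d)
    denom = Qp @ Kp.sum(axis=0)          # Q' ((K')^T 1_L), shape (L,)
    return numer / (denom[:, None] + eps)
```

Because the implied attention weights are non-negative under the ReLU features, each output row is (up to the $\epsilon$ stabilizer) a convex combination of the rows of $V$.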
With the efficient attention mechanism by FAVOR+, we propose the Performer encoder which is stacks of several identical reversible layers described in Figure \ref{fig:performer_encoder}.
The reversible layer is based on Reversible Transformer \cite{gomez2017reversible,kitaev2020reformer} architecture to further improve memory efficiency in back-propagation.
An input of the reversible layer $x \in \mathbb{R}^{L \times d_{hidden}}$ is first chunked to $x_1, x_2 \in \mathbb{R}^{L \times d_{hidden}/2}$.
Then, scaled $l_2$ normalization (ScaleNorm) \cite{nguyen2019transformers} and FAVOR+-based multi-head attention layer (MultiHeadAttn) are applied to $x_2$, and the result is added to $x_1$ to compute $y_1 \in \mathbb{R}^{L \times d_{hidden}/2}$.
\begin{align*}
y_1 = x_1 + \text{MultiHeadAttn}(\text{ScaleNorm}(x_2)).
\end{align*}
After that, the scaled $l_2$ normalization and point-wise feed-forward layer (FeedForward) are applied to $y_1$, and the result is added to $x_2$, computing $y_2 \in \mathbb{R}^{L \times d_{hidden}/2}$.
\begin{align*}
y_2 = x_2 + \text{FeedForward}(\text{ScaleNorm}(y_1)).
\end{align*}
An output of the reversible layer $y \in \mathbb{R}^{L \times d_{hidden}}$ is a concatenation of $y_1$ and $y_2$.
We stack the reversible layer multiple times to allow the final model to sufficiently represent underlying data distribution.
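The two residual updates above can be sketched, and inverted, which is what makes back-propagation memory-efficient, as follows; `attn` and `ff` are stand-ins for the FAVOR+-based attention and feed-forward sublayers:

```python
import numpy as np

def scale_norm(x, g=1.0, eps=1e-5):
    """Scaled l2 normalization along the feature axis."""
    return g * x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def reversible_layer(x, attn, ff):
    """Chunk x, then y1 = x1 + Attn(ScaleNorm(x2)); y2 = x2 + FF(ScaleNorm(y1))."""
    x1, x2 = np.split(x, 2, axis=-1)
    y1 = x1 + attn(scale_norm(x2))
    y2 = x2 + ff(scale_norm(y1))
    return np.concatenate([y1, y2], axis=-1)

def invert_reversible_layer(y, attn, ff):
    """Recover the layer input from its output without stored activations."""
    y1, y2 = np.split(y, 2, axis=-1)
    x2 = y2 - ff(scale_norm(y1))
    x1 = y1 - attn(scale_norm(x2))
    return np.concatenate([x1, x2], axis=-1)
```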
\subsection{Generator}
The generator computes hidden representations $[h^G_1, \dots, h^G_T]$ by feeding the masked interaction sequence $I^M$ to a series of the interaction embedding layer (InterEmbedding), a point-wise feed-forward layer (GenFeedForward1), the Performer encoder (GenPerformerEncoder), and another point-wise feed-forward layer (GenFeedForward2):
\begin{align*}
& [I^{ME}_1, \dots, I^{ME}_T] = \text{InterEmbedding}([I^M_1, \dots, I^M_T]) \\
& [h^{GF}_1, \dots, h^{GF}_T] = \text{GenFeedForward1}([I^{ME}_1, \dots, I^{ME}_T]) \\
& [h^{GP}_1, \dots, h^{GP}_T] = \text{GenPerformerEncoder}([h^{GF}_1, \dots, h^{GF}_T]) \\
& [h^G_1, \dots, h^G_T] = \text{GenFeedForward2}([h^{GP}_1, \dots, h^{GP}_T]),
\end{align*}
where $I^{ME}_t, h^G_t \in \mathbb{R}^{d_{emb}}$ and $h^{GF}_t, h^{GP}_t \in \mathbb{R}^{d_{gen\_hidden}}$.
Then, depending on whether the masked features are categorical or continuous variables, generator outputs are computed differently.
If the masked features are categorical variables, the outputs are sampled from a probability distribution defined by the following softmax layer:
\begin{align*}
O^G_{ij} \sim P_G(f^j_{M_i} | I^M) = \text{softmax}(E_j h^G_{M_i}).
\end{align*}
If the masked features are continuous variables, the outputs are computed by the following sigmoid layer:
\begin{align*}
O^G_{ij} = \text{sigmoid}(E_j^\top h^G_{M_i}).
\end{align*}
Similar to the case of categorical masked features, one can sample the outputs from a probability distribution defined by $I^M$ and parameters of the generator when the masked features are continuous variables.
For instance, the outputs can be sampled from the Gaussian distribution where the mean and the variance are determined by $I^M$ and the generator's parameters.
However, we make the outputs deterministic because sampling the outputs underperforms in our preliminary experiments when the masked features are continuous variables.
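Under the assumption (not spelled out above) that $E_j$ denotes the output embedding for the $j$-th feature, a matrix with one row per class in the categorical case and a single vector in the continuous case, the two output heads can be sketched as:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                     # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def categorical_head(E_j, h, rng):
    """Sample a class for a masked categorical feature from softmax(E_j h)."""
    p = softmax(E_j @ h)                # (num_classes,)
    return rng.choice(len(p), p=p)

def continuous_head(e_j, h):
    """Deterministic output in (0, 1) for a masked continuous feature."""
    return 1.0 / (1.0 + np.exp(-(e_j @ h)))   # sigmoid(e_j^T h)

rng = np.random.default_rng(0)
d_emb, num_classes = 16, 4
h = rng.standard_normal(d_emb)                 # hidden state h^G at a masked step
E = rng.standard_normal((num_classes, d_emb))  # illustrative embedding matrix
e = rng.standard_normal(d_emb)                 # illustrative embedding vector
cls, val = categorical_head(E, h, rng), continuous_head(e, h)
assert 0 <= cls < num_classes and 0.0 < val < 1.0
```

Sampling (rather than taking the argmax) in the categorical case is what produces plausible-but-not-always-original replacements for the discriminator to detect.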
\subsection{Discriminator}
In pre-training, outputs of the discriminator $O^D = [O^D_1, \dots,$ $O^D_T]$ are computed by applying a series of the interaction embedding layer (InterEmbedding), a point-wise feed-forward layer (DisFeedForward1), the Performer encoder (DisPerformerEncoder), and another point-wise feed-forward layer (DisFeedForward2) to the replaced interaction sequence $I^R$:
\begin{align*}
& [I^{RE}_1, \dots, I^{RE}_T] = \text{InterEmbedding}([I^R_1, \dots, I^R_T]) \\
& [h^{DF}_1, \dots, h^{DF}_T] = \text{DisFeedForward1}([I^{RE}_1, \dots, I^{RE}_T]) \\
& [h^{DP}_1, \dots, h^{DP}_T] = \text{DisPerformerEncoder}([h^{DF}_1, \dots, h^{DF}_T]) \\
& [O^D_1, \dots, O^D_T] = \text{DisFeedForward2}([h^{DP}_1, \dots, h^{DP}_T]),
\end{align*}
where $I^{RE}_t \in \mathbb{R}^{d_{emb}}$, $h^{DF}_t, h^{DP}_t \in \mathbb{R}^{d_{dis\_hidden}}$, $O^{D}_t \in \mathbb{R}$, and the sigmoid is applied to the last layer of the discriminator.
After the pre-training, we slightly modify the discriminator by replacing the last layer with a layer of the appropriate dimension for academic test performance prediction.
\subsection{Training Objectives}
The objective for pre-training is to minimize the following loss function:
\begin{align*}
\sum_{i=1}^m \sum_{j=1}^n \text{GenLoss}(O^G_{ij}, f^j_{M_i}) + \lambda \sum_{t=1}^T \text{DisLoss}(O^D_t, \mathbbm{1}(I^R_t = I_t)),
\end{align*}
where GenLoss is the cross entropy (or mean squared error) loss function if the masked features are categorical (or continuous) variables, DisLoss is the binary cross entropy loss function, and $\mathbbm{1}$ is the indicator function.
For ease of notation, we omit an index for each input sample in the above equation.
If there is more than one masked feature in each time-step ($n > 1$), the generator is trained under a multi-task learning scheme.
The objective for fine-tuning is to minimize the mean squared error loss between the model's predictions and score labels.
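Assuming categorical masked features, the combined pre-training objective can be sketched as below; the helper names are illustrative and the per-sample index is omitted, as in the equation above:

```python
import numpy as np

def gen_loss(probs, target):
    # Cross entropy for one masked categorical feature.
    return -np.log(probs[target] + 1e-12)

def dis_loss(p_original, is_original):
    # Binary cross entropy: was the token at this step left unreplaced?
    y = float(is_original)
    return -(y * np.log(p_original + 1e-12)
             + (1.0 - y) * np.log(1.0 - p_original + 1e-12))

def pretrain_loss(gen_probs, gen_targets, dis_probs, dis_labels, lam=1.0):
    """Generator loss summed over masked positions/features plus
    lambda times the discriminator loss summed over all time-steps."""
    g = sum(gen_loss(p, t) for p, t in zip(gen_probs, gen_targets))
    d = sum(dis_loss(p, y) for p, y in zip(dis_probs, dis_labels))
    return g + lam * d

# One masked position with 2 classes, two discriminator time-steps.
loss = pretrain_loss([np.array([0.7, 0.3])], [0], [0.9, 0.2], [True, False])
assert loss > 0.0
```

The loss vanishes only when the generator puts all probability mass on the original feature values and the discriminator perfectly separates replaced from original tokens.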
\section{Related Works}
\subsection{Transfer Learning}
Transfer learning is a widely studied research topic across different domains of machine learning.
A common approach in the NLP field is to pre-train a model on a large corpus and fine-tune the pre-trained model on downstream tasks such as machine translation, question answering, and reading comprehension.
\cite{radford2018improving,radford2019language,brown2020language} made dramatic improvements in several downstream tasks by pre-training a model with an autoregressive language modeling objective.
Further progress was made with other pre-training objectives, including masked language modeling \cite{devlin2018bert}, permutation language modeling \cite{yang2019xlnet}, and discriminative pre-training \cite{clark2020electra}.
In the computer vision domain, \cite{chen2020generative} and \cite{dosovitskiy2020image} outperformed previous approaches by pre-training a model in pixel-wise and patch-wise fashion, respectively.
Transfer learning has also been applied to a large body of work for AIEd tasks.
\cite{su2018exercise,liu2019ekt}, \cite{sung2019improving}, and \cite{huang2017question} computed latent representations of exercises or students’ answers by mapping words in their text descriptions to embedding vectors pre-trained from a large corpus for Knowledge Tracing, answer grading, and exercise difficulty prediction, respectively.
\cite{ding2019transfer} introduced Auto-Encoder (AE)-based transfer learning method for course dropout prediction in edX Massive Open Online Course (MOOC) platform.
They proposed two different transfer learning methods, passive AE transfer and active AE transfer, and investigated transferability across similar and dissimilar MOOC courses.
\subsection{Student Performance Prediction}
Given records of a student's learning activities, estimating their grade, score, or other academic status (such as pass/fail) on a specific course, or their eligibility for graduation, is an important task for many downstream educational applications.
Student performance prediction has many similarities with Knowledge Tracing \cite{corbett1994knowledge,piech2015deep,zhang2017dynamic,choi2020towards,shin2020saint+}, which is commonly formulated as the task of sequentially predicting a student's response correctness to the next given exercise.
However, in this subsection, we make a distinction between student performance prediction and Knowledge Tracing by narrowing down the range of student performance prediction to tasks where a prediction label is only given at the end of the learning activities sequence, and focus on the previous studies that meet this condition.
A wide range of approaches to student performance prediction has been proposed, including Matrix Factorization (MF) \cite{ren2017grade,zhong2019constrained}, MLP \cite{raga2019early}, RNN \cite{kim2018gritnet,hu2019reliable,li2020student}, and Graph Neural Networks (GNN) \cite{hu2019academic,karimi2020online,li2020peer}.
\cite{ren2017grade} proposed integrating course-wise influence effects and temporal effects in the MF algorithm for predicting grades that a student will get from courses in the next term.
Similar to other MF-based methods, the competence that a student has over a course was calculated by the dot product between the latent vectors of the student and the course.
However, terms representing the student's previous performance on other courses were added to the objective function, which makes it possible to model pairwise course relationships.
\cite{zhong2019constrained} observed that some courses are very popular while others are not, resulting in an unbalanced distribution of course selection rates. To reduce this influence, additional terms were integrated into the MF objective function to pull predicted scores toward course average scores.
With the rise of deep learning, several studies explored student performance prediction problem from a deep learning perspective, and adopted Neural Networks (NN) as a basic building block for their proposed model.
\cite{raga2019early} developed an MLP model that predicts pass/fail for several classes based on online learning activity logs collected from the Moodle Learning Management System.
\cite{kim2018gritnet} proposed a BiLSTM-based graduation prediction model which outperformed a logistic regression baseline on the Udacity dataset.
\cite{li2020student} suggested the Sequential Prediction based on Deep Network (SPDN) model, a combination of a Multi-source Fusion Convolutional Neural Network (MFCNN) and a BiLSTM.
The MFCNN component in the SPDN model computed a compressed representation of a student's multiple weekly learning activities. The representation was then fed to the BiLSTM to estimate the student's at-risk probability.
Beyond prediction accuracy, the reliability and interpretability of predictions are also important considerations for many educational systems.
\cite{hu2019reliable} estimated uncertainties of MLP and LSTM models’ grade predictions through MC-dropout \cite{gal2016dropout} technique and computed the influence of a prior course grade on a target course grade to provide explainability to the models’ predictions.
Recently, taking the course dependency structure into consideration, GNN-based models have become an emerging approach for student performance prediction.
The Deep Online Performance Evaluation framework introduced in \cite{karimi2020online} learned student and course embeddings through a relational GNN from a student-course-relation knowledge graph.
\cite{hu2019academic} incorporated the attention layer into the Graph Convolutional Network to help identify prior courses important for predictions.
\cite{li2020peer} constructed a student-interaction-exercise graph and presented Residual Relational GNN which incorporated residual connections between different convolutional layers and original features.
\subsection{Transfer Learning for Student Performance Prediction}
Like other domains in machine learning, transfer learning has also been successfully applied to student performance prediction.
A sparse AE-based pre-training approach for grade prediction was proposed in \cite{guo2015predicting}, where each hidden layer of an NN was greedily pre-trained in an unsupervised fashion.
\cite{hunt2017transfer} employed TrAdaBoost \cite{10.1145/1273496.1273521} algorithm, an extended version of the AdaBoost algorithm for transfer learning, for graduation prediction.
QuesNet \cite{yin2019quesnet} is an exercise embedding model pre-trained with heterogeneous contents in an exercise including text, images and side information, such as required knowledge to solve the exercise.
The authors suggested two pre-training objectives: Holed Language Modeling (HLM) and Domain-Oriented Objective (DOO).
The HLM aimed to learn low-level representations, while the objective of DOO was to capture domain-specific logic and knowledge.
\cite{choi2020assessment} introduced Assessment Modeling, a pre-training framework for label-scarce educational problems.
They pre-trained a model to predict assessments, interactive features with pedagogical meaning, and fine-tuned the model on several label-scarce educational tasks, including exam score prediction and review correctness prediction.
\section{Santa: A Self-study Solution \\
Equipped with an AI Tutor for English Education}
In this paper, we conduct experiments on a real-world dataset obtained from \emph{Santa}\footnote{\url{https://aitutorsanta.com}}, a multi-platform ITS with more than a million users in South Korea available through Android,
iOS, and Web that exclusively focuses on the Test of English for International Communication (TOEIC) standardized examination.
The publicly accessible version of the dataset was released under the name \emph{EdNet} \cite{choi2020ednet}.
The TOEIC consists of two timed sections, Listening Comprehension (LC) and Reading Comprehension (RC).
There are a total of 100 multiple choice exercises in each section, and the total score for each section is 495 in steps of 5 points.
\emph{Santa} provides learning experiences of solving exercises, studying explanations, and watching lectures.
When a student consumes a specific learning content, \emph{Santa} diagnoses their current academic status based on their learning activities records and recommends another learning content appropriate for their current position.
\emph{Santa} records diverse types of interactive features, such as student response, the duration of time the student took to respond, and the time interval between the current and previous learning activities.
However, unlike the interactive features automatically collected from \emph{Santa}, obtaining the official TOEIC score requires more steps: a student should register and pay for the test, take the test in the designated test center, receive the test score from the Educational Testing Service, and report the score to \emph{Santa} (Figure \ref{fig:label_scarce}).
\emph{Santa} collected students’ TOEIC score data by offering small gifts to students when they report their scores.
\section{Transfer Learning for Academic Test Performance Prediction}
\label{sec:transfer_learning}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figs/overall_architecture_3.pdf}
\caption{The overall pre-training/fine-tuning process of DPA when each token in an interaction sequence is a set of \emph{eid}, \emph{part}, and \emph{response}, and \emph{response} is a feature being masked.
\emph{mask} and \emph{cls} are special tokens for mask and classification, respectively, which are the same as the ones used in \cite{devlin2018bert}.}
\label{fig:overall_architecture}
\end{figure*}
To overcome the label-scarcity problem in academic test performance prediction, we turn to the burgeoning machine learning discipline of transfer learning.
There is an open issue of what information to transfer or which pre-training task is the most effective for academic test performance prediction.
Previous studies proposed two types of pre-training methods for AIEd tasks: interaction-based methods, which model students' dynamic learning behaviors \cite{guo2015predicting,hunt2017transfer,ding2019transfer,choi2020assessment}, and content-based methods, which learn representations of learning contents \cite{huang2017question,su2018exercise,liu2019ekt,sung2019improving,yin2019quesnet}.
\cite{choi2020assessment} showed that the interaction-based pre-training method outperforms content-based pre-training methods when the pre-trained model is fine-tuned on several label-scarce educational tasks, including academic test performance prediction.
Following this line of research, we propose a transfer learning framework in which a model is pre-trained using only student interaction data and then fine-tuned on academic test performance prediction.
In this paper, we consider the following interactive features:
\begin{itemize}
\item \emph{eid}: A unique ID assigned to an exercise solved by a student.
There are a total of 14419 exercises in the dataset.
\item \emph{part}: Each exercise belongs to a specific part that represents the type of the exercise.
There are a total of 7 parts in the TOEIC.
\item \emph{response}: Since the TOEIC consists of multiple choice exercises and there are four options for each exercise, a student response for a given exercise is one of the options, ‘a’, ‘b’, ‘c’, or ‘d’.
\item \emph{correctness}: Whether a student responded correctly to a given exercise.
Note that \emph{correctness} is a coarse version of \emph{response} since \emph{correctness} is processed by comparing \emph{response} with a correct answer for a given exercise.
\item \emph{elapsed\_time}: The amount of time a student spent on solving a given exercise.
\item \emph{timeliness}: Whether a student responded to a given exercise under the time limit.
Note that \emph{timeliness} is a coarse version of \emph{elapsed\_time} since \emph{timeliness} is processed by comparing \emph{elapsed\_time} with the time limit recommended by domain experts for a given exercise.
\item \emph{exp\_time}: The amount of time a student spent on studying an explanation for an exercise they had solved.
\item \emph{inactive\_time}: The time interval between the current and previous interactions.
\end{itemize}
In our experiments, we normalize the values of \emph{elapsed\_time}, \emph{exp\_time}, and \emph{inactive\_time} so they are between 0 and 1 to stabilize the training process.
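One simple way to realise this, assuming (as an illustration; the text does not specify the exact scheme) a clip-and-rescale normalization with a per-feature cap:

```python
def normalize(values, cap):
    """Clip a time-valued feature at `cap` seconds and rescale into [0, 1]."""
    return [min(v, cap) / cap for v in values]

elapsed_time = [12.0, 45.0, 300.0, 90.0]   # seconds, illustrative values
norm = normalize(elapsed_time, cap=180.0)  # cap long outliers at 3 minutes
assert all(0.0 <= v <= 1.0 for v in norm)
```

Clipping before rescaling keeps rare very long inactivity gaps from compressing the bulk of the feature values toward zero.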
\section{Introduction}\label{Intro}
\subsection{Background}\label{Background}
The original combination of a set of existing technologies (distributed ledgers, public-key encryption, Merkle tree hashing, consensus protocols) gave rise to the peer-validated decentralised cryptocurrency called Bitcoin, introduced by Satoshi Nakamoto in 2008 \cite{Nakamoto2008}. That year marked the advent of a new technological milestone: the \textit{blockchain}. Indeed, blockchain has an impact well beyond the specific case of Bitcoin.
Blockchain allows new forms of distributed software architecture to be developed where networks of untrusted (and sometimes even "corrupted") participants can establish agreements on shared states for decentralised and transactional data in a secure way and without the need of a central point of control or regulatory supervision.
Blockchain ensures trust among anonymous counterparts in decentralised systems without the need of central supervisor authorities in charge of verifying the correctness of the records in the ledger.
Blockchain has been announced as a disruptive technological innovation, but in fact there is no single new technical invention in Bitcoin and blockchain: all components had already been developed before Nakamoto's Bitcoin paper of 2008. From a historic perspective, the technology has its roots in the work of Ralph C. Merkle, who proposed the Merkle tree (the use of concatenated hashes in a tree for digital signatures) in the 1970s. Hashing has been used since the 1950s in cryptography for information security, digital signatures and message-integrity verification. A decade later, Leslie Lamport proposed using a hash chain for secure login. The first cryptocurrency for electronic cash was described at the dawn of the web in 1990. Further evolutions and refinements of the hash chain concept were introduced in a paper by Neil Haller on the S/KEY application of a hash chain for Unix login, in 1994.
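Merkle's construction can be condensed into a few lines: leaves are hashed, then sibling hashes are concatenated and re-hashed level by level until a single root remains, so that the root commits to every leaf. The following is a minimal sketch, not a production implementation:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Pair-wise hash concatenated children up to a single root.
    Changing any leaf changes the root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
tampered = merkle_root([b"tx1", b"tx2", b"tx3", b"tx5"])
assert root != tampered
```

This is why a block header needs to store only one 32-byte root to commit to an arbitrarily long list of transactions.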
Adam Back proposed hashcash in 2002, but the first electronic currency based on a blockchain with the PoW concept was proposed in Satoshi Nakamoto's disruptive paper \cite{Nakamoto2008}. While blockchain is still in its emergent technological phase, it is fast evolving, with the potential to see applications in many sectors of our socio-economic systems. According to statistics summarised by the World Economic Forum, interest in blockchain has expanded globally \cite{WEF_Report}. Almost thirty countries are currently investing in blockchain projects. In the finance sector, 80\% of banks predicted to initiate blockchain-related projects by 2017. Additionally, venture capital investments with a focus on blockchain activities rose to over 1.4 billion USD from 2014 to 2016. Since blockchain digital currencies combine features of money with those of a payment system \cite{TascaDual}, central banks also started to look into the technology. Currently, over nineteen central banks worldwide conduct blockchain research and development. Some of them, e.g. the Bank of England, have already commissioned studies on CBDC (Central Bank Digital Currency). From the industry side, over one hundred corporations have joined blockchain working groups or consortia, and the number of patents filed has increased to more than three thousand at the moment of writing. These figures show the importance and awareness of blockchain as one of the most promising emerging technologies that, together with Artificial Intelligence, the Internet of Things or nanotechnology, will have a pervasive impact on the future of our society.
\subsection{Problem Statement and Research Method}\label{problem statement and research method}
At the moment of writing, we may argue that, according to the Technology Life Cycle theory, we are at the beginning of the so-called phase of ``fermentation'', which is characterised by technological uncertainty due to the evolution of the blockchain into alternative technological paths. The industry promotes different model designs favouring some functional and performance aspects over others in order to meet specific business goals. Currently there are thousands of blockchain projects worldwide under development; some of them run on forks of successful technologies such as Bitcoin or Ethereum, while others propose completely new functionalities and architectures. For this reason, instead of blockchain, in the remainder we refer to \textit{blockchains} or \textit{blockchain technologies} in order to encompass all the possible architectural configurations and, for the sake of simplicity, also the larger family of distributed ledger technologies, i.e., community consensus-based distributed ledgers where the storage of data is not based on chains of blocks. A heterogeneous development combined with a lack of interoperability may endanger a wide and uniform adoption of blockchains in our techno- and socio-economic systems. Moreover, the variation of blockchain designs and their possible configurations represents a hindrance for software architects and developers. In fact, without the possibility to resort to a technical reference model, it is difficult to measure and compare the quality and the performance of different blockchains and those of the applications sitting on top of them. To summarise, current variations of blockchain software architectures pose great concerns, from different perspectives, with regard to heterogeneity:
\begin{enumerate}
\item Heterogeneity is a problem for the future development of blockchain technologies, because it will hamper their development and adoption and stifle innovation.
\item Heterogeneity will prevent consistency in drafting laws and policies related to the regulation of blockchain/DLT technologies.
\item Heterogeneity will increase ambiguity in the application of consumer protection laws and regulations.
\item Heterogeneity will decrease the clarity on how the workforce may be affected by blockchain/DLT technology.
\item Heterogeneity will decrease the clarity in academic research and sharpen concepts that underpin the development of new applications and solutions.
\item Heterogeneity will prevent the development of the specification and use of solutions using blockchain and DLT for ISO, IEC and other SDOs.
\item Heterogeneity will increase the complexity of understanding blockchain/DLT for NGOs and of how this technology may be applied in the relevant sectors to achieve social and economic goals.
\end{enumerate}
The solution to these problems requires the setting up of software reference architectures where standardised structures and respective elements and relations shall provide templates for concrete blockchain architectures.
Standards can emerge naturally because of market adoption (industry driven) or because they are imposed by institutes and organisations. In the first group we may include initiatives like the Accord Project \footnote{https://www.accordproject.org/}, the ChinaLedger \footnote{http://www.chinaledger.com/} or R3 \footnote{https://www.r3.com/}. In the second group we may refer to the initiative conducted by the International Organization for Standardization (ISO) with the establishment of the technical committee ISO/TC 307 on Blockchain and distributed ledger technologies. Several working groups covering different topics have been set up. In particular, the ISO/TC 307/WG1 working group is engaged with the reference architecture, taxonomy and ontology. Overall, a long-term standardisation of the blockchain reference architecture will benefit every industry.
Thus, a standard for software reference architectures is necessary in order to enable a level playing field where every industry player and community member can design and adopt blockchain-enabled products or services under the very same conditions, with the possibility of data exchange. As with the Internet, several standardisation bodies (e.g., the IETF in cooperation with the W3C, ISO/IEC, ITU) have set a body of standards. Internet standards promote interoperability of systems on the Internet by defining precise protocols, message formats, schemas, and languages. As a result, different hardware and software can seamlessly interact and work together. Applied to the World Wide Web (as a layer on top of the Internet), standards bring interoperability, accessibility and usability of web pages. Similarly, the adoption of blockchain standards will promote the blossoming and proliferation of interoperable blockchain-enabled applications. Thus, if we envisage a future where blockchains will be one of the pillars of our society's development, it is necessary to begin discussing and identifying standards for blockchain reference architectures. The aim of this study is to highlight the need for standard technical reference models of blockchain architectures. This is timely aligned with the industry sentiment, which currently pushes standardisation organisations to set industry standards. In order to support an appropriate co-regulatory framework for blockchain-related industries, a multi-party approach is necessary, as it is for the Internet, where national standards, international standards and a mixture of standards and regulation are in place. In the mid-to-long term, the lack of standards could bring risks related to privacy, security, governance, interoperability and risk to users and market participants, which can manifest as blockchain-related cyber crimes.
From a preliminary survey conducted in 2016 by Standards Australia, more than 88\% of respondents indicate the role for standards in supporting the roll out of blockchain technologies \cite{Meguerditchian_Varant}.
Given the above problem statement, the goal of this research is to conduct a review of the blockchain literature. This will be preparatory work in order to identify and logically group different blockchain (main and sub) components and their layouts.
In order to achieve our goal we propose a blockchain taxonomy. Taxonomy comes from the term "taxon" which means a group of organisms. In our case, taxonomy encompasses the identification, description, nomenclature, and hierarchical classification of blockchain components. This is different from an ontology which would be more focused on the study of the types, properties, and interrelationships of the components and events that characterize a blockchain system.
The methodological approach is composed of the following steps:
\begin{enumerate}
\item Analysis across blockchains. A precondition is the analysis of vocabulary and terms to sort out ambiguities and disagreements. A literature review of the existing technologies is the starting point to limit complexity and organise information in a schematic order. To avoid ambiguities, the analysis is supported by a merge of common blockchain terminologies developed so far in the literature and grouped in an online database\footnote{http://arstweb.clayton.edu/interlex}. This brings together a vocabulary of key blockchain terms to provide readers with a foundation upon which to understand the classification and taxonomy developed in the rest of the analysis.
The identification of the blockchain components is the crucial part of this analysis.
In order to explore all the possible domains of blockchain components and their topological layout indicating their runtime interrelationships, we conduct a comparative study across different families of blockchain applications: digital currencies, application stacks, asset registry technologies and asset centric technologies.
See Table \ref{DLT-techs} in Appendix and \cite{tasca2015digital} for more information.
\item Framework setting. After the comparative study, a hierarchical taxonomy (a tree structure of classifications for a given set of components) has been defined and populated by \textit{main}, \textit{sub} and (when necessary) \textit{sub-sub} components.
\item Layout categorisation. Finally, for the components in the lowest level of the hierarchical structure, different layouts are introduced and compared. However, as the technology keeps evolving, the layouts are increasing over time. Thus, for the sake of simplicity, we limit our study to two or three main layouts per each \textit{sub} or \textit{sub-sub} component.
\end{enumerate}
\subsection{Results}\label{Results}
The result of the component-level analysis is a universal blockchain taxonomy tree that groups (in a hierarchical structure) the major components, identifies their functional relation and possible design patterns.
In general, it is difficult to evaluate whether a taxonomy or an ontology is good or bad, especially if the domain is a moving target like the blockchain. Taxonomies and ontologies are generally developed to limit complexity and organise information but all serve different purposes and generally evolve over time (see e.g. the evolution of the famous Linnaean taxonomy in biology). Thus, our taxonomy simply aims to contribute to set the foundations for classifying different kinds of blockchain components. Without claiming to represent the ultimate structure, the proposed taxonomy could be of practical importance in many cases. For example, it can:
\begin{enumerate}
\item support software architectures to explore different system designs and to evaluate and compare different design options;
\item be propaedeutic to the development of blockchain standards with the aim to increase the large-scale adoption of blockchain-enabled solutions and services;
\item enable research into architectural frameworks for blockchain-based systems in order to boost the adoption of blockchain-enabled systems, their interoperability and compatibility;
\item create gateway models to multiple blockchains and design governance framework;
\item promote blockchain predictability;
\item be used to promote a regulatory framework that provides a mix of both legal and technical rules (i.e., regtech for blockchain-based systems) \cite{marian2015conceptual}.
\end{enumerate}
\section{Background on Blockchain Technologies}\label{DLT-Tech}
Since the Bitcoin inception in 2009, many blockchain software architectures have been deployed to meet different technical, business and legal design options. Given the current complex dynamic of the blockchain architectural development, it would be neither exhaustive nor comprehensive to provide a picture of the existing blockchain technologies developed so far. Therefore, we take a bird-eye view and describe the blockchain by looking at its key driving principles such as data decentralisation, transparency, security, immutability and privacy \cite{Aste-Tasca}.
\textbf{Decentralisation of consensus}. The distributed nature of the network requires untrusted participants to reach a consensus. In blockchain, consensus can be on ``rules'' (that determine, e.g., which transactions are allowed and which are not, the amount of bitcoins included in the block reward, the mining difficulty, etc.) or on the history of ``transactions'' (that allows to determine who owns what). The decentralised consensus on transactions governs the update of the ledger by transferring the responsibilities to local nodes which independently verify the transactions and add them to the chain with the most cumulative computational work (longest chain rule). There is no integration point or central authority required to approve transactions and set rules. There is no single point of trust and no single point of failure.
\textbf{Transparency}. Records are auditable by a predefined set of participants, albeit the set can be more or less open. For example, in public blockchains everyone with an Internet connection to the network holds equal rights and ability to access the ledger. The records are thus transparent and traceable. Moreover, participants to the network can exercise their individual (weighted) rights (e.g. measured in CPU computing power) to update the ledger. Participants have also the option to pool together their individual weighted rights.
\textbf{Security}.
Blockchain is a shared, tamper-proof replicated ledger where records are irreversible and cannot be forged thanks to one-way cryptographic hash functions. Although security is a relative concept, we can say that blockchains are relatively secure because users can transfer data only if they possess a private key. Private keys are used to generate a signature for each blockchain transaction a user sends out. This signature is used to confirm that the transaction has come from the user, and also prevents the transaction from being altered by anyone once it has been issued.
\sloppy \textbf{Immutability}.
Blockchains function under the principle of non-repudiation and irreversibility of records. Blockchains are immutable because once data has been recorded in the ledger, it cannot be secretly altered ex-post without letting the network know it (data is tamper-resistant).
In the blockchain context, immutability is preserved thanks to the use of hashes (a hash function is a mathematical function which turns input data of any type into a fingerprint of fixed size, called a hash; if the input data changes even slightly, the hash changes in an unpredictable way) and, often, of blocks. Each block includes the previous block's hash as part of its data, creating a chain of blocks. Immutability is relative and relates to how hard the history of transactions is to change: it becomes very difficult for an individual or any group of individuals to tamper with the ledger, unless these individuals control the majority of ``voters''. For public proof-of-work blockchains such as Bitcoin, immutability is related to the cost of mounting the so-called ``51\% attack''. For private blockchains, the block-adding mechanism tends to be different: instead of relying on expensive proof-of-work, the blockchain is only valid and accepted if the blocks are signed by a defined set of participants. This means that, in order to recreate the chain, one would need to know the private keys of the other block-adders.
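The hash-chaining just described can be illustrated with a minimal sketch in Python (standard \texttt{hashlib}; the block contents are hypothetical):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """A block's hash covers the previous block's hash plus its own data."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# A three-block chain starting from an all-zero "genesis" reference.
h0 = block_hash("0" * 64, "genesis")
h1 = block_hash(h0, "tx: alice -> bob")
h2 = block_hash(h1, "tx: bob -> carol")

# Tampering with block 1 changes its hash, so the recorded h2 no longer matches:
# rewriting history requires recomputing every subsequent block.
h1_forged = block_hash(h0, "tx: alice -> mallory")
assert block_hash(h1_forged, "tx: bob -> carol") != h2
```

Changing any past record therefore invalidates all later hashes, which is what makes tampering evident to the rest of the network.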
A complete discussion of threats to the immutability of the transaction history can be found in \cite{barber2012bitter}. On the other hand, from a governance perspective, this solution is never fully realised. There are several examples where the Bitcoin community reverted Bitcoin blocks based on community decisions. The division between Ethereum and Ethereum Classic, and later between Bitcoin and Bitcoin Cash and Bitcoin Gold, are not purely anecdotal evidence: they are strong indicators of the importance that the governing body - even if informal - ends up having on the information eventually stored in the blockchain \cite{walch2017path}.
Other, non-fundamental, properties of blockchain include data automation and data storage capacity.
\textbf{Automation and smart contracts}. Without the need for human interaction, verification or arbitration, the software is written so that conflicting or double transactions are not permanently written in the blockchain. Any conflict is automatically reconciled and each valid transaction is added only once (no double entries). Moreover, automation also concerns the development and deployment of smart legal contracts (or smart contract code, see \cite{clack2016smart}) with payoffs depending on algorithms which are self-executable, self-enforceable, self-verifiable and self-constrained.
\textbf{Storage}. The storage space available on blockchain networks can be used for the storage and exchange of arbitrary data structures. The storage of data can have some size limitations in place to avoid the blockchain-bloat problem \cite{BTC_Bloat}. For example, metadata can be used to issue meta-coins: second-layer systems that exploit the portability of the underlying coin, used only as ``fuel''. Any transaction in the second layer represents a transaction in the underlying network. Alternatively, the storage of additional data can occur ``off-chain'' via a private cloud on the client's infrastructure or on a public (P2P or third-party) storage. Some blockchains, like Ethereum, also allow data to be stored as a variable of smart contracts or as a smart contract log event.
\section{Taxonomy of Blockchains}\label{taxonomy}
The diversity of blockchain research and development provides an opportunity for cross-fertilisation of ideas and creativity, but it can also result in fragmentation of the field and duplication of efforts. One solution is to establish standardised architectures to map the field and promote coordinated research and development initiatives. However, in terms of blockchain software architecture design little has been proposed so far \cite{2017-Xu-ICSA}, and the problem of consistently engineering large, complex blockchain systems remains largely unsolved. {We approach this problem by proposing a component-based blockchain taxonomy starting from a coarse-grained connector-component analysis. The taxonomy compartmentalises the blockchain connectors/components and establishes the relationships between them in a hierarchical manner. We adopt a reverse-engineering approach to unbundle the blockchains and divide them into \textit{main} (coarse-grained) components. Each main component is then split into more (fine-grained) \textit{sub} and \textit{sub-sub} components (where necessary).
For each of these sub (and/or sub-sub) components, different \textit{layouts} (models) are identified and compared. By deriving the logical relation between (main, sub or sub-sub) components, the study helps to clarify the alternative \textit{modus operandi} of the blockchains and helps to develop the conceptual blockchain design and modelling.
{ Similarly to other fields like electronics or mechanics \cite{Otto1998}, the software engineering approach used to derive the taxonomy treats blockchains as the result of gluing together prefabricated, well-defined, yet interdependent components. Although equivalent components provide similar services and functions, they can be of different importance and type, and their interconnection may work in different ways.
Following this logic, each of the next seven sections will introduce a new blockchain main component and its sub (and eventually sub-sub) components by describing and comparing their layouts.
}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Taxonomy2.pdf}
\caption{Blockchain Taxonomy Tree: A representation of the taxonomic decomposition of blockchain-based technologies.}
\label{taxonomy-Matrix}
\end{figure}
\section{Consensus}\label{consensus}
The first identified main component is \textit{Consensus}. It relates to the set of rules and mechanics that makes it possible to maintain and update the ledger and to guarantee the trustworthiness of the records in it, i.e., their reliability, authenticity and accuracy \cite{bonneau2015sok}. \textit{Consensus} varies across different blockchain technologies: every consensus mechanism brings advantages and disadvantages with respect to characteristics such as transaction speed, energy efficiency, scalability, censorship-resistance and tamper-resistance \cite{Mattila2016}. The set of rules and mechanics composes the framework of the validation process that is necessary to overcome security issues during validation. Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Consensus:
\begin{itemize}
\item[1] Consensus Network Topology
\item[2] Consensus Immutability and Failure Tolerance
\item[3] Gossiping
\item[4] Consensus Agreement
\begin{itemize}
\item[4.1] Latency
\item[4.2] Finality
\end{itemize}
\end{itemize}
Those subcomponents and sub-subcomponents are to be jointly considered when designing an active network consensus validation process because not only their individual configuration but also their combination determine when and how the overall blockchain agreement is achieved and the ledger updated.
\subsection{Consensus Network Topology}\label{Cons-Netw-Top}\textit{Consensus Network Topology} describes the type of interconnection between the nodes and the type of information flow between them for transaction and/or for the purpose of validation.
For efficiency reasons, systems have historically been designed in a centralised manner. This centralisation dramatically lowers the costs of system configuration, maintenance and adjustment (and the costs of arbitration in case of conflict), as this work has to be performed only once, in a central place. While highly efficient in many situations, this kind of system induces a single (or very limited set of) point(s) of failure and suffers from scalability issues. With respect to the network topology, this hierarchical arrangement is still present in most of our techno- and socio-economic systems (one example is the modern electronic payment system). To avoid the single point of failure, these centralised topologies can be extended into hierarchical constructs which exhibit larger scalability and more redundancy, while keeping the communication efficient.
As an alternative to those centralised topologies, decentralised solutions have been proposed. Since the dawn of the Internet, technical systems have evinced a transition towards decentralised arrangements \cite{Wright2015} where every node is equivalent to any other. For most applications, blockchain-based systems - with their federated set of participants - are a clear example of this kind. Blockchain-based systems resort to specific topologies to create the peer network that ultimately determines how the validation process will evolve.
It is important to mention that the \textit{Consensus Network Topology} is linked to the level of (de)centralisation in the validation process, but it is not the only determinant: other factors, like the reward mechanism (see Section \ref{reward}), also heavily influence the validation \cite{bonneau2015sok}. As an example, Bitcoin has a decentralised validation process, which is nonetheless accompanied by an ever increasing concentration of the computational power devoted to the proof-of-work by network participants. Indeed, during the period 2013-2015, the cumulative market share of the largest ten pools relative to the total market hovered in the 70\% to 80\% range \cite{tasca2015digital}.
We identify three possible layouts for \textit{Consensus Network Topology}:
\begin{enumerate}
\item {\bf Decentralised.} There exist implementations that are decentralised.
Bitcoin, as the pioneer of digital currencies, established a distributed P2P network which enables direct transactions between all nodes within the network. The validation process within the Bitcoin network is decentralised through miners and full nodes, who validate the transactions within the network \cite{Nakamoto2008}, connected in a random way as provided by super-nodes. This network illustrates a decentralised \textit{Consensus Network Topology}. Obviously, this layout is independent of the \textit{Consensus Immutability} layout (Peercoin and NXT also show decentralised network topologies).
\item{\bf Hierarchical.} Other implementations are not decentralised, and the roles of the nodes are asymmetric. For example, in Ripple \cite{Ripple} the network topology is divided into tracking and validating nodes. The tracking nodes are the gateway for submitting a transaction or executing queries on the ledger; the validating nodes have the same functions as tracking nodes but can also contribute additional sequences to the ledger through validation \cite{Ripple}.
This kind of solution yields (or can be extrapolated to) a hierarchical network topology. In the community of developers these hierarchical topologies are also referred to as ``Consortium blockchains''.
\item {\bf Centralised.} In some specific implementations, a central authority may need (or wish) to control what is added to the ledger. An example of this are digital versions of fiat currencies: the so-called Central Bank Digital Currencies. This kind of solution yields a third layout, the ``Centralised topology'', which is intimately related to private blockchains. It is important to mention that a centralised solution implemented in terms of a blockchain would normally indicate a flawed design (or a non-solution), as it could otherwise have been implemented in a more transparent manner. Normally, some level of federation and redundancy is key to blockchain systems.
\end{enumerate}
\subsection{Consensus Immutability and Failure Tolerance}\label{Cons-Immut}
In general, the failure tolerance of a distributed system shall be defined with respect to three interrelated issues: faults (e.g., Byzantine faults), errors and failures. See e.g., \cite{driscoll2003byzantine,castro1999practical} for Byzantine fault tolerance in distributed systems.
There are different types of failures, and it is generally costly to implement a fault-tolerant system. Practically, it is not possible to devise an infallible, fully reliable system. For a literature review and deeper analysis of fault-tolerant distributed systems, we refer the reader to \cite{cristian1991understanding} and \cite{fischer1983consensus}.
A blockchain, as a special case of a distributed system, is fault-tolerant when it shows the ability to continue functioning, i.e., it must guarantee the reliability, validity and security of the information stored in the ledger.
Indeed, blockchains represent a decentralised solution to the problem of information storage which requires no central database but many duplicates, such that each server holds a \textit{replica} of the ledger. Any new record is costly (often measured in terms of computational power) to add to the ledger, but cheap for peers to verify.
Therefore, a blockchain system needs an efficient consensus mechanism to ensure that every node keeps its own version of the full transaction history consistent with those of the other peers over time. In this vein, the immutability of the achieved consensus differs with respect to the resources required to keep a large network secure.
In the past years, the evolution of blockchain technologies has been accompanied by the development of different mechanisms that help to keep the information contained in the ledger reliable, valid and secure.
All together, the mechanisms for \textit{Consensus Immutability} together with the subcomponents of \textit{Consensus Agreement} determine the failure tolerance of the blockchains.
As of this writing, we identify six main layouts for \textit{Consensus Immutability and Failure Tolerance}:
\begin{enumerate}
\item {\bf Proof-of-Work}. The most widely used cryptocurrency, Bitcoin, uses Proof-of-Work (PoW) to ensure the immutability of the transaction records. In this setup, computing devices, usually called \textit{miners}, connected to a peer-to-peer network perform the task of validating the transactions proposed for addition to the complete record of existing - valid - transactions.
The generation of a block that can be appended to the blockchain - rendering in this way valid all transactions there included - requires {inverting a one-way cryptographic function, which can only be done by brute force. In PoW, the probability that a miner mines a new block depends on the ratio between the computational power he devotes to this task and the total instantaneous computational power of all miners connected to the network.
Specifically, miners must find a solution to a one-way hash function by computing new hash values based on the combination of the previous hash values contained in the message, the new transactions in the block they create, and a nonce. The solution is such that the new hash value, interpreted as a number, is less than or equal to a given target (i.e., it starts with a given number of zeros).
As of this writing, the mining process has several requirements to be successful \cite{BitFuryGroup}. These include specialised hardware, which is needed to perform the computational tasks, and ever increasing amounts of electricity to power the hardware.
These computations are run by dedicated machines (ASICs) which are very expensive and resource-intensive, as they contribute to a large electricity footprint for cryptocurrency miners \cite{o2014bitcoin}.
Due to this scheme, in the last years miners have agglomerated around mining pools \cite{Lewenberg:2015}.
Therefore, a clear drawback of the PoW mechanism is its inherent inefficiency from the resource point of view, together with the large-scale investments needed, which have led to a long-term centralisation of the mining power. In late 2017, almost five quadrillion SHA256 computations were performed every second in the Bitcoin mining process. Regretfully, these computations do not have any practical or scientific relevance apart from ensuring that the process of block creation is costly, while the validity of others' blocks remains simple for peers to verify.
}
Interestingly, when adversaries coordinate, it is sufficient that they hold only the 25\% of the total computing power to mount an attack \cite{eyal2014majority}.
{In this layout, there exists the risk of monopoly mining, induced by a large coordination of miners in a single mining pool, which continuously increases the expected payoff for others of joining said pool. In this hypothetical situation, said mining pool can censor specific transactions and dictate which transactions are accepted and which are not.
}
In contrast, BFT consensus mechanisms tolerate fewer than $n/3$ corrupted nodes under the asynchronous communication protocol, and even more under synchronicity.
Electricity consumption can be estimated at around 0.1 to 1 W/GH, corresponding to roughly 1~GW of power drawn by the network. Therefore, other developers within the area of blockchain technologies continuously attempt to develop novel mechanisms to achieve an equivalent goal. {It is worth mentioning that some cryptocurrencies (e.g. Primecoin) have tried to make the PoW a task that serves a useful aim (in that case, searching for long chains of prime numbers, or Cunningham series).}
\item {\bf Proof-of-Stake}. PoS links block generation to the proof of ownership of a certain amount of digital assets (e.g., digital currencies) linked to the blockchain. {The probability that a \textit{prover} is selected to verify the next block increases with the share of assets this prover holds within the system. The underlying assumption is that users with a large share of the system's wealth are more likely to provide trustworthy information with respect to the verification process, and are therefore considered trusted validators \cite{Mattila2016}. } Two alternative PoS methods have been devised. {The first one is based on randomised block selection (used in, e.g., NXT and BlackCoin); it uses a calculation searching for the lowest hash together with the stake size; it is therefore somewhat deterministic, and each node can independently determine the likelihood of being selected in a future round. An alternative scheme is the \textit{coin-age}-based selection (used by, e.g., Peercoin, actually the first one to be implemented in the real world), which combines randomisation with coin-age (a number derived from multiplying the amount of assets held by the prover by the length of time they have been held).}
Although PoS has the chance to solve two issues with PoW (the risk of monopoly mining and the resources wasted in the mining process), it is affected by the ``nothing at stake'' issue. Because there is little cost in working on several chains (unlike in PoW), one could abuse the system by voting for multiple blockchain histories, which would prevent the consensus from ever resolving (double spending). This problem can be addressed by Delegated Proof-of-Stake (DPoS), a generic term describing an evolution of the basic PoS consensus (utilised in, e.g., BitShares, Casper by Ethereum, Tendermint) where blocks are forged by predetermined users delegated by those who hold the actual stake. These forgers are rewarded for their duty and punished for malicious behaviour (such as participation in double-spending attacks). This principle of pre-authorised forgers is generalised by the Proof-of-Authority mechanism.
\item{\bf Proof-of-Authority}. In this case, participants are not
asked to solve arbitrarily difficult mathematical problems like in
PoW, but instead they are asked to use a hard-configured set of
``authorities'' empowered to collaborate ``trustlessly''. Namely, some nodes
are exclusively allowed to create new blocks and secure the
blockchain. Typically, Proof-of-Authority (PoA) mechanism
fits well for consortium private networks where
some preselected real entities (i.e., the \textit{authorities}) are allowed to
control the content that is added to the public registry. Those nodes will receive a
set of private keys that will be used to ``sign" the new blocks, acting as \textit{trusted signers}.
Thus, every block (or header) that a client
sees can be matched against the list of trusted signers. The
challenges brought by PoA are related to: control of the mining
frequency; distribution of the mining load (and opportunity) between
the various signers; and maintenance of the list of signers
so as to be robust against malicious attacks even in the presence of
dynamic mutation of the trusted signers.
\item{\bf Proof-of-Capacity/Proof-of-Space and Proof-of-Storage}.
PoC or PoSpace and PoStorage
are implementations of the popular idea of ``space as resource''. Here the focus is not on the CPU cycles but on the amount of actual memory (non-volatile) space
the prover must employ to compute the proof. Nodes are asked to allocate a significant volume of their hard-drive space to mining, instead of performing CPU-bound work as in PoW. Miners are incentivised to devote hard-drive capacity, as those who dedicate more disk space have a proportionally higher expectation of successfully mining a block and reaping the reward. The PoC makes use of hash trees to efficiently allow verification of a challenge without storing the tree. These schemes are fairer and greener than PoW, mainly because of the lower variance of memory access times between machines and the lower energy cost achieved through the reduced number of computations required. Several practical implementations adopt the PoC consensus algorithm, like Permacoin, SpaceMint and Burstcoin, just to cite a few.
PoC consists of an initialisation and subsequent execution between a prover $P$ and a verifier $V$ \cite{dziembowski2015proofs}. Rather than $P$ proving to $V$ that some amount of work has been completed, $P$ proves to $V$ that she has allocated some number of bytes of storage. After the initialisation phase, $P$ is supposed to store some data $F$ of size $N$, while $V$ only holds some small piece of information. At any later time point $V$ can initialise a proof execution phase, and at the end $V$ outputs reject or accept. The PoC is in general defined by three quantities: ($N_0$, $N_1$, $T$); the miner then shows that she either: 1) had access to at least $N_0$ storage between the initialisation and execution phases and at least $N_1$ space during the execution phase; or 2) used more than $T$ time during the execution phase. Solutions to the ``mining multiple chains'' and ``grinding blocks'' problems of PoC algorithms have been proposed by \cite{park2015spacecoin}, among others. The Proof-of-Storage (PoS) mechanism is similar to PoC, but the designated space is used by all participants as common cloud storage \cite{PoC}.
\item {\bf Proof-of-Burn}. In Proof-of-Burn (PoB) miners must prove that they burned some digital assets. They do so by sending them (e.g., digital currencies) to a verifiable unspendable address belonging to them. Similarly to the PoS, also the PoB logic is to minimise the waste of resources generated by PoW. However, at the current stage, all PoB mechanisms function by burning PoW-mined digital currencies. This is therefore an expensive activity as the digital currencies once required to work as ``fuel'' in a PoB system cannot be recovered \cite{bonneau2015sok}. PoB can be used also to bootstrap a token off of another (see e.g., Counterparty or Mastercoin).
\item {\bf Hybrid}. The more advanced hybrid consensus immutability and failure tolerance methods are ``PoB and PoS'', where Proof-of-Burn blocks act as checkpoints, and ``PoW and PoS'', where PoW blocks act as checkpoints containing no transactions but anchoring both to each other and to the PoS chain. Peercoin uses PoW/PoS consensus. To solve the ``nothing at stake'' issue, Peercoin uses centrally broadcast checkpoints (signed under the developer's private key), according to which no blockchain reorganisation is allowed deeper than the last known checkpoints. Here the problem is that the developer becomes the central authority controlling the blockchain.
\end{enumerate}
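As a minimal sketch of the PoW brute-force search described above, the nonce hunt can be written as follows (a toy example with an artificially low difficulty; the header string is hypothetical):

```python
import hashlib

def mine(header: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(header + nonce), read as an
    integer, falls below a target; smaller targets mean more expected work."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

header = "prev_hash|merkle_root|timestamp"  # stand-in for real header fields
nonce = mine(header, difficulty_bits=16)    # ~2^16 hashes expected

# Verifying the proof takes a single hash, however long mining took.
assert int(hashlib.sha256(f"{header}{nonce}".encode()).hexdigest(), 16) < 2 ** 240
```

The asymmetry between the expensive search and the one-hash verification is exactly what makes block creation costly yet cheap for peers to check.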
\subsection{Gossiping}\label{Gossiping}
Blockchains are also decentralised, redundant storage systems. This redundancy makes it very difficult to hijack the information stored in them.
How this information travels through the network of computers is a characteristic that varies from one blockchain system to another.
Given the lack of a central routing authority (as would exist, for example, in traditional electronic payment systems), nodes must transmit the information they possess -- in general new blocks, but possibly also the full blockchain to new nodes that enter the network -- to peers they know are participating in the system.
To this aim, nodes possess a list of peer nodes.
Whenever a new block is added to the local blockchain of a node, the latter passes the block to the others in its peer list by \textit{Gossiping}.
We identify two possible layouts for \textit{Gossiping}:
\begin{enumerate}
\item {\bf Local.} \textit{Gossiping} occurs first in a local manner (through a local validation process) until consensus is reached. This is also called ``federated consensus'', used e.g., in Ripple \cite{Ripple}, in which nodes can share transaction records with another node and reach consensus without directly knowing all the nodes in the network. Therefore most information travels ``locally'' -- in terms of the P2P network -- such that a consensus is reached at this initial level. Only then is the information sent to all the other nodes. In this layout, the \textit{Gossiping} can be termed ``local''.
\item {\bf Global.} In most implementations -- Bitcoin, Ethereum, etc. -- \textit{Gossiping} occurs to a list of peers that have been selected by what in the Bitcoin network are called fallback nodes. {These fallback nodes maintain a list of all peers in the network. Upon connection of a new node, they submit a randomly chosen list of peers to the entrant one. The logical network topology is intended to be largely unstructured, similar to an Erdos-Renyi network \cite{barabasi2016network}. Such a topology lacks a concept of vicinity or local neighbourhood, and therefore the \textit{Gossiping} process can be termed \textit{global}. }
\end{enumerate}
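The global layout can be sketched as a flooding process over random, unstructured peer lists (a minimal simulation; the node and edge counts are arbitrary, and a ring keeps the graph connected):

```python
import random

def gossip(peers: dict, origin: int) -> set:
    """Flood a new block through an unstructured P2P network: each node
    forwards it once to every peer in its local peer list."""
    informed, frontier = {origin}, [origin]
    while frontier:
        node = frontier.pop()
        for peer in peers[node]:
            if peer not in informed:
                informed.add(peer)
                frontier.append(peer)
    return informed

# Erdos-Renyi-like random peer lists on top of a ring (ring => connected graph).
n = 50
peers = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
for _ in range(60):
    a, b = random.sample(range(n), 2)
    peers[a].add(b)
    peers[b].add(a)

# Every node eventually holds the block, with no central routing authority.
assert gossip(peers, 0) == set(range(n))
```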
\subsection{Consensus Agreement}\label{Cons-Agreement}
The \textit{consensus agreement} defines the set of rules under which { records (like sets of transactions or any other atomic piece of information) are independently updated by the nodes of a distributed system. This is important to understand how a distributed system is able to handle the so-called Byzantine failures, i.e., how a system composed of $n$ nodes can achieve consensus on storing verified, trustworthy information even in the presence of $f$ malicious nodes or of malicious participants launching sybil attacks \cite{10.1007/3-540-45748-8_24}.
In this regard, it is very important to understand how the nodes communicate between them.}
\subsubsection{Latency}\label{Lat-cy}
\textit{Latency} is a sub-subcomponent which describes the rule of message propagation in the networks.
\begin{enumerate}
\item {\bf Synchronous Communication.} Systems which set
upper bounds on ``process speed interval'' and ``communication delay'' such that every message arrives within a certain known, predefined, time-interval ($\Delta$) are called \textit{synchronous}.
This does not preclude the possibility of having message delays due to exogenous network latency, but the delay is bounded and any message that takes longer than $\Delta$ is discarded. Lax synchronicity assumptions apply also to the Bitcoin blockchain. For example, a block is rejected if it contains a timestamp: 1) lower than (or equal to) the median timestamp of the previous eleven blocks; or 2) greater than (or equal to)
the ``network-adjusted time'' plus 2 hours. Another example of blockchain adopting \textit{synchronous communication} is Ripple through the use of clocks. Specifically, Ripple's ``LastLedgerSequence'' parameter asserts that a transaction is either validated or rejected within a matter of seconds.
\item {\bf Asynchronous Communication.} Systems which do not set any bound on ``process speed interval'' and ``communication delay''
such that every message/packet can take an indefinite time to arrive are called \textit{asynchronous}. Although this type of communication protocols brings some advantages (e.g., calls/requests do not need to be addressed to active nodes and nodes do not need to be available when a new information is sent to them by peers), its main disadvantage is that response times are unpredictable and it is harder to design applications based on them. Synereo is an example of blockchain using the asynchronous communication protocol.
\end{enumerate}
\subsubsection{Finality}\label{agree-m}
{\textit{Finality} describes whether information intended to be stored in a blockchain (or, as a matter of fact, in any system) can be safely considered \textit{perpetually} stored once the recording is performed. For a distributed system like blockchain-based ones this is very challenging to achieve, and it is certainly not one of the underlying design principles. In a system where new blocks diffuse through gossiping, and because of rules such as the precedence of the longest chain, even if consensus is achieved globally, \textit{a priori} nothing prevents a set of new nodes from entering the system and overriding the previous consensus by offering a longer version of the history. }
We identify two possible layouts for \textit{Finality}:
\begin{enumerate}
\item {\bf Non-Deterministic}. In this case, the \textit{consensus agreement} ``eventually settles''. Non-deterministic layouts correspond to randomised or inherently probabilistic consensus (also called \textit{stabilising} consensus), in which the probability of disagreement decreases over time. For example, in the Bitcoin blockchain
the block frequency is adjusted (with respect to the block-mining rate and indirectly to the computational power of the nodes) to minimise the probability of forks.
Moreover, the propagation of blocks through the network has characteristic delays \cite{decker2013information} and
{ even in the presence of only honest nodes the fork probability cannot be ruled out, simply because different nodes may find competing blocks of the same height before the one found first reaches the complete network. This cannot be prevented even if there is a concurrency control mechanism in place, which attempts to correct results for simultaneous operations. Therefore, overall, the protocol is non-deterministic. Thus, even though the widespread heuristic ``wait until 6 confirmed blocks are appended to the chain'' reduces the likelihood that a transaction is overridden afterwards, it does not completely eliminate the probability of a previously validated block being pruned and removed from the blockchain in the future.}
\item {\bf Deterministic}. In this case, \textit{Consensus Agreement} converges with certainty and transactions are immediately confirmed/rejected in/from the blockchain. This property turns to be very useful for smart contracts where, using state-machine replication, consistent execution of the contracts can be achieved across multiple nodes.
{ All the blockchains based on Lamport Byzantine Fault Tolerance \cite{lamport1982byzantine} achieve deterministic consensus. A prime example of an implementation featuring deterministic finality is Stellar. Another deterministic case is that of private blockchains, where new blocks follow a predefined set of rules.}
\end{enumerate}
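The non-deterministic settling discussed above can be quantified with Nakamoto's original estimate \cite{Nakamoto2008} of the probability that an attacker controlling a fraction $q$ of the hash power catches up from $z$ blocks behind:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Nakamoto's estimate: probability that an attacker with a fraction q of
    the hash power rewrites a block buried z deep (honest share p = 1 - q)."""
    p = 1.0 - q
    lam = z * q / p  # expected attacker progress while z honest blocks are mined
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

assert attacker_success(0.1, 0) == 1.0   # zero confirmations offer no protection
assert attacker_success(0.1, 6) < 0.001  # the common "wait 6 blocks" heuristic
```

The probability never reaches zero for $z$ finite, which is precisely why finality in such systems is probabilistic rather than deterministic.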
\section{Transaction Capabilities}\label{Trans-Cap}
The second main component, \textit{Transaction Capabilities}, is important to illustrate the scalability of transactions and the usability in possible applications and platforms. One of the major challenges for blockchain technology is to increase the transaction throughput to compete with other solutions already available in the market{ (e.g. centralised payment systems, like credit cards).} Quantitative parameters (e.g. data storage in the block header, or TPS, transactions per second) need to be redesigned to realise such improvements. Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Transaction Capabilities:
\begin{itemize}
\item[1] Data Structure in the Blockheader
\item[2] Transaction Model
\item[3] Server Storage
\item[4] Block Storage
\item[5] Limits to Scalability
\begin{itemize}
\item[5.1] Transactions
\item[5.2] Users
\item[5.3] Nodes
\item[5.4] Confirmation Time
\end{itemize}
\end{itemize}
\sloppy \subsection{Data Structure in the Blockheader}\label{data-structure}
The data stored in the block header has different functions. On the one hand, it includes the transaction hashes for validation purposes; on the other, it contains additional information for different application layers or blockchain technology platforms. The \textit{data structure in the blockheader} describes the capabilities of the system to store transaction information. The original application of Merkle proof was implemented in Bitcoin, as described in \cite{Nakamoto2008}.
We identify two possible layouts for \textit{Data Structure in the Blockheader}:
\begin{enumerate}
\item {\bf Binary Merkle Tree.} Bitcoin uses the Binary Merkle Tree \cite{10.1007/3-540-48184-2_32} within the block header to store the transactions. The block header in the Merkle tree structure contains a hash of the previous header, a timestamp, the mining difficulty value, the proof-of-work nonce and the root hash of the Merkle tree containing the transactions for that block, which are used in the verification process to scale up transaction speed. By convention, the longest chain (since the so-called Genesis block) is considered to be the current status of the blockchain.
\item {\bf Patricia Merkle Tree.} On the one hand, the \textit{Patricia Merkle Tree} (Practical Algorithm To Retrieve Information Coded In Alphanumeric \cite{morrison1968patricia}) allows operations like inserting, editing or deleting information referring to the balance and nonce of accounts, which enables faster and more flexible validation of transactions than the \textit{Binary Merkle Tree} model \cite{wood2014ethereum}. On the other hand, with respect to applications, it has the important advantage of allowing for the verification of specific branches of the tree. Ethereum \cite{Ethereum} uses the Patricia Merkle Tree within the block header to store more information than what is possible in the Binary Merkle Tree: transactions, receipts (essentially, pieces of data showing the effect of each transaction) and state \cite{ethDec}.
Importantly, this technology allows even blocks outside the longest chain to contribute to the validation process, building a confirmation system that is less centralised. This is the so-called Ghost rule, a variant of which is also implemented in the Ethereum blockchain \cite{ethDec}.
\end{enumerate}
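The way a Binary Merkle Tree condenses all of a block's transactions into a single root hash can be illustrated with the following simplified sketch in Python. It uses Bitcoin's conventions of double SHA-256 and duplicating the last node on odd levels, but it omits Bitcoin's transaction serialisation and byte-order details, so it is an illustration of the principle rather than a byte-compatible implementation:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, the hash used by Bitcoin for transactions and blocks."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list) -> bytes:
    """Fold a list of leaf hashes into a single Merkle root.

    If a level has an odd number of nodes, the last one is duplicated
    (Bitcoin's convention)."""
    if not tx_hashes:
        raise ValueError("a block must contain at least one transaction")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate last node
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256d(t) for t in (b"tx-a", b"tx-b", b"tx-c")]
root = merkle_root(txs)
print(root.hex())
```

Because changing any single leaf changes the root, the 32-byte root alone commits the header to the whole transaction set, which is what makes lightweight verification possible.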
\subsection{Transaction Model}\label{transc-model}
{The transaction model can be imagined as an accounting ledger which tracks the inputs and outputs of each transaction. The \textit{transaction model} describes how the nodes connected to the P2P network store and update the user information in the distributed ledger.
The challenge of the \textit{transaction model} is to prevent data that ought not to be trusted by the parties connected to the system - e.g.~transactions originating from malicious behaviour, like double spending - from entering the ledger.
}
As of this writing, it is possible to identify two possible layouts for \textit{Transaction Model} in the widely used blockchain-based systems:
\begin{enumerate}
\item {\bf The Unspent Transaction Output (UTXO).} The \textit{UTXO} model includes a refractory number of blocks during which network participants are prevented from using the transaction output in new transactions. In this way, it prevents miners from spending transaction fees and block rewards before the validation status of the blockchain is stable. This measure protects against the \textit{forking problem} of blockchains \cite{BLCHGuide}. This transaction mechanism is available in blockchain technologies like Bitcoin.
\item {\bf Traditional Ledger.} {In comparison to the \textit{UTXO model}, different implementations of blockchain systems - like \textit{Stellar} and \textit{Ripple} - use a more traditional ledger model to record the transactions in the system}. In particular, Stellar lists every single transaction in the \textit{Stellar distributed ledger history}. Also, \textit{Ripple} uses the traditional ledger transaction model to register increments/decrements of balance and clear all account balances. In \textit{Ethereum} some transactions are used to execute actions in smart contracts defined in specific atomic records in the blockchain. Those transactions can be seen as order executions of stakeholders which perform the actions out of said smart contracts.
\end{enumerate}
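The difference between the two transaction models can be sketched in a few lines of Python. This is a deliberately minimal toy model with invented helper names; real systems add signatures, scripts and fees. It shows why double spending fails naturally in the UTXO model (an output can only be removed from the set once), while the traditional ledger simply mutates per-account balances:

```python
# --- UTXO model (Bitcoin-style): the ledger is a set of unspent outputs.
utxo_set = {("coinbase", 0): ("alice", 50)}   # (tx_id, index) -> (owner, amount)

def spend_utxo(tx_id, inputs, outputs):
    """Consume inputs from the UTXO set and create new outputs.

    A double spend fails naturally: popping an already-spent input
    raises KeyError because it is no longer in the set."""
    total_in = 0
    for ref in inputs:
        _owner, amount = utxo_set.pop(ref)    # KeyError if already spent
        total_in += amount
    assert total_in >= sum(a for _, a in outputs), "cannot create value"
    for i, (owner, amount) in enumerate(outputs):
        utxo_set[(tx_id, i)] = (owner, amount)

spend_utxo("tx1", [("coinbase", 0)], [("bob", 30), ("alice", 20)])

# --- Traditional ledger model (Ripple/Stellar-style): per-account balances.
balances = {"alice": 50, "bob": 0}

def transfer(src, dst, amount):
    """Directly increment/decrement account balances."""
    assert balances[src] >= amount, "insufficient funds"
    balances[src] -= amount
    balances[dst] += amount

transfer("alice", "bob", 30)
```

In the UTXO variant the full transaction history is implicit in the chain of inputs and outputs, whereas the ledger variant stores only current state, which is the trade-off discussed under \textit{Block Storage} below.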
\subsection{Server Storage}\label{server-store}
At the core of blockchain-based systems lies their decentralised nature. This requires that nodes connected to the peer-to-peer network be indistinguishable from each other.
This concept, however, cannot be fully expressed when the storage needs, computing power or bandwidth constraints of the network nodes do not permit this feature to be fully realised.
In these scenarios, different nodes have access to different layers of information; those which do not store the information fully are ``thin clients'' connected to the peer-to-peer network \cite{xu2018blockchain}.
We identify two possible layouts for \textit{Server Storage}:
\begin{enumerate}
\item {\bf Full Nodes.} All nodes connected to the network, and which are part of the validation process, are of the same kind. This is a genuinely peer-to-peer network where all the nodes are equivalent in terms of information contained. This property creates a large information redundancy, which makes the system more resilient to attacks or malfunctioning.
\item {\bf Thin Nodes Capabilities.} In this setup, some nodes connected to the network contain only a selected subset of all the information contained in the blockchain. This creates more scalable systems (in terms of number of nodes connected to the network and the concomitant network traffic and storage needs), but may deteriorate the resiliency as only a fraction of the nodes contain the complete blockchain information.
\end{enumerate}
\subsection{Block Storage}\label{Block-store}
Which information is stored in the blockchain determines the scalability of the system across some dimensions. More crucially, it also allows one to understand how the concomitant information from users is abstracted within the system.
We identify two possible layouts for \textit{Block Storage}:
\begin{enumerate}
\item {\bf Transactions.} In systems like Bitcoin, only the transactions are stored. They contain both a set of inputs and outputs that help to identify the emitter(s) and receiver(s) of a specific transaction. {This kind of approach is preserved in more exotic applications of blockchain-like technologies, like IOTA, which relies on a directed acyclic graph to store every single transaction. This kind of approach works not only for cryptocurrency applications, but underlies all transfer-of-property-like applications.}
\item {\bf User balance.} In systems like Ripple, the decentralised storage also contains information about the user balance in the specific assets. {This approach may limit the storage needs of the system, but at the same time reduces accountability and the possibility to roll back transactions.}
\end{enumerate}
\subsection{Limits to Scalability}\label{Limit-scale}
The decentralised nature of blockchain systems and the concomitant redundancy in the storage impose different kinds of limits on the way in which a specific implementation scales with the \textit{system size}.
System size is used here in a broad sense: It {may refer to} the number of nodes connected to the network, the number of users of the service, the set of network connections and/or the amount of network traffic, the number of transactions, etc.
It is worth remarking that these ingredients are intertwined in the real world \cite{Tessone2017b}, and - upon usage and continuous development - the limiting factor of a particular blockchain system may vary over time. {In a rapidly evolving technology such as blockchain, these limits are often changed by the development teams behind some of these systems. An example is the implementation (or not) of SegWit2x and Lightning (as a technology for micropayment channels) \cite{decker2015fast} as ways to alleviate the limitation in the number of transactions that Bitcoin can process with respect to its current implementation. }
{The importance of this component is due to its influence on the final scaling of the system}. Scaling is a property that specifies how growth will influence the system's overall performance; as an example, how the total network traffic induced by unverified transactions grows with the number of network nodes. If every node has a small - limited - number of connections, then the total network traffic will scale linearly with the number of nodes. In mathematical terms, it will be $\mathcal{O}(N)$.
However, if every node is connected to every other, then the traffic will be $\mathcal{O}(N^2)$, i.e.~it will grow quadratically with the number of nodes. Therefore, if - for a given implementation - network traffic is the most crucial limiting factor, different logical topologies will have different scalability. Acknowledging that a categorical definition is a crude simplification, we will focus here on the most limiting element for each system.
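The two regimes can be checked with a back-of-the-envelope count of network links. This is a toy calculation (the degree of 8 mirrors Bitcoin's default number of outbound connections, but any fixed value gives the same scaling), not a measurement of any real system:

```python
def links_bounded_degree(n_nodes: int, degree: int = 8) -> int:
    """Each node keeps a fixed number of connections: O(N) links in total."""
    return n_nodes * degree // 2              # each link counted once

def links_fully_connected(n_nodes: int) -> int:
    """Every node connects to every other: O(N^2) links in total."""
    return n_nodes * (n_nodes - 1) // 2

for n in (100, 1_000, 10_000):
    print(n, links_bounded_degree(n), links_fully_connected(n))
```

Doubling the number of nodes doubles the first count but roughly quadruples the second, which is exactly the difference between the $\mathcal{O}(N)$ and $\mathcal{O}(N^2)$ regimes described above.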
We identify four possible layouts as \textit{Limits to Scalability}:
\sloppy \begin{enumerate}
\item {\bf Limit by number of transactions.} We start from the most common real-world example. Bitcoin has a limitation in the number of transactions it can process in every block, because of the hard-coded limit to the block size in bytes. Given that new blocks appear (on average) every ten minutes, this means that the number of transactions that can be included in a given time window is limited. Therefore the layout ``Number of Transactions'' refers - regardless of the information stored in the blockchain \cite{Eyal2015} - to the specific implementations where the number of operations that can be included in the blockchain is severely limited by design.
\item {\bf Limit by number of users.} Bitcoin only stores transactions in its public ledger. This is different from other related technologies. Ripple, on the other hand, stores not only transactions, but also the state of the Ripple accounts. Therefore, in scenarios like this, it is the number of users of the system that limits its scalability. A similar situation occurs in Ethereum, where the system will be constrained by the number of DAOs, individuals, etc.~that it contains, as these are the actors that generate activity in the system. Therefore, the term ``Number of Users'' for this layout is a broad reference to the number of objects whose states are stored. Needless to say, this layout is somehow related to the previous one, as the number of transactions will depend on the number of users. However, one limiting factor can still appear irrespective of the other.
\item {\bf Limit by number of nodes.} The number of nodes connected to the network, acting as verifiers for the information that is stored on the blockchain, constitutes a limiting factor because of the mechanism of information diffusion adopted. \textit{Gossiping} is a process that, in decentralised networks, requires longer times to propagate into a consensus state \cite{Tessone2017}, and may even reach a point - where the relative time taken by network traffic is very long - where consensus can no longer be reached and the blockchain naturally forks. Therefore, this process naturally limits the applicability of fully decentralised solutions.
\item {\textit{Possible values.}} These three layouts can take different values, depending on how detrimental a specific layout is to the overall performance of the system. The possible values that each layout can take are divided into four: (i) \textbf{Indifferent}, (ii) {\bf At most linear}, (iii) \textbf{At most quadratic}, (iv) {\bf Worse than quadratic}. The first value is assigned \cite{catalini2874598some} when the relevant global characteristics of a system are independent of the number of elements in a specific class; the other three express categorical values assigned to the dependency on the number of elements in said class. For example, the number of users is largely irrelevant to the performance of the Bitcoin network (because this number is never translated into any property of the network). However, the number of transactions imposes a penalty on the local network traffic that grows linearly.
\item {\bf Confirmation time.} The time it takes for a specific action to be confirmed ultimately depends on the time it takes for it to be added to the blockchain, and to be validated by further blocks later appended to it. Different approaches can be taken to this process: \textbf{deterministic} addition of new blocks at regular intervals (taken by Peercoin) and \textbf{stochastic} addition like in Bitcoin, where the process of mining induces an Exponential distribution of inter-block discovery time.
\end{enumerate}
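The stochastic layout can be illustrated by simulating the inter-block times of Bitcoin-style mining, which form a Poisson process with exponentially distributed gaps. This sketch only samples from the resulting distribution (it does not model mining itself); the ten-minute target is Bitcoin's, the rest is illustrative:

```python
import random

random.seed(42)                     # fixed seed for a reproducible illustration
MEAN_BLOCK_TIME = 600               # seconds; Bitcoin targets ~10 minutes

# In a Poisson mining process, inter-block times are exponentially
# distributed with rate 1 / mean.
inter_block_times = [random.expovariate(1 / MEAN_BLOCK_TIME)
                     for _ in range(100_000)]

avg = sum(inter_block_times) / len(inter_block_times)
print(f"average inter-block time: {avg:.1f} s")
print(f"longest gap sampled:      {max(inter_block_times):.0f} s")
```

Even though the average matches the target, individual gaps vary wildly (the longest sampled gap is many times the mean), which is why confirmation time in stochastic systems is a random variable rather than a constant, in contrast to the deterministic layout.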
\section{Native Currency/Tokenisation}\label{native-currency-tok}
{
So far, cryptocurrencies and other transfer-of-property records are the most common usage of the blockchain technology. In cryptocurrencies, system participants who contribute to the verification process - if selected by some rule to issue a new block into the blockchain - are awarded the possibility to issue a transaction without an issuer (the so-called ``coinbase'') to themselves. On the one hand, this is a customary way of introducing new assets into the system. On the other, it introduces an incentive for users to participate in the verification process, which leads to an increased trustworthiness of the system.}
{The aforementioned incentive scheme \cite{Sompolinsky2018} is to be provided in a token, whose value is assigned \cite{catalini2874598some} precisely because of the cost associated with its production \cite{garcia2014digital}. Initially, solutions like Bitcoin created their own (and single) asset class (\textit{the bitcoin}) that can be transacted within the system. This particular solution is not the only one possible, the primary counterexample being Ethereum, where, beyond the native Ether token, arbitrary new tokens can be created via smart contracts and their property exchanged.
Further, the native currency possibilities present for example in Ripple \cite{tsukerman2015block} and tokenisation enable different use cases of the blockchain technology, like asset transfers via tokens, exchanges, etc.
All this is just the beginning of cryptoeconomics: it is of utmost importance how these assets are supplied into the system, because this affects the way users are incentivised to participate in the validation process. } Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Native Currency/Tokenisation:
\begin{itemize}
\item[1] Native Asset
\item[2] Tokenisation
\item[3] Asset Supply Management
\end{itemize}
\subsection{Native Asset}\label{nat-cur}
Some systems implemented using blockchain technologies have an underlying native asset (normally called a \textit{cryptocurrency}), which is a digital token to which owners assign a value and which allows running the daily activities of the platforms or communities.
Whether these cryptocurrencies ought to be considered fiat or commodity currencies \cite{grinberg2012bitcoin,selgin2015synthetic,Luther2018}, and whether they may eventually be massively adopted, replacing traditional ones, remain open questions \cite{Luther2016}.
We identify three possible layouts for \textit{Native asset}:
\begin{enumerate}
\item {\bf None.} Private blockchain implementations do not require a native asset within to incentivise participation. In these cases, there is no native asset incorporated into the system.
\item {\bf Own Cryptocurrency.} Most implementations of cryptocurrencies only deal with the transfer of property of their own tokens within the system.
Bitcoin and Litecoin are examples of technologies with single-asset compatibility \cite{buterin2014next}. These technologies are limited to their own underlying digital currency, but they can also have off-chain solutions to interoperate with other currencies to execute transactions or to enrol into smart contracts. Further, solutions like coloured coins \cite{rosenfeld2012overview} can layer additional assets on top of such systems.
\item{\bf Convertible Multiple Assets.} Other technologies, like Counterparty or Ardor, do have their own underlying currencies or tokens to execute tasks. However, these technologies also enable the exchange of assets expressed in currencies outside those native to the platform. This approach of multiple, convertible currencies has the advantage of allowing exchange markets to be directly reflected in the system.
\end{enumerate}
\sloppy \subsection{Tokenisation}\label{tokens}
A token acts as a digital bearer bond, whose ownership is determined by the data embedded in the blockchain. Ownership of the tokens is transferable between holders using other transactions with associated ``transfer'' metadata. This does not require the approval of any other authority. The possibility of tokenisation \cite{catalini2874598some} enables a range of possible use cases for the blockchain technologies outside the purely financial world \cite{tsukerman2015block,rohr2017blockchain,adhami2017businesses,conley2017blockchain}.
We identify three possible layouts for \textit{Tokenisation}:
\begin{enumerate}
\item {\bf No tokenisation present.} Without third-party technologies, Bitcoin does not implement technologies that enable tokenisation.
\item {\bf Tokenisation through third-party addons.} Bitcoin plus Coloured Coins \cite{rosenfeld2012overview} enables the existence of tokenised transactions in the Bitcoin blockchain. Such a solution is based on the cryptographic nature of Bitcoin addresses and the script language.
\item {\bf Tokenisation.} The tokenisation possibilities, together with the extensions of metadata, are available in several implementations and constitute the backbone of blockchain-based property registries. The most paradigmatic example is Ethereum, where the creation of a new token is produced by means of the creation of a smart contract. Thanks to this flexibility, and the extreme extensibility of such a platform, the possibilities for the creation of new tokens are countless.
\end{enumerate}
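The Ethereum-style pattern of creating a token through a contract can be caricatured as a balance mapping plus a transfer rule. The sketch below is a toy model in Python with invented names; real ERC-20 contracts are written in Solidity and additionally define allowances, events and a standard interface:

```python
class Token:
    """Minimal token: a name, a fixed supply and a balance mapping."""

    def __init__(self, name: str, supply: int, creator: str):
        self.name = name
        self.balances = {creator: supply}   # creator receives the full supply

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        """Move tokens between holders; a failed check leaves state untouched."""
        if self.balances.get(sender, 0) < amount:
            return False                    # insufficient balance: no-op
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

gold = Token("GOLD", 1_000, "issuer")
gold.transfer("issuer", "alice", 250)
```

In an actual deployment this state and logic live on-chain, so every validating node enforces the transfer rule, which is precisely what removes the need for a central token registrar.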
\sloppy \subsection{Asset Supply Management}\label{monetary-supply}
The process of digital asset (usually referred to as \textit{cryptocurrency}) creation varies across different blockchain technologies.
Each approach has adopted a different economic framework, in most cases fixing a specific monetary policy for the future of a particular system.
This is also a pillar of the incentive scheme for users to participate (or not) in the validation process \cite{Tessone2017}.
We identify three possible layouts for \textit{Asset Supply Management}:
\begin{enumerate}
\item {\bf Limited - Deterministic.} The most replicated system in the world of blockchain is the limited supply as introduced in Bitcoin. Not only does the supply grow sub-linearly over long periods of time (in contrast to what occurs with normal fiat currencies), but it is designed to have a well-defined limit. It is important to note that, while this incentivises users to adopt the technology and contribute to the process of verification - for which they get a reward -, on the other hand, it also creates an incentive to hoard the asset, limiting transactions.
\item {\bf Unlimited - Deterministic.} Very few (eventually not broadly adopted) digital currencies based on blockchain attempted to create unlimited supply, like Dogecoin or Freicoin.
\item {\bf Pre-mined.} Some altcoins (with the purpose of funding the development of the platform, or with the sole idea of profiting) have distributed all the assets before the start of the system. Then, a reward system induces some kind of redistribution.
\end{enumerate}
\section{Extensibility}\label{extens}
The alignment of interoperability, intraoperability, governance and script language determines the future ecosystem of
the blockchain network and the integration possibilities of a variety of blockchain-related technologies. Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Extensibility:
\begin{itemize}
\item[1] Interoperability
\item[2] Intraoperability
\item[3] Governance
\item[4] Script Language
\end{itemize}
\sloppy \subsection{Interoperability}\label{inter}
Interoperability illustrates the overall capability of blockchains to exchange information with other systems, outside of blockchains.
It allows the inflow, outflow and retrieval of data from providers that are not necessarily blockchain-based systems, e.g.~financial data providers \cite{dilley2016strong}.
We identify three possible layouts for \textit{Interoperability}:
\begin{enumerate}
\item{\bf Implicit interoperability.} It occurs when the smart contracts that specify conditions under which a particular transaction (or event) is to take place can be written in a Turing-complete blockchain script language. In this context, implicitly any kind of condition can be specified, even those involving specific status in other systems. This implies an (albeit cumbersome) way of interaction from a blockchain solution to any API tool or interface.
\item {\bf Explicit interoperability.} If the script language is not Turing complete but the system has specific tools implemented that enable interoperability with the real world (like Bitcoin with Counterparty), then we talk about explicit interoperability, as it is brought purportedly into the system and is one of its design principles.
\item {\bf No Interoperability.} A blockchain without any kind of possibility to interact with other systems. As implemented, Bitcoin in absence of external solutions (i.e. off the chain layers) has no interoperability implemented. It applies to most existing blockchain-based systems whose script language is not Turing complete.
\end{enumerate}
\subsection{Intraoperability}\label{intra}
Intraoperability illustrates the overall capability of blockchains to exchange information with other blockchains.
It allows the inflow, outflow and exchange of data between different blockchains \cite{chen2017inter}.
We identify three possible layouts for \textit{Intraoperability}:
\begin{enumerate}
\item {\bf Implicit intraoperability.} It occurs when the smart contracts that specify conditions under which a particular transaction (or event) is to take place can be written in a Turing-complete blockchain script language. In this context, implicitly any kind of condition can be specified, even those involving specific status in other blockchains.
\item {\bf Explicit intraoperability.} If the script language is not Turing complete but is specifically designed to allow for intraoperability, then we talk about explicit intraoperability, because it is brought purportedly into the blockchain and is one of its design principles. An example of this is Bitcoin with Counterparty.
\item {\bf No intraoperability.} A blockchain without any kind of possibility to interact with other blockchains. As implemented, Bitcoin in the absence of external solutions has no intraoperability. Solutions for non-intraoperable blockchains resort to: 1) trusted proxies to connect blockchains; 2) pegged blockchain systems; 3) distinguishing tokens in the same blockchain-based system.
\end{enumerate}
\subsection{Governance}\label{governance}
Effective governance rules are crucial for the successful implementation of the blockchains and for their capability to adapt, change and interact. As the blockchain deployment structures (public chain, private chain, consortium chain) are different, their management patterns are also quite different.
We identify two type of governance rules: 1) {\it technical rules} of self-governance defined by the participants. Technical rules are composed of software, protocols, procedures, algorithms, supporting facilities and other technical elements; 2) {\it regulatory rules} defined by external regulatory bodies composed of regulatory frameworks, provisions, industry policies and other components \cite{atzori2015blockchain,davidson2016disrupting,wright2015decentralized}.
Regulatory rules are by definition not technical in nature and therefore outside the scope of this taxonomy. We focus instead on technical rules, which are particularly interesting for their feedback loop with the proposed technological solutions.
We identify three possible layouts for \textit{Governance}:
\begin{itemize}
\item {\bf Open-source Community.} In this case, open communities of developers (following open-source principles) and validators (very often in coordination with the blockchain foundation) coordinate upgrades and technical adjustments of the blockchain. For example, Bitcoin is mainly maintained by a team of core developers who in coordination with miners agree on changing parameters or other settings of the Bitcoin network. Also Ethereum and Hyperledger (backed up by the Linux Foundation) follow an open-source community model.
\item {\bf Technical}. Since the blockchain technology is very versatile and can be applied to many business cases, enterprises with a strong technical strength (e.g., IBM and Microsoft) have proposed themselves as technical solution providers for blockchain architectures (proprietary hardware and software systems and basic services). In these cases, the technical rules of blockchain governance are dictated by the companies according to their business goals. For example, in 2015 Microsoft collaborated with ConsenSys to create the Ethereum blockchain technology service and took it as part of the Microsoft Azure service (EBaaS) to provide distributed ledger technology trials for enterprise customers, partners and developers. Moreover, in order to protect their proprietary blockchain architectures, these companies generally apply also for patents. According to \cite{WEF_Report} 2,500 patents on this topic have been filed from 2014 to 2016.
\item {\bf Alliance}. This is the blockchain governance model proposed by industry consortia (e.g., B3i, R3) composed of companies with common business or technological progress demands. The alliance mode has the scope to sharing technology platforms to build common business models and standards. Only companies that meet certain criteria (e.g., payment of the fees, qualification of the organisation) are legitimised to collaborate to set technical rules of blockchain governance. Those companies join together to promote commercial and technological progress in the area of blockchain under mutual benefit and common contribution.
\end{itemize}
\subsection{Script Language}\label{script}
Widespread programming languages are Turing-complete, which in formal terms means that it is possible to implement an algorithm in them to simulate any Turing machine. These are therefore general-purpose languages, in which arbitrary computations can be performed. Languages that are not of this kind are so for design reasons, which aim at preventing specific behaviours of code execution, like undefined termination.
Blockchain systems allow one to specify the conditions under which certain information (e.g.~transactions) will be included in the public record. These conditions must be specified in an algorithmic manner, and in some contexts are termed \textit{smart contracts}.
These algorithms are elicited in a \textit{scripting language} designed purely for this purpose.
Therefore, the intended flexibility given to the users with respect to the scope of the algorithm tremendously affects, on the one hand, the degree of freedom to create conditions for some actions to occur and, on the other, the hypothetical computational effort that may be necessary to assess whether a particular condition is fulfilled or not \cite{kim2017perspective}.
It is worth remarking here that how limited the scope of the scripting language is constitutes another design decision developers must carefully make before the implementation of the blockchain, as abrupt changes (or bugs) may invalidate the logic of particular transactions.
We identify four possible layouts for \textit{Script Language}:
\begin{enumerate}
\item {\bf Turing Complete.} Ethereum refers to a suite of protocols that define a platform for decentralised applications. With respect to scripting languages, at one end of the spectrum, the Ethereum Virtual Machine (EVM) can execute code of arbitrary algorithmic complexity. In the terms described above, Ethereum is ``Turing complete'', because developers can create applications which run on the EVM. Furthermore, Counterparty also uses Ethereum's entire smart contract platform to enable users to write Turing-complete smart contracts.
It has been pointed out that there exist scalability and security concerns regarding the usage of Turing-complete scripting languages in blockchain systems \cite{atzei2017survey}. As of this writing, these have not been resolved.
\item {\bf Generic Non-Turing Complete}. When designing Bitcoin, a decision was made to keep the scripting language limited in scope, to allow for a low impact of these calculations on the efficiency of the system. It is therefore a non-Turing complete language, and most blockchain implementations have followed this path. These languages have no connectivity to so-called ``oracles'', which would allow obtaining data from sources exogenous to the blockchain.
\item {\bf Application-specific Non-Turing Complete}. There are some non-Turing complete languages that are more expressive than the generic ones and purposely designed for certain cases. By restricting the language to be only able to write programs relevant to specific limited cases, the potential outputs of those programs become predictable. This allows those outputs to be queried and easily analysed. One example is the Digital Asset Modelling Language (DAML), which is designed to codify only financial rights and obligations for execution in private networks. DAML is also more expressive than Bitcoin's script language and easier to read for a non-technical audience.
\sloppy \item {\bf Non-Turing Complete + External Data.} There exists a third category, barely used so far, that (while keeping the nature of the scripting language non-Turing complete) allows for the existence of oracles. These oracles are considered trustful sources and add a layer of simplification to the validation to be performed by the language, empowering it beyond Turing completeness (as long as the oracles are reliable). This layout is then ``non-Turing Complete + External Data''.
\end{enumerate}
\section{Security and Privacy}\label{secpriv}
{The recent evolution and new implementations of blockchain systems bring risks, both technical and operational, associated with security and privacy. Thus, we group together security and privacy as two interrelated faces of the same problem.
Similarly, ISO TC 307 has created a dedicated Working Group on ``Security and Privacy'' \cite{iso.org}.}
{Security of blockchain systems is a matter of significant concern. Cryptocurrencies, the most widely deployed application of blockchain systems, have suffered from cyber attacks which became possible because of sensitive data mismanagement and the flawed design of the systems \cite{lin2017survey}. Without going into the detailed distinction between ``risks'', ``threats'', ``attack surfaces'' and ``vulnerabilities'', security of blockchain systems concerns: 1) Information mismanagement (alteration, deletion, destruction, disclosure, etc.); 2) Implementation vulnerabilities (including cryptographic mechanism implementation vulnerabilities, run-time leakage of information, etc.); 3) Cryptographic mechanisms mismanagement (including the use of weak algorithms, key disclosure); 4) User privileges mismanagement.
For a recent comprehensive survey specifically targeting to the security and privacy aspects of Bitcoin and its related concepts we refer the readers to \cite{conti2017survey}.
With regard to privacy we refer to the
``freedom from intrusion into the private life or affairs of an individual when that intrusion results from undue or illegal gathering and use of data about that individual'' (ISO 25237 and other ISO standards). These privacy principles apply to any ICT system containing or processing PII, including blockchain systems.}
Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Security \& Privacy:
\begin{itemize}
\item[1] Data Encryption
\item[2] Data Privacy
\end{itemize}
\subsection{Data Encryption}\label{dataencryp}
{By \textit{Data Encryption} we refer to cryptographic primitives. To ensure authenticity, integrity and the order of events, cryptographic primitives (cryptographic algorithms) are used. For example, the Bitcoin blockchain uses the ECDSA digital signature scheme for authenticity and integrity, and the SHA-2 hash function for integrity and the order of events. Hash functions are also commonly used as part of the Proof-of-Work consensus mechanism.}
We identify two major layouts for \textit{Data Encryption}:
\begin{enumerate}
\item {\bf SHA-2.} SHA stands for Secure Hash Algorithm. In its two incarnations, SHA-256 and SHA-512, SHA (originally developed by the National Security Agency, USA) is the most widely used family of hash functions \sloppy \cite{crosby2016blockchain, harvey2016cryptofinance}, having first been used in Bitcoin. When used to hash transactions, it requires a piece of information from the issuer, i.e.~the public key, for the validation to take place \cite{meiklejohn2015privacy}.
\item {\bf ZK-SNARKs.} Zero-Knowledge Succinct Non-interactive Arguments of Knowledge are a newer technology where no data whatsoever has to be provided to validate a specific hash \cite{ben2014scalable}. The hashed message, together with the encrypted one, is sufficient as a proof to generate the validation. This provides much stronger anonymisation of individual information.
\end{enumerate}
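As an illustration of the first layout, Bitcoin-style integrity checks apply SHA-256 twice (``double SHA-256'') to the serialised transaction; the digest has a fixed 256-bit length regardless of the input size. The sketch below is illustrative only: the helper name \texttt{double\_sha256} and the sample payload are our own choices, not part of any client.

```python
import hashlib

def double_sha256(payload: bytes) -> bytes:
    """Bitcoin-style hash: SHA-256 applied twice to the payload."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()

# Hypothetical serialised transaction payload, for illustration only.
raw_tx = b"alice->bob:1BTC"

digest = double_sha256(raw_tx)
print(digest.hex())  # 32-byte (256-bit) identifier, fixed length regardless of input size
```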
\quad
\subsection{Data Privacy}\label{dataprivacy}
Although public/private key infrastructures and other measures like hashing functions should
ensure that only the intended recipient can read the message and have access to the content of the transaction, research shows that blockchain transactions (e.g., in Bitcoin) can be linked together in order to extract additional information and eventually also the identity of the participants \cite{tasca2016evolution}. Indeed, there exists an inevitable tradeoff between a decentralised peer-validation system and the security and privacy of information.
In this regard, several alternative solutions have been proposed to ``encrypt'' the data in such a way that, even though computations and transactions occur in plain sight, the underlying information is kept completely obfuscated. Obfuscation is a way of turning any program into a ``black box'' equivalent to the original program: it runs the same ``internal logic'' and provides the same outputs for the same inputs, but information on the data and processes is inaccessible. Of course, there exists a strong interrelation between \textit{Data Privacy} and \textit{Data Encryption}.
According to the solutions proposed so far to enhance \textit{Data Privacy}, we identify two possible layouts:
\begin{enumerate}
\sloppy \item {\bf Built-in data privacy}. Under built-in data privacy we include all those blockchains that provide obfuscation of information by default. For example, ZeroCash uses built-in zero-knowledge cryptography to encrypt the payment information in the transactions {\cite{sasson2014zerocash}}. Although ZeroCash payments are published on a public blockchain, the sender, recipient, and amount of a transaction remain private. Alternatively, blockchains like Enigma (a project that seeks to implement the secret-sharing DAO concept) \cite{wit2017dao} use built-in secure multi-party computation guaranteed by a verifiable secret-sharing scheme. In this case, the data can be split among $N$ parties in such a way that $M < N$ of them need to cooperate in order to either complete the computation or reveal any internal data in the program or the state, while $M-1$ parties cannot recover any information at all (which implies the need to trust that a majority of the participants are honest). Finally, CORDA by R3 proposes a node-to-node ($N$-to-$N$) system characterised by encrypted transactions where only the parties involved in the transaction have access to the data {\cite{hearn2016corda}}. This is suitable for financial transactions
where a high degree of confidentiality is required. Third parties like central banks or other market authorities may have access to the data by invitation only.
\item {\bf Add-on data privacy}. In this case, pseudonymous or public blockchains must resort to external solutions in order to obfuscate the information. One method is a \textit{mixing} service like Coinjoin. The principle behind this method is quite simple: several transactions are grouped together so as to become a unique $M$-to-$N$ transaction. If, for example, Alice wants to send one coin to Bob, and Carla wants to send one coin to David, a mixing transaction could be established whereby the addresses of Alice and Carla are both listed as inputs, and the addresses of Bob and David are listed as outputs, in one unique transaction. Thus, when inspecting the $2$-to-$2$ transaction from outside, it is impossible to discern who is the sender and who the recipient {\cite{Mixingservice, Coinjoin}}. As an alternative to the \textit{mixing} service, \textit{secret sharing} allows data to be stored in a decentralised way across $N$ parties such that any $K$ parties can work together to reconstruct the data, but $K-1$ parties cannot recover any information at all. Further add-on data privacy tools are \textit{ring signatures} {\cite{noether2016ring}}
and \textit{stealth addresses}
{\cite{moser2017anonymous}},
which hide the recipient of a transaction and can be used by any blockchain. {Ring signatures -- first introduced by \cite{rivest2001leak} --}
and their variant (linkable ring signatures) allow a transaction to be hidden within a set of others' transactions. In this case the transaction is tied to multiple senders' private keys, but only one of them is the initiator. Thus, the verifier may only identify that one of them was a signer, but not exactly who. In the case of stealth addresses, a receiver generates a new dedicated address and a ``secret key'' and then sends this address to someone from whom he wants a payment. The sender uses the address generated by the receiver plus a ``nonce'' (one-time random number) in order to generate the address he/she will send funds to. The sender communicates the nonce to the receiver, who can unlock the address by using the nonce and the secret key generated earlier. {Monero (https://getmonero.org/) is an example of a blockchain that aims to achieve privacy through the use of traceable ring signatures and stealth addresses}.
\end{enumerate}
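The $K$-of-$N$ secret-sharing idea mentioned above can be sketched with Shamir's classical scheme: a random polynomial of degree $K-1$ hides the secret in its constant term, and any $K$ evaluation points recover it by Lagrange interpolation at $x=0$. The prime modulus and the API below are illustrative choices of ours, not parameters taken from Enigma or any other project.

```python
import random

P = 2**127 - 1  # prime field modulus; an illustrative choice, not a project's actual parameter

def split_secret(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner's rule, evaluated mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0, recovering the constant term."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split_secret(42, n=5, k=3)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover the secret: 42
```

With fewer than $K$ shares, every candidate secret remains equally likely, which is the information-theoretic guarantee behind the "$K-1$ parties cannot recover any information at all" statement above.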
\section{Codebase}\label{code}
The codebase of a blockchain technology delivers information about which challenges a developer could face and what kind of changes the underlying programming language could undergo. Therefore the main component ``Codebase'' is essential to align and increase the efficiency of blockchain-related IT architectures. Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Codebase:
\begin{itemize}
\item[1] Coding Language
\item[2] Code License
\item[3] Software Architecture
\end{itemize}
\subsection{Coding Language}\label{cod-lang}
Coding language illustrates the interconnectivity of programming languages of the blockchain technologies.
We identify two possible layouts for \textit{Coding Language}:
\begin{enumerate}
\item {\bf Single Language.} Bitcoin has released Bitcoin Core version 0.13.1 with the underlying coding language C++. As Bitcoin is open source, implementations in different languages (like Java) have appeared, although they are much less popular than the original codebase.
\item {\bf Multiple Languages.} Ethereum uses C++, Ethereum Virtual Machine Language and Go, which enables more interaction with other languages. Stellar maintains JavaScript, Java, and Go-based SDKs for communicating with Horizon. There are also community-maintained SDKs for Ruby, Python, and C-Sharp.
\end{enumerate}
\quad
\subsection{Code License}\label{license}
The Code License illustrates the possibility of changes to the source code of the underlying technology.
We identify three possible layouts for \textit{Code License}:
\begin{enumerate}
\item {\bf Open Source.} Regardless of the exact licence used for specific projects, we refer only to the openness of the source code as the differentiating factor. Bitcoin core developers have continuously licensed the source code under the MIT licence. Counter-intuitively, a permissive licence like the MIT one (under which other developers can take the source code and fork it) eventually prevents multiple implementations. It also allows for continued development, larger code growth and adoption at a faster pace. Furthermore, Ripple and Stellar have licensed their code under the ISC License, another permissive licence.
\item {\bf Closed Source.} For private implementations of blockchain-based systems, the source code is not necessarily openly distributed. {Just as an example, most of the blockchains running on the Ethereum Enterprise Alliance, rather than on the public Ethereum blockchain, use closed-source code.}
In this case the code may be kept out of reach of users, at the risk of unaddressed bugs or unreported characteristics that may violate the expected conditions of use and functioning.
\end{enumerate}
\subsection{Software Architecture}\label{Consensus-Engine}
The {\it Software Architecture} refers to the high-level structures of the blockchain system. Each structure comprises software elements, relations between them, and the properties of both elements and relations. The choice of the software architecture is very important in order to better manage changes once implemented. Software architecture choices select specific structural options from among the possibilities available for software design.
We identify two possible layouts for \textit{Software Architecture}:
\begin{enumerate}
\item {\bf Monolithic Design}. In this case, all the aspects of a decentralised ledger (P2P connectivity, the ``mempool'' broadcasting of transactions, the criterion for consensus on the most recent block, account balances, the nature of smart contracts, user-level permissions, etc.) are handled by a blockchain built as a single-tier software application without modularity. \sloppy Examples of blockchains with a monolithic design include Bitcoin and Ethereum. These architectures suffer from a lack of extensibility in the long run.
\item {\bf Polylithic Design}. The polylithic approach decouples the consensus engine and P2P layers from the details of the application state of the particular blockchain application.
For example, in Tendermint the blockchain design is decomposed: it offers a very simple API
between the application process and its application-agnostic ``consensus engine'' (Tendermint Core), which makes it possible to run Byzantine fault tolerant applications written in any programming language, not just the one the consensus engine is written in. Hyperledger {Fabric} also follows a polylithic design, as it is composed of interchangeable modules representing different components of blockchain technology.
\end{enumerate}
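The decoupling behind the polylithic layout can be illustrated with a toy interface in the spirit of Tendermint's application/consensus-engine split. The class and method names below are our own simplification for exposition, not Tendermint's actual ABCI API: the ``engine'' only orders transactions and forwards them to any pluggable application.

```python
from abc import ABC, abstractmethod

class Application(ABC):
    """Application state machine; the consensus engine is agnostic to its logic."""

    @abstractmethod
    def deliver_tx(self, tx: bytes) -> bool:
        """Apply a transaction decided by consensus; return its validity."""

    @abstractmethod
    def commit(self) -> bytes:
        """Finalise the block and return a state fingerprint."""

class KVStore(Application):
    """A trivial key=value application plugged into the generic engine."""

    def __init__(self):
        self.state = {}

    def deliver_tx(self, tx: bytes) -> bool:
        try:
            key, value = tx.decode().split("=", 1)
        except ValueError:
            return False  # malformed transaction is rejected
        self.state[key] = value
        return True

    def commit(self) -> bytes:
        return repr(sorted(self.state.items())).encode()

def run_block(app: Application, txs):
    """The 'consensus engine' side: order transactions, feed them to the app."""
    results = [app.deliver_tx(tx) for tx in txs]
    return results, app.commit()

results, fingerprint = run_block(KVStore(), [b"a=1", b"bad", b"b=2"])
print(results)  # [True, False, True]
```

Swapping `KVStore` for any other `Application` subclass leaves the engine untouched, which is the extensibility advantage the polylithic design claims over the monolithic one.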
\section{Identity Management}\label{identity-}
The main component \textit{Identity Management} ensures secure access to sensitive data and establishes a suitable governance model for the blockchain. {This is a complex matter, as different levels of authority, accountability and responsibility are attached to different types of participants (e.g.\ users, administrators, developers, validators, etc.). Generally, the set of rules is defined and enforced through mechanisms intrinsic to the system itself (on-chain governance)}.
{The subcomponents also eventually determine the concept of digital identity that users end up having within the systems}. Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Identity Management:
\begin{itemize}
\item[1] Access and Control Layer
\item[2] Identity Layer
\end{itemize}
\subsection{Access and Control Layer}\label{access-control}
When establishing the right governance structure for a blockchain it is important to consider the ledger construct. Depending on its purpose, the ledger could be run and governed by a central authority, or it could be run in a decentralised fashion according to a set of governance rules adhered to and enforced by participants on the blockchain network. The governance structure determines
the authorisation and control policy management functions. Those rules grant users permission to access or use blockchain resources; they are a set of rules that manage user, system and node permissions and must be followed in security-related activities.
Blockchains may grant different permissions according to which access to and control of data are allowed. The distinguishing features must answer the following questions:
\begin{itemize}
\item Which users have ``read'' access?
\item Which users have ``write'' access?
\item Is there anyone who can ``manage consensus'' (i.e., update and maintain the integrity of the ledger)?
\end{itemize}
According to the set of governance rules, we may have different system designs that reply to the above questions in a different way in order to better serve either a public or a private interest of either a \textit{general}
(like in the case of Ethereum) or a \textit{special} (like in the case of Corda) purpose.
On one side, private blockchains are generally those with constrained ``read/write'' access alongside a consensus algorithm which allows only a pre-selected group of people to contribute to and maintain the blockchain integrity. Instead, public blockchains do not restrict ``read/write'' access or participation in the consensus algorithm for any given set of participants. Nevertheless, this does not mean that certain permission structures cannot be implemented as part of a specific application.
Although different variations are possible, the authority to perform transactions on a blockchain generally belongs to one of the following main models of the \textit{Access and control Layer}\cite{guegan2017public}:
\begin{enumerate}
\item {\bf Public blockchain}. In this case, there is no preference in access or in managing consensus. All participants (nodes)
have ``read/write'' access and can contribute, without any control, to the update and management of the ledger. An example of a blockchain in this area is Bitcoin, where every participant can choose just to use the blockchain to exchange Bitcoins (or other data on top of it, in general by means of third-party technologies), run a full node, or even become a miner and participate in the process of transaction validation.
\item {\bf Permissioned Public blockchain}. In this case ``read'' access is enabled for all users; however, ``write'' access and/or ``consensus management'' require permission from a pre-selected set of nodes. Ripple belongs to this group, as to validate transactions a participant needs to be part of the so-called \textit{Unique Node List}. Other examples include Ethereum and Hyperledger Fabric, which is used for the exchange of tangible (real estate and hardware) and intangible (contracts and intellectual property) assets between enterprises.
\item {\bf Permissioned Private blockchain}.
In this case, ``read/write'' and ``consensus management'' rights can only be granted by a centralised organisation.
An example is Monax (formerly known as Eris).
\end{enumerate}
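The three access-and-control layouts above can be summarised as a small machine-readable table. The encoding below is our own, chosen only to make the read/write/consensus distinctions explicit; it is not taken from any standard.

```python
# Who may perform each action under the three layouts described in the text.
ACCESS_MODELS = {
    "public":               {"read": "anyone",  "write": "anyone",       "consensus": "anyone"},
    "permissioned_public":  {"read": "anyone",  "write": "pre-selected", "consensus": "pre-selected"},
    "permissioned_private": {"read": "granted", "write": "granted",      "consensus": "granted"},
}

def can_read_without_permission(model: str) -> bool:
    """True if the layout grants unrestricted read access."""
    return ACCESS_MODELS[model]["read"] == "anyone"

print([m for m in ACCESS_MODELS if can_read_without_permission(m)])
# ['public', 'permissioned_public']
```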
\subsection{Identity Layer}\label{identity}
The onboarding and offboarding of nodes/entities to blockchain networks are handled differently by the various software solutions.
{By identification we mean the capability to identify an entity uniquely in a given context. A digital identity can be defined as a set of identifying attributes for an entity that together enable the unique identification of the entity in a context (UID). A vital part of any identity system (and of most information systems) is that a UID is managed throughout the entity's lifecycle to protect it from negligence and fraud, and to preserve the UID's uniqueness. A UID can then be assigned to the identity and used to link or bind the entity to the claimed identity and to any digital credential (software or hardware) issued to the entity. This digital credential acts as a trusted proxy for the physical or logical entity and is used to support a wide range of personal and trust-related functions such as authentication, encryption, digital signatures, application logins and physical access control.}
AML and KYC procedures -- generally required when processing personal data, e.g.\ medical data, bank information or other personal data -- are the key aspects to consider when looking at the \textit{Identity Layer}.
We identify two possible layouts:
\begin{enumerate}
\item {\bf KYC/AML.}
{Compliant blockchains have the ability to validate organisations and their attribute data against authoritative sources to ensure the quality of the data written to the blockchain and linked to identifiers in the blockchain. An example is} Stellar, which requires all integrators to implement a Know-Your-Customer (KYC)/Anti-Money-Laundering (AML) identity verification process to increase the transparency of the Stellar network participants. Furthermore, Ripple forces its financial services partners to implement an identity layer to verify user information. The financial services partners have to perform due diligence, depending on the requirements they must fulfil.
\item {\bf Anonymous.} {A common misunderstanding about the anonymity level of the Bitcoin network is that the majority of users do not distinguish between anonymity and pseudo-anonymity. The Bitcoin protocol has no identity layer to identify its users. These circumstances could favour the misuse of Bitcoins and money-laundering activities through the blockchain network; see \cite{maurer2016survey} for approaches to anonymity in Bitcoin and other cryptocurrencies. Fergal Reid and Martin Harrigan \cite{reid2013analysis} were able to demonstrate that several pseudonymous
addresses can be linked to one single user. See also \cite{tasca2016evolution} for a Bitcoin transaction-path driven user identification method.}
\end{enumerate}
\section{Charging and Rewarding System}\label{reward}
Blockchain systems incur operational and maintenance costs that are generally absorbed by the participants in the network. Different kinds of cost models are applied according to: 1) the architectural configuration design; 2) the governance system; 3) the data structure and the computation required on-chain. A cost item
common to the wide majority of blockchains is the verification cost, which is required to sustain the validation process of the transactions that compete to be appended (and never removed) to the ledger.
The potential financial costs incurred when taking part in a blockchain platform require an incentive scheme that maintains consistency of the cost structure across the different stakeholders. Figure \ref{taxonomy-Matrix} illustrates the subcomponents forming the component Charging and Rewarding System:
\begin{itemize}
\item[1] Reward System
\item[2] Fee System
\begin{itemize}
\item[2.1] Fee Reward
\item[2.2] Fee Structure
\end{itemize}
\end{itemize}
\subsection{Reward System}
This subcomponent illustrates the rewarding mechanisms automatically put in place and triggered by the systems in order to compensate active members contributing to data storage or transaction validation and verification.
We identify two possible layouts for \textit{Reward System}:
\begin{enumerate}
\item {\bf Lump-sum Reward.} Individuals taking part in the storage, validation or verification process (e.g., in Bitcoin verification is rewarded only to users called \textit{miners}) may be rewarded for their action. For example, in Bitcoin, the first transaction in each block is called the \textit{coinbase}, and its recipient is the user (or users) who created the block, which in this regard is a set of transactions verified by said user(s).
The lump-sum reward can be fixed, like in Enigma, or variable, like in Bitcoin.
\item {\bf Block + Security Reward.} In other \sloppy blockchain-based technologies, like Ethereum, the rewarding system includes, besides the block reward, a reward for including in the validation forked blocks that are still valid. The design idea is to incentivise cross-validation of transactions (crucial in a setting where validation can be arbitrarily costly) \cite{timmerman2017ethereum}.
\end{enumerate}
\quad
\subsection{Fee System}\label{fee-system}
Other kinds of rewards are those provided directly by the users to other participants of the system when launching any request in the network for storage, data retrieval, or computation and validation. With regard to this, we identify two sub-subcomponents: \textit{Fee Reward} and \textit{Fee Structure}.
\subsubsection{Fee reward}\label{fee-reward}
\textit{Fee Reward} describes the nature of the fees that users are required to contribute when using a blockchain. {The fee system has been shown to play an important role in the way verifiers do \cite{moser2015trends}} and may \cite{carlsten2016impact} behave. This kind of design-time consideration ought not to be neglected, as is usually the case.
We identify three possible layouts for \textit{Fees Reward}:
\begin{enumerate}
\item {\bf Optional Fees.} In Bitcoin and related technologies \cite{FeeMining}, users can optionally pay a voluntary fee for the validation process. This fee is optional, but it is assumed that the larger the fee, the shorter the time it will take for the transaction to be added to a block, as miners will be more incentivised to do so. Moreover, given that the coinbase reward halves approximately every four years, the reference Bitcoin client currently refuses to relay transactions with zero fees.
\item {\bf Mandatory Fees.} Some systems like Stellar force all users to include fees in any transaction added into the system.
\item {\bf No Fees.} In comparison, the Hyperledger Fabric is a blockchain solution for businesses, which combines a permissioned network and an identity layer without any transaction fees.
\end{enumerate}
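The halving schedule mentioned under \textit{Optional Fees} can be made concrete: in Bitcoin, the block subsidy starts at 50 coins and halves every 210{,}000 blocks (roughly four years at one block per ten minutes). The snippet below computes the subsidy at a given block height; the function name is our own.

```python
HALVING_INTERVAL = 210_000  # blocks between halvings (~4 years at 10-minute blocks)

def block_subsidy(height: int) -> float:
    """Coinbase subsidy, in BTC, for a block at the given height."""
    return 50.0 / (2 ** (height // HALVING_INTERVAL))

for h in (0, 210_000, 420_000, 630_000):
    print(h, block_subsidy(h))  # subsidies: 50.0, 25.0, 12.5, 6.25
```

As the subsidy shrinks, transaction fees are expected to carry a growing share of the miners' compensation, which is why fee design matters for long-run security.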
\subsubsection{Fee structure}\label{fee-structure}
When provided by the system, fees can follow either a fixed or a variable structure.
There are two alternative layouts for \textit{Fees Structure}:
\begin{enumerate}
\item {\bf Variable Fees.} In this case, the fee is somehow linked to the ``size'' of the request. In Bitcoin, the larger the transaction size, the higher the fee the user shall pay in order to compensate for taking up space inside the block. Miners usually include transactions with the highest fee/byte first. The user can decide how many Satoshis (0.00000001 Bitcoins) he/she wants to pay per byte of transaction. For example, if the transaction is 1,000 bytes and the user pays a fee of 300,000 Satoshis, he/she will be in the 300 Satoshis/byte tier (300,000/1,000 = 300). At the time of writing, this implies that the transaction will be included within the next two blocks (i.e., within 20 minutes).
However, to avoid queuing, the user can increase the fee. The fastest and cheapest transaction fee is currently 360 Satoshis/byte. For an average transaction size of 226 bytes, a fee of 81,360 Satoshis is currently the cheapest fee that gets the transaction included in the first available block without delays. Other blockchains also apply variable fees and follow rules similar to Bitcoin's.
\item {\bf Fixed Fees.} In this case, the fee is linked to the request itself, not to its ``size''. For example, in Enigma every request in the network for storage, data retrieval, or computation has a fixed price, similar to the concept of Gas in Ethereum. However, since Enigma is a Turing-complete system, the fee can differ depending on the specific request. Another example of a blockchain with a fixed transaction fee is Peercoin, which requires a fixed 0.01 PPC per kilobyte.
\end{enumerate}
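The fee arithmetic in the variable-fee layout is simple enough to script; the helpers below reproduce the two figures quoted in the text (the function names are our own).

```python
def fee_rate(fee_satoshis: int, tx_bytes: int) -> float:
    """Fee density in Satoshis per byte; miners sort candidate transactions by this."""
    return fee_satoshis / tx_bytes

def total_fee(rate_sat_per_byte: int, tx_bytes: int) -> int:
    """Total fee needed to hit a target rate for a transaction of a given size."""
    return rate_sat_per_byte * tx_bytes

print(fee_rate(300_000, 1_000))  # 300.0 Satoshis/byte, as in the 1,000-byte example
print(total_fee(360, 226))       # 81360 Satoshis for an average 226-byte transaction
```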
\section{Conclusion}
In the 21st century, blockchain technologies will affect all business areas: financial services \cite{alvseike2017blockchain,scott2016can,evans2015bitcoin,quintana2014merger}, IoT \sloppy \cite{boudguiga2017towards,dorri2017towards}, consumer electronics \cite{andrews2017utilising}, insurance \cite{mainelli2014chain, stellnbergerinsurance}, the energy industry, logistics \cite{badzar2016blockchain,hackius2017blockchain}, transportation, media \cite{kotobi2017blockchain}, communications \cite{plant2017implications}, \cite{antorweep}, entertainment, healthcare \cite{kuo2017blockchain}, automation, and robotics. After the advent of the Internet, blockchain currently represents the most prominent technology, and it will shape the upcoming products and services in every industry field.
Since the introduction of Bitcoin in 2009, the awareness of blockchain technologies has considerably increased.
During the initial phase, the first mover regarding the adoption of blockchain was the financial industry. This is explained by the fact that blockchain enables cost reduction and increases the efficiency in several business processes (both internal and external) for financial institutions.
An example of the big impact of blockchain in the financial industry concerns global payment networks, which involve money transactions in exchange for goods, services or legal obligations between individuals or economic entities. Beyond payments, blockchain allows real-time settlement, which reduces operational costs for banks. Furthermore, the immutability of the blockchain reduces the risk of fraud; as a consequence, banks can use sophisticated smart contracts to capture digital obligations and to eliminate operational errors. Global payments are just a fraction of the overall use cases in the financial industry. Moreover, many other industries, including the public sector, are now looking at blockchain-enabled solutions for their own processes.
This tremendous trend is causing a proliferation of multiple blockchain {architectures} which often are not interoperable and are built according to different engineering designs. Lately, software architects, companies and regulators have realised the need for standardisation of some of their components.
This is becoming a necessary step for blockchain in order to: 1) gain global adoption and compatibility, 2) create cross-industry solutions, 3) provide cost-effective solutions. As always with standardisation processes, their creation must strike a necessary equilibrium between different parties. But in this particular case, the open source community that brought forward this disruptive technology and that continuously develops most implementations should play a crucial role, as herald of the advancement of blockchain.
{Based on the review of the current literature on blockchain technologies, our work is an early-stage analysis across existing software architectures with the aim of proposing a taxonomy: a reference architectural model for blockchains and their possible configurations. Based on component-based design, the blockchain taxonomy decomposes blockchains into individual functional or logical components and identifies the possible layouts of each. The blockchain taxonomy is proposed to assist in the exploration of design domains and in the implementation, deployment and performance measurement of different blockchain architectures. Figure \ref{taxonomy-Matrix} illustrates the blockchain taxonomy tree resulting from our analysis.}
{Our work sheds light on the current proliferation of non-interoperable blockchain platforms and on the need for, and current discussions about, blockchain standards. Although our work contributes to the ongoing efforts on setting blockchain standards, we do not conclude by saying that we need a set of standards \textit{now}. This process generally takes several years to produce concrete solutions (even 10 years for complex subjects). Therefore, we think that our taxonomy represents a timely and honest intellectual exercise to be used as preliminary supporting material by all those interested in reducing blockchain complexity. At the same time, we are aware that our taxonomy tree, although hopefully very useful, is very preliminary and likely the first version of subsequent, more complex evolutions.}
\paragraph{Acknowledgements:} PT and CJT thank Harvey R. Campbell for his
invaluable comments on a previous version of this paper. They also thank Alessandro Recchia and Thayabaran Thanabalasingham for their support and contribution.
PT acknowledges the University College London for financial support through the EPSRC program EP/P031730/1.
CJT acknowledges the University of Zurich for financial support through the University Research Priority Programme on Social Networks.
\section{Introduction}
Several effects of numerical integration have been studied in various
aspects. In particular, Gaussian quadrature rules for general
domains go back to the monograph by
Stroud and Secrest \cite{stroud1966gaussian}, and
Herbold {\it et al.} \cite{herbold1969effect,
herbold1971effect}
reported an extensive analysis of the effects of numerical integration
on variational equations.
Ciarlet and Raviart \cite{ciarlet1972combined} investigated the
numerical effects in finite element methods, and Ciarlet
\cite{ciarlet1991finite} describes in detail the effects of numerical quadrature in
finite element approximations.
More recently, such effects were extensively investigated
in the $p$--version finite element method by Banerjee and Suri
\cite{banerjee1992effect}, and in the approximation of eigenvalues
by Banerjee and Osborn \cite{banerjee1989estimation, osborn1990estimation}.
Banerjee and Babu{\v{s}}ka {\it et al.} \cite{banerjee1989estimation, babuska2011effect} also studied such effects in the approximation
of linear functionals, including eigenvalue approximations.
Meanwhile, it is
well known that the lowest-degree conforming finite element pairs
lead to unstable numerical solutions in the numerical
simulation of fluid and solid mechanics.
A proper choice of nonconforming finite element spaces in the
approximation of vector variables heals this
kind of instability \cite{crouzeix-raviart, fortin-soulie-quad-nc-2d,
fortin3d, han84, linke2016robust, rannacher-turek, turek, cdy, cdssy} in the
approximation of incompressible fluid flows. Contrary to
simplicial nonconforming elements, most quadrilateral nonconforming
elements contain polynomials in addition to those in $P_k$
\cite{achchab2014simple, achchab2018new, arbogast1995implementation,
dssy-nc-ell, han84, linke2016robust, rannacher-turek, turek, zhou2016new}, which require
additional quadrature points, although there are some
quadrilateral elements consisting of $P_k$ only
\cite{park-sheen-p1quad, altmann-carstensen, zeng2020optimal}.
In this paper, we limit our interest to lowest-order
quadrilateral nonconforming elements with four DOFs.
In \cite{rannacher-turek}, Rannacher and Turek introduced the rotated $Q_1$
nonconforming elements consisting of $P_1(\hat K)\oplus
\operatorname{Span}\{\hat{x}_1^2-\hat{x}_2^2\}$
on the reference domain $\hat K=[-1,1]^2$
with two types of degrees of freedom:
(1) the four mean edge integral DOFs and
(2) the four edge-midpoint value DOFs. The two types of DOFs lead to
different numerical results. The use of
edge-midpoint values is cheaper and simpler than that of
mean edge integral values in calculating the basis functions.
Douglas {\it et al.} modified the Rannacher--Turek element by
replacing the quadratic polynomial with a quartic one, for which
the two types of DOFs are identical on rectangular meshes
\cite{dssy-nc-ell}. We will call the element of \cite{dssy-nc-ell} the
DSSY element; it fulfills the property $\frac1{|e|}\int_e \phi\,ds =
\phi(m)$ for every edge $e$ with midpoint $m,$ which will be
coined the MVP (mean value property) throughout the paper.
For genuinely quadrilateral meshes, a class of nonparametric
DSSY elements \cite{jeon2013class} was introduced.
Recently, an interesting observation on quadrature rules for
nonconforming quadrilateral elements was made by Meng, Cui, and Luo (hereafter
abbreviated MCL), and a new type of nonconforming element was
introduced in \cite{meng2018new}. The MCL element consists of
$P_1(\bar K)\oplus \operatorname{Span}\{\bar{x}_1\bar{x}_2\}$ on each MCL-type quadrilateral
$\bar K,$ which will be explained in the following section. A
simple and effective three-point quadrature rule on $\bar K$ is then
defined.
In \cite{meng2018new}, the basis functions are at most of degree two,
so that a quadrature formula of degree two suffices.
However, the class of DSSY finite elements contains higher-degree polynomial
bases in order to fulfill the MVP, and thus the quadrature formula of
\cite{meng2018new} does not guarantee optimal convergence.
Our aim in this paper is to investigate whether it is possible
to define similar two-point and
three-point quadrature rules for the class of DSSY elements. We
construct a class of nonparametric DSSY elements on MCL-type
quadrilaterals. It
turns out to be possible to find a two-point rule and a three-point
rule of precision 1 at one stroke under the assumptions of equal weights and
of nodes placed geometrically symmetrically with respect to the barycenter.
We show
optimal convergence in the broken energy norm under the condition that the
mesh sizes are sufficiently small.
The organization of the paper is as follows. In the next section, we
set up notation and preliminaries, and then
briefly review some quadrilateral nonconforming elements with
four DOFs. In Section \ref{sec:npDSSY}, we introduce a class of
nonparametric DSSY elements on quadrilaterals of the type
used by Meng {\it et al.} \cite{meng2018new}. Then in Section
\ref{sec:effects} the effects of numerical integration are analyzed.
Then we construct two-point and three-point quadrature rules
in Section \ref{sec:const-formula}. In the final section we provide
some numerical results which confirm the theories developed so far.
\section{Quadrilateral nonconforming elements}\label{sec:review-nonconform}
\subsection{Notations and Preliminaries}
Let $\Omega$ be a simply-connected polygonal domain in $\mathbb{R}^2$
and $(\mathcal{T}_h)_{h>0}$ be a family of shape regular
convex quadrilateral triangulations of $\Omega$ with
$\max_{K \in \mathcal{T}_h} \operatorname{diam}(K)=h.$
If $D$ is a polygonal domain or a triangulation,
denote by $\mathcal{E}(D), \mathcal{E}^i(D),\mathcal{E}^b(D)$ the set of all edges, interior
edges,
and boundary edges, respectively, in $D;$ also
by $\mathcal V(D), \mathcal V^i(D),\mathcal V^b(D)$ the set of all vertices, interior
vertices,
and boundary vertices, respectively, in $D.$ If $D=\mathcal{T}_h,$ the
notations will be simplified to $\mathcal{E}_h,\mathcal{E}_h^i,\mathcal{E}_h^b,\mathcal V_h,
\mathcal V_h^i,\mathcal V_h^b,$ and so on. For edge $e,$ denote by $\mathbf m_e$ the midpoint of $e.$
For a typical element $K\in \mathcal{T}_h,$
denote by $\mathbf{v}_j,\, j=1,\cdots,4,$ the four vertices of $K.$
Also denote
by $e_j$ the edge passing through $\mathbf{v}_{j-1}$ and $\mathbf{v}_{j}$
and by $\mathbf{m}_j$
the midpoint of $e_j$ for $j=1,\cdots,4,$ (with identification of
indices by $\mod(4)$ such as $\mathbf{v}_0:=\mathbf{v}_4,$ etc.)
Let $\hat K=[-1,1]^2$ be the reference cube with
$\hat\mathbf{v}_1={1\choose 1},\hat\mathbf{v}_2={-1\choose 1},
\hat\mathbf{v}_3={-1\choose -1},\hat\mathbf{v}_4={1\choose -1},$ with the four
midpoints
$\hat\mathbf m_1={1\choose 0},\hat\mathbf m_2={0\choose 1},
\hat\mathbf m_3={-1\choose 0},\hat\mathbf m_4={0\choose -1}.$
Define the linear functionals
$\sigma_e^{(k)}\in (C^0(\overline D))',k=i,m,$
for all $e\in \mathcal{E}(D) $ by
\begin{eqnarray}
\sigma^{(i)}_{e}(v)=\frac1{|e|}\int_{e} v\,\operatorname{d}\mathbf x;\quad
\sigma^{(m)}_e(v)=v(\mathbf m_e)\,\forall\, v\in C^0(\bar D).
\end{eqnarray}
We also denote by $\jump{f}{e}$ the jump of $f$ across an edge $e$ such
that $\jump{f}{e} = (f_k-f_j)|_e,$ where $f_j$ and $f_k$ denote the
restrictions of $f$ to $K_j$ and $K_k$ with $e=K_j\cap K_k.$ If $e\in
\mathcal{E}_h^b,$ then $\jump{f}{e} = -f|_e.$
Let $\mathcal F_K$ denote an invertible bilinear map which maps $\hat K$ onto $K.$
For any open subset $\Omega$ of $\mathbb{R}^n$, denote the seminorm
and norm of the Sobolev space $W^{k,p}(\Omega)$ by
$|\cdot|_{k,p,\Omega}$ and $||\cdot||_{k,p,\Omega}$,
respectively. Also denote by $H^k(\Omega) = W^{k,2}(\Omega)$ and
abbreviate $|\cdot|_{k,p,\Omega}$ and $||\cdot||_{k,p,\Omega}$ as
$|\cdot|_{k,\Omega}$ and $||\cdot||_{k,\Omega}$.
Define the broken norms and seminorms on broken Sobolev spaces as
follows.
\begin{eqnarray*}
|v_h|_{k,p,h} &=& \begin{cases}
\big(\sum_{K\in\mathcal{T}_h} |v_h|_{k,p,K}^p\big)^{\frac1{p}}, &\, p \in [1,\infty),\\
\max_{K\in\mathcal{T}_h} |v_h|_{k,\infty,K}, &\, p = \infty,
\end{cases}\quad
\|v_h\|_{k,p,h} = \begin{cases} \left[\sum_{j=0}^k
|v_h|_{j,p,h}^p\right]^{\frac1{p}}, &\, p \in [1,\infty),\\
\max_{0\le j\le k} |v_h|_{j,\infty,h}, &\, p = \infty,
\end{cases}\\
W^{k,p}(\mathcal{T}_h) &=& \{v_h\in L^p(\Omega)\,|\, \|v_h\|_{k,p,h} < \infty\}.
\end{eqnarray*}
If $p=2,$ the subindices $p$ can be omitted as usual.
\subsection{The Rannacher--Turek and DSSY nonconforming elements}
The parametric nonconforming quadrilateral elements are
defined as follows:
\begin{definition}[Parametric nonconforming
quadrilateral finite element] \label{def:rannacher-turek}
For $k=i,m,$ define the reference element
$(\hat{K},\hat{P}^{pNC}_{\hat{K}},
\hat{\Sigma}^{pNC,(k)}_{\hat{K}})$ by
\begin{enumerate}
\item $\hat{K}=[-1,1]^2;$
\item $\hat{P}^{pNC}_{\hat{K}} =\begin{cases} P_1(\hat K)\oplus
\operatorname{Span}\{\hat{x}_1^2-\hat{x}_2^2 \}, &\text{ if } NC=RT,\\
P_1(\hat K)\oplus \operatorname{Span}\{\hat{x}_1^2-\hat{x}_2^2 - \frac53
(\hat{x}_1^4 -\hat{x}_2^4)\},&\text{ if } NC=DSSY;
\end{cases}$
\item$\hat{\Sigma}^{pNC,(k)}_{\hat{K}}=\{\sigma_e^{(k)}\,\forall\, e
\in \mathcal{E}(\hat K)\},$ for NC=RT or NC=DSSY.
\end{enumerate}
For $k=i,m,$
the global parametric Rannacher-Turek finite element spaces
\cite{rannacher-turek} and DSSY finite element spaces \cite{dssy-nc-ell}, with
NC=RT and NC=DSSY, respectively,
are defined by
\begin{equation*}
\begin{split}
&\mathcal{NC}^{pNC,(k)}_{h}=\{ v \in L^2(\Omega) \,\mid\, v|_K = \hat v\circ
F_K^{-1} \text{ for some } \hat v \in \hat P^{pNC,(k)}_{\hat K}
\,\forall\, K \in \mathcal{T}_h, \\
&\quad\q\quad\q\quad\q\quad\q\quad\q\quad \sigma_e^{(k)}(\jump{v}{e})=0 \,\forall\, e \in \mathcal{E}^i_h \},\\
&\mathcal{NC}^{pNC,(k)}_{h,0}=\{v \in \mathcal{NC}^{pNC,(k)}_{h} \,\mid\, \sigma_e^{(k)}(v) =0 \,\forall\,
~ e \in \mathcal{E}_h^b\}.
\end{split}
\end{equation*}
\end{definition}
Notice that the main additional feature of the DSSY element is the
MVP:
\begin{equation}\label{eq:mvp}
\sigma_e^{(i)}(\phi) = \sigma_e^{(m)}(\phi) \,\forall\, e\in \mathcal{E}_h
\,\forall\, \phi\in \mathcal{NC}^{pDSSY}_h.
\end{equation}
The nonparametric elements were introduced as follows.
\begin{definition}[Nonparametric Rannacher-Turek nonconforming quadrilateral
finite element] \label{def:rannacher-turek-nonpara}
The {\it nonparametric} nonconforming Rannacher-Turek element $(K,P_K,\Sigma_K)$
on $K$ is defined by
\begin{enumerate}
\item $K$ is a convex quadrilateral;
\item $P_K^{npRT} = P_1(K)\oplus\operatorname{Span}\{\xi_1^2 - \xi_2^2\},\,
\xi_j,j=1,2,$ are the two coordinates connecting the two pairs
of opposite edge-midpoints of $K;$
\item ${\Sigma}^{npRT}_{K}=\{\sigma_{e}^{(i)}\,\forall\, e \in \mathcal{E}(K)\}.$
\end{enumerate}
The global nonparametric Rannacher-Turek finite element space is defined by
\begin{equation*}
\begin{split}
&\mathcal{NC}^{npRT}_{h}=\{ v \in L^2(\Omega) \,\mid\, v|_K \in P^{npRT}_{K}
\,\forall\, K \in \mathcal{T}_h, \sigma_e^{(i)}(\jump{v}{e})=0 \,\forall\, e
\in \mathcal{E}^i_h\},\\
&\mathcal{NC}^{npRT}_{h,0}=\{v \in \mathcal{NC}^{npRT}_{h} \,\mid\, \sigma_e^{i}(v) =0 \,\forall\,
~ e \in \mathcal{E}_h^b\}.
\end{split}
\end{equation*}
\end{definition}
For the definition of the nonparametric DSSY element, we need to introduce
an intermediate quadrilateral, denoted by $\widetilde K,$
which is the image of $\hat K$ under the simple bilinear map
$$
S_K\widehat{\mathbf{x}} = \widehat{\mathbf{x}} + \hat{x}_1\hat{x}_2 \tilde{\mathbf{s}},
$$ where $\tilde{\mathbf{s}}=A_K^{-1}\mathbf d,$ with
$A_K = \frac14\left(\mathbf{v}_1-\mathbf{v}_2-\mathbf{v}_3+\mathbf{v}_4,
\mathbf{v}_1+\mathbf{v}_2-\mathbf{v}_3-\mathbf{v}_4\right)$ and
$\mathbf d = \frac14(\mathbf{v}_1-\mathbf{v}_2+\mathbf{v}_3-\mathbf{v}_4).
$ Then the bilinear map $\mathcal F_K$ from $\hat K$ onto $K$
is represented by $\mathcal F_K(\widehat{\mathbf{x}})=A_K(S_K(\widehat{\mathbf{x}}))+\mathbf b,$ with
$\mathbf b = \frac14(\mathbf{v}_1+\mathbf{v}_2+\mathbf{v}_3+\mathbf{v}_4).$ Moreover, there exists an
invertible affine map $\widetilde A_K$ which maps $\widetilde K$ onto $K$ such
that $\mathcal{F}_K = \widetilde A_K\circ S_K.$
For these formulas, see \cite[(2.5)]{jeon2013class}.
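As a quick sanity check of this decomposition (not part of the original analysis), the identity $\mathcal F_K(\widehat{\mathbf{x}})=A_K(S_K(\widehat{\mathbf{x}}))+\mathbf b$ can be verified numerically; the quadrilateral below is a sample of our own choosing:

```python
# Sanity check (ours): verify F_K(xh) = A_K(S_K(xh)) + b, where
# S_K(xh) = xh + xh_1*xh_2*s and s = A_K^{-1} d, on a sample
# convex quadrilateral K (vertices below are our own choice).
import numpy as np

v = np.array([[1.2, 0.1], [0.0, 1.0], [-1.0, -0.2], [0.3, -1.1]])  # v1..v4
A_K = np.column_stack([(v[0] - v[1] - v[2] + v[3]) / 4,
                       (v[0] + v[1] - v[2] - v[3]) / 4])
d = (v[0] - v[1] + v[2] - v[3]) / 4
b = v.sum(axis=0) / 4
s = np.linalg.solve(A_K, d)  # s = A_K^{-1} d

def F_K(xh):
    """Bilinear map written as A_K composed with S_K."""
    xh = np.asarray(xh, dtype=float)
    return A_K @ (xh + xh[0] * xh[1] * s) + b

# reference vertices (1,1), (-1,1), (-1,-1), (1,-1) must map to v1..v4
vhat = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
for j in range(4):
    assert np.allclose(F_K(vhat[j]), v[j])
```

The check confirms that the bilinear map, written through $S_K$, reproduces the four vertices of $K$.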
For the nonparametric DSSY element \cor{for arbitrary $\widetilde
c\in\mathbb R$}, set
\begin{eqnarray}\label{eq:til-mu}
\tilde{\mu}(\tilde{\mathbf{x}};\tilde{c})
=\widetilde\ell_1(\tilde{\mathbf{x}})\widetilde\ell_2(\tilde{\mathbf{x}})\widetilde\mathcal Q(\tilde{\mathbf{x}};\widetilde c),
\end{eqnarray}
where \cor{(\cite[(2.12)]{jeon2013class})}
\begin{subeqnarray*}
\widetilde\ell_1(\tilde{\mathbf{x}})&=&\widetilde{x}_1-\widetilde{x}_2-\widetilde s_1+\widetilde s_2;
\quad\widetilde\ell_2(\tilde{\mathbf{x}})=\widetilde{x}_1+\widetilde{x}_2+\widetilde s_1+\widetilde s_2,\quad \cor{\widetilde r=\frac{\sqrt{6}}{5}\sqrt{\frac52-\widetilde s_1^2-\widetilde s_2^2}},\\
\tilde{\mathcal Q}(\tilde{\mathbf{x}};\tilde{c})&=&\left(\widetilde{x}_1+\frac25\widetilde s_2\right)^2+
\left(\widetilde{x}_2+\frac25\widetilde s_1\right)^2 - \widetilde r^2 + \widetilde c\left[
(\widetilde{x}_1+\frac25\widetilde s_2)
(\widetilde{x}_2+\frac25\widetilde s_1) + \frac6{25}\widetilde s_1\widetilde s_2\right].
\end{subeqnarray*}
Then the nonparametric DSSY element is defined as follows.
\begin{definition}[Nonparametric DSSY nonconforming quadrilateral
finite element] \label{def:dssy-nonpara}
The {\it nonparametric} nonconforming DSSY element $(K,P_K,\Sigma_K)$
on $K$ is defined by
\begin{enumerate}
\item $\widetilde K=S_K(\hat K);$
\item $\widetilde P_{\widetilde K}^{npDSSY} = P_1(\widetilde K)\oplus\operatorname{Span}\{\widetilde\mu\},$
\item ${\widetilde\Sigma}^{npDSSY}_{\widetilde K}=\{\sigma_{e}^{(i)}\,\forall\, e
\in \mathcal{E}(\widetilde K)\}= \{\sigma_{e}^{(m)}\,\forall\, e
\in \mathcal{E}(\widetilde K)\}.$
\end{enumerate}
The global nonparametric DSSY finite element space is defined by
\begin{equation*}
\begin{split}
&\mathcal{NC}^{npDSSY}_{h}=\{ v \in L^2(\Omega) \,\mid\, v|_K = (\widetilde v\circ
\widetilde A_K^{-1}) \text{ for some } \widetilde v\in \widetilde P^{npDSSY}_{\widetilde K}
\,\forall\, K \in \mathcal{T}_h,\\
&\hspace{8cm}\sigma_e^{(i)}(\jump{v}{e})=0 \,\forall\, e
\in \mathcal{E}^i_h\},\\
&\mathcal{NC}^{npDSSY}_{h,0}=\{v \in \mathcal{NC}^{npDSSY}_{h} \,\mid\, \sigma_e^{i}(v|_{e}) =0 \,\forall\,
~ e \in \mathcal{E}_h^b \}.
\end{split}
\end{equation*}
\end{definition}
\subsection{The MCL nonconforming element}
Recently Meng {\it et al.} \cite{meng2018new} defined a nonparametric
quadrilateral element slightly different from the above
nonparametric Rannacher-Turek element by using an
intermediate quadrilateral $\bar K.$
To briefly explain the notion of the intermediate
quadrilateral $\bar K$ of MCL type, it is useful to introduce the
following line equations.
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{meng-bilin.eps}
\caption{A bilinear map $\mathcal F_K$ from $\hat{K}$ onto $K$, a bilinear map
$\bar{\mathcal F}_K$ from $\hat{K}$ onto $\bar{K}$, and an affine map $\bar{\mathcal A}_K$ from $\bar{K}$ onto $K$.}
\label{fig:Bil}
\end{figure}
Define $\ell_j,j=1,2$ by
\begin{eqnarray}\label{eq:bell}
\ell_1(\mathbf{x}) = \frac{ (\mathbf{x}-\mathbf{v}_3)\times(\mathbf{v}_1-\mathbf{v}_3)}{ (\mathbf{v}_2-\mathbf{v}_3)\times(\mathbf{v}_1-\mathbf{v}_3)},\quad
\ell_2(\mathbf{x}) = \frac{ (\mathbf{x}-\mathbf{v}_4)\times(\mathbf{v}_2-\mathbf{v}_4)}{ (\mathbf{v}_1-\mathbf{v}_4)\times(\mathbf{v}_2-\mathbf{v}_4)},
\end{eqnarray}
where $\mathbf a\times\mathbf b$ denotes the cross product of vectors
$\mathbf a$ and $\mathbf b.$
Then
$\ell_1(\mathbf{x})=0$ and $\ell_2(\mathbf{x})=0$ are the equations of lines satisfying
\begin{equation}
\ell_1(\mathbf{v}_1)=\ell_1(\mathbf{v}_3)=0,
\ell_1(\mathbf{v}_2)=1;\,
\ell_2(\mathbf{v}_2)=\ell_2(\mathbf{v}_4)=0,
\ell_2(\mathbf{v}_1)=1.
\end{equation}
Then the intermediate quadrilateral $\bar K$ of MCL type is
defined with the following four vertices:
\begin{equation}\label{convex_poly}
\bar{\mathbf{v}}_1={1\choose 0}, \bar{\mathbf{v}}_2={0\choose 1},
\bar{\mathbf{v}}_3={\bar{h}_1\choose 0},
\bar{\mathbf{v}}_4={0\choose\bar{h}_2}, \text{ with } \bar{h}_1 = \ell_2(\mathbf{v}_3)<0,
\bar{h}_2:=\ell_1(\mathbf{v}_4)<0.
\end{equation}
We will use $(x_1, x_2)$ and $(\bar x_1,\bar x_2)$ for the
notations for coordinates in $K$ domain and $\bar K$ domain, respectively.
The following proposition is useful for our analysis.
\begin{proposition}
Let $\bar{\mathcal A}_K$ be an affine map defined by
\begin{eqnarray}\label{eq:cAK-2}
\bar{\mathcal A}_K\bar{\mathbf x} = \begin{bmatrix} \bar{\mathcal A}_K^{(1)} &
\bar{\mathcal A}_K^{(2)} \end{bmatrix}\bar{\mathbf x} + \boldsymbol \xi,
\end{eqnarray}
where the three column vectors are defined by
\begin{eqnarray}\label{eq:cAK-3}
\bar{\mathcal A}_K^{(1)} = \frac{\mathbf{v}_1-\mathbf{v}_3}{1-\bar{h}_1},\quad
\bar{\mathcal A}_K^{(2)} = \frac{\mathbf{v}_2-\mathbf{v}_4}{1-\bar{h}_2},\quad
\boldsymbol \xi= \frac{\mathbf{v}_3-\bar{h}_1\mathbf{v}_1}{1-\bar{h}_1} =
\frac{\mathbf{v}_4-\bar{h}_2\mathbf{v}_2}{1-\bar{h}_2}.
\end{eqnarray}
Then $\bar{\mathcal A}_K: \mathbb R^2 \to \mathbb R^2$ is an affine map which maps $\bar K$
onto $K.$
Moreover, the affine map
$\bar{\mathcal A}_K^{-1}: K\to \bar K$ given by
\begin{eqnarray}\label{eq:cAK-31}
{\bar x_1\choose\bar x_2} = \bar{\mathcal A}_K^{-1}\mathbf{x} =
{ \ell_2(\mathbf{x}) \choose\ell_1(\mathbf{x})}
\end{eqnarray}
maps $K$ onto $\bar K$ with the four corresponding vertices.
\end{proposition}
\begin{proof}
Let $\boldsymbol \xi$ be the intersection of the line passing through
$\mathbf{v}_1$ and $\mathbf{v}_3$ and the line passing through
$\mathbf{v}_2$ and $\mathbf{v}_4.$ Then,
we have
\begin{eqnarray}
\bar{h}_1(\mathbf{v}_1-\boldsymbol \xi) = (\mathbf{v}_3 -\boldsymbol \xi),\quad
\bar{h}_2(\mathbf{v}_2-\boldsymbol \xi) = (\mathbf{v}_4 -\boldsymbol \xi),
\end{eqnarray}
from which it follows that
\begin{eqnarray}\label{eq:bxi}
\boldsymbol \xi = \frac{\mathbf{v}_3-\bar{h}_1\mathbf{v}_1}{1-\bar{h}_1} = \frac{\mathbf{v}_4-\bar{h}_2\mathbf{v}_2}{1-\bar{h}_2}.
\end{eqnarray}
Set $\bar{\mathcal A}_K^{(j)}= \mathbf{v}_j - \boldsymbol \xi,j =1,2.$ Then, owing to
\eqref{eq:bxi}, the column vectors and the shift
vector
$\boldsymbol \xi$ in \eqref{eq:cAK-2} are given as in \eqref{eq:cAK-3}.
It is immediate to verify that $\bar{\mathcal A}_K\bar\mathbf{v}_j = \mathbf{v}_j,j=1,\cdots,4.$
\end{proof}
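The proposition can be spot-checked numerically; the following sketch (ours, with a sample convex quadrilateral of our own choosing) verifies that $(\ell_2,\ell_1)$ sends the vertices of $K$ to the MCL-type vertices and that $\bar h_1,\bar h_2<0$:

```python
# Numerical spot-check (ours): for a sample convex quadrilateral K,
# bar A_K^{-1}(x) = (ell_2(x), ell_1(x)) sends the vertices v1..v4 of K
# to (1,0), (0,1), (h1,0), (0,h2) with h1 = ell_2(v3) < 0, h2 = ell_1(v4) < 0.
import numpy as np

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

v = [np.array(p) for p in [(1.2, 0.1), (0.0, 1.0), (-1.0, -0.2), (0.3, -1.1)]]

def ell1(x):  # vanishes at v1, v3 and equals 1 at v2
    return cross(x - v[2], v[0] - v[2]) / cross(v[1] - v[2], v[0] - v[2])

def ell2(x):  # vanishes at v2, v4 and equals 1 at v1
    return cross(x - v[3], v[1] - v[3]) / cross(v[0] - v[3], v[1] - v[3])

h1, h2 = ell2(v[2]), ell1(v[3])
barv = [np.array([ell2(x), ell1(x)]) for x in v]
expected = [[1.0, 0.0], [0.0, 1.0], [h1, 0.0], [0.0, h2]]
assert h1 < 0 and h2 < 0            # convexity of the sample K
for bv, ev in zip(barv, expected):
    assert np.allclose(bv, ev)
```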
Then the nonparametric MCL nonconforming quadrilateral element
is defined as follows.
\begin{definition}[Meng-Cui-Luo nonconforming quadrilateral element]
\label{def:meng-cui-luo}
Define the intermediate reference element
$(\bar{K},\bar{P}_{\bar{K}}, \bar{\Sigma}_{\bar{K}})$ by
(1) $\bar{K}$ is a typical quadrilateral of MCL type;$\quad$
(2) $\bar{P}_{\bar{K}}^{npMCL}=P_1(\bar K)\oplus \operatorname{Span}\{\bar{x}_1\bar{x}_2 \};\quad$
(3) $\bar{\Sigma}^{npMCL}_{\bar{K}}=\{\sigma_e^{(i)},\,e\in\mathcal{E}(\bar K)\}.$
Then the global nonparametric MCL nonconforming quadrilateral
element space can be defined as follows:
\begin{eqnarray*}
&\mathcal{NC}^{npMCL}_{h}=\{ v \in L^2(\Omega) \,\mid\, v|_K = \bar v\circ
\bar{\mathcal A}_K^{-1}\text{ for some } \bar v\in \bar P^{npMCL}_{\bar K}
\,\forall\, K \in \mathcal{T}_h,\\
&\hspace{8cm} \sigma_e^{(i)}(\jump{v}{e})=0 \,\forall\, e \in \mathcal{E}^i_h \},\\
&\mathcal{NC}^{npMCL}_{h,0}=\{v \in \mathcal{NC}^{npMCL}_{h} \,\mid\, \sigma_e^{i}(v) =0 \,\forall\,
~ e \in \mathcal{E}_h^b \}.
\end{eqnarray*}
\end{definition}
\begin{remark} In \cite{meng2018new} the polynomial space
$P_K$ is defined as
$P_K = \operatorname{Span}\{1,x_1,x_2,\ell_1(\mathbf{x})\ell_2(\mathbf{x})\}.$ But, due to the
definitions of $\ell_1, \ell_2,$ and $\bar{\mathcal A}_K,$
it is evident that the two definitions of $P_K$ are identical.
\end{remark}
\begin{remark} The Rannacher-Turek element is defined by using the
reference element, and the basis function space $P_K$ in Definition \ref{def:rannacher-turek} consists of
$P_1(K)\oplus \operatorname{Span}\{\hat\phi_4\circ \mathcal F_K^{-1}\},$ with
$\hat\phi_4(\hat\mathbf{x}) = \hat x_1^2-\hat x_2^2.$
Since the components of $\mathcal F_K^{-1}$ are not polynomials,
the function
$\hat\phi_4\circ \mathcal F_K^{-1}(\mathbf{x})$ is not a polynomial. In
contrast, the MCL finite element space
$P_K$ in Definition \ref{def:meng-cui-luo} consists of quadratic
polynomials since $\bar{\mathcal A}_K^{-1}(\mathbf{x})$ is an affine map.
\end{remark}
\begin{remark}
The convexity condition on $K$ is that
\begin{equation}
\bar{h}_1 <0, \: \bar{h}_2 <0.
\end{equation}
\end{remark}
\section{Nonparametric DSSY elements in MCL-type domains}\label{sec:npDSSY}
In this section, we use the intermediate domain $\bar K$ of
MCL type to design a new nonconforming quadrilateral element
with MVP.
Similarly to \eqref{eq:til-mu} in \cite{jeon2013class},
we aim to find a quartic polynomial $\bar{\mu}$
on $\bar{K}$ of the form
\begin{equation}\label{eq:bar-mu}
\bar{\mu}(\bar{\mathbf x})=\bar\ell_1(\bar{\mathbf x}) \bar\ell_2(\bar{\mathbf x})\bar{\mathscr{Q}}(\bar{\mathbf x}),
\end{equation}
where
${\bar\ell}_1(\bar{\mathbf x}) = \bar x_2, \bar{\ell}_2(\bar{\mathbf x}) = \bar x_1,$ and
$\bar{\mathscr{Q}}(\bar{\mathbf x})$ is a suitable quadratic polynomial. (Recall that the
coordinate indices for $\bar\ell_j$ and $\bar x_k$ are switched as in \eqref{eq:cAK-31}.)
We seek a quartic
polynomial $\bar{\mu}(\bar{\mathbf x})$ fulfilling the
MVP \eqref{eq:mvp} in $\bar{K}$.
Denote by
$\bar{\mathbf{d}}_j:=\frac{\bar{\mathbf{v}}_{j+1}-\bar{\mathbf{v}}_{j}}{2}$
the half-edge vector of $\bar{\mathbf{e}}_j$ and by $\bar{\mathbf{m}}_j$
its midpoint,
for $j=1,\cdots,4$ (with indices modulo 4).
An application of
the three-point Gauss quadrature formula:
\begin{eqnarray*}
\int_{-1}^1 f(t)~\operatorname{d}t \approx \frac89 f(0) + \frac59 (f(\xi) +
f(-\xi)),\quad \xi =\sqrt{\frac35},
\end{eqnarray*}
which is exact for quartic polynomials,
simplifies MVP \eqref{eq:mvp} into the form
\begin{eqnarray}\label{eq:quadrature-2}
\bar{\mu}(\bar{\mathbf{g}}_{2j-1}) + \bar{\mu}(\bar{\mathbf{g}}_{2j}) -2\bar{\mu}(\bar{\mathbf{m}}_j) = 0, \quad j = 1, \cdots, 4,
\end{eqnarray}
where
$\bar{\mathbf{g}}_{2j-1} = \bar{\mathbf{m}}_j - \xi\bar{\mathbf{d}}_j,\quad
\bar{\mathbf{g}}_{2j} = \bar{\mathbf{m}}_j + \xi\bar{\mathbf{d}}_j, \quad j=1,\cdots,4,
$
together with $\bar{\mathbf{m}}_j,j = 1,\cdots,4,$
are the twelve Gauss points on the four edges.
Notice that the equations of lines for edges $\bar{\mathbf{e}}_j,j = 1,\cdots,4,$ are given
in vector notation as follows:
\[
\bar{\mathbf{e}}_j(t) = \bar{\mathbf{m}}_j + t\bar{\mathbf{d}}_j, \quad t\in [-1,1].
\]
Consider the quartic polynomial \eqref{eq:bar-mu} restricted to an edge
$\bar{\mathbf{e}}_j(t), t\in[-1,1].$ Since $\bar{x}_1\bar{x}_2$ vanishes at the two end points of each edge,
one sees that
\begin{eqnarray}\label{eq:inter-quad}
\bar{\ell}_1(\bar{\mathbf{g}}_{2j-1})\bar{\ell}_2(\bar{\mathbf{g}}_{2j-1}) =
\bar{\ell}_1(\bar{\mathbf{g}}_{2j})\bar{\ell}_2(\bar{\mathbf{g}}_{2j}) = (1-\xi^2)
\bar{\ell}_1(\bar{\mathbf{m}}_{j})\bar{\ell}_2(\bar{\mathbf{m}}_{j}), \quad \left(\xi = \sqrt{\frac35}\right).
\end{eqnarray}
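As a sanity check (ours), the identity above follows since the restriction of $\bar x_1\bar x_2$ to an edge is a quadratic in the edge parameter vanishing at both endpoints; the following sketch verifies it for sample parameters $\bar h_1,\bar h_2$ of our own choosing:

```python
# Sanity check (ours): on each edge of the MCL quadrilateral, x1*x2 is a
# quadratic in the edge parameter vanishing at both endpoints, hence its
# values at the two off-center Gauss points agree and equal
# (1 - xi^2) times its value at the midpoint.
import numpy as np

xi = np.sqrt(3.0 / 5.0)
h1, h2 = -0.7, -0.4   # sample MCL parameters (our choice)
V = [np.array(p) for p in [(1.0, 0.0), (0.0, 1.0), (h1, 0.0), (0.0, h2)]]
prod = lambda p: p[0] * p[1]
for j in range(4):
    a, b = V[j], V[(j + 1) % 4]       # endpoints of one edge
    m, d = (a + b) / 2, (b - a) / 2   # midpoint and half-edge vector
    gm, gp = m - xi * d, m + xi * d   # off-center Gauss points
    assert np.isclose(prod(gm), prod(gp))
    assert np.isclose(prod(gm), (1 - xi**2) * prod(m))
```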
A combination of \eqref{eq:quadrature-2} and \eqref{eq:inter-quad} yields that
\eqref{eq:mvp} holds if and only if the quadratic polynomial $\bar{\mathscr{Q}}$
satisfies
\begin{eqnarray}\label{eq:inter-quad-2}
\bar{\mathscr{Q}}(\bar{\mathbf{g}}_{2j-1}) + \bar{\mathscr{Q}}(\bar{\mathbf{g}}_{2j}) -5\bar{\mathscr{Q}}(\bar{\mathbf{m}}_j) = 0, \quad j = 1, \cdots, 4.
\end{eqnarray}
A standard symbolic calculation gives the general solution of
\eqref{eq:inter-quad-2} as the following one-parameter family in $\bar{c}$:
\begin{eqnarray}\label{eq:sq}
\bar{\mathscr{Q}}(\bar{\mathbf x}) = \bar q(\bar{x}_1;\bar{h}_1) + \bar{c}\bar q(\bar{x}_2;\bar{h}_2),\,
\bar{c} \in \mathbb{R},\quad
\text{where }\bar q(\bar{x};\bar{h})= \bar{x}^2 - \frac{3}{10}(1+\bar{h})\bar{x} +
\frac{3}{20}\bar{h}.
\end{eqnarray}
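This one-parameter family can be spot-checked numerically; the sketch below (ours, with sample $\bar h_1,\bar h_2,\bar c$ of our own choosing) verifies the four edge conditions:

```python
# Numerical spot-check (ours): the family bar Q above satisfies
# Q(g_{2j-1}) + Q(g_{2j}) - 5 Q(m_j) = 0 on all four edges of bar K.
import numpy as np

xi = np.sqrt(3.0 / 5.0)
h1, h2, c = -0.7, -0.4, 1.3   # sample parameters (our choice)
q = lambda x, h: x**2 - 0.3 * (1 + h) * x + 0.15 * h
Q = lambda p: q(p[0], h1) + c * q(p[1], h2)
V = [np.array(p) for p in [(1.0, 0.0), (0.0, 1.0), (h1, 0.0), (0.0, h2)]]
for j in range(4):
    a, b = V[j], V[(j + 1) % 4]       # endpoints of one edge
    m, d = (a + b) / 2, (b - a) / 2   # midpoint and half-edge vector
    assert np.isclose(Q(m - xi * d) + Q(m + xi * d) - 5 * Q(m), 0.0)
```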
Define, for each $\bar{c} \in \mathbb{R},$
\[
\bar{\mu}(\bar{\mathbf x};\bar{c}) =
\bar{\ell}_1(\bar{\mathbf x})\bar{\ell}_2(\bar{\mathbf x})\bar{\mathscr{Q}}(\bar{\mathbf x})
= \bar{x}_1\bar{x}_2\bar{\mathscr{Q}}(\bar{\mathbf x}),
\]
where $\bar{\mathscr{Q}}$ is given by \eqref{eq:sq} and depends on $\bar{c}$ as well as $\bar{\mathbf x}$.
We are now in a position to define a class of {\it nonparametric nonconforming
elements on the intermediate quadrilaterals $\bar{K}$} with four degrees of freedom as follows.
(1) $\bar{K}$ is a quadrilateral of MCL type;$\quad$
(2) $\bar{P}^{npDSSY}_{\bar{K}} = P_1(\bar K)\oplus \operatorname{Span}\{\bar{\mu}(\bar{x}_1,\bar{x}_2;\bar{c}) \};\quad$
(3) $\bar{\Sigma}^{npDSSY}_{\bar{K}}=\{\sigma_e^{(i)},\,e\in\mathcal{E}(\bar K)\}=
\{\sigma_e^{(m)},\,e\in\mathcal{E}(\bar K)\}.$
The global nonparametric DSSY quadrilateral nonconforming element
spaces are defined similarly.
By the above construction it is apparent that
MVP holds. Owing to this property, it is simple to show the unisolvency of the element.
\begin{theorem}\label{thm:unisol}
Assume that $\bar{c}$ is chosen such that $\bar{h}_1^2+\bar{h}_1+1+\bar{c}(\bar{h}_2^2+\bar{h}_2+1)\neq 0.$ Then
$(\bar{K},\bar{P}_{\bar{K}}, \bar{\Sigma}_{\bar{K}} )$ is unisolvent.
\end{theorem}
\begin{proof}
Set
$\bar{\psi}_1(\bar{\mathbf x})=1,\bar{\psi}_2(\bar{\mathbf x})=\bar{x}_1,\bar{\psi}_3(\bar{\mathbf x})=\bar{x}_2,$
and $\bar{\psi}_4=\bar{\mu}(\bar{x}_1,\bar{x}_2;\bar{c}),$ and
define $A=(a_{jk}) \in M_{4\times4}(\mathbb{R})$ by
$a_{jk}= \bar\psi_k(\bar{\mathbf{m}}_j), j,k=1,\cdots,4.$ Then,
using $\bar q(\frac12;\bar{h})=\frac1{10}$ and
$\bar q(\frac{\bar{h}}{2};\bar{h})=\frac{\bar{h}^2}{10},$ we get
\begin{equation}\label{eq:matA}
A = \begin{bmatrix}
1 & \frac{1}{2} & \frac{1}{2} & \frac{1}{40}(1+ \bar c)\\
1 & \frac{\bar{h}_1}{2} & \frac12 & \frac{1}{40}\bar{h}_1(\bar{h}_1^2+ \bar c)\\
1 & \frac{\bar{h}_1}{2} & \frac{\bar{h}_2}{2} & \frac{1}{40}\bar{h}_1\bar{h}_2(\bar{h}_1^2+ \bar c\bar{h}_2^2)\\
1 & \frac{1}{2} & \frac{\bar{h}_2}{2} & \frac{1}{40}\bar{h}_2(1 + \bar c\bar{h}_2^2)
\end{bmatrix}
\end{equation}
with $\det(A)=-\frac{1}{160}(1-\bar{h}_1)^2(1-\bar{h}_2)^2 \big(\bar{h}_1^2+\bar{h}_1+1+\bar{c}(\bar{h}_2^2+\bar{h}_2+1)\big).$ Thus $A$ is nonsingular if and only if $\bar{h}_1^2+\bar{h}_1+1+\bar{c}(\bar{h}_2^2+\bar{h}_2+1)\neq 0.$
\end{proof}
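The determinant formula in the proof can be confirmed numerically; the following sketch (ours) assembles $A$ for sample parameters of our own choosing and compares $\det(A)$ with the closed form:

```python
# Numerical confirmation (ours) of the determinant formula in the proof,
# for sample parameters h1, h2, c.
import numpy as np

h1, h2, c = -0.7, -0.4, 1.0   # sample parameters (our choice)
A = np.array([
    [1, 0.5,    0.5,    (1 + c) / 40],
    [1, h1 / 2, 0.5,    h1 * (h1**2 + c) / 40],
    [1, h1 / 2, h2 / 2, h1 * h2 * (h1**2 + c * h2**2) / 40],
    [1, 0.5,    h2 / 2, h2 * (1 + c * h2**2) / 40],
])
closed = -(1 - h1)**2 * (1 - h2)**2 * (
    h1**2 + h1 + 1 + c * (h2**2 + h2 + 1)) / 160
assert np.isclose(np.linalg.det(A), closed)
```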
From now on, we choose $\bar{c} =1$ to have
symmetry in \eqref{eq:sq}.
The basis functions can be easily constructed by using MVP. For
this,
as in the proof of Theorem \ref{thm:unisol},
set $\bar\psi_1(\bar{\mathbf x})=1, \bar\psi_2(\bar{\mathbf x})=\bar x_1, \bar\psi_3(\bar{\mathbf x})=\bar x_2,$ and
$\bar\psi_4(\bar{\mathbf x})=\bar\mu(\bar{\mathbf x};\bar c),$ and seek the basis functions
\begin{eqnarray}\label{eq:basis-barphi}
\bar\phi_j(\bar x)= \sum_{k=1}^4 c_{jk}
\bar\psi_k(\bar x),\,j=1,\cdots,4,
\text{ such that } \bar\phi_j(\bar{\mathbf m}_k)=\delta_{jk}.
\end{eqnarray}
\def\operatorname{D}{\operatorname{D}}
\def\operatorname{N}{\operatorname{N}}
\def\operatorname{Nh}{\operatorname{Nh}}
By computing the inverse of the matrix $A^{T}$ in \eqref{eq:matA},
we can represent the basis functions \eqref{eq:basis-barphi}
with the coefficients given by
\begin{eqnarray}
(c_{jk}) = \frac2{(1-\bar{h}_1)(1-\bar{h}_2) }
\begin{bmatrix}
\frac{ \bar{h}_1 \bar{h}_2}{2} &
- \operatorname{N}(1,2) &
- \operatorname{N}(2,1) &
\frac{20}{\operatorname{D}} \\
-\frac{\bar{h}_2}{2} &
\operatorname{N}(1,2) &
\operatorname{Nh}(2) &
-\frac{20}{\operatorname{D}}\\
\frac12 &
- \operatorname{Nh}(1)&
- \operatorname{Nh}(2)&
\frac{20}{\operatorname{D}}\\
-\frac{\bar{h}_1}{2} &
\operatorname{Nh}(1)&
\operatorname{N}(2,1)&
-\frac{40}{\operatorname{D}}
\end{bmatrix},
\end{eqnarray}
where
$
\operatorname{D} = 2+\bar{h}_1+\bar{h}_2+\bar{h}_1^2+\bar{h}_2^2,\,
\operatorname{N}(j,k) = \frac{ (\bar{h}_j^2 + \bar{h}_j + \bar{h}_k^2 + 1) \bar{h}_k }{\operatorname{D}},\,
\operatorname{Nh}(j)=\frac{\bar{h}_j^2 + \bar{h}_j + 2}{\operatorname{D}},\, j,k =1, 2.
$
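Rather than hard-coding the closed-form coefficients, the basis can also be obtained numerically by inverting $A^{T}$; the sketch below (ours, with sample parameters and $\bar c=1$) checks the Kronecker property $\bar\phi_j(\bar{\mathbf m}_k)=\delta_{jk}$:

```python
# Sketch (ours): obtain the coefficients c_{jk} numerically as the rows of
# (A^{-1})^T instead of the closed form, and verify phi_j(m_k) = delta_{jk}.
import numpy as np

h1, h2, c = -0.7, -0.4, 1.0   # sample parameters, bar c = 1 (our choice)
q = lambda x, h: x**2 - 0.3 * (1 + h) * x + 0.15 * h
psi = [lambda p: 1.0,
       lambda p: p[0],
       lambda p: p[1],
       lambda p: p[0] * p[1] * (q(p[0], h1) + c * q(p[1], h2))]
V = [np.array(p) for p in [(1.0, 0.0), (0.0, 1.0), (h1, 0.0), (0.0, h2)]]
M = [(V[j] + V[(j + 1) % 4]) / 2 for j in range(4)]   # edge midpoints
A = np.array([[psi[k](M[j]) for k in range(4)] for j in range(4)])
C = np.linalg.inv(A).T            # row j holds the coefficients of phi_j
phi = lambda j, p: sum(C[j, k] * psi[k](p) for k in range(4))
for j in range(4):
    for k in range(4):
        assert np.isclose(phi(j, M[k]), float(j == k))
```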
\section{Effects of numerical integration}\label{sec:effects}
Consider the following elliptic boundary value problem
\begin{subeqnarray}\label{eq:model-elliptic}
-\nabla\cdot \left( \boldsymbol{\kappa}(x) \nabla\, u \right) &=& f \text{ in } \Omega,\\
u &=& 0 \text{ on } \partial \Omega,
\end{subeqnarray}
where $\Omega \subset \mathbb{R}^n$ is a polygonal domain and $\boldsymbol{\kappa}=\big(\kappa_{ij}(\mathbf{x})\big)$ is a
symmetric matrix with piecewise smooth entries $\kappa_{ij}\in W^{1,\infty}(\mathcal{T}_h)$
such that there are constants $\alpha_K>0$ satisfying
\begin{equation*}
\sum_{i,j=1}^n \kappa_{ij}(\mathbf{x}) \xi_i \xi_j \geq \alpha_K
|\xi|^2,\,\mathbf{x}\in K\,\forall\, K\in\mathcal{T}_h.
\end{equation*}
The variational form of \eqref{eq:model-elliptic} is given by finding
$u \in H_0^1(\Omega)$ such that
\begin{equation}\label{eq:model-weakform}
a(u,v) = F(v), \quad v\in H_0^1(\Omega),
\end{equation}
where $a(u,v)= \int_\Omega \boldsymbol{\kappa}\nabla\, u \cdot \nabla\, v dx$ and
$F(v)=\int_\Omega fv dx.$
Consider the nonparametric DSSY space $\mathcal{NC}^{npDSSY}_h$ defined by
$(K,P_K, \Sigma_K)$ in the previous section.
Then the finite element approximation $u_h\in \mathcal{NC}^{npDSSY}_h$ of
\eqref{eq:model-weakform} is defined as the solution of the discrete problem
\begin{equation}\label{eq:model-discreteform}
a_h(u_h,v_h) = F_h(v_h), \quad v_h \in \mathcal{NC}^{npDSSY}_h,
\end{equation}
where $a_h(u,v)= \sum_{K \in \mathcal{T}_h} \int_K \boldsymbol{\kappa}\nabla\, u \cdot
\nabla\, v\, dx$ and $F_h(v)=\sum_{K \in \mathcal{T}_h} \int_K fv\, dx.$
The energy error estimate for nonconforming methods is
provided in \cite{dssy-nc-ell}.
\begin{theorem}\cite{dssy-nc-ell}
Assume that $u$ and $u_h$ are the solutions of \eqref{eq:model-weakform} and \eqref{eq:model-discreteform}, respectively. Then we have the following error estimate:
\begin{equation}
|u-u_h|_{1,h} \leq C h |u|_{2,\Omega}.
\end{equation}
\end{theorem}
Our aim in this section is to find sufficient conditions on the numerical quadrature rules
used to approximate the stiffness matrix and load vector based on $\mathcal{NC}^{npDSSY}_h$.
Notice that our nonconforming elements on $K$ are constructed
via the affine map $\bar{\mathcal{A}}_K$ from the reference element $\bar{K}$ onto
$K$.
Thus it is natural to construct a quadrature formula on $\bar{K}$,
defined with positive weights $\bar{\omega}_{\bar K,l}$
and nodes $\bar{\mathbf{b}}_{\bar K,l}$, $l=1,\cdots,L,$ by
\begin{equation}\label{eq:quad_ref}
\int_{\bar{K}} \bar{\phi}(\bar{\mathbf{x}})d\bar{\mathbf{x}} \approx \sum_{l=1}^L
\bar{\omega}_{\bar K, l} \bar{\phi}({\bar{\mathbf b}}_{\bar K, l}).
\end{equation}
Denote by $J_{\bar{\mathcal{A}}_K}$ the Jacobian matrix of $\bar{\mathcal{A}}_K$,
and observe that
\begin{equation*}
\int_K \phi(\mathbf{x}) d\mathbf{x} = |\det(J_{\bar{\mathcal{A}}_K})| \int_{\bar{K}} \bar{\phi}(\bar{\mathbf{x}})d\bar{\mathbf{x}}.
\end{equation*}
This induces from \eqref{eq:quad_ref} a quadrature formula on $K$, given by
\begin{equation}\label{eq:quad_physical}
\int_K \phi(\mathbf{x}) d\mathbf{x} \approx \sum_{l=1}^L \omega_{K,l} \phi(\mathbf{b}_{K,l}),
\end{equation}
where $\omega_{K,l} = |\det(J_{\bar{\mathcal{A}}_K})|\, \bar{\omega}_{\bar K,l}$ and
$\mathbf{b}_{K,l} = \bar{\mathcal{A}}_K\mathbf{\bar{b}}_{\bar K,l}$.
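The relation between the two rules can be illustrated with a minimal sketch (ours): map a one-point rule on $\bar K$ through a sample affine map and check that the mapped weights reproduce $|K|$, computed independently from the mapped vertices by the shoelace formula. The matrix $J$, the shift, and the rule below are our own illustrative choices:

```python
# Minimal sketch (ours): map a one-point rule on bar K to K through a
# sample affine map and verify that the mapped weights
# w_{K,l} = |det J| * bar w_l reproduce |K|.
import numpy as np

h1, h2 = -0.7, -0.4                       # sample MCL parameters (our choice)
J = np.array([[1.1, 0.2], [-0.3, 0.9]])   # sample Jacobian of bar A_K (our choice)
shift = np.array([0.25, -0.1])            # sample translation (our choice)
barV = [np.array(p) for p in [(1.0, 0.0), (0.0, 1.0), (h1, 0.0), (0.0, h2)]]

def shoelace(P):
    n = len(P)
    return 0.5 * abs(sum(P[i][0] * P[(i + 1) % n][1]
                         - P[(i + 1) % n][0] * P[i][1] for i in range(n)))

area_barK = shoelace(barV)                # = (1 - h1)(1 - h2)/2
bar_nodes, bar_weights = [sum(barV) / 4], [area_barK]  # exact for constants
detJ = abs(np.linalg.det(J))
nodes = [J @ p + shift for p in bar_nodes]    # b_{K,l} = bar A_K(bar b_l)
weights = [detJ * w for w in bar_weights]     # w_{K,l} = |det J| bar w_l
mapped_V = [J @ p + shift for p in barV]
assert np.isclose(sum(weights), shoelace(mapped_V))   # rule reproduces |K|
```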
Suppose that the discrete problem \eqref{eq:model-discreteform}
is approximated by the above quadrature formula. Then $\bar{u}_h$, the Galerkin approximation
with quadrature, is defined as the solution of the approximate problem: to
find
$\bar u_h\in \mathcal{NC}^{npDSSY}_h$ such that
\begin{equation}\label{eq:model-approxi}
\bar{a}_h(\bar{u}_h,\bar{v}_h) = \bar{F}_h({v}_h), \quad {v}_h \in \mathcal{NC}^{npDSSY}_h,
\end{equation}
where
\begin{align}
\bar{a}_h(u,v) = \sum_{K \in \mathcal{T}_h}\sum_{l=1}^L \omega_{K,l} (\boldsymbol{\kappa}\nabla\, u
\cdot \nabla\, v )(\mathbf{b}_{K,l});\quad
\bar{F}_h(v) = \sum_{K \in \mathcal{T}_h}\sum_{l=1}^L \omega_{K,l} (fv)(\mathbf{b}_{K,l}).
\end{align}
Define the quadrature error functionals to estimate the effect of numerical integration by
\begin{subeqnarray}\label{eq:quad-rule}
\bar{E}_{\bar K}(\bar{\phi}) &=& \int_{\bar{K}}
\bar{\phi}(\bar{\mathbf{x}})d\bar{\mathbf{x}} - \sum_{l=1}^L \bar{\omega}_{\bar K,l}
\bar{\phi}(\mathbf{\bar{b}}_{\bar K, l}), \slabel{eq:quad-rulea} \\
E_K(\phi) &=& \int_K \phi(\mathbf{x}) d\mathbf{x} - \sum_{l=1}^L \omega_{K,l}
\phi(\mathbf{b}_{K,l}),
\slabel{eq:quad-ruleb}
\end{subeqnarray}
The two error functionals are related as follows:
\begin{equation*}
E_K(\phi) = |\det(J_{\bar{\mathcal{A}}_K})| \bar{E}_{\bar K}(\bar{\phi}).
\end{equation*}
The following Bramble-Hilbert Lemma is essential for our argument.
\begin{lemma}\label{lem:Bramble-Hilbert}\cite[Theorem 28.1, p.198]{ciarlet1991finite}
Let $D \subset \mathbb{R}^n$ be a domain with a Lipschitz continuous
boundary. Suppose that $L$ is a continuous linear mapping on
$W^{k+1,q}(D)$ for some integer $k \geq 0$ and $ 1\le q \le \infty$. If
\begin{equation}
L(\phi)=0 \quad \forall \phi \in P_k(D),
\end{equation}
then there exists a constant $C(D)$ such that
\begin{equation}
|L(v)| \leq C(D) \|L\|_{k+1,q,D}' |v|_{k+1,q,D}\,\forall\, v\in W^{k+1,q}(D),
\end{equation}
where $ \|\cdot\|_{k+1,q,D}' $ denotes the norm in the dual space of $W^{k+1,q}(D).$
\end{lemma}
The following theorem estimates the effect of quadrature formulae on
the approximate bilinear form $\bar{a}_h,$ whose proof is essentially
identical to that of \cite[Theorem 28.2, p.199]{ciarlet1991finite}.
\begin{theorem}\label{thm:3.4}
Assume that $\bar{E}_{\bar K}(\bar{\phi}) = 0$ for any $\bar{\phi} \in \nabla\, \bar{P}_{\bar{K}}$, where
\begin{equation*}
\nabla\,\bar{P}_{\bar{K}}:=\operatorname{Span}\{\nabla\, \bar{v} \; \big| \; \bar{v} \in \bar{P}_{\bar{K}} \}.
\end{equation*}
If $\boldsymbol{\kappa} \in W^{1,\infty}(K),$
then there exists a constant $C>0$ such that
\begin{align}
\big| E_K(\boldsymbol{\kappa} \nabla\, u \cdot \nabla\, v) \big| \leq C h_K
||\boldsymbol{\kappa}||_{1,\infty,K} |u|_{1,K} |v|_{1,K}
\quad \forall u,v \in P_K,
\end{align}
where $h_K$ denotes the diameter of $K$.
\end{theorem}
We now establish the following conditional ellipticity estimate.
\begin{theorem}\label{thm:conditional-ell}
Assume that $\bar{E}_{\bar K}(\bar{\phi}) = 0$ for any $\bar{\phi} \in \nabla\, \bar{P}_{\bar{K}}$, where
\begin{equation*}
\nabla\,\bar{P}_{\bar{K}}:=\operatorname{Span}\{\nabla\, \bar{v} \; \big| \; \bar{v} \in \bar{P}_{\bar{K}} \}.
\end{equation*}
Assume that $\boldsymbol{\kappa} \in W^{1,\infty}(\mathcal{T}_h).$ Then for
sufficiently small $h>0$,
the following ellipticity holds:
\begin{equation}\label{eq:app_ellipticity}
\bar{a}_h(v,v) \geq (\alpha- C h |\boldsymbol{\kappa}|_{1,\infty,h}) \cor{|v|_{1,h}^2}
\,\forall\, v \in \mathcal{NC}^{npDSSY}_h.
\end{equation}
Moreover, the conditional ellipticity coefficient in \eqref{eq:app_ellipticity}
can be given as
$\min_{K\in\mathcal{T}_h}(\alpha_K-C_Kh_K|\boldsymbol{\kappa}|_{1,\infty,h}).$
\end{theorem}
\begin{proof}
Let $v\in\mathcal{NC}^{npDSSY}_h$ be arbitrary. Then,
by the triangle inequality, the ellipticity of $a_K$, and Theorem
\ref{thm:3.4}, we have
\begin{subeqnarray*}
\bar a_h(v,v) &\ge& \sum_{K\in\mathcal{T}_h} \left[ a_K(v,v) -|E_K(\boldsymbol{\kappa}\nabla\, v \cdot \nabla\, v)|\right] \\
&\ge & \min_{K\in\mathcal{T}_h}(\alpha_K-C_Kh_K|\boldsymbol{\kappa}|_{1,\infty,h})\cor{|v|_{1,h}^2}\\
&\ge & (\alpha- C h |\boldsymbol{\kappa}|_{1,\infty,h})\cor{|v|_{1,h}^2}.
\end{subeqnarray*}
This completes the proof.
\end{proof}
Next, we estimate the effect of numerical integration on the right hand
side linear functional $\bar{F}_h,$
which is essentially identical to the case of $k=1$ of \cite[Theorem 28.3,
p.201]{ciarlet1991finite}.
\begin{theorem}\label{thm:3.5}
Suppose that $\bar{E}_{\bar K}(\bar{\phi}) = 0$ for any $\bar{\phi} \in
P_0(\bar{K})$.
Then for arbitrary $f \in W^{1,\infty}(\Omega)$,
there exists a constant $C>0$ such that
\begin{equation*}
\big| E_K(f\phi) \big| \leq C h_K
|K|^{1/2}||f||_{1,\infty,K} ||\phi||_{1,K}\,\forall\,
\phi\in P_K,
\end{equation*}
where $h_K$ denotes the diameter of $K$.
\end{theorem}
Finally, the effect of numerical integration is obtained by combining the above results.
\begin{theorem}
Let $u$ and $\bar{u}_h$ be the solutions of \eqref{eq:model-weakform}
and \eqref{eq:model-approxi}, respectively. Assume that
$\bar{E}_{\bar K}(\bar{\phi}) = 0$ for any
$\bar{\phi} \in \nabla\, \bar{P}_{\bar{K}}.$ Assume that $\boldsymbol{\kappa}\in W^{1,\infty}(\mathcal{T}_h).$
Then, for sufficiently small $h>0,$ we have the following error estimate
\begin{equation*}
|u-\bar{u}_h|_{1,h} \leq C \frac{h}{1-h\|\boldsymbol{\kappa}\|_{1,\infty,h}} \big(||\boldsymbol{\kappa}||_{1,\infty,h} |u|_{2,\Omega} + ||f||_{1,\infty,h} \big).
\end{equation*}
\end{theorem}
\begin{proof}
We exploit the conditional uniform ellipticity of $\bar{a}_h$.
Let $u_h$ be the solution of \eqref{eq:model-discreteform}.
Then we have, for sufficiently small $h>0,$ for any $v_h \in\mathcal{NC}^{npDSSY}_h$,
\begin{align*}
(\alpha-C h\|\boldsymbol{\kappa}\|_{1,\infty,h}) |\bar{u}_h-v_h|_{1,h}^2 &\leq \bar{a}_h(\bar{u}_h-v_h,\bar{u}_h-v_h) \\
&=\bar{a}_h(u_h-v_h,\bar{u}_h-v_h) + \bar{a}_h(\bar{u}_h-u_h,\bar{u}_h-v_h) \\
&=\bar{a}_h(u_h-v_h,\bar{u}_h-v_h) + \big(\bar{F}_h(\bar{u}_h-v_h)-\bar{a}_h(u_h,\bar{u}_h-v_h) \big) \\
&=\bar{a}_h(u_h-v_h,\bar{u}_h-v_h) + \big(a_h(u_h,\bar{u}_h-v_h)-\bar{a}_h(u_h,\bar{u}_h-v_h) \big) \\
& \qquad\qq\qquad\qq\qquad\, + \big(\bar{F}_h(\bar{u}_h-v_h) - F_h(\bar{u}_h-v_h) \big).
\end{align*}
Denote by $w_h:=\bar{u}_h-v_h$. It follows that
\begin{align*}
(\alpha-C h\|\boldsymbol{\kappa}\|_{1,\infty,h}) |\bar{u}_h-v_h|_{1,h} &\leq C|u_h-v_h|_{1,h} + \frac{|a_h(u_h,\bar{u}_h-v_h)-\bar{a}_h(u_h,\bar{u}_h-v_h)|}{|\bar{u}_h-v_h|_{1,h}} \\
&\quad\qquad\qq\qquad + \frac{|F_h(\bar{u}_h-v_h) - \bar{F}_h(\bar{u}_h-v_h)|}{|\bar{u}_h-v_h|_{1,h}} \\
&\leq C \inf_{v_h \in \mathcal{NC}^{npDSSY}_h} |u_h-v_h|_{1,h} \\
&+ \sup_{w_h \in \mathcal{NC}^{npDSSY}_h} \Big(\frac{|a_h(u_h,w_h)-\bar{a}_h(u_h,w_h)|}{|w_h|_{1,h}} + \frac{|\bar{F}_h(w_h) - F_h(w_h)|}{|w_h|_{1,h}} \Big).
\end{align*}
If we take $v_h=u_h$, the above inequality is simplified to
\begin{align}\label{eq:3.22}
|\bar{u}_h-u_h|_{1,h} &\leq \frac{1}{\alpha-C h\|\boldsymbol{\kappa}\|_{1,\infty,h}} \sup_{w_h \in \mathcal{NC}^{npDSSY}_h} \Big(\frac{|a_h(u_h,w_h)-\bar{a}_h(u_h,w_h)|}{|w_h|_{1,h}} + \frac{|\bar{F}_h(w_h) - F_h(w_h)|}{|w_h|_{1,h}} \Big).
\end{align}
It remains to estimate the above two consistency error terms. For the
first term,
using Theorem \ref{thm:3.4}, we have
\begin{align}\label{eq:3.23}
|a_h(u_h,w_h)-\bar{a}_h(u_h,w_h)| & \leq \sum_{K \in \mathcal{T}_h} \big| E_K(\boldsymbol{\kappa}\nabla\, u_h \cdot \nabla\, w_h) \big|\nonumber \\
& \leq C \sum_{K \in \mathcal{T}_h} h_K ||\boldsymbol{\kappa}||_{1,\infty,K} |u_h|_{1,K} |w_h|_{1,K}\nonumber \\
& \leq C h ||\boldsymbol{\kappa}||_{1,\infty,h} |u_h|_{1,h} |w_h|_{1,h} \nonumber\\
& \leq C h ||\boldsymbol{\kappa}||_{1,\infty,h} |u|_{2,\Omega} |w_h|_{1,h},
\end{align}
where the last inequality is obtained by the following estimate:
\begin{align*}
|u_h|_{1,h} &\leq |u|_{1,\Omega} + |u-u_h|_{1,h}\\
&\leq |u|_{1,\Omega} + C h |u|_{2,\Omega} \leq C |u|_{2,\Omega}.
\end{align*}
For the second consistency error term in \eqref{eq:3.22},
Theorem \ref{thm:3.5} applies:
\begin{align}\label{eq:3.24}
|\bar{F}_h(w_h)-F_h(w_h)| & \leq \sum_{K \in \mathcal{T}_h} \big| E_K(f w_h)\big| \nonumber\\
& \leq C \sum_{K \in \mathcal{T}_h} h_K |K|^{1/2}||f||_{1,\infty,K} |w_h|_{1,K} \nonumber\\
& \leq C h |\Omega|^{1/2} ||f||_{1,\infty,h} |w_h|_{1,h}.
\end{align}
The theorem follows by combining \eqref{eq:3.22}--\eqref{eq:3.24}
with the triangle inequality. That is, for sufficiently small $h>0,$
\begin{align*}
|u-\bar{u}_h|_{1,h} & \leq |u-u_h|_{1,h} + |u_h-\bar{u}_h|_{1,h} \\
&\leq C h |u|_{2,\Omega} + C \frac{h}{1- h\|\boldsymbol{\kappa}\|_{1,\infty,h} } \big(||\boldsymbol{\kappa}||_{1,\infty,h} |u|_{2,\Omega} + |\Omega|^{1/2}||f||_{1,\infty,h} \big) \\
&\leq C \frac{h}{1- h\|\boldsymbol{\kappa}\|_{1,\infty,h} } \big(||\boldsymbol{\kappa}||_{1,\infty,h} |u|_{2,\Omega} + ||f||_{1,\infty,h} \big).
\end{align*}
\end{proof}
\section{Construction of quadrature formulae}\label{sec:const-formula}
In this section we construct quadrature formulae for the nonparametric
DSSY element defined in Section \ref{sec:npDSSY}.
\subsection{Quadrature formula on $\bar{K}$}
In \cite{meng2018new}, the basis functions are of degree at most two,
so that quadrature formulae of degree two suffice.
Our element, however, has higher-degree polynomial basis functions in
order to fulfill the MVP, and thus requires different quadrature formulae.
Following the analysis of the previous section, we seek quadrature
formulae exact for functions in $\bar{\mathcal Q}$, which is defined as
\begin{equation*}
\bar{\mathcal Q} := \operatorname{Span}\{\nabla\, \bar{u} \; \big|\;
\bar{u} \in \bar{P}_{\bar{K}} \}=\operatorname{Span}\{1,
\frac{\partial\bar\mu}{\partial\bar x_1},
\frac{\partial\bar\mu}{\partial\bar x_2}\}
\end{equation*}
to preserve the order of convergence.
From now on, we choose $\bar c=1$; the generalization to the other
cases is trivial.
We quote the formula from \cite[(1) p.330]{meng2018new}:
\begin{align}\label{eq:exact-int}
\int_{\bar{K}} \bar{x}_1^j \bar{x}_2^k d\bar{\mathbf{x}} = \sum_{l=1}^4
\int_{\bar{T}_l} \bar{x}_1^j \bar{x}_2^k d\bar{\mathbf{x}}
=\frac{j!k!}{(2+j+k)!}(1-{\bar h_1}^{j+1})(1-{\bar h_2}^{k+1}).
\end{align}
We have the area
$$|\bar K|=\frac{(1-\bar h_1)(1-\bar h_2)}{2}.$$
For the sake of notational simplicity, we write
\begin{eqnarray*}
\bar{\bar{h}}_1:=1+\bar{h}_1,\quad \bar{\bar{h}}_2:=1+\bar{h}_2,\quad \bar{\bar{h}}_1 < 1,\, \bar{\bar{h}}_2 < 1.
\end{eqnarray*}
Clearly, $\bar{\bar{\mathbf h}}_c=\frac13{\bar{\bar{h}}_1\choose \bar{\bar{h}}_2}$ is the barycenter of $\bar K.$
\subsection{One-point quadrature rule of precision 1}
The obvious one-point quadrature weight and point are given by
\begin{eqnarray}\label{eq:1pt-form}
\bar{\omega}_{1} = |\bar K|,\quad
\bar{\boldsymbol\xi}^{(1)} = \bar{\bar{\mathbf h}}_c.
\end{eqnarray}
\subsection{Two-point and three-point quadrature rules of precision 1}
We seek two-point and three-point quadrature rules \eqref{eq:quad_ref} with
equal weights at one stroke. The weights are then given by
$\omega_{\bar K,l}=\frac{|\bar K|}{L},\, l=1,\cdots, L,$ for $L=2,3.$
The Gauss points $\mathbf{\bar{b}}_{\bar K,l},l=1,\cdots, L,$ are assumed to be
geometrically symmetric with respect to the barycenter
$\bar{\bar{\mathbf h}}_c.$ Hence we seek $\boldsymbol \xi^{(L)} = {\xi_1^{(L)}\choose
\xi_2^{(L)}}$ such that
\begin{equation}\label{eq:symm-quad}
\int_{\bar{K}} \bar{\phi}(\bar{\mathbf{x}})d\bar{\mathbf{x}} \approx
\begin{cases} \frac{|\bar K|}{2} \left( \bar{\phi}(\bar{\bar{\mathbf h}}_c + \boldsymbol \xi^{(2)})
+ \bar{\phi}(\bar{\bar{\mathbf h}}_c - \boldsymbol \xi^{(2)}) \right) & \text{for }L=2,\\
\frac{|\bar K|}{3} \left(
\bar{\phi}(\bar{\bar{\mathbf h}}_c) + \bar{\phi}(\bar{\bar{\mathbf h}}_c + \boldsymbol \xi^{(3)}) + \bar{\phi}(\bar{\bar{\mathbf h}}_c
- \boldsymbol \xi^{(3)})\right)
& \text{for }L=3,
\end{cases}
\end{equation}
which are exact for
\begin{subeqnarray}\label{eq:mu_der}
\frac{\partial\bar\mu}{\partial\bar{x}_1} &=&
3\bar x_1^2\bar x_2 + \bar x_2^3 - \frac3{10}( 2\bar{\bar{h}}_1\bar x_1\bar x_2 + \bar{\bar{h}}_2\bar x_2^2) + \frac3{20}(\bar{\bar{h}}_1+\bar{\bar{h}}_2-2)\bar x_2,\\
\frac{\partial\bar\mu}{\partial\bar{x}_2} &=&
\bar x_1^3 + 3\bar x_1\bar x_2^2 - \frac3{10}( \bar{\bar{h}}_1\bar x_1^2 +
2\bar{\bar{h}}_2\bar x_1\bar x_2) + \frac3{20}(\bar{\bar{h}}_1+\bar{\bar{h}}_2-2)\bar x_1.
\end{subeqnarray}
Then, by utilizing \eqref{eq:exact-int}, the
exactness of \eqref{eq:symm-quad} for \eqref{eq:mu_der} implies that
$(\xi^{(L)}_1, \xi^{(L)}_2),\,L=2,3,$ turn out to be the solutions
$(X,Y)$ of the quadratic equations:
\begin{subeqnarray}\label{eq:poly-sym}
10 \bar{\bar{h}}_2 X^2 + 14 \bar{\bar{h}}_1 XY + 7 \bar{\bar{h}}_2 Y^2 &=&
Lr(\bar{\bar{h}}_1,\bar{\bar{h}}_2),\slabel{eq:poly-syma} \\
7 \bar{\bar{h}}_1 X^2 + 14 \bar{\bar{h}}_2 XY + 10 \bar{\bar{h}}_1 Y^2 &=& Lr(\bar{\bar{h}}_2,\bar{\bar{h}}_1),\slabel{eq:poly-symb}
\end{subeqnarray}
where
$r(u,v)=\frac{5}{4} v \left(\frac2{90} u^2- \frac4{10} u + \frac{185}{999} v^2 -
\frac6{10} v + 1\right).$
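For a numerical implementation, the system \eqref{eq:poly-sym} can also be solved directly. Since both equations are homogeneous quadratics in $(X,Y)$, the substitution $Y=sX$ reduces the ratio of the two equations to a scalar quadratic in the slope $s$. The following Python sketch (illustrative only; it assumes the generic case in which the slope quadratic is nondegenerate, and all names are ours) solves the system this way and verifies the residuals:

```python
import math

def r(u, v):
    # right-hand side kernel r(u,v) from the text
    return 1.25 * v * ((2.0/90.0)*u*u - (4.0/10.0)*u
                       + (185.0/999.0)*v*v - (6.0/10.0)*v + 1.0)

def solve_poly_sym(h1, h2, L):
    """Solve 10*h2*X^2 + 14*h1*X*Y + 7*h2*Y^2 = L*r(h1,h2),
             7*h1*X^2 + 14*h2*X*Y + 10*h1*Y^2 = L*r(h2,h1)
    via the slope substitution Y = s*X (generic case only)."""
    r1, r2 = L * r(h1, h2), L * r(h2, h1)
    # the ratio of the two equations gives a*s^2 + b*s + c = 0
    a = 7.0*h2*r2 - 10.0*h1*r1
    b = 14.0*(h1*r2 - h2*r1)
    c = 10.0*h2*r2 - 7.0*h1*r1
    disc = math.sqrt(b*b - 4.0*a*c)
    sols = []
    for s in ((-b + disc) / (2.0*a), (-b - disc) / (2.0*a)):
        X = math.sqrt(r1 / (10.0*h2 + 14.0*h1*s + 7.0*h2*s*s))
        sols.append((X, s*X))
    return sols

# sanity check: both returned pairs satisfy both equations
for (h1, h2) in ((1.0, 1.0), (0.9, 0.8)):
    for L in (2, 3):
        for (X, Y) in solve_poly_sym(h1, h2, L):
            assert abs(10*h2*X*X + 14*h1*X*Y + 7*h2*Y*Y - L*r(h1, h2)) < 1e-9
            assert abs(7*h1*X*X + 14*h2*X*Y + 10*h1*Y*Y - L*r(h2, h1)) < 1e-9
```

Such a direct solve can also serve as an independent numerical check of the closed-form expressions recorded in \eqref{eq:solXY}.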
Since \eqref{eq:poly-sym} is a symmetric system of homogeneous
quadratic equations, it is easy to see the following lemma.
\begin{lemma}\label{lem:sym-sols}
The four solutions of \eqref{eq:poly-sym} are given by
$\pm (g_1(\bar{\bar{h}}_1,\bar{\bar{h}}_2),g_1(\bar{\bar{h}}_2,\bar{\bar{h}}_1)),$
and \\ $\pm(g_2(\bar{\bar{h}}_1,\bar{\bar{h}}_2),g_2(\bar{\bar{h}}_2,\bar{\bar{h}}_1))$ for some functions
$g_1(u,v)$ and $g_2(u,v).$
\end{lemma}
Denote by $r_1$ and $r_2$ the right hand sides of \eqref{eq:poly-sym}.
Then a judicious use of symbolic software (for example, Julia or Matlab),
together with some manual simplification,
gives the following two pairs of solutions $(X,Y)$:
\begin{subeqnarray}\label{eq:solXY}
X^{(1)}(\bar{\bar{h}}_1,\bar{\bar{h}}_2) &=&-\frac{T_4+\sqrt{T_3}}{T_1}Y^{(1)}(\bar{\bar{h}}_1,\bar{\bar{h}}_2),\quad
Y^{(1)}(\bar{\bar{h}}_1,\bar{\bar{h}}_2)=\sqrt{\frac{T_5+T_6 \sqrt{T_3} }{T_2}};\\
X^{(2)}(\bar{\bar{h}}_1,\bar{\bar{h}}_2) &=&-\frac{T_4-\sqrt{T_3}}{T_1}Y^{(2)}(\bar{\bar{h}}_1,\bar{\bar{h}}_2),\quad
Y^{(2)}(\bar{\bar{h}}_1,\bar{\bar{h}}_2)=\sqrt{\frac{T_5-T_6 \sqrt{T_3} }{T_2}},
\end{subeqnarray}
where
$
T_1 = 7r_1\bar{\bar{h}}_1 - 10r_2\bar{\bar{h}}_2,\,
T_2 = 13720\bar{\bar{h}}_1^4 - 26603\bar{\bar{h}}_1^2\bar{\bar{h}}_2^2 + 13720\bar{\bar{h}}_2^4,\,
T_3 = -70(r_1^2\bar{\bar{h}}_1^2 + r_2^2\bar{\bar{h}}_2^2)+ 49(r_1^2\bar{\bar{h}}_2^2+r_2^2\bar{\bar{h}}_1^2) + 51r_1r_2\bar{\bar{h}}_1\bar{\bar{h}}_2 ,\,
T_4 = 7(r_1\bar{\bar{h}}_2-r_2\bar{\bar{h}}_1),\,
T_5 = -1043r_1\bar{\bar{h}}_1^2\bar{\bar{h}}_2 + 980r_1\bar{\bar{h}}_2^3 + 686r_2\bar{\bar{h}}_1^3 - 470r_2\bar{\bar{h}}_1\bar{\bar{h}}_2^2,\,
T_6 = 14(7\bar{\bar{h}}_1^2 - 10\bar{\bar{h}}_2^2 ).
$
Owing to the symmetries of $T_2$ and $T_3$ with respect to $\bar{\bar{h}}_1$
and $\bar{\bar{h}}_2$, one can check by using symbolic software, again, that
$X^{(j)}(\bar{\bar{h}}_1,\bar{\bar{h}}_2)=Y^{(j)}(\bar{\bar{h}}_2,\bar{\bar{h}}_1),j=1,2,$ which confirms
Lemma \ref{lem:sym-sols}.
Obviously, $T_2>0$ unless $\bar{\bar{h}}_1=0$ and $\bar{\bar{h}}_2=0.$
Among the two pairs of solutions $(X^{(j)},Y^{(j)}),\,j=1,2,$ in \eqref{eq:solXY}, we choose
the one closer to the origin $(0,0)$ for which $\bar{\bar{\mathbf h}}_c+{X^{(j)}\choose
Y^{(j)}}$ lies in $\bar K$, in order to
increase numerical stability.
In the case where $\bar{\bar{h}}_1=0$ or $\bar{\bar{h}}_2=0,$ $T_1$ vanishes, and
therefore the above formulae are unstable. To deal with this,
first assume that $\bar{\bar{h}}_2=0.$
Then, the polynomial equations corresponding
to \eqref{eq:poly-sym} are simplified as follows:
\begin{subeqnarray}\label{eq:poly-sym-k=0}
1008 \bar{\bar{h}}_1 (2-\bar{\bar{h}}_1) XY &=&0,
\slabel{eq:poly-sym-k=0a} \\
252X^2 + 360Y^2 &=& 25 \bar{\bar{h}}_1^2 - 81 \bar{\bar{h}}_1 + 135.
\slabel{eq:poly-sym-k=0b}
\end{subeqnarray}
From \eqref{eq:poly-sym-k=0a} and \eqref{eq:poly-sym-k=0b}, we have
either (i) $X^{(1)}=0,\, Y^{(1)}=\sqrt{\frac{25 \bar{\bar{h}}_1^2 - 81 \bar{\bar{h}}_1 + 135}{360}}$ or
(ii) $Y^{(2)}=0,\, X^{(2)}=\sqrt{\frac{25 \bar{\bar{h}}_1^2 - 81 \bar{\bar{h}}_1 + 135}{252}}.$
Among these two pairs
$(X^{(j)},Y^{(j)}),\,j=1,2,$ the suitable choice is made as above.
The case of $\bar{\bar{h}}_1=0$ is treated similarly by rotation of the case of $\bar{\bar{h}}_2=0.$
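As a consistency check, the degenerate solutions above can be substituted back into the general system \eqref{eq:poly-sym} with $\bar{\bar{h}}_2=0$; the illustrative Python sketch below does this with $L=3$, for which the right-hand side of \eqref{eq:poly-sym-k=0b} equals $L\left(\tfrac{25}{3}\bar{\bar{h}}_1^2-27\bar{\bar{h}}_1+45\right)$, and verifies that the residuals vanish:

```python
import math

def r(u, v):
    # right-hand side kernel r(u,v) from the text
    return 1.25 * v * ((2.0/90.0)*u*u - (4.0/10.0)*u
                       + (185.0/999.0)*v*v - (6.0/10.0)*v + 1.0)

def degenerate_solutions(h1):
    """Closed-form solutions for the case h2bar = 0 (with L = 3)."""
    q = 25.0*h1*h1 - 81.0*h1 + 135.0
    return [(0.0, math.sqrt(q/360.0)), (math.sqrt(q/252.0), 0.0)]

# substitute back into the general system (eq:poly-sym) with h2bar = 0, L = 3
L, h2 = 3, 0.0
for h1 in (0.5, 1.0):
    for (X, Y) in degenerate_solutions(h1):
        assert abs(10*h2*X*X + 14*h1*X*Y + 7*h2*Y*Y - L*r(h1, h2)) < 1e-12
        assert abs(7*h1*X*X + 14*h2*X*Y + 10*h1*Y*Y - L*r(h2, h1)) < 1e-12
```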
\section{Numerical examples}
In this section some numerical results are reported to confirm the
theoretical parts about the quadrature developed in the previous
sections.
\begin{example}\label{ex:1-2}
For the numerical example, consider the elliptic boundary value
problem \eqref{eq:model-elliptic} on $\Omega=(0,1)^2$ and
$\kappa(\textbf{x})=1+(1+x_1)(1+x_2)+\epsilon\sin(10\pi x_1)\sin(5\pi
x_2).$ The source term $f$ is generated by the exact solution
\begin{equation*}
u(x_1,x_2) = \sin(3\pi x_1)x_2(1-x_2) +
\epsilon\sin\left(\frac{\pi
x_1}{\epsilon}\right)\sin\left(\frac{\pi
x_2}{\epsilon}\right),\quad \epsilon=0.2.
\end{equation*}
\end{example}
The above problem is solved by using the nonparametric DSSY element
constructed in Section \ref{sec:npDSSY}.
Let $\bar u_h$ denote the nonparametric DSSY
Galerkin approximation to $u$ obtained by using a specific Gauss quadrature
rule in \eqref{eq:model-approxi}.
The meshes used in the numerical example are
perturbed from $(N\times N)$ uniform rectangles as follows.
The random mesh points $x_{jk},\,j,k=1,\cdots,N-1,$ are obtained by
perturbing the uniform mesh points $(j,k)h,\, h=\frac1{N},$ randomly, so that
$x_{jk} = (j+r_{jk,1},k+r_{jk,2})h$ with
$|r_{jk,l}| \le r,\, l=1,2.$ Here, $r=0.2$ was chosen. The linear systems are solved by the Conjugate
Gradient method with tolerance $10^{-7}$ for residuals.
The errors and reduction orders on the randomly perturbed meshes are
averaged over 20 ensembles; the number of ensembles can be increased
arbitrarily.
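The random perturbation just described can be sketched as follows (illustrative Python, not the code used for the experiments; boundary points are kept fixed):

```python
import random

def perturbed_mesh(N, r=0.2, seed=0):
    """Return the (N+1)x(N+1) grid points of (0,1)^2 with the interior
    points x_{jk} = (j + r_{jk,1}, k + r_{jk,2}) h, |r_{jk,l}| <= r,
    h = 1/N; boundary points stay on the uniform grid."""
    rng = random.Random(seed)
    h = 1.0 / N
    pts = {}
    for j in range(N + 1):
        for k in range(N + 1):
            if 0 < j < N and 0 < k < N:
                pts[(j, k)] = ((j + rng.uniform(-r, r)) * h,
                               (k + rng.uniform(-r, r)) * h)
            else:
                pts[(j, k)] = (j * h, k * h)
    return pts

pts = perturbed_mesh(8)
# each point stays within r*h of its uniform position
h, r = 1.0 / 8, 0.2
assert all(abs(x - j * h) <= r * h + 1e-12 and abs(y - k * h) <= r * h + 1e-12
           for (j, k), (x, y) in pts.items())
```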
Our 2--point and 3--point Gauss quadrature rules are compared with
the standard $2\times 2$--point and $3\times 3$--point tensor product Gauss
quadrature rules in computing the stiffness matrix and the right hand
side vectors. In order to make the comparison fair among the above four
different quadrature rules, all errors are calculated by using the
$3\times 3$--point tensor product Gauss quadrature.
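The reduction orders reported in the tables are the usual logarithmic error ratios between consecutive refinements, $\log_2(e_{2h}/e_h)$; an illustrative Python sketch:

```python
import math

def reduction_orders(errors):
    """Given errors e_h on meshes N, 2N, 4N, ..., return the observed
    orders log2(e_{2h}/e_h) between consecutive refinements."""
    return [math.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]

# synthetic second-order data e_h = C h^2 reproduces order 2 exactly
errors = [100.0 / 4 ** i for i in range(5)]
assert all(abs(o - 2.0) < 1e-12 for o in reduction_orders(errors))
```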
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$N$} & \multicolumn{4}{c|}{$2\times 2$ Gauss quadrature}
& \multicolumn{4}{c|}{$3\times 3$ Gauss quadrature}\\ \cline{2-9}
& $|\bar{u}_h-u|_{1,h}$ & order &
$\|\bar{u}_h-u\|_0$
& order &
$|\bar{u}_h-u|_{1,h}$
& order &
$\|\bar{u}_h-u\|_0$ & order \\ \hline
4 & 4.37 & & 0.213 & & 2.45 & & 0.131 & \\ \hline
8 & 4.17 & 0.688E-01 & 0.857E-01 & 1.31 & 1.72 & 0.508 & 0.426E-01 & 1.62\\ \hline
16 & 2.39 & 0.803 & 0.240E-01 & 1.84 & 0.926 &0.896 & 0.112E-01 & 1.92\\ \hline
32 & 1.29 & 0.888 & 0.661E-02 & 1.86 & 0.471 &0.975 & 0.286E-02 & 1.97\\ \hline
64 & 0.731 & 0.821 & 0.204E-02 & 1.69 & 0.237 &0.994 & 0.718E-03 & 1.99\\ \hline
128 & 0.476 & 0.619 & 0.825E-03 & 1.31 & 0.118 &0.998 & 0.180E-03 & 2.00\\ \hline
256 & 0.384 & 0.311 & 0.497E-03 & 0.732& 0.592E-01&0.999 & 0.450E-04 & 2.00\\ \hline
\end{tabular}
\caption{Numerical results of Example \ref{ex:1-2} on random meshes with
tensor product Gauss quadratures: the broken $H^1(\Omega)$--seminorm and
$L^2(\Omega)$--norm errors and their reduction orders are reported.}
\label{tab:ex1-2-1}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$N$} & \multicolumn{4}{c|}{$2$--pt Gauss quadrature}
& \multicolumn{4}{c|}{$3$--pt Gauss quadrature}\\ \cline{2-9}
& $|\bar{u}_h-u|_{1,h}$ & order &
$\|\bar{u}_h-u\|_0$
& order &
$|u-\bar{u}_h|_{1,h}$ & order & $\|\bar{u}_h-u\|_0$ & order \\ \hline
4 & 13.2 & & 0.960 & & 4.61 & & 0.539 & \\ \hline
8 & 9.04 & 0.548 & 0.228 & 2.08 & 2.42 & 0.927 & 0.113 & 2.26 \\ \hline
16 & 4.15 & 1.12 & 0.457E-01& 2.32 & 1.15 & 1.08 & 0.208E-01& 2.44 \\ \hline
32 & 2.10 & 0.984 & 0.107E-01& 2.09 & 0.559 & 1.04 & 0.388E-02& 2.42 \\ \hline
64 & 1.06 & 0.979 & 0.263E-02& 2.03 & 0.279 & 1.00 & 0.835E-03& 2.21 \\ \hline
128 & 0.533 & 0.999 & 0.655E-03& 2.01 & 0.139 & 1.00 & 0.200E-03& 2.06 \\ \hline
256 & 0.267 & 0.999 & 0.163E-03& 2.00 & 0.698E-01& 0.997 & 0.494E-04& 2.02 \\ \hline
\end{tabular}
\caption{Numerical results of Example \ref{ex:1-2} on random meshes with
2-pt and 3-pt Gauss quadratures: the broken $H^1(\Omega)$--seminorm
and $L^2(\Omega)$--norm errors and their reduction orders are reported.}
\label{tab:ex1-2-3}
\end{table}
As seen from the tables, the $2\times 2$ tensor product Gauss
quadrature is not sufficient to integrate the matrices in the
nonparametric DSSY element methods, while the $3\times 3$ tensor
product rule is sufficient. Meanwhile, our new
2--pt Gauss quadrature rule is nearly optimal and gives better
numerical results than the $2\times 2$ product rule.
Observe that numerical errors obtained by our $3$--pt Gauss
quadrature rule are as good as those obtained by using the $3\times 3$
tensor product rule. In most calculations, the 2--pt Gauss quadrature
rule is acceptable.
\section*{Acknowledgments}
This research was supported in part by National Research Foundation of Korea
grants (NRF-2017R1A2B3012506 and NRF-2015M3C4A7065662).
\subsection{Recollections from previous work}
Our computations will rely upon two fundamental facts about $\mathrm{THH}(\ell)$ that can be proven by non-motivic means. The first fact, due originally to McClure--Staffeldt at odd primes \cite{McClureStaffeldt} and Angeltveit--Rognes at the prime $2$ \cite{AngeltveitRognes}, is the calculation of $\mathrm{THH}_*(\ell)/(p,v_1)$:
\begin{theorem} \label{thm:THHellwithFpcoeffs}
There is an isomorphism of algebras
\[\pi_{*} \left(\mathrm{THH}(\ell) \otimes_{\ell} \mathbb{F}_p\right) \cong \Lambda(\lambda_1,\lambda_2) \otimes \mathbb{F}_p[\mu],\]
for classes $\lambda_1$ in degree $2p-1$, $\lambda_2$ in degree $2p^2-1$, and $\mu$ in degree $2p^2$.
\end{theorem}
To state the second fact, note that the sequence of $\mathbb{E}_{\infty}$-ring maps
$$\ell \to \mathrm{THH}(\ell) \xrightarrow{\varphi} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p}$$
allows us to view the cyclotomic Frobenius $\varphi$ as a map of $\ell$-algebras. We then have the following result, which is a version of the Segal conjecture:
\begin{theorem} \label{thm:ellSegal}
The mod $(p,v_1)$ Frobenius map $\pi_{*} \left(\mathrm{THH}(\ell) \otimes_{\ell} \mathbb{F}_p\right) \xrightarrow{\varphi} \pi_{*}\left(\mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} \otimes_{\ell} \mathbb{F}_p\right)$ is identified under the isomorphism of \Cref{thm:THHellwithFpcoeffs} with the ring map
\[\Lambda(\lambda_1,\lambda_2) \otimes \mathbb{F}_p[\mu] \to \Lambda(\lambda_1,\lambda_2) \otimes \mathbb{F}_p[\mu^{\pm 1}]\]
that inverts the class $\mu$.
\end{theorem}
\Cref{thm:ellSegal} was first proved for primes $p \ge 5$ by Ausoni--Rognes \cite[Theorem 5.5]{AusoniRognes}.
An argument at $p=2$ is given in the thesis of Sverre {L}un{\o}e{-}{N}ielsen \cite{sverre-thesis}.
The result can be deduced at all primes from the discussion of the first and third authors in \cite[\textsection 4]{HahnWilson}.
To recall that argument, note that filtering $\ell$ by its $\mathbb{F}_p$-Adams filtration gives a map of spectral sequences, converging to the mod $(p,v_1)$ Frobenius, which on $\mathrm{E}_2$-pages is $\pi_{*}$ of
\[\mathrm{THH}(\mathbb{F}_p[v_0,v_1]) \otimes_{\mathbb{F}_p[v_0,v_1]} \mathbb{F}_p \xrightarrow{\varphi} \mathrm{THH}(\mathbb{F}_p[v_0,v_1])^{\mathrm{t} \mathrm{C}_p} \otimes_{\mathbb{F}_p[v_0,v_1]} \mathbb{F}_p.\]
This map of $E_2$-pages can be identified with the ring map
\[\mathbb{F}_p[x] \otimes \Lambda(\sigma v_0,\sigma v_1) \to \mathbb{F}_p[x^{\pm}] \otimes \Lambda(\sigma v_0,\sigma v_1)\]
that inverts the degree $2$ B\"ockstedt generator $x \in \mathrm{THH}_{*}(\mathbb{F}_p)$.
In both spectral sequences, the class $\mu$ is detected by $x^{p^2}$.
Since the statement is true on the associated graded, \Cref{thm:ellSegal} will follow once we check that $\varphi(\mu)$ is invertible, and not merely invertible on the $\mathrm{E}_2$-page.
It follows from the Leibniz rule that $x^{-p^2}$ is a permanent cycle in the codomain spectral sequence, and therefore that $\varphi(\mu)$ has an inverse up to higher filtration.
By completeness of the filtration on homotopy groups,
$\varphi(\mu)$ is itself invertible.
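In slightly more detail, this last step runs as follows. Since $x^{p^2}$ is a permanent cycle, the Leibniz rule gives
\[
0 = d\left(x^{p^2} \cdot x^{-p^2}\right) = x^{p^2}\, d\left(x^{-p^2}\right),
\]
and multiplication by $x^{p^2}$ is invertible in the codomain spectral sequence, so $d(x^{-p^2})=0$. If $z$ is any class detected by $x^{-p^2}$, then $\varphi(\mu) z = 1 + w$ with $w$ of positive filtration, and the geometric series $\sum_{n \ge 0} (-w)^n$ converges by completeness of the filtration, so that $1+w$, and hence $\varphi(\mu)$, is invertible.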
\begin{remark}
A key step in the Ausoni--Rognes work, also featured in \cite{EllipticTC}, is the construction of elements in $V(0)_*\mathrm{TC}(\ell)$ lifting the classes $\lambda_1$ and $\lambda_2$ of \Cref{thm:THHellwithFpcoeffs}. Our arguments below do not presuppose the existence of such lifts.
\end{remark}
\subsection{A useful isomorphism}
In order to calculate the mod $(p,v_1,v_2)$ syntomic cohomology of $\ell$ we will first need to compute the mod $(p,v_1,v_2)$ prismatic cohomology of $\ell$, or in other words $\left(\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TP}(\ell)\right)/(p,v_1,v_2)$.
A useful and perhaps surprising fact is that the mod $(p,v_1,v_2)$ prismatic cohomology of $\ell$ has a second interpretation: as we explain in this brief section, it is canonically isomorphic to $\left(\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p}\right)/(p,v_1)$ (cf. \cite[Construction 3.3]{BhattMathew}). For the purposes of this section we are defining
$\operatorname{fil}^{\star}_{\mathrm{mot}}\mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p}$ via the pushout in
filtered $\mathbb{E}_\infty$-rings:
\[
\begin{tikzcd}
\operatorname{fil}^\star_\mathrm{mot} \mathrm{TC}^{-}(\ell)\arrow{r}{\varphi}\ar[d] &
\operatorname{fil}^\star_\mathrm{mot} \mathrm{TP}(\ell) \ar[d]\\
\operatorname{fil}^\star_\mathrm{mot} \mathrm{THH}(\ell)\ar[r] &
\operatorname{fil}^\star_\mathrm{mot} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p}.
\end{tikzcd}
\]
To define the isomorphism, note that $v_2=0$ in $\left(\operatorname{gr}^\star_{\mathrm{ev}} \ell\right)/(p,v_1)$, so the sequence of algebra maps
\[\operatorname{gr}^\star_{\mathrm{ev}} \ell \to \operatorname{gr}^\star_{\mathrm{mot}} \mathrm{THH}(\ell) \xrightarrow{\varphi} \operatorname{gr}^\star_{\mathrm{mot}} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p}\]
imply that $v_2=0$ in $\left(\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p}\right)/(p,v_1)$. Thus, the natural map
\[\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TP}(\ell)/(p,v_1) \to \operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} / (p,v_1) \]
factors over a map
\[g:\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TP}(\ell)/(p,v_1,v_2) \to \operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} / (p,v_1). \]
\begin{theorem} \label{thm:HodgeTateIso}
The map $g$ above is an isomorphism
\[\left(\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TP}(\ell)\right)/(p,v_1,v_2) \xrightarrow{\cong} \left(\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p}\right) / (p,v_1).\]
\end{theorem}
\begin{proof}
We can compute the motivic associated graded for $\mathrm{TP}(\ell)$ via the cobar complex with $s$th term given by $\pi_{*}\mathrm{TP}(\ell/\mathrm{MW}^{\otimes s +1})$. The domain of $g$ is calculated by the complex obtained from modding out each term $\pi_{*}\mathrm{TP}(\ell/\mathrm{MW}^{\otimes s +1})$ by $p,v_1$ and $v_2$. The codomain is obtained from the complex $\pi_{*}\mathrm{THH}(\ell/\mathrm{MW}^{\otimes \bullet +1})^{\mathrm{t} \mathrm{C}_p}$ by levelwise killing $p$ and $v_1$.
We first note that, for each value of $s \ge 0$, $v_2=0$ in $\left(\pi_{*}\mathrm{THH}(\ell/\mathrm{MW}^{\otimes s +1})^{\mathrm{t} \mathrm{C}_p}\right)/(p,v_1)$. This can be seen from the existence of the relative cyclotomic Frobenius map
\[\pi_{*}\mathrm{THH}(\ell/\mathrm{MW}^{\otimes s +1}) / (p,v_1) \to \pi_{*}\mathrm{THH}(\ell/\mathrm{MW}^{\otimes s +1})^{\mathrm{t} \mathrm{C}_p} / (p,v_1),\]
because $v_2 \equiv 0$ modulo $(p,v_1)$ in any $\pi_{*}\ell$-algebra such as $\pi_{*}\mathrm{THH}(\ell/\mathrm{MW}^{\otimes s +1})$.
It follows that $g$ extends to a map of cobar complexes, which levelwise is of the form
$$g_s:\left(\pi_* \mathrm{TP}(\ell/\mathrm{MW}^{\otimes s+1})\right) / (p,v_1,v_2) \to \left(\pi_* \mathrm{THH}(\ell/\mathrm{MW}^{\otimes s+1})^{\mathrm{t} \mathrm{C}_p}\right)/(p,v_1).$$
To prove that $g$ is an isomorphism, we will prove the stronger claim that $g_{s}$ is an isomorphism for each $s \ge 0$.
To see that $g_{s}$ is an isomorphism,
it follows from the lemma below that the group
$\left(\pi_* \mathrm{THH}(\ell/\mathrm{MW}^{\otimes s+1})^{\mathrm{t} \mathrm{C}_p}\right)/(p,v_1)$ can be computed from $\left(\pi_{*} \mathrm{TP}(\ell/\mathrm{MW}^{\otimes s+1})\right)/(p,v_1)$ by killing $[p](t)$ for any complex orientation $t$, so it suffices to show that $v_2$ is a unit multiple of $[p](t)$. We know that $v_2$ is some multiple of $[p](t)$, because $v_2$ becomes $0$ when $[p](t)$ is killed. Since $[p](t)=v_2t^{p^2}+\mathcal{O}(t^{p^2+1})$, we must have $v_2=t^{-p^2}([p](t))(1+\mathcal{O}(t))$, so $v_2$ is a unit multiple of $[p](t)$.
\end{proof}
\begin{lemma} Let $M \in \mathrm{Mod}_{\mathrm{MU}}^{{\mathrm{BS}^1}}$
be an ${\mathrm{S}^1}$-equivariant $\mathrm{MU}$-module. Then the map
\[
M^{\mathrm{t}{\mathrm{S}^1}}/[p](t) = M^{\mathrm{t}{\mathrm{S}^1}}\otimes_{\mathrm{MU}^{\mathrm{t}{\mathrm{S}^1}}}
\mathrm{MU}^{\mathrm{t} \mathrm{C}_p} \to M^{\mathrm{t} \mathrm{C}_p}
\]
is an equivalence.
\end{lemma}
\begin{proof} We argue exactly as in \cite[IV.4.12]{NikolausScholze}.
Since $\mathrm{MU}^{\mathrm{t} \mathrm{C}_p} =
\mathrm{MU}^{\mathrm{t}{\mathrm{S}^1}}/[p](t)$ is a perfect $\mathrm{MU}^{\mathrm{t}{\mathrm{S}^1}}$-module,
the functor $(-)\otimes_{\mathrm{MU}^{\mathrm{t}{\mathrm{S}^1}}}\mathrm{MU}^{\mathrm{t} \mathrm{C}_p}$
commutes with all limits and colimits.
Using the equivalences $M^{tG}=\operatorname*{colim} (\tau_{\ge n}M)^{tG}$ and $M^{tG}=\lim (\tau_{\le n}M)^{tG}$ we are reduced
to the case when $M$ is bounded, and further to the case where
$M$ is discrete. In this case the ${\mathrm{S}^1}$-action (and hence
the $\mathrm{C}_p$-action) must be trivial, and then one concludes by
direct observation.
\end{proof}
\begin{remark} \label{rmk:isE2tate}
We will study and compute $\left(\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TP}(\ell)\right)/(p,v_1,v_2)$ at any prime $p$. However, if the Smith--Toda complex $V(2)=\mathbb{S}/(p,v_1,v_2)$ does not exist, we cannot say that $\left(\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TP}(\ell)\right)/(p,v_1,v_2)$ is the $\mathrm{E}_2$-page of a spectral sequence converging to $V(2)_{\star} \mathrm{TP}(\ell)$. In contrast, it is always the case that $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} / (p,v_1)$ is the $\mathrm{E}_2$-page of a spectral sequence converging to $\mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} \otimes_{\ell} \mathbb{F}_p$. This spectral sequence is the one associated to the cosimplicial object $\mathrm{THH}(\ell/\mathrm{MW}^{\otimes \bullet+1})^{\mathrm{t} \mathrm{C}_p} \otimes_{\ell} \mathbb{F}_p$, where the map from $\ell$ into $\mathrm{THH}(\ell/\mathrm{MW}^{\otimes s+1})^{\mathrm{t} \mathrm{C}_p}$ is given by the composite of the map $\ell \to \mathrm{THH}(\ell/\mathrm{MW}^{\otimes s+1})$ with the relative cyclotomic Frobenius.
\end{remark}
\subsection{Naming conventions in the cobar complex}
In the next sections we begin our explicit computations of the prismatic and syntomic cohomologies of $\ell$. We make all of these computations via a convenient, specific resolution, which was also used in \cite[Section 6]{HahnWilson}.
\begin{definition} \label{dfn:ellcobar}
The \emph{cobar complex} computing $\operatorname{gr}^\star_{\mathrm{mot}}\mathrm{THH}(\ell)$ is the $E_1$-page of the descent spectral sequence for
\[\mathrm{THH}(\ell) \to \mathrm{THH}(\ell/\mathrm{MU}).\]
This cobar complex has $s$th term given by $\pi_{\star}\mathrm{THH}(\ell/\mathrm{MU}^{\otimes s+1})$. Cocycles in the $s$th term represent elements in the $s$th Adams weight of $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)$.
Similarly, we refer to $\pi_{\star}\left(\mathrm{THH}(\ell/\mathrm{MU}^{\otimes \bullet+1})^{\mathrm{h}{\mathrm{S}^1}}\right)$ as the cobar complex computing $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$, and $\pi_{\star} \left(\mathrm{THH}(\ell/\mathrm{MU}^{\otimes \bullet+1})^{\mathrm{t}{\mathrm{S}^1}}\right)$ as the cobar complex computing $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TP}(\ell)$. There are canonical maps from the cobar complex computing $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$ to the cobar complexes computing $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TP}(\ell)$ and $\operatorname{gr}^\star_{\mathrm{mot}}\mathrm{THH}(\ell)$, respectively.
\end{definition}
We emphasize again that the cobar complexes of \Cref{dfn:ellcobar} are merely our preferred presentations for the much more canonical $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{THH}(\ell)$, $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}^{-}(\ell)$, and $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TP}(\ell)$. Using the Quillen map $\mathrm{BP} \to \mathrm{MU}_{(p)}$, as well as various unit maps such as $\mathrm{MU}_{(p)} \to \mathrm{TC}^{-}(\ell/\mathrm{MU}_{(p)}) = \mathrm{TC}^{-}(\ell/\mathrm{MU}),$
we obtain a map from the Adams--Novikov $\mathrm{E}_1$-page to the cobar complex computing $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$.
In particular, any Adams--Novikov cocycle names a cocycle in the cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}^{-}(\ell)$, and, via the canonical maps, also names cocycles in the cobar complexes for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TP}(\ell)$ and $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)$.
When referring to elements of the Adams--Novikov $\mathrm{E}_1$-page, we will use the (standard) conventions of \cite[\textsection 3]{WilsonSampler}. For example, $t_1^2+v_1t_1$, which is a cocycle at $p=2$ representing $\nu \in \pi_*(\mathbb{S})$, also represents a class of Adams weight $1$ in $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}^{-}(\ell)$.
\begin{remark}
We will need the fact that $t_1$ is a cocycle on the Adams--Novikov $E_1$-page, or in other words that $d(t_1)=0$. This furthermore implies that $d(t_1^p) \equiv 0$ modulo $p$.
\end{remark}
\begin{definition}
Using the unit map $\mathrm{BP}^{\mathrm{h}{\mathrm{S}^1}} \to \mathrm{MU}_{(p)}^{\mathrm{h}{\mathrm{S}^1}} \to \mathrm{TC}^{-}(\ell/\mathrm{MU})$, we send the standard complex orientation of $\mathrm{BP}$ from \cite[Section 3]{WilsonSampler} to a class in the cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$ that we name $t$.
\end{definition}
\begin{remark}
We have the standard formula from \cite[Lemma 3.14]{WilsonSampler}:
$$\eta_R(t)=c(t+_{\mathbb{G}} t_1t^p+_{\mathbb{G}}t_2t^{p^2}+\cdots),$$
where $+_{\mathbb{G}}$ denotes addition using the $\mathrm{BP}_*$ formal group law and $c$ denotes the conjugation action on $\mathrm{BP}_*\mathrm{BP}$, extended by the rule that $c(t)=t$. All we really need from this formula is the fact that $\eta_R(t) \equiv t+t_1t^p$ modulo $p,v_1$, and $t^{p+2}$. This implies that $\eta_R(t^p) \equiv t^p+t_1^pt^{p^2}$ modulo $p,v_1,$ and $t^{p^2+2p}$.
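Here the last implication is the usual Frobenius computation: modulo $p$,
\[
\eta_R(t^p) = \eta_R(t)^p \equiv (t + t_1 t^p)^p \equiv t^p + t_1^p t^{p^2},
\]
since the mixed terms of the $p$th power carry binomial coefficients divisible by $p$, while the suppressed error term $\mathcal{O}(t^{p+2})$ contributes only in degrees at least $p(p+2) = p^2 + 2p$.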
\end{remark}
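\begin{remark}
The second congruence in the preceding remark follows from the first by taking $p$th powers, using that $\eta_R$ is a ring map. Indeed, write $\eta_R(t)=t+t_1t^p+e$, where $e \equiv 0$ modulo $p$, $v_1$, and $t^{p+2}$. Then, modulo $p$ and $v_1$,
$$\eta_R(t^p)=\eta_R(t)^p=(t+t_1t^p+e)^p \equiv t^p+t_1^pt^{p^2}+e^p,$$
since all multinomial cross terms are divisible by $p$, and $e^p \equiv 0$ modulo $t^{p(p+2)}=t^{p^2+2p}$.
\end{remark}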
\begin{remark}
Each term $\mathrm{THH}_{\star}(\ell/\mathrm{MU}^{\otimes s+1})$ in the cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{THH}(\ell)$ is a free $\pi_* \ell$-module, and so has $(p,v_1)$ as a regular sequence.
By the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)$ we will mean the complex obtained by modding out both $p$ and $v_1$ levelwise.
This complex computes $\left(\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)\right) / (p,v_1)$, which is the $\mathrm{E}_2$-term of the motivic spectral sequence for $\pi_{\star}\left(\mathrm{THH}(\ell) \otimes_{\ell} \mathbb{F}_p\right)$.
We may also speak of the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$, which has $s$th term isomorphic to $\mathrm{THH}_\star(\ell/\mathrm{MU}^{\otimes s+1})\llbracket t \rrbracket / (p,v_1)$, and analogously the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TP}(\ell)$. At the prime $2$, where $V(1)=\mathbb{S}/(2,v_1)$ does not exist, these cobar complexes do not necessarily compute the $\mathrm{E}_2$-page of any topologically relevant spectral sequence.
\end{remark}
\subsection{Hochschild homology}
The motivic spectral sequence for $\mathrm{THH}(\ell) \otimes_{\ell} \mathbb{F}_p$ was computed in \cite[\textsection 6.1]{HahnWilson}, and we recall the results below.
\begin{lemma}[\citebare{HahnWilson}]
In the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}^{-}(\ell)$, the elements $v_2$ and $t_1$ are both divisible by $t$.
\end{lemma}
\begin{definition}
We write $\sigma^2v_2$ and $\sigma^2t_1$ for the elements in the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}^{-}(\ell)$ defined by the relations $t\sigma^2v_2=v_2$ and $t\sigma^2t_1=t_1,$ respectively. Using the canonical map between the cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}^{-}(\ell)$ and the cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{THH}(\ell)$, we may also speak of classes $\sigma^2v_2$ and $\sigma^2t_1$ in the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{THH}(\ell)$.
\end{definition}
\begin{theorem}[\citebare{HahnWilson}]
The motivic spectral sequence for $\pi_{\star}\left(\mathrm{THH}(\ell) \otimes_{\ell} \mathbb{F}_p\right)$ collapses. In the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{THH}(\ell)$, the elements $\sigma^2v_2,\sigma^2t_1,$ and $(\sigma^2t_1)^{p}$ are cocycles representing $\mu,\lambda_1,$ and $\lambda_2$, respectively.
\end{theorem}
\begin{comment}
\begin{sseqdata}[name=ellTHHmot, classes = fill, class labels = { left = 0.01 cm }, xscale=0.55, yscale=1, yrange={0}{2}, xrange={0}{20}, Adams grading]
\class(0,0)
\classoptions["1"](0,0)
\class["\mu"](8,0)
\class["\mu^2"](16,0)
\class["\lambda_1"](3,1)
\class["\mu \lambda_1"](11,1)
\class["\mu^2 \lambda_1"](19,1)
\class["\lambda_2"](7,1)
\class["\mu \lambda_2"](15,1)
\class["\lambda_1 \lambda_2"](10,2)
\class["\mu \lambda_1 \lambda_2"](18,2)
\end{sseqdata}
\tiny
\printpage[name=ellTHHmot, grid=chess, title style ={font=\small},title = {The first 20 stems of the $\mathrm{E}_2=\mathrm{E}_{\infty}$ page of the motivic spectral sequence for $\pi_{*}\left(\mathrm{THH}(\mathrm{ku}) \otimes_{\mathrm{ku}} \mathbb{F}_2\right)$}, page = 10]
\normalsize
\end{comment}
\includegraphics[scale=1,trim={4cm 20.5cm 3.5cm 2.5cm},clip]{Figure3.pdf}
\begin{remark}
The reader may find it enlightening to understand why $\sigma^2v_2$, $\sigma^2t_1$, and $(\sigma^2t_1)^p$ are cocycles in the cobar complex for $\left(\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{THH}(\ell)\right)/(p,v_1)$. First, in the mod $(p,v_1,t^{p+2})$ cobar complex for $\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$ we may make the calculations
\[0=d(v_2)=d(t\sigma^2v_2)=\eta_R(t)\eta_R(\sigma^2v_2)-t\sigma^2v_2=(t+t^pt_1)\eta_R(\sigma^2v_2)-t\sigma^2v_2,\]
\[0=d(t_1)=d(t \sigma^2t_1)=t^pt_1 | \sigma^2t_1 - td(\sigma^2t_1), \text{ and}\]
\[0=d(t_1^p) = d(t^p(\sigma^2t_1)^p)=-t^p d((\sigma^2t_1)^p).\]
The above equations immediately imply that $t d(\sigma^2v_2) \equiv t d(\sigma^2t_1) \equiv 0$ modulo $t^2$ and $t^{p}d((\sigma^2t_1)^p) \equiv 0$ modulo $t^{p+1}$. We may divide by powers of $t$, which are not zero divisors in any term of the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$, and learn that $d(\sigma^2v_2) \equiv d(\sigma^2t_1) \equiv d((\sigma^2t_1)^p) \equiv 0$ modulo $t$. Finally, the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)$ is obtained from the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)$ by killing the element $t$.
\end{remark}
\begin{remark}
Each term $\mathrm{THH}_{\star}(\ell/\mathrm{MU}^{\otimes s+1})/(p,v_1)$ in the mod $(p,v_1)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell)$ is a free $\mathbb{F}_p[\sigma^2v_2]$-module. This means in particular that $(p,v_1,v_2=t \sigma^2v_2)$ is a regular sequence in $\mathrm{TC}^{-}(\ell/\mathrm{MU}^{\otimes s+1})$ and in $\mathrm{TP}(\ell/\mathrm{MU}^{\otimes s+1})$, so we may profitably speak of e.g. the mod $(p,v_1,v_2)$ cobar complex for $\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TP}(\ell)$.
\end{remark}
\subsection{Prismatic cohomology}
In this section we will calculate $\left(\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TP}(\ell) \right)/ (p,v_1,v_2)$ and $\left(\operatorname{gr}^{\star}_{\mathrm{mot}} \mathrm{TC}^{-}(\ell)\right)/(p,v_1,v_2)$. Our strategy will be to use the second filtration, $\operatorname{fil}^{\bullet}_+$, introduced in \cref{SecEvenFiltration}.
We will call the resulting spectral sequences the \emph{algebraic $t$-Bockstein
spectral sequences}, and they have signature:
\[
\pi_*(\operatorname{gr}^\star_\mathrm{mot} \mathrm{THH}(\ell))[t] \Rightarrow
\pi_*(\operatorname{gr}^\star_\mathrm{mot} \mathrm{TC}^{-}(\ell))
\]
\[
\pi_*(\operatorname{gr}^\star_\mathrm{mot} \mathrm{THH}(\ell))[t^{\pm 1}] \Rightarrow
\pi_*(\operatorname{gr}^\star_\mathrm{mot} \mathrm{TP}(\ell))
\]
The map between the first and second spectral sequences which inverts
$t$ converges to the canonical map
$\operatorname{gr}^\star_\mathrm{mot} \mathrm{TC}^{-} \to \operatorname{gr}^\star_\mathrm{mot} \mathrm{TP}$.
The elements $p$ and $v_1$ are detected by the likewise named elements
in $\pi_*(\operatorname{gr}^\star_\mathrm{mot} \mathrm{THH}(\ell))$, and after killing these elements, $v_2$ is detected by $t\mu$. We may therefore choose
appropriately
filtered lifts of $p, v_1,$ and $v_2$ and take the cofiber by each
element in turn to obtain spectral sequences with signatures:
\[
\mathbb{F}_p[t,\mu]/(t\mu) \otimes \Lambda(\lambda_1,\lambda_2)
\Rightarrow
\pi_*(\operatorname{gr}^\star_\mathrm{mot} \mathrm{TC}^{-}(\ell)/(p,v_1,v_2))
\]
\[
\mathbb{F}_p[t^{\pm 1}] \otimes \Lambda(\lambda_1,\lambda_2) \Rightarrow
\pi_*(\operatorname{gr}^\star_\mathrm{mot} \mathrm{TP}(\ell)/(p,v_1,v_2))
\]
We explain the behavior of the second spectral sequence in the
following theorem.
\begin{theorem}
The algebraic $t$-Bockstein spectral sequence for $\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2)$ has $\mathrm{E}_1$-page given by $\mathbb{F}_p[t^{\pm 1}] \otimes \Lambda(\lambda_1,\lambda_2)$. The spectral sequence is determined by multiplicative structure together with the following facts:
\begin{enumerate}
\item The classes $t^{p^2},\lambda_1,$ and $\lambda_2$ are permanent cycles.
\item There is a $d_p$ differential $d_{p}(t)=t^{p+1}\lambda_1$.
\item There is a $d_{p^2}$ differential $d_{p^2}(t^p)=t^{p^2+p} \lambda_2$.
\end{enumerate}
The $\mathrm{E}_{\infty}$-page is $\mathbb{F}_p[t^{\pm p^2}] \otimes \Lambda(\lambda_1,\lambda_2)$.
\end{theorem}
\begin{proof}
In the mod $(p,v_1,v_2)$ cobar complex, we compute
$$\eta_R(t) \equiv t+t^{p}t_1 \text{ modulo }t^{p+2},$$
and then note that $t^{p}t_1=t^{p+1}\sigma^2t_1$. Since $\lambda_1$ is represented by $\sigma^2 t_1$ in the mod $(p,v_1)$ cobar complex for $\mathrm{THH}(\ell)$, we conclude the claimed $d_p$ differential. Taking $p$th powers, we compute
$$\eta_R(t^p) \equiv t^p+t^{p^2}t_1^{p}\text{ modulo }t^{p^2+2p},$$
and then note that $t^{p^2}t_1^p = t^{p^2+p}(\sigma^2t_1)^p$. Since $\lambda_2$ is represented by $(\sigma^2 t_1)^p$ in the mod $(p,v_1)$ cobar complex for $\mathrm{THH}(\ell)$, we conclude the claimed $d_{p^2}$ differential.
It remains to see that $\lambda_1$, $\lambda_2$, and $t^{p^2}$ are permanent cycles. First, we note that
$$\eta_R(t^{p^2}) \equiv t^{p^2} +t^{p^3}t_1^{p^2}=t^{p^2}+t^{p^3+p^2} (\sigma^2t_1)^{p^2} \text{ modulo }t^{p^3+2p^2}.$$
This is the same as $t^{p^2}$ modulo $t^{p^3+p^2}$, and so $t^{p^2}$ must survive to the $\mathrm{E}_{p^3}$-page of the spectral sequence. For sparsity reasons, it follows that $t^{p^2}$ is a permanent cycle. To see that $\lambda_2$ is a permanent cycle, we make the following computation in the mod $(p,v_1,v_2)$ cobar complex:
\begin{eqnarray*}
0&=& d(t_1^p)\\
&=&d(t^p (\sigma^2 t_1)^p) \\
&=&t^{p^2} t_1^p | (\sigma^2t_1)^p-t^{p} d((\sigma^2 t_1)^p) \text{ modulo } t^{p^2+2p} \\
&=& t^{p^2+p} (\sigma^2t_1)^p | (\sigma^2t_1)^p - t^{p} d((\sigma^2 t_1)^p) \text{ modulo } t^{p^2+2p}.
\end{eqnarray*}
In particular, $d((\sigma^2t_1)^p)$ is zero modulo $t^{p^2}$, so $\lambda_2$ survives to the $\mathrm{E}_{p^2}$-page of the spectral sequence. For sparsity reasons it follows that $\lambda_2$ is a permanent cycle.
Finally, the only way $\lambda_1$ could fail to be a permanent cycle is via a differential $d_{p^2}(\lambda_1) \doteq \lambda_1\lambda_2t^{p^2}$. If such a differential occurred, we would learn that $\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2)$ is trivial in degree $2p-1$. However, it is also the $\mathrm{E}_2$-page of a spectral sequence converging to $\mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} \otimes_{\ell} \mathbb{F}_p$, by \Cref{rmk:isE2tate}, and the latter object is nontrivial in degree $2p-1$.
\end{proof}
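\begin{remark}
As a consistency check on degrees at $p=2$: the class $t$ lies in stem $-2$, $\lambda_1$ in stem $3$, and $\lambda_2$ in stem $7$, so the differential $d_2(t)=t^3\lambda_1$ lands in stem $3 \cdot (-2)+3=-3$, and $d_4(t^2)=t^6\lambda_2$ lands in stem $6 \cdot (-2)+7=-5$. Each differential decreases the stem by exactly one, as it must in Adams grading.
\end{remark}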
\begin{comment}
\DeclareSseqGroup\ttower {} {
\foreach \i in {-10,...,10} {
\class(\i+\i,-\i)
}
}
\begin{sseqdata}[name=ellTPBock, classes = fill, class labels = { left = 0.01 cm }, xscale=0.45, yscale=0.5, yrange={-4}{4}, Adams grading]
\ttower(0,0)
\classoptions["1"](0,0)
\classoptions["t"](-2,1)
\classoptions["t^2"](-4,2)
\classoptions["t^3"](-6,3)
\classoptions["t^4"](-8,4)
\classoptions["t^{-1}"](2,-1)
\classoptions["t^{-2}"](4,-2)
\classoptions["t^{-3}"](6,-3)
\classoptions["t^{-4}"](8,-4)
\ttower(3,0)
\classoptions["\lambda_1"](3,0)
\classoptions["\lambda_1t^4"](-5,4)
\classoptions["\lambda_1t^{-4}"](11,-4)
\ttower(7,0)
\classoptions["\lambda_2"](7,0)
\classoptions["\lambda_2t^{-4}"](15,-4)
\classoptions["\lambda_2t^{4}"](-1,4)
\ttower(10,0)
\classoptions["\lambda_1 \lambda_2"](10,0)
\classoptions["\lambda_1\lambda_2t^{-4}"](18,-4)
\classoptions["\lambda_1\lambda_2t^{4}"](2,4)
\d4(12,-6)
\d2(10,-5)
\d2(6,-3)
\d4(4,-2)
\d2(2,-1)
\d2(-2,1)
\d4(-4,2)
\d2(-6,3)
\d4(15,-6)
\d2(17,-5)
\d2(13,-3)
\d4(7,-2)
\d2(9,-1)
\d2(5,1)
\d4(-1,2)
\d2(1,3)
\end{sseqdata}
\tiny
\printpage[name=ellTPBock, grid=chess, title style ={font=\small},title = {The algebraic $t$-Bockstein spectral sequence for $\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TP}(\mathrm{ku})/(2,v_1,v_2)$}]
\normalsize
\end{comment}
\includegraphics[scale=1,trim={4cm 18.5cm 3.5cm 2.5cm},clip]{Figure4.pdf}
\begin{corollary} \label{cor:TCminusBockstein}
The algebraic $t$-Bockstein spectral sequence for $\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1,v_2)$ has $\mathrm{E}_1$-page given by $\mathbb{F}_p[t,\mu]/(t\mu) \otimes \Lambda(\lambda_1,\lambda_2)$. The spectral sequence is determined by multiplicative structure together with the following facts:
\begin{enumerate}
\item The classes $t^{p^2},\lambda_1,\lambda_2,$ and $\mu$ are permanent cycles.
\item There is a $d_p$ differential $d_{p}(t)=t^{p+1}\lambda_1$.
\item There is a $d_{p^2}$ differential $d_{p^2}(t^p)=t^{p^2+p} \lambda_2$.
\end{enumerate}
The $\mathrm{E}_{\infty}$-page is $\mathbb{F}_p[t^{p^2}, \mu]/(t^{p^2} \mu) \otimes \Lambda(\lambda_1,\lambda_2) \oplus \mathbb{F}_p\{t^d \lambda_1, t^{pd} \lambda_2, t^d \lambda_1 \lambda_2, t^{pd} \lambda_1 \lambda_2 \text{ }|\text{ } 0 < d <p\}$.
\end{corollary}
\begin{proof}
The canonical map from the cobar complex for $\mathrm{TC}^{-}(\ell)$ to the cobar complex for $\mathrm{TP}(\ell)$ induces a map of $t$-Bockstein spectral sequences, from which we can read off the claimed differentials and the facts that $t^{p^2}$, $\lambda_1$, and $\lambda_2$ are permanent cycles. No other differentials are possible, by sparsity.
\end{proof}
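\begin{remark}
At $p=2$, the extra classes on the $\mathrm{E}_{\infty}$-page, those of the form $t^d \lambda_1$, $t^{pd} \lambda_2$, $t^d \lambda_1 \lambda_2$, and $t^{pd} \lambda_1 \lambda_2$ with $0<d<p$, are
$$t\lambda_1, \quad t^2\lambda_2, \quad t\lambda_1\lambda_2, \quad t^2\lambda_1\lambda_2,$$
which lie in stems $1$, $3$, $8$, and $6$, respectively.
\end{remark}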
\begin{comment}
\DeclareSseqGroup\posttower {} {
\foreach \i in {-10,...,0} {
\class(\i+\i,-\i)
}
}
\begin{sseqdata}[name=ellTCminusBock, classes = fill, class labels = { below =0.01cm}, xscale=0.43, yscale=1, yrange={0}{4}, Adams grading]
\posttower(0,0)
\classoptions["1"](0,0)
\classoptions["t"](-2,1)
\classoptions["t^2"](-4,2)
\classoptions["t^3"](-6,3)
\classoptions["t^4"](-8,4)
\posttower(3,0)
\classoptions["\lambda_1"](3,0)
\classoptions["\lambda_1t"](1,1)
\classoptions["\lambda_1t^4"](-5,4)
\posttower(7,0)
\classoptions["\lambda_2"](7,0)
\classoptions["\lambda_2t^2"](3,2)
\classoptions["\lambda_2t^{4}"](-1,4)
\posttower(10,0)
\classoptions["\lambda_1 \lambda_2"](10,0)
\classoptions["\lambda_1\lambda_2t"](8,1)
\classoptions["\lambda_1\lambda_2t^{2}"](6,2)
\classoptions["\lambda_1\lambda_2t^{4}"](2,4)
\class["\mu"](8,0)
\class["\mu^2"](16,0)
\class["\lambda_1 \mu"](11,0)
\class["\lambda_2 \mu"](15,0)
\class["\lambda_1 \mu^2"](19,0)
\d2(-2,1)
\d4(-4,2)
\d2(-6,3)
\d2(5,1)
\d4(-1,2)
\d2(1,3)
\end{sseqdata}
\tiny
\printpage[name=ellTCminusBock, grid=chess, title style ={font=\small},title = {Stems $-8$ to $20$ of the algebraic $t$-Bockstein spectral sequence for $\operatorname{gr}^\star_{\mathrm{mot}} \mathrm{TC}^{-}(\mathrm{ku})/(2,v_1,v_2)$}]
\normalsize
\end{comment}
\includegraphics[scale=1,trim={4cm 18.5cm 3.5cm 2.5cm},clip]{Figure5.pdf}
\begin{corollary} \label{cor:ellCanMap}
There are isomorphisms of $\mathbb{F}_p$-vector spaces
\begin{align*}
\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1,v_2) &\cong \mathbb{F}_p[t^{p^2}, \mu]/(t^{p^2}\mu) \otimes \Lambda(\lambda_1,\lambda_2) \\ &\qquad\oplus \mathbb{F}_p\{t^d \lambda_1, t^{pd} \lambda_2, t^d \lambda_1 \lambda_2, t^{pd} \lambda_1 \lambda_2 \text{ }|\text{ } 0 < d <p\},
\end{align*}
\[\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2) \cong \mathbb{F}_p[t^{\pm p^2}] \otimes \Lambda(\lambda_1,\lambda_2).\]
The canonical map
\[\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1,v_2) \to \left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2)\]
sends each class of the form $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2}t^{kp^2}$ to the correspondingly named class in the target, where $\epsilon_1,\epsilon_2\in\{0,1\}$ and $k \ge 0$. It is zero on all other classes.
\end{corollary}
\begin{proof}
The only subtle point is to prove that classes not of the form $\lambda_1^{\epsilon_1} \lambda_2^{\epsilon_2} t^{kp^2}$ all map to zero. The calculations above prove this to be the case after taking $t$-adic associated graded, and all non-zero classes in $\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2)$ are in low enough $t$-adic filtrations that no filtration jumps are possible.
\end{proof}
\subsection{Syntomic cohomology} \label{subsec:finalsyntomic}
By \Cref{cor:ellCanMap}, we understand the canonical map
\[\mathrm{can}:\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1,v_2) \to \left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2).\]
To compute $\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}(\ell) \right) / (p,v_1,v_2)$, it remains to understand the Frobenius map
\[\varphi:\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1,v_2) \to \left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2).\]
For this, we contemplate the following diagram:
$$
\begin{tikzcd}
\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1) \arrow{r}{\varphi} \arrow{d} & \left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1) \arrow{d}\\
\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{THH}(\ell) \right) / (p,v_1) \arrow{r}{\varphi} & \left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} \right) / (p,v_1)
\end{tikzcd}
$$
Since $v_2=0$ in $\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{THH}(\ell) \right) / (p,v_1)$, for example because $\mathrm{gr}^*_{\mathrm{mot}}\mathrm{THH}(\ell)$ is an algebra over $\operatorname{gr}^{\star}_{\mathrm{ev}} \ell$, the diagram factors through a square of the form
$$
\begin{tikzcd}
\left(\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1,v_2) \arrow{r}{\varphi} \arrow{d}{f} & \left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2) \arrow{d}{g}\\
\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{THH}(\ell) \right) / (p,v_1) \arrow{r}{\varphi} & \left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{THH}(\ell)^{\mathrm{t} \mathrm{C}_p} \right) / (p,v_1)
\end{tikzcd}
$$
Here, the map $f$ is an isomorphism from the $0$-line of the spectral sequence of \Cref{cor:TCminusBockstein} onto $\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{THH}(\ell) \right) / (p,v_1)$. It is trivial on classes above the $0$-line. The map $g$ is the isomorphism of \Cref{thm:HodgeTateIso}.
\begin{corollary} \label{cor:ellFrob}
In terms of the isomorphisms
\begin{align*}
\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TC}^{-}(\ell) \right) / (p,v_1,v_2) &\cong \mathbb{F}_p[t^{p^2}, \mu]/(t^{p^2}\mu) \otimes \Lambda(\lambda_1,\lambda_2) \\ &\qquad\oplus \mathbb{F}_p\{t^d \lambda_1, t^{pd} \lambda_2, t^d \lambda_1 \lambda_2, t^{pd} \lambda_1 \lambda_2 \text{ }|\text{ } 0 < d <p\},
\end{align*}
\[\left(\mathrm{gr}^*_{\mathrm{mot}}\mathrm{TP}(\ell) \right) / (p,v_1,v_2) \cong \mathbb{F}_p[t^{\pm p^2}] \otimes \Lambda(\lambda_1,\lambda_2)\]
of \Cref{cor:ellCanMap}, the Frobenius is trivial on classes not of the form $\lambda_1^{\epsilon_1} \lambda_2^{\epsilon_2} \mu^k$ where $k\ge 0$ and $\epsilon_1,\epsilon_2 \in \{0,1\}$. On the other hand, the Frobenius sends each class of the form $\lambda_1^{\epsilon_1} \lambda_2^{\epsilon_2} \mu^k$ to an $\mathbb{F}_p^{\times}$ multiple of the class named $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2} t^{-{p^2}k}$.
of \Cref{cor:ellCanMap}, the Frobenius is trivial on classes not of the form $\lambda_1^{\epsilon_1} \lambda_2^{\epsilon_2} \mu^k$ where $k\ge 0$ and $\epsilon_1,\epsilon_2 \in \{0,1\}$. On the other hand, the Frobenius sends each class of the form $\lambda_1^{\epsilon_1} \lambda_2^{\epsilon_2} \mu^k$ to an $\mathbb{F}_p^{\times}$ multiple of the class named $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2} t^{-{p^2}k}$.
\end{corollary}
\begin{proof}
The map $f$ is already trivial on every class not of the form $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2} \mu^k$. \Cref{thm:ellSegal}, together with the fact that $g$ is an isomorphism, implies that each class of the form $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2} \mu^k$ has non-trivial Frobenius image. The only non-trivial classes in the codomain, in the same degree as $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2} \mu^k$, are $\mathbb{F}_p^{\times}$ multiples of the class named $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2} t^{-{p^2}k}$.
\end{proof}
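\begin{remark}
As a degree check on the above: since $t$ lies in stem $-2$ and $\mu$ in stem $2p^2$, the classes $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2}\mu^k$ and $\lambda_1^{\epsilon_1}\lambda_2^{\epsilon_2}t^{-p^2k}$ indeed lie in the same stem, namely
$$(2p-1)\epsilon_1+(2p^2-1)\epsilon_2+2p^2k.$$
\end{remark}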
We can now deduce the main theorem of this section:
\begin{proof}[Proof of \Cref{thm:ellsyntomic}]
We deduce the first part of \Cref{thm:ellsyntomic} as an immediate consequence of the combination of \Cref{cor:ellCanMap} and \Cref{cor:ellFrob}, with the symbol $\partial$ decorating classes in $\left(\operatorname{gr}^{\star}_{\mathrm{mot}}\mathrm{TC}(\ell)\right)/(p,v_1,v_2)$ that come from the cokernel of $\varphi-\mathrm{can}$. The second part of \Cref{thm:ellsyntomic}, about the $v_2$-Bockstein spectral sequence, follows by the argument given immediately after the theorem statement, which relies on the elementary lemma below applied
to $R=\operatorname{gr}^{\star}_{\mathrm{ev}}(\mathbb{S})/(p,v_1)$
and $M = \operatorname{gr}^{\star}_{\mathrm{mot}}(\mathrm{TC}(\ell))/
(p,v_1)$. We remind the reader that $v_2$ lives in $\pi_{2p^2-2}$ of the $(p^2-1)$st graded piece of
$\operatorname{gr}^{\star}_{\mathrm{ev}}(\mathbb{S})/(p,v_1)$, and that our
convention for displaying spectral sequences is to draw
a term from $\pi_nL^a$ of a graded object $L$ in column
$n$ and row $2a-n$.
\end{proof}
\begin{lemma} Let $R$ be a graded ring, $M$ a graded
$R$-module. If $L^{\star}$ is graded, write $\pi_{n,a}L$ for
$\pi_n(L^a)$. Then the Bockstein spectral sequence for
$\pi_{*,*}\left(\cpl{M}_x\right)$ associated
to an element $x \in \pi_{n,a}(R)$, with $\mathrm{E}_1$-page $\pi_{*,*}(M/x)[x]$, has $d_r$ differentials that send elements of bidegree $(m,b)$ to elements of bidegree $(m+rn-1,b+ra)$.
\end{lemma}
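\begin{remark}
For example, for $x=v_2$, which lives in $\pi_{2p^2-2}$ of the $(p^2-1)$st graded piece, we have $n=2p^2-2$ and $a=p^2-1$, so $d_r$ sends bidegree $(m,b)$ to $(m+r(2p^2-2)-1,\,b+r(p^2-1))$. In the display convention which draws $\pi_nL^a$ in column $n$ and row $2a-n$, the row of the target is
$$2\big(b+r(p^2-1)\big)-\big(m+r(2p^2-2)-1\big)=(2b-m)+1,$$
so every $v_2$-Bockstein differential rises exactly one row in the displayed chart.
\end{remark}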
To finish the paper, we record proofs of the final two theorems mentioned in the introduction.
\begin{corollary} \label{cor:TCisfp}
For any prime number $p$ and $p$-local type $3$ complex $M$, $M_*\mathrm{TC}(\ell)$ is finite.
\end{corollary}
\begin{proof}
By thick subcategory considerations, it suffices to prove this for $M$ equal to a generalized Moore spectrum of the form $\mathbb{S}/(p^i,v_1^j,v_2^k)$, where $j\gg i$ and $ k \gg j$ so that killing $(p^i,v_1^j,v_2^k)$ is a well-defined operation in $\mathrm{MU}_*\mathrm{MU}$-comodules. There is then a spectral sequence converging to $M_*\mathrm{TC}(\ell)$ beginning with $\operatorname{gr}^{\star}_{\mathrm{mot}}(\mathrm{TC}(\ell)) / (p^i,v_1^j,v_2^k)$. The latter object may be resolved by finitely many copies of $\operatorname{gr}^{\star}_{\mathrm{mot}}(\mathrm{TC}(\ell)) / (p,v_1,v_2)$, and so is finite.
\end{proof}
As explained in \cite[\textsection 3]{HahnWilson}, \Cref{cor:TCisfp} implies that the map
\[\mathrm{TC}(\ell)_{(p)} \to L_2^{f}\mathrm{TC}(\ell)_{(p)}\]
is a $\pi_*$-isomorphism in degrees $* \gg 0$, which can be seen as a telescopic analog of the Lichtenbaum--Quillen conjecture. In fact, one can localize at a wedge of Morava $K$-theories rather than a wedge of telescopes, which we record as our final result.
\begin{theorem}
The telescope conjecture is true of $\mathrm{TC}(\ell)$. In other words, the natural map
$$L_2^{f} \mathrm{TC}(\ell) \to L_2 \mathrm{TC}(\ell)$$
is an equivalence.
\end{theorem}
\begin{proof}
We say that the height $2$ telescope conjecture holds for a spectrum $X$ if the natural map $L_2^{f}X \to L_2X$ is an equivalence.
First note that, since $L_2$ and $L_2^{f}$ are smashing localizations, if the height $2$ telescope conjecture holds for a ring $R$ then it also holds for every $R$-module. We will prove the height $2$ telescope conjecture for $\mathrm{TC}^{-}(\ell)$. Since $\mathrm{TP}(\ell)$ is a module over $\mathrm{TC}^{-}(\ell)$, we may conclude the height $2$ telescope conjecture for $\mathrm{TP}(\ell)$ and then, by the Nikolaus--Scholze fiber sequence, for $\mathrm{TC}(\ell)$.
By the work of Mahowald and Miller \cite{MahowaldTelescope,MillerTelescope}, to prove the height $2$ telescope conjecture for $\mathrm{TC}^{-}(\ell)$ it suffices to prove it for $F \otimes \mathrm{TC}^{-}(\ell)$, where $F$ is a $p$-local finite type $2$ complex. For this, consider the equivalence
\[F \otimes\mathrm{TC}^{-}(\ell) \simeq F \otimes \left(\lim_{\Delta} \mathrm{TC}^{-}(\ell/\mathrm{MU}^{\otimes \bullet+1})\right) \simeq \lim_{\Delta} \left( F \otimes \mathrm{TC}^{-}(\ell/\mathrm{MU}^{\otimes \bullet+1})\right),\]
where we may pass $F$ inside of the totalization because it is finite. Now, the $L_2^{f}$ localization of $F \otimes\mathrm{TC}^{-}(\ell)$ is given by $v_2^{-1} F \otimes \mathrm{TC}^{-}(\ell)$. A corollary of our work above is that the motivic spectral sequence for $F_* \mathrm{TC}^{-}(\ell)$ has, when displayed in Adams grading, an eventual
horizontal vanishing line. It follows
(e.g. from \cite[Lemma 2.34]{clausen-mathew}) that
\[v_2^{-1}F \otimes\mathrm{TC}^{-}(\ell) \simeq \lim_{\Delta} \left(v_2^{-1}F \otimes \mathrm{TC}^{-}(\ell/\mathrm{MU}^{\otimes \bullet+1})\right).\]
Each term inside the totalization is an $L_2^{f}$-local $\mathrm{MU}$-module, and hence is $L_2$-local. Thus, the totalization is also $L_2$-local.
\end{proof}
\section{Comparison theorems}
In this section, we will compare the motivic filtrations defined above with filtrations defined previously in special cases.
\begin{notation}
\label{cm--notation}
\begin{enumerate}[leftmargin=*]
\item For $k \to R$ any map of commutative rings, we have the Hochschild--Kostant--Rosenberg (HKR) filtration on Hochschild homology, $\operatorname{fil}^\star_\mathrm{HKR}\mathrm{HH}(R/k)$ (see, for example, \cite[\textsection 2]{BMS}). For a prime number $p$, we denote the $p$-completion of the HKR filtration by $\operatorname{fil}^\star_\mathrm{HKR}\cpl{\mathrm{HH}(R/k)}_p$.
\item For $k \to R$ a quasi-lci map of commutative rings, we have Antieau's Beilinson filtrations on negative and periodic cyclic homology, $\operatorname{fil}^\star_\mathrm{B}\mathrm{HC}^-(R/k)$ and $\operatorname{fil}^\star_\mathrm{B}\mathrm{HP}(R/k)$ \cite{AntieauHP}.
\item For $p$ a prime number and $k \to R$ a $p$-quasi-lci map of $p$-quasisyntomic $p$-complete commutative rings, we have the Bhatt--Morrow--Scholze (BMS) filtrations on $p$-completed negative and periodic cyclic homology, $\operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{HC}^-(R/k)}_p$ and $\operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{HP}(R/k)}_p$ \cite[\textsection 5]{BMS}.
\item For $p$ a prime number and $R$ a $p$-quasisyntomic $p$-complete commutative ring, we have the BMS filtrations on $p$-completed topological Hochschild, topological negative cyclic, topological periodic cyclic, and topological cyclic homology \cite[\textsection 7]{BMS},
\[
\operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{THH}(R)}_p, \quad
\operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{TC}^-(R)}_p, \quad
\operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{TP}(R)}_p, \quad
\operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{TC}(R)}_p.
\]
\item For $R$ any commutative ring, we have the Bhatt--Lurie filtration on topological Hochschild, topological negative cyclic, and topological periodic cyclic homology \cite[\textsection 6.4]{BL},
\[
\operatorname{fil}^\star_\mathrm{BL}\mathrm{THH}(R), \quad
\operatorname{fil}^\star_\mathrm{BL}\mathrm{TC}^-(R), \quad
\operatorname{fil}^\star_\mathrm{BL}\mathrm{TP}(R).
\]
\end{enumerate}
\end{notation}
\begin{theorem}
\label{mo--hkr-comparison}
For $k \to R$ a quasi-lci map of discrete commutative rings, there are natural identifications
\begin{align*}
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{HH}(R/k) \simeq \operatorname{fil}^\star_\mathrm{HKR}\mathrm{HH}(R/k), \\
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{HC}^-(R/k) \simeq \operatorname{fil}^\star_\mathrm{B}\mathrm{HC}^-(R/k), \\
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{HP}(R/k) \simeq \operatorname{fil}^\star_\mathrm{B}\mathrm{HP}(R/k).
\end{align*}
For $p$ a prime number and $k \to R$ a $p$-quasi-lci map of $p$-quasisyntomic $p$-complete commutative rings, there are natural identifications
\begin{align*}
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{HH}(R/k)}_p \simeq \operatorname{fil}^\star_\mathrm{HKR}\cpl{\mathrm{HH}(R/k)}_p, \\
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{HC}^-(R/k)}_p \simeq \operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{HC}^-(R/k)}_p, \\
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{HP}(R/k)}_p \simeq \operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{HP}(R/k)}_p.
\end{align*}
\end{theorem}
\begin{proof}
We will establish the first identification; the rest can be established similarly. Let $k \to R$ be a quasi-lci map of commutative rings. Let $S$ be the polynomial $k$-algebra with generators indexed by the elements of $R$, which comes equipped with a canonical surjection $S \to R$. Then we have
\[
\operatorname{fil}^\star_\mathrm{mot}\mathrm{HH}(R/k) \simeq \lim_{\Delta}(\tau_{\ge 2\star}(\mathrm{HH}(R/S^{\otimes_k \bullet+1}))) \simeq \lim_{\Delta}(\operatorname{fil}^\star_\mathrm{HKR} \mathrm{HH}(R/S^{\otimes_k \bullet+1})),
\]
where the first equivalence follows from \cref{hkr--quasilci-HH-atlas} and the second equivalence follows from the identifications
\[
\operatorname{gr}^i_\mathrm{HKR} \mathrm{HH}(R/S^{\otimes_k \bullet+1}) \simeq \Sigma^i\L\Lambda^i_R(\L^\mathrm{alg}_{R/S^{\otimes_k \bullet+1}})
\]
and the fact that $\L^\mathrm{alg}_{R/S^{\otimes_k n+1}}$ has Tor-amplitude concentrated in degree $1$. It thus suffices to show that the canonical map
\[
\operatorname{fil}^\star_\mathrm{HKR}\mathrm{HH}(R/k) \to \lim_{\Delta}(\operatorname{fil}^\star_\mathrm{HKR} \mathrm{HH}(R/S^{\otimes_k \bullet+1}))
\]
is an equivalence. We can check this after passing to associated graded objects, so it is enough to show that the canonical maps
\[
\L\Lambda^i_R(\L^\mathrm{alg}_{R/k}) \to \lim_{\Delta}(\L\Lambda^i_R(\L^\mathrm{alg}_{R/S^{\otimes_k \bullet+1}}))
\]
are equivalences.
Consider the commutative diagram of cosimplicial $R$-modules
\[
\begin{tikzcd}
R \otimes_S \L^\mathrm{alg}_{S/k} \ar[r] \ar[d] &
\L^\mathrm{alg}_{R/k} \ar[r] \ar[d] &
\L^\mathrm{alg}_{R/S} \ar[d] \\
R \otimes_S \L^\mathrm{alg}_{S/S^{\otimes_k\bullet+1}} \ar[r] & \L^\mathrm{alg}_{R/S^{\otimes_k\bullet+1}} \ar[r] &
\L^\mathrm{alg}_{R/S}
\end{tikzcd}
\]
in which the top row consists of constant cosimplicial objects, each row is a transitivity cofiber sequence, and the map between the two rows is induced by the map of cosimplicial commutative rings $k \to S^{\otimes_k\bullet+1}$. In \cite[Proof of Corollary 2.7]{BhattDR}, it is shown that the map $\L^\mathrm{alg}_{S/k} \to \L^\mathrm{alg}_{S/S^{\otimes_k\bullet+1}}$ is a homotopy equivalence of cosimplicial $S$-modules. Thus, the left-hand vertical map is a homotopy equivalence of cosimplicial $R$-modules, which implies that the same is true of the middle vertical map, since the right-hand vertical map is the identity map. It follows that the map $\L\Lambda^i_R(\L^\mathrm{alg}_{R/k}) \to \L\Lambda^i_R(\L^\mathrm{alg}_{R/S^{\otimes_k \bullet+1}})$ is a homotopy equivalence of cosimplicial $R$-modules for all $i$, implying the desired claim.
\end{proof}
\begin{theorem}
\label{mo--bms-comparison}
For $p$ a prime number and $R$ a $p$-quasisyntomic $p$-complete commutative ring, there are natural identifications
\begin{align*}
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{THH}(R)}_p \simeq \operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{THH}(R)}_p, \\
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{TC}^-(R)}_p \simeq \operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{TC}^-(R)}_p, \\
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{TP}(R)}_p \simeq \operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{TP}(R)}_p, \\
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{TC}(R)}_p \simeq \operatorname{fil}^\star_\mathrm{BMS}\cpl{\mathrm{TC}(R)}_p.
\end{align*}
\end{theorem}
\begin{proof}
\footnote{We thank Bhargav Bhatt for suggesting the argument written here (any error is our own responsibility). Our original argument used \cref{mo--hkr-comparison} to establish quasisyntomic descent for our motivic filtrations.} Let $R'$ be the polynomial ring over $\num{Z}$ on generators indexed by the set underlying $R$, so that we have a natural surjection $R' \twoheadrightarrow R$. Let $S'$ be the ring obtained by adjoining all $p$-power roots of the polynomial generators of $R'$, and form the $p$-completed pushout $S := \cpl{(R \otimes_{R'} S')}_p$. Then $S$ is quasiregular semiperfectoid and the map $R \to S$ is a $p$-quasisyntomic cover, so from \cite[\textsection 7]{BMS} we have that $\cpl{\mathrm{THH}(S)}_p$ is even and that there is a natural equivalence
\[
\operatorname{fil}^\star_\mathrm{BMS} \cpl{\mathrm{THH}(R)}_p \simeq \lim_\Delta(\tau_{\ge 2\star}(\cpl{\mathrm{THH}(S^{\otimes_R\bullet+1})}_p)),
\]
and similarly for $\mathrm{TC}^-$, $\mathrm{TP}$, and $\mathrm{TC}$. It now follows from \cref{ev--magic-criterion} that to prove the claim, it suffices to show that the map $\cpl{\mathrm{THH}(R)}_p \to \cpl{\mathrm{THH}(S)}_p$ is discretely $p$-completely eff.
Let $\S_{R'}$ be the polynomial $\mathbb{E}_\infty$-ring over $\S$ on generators indexed by the set underlying $R$ (i.e. the tensor product over this set of copies of the monoid ring $\S[\mathbb{N}]$) and let $\S_{S'}$ be the $\mathbb{E}_\infty$-ring obtained by adjoining all $p$-power roots of the polynomial generators of $\S_{R'}$ (i.e. a tensor product of copies of the monoid ring $\S[\mathbb{N}[1/p]]$). Consider the commutative diagram
\[
\begin{tikzcd}
\mathrm{THH}(\S_{R'}) \ar[r] \ar[d] &
\S_{R'} \ar[r] \ar[d] &
\S_{S'} \ar[d] \\
\mathrm{THH}(R) \ar[r] &
\mathrm{THH}(R/\S_{R'}) \ar[r] &
\mathrm{THH}(S/\S_{S'}).
\end{tikzcd}
\]
By \cite[Proposition 11.7]{BMS}, the map $\mathrm{THH}(S) \to \mathrm{THH}(S/\S_{S'})$ is an equivalence after $p$-completion. It is thus enough to show that each of the bottom horizontal maps is discretely $p$-completely eff. Since each square in the diagram is in fact a pushout square, it furthermore suffices to show that each of the top horizontal maps is discretely $p$-completely eff. This is clear for the right-hand map, as $\S_{S'}$ is free as a module over $\S_{R'}$. For the left-hand map, we may check after tensoring with $\mathrm{MU}$ (since $\S \to \mathrm{MU}$ is evenly free), and then we may use \cref{hkr--quasismooth-HH-atlas}.
\end{proof}
\begin{theorem}
\label{mo--bl-comparison}
For $R$ a commutative ring with bounded $p$-power torsion for all primes $p$ and with algebraic cotangent complex $\L^{\mathrm{alg}}_R$ having Tor amplitude contained in $[0,1]$, there are natural identifications
\begin{align*}
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{THH}(R) \simeq \operatorname{fil}^\star_\mathrm{BL}\mathrm{THH}(R), \\
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{TC}^-(R) \simeq \operatorname{fil}^\star_\mathrm{BL}\mathrm{TC}^-(R), \\
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{TP}(R) \simeq \operatorname{fil}^\star_\mathrm{BL}\mathrm{TP}(R).
\end{align*}
\end{theorem}
\begin{proof}
Let us just establish the identification for $\mathrm{THH}$. Let $R$ be a commutative ring with bounded $p$-power torsion for all primes $p$ and such that the algebraic cotangent complex $\L^{\mathrm{alg}}_R$ has Tor amplitude contained in $[0,1]$. Then we have a commutative square
\[
\begin{tikzcd}
\operatorname{fil}^\star_\mathrm{mot} \mathrm{THH}(R) \ar[r] \ar[d] &
\prod_p \cpl{(\operatorname{fil}^\star_\mathrm{mot} \mathrm{THH}(R))}_p \ar[d] \\
\operatorname{fil}^\star_\mathrm{mot} \mathrm{HH}(R) \ar[r] &
\prod_p \cpl{(\operatorname{fil}^\star_\mathrm{mot} \mathrm{HH}(R))}_p,
\end{tikzcd}
\]
and we have a defining pullback square
\[
\begin{tikzcd}
\operatorname{fil}^\star_\mathrm{BL} \mathrm{THH}(R) \ar[r] \ar[d] &
\prod_p \operatorname{fil}^\star_\mathrm{BMS} \cpl{\mathrm{THH}(R)}_p \ar[d] \\
\operatorname{fil}^\star_\mathrm{HKR} \mathrm{HH}(R) \ar[r] &
\prod_p \operatorname{fil}^\star_\mathrm{HKR} \cpl{\mathrm{HH}(R)}_p.
\end{tikzcd}
\]
From \cref{mo--hkr-comparison,mot--p-compatibility}, we obtain an identification between the lower arrows of the two squares. Noting that the $p$-completion of $R$ is a $p$-quasisyntomic discrete commutative ring and that $\cpl{\mathrm{THH}(R)}_p \simeq \cpl{\mathrm{THH}(\cpl{R}_p)}_p$, \cref{mo--bms-comparison,mot--p-compatibility} give an identification between the upper right objects of the two squares. The natural transformation $\operatorname{fil}^\star_\mathrm{BMS} \cpl{\mathrm{THH}(-)}_p \to \operatorname{fil}^\star_\mathrm{HKR} \cpl{\mathrm{HH}(-)}_p$ on $p$-quasisyntomic $p$-complete commutative rings is the unique natural map of filtered objects compatible with the canonical map $\cpl{\mathrm{THH}(-)}_p \to \cpl{\mathrm{HH}(-)}_p$, since the filtrations are descended from the double-speed Postnikov filtration for quasiregular semiperfectoid rings. It follows that the right-hand arrows of the squares identify as well. Thus, to finish the proof, it suffices to show that the first square is a pullback diagram.
Consider the extended diagram
\[
\begin{tikzcd}
\operatorname{fil}^\star_\mathrm{mot} \mathrm{THH}(R) \ar[r] \ar[d] &
\prod_p \cpl{(\operatorname{fil}^\star_\mathrm{mot} \mathrm{THH}(R))}_p \ar[d] \\
\operatorname{fil}^\star_\mathrm{mot} \mathrm{HH}(R) \ar[r] \ar[d] &
\prod_p \cpl{(\operatorname{fil}^\star_\mathrm{mot} \mathrm{HH}(R))}_p \ar[d] \\
(\operatorname{fil}^\star_\mathrm{mot} \mathrm{HH}(R)) \otimes \num{Q} \ar[r] &
(\prod_p \cpl{(\operatorname{fil}^\star_\mathrm{mot} \mathrm{HH}(R))}_p) \otimes \num{Q}.
\end{tikzcd}
\]
The lower square is an arithmetic square, hence a pullback square, so it suffices to show that the outer square is a pullback square. In fact, the outer square is also an arithmetic square; this is a consequence of \cref{ev--rationalization,ev--profinite-rationalization}, using the following observations:
\begin{itemize}
\item The canonical maps $\mathrm{THH}(R) \otimes \num{Q} \to \mathrm{HH}(R) \otimes \num{Q}$ and $\cpl{\mathrm{THH}(R)} \otimes \num{Q} \to \cpl{\mathrm{HH}(R)} \otimes \num{Q}$ are equivalences (where $\cpl{(-)}$ denotes profinite completion). For the latter, note that the canonical map $\cpl{\mathrm{THH}(R)} \otimes_{\mathrm{THH}(\num{Z})} \num{Z} \to \cpl{\mathrm{HH}(R)}$ is an equivalence. All of this follows from the finiteness of $\pi_i(\S)$ for $i > 0$ (cf. \cite[Lemma 2.5]{BMS}).
\item By \cref{factor-quasi-lci}, we may choose a connective, even $\mathbb{E}_\infty$-$\mathrm{MU}$-algebra $S$ where $\pi_*(S)$ is a polynomial algebra over $\pi_*(\mathrm{MU})$, together with a map of $\mathbb{E}_\infty$-$\mathrm{MU}$-algebras $S \to R \otimes \mathrm{MU}$ such that the induced map $\pi_*(S) \to \pi_*(R \otimes \mathrm{MU})$ is a quasiregular quotient. Then the map $\mathrm{THH}(R) \to \mathrm{THH}(R \otimes \mathrm{MU}/S)$ is $1$-connective and evenly free (\cref{hkr--quasismooth-HH-atlas,ev--MU-evenly-free}), and both $\mathrm{THH}(R \otimes \mathrm{MU}/S)$ and $\mathrm{HH}(R) \otimes_{\mathrm{THH}(R)} \mathrm{HH}(R \otimes \mathrm{MU}/S) \simeq \mathrm{HH}(R \otimes \mathrm{MU}/\num{Z} \otimes S)$ are even and have even profinite completions (\cref{hkr--quasiregular-HH-even}). \qedhere
\end{itemize}
\end{proof}
\subsection{Defining the filtrations}
The most basic version of the even filtration construction was formulated in \cref{in--fil-ev}. It will be convenient for some of our purposes to set things up here in slightly greater generality, namely for modules over $\mathbb{E}_\infty$-rings rather than simply $\mathbb{E}_\infty$-rings.
\begin{notation}
Let $\mathrm{CAlg}^\mathrm{ev}$ denote the full subcategory of $\mathrm{CAlg}$ spanned by the even $\mathbb{E}_\infty$-rings. Let $\mathrm{Mod}^{\mathrm{ev}}$ denote the full subcategory of $\mathrm{Mod}$ spanned by pairs $(A,M)$ where $A$ is even (but $M$ need not be). We denote by
\begin{align*}
U_{\mathrm{Alg}}&: \mathrm{Mod} \to \mathrm{CAlg}& (A,M)\mapsto A\\
U_{\mathrm{Mod}}&: \mathrm{Mod} \to \mathrm{Spt}& (A,M) \mapsto M
\end{align*}
the two forgetful functors.
\end{notation}
\begin{construction}
\label{ev--fil-ev}
Recall that there is a functor $\tau_{\ge 2\star} : \mathrm{Spt} \to \mathrm{FilSpt}$ sending a spectrum $X$ to its double-speed Postnikov filtration
\[
\cdots \to \tau_{\ge 4}(X) \to \tau_{\ge 2}(X) \to \tau_{\ge 0}(X) \to \tau_{\ge -2}(X) \to \tau_{\ge -4}(X) \to \cdots .
\]
We denote by $(A,M) \mapsto \operatorname{fil}^{\star}_{\mathrm{ev}/A} M$ the functor $\mathrm{Mod} \to \mathrm{FilSpt}_{\kappa_2}$ given by the right Kan extension of the composition
\[
\mathrm{Mod}^\mathrm{ev} \lblto{U_\mathrm{Mod}} \mathrm{Spt} \lblto{\tau_{\ge 2\star}} \mathrm{FilSpt}_{\kappa_2}
\]
along the inclusion $\mathrm{Mod}^\mathrm{ev} \subseteq \mathrm{Mod}$. We refer to this construction as the \emph{even filtration}.
\end{construction}
\begin{remark}
If $A \to B$ is a map of $\mathbb{E}_{\infty}$-rings and $M$ is an $A$-module, then the canonical map $(A,M) \to (B, M\otimes_AB)$ is initial among maps $(A,M) \to (B, N)$ in $\mathrm{Mod}$ lying over $A \to B$. It follows that the even filtration of \cref{ev--fil-ev} is given by the following limit expression:
\[
\operatorname{fil}^{\star}_{\mathrm{ev}/A}M \simeq
\lim_{A \to B, B \in \mathrm{CAlg}^{\mathrm{ev}}} \tau_{\ge 2\star}(M\otimes_AB).
\]
\end{remark}
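\begin{example}
As a basic example (immediate from the limit formula above): if $A$ is itself even, then the identity map of $A$ is an initial object of the category indexing the limit, so that for any $A$-module $M$ there is a canonical equivalence
\[
\operatorname{fil}^{\star}_{\mathrm{ev}/A}M \simeq \tau_{\ge 2\star}(M).
\]
In particular, for even $A$, $\operatorname{fil}^\star_\mathrm{ev} A$ is simply the double-speed Postnikov filtration of $A$.
\end{example}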
\begin{remark}
The functor $\tau_{\ge 2\star} : \mathrm{Spt} \to \mathrm{FilSpt}$ has a canonical lax symmetric monoidal structure, from which the even filtration functor $\operatorname{fil}^\star_{\mathrm{ev}/(-)}$ obtains the same. It follows that, for $A$ an $\mathbb{E}_\infty$-ring and $A'$ an $\mathbb{E}_{\infty}$-$A$-algebra, $\operatorname{fil}^{\star}_{\mathrm{ev}/A}A'$ is canonically a filtered $\mathbb{E}_{\infty}$-ring; in the case $A'=A$, we abbreviate by denoting the filtered $\mathbb{E}_{\infty}$-ring $\operatorname{fil}^\star_{\mathrm{ev}/A}A$ by $\operatorname{fil}^\star_\mathrm{ev} A$. The construction $A \mapsto \operatorname{fil}^\star_\mathrm{ev} A$ gives a functor
\[
\operatorname{fil}^{\star}_\mathrm{ev} : \mathrm{CAlg} \to \mathrm{FilCAlg}_{\kappa_2},
\]
and for $A$ an $\mathbb{E}_\infty$-ring, $\operatorname{fil}^{\star}_{\mathrm{ev}/A}$ lifts to a functor
\[
\operatorname{fil}^{\star}_{\mathrm{ev}/A}: \mathrm{Mod}_A \to \mathrm{FilMod}_{\operatorname{fil}^\star_\mathrm{ev} A}.
\]
\end{remark}
\begin{remark}
In \cref{ev--fil-ev}, we use the category $\mathrm{FilSpt}_{\kappa_2}$ with possibly large objects to ensure that the right Kan extension exists (cf. \cref{ConventionsSection}). However, our attention will be on $\mathbb{E}_{\infty}$-rings $A$ for which we can prove that $\operatorname{fil}^{\star}_{\mathrm{ev}/A} M$ lies in the subcategory $\mathrm{FilSpt} \subset \mathrm{FilSpt}_{\kappa_2}$ for any $A$-module $M$. This is in particular true of any even $A$.
\end{remark}
\begin{remark}
For any spectrum $X$, the double speed Postnikov filtration $\tau_{\ge 2\star}(X)$ is complete (that is, $\lim_{n \to \infty} \tau_{\ge 2n}(X) \simeq 0$). The collection of complete filtered spectra is closed under limits, so for any $\mathbb{E}_{\infty}$-ring $A$ and $A$-module $M$, the even filtration $\operatorname{fil}^\star_{\mathrm{ev}/A}M$ is complete.
\end{remark}
\begin{variant}
\label{ev--fil-ev-p}
Fix a prime number $p$. Let $\mathrm{Mod}_p$ denote the full subcategory of $\mathrm{Mod}$ spanned by the pairs $(A,M)$ where $A$ and $M$ are $p$-complete and let $\mathrm{Mod}_p^{\mathrm{ev}} := \mathrm{Mod}_p \cap \mathrm{Mod}^{\mathrm{ev}}$. Define
\begin{align*}
\operatorname{fil}^\star_{\mathrm{ev}/(-),p} &: \mathrm{Mod}_p \to \mathrm{FilSpt}_{\kappa_2}
\end{align*}
to be the right Kan extension of
\begin{align*}
\tau_{\ge 2*}(U_{\mathrm{Mod}}) &: \mathrm{Mod}_p^\mathrm{ev} \to \mathrm{FilSpt}
\end{align*}
along the inclusion $\mathrm{Mod}_p^{\mathrm{ev}} \subseteq \mathrm{Mod}_p$. We refer to this as the \emph{$p$-complete even filtration}, and again set $\operatorname{fil}^{\star}_{\mathrm{ev},p}A := \operatorname{fil}^{\star}_{\mathrm{ev}/A,p}A$ for $A$ a $p$-complete $\mathbb{E}_\infty$-ring.
\end{variant}
\begin{variant}
Say that an $\mathbb{E}_\infty$-ring with ${\mathrm{S}^1}$-action is \emph{even} if its underlying $\mathbb{E}_\infty$-ring is even. Then $(\mathrm{Mod}^\mathrm{ev})^{{\mathrm{BS}^1}}$ (resp. $(\mathrm{Mod}^\mathrm{ev}_p)^{\mathrm{BS}^1}$) is the full subcategory of $\mathrm{Mod}^{\mathrm{BS}^1}$ (resp. $\mathrm{Mod}_p^{\mathrm{BS}^1}$) spanned by pairs
$(A,M)$ where $A$ is even. Define
\begin{align*}
&\operatorname{fil}^\star_{\mathrm{ev}/(-),\mathrm{h}{\mathrm{S}^1}} : \mathrm{Mod}^{\mathrm{BS}^1} \to \mathrm{FilSpt}_{\kappa_2}, &\operatorname{fil}^\star_{\mathrm{ev}/(-),p,\mathrm{h}{\mathrm{S}^1}} : \mathrm{Mod}_p^{\mathrm{BS}^1} \to \mathrm{FilSpt}_{\kappa_2}, \\
&\operatorname{fil}^\star_{\mathrm{ev}/(-),\mathrm{t}{\mathrm{S}^1}} : \mathrm{Mod}^{\mathrm{BS}^1} \to \mathrm{FilSpt}_{\kappa_2}, &\operatorname{fil}^\star_{\mathrm{ev}/(-),p,\mathrm{t}{\mathrm{S}^1}} : \mathrm{Mod}_p^{\mathrm{BS}^1} \to \mathrm{FilSpt}_{\kappa_2},\\
&\operatorname{fil}^{\filledsquare}_{+}\operatorname{fil}^\star_{\mathrm{ev}/(-),\mathrm{h}{\mathrm{S}^1}} : \mathrm{Mod}^{\mathrm{BS}^1} \to \mathrm{BiFilSpt}_{\kappa_2}, &\operatorname{fil}^{\filledsquare}_+\operatorname{fil}^\star_{\mathrm{ev}/(-),p,\mathrm{h}{\mathrm{S}^1}} : \mathrm{Mod}_p^{\mathrm{BS}^1} \to \mathrm{BiFilSpt}_{\kappa_2}, \\
&\operatorname{fil}^{\filledsquare}_+\operatorname{fil}^\star_{\mathrm{ev}/(-),\mathrm{t}{\mathrm{S}^1}} : \mathrm{Mod}^{\mathrm{BS}^1} \to \mathrm{BiFilSpt}_{\kappa_2}, &\operatorname{fil}^{\filledsquare}_+\operatorname{fil}^\star_{\mathrm{ev}/(-),p,\mathrm{t}{\mathrm{S}^1}} : \mathrm{Mod}_p^{\mathrm{BS}^1} \to \mathrm{BiFilSpt}_{\kappa_2},
\end{align*}
to be the right Kan extensions of
\begin{align*}
& \tau_{\ge 2*}(U_{\mathrm{Mod}}^{\mathrm{h}{\mathrm{S}^1}}) : (\mathrm{Mod}^\mathrm{ev})^{\mathrm{BS}^1} \to \mathrm{FilSpt}, & \tau_{\ge 2*}(U_{\mathrm{Mod}}^{\mathrm{h}{\mathrm{S}^1}}) : (\mathrm{Mod}^\mathrm{ev}_p)^{\mathrm{BS}^1} \to \mathrm{FilSpt}, \\
& \tau_{\ge 2*}(U_{\mathrm{Mod}}^{\mathrm{t}{\mathrm{S}^1}}): (\mathrm{Mod}^\mathrm{ev})^{\mathrm{BS}^1} \to \mathrm{FilSpt}, & \tau_{\ge 2*}\cpl{(U_{\mathrm{Mod}}^{\mathrm{t}{\mathrm{S}^1}})}_p : (\mathrm{Mod}^\mathrm{ev}_p)^{\mathrm{BS}^1} \to \mathrm{FilSpt},\\
& \tau_{\ge 2*}(\tau_{\ge \filledsquare}(U_{\mathrm{Mod}})^{\mathrm{h}{\mathrm{S}^1}}) : (\mathrm{Mod}^\mathrm{ev})^{\mathrm{BS}^1} \to \mathrm{BiFilSpt}, & \tau_{\ge 2*}(\tau_{\ge \filledsquare}(U_{\mathrm{Mod}})^{\mathrm{h}{\mathrm{S}^1}}) : (\mathrm{Mod}^\mathrm{ev}_p)^{\mathrm{BS}^1} \to \mathrm{BiFilSpt}, \\
& \tau_{\ge 2*}(\tau_{\ge \filledsquare}(U_{\mathrm{Mod}})^{\mathrm{t}{\mathrm{S}^1}}): (\mathrm{Mod}^\mathrm{ev})^{\mathrm{BS}^1} \to \mathrm{BiFilSpt}, & \tau_{\ge 2*}\cpl{(\tau_{\ge \filledsquare}(U_{\mathrm{Mod}})^{\mathrm{t}{\mathrm{S}^1}})}_p : (\mathrm{Mod}^\mathrm{ev}_p)^{\mathrm{BS}^1} \to \mathrm{BiFilSpt},
\end{align*}
respectively, along the inclusions $(\mathrm{Mod}^\mathrm{ev})^{\mathrm{BS}^1} \subseteq \mathrm{Mod}^{\mathrm{BS}^1}$ and $(\mathrm{Mod}^\mathrm{ev}_p)^{\mathrm{BS}^1} \subseteq \mathrm{Mod}_p^{\mathrm{BS}^1}$. Again, in the case $M=A$ we omit the subscript indicating the ring.
\end{variant}
\begin{variant}
Say that a $p$-typical cyclotomic $\mathbb{E}_\infty$-ring is \emph{even} if its underlying $\mathbb{E}_\infty$-ring is even. Let $\mathrm{CycMod}^\mathrm{ev}_p$ denote the full subcategory of $\mathrm{CycMod}_p$ spanned by the pairs $(A,M)$ where $A$ is even. We define
\[
\operatorname{fil}^\star_{\mathrm{ev}/(-),p,\mathrm{TC}}(-) : \mathrm{CycMod}_p \to \mathrm{FilSpt}_{\kappa_2},
\]
to be the right Kan extension of
\[
\operatorname*{fib}(\varphi-\mathrm{can} : \tau_{\ge 2\star}\cpl{(U_{\mathrm{Mod}}^{\mathrm{h}{\mathrm{S}^1}})}_p \to \tau_{\ge 2\star}\cpl{(U_{\mathrm{Mod}}^{\mathrm{t}{\mathrm{S}^1}})}_p) : \mathrm{CycMod}_p^\mathrm{ev} \to \mathrm{FilSpt},
\]
along the inclusion $\mathrm{CycMod}_p^\mathrm{ev} \subseteq \mathrm{CycMod}_p$. Yet again, in the case $M=A$ we omit the subscript indicating the ring.
\end{variant}
\subsection{Descent properties of the filtrations}
The key to computing these even filtrations in our cases of interest is a simple flat descent property, which we formulate and prove in this subsection.
\begin{definition}
\label{ev--pcpl-flat}
Following \cite{BMS,BL}, for a fixed prime number $p$, we say that a map $A \to B$ of (discrete) commutative rings is \emph{$p$-completely flat} if $(A/p) \otimes^{\L}_{A} B$ is a flat $A/p$-module concentrated in homological degree $0$. The map is furthermore said to be \emph{$p$-completely faithfully flat} if $(A/p) \otimes^{\L}_{A} B$ is a faithfully flat $A/p$-module.
We introduce one further definition here: we say that a map $A \to B$ of commutative rings is \emph{discretely $p$-completely faithfully flat} if, for every commutative ring $C$ and map $A \to C$, the $p$-completed pushout $\cpl{(B \otimes^{\L}_A C)}_p$ is discrete and $p$-completely faithfully flat over $C$.
\end{definition}
\begin{remark}
It follows from \cite[Proposition 2.7.3.2(c)]{sag} that a map $A \to B$ of discrete commutative rings is $p$-completely faithfully flat if and only if the induced map $A\otimes^{\L}_{\num{Z}}\num{Z}/p \to B \otimes^{\L}_{\num{Z}} \num{Z}/p$ is faithfully flat in the sense of \cite[Definition D.4.4.1]{sag}. The reader may thus replace the underived construction $A/p$ with the derived construction $A\otimes^{\L}_{\num{Z}}\num{Z}/p$ in the above definition.
\end{remark}
\begin{example}
\label{ev--pcpl-free}
Let $A \to B$ be a map of $p$-complete commutative rings. Suppose that there is an $A$-module $M$ which is free on a nonempty set of generators and an isomorphism of $A$-modules $B \simeq \cpl{M}_p$. Then $B$ is discretely $p$-completely faithfully flat over $A$.
\end{example}
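\begin{example}
A concrete instance of the preceding example, of the shape appearing in the proof of \cref{mo--bms-comparison}: take $A = \cpl{\num{Z}[x]}_p$ and $B = \cpl{\num{Z}[x^{1/p^{\infty}}]}_p$. Since
\[
\num{Z}[x^{1/p^{\infty}}] \cong \bigoplus_{a \in \num{N}[1/p] \cap [0,1)} x^{a}\cdot\num{Z}[x]
\]
as a $\num{Z}[x]$-module, $B$ is the $p$-completion of a free $A$-module on a nonempty set of generators, so $A \to B$ is discretely $p$-completely faithfully flat. The same reasoning applies upon adjoining $p$-power roots of any set of polynomial variables.
\end{example}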
We now generalize and define the relevant notions of flatness in the setting of even $\mathbb{E}_\infty$-rings.
\begin{definition}
\label{ev--flat}
We say that a map of even $\mathbb{E}_\infty$-rings $f : A \to B$ is \emph{faithfully flat} (resp. \emph{$p$-completely faithfully flat}) if the induced map of (ungraded) commutative rings $\pi_*(f) : \pi_*(A) \to \pi_*(B)$ is faithfully flat (resp. $p$-completely faithfully flat).
We say that a map of even $\mathbb{E}_\infty$-rings $f : A \to B$ is \emph{discretely $p$-completely faithfully flat} if the induced map of (ungraded) commutative rings $\pi_*(f) : \pi_*(A) \to \pi_*(B)$ is discretely $p$-completely faithfully flat.
\end{definition}
\begin{warning}
The notion of faithful flatness in \cref{ev--flat} is distinct from that in \cite[Definition D.4.4.1]{sag}. It is the former that is used throughout this paper (except where explicitly stated otherwise).
\end{warning}
\begin{proposition}
\label{ev--pcpl-flat-homotopy}
Let $A \to B$ be a discretely $p$-completely faithfully flat map of even $\mathbb{E}_\infty$-rings. Then for any even $\mathbb{E}_\infty$-ring $C$ and map $A \to C$, the $p$-completed pushout $\cpl{(B \otimes_A C)}_p$ is even, and has homotopy groups given by $\cpl{(\pi_*(B) \otimes^{\L}_{\pi_*(A)} \pi_*(C))}_p$.
\end{proposition}
\begin{proof}
The filtered object $\cpl{(\tau_{\ge \star}(B) \otimes_{\tau_{\ge \star}(A)} \tau_{\ge \star}(C))}_p$ is complete, with underlying object $\cpl{(B \otimes_A C)}_p$ and associated graded object $\Sigma^{*}\cpl{(\pi_*(B) \otimes^{\L}_{\pi_*(A)} \pi_*(C))}_p$. The spectral sequence associated to this filtered object collapses to give the desired result.
\end{proof}
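\begin{remark}
To spell out the collapse in the preceding proof: the flatness hypothesis forces $\cpl{(\pi_*(B) \otimes^{\L}_{\pi_*(A)} \pi_*(C))}_p$ to be discrete, so the homotopy of the associated graded object $\Sigma^{*}\cpl{(\pi_*(B) \otimes^{\L}_{\pi_*(A)} \pi_*(C))}_p$ is concentrated on a single line. The spectral sequence converges since the filtration is complete, and the concentration of its input leaves no room for nonzero differentials or extension problems.
\end{remark}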
As usual, the above notions of flatness give rise to Grothendieck topologies.
\begin{definition}
\label{ev--flat-topology-def}
We say that a sieve on $(A,M) \in (\mathrm{Mod}^{\mathrm{ev}})^{\mathrm{op}}$ is a \emph{flat covering sieve} (resp. \emph{$p$-completely flat covering sieve}) if it contains a finite collection of maps $\{(A,M)\to (B_i,M_i)\}_{1\le i \le n}$ such that the map $A \to \prod_i B_i$ is faithfully flat (resp. discretely $p$-completely faithfully flat) and each of the morphisms $(A,M) \to (B_i,M_i)$ induces an equivalence $M\otimes_AB_i \simeq M_i$ (resp. $\cpl{(M\otimes_A B_i)}_p \simeq M_i$).
\end{definition}
\begin{proposition}
\label{ev--flat-topology-prop}
The flat covering families of \cref{ev--flat-topology-def} define a Grothendieck topology on $(\mathrm{Mod}^\mathrm{ev})^\mathrm{op}$ and the $p$-completely flat covering families define a Grothendieck topology on $(\mathrm{Mod}^\mathrm{ev}_p)^\mathrm{op}$.
For a category $\mathcal{C}$ admitting small limits, a functor $F: \mathrm{Mod}^\mathrm{ev} \to \mathcal{C}$ is a sheaf for the flat topology if and only if the following conditions are satisfied:
\begin{enumerate}
\item $F$ preserves finite products.
\item For every $A \to B$ which is faithfully flat,
the map
\[
F(A,M) \to \lim_{\Delta} F(B^{\otimes_A\bullet+1},M\otimes_AB^{\otimes_A\bullet+1})
\]
is an equivalence.
\end{enumerate}
The analogous claim holds for the discretely $p$-completely flat topology.
\end{proposition}
\begin{proof}
The proofs of \cite[A.3.2.1, A.3.3.1]{sag} carry over verbatim (the only pushouts required to exist are those along faithfully flat or discretely $p$-completely faithfully flat maps, which do indeed exist).
\end{proof}
\begin{definition}
\label{de--flat-topology-name}
We refer to the Grothendieck topologies of \cref{ev--flat-topology-prop} as the \emph{flat topology} on $\mathrm{Mod}^\mathrm{ev}$ and the \emph{$p$-completely flat topology} on $\mathrm{Mod}^\mathrm{ev}_p$.
Since pushouts in $(\mathrm{Mod}^\mathrm{ev})^{\mathrm{BS}^1}$ and $\mathrm{CycMod}^\mathrm{ev}_p$ are
computed in $\mathrm{Mod}^\mathrm{ev}$ and $\mathrm{Mod}_p^\mathrm{ev}$, respectively, the above induce topologies on $\smash{(\mathrm{Mod}^\mathrm{ev})^{\mathrm{BS}^1}}$ and $\mathrm{CycMod}^\mathrm{ev}_p$, which we call by the same names.
\end{definition}
We now turn to studying the descent properties of our various filtrations.
We will need the following lemma, which is well-known.
\begin{lemma}
Let $A$ be an even $\mathbb{E}_{\infty}$-ring with ${\mathrm{S}^1}$-action. Then, for any ${\mathrm{S}^1}$-equivariant $A$-module $M$, the canonical maps
\[
M^{\mathrm{h}{\mathrm{S}^1}}\otimes_{A^{\mathrm{h}{\mathrm{S}^1}}}A \to M,
\]
\[
M^{\mathrm{h}{\mathrm{S}^1}}\otimes_{A^{\mathrm{h}{\mathrm{S}^1}}}A^{\mathrm{t}{\mathrm{S}^1}} \to M^{\mathrm{t}{\mathrm{S}^1}}
\]
are equivalences. Moreover, $M^{\mathrm{h}{\mathrm{S}^1}}$ is $A$-complete as an $A^{\mathrm{h}{\mathrm{S}^1}}$-module.
\end{lemma}
\begin{proof}
We argue as in \cite[Lemma IV.4.12]{NikolausScholze}. We may replace $A$ by $\tau_{\ge 0}A$ and reduce to the case when $A$ is connective. Since $A$ is even, the homotopy fixed point spectral sequence collapses and there is a non-canonical isomorphism $\pi_*(A^{\mathrm{h}{\mathrm{S}^1}}) \cong \pi_*(A)\llbracket t \rrbracket$. It follows that $A \simeq A^{\mathrm{h}{\mathrm{S}^1}}/t$ is a perfect $A^{\mathrm{h}{\mathrm{S}^1}}$-module and that $(-)\otimes_{A^{\mathrm{h}{\mathrm{S}^1}}}A$ commutes with all limits and colimits. Using the equivalences
\[
M^{\mathrm{h}{\mathrm{S}^1}} \simeq \operatorname*{colim} (\tau_{\ge n}M)^{\mathrm{h}{\mathrm{S}^1}}
\]
\[
M^{\mathrm{h}{\mathrm{S}^1}} \simeq \lim (\tau_{\le m}M)^{\mathrm{h}{\mathrm{S}^1}}
\]
we may reduce the first claim to the case where $M$ is discrete, in which case the claim follows by direct calculation.
To prove the second claim it suffices by \cite[Theorem I.4.1(iii)]{NikolausScholze} to prove that the fiber of $M^{\mathrm{h}{\mathrm{S}^1}} \to M^{\mathrm{h}{\mathrm{S}^1}}[t^{-1}]$ commutes with all colimits in the variable $M$. The fiber is given, up to suspension, by the colimit of the functors $M \mapsto M^{\mathrm{h}{\mathrm{S}^1}}/t^n$. When $n=1$ this functor commutes with colimits by the first claim, and by induction we see that each functor commutes with colimits. The result follows.
Finally we show that $M^{\mathrm{h}{\mathrm{S}^1}}$ is $A$-complete or, equivalently, is $t$-complete. Since $M^{\mathrm{h}{\mathrm{S}^1}} \simeq \lim (\tau_{\le n}M)^{\mathrm{h}{\mathrm{S}^1}}$ we may reduce to the case where $M$ is bounded above. Then the terms $\Sigma^{-2n}M^{\mathrm{h}{\mathrm{S}^1}}$ become increasingly coconnective and hence
\[
\lim(\cdots \Sigma^{-2n}M^{\mathrm{h}{\mathrm{S}^1}} \stackrel{t}{\to} \Sigma^{-2n+2}M^{\mathrm{h}{\mathrm{S}^1}}
\to \cdots \to M^{\mathrm{h}{\mathrm{S}^1}}) \simeq 0. \qedhere
\]
\end{proof}
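\begin{remark}
For the reader's convenience, we recall why the homotopy fixed point spectral sequence collapses in the preceding proof (where $A$ may be assumed connective). The spectral sequence has the form
\[
E_2^{s,t} = H^s(\mathrm{BS}^1;\pi_t(A)) \Rightarrow \pi_{t-s}(A^{\mathrm{h}{\mathrm{S}^1}}),
\]
and $H^*(\mathrm{BS}^1;\num{Z})$ is a polynomial ring on a class in cohomological degree $2$. When $A$ is even, the $E_2$-page is therefore concentrated in bidegrees $(s,t)$ with $s$ and $t$ both even, while the $d_r$-differential changes $(s,t)$ by $(r,r-1)$; hence no differential can be nonzero, and one obtains the isomorphism $\pi_*(A^{\mathrm{h}{\mathrm{S}^1}}) \cong \pi_*(A)\llbracket t \rrbracket$, non-canonical in that it depends on a choice of polynomial generator.
\end{remark}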
\begin{lemma}
\begin{enumerate}[leftmargin=*]
\item The functor $\pi_*(U_{\mathrm{Mod}}): \mathrm{Mod}^\mathrm{ev} \to \mathrm{GrSpt}$ is a sheaf for the flat topology and restricts to a sheaf for the $p$-completely flat topology on $\mathrm{Mod}^\mathrm{ev}_p$.
\item The functors $\pi_*(U^{\mathrm{h}{\mathrm{S}^1}}_\mathrm{Mod}): (\mathrm{Mod}^\mathrm{ev})^{{\mathrm{BS}^1}} \to \mathrm{GrSpt}$ and $\pi_*(U^{\mathrm{t}{\mathrm{S}^1}}_\mathrm{Mod}) : (\mathrm{Mod}^\mathrm{ev})^{{\mathrm{BS}^1}} \to \mathrm{GrSpt}$ are sheaves for the flat topology and restrict to sheaves for the $p$-completely flat topology on $(\mathrm{Mod}^\mathrm{ev}_p)^{{\mathrm{BS}^1}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We will prove the $p$-complete statements; the integral statements can be addressed similarly.
\begin{enumerate}[leftmargin=*]
\item If $A \to B$ is discretely $p$-completely faithfully flat, then we need to prove that
\[
\pi_*M \to
\lim_{\Delta}(\pi_*\cpl{(M\otimes_A B^{\otimes_A\bullet+1})}_p)
\]
is an equivalence. Equivalently, using \cref{ev--pcpl-flat-homotopy}, that
\[
\pi_*M \to \lim_{\Delta}(\pi_*\cpl{(M\otimes_A B^{\otimes_A\bullet+1})}_p)
= \lim_{\Delta}\cpl{(\pi_*(M)\otimes^{\L}_{\pi_*(A)} (\pi_*(B))^{\otimes^{\L}_{\pi_*(A)}\bullet+1})}_p
\]
is an equivalence. Both sides being $p$-complete, it suffices to show this map is an equivalence after (derived) base change along $\num{Z} \to \num{Z}/p$.
But
\[
\pi_*(A)\otimes^{\L}_{\num{Z}}\num{Z}/p \to
\pi_*(B)\otimes^{\L}_{\num{Z}}\num{Z}/p
\]
is faithfully flat in the sense of \cite[Definition D.4.4.1]{sag} (after forgetting the gradings). The claim now follows from \cite[Theorem D.6.3.1]{sag}.
\item If $A \to B$ is discretely $p$-completely faithfully flat, then we need to prove that
\[
\pi_*(M^{\mathrm{h}{\mathrm{S}^1}}) \to
\lim_{\Delta}(\pi_*(\cpl{((M\otimes_A B^{\otimes_A\bullet+1})^{\mathrm{h}{\mathrm{S}^1}})}_p))
\]
is an equivalence. Since $A$ is even, it is complex-orientable and there is a non-canonical isomorphism $\pi_*(A^{\mathrm{h}{\mathrm{S}^1}})\cong \pi_*(A)\llbracket t \rrbracket$. Arguing as in \cref{ev--pcpl-flat-homotopy}, we have that
\[
\pi_*(\cpl{((M\otimes_A B^{\otimes_A\bullet+1})^{\mathrm{h}{\mathrm{S}^1}})}_p)
\simeq
\cpl{(\pi_*(M^{\mathrm{h}{\mathrm{S}^1}})\otimes^{\L}_{\pi_*(A^{\mathrm{h}{\mathrm{S}^1}})}
\pi_*(B^{\mathrm{h}{\mathrm{S}^1}})^{\otimes^{\L}_{\pi_*(A^{\mathrm{h}{\mathrm{S}^1}})}\bullet+1})}_{p,t}.
\]
As $M^{\mathrm{h}{\mathrm{S}^1}}$ is both (derived) $p$-complete and $t$-complete by the previous lemma,
it suffices to prove the claim after (derived) base change along $\num{Z}[t] \to \num{Z}/p$. But for any even ring $R$ we have
\[
\pi_*(R^{\mathrm{h}{\mathrm{S}^1}})\otimes^{\L}_{\num{Z}[t]}\num{Z}/p
\simeq
\pi_*(R)\otimes^{\L}_{\num{Z}}\num{Z}/p.
\]
Again, since $\pi_*(A)\otimes^{\L}_{\num{Z}}\num{Z}/p \to \pi_*(B)\otimes^{\L}_{\num{Z}}\num{Z}/p$ is faithfully flat in the sense of \cite[Definition D.4.4.1]{sag}, the result follows from \cite[Theorem D.6.3.1]{sag}. Since
\[
\pi_*(M^{\mathrm{h}{\mathrm{S}^1}})[t^{-1}] \simeq \pi_*(M^{\mathrm{t}{\mathrm{S}^1}}),
\]
by the previous lemma, we deduce the second claim in (2). \qedhere
\end{enumerate}
\end{proof}
\begin{proposition}
\label{ev--even-filtration-descent}
\begin{enumerate}[leftmargin=*]
\item \label{ev--even-filtration-descent--plain}
The functor $\tau_{\ge 2\star}(U_{\mathrm{Mod}}) : \mathrm{Mod}^\mathrm{ev} \to \mathrm{FilSpt}$ is a sheaf for the flat topology and restricts to a sheaf for the $p$-completely flat topology on $\mathrm{Mod}^\mathrm{ev}_p$.
\item \label{ev--even-filtration-descent--ht}
The functors $\tau_{\ge 2\star}(U_{\mathrm{Mod}}^{\mathrm{h}{\mathrm{S}^1}}), \tau_{\ge 2\star}(U_{\mathrm{Mod}}^{\mathrm{t}{\mathrm{S}^1}}) : (\mathrm{Mod}^\mathrm{ev})^{{\mathrm{BS}^1}} \to \mathrm{FilSpt}$ are sheaves for the flat topology and restrict to sheaves for the $p$-completely flat topology on $(\mathrm{Mod}^\mathrm{ev}_p)^{{\mathrm{BS}^1}}$.
\item \label{ev--even-filtration-descent--nygaard}
The functors $\tau_{\ge 2\star}(\tau_{\ge \filledsquare}(U_{\mathrm{Mod}})^{\mathrm{h}{\mathrm{S}^1}}), \tau_{\ge 2\star}(\tau_{\ge \filledsquare}(U_{\mathrm{Mod}})^{\mathrm{t}{\mathrm{S}^1}}) : (\mathrm{Mod}^\mathrm{ev})^{{\mathrm{BS}^1}} \to \mathrm{BiFilSpt}$ are sheaves for the flat topology when restricted to the full subcategory of $(A,M)$ where $A$ is connective, and restrict to sheaves for the $p$-completely flat topology on the analogous subcategory of $(\mathrm{Mod}^\mathrm{ev}_p)^{{\mathrm{BS}^1}}$.
\item \label{ev--even-filtration-descent--tc} The functor
\[
\operatorname*{fib}(\varphi-\mathrm{can} : \tau_{\ge 2\star}\cpl{(U_{\mathrm{Mod}}^{\mathrm{h}{\mathrm{S}^1}})}_p \to \tau_{\ge 2\star}\cpl{(U_{\mathrm{Mod}}^{\mathrm{t}{\mathrm{S}^1}})}_p) : \mathrm{CycMod}_p^\mathrm{ev} \to \mathrm{FilSpt}
\]
is a sheaf for the $p$-completely flat topology.
\end{enumerate}
\end{proposition}
\begin{proof}
Claim (3) follows from claim (2) by replacing a module with its $\filledsquare$-connective cover (which remains a module since we assumed the ring was connective in claim (3)). Claim (4) follows from claim (2). For claims (1) and (2), it suffices to prove the analogous statement with $\tau_{\ge \star}$ in place of $\tau_{\ge 2\star}$, since the functor doubling the speed of a filtration preserves limits. As these filtrations are complete, it suffices to check the claim after passage to associated graded. But then the claims follow from the preceding lemma.
\end{proof}
The upshot of \Cref{ev--even-filtration-descent} is a collection of descent statements for the even filtration and its variants, which we now enumerate to end this subsection.
\begin{definition}
\label{de--evenly-flat}
A map of $\mathbb{E}_\infty$-rings $A \to B$ is \emph{eff} (evenly faithfully flat) if for any even $\mathbb{E}_\infty$-ring $C$ and map of $\mathbb{E}_\infty$-rings $A \to C$, the pushout $B \otimes_A C$ is even and faithfully flat over $C$.
A map of $\mathbb{E}_\infty$-rings $A \to B$ is \emph{discretely $p$-completely eff} if for any even $\mathbb{E}_\infty$-ring $C$ and map of $\mathbb{E}_\infty$-rings $A \to C$, the $p$-completed pushout $\cpl{(B \otimes_A C)}_p$ is even and discretely $p$-completely faithfully flat over $C$.
A map of $\mathbb{E}_\infty$-rings with ${\mathrm{S}^1}$-action or a map of $p$-typical cyclotomic $\mathbb{E}_\infty$-rings is said to be eff (resp. discretely $p$-completely eff) if the underlying map of $\mathbb{E}_\infty$-rings is.
\end{definition}
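\begin{example}
One readily checked source of examples: if $A \to B$ is a map of $\mathbb{E}_\infty$-rings such that $B \simeq \bigoplus_{i \in I} \Sigma^{2n_i} A$ as an $A$-module, for some nonempty index set $I$ and integers $n_i$, then $A \to B$ is eff. Indeed, for any even $\mathbb{E}_\infty$-ring $C$ and map $A \to C$, we have $B \otimes_A C \simeq \bigoplus_{i \in I} \Sigma^{2n_i} C$, which is even, and whose homotopy is free on a nonempty basis, hence faithfully flat, as an (ungraded) $\pi_*(C)$-module. This is an $\mathbb{E}_\infty$-ring analogue of \cref{ev--pcpl-free}.
\end{example}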
\begin{corollary}
\label{ev--magic-criterion}
\begin{enumerate}[leftmargin=*]
\item For $A \to B$ an eff map of $\mathbb{E}_\infty$-rings and
$M$ an $A$-module, the canonical map
\[
\operatorname{fil}^\star_{\mathrm{ev}/A}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/ B^{\otimes_A\bullet+1}}(M\otimes_AB^{\otimes_A\bullet+1}))
\]
is an equivalence. For $A \to B$ a discretely $p$-completely eff map of $p$-complete $\mathbb{E}_\infty$-rings and $M$ a $p$-complete module, the canonical map
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/
B^{\otimes_A\bullet+1},p}(\cpl{(M\otimes_AB^{\otimes_A\bullet+1})}_p))
\]
is an equivalence.
\item For $A \to B$ an eff map of $\mathbb{E}_\infty$-rings with ${\mathrm{S}^1}$-action and $M$ an ${\mathrm{S}^1}$-equivariant module, the canonical maps
\begin{align*}
&\operatorname{fil}^\star_{\mathrm{ev}/A,\mathrm{h}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/
B^{\otimes_A\bullet+1},\mathrm{h}{\mathrm{S}^1}}(M\otimes_AB^{\otimes_A\bullet+1})), \\
&\operatorname{fil}^\star_{\mathrm{ev}/A,\mathrm{t}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/
B^{\otimes_A\bullet+1},\mathrm{t}{\mathrm{S}^1}}(M\otimes_AB^{\otimes_A\bullet+1}))
\end{align*}
are equivalences. For $A \to B$ a discretely $p$-completely eff map of $p$-complete $\mathbb{E}_\infty$-rings with ${\mathrm{S}^1}$-action and $M$ an ${\mathrm{S}^1}$-equivariant module, the canonical maps
\begin{align*}
&\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{h}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/
B^{\otimes_A\bullet+1},p,\mathrm{h}{\mathrm{S}^1}}(\cpl{(M\otimes_AB^{\otimes_A\bullet+1})}_p)),
\\
&\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{t}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/
B^{\otimes_A\bullet+1},p,\mathrm{t}{\mathrm{S}^1}}(\cpl{(M\otimes_AB^{\otimes_A\bullet+1})}_p)),
\end{align*}
are equivalences.
\item \label{ev--eff-descent-for-filtered-tc}
For $A \to B$ a discretely $p$-completely eff map of $p$-typical cyclotomic $\mathbb{E}_\infty$-rings and $M$ a cyclotomic $A$-module, the canonical map
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{TC}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/
B^{\otimes_A\bullet+1},p,\mathrm{TC}}(\cpl{(M\otimes_AB^{\otimes_A\bullet+1})}_p))
\]
is an equivalence.
\end{enumerate}
\end{corollary}
\begin{corollary}
\label{ev--compute-tc-as-fiber}
Let $A \to B$ be a discretely $p$-completely eff map of $p$-typical cyclotomic $\mathbb{E}_{\infty}$-rings where $B$ is even. Let $M$ be a cyclotomic $A$-module. Then the cyclotomic Frobenius and canonical maps $\varphi, \mathrm{can} : M^{\mathrm{h}{\mathrm{S}^1}} \to \cpl{(M^{\mathrm{t}{\mathrm{S}^1}})}_p$ refine to maps $\varphi,\mathrm{can}: \operatorname{fil}^{\star}_{\mathrm{ev}/A,p,\mathrm{h}{\mathrm{S}^1}} M \to \operatorname{fil}^{\star}_{\mathrm{ev}/A,p,\mathrm{t}{\mathrm{S}^1}} M$, and there is a canonical equivalence
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{TC}}(M) \simeq
\operatorname*{fib}(\varphi - \mathrm{can}: \operatorname{fil}^{\star}_{\mathrm{ev}/A,p,\mathrm{h}{\mathrm{S}^1}} M \to
\operatorname{fil}^{\star}_{\mathrm{ev}/A,p,\mathrm{t}{\mathrm{S}^1}} M).
\]
\end{corollary}
\begin{proof}
Let us temporarily denote by $G$ and $H$ the right Kan extensions of the functors $(A,M) \mapsto \tau_{\ge 2*}(M^{\mathrm{h}{\mathrm{S}^1}})$ and $(A,M) \mapsto \tau_{\ge 2*}(\cpl{(M^{\mathrm{t}{\mathrm{S}^1}})}_p)$, respectively, along the inclusion $\mathrm{CycMod}_p^{\mathrm{ev}} \subseteq \mathrm{CycMod}_p$. Then, tautologically, both $\tau_{\ge 2*}(\varphi)$ and $\tau_{\ge 2*}(\mathrm{can})$ extend to maps of filtered objects and we have an equivalence
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{TC}}(M) \simeq \operatorname*{fib}(\varphi - \mathrm{can}: G(A,M) \to H(A,M)).
\]
It remains to identify $G(A,M)$ with $\operatorname{fil}^{\star}_{\mathrm{ev}/A,p,\mathrm{h}{\mathrm{S}^1}}(M)$ and $H(A,M)$ with $\operatorname{fil}^{\star}_{\mathrm{ev}/A,p,\mathrm{t}{\mathrm{S}^1}}(M)$. But this follows from \Cref{ev--eff-descent-for-filtered-tc} in the previous corollary, and the fact that $B^{\otimes_A\bullet+1}$ is even.
\end{proof}
\begin{remark} \label{rmk-geometric-stack}
In general, if $\mathcal{T}$ is a subcanonical site, then one says that a sheaf $X$ on $\mathcal{T}$ is a geometric stack if it admits an effective epimorphism $h_T \to X$ from a representable sheaf and the diagonal $X \to X \times X$ is representable, i.e. whenever $h_{S}, h_{S'} \to X$ are maps from representables, then $h_{S}\times_Xh_{S'}$ is representable. (This notion is heavily dependent on the choice of site presenting the topos $\mathrm{Shv}(\mathcal{T})$.) Those $\mathbb{E}_{\infty}$-rings $A$ with an eff map to an even $\mathbb{E}_{\infty}$-ring give examples of geometric stacks on the site $(\mathrm{CAlg}^\mathrm{ev})^{\mathrm{op}}$, and similarly in the equivariant and cyclotomic examples. The good behavior guaranteed in the preceding \Cref{ev--compute-tc-as-fiber} is part of a general paradigm where properties of geometric stacks more closely resemble those of ``affines''.
\end{remark}
\begin{corollary}[Novikov descent]
\label{ev--novikov}
The following statements hold:
\begin{enumerate}[leftmargin=*]
\item For $A$ an $\mathbb{E}_\infty$-ring and $M$ an $A$-module, the canonical map
\[
\operatorname{fil}^\star_{\mathrm{ev}/A}(M) \to \lim_{\Delta}\left(\operatorname{fil}^\star_{\mathrm{ev}/A \otimes \mathrm{MU}^{\otimes \bullet+1}}(M \otimes \mathrm{MU}^{\otimes \bullet+1})\right)
\]
is an equivalence. For $A$ a $p$-complete $\mathbb{E}_\infty$-ring and $M$ a $p$-complete $A$-module, the canonical map
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p}(M) \to \lim_{\Delta}\left(\operatorname{fil}^\star_{\mathrm{ev}/\cpl{(A \otimes \mathrm{MU}^{\otimes \bullet+1})}_p,p}(\cpl{(M \otimes \mathrm{MU}^{\otimes \bullet+1})}_p)\right)
\]
is an equivalence.
\item For $A$ an $\mathbb{E}_\infty$-ring with ${\mathrm{S}^1}$-action and $M$ an ${\mathrm{S}^1}$-equivariant $A$-module, the canonical maps
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,\mathrm{h}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/A \otimes \mathrm{MU}^{\otimes \bullet+1},\mathrm{h}{\mathrm{S}^1}}(M \otimes \mathrm{MU}^{\otimes \bullet+1})), \]
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,\mathrm{t}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/A \otimes \mathrm{MU}^{\otimes \bullet+1},\mathrm{t}{\mathrm{S}^1}}(M \otimes \mathrm{MU}^{\otimes \bullet+1}))
\]
are equivalences. For $A$ a $p$-complete $\mathbb{E}_\infty$-ring with ${\mathrm{S}^1}$-action and $M$ a $p$-complete ${\mathrm{S}^1}$-equivariant $A$-module, the canonical maps
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{h}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/\cpl{(A \otimes \mathrm{MU}^{\otimes \bullet+1})}_p,p,\mathrm{h}{\mathrm{S}^1}}(\cpl{(M \otimes \mathrm{MU}^{\otimes \bullet+1})}_p)), \]
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{t}{\mathrm{S}^1}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/\cpl{(A \otimes \mathrm{MU}^{\otimes \bullet+1})}_p,p,\mathrm{t}{\mathrm{S}^1}}(\cpl{(M \otimes \mathrm{MU}^{\otimes \bullet+1})}_p))
\]
are equivalences, where $\mathrm{MU}$ is considered to have trivial ${\mathrm{S}^1}$-action.
\item For $A$ a $p$-typical cyclotomic $\mathbb{E}_{\infty}$-ring and $M$ a cyclotomic $A$-module, the canonical map
\[
\operatorname{fil}^\star_{\mathrm{ev}/A,p,\mathrm{TC}}(M) \to \lim_{\Delta}(\operatorname{fil}^\star_{\mathrm{ev}/\cpl{(A \otimes \mathrm{MU}^{\otimes \bullet+1})}_p,p,\mathrm{TC}}(\cpl{(M \otimes \mathrm{MU}^{\otimes \bullet+1})}_p))
\]
is an equivalence, where $\mathrm{MU}$ is considered to have trivial cyclotomic structure.
\end{enumerate}
\end{corollary}
\begin{proof}
It suffices, by the closure of eff maps under pushouts together with \cref{ev--magic-criterion}, to show that $\mathbb{S} \to \mathrm{MU}$ is eff and discretely $p$-completely faithfully flat. If $B$ is any even $\mathbb{E}_{\infty}$-ring, then $B$ is complex orientable and hence $\pi_*(B\otimes_{\mathbb{S}}\mathrm{MU})$ is isomorphic to a polynomial ring over $\pi_*(B)$ on a single generator in each positive even degree. In particular, this ring is even and both faithfully flat and discretely $p$-completely faithfully flat over $\pi_*(B)$.
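Concretely, a choice of complex orientation of $B$ determines an isomorphism
\[
\pi_*(B \otimes_{\mathbb{S}} \mathrm{MU}) \cong \pi_*(B)[b_1, b_2, \ldots], \qquad |b_i| = 2i,
\]
so that, as a $\pi_*(B)$-module, this homotopy ring is a direct sum of even shifts of $\pi_*(B)$.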
\end{proof}
\subsection{Completion and rationalization}
Let us also record some statements concerning the interaction of the even filtration with $p$-completion and rationalization.
\begin{definition}
\label{ev--evenly-free}
We say that a map of $\mathbb{E}_\infty$-rings $A \to B$ is \emph{evenly free} if for any nonzero even $\mathbb{E}_\infty$-ring $C$ and map $A \to C$, the pushout $B \otimes_A C$ is equivalent as a $C$-module to a nonzero direct sum of even shifts of $C$.
\end{definition}
\begin{remark}
\label{ev--evenly-free-eff}
Let $A \to B$ be an evenly free map of $\mathbb{E}_\infty$-rings. Then $A \to B$ is eff, and for any prime $p$, the induced map $\cpl{A}_p \to \cpl{B}_p$ is discretely $p$-completely eff (by \cref{ev--pcpl-free}).
\end{remark}
\begin{example}
\label{ev--MU-evenly-free}
The proof of \cref{ev--novikov} shows that, for any $\mathbb{E}_\infty$-ring $A$, the map $A \to A \otimes \mathrm{MU}$ is evenly free.
\end{example}
\begin{proposition}
\label{ev--rationalization}
Let $A$ be an $\mathbb{E}_\infty$-ring. Suppose that there exists an even $\mathbb{E}_\infty$-ring $B$ and a $1$-connective, eff map $A \to B$. Then the canonical map
\[
\operatorname{fil}^\star_\mathrm{ev}(A) \otimes \num{Q} \to \operatorname{fil}^\star_\mathrm{ev}(A \otimes \num{Q})
\]
is an equivalence.
\end{proposition}
\begin{proof}
Fix a map $A \to B$ as in the statement. Then $B \otimes \num{Q}$ is also even and $A \otimes \num{Q} \to B \otimes \num{Q}$ is also eff. Thus, by \cref{ev--magic-criterion}, we have
\[
\operatorname{fil}^\star_\mathrm{ev}(A) \simeq \lim_{\Delta}(\tau_{\ge 2\star}(B^{\otimes_A\bullet+1})), \quad
\operatorname{fil}^\star_\mathrm{ev}(A \otimes \num{Q}) \simeq \lim_{\Delta}(\tau_{\ge 2\star}(B^{\otimes_A\bullet+1} \otimes \num{Q})),
\]
so it suffices to show that the canonical map
\[
\lim_{\Delta}(\tau_{\ge 2\star}(B^{\otimes_A\bullet+1})) \otimes \num{Q} \to
\lim_{\Delta}(\tau_{\ge 2\star}(B^{\otimes_A\bullet+1} \otimes \num{Q}))
\]
is an equivalence. It follows from the $1$-connectivity of the map $A \to B$ that the source of the map is a complete filtered object (by e.g. \cite[Proposition C.2.5]{HahnWilson}), and the target evidently is too, so it suffices to show that the induced map on each associated graded piece
\[
\lim_{\Delta}(\pi_{2n}(B^{\otimes_A \bullet+1})) \otimes \num{Q} \to
\lim_{\Delta}(\pi_{2n}(B^{\otimes_A\bullet+1} \otimes \num{Q}))
\]
is an equivalence. This follows from the fact that, for any cosimplicial abelian group $X^\bullet$, the canonical map $\pi^i(X^\bullet) \otimes \num{Q} \to \pi^i(X^\bullet \otimes \num{Q})$ is an isomorphism for each $i$.
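Indeed, writing $N(X^\bullet)$ for the associated normalized cochain complex, the flatness of $\num{Q}$ over $\num{Z}$ gives isomorphisms
\[
\pi^i(X^\bullet) \otimes \num{Q} \cong H^i(N(X^\bullet)) \otimes \num{Q} \cong H^i(N(X^\bullet) \otimes \num{Q}) \cong \pi^i(X^\bullet \otimes \num{Q}),
\]
since tensoring with $\num{Q}$ is exact and hence commutes with the formation of cohomology.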
\end{proof}
\begin{proposition}
\label{ev--profinite-rationalization}
Let $A \to A'$ be a map of connective $\mathbb{E}_\infty$-rings such that the induced map $\cpl{A} \otimes \num{Q} \to \cpl{(A')} \otimes \num{Q}$ is an equivalence, where $\cpl{(-)}$ denotes profinite completion. Suppose that there exists a connective $\mathbb{E}_\infty$-ring $B$ and a $1$-connective, evenly free map $A \to B$ such that $\cpl{B}$ and $\cpl{(A' \otimes_A B)}$ are even. Then the canonical map
\[
\left(\prod_p\operatorname{fil}^\star_{\mathrm{ev},p}(\cpl{A}_p)\right) \otimes \num{Q} \to
\left(\prod_p\operatorname{fil}^\star_{\mathrm{ev},p}(\cpl{(A')}_p)\right) \otimes \num{Q}
\]
is an equivalence.
\end{proposition}
\begin{proof}
Fix a map $A \to B$ as in the statement. By \cref{ev--magic-criterion,ev--evenly-free-eff}, we have
\[
\prod_p \operatorname{fil}^\star_{\mathrm{ev},p}(\cpl{A}_p) \simeq \prod_p \lim_{\Delta}( \tau_{\ge 2\star}(\cpl{(B^{\otimes_A\bullet+1})}_p)) \simeq \lim_{\Delta}(\tau_{\ge 2\star}(\cpl{(B^{\otimes_A\bullet+1})})),
\]
and as in the proof of \cref{ev--rationalization}, $A \to B$ also being $1$-connective further implies that
\[
\left(\prod_p \operatorname{fil}^\star_{\mathrm{ev},p}(\cpl{A}_p)\right) \otimes \num{Q} \simeq \lim_{\Delta}(\tau_{\ge 2\star}(\cpl{(B^{\otimes_A\bullet+1})} \otimes \num{Q})).
\]
Letting $B' := A' \otimes_A B$, the above also holds when $A,B$ are replaced by $A',B'$. We may now conclude by observing that $\cpl{A} \otimes \num{Q} \to \cpl{(A')} \otimes \num{Q}$ being an equivalence implies that $\cpl{(B^{\otimes_A\bullet+1})} \otimes \num{Q} \to \cpl{((B')^{\otimes_{(A')}\bullet+1})} \otimes \num{Q}$ is a (levelwise) equivalence.
\end{proof}
\section{The motivic filtration}
In this section, we define and prove basic properties of the motivic filtrations on $\mathrm{THH}(R)$, $\mathrm{TC}^{-}(R)$, $\mathrm{TP}(R)$, and $\cpl{\mathrm{TC}(R)}_p$, whenever $R$ is a well-behaved ring spectrum. We begin by specifying that well-behaved means \emph{chromatically quasisyntomic}, which is meant to be a generalization of the main definition of \cite[\textsection 4]{BMS} and \cite[Appendix C]{BL} to the setting of not necessarily discrete $\mathbb{E}_\infty$-ring spectra.
\subsection{Chromatically quasisyntomic $\mathbb{E}_\infty$-rings}
\label{xq}
\begin{definition}
\label{xq--even-quasi}
Let $f : A \to B$ be a map of even $\mathbb{E}_\infty$-rings. Below, we regard $\pi_*(A)$ and $\pi_*(B)$ as ungraded commutative rings.
\begin{enumerate}
\item The map $f$ is a \emph{quasiregular quotient} (resp. \emph{$p$-quasiregular quotient}) if $\smash{\L^\mathrm{alg}_{\pi_*(B)/\pi_*(A)}}$ has Tor-amplitude concentrated in degree $1$ (resp. $p$-complete Tor-amplitude concentrated in degree $1$).
\item The map $f$ is \emph{quasi-lci} (resp. \emph{$p$-quasi-lci}) if $\smash{\L^\mathrm{alg}_{\pi_*(B)/\pi_*(A)}}$ is a $\pi_*(B)$-module with Tor-amplitude concentrated in $[0,1]$ (resp. $p$-complete Tor-amplitude concentrated in $[0,1]$).
\end{enumerate}
\end{definition}
\begin{definition}
We say that an even $\mathbb{E}_\infty$-ring $A$ is \emph{$p$-quasisyntomic} if $\pi_*(A)$ is a $p$-quasisyntomic commutative ring, i.e. the unit map $\num{Z} \to \pi_*(A)$ is $p$-quasi-lci and $\pi_*(A)$ has bounded $p$-power torsion.
\end{definition}
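\begin{example}
If $\pi_*(A)$ is a polynomial $\num{Z}$-algebra on any set of generators, then $\L^\mathrm{alg}_{\pi_*(A)/\num{Z}} \simeq \Omega^1_{\pi_*(A)/\num{Z}}$ is a free $\pi_*(A)$-module concentrated in degree $0$, and $\pi_*(A)$ is torsion-free, so any such $A$ is $p$-quasisyntomic.
\end{example}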
The terminology ``quasi-lci'' is partially motivated by the following proposition.
\begin{proposition}\label{factor-quasi-lci}
A map $k \to R$ of connective,
even $\mathbb{E}_{\infty}$-rings is quasi-lci if and only if there exists a factorization
\[
k \to S \to R
\]
where $S$ is even and connective with
$\pi_*(S)$ a polynomial $\pi_*(k)$-algebra and
$S \to R$ is a quasiregular quotient.
Moreover, if $k \to R$ is quasi-lci, then any factorization
\[
k \to S \to R
\]
with $\pi_*(S)$ a polynomial $\pi_*(k)$-algebra and
$\pi_*(S) \to \pi_*(R)$ surjective necessarily has
$S \to R$ a quasiregular quotient.
\end{proposition}
\begin{proof} First suppose such a factorization exists; then
the claim is immediate from the fiber sequence
\[
R_*\otimes^{\L}_{S_*}\L^{\mathrm{alg}}_{S_*/k_*} \to
\L^{\mathrm{alg}}_{R_*/k_*} \to
\L^{\mathrm{alg}}_{R_*/S_*}.
\]
For the converse,
we will require results from
\Cref{WilsonAppendix}. By \Cref{WilsonSurjectivity}, we may
choose an $\mathbb{E}_{\infty}$-map
$\Sigma^{\infty}_+W \to R$ from the suspension spectrum
of a weak product of
Wilson spaces which is surjective on homotopy groups.
By \Cref{WilsonEvenPolynomial}, $S:=k\otimes \Sigma^{\infty}_+W$
is even and connective with homotopy groups polynomial
over $\pi_*(k)$. From the fiber sequence
\[
R_*\otimes^{\L}_{S_*}\L^{\mathrm{alg}}_{S_*/k_*} \to
\L^{\mathrm{alg}}_{R_*/k_*} \to
\L^{\mathrm{alg}}_{R_*/S_*}
\]
we see that $\L^{\mathrm{alg}}_{R_*/S_*}$ has Tor-amplitude
in $[0,1]$. Moreover, since the map $S_* \to R_*$ is surjective
and $S_*$ is a polynomial algebra, we deduce that
the map $R_*\otimes^{\L}_{S_*}\L^{\mathrm{alg}}_{S_*/k_*} \to
\L^{\mathrm{alg}}_{R_*/k_*}$
is surjective on $\pi_0$ and hence also after any base change
to a discrete $R_*$-module. It follows that
$\L^{\mathrm{alg}}_{R_*/S_*}$ in fact has Tor-amplitude
concentrated in degree $1$, as desired.
The final claim follows from the same argument.
\end{proof}
\begin{definition}
\label{xq--xquasi}
A map of $\mathbb{E}_\infty$-rings $k \to R$ is \emph{chromatically quasi-lci} (resp. \emph{chromatically $p$-quasi-lci}) if:
\begin{enumerate}
\item both $k \otimes \mathrm{MU}$ and $R \otimes \mathrm{MU}$ are even, and
\item the induced map $k \otimes \mathrm{MU} \to R \otimes \mathrm{MU}$ is quasi-lci (resp. $p$-quasi-lci).
\end{enumerate}
An $\mathbb{E}_\infty$-ring $R$ is \emph{chromatically $p$-quasisyntomic} if $R \otimes \mathrm{MU}$ is even and $p$-quasisyntomic. An $\mathbb{E}_{\infty}$-ring $R$ is \emph{chromatically quasisyntomic} if $R \otimes \mathrm{MU}$ is even, the unit map $\num{Z} \to \mathrm{MU}_*(R)$ is quasi-lci, and $\mathrm{MU}_*(R)$ has bounded $p$-power torsion for all primes $p$.
\end{definition}
\begin{example}
If $R$ is any of $\mathbb{S}$, $\mathrm{ko}$, or $\mathrm{tmf}$, then $\mathrm{MU}_*R$ is a polynomial $\mathbb{Z}$-algebra. Thus, all three of $\mathbb{S}$, $\mathrm{ko}$, and $\mathrm{tmf}$ are chromatically quasisyntomic $\mathbb{E}_{\infty}$-rings.
\end{example}
In the case that $k$ and $R$ are both even $\mathbb{E}_{\infty}$-rings, the chromatic definitions collapse to more familiar notions:
\begin{proposition}
\label{xq--even-xquasilci}
Suppose $k \to R$ is a map of even $\mathbb{E}_{\infty}$-rings. Then it is a quasi-lci map if and only if it is a chromatically quasi-lci map. Similarly, a map of even $\mathbb{E}_{\infty}$-rings is $p$-quasi-lci if and only if chromatically $p$-quasi-lci.
\end{proposition}
\begin{proof}
By complex-orientation theory, there are compatible isomorphisms $\mathrm{MU}_*k \cong k_*[b_1,b_2,\cdots]$ and $\mathrm{MU}_*R \cong R_*[b_1,b_2,\cdots]$. We calculate that
\[
\L^{\mathrm{alg}}_{\mathrm{MU}_*R/\mathrm{MU}_*k} \simeq \num{Z}[b_1,b_2,\cdots] \otimes^{\L}_{\num{Z}} \L^{\mathrm{alg}}_{R_*/k_*} \simeq \mathrm{MU}_*R \otimes_{R_*} \L^{\mathrm{alg}}_{R_*/k_*},
\]
since the algebraic cotangent complex is compatible with derived base-change, and finish by noting that $\mathrm{MU}_*R$ is a free $R_*$-module.
\end{proof}
\begin{proposition}
\label{xq--even-xqsyn}
Let $A$ be an even $\mathbb{E}_\infty$-ring. Then $A$ is chromatically quasisyntomic (resp. chromatically $p$-quasisyntomic) if and only if $\pi_*A$ is quasisyntomic (resp. $p$-quasisyntomic).
\end{proposition}
\begin{proof}
By complex orientation theory, $\mathrm{MU}_*A \cong \pi_*(A)[b_1,b_2,\ldots]$, which is a free $\pi_*(A)$-module.
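Since $\Omega^1_{\mathrm{MU}_*A/\pi_*(A)}$ is free, the transitivity sequence for $\num{Z} \to \pi_*(A) \to \mathrm{MU}_*A$ splits, giving
\[
\L^{\mathrm{alg}}_{\mathrm{MU}_*A/\num{Z}} \simeq \left(\mathrm{MU}_*A \otimes_{\pi_*(A)} \L^{\mathrm{alg}}_{\pi_*(A)/\num{Z}}\right) \oplus \Omega^1_{\mathrm{MU}_*A/\pi_*(A)},
\]
so the relevant Tor-amplitude and bounded $p$-power torsion conditions for $\mathrm{MU}_*A$ and $\pi_*(A)$ agree.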
\end{proof}
The following result will be used in the proof of \cref{pmotconv}.
\begin{proposition}
\label{mo--strongly-even-quasisyntomic}
Let $R$ be a chromatically $p$-quasisyntomic $\mathbb{E}_\infty$-ring and let $A$ be a strongly even $\mathbb{E}_\infty$-ring (\cref{wi--strongly-even}). Then $R \otimes A$ is even and $p$-quasisyntomic.
\end{proposition}
\begin{proof}
Evenness is immediate from $R \otimes \mathrm{MU}$ being even and $A$ splitting as a direct sum of even suspensions of $\mathrm{MU}$. This implies that $\pi_*(R \otimes A \otimes \mathrm{MU})$ is an even polynomial algebra over $\pi_*(R \otimes A)$, and hence $R \otimes A$ is $p$-quasisyntomic if and only if $R \otimes A \otimes \mathrm{MU}$ is $p$-quasisyntomic; we will thus be done if we can show the latter. Noting that a commutative ring $B$ is $p$-quasisyntomic if and only if its $p$-localization $B_{(p)}$ is $p$-quasisyntomic, this follows from the hypothesis that $R \otimes \mathrm{MU}$ is $p$-quasisyntomic and the fact that $\pi_*(R \otimes \mathrm{MU} \otimes A)_{(p)}$ is an even polynomial algebra over $\pi_*(R \otimes \mathrm{MU})_{(p)}$ (\cref{wi--p-local-polynomial}).
\end{proof}
\subsection{Motivic filtrations}
With the definitions and constructions of \Cref{SecEvenFiltration} and \Cref{xq} in place, we now define our motivic filtrations on Hochschild homology and its associated invariants.
\begin{definition}
\label{mo--motivic-fil}
For a chromatically quasi-lci map of connective $\mathbb{E}_\infty$-rings $k \to R$
and $\mathrm{THH}(R/k)$-module $M$ (assumed to be ${\mathrm{S}^1}$-equivariant in the latter two
cases), we define
\begin{align*}
&\operatorname{fil}^\star_{\mathrm{mot}/(R|k)}M := \operatorname{fil}^\star_{\mathrm{ev}/\mathrm{THH}(R/k)}M, \\
&\operatorname{fil}^\star_{\mathrm{mot}/(R|k)}M^{\mathrm{h}{\mathrm{S}^1}} := \operatorname{fil}^\star_{\mathrm{ev}/\mathrm{THH}(R/k),\mathrm{h}{\mathrm{S}^1}}M, \\
&\operatorname{fil}^\star_{\mathrm{mot}/(R|k)}M^{\mathrm{t}{\mathrm{S}^1}} := \operatorname{fil}^\star_{\mathrm{ev}/\mathrm{THH}(R/k),\mathrm{t}{\mathrm{S}^1}}M.
\end{align*}
In the case $M=\mathrm{THH}(R/k)$ we abbreviate:
\begin{align*}
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{THH}(R/k) := \operatorname{fil}^\star_\mathrm{ev}\mathrm{THH}(R/k), \\
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{TC}^-(R/k) := \operatorname{fil}^\star_{\mathrm{ev},\mathrm{h}{\mathrm{S}^1}}\mathrm{THH}(R/k), \\
&\operatorname{fil}^\star_\mathrm{mot}\mathrm{TP}(R/k) := \operatorname{fil}^\star_{\mathrm{ev},\mathrm{t}{\mathrm{S}^1}}\mathrm{THH}(R/k).
\end{align*}
\end{definition}
\begin{variant}
\label{mo--pcpl-motivic-fil}
For a prime number $p$ and a chromatically $p$-quasi-lci map of connective $\mathbb{E}_\infty$-rings $k \to R$ and a $p$-complete
$\mathrm{THH}(R/k)$-module $M$ (assumed to be ${\mathrm{S}^1}$-equivariant in the latter two cases), we define
\begin{align*}
&\operatorname{fil}^\star_{\mathrm{mot}/(R|k)}M := \operatorname{fil}^\star_{\mathrm{ev}/\cpl{\mathrm{THH}(R/k)}_p,p}M, \\
&\operatorname{fil}^\star_{\mathrm{mot}/(R|k)}M^{\mathrm{h}{\mathrm{S}^1}} := \operatorname{fil}^\star_{\mathrm{ev}/\cpl{\mathrm{THH}(R/k)}_p,p,\mathrm{h}{\mathrm{S}^1}}M, \\
&\operatorname{fil}^\star_{\mathrm{mot}/(R|k)}\cpl{(M^{\mathrm{t}{\mathrm{S}^1}})}_p
:= \operatorname{fil}^\star_{\mathrm{ev}/\cpl{\mathrm{THH}(R/k)}_p,p,\mathrm{t}{\mathrm{S}^1}}M.
\end{align*}
When $M=\cpl{\mathrm{THH}(R/k)}_p$ we abbreviate:
\begin{align*}
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{THH}(R/k)}_p := \operatorname{fil}^\star_{\mathrm{ev},p}\cpl{\mathrm{THH}(R/k)}_p, \\
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{TC}^-(R/k)}_p := \operatorname{fil}^\star_{\mathrm{ev},p,\mathrm{h}{\mathrm{S}^1}}\cpl{\mathrm{THH}(R/k)}_p, \\
&\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{TP}(R/k)}_p := \operatorname{fil}^\star_{\mathrm{ev},p,\mathrm{t}{\mathrm{S}^1}}\cpl{\mathrm{THH}(R/k)}_p.
\end{align*}
When $k$ is a cyclotomic base\footnote{See \Cref{cycbasedef} for our definition of a cyclotomic base.}, and $M$ is a $p$-typical cyclotomic
$\mathrm{THH}(R/k)$-module, we further define
\[
\operatorname{fil}^\star_{\mathrm{mot}/(R|k)} \cpl{\mathrm{TC}(M)}_p := \operatorname{fil}^\star_{\mathrm{ev}/\cpl{\mathrm{THH}(R/k)}_p,p,\mathrm{TC}}M,
\]
and abbreviate:
\[
\operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{TC}(R/k)}_p := \operatorname{fil}^\star_{\mathrm{ev}/\cpl{\mathrm{THH}(R/k)}_p,p,\mathrm{TC}}\cpl{\mathrm{THH}(R/k)}_p.
\]
\end{variant}
\begin{example}
We check that
$\operatorname{fil}^{\star}_{\mathrm{mot}} \mathrm{THH}(\ell) \simeq \lim_{\Delta}\left( \tau_{\ge 2\star}\mathrm{THH}(\ell/\mathrm{MU}^{\otimes \bullet+1}) \right)$, which is the descent studied by the first and third authors in \cite[\textsection 6.1]{HahnWilson}. Since $\mathrm{THH}(\ell/\mathrm{MU}^{\otimes \bullet+1}) \simeq \mathrm{THH}(\ell) \otimes_{\mathrm{THH}(\mathrm{MU})} (\mathrm{MU})^{\otimes_{\mathrm{THH}(\mathrm{MU})} \bullet+1}$, it suffices to check that:
\begin{enumerate}
\item $\mathrm{THH}(\ell/\mathrm{MU})$ is even.
\item $\mathrm{THH}(\mathrm{MU}) \to \mathrm{MU}$ is eff.
\end{enumerate}
The first of these points is \cite[Theorem E]{HahnWilson}. To prove the second point, given any map of $\mathbb{E}_{\infty}$-rings $\mathrm{THH}(\mathrm{MU}) \to A$ where $A$ is even, we must show that $A \otimes_{\mathrm{THH}(\mathrm{MU})} \mathrm{MU}$ is even and that $A \to A \otimes_{\mathrm{THH}(\mathrm{MU})} \mathrm{MU}$ is faithfully flat. In fact, we will show that $A \otimes_{\mathrm{THH}(\mathrm{MU})} \mathrm{MU}$ is a free $A$-module, equivalent to a direct sum of even suspensions of $A$.
Recall that $\mathrm{THH}_*(\mathrm{MU}) \cong \Lambda_{\mathrm{MU}_*}(\sigma b_1,\sigma b_2,\cdots)$ is an exterior algebra over $\mathrm{MU}_*$ generated by classes $\sigma b_i$ in odd degrees \cite{BCS,RognesMU}. Because $A$ is even, each of the classes $\sigma b_i$ must map to zero along the map $\mathrm{THH}_*(\mathrm{MU}) \to \pi_*(A)$. The K\"unneth spectral sequence for $\pi_*\left(A \otimes_{\mathrm{THH}(\mathrm{MU})} \mathrm{MU}\right)$ therefore has $\mathrm{E}_2$-term given by
$\pi_*(A) \otimes_{\mathrm{MU}_*} \Gamma_{\mathrm{MU}_*}(\sigma^2b_i),$
where $\Gamma_{\mathrm{MU}_*}(\sigma^2b_i)$ denotes a divided power algebra on even degree generators. The spectral sequence collapses for degree reasons, and the result is a free $\pi_*(A)$-module.
\end{example}
\begin{example} \label{example:BPn} With the same argument as above, we see that
$\operatorname{fil}^{\star}_{\mathrm{mot}/\mathrm{MU}}\mathrm{THH}(\mathrm{BP}\langle n\rangle)
\simeq \lim_{\Delta}\left( \tau_{\ge 2\star}\mathrm{THH}(\mathrm{BP}\langle n\rangle/\mathrm{MU}^{\otimes \bullet+1}) \right)$, also studied by the first and third authors
in \cite[\textsection 6.1]{HahnWilson}.
\end{example}
In the remainder of the section we set up some basic theory, culminating in proofs of \Cref{motConv} and \Cref{filteredFrob} from the introduction.
Roughly speaking, these theorems state that the motivic filtrations converge and that the motivic filtration on $\cpl{\mathrm{TC}}_p$ is compatible with the motivic filtrations on $\mathrm{TC}^{-}$ and $\mathrm{TP}$. For ease of exposition, we choose to focus on the case of rings rather than modules
over their Hochschild homology. The necessary modifications for the module
case are straightforward and left to the interested reader.
\begin{lemma}
\label{hkr--quasismooth-HH-atlas}
Let $k \to S$ be a map of connective, even $\mathbb{E}_\infty$-rings such that $\pi_*(S)$ is a polynomial $\pi_*(k)$-algebra. Then the map $\mathrm{THH}(S/k) \to S$ is evenly free, hence eff and discretely $p$-completely eff.
\end{lemma}
\begin{proof}
Suppose $S=k[x_i],$ where the $x_i$ are a (possibly infinite) collection of polynomial generators. Then we may calculate the homotopy groups of \[\mathrm{THH}(S/k) = S \otimes_{S \otimes_k S} S \simeq k[x_i] \otimes_{k[x_i,x'_i]} k[x_i]\] via a spectral sequence with associated graded
\[\mathrm{THH}_*(S_*/k_*) \cong k_*[x_i] \otimes_{k_*} \Lambda_{k_*}(dx_i).\]
This spectral sequence collapses because each multiplicative generator is a permanent cycle.
Now, consider a pushout square of the form
\[
\begin{tikzcd}
\mathrm{THH}(S/k) \arrow{d} \arrow{r} & S \arrow{d} \\
C \arrow{r} & C \otimes_{\mathrm{THH}(S/k)} S,
\end{tikzcd}
\]
where $C$ is a nonzero even $\mathbb{E}_{\infty}$-ring. The map $\mathrm{THH}_*(S/k) \to \pi_*C$ must send each class $dx_i$ to $0$, because each class $dx_i$ is in odd degree. We may calculate $\pi_*(C \otimes_{\mathrm{THH}(S/k)} S)$ via a spectral sequence with associated graded
\[C_* \otimes^{\L}_{k_*[x_i] \otimes_{k_*} \Lambda_{k_*}(dx_i)} k_*[x_i] \cong C_* \otimes_{k_*} \Gamma_{k_*}(\sigma^2 x_i),\]
where $\Gamma_{k_*}(\sigma^2 x_i)$ is a divided power algebra on even degree classes. This spectral sequence must degenerate for degree reasons, and the result is a nonzero free $C_*$-module.
The above shows that the map $\mathrm{THH}(S/k) \to S$ is evenly free. By \cref{ev--evenly-free-eff}, this implies that the map is eff and discretely $p$-completely eff.
\end{proof}
\begin{lemma}
\label{hkr--quasiregular-HH-even}
\begin{enumerate}[leftmargin=*]
\item Let $S \to R$ be a quasiregular quotient of connective, even $\mathbb{E}_\infty$-rings. Then $\mathrm{THH}(R/S)$ is even.
\item Let $S \to R$ be a $p$-quasiregular quotient of connective, $p$-complete $\mathbb{E}_\infty$-rings and suppose that $\pi_*(R)$ has bounded $p$-power torsion. Then $\cpl{\mathrm{THH}(R/S)}_p$ is even.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item There is a spectral sequence converging to $\mathrm{THH}_*(R/S)$ with associated graded $\mathrm{THH}_*(R_*/S_*)$, so it will suffice to prove that the latter ring is even. Now, the HKR filtration gives a spectral sequence for $\mathrm{THH}_*(R_*/S_*)$ with associated graded $\L\mathrm{Sym} \left(\Sigma\L^{\mathrm{alg}}_{R_*/S_*}\right)$. Since $S \to R$ is a quasiregular quotient, $\Sigma^{-1}\L^{\mathrm{alg}}_{R_*/S_*}$ is a flat $R_*$-module concentrated
in even internal degrees, and the result follows.
\item By the same argument as above, we are reduced to showing
that $\L\mathrm{Sym} \left(\Sigma\L^{\mathrm{alg}}_{R_*/S_*}\right)$ is concentrated in even degrees after $p$-completion.
First observe that $\cpl{(\Sigma^{-1}\L^{\mathrm{alg}}_{R_*/S_*})}_p$
is $p$-completely flat by hypothesis. Since the formation of
(derived) divided powers commutes with base change, we deduce
that $\Gamma_{R_*}^n(\cpl{(\Sigma^{-1}\L^{\mathrm{alg}}_{R_*/S_*})}_p)$ is also $p$-completely flat. By \cite[Lemma 4.7]{BMS} and our
assumption that $R_*$ has bounded $p^{\infty}$-torsion,
we deduce that
$\Gamma_{R_*}^n(\cpl{(\Sigma^{-1}\L^{\mathrm{alg}}_{R_*/S_*})}_p)$ is discrete after $p$-completion.
By a theorem of Illusie (see \cite[Proposition 25.2.4.2]{sag}),
we deduce that
\[
\L\mathrm{Sym}^n \left(\Sigma\L^{\mathrm{alg}}_{R_*/S_*}\right)
\cong
\Sigma^{2n}\Gamma_{R_*}^n(\Sigma^{-1}\L^{\mathrm{alg}}_{R_*/S_*})
\]
is concentrated in degree $2n$ after $p$-completion. This completes the
proof. \qedhere
\end{enumerate}
\end{proof}
\begin{corollary}
\label{hkr--quasilci-HH-atlas}
Let $k \to S \to R$ be maps of connective, even $\mathbb{E}_\infty$-rings such that $\pi_*(S)$ is a polynomial $\pi_*(k)$-algebra and $S \to R$ is a quasiregular quotient (resp. a $p$-quasiregular quotient with $R_*$ having bounded $p^{\infty}$-torsion). Then the map $\mathrm{THH}(R/k) \to \mathrm{THH}(R/S)$ (resp. the map $\cpl{\mathrm{THH}(R/k)}_p \to \cpl{\mathrm{THH}(R/S)}_p$) is eff (resp. $p$-completely eff).
\end{corollary}
\begin{proof}
This follows from \cref{hkr--quasismooth-HH-atlas,hkr--quasiregular-HH-even}, as the map $\mathrm{THH}(R/k) \to \mathrm{THH}(R/S)$ is obtained by pushing out the map $\mathrm{THH}(S/k) \to S$ along the map $\mathrm{THH}(S/k) \to \mathrm{THH}(R/k)$, and the same statement holds after $p$-completion.
\end{proof}
\begin{corollary}
\label{mot--p-compatibility}
Let $k \to R$ be a chromatically quasi-lci map of connective $\mathbb{E}_\infty$-rings and assume that $\mathrm{MU}_*R$ has bounded $p^{\infty}$-torsion.
Then
\[
\cpl{(\operatorname{fil}^\star_\mathrm{mot}\mathrm{THH}(R/k))}_p \simeq \operatorname{fil}^\star_\mathrm{mot}\cpl{\mathrm{THH}(R/k)}_p.
\]
\end{corollary}
\begin{proof}
By Novikov descent we may replace $R$ and $k$ by
$R\otimes \mathrm{MU}$ and $k\otimes \mathrm{MU}$
and thereby reduce to the case when $k \to R$ is a quasi-lci map
of connective, even $\mathbb{E}_{\infty}$-rings.
By \Cref{factor-quasi-lci} we may produce a factorization
$k \to S \to R$ as in \Cref{hkr--quasilci-HH-atlas}.
The result follows from the conclusion of
\Cref{hkr--quasilci-HH-atlas}.
\end{proof}
The following statements generalize both \Cref{motConv} and \Cref{filteredFrob} from the introduction.
\begin{theorem}
Suppose $k \to R$ is a chromatically quasi-lci map of connective $\mathbb{E}_{\infty}$-rings. Then
\begin{itemize}
\item $\operatorname{fil}^{0}_\mathrm{mot} \mathrm{THH}(R/k)=\mathrm{THH}(R/k)$.
\item $\operatorname{fil}^{\star}_\mathrm{mot} \mathrm{TC}^{-}(R/k)$ has colimit $\mathrm{TC}^{-}(R/k)$.
\item $\operatorname{fil}^{\star}_\mathrm{mot} \mathrm{TP}(R/k)$ has colimit $\mathrm{TP}(R/k)$.
\end{itemize}
Moreover, the fiber of $\operatorname{fil}^i_{\mathrm{mot}}\mathrm{THH}(R/k) \to \mathrm{THH}(R/k)$
(and hence also of $\operatorname{fil}^i_{\mathrm{mot}}\mathrm{TC}^{-}(R/k) \to \mathrm{TC}^{-}(R/k)$) is
$(2i-2)$-truncated.
\end{theorem}
\begin{proof}
Since $\mathrm{THH}(R/k)$ is connective, the Adams--Novikov
tower converges, so that
\[
\mathrm{THH}(R/k) =
\lim_{\Delta}\mathrm{THH}(R/k)\otimes \mathrm{MU}^{\otimes \bullet+1}.
\]
We may therefore replace $R$ and $k$ by $R\otimes \mathrm{MU}$
and $k\otimes \mathrm{MU}$ and thereby reduce to the case when
$k \to R$ is a quasi-lci map of connective, even $\mathbb{E}_{\infty}$-rings.
By \Cref{factor-quasi-lci}, we may choose a factorization
$k \to S \to R$ where $S_*$ is an even polynomial ring over $k_*$
and $S \to R$ is a quasiregular quotient. By \Cref{hkr--quasilci-HH-atlas},
$\mathrm{THH}(R/S)$ is even and we have
\[
\operatorname{fil}^{\star}\mathrm{THH}(R/k) = \lim_{\Delta} \tau_{\ge 2\star}\mathrm{THH}(R/S)^{
\otimes_{\mathrm{THH}(R/k)} \bullet+1}.
\]
It follows that
\[
\operatorname{fil}^0\mathrm{THH}(R/k) = \lim_{\Delta}\mathrm{THH}(R/S)^{
\otimes_{\mathrm{THH}(R/k)} \bullet+1}.
\]
On the other hand, for any map $A \to B$ of connective
$\mathbb{E}_{\infty}$-rings
with $1$-connective fiber, the map
\[
A \to \lim_{\Delta} B^{\otimes_A\bullet+1}
\]
is an equivalence, so the first claim follows.
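Indeed, the fiber of the map from $A$ to the $n$-th partial totalization of the cobar construction is identified with
\[
\operatorname*{fib}\left(A \to \lim_{\Delta^{\le n}} B^{\otimes_A\bullet+1}\right) \simeq \operatorname*{fib}(A \to B)^{\otimes_A (n+1)},
\]
which is $(n+1)$-connective whenever $\operatorname*{fib}(A \to B)$ is $1$-connective.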
Now we turn to the remaining claims. Let us
(for the purposes of this proof)
say that a filtered spectrum $F^{\star}Y$
\emph{strongly converges to $Y$} if there
is a map $F^{\star}Y \to Y$ to the constant
filtered spectrum at $Y$ such that
the fiber of each map
\[
F^{i}Y \to Y
\]
is $(2i-2)$-truncated.
Observe, in particular, that this implies
$\operatorname*{colim} F^{\star}Y = Y$, since the fiber
of the map $\operatorname*{colim} F^{\star}Y \to Y$, having
arbitrary coconnectivity, is necessarily zero.
Since $n$-truncated spectra are closed under arbitrary
limits,
the property of strong convergence is closed under limits.
Moreover, we always have
that $\tau_{\ge 2\star}Y$ strongly converges to $Y$.
By the same argument as in the claim for
$\operatorname{fil}^{\star}_{\mathrm{mot}}\mathrm{THH}(R/k)$, we are then reduced to
proving that, if $A \to B$ is a map of $\mathrm{S}^1$-equivariant,
connective $\mathbb{E}_{\infty}$-rings,
with $1$-connective fiber, then
\begin{align*}
A^{\mathrm{h}{\mathrm{S}^1}} &= \lim_{\Delta} (B^{\otimes_A \bullet+1})^{\mathrm{h}{\mathrm{S}^1}},\\
A^{\mathrm{t}{\mathrm{S}^1}} &= \lim_{\Delta} (B^{\otimes_A\bullet+1})^{\mathrm{t}{\mathrm{S}^1}}.
\end{align*}
But both homotopy and Tate fixed points commute with limits
of \emph{connective} spectra, so the claim follows.
\end{proof}
\begin{theorem} \label{pmotconv}
Suppose $k \to R$ is a chromatically $p$-quasi-lci map of connective $\mathbb{E}_{\infty}$-rings, and assume that $\mathrm{MU}_*R$ has bounded $p$-power torsion. Then
\begin{itemize}
\item $\operatorname{fil}^{0}_\mathrm{mot} \cpl{\mathrm{THH}(R/k)}_p$ is equivalent to $\cpl{\mathrm{THH}(R/k)}_p$.
\item $\operatorname{fil}^{\star}_\mathrm{mot} \cpl{\mathrm{TC}^{-}(R/k)}_p$ has colimit $\cpl{\mathrm{TC}^{-}(R/k)}_p$.
\item $\operatorname{fil}^{\star}_\mathrm{mot} \cpl{\mathrm{TP}(R/k)}_p$ has colimit $\cpl{\mathrm{TP}(R/k)}_p$.
\end{itemize}
Furthermore, if $k=\mathbb{S}$ there exist maps
\[\varphi:\operatorname{fil}^{\star}_\mathrm{mot} \cpl{\mathrm{TC}^{-}(R)}_p \to \operatorname{fil}^{\star}_\mathrm{mot} \cpl{\mathrm{TP}(R)}_p, \text{ and}\]
\[\mathrm{can}:\operatorname{fil}^{\star}_\mathrm{mot} \cpl{\mathrm{TC}^{-}(R)}_p \to \operatorname{fil}^{\star}_\mathrm{mot} \cpl{\mathrm{TP}(R)}_p,\]
converging to $\varphi:\cpl{\mathrm{TC}^{-}(R)}_p \to \cpl{\mathrm{TP}(R)}_p$ and $\mathrm{can}:\cpl{\mathrm{TC}^{-}(R)}_p \to \cpl{\mathrm{TP}(R)}_p$, respectively, such that there is an equivalence
\[\operatorname{fil}^{\star}_{\mathrm{mot}} \cpl{\mathrm{TC}(R)}_p \simeq \mathrm{fib}\left(\varphi-\mathrm{can}\right).\]
\end{theorem}
\begin{proof}
The claims about the convergence of the
motivic filtrations are proved exactly as in the previous theorem.
It remains to prove the existence of the filtered $\varphi$ and the filtered $\mathrm{can}$ in the case $k=\mathbb{S}$. For this, we use the notation and theory developed in \cref{cycbaseSection}. Specifically, we choose a strongly even $\mathbb{E}_\infty$-ring $A$ (e.g. $A = \mathrm{MW}$) and a weak product of Wilson spaces $W$ with a surjective $\mathbb{E}_{\infty}$-$A$-algebra map $A \otimes \Sigma^{\infty}_+ W \to A \otimes R$, noting that $A \otimes R$ is even and $p$-quasisyntomic by \cref{mo--strongly-even-quasisyntomic}. Then the map
\[\cpl{\mathrm{THH}(R)}_p \to \cpl{\mathrm{THH}(A \otimes R/A \otimes \Sigma^{\infty}_+ W)}_p\]
is a $p$-completely eff map of $p$-complete, connective, cyclotomic $\mathbb{E}_{\infty}$-rings by \cref{finalAppThm,hkr--quasismooth-HH-atlas}, and the target is even by \cref{hkr--quasiregular-HH-even}. The claim now follows by \Cref{ev--compute-tc-as-fiber}.
\end{proof}
\begin{remark}
The filtered $\varphi$ and $\mathrm{can}$ of \cref{pmotconv} are natural in $\mathbb{E}_{\infty}$-ring maps between chromatically $p$-quasisyntomic $\mathbb{E}_{\infty}$-rings $R$ (see the proof of \cref{ev--compute-tc-as-fiber}).
\end{remark}
\begin{remark} It follows from the proof of the previous theorem
that if $R$ is chromatically quasisyntomic then $\mathrm{THH}(R)$
admits an evenly free (and hence eff) map to an even cyclotomic ring.
As indicated in \cref{rmk-geometric-stack}, the convergence of the
motivic filtration should hold under this weaker requirement alone. It would
be very interesting to have a checkable characterization of those
$R$ such that
$\mathrm{THH}(R)$ admits an eff map to an even cyclotomic ring.
\end{remark}
\subsection{The even filtration and comparison theorems}
For motivation, let us recall how Bhatt--Morrow--Scholze defined their motivic filtration, which we denote here by $\operatorname{fil}^\star_\mathrm{BMS} \cpl{\mathrm{THH}(R)}_p$. First, they focus on a certain class of discrete commutative rings, which they call quasisyntomic and which from here forwards we will call $p$-complete $p$-quasisyntomic. Given a $p$-complete $p$-quasisyntomic ring $R$, they prove that there
exists another such ring $S$ and a map $R \to S$ with the following two properties:
\begin{enumerate}
\item the canonical map $\cpl{\mathrm{THH}(R)}_p \to \lim_{\Delta}(\cpl{\mathrm{THH}(S^{\otimes_R \bullet+1})}_p)$ is an equivalence;
\item for each $n \ge 0$, the spectrum $\cpl{\mathrm{THH}(S^{\otimes_R n+1})}_p$ is \emph{even}, i.e. its homotopy groups are concentrated in even degrees.
\end{enumerate}
They then define $\operatorname{fil}^k_\mathrm{BMS} \cpl{\mathrm{THH}(R)}_p$ to be $\lim_{\Delta}(\tau_{\ge 2k}(\cpl{\mathrm{THH}(S^{\otimes_R \bullet+1})}_p))$, showing that this is independent of the choice of $S$ using the formalism of Grothendieck topologies.
More generally, the strategy of computing $\mathrm{THH}(R)$ by realizing it as a totalization of even ring spectra has been broadly applied to great effect \cite{BMS, LiuWang,HahnWilson,NikolausKrause,Zpn,DavidLee}. It inspires the following construction:
\begin{definition}
\label{in--fil-ev}
An $\mathbb{E}_\infty$-ring $B$ is \emph{even} if its homotopy groups $\pi_*B$ are concentrated in even degrees. For any $\mathbb{E}_\infty$-ring $A$, we define $\operatorname{fil}^n_{\mathrm{ev}} A$ to be the limit, over all maps $A \to B$ with $B$ even, of $\tau_{\ge 2n} B$. Together, the $\operatorname{fil}^n_{\mathrm{ev}} A$ assemble to define a filtered $\mathbb{E}_\infty$-ring $\operatorname{fil}^\star_{\mathrm{ev}} A$.
\end{definition}
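In symbols, the definition reads
\[
\operatorname{fil}^n_{\mathrm{ev}} A \;=\; \lim_{\substack{A \to B\\ B\ \mathrm{even}}} \tau_{\ge 2n} B,
\]
where the limit ranges over the category of even $\mathbb{E}_\infty$-rings under $A$.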
\begin{remark}
\label{in--fil-ev-rmk}
We make the above definition precise by considering the category $\mathrm{CAlg}$ of $\mathbb{E}_\infty$-rings, the full subcategory $\mathrm{CAlg}^{\mathrm{ev}}$ of even $\mathbb{E}_\infty$-rings, and the category $\mathrm{FilCAlg}_{\kappa_2}$ of (possibly large) filtered $\mathbb{E}_\infty$-rings.\footnote{For the definition of ``filtered $\mathbb{E}_\infty$-ring'' and comments on set-theoretic issues, see \Cref{ConventionsSection}.} Then $\operatorname{fil}^\star_\mathrm{ev}(\--)$ is the right Kan extension, along the inclusion $\mathrm{CAlg}^{\mathrm{ev}} \hookrightarrow \mathrm{CAlg}$, of the double-speed Postnikov filtration functor $\tau_{\ge 2*}:\mathrm{CAlg}^{\mathrm{ev}} \to \mathrm{FilCAlg}$.
\end{remark}
Suppose now that $R$ is a discrete commutative ring with bounded $p$-power torsion for all primes $p$ and such that the algebraic cotangent complex $\L^\mathrm{alg}_R$ has Tor-amplitude contained in $[0,1]$. For example, $R$ might be a polynomial ring over $\mathbb{Z}$, or the quotient of a polynomial ring by a regular sequence. For each prime number $p$, the $p$-completion $\cpl{R}_p$ is $p$-complete $p$-quasisyntomic, so we may speak of $\operatorname{fil}^\star_{\mathrm{BMS}}\cpl{\mathrm{THH}(R)}_p = \operatorname{fil}^\star_{\mathrm{BMS}}\cpl{\mathrm{THH}(\cpl{R}_p)}_p $. By gluing together the BMS filtration at all primes $p$, as well as rational information, Morin \cite{Morin} and Bhatt--Lurie \cite[\textsection 6.4]{BL} constructed a \emph{global motivic filtration} $\operatorname{fil}^\star_{\mathrm{mot}} \mathrm{THH}(R)$. Here, we prove that this motivic filtration is the even filtration:
\begin{theorem} \label{IntroBLcomparison}
Let $R$ be a discrete commutative ring with bounded $p$-power torsion for all primes $p$ and such that the algebraic cotangent complex $\L^\mathrm{alg}_R$ has Tor-amplitude contained in $[0,1]$. Then there is a canonical equivalence
\[
\operatorname{fil}^\star_{\mathrm{mot}} \mathrm{THH}(R) \simeq \operatorname{fil}^\star_{\mathrm{ev}} \mathrm{THH}(R),
\]
where $\operatorname{fil}^\star_{\mathrm{mot}}$ denotes the global motivic filtration of Morin and Bhatt--Lurie.
\end{theorem}
\begin{remark}
Bhatt--Lurie further define the global motivic filtration on $\mathrm{THH}(R)$ for all animated commutative rings $R$, by left Kan extension from the case when $R$ is a polynomial $\mathbb{Z}$-algebra. Since polynomial $\mathbb{Z}$-algebras satisfy the conditions of \Cref{IntroBLcomparison}, one may use $\operatorname{fil}^\star_{\mathrm{ev}}$ and left Kan extension to recover $\operatorname{fil}_{\mathrm{mot}}^{\star} \mathrm{THH}(R)$ for any animated commutative ring $R$. By $p$-completion, one may then recover $\operatorname{fil}_{\mathrm{BMS}}^{\star} \cpl{\mathrm{THH}(R)}_p$ for any $p$-complete $p$-quasisyntomic $R$; this can be also recovered directly from a $p$-complete variant of the even filtration, as will be discussed further below.
\end{remark}
In light of the above theorem and remark, it is fair to say that the even filtration provides an alternate construction of the motivic filtration on the $\mathrm{THH}$ of animated commutative rings. Notably, the construction is inherently global, and avoids mention of perfectoid rings and the quasisyntomic site. Even more notably, the even filtration is defined on \emph{any} $\mathbb{E}_\infty$-ring, not only $\mathbb{E}_\infty$-rings that arise as the $\mathrm{THH}$ of discrete commutative rings. For example, we may take the even filtration of the sphere spectrum: it turns out that the result is the d\'ecalage of the Adams--Novikov filtration, which features heavily in the $\mathbb{C}$-motivic stable homotopy theory of Morel--Voevodsky \cite{MorelVoevodsky}. More generally, we have the following result:
\begin{theorem}
For any $\mathbb{E}_\infty$-ring $A$,
\[
\operatorname{fil}^{\star}_{\mathrm{ev}} A \simeq \lim_{\Delta}(\operatorname{fil}^{\star}_{\mathrm{ev}}(A \otimes \mathrm{MU}^{\otimes \bullet+1})),
\]
where the limit is taken in the category of filtered $\mathbb{E}_\infty$-rings.
\end{theorem}
The corollary below then follows by the work of Gheorghe--Isaksen--Krause--Ricka \cite{Cmot} (cf. \cite{Pstragowski,SpecialFiber}):
\begin{corollary}
Fix a prime number $p$. Then, for every $\mathbb{E}_\infty$-ring $A$, the $p$-completion of $\operatorname{fil}^{\star}_\mathrm{ev} A$ is naturally an $\mathbb{E}_\infty$-algebra object in the category of $\mathbb{C}$-motivic spectra. Moreover, if $A$ is bounded below and $\mathrm{MU}_*A$ is even, then, after $p$-completion,
\[
\operatorname{fil}^{\star}_{\mathrm{ev}} A \simeq \nu(A),
\]
where $\nu$ is the synthetic analog functor from spectra to the $p$-completed cellular subcategory of $\mathcal{SH}(\mathbb{C})$.
\end{corollary}
\begin{remark} In \cref{SecEvenFiltration} we extend the notion
of the even filtration to modules over $\mathbb{E}_{\infty}$-rings
and so define a functor $(A,M) \mapsto \operatorname{fil}^\star_{\mathrm{ev}/A}M$,
which recovers the above definition when $M=A$. With this
notation it follows from the results in \Cref{SecEvenFiltration} and
\cite{Cmot}
that
\[
\operatorname{fil}^{\star}_{\mathrm{ev}/\mathbb{S}} M \simeq \nu(M),
\]
for any bounded below spectrum $M$ (or, more generally, for any $\mathrm{MU}$-complete spectrum $M$). It is then special to the scenario
where $\mathrm{MU}_*A$ is even that we have the middle equivalence
in the string
\[
\operatorname{fil}^\star_\mathrm{ev} A = \operatorname{fil}^\star_{\mathrm{ev}/A}A \simeq \operatorname{fil}^\star_{\mathrm{ev}/\mathbb{S}}A
\simeq \nu(A).
\]
In general, filtered modules over $\operatorname{fil}^{\star}_\mathrm{ev} A$ can be viewed as a deformation of the category of $A$-modules,
and the functor $\operatorname{fil}^\star_{\mathrm{ev}/A}:\mathrm{Mod}_A \to \mathrm{FilMod}_{\operatorname{fil}^\star_\mathrm{ev} A}$ associates a natural deformation to any $A$-module.
\end{remark}
Returning to topological Hochschild homology, we recall that the $\mathrm{THH}$ of an $\mathbb{E}_\infty$-ring is naturally a cyclotomic $\mathbb{E}_\infty$-ring \cite{NikolausScholze}. In other words, there is an ${\mathrm{S}^1}$-action, allowing us to form $\mathrm{TC}^{-}(R)=\mathrm{THH}(R)^{\mathrm{h}{\mathrm{S}^1}}$ and $\mathrm{TP}(R)=\mathrm{THH}(R)^{\mathrm{t}{\mathrm{S}^1}}$, together with a Frobenius $\varphi$ at each prime $p$, which Nikolaus--Scholze observed allows one to define $p$-completed $\mathrm{TC}(R)$ by the formula
\[
\cpl{\mathrm{TC}(R)}_p \simeq \operatorname*{fib}\left(\cpl{\mathrm{TC}^{-}(R)}_p \lblto{\varphi^{\mathrm{h}{\mathrm{S}^1}}-\mathrm{can}} \cpl{\mathrm{TP}(R)}_p\right),
\]
at least when $R$ is connective \cite{NikolausScholze}.
For $R$ a $p$-complete $p$-quasisyntomic discrete ring, Bhatt--Morrow--Scholze defined motivic filtrations on not just $\cpl{\mathrm{THH}(R)}_p$, but also $\cpl{\mathrm{TC}^{-}(R)}_p$, $\cpl{\mathrm{TP}(R)}_p$, and $\cpl{\mathrm{TC}(R)}_p$. In \Cref{SecComparison}, we prove that variants of the even filtration can be used to recover all of these BMS filtrations. The reader should look to \Cref{SecComparison} for a complete list of our comparison results; here, we explain as an example the result for $p$-completed $\mathrm{TC}$:
\begin{definition}
\label{in--ev--TC}
We say that a cyclotomic $\mathbb{E}_\infty$-ring is \emph{even} if its underlying $\mathbb{E}_\infty$-ring is even. Given any connective cyclotomic $\mathbb{E}_\infty$-ring $R$, we define $\operatorname{fil}^{\star}_{\mathrm{ev},p,\mathrm{TC}}\mathrm{THH}(R)$ to be the limit, over all cyclotomic $\mathbb{E}_\infty$-ring maps $\mathrm{THH}(R) \to B$ for which $B$ is even and $p$-complete, of
\[
\operatorname*{fib}\left(\tau_{\ge 2\star}\cpl{(B^{\mathrm{h}{\mathrm{S}^1}})}_p \lblto{\varphi^{\mathrm{h}{\mathrm{S}^1}}-\mathrm{can}} \tau_{\ge 2\star} \cpl{(B^{\mathrm{t}{\mathrm{S}^1}})}_p\right).
\]
\end{definition}
\begin{theorem}
Let $R$ be a $p$-complete $p$-quasisyntomic commutative ring. Then
\[
\operatorname{fil}^{\star}_{\mathrm{BMS}}\cpl{\mathrm{TC}(R)}_p \simeq \operatorname{fil}^{\star}_{\mathrm{ev},p,\mathrm{TC}} \mathrm{THH}(R).
\]
\end{theorem}
Finally, the even filtration may also be used to recover constructions in classical algebra. Namely, for $k \to R$ a map of discrete commutative rings, the Hochschild--Kostant--Rosenberg (HKR) filtration on the relative Hochschild homology $\mathrm{HH}(R/k)$ is a classical analog of the BMS filtration on $\mathrm{THH}(R)$. In the quasi-lci setting, the HKR filtration also turns out to be a special case of the even filtration:
\begin{theorem} \label{introHKRcomparison}
Let $k \to R$ be a map of discrete commutative rings such that the algebraic cotangent complex $\L^\mathrm{alg}_{R/k}$ has Tor-amplitude contained in $[0,1]$. Then
\[
\operatorname{fil}^\star_{\mathrm{ev}} \mathrm{HH}(R/k) \simeq \operatorname{fil}^\star_{\mathrm{HKR}} \mathrm{HH}(R/k).
\]
\end{theorem}
Our proofs of the above comparison theorems rely on descent properties of $\operatorname{fil}^\star_{\mathrm{ev}}$, studied in \Cref{SecEvenFiltration}. In particular, we identify certain maps of $\mathbb{E}_\infty$-rings $A \to B$, which we call \emph{evenly faithfully flat} (\emph{eff}), along which there is an identification $\operatorname{fil}_{\mathrm{ev}}^{\star}(A) \simeq \lim_{\Delta}(\operatorname{fil}_{\mathrm{ev}}^{\star}(B^{\otimes_{A} \bullet+1}))$.
\subsection{$\mathrm{THH}$ of chromatically quasisyntomic rings}
So far, we have discussed theorems showing that the even filtration recovers known filtrations. The ubiquity of these results suggests that there may be broader, unstudied contexts in which the even filtration is a useful tool. Here, we introduce the following class of $\mathbb{E}_\infty$-rings $R$ for which the even filtration on $\mathrm{THH}(R)$ can be controlled:
\begin{definition}
A connective $\mathbb{E}_\infty$-ring $R$ is \emph{chromatically quasisyntomic} if $\mathrm{MU}_*R$ is even, has bounded $p$-power torsion for all primes $p$, and (when considered as an ungraded commutative ring) has algebraic cotangent complex $\L^{\mathrm{alg}}_{\mathrm{MU}_*R}$ with Tor-amplitude contained in $[0,1]$.
\end{definition}
\begin{example}
If $R$ is a discrete commutative ring with bounded $p$-power torsion for each prime $p$ and such that the cotangent complex $\L^{\mathrm{alg}}_{R}$ has Tor-amplitude contained in $[0,1]$, then $R$ is chromatically quasisyntomic. This is because $\mathrm{MU}_*R$ is a polynomial algebra $R[b_1,b_2,\cdots]$, by complex orientation theory.
\end{example}
\begin{example}
The $\mathbb{E}_\infty$-ring spectra $\mathbb{S}$, $\mathrm{MU}$, $\mathrm{ku}$, $\mathrm{ko}$, and $\mathrm{tmf}$ are all chromatically quasisyntomic. Indeed, if $R$ is any of these ring spectra, then $\mathrm{MU}_*R$ is a polynomial $\mathbb{Z}$-algebra concentrated in even degrees. The Adams summand $\ell$ is also chromatically quasisyntomic, with $\mathrm{MU}_*\ell$ a polynomial $\mathbb{Z}_{(p)}$-algebra.
\end{example}
\begin{definition}
\label{in--xq--filmot}
For $R$ a chromatically quasisyntomic $\mathbb{E}_\infty$-ring, we define:
\begin{itemize}
\item $\operatorname{fil}_{\mathrm{mot}}^{\star} \mathrm{THH}(R)$ to be $\operatorname{fil}_{\mathrm{ev}}^{\star} \mathrm{THH}(R)$;
\item $\operatorname{fil}_{\mathrm{mot}}^{\star} \mathrm{TC}^{-}(R)$ to be the limit, over all ${\mathrm{S}^1}$-equivariant $\mathbb{E}_\infty$-ring\footnote{Here ``${\mathrm{S}^1}$-equivariant $\mathbb{E}_\infty$-ring'' refers to a local system of $\mathbb{E}_\infty$-rings over $\mathrm{B}{\mathrm{S}^1}$, not a more sophisticated notion of genuine equivariant homotopy theory.} maps $\mathrm{THH}(R) \to B$ such that the non-equivariant ring underlying $B$ is even, of $\tau_{\ge 2\star}(B^{\mathrm{h}{\mathrm{S}^1}})$;
\item $\operatorname{fil}_{\mathrm{mot}}^{\star} \mathrm{TP}(R)$ to be the limit, over all ${\mathrm{S}^1}$-equivariant $\mathbb{E}_\infty$-ring maps $\mathrm{THH}(R) \to B$ such that the non-equivariant ring underlying $B$ is even, of $\tau_{\ge 2\star}(B^{\mathrm{t}{\mathrm{S}^1}})$;
\item $\operatorname{fil}^{\star}_{\mathrm{mot}} \cpl{\mathrm{TC}(R)}_p$ to be $\operatorname{fil}_{\mathrm{ev},p,\mathrm{TC}}^{\star} \mathrm{THH}(R)$ (as in \cref{in--ev--TC}).
\end{itemize}
These \emph{motivic filtrations} lead to \emph{motivic spectral sequences} for $\mathrm{THH}_*(R)$, $\mathrm{TC}^{-}_*(R)$, $\mathrm{TP}_*(R)$, and $\pi_*\left(\cpl{\mathrm{TC}(R)}_p\right)$, respectively. In analogy with the discrete case, we call the associated graded of $\operatorname{fil}^{\star}_{\mathrm{mot}}\mathrm{TP}(R)$ the \emph{prismatic cohomology} of $R$, and we call the associated graded of $\operatorname{fil}_{\mathrm{mot}}^{\star}\cpl{\mathrm{TC}(R)}_p$ the \emph{syntomic cohomology} of $R$.\footnote{Beware that, in our terminology, the syntomic cohomology of $R$ depends only on the $p$-completion of $R$ (for the implicit prime $p$). When $R$ is discrete, this recovers what is called the ``syntomic cohomology of $\mathrm{Spf}(R)$'' in \cite[\textsection 7.4]{BL}, as opposed to the ``syntomic cohomology of $\mathrm{Spec}(R)$'' defined in \cite[\textsection 8.4]{BL}.}
\end{definition}
In this context, we prove the following results:
\begin{theorem}
\label{motConv}
The motivic filtrations of \cref{in--xq--filmot} converge: that is, for $R$ a chromatically quasisyntomic $\mathbb{E}_\infty$-ring, the colimits of the filtered diagrams $\operatorname{fil}_{\mathrm{mot}}^{\star} \mathrm{THH}(R)$, $\operatorname{fil}_{\mathrm{mot}}^{\star} \mathrm{TC}^{-}(R)$, $\operatorname{fil}_{\mathrm{mot}}^{\star} \mathrm{TP}(R)$, and $\operatorname{fil}_{\mathrm{mot}}^{\star} \cpl{\mathrm{TC}(R)}_p$ are $\mathrm{THH}(R)$, $\mathrm{TC}^{-}(R)$, $\mathrm{TP}(R)$, and $\cpl{\mathrm{TC}(R)}_p$, respectively.
\end{theorem}
\begin{theorem}
\label{filteredFrob}
Let $R$ be a chromatically quasisyntomic $\mathbb{E}_\infty$-ring. Then, for each prime number $p$, the Nikolaus--Scholze Frobenius
\[
\varphi:\cpl{\mathrm{TC}^{-}(R)}_p \to \cpl{\mathrm{TP}(R)}_p
\]
refines to a natural map
\[
\varphi:\operatorname{fil}_{\mathrm{mot}}^{\star} \cpl{\mathrm{TC}^{-}(R)}_p \to \operatorname{fil}_{\mathrm{mot}}^{\star} \cpl{\mathrm{TP}(R)}_p.
\]
The same is true of the canonical map between the same objects, and $\operatorname{fil}^{\star}_{\mathrm{mot}} \cpl{\mathrm{TC}(R)}_p$ can be computed as the equalizer of the filtered Frobenius and canonical maps.
\end{theorem}
\begin{remark}
Our proof of \Cref{filteredFrob} is not formal, and in particular crucially uses the hypothesis that $R$ is chromatically quasisyntomic. A key ingredient is a result of Steve Wilson, which states, for each positive integer $i$ and even ring spectrum $A$, that $A_*\Omega^{\infty} \Sigma^{2i} \mathrm{MU}$ is an even, polynomial $A_*$-algebra. In \Cref{WilsonAppendix}, we enhance Wilson's result by proving that, for at least some even $\mathbb{E}_\infty$-rings $A$, $\mathrm{THH}$ relative to $A \otimes \Sigma^{\infty}_+\Omega^{\infty} \Sigma^{2i} \mathrm{MU}$ carries a cyclotomic structure. In other words, there is a dashed arrow and commutative diagram of ${\mathrm{S}^1}$-equivariant $\mathbb{E}_\infty$-rings
\[
\begin{tikzcd}
\mathrm{THH}(A \otimes \Sigma^{\infty}_+\Omega^{\infty} \Sigma^{2i} \mathrm{MU}) \arrow{r}{\varphi} \arrow{d}{\pi} & \mathrm{THH}(A \otimes \Sigma^{\infty}_+\Omega^{\infty} \Sigma^{2i} \mathrm{MU}) ^{\mathrm{t}\mathrm{C}_p} \arrow{d}{\pi^{\mathrm{t}\mathrm{C}_p}}\\
A \otimes \Sigma^{\infty}_+\Omega^{\infty} \Sigma^{2i} \mathrm{MU} \arrow[dashed]{r} & (A \otimes \Sigma^{\infty}_+\Omega^{\infty} \Sigma^{2i} \mathrm{MU})^{\mathrm{t}\mathrm{C}_p}.
\end{tikzcd}
\]
\end{remark}
It would be interesting to know if any analog of \Cref{filteredFrob} holds for a broader class of $\mathbb{E}_\infty$-rings $R$.
\begin{remark}
Concretely, given a chromatically quasisyntomic $\mathbb{E}_\infty$-ring $R$, one can compute the motivic spectral sequence for $\mathrm{THH}_*(R)$ by finding an $\mathbb{E}_\infty$-$\mathrm{MU}$-algebra map $S \to \mathrm{MU} \otimes R$ with the following properties:
\begin{enumerate}
\item $\pi_*S$ is a polynomial $\mathrm{MU}_*$-algebra, with polynomial generators in even degrees.
\item The map $\pi_*S \to \mathrm{MU}_*R$ is surjective.
\end{enumerate}
The motivic filtration will then be given by descent along the composite
\[
\mathrm{THH}(R) \to \mathrm{THH}(R) \otimes \mathrm{MU} \simeq \mathrm{THH}(R \otimes \mathrm{MU}/\mathrm{MU}) \to \mathrm{THH}(R \otimes \mathrm{MU}/S).
\]
\end{remark}
\begin{remark}
Suppose that $R$ is a chromatically quasisyntomic $\mathbb{E}_\infty$-ring spectrum such that $\pi_0R$ is also chromatically quasisyntomic. Given any filtration on $\cpl{\mathrm{K}^{\mathrm{alg}}(\pi_0R)}_p$ compatible with the motivic filtration on $\cpl{\mathrm{TC}(\pi_0R)}_p$, one can define a filtration on $\cpl{\mathrm{K}^{\mathrm{alg}}(R)}_p$ by pullback \cite[Theorem 0.0.2]{DGM}. We thus expect that, by mixing Voevodsky's filtration on the algebraic $\mathrm{K}$-theory of discrete rings with our motivic filtration on $\mathrm{TC}$ of $\mathbb{E}_\infty$-rings, one may obtain a motivic spectral sequence for the algebraic $\mathrm{K}$-theory of many chromatically quasisyntomic ring spectra.
\end{remark}
\begin{remark} \label{number-ring-example}
Let $\mathcal{O}_K$ be the ring of integers in a local number field, and fix a choice of uniformizer $\pi$. The works \cite{BMS,LiuWang,NikolausKrause,Zpn} study $\cpl{\mathrm{THH}(\mathcal{O}_K)}_p$ and $\cpl{\mathrm{TC}(\mathcal{O}_K)}_p$ by ($p$-completed) descent along the map
\[\mathrm{THH}(\mathcal{O}_K) \to \mathrm{THH}(\mathcal{O}_K/\mathbb{S}[\pi]),\]
which is a map of cyclotomic $\mathbb{E}_\infty$-ring spectra.
It follows from our work here (specifically, from the fact that $\mathrm{THH}(\Sigma^{\infty}_+ \mathbb{N}) \to \Sigma^{\infty}_+ \mathbb{N}$ is evenly free) that the descent filtration so obtained agrees with the even filtration on $\cpl{\mathrm{TC}(\mathcal{O}_K)}_p$, and hence with the BMS filtration on $\cpl{\mathrm{TC}(\mathcal{O}_K)}_p$. A comparison of the descent and BMS filtrations was independently obtained by Antieau--Krause--Nikolaus, and will appear in their announced work \cite{Zpn}.
\end{remark}
The work of Liu--Wang referenced in \Cref{number-ring-example} shows that the motivic filtration allows for the practical computation of $\mathrm{TC}$ of number rings \cite{LiuWang}. Additionally, works such as \cite{Zpn, Sulyma} and \cite[\textsection 10]{RecentTHH} show that the motivic filtration can be used to make precise calculations of the algebraic $K$-theories of commutative rings with lci singularities.
Similarly, we expect the motivic filtration to be a useful computational tool in higher chromatic contexts. In \cite{DavidLee} David Jongwon Lee completely computes $\mathrm{THH}_*(\mathrm{ku})$ by use of the motivic spectral sequence. No complete computation of $\mathrm{THH}_*(\mathrm{ku})$ previously existed in the literature, though tremendous progress was achieved by Ausoni \cite{AusoniTHH} and Angeltveit--Hill--Lawson \cite{THHlko}, building on work of McClure--Staffeldt \cite{McClureStaffeldt} and Angeltveit--Rognes \cite{AngeltveitRognes}. In the final section of the paper, we explain another computational application of the motivic spectral sequence.
\subsection{The motivic spectral sequence for $V(1)_\star \mathrm{TC}(\ell)$}
In \Cref{SecAdamsSummand} we will study the connective Adams summand $\ell$ of $\mathrm{ku}_{(p)}$ at an arbitrary prime $p \ge 2$. In seminal work, Ausoni and Rognes computed $V(1)_* \mathrm{TC}(\ell)=\pi_* \left(\mathrm{TC}(\ell) / (p,v_1)\right)$ for primes $p \ge 5$ \cite{AusoniRognes}. We demonstrate the computability of syntomic cohomology by giving an independent proof of their result. Furthermore, our methods just as easily compute $V(1)_* \mathrm{TC}(\ell)$ at the prime $p=3$, which had not previously been computed. Our main theorem is the following:
\begin{theorem} \label{thm:introSynell}
For any prime $p \ge 2$, the mod $(p,v_1)$ syntomic cohomology of $\ell$ is a free $\mathbb{F}_p[v_2]$-module on finitely many generators. A complete list of module generators is given by:
\begin{enumerate}
\item $ \{1\}$, in Adams weight $0$ and degree $0$.
\item $\{\partial,t^{d}\lambda_1,t^{dp}\lambda_2\text{ }|\text{ }0 \le d < p\}$, in Adams weight $1$. Here, $|\partial|=-1$, $|t^d\lambda_1|=2p-2d-1$, and $|t^{dp}\lambda_2|=2p^2-2dp-1$.
\item $\{t^d\lambda_1\lambda_2, t^{dp} \lambda_1 \lambda_2, \partial \lambda_1,\partial\lambda_2\text{ }|\text{ }0 \le d < p\}$, in Adams weight $2$. Here, $|t^d \lambda_1\lambda_2|=2p^2-2p-2d-2$, $|t^{dp} \lambda_1\lambda_2|=2p^2-2p-2dp-2$, $|\partial \lambda_1|=2p-2$, and $|\partial\lambda_2|=2p^2-2$.
\item $\{\partial\lambda_1\lambda_2\}$, in Adams weight $3$ and degree $2p^2+2p-3$.
\end{enumerate}
\end{theorem}
Here we have used the following convention:
\begin{definition}[Adams weight] \label{convention-adams-grading}
If $M^\star$ is a graded object, we say that an element of
$\pi_nM^a$ has \emph{Adams weight }$2a-n$ and degree $n$.
\end{definition}
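For instance, with this convention the class $t^{d}\lambda_1$ of \Cref{thm:introSynell}, of degree $n = 2p-2d-1$ and Adams weight $1$, lies in $\pi_{2p-2d-1}M^{p-d}$: indeed,
\[
2a - n = 2(p-d) - (2p-2d-1) = 1.
\]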
When $p \ge 3$, so that $V(1)=\mathbb{S}/(p,v_1)$ exists as a spectrum, the mod $(p,v_1)$ syntomic cohomology described by \Cref{thm:introSynell} is the $\mathrm{E}_2$-page of a motivic spectral sequence converging to $V(1)_* \mathrm{TC}(\ell)$. Below, we draw a picture of the $\mathrm{E}_2$-page of this spectral sequence for $p=5$, with the horizontal axis recording degree and the vertical axis recording Adams weight. The Adams grading convention means that $d_r$ differentials decrease degree by $1$ and increase Adams weight by $r$.
\begin{comment}
\begin{sseqdata}[name=ellTCmot5, classes={fill, inner sep=0cm}, class labels = {below}, xscale=0.2, yscale=1, yrange={0}{3}, x tick step = 2, Adams grading]
\class(0,0)
\classoptions["1"](0,0)
\class(-1,1)
\classoptions["\partial"](-1,1)
\class["\lambda_1"{below right},yshift=-0.1cm,xshift=0.025cm](9,1)
\class["t\lambda_1"](7,1)
\class["t^2\lambda_1 "{above}](5,1)
\class[" t^3\lambda_1"](3,1)
\class[" t^4\lambda_1"{above}](1,1)
\class["\lambda_2"](49,1)
\class["t^5\lambda_2"](39,1)
\class["t^{10}\lambda_2"](29,1)
\class["t^{15}\lambda_2"](19,1)
\class["t^{20}\lambda_2"{above right}, yshift=0.1cm, xshift=-0.025cm](9,1)
\class["\lambda_1\lambda_2"{above}](58,2)
\class["t\lambda_1\lambda_2"](56,2)
\class["t^2\lambda_1\lambda_2"{above}](54,2)
\class["t^3\lambda_1\lambda_2"](52,2)
\class["t^4\lambda_1\lambda_2"{above}](50,2)
\class["\partial \lambda_1 \lambda_2"{above}](57,3)
\class["t^{10}\lambda_1\lambda_2"{above}](38,2)
\class["t^{15}\lambda_1\lambda_2"{above}](28,2)
\class["t^{20}\lambda_1\lambda_2"{above}](18,2)
\class["\partial\lambda_1"{above}](8,2)
\class["t^5\lambda_1\lambda_2"{below left},yshift=-0.1cm,xshift=0.025cm](48,2)
\class["\partial \lambda_2"{above left}, yshift=0.1cm, xshift=-0.025cm](48,2)
\end{sseqdata}
\[
\tiny
\printpage[name=ellTCmot5, grid=go, title style ={font=\footnotesize},title = {$\mathbb{F}_5[v_2]$-module generators of the $\mathrm{E}_2$-page of the motivic spectral sequence converging to $V(1)_* \mathrm{TC}(\ell),$ $p=5$}]
\normalsize
\]
\end{comment}
\includegraphics[scale=1,trim={4cm 18.5cm 3.5cm 2.7cm},clip]{Figure1.pdf}
The reader may notice the similarity between the picture of the motivic spectral sequence for $V(1)_* \mathrm{TC}(\ell)$, which we rigorously define here, and Rognes' conjectural picture of a motivic spectral sequence converging to $V(1)_* \mathrm{K}^{\mathrm{alg}}(\ell_p^{\wedge})$ \cite[Example 5.2]{RognesICM}. The rotational symmetry present in the above picture is indicative of Rognes' conjectured Tate--Poitou duality, which we will explore in future work.
The $\mathrm{E}_2$-page of the motivic spectral sequence for $V(1)_* \mathrm{TC}(\ell)$ is concentrated on four horizontal lines, with classes of odd degree on the $1$- and $3$-lines and classes of even degree on the $0$- and $2$-lines. It follows by parity considerations that the only possible differentials are $d_3$'s from the $0$-line to the $3$-line. On the $0$-line is a copy of $\mathbb{F}_p[v_2]$, where the degree of $v_2$ is $2p^2-2$. A $d_3$ differential off of the $0$-line would therefore have target of degree $1$ less than a multiple of $2p^2-2$. However, the $3$-line is concentrated in degrees $2p-1$ more than a multiple of $2p^2-2$. No differentials are possible, and we conclude the following corollary:
\begin{corollary} \label{cor:introTCell} Let $p \ge 3$, so that $V(1)=\mathbb{S}/(p,v_1)$ exists as a spectrum. Then the motivic spectral sequence for $V(1)_* \mathrm{TC}(\ell)$ degenerates at the $\mathrm{E}_2$-page.
\end{corollary}
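The degree count behind \Cref{cor:introTCell} is worth recording explicitly, with all congruences taken modulo $2p^2-2$: a $d_3$ off the $0$-line would have source $v_2^k$ and hence target in degree
\[
k(2p^2-2) - 1 \equiv -1,
\]
while the $3$-line is concentrated in degrees
\[
(2p^2+2p-3) + j(2p^2-2) \equiv 2p-1,
\]
and $2p-1 \not\equiv -1 \pmod{2p^2-2}$ because $0 < 2p < 2p^2-2$.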
At primes $p \ge 5$, $v_2$ is a self map of $V(1)$ and the motivic spectral sequence for $V(1)_* \mathrm{TC}(\ell)$ is one of $\mathbb{F}_p[v_2]$-modules. We note that there are no possible $\mathbb{F}_p[v_2]$-module extension problems, and conclude that $V(1)_{*} \mathrm{TC}(\ell)$ is a free $\mathbb{F}_p[v_2]$-module on generators with the same names as those in \Cref{thm:introSynell}. At the prime $p=3$, $v_2^9$ is a self map of $V(1)$ \cite{v29exists}, and we may similarly conclude that $V(1)_{*} \mathrm{TC}(\ell)$ is a free $\mathbb{F}_3[v_2^9]$-module.
\begin{remark}
It would be excellent to see the results of \cite{EllipticTC} or \cite{MoravaKTC} similarly recovered via motivic spectral sequences. These works compute the topological cyclic homologies of non-commutative ring spectra, which presents an obvious complication, but see \Cref{example:BPn}.
\end{remark}
Finally, we explain in \Cref{subsec:finalsyntomic} how \Cref{thm:introSynell} allows us to deduce the following two qualitative results:
\begin{theorem} \label{thm:introLQell}
For any prime $p \ge 2$ and type $3$ $p$-local finite complex $F$, $F_* \mathrm{TC}(\ell)$ is finite. In particular,
\[\mathrm{TC}(\ell)_{(p)} \to L_2^{f} \mathrm{TC}(\ell)_{(p)}\]
is a $\pi_*$-iso for $* \gg 0$.
\end{theorem}
\begin{theorem} \label{thm:introTelescope}
The telescope conjecture is true of $\mathrm{TC}(\ell)$. In other words, the natural map
\[L_2^{f} \mathrm{TC}(\ell) \to L_2 \mathrm{TC}(\ell)\]
is an equivalence.
\end{theorem}
\Cref{thm:introLQell} was previously proved in \cite{HahnWilson}, while \Cref{thm:introTelescope} is new to this paper. Of course, a natural next question is to compute the syntomic cohomology of $\ell$ without modding out by $p$ or $v_1$. The authors and Mike Hopkins will have more to say about the prismatic and syntomic cohomologies of $\ell$, $\mathrm{ku}$, $\mathrm{ko}$, and $\mathrm{MU}$ in future work. We note that the prismatic cohomologies of $\mathbb{E}_\infty$-ring spectra may be of interest even to those studying prismatic cohomology of discrete rings. Specifically, the sequence of $\mathbb{E}_\infty$-ring maps $\mathbb{S} \to \mathrm{MU} \to \mathrm{ku} \to \mathbb{Z}$ suggests a natural factorization of known connections between $\mathrm{WCart}$ and the moduli stack of formal groups.
\subsection{Acknowledgments}
The authors thank Gabe Angelini--Knoll, Bhargav Bhatt, Sanath Devalapurkar, Lars Hesselholt, Mike Hopkins, Achim Krause, Markus Land, David Jongwon Lee, Ishan Levy, Jacob Lurie, Haynes Miller, Andrew Senger, Guozhen Wang, and Allen Yuan for helpful conversations. We would particularly like to acknowledge Akhil Mathew and Thomas Nikolaus for teaching us a great deal about cyclotomic spectra. We thank John Rognes, both for his inspiring work and conjectures and for his comments on an earlier draft. We thank Ben Antieau both for his kind enthusiasm and for many helpful comments and suggestions on an earlier draft.
During the course of this work, the second author was supported by NSF grant DMS-2103152.
\subsection{Conventions} \label{ConventionsSection}
\begin{enumerate}[leftmargin=*]
\item $\mathrm{Spc}$ denotes the ($\infty$-)category of spaces; $\mathrm{Spt}$ denotes the category of spectra; $\mathrm{CAlg}$ denotes the category of $\mathbb{E}_\infty$-rings; $\mathrm{Mod}$ denotes the category of pairs $(A,M)$ where $A$ is an $\mathbb{E}_\infty$-ring and $M$ is an $A$-module.
\item For a prime number $p$, $\mathrm{CycSpt}_p$ denotes the category of $p$-typical cyclotomic spectra (following the same conventions as in \cite[Definition 3.2.1]{HahnWilson}, in particular including the hypothesis of $p$-completeness); $\mathrm{CycCAlg}_p$ denotes the category of $p$-typical cyclotomic $\mathbb{E}_\infty$-rings, i.e. commutative algebra objects in $\mathrm{CycSpt}_p$; and $\mathrm{CycMod}_p$ denotes the category of pairs $(A,M)$ where $A$ is a $p$-typical cyclotomic $\mathbb{E}_\infty$-ring and $M$ is an $A$-module in $\mathrm{CycSpt}_p$.
\item $\mathrm{FilSpt}$ denotes the category of filtered spectra, i.e. the category of functors from the poset $(\mathbb{Z},\ge)$ to $\mathrm{Spt}$; we will generally denote a filtered spectrum by a symbol of the form $\operatorname{fil}^\star_? X$, referring to a diagram of spectra
\[
\cdots \to \operatorname{fil}^2_? X \to \operatorname{fil}^1_? X \to \operatorname{fil}^0_? X \to \operatorname{fil}^{-1}_? X \to \operatorname{fil}^{-2}_? X \to \cdots.
\]
Given a filtered spectrum $\operatorname{fil}^\star_? X$, we refer to $\operatorname*{colim}_{n \to \infty} \operatorname{fil}^{-n}_? X$ as its \emph{underlying object}, and we say $\operatorname{fil}^\star_? X$ is \emph{complete} if $\lim_{n \to \infty} \operatorname{fil}^n_? X \simeq 0$. The category $\mathrm{BiFilSpt}$ of bifiltered spectra is defined similarly, using instead the product poset $(\num{Z},\ge) \times (\num{Z},\ge)$; we will generally denote a bifiltered spectrum by a symbol of the form $\operatorname{fil}^\filledsquare_{??} \operatorname{fil}^\star_? X$, where again the $\filledsquare$ and $\star$ stand for the two integer indices.
\item $\mathrm{FilCAlg}$ denotes the category of filtered $\mathbb{E}_\infty$-rings, i.e. commutative algebra objects in $\mathrm{FilSpt}$ with respect to the Day convolution symmetric monoidal structure (determined by addition on $\num{Z}$). For $\operatorname{fil}^\star_? A$ a filtered $\mathbb{E}_\infty$-ring, $\mathrm{FilMod}_{\operatorname{fil}^\star_? A}$ denotes the category of modules over $\operatorname{fil}^\star_? A$ in $\mathrm{FilSpt}$.
\item $\mathrm{GrSpt}$ denotes the category of graded spectra, i.e. the category of functors from the discrete set $\num{Z}$ to $\mathrm{Spt}$. A filtered spectrum $\operatorname{fil}^\star_? X$ has an \emph{associated graded} object $\operatorname{gr}^*_? X = \{\operatorname{gr}^n_? X\}_{n \in \num{Z}}$, given by the formula
\[
\operatorname{gr}^n_? X \simeq \operatorname*{cofib}(\operatorname{fil}^{n+1}_? X \to \operatorname{fil}^n_? X).
\]
We recall that a map of complete filtered spectra is an equivalence if and only if the induced map of associated graded objects is.
\item Let $\kappa_1,\kappa_2$ denote the two smallest strongly inaccessible cardinals.\footnote{We assume that these cardinals exist. This could be avoided without affecting the main results of the paper by making minor modifications to the constructions that depend on this assumption.} A mathematical object is \emph{small} (resp. \emph{large}) if it resides in the universe of sets of rank $< \kappa_1$ (resp. $< \kappa_2$). By default, all objects besides categories are assumed to be small. For example, in the notation set above, $\mathrm{Spc}$ is the large $\infty$-category of small spaces. When we allow large objects into the discussion, we will use a subscript $\kappa_2$ to denote the corresponding category; e.g. $\mathrm{Spc}_{\kappa_2}$ denotes the $\infty$-category of possibly large spaces.
\end{enumerate}
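As a standard illustration of the filtered conventions above (a well-known example, included only for orientation): for a spectrum $X$, the Postnikov filtration $\operatorname{fil}^\star X = \tau_{\ge \star} X$ is a complete filtered spectrum with underlying object $X$ and associated graded
\[
\operatorname{gr}^n X \simeq \Sigma^n \mathrm{H}\pi_n X,
\]
completeness following from the fact that $\pi_k(\tau_{\ge n} X) = 0$ for all $n > k$.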
\subsection{Preliminaries}
We begin by recording a few useful facts about the following homotopy types, which were first studied by Steve Wilson in his PhD thesis \cite{WilsonThesis}:
\begin{definition}
For each integer $i>0$, the $(2i)$th \emph{Wilson space} is defined to be
\[W_{2i}=\Omega^{\infty} \Sigma^{2i} \mathrm{MU}.\]
We define $W_0$ to be the pullback
\[
\begin{tikzcd}
W_0 \arrow{r} \arrow{d} & \Omega^{\infty} \mathrm{MU} \arrow{d} \\
\mathbb{N} \arrow{r} & \mathbb{Z}.
\end{tikzcd}
\]
Here, the right-hand vertical map is the canonical map $\Omega^{\infty} \mathrm{MU} \to \Omega^{\infty} \tau_{\le 0} \mathrm{MU}$, and the bottom horizontal map is the inclusion of the non-negative integers into the integers.
\end{definition}
\begin{theorem}[Wilson] \label{WilsonThesisTheorem}
Suppose $R$ is an even $\mathbb{E}_{\infty}$-ring. Then, for each integer $i \ge 0$, $R_*W_{2i}$ is a polynomial $R_*$ algebra, with polynomial generators in even degrees.
\end{theorem}
\begin{proof}
In \cite[Theorem 3.3]{WilsonThesis}, Steve Wilson proved for every $i>0$ that $\mathrm{H}_*(W_{2i};\mathbb{Z})$ is a polynomial $\mathbb{Z}$-algebra, with polynomial generators in even degrees. It follows by the Atiyah--Hirzebruch spectral sequence that the same is true of $R_*W_{2i}$.
It remains only to check the case $i=0$. For this, we recall that Wilson proved that $\mathrm{H}_*(\Omega^{\infty}_0 \mathrm{MU};\mathbb{Z})$ is an even polynomial $\mathbb{Z}$-algebra, where $\Omega^{\infty}_0 \mathrm{MU}$ is a path component of $\Omega^{\infty} \mathrm{MU}$. To finish, it suffices to prove that $W_0 \simeq \mathbb{N} \times \Omega^{\infty}_0 \mathrm{MU}$ in the category of $\mathbb{E}_2$-spaces. In other words, it will suffice to show that the defining $\mathbb{E}_{\infty}$-space map $W_0 \to \mathbb{N}$ is split by an $\mathbb{E}_2$-space map $\mathbb{N} \to W_0$. Indeed, to construct such an $\mathbb{E}_2$-space map it suffices to produce a double loop map $\mathbb{Z} \to \Omega^{\infty} \mathrm{MU}$ such that the composite $\mathbb{Z} \to \Omega^{\infty} \mathrm{MU} \to \mathbb{Z}$ is the identity. Such a map can be obtained by taking $\Omega^2$ of the canonical complex orientation $\mathbb{CP}^{\infty} \to \Omega^{\infty} \Sigma^2 \mathrm{MU}$.
\end{proof}
Since each $W_{2i}$ is an $\mathbb{E}_{\infty}$-space, each $\Sigma^\infty_+ W_{2i}$ is an $\mathbb{E}_{\infty}$-ring spectrum. We will need the following fact about these $\mathbb{E}_{\infty}$-rings:
\begin{proposition} \label{prop--Wilson-maps}
Suppose $R$ is a connective, even $\mathbb{E}_{\infty}$-ring, and that $x \in \pi_{2i} R$ is an element in the homotopy groups of $R$. Then there exists an $\mathbb{E}_{\infty}$-ring map
\[f:\Sigma^{\infty}_+ W_{2i} \to R\] such that $x$ is in the image of $\pi_{2i} f$.
\end{proposition}
\Cref{prop--Wilson-maps} will be an immediate consequence of the following lemma, which gives an even cell decomposition of $\Sigma^{\infty}_+ W_{2i}$ in the category of $\mathbb{E}_{\infty}$-ring spectra.
\begin{lemma} \label{Wilson-ring-cell-decomposition}
There exists a sequence of $\mathbb{E}_{\infty}$-ring maps
\[\mathrm{Free}_{\mathbb{E}_{\infty}} (S^{2i})=Y_{2i} \to Y_{2i+2} \to Y_{2i+4} \to \cdots \to \Sigma^{\infty}_+ W_{2i}\] such that:
\begin{enumerate}
\item The filtered colimit of the $Y_{2k}$ is $\Sigma^{\infty}_+ W_{2i}$.
\item Each map $Y_{2k} \to Y_{2k+2}$ is the bottom arrow in a pushout of $\mathbb{E}_{\infty}$-ring spectra
\[
\begin{tikzcd}[column sep = 3cm]
\mathrm{Free}_{\mathbb{E}_{\infty}}(\bigoplus S^{2k+1}) \arrow{r}{\mathrm{Free}_{\mathbb{E}_{\infty}}(\bigoplus S^{2k+1} \to 0)} \arrow{d} & \mathbb{S} \arrow{d} \\
Y_{2k} \arrow{r} & Y_{2k+2}.
\end{tikzcd}
\]
\end{enumerate}
\end{lemma}
\begin{proof}
If $i>0$, this follows by applying $\Sigma^{\infty}_+ \Omega^{\infty}\Sigma^{2i}\left(\--\right)$ to an even cell decomposition of $\mathrm{MU}$ in the category of spectra. When $i=0$ we will need the fact that, if $A \to B$ is a map of
$\mathbb{E}_{\infty}$-rings with $1$-connective cofiber,
and the $\mathbb{E}_{\infty}$-cotangent complex $\mathrm{L}^{\mathbb{E}_{\infty}}_{B/A}$ has a cell structure with cells
$B\otimes Z_i$, each $Z_i$ a sum of $i$-spheres,
then the map factors as
\[
A=A_{-1} \to A_0 \to A_1 \to \cdots \to B,
\]
where $A_i \to A_{i+1} = A_i\otimes_{\mathrm{Free}(\Sigma^{-1}Z_i)}
\mathbb{S}$. We apply this result with $B=\Sigma^{\infty}_+W_0$ and $A$ the free $\mathbb{E}_{\infty}$-ring on a degree $0$ class, mapping to $B$ via an isomorphism on $\pi_0$.
\end{proof}
\begin{proof}[Proof of \Cref{prop--Wilson-maps}]
Let $f_{2i}:\mathrm{Free}_{\mathbb{E}_\infty}(S^{2i}) \to R$ denote the $\mathbb{E}_{\infty}$-ring map adjoint to the map $x:S^{2i} \to R$. Inductively assuming for $k \ge i$ that we have defined a map $f_{2k}:Y_{2k} \to R$, it suffices to prove that we may extend $f_{2k}$ through a map $f_{2k+2}:Y_{2k+2} \to R$. Indeed, the obstructions to doing so lie in an odd homotopy group of $R$, which is trivial by assumption. \qedhere
\end{proof}
\begin{warning}
The map guaranteed to exist by \Cref{prop--Wilson-maps} is not canonical in any sense. In particular, we do not claim any sort of functorial construction. The same warning applies to many of the other constructions in this section, all made via obstruction theory.
\end{warning}
Since each Wilson space $W_{2i}$ is an $\mathbb{E}_{\infty}$-space, any finite product of Wilson spaces has a canonical $\mathbb{E}_{\infty}$-space structure. In the category of $\mathbb{E}_{\infty}$-spaces, a finite product is a finite coproduct, and we will more generally have occasion to consider infinite coproducts:
\begin{definition}
A \emph{weak product of Wilson spaces} is a coproduct, in $\mathbb{E}_{\infty}$-spaces, of some collection $\{W_{2i_j}\}_{j \in J}$ of Wilson spaces.
\end{definition}
\begin{proposition} \label{WilsonEvenPolynomial}
Suppose that $R$ is an even $\mathbb{E}_{\infty}$-ring and that $W$ is a weak product of Wilson spaces. Then $R_*W$ is an even polynomial $R_*$-algebra.
\end{proposition}
\begin{proof}
The coproduct in $\mathbb{E}_{\infty}$-rings is the tensor product.
\end{proof}
\begin{proposition} \label{WilsonSurjectivity}
Suppose that $R$ is a connective, even $\mathbb{E}_{\infty}$-ring. Then there exists a weak product of Wilson spaces $W$ together with an $\mathbb{E}_{\infty}$-ring map
\[f:\Sigma^\infty_+ W \to R\]
such that $\pi_*f$ is surjective.
\end{proposition}
\begin{proof}
For each element $x \in \pi_{2i}R$, \Cref{prop--Wilson-maps} allows us to choose an $\mathbb{E}_{\infty}$-ring map $\Sigma^{\infty}_+W_{2i} \to R$ whose image in $\pi_*$ contains $x$. Taking the coproduct of such maps, over all elements $x$ in the homotopy groups of $R$, yields the result.
\end{proof}
Finally, the following technical result is a key ingredient in our proof that motivic filtrations respect cyclotomic structure:
\begin{theorem} \label{WilsonS1unobstruct}
Let $R$ be an even $\mathbb{E}_{\infty}$-ring with ${\mathrm{S}^1}$-action. Then, for any $i \ge 0$ and any ${\mathrm{S}^1}$-equivariant $\mathbb{E}_{\infty}$-ring map $\mathrm{THH}(\Sigma^{\infty}_+W_{2i}) \to R$, there exists a factorization
\[
\begin{tikzcd}
\mathrm{THH}(\Sigma^{\infty}_+W_{2i}) \arrow{r} \arrow{d}{\pi} & R \\
\Sigma^{\infty}_+ W_{2i}. \arrow{ur}
\end{tikzcd}
\]
Here, $\pi$ is the canonical ${\mathrm{S}^1}$-equivariant projection $\mathrm{THH}(\Sigma^{\infty}_+W_{2i}) \to \Sigma^{\infty}_+W_{2i}$, where $\Sigma^{\infty}_+W_{2i}$ has trivial ${\mathrm{S}^1}$-action.
\end{theorem}
\begin{proof}
We prove by induction on $k \ge i$ that there exist factorizations
\[
\begin{tikzcd}
\mathrm{THH}(Y_{2k}) \arrow{r} \arrow{d} & R \\
Y_{2k}, \arrow[dashed]{ur}
\end{tikzcd}
\]
where the $Y_{2k}$ are as in \Cref{Wilson-ring-cell-decomposition}.
First, we prove this for $k=i$, where the desired diagram is of the form
\[
\begin{tikzcd}
\mathrm{THH}\left(\mathrm{Free}_{\mathbb{E}_{\infty}}(S^{2i})\right) \arrow{d} \arrow{r} & R \\
\mathrm{Free}_{\mathbb{E}_{\infty}}(S^{2i}). \arrow[dashed]{ur}
\end{tikzcd}
\]
By the universal property of $\mathrm{THH}$, this is asking whether a certain non-equivariant map $S^{2i} \to R$ factors through an ${\mathrm{S}^1}$-equivariant map $S^{2i} \to R$. Since $R$ is even, the homotopy fixed point spectral sequence for $\pi_*(R^{\mathrm{h}{\mathrm{S}^1}})$ collapses, and in particular the map $\pi_*(R^{\mathrm{h}{\mathrm{S}^1}}) \to \pi_*(R)$ is surjective.
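For the reader's convenience, we recall the collapse argument explicitly. The spectral sequence in question runs
\[
E_2^{s,t} = \mathrm{H}^s(\mathrm{BS}^1;\pi_t R) \Longrightarrow \pi_{t-s}(R^{\mathrm{h}{\mathrm{S}^1}}).
\]
Since $\mathrm{H}^*(\mathrm{BS}^1;\mathbb{Z}) \cong \mathbb{Z}[x]$ with $|x|=2$ and $R$ is even, the $E_2$-page is concentrated in even total degrees $t-s$, so there is no room for differentials; surjectivity of the edge map $\pi_*(R^{\mathrm{h}{\mathrm{S}^1}}) \to \pi_*(R)$ follows.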
To finish the induction, we need to produce a dashed arrow completing the following diagram:
\[
\begin{tikzcd}
\mathrm{THH}(Y_{2k}) \arrow{d} \arrow{r}& \mathrm{THH}(Y_{2k+2}) \arrow{d} \arrow{r} & R \\
Y_{2k} \arrow{r} \arrow{urr} & Y_{2k+2}. \arrow[dashed]{ur}
\end{tikzcd}
\]
Because the map $Y_{2k} \to Y_{2k+2}$ is a cell attachment, extensions of the given ${\mathrm{S}^1}$-equivariant $\mathbb{E}_{\infty}$-ring map $Y_{2k} \to R$ through $Y_{2k+2}$ are given by nullhomotopies of an ${\mathrm{S}^1}$-equivariant spectrum map
\[\bigoplus S^{2k+1} \to R.\]
We need to show that we can find such a nullhomotopy compatible with a given nullhomotopy of the underlying non-equivariant map
\[\bigoplus S^{2k+1} \to R.\]
Again, this comes down to the facts that $R$ and $R^{\mathrm{h}{\mathrm{S}^1}}$ are even, and in particular that $\pi_*(R^{\mathrm{h}{\mathrm{S}^1}}) \to \pi_*(R)$ is surjective.
\end{proof}
\begin{corollary} \label{WilsonUnobstructed}
Let $R$ be an even $\mathbb{E}_{\infty}$-ring with ${\mathrm{S}^1}$-action and $W$ a weak product of Wilson spaces. Then, for any $\mathbb{E}_{\infty}$-ring map $\mathrm{THH}(\Sigma^{\infty}_+W) \to R$, there exists a factorization
\[
\begin{tikzcd}
\mathrm{THH}(\Sigma^{\infty}_+W) \arrow{r} \arrow{d}{\pi} & R \\
\Sigma^{\infty}_+ W. \arrow{ur}
\end{tikzcd}
\]
Here, $\pi$ is the canonical ${\mathrm{S}^1}$-equivariant projection $\mathrm{THH}(\Sigma^{\infty}_+W) \to \Sigma^{\infty}_+W$, where $\Sigma^{\infty}_+W$ has trivial ${\mathrm{S}^1}$-action.
\end{corollary}
\begin{proof}
Since $\mathrm{THH}(\Sigma^{\infty}_+\--)$ and $\Sigma^{\infty}_+$ both preserve coproducts of $\mathbb{E}_{\infty}$-spaces, this follows immediately from the preceding theorem.
\end{proof}
\subsection{A sufficient supply of cyclotomic bases} \label{cycbaseSection}
In order to understand cyclotomic structure on $\mathrm{THH}$ of discrete rings, it has proven extremely useful to understand cyclotomic structure on Hochschild homology relative to $\Sigma^{\infty}_+ \mathbb{N}$ \cite{LiuWang, Zpn}. As noted for example in \cite{LawsonTate}, it is rarely the case that Hochschild homology relative to an $\mathbb{E}_{\infty}$-ring $A$ carries cyclotomic structure. When it does, we say that $A$ is a cyclotomic base:
\begin{definition} \label{cycbasedef}
A \emph{cyclotomic base} is an $\mathbb{E}_{\infty}$-ring $A$ together with the data of a commutative diagram of ${\mathrm{S}^1}$-equivariant $\mathbb{E}_{\infty}$-rings
\[
\begin{tikzcd}
\mathrm{THH}(A) \arrow{r}{\varphi} \arrow{d}{\pi} & \mathrm{THH}(A)^{\mathrm{t}\mathrm{C}_p} \arrow{d}{\pi^{\mathrm{t}\mathrm{C}_p}} \\
A \arrow{r} & A^{\mathrm{t}\mathrm{C}_p},
\end{tikzcd}
\]
where $\pi:\mathrm{THH}(A) \to A$ is the canonical projection to $A$ with trivial action.
\end{definition}
We warn the reader that the above definition is not quite the same as the one adopted in \cite[Appendix A]{BeilinsonSquare}, but has been discussed in \cite[\textsection 11.1]{BMS} and \cite[Remark 12.1]{LawsonTate}. If $A$ is a cyclotomic base, and $R$ is an $\mathbb{E}_{\infty}$-$A$-algebra, then we may use the formula
\[\mathrm{THH}(R/A) \simeq \mathrm{THH}(R) \otimes_{\mathrm{THH}(A)} A\]
to define the structure of a cyclotomic $\mathbb{E}_{\infty}$-ring on $\mathrm{THH}(R/A)$. By definition, this cyclotomic $\mathbb{E}_{\infty}$-ring structure is compatible with the map $\mathrm{THH}(R) \to \mathrm{THH}(R/A)$.
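In outline (and suppressing equivariance details), the cyclotomic Frobenius on $\mathrm{THH}(R/A)$ is the composite
\[
\mathrm{THH}(R) \otimes_{\mathrm{THH}(A)} A \longrightarrow \mathrm{THH}(R)^{\mathrm{t}\mathrm{C}_p} \otimes_{\mathrm{THH}(A)^{\mathrm{t}\mathrm{C}_p}} A^{\mathrm{t}\mathrm{C}_p} \longrightarrow \left(\mathrm{THH}(R) \otimes_{\mathrm{THH}(A)} A\right)^{\mathrm{t}\mathrm{C}_p},
\]
where the first map combines the Frobenius of $\mathrm{THH}(R)$ with the map $A \to A^{\mathrm{t}\mathrm{C}_p}$ supplied by \Cref{cycbasedef}, and the second comes from the lax symmetric monoidal structure of $(\--)^{\mathrm{t}\mathrm{C}_p}$.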
To date, practically all known examples of cyclotomic bases have been built out of $A=\mathbb{S}$ and $A=\Sigma^{\infty}_+ \mathbb{N}$. In this section, our goal will be to use Wilson spaces to construct many additional examples of cyclotomic bases, by obstruction theory.
\begin{definition}
\label{wi--strongly-even}
A connective $\mathbb{E}_{\infty}$-ring $A$ is said to be \emph{strongly even} if:
\begin{enumerate}
\item There exists a homotopy commutative ring map $\mathrm{MU} \to A$, such that as an $\mathrm{MU}$-module $A$ splits as a direct sum of even suspensions of $\mathrm{MU}$. After localization at any prime $p$, we further require that the composite $\pi_*\mathrm{BP} \to \pi_*\mathrm{MU}_{(p)} \to \pi_*A_{(p)}$ present $\pi_*A_{(p)}$ as a polynomial ring over $\pi_*\mathrm{BP}$.
\item In the category of $\mathbb{E}_{\infty}$-ring spectra, $A$ admits an even cell decomposition.
\end{enumerate}
\end{definition}
For each $i \ge 0$, the $\mathbb{E}_{\infty}$-ring $\Sigma^{\infty}_+W_{2i}$ admits an even $\mathbb{E}_{\infty}$ cell decomposition; this is precisely the statement of \Cref{Wilson-ring-cell-decomposition}. However, for no $i$ is $\Sigma^{\infty}_+ W_{2i}$ strongly even. Indeed, as a spectrum $\Sigma^{\infty}_+W_{2i}$ contains a unit summand equivalent to $\mathbb{S}$, and so $\Sigma^{\infty}_+W_{2i}$ cannot be a wedge of even suspensions of $\mathrm{MU}$. At the end of this section, we will prove that at least one strongly even $\mathbb{E}_{\infty}$-ring exists, by direct construction. First, we list some of the properties enjoyed by strongly even $\mathbb{E}_{\infty}$-rings:
\begin{proposition} \label{MWeff}
Suppose $A$ is strongly even. Then the unit map $\mathbb{S} \to A$ is eff, and its $p$-completion is discretely $p$-completely eff.
\end{proposition}
\begin{proof}
By \cite[Theorem 1.2]{ChadwickMandell}, we may find an $\mathbb{E}_2$-ring map $\mathrm{MU} \to A$ which makes $A$ into a free $\mathrm{MU}$-module. Now, suppose that $B$ is any $\mathbb{E}_{\infty}$-ring. We calculate $B \otimes A \simeq (B \otimes \mathrm{MU}) \otimes_{\mathrm{MU}} A$, where the equivalence is as $\mathbb{E}_1$-ring spectra. Since $B \otimes \mathrm{MU}$ is a free $B$-module, and $A$ is a free $\mathrm{MU}$-module, we learn that $B \otimes A$ is a free $B$-module, on even suspensions of $B$.
\end{proof}
\begin{proposition}
\label{wi--p-local-polynomial}
Suppose $A$ is strongly even, that $R$ is a $p$-local, even $\mathbb{E}_{\infty}$-ring spectrum, and that $W$ is a weak product of Wilson spaces. Then the natural map
\[\pi_*(R) \to \pi_*(R \otimes A \otimes W)\]
presents the codomain as a polynomial algebra over the domain, on even generators.
\end{proposition}
\begin{proof}
First, note that $\pi_*\left(\mathrm{BP} \otimes R\right)$ is a polynomial $\pi_*R$-algebra, by $p$-local complex orientation theory. Since $R$ is $p$-local, $A \otimes R$ is equivalent to $A_{(p)} \otimes R$, and we have assumed $\pi_*\left(A_{(p)}\right)$ to be a polynomial $\pi_*\mathrm{BP}$-algebra. It follows that $\pi_*(R \otimes A)$ is a polynomial $\pi_*R$-algebra, and we finish by \Cref{WilsonThesisTheorem}.
\end{proof}
\begin{proposition} \label{StronglyEvenUnobstructed}
Suppose that $A$ is strongly even, and that $R$ is an even $\mathbb{E}_{\infty}$-ring with ${\mathrm{S}^1}$-action. Then any ${\mathrm{S}^1}$-equivariant $\mathbb{E}_{\infty}$-ring map $\mathrm{THH}(A) \to R$ factors through the projection $\pi:\mathrm{THH}(A) \to A$.
\end{proposition}
\begin{proof}
This follows from the assumed even cell decomposition, exactly as in the proof of \cref{WilsonS1unobstruct}.
\end{proof}
\begin{theorem} \label{finalAppThm}
Suppose that $A$ is a strongly even $\mathbb{E}_{\infty}$-ring spectrum and that $W$ is a weak product of Wilson spaces. Then there exists a cyclotomic base with $A \otimes \Sigma^{\infty}_+ W$ as its underlying $\mathbb{E}_{\infty}$-ring.
\end{theorem}
\begin{proof}
We need to produce a diagram of ${\mathrm{S}^1}$-equivariant $\mathbb{E}_{\infty}$-ring spectra
\[
\begin{tikzcd}
\mathrm{THH}(A \otimes \Sigma^{\infty}_+ W) \arrow{d} \arrow{r} & \mathrm{THH}(A \otimes \Sigma^{\infty}_+ W)^{\mathrm{t}\mathrm{C}_p} \arrow{d} \\
A \otimes \Sigma^{\infty}_+ W \arrow[dashed]{r} & (A \otimes \Sigma^{\infty}_+ W)^{\mathrm{t}\mathrm{C}_p}.
\end{tikzcd}
\]
Note that $(A \otimes \Sigma^{\infty}_+W)^{\mathrm{t}\mathrm{C}_p}$ is even, because $A \otimes \Sigma^{\infty}_+W$ is a free $\mathrm{MU}$-module concentrated in even degrees. Therefore, we may finish by combining \Cref{StronglyEvenUnobstructed} and \Cref{WilsonUnobstructed}.
\end{proof}
Finally, we prove that at least one strongly even $\mathbb{E}_{\infty}$-ring exists, by direct construction:
\begin{definition}
Fix any map of spectra $\gamma:\mathrm{MU} \to \mathrm{ku}$ that is a surjection on $\pi_*$.
We denote by $\mathrm{MW}$ the $\mathbb{E}_{\infty}$-ring that is the Thom spectrum associated to the map
\[\Omega^{\infty} \Sigma^2 \gamma:W_2 \to \mathrm{BU}.\]
\end{definition}
\begin{theorem}
The $\mathbb{E}_{\infty}$-ring $\mathrm{MW}$ is strongly even.
\end{theorem}
\begin{proof}
By Thomifying the result of applying $\Omega^{\infty} \Sigma^2(\--)$ to an even cell decomposition of the spectrum $\mathrm{MU}$, we see that $\mathrm{MW}$ has an even cell decomposition as an $\mathbb{E}_{\infty}$-ring spectrum. It remains to check that:
\begin{enumerate}
\item There exists a homotopy commutative ring homomorphism $\mathrm{MU} \to \mathrm{MW}$ that makes $\mathrm{MW}$ a free $\mathrm{MU}$-module concentrated in even degrees.
\item After $p$-localization, $\mathrm{MW}$ is a polynomial $\mathrm{BP}$-algebra.
\end{enumerate}
First, we claim that the infinite loop map $\Omega^{\infty} \Sigma^2 \gamma:W_2 \to \mathrm{BU}$ has a double loop map section. To construct this section, we may take $\Omega^2$ of a section of the pointed space map $\Omega^{\infty}\Sigma^4\gamma:W_4 \to \mathrm{BSU}$. To see that the latter section exists, we note that \emph{any} map $\mathrm{BSU} \to \mathrm{BSU}$ lifts through $\Omega^{\infty} \Sigma^4 \gamma$. This is because the obstruction to such a lift is a map from $\mathrm{BSU}$ to a space that, by assumption, has only odd homotopy groups.
Now, the above section induces a splitting of $W_2$, as a double loop space, into the product of $\mathrm{BU}$ and $\Omega^{\infty} \mathrm{fib}(\Sigma^2 \gamma)$. It follows that $\mathrm{MW}$ is, as an $\mathbb{E}_2$-ring, the tensor product of $\mathrm{MU}$ and $ \Sigma^{\infty}_+\mathrm{fib}(\Omega^{\infty}\Sigma^2 \gamma)$. To prove $(1)$, it therefore suffices to prove that $\mathrm{MU}_*\mathrm{fib}(\Omega^{\infty}\Sigma^2 \gamma)$ is a free $\mathrm{MU}_*$-module concentrated in even degrees. By the Atiyah--Hirzebruch spectral sequence, it suffices to check that $\mathrm{H}_*\left(\mathrm{fib}(\Omega^{\infty}\Sigma^2 \gamma);\mathbb{Z}\right)$ is a free $\mathbb{Z}$-module concentrated in even degrees. This is a submodule of $\mathrm{H}_*\left(W_2;\mathbb{Z}\right)$, by the above double loop space splitting, and we finish by noting that submodules of free $\mathbb{Z}$-modules are free $\mathbb{Z}$-modules.
To prove $(2)$, it suffices to check that $\mathrm{BP}_*(\mathrm{fib}(\Omega^{\infty} \Sigma^2 \gamma))$ is a polynomial $\mathrm{BP}_*$-algebra. Since $\mathrm{fib}(\Omega^{\infty} \Sigma^2 \gamma)$ is a retract of $\Omega^{\infty} \Sigma^2 \mathrm{MU}$, it must have finitely generated homotopy groups and $\mathbb{Z}_{(p)}$-homology groups \cite{WilsonThesis}. We may therefore conclude the result by \cite[Theorem 6.2]{WilsonThesis}.
\end{proof}
\section{Introduction}
\input{PossibleIntro.tex}
\section{The even filtration} \label{SecEvenFiltration}
\input{EvenFiltration.tex}
\section{Wilson spaces} \label{WilsonAppendix}
\input{Wilson.tex}
\input{MotivicFiltration.tex} \label{SecMotivicFiltration}
\input{ComparisonTheorems.tex} \label{SecComparison}
\section{The mod $(p,v_1)$ syntomic cohomology of the Adams summand} \label{SecAdamsSummand}
\input{AdamsSummand.tex}
\printbibliography
\end{document}
\label{intro}
The aim of this paper is to compute genus zero \emph{open} Gromov-Witten
invariants for toric Calabi-Yau threefolds, through a relation between
ordinary Gromov-Witten invariants of $K_S$ and $K_{S_n}$, where $S_n$ is a
blowup of $S$.
The celebrated SYZ mirror symmetry was initiated by the work of
Strominger-Yau-Zaslow \cite{SYZ}. For toric manifolds, the open
Gromov-Witten invariants which count holomorphic disks play a fundamental
role in the construction of their Landau-Ginzburg mirrors. For toric Fano
manifolds, Cho and Oh \cite{Cho-Oh} classified holomorphic disks with
boundary in Lagrangian torus fibers, and computed the mirror superpotential.
However, when the toric manifold is not Fano, the moduli of holomorphic
disks contains bubble configurations and it has a nontrivial obstruction
theory. The only known results are the computations of the mirror
superpotentials of Hirzebruch surface $\mathbb{F}_2$ by Fukaya-Oh-Ohta-Ono's
\cite{FOOO2} using their machinery, and $\mathbb{F}_2$ and $\mathbb{F}_3$ by
Auroux's \cite{A} via wall-crossing.
Our first main result Theorem \ref{thm_main} identifies the genus zero open
Gromov-Witten invariant in terms of an ordinary Gromov-Witten invariant of
another Calabi-Yau threefold. For simplicity, we state its corollary for the
canonical bundles of toric surfaces.
\begin{thm}[Corollary of Theorem \protect\ref{thm_main}]
\label{cor_main} Let $K_S$ be the canonical bundle of a toric surface $S$
and $L$ be a Lagrangian torus fiber in $K_S$. We denote by $\beta\in
\pi_2(K_S,L)$ the disk class of a fiber of $K_{S}\to S$. Given any $%
\alpha\in H_2(S,\mathbb{Z})$, we assume that every rational curve
representing the strict transform $\alpha^{\prime}$ of $\alpha$ in $S_1$
cannot be deformed away from the zero section of $K_{S_1}\to S_1$.
Then the genus zero open Gromov-Witten invariant $n_{\alpha+\beta}$ of $%
(K_S,L)$ is equal to an ordinary Gromov-Witten invariant of $K_{S_1}$, that
is
\begin{equation*}
n_{\alpha+\beta} = \langle 1 \rangle^{K_{S_1}}_{0,0, \alpha^{\prime}}.
\end{equation*}
\end{thm}
The assumption on the class $\alpha$ is needed in order for the
Gromov-Witten invariant of the noncompact space $K_{S_1}$ to be
well-defined. When $S_1$ is Fano, our assumption always holds true for any $%
\alpha$.
We remark that the theorem holds for the more general setting where the disk
boundary has $n$ components lying in $n$ distinct Lagrangian torus fibers.
We prove Theorem \ref{thm_main} using our second main result stated below
and Chan's result \cite{C} relating open and closed Gromov-Witten invariants.
Let $S$ be a smooth projective surface and $X$ be a fiberwise
compactification of $K_S$, i.e., $p:X=\mathbf{P}(K_S\oplus \mathcal{O}%
_{S})\to S$ as a ${\mathbf{P}^{1}}$-bundle. We relate certain $n$-point
Gromov-Witten invariants of $X$ to the Gromov-Witten invariants (with no
point condition) of $W$, the fiberwise compactification of $K_{S_n}$, where $%
S_n$ is the blowup of $S$ at $n$ points.
\begin{thm}
\label{main} For $\beta\in H_2(X,\mathbb{Z})$ whose intersection number with
the infinity section of $X$ is $n$. Let $\beta^{\prime}\in H_2(S_n,\mathbb{Z}%
)$ be the strict transform of $p_*(\beta)\in H_2(S,\mathbb{Z})$. Then
\begin{equation*}
\langle[\pt],\cdots,[\pt]\rangle^X_{0,n,\beta}=\langle
1\rangle^W_{0,0,\beta^{\prime}}.
\end{equation*}
Here $[\pt]$ is the Poincar\'e dual of the point class.
\end{thm}
Now we sketch the proof of Theorem \ref{main} in the case $n=1$, that is,
\begin{equation*}
\langle[\pt]\rangle^X_{0,1,\beta}=\langle 1\rangle^W_{0,0,\beta^{\prime}}
\end{equation*}
for $\beta=\alpha+h$, and $\beta^{\prime}=\pi^!\alpha-e$, the strict
transform of $\alpha$.
Recall that $X=\mathbf{P}(K_S\oplus \mathcal{O}_{S})$ and $W=\mathbf{P}%
(K_{S_1}\oplus \mathcal{O}_{S_1})$ with $\pi:S_1\to S$ the blowup of $S$ at
one point with exceptional divisor $e$. Fixing a generic fiber $H$ of $X$,
let $x$ be the intersection point of $H$ with the divisor at infinity of $X$%
. We construct a birational map $f:X\overset{\pi_1}{\longleftarrow }{\tilde X%
}\overset{\pi_2}{\dashrightarrow }W$ so that $\pi_1$ is the blow up at $x$,
and $\pi_2$ is a simple flop along ${\tilde H}$, the proper image of $H$
under $\pi_1$. We compare Gromov-Witten invariants of $X$ and $W$ through
the intermediate space ${\tilde X}$. The identity follows from the results
of Gromov-Witten invariants under birational transformations which are
listed in Section \ref{Review}.
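Schematically, writing $e_x$ for the line class in the exceptional divisor of $\pi_1$, the argument reads
\begin{equation*}
\langle[\pt]\rangle^X_{0,1,\beta} \;=\; \langle 1\rangle^{{\tilde X}}_{0,0,\pi_1^!\beta-e_x} \;=\; \langle 1\rangle^W_{0,0,\beta^{\prime}},
\end{equation*}
where the first equality is the point-blowup comparison of Theorem \ref{thmGH} (with $n=0$) and the second is the flop invariance of Theorem \ref{thmLR} applied to $\pi_2$.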
We remark that Theorem \ref{main} is a corollary of Proposition \ref%
{GWblowup} which holds for all genera. They can be generalized to the case
when $K_S$ is replaced by other local Calabi-Yau threefolds, as we shall
explain in Section \ref{bundle}. \vskip 5pt
This paper is organized as follows. Section \ref{Review} serves as a brief
review on definitions and results that we need in Gromov-Witten theory. In
Section \ref{bundle} we prove Theorem \ref{main} and its generalization to
quasi-projective threefolds. In Section \ref{toric} we deal with toric
Calabi-Yau threefolds and prove Theorem \ref{thm_main}. Finally in Section %
\ref{generalization} we generalize Theorem \ref{main} to ${\mathbf{P}^{n}}$%
-bundles over an arbitrary smooth projective variety.
\textbf{Acknowledgements.} We thank Kwokwai Chan for the stimulating
discussions and his preprint \cite{C} on the comparison of Kuranishi
structures. His ideas on the relationship between open Gromov-Witten
invariants and mirror periods inspired our work. The first author is very
grateful to Mark Gross for the enlightening discussions on wall-crossing and
periods. We also thank Jianxun Hu for helpful comments. The authors are
partially supported by RGC grants from the Hong Kong Government.
\section{Gromov-Witten invariants under birational maps}
\label{Review}
In this section we review Gromov-Witten invariants and their transformation
under birational maps.
Let $X$ be a smooth projective variety. Let $\overline M_{g,n}(X,\beta)$ be
the moduli space of stable maps $f:(C; x_1,\cdots x_n)\to X$ with genus $%
g(C)=g$ and $[f(C)]=\beta\in H_2(X,\mathbb{Z})$. Let $\ev_i: \overline
M_{g,n}(X,\beta)\to X$ be the evaluation map $f\mapsto f(x_i)$. The
Gromov-Witten invariant for classes $\gamma_i\in H^*(X)$ is defined as
\begin{equation*}
\langle\gamma_1,\cdots,\gamma_n\rangle^X_{g,n,\beta}=\int_{[\overline
M_{g,n}(X,\beta)]^{\mathrm{vir}}} \prod_{i=1}^n\ev_i^*(\gamma_i).
\end{equation*}
When the expected dimension of $\overline M_{g,n}(X,\beta)$ is zero, for
instance when $X$ is a Calabi-Yau threefold, we will be interested primarily in
the invariant
\begin{equation*}
\langle 1\rangle^X_{g,0,\beta}=\int_{[\overline M_{g,0}(X,\beta)]^{\mathrm{vir}}}1
\end{equation*}
which equals the degree of the $0$-cycle $[\overline M_{g,0}(X,\beta)]^{\mathrm{vir}}$.
Roughly speaking, the invariant $\langle\gamma_1,\cdots,\gamma_n%
\rangle^X_{g,n,\beta}$ is a virtual count of genus $g$ curves in the class $%
\beta$ which intersect generic representatives of the Poincar\'e duals $%
PD(\gamma_i)$. In particular, if we want to count curves in a homology class
$\beta$ passing through a generic point $x\in X$, we simply take some $%
\gamma_i$ to be the cohomology class $[\pt]$ of a point. There is an
alternative way to do this counting: let $\pi:{\tilde X}\to X$ be the blow
up of $X$ along $x$; we count curves in the homology class $\pi^!(\beta)-e$,
where $\pi^!(\beta)=PD(\pi^*PD(\beta))$ and $e$ is the class of a line in
the exceptional divisor. The following result says that in the genus zero case,
these two methods give the same result.
\begin{thm}
\label{thmGH}(\cite{Ga},\cite{Hu}) Let $\pi:{\tilde X}\to X$ be the blowup
at a point. Let $e$ be the line class in the exceptional divisor. Let $%
\beta\in H_2(X),\gamma_1,\cdots,\gamma_n\in H^*(X)$. Then
\begin{equation*}
\langle\gamma_1,\cdots,\gamma_n,[\pt]\rangle^X_{0,n+1,\beta}=
\langle\pi^*\gamma_1,\cdots,\pi^*\gamma_n\rangle^{{\tilde X}}_{0,n,{%
\pi^!(\beta)-e}}.
\end{equation*}
\end{thm}
Another result that we need concerns the transformation of Gromov-Witten
invariants under flops.
Let $f: X\dashrightarrow X_f$ be a simple flop between two threefolds along a
smooth $(-1,-1)$ rational curve. There is a natural isomorphism
\begin{equation*}
\varphi: H_2(X,\mathbb{Z})\to H_2(X_f,\mathbb{Z}).
\end{equation*}
Suppose that $\Gamma$ is an exceptional curve on $X$ and $\Gamma_{\!f}$ is
the corresponding exceptional curve on $X_f$. Then
\begin{equation*}
\varphi([\Gamma])=-[\Gamma_{\!f}].
\end{equation*}
The following theorem was proved by A.-M.~Li and Y.~Ruan.
\begin{thm}
\label{thmLR}(\cite{LR}) For a simple flop $f:X\dashrightarrow X_f$, if $%
\beta\ne m[\Gamma]$ for any exceptional curve $\Gamma$, we have
\begin{equation*}
\langle\varphi^*\gamma_1,\cdots,\varphi^*\gamma_n\rangle^X_{g,n,\beta}=%
\langle\gamma_1,\cdots,\gamma_n\rangle^{X_f}_{g,n,\varphi(\beta)}.
\end{equation*}
\end{thm}
\section{Gromov-Witten invariants of projectivization of $K_{S}$}
\label{bundle}
In this section we prove Theorem \ref{main} and its generalization to
certain quasi-projective threefolds.
To begin with, we recall some notation.
Let $S$ be a smooth projective surface. Let $p:X=\mathbf{P}(K_S\oplus
\mathcal{O}_{S})\to S$ be a ${\mathbf{P}^{1}}$-bundle. $S$ is contained in $X
$ as the zero section of the bundle $K_S$. Denote by $S^+$ the section at
infinity. Let $h$ be the fiber class of $X$. Then any $\beta\in H_2(X,%
\mathbb{Z})$ can be written as $\alpha+nh$ for a class $\alpha$ in $H_2(S,%
\mathbb{Z})$. Here $n$ is the intersection number of $\beta$ with the
infinity section of $X$, and $p_*(\beta)=\alpha$. By Riemann-Roch, the
expected dimension of $\overline M_{0,n}(X,\beta)$ is $3n$. We have the
Gromov-Witten invariant
\begin{equation*}
\langle[\pt],\cdots,[\pt]\rangle_{0,n,\beta}^X
\end{equation*}
which counts rational curves in the class $\beta$ passing through $n$
generic points.
Let $x_1,\cdots,x_n$ be $n$ distinct points in $X$. Let $y_i=p(x_i)\in S$.
Let $\pi:S_n\to S$ be the blowup of $S$ along the set of points $%
y_1,\cdots,y_n$ with exceptional divisors $e_1,\cdots,e_n$. We form $%
\beta^{\prime}=\pi^!\alpha-\sum_{i=1}^n e_i\in H_2(S_n,\mathbb{Z})$, which
is called the strict transform of $\alpha$. Denote $W=\mathbf{P}%
(K_{S_n}\oplus \mathcal{O}_{S_n})$. Then $\beta^{\prime}$ is a homology
class of $W$ since $S_n\subset W$. The moduli space $\overline
M_{0,0}(W,\beta^{\prime})$ has expected dimension zero, so we have the
Gromov-Witten invariant $\langle 1\rangle^W_{0,0,\beta^{\prime}}$.
\begin{prop}
\label{GWblowup} Let $S$ be a smooth projective surface and let $p:X=\mathbf{%
P}(K_S\oplus \mathcal{O}_{S})\to S$ be the natural projection. Let $X_1$ be the blowup of $X$ at a
point $x$ on the infinity section of $X\to S$. Let $W=\mathbf{P}%
(K_{S_1}\oplus\mathcal{O}_{S_1})$ where $\pi:S_1\to S$ is the blowup of $S$
at the point $y=p(x)$. Then $W$ is a simple flop of $X_1$ along the proper
transform ${\tilde H}$ of the fiber $H$ through $x$.
\end{prop}
\begin{proof}
Since ${\tilde H}$ is the proper transform of $H$ under the blowup $%
\pi_1:X_1\to X$ at $x$, ${\tilde H}$ is isomorphic to ${\mathbf{P}^{1}}$
with normal bundle $\mathcal{O}(-1)\oplus\mathcal{O}(-1)$. We have a simple
flop $f:X_1\dashrightarrow X^{\prime}$ along ${\tilde H}$. Next we show that
$X^{\prime}\cong W$. To this end, we use an alternative way to describe the
birational map $f\pi_1^{-1}:X\dashrightarrow X^{\prime}$.
It is well known that a simple flop $f$ is a composite of a blowup and a
blowdown. Let $\pi_2:X_2\to X_1$ be the blowup of $X_1$ along ${\tilde H}$
with exceptional divisor $E_2\cong {\tilde H}\times {\mathbf{P}^{1}}$.
Because the restriction of the normal bundle of $E_2$ to ${\tilde H}$ is $%
\mathcal{O}(-1)$, we can blow down $X_2$ along the ${\tilde H}$ fiber
direction of $E_2$ to get $\pi_3:X_2\to X^{\prime}$. Of course we have $%
f=\pi_3\pi_2^{-1}$ and $\pi_3\pi_2^{-1}\pi_1^{-1}:X\dashrightarrow X^{\prime}
$.
Notice that the composite $\pi_2^{-1}\pi_1^{-1}:X\dashrightarrow X_2$ can be
written in another way. Let $\rho_1:Z_1\to X$ be the blowup of $X$ along $H$
with exceptional divisor $E^{\prime}$. Let $F$ be the inverse image $%
\rho_1^{-1}(x)$. Then $F\cong {\mathbf{P}^{1}}$. Next we blow up $Z_1$ along $F
$ to get $\rho_2:Z_2\to Z_1$. It is straightforward to verify that $Z_2=X_2$
and $\rho_1\rho_2=\pi_1\pi_2$. Thus we have $\pi_3\pi_2^{-1}\pi_1^{-1}=%
\pi_3(\rho_1\rho_2)^{-1}:X\dashrightarrow X^{\prime}$, from which it follows
easily that $X^{\prime}\cong W$.
\end{proof}
\begin{cor}
With notation as in the Proposition, let $e_{1}$ be the exceptional
curve class of $\pi $. Then we have
\begin{equation*}
\langle 1\rangle _{g,0,\beta }^{X_{1}}=\langle 1\rangle _{g,0,\beta ^{\prime
}}^{W}
\end{equation*}%
where $\beta =\pi _{1}^{!}\alpha +k[{\tilde{H}}]$ and $\beta ^{\prime }=\pi ^{!}\alpha
-ke_{1}$ for any nonzero $\alpha \in H_{2}(S,\mathbb{Z})$.
\end{cor}
\begin{proof}
From the Proposition, we know there is a flop $f:X_1\dashrightarrow W$.
Applying Theorem \ref{thmLR} to the flop $f$, since $\varphi([{\tilde H}%
])=-e_1$, we get
\begin{equation*}
\varphi(\beta)=\varphi(\pi_1^!\alpha+k[{\tilde H}])=\pi^!\alpha-ke_1=%
\beta^{\prime}.
\end{equation*}
It follows that
\begin{equation*}
\langle 1\rangle^{X_1}_{g,0,\beta}=\langle 1\rangle^{W}_{g,0,\beta^{\prime}}.
\end{equation*}
\end{proof}
When $S_{1}$ is a Fano surface, $K_{S_{1}}$ is a local Calabi-Yau threefold
and curves inside $S_{1}$ cannot be deformed away from $S_{1}$. Indeed, any
small neighborhood $N_{S_{1}}$ of $S_{1}$ (resp. $N_{S\cup C}$ of $S\cup C$)
inside any Calabi-Yau threefold has the same property. Here $C$ is a $\left(
-1,-1\right)$-curve which intersects $S$ transversely at a single point.
Therefore we can define local Gromov-Witten invariants for $N_{S_{1}}$ and $%
N_{S\cup C}$. Using a canonical identification,%
\begin{equation*}
H_{2}\left( S_{1}\right) \simeq H_{2}\left( S\right) \oplus \mathbb{Z}%
\left\langle e_{1}\right\rangle \simeq H_{2}\left( S\cup C\right) \text{,}
\end{equation*}%
the above corollary implies that the local Gromov-Witten invariants of the
local Calabi-Yau threefolds $N_{S_{1}}$ and $N_{S\cup C}$ are the same. When
the homology class in $S_{1}$ has no $e_{1}$-component, this reduces to an
equality with the local Gromov-Witten invariants of $N_{S}$. This last
relation, between the Gromov-Witten invariants of $N_{S_{1}}$ and $N_{S}$,
was pointed out to us by J.~Hu \cite{Hu2}, who proved it by the degeneration
method. The relationship was first observed by Chiang-Klemm-Yau-Zaslow
\cite{CKYZ}, who treated the case where $S$ is $\mathbf{P}^{2}$ and the
genus is zero by explicit calculations.
These results can be generalized to the case where $K_{S}$ is replaced by
other local Calabi-Yau threefolds. An illustration of such a generalization
is given at the end of this section. \newline
Now we prove Theorem \ref{main}, that is
\begin{equation*}
\langle[\pt],\cdots,[\pt]\rangle^X_{0,n,\beta}=\langle
1\rangle^W_{0,0,\beta^{\prime}}.
\end{equation*}
\begin{proof}[Proof of Theorem \protect\ref{main}]
First we assume $n=1$, that is, $\pi:S_1\to S$ is a blowup of $S$ at one
point $y$ with exceptional curve class $e_1$ and $W=\mathbf{P}(K_{S_1}\oplus%
\mathcal{O}_{S_1})$. We need to show that
\begin{equation*}
\langle[\pt]\rangle^X_{0,1,\beta}=\langle 1\rangle^W_{0,0,\beta^{\prime}},
\end{equation*}
where $\beta=\alpha+h$ and $\beta^{\prime}=\pi^!\alpha-e_1$.
Applying Theorem \ref{thmGH} to $\pi_1:X_1\to X$, and noticing that
\begin{equation*}
\pi_1^!(\beta)-e=\pi_1^!(\alpha+h)-e=\pi_1^!\alpha+[{\tilde H}],
\end{equation*}
which we denote by $\beta_1$, we then have $\langle[\pt]\rangle^X_{0,1,%
\beta}=\langle 1\rangle^{X_1}_{0,0,\beta_1}$. Next, applying the Corollary of
Proposition \ref{GWblowup} with $k=1$, we get
\begin{equation*}
\langle 1\rangle^{X_1}_{0,0,\beta_1}=\langle
1\rangle^{W}_{0,0,\beta^{\prime}},
\end{equation*}
which proves the result for $n=1$.
For $n>1$, we simply apply the above procedure successively.
\end{proof}
In particular, when $S={\mathbf{P}^{2}}$ and $n=1$, $S_1$ is the Hirzebruch
surface $\mathbb{F}_1$. We use $\ell$ to denote the line class of ${\mathbf{P%
}^{2}}$. The exceptional curve class $e$ represents the unique $(-1)$-curve
in $\mathbb{F}_1$, and $f=\pi^!\ell-e$ is its fiber class. In this
case, the corresponding class is $\beta^{\prime}=k\pi^!\ell-e=(k-1)e+kf$. The
values of the invariants $N_{0,\beta^{\prime}}=\langle 1\rangle^{W}_{0,0,\beta^{\prime}}$
have been computed in \cite{CKYZ}. Starting
with $k=1$, they are $-2, 5, -32, 286, -3038, 35870$. (See Table \ref%
{tableF1}.)
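For the reader's convenience, we record the short computation behind the last
equality, using the relation $\pi^!\ell=e+f$ in $H_2(\mathbb{F}_1,\mathbb{Z})$:
\begin{equation*}
\beta^{\prime}=k\pi^!\ell-e=k(e+f)-e=(k-1)e+kf.
\end{equation*}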
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|rrrrrrrr|}
\hline
& b & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
a & & & & & & & & \\
0 & & & $-2$ & 0 & 0 & 0 & 0 & 0 \\
1 & & 1 & 3 & 5 & 7 & 9 & 11 & 13 \\
2 & & 0 & 0 & $-6$ & $-32$ & $-110$ & $-288$ & $-644$ \\
3 & & 0 & 0 & 0 & 27 & 286 & 1651 & 6885 \\
4 & & 0 & 0 & 0 & 0 & $-192$ & $-3038$ & $-25216$ \\
5 & & 0 & 0 & 0 & 0 & 0 & 1695 & 35870 \\
6 & & 0 & 0 & 0 & 0 & 0 & 0 & $-17064$ \\ \hline
\end{tabular}%
\end{center}
\par
\vskip 5pt
\caption{Invariants of $K_{\mathbb{F}_1}$ for classes $ae+bf$}
\label{tableF1}
\end{table}
Next we generalize Theorem \ref{main} to quasi-projective threefolds.
Let $X$ be a smooth quasi-projective threefold. Assume there is a
distinguished Zariski open subset $U\subset X$, so that $U$ is isomorphic to
the canonical line bundle $K_S$ over a smooth projective surface $S$, and
there is a Zariski open subset $S^{\prime}\subset S$, so that each fiber $F$
of $K_S$ over $S^{\prime}$ is closed in $X$. Typical examples of such
threefolds include a large class of toric Calabi-Yau threefolds.
Theorem \ref{main} still holds for such threefolds under some mild conditions.
We now sketch the proof.
First we construct a partial compactification $\bar X$ of $X$. Given a
generic point $x\in U$, we have a unique fiber through $x$, say $H$. Let $%
\{y\}=H\cap S$. Take a small open neighborhood $V\ni y$; we compactify $K_V$
along the fibers by adding a section at infinity as we did before. We denote
the resulting variety by $\bar X$.
The Gromov-Witten invariant $\langle[\pt]\rangle_{0,1,\beta}^{\bar X}$ is
well defined. Indeed, let $\beta\in H_2(\bar X,\mathbb{Z})$ and suppose $%
\beta=\alpha+[H]$ for some $\alpha$ in $H_2(S,\mathbb{Z})$. The moduli space
of genus zero stable maps to $\bar X$ representing $\beta$ and passing
through the generic point $x$ is compact, provided that $S$ is Fano. Then
the invariants can be defined as before.
To show the equality $\langle[\pt]\rangle_{0,1,\beta}^{\bar X}=\langle
1\rangle_{0,0,\beta^{\prime}}^{{\tilde S}}$, we construct a birational map $%
f:\bar X\dashrightarrow W$ as in the proof of Theorem \ref{main}. Let ${%
\tilde S}\subset W$ be the image of $S$. Then ${\tilde S}$ is the blowup of $%
S$ at $y$. Let $\beta^{\prime}\in H_2({\tilde S},\mathbb{Z})$ be the strict
transform of $\alpha$. Suppose that every rational curve in the class $\beta^{\prime}$
lies in ${\tilde S}$; this holds, for instance, when ${\tilde S}$ is Fano or
when ${\tilde S}$ can be contracted by a birational morphism. Then we can define
the local Gromov-Witten invariant $\langle 1\rangle_{0,0,\beta^{\prime}}^{{\tilde S}}$.
The equality then follows directly, as in the proof of Theorem \ref{main}.
\section{Toric Calabi-Yau threefolds}
\label{toric}
In this section we prove our main Theorem \ref{thm_main}. As an application,
we show that certain open Gromov-Witten invariants for toric Calabi-Yau
threefolds can be computed via local mirror symmetry.
First we recall the standard notations. Let $N$ be a lattice of rank $3$, $M$
be its dual lattice, and $\Sigma_0$ be a strongly convex simplicial fan
supported in $N_\mathbb{R}$, giving rise to a toric variety $X_0 = X_{\Sigma_0}$. (That $%
\Sigma_0$ is `strongly convex' means that its support $|\Sigma_0|$ is convex
and does not contain a whole line through the origin.) Denote by $v_i \in N$
the primitive generators of rays of $\Sigma_0$, and denote by $D_i$ the
corresponding toric divisors for $i = 0, \ldots, m-1$, where $m \in \mathbb{Z%
}_{\geq 3}$ is the number of such generators. \vskip 5pt
\noindent\textbf{Calabi-Yau condition for $X_0$:} There exists $\underline{%
\nu} \in M$ such that $\left( \underline{\nu}\, , \, v_i \right) = 1 $ for
all $i = 0, \ldots, m-1$.
By fixing a toric K\"ahler form $\omega$ on $X_0$, we have a moment map $\mu:
X_0 \to P_0$, where $P_0 \subset M_\mathbb{R}$ is a polyhedral set defined by a
system of inequalities
\begin{equation*}
\left( v_j\, , \, \cdot \right) \geq c_j
\end{equation*}
for $j = 0, \ldots, m-1$ and suitable constants $c_j \in \mathbb{R}$.
(Figure \ref{KP2_KF1} shows two examples of toric Calabi-Yau varieties.)%
\newline
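To fix ideas, here is a standard example, written in one common choice of
coordinates: for $X_0=K_{{\mathbf{P}}^2}$, the fan $\Sigma_0$ has ray
generators
\begin{equation*}
v_0=(0,0,1),\quad v_1=(1,0,1),\quad v_2=(0,1,1),\quad v_3=(-1,-1,1),
\end{equation*}
and $\underline{\nu}=(0,0,1)\in M$ satisfies $\left( \underline{\nu}\, , \,
v_i \right)=1$ for all $i$, verifying the Calabi-Yau condition. Here $v_0$
corresponds to the compact toric divisor $D_0\cong{\mathbf{P}}^2$, the zero
section of $K_{{\mathbf{P}}^2}$.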
To investigate genus zero open Gromov-Witten invariants of $X_0$, we start
with the following simple lemma for rational curves in toric varieties:
\begin{lem}
\label{hol_sphere} Let $Y$ be a toric variety which admits $\nu \in M$ such
that $\nu$ defines a holomorphic function on $Y$ whose zeros contain all
toric divisors of $Y$. Then the image of any non-constant holomorphic map $%
u: {\mathbf{P}}^1 \to Y$ lies in the toric divisors of $Y$. In particular
this holds for a toric Calabi-Yau variety.
\end{lem}
\begin{proof}
Denote the holomorphic function corresponding to $\nu \in M$ by $f$. Then $f
\circ u$ gives a holomorphic function on ${\mathbf{P}}^1$, which must be
constant by the maximum principle. $f \circ u$ cannot be a nonzero constant, for
otherwise the image of $u$ would lie in $(\mathbb{C}^\times)^n \subset Y$,
forcing $u$ to be constant. Thus $f \circ u \equiv 0$, implying that the image
of $u$ lies in the toric divisors of $Y$.
For a toric Calabi-Yau variety $X_0$, $\left( \underline{\nu}\, , \, v_i
\right) = 1 > 0$ for all $i = 0, \ldots, m-1$ implies that the meromorphic
function corresponding to $\underline{\nu}$ indeed has no poles.
\end{proof}
Let $L \subset X_0$ be a Lagrangian torus fiber and $b \in \pi_2(X_0, L)$ of
Maslov index two. We consider the moduli space $\overline M_{1}(X_0,b)$ of
stable maps from bordered Riemann surfaces of genus zero with one boundary
marked point to $X_0$ in the class $b$. Fukaya-Oh-Ohta-Ono \cite{FOOO}
define the invariant
\begin{equation*}
n_b := \int_{[\overline M_1(X_0,b)]} {\ev}^* [\pt].
\end{equation*}
We have the following
\begin{prop}
Let $\beta_i \in \pi_2(X_0, L)$ be a disc class of Maslov index two such
that $\beta_i \cdot D_j = \delta_{ij}$ for $i,j = 0, \ldots, m-1$. Then $%
\overline M_1(X_0,b)$ is empty unless $b = \beta_i$ for some $i$, or $b =
\beta_i + \alpha$, where $D_i$ is a compact toric divisor of $X_0$ and $%
\alpha \in H_2(X_0, \mathbb{Z})$ is represented by a rational curve.
\end{prop}
\begin{proof}
By \cite{FOOO}, $\overline M_1(X_0,b)$ is empty unless $b = \beta_i + \alpha$
for some $i = 0, \ldots, m-1$ and $\alpha \in H_2(X_0, \mathbb{Z})$ has
Chern number $0$. Now suppose $\overline M_1(X_0,b)$ is non-empty and $%
\alpha \not= 0$. Then $\alpha$ is realized by a chain of non-constant
holomorphic spheres $Q$ in $X_0$, which by Lemma \ref{hol_sphere} must lie
inside $\bigcup_{i=0}^{m-1} D_i$. $Q$ must have non-empty intersection with
the holomorphic disk representing $\beta_i \in \pi_2(X_0, L)$ for generic $L$%
, implying some components of $Q$ lie inside $D_i$ and have non-empty
intersection with the torus orbit $(\mathbb{C}^\times)^2 \subset D_i$. But
if $D_i$ is non-compact, then the fan of $D_i$ is simplicial, convex and
incomplete, and so $D_i$ is a toric manifold satisfying the condition of
Lemma \ref{hol_sphere}, forcing $Q$ to have empty intersection with $(%
\mathbb{C}^\times)^2 \subset D_i$, a contradiction. Hence $D_i$ must be compact.
\end{proof}
It was shown in \cite{Cho-Oh} and \cite{FOOO} that $n_b = 1$ for basic disc classes $%
b = \beta_i$. The remaining task is to compute $n_b$ for $b = \beta_i +
\alpha$ with nonzero $\alpha \in H_2(X_0)$. In this section we prove Theorem %
\ref{thm_main}, which relates $n_b$ to certain closed Gromov-Witten
invariants; these can then be computed by the usual localization techniques.
Suppose we would like to compute $n_b$ for $b = \beta_i + \alpha$, and
without loss of generality let us take $i = 0$ and assume that $D_0$ is a
compact toric divisor. We construct a toric compactification $X$ of $X_0$ as
follows. Let $v_0$ be the primitive generator corresponding to $D_0$, and we
take $\Sigma$ to be the refinement of $\Sigma_0$ by adding the ray generated
by $v_{\infty} := -v_0$ (and then completing it into a convex fan). We
denote by $X = X_{\Sigma}$ the corresponding toric variety, which is a
compactification of $X_0$. We denote by $h \in H_2(X, \mathbb{Z})$ the fiber
class of $X$, which has the property that $h \cdot D_0 = h \cdot D_\infty = 1
$ and $h \cdot D = 0$ for all other irreducible toric divisors $D$. Then for
$\alpha \in H_2(X_0, \mathbb{Z})$, we have the ordinary Gromov-Witten
invariant $\langle[\pt]\rangle^X_{0,1, h + \alpha}$.
When $X_0 = K_S$ for a toric Fano surface $S$ and $D_0$ is the zero section
of $K_S \to S$, by comparing the Kuranishi structures on moduli spaces, it
was shown by K.-W. Chan \cite{C} that the open Gromov-Witten invariant $n_b$
indeed agrees with the closed Gromov-Witten invariant $\langle[\pt]%
\rangle^X_{0,1, h + \alpha}$:
\begin{prop}[\protect\cite{C}]
\label{openclose} Let $X_0 = K_S$ for a toric Fano surface $S$ and $X$ be
the fiberwise compactification of $X_0$. Let $b= \beta_i + \alpha$ with $%
\beta_i\cdot S=1$ and $\alpha \in H_2(S, \mathbb{Z})$. Then
\begin{equation*}
n_b = \langle[\pt]\rangle^X_{0,1, h + \alpha}.
\end{equation*}
\end{prop}
Indeed his proof extends to our setup without much modification, and for the
sake of completeness we show how it works:
\begin{prop}[slightly modified from \protect\cite{C}]
\label{openclose2} Let $X_0$ be a toric Calabi-Yau manifold and $X$ be its
compactification constructed above. Let $b= \beta_i + \alpha$ with $%
\beta_i\cdot S=1$ and $\alpha \in H_2(S, \mathbb{Z})$, and we assume that
all rational curves in $X$ representing $\alpha$ are contained in $X_0$.
Then
\begin{equation*}
n_b = \langle[\pt]\rangle^X_{0,1, h + \alpha}.
\end{equation*}
\end{prop}
\begin{proof}
For notational simplicity, let $M_{\mathrm{op}} := \overline M_1(X_0,b)$ be the
open moduli space and $M_{\mathrm{cl}} := \overline M_1(X,h + \alpha)$ be the
corresponding closed moduli space. By evaluation at the marked point we have a $%
\mathbf{T}$-equivariant fibration
\begin{equation*}
\mathrm{ev}: M_{\mathrm{op}} \to \mathbf{T}
\end{equation*}
whose fiber at $p \in \mathbf{T} \subset X_0$ is denoted as $M_{\mathrm{op}%
}^{\mathrm{ev} = p}$. Similarly we have a $\mathbf{T}_\mathbb{C}$-equivariant
fibration
\begin{equation*}
\mathrm{ev}: M_{\mathrm{cl}} \to X
\end{equation*}
whose fiber is $M_{\mathrm{cl}}^{\mathrm{ev} = p}$.
By the assumption that all rational curves in $X$ representing $\alpha$ are
contained in $X_0$, one has
\begin{equation*}
M_{\mathrm{op}}^{\mathrm{ev} = p} = M_{\mathrm{cl}}^{\mathrm{ev} = p}.
\end{equation*}
There is a Kuranishi structure on $M_{\mathrm{cl}}^{\mathrm{ev} = p}$ which
is induced from that on $M_{\mathrm{cl}}$ (please refer to \cite{Fukaya-Ono}
and \cite{FOOO_book} for the definitions of Kuranishi structures).
Transversal multisections of the Kuranishi structures give the virtual
fundamental cycles $[M_{\mathrm{op}}] \in H_n (X_0, \mathbb{Q})$ and $[M_{%
\mathrm{op}}^{\mathrm{ev} = p}] \in H_0 (\{p\}, \mathbb{Q})$. In the same
way we obtain the virtual fundamental cycles $[M_{\mathrm{cl}}] \in H_{2n}
(X, \mathbb{Q})$ and $[M_{\mathrm{cl}}^{\mathrm{ev} = p}] \in H_0 (\{p\},
\mathbb{Q})$. By taking the multisections to be $\mathbf{T}_\mathbb{C}$- ($\mathbf{%
T}$-) equivariant so that their zero sets are $\mathbf{T}_\mathbb{C}$- ($\mathbf{T}
$-) invariant,
\begin{equation*}
\deg [\overline M_{\mathrm{cl/op}}^{\mathrm{ev} = p}] = \deg [\overline M_{%
\mathrm{cl/op}}]
\end{equation*}
and thus it remains to prove that the Kuranishi structures on $M_{\mathrm{cl}%
}^{\mathrm{ev} = p}$ and $M_{\mathrm{op}}^{\mathrm{ev} = p}$ are the same.
Let $[u_{\mathrm{cl}}] \in M_{\mathrm{cl}}^{\mathrm{ev} = p}$, which
corresponds to an element $[u_{\mathrm{op}}] \in M_{\mathrm{op}}^{\mathrm{ev}
= p}$. $u_{\mathrm{cl}}: (\Sigma,q) \to X$ is a stable holomorphic map with $%
u_{\mathrm{cl}} (q) = p$. $\Sigma$ can be decomposed as $\Sigma_0 \cup
\Sigma_1$, where $\Sigma_0 \cong {\mathbf{P}}^1$ such that $u_* [\Sigma_0]$
represents $h$, and $u_* [\Sigma_1]$ represents $\alpha$. Similarly the
domain of $u_{\mathrm{op}}$ can be decomposed as $\Delta \cup \Sigma_1$,
where $\Delta \subset \mathbb{C}$ is the closed unit disk.
We have the Kuranishi chart $(V_{\mathrm{cl}},E_{\mathrm{cl}},\Gamma_{%
\mathrm{cl}},\psi_{\mathrm{cl}},s_{\mathrm{cl}})$ around $u_{\mathrm{cl}}
\in M_{\mathrm{cl}}^{\mathrm{ev} = p}$, where we recall that $E_{\mathrm{cl}%
} \oplus \mathrm{Im} (D_{u_{\mathrm{cl}}} \bar{\partial}) = \Omega^{(0,1)}
(\Sigma, u_{\mathrm{cl}}^* TX)$ and $V_{\mathrm{cl}} = \{\bar{\partial} f
\in E; f(q) = p\}$. On the other hand let $(V_{\mathrm{op}},E_{\mathrm{op}%
},\Gamma_{\mathrm{op}},\psi_{\mathrm{op}},s_{\mathrm{op}})$ be the Kuranishi
chart around $u_{\mathrm{op}} \in M_{\mathrm{op}}^{\mathrm{ev} = p}$.
Now comes the key point: since the obstruction space for the deformation of $u_{%
\mathrm{cl}} |_{\Sigma_0}$ is $0$, $E_{\mathrm{cl}}$ is of the form $0
\oplus E^{\prime}\subset \Omega^{(0,1)} (\Sigma_0, u_{\mathrm{cl}%
}|_{\Sigma_0}^* TX) \times \Omega^{(0,1)} (\Sigma_1, u_{\mathrm{cl}%
}|_{\Sigma_1}^* TX)$. Similarly $E_{\mathrm{op}}$ is of the form $0 \oplus
E^{\prime\prime}\subset \Omega^{(0,1)} (\Delta, u_{\mathrm{op}}|_{\Delta}^*
TX) \times \Omega^{(0,1)} (\Sigma_1, u_{\mathrm{op}}|_{\Sigma_1}^* TX)$. But
since $D_{u_{\mathrm{cl}}|_{\Sigma_1}} \bar{\partial} = D_{u_{\mathrm{op}%
}|_{\Sigma_1}} \bar{\partial}$, $E^{\prime}$ and $E^{\prime\prime}$ can be
taken as the same subspace! Once we do this, it is then routine to see that $%
(V_{\mathrm{cl}},E_{\mathrm{cl}},\Gamma_{\mathrm{cl}},\psi_{\mathrm{cl}},s_{%
\mathrm{cl}}) = (V_{\mathrm{op}},E_{\mathrm{op}},\Gamma_{\mathrm{op}},\psi_{%
\mathrm{op}},s_{\mathrm{op}})$.
\end{proof}
\begin{thm}
\label{thm_main} Let $X_0$ be a toric Calabi-Yau threefold and denote by $S$
the union of its compact toric divisors. Let $L$ be a Lagrangian torus fiber
and $b = \beta + \alpha \in \pi_2(X_0,L)$, where $\alpha \in H_2(S)$ and $%
\beta \in \pi_2(X_0,L)$ is of Maslov index two with $\beta \cdot S = 1$.
Given this set of data, a toric Calabi-Yau threefold $W_0$ can be
constructed explicitly with the following properties:
\begin{enumerate}
\item $W_0$ is birational to $X_0$.
\item Let $S_1 \subset W_0$ be the union of compact divisors of $W_0$. Then $%
S_1$ is the blowup of $S$ at one point, with $\alpha^{\prime}\in H_2(S_1)$
being the strict transform of $\alpha \in H_2(S)$.
\end{enumerate}
\noindent Then the open Gromov-Witten invariant $n_b$ of $(X_0,L)$ is equal
to the ordinary Gromov-Witten invariant $\langle 1 \rangle^{W_0}_{0,0,
\alpha^{\prime}}$ of $W_0$, i.e.,
\begin{equation*}
n_b = \langle 1 \rangle^{W_0}_{0,0, \alpha^{\prime}}
\end{equation*}
provided that every rational curve representative of $\alpha^{\prime}$ in $%
W_0$ lies in $S_1$.
\end{thm}
In particular, when $X_0=K_S$, we obtain Theorem \ref{cor_main} as its
corollary.
\begin{proof}
{We first construct the toric variety $W_0$. To begin with, let $D_\infty$
be the toric divisor corresponding to $v_\infty$. Let $x \in X$ be one of
the torus-fixed points contained in $D_\infty$. First we blow up $x$ to get $%
X_1$, whose fan $\Sigma_1$ is obtained by adding the ray generated by $w =
v_\infty + u_1 + u_2$ to $\Sigma$, where $v_\infty$, $u_1$ and $u_2$ are the
normal vectors to the three facets adjacent to $x$. There exists a unique
primitive vector $u_0 \not= w$ such that $\{ u_0, u_1, u_2 \}$ generates a
simplicial cone in $\Sigma_1$ and $u_0$ corresponds to a compact toric
divisor of $X_1$: If $\{v_0, u_1, u_2\}$ spans a cone of $\Sigma_1$, then
take $u_0 = v_0$; otherwise, since $\Sigma_1$ is simplicial, there exists a
primitive vector $u_0 \in \mathbb{R}\langle v_0, u_1, u_2 \rangle$ with
the required property. Now $\langle u_1, u_2, w \rangle_\mathbb{R}$ and $\langle
u_1, u_2, u_0 \rangle_\mathbb{R}$ form two adjacent simplicial cones in $\Sigma_1$,
and we may employ a flop to obtain a new toric variety $W$, whose fan $%
\Sigma_W$ contains the adjacent cones $\langle w, u_0, u_1 \rangle_\mathbb{R}$ and $%
\langle w , u_0, u_2 \rangle_\mathbb{R}$. (See Figure \ref{fig_flop}.) }
\begin{figure}[h!]
\begin{center}
\setlength{\unitlength}{3mm}
\begin{picture}(28,10)(0,0)
\linethickness{0.075mm}
\put(0,0){\line(1,0){6}}\put(6,0){\line(0,1){6}}\put(0,0){\line(0,1){6}}
\put(0,6){\line(1,0){6}}
\put(0,6){\line(1,-1){6}}\put(-1,-1){$u_0$}\put(6,-1){$u_2$}\put(-1,6.5){$u_1$}\put(6,6.5){$w$}
\put(20,0){\line(1,0){6}}\put(26,0){\line(0,1){6}}\put(20,0){\line(0,1){6}}
\put(20,6){\line(1,0){6}}
\put(20,0){\line(1,1){6}}\put(19,-1){$u_0$}\put(26,-1){$u_2$}\put(19,6.5){$u_1$}\put(26,6.5){$w$}
\end{picture}
\end{center}
\caption{A flop.}
\label{fig_flop}
\end{figure}
$W$ is the compactification of another toric Calabi-Yau $W_0$ whose fan is
constructed as follows: First we add the ray generated by $w$ to $\Sigma_0$,
and then we flop the adjacent cones $\langle w, u_1, u_2 \rangle$ and $%
\langle u_0, u_1, u_2 \rangle$. $W_0$ is Calabi-Yau because
\begin{equation*}
\left( \underline{\nu}\, , \, w \right) = 1
\end{equation*}
and a flop preserves this Calabi-Yau condition. $\Sigma_W$ is recovered by
adding the ray generated by $v_\infty$ to the fan $\Sigma_{W_0}$.
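For completeness, let us check the displayed Calabi-Yau condition for $w$
directly. In the typical situation where $u_1$ and $u_2$ are ray generators of
the original Calabi-Yau fan $\Sigma_0$, so that $\left( \underline{\nu}\, ,
\, u_1 \right)=\left( \underline{\nu}\, , \, u_2 \right)=1$, and since
$v_\infty=-v_0$ gives $\left( \underline{\nu}\, , \, v_\infty \right)=-1$,
we have
\begin{equation*}
\left( \underline{\nu}\, , \, w \right)=\left( \underline{\nu}\, , \,
v_\infty \right)+\left( \underline{\nu}\, , \, u_1 \right)+\left( \underline{%
\nu}\, , \, u_2 \right)=-1+1+1=1.
\end{equation*}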
Now we analyze the transformation of homology classes under this construction. The
class $h \in H_2(X, \mathbb{Z})$ can be written as $h^{\prime}+ \delta$,
where $h^{\prime}\in H_2(X, \mathbb{Z})$ is the class corresponding to the
cone $\langle u_1, u_2 \rangle_\mathbb{R}$ of $\Sigma$ and $\delta \in H_2 (X_0,
\mathbb{Z})$. Let $h^{\prime\prime}\in H_2(X_1, \mathbb{Z})$ be the class
corresponding to $\{u_1, u_2\} \subset \Sigma_1$, which is flopped to $e \in
H_2(W, \mathbb{Z})$ corresponding to the cone $\langle w, u_0 \rangle_\mathbb{R}$
of $\Sigma_W$. Finally let $\tilde{\delta}, \tilde{\alpha} \in H_2(W,
\mathbb{Z})$ be classes corresponding to $\delta, \alpha \in H_2(X_1,
\mathbb{Z})$ respectively under the flop. Then $\alpha^{\prime}=\tilde{\delta%
} + \tilde{\alpha} - e$ is actually the strict transform of $\alpha$.
Applying Proposition \ref{openclose2} and Theorem \ref{main}, we obtain the
equality
\begin{equation*}
n_b = \langle 1 \rangle^{W_0}_{0,0, \alpha^{\prime}}.
\end{equation*}
\end{proof}
Finally we give an example to illustrate the open Gromov-Witten invariants.
\begin{figure}[h!]
\begin{center}
\setlength{\unitlength}{3mm}
\begin{picture}(28,10)(0,0)
\linethickness{0.075mm}
\put(0,0){\line(1,0){6}}\put(6,0){\line(1,1){5}}\put(0,0){\line(-1,1){5}}
\put(3,1.5){\line(0,1){6}}\put(3,1.5){\line(2,-1){3}}\put(3,1.5){\line(-2,-1){3}}
\put(2.5,9){$K_{{\mathbf{P}}^2}$}
\put(19.8,0){\line(1,0){6.4}}\put(26.2,0){\line(1,1){5}}\put(19.8,0){\line(-1,1){5}}
\put(22,1.5){\line(-1,5){1.2}}\put(22,1.5){\line(1,0){2}}\put(22,1.5){\line(-3,-2){2.2}}
\put(24,1.5){\line(1,5){1.2}}\put(24,1.5){\line(3,-2){2.2}}
\put(22.5,9){$K_{\mathbb{F}_1}$}
\end{picture}
\end{center}
\caption{Polytope picture for $K_{{\mathbf{P}}^2}$ and $K_{\mathbb{F}_1}$.}
\label{KP2_KF1}
\end{figure}
\begin{eg}
Let $X_0 = K_{{\mathbf{P}}^2}$. There is exactly one compact toric divisor $%
D_0$ which is the zero section of $X_0\to{\mathbf{P}}^2$. The above
construction gives $W_0 = K_{\mathbb{F}_1}$. (Figure \ref{KP2_KF1}). Let $%
\alpha = kl \in H_2(X_0, \mathbb{Z})$, where $l$ is the line class of ${%
\mathbf{P}}^2 \subset K_{{\mathbf{P}}^2}$ and $k > 0$. By Theorem \ref%
{thm_main},
\begin{equation*}
n_{\beta_0+kl} = \langle 1 \rangle^{W_0}_{0,0, kl - e}
\end{equation*}
where $e$ is the class of the exceptional curve of $\mathbb{F}_1 \subset K_{\mathbb{F}%
_1}$. The first few values of these local invariants for $K_{\mathbb{F}_1}$
are listed in Table \ref{tableF1}.
\end{eg}
\section{A generalization to ${\mathbf{P}^{n}}$-bundles}
\label{generalization}
In this section we generalize Theorem \ref{main} to higher dimensions, that
is, to ${\mathbf{P}^{n}}$-bundles over an arbitrary smooth projective
variety.
Let $X$ be an $n$-dimensional smooth projective variety. Let $F$ be a rank $r
$ vector bundle over $X$ with $1\le r<n$. Let $p:W=\mathbf{P}(F\oplus%
\mathcal{O}_X)\to X$ be a ${\mathbf{P}^{r}}$-bundle over $X$. There are two
canonical subvarieties of $W$, say $W_0=\mathbf{P}(0\oplus\mathcal{O}_X)$
and $W_\infty=\mathbf{P}(F\oplus 0)$. We have $W_0\cong X$.
Let $S\subset X$ be a smooth closed subvariety of codimension $r+1$ with
normal bundle $N$. Let $\pi:{\tilde X}\to X$ be the blowup of $X$ along $S$
with exceptional divisor $E=\mathbf{P}(N)$. Then $F^{\prime}=\pi^*F\otimes%
\mathcal{O}_{{\tilde X}}(E)$ is a vector bundle of rank $r$ over ${\tilde X}$%
. Similar to $p:W\to X$, we let $p^{\prime}:W^{\prime}=\mathbf{P}%
(F^{\prime}\oplus\mathcal{O}_{{\tilde X}})\to {\tilde X}$.
It is easy to see that $W$ and $W^{\prime}$ are birational. We shall
construct an explicit birational map $g:W\dashrightarrow W^{\prime}$. It
induces a homomorphism of groups
\begin{equation*}
g^{\prime}:H_2(W,\mathbb{Z})\to H_2(W^{\prime},\mathbb{Z}).
\end{equation*}
Let $\beta=h+\alpha\in H_2(W,\mathbb{Z})$ with $h$ the fiber class of $W$
and $\alpha\in H_2(X,\mathbb{Z})$. Then we establish a relation between
certain Gromov-Witten invariants of $W$ and $W^{\prime}$.
\begin{prop}
\label{mainprop} {Let $Y=\mathbf{P}(F_S\oplus 0)\subset W$. For $%
g:W\dashrightarrow W^{\prime}$, we have
\begin{equation*}
\langle\gamma_1,\gamma_2,\cdots,\gamma_{m-1},PD([Y])\rangle^W_{0,m,\beta}=%
\langle\gamma^{\prime}_1,\cdots,\gamma^{\prime}_{m-1}\rangle^{W^{%
\prime}}_{0,m-1,\beta^{\prime}}.
\end{equation*}
Here $\gamma^{\prime}_i$ is the image of $\gamma_i$ under $H^*(W)\to
H^*(W^{\prime})$ and $\beta^{\prime}=g^{\prime}(\beta)$. }
\end{prop}
The birational map $g:W\dashrightarrow W^{\prime}$ we shall construct below
can be factored as
\begin{equation*}
W\overset{\pi_1^{-1}}{\dashrightarrow }{\tilde W}\overset{f}{\dashrightarrow
}W^{\prime}
\end{equation*}
Here $\pi_1:{\tilde W}\to W$ is a blowup along a subvariety $Y$. We assume
that every curve $C$ in class $\beta$ can be decomposed uniquely as $C=H\cup
C^{\prime}$ with $H$ a fiber and $C^{\prime}$ a curve in $X$. It follows
that the intersection of $C$ and $Y$ is at most one point. Under this
assumption we generalize Theorem \ref{thmGH} in a straightforward manner as
follows.
\begin{prop}
\label{propblowup} Let $E^{\prime}$ be the exceptional divisor of $\pi_1$.
Let $e$ be the line class in the fiber of $E^{\prime}\to Y$. Then we have
\begin{equation*}
\langle\gamma_1,\gamma_2,\cdots,\gamma_{m-1},PD([Y])\rangle^W_{0,m,\beta}=
\langle\tilde\gamma_1,\cdots,\tilde\gamma_{m-1}\rangle^{{\tilde W}%
}_{0,m-1,\beta_1},
\end{equation*}
where $\tilde\gamma_i=\pi_1^*\gamma_i$ and $\beta_1=\pi_1^!(\beta)-e$.
\end{prop}
The proof of Proposition \ref{mainprop} is similar to that of Theorem \ref%
{main}.
\begin{proof}[Proof of Proposition \protect\ref{mainprop}]
{Since $g=f\pi_1^{-1}$, applying Proposition \ref{propblowup}, it suffices
to show
\begin{equation*}
\langle\tilde\gamma_1,\cdots,\tilde\gamma_{m-1}\rangle^{{\tilde W}%
}_{0,m-1,\beta_1}=\langle\gamma^{\prime}_1,\cdots,\gamma^{\prime}_{m-1}%
\rangle^{W^{\prime}}_{0,m-1,\beta^{\prime}}
\end{equation*}
for the ordinary flop $f:{\tilde W}\dashrightarrow W^{\prime}$. }
Recall that Y.-P.~Lee, H.-W.~Lin and C.-L.~Wang \cite{LLW} proved that for an
ordinary flop $f:M\dashrightarrow M_f$ of splitting type, the big quantum
cohomology rings of $M$ and $M_f$ are isomorphic. In particular, their
Gromov-Witten invariants for the corresponding classes are the same.
Therefore, the above identity follows.
\end{proof}
In the rest of the section we construct the birational map $%
g:W\dashrightarrow W^{\prime}$ in two equivalent ways.
Recall that $S\subset X$ is a subvariety. Let $p_S:Z=W\times_X S \to S$ be
the restriction of $p:W\to X$ to $S$. Then $Z=\mathbf{P}(F_S\oplus \mathcal{O%
}_S)$ with $F_S$ the restriction of $F$ to $S$. We denote $Y=Z\cap W_\infty=%
\mathbf{P}(F_S\oplus 0)$, and $q:Y\to S$ the restriction of $p_S$ to $Y$.
Since $Y$ is a projective bundle over $S$, we let $\mathcal{O}_{Y/S}(-1)$ be
the tautological line bundle over $Y$. The normal bundle of $Y$ in $Z$ is $%
N_{Y/Z}=\mathcal{O}_{Y/S}(1)$.
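One standard way to see this identification (a brief sketch we add for the reader; it is not spelled out above): $Y$ is the transversal zero locus of the section of $\mathcal{O}_{Z/S}(1)$ obtained from the composite
\begin{equation*}
\mathcal{O}_{Z/S}(-1)\hookrightarrow p_S^*(F_S\oplus\mathcal{O}_S)\twoheadrightarrow p_S^*\mathcal{O}_S=\mathcal{O}_Z,
\end{equation*}
and hence $N_{Y/Z}=\mathcal{O}_{Z/S}(1)\big|_Y=\mathcal{O}_{Y/S}(1)$.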
We start with the first construction of $g$. Let $\pi_1:{\tilde W}\to W$ be
the blowup of $W$ along $Y$. Since the normal bundle $N_{Y/W}$ is equal to $%
N_{Y/Z}\oplus N_{Y/W_\infty}=\mathcal{O}_{Y/S}(1)\oplus q^*N$, the
exceptional divisor of $\pi_1$ is
\begin{equation*}
E^{\prime}=\mathbf{P}(\mathcal{O}_{Y/S}(1)\oplus q^*N).
\end{equation*}
Let ${\tilde Z}$ be the proper transform of $Z$ and ${\tilde Y}={\tilde Z}%
\cap E^{\prime}$. The normal bundle of ${\tilde Z}$ in ${\tilde W}$ is ${%
\tilde N}=p_S^*N\otimes\mathcal{O}_{{\tilde Z}}(-{\tilde Y})$.
Because ${\tilde Z}\cong Z$ is a ${\mathbf{P}^{r}}$-bundle over $S$ and the
restriction of ${\tilde N}$ to each ${\mathbf{P}^{r}}$-fiber of ${\tilde Z}$
is isomorphic to $\mathcal{O}(-1)^{\oplus r+1}$, we have an ordinary ${%
\mathbf{P}^{r}}$-flop $f:{\tilde W}\dashrightarrow {\tilde W}_f$ along ${%
\tilde Z}$. It can be verified that ${\tilde W}_f=W^{\prime}$ after
decomposing $f$ as a blowup followed by a blowdown. Finally, we define $g$ as
the composite $f\pi_1^{-1}:W\dashrightarrow W^{\prime}$.
We describe the second construction of $g$, from which it is easy to see the
relation ${\tilde W}_f=W^{\prime}$.
We let $\rho_1:W_1\to W$ be the blowup of $W$ along $Z$ whose exceptional
divisor is denoted by $E_1$. Because the normal bundle of $Z$ in $W$ is $q^*N
$ for $q:Z\to S$, we know
\begin{equation*}
E_1=\mathbf{P}(q^*N)\cong Z\times_S \mathbf{P}(N)=Z\times_S E.
\end{equation*}
Indeed, $W_1$ is isomorphic to the ${\mathbf{P}^{r}}$-bundle $\mathbf{P}%
(F_1\oplus\mathcal{O}_{{\tilde X}})$ over ${\tilde X}$ with $F_1=\pi^*F$.
Let $Y_1$ be the inverse image of $Y$. Now we let $\rho_2:W_2\to W_1$ be the
blowup of $W_1$ along $Y_1$ with exceptional divisor $E_2$. Let $E_1^{\prime}
$ be the proper transform of $E_1$ and $Y_2=E_1^{\prime}\cap E_2$. Since
$E_1^{\prime}\cong E_1$ and the normal bundle of $E_1$ is $N_1=q^*N\boxtimes\mathcal{O}_{E/S}(-1)$,
the normal bundle of $E_1^{\prime}$ is $N_1^{\prime}=N_1\otimes\mathcal{O}_{E_1^{\prime}}(-Y_2)$.
Since $E_1^{\prime}\cong Z\times_S E$ is a ${\mathbf{P}^{r}}\times {\mathbf{P}^{r}}$-bundle
over $S$, composing with the projection $Z\times_S E\to E$ shows that
$E_1^{\prime}\to E$ is a ${\mathbf{P}^{r}}$-bundle. Because the
restriction of $N_1^{\prime}$ to the ${\mathbf{P}^{r}}$-fiber of $%
E_1^{\prime}\to E$ is isomorphic to $\mathcal{O}(-1)^{\oplus r+1}$, we can
blowdown $W_2$ along these fibers of $E_1^{\prime}$ to get $\pi_3:W_2\to W_3=%
{\tilde W}_f$. From this description it is easy to see that $W_3=W^{\prime}$.
\subsection*{Impact Statement}
When the samples used to test performance are drawn only from the 10 percent of the samples with the highest values of the critical temperature (greater than or equal to 89 Kelvin), the new method, called \textit{Spline Continued Fraction} Regression (Spln-CFR), is the top-ranked; the results also show that Spln-CFR is competitive with top-performing established techniques like \textit{XGBoost} and \textit{Random Forest}. Moreover, when trained only on samples with critical temperatures below 89 Kelvin, Spln-CFR correctly predicted that 108 materials had temperatures greater than or equal to 89 Kelvin, followed by Linear Support Vector Regression and multilayer perceptrons with 34 and 21, respectively. Due to its simplicity and time efficiency (nearly the same as that of \textit{Random Forest}), the approach deserves further study and extensions (e.g., with Bagging and Boosting variants), particularly given its demonstrated capacity to approximate well outside the range of target values observed in the training set samples.
\keywords{Regression \and Continued Fractions \and Superconducting materials \and Superconductivity.}
\section{Introduction}
Superconductors are materials with the unique ability to conduct electrical current with zero resistance. This incredibly useful property opens up a variety of applications for these substances. Among them, Magnetic Resonance Imaging (MRI) systems, a vital medical tool, are used worldwide to produce detailed images of internal organs and tissues. As our energy demands continue to rise with the increasing prevalence of renewable energy, solar cars, and more, another indispensable application is the efficient transfer of energy.
Superconductors conducting current with zero resistance greatly reduce the amount of energy wasted as it is simply moved from one place to another.
However, a major drawback of today's superconductors is that they are unable to conduct current with zero resistance (their main appeal) unless cooled to their critical temperatures ($T_c$). These temperatures are extremely cold (often around -196\textdegree C) and are unique to each superconducting material~\cite{Superconductivity:2018-Hamidieh-DataDriven}. Because of the importance and variation of these temperatures, predicting the $T_c$ of superconductors has become a problem of great interest in materials science.
Here, we use various machine learning tools and introduce a new method based on multivariate continued fractions to build mathematical models that predict the critical temperature of superconductors, using only information derived from the chemical characterization of each material. The ability to accurately predict $T_c$ for superconductors will allow us to use them to our advantage more easily, opening up an electrifying new world of possibilities.
\section{Background}
\subsection{Continued Fraction Regression}
In 2019, a new approach for multivariate regression using continued fractions was introduced in~\cite{DBLP:conf/cec/SunM19} and compared with a state-of-the-art genetic programming method for regression.
A year later, this technique's results on 354 datasets from the physicochemical sciences were presented in
\cite{DBLP:conf/cec/MoscatoSH20} and compared with the top-10 state-of-the-art regression techniques. The new method was the top-ranked performer on the training set in 352 out of the 354 datasets, and it was also first in terms of generalization on 192 of them, more than all of the other ten methods combined. The figure of merit was the Mean Squared Error.
We refer to this approach as `Continued Fraction Regression', or CFR. The best existing algorithm utilizes a memetic algorithm to optimize the coefficients of a model that approximates a target function as the convergent of a continued fraction~\cite{DBLP:conf/cec/SunM19,DBLP:conf/cec/MoscatoSH20,moscato2019analytic}.
Memetic algorithms are a well-established research area in the field of Evolutionary Computation, and the IEEE has established a Task Force in Computational Intelligence for their study. We therefore refer the reader to some of the latest references and reviews in the field~\cite{DBLP:reference/crc/CottaM07,DBLP:series/sci/Moscato12,DBLP:reference/sp/CottaMM18,Moscato2019:MA-accelerated-intro,DBLP:books/sp/19/MoscatoM19}.
A brief introduction to analytic continued fraction approximation is in order. A continued fraction for a real value $\alpha$ has the form
\eqref{general-continued-continued-fraction-expansion}; it is finite if $\alpha$ is a rational number and infinite otherwise~\cite{sun}.
\begin{equation}
\alpha= a_0 + \cfrac{b_1}{a_1 + \cfrac{b_2}{a_2 + \ldots}}
\label{general-continued-continued-fraction-expansion}
\end{equation}
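As an illustration (our own sketch, not part of the original method), a finite truncation of the expansion \eqref{general-continued-continued-fraction-expansion} can be evaluated bottom-up from its coefficient lists; the classical expansion $\sqrt{2}=[1;2,2,2,\ldots]$ serves as a quick numerical check:

```python
from fractions import Fraction

def eval_continued_fraction(a, b):
    # a = [a0, a1, ..., an], b = [b1, ..., bn]; evaluate from the innermost
    # term outwards: value <- a_i + b_{i+1} / value.
    value = Fraction(a[-1])
    for a_i, b_i in zip(reversed(a[:-1]), reversed(b)):
        value = a_i + Fraction(b_i) / value
    return value

# The simple continued fraction [1; 2, 2, 2, ...] converges to sqrt(2).
a = [1] + [2] * 15
b = [1] * 15
approx = eval_continued_fraction(a, b)
```

Exact rational arithmetic avoids any floating-point issues during the unwinding.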
Euler proved a formula that allows us to write a sum of products as a continued fraction~\eqref{euler-continued-fraction-formula}:
\begin{multline}
\beta = a_0 + a_0a_1 + a_0a_1a_2 + \ldots + a_0a_1a_2\dots a_n \\
= \cfrac{a_0}{1 - \cfrac{a_1}{1 + a_1 - \cfrac{a_2}{1 + a_2 - \cfrac{\ddots}{\ddots \frac{a_{n-1}}{1 + a_{n-1} - \frac{a_n}{1+a_n}}}}}}.
\label{euler-continued-fraction-formula}
\end{multline}
This simple yet powerful equation reveals how infinite series can be written as infinite continued fractions, suggesting that continued fractions can be a good general technique for approximating analytic functions, especially given improved optimization methods such as those provided by memetic algorithms~\cite{moscato2019analytic}. Indeed, CFR has already proven to be an effective regression technique on the real-world benchmarks provided by the \textit{Penn Machine Learning Database}~\cite{moscato2019analytic}.
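Euler's identity \eqref{euler-continued-fraction-formula} is easy to verify numerically. The sketch below is our own illustration, using exact rational arithmetic and positive coefficients (for which no denominator in the right-hand side can vanish):

```python
from fractions import Fraction as F
from math import prod

def sum_of_products(a):
    # Left-hand side: a0 + a0*a1 + ... + a0*a1*...*an.
    return sum(prod(a[: i + 1]) for i in range(len(a)))

def euler_cf(a):
    # Right-hand side of Euler's formula, evaluated from the innermost
    # term a_n / (1 + a_n) outwards.
    t = a[-1] / (1 + a[-1])
    for a_i in reversed(a[1:-1]):
        t = a_i / (1 + a_i - t)
    return a[0] / (1 - t)

a = [F(1, 2), F(2, 3), F(3, 5), F(1, 7), F(4, 9)]
assert euler_cf(a) == sum_of_products(a)  # exact equality over the rationals
```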
In this paper, we will use Carl Friedrich Gauss' mathematical notation for generalized continued fractions \cite{knotation} (i.e. a compact notation where ``K'' stands for the German word ``Kettenbruch'' which means `Continued Fraction'). Using this notation, we may write the continued fraction in \eqref{general-continued-continued-fraction-expansion} as:
\begin{equation}
\alpha = a_0 + \operatornamewithlimits{K}_{i = 1}^{\infty} \frac{b_i}{a_i},
\end{equation}
\noindent
thus the problem of approximating an unknown target function of $n$ variables $\mathbf{x}$, given a training dataset of $m$ samples
$S=\{ ( \mathbf{x}^{(i)}, y^{(i)} )\}$,
is that of finding the set of functions
$F=\{a_0(\mathbf{x}), a_1(\mathbf{x}), \ldots, b_1(\mathbf{x}), b_2(\mathbf{x}), \ldots\}$
such that a certain objective function is minimized; i.e.\ we aim to find
\begin{equation}
f(\mathbf{x}) = a_0(\mathbf{x}) + \operatornamewithlimits{K}_{i = 1}^{\infty} \frac{b_i(\mathbf{x})}{a_i(\mathbf{x})}.
\end{equation}
\section{Methodology}
\subsection{A new approach: Continued Fractions with Splines}
In previous contributions~\cite{DBLP:conf/cec/SunM19,DBLP:conf/cec/MoscatoSH20,moscato2019analytic}, a memetic algorithm was always employed to find optimal continued fraction solutions. Here, we present another method to fit continued fraction representations by iteratively fitting splines.
Splines are a regression technique that involves fitting piecewise polynomial functions to the given data~\cite{de1978Spline}. The domain is partitioned into intervals at locations known as ``knots''. Then, a polynomial model of degree $n$ is separately fitted for each interval, generally enforcing boundary conditions including continuity of the function as well as the continuity of the first $(n\text{-}1)$-order derivatives at each of the knots. Splines can be represented as a linear combination of basis functions, of which the standard is the B-spline basis. Thus, fitting a spline model is equivalent to fitting a linear model of basis functions. We refer to Hastie {\it et al}. \cite{principles} for the particular definition of the B-spline basis.
First, when all the functions $b_i(\mathbf{x})=1$, for all $i$, we have a \textit{simple continued fraction} representation, and we can write it as:
\begin{equation}
f(\mathbf{x}) = g_0(\mathbf{x}) + \cfrac{1}{g_1(\mathbf{x}) + \cfrac{1}{g_2(\mathbf{x}) + \cfrac{1}{g_3(\mathbf{x}) + ...}}}.
\end{equation}
\noindent
Note that for a term $g_i(\mathbf{x})$, we say that it is at ``depth'' $i$.
Finding the best values for the coefficients in the set of functions $\{g_i(\mathbf{x})\}$, can be addressed as a non-linear optimization problem as in
\cite{DBLP:conf/cec/SunM19,DBLP:conf/cec/MoscatoSH20,moscato2019analytic}. However, despite the great performance of that approach, we aim to introduce a faster variant that can scale well to larger datasets such as this one.
Towards that end, and with scalability in mind, we fit the model iteratively by depth as follows: we first consider only the first term, $g_0(\mathbf{x})$ (at depth 0), ignoring all other terms. We fit a model for the first term using the predictors $\mathbf{x}$ and the target $f(\mathbf{x})$. Next, we consider only the first and second depths, with the terms $g_0(\mathbf{x})$ and $g_1(\mathbf{x})$, ignoring the rest. We then fit $g_1(\mathbf{x})$ using the previously fitted model for $g_0(\mathbf{x})$. For example, truncating the expansion at depth 1, we have that
\begin{equation}
g_1(\mathbf{x}) = \frac{1}{f(\mathbf{x}) - g_0(\mathbf{x})}.
\end{equation}
Thus, we fit $g_1(\mathbf{x})$ using the predictors $\mathbf{x}$ and the target $(f(\mathbf{x}) - g_0(\mathbf{x}))^{-1}$. We label this target as $y^{(1)}$. We repeat this process, fitting a new model truncated at the next depth using the models fitted at previous depths.
At depth $i > 0$, the target $y^{(i)}$ for the model $g_i(\mathbf{x})$ is $(\epsilon_{i - 1}(\mathbf{x}))^{-1}$, where $\epsilon_{i - 1}(\mathbf{x}) = y^{(i-1)} - g_{i - 1}(\mathbf{x})$ is the residual of the model at the previous depth.
One notable characteristic of this approach is that if any model $g_i(\mathbf{x})$, $i > 0$ evaluates to 0, then we will have a pole in the continued fraction, which is often spurious. To remedy this, we modify the structure of the fraction such that each fitted $g_i(\mathbf{x})$, $i > 0$ is encouraged to be strictly positive on the domain of the training data. To do this, we add a constant $C_i$ to $\epsilon_i$ when calculating the target $y^{(i + 1)}$, where $C_i = |\min_x \epsilon_i|$. Thus, the targets $y^{(i)}$ for $i > 0$ are all non-negative, encouraging each $g_i(\mathbf{x})$, $i> 0$, to be strictly positive. For example, for $g_1(\mathbf{x})$, we would have that the target $y^{(1)} = (f(\mathbf{x}) - g_0(\mathbf{x}) + C_1)^{-1}$. Of course, we must then subtract $C_i$ from $g_{i - 1}(\mathbf{x})$ in the final continued fraction model.
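The bookkeeping around the constants $C_i$ can be checked numerically. The sketch below is our own illustration: simple linear least-squares models stand in for the splines, and a small constant $\delta$ is added to each $C_i$ to guard against a zero denominator at the argmin sample (an implementation detail we assume, not one stated in the text). Unwinding the fraction with the exact innermost target in place of a fitted tail must reproduce the original targets to machine precision:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.exp(2 * x) + 0.05 * rng.standard_normal(x.size)

def fit_linear(x, y):
    # Least-squares linear model, standing in for the additive splines.
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x, c=coef: c[0] + c[1] * x

delta = 1e-3                        # assumed safeguard against division by zero
models, consts, target = [], [], y  # target starts as y^(0) (norm = 1 here)
for depth in range(3):
    g = fit_linear(x, target)
    eps = target - g(x)             # residuals at this depth
    C = abs(eps.min()) + delta      # shift making the next target positive
    models.append(g)
    consts.append(C)
    target = 1.0 / (eps + C)        # next target y^(i+1)

# Unwind: y^(i) = g_i(x) - C_i + 1 / y^(i+1). With the exact innermost
# target in place of a fitted tail, this identity recovers y exactly.
v = target
for g, C in zip(reversed(models), reversed(consts)):
    v = g(x) - C + 1.0 / v
```

In the fitted model, the exact tail is replaced by the spline fitted at the last depth, which is where the approximation error enters.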
We have found that data normalization often results in a better fit using this approach. It is sufficient to simply divide the targets uniformly by a constant when training and multiply by the same constant for prediction. We denote this constant parameter $\texttt{norm}$.
A natural choice of regression model for each $g_i(\mathbf{x})$ is a spline, as splines are well established. For reasons stated in the next section, the exception is the first term $g_0(\mathbf{x})$, which is a linear model. We use an additive model to handle multivariate data, in which each term is a spline along one dimension. That is, given $m$ predictor variables, we have that
\begin{equation} \label{eq:spline_model}
g_i(\mathbf{x}) = \sum_{j=1}^{m} f_j(x_j)
\end{equation}
\noindent
for each term $g_i(\mathbf{x})$, $i > 0$, where each function $f_j$ is a cubic spline along variable $j$. That is, $f_j$ is a piecewise polynomial of degree 3 and is a function of variable $j$.
We implement the splines with a penalized cubic B-spline basis. That is, $f_j(x_j) = \sum_{i=1}^{k} \beta_i B_i(x_j)$, where each $B_i$ is one of the $k$ cubic B-spline basis functions along dimension $j$, each corresponding to one of the $k$ knots. We use the loss function
\begin{equation}
L\left(\pmb{\beta}; \mathbf{B}, \mathbf{y}\right) = \Vert \mathbf{y} - \mathbf{B} \pmb{\beta} \Vert^2 + \lambda \sum_{j=1}^{m} \pmb{\beta}^T \mathbf{P_j} \pmb{\beta},
\end{equation}
\noindent
where $\mathbf{B}$ is the matrix of cubic B-spline basis functions for all variables, $\pmb{\beta}$ is the vector of all of the weights, and $\mathbf{P_j}$ is the associated second derivative smoothing penalty matrix for the basis for the spline $f_j$. This is standard for spline models \cite{principles}. The pseudocode for this approach is shown in Algorithm~\ref{algo:spline-cfr}.
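For concreteness, a minimal self-contained sketch of a penalized B-spline fit in one dimension is given below. It is our own illustration: the basis is built with the Cox-de Boor recursion on a clamped knot vector, and a discrete second-difference (P-spline) penalty stands in for the derivative-based penalty matrices $\mathbf{P_j}$; the actual implementation in this paper relies on pyGAM.

```python
import numpy as np

def bspline_basis(x, t, degree=3):
    # Cox-de Boor recursion on a clamped knot vector t.
    x = np.asarray(x, dtype=float)
    B = np.zeros((x.size, len(t) - 1))
    for j in range(len(t) - 1):                  # degree-0 indicator functions
        B[:, j] = (t[j] <= x) & (x < t[j + 1])
    last = np.nonzero(np.diff(t) > 0)[0].max()   # close the right endpoint
    B[x >= t[-1], last] = 1.0
    for d in range(1, degree + 1):
        Bd = np.zeros((x.size, B.shape[1] - 1))
        for j in range(Bd.shape[1]):
            acc = np.zeros(x.size)
            if t[j + d] > t[j]:
                acc += (x - t[j]) / (t[j + d] - t[j]) * B[:, j]
            if t[j + d + 1] > t[j + 1]:
                acc += (t[j + d + 1] - x) / (t[j + d + 1] - t[j + 1]) * B[:, j + 1]
            Bd[:, j] = acc
        B = Bd
    return B

def fit_penalized_spline(x, y, interior_knots, lam=0.5, degree=3):
    # Penalized least squares: ||y - B beta||^2 + lam * ||D2 beta||^2,
    # with a discrete second-difference penalty (P-spline surrogate).
    t = np.r_[[x.min()] * (degree + 1), np.sort(interior_knots),
              [x.max()] * (degree + 1)]
    B = bspline_basis(x, t, degree)
    D2 = np.diff(np.eye(B.shape[1]), n=2, axis=0)
    beta = np.linalg.solve(B.T @ B + lam * (D2.T @ D2), B.T @ y)
    return B @ beta, B

x = np.linspace(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x)
fitted, B = fit_penalized_spline(x, y, np.linspace(0.1, 0.9, 9), lam=0.01)
```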
\begin{algorithm}
\SetAlgoLined
\KwInput{Training data $\mathcal{D} = \{(\mathbf{x_1}, f(\mathbf{x_1})), \ldots, (\mathbf{x_n}, f(\mathbf{x_n}))\}$ and parameters $\lambda$, $k$, $\texttt{norm}$, and $\texttt{max\_depth}$}
\tcc{Let $n$ be the number of samples; \\ $m$ be the number of variables}
\tcc{Let $\mathbf{X} \in \mathbb{R}^{n \times m}$ be the data matrix and $\mathbf{y} \in \mathbb{R}^{n}$ be the vector of targets.}
$\texttt{knot\_indices} = \{\}$
$\mathbf{y^{(0)}} \leftarrow \mathbf{y} / \texttt{norm}$
\For{$i \leftarrow$ 0, 1, ..., \texttt{max\_depth}}{
\eIf{$i = 0$} {
\tcc{$g_0$ is a linear model parameterized by $\mathbf{\beta}$, and is fit with least squares.}
$\mathbf{\beta} \leftarrow \text{argmin}_{\mathbf{\beta}} \displaystyle \Vert \mathbf{y^{(0)}} - \mathbf{X}\mathbf{\beta} \Vert^2$
}
{
\tcc{Let $g_i$ be an additive spline model as given in equation \eqref{eq:spline_model}, parameterized by $\mathbf{\beta}$. For each predictor variable, the knots are at the samples indexed by the first $k$ indices in $\texttt{knot\_indices}$}
\For {$j \leftarrow$ 1, 2, ..., $m$} {
$f_j \leftarrow$ new SplineModel()
\For {each index $p$ in $\texttt{knot\_indices}$} {
$f_j \leftarrow$ AssignKnotAt($\mathbf{X}[p][j]$)
}
}
$g_i = \sum_{j=1}^{m} f_j(x_j)$
\tcc{Construct the splines, and then fit with regularized least squares}
$\mathbf{B} \leftarrow$ BSplineBasisMatrix($g_i$.knots)
$\mathbf{P}_j \leftarrow$ BSplinePenaltyMatrix($f_j$: for each $f_j$ in $g_i$)
$\mathbf{\beta} \leftarrow \text{argmin}_{\mathbf{\beta}} \displaystyle \Vert \mathbf{y^{(i)}} - \mathbf{B}\mathbf{\beta} \Vert^2 + \lambda \sum_{j = 1}^{m} \mathbf{\beta}^T\mathbf{P}_j \mathbf{\beta}$
}
\tcc{Compute $\mathbf{\epsilon_i}$, the vector of residuals of the $i$th model, and then compute the targets and knot locations for the next depth.}
$\mathbf{\epsilon}_i \leftarrow \mathbf{y^{(i)}} - g_i(\mathbf{X})$
$C_i \leftarrow | \min_x \mathbf{\epsilon}_i |$
$\mathbf{y^{(i + 1)}} \leftarrow (\mathbf{\epsilon_{i}} + C_{i})^{-1}$
$\texttt{knot\_indices} \leftarrow$ SelectKnots($\mathbf{\epsilon_i}$)
}
The estimate for $f(\mathbf{x})$ is $f^{(\texttt{max\_depth})}(\mathbf{x})$
\begin{equation*}
= \texttt{norm} \; \cdot \left[ g_0(\mathbf{x}) - C_0 + \operatornamewithlimits{K}_{i = 1}^{\texttt{max\_depth}} \frac{1}{g_i(\mathbf{x}) - C_i} \right]
\end{equation*}
\caption{Iterative CFR using additive spline models with adaptive knot selection}\label{algo:spline-cfr}
\end{algorithm}
\subsection{Adaptive knot selection}
\begin{figure}
\centering
\subfloat[depth 3]{\includegraphics[width =0.4 \columnwidth]{gamma3.pdf}}
\subfloat[depth 5]{\includegraphics[width =0.4 \columnwidth]{gamma5.pdf}} \\
\subfloat[depth 10]{\includegraphics[width =0.4 \columnwidth]{gamma10.pdf}}
\subfloat[depth 15]{\includegraphics[width =0.4 \columnwidth]{gamma15.pdf}}
\caption{Examples of the fit obtained by the \textit{Spline Continued Fraction} on a dataset generated from the gamma function with added noise. We present continued fractions of depth 3 (a), 5 (b), 10 (c), and 15 (d). In this example, the number of knots $k$ was chosen to be 3, $\texttt{norm}=1$, and $\lambda=0.1$.}
\label{fig:gamma}
\end{figure}
The iterative method of fitting continued fractions also allows for an adaptive method of selecting knot placements for the additive spline models. For the spline model $g_i(\mathbf{x})$ at depth $i > 0$, we use all of the knots of the spline model $g_{i - 1}(\mathbf{x})$ at depth $i - 1$. Then, for each variable, we place $k$ new knots at the unique locations of the $k$ samples with the highest absolute error under the model $g_{i - 1}(\mathbf{x})$. As the points with the highest error are likely to be very close to each other, we impose the condition that the selected samples must have residuals of alternating signs.
That is, for $g_i(\mathbf{x})$, $i > 0$, we select $k$ knots, with the first knot at the location of the sample with the highest absolute error under the model $g_{i - 1}(\mathbf{x})$. The $j^\text{th}$ knot is then placed at the sample with the next highest absolute error after the sample used for the $(j - 1)^\text{th}$ knot, but only if the sign of its (signed) error differs from that of the sample used for the $(j - 1)^\text{th}$ knot; otherwise, we move on to the sample with the next highest absolute error, and so on, until this condition is fulfilled. This knot selection procedure is shown in Algorithm~\ref{algo:adaptive-knot}. Note that we let $g_0$ be a linear model, as there is no previous model from which to obtain knot locations.
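In plain Python, the selection rule reads as follows (a sketch of Algorithm~\ref{algo:adaptive-knot} that we add for illustration):

```python
import numpy as np

def select_knots(residuals, k):
    # Pick up to k sample indices by decreasing |residual|, forcing the
    # signed residuals of consecutive picks to alternate in sign.
    order = np.argsort(-np.abs(residuals))   # highest absolute error first
    knot_indices, current_sign = [], None
    for i in order:
        if len(knot_indices) >= k:
            break
        s = np.sign(residuals[i])
        if s != current_sign:
            knot_indices.append(int(i))
            current_sign = s
    return knot_indices

eps = np.array([0.9, -0.8, 0.7, 0.65, -0.1, 0.05])
print(select_knots(eps, k=3))   # -> [0, 1, 2]
```

With same-sign neighbours at the top of the ranking, the rule skips ahead: for residuals `[0.9, 0.8, -0.7]` and `k=2` it returns `[0, 2]`.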
\begin{algorithm}
\SetAlgoLined
\KwInput{$\epsilon_i$}
\tcc{Given the vector of residuals $\mathbf{\epsilon_i}$ of the spline model at depth $i$, select the knot placements for the next spline model at depth $i + 1$}
\tcc {Sort by indices of highest absolute error}
$\texttt{abs\_error} \leftarrow$ elementWiseAbsoluteValue($\mathbf{\epsilon_i}$)
$\texttt{highest\_error\_indices} \leftarrow$ argsortDecreasing($\texttt{abs\_error}$)
\tcc {Take the top $k$ highest order indices, such that each error term has opposite sign of the last}
$\texttt{current\_sign} \leftarrow null$
$\texttt{knots\_added} \leftarrow 0$
$\texttt{knot\_indices} \leftarrow [\,]$
\For {each $\texttt{i}$ in $\texttt{highest\_error\_indices}$} {
\If {$\texttt{knots\_added} \geq k$} {
\Break
}
\If {sign($\epsilon_i$[$\texttt{i}$]) $\neq \texttt{current\_sign}$} {
$\texttt{current\_sign} \leftarrow$ sign($\epsilon_i$[$\texttt{i}$])
$\texttt{knot\_indices}$.append($\texttt{i}$)
$\texttt{knots\_added} \leftarrow \texttt{knots\_added} + 1$
}
}
\Return $\texttt{knot\_indices}$
\caption{SelectKnots (Adaptive Knot Selection)}\label{algo:adaptive-knot}
\end{algorithm}
The goal of using additive spline models within the continued fraction is to take advantage of the continued fraction representation's demonstrated ability to approximate general functions (see the discussion of the relationship with Pad\'{e} approximants in~\cite{moscato2019analytic}). The fraction's hierarchical structure automatically introduces variable interactions, which the individual additive models that constitute the fraction do not capture on their own. The iterative approach to fitting, in turn, enables the adaptive knot selection algorithm.
An example of this algorithm modeling the well-known gamma function (with standard normally distributed noise added) is shown in Fig.~\ref{fig:gamma}, which illustrates how the fit is affected by the depth (3, 5, 10, or 15) of the \textit{Spline Continued Fraction}. As expected, the figure shows that deeper \textit{Spline Continued Fractions} fit the data more closely.
\section{Experimental Design}
\subsection{Data and Study Method}
We used the superconductivity dataset, also used by
Hamidieh \cite{Superconductivity:2018-Hamidieh-DataDriven}, from the UCI Machine Learning repository\footnote{\url{https://archive.ics.uci.edu/ml/datasets/Superconductivty+Data}}. The website contains two files. In this work, we have only used the
\textit{train.csv} file, which contains information on 21,263 superconductors, including the critical temperature and a total of 81 attributes for each.
We conducted two main studies to assess the generalization capabilities of several regression algorithms, which we denote the \textit{Out-of-Sample} and \textit{Out-of-Domain} studies. For the \textit{Out-of-Sample} study, the data is randomly partitioned into 2/3 training data and 1/3 test data. Each model is fit on the training data, and the \textit{RMSE} is calculated on the held-out test portion of the data.
For the \textit{Out-of-Domain} study, the data was partitioned such that the training samples are always drawn from the set of samples with the lowest 90\% of critical temperatures, while the test samples come from the highest 10\%. It turns out that the lowest 90\% have critical temperatures below 89~K, whereas the highest 10\% have temperatures greater than or equal to 89~K, ranging from 89~K to 185~K (note that the range of variation of the test set exceeds that of the training set, making generalization a challenging task).
For each of the 100 repeated runs of the \textit{Out-of-Domain} test, we randomly took 1/2 of the training pool (the lowest 90\% of observed values) to train the models and the same fraction of the test pool (the highest 10\% of observed values) to estimate model performance.
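The split can be sketched as follows (our own illustration with synthetic critical temperatures drawn from a gamma distribution; on the real data, the 90th percentile is the 89~K threshold mentioned above):

```python
import numpy as np

rng = np.random.default_rng(42)
temps = rng.gamma(shape=2.0, scale=17.0, size=5000)  # synthetic T_c values

threshold = np.quantile(temps, 0.9)            # plays the role of the 89 K cutoff
low_idx = np.flatnonzero(temps < threshold)    # training pool: lowest 90%
high_idx = np.flatnonzero(temps >= threshold)  # test pool: highest 10%

# One Out-of-Domain run: half of each pool, drawn without replacement.
train_idx = rng.choice(low_idx, size=low_idx.size // 2, replace=False)
test_idx = rng.choice(high_idx, size=high_idx.size // 2, replace=False)
```

Every training target lies strictly below the threshold, so test-set performance measures extrapolation.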
The \textit{Out-of-Domain} study thus allows us to assess the capacity of the regression models to ``predict'' for materials with higher critical temperatures; generalization, in this case, is strictly connected with the extrapolation capacity of the fitted models. We executed both the \textit{Out-of-Sample} and \textit{Out-of-Domain} tests 100 times to validate our conclusions statistically.
The \textit{Spline Continued Fraction} model had a depth of 5, five knots per depth, a normalization constant of $1000$, and a regularization parameter $\lambda$ of 0.5.
These parameters resulted from one-dimensional non-linear model fitting to problems like the gamma function with noise (already discussed in Fig.~\ref{fig:gamma}) and others such as fitting the function $f(x)=\sin(x)/x$. The parameters were selected empirically using these datasets, and no problem-specific tuning on the superconductivity datasets was conducted.
The final model was then produced iteratively, beginning at depth 1 and increasing the depth by one until the error exceeded that observed at the previous depth (which we took as a proxy for overfitting).
To compare the performance of the \textit{Spline Continued Fraction} (\texttt{Spln-CFR}) introduced in this paper with that of other state-of-the-art regression methods, we used a set of ten regressors from two popular Python libraries (\textit{XGBoost}~\cite{Chen:2016:XGBoost} and the \textit{Scikit-learn} machine learning library~\cite{scikit-learn}).
The name of the regression methods are listed as follows:
\begin{itemize}
\item \textit{AdaBoost} (\texttt{ada-b})
\item \textit{Gradient Boosting} (\texttt{grad-b})
\item \textit{Kernel Ridge} (\texttt{krnl-r})
\item \textit{Lasso Lars} (\texttt{lasso-l})
\item \textit{Linear Regression} (\texttt{l-regr})
\item \textit{Linear SVR} (\texttt{l-svr})
\item \textit{MLP Regressor} (\texttt{mlp})
\item \textit{Random Forest} (\texttt{rf})
\item \textit{Stochastic Gradient Descent} (\texttt{sgd-r})
\item \textit{XGBoost} (\texttt{xg-b})
\end{itemize}
The \textit{XGBoost} code is available as an open-source package\footnote{https://github.com/dmlc/xgboost}. The parameters of the \textit{XGBoost} model were the same as those used in Hamidieh (2018)~\cite{Superconductivity:2018-Hamidieh-DataDriven}. We kept the parameters of the other machine learning algorithms at the Scikit-learn defaults.
All executions of the experiments were performed on an Intel$^{\circledR{}}$ Core$^{\text{TM}}$ i7-9750H hex-core based computer with hyperthreading and 16GB of memory. The machine was running on Windows 10 operating system. We used Python v3.7 to implement the \textit{Spline Continued Fraction} using pyGAM~\cite{daniel_serven_2018_1476122} package. All experiments were executed under the same Python runtime and computing environment.
\subsection{Results}
\begin{table}
\centering
\caption{Results from the 100 runs of the proposed {Spline Continued Fraction} and ten other regression methods, all trained on the dataset, reporting the median Root Mean Squared Error (\textit{RMSE}) with the standard deviation as the uncertainty. }
\label{tab:res-all-models}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{lrr}
\hline
\multirow{2}{*}{Regressor} & \multicolumn{2}{c}{Median RMSE Score $\pm Std$}\\
\cline{2-3}{} & {\textit{Out-of-Sample}} & {\textit{Out-of-Domain}}\\
\hline
{Spln-CFR} & 10.989 $\pm$ 0.382 & \textbf{36.327 $\pm$ 1.187} \\
{xg-b} & \textbf{9.474 $\pm$ 0.190} & 37.264 $\pm$ 0.947 \\
{rf} & 9.670 $\pm$ 0.197 & 38.074 $\pm$ 0.751 \\
{grad-b} & 12.659 $\pm$ 0.178 & 39.609 $\pm$ 0.619 \\
{l-regr} & 17.618 $\pm$ 0.187 & 41.265 $\pm$ 0.466 \\
{krnl-r} & 17.635 $\pm$ 0.163 & 41.427 $\pm$ 0.464 \\
{mlp} & 19.797 $\pm$ 5.140 & 41.480 $\pm$ 9.640 \\
{ada-b} & 18.901 $\pm$ 0.686 & 47.502 $\pm$ 0.743 \\
{l-svr} & 26.065 $\pm$ 7.838 & 47.985 $\pm$ 1.734 \\
{lasso-l} & 34.234 $\pm$ 0.267 & 74.724 $\pm$ 0.376 \\
{sgd-r}$^{\mathrm{a}}$ & N.R. & N.R. \\
\hline
\multicolumn{3}{p{0.45\textwidth}}{
$^{\mathrm{a}}$The \textit{Stochastic Gradient Descent} Regressor (\texttt{sgd-r}), without parameter estimation, predicted unreasonably high values, and the resulting error measure is extreme. Hence, we do not report (N.R.) the performance of \texttt{sgd-r} and have omitted it from further analysis.}
\end{tabular}
\end{table}
Table~\ref{tab:res-all-models} presents the results of the regression methods, along with those of the \textit{Spline Continued Fraction} approach, for both the \textit{Out-of-Sample} and \textit{Out-of-Domain} studies.
The median \textit{RMSE} value obtained from 100 runs is taken as the \textit{Out-of-Sample RMSE} estimate.
For each of the 100 repeated runs of the \textit{Out-of-Domain} test, we estimate model performance via the \textit{Out-of-Domain RMSE} score; the median over the 100 runs is reported in Table~\ref{tab:res-all-models} as the \textit{Out-of-Domain RMSE}. We also report other descriptive statistics, for instance, the number of times a regressor correctly predicted a material to have a critical temperature greater than or equal to 89~K.
\begin{figure}[]
\centering
\subfloat[Heatmap for \textit{Out-of-Sample} test]{\includegraphics[width=0.38\columnwidth]{plot-Heatmap-Out-Sample.pdf}}\label{fig:heatmap-out-sample}
\subfloat[Critical Diagram plot for \textit{Out-of-Sample} test]{\includegraphics[width=0.6\columnwidth]{plot-CD-Out-Sample.pdf} }\label{fig:cd-plot-out-sample}
\caption{Statistical Comparison of the regressors for the \textit{Out-of-Sample} test. a) Heatmap showing the significance levels of $p$-values obtained by the Friedman Post-hoc Test and b) Critical difference (CD) plot showing the statistical significance of rankings achieved by the regression methods.}
\label{fig:stat-test-out-sample}
\end{figure}
\begin{figure*}[]
\centering
\subfloat[Heatmap for \textit{Out-of-Domain} test]{\includegraphics[width=0.38\columnwidth]{plot-Heatmap-Out-Domain.pdf}}\label{fig:heatmap-out-domain}
\subfloat[Critical Diagram plot for \textit{Out-of-Domain} test]{\includegraphics[width=0.6\columnwidth]{plot-CD-Out-Domain.pdf}}\label{fig:cd-plot-out-domain}
\caption{Statistical Comparison of the regressors for the \textit{Out-of-Domain} Test. a) Heatmap showing the significance levels of $p$-values obtained by the Friedman Post-hoc Test and b) Critical difference (CD) plot showing the statistical significance of rankings achieved by the regression methods.}
\label{fig:stat-test-out-domain}
\end{figure*}
\subsection{Out-of-Sample Test}
For the \textit{Out-of-Sample} testing, \textit{XGBoost} achieved the lowest error (median \textit{RMSE} of 9.47) among the 11 regression methods. The three methods closest to \textit{XGBoost} are \textit{Random Forest} (median \textit{RMSE} of 9.67), \textit{Spline Continued Fraction} (median \textit{RMSE} of 10.99), and \textit{Gradient Boosting} (median \textit{RMSE} of 12.66). \textit{Stochastic Gradient Descent}, without parameter estimation, performed the worst among all regression methods in the experiment, and due to the unreasonably high errors observed in its runs, we have omitted it from further analysis.
\subsubsection{Statistical Significance Testing on the Results obtained for Out-of-Sample test\label{sec:res-stat-out-sample}}
To evaluate the significance of the results obtained by the different regression methods for the \textit{Out-of-Sample} test, we applied a Friedman test for repeated measures~\cite{friedman1937use} over the 100 runs. Here, we computed the ranking of the methods for each run based on the \textit{RMSE} score obtained on the test distribution of the \textit{Out-of-Sample} setting. This helps us determine whether the techniques in the experiment perform consistently. The statistical test found $p$-value = \num{1.9899e-183}, which \textit{``rejected''} the \textit{null} hypothesis that \textit{``all the algorithms perform the same''}, and we proceeded with the post-hoc test.
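The repeated-measures test above can be sketched as follows; this is illustrative code rather than the authors' implementation, and the per-run \textit{RMSE} scores are synthetic stand-ins for the real ones.

```python
# Illustrative sketch of the Friedman test over repeated runs (synthetic data;
# not the authors' code). One column per regressor, one row per run.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_runs, n_methods = 100, 10
rmse = rng.normal(loc=10.0, scale=1.0, size=(n_runs, n_methods))
rmse[:, 0] -= 1.5  # make one hypothetical method slightly better

# Friedman test: do the methods differ consistently across the runs?
stat, p_value = friedmanchisquare(*[rmse[:, j] for j in range(n_methods)])
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3g}")
# A small p-value rejects the null hypothesis "all the algorithms perform
# the same" and licenses the post-hoc pairwise comparisons.
```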
We applied Friedman's post-hoc test on the ranking of 10 regressors computed from the test \textit{RMSE} scores obtained over the 100 runs of the \textit{Out-of-Sample} test. In Fig.~\ref{fig:stat-test-out-sample} (a) the $p$-values obtained for the test are plotted as a heatmap. It is noticeable that there are \textit{`no significant differences' (NS)} between the performance of \textit{Spline Continued Fraction} (\texttt{Spln-CFR}) and that of \texttt{rf} and \texttt{grad-b}.
Additionally, we generated the Critical Difference (CD) diagram proposed in~\cite{demvsar2006statistical} to visualize the differences among the regressors in terms of their median ranking. The CD plot uses the Nemenyi post-hoc test and places the regressors on the $x$-axis according to their median ranking. It then computes the \textit{critical difference} of rankings between them and connects those which are closer than the critical difference with a horizontal line, denoting them as statistically \textit{`non-significant'}.
We plot the CD graph, in Fig.~\ref{fig:stat-test-out-sample} (b), using the implementation from the Orange data mining toolbox~\cite{Python:Orange} in Python. The Critical Difference (CD) is found to be $1.25$. We can see that \texttt{xg-b} ranked \nth{1} among the regressors, with \textit{`no significant difference'} from the \nth{2} ranked \texttt{rf}. The proposed \textit{Spline Continued Fraction} is ranked \nth{3} by median ranking, with \textit{`no significant differences'} from the performance rankings of \texttt{rf} and \texttt{grad-b}.
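For reference, the critical difference itself follows Dem\v{s}ar's formula $\mathrm{CD} = q_\alpha \sqrt{k(k+1)/(6N)}$ for $k$ methods compared over $N$ runs. The sketch below uses an approximate tabulated Nemenyi critical value; it is not the toolbox's own code, and the resulting number need not match the toolbox output exactly.

```python
# Sketch of the critical-difference formula from Demsar (2006); the q value
# is an approximate tabulated Nemenyi critical value, not taken from Orange.
import math

k, n_runs = 10, 100   # 10 regressors compared over 100 repeated runs
q_alpha = 3.164       # approximate critical value for alpha = 0.05, k = 10

cd = q_alpha * math.sqrt(k * (k + 1) / (6 * n_runs))
print(f"CD = {cd:.3f}")
# Methods whose average ranks differ by less than CD are joined by a
# horizontal bar in the diagram and treated as statistically indistinguishable.
```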
\subsection{Out-of-Domain Test}
For the task of \textit{Out-of-Domain} prediction, the \textit{Spline Continued Fraction} regressor exhibited the best performance (median \textit{RMSE} score of 36.3) among all regression methods used in the experiment (in Table~\ref{tab:res-all-models}). The three closest regressors to the proposed \textit{Spline Continued Fraction} method are \textit{XGBoost} (median \textit{RMSE}=37.3), \textit{Random Forest} (median \textit{RMSE}=38.1) and \textit{Gradient Boosting} (median \textit{RMSE}=39.6).
\subsubsection{Statistical Significance Testing on the Results obtained for Out-of-Domain test\label{sec:res-stat-out-domain}}
To test the significance of the results obtained by different regression methods for \textit{Out-of-Domain} test, we employed the same statistical test used for \textit{Out-of-Sample} (in Sec.~\ref{sec:res-stat-out-sample}). The test returned a $p$-value = \num{1.2065e-156} which \textit{``rejected''} the \textit{null} hypothesis and we proceeded with the post-hoc test.
The $p$-values obtained for the post-hoc test are plotted as a heatmap in Fig.~\ref{fig:stat-test-out-domain} (a) for the \textit{Out-of-Domain} test. It is noticeable that there are \textit{`no significant differences'} (NS) between the performance of \textit{Spline Continued Fraction} (\texttt{Spln-CFR}) and that of \textit{Random Forest} (\texttt{rf}) and \textit{XGBoost} (\texttt{xg-b}). There is also no significant difference in the performance ranking of \textit{Linear Regression} (\texttt{l-regr}) with \texttt{mlp}, \texttt{l-svr}, \texttt{krnl-r} and \texttt{grad-b}.
We plot the Critical Difference (CD) graph, in Fig.~\ref{fig:stat-test-out-domain} (b), for the \textit{Out-of-Domain} test. The Critical Difference (CD) is $1.3898$. From the critical difference plot, it is evident that the top three methods in \textit{Out-of-Domain} prediction are \textit{Spline Continued Fraction}, \textit{XGBoost} and \textit{Random Forest}. We can see that the average ranking of \texttt{Spln-CFR} is very close to 2, the best ranking performance among the 10 regressors. There is \textit{no significant difference} between \textit{Spline Continued Fraction} and the \nth{2} best ranked method, \textit{XGBoost} (\texttt{xg-b}, with an average ranking between \nth{2} and \nth{3}), in \textit{Out-of-Domain} predictions.
\subsubsection{Runtime Required by the methods for out-of-domain test}
\begin{figure}[]
\centering
\includegraphics[width =0.6\columnwidth]{Rplot-Runtime_Out_Domain_embed.pdf}
\caption{Run-time (in seconds) required for model building and prediction by the regressors for 100 runs of the \textit{Out-of-Domain} test, where samples with the lowest 90\% of critical temperatures were drawn to form the training data, with an equal number of samples, drawn from the top 10\% highest critical temperatures, forming the test data.}
\label{fig:runtime-out-domain}
\end{figure}
Fig.~\ref{fig:runtime-out-domain} shows the running time (in seconds) required by each of the regression methods for the 100 runs of the \textit{Out-of-Domain} test. We can see that \textit{Linear Regression} (\nth{50} percentile runtime of 0.02~s and maximum runtime 0.158~s) and \textit{Lasso Lars} (\nth{50} percentile 0.013~s and maximum of 0.027~s) required the lowest running times. \textit{XGBoost} (\texttt{xg-b}) required the most CPU time (\nth{50} percentile runtime of 55.33~s and maximum 79.05~s). On the other hand, \textit{Random Forest} and the proposed \textit{Spline Continued Fraction} regression required nearly the same running time (\nth{50} percentile runtime of 36.88~s and 41.65~s for \texttt{rf} and \texttt{Spln-CFR}, respectively) for the \textit{Out-of-Domain} test.
\section{Discussion}
\begin{figure}[]
\centering
\subfloat[\textit{Linear Regression}]{\includegraphics[width = 0.33\linewidth]{plt_all_data_l-regr.pdf}} \hfill
\subfloat[\textit{XGBoost}]{\includegraphics[width = 0.33\linewidth]{plt_all_data_xg-b.pdf}} \hfill
\subfloat[\textit{Spline Continued Fraction}]{\includegraphics[width = 0.33\linewidth]{plt_all-data_Spln-CFR.pdf}}
\caption{\textit{Out-of-Sample} Test results showing predicted vs actual temperatures for the entire data with regression models trained on the training data. a) Results replicating the \textit{Linear Regression} outcome from Hamidieh, b) \textit{XGBoost} model and c) \textit{Spline Continued Fraction} model.}
\label{fig:out-sample}
\end{figure}
To illustrate the performance of the models in the \textit{Out-of-Sample} study, we trained \textit{Linear Regression}, \textit{XGBoost} and \textit{Spline Continued Fraction} on the training set and plotted the predicted vs actual temperatures for the entire dataset (in Fig.~\ref{fig:out-sample}). We were able to reproduce the result of the \textit{Out-of-Sample} test from Hamidieh~\cite{Superconductivity:2018-Hamidieh-DataDriven} in Fig.~\ref{fig:out-sample} (a), with an \textit{RMSE} of 17.7. The \textit{Out-of-Sample} models for \textit{Spline Continued Fraction} and \textit{XGBoost} are likewise used to predict the critical temperature for the entire dataset.
Together, the figures show that \texttt{Spln-CFR} performed better in modelling \textit{Out-of-Sample} critical temperatures than that of \textit{Linear Regression}, particularly for the larger temperatures.
\begin{figure}[]
\centering
\subfloat[\textit{Linear Regression}]{\includegraphics[width = 0.33\linewidth]{plt_out_domain_89_l-regr.pdf}} \hfill
\subfloat[\textit{XGBoost}]{\includegraphics[width = 0.33\linewidth]{plt_out_domain_89_xg-b.pdf}} \hfill
\subfloat[\textit{Spline Continued Fraction}]{\includegraphics[width = 0.33\linewidth]{plt_out_domain_89_Spln-CFR.pdf}}
\caption{\textit{Out-of-Domain} Test results showing predicted vs actual temperatures of the samples with the highest 10\% critical temperatures, where a model is fitted using the samples with the lowest 90\% critical temperatures. We show the $x$-axis values up to 145~K, which leaves only one extreme value (185~K) out of the visible area. Results of the \textit{Out-of-Domain} test for a) \textit{Linear Regression} with \textit{RMSE} of 41.3, b) \textit{XGBoost} with \textit{RMSE} of 36.3 and c) the \textit{Spline Continued Fraction} model with \textit{RMSE} of 34.8.
}
\label{fig:out-domain}
\end{figure}
Fig.~\ref{fig:out-domain} shows actual vs predicted critical temperatures for the \textit{Out-of-Domain} test for the \textit{Linear Regression}, \textit{XGBoost} and \textit{Spline Continued Fraction} models. We recall that in the \textit{Out-of-Domain} setting, we trained each of the models on the samples from the bottom 90\% of the observed temperatures (which are $< 89$~K). We measured the testing performance on the samples with the top 10\% of the observed critical temperatures (2126 samples in the test set).
\begin{table}
\centering
\caption{Predicted vs. Actual critical temperatures for the materials with the top 20 predicted temperatures in the {Out-of-Domain} study,
i.e.\ the one in which the lowest 90\% of critical temperature samples were used for drawing the training data. The average values of the critical temperatures ($\bar{x}$), the average relative error ($\bar{\eta}$), and the Root mean squared error (\textit{RMSE}) of these materials for the top 20 predictions (which are not necessarily the same since they depend on the models) are shown in the last rows.}
\label{tab:out-domain-top-20-temp}
\begin{scriptsize}
\rotatebox{90}{
\setlength\tabcolsep{1pt}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}} l rr rr rr rr rr rr rr rr rr rr @{}}
\toprule
{} &
\multicolumn{2}{c}{Spln-CFR} & \multicolumn{2}{c}{xg-b} & \multicolumn{2}{c}{rf} & \multicolumn{2}{c}{grad-b} & \multicolumn{2}{c}{mlp} & \multicolumn{2}{c}{l-regr} & \multicolumn{2}{c}{l-svr} & \multicolumn{2}{c}{krnl-r} & \multicolumn{2}{c}{ada-b} & \multicolumn{2}{c}{lasso-l} \\
& y & pred & y & pred & y & pred & y & pred & y & pred & y & pred & y & pred & y & pred & y & pred & y & pred\\
\midrule
& 92.00 & 114.14 & 89.20 & 89.64 & 91.19 & 87.89 & 89.50 & 83.44 & 109.00 & 100.81 & 98.00 & 91.59 & 112.00 & 94.81 & 98.00 & 91.02 & 89.50 & 58.63 & 89.00 & 27.06 \\
& 90.00 & 109.69 & 94.20 & 89.19 & 89.90 & 87.88 & 89.90 & 83.44 & 124.90 & 100.31 & 112.00 & 89.14 & 100.00 & 93.49 & 112.00 & 88.67 & 89.50 & 58.63 & 89.00 & 27.06 \\
& 111.00 & 108.54 & 89.88 & 88.69 & 90.00 & 87.88 & 90.50 & 83.44 & 114.00 & 99.70 & 105.00 & 87.53 & 132.60 & 93.49 & 105.00 & 86.84 & 89.70 & 58.63 & 89.00 & 27.06 \\
& 93.50 & 108.01 & 89.93 & 88.34 & 90.20 & 87.88 & 91.50 & 83.44 & 128.40 & 99.59 & 117.00 & 87.06 & 105.00 & 92.94 & 117.00 & 86.65 & 89.80 & 58.63 & 89.00 & 27.06 \\
& 99.00 & 106.50 & 90.00 & 88.15 & 90.90 & 87.88 & 90.00 & 83.42 & 127.40 & 99.53 & 100.00 & 85.92 & 115.00 & 92.93 & 100.00 & 85.88 & 89.80 & 58.63 & 89.00 & 27.06 \\
& 105.60 & 105.01 & 90.10 & 88.15 & 91.00 & 87.88 & 91.80 & 83.42 & 127.80 & 99.53 & 132.60 & 85.92 & 111.00 & 92.90 & 132.60 & 85.88 & 89.90 & 58.63 & 89.00 & 27.06 \\
& 113.00 & 104.35 & 91.00 & 88.15 & 92.00 & 87.88 & 90.00 & 82.22 & 130.10 & 98.76 & 115.00 & 85.50 & 110.00 & 92.84 & 115.00 & 85.51 & 90.00 & 58.63 & 89.00 & 27.06 \\
& 113.00 & 103.95 & 91.30 & 88.15 & 92.20 & 87.88 & 89.50 & 79.29 & 128.50 & 98.55 & 111.00 & 84.97 & 106.70 & 92.54 & 111.00 & 84.46 & 90.00 & 58.63 & 89.00 & 27.06 \\
& 106.60 & 103.95 & 96.10 & 88.15 & 92.40 & 87.88 & 90.00 & 79.29 & 128.40 & 98.45 & 132.00 & 84.96 & 126.90 & 91.73 & 132.00 & 84.42 & 90.50 & 58.63 & 89.00 & 27.06 \\
& 128.70 & 103.92 & 90.00 & 88.10 & 92.50 & 87.88 & 91.00 & 79.29 & 128.80 & 98.45 & 110.00 & 84.31 & 117.00 & 91.73 & 110.00 & 84.38 & 91.50 & 58.63 & 89.00 & 27.06 \\
& 91.80 & 102.10 & 91.40 & 88.10 & 92.74 & 87.88 & 91.80 & 79.29 & 131.40 & 98.33 & 106.70 & 83.95 & 126.80 & 91.30 & 106.70 & 82.97 & 100.00 & 58.63 & 89.00 & 27.06 \\
& 108.00 & 101.56 & 92.60 & 87.82 & 92.80 & 87.88 & 92.30 & 79.29 & 128.80 & 98.10 & 126.90 & 82.72 & 115.00 & 90.84 & 95.00 & 82.64 & 108.00 & 58.63 & 89.00 & 27.06 \\
& 92.00 & 101.32 & 91.60 & 87.53 & 93.00 & 87.88 & 90.00 & 78.85 & 128.70 & 93.96 & 105.00 & 82.63 & 95.00 & 90.80 & 105.00 & 82.01 & 110.00 & 58.63 & 89.00 & 27.06 \\
& 90.00 & 101.19 & 93.00 & 87.53 & 93.00 & 87.88 & 91.60 & 78.85 & 130.30 & 93.94 & 95.00 & 82.62 & 121.60 & 90.80 & 107.00 & 81.88 & 110.90 & 58.63 & 89.00 & 27.06 \\
& 105.10 & 100.50 & 93.80 & 87.49 & 93.05 & 87.88 & 89.10 & 78.79 & 131.30 & 93.93 & 107.00 & 82.47 & 100.00 & 90.78 & 126.90 & 81.82 & 114.00 & 58.63 & 89.00 & 27.06 \\
& 130.30 & 100.35 & 89.90 & 87.48 & 93.20 & 87.88 & 89.20 & 78.79 & 122.00 & 91.96 & 105.00 & 82.41 & 107.00 & 90.78 & 105.00 & 81.51 & 114.00 & 58.63 & 89.00 & 27.06 \\
& 93.00 & 100.24 & 90.00 & 87.48 & 93.40 & 87.88 & 89.40 & 78.79 & 123.50 & 91.64 & 126.80 & 82.12 & 90.00 & 90.63 & 90.00 & 81.40 & 116.00 & 58.63 & 89.10 & 27.06 \\
& 91.50 & 100.00 & 90.20 & 87.48 & 93.50 & 87.88 & 89.40 & 78.79 & 121.00 & 90.69 & 98.50 & 82.03 & 96.00 & 90.49 & 126.80 & 81.24 & 122.50 & 58.63 & 89.10 & 27.06 \\
& 91.50 & 99.18 & 90.90 & 87.48 & 91.80 & 87.75 & 89.40 & 78.79 & 115.00 & 90.14 & 112.00 & 82.03 & 128.70 & 90.48 & 117.00 & 80.89 & 127.00 & 58.63 & 89.10 & 27.06 \\
& 116.00 & 98.39 & 91.00 & 87.48 & 92.10 & 87.69 & 89.50 & 78.79 & 110.00 & 90.01 & 117.00 & 81.83 & 130.30 & 90.26 & 121.60 & 80.87 & 130.90 & 58.63 & 89.10 & 27.06 \\
\hline
$\bar{x}$ & 103.08 & 103.64 & 91.31 & 88.03 & 92.044& 87.86 & 90.27 & 80.49 & 124.47 & 96.32 & 111.63 & 84.59 & 112.33 & 91.83 & 111.68 & 84.05 & 102.68 & 58.63 & 89.02 & 27.06\\
$\bar{\eta}$ & \multicolumn{2}{c}{0.1085} & \multicolumn{2}{c}{0.036} & \multicolumn{2}{c}{0.0453} & \multicolumn{2}{c}{0.1083} & \multicolumn{2}{c}{0.224} & \multicolumn{2}{c}{0.2351} & \multicolumn{2}{c}{0.1733} & \multicolumn{2}{c}{0.2389} & \multicolumn{2}{c}{0.4187} & \multicolumn{2}{c}{0.696}
\\
\textit{RMSE} & \multicolumn{2}{c}{13.6023} & \multicolumn{2}{c}{3.7753} & \multicolumn{2}{c}{4.3261} & \multicolumn{2}{c}{10.0078} & \multicolumn{2}{c}{28.9783} & \multicolumn{2}{c}{29.3265} & \multicolumn{2}{c}{23.9282} & \multicolumn{2}{c}{30.2426} & \multicolumn{2}{c}{46.2473} & \multicolumn{2}{c}{61.96}
\\
\bottomrule
\end{tabular*}
\end{scriptsize}
\end{table}
Another set of observed results are interesting for discussion and might be relevant for future research directions.
In Table~\ref{tab:out-domain-top-20-temp}, we report the top 20 predicted vs. actual ($y$) temperatures for all ten regression methods for the \textit{Out-of-Domain} test of a single run. The last rows of the table show, for each model, the averages over the materials with the highest 20 predicted values. Interestingly, \textit{XGBoost}'s top 20 predictions of the critical temperatures are all below 90~K (in the range 87.48 to 89.64~K). Similarly, \textit{Random Forest}'s top 20 predictions are in the range 87.69 to 87.89~K. The top 20 critical temperatures predicted by \textit{Linear Regression} are in the range 81.83 to 91.59~K. In contrast, the top 20 critical temperatures predicted by \textit{Spline Continued Fraction} vary from 98.39 to 114.14~K, which are the highest starting and ending values among all regressors. We also report the average temperature ($\bar{x}$), average relative error ($\bar{\eta}$) and \textit{RMSE} score computed for the top 20 predictions. \textit{XGBoost} showed the lowest value for both $\bar{\eta}$ (0.036) and \textit{RMSE} ($3.775$) among the 10 regressors; in terms of those scores, the proposed \texttt{Spln-CFR} is in \nth{4} position. However, looking at the averages, \texttt{Spln-CFR} has the highest average predicted temperature for the top 20 predictions in the \textit{Out-of-Domain} test.
Since all the actual critical temperatures of the test set in the \textit{Out-of-Domain} setting are $\geq 89$~K, it is relevant to evaluate for how many of these samples each regression method predicted a value above that threshold. Here, we consider a predicted value as \textbf{P} = \textit{critical temperature value} $\geq 89$~K (denoted as `P', for positive) and \textbf{N} = \textit{critical temperature value} $< 89$~K (denoted as `N', for negative). In Table~\ref{tab:res-sum_out_domain}, we report the number of samples for which each of the methods predicted a temperature value in the P and N category for the whole testing set of the \textit{Out-of-Domain} test. Only six regression methods predicted a critical temperature $\geq 89$~K for at least one sample. Both \textit{Linear Regression} and \textit{XGBoost} predicted temperatures $\geq 89$~K for two samples each. \textit{Kernel Ridge} predicted only one sample's value in that range. \textit{MLP Regressor} and \textit{Linear SVR} did so for 21 and 34 samples, respectively. The proposed \textit{Spline Continued Fraction} predicted 108 samples' values $\geq 89$~K, the best among all regression methods used in the experiments.
\begin{table}
\centering
\caption{Number of times the methods predicted a critical temperature value $T_c \geq 89$~K (denoted as `P', for positive) and $T_c < 89$~K (denoted as `N' for Negative) for
{Out-of-Domain} test.}
\label{tab:res-sum_out_domain}
\setlength{\tabcolsep}{12pt}
\begin{tabular}{lcc}
\hline
\multirow{2}{*}{Regressor} & \multicolumn{2}{c}{Out-of-domain predicted critical temperature, $T_c$}\\
\cline{2-3}
{} & {P ($T_c$ $\geq 89$~K)} & {N ($T_c$ < 89~K)} \\
\hline
Spln-CFR & 108 & 2018 \\
xg-b & 2 & 2124 \\
rf & 0 & 2126 \\
grad-b & 0 & 2126 \\
mlp & 21 & 2105 \\
l-regr & 2 & 2124 \\
l-svr & 34 & 2092 \\
krnl-r & 1 & 2125 \\
ada-b & 0 & 2126 \\
lasso-l & 0 & 2126 \\
\hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Inter-rater agreement between the pairs of regressor methods where the resulting models were able to predict at least one positive temperature value ($T_c \geq 89$~K).}
\label{tab:kappa-out-domain}
\setlength{\tabcolsep}{6pt}
\begin{tabular}{llrc}
\hline
Rater 1 & Rater 2 & Value of Kappa ($\kappa$) & Level of Agreement \\
\hline
Spln-CFR & xg-b & -0.001851 & No Agreement \\
Spln-CFR & mlp & 0.030476 & None to Slight \\
Spln-CFR & l-regr & 0.016365 & None to Slight \\
Spln-CFR & l-svr & 0.104988 & None to Slight \\
Spln-CFR & krnl-r & -0.000933 & No Agreement \\
xg-b & mlp & -0.001721 & No Agreement \\
xg-b & l-regr & -0.000942 & No Agreement \\
xg-b & l-svr & -0.001780 & No Agreement \\
xg-b & krnl-r & -0.000628 & No Agreement \\
mlp & l-regr & -0.001721 & No Agreement \\
mlp & l-svr & 0.208516 & Fair \\
mlp & krnl-r & -0.000899 & No Agreement \\
l-regr & l-svr & 0.053874 & None to Slight \\
l-regr & krnl-r & 0.666457 & Substantial \\
l-svr & krnl-r & -0.000915 & No Agreement \\
\hline
\end{tabular}
\end{table}
We now look at the consensus between regression methods in \textit{Out-of-Domain} prediction. Only six regressors (\texttt{Spln-CFR}, \texttt{xg-b}, \texttt{mlp}, \texttt{l-regr}, \texttt{l-svr} and \texttt{krnl-r}) were able to predict at least one positive value (critical temperature $\geq 89$~K). We computed the pairwise inter-rater agreement statistic, Cohen's kappa~\cite{cohen1960coefficient}, for those regression methods. We tabulate the values of kappa ($\kappa$), ordered from highest to lowest, and outline the level of agreement in Table~\ref{tab:kappa-out-domain}. In most cases there is either \textit{``No''} (9 cases) or \textit{``None to Slight''} (4 cases) agreement between the pairs of regressors. We observe such behaviour for the pairs formed by \texttt{Spln-CFR} and each of the other five methods. \textit{MLP Regressor} and \textit{Linear SVR} have \textit{``Fair''} agreement in their predictions. The highest value, $\kappa=0.67$, is observed for \textit{Linear Regression} and \textit{Kernel Ridge}, which yields a \textit{``Substantial''} agreement.
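The agreement computation can be sketched as follows; the predictions here are randomly generated placeholders, and only the 89~K thresholding and the call to scikit-learn's \texttt{cohen\_kappa\_score} mirror the procedure described above.

```python
# Sketch of the inter-rater agreement computation (hypothetical predictions;
# not the authors' code): binarize two regressors' predictions at the 89 K
# threshold and compute Cohen's kappa with scikit-learn.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
n_test = 2126                              # out-of-domain test set size
pred_a = rng.normal(80.0, 8.0, n_test)     # stand-in predictions, method A
pred_b = rng.normal(80.0, 8.0, n_test)     # stand-in predictions, method B

labels_a = np.where(pred_a >= 89.0, "P", "N")
labels_b = np.where(pred_b >= 89.0, "P", "N")

kappa = cohen_kappa_score(labels_a, labels_b)
print(f"kappa = {kappa:.4f}")
# kappa near 0 means chance-level agreement; qualitative bands
# (none/slight/fair/substantial) give the labels used in the table.
```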
\subsection{Extrapolation Capability of the Regressors in General}
As all of the results presented in this work concern a special case, namely finding models for the
extrapolation of the critical temperature of superconductors, we include additional experimental outcomes on a set of six datasets used in \cite{DBLP:conf/cec/SunM19}. This additional test helps us evaluate the extrapolation capabilities of the regressors in other problem domains.
Jerome Friedman proposed the Multivariate Adaptive Regression Splines (MARS) algorithm in~\cite{friedman1991MARS},
which aggregates multiple linear regression models throughout the range of target values. We used the implementation of the MARS algorithm from the py-earth Python library\footnote{\url{https://contrib.scikit-learn.org/py-earth/content.html\#multivariate-adaptive-regression-splines}}. We include a comparison of MARS with \texttt{Spln-CFR} and the other regressors for extrapolation capability.
Here, the samples from each dataset were sorted by target value. We then split them into the out-of-domain setting by taking the samples with the lowest 90\% of target values as the training set and those with the highest 10\% as the test set. For each of the 100 independent runs, we uniformly at random took half of the samples from the out-of-domain training set to build the model, and the same ratio from the out-of-domain test set for prediction. We applied min-max normalization on the training set and used the same distribution to normalize the test set.
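The splitting and normalization protocol just described can be sketched as follows (synthetic data; the array names are hypothetical):

```python
# Sketch of the out-of-domain protocol on synthetic data (not the authors'
# code): train on the lowest 90% of target values, test on the highest 10%,
# and normalize the test set with the scaler fitted on the training portion.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = rng.normal(size=1000)

order = np.argsort(y)                    # sort samples by target value
cut = int(0.9 * len(y))
train_idx, test_idx = order[:cut], order[cut:]

# Half of each portion is drawn uniformly at random for every repeated run.
train_half = rng.choice(train_idx, size=len(train_idx) // 2, replace=False)
test_half = rng.choice(test_idx, size=len(test_idx) // 2, replace=False)

scaler = MinMaxScaler().fit(X[train_half])
X_train = scaler.transform(X[train_half])
X_test = scaler.transform(X[test_half])  # same distribution as the train set

# Every test target is at least as large as every training target.
assert y[train_half].max() <= y[test_half].min()
```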
\begin{table}[]
\centering
\caption{Number of times the prediction by a method is in the range of the out-of-domain threshold on the test sets for 100 repeated runs on six datasets from~\cite{DBLP:conf/cec/SunM19}}
\label{tab:cec-od-range}
\setlength{\tabcolsep}{6pt}
\begin{tabular}{lrlrlr}
\hline
Regressor & in Range & Regressor & in Range & Regressor & in Range \\
\hline
Spln-CFR & 13560 & grad-b & 1227 & ada-b & 0 \\
MARS & 3716 & mlp & 1158 & lasso-l & 0 \\
l-regr & 2594 & xg-b & 826 & rf & 0 \\
l-svr & 2045 & krnl-r & 735 & sgd-r & 0 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[]
\centering
\includegraphics[width =0.6\columnwidth]{plot-CD-CEC2019-6-Datasets-Norm.pdf}
\caption{Critical difference (CD) plot showing the statistical significance of rankings achieved by the regression methods for 100 runs on the six datasets from~\cite{DBLP:conf/cec/SunM19}.}
\label{fig:cd-100runs-cec2019}
\end{figure}
We analyzed their performance statistically (in Fig.~\ref{fig:cd-100runs-cec2019}) and found that MARS has a median ranking of 5 and is statistically significantly different from only \texttt{krnl-r}, \texttt{sgd-r} and \texttt{lasso-l}. The proposed \texttt{Spln-CFR}, however, achieved the first rank among all the methods, with a median ranking between two and three. The predictions of each model were de-normalized to count the number of predictions above the threshold (the maximum target value in the training portion of the data) in the out-of-domain setting. We show the complete outcome in Table~\ref{tab:cec-od-range}. These counts show that \texttt{Spln-CFR} has the highest number of predictions in range (13560), followed by MARS (3716) and \texttt{l-regr} (2594). These results demonstrate the extrapolation capability of the regressors.
\section{Conclusion and Future Work}
We give a brief summary of some of the results observed for this new technique:
\begin{itemize}
\item In terms of the median \textit{RMSE} obtained over 100 independent runs in the \textit{Out-of-Sample} study, the proposed \texttt{Spln-CFR} is among the top three methods (in Table~\ref{tab:res-all-models}).
\item For the statistical test of \textit{Out-of-Sample} rankings, \texttt{Spln-CFR} is statistically similar to the \nth{2} ranked method (\textit{Random Forest}) in Fig.~\ref{fig:stat-test-out-sample} (b).
\item For \textit{Out-of-Domain} median \textit{RMSE} obtained for 100 runs, the proposed \texttt{Spln-CFR} is the top method (ranked \nth{1} in Table~\ref{tab:res-all-models}).
\item For the statistical test of \textit{Out-of-Domain} rankings, in Fig.~\ref{fig:stat-test-out-domain} (b), \texttt{Spln-CFR} is the best method (median ranking is close to 2) and statistically similar to the second best regressor, \textit{XGBoost} (with a median ranking between 2 and 3).
\item \texttt{Spln-CFR} correctly predicted critical temperature values greater than or equal to 89~K for 108 unique materials in the \textit{Out-of-Domain} test, close to twice the number achieved by all other regression methods tested combined, which was 60 (Table~\ref{tab:res-sum_out_domain}).
\end{itemize}
Table \ref{tab:out-domain-top-20-temp} also reveals interesting characteristics of all methods that deserve further consideration as an area of research.
First, note that the 20 top materials for each of the methods are not necessarily the same, although some intersection obviously may exist.
In the \textit{Out-of-Domain} study, the top 20 critical temperature values predicted by \texttt{Spln-CFR} were all above 98~K (with 18 being at or above 100~K). The average actual critical temperature on this set (103.08~K) is nearly the same as the average predicted value (103.64~K). The \textit{RMSE} of \texttt{xg-b}, however, is nearly three times smaller, but that method's top predictions correspond to materials with relatively smaller values (average of 91.31~K). For the materials collected in this dataset, the top critical-temperature suggestions made by \texttt{Spln-CFR} are thus, at least on average, close to the measured temperatures. Therefore, using \texttt{Spln-CFR} as a surrogate model to explore the possibility of testing the superconductivity of materials may bring better returns.
Interestingly, we also observed a similar behavior of \texttt{xg-b} with other multivariate regression techniques, but with important differences worth noting. For instance, \textit{Linear Regression}, perhaps the simplest scheme of them all, has an interesting behavior: its top 20 highest predictions are all in the range $[81.83, 91.59]$~K while the actual values are in the interval $[98.00, 132.60]$~K. For the multi-layer perceptron method (\texttt{mlp}), the top 20 highest predictions are all in the range $[90.01, 100.81]$~K, yet the true values are in the interval $[109.00, 131.40]$~K. This means that, even when trained using the MSE, these techniques could still give valuable information about which materials to prioritize for testing, if we consider the ranking they assign to the materials and are less concerned about the predicted value itself.
Overall, the results show the limitations of the current dataset. One possible limitation is the lack of other useful molecular descriptors that can bring important problem domain knowledge about the structure of the materials and their properties. In addition, it is also possible that a careful ``segmentation'' of the different materials is necessary. In some sense, the results of the experiments presented here may help the AI community reflect on how to do these analyses and motivate the researchers to work in closer collaboration with superconductivity specialists to provide other molecular descriptors.
We actually often compare the inherent difficulties in prediction in this dataset to other areas on which some of us have been working extensively (like the prediction of survivability in breast cancer using transcriptomic data). In both cases, without separating the training samples into meaningful subgroups, the models obtained generalised poorly.
This said, one reason that our continued fraction based method may be doing a bit better in the generalisation test of our \textit{Out-of-Domain} study is that there might be some structural similarities in the set of compounds used to define the ``last'' levels of the continued fraction approximation; indirectly, then, the molecular descriptors may carry some useful information which the continued fraction representation has exploited. We will investigate this hypothesis in a future publication, where we aim to include more relevant problem-domain information, in collaboration with specialists, to benefit from the structure and known properties of the actual compounds.
In terms of future research {on the algorithm we propose here}, it is clear that \texttt{Spln-CFR} is already a promising approach that has some obvious extensions worth considering in the future, for instance, the inclusion of \textit{bagging} and \textit{boosting} techniques which can improve the \textit{Out-of-Sample} performance. In addition, we consider that learning with modifications of the MSE in the training set may lead to better performance for the \textit{Out-of-Domain} scenario, and we plan to conduct further research in that area as well.
\normalem
\bibliographystyle{IEEEtran}
A matrix is said to be \emph{completely positive semidefinite} if it admits a Gram representation by (Hermitian) positive semidefinite matrices of any size. The $n \times n$ completely positive semidefinite matrices form a convex cone, called the completely positive semidefinite cone, which is denoted by $\mathcal{CS}_{\hspace{-0.065em}+}^n$.
The motivation for the study of the completely positive semidefinite cone is twofold. Firstly, the completely positive semidefinite cone $\mathcal{CS}_{\hspace{-0.065em}+}^n$ is a natural analog of the completely positive cone $\mathcal{CP}^n$, which consists of the matrices admitting a factorization by nonnegative vectors. The cone $\mathcal{CP}^n$ is well studied (see, for example, the monograph
\cite{BSM03}), and, in particular, it can be used to model classical graph parameters. For instance, \cite{dKP02} shows how to model the stability number of a graph as a conic optimization problem over the completely positive cone. A second motivation lies in the connection to quantum information theory. Indeed, the cone $\mathcal{CS}_{\hspace{-0.065em}+}^n$ was introduced in \cite{LP15} to model quantum graph parameters (including quantum stability numbers) as conic optimization problems, an approach extended in \cite{MR15} for quantum graph homomorphisms and in \cite{Antonios:2015} for quantum correlations.
In this paper we are interested in the size of the factors needed in Gram representations of matrices. This type of question is of interest for factorizations by nonnegative vectors as well as by (Hermitian) positive semidefinite matrices.
Throughout we use the following notation.
For $X,Y\in \mathbb{C}^{d\times d}$, $X^*$ is the conjugate transpose and $\langle X, Y\rangle = \mathrm{Tr}(X^*Y)$ is the trace inner product. For vectors $u,v\in \mathbb{R}^d$, $\langle u,v\rangle= u^Tv$ denotes their Euclidean inner product.
A matrix $M$ is said to be \emph{completely positive} if there exist nonnegative vectors $v_1,\ldots,v_n \in \mathbb{R}_+^d$ such that $M_{i,j} = \langle v_i, v_j \rangle$ for all $i,j\in [n]$. We call such a set of vectors a \emph{Gram representation} or \emph{factorization} of $M$ by nonnegative vectors. The smallest $d$ for which these vectors exist is denoted by $\text{\rm cp-rank}(M)$ and is called the \emph{completely positive rank} of $M$.
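As a simple illustration of these notions (an example we add for concreteness, not taken from the references), the identity matrix is completely positive with completely positive rank exactly $n$:

```latex
% The identity matrix I_n is completely positive: the standard basis
% vectors e_1, ..., e_n of R^n_+ form a Gram representation, since
% (I_n)_{ij} = <e_i, e_j>. Moreover cp-rank(I_n) = n, because pairwise
% orthogonal nonzero nonnegative vectors must have disjoint supports,
% so no factorization in R^d_+ with d < n can exist.
\[
 I_n = \bigl( \langle e_i, e_j \rangle \bigr)_{i,j=1}^n,
 \qquad \text{\rm cp-rank}(I_n) = n .
\]
```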
Similarly, a matrix $M$ is called \emph{completely positive semidefinite} if there exist (real symmetric or complex Hermitian) positive semidefinite $d \times d$ matrices $X_1,\ldots,X_n$ such that $M_{i,j} = \langle X_i, X_j \rangle$ for all $i,j\in [n]$. We call such a set of matrices a \emph{Gram representation} or \emph{factorization} of $M$ by (Hermitian) positive semidefinite matrices. The smallest $d$ for which there exists a Gram representation of $M$ by Hermitian positive semidefinite $d \times d$ matrices is denoted by $\hcpsd(M)$, and the smallest $d$ for which these matrices can be taken to be real is denoted by $\scpsd(M)$. We call this the \emph{real/complex completely positive semidefinite rank} of $M$. If a matrix has a factorization by Hermitian positive semidefinite matrices, then it also has a factorization by real positive semidefinite matrices. In fact, for every $M \in \mathcal{CS}_{\hspace{-0.065em}+}^n$, we have
\[
\hcpsd(M) \leq \scpsd(M) \leq 2 \hcpsd(M)
\]
(see Section~\ref{sec:some properties}).
By construction, we have the inclusions
\[
\mathcal{CP}^n\subseteq \mathcal{CS}_{\hspace{-0.065em}+}^n \subseteq \mathcal S^n_+\cap\mathbb{R}^{n\times n}_+,
\]
where $\mathcal S_+^n$ is the cone of (real) positive semidefinite $n \times n$ matrices. The three cones coincide for $n\le 4$ (since doubly nonnegative matrices of size $n\le 4$ are completely positive), but both inclusions are strict for $n\ge 5$ (see \cite{LP15} for details).
By Carath\'eodory's theorem, the completely positive rank of a matrix in $\mathcal{CP}^n$ is at most $\binom{n+1}{2}+1$. In \cite{Shaked-Monderer} the following stronger bound is given:
\begin{equation}\label{eqcprank}
\text{\rm cp-rank}(M) \leq {n+1 \choose 2}-4 \quad \text{for} \quad M \in \mathcal{CP}^n \quad \text{and} \quad n \geq 5,
\end{equation}
which is also not known to be tight. No upper bound (as a function of $n$) is known for the completely positive semidefinite rank of matrices in $\mathcal{CS}_{\hspace{-0.065em}+}^n$. It is not even known whether such a bound exists. A positive answer would have strong implications. It would imply that the cone $\mathcal{CS}_{\hspace{-0.065em}+}^n$ is closed. This, in turn, would imply that the set of quantum correlations is closed, since it can be seen as a projection of an affine slice of the completely positive semidefinite cone (see \cite{MR14,Antonios:2015}). Whether the set of quantum correlations is closed is an open question in quantum information theory.
In contrast, as an application of the upper bound \eqref{eqcprank}, the completely positive cone $\mathcal{CP}^n$ is easily seen to be closed. A description of the closure of the completely positive semidefinite cone in terms of factorizations by positive elements in von Neumann algebras can be found in \cite{STM:15}. Such factorizations were used to show a separation between the closure of $\mathcal{CS}_{\hspace{-0.065em}+}^n$ and the doubly nonnegative cone $\smash{\mathcal S^n_+\cap \mathbb{R}^{n\times n}_+}$ (see \cite{FW14,LP15}).
\medskip
In this paper we show that if an upper bound exists for the completely positive semidefinite rank of matrices in $\mathcal{CS}_{\hspace{-0.065em}+}^n$, then it needs to grow at least exponentially in the matrix size $n$. Our main result is the following:
\begin{theorem*}\label{theomain}
For each positive integer $k$, there exists a completely positive semidefinite matrix $M$ of size $4k^2 +2k +2$ with $\hcpsd(M) = 2^k$.
\end{theorem*}
The proof of this result relies on a connection with quantum information theory and geometric properties of (bipartite) correlation matrices.
We refer to the main text for the definitions of quantum and bipartite correlations. A first basic ingredient is the fact from \cite{Antonios:2015} that a quantum correlation $p$ can be realized in local dimension $d$ if and only if there exists a certain completely positive semidefinite matrix $M$ with $\hcpsd(M)$ at most $d$. Then, the key idea is to construct a class of quantum correlations $p$ that need large local dimension.
The papers \cite{VertesiPal, Slofstra11, Ji13} each use different techniques to show the existence of different quantum correlations that require large local dimension. Our main contribution is to provide a unified, explicit construction of the quantum correlations from \cite{VertesiPal} and \cite{Slofstra11}, which uses the seminal work of Tsirelson \cite{Tsirelson:87,Tsirelson} combined with convex geometry and recent insights from rigidity theory. In addition, we also give an explicit proof of Tsirelson's bound (see Corollary~\ref{corbound}) and we show examples where the bound is tight.
More specifically, we construct such quantum correlations from bipartite correlation matrices. For this we use the classical results of Tsirelson \cite{Tsirelson:87,Tsirelson}, which characterize bipartite correlation matrices in terms of operator representations and, using Clifford algebras,
we relate the rank of extremal bipartite correlations to the local dimension of their operator representations. In this way we reduce the problem to finding bipartite correlation matrices that are extreme points of the set of bipartite correlations and have large rank.
\medskip
For the completely positive rank we have the quadratic upper bound \eqref{eqcprank}, and completely positive matrices have been constructed whose completely positive rank grows quadratically in the size of the matrix. This is the case, for instance, for the matrices
\[
M_k=\begin{pmatrix} I_k & {1\over k}J_k\cr {1\over k}J_k & I_k\end{pmatrix}\in \mathcal{CP}^{2k},
\]
where $\text{\rm cp-rank}(M_k)=k^2$. Here $I_k\in \mathcal S^k$ is the identity matrix and $J_k\in\mathcal S^k$ is the all-ones matrix. This leads to the natural question of how fast $\scpsd(M_k)$ and $\hcpsd(M_k)$ grow. As a second result we show that the completely positive semidefinite rank grows linearly for the matrices $M_k$, and we exhibit a link to the question of existence of Hadamard matrices. More precisely, we show that $\hcpsd(M_k) = k$ for all $k$, and $\scpsd(M_k)=k$ if and only if there exists a real Hadamard matrix of order $k$. In particular, this shows that the real and complex completely positive semidefinite ranks can be different.
\medskip
The completely positive and completely positive semidefinite ranks are symmetric analogs of the nonnegative and positive semidefinite ranks. Here the nonnegative rank, denoted $\mathrm{rank}_+(M)$, of a matrix $M \in \mathbb{R}_+^{m \times n}$, is the smallest integer $d$ for which there exist nonnegative vectors $\{u_i\}$ and $\{v_j\}$ in $\smash{\mathbb{R}_+^d}$ such that $M_{i,j} = \langle u_i, v_j \rangle$ for all $i$ and $j$, and the positive semidefinite rank, denoted $\mathrm{rank}_{\mathrm{psd}}(M)$, is the smallest $d$ for which there exist positive semidefinite matrices $\{X_i\}$ and $\{Y_j\}$ in $\mathcal S_+^d$ such that $M_{i,j} = \langle X_i, Y_j \rangle$ for all $i$ and $j$. These notions have many applications, in particular to communication complexity and for the study of efficient linear or semidefinite extensions of convex polyhedra (see \cite{Ya91,GPT13}). Unlike in the symmetric setting, in the asymmetric setting the following bounds, which show a linear regime, can easily be checked:
\[ \mathrm{rank}_{\mathrm{psd}}(M) \leq \mathrm{rank}_+(M) \leq \min(m,n). \]
We refer to \cite{psdrank} and the references therein for a recent overview of results about the positive semidefinite rank.
\bigskip\noindent
\textbf{Organization of the paper.}
In Section~\ref{sec:some properties} we first present some simple properties of the (real/complex) completely positive semidefinite rank, and then investigate its value for the matrices $M_k$, where we also show a link to the existence of Hadamard matrices. We also give a simple heuristic for finding approximate positive semidefinite factorizations.
The proof of our main result in Theorem \ref{theomain} boils down to several key ingredients which we treat in the subsequent sections.
In Section \ref{secCmn} we group old and new results about the set of bipartite correlation matrices.
In particular, we give a geometric characterization of the extreme points, we revisit some conditions due to Tsirelson and links to rigidity theory, and we construct a class of extreme bipartite correlations with optimal parameters.
In Section \ref{secquantumcor} we recall some characterizations, due to Tsirelson, of bipartite correlations in terms of operator representations. We also recall connections to Clifford algebras, and for bipartite correlations that are extreme points we relate their rank to the dimension of their operator representations.
Finally in Section \ref{secfinal} we introduce quantum correlations and recall their link to completely positive semidefinite matrices. We show how to construct quantum correlations from bipartite correlation matrices, and we prove the main theorem.
\bigskip\noindent
\textbf{Note.}
Upon completion of this paper we learned of the recent independent work \cite{PrakashSikoraVarvitsiotisWei}, where a class of matrices with exponential $\cpsd$ is also constructed. The key idea of using extremal bipartite correlation matrices having large rank is the same. Our construction uses bipartite correlation matrices with optimized parameters meeting Tsirelson's upper bound (\ref{eqrankC}) (see Corollary \ref{corbound}). As a consequence, our completely positive semidefinite matrices have the best ratio between $\cpsd$ and size that can be obtained using this technique.
\section{Some properties of the completely positive semidefinite rank}
\label{sec:some properties}
In this section we consider the (complex) completely positive semidefinite rank of matrices in the completely positive cone. In particular, for a class of matrices, we show a quadratic separation in terms of the matrix size between the completely positive and completely positive semidefinite ranks. We also mention a simple heuristic for building completely positive semidefinite factorizations, which we have used to test several explicit examples.
We start by collecting some simple properties of the (complex) completely positive semidefinite rank.
A first observation is that if a matrix $M$ admits a Gram representation by Hermitian positive semidefinite matrices of size $d$, then it also admits a Gram representation by real symmetric positive semidefinite matrices of size $2d$, and thus
$M$ is completely positive semidefinite with $\scpsd(M) \leq 2d$. This is based on the well-known fact that mapping a Hermitian $d \times d$ matrix $X$ to
\[
\frac{1}{\sqrt{2}} \begin{pmatrix} \Re(X) & \Im(X)\\ \Im(X)^{\sf T} & \Re(X) \end{pmatrix} \in \mathcal S^{2d}
\]
is an isometry that preserves positive semidefiniteness. The (complex) completely positive semidefinite rank is subadditive; that is, for $A, B \in \mathcal{CS}_{\hspace{-0.065em}+}^n$ and $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$, we have
\[
\cpsd_{\mathbb{K}}(A+B) \leq \cpsd_{\mathbb{K}}(A) + \cpsd_{\mathbb{K}}(B),
\]
which can be seen as follows: If $A$ is the Gram matrix of $X_1,\ldots,X_n \in \mathbb{K}^{k \times k}$ and $B$ is the Gram matrix of $Y_1,\ldots,Y_n \in \mathbb{K}^{r \times r}$, then $A+B$ is the Gram matrix of the matrices $X_1\oplus Y_1,\ldots,X_n \oplus Y_n \in \mathbb{K}^{(k+r) \times (k+r)}$.
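Both facts above are easy to verify numerically. The following sketch (in Python with numpy; the helper names are ours, not from the text) checks on random Hermitian positive semidefinite matrices that the real embedding preserves the trace inner product and positive semidefiniteness:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_embed(X):
    # The map from the text: X -> (1/sqrt(2)) [[Re X, Im X], [(Im X)^T, Re X]].
    A, B = X.real, X.imag
    return np.block([[A, B], [B.T, A]]) / np.sqrt(2)

def random_hermitian_psd(d):
    Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return Z @ Z.conj().T

d = 4
X, Y = random_hermitian_psd(d), random_hermitian_psd(d)

# The trace inner product is preserved (it is real for Hermitian matrices).
lhs = np.trace(X.conj().T @ Y).real
rhs = np.trace(real_embed(X).T @ real_embed(Y))
assert np.isclose(lhs, rhs)

# Positive semidefiniteness is preserved as well.
assert np.linalg.eigvalsh(real_embed(X)).min() > -1e-8
```

The check relies on $\Re(X)$ being symmetric and $\Im(X)$ antisymmetric, which is exactly the Hermitian condition.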
For $M \in \mathcal{CS}_{\hspace{-0.065em}+}^n$ we have the inequalities
\[
\binom{\scpsd(M)+1}{2} \geq \mathrm{rank}(M) \quad \text{and} \quad \hcpsd(M)^2 \geq \mathrm{rank}(M),
\]
since a factorization of $M$ by real symmetric (resp., complex Hermitian) positive semidefinite $r \times r$ matrices yields another factorization of $M$ by real vectors of size ${r+1\choose 2}$ (resp., by real vectors of size $r^2$).
Finally, the next lemma shows that if the (complex) completely positive semidefinite rank of a matrix is high, then each factorization by (Hermitian) positive semidefinite matrices must contain at least one matrix with high rank.
\begin{lemma}
Let $M \in \mathcal{CS}_{\hspace{-0.065em}+}^n$. For each Gram representation of $M$ by (Hermitian) positive semidefinite matrices $X_1,\ldots,X_n\in \mathbb{K}^{d\times d}$, with $\mathbb{K} \in \{\mathbb{R}, \mathbb{C}\}$, we have
\[
\cpsd_{\mathbb{K}}(M) \leq \rank(X_1+\ldots +X_n).
\]
\end{lemma}
\begin{proof}
Let $v_1,\ldots,v_{d-k}$ be an orthonormal basis of $\ker(X_1) \cap \ldots \cap \ker(X_n)$, and let $u_1,\ldots,u_k$ be an orthonormal basis of $(\ker(X_1) \cap \ldots \cap \ker(X_n))^\perp$. Let $U$ be the $d\times k$ matrix with columns $u_1,\ldots,u_k$, and let $V$ be the $d \times (d-k)$ matrix with columns $v_1,\ldots,v_{d-k}$, so that $P = \begin{pmatrix}
U& \hspace{-0.5em}V \end{pmatrix}$ is an orthogonal matrix.
Set $Y_i = U^* X_i U\in \mathbb{K}^{k\times k}$ for $i\in [n]$. Then $Y_i$ is (Hermitian) positive semidefinite.
Since $X_iV=0$ by construction, we have
\[
\big\langle Y_i, Y_j \big\rangle = \big\langle U^* X_i U, U^* X_j U \big\rangle =
\big\langle P^* X_i P, P^* X_j P \big\rangle=
\big\langle X_i,X_j\big\rangle
\]
for all $i,j \in [n]$, which shows $M = \mathrm{Gram}(Y_1,\ldots,Y_n)$.
We have
\[
\cpsd_{\mathbb K}(M) \leq k = d - \dim(\ker(X_1) \cap \ldots \cap \ker(X_n)),
\]
and because the matrices $X_1,\ldots,X_n$ are positive semidefinite, the right hand side is equal to
\[
d - \dim(\ker(X_1 + \ldots + X_n)) = \rank(X_1 + \ldots + X_n).\qedhere
\]
\end{proof}
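The compression step in this proof is easy to carry out numerically. The sketch below (Python/numpy; all names are ours) builds PSD factors of size $d=5$ whose sum has rank $3$, compresses them to size $3$, and checks that the Gram matrix is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)

# PSD 5x5 factors that all annihilate a common 2-dimensional subspace.
d, n, k = 5, 3, 3  # ambient size, number of factors, rank of their sum
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
U = Q[:, :k]                        # spans (common kernel)^perp
Xs = []
for _ in range(n):
    B = rng.standard_normal((k, k))
    Xs.append(U @ (B @ B.T) @ U.T)  # PSD, kernel contains span Q[:, k:]

S = sum(Xs)
r = np.linalg.matrix_rank(S)
assert r == k

# Compress: Y_i = U^T X_i U are k x k PSD matrices with the same Gram matrix.
Ys = [U.T @ X @ U for X in Xs]
gram_X = np.array([[np.trace(Xi @ Xj) for Xj in Xs] for Xi in Xs])
gram_Y = np.array([[np.trace(Yi @ Yj) for Yj in Ys] for Yi in Ys])
assert np.allclose(gram_X, gram_Y)
```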
\subsection{A connection to the existence of Hadamard matrices}
\label{sec:hadamard}
Consider the $2k \times 2k$ matrix
\[
M_k = \begin{pmatrix} I_k & \frac{1}{k} J_k \\ \frac{1}{k} J_k & I_k \end{pmatrix},
\]
where $I_k$ is the $k \times k$ identity matrix and $J_k$ the $k \times k$ all-ones matrix.
The completely positive rank of $M_k$ equals $k^2$, which is well known and easy to check (see Proposition \ref{cpranknsquared} below). This means the completely positive rank of these matrices is within a constant factor of the upper bound \eqref{eqcprank}. The significance of the matrices $M_k$ stems from the recently disproved (see \cite{Bomze, Bomze2}) Drew-Johnson-Loewy conjecture \cite{DJL94}, which stated that $\lfloor n^2/4 \rfloor$ is an upper bound on the completely positive rank of $n \times n$ matrices; the matrices $M_k$ attain this bound with equality.
It is therefore natural to ask whether the matrices $M_k$ also have large (quadratic in $k$) completely positive semidefinite rank. As we see below this is not the case. We show that the complex completely positive semidefinite rank is $k$, and we show that the real completely positive semidefinite rank is equal to $k$ if and only if a real Hadamard matrix of order $k$ exists, which suggests that determining the completely positive semidefinite rank is a difficult problem in general.
A real (complex) {Hadamard} matrix of order $k$ is a $k\times k$ matrix with pairwise orthogonal columns and whose entries are $\pm 1$-valued (complex valued with unit modulus). A complex Hadamard matrix exists for any order; take for example
\begin{equation}\label{eqHk}
(H_k)_{i, j} = e^{2 \pi \mathbf{i}(i-1)(j-1)/k} \quad \text{ for } \quad i,j\in [k].
\end{equation}
On the other hand, it is still an open conjecture whether a real Hadamard matrix exists for each order $k$ that is a multiple of 4.
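The matrix $H_k$ from \eqref{eqHk} is the $k\times k$ Fourier matrix, and its defining properties are immediate to check numerically; a small sketch in Python with numpy (function name ours):

```python
import numpy as np

def complex_hadamard(k):
    # (H_k)_{ij} = exp(2*pi*i*(i-1)*(j-1)/k) from the text, with 0-based indices.
    idx = np.arange(k)
    return np.exp(2j * np.pi * np.outer(idx, idx) / k)

k = 5
H = complex_hadamard(k)
assert np.allclose(np.abs(H), 1)                   # unit-modulus entries
assert np.allclose(H.conj().T @ H, k * np.eye(k))  # pairwise orthogonal columns
```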
For completeness we first give a proof that the completely positive rank is $k^2$. Here, the support of a vector $u\in\mathbb{R}^d$ is the set of indices $i \in [d]$ for which $u_i\ne 0$.
\begin{proposition} \label{cpranknsquared}
The completely positive rank of $M_k$ is equal to $k^2$.
\end{proposition}
\begin{proof}
For $i\in [k]$ consider the vectors
$v_i = \smash{1/\sqrt{k}} \, e_i \otimes \mathbf{1}$ and $u_i = \smash{1/\sqrt{k}}\, \mathbf{1} \otimes e_i$, where $e_i$ is the $i$th basis vector in $\mathbb{R}^k$ and $\mathbf{1}$ is the all-ones vector in $\mathbb{R}^k$. The vectors $v_1,\ldots,v_k, u_1,\ldots,u_k$ are nonnegative and form a Gram representation of $M_k$, which shows $\text{\rm cp-rank}(M_k)\le k^2$.
Suppose $M_k = \text{Gram}(v_1, v_2, \ldots, v_k, u_1, u_2, \ldots, u_k)$ with $v_i,u_i \in \mathbb{R}^d_+$. In the remainder of the proof we show $d\ge k^2$. We have $(M_k)_{i,j} = \delta_{ij}$ for $1 \leq i,j \leq k$. Since the vectors $v_i$ are nonnegative, they must have disjoint supports. The same holds for the vectors $u_1,\ldots,u_k$. Since $(M_k)_{i,j} = 1/k > 0$ for $1 \leq i \leq k$ and $k+1 \leq j \leq 2k$, the support of $v_i$ overlaps with the support of $u_j$ for each $i$ and $j$. This means that for each $i \in [k]$, the size of the support of the vector $v_i$ is at least $k$. This is only possible if $d \geq k^2$.
\end{proof}
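The explicit Gram representation used in the first part of the proof can be checked directly; a small numpy sketch (variable names ours):

```python
import numpy as np

k = 4
e, one = np.eye(k), np.ones(k)
# v_i = e_i tensor 1 / sqrt(k) and u_i = 1 tensor e_i / sqrt(k), in R^{k^2}.
vs = [np.kron(e[i], one) / np.sqrt(k) for i in range(k)]
us = [np.kron(one, e[i]) / np.sqrt(k) for i in range(k)]
vecs = vs + us

G = np.array([[x @ y for y in vecs] for x in vecs])
Mk = np.block([[np.eye(k), np.ones((k, k)) / k],
               [np.ones((k, k)) / k, np.eye(k)]])

assert all((v >= 0).all() for v in vecs)   # nonnegative factors
assert np.allclose(G, Mk)                  # their Gram matrix is M_k
```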
\begin{proposition}
For each $k$ we have
$
\hcpsd(M_k) = k.
$
Moreover, we have $\scpsd(M_k)=k$ if and only if there exists a real Hadamard matrix of order $k$.
\end{proposition}
\begin{proof}
The lower bound $\hcpsd(M_k) \geq k$ follows because $I_k$ is a principal submatrix of $M_k$ and $\hcpsd(I_k) = k$. To show $\hcpsd(M_k) \leq k$, we give a factorization by Hermitian positive semidefinite $k \times k$ matrices. For this consider the complex
Hadamard matrix $H_k$ in (\ref{eqHk}) and define the factors
\[X_i = e_i e_i^{\sf T} \quad \text{and} \quad Y_i= \frac{u_i u_i^*}{k} \quad \text{for} \quad i \in [k],\] where $e_i$ is the $i$th standard basis vector of $\mathbb{R}^k$ and $u_i$ is the $i$th column of $H_k$. By direct computation it follows that $M_k = \Gram(X_1,\ldots,X_{k}, Y_1,\ldots, Y_k)$.
We now show that $\scpsd(M_k)=k$ if and only if there exists a real Hadamard matrix of order $k$. One direction follows directly from the above proof: If a real Hadamard matrix of size $k$ exists, then we can replace $H_k$ by this real matrix and this yields a factorization by real positive semidefinite $k \times k$ matrices.
Now assume $\scpsd(M_k) = k$ and let $X_1, \ldots, X_k, Y_1, \ldots, Y_k \in \mathcal S_+^k$ be a Gram representation of $M_k$.
We first show there exist two orthonormal bases $u_1, \ldots, u_k$ and $v_1, \ldots, v_k$ of $\mathbb{R}^k$ such that $X_i = u_i u_i^{\sf T}$ and $Y_i = v_i v_i^{\sf T}$. For this we observe that $I = \Gram(X_1,\ldots,X_k)$, which implies $X_i \neq 0$ and $X_i X_j = 0$ for all $i \neq j$. Hence, for all $i \neq j$, the range of $X_j$ is contained in the kernel of $X_i$. Therefore the range of $X_i$ is orthogonal to the range of $X_j$. We now have $\smash{\sum_{i=1}^k} \mathrm{dim}(\mathrm{range}(X_i)) \leq k$ and $\mathrm{dim}(\mathrm{range}(X_i)) \geq 1$ for all $i$. From this it follows that $\rank(X_i) = 1$ for all $i \in [k]$. This means there exist $u_1, \ldots, u_k \in \mathbb{R}^k$ such that $X_i = u_i u_i^{\sf T}$ for all $i$. From $I = \Gram(X_1,\ldots,X_k)$ it follows that the vectors $u_1,\ldots,u_k$ form an orthonormal basis of $\mathbb{R}^k$. The same argument can be made for the matrices $Y_i$, thus $Y_i=v_iv_i^{\sf T}$ and the vectors $v_1,\ldots,v_k$ form an orthonormal basis of $\mathbb{R}^k$.
Up to an orthogonal transformation we may assume that the first basis is the standard basis; that is, $u_i = e_i$ for $i \in [k]$. We then obtain
\[
\frac{1}{k} = (M_k)_{i,j+k} = \ip{e_i,v_j}^2 = \big((v_j)_i\big)^2 \quad \text{for} \quad i,j \in [k],
\]
hence $(v_j)_i = \pm 1/\sqrt{k}$. Therefore, the $k \times k$ matrix whose $j$th column is $\sqrt{k}\, v_j$ is a real Hadamard matrix.
\end{proof}
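The size-$k$ Hermitian factorization from the proof can likewise be verified numerically; a sketch with numpy (names ours), using the Fourier-type Hadamard matrix \eqref{eqHk}:

```python
import numpy as np

k = 3
idx = np.arange(k)
H = np.exp(2j * np.pi * np.outer(idx, idx) / k)  # complex Hadamard matrix H_k

I = np.eye(k)
Xs = [np.outer(I[i], I[i]).astype(complex) for i in range(k)]   # X_i = e_i e_i^T
Ys = [np.outer(H[:, i], H[:, i].conj()) / k for i in range(k)]  # Y_i = u_i u_i^* / k
fac = Xs + Ys

G = np.array([[np.trace(A.conj().T @ B) for B in fac] for A in fac]).real
Mk = np.block([[np.eye(k), np.ones((k, k)) / k],
               [np.ones((k, k)) / k, np.eye(k)]])
assert np.allclose(G, Mk)
```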
The above proposition leaves open the exact determination of $\scpsd(M_k)$ for the cases where a real Hadamard matrix of order $k$ does not exist. Extensive experimentation using the heuristic from Section~\ref{sec:heuristic} suggests that for $k = 3, 5, 6, 7$ the real completely positive semidefinite rank of $M_k$ equals $2k$, which leads to the following question:
\begin{question}
Is the real completely positive semidefinite rank of $M_k$ equal to $2k$ if a real Hadamard matrix of size $k \times k$ does not exist?
\end{question}
We also used the heuristic from Section~\ref{sec:heuristic} to check numerically that the aforementioned matrices from \cite{Bomze}, which have completely positive rank greater than $\lfloor n^2/4 \rfloor$, have small (smaller than $n$) real completely positive semidefinite rank. In fact, in our numerical experiments we never found a completely positive $n \times n$ matrix for which we could not find a factorization in dimension $n$, which leads to the following question:
\begin{question}
Is the real (or complex) completely positive semidefinite rank of a completely positive $n \times n$ matrix upper bounded by $n$?
\end{question}
\subsection{A heuristic for finding Gram representations}
\label{sec:heuristic}
In this section we give an adaptation to the symmetric setting of the seesaw method from \cite{WernerWolf2001}, which is used to find good quantum strategies for nonlocal games. Given a matrix $M \in \mathcal{CS}_{\hspace{-0.065em}+}^n$ with $\scpsd(M) \leq d$, we give a heuristic to find a Gram representation of $M$ by positive semidefinite $d \times d$ matrices. Although this heuristic is not guaranteed to converge to a factorization of $M$, for small $n$ and $d$ (say, $n,d \leq 10$) it works well in practice by restarting the algorithm several times. The following algorithm seeks to minimize the function
\[
E(X_1,\ldots,X_n) =\underset{i,j \in [n]}{\max} |\langle X_i, X_j \rangle - M_{i,j}|.
\]
\begin{algorithm}
Initialize the algorithm by setting $k=1$ and generating random matrices $X_1^{0},\ldots,X_n^{0} \in \mathcal S_+^d$ that satisfy $\langle X_i^{0}, X_i^{0} \rangle = M_{i,i}$ for all $i \in [n]$. Iterate the following steps:
\begin{enumerate}
\item Let $(\delta,Y_1,\ldots,Y_n)$ be a (near) optimal solution of the semidefinite program
\[
\min \Big\{ \delta : \delta \in \mathbb{R}_+, \, Y_1,\ldots,Y_n \in \mathcal S_+^d, \, \Big| \big\langle X_i^{k-1}, Y_j \big\rangle - M_{i,j} \Big| \leq \delta \text{ for } i,j \in [n] \Big\}.
\]
\item Perform a line search to find the scalar $r \in [0, 1]$ minimizing
\[
E\big( (1-r) X_1^{k-1} + r Y_1, \ldots, (1-r) X_n^{k-1} + r Y_n\big),
\]
and set $X_i^k = (1-r) X_i^{k-1} + r Y_i$ for each $i \in [n]$.
\item If $E(X_1^k,\ldots,X_n^k)$ is not small enough, increase $k$ by one and go to step (1). Otherwise, return the matrices $X_1^k$, $\ldots$, $X_n^k$.
\end{enumerate}
\end{algorithm}
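For illustration, here is a simplified Python/numpy rendition of this scheme. It is not the algorithm above: the SDP in step (1) is replaced by a per-column least-squares solve followed by projection onto the PSD cone (so no SDP solver is needed), and the line search in step (2) is a grid search. Since $r=0$ is on the grid, the error $E$ is non-increasing along the iterations:

```python
import numpy as np

rng = np.random.default_rng(3)

def gram(Xs):
    return np.array([[np.trace(A @ B) for B in Xs] for A in Xs])

def error(Xs, M):
    # The max-norm error E(X_1, ..., X_n) from the text.
    return np.abs(gram(Xs) - M).max()

def random_psd(d):
    B = rng.standard_normal((d, d))
    return B @ B.T

def psd_project(S):
    # Symmetrize and clip negative eigenvalues.
    S = (S + S.T) / 2
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0, None)) @ V.T

def seesaw(M, d, iters=60):
    n = M.shape[0]
    # Random PSD start, scaled so that <X_i, X_i> = M_ii.
    Xs = [random_psd(d) for _ in range(n)]
    Xs = [X * np.sqrt(M[i, i] / np.trace(X @ X)) for i, X in enumerate(Xs)]
    history = [error(Xs, M)]
    for _ in range(iters):
        # Step (1), simplified: least-squares solve for each Y_j instead
        # of the SDP, then project onto the PSD cone.
        A = np.array([X.reshape(-1) for X in Xs])            # n x d^2
        Ys = [psd_project(np.linalg.lstsq(A, M[:, j], rcond=None)[0]
                          .reshape(d, d)) for j in range(n)]
        # Step (2): grid line search over r in [0, 1].
        rs = np.linspace(0, 1, 21)
        r = min(rs, key=lambda r: error([(1 - r) * X + r * Y
                                         for X, Y in zip(Xs, Ys)], M))
        Xs = [(1 - r) * X + r * Y for X, Y in zip(Xs, Ys)]
        history.append(error(Xs, M))
    return Xs, history

# Target: Gram matrix of random PSD 3x3 matrices, so a factorization
# in dimension d = 3 certainly exists.
d, n = 3, 4
M = gram([random_psd(d) for _ in range(n)])
Xs, history = seesaw(M, d)
assert all(b <= a + 1e-9 for a, b in zip(history, history[1:]))
```

This simplified variant is only meant to show the alternating structure; for serious experiments the SDP in step (1) should be solved exactly, as described above.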
\section{The set of bipartite correlations}\label{secCmn}
In this section we define the set $\Cor(m,n)$ of bipartite correlations and we discuss properties of the extreme points of $\Cor(m,n)$, which will play a crucial role in the construction of $\mathcal{CS}_{\hspace{-0.065em}+}$-matrices with large complex completely positive semidefinite rank.
In particular we give a characterization of the extreme points of $\Cor(m,n)$ in terms of extreme points of the related set ${\mathcal E}_{m+n}$ of correlation matrices. We use it to give a simple construction of a class of extreme points of $\Cor(m,n)$ with rank $r$, when $m=n={r+1\choose 2}$.
We also revisit conditions for extreme points introduced by Tsirelson \cite{Tsirelson:87} and point out links with universal rigidity. Based on these we can construct extreme points of $\Cor(m,n)$ with rank $r$ when $m=r$ and $n={r\choose 2}+1$, which are used to prove our main result (Theorem~\ref{theomain}).
\subsection{Bipartite correlations and correlation matrices}
A matrix $C \in \mathbb{R}^{m \times n}$ is called a {\em bipartite correlation matrix} if there exist real unit vectors $x_1,\ldots,x_m,$ $y_1,\ldots,y_n\in \mathbb{R}^d$ (for some $d\ge 1$) such that
$C_{s,t} = \langle x_s, y_t \rangle$ for all $s \in [m]$ and $t \in [n]$. Following Tsirelson \cite{Tsirelson:87}, any such system of real unit vectors
is called a {\em $C$-system}. We let $\Cor(m, n)$ denote the set of all $m\times n$ bipartite correlation matrices.
The \emph{elliptope} ${\mathcal E}_n$ is defined as
\[
\mathcal{E}_n = \Big\{ E \in \mathcal S^n_+: E_{ii} = 1 \text{ for } i =1, \ldots, n\Big\};
\]
its elements are the {\em correlation matrices}, which can alternatively be defined as all matrices of the form $(\langle z_i,z_j\rangle)_{i,j=1}^n$ for some real unit vectors $z_1,\ldots,z_n\in \mathbb{R}^d$ ($d\ge 1$).
We have the surjective projection
\begin{equation}\label{eqpi}
\pi \colon \mathcal E_{m+n} \to \Cor(m,n), \, \begin{pmatrix} Q & C \\ C^{\sf T} & R \end{pmatrix} \mapsto C.
\end{equation}
Hence, $\Cor(m,n)$ is a projection of the elliptope ${\mathcal E}_{m+n}$ and therefore a convex set. Given $C \in \Cor(m,n)$, any matrix $E \in \mathcal E_{m+n}$ such that $\pi(E)=C$ is called an \emph{extension} of $C$ to the elliptope and we let $\operatorname{fib}(C)$ denote the fiber (the set of extensions) of $C$.
Theorem~\ref{theoExCor} below characterizes extreme points of $\Cor(m,n)$ in terms of extreme points of $\mathcal{E}_{m+n}$.
It is based on two intermediary results. The first result (whose proof is easy) relates extreme points $C\in\Cor(m,n)$ to properties of their set of extensions $\operatorname{fib}(C)$. It is shown in \cite{ELV14} in a more general setting.
\begin{lemma}[{\cite[Lemma 2.4]{ELV14}}] \label{lemELV}
Let $C\in \Cor(m,n)$. Then $C$ is an extreme point of $\Cor(m,n)$ if and only if the set $\operatorname{fib}(C)$ is a face of $\mathcal E_{m+n}$. Moreover, if $C$ is an extreme point of $\Cor(m,n)$, then every extreme point of $\operatorname{fib}(C)$ is an extreme point of $\mathcal E_{m+n}$.
\end{lemma}
The second result (from Tsirelson \cite{Tsirelson:87}) shows that every extreme point $C$ of $\Cor(m,n)$ has a unique extension $E$ in $\mathcal E_{m+n}$; we include a proof for completeness.
\begin{lemma}[\cite{Tsirelson:87}] \label{lemExVec}
Assume $C$ is an extreme point of $\Cor(m,n)$.
\begin{itemize}
\item[(i)] If $x_1,\ldots,x_m,y_1,\ldots,y_n$ is a $C$-system, then
\[
\Span\{x_1,\ldots,x_m\}=\Span\{y_1,\ldots,y_n\}.
\]
\item[(ii)] The matrix $C$ has a unique extension to a matrix $E \in \mathcal E_{m+n}$, and there exists a $C$-system $x_1,\ldots,x_m,y_1,\ldots,y_n \in \mathbb{R}^r$, with $r = \rank(C)$, such that
\[
E = \Gram(x_1,\ldots,x_m,y_1,\ldots,y_n).
\]
\end{itemize}
\end{lemma}
\begin{proof}
We will use the following observation: Each matrix $C=(\langle a_s,b_t \rangle)_{s\in [m],t\in [n]}$, where $a_s,b_t$ are vectors with $\|a_s\|,\|b_t\|\le 1$, belongs to $\Cor(m,n)$ since it satisfies
\[
C_{s,t} = \left\langle \begin{pmatrix}a_s\\ \sqrt{1-\|a_s\|^2}\\ 0 \end{pmatrix}, \begin{pmatrix} b_t\\ 0\\ \sqrt {1-\|b_t\|^2}\end{pmatrix} \right\rangle\ \ \text{ for all } (s,t)\in [m]\times [n].
\]
(i) Set $V=\Span\{x_1,\ldots,x_m\}$ and assume $y_k \not\in V$ for some $k\in [n]$. Let
$w$ denote the orthogonal projection of $y_k$ onto $V$. Then $\|w\|<1$ and one can choose a nonzero vector $u \in V$ such that $\|w \mathbin{\mathpalette\@mypm\relax} u\| \le 1$. Define the matrices $C^{\mathbin{\mathpalette\@mypm\relax}} \in \mathbb{R}^{m \times n}$ by
\[
C_{s,t}^{\mathbin{\mathpalette\@mypm\relax}} =
\begin{cases}
\langle x_s, w \mathbin{\mathpalette\@mypm\relax} u \rangle & \text{if } t = k,\\
\langle x_s, y_t \rangle & \text{if } t \neq k.
\end{cases}
\]
Then, $C^{\mathbin{\mathpalette\@mypm\relax}} \in \Cor(m,n)$ (by the above observation) and $C=(C^++C^-)/2$. As $C$ is an extreme point of $\Cor(m,n)$ one must have $C=C^+=C^-$. Hence $u$ is orthogonal to each $x_s$ and thus $u=0$, a contradiction. This shows the inclusion
$\Span\{y_1,\ldots,y_n\} \subseteq \Span\{x_1,\ldots,x_m\}$ and the reverse inclusion follows in the same way.
(ii) Assume $\{x_s',y_t'\}$ and $\{x_s'',y_t''\}$ are two $C$-systems, and write $S=[m]$, $T=[n]$. We show $\langle x_r',x_s'\rangle = \langle x_r'',x_s''\rangle$ for all $r,s\in S$ and
$\langle y_t',y_u'\rangle=\langle y_t'',y_u''\rangle$ for all $t,u\in T$.
For this define the vectors
\[
x_s = \frac{x'_s \oplus x_s''}{\sqrt{2}} \quad \text{and} \quad y_t= \frac{y_t'\oplus y_t''}{\sqrt{2}},
\]
which again form a $C$-system. Using (i), for any $s\in S$, there exist scalars $\lambda_t^{s}$ such that $x_s=\sum_{t\in T} \lambda^{s}_t y_t$ and thus
$x_s'=\sum_{t\in T} \lambda ^{s}_t y_t'$ and $x_s''=\sum_{t \in T} \lambda ^{s}_t y_t''$. This shows
\[
\langle x_r',x_s'\rangle= \sum_{t \in T} \lambda^{r}_t \langle y_t', x_s'\rangle = \sum_{t \in T}\lambda^{r}_t C_{s,t}=
\sum_{t \in T}\lambda^{r}_t\langle y_t'',x_s''\rangle = \langle x_r'',x_s''\rangle
\]
for all $r,s \in S$. The analogous argument shows $\langle y_t',y_u'\rangle=\langle y_t'',y_u''\rangle$ for all $t,u \in T$. This shows $C$ has a unique extension to a matrix $E \in {\mathcal E}_{m+n}$.
Finally, we show that $\rank (E)=\rank (C)$.
Say $E$ is the Gram matrix of $x_1,\ldots,x_m,y_1,\ldots,y_n$.
In view of (i), $\rank(E)=\rank\{x_1,\ldots,x_m\}$ and thus it suffices to show that
$\rank\{x_1,\ldots,x_m\}\le \rank (C)$.
For this note that if $\{x_s: s\in I\}$ (for some $I\subseteq S$) is linearly independent then the corresponding rows of $C$ are linearly independent, since
$\sum_{s\in I}\lambda_s \langle x_s,y_t\rangle =0$ (for all $t\in T$) implies $\sum_{s\in I}\lambda_s x_s=0$ (using (i)) and thus $\lambda_s=0$ for all $s$.
\end{proof}
\begin{theorem} \label{theoExCor}
A matrix $C$ is an extreme point of $\Cor(m,n)$ if and only if $C$ has a unique extension to a matrix $E\in {\mathcal E}_{m+n}$ and $E$ is an extreme point of ${\mathcal E}_{m+n}$.
\end{theorem}
\begin{proof}
Direct application of Lemma \ref{lemELV} and Lemma \ref{lemExVec} (ii).
\end{proof}
We can use the following lemma to construct explicit examples of extreme points of $\Cor(m,n)$ for the case $m=n$.
\begin{lemma} \label{lemElliptopeCor}
Each extreme point of $\mathcal{E}_n$ is an extreme point of $\Cor(n,n)$.
\end{lemma}
\begin{proof}
Let $C$ be an extreme point of $\mathcal{E}_n$. Define the matrix
\[
E = \begin{pmatrix} C & C \\ C & C \end{pmatrix}.
\]
Then $E \in \mathcal{E}_{2n}$ is an extension of $C$. In view of Theorem \ref{theoExCor} it suffices to show that $E$ is the unique extension of $C$ and that $E$ is an extreme point of $\mathcal{E}_{2n}$. With $e_1,\ldots,e_n$ denoting the standard unit vectors in $\mathbb{R}^n$, observe that the vectors $e_i\oplus -e_i$ ($i\in [n]$) lie in the kernel of any matrix $E'\in \operatorname{fib}(C)$. Indeed, since $E'$ and $C$ have an all-ones diagonal we have
\[
(e_i \oplus - e_i)^{\sf T} E' (e_i \oplus -e_i) = 0,
\]
and since $E'$ is positive semidefinite this implies that $e_i \oplus -e_i \in \ker(E')$. Writing $E' = \begin{pmatrix} Q & C \\ C^{\sf T} & R \end{pmatrix}$, these kernel conditions force $Qe_i = Ce_i$ and $C^{\sf T} e_i = Re_i$ for all $i\in[n]$, so that $Q=R=C$. Hence $\operatorname{fib}(C)=\{E\}$.
We now show that $E$ is an extreme point of $\mathcal{E}_{2n}$. For this let $E_1, E_2 \in \mathcal{E}_{2n}$ and $0 < \lambda < 1$ such that $E = \lambda E_1 + (1-\lambda) E_2$.
As $E_1,E_2$ are positive semidefinite, the kernel of $E$ is the intersection of the kernels of $E_1$ and $E_2$.
Hence the vectors $e_i\oplus -e_i$ belong to the kernels of $E_1$ and $E_2$ and thus
\[
E_1 = \begin{pmatrix} C_1 & C_1 \\ C_1 & C_1 \end{pmatrix} \quad \text{and} \quad E_2 = \begin{pmatrix} C_2 & C_2 \\ C_2 & C_2 \end{pmatrix}
\]
for some $C_1,C_2\in {\mathcal E}_n$. Hence, $C=\lambda C_1+(1-\lambda) C_2$, which implies $C=C_1=C_2$, since $C$ is an extreme point of ${\mathcal E}_n$. Thus $E=E_1=E_2$, which completes the proof.
\end{proof}
The above lemma shows how to construct extreme points of $\Cor(n,n)$ from extreme points of the elliptope $\mathcal E_n$. Li and Tam \cite{LiTam} give the following characterization of the extreme points of ${\mathcal E}_n$.
\begin{theorem}[\cite{LiTam}] \label{theoLiTam}
Consider a matrix $E\in {\mathcal E}_n$ with rank $r$ and unit vectors $z_1,\ldots,z_n\in \mathbb{R}^r$ such that $E=\Gram(z_1,\ldots,z_n)$. Then $E$ is an extreme point of ${\mathcal E}_n$ if and only if
\begin{equation}\label{eqrkE}
\binom{r+1}{2} = \dim(\Span\{z_1z_1^{\sf T} ,\ldots,z_nz_n^{\sf T} \}).
\end{equation}
In particular, if $E$ is an extreme point of ${\mathcal E}_n$, then ${r+1\choose 2}\le n$.
\end{theorem}
\begin{example} [\cite{LiTam}] \label{LiTamLem}
For each integer $r\ge 1$ there exists an extreme point of $\mathcal E_n$ of rank $r$, where $n = \binom{r+1}{2}$. For example, let $e_1, \ldots, e_r$ be the standard basis vectors of $\mathbb{R}^r$ and define
\[
E = \mathrm{Gram}\Big(e_1, \ldots, e_r, \frac{e_1 + e_2}{\sqrt{2}}, \frac{e_1 + e_3}{\sqrt{2}}, \ldots, \frac{e_{r-1} + e_r}{\sqrt{2}}\Big).
\]
Then $E$ is an extreme point of ${\mathcal E}_n$ of rank $r$.
\end{example}
Note that the above example is optimal in the sense that a rank $r$ extreme point of ${\mathcal E}_n$ can exist only if $n \geq \smash{\binom{r+1}{2}}$ (by Theorem~\ref{theoLiTam}). By combining this with Lemma~\ref{lemElliptopeCor}, this gives a class of extreme points of $\Cor(m,n)$ with rank $r$ and $m=n={r+1\choose 2}$.
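The rank and the extremality criterion \eqref{eqrkE} for Example~\ref{LiTamLem} are easy to verify numerically; a sketch in Python with numpy (for $r=4$, so $n=10$; names ours):

```python
import numpy as np
from itertools import combinations

r = 4
e = np.eye(r)
zs = [e[i] for i in range(r)] + [(e[i] + e[j]) / np.sqrt(2)
                                 for i, j in combinations(range(r), 2)]
n = len(zs)
assert n == r * (r + 1) // 2           # n = binom(r+1, 2)

E = np.array([[x @ y for y in zs] for x in zs])   # Gram matrix
assert np.allclose(np.diag(E), 1)                 # E lies in the elliptope
assert np.linalg.matrix_rank(E) == r              # rank r

# Criterion (eqrkE): the z_i z_i^T span the full space of symmetric matrices.
outers = np.array([np.outer(z, z).reshape(-1) for z in zs])
assert np.linalg.matrix_rank(outers) == r * (r + 1) // 2
```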
\subsection{Tsirelson's bound}
If $C$ is an extreme point of $\Cor(m,n)$ with rank $r$, then by Theorems~\ref{theoExCor} and \ref{theoLiTam} we have ${r+1\choose 2}\le m+n$. Tsirelson~\cite{Tsirelson} claimed the stronger bound ${r+1\choose 2}\le m+n-1$ (see Corollary~\ref{corbound} below).
In the rest of this section we show how to derive this stronger bound of Tsirelson (which is given in \cite{Tsirelson} without proof). In the next section, we construct two classes of extreme bipartite correlation matrices, of which one meets Tsirelson's bound. To show Tsirelson's bound we need to investigate in more detail the unique extension property for extreme points of $\Cor(m,n)$.
Let $C\in \Cor(m,n)$ with rank $r$, let $\{x_s\}$, $\{y_t\}$ be a $C$-system in $\mathbb{R}^r$, and let
\[
E=\Gram(x_1,\ldots,x_m,y_1,\ldots,y_n)\in{\mathcal E}_{m+n}.
\]
In view of Theorem~\ref{theoExCor}, if $C$ is an extreme point of $\Cor(m,n)$,
then $E$ is the unique extension of $C$ in $ {\mathcal E}_{m+n}$.
This uniqueness property can be rephrased as the requirement that an associated semidefinite program has a unique solution. Namely, consider the following dual pair of semidefinite programs:
\begin{equation}\label{eqsdpP}
\max \Big\{ 0 : X \in \mathcal S_+^{S \cup T}, \, X_{k,k}=1 \text{ for } k\in S \cup T, \, X_{s,t} = C_{s,t} \text{ for } s\in S,t\in T \Big\},
\end{equation}
\begin{equation}\label{eqsdpD}
\min \Big\{ \sum_{s\in S} \lambda_s +\sum_{t\in T}\mu_t +2\sum_{s\in S,t\in T}W_{s,t} C_{s,t} : \Omega=\begin{pmatrix}\text{\rm Diag}(\lambda) & W \cr W^{\sf T} & \text{\rm Diag}(\mu) \end{pmatrix} \in \mathcal S_+^{S \cup T} \Big\}.
\end{equation}
The feasible region of problem (\ref{eqsdpP}) consists of all possible extensions of $C$ in ${\mathcal E}_{m+n}$, and the feasible region of (\ref{eqsdpD}) consists of the positive semidefinite matrices $\Omega$ whose support (consisting of all off-diagonal pairs $(i,j)$ with $\Omega_{i,j}\ne 0$) is contained in the complete bipartite graph with bipartition $S\cup T$. Moreover,
the optimal values of both problems are equal to $0$; in particular, every feasible solution of (\ref{eqsdpP}) is optimal, since its objective is identically zero. Finally, for any primal feasible $X$ and dual optimal $\Omega$ we have $\langle \Omega, X\rangle=0$ and thus $\Omega X=0$, which implies $\rank(X)+\rank(\Omega) \le m+n$.
Theorem \ref{theoLV} below (shown in \cite{LV14} in the more general context of universal rigidity) shows that if equality $\rank(X)+\rank(\Omega)=m+n$ holds (also known as \emph{strict complementarity}), then $X$ is in fact the {\em unique} feasible solution of program (\ref{eqsdpP}), and thus $C$ has a {\em unique} extension in ${\mathcal E}_{m+n}$.
\begin{theorem}
\label{theoLV}
Let $C\in \Cor(m,n)$ and let $\{x_s\}$, $\{y_t\}$ be a $C$-system spanning $\mathbb{R}^r$. Assume $E=\Gram(x_1,\ldots,x_m,y_1,\ldots,y_n)$ is an extreme point of ${\mathcal E}_{m+n}$. If there exists an optimal solution $\Omega$ of program (\ref{eqsdpD}) with $\rank(\Omega)=m+n-r$, then $E$ is the only extension of $C$ in ${\mathcal E}_{m+n}$.
\end{theorem}
\begin{proof}
Apply \cite[Theorem 3.2]{LV14} to the bar framework $G(\mathbf p)$, where $G$ is the complete bipartite graph $K_{m,n}$ with bipartition $S\cup T$ and $\mathbf p=\{x_s\ (s\in S),\, y_t\ (t\in T)\}$. Note that the conditions (v), (vi) in \cite[Theorem 3.2]{LV14} follow from $\Omega E=0$ and the fact that the vectors $\{x_s\}, \{y_t\}$ form a $C$-system spanning $\mathbb{R}^r$.
\end{proof}
In addition one can relate uniqueness of an extension of $C$ in the elliptope to the existence of a quadric separating the two point sets $\{x_s\}$ and $\{y_t\}$ (Theorem~\ref{theoExCorTsi} below). Roughly speaking, such a quadric allows us to construct a suitable optimal dual solution $\Omega$ and to apply Theorem \ref{theoLV}. This property was stated by Tsirelson \cite{Tsirelson}, albeit without proof. Interestingly, an analogous result was shown recently by Connelly and Gortler \cite{CG} in the setting of universal rigidity. We give a sketch of a proof of Theorem~\ref{theoExCorTsi}, using Theorem~\ref{theoLV}, arguments from \cite{CG}, and the following basic property of semidefinite programs (which can be seen as an analog of Farkas' lemma for linear programs).
\begin{lemma} \label{theoFarkas}
Let $A_1,\ldots,A_m \in \mathcal S^n$ and $b \in \mathbb{R}^m$ be given, and assume that there exists a positive semidefinite matrix $X$ such that $\langle A_j, X \rangle = b_j$ for all $j \in [m]$.
Then exactly one of the following two alternatives holds:
\begin{itemize}
\item[(i)]
There exists a matrix $X \succ 0$ such that $\langle A_j, X\rangle = b_j$ for all $j \in [m]$.
\item[(ii)] There exists $y \in \mathbb{R}^m$ such that $\Omega=\sum_{j=1}^m y_jA_j \succeq 0$, $\Omega\ne 0$, and $\Omega X=0$ for every positive semidefinite matrix $X$ satisfying $\langle A_j, X\rangle = b_j$ for all $j \in [m]$.
\end{itemize}
\end{lemma}
\begin{theorem}[{\cite[Theorems 2.21-2.22]{Tsirelson}}]
\label{theoExCorTsi}
Let $C\in \Cor(m,n)$, let $\{x_s\}$, $\{y_t\}$ be a $C$-system spanning $\mathbb{R}^r$, and let
$E=\Gram(x_1,\ldots,x_m,y_1,\ldots,y_n)\in{\mathcal E}_{m+n}$.
\begin{itemize}
\item[(i)] If $C$ is an extreme point of $\Cor(m,n)$, then there exist nonnegative scalars $\lambda_1,\ldots,\lambda_m,$ $\mu_1,\ldots,\mu_n$, not all equal to zero, such that
\begin{equation}\label{eqTsi}
\sum_{s=1}^m \lambda_s x_sx_s^{\sf T} = \sum_{t=1}^n \mu_t y_ty_t^{\sf T} .
\end{equation}
\item[(ii)] If $E$ is an extreme point of ${\mathcal E}_{m+n}$ and there exist strictly positive scalars $\lambda_1,\ldots,\lambda_m,\mu_1,\ldots,\mu_n$ for which relation (\ref{eqTsi}) holds, then $C$ is an extreme point of $\Cor(m,n)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) By assumption, $C$ is an extreme point of $\Cor(m,n)$, so by Lemma~\ref{lemExVec}~(ii) $E$ is the only feasible solution of the program (\ref{eqsdpP}).
As $E$ has rank $r<m+n$, it follows that the program (\ref{eqsdpP}) does not have a positive definite feasible solution.
Applying Lemma~\ref{theoFarkas} it follows that there exists a nonzero matrix $\Omega$ that is feasible for the dual program (\ref{eqsdpD}) and satisfies $\Omega E=0$.
This gives:
$$\lambda_s x_s +\sum_{t\in T} W_{s,t}y_t=0 \ (s\in S),\ \
\mu_ty_t +\sum_{s\in S} W_{s,t} x_s=0\ (t\in T).$$
Since $\Omega\succeq 0$, the scalars $\lambda_s,\mu_t$ are nonnegative; moreover, they are not all zero, since otherwise $\Omega$ would be a positive semidefinite matrix with zero diagonal and hence the zero matrix. We claim that they satisfy (\ref{eqTsi}). We multiply the left relation on the right by $x_s^{\sf T}$ and the right one by $y_t^{\sf T}$ to obtain
$$\lambda_s x_s x_s^{\sf T} +\sum_{t\in T} W_{s,t} y_t x_s^{\sf T}=0 \ (s\in S),\ \
\mu_ty_t y_t^{\sf T} +\sum_{s\in S} W_{s,t} x_s y_t^{\sf T}=0\ (t\in T).$$
Summing the left relation over $s\in S$, and summing the right relation over $t\in T$ and taking the transpose, we get:
$$\sum_{s\in S} \lambda_s x_s x_s^{\sf T}= -\sum_{s\in S}\sum_{t\in T}W_{s,t}y_t x_s^{\sf T}= \sum_{t\in T} \mu_t y_t y_t^{\sf T},$$
and thus (\ref{eqTsi}) holds.
(ii) Assume that $E$ is an extreme point of ${\mathcal E}_{m+n}$ and that there exist strictly positive scalars $\lambda_1,\ldots,\lambda_m,\mu_1,\ldots,\mu_n$ for which (\ref{eqTsi}) holds.
The key idea is to construct a matrix $\Omega$ that is optimal for the program (\ref{eqsdpD}) and has rank $m+n-r$, since then we can apply Theorem~\ref{theoLV} and conclude that $E$ is the only extension of $C$ in ${\mathcal E}_{m+n}$. The construction of such a matrix $\Omega$ is analogous to the construction given in \cite{CG} for frameworks (see Theorem 4.3 and its proof), so we omit the details.
\end{proof}
\begin{corollary}[\cite{Tsirelson}]\label{corbound}
If $C$ is an extreme point of $\Cor(m,n)$, then
\begin{equation}\label{eqrankC}
{\rank (C)+1\choose 2} \le n+m-1.
\end{equation}
\end{corollary}
\begin{proof}
Let $x_1,\ldots,x_m,y_1,\ldots,y_n \in \mathbb{R}^r$, with $r = \rank(C)$, be a $C$-system spanning $\mathbb{R}^r$ and let $E$ be their Gram matrix.
As $E$ is an extreme point of ${\mathcal E}_{m+n}$, it follows from relation (\ref{eqrkE}) that $\mathcal S^r$ is spanned by the $m+n$ matrices $x_ix_i^{\sf T} , y_jy_j^{\sf T} $ ($i\in S, j\in T$). By Theorem~\ref{theoExCorTsi}~(i) these matrices satisfy the nontrivial linear dependency (\ref{eqTsi}), so one of them lies in the span of the others. Hence $\mathcal S^r$ is spanned by a set of $m+n-1$ matrices and thus its dimension ${r+1\choose 2}$ is at most $m+n-1$.
\end{proof}
Our first construction in the next section provides instances where the bound (\ref{eqrankC}) is tight.
\subsection{Constructions of extreme bipartite correlation matrices}
We construct two families of extreme points of $\Cor(m,n)$, which we will use in Section~\ref{secfinal} to construct completely positive semidefinite matrices with exponentially large completely positive semidefinite rank. The first construction meets Tsirelson's bound and is used to prove Theorem~\ref{theomain}. The second construction will be used to recover one of the results of \cite{Slofstra11}.
\medskip
We begin by constructing a family of extreme points $C_1$ of $\Cor(m,n)$ with $\rank(C_1)=r$, $m=r$, and $n={r\choose 2}+1$, which thus shows that inequality \eqref{eqrankC} is tight. Such a family of bipartite correlation matrices can also be inferred from \cite{VertesiPal}, where the correlation matrices are obtained through analytical methods as optimal solutions of linear optimization problems over $\Cor(m, n)$. Instead, we use the sufficient conditions for extremality of bipartite correlations given above.
For this we will construct matrices $E_1,\Omega_1\in \mathcal S^{r+n}$ that satisfy the conditions of Theorem~\ref{theoLV}; that is, $E_1$ is an extreme point of ${\mathcal E}_{r+n}$, $\Omega_1$ is positive semidefinite with support contained in the complete bipartite graph $K_{r,n}$, $\rank(E_1)=r$, $\rank(\Omega_1)=n$, and $\Omega_1 E_1=0$. Our construction of $\Omega_1$ is inspired by \cite{GP}, which studies the maximum possible rank of extremal positive semidefinite matrices with a complete bipartite support.
Consider the matrix $\widehat B\in \mathbb{R}^{r\times {r\choose 2}}$, whose columns are indexed by the pairs $(i,j)$ with $1\le i<j\le r$, with entries $\smash{\widehat B_{i, (i,j)}}=1$, $\smash{\widehat B_{j,(i,j)}}=-1$ for $1\le i<j\le r$, and all other entries 0. We also consider the matrix $B \in \mathbb{R}^{r\times n}$ obtained by adjoining to $\smash{\widehat B}$ a last column equal to the all-ones vector $e$. Note that $BB^{\sf T} = r I_r$ and $\smash{\widehat B\widehat B^{\sf T}} = rI_r-J_r$. Then define the following matrices:
$$\Omega'=\begin{pmatrix} nI_r & \sqrt n B\cr \sqrt n B^{\sf T} & r I_n\end{pmatrix} \in \mathcal S^{r+n},\ \
E'= \begin{pmatrix} I_r & -{\sqrt n\over r}B\cr -{\sqrt n \over r} B^{\sf T} & {n\over r^2} B^{\sf T} B\end{pmatrix}\in \mathcal S^{r+n}.$$
Since
\[
\Omega'=\begin{pmatrix} \sqrt{n\over r} B \cr \sqrt r I_n\end{pmatrix} \begin{pmatrix} \sqrt{n\over r} B \cr \sqrt r I_n\end{pmatrix}^{\sf T} \quad \text{and} \quad
E' =\begin{pmatrix} I_r \cr -{\sqrt n\over r}B^{\sf T} \end{pmatrix} \begin{pmatrix} I_r \cr -{\sqrt n\over r}B^{\sf T} \end{pmatrix}^{\sf T},
\]
it follows that $\Omega'$ and $E'$ are positive semidefinite, $\Omega'E'=0$, $\rank(\Omega')=n$, and $\rank(E')=r$.
It suffices now to modify the matrix $E'$ in order to get a matrix $E_1$ with an all-ones diagonal.
For this, consider the diagonal matrix $$D=I_r\oplus {r\over \sqrt {2n}} I_{n-1} \oplus {\sqrt{r\over n}} I_1$$ and set
$E_1 = D E'D$ and $\Omega_1 = D^{-1} \Omega'D^{-1}.$
Then $E_1$ has an all-ones diagonal; it is in fact the Gram matrix of the vectors $e_1,\ldots,e_r$, $-(e_i-e_j)/\sqrt 2 $ (for $1\le i<j\le r$), and $-(e_1+\ldots +e_r)/\sqrt r$. Since replacing a vector $z$ by $-z$ changes neither the rank nor the span of the matrices $zz^{\sf T}$, Theorem~\ref{theoLiTam} shows that $E_1$ is an extreme point of ${\mathcal E}_{r+n}$.
Moreover, $\Omega_1 E_1=0$,
$\rank E_1=r$, and $\rank \Omega_1 = n$.
Therefore the conditions of Theorem \ref{theoLV} are fulfilled and we can conclude that
the matrix $C_1=\pi(E_1)$ is an extreme point of $\Cor(r,n)$.
So we have shown part (i) in Theorem \ref{lemExtremePoint} below.
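As a numerical sanity check of this construction (here for $r=4$, so $n={4\choose 2}+1=7$), the following Python/NumPy sketch, with variable names of our choosing, verifies the identities used above: $BB^{\sf T}=rI_r$, $\Omega'E'=0$, the ranks of $\Omega'$ and $E'$, and the all-ones diagonal of $E_1$:

```python
import itertools
import numpy as np

r = 4
pairs = list(itertools.combinations(range(r), 2))
Bhat = np.zeros((r, len(pairs)))
for col, (i, j) in enumerate(pairs):
    Bhat[i, col], Bhat[j, col] = 1, -1
B = np.hstack([Bhat, np.ones((r, 1))])   # adjoin the all-ones column
n = B.shape[1]                           # n = binom(r, 2) + 1

# Omega' and E' as in the text; the primes are spelled out as "_p".
Omega_p = np.block([[n * np.eye(r), np.sqrt(n) * B],
                    [np.sqrt(n) * B.T, r * np.eye(n)]])
E_p = np.block([[np.eye(r), -np.sqrt(n) / r * B],
                [-np.sqrt(n) / r * B.T, n / r**2 * B.T @ B]])

assert np.allclose(B @ B.T, r * np.eye(r))   # B B^T = r I_r
assert np.allclose(Omega_p @ E_p, 0)         # complementarity Omega' E' = 0
assert np.linalg.matrix_rank(Omega_p) == n and np.linalg.matrix_rank(E_p) == r

# Rescale by D so that the diagonal becomes all ones: E_1 = D E' D.
diag = np.concatenate([np.ones(r),
                       r / np.sqrt(2 * n) * np.ones(n - 1),
                       [np.sqrt(r / n)]])
D = np.diag(diag)
E1 = D @ E_p @ D
assert np.allclose(np.diag(E1), 1)           # E_1 lies in the elliptope
```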
\medskip
Our second construction is inspired by the XOR-game considered by Slofstra in \cite[Section 7.2]{Slofstra11}. We construct a family of extreme points $C_2$ of $\Cor(m,n)$ with $\rank(C_2) = r-1$, $m =r$ and $n = {r \choose 2}$. Define the $(r+n)\times (r+n)$ matrices
\[
\Omega_2=\left(\begin{matrix}
{\sqrt n} I_r & \widehat B\cr \widehat B^{\sf T} & {r\over \sqrt n} I_n\end{matrix}\right), \ \
E_2=\left(\begin{matrix} {1\over r-1}\widehat B\widehat B^{\sf T} & -{r\over 2\sqrt n}\widehat B\cr -{r\over 2\sqrt n}\widehat B^{\sf T} & {1\over 2}\widehat B^{\sf T}\widehat B\end{matrix}\right).
\]
Note that
\[
\Omega_2= \sqrt n \begin{pmatrix} \frac{1}{\sqrt r} \widehat B & \frac{1}{\sqrt r} e \\ \sqrt{\frac{r}{n}} I_n & 0\end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt r} \widehat B & \frac{1}{\sqrt r} e \\ \sqrt{\frac{r}{n}} I_n & 0\end{pmatrix}^{\sf T},\ \ E_2 =\begin{pmatrix} \frac{-1}{\sqrt{2n}} \widehat B \widehat B^{\sf T} \cr \frac{1}{\sqrt{2}} \widehat B^{\sf T} \end{pmatrix} \begin{pmatrix} \frac{-1}{\sqrt{2n}} \widehat B \widehat B^{\sf T} \cr \frac{1}{\sqrt{2}} \widehat B^{\sf T} \end{pmatrix}^{\sf T},
\]
where we use that $\widehat B \widehat B^{\sf T} \widehat B = (r I_r - J_r) \widehat B = r \widehat B$. It follows that $\Omega_2$ and $E_2$ are positive semidefinite, $\rank(\Omega_2)= n+1$ and $\rank(E_2)=r-1$.
Moreover, one can check that $\Omega_2 E_2 = 0$.
In order to be able to apply Theorem~\ref{theoLV} it remains to verify that $E_2$ is an extreme point of $\mathcal E_{r+n}$.
The above factorization of $E_2$ shows that it is the Gram matrix of the system of vectors in $\mathbb{R}^r$:
$$\left\{u_k={1\over \sqrt{2n}}(e-re_k): k\in [r]\right\}
\cup \left\{v_{ij}={1\over \sqrt 2}(e_i-e_j): 1\le i<j\le r\right\}.$$
As the vectors $u_k,v_{ij}$ lie in $\mathbb{R}^r$ while $E_2$ has rank $r-1$ we need to consider another Gram representation of $E_2$ by vectors in $\mathbb{R}^{r-1}$. For this, let $Q$ be an $r\times r$ orthogonal matrix with columns $p_1,\ldots,p_r$ and $p_r=1/\sqrt{r}e$. Then the vectors $\smash{\{Q^{\sf T} u_k\} \cup \{Q^{\sf T} v_{ij}\}}$ form again a Gram representation of $E_2$. Furthermore, as all $u_k,v_{ij}$ are orthogonal to the vector $p_r$ it follows that the vectors $Q^{\sf T} u_k$ and $Q^{\sf T} v_{ij}$ are all orthogonal to $Q^{\sf T} p_r=e_r$. Hence $Q^{\sf T} u_k=(x_k,0)$ and $Q^{\sf T} v_{ij} = (y_{ij},0)$ for some vectors $x_k,y_{ij} \in \mathbb{R}^{r-1}$ which now provide a Gram representation of $E_2$ in $\mathbb{R}^{r-1}$.
In order to conclude that $E_2$ is an extreme point of $\mathcal E_{r+n}$ it suffices, by Theorem~\ref{theoLiTam}, to verify that the set $\{x_k x_k^{\sf T}\} \cup \{ y_{ij} y_{ij}^{\sf T}\}$ spans the whole space $\mathcal S^{r-1}$. Equivalently, we must show that the set $\{Q^{\sf T} u_k u_k^{\sf T} Q \} \cup \{Q^{\sf T} v_{ij} v_{ij}^{\sf T} Q\}$ spans the subspace $\{R\oplus 0: R\in\mathcal S^{r-1}\}$ of $\mathcal S^r$, or, in other words, that the set $\{u_k u_k^{\sf T}\} \cup \{ v_{ij} v_{ij}^{\sf T}\}$ spans the subspace
\[
\mathcal M:=\{Q (R\oplus 0)Q^{\sf T}: R\in \mathcal S^{r-1}\}\subseteq \mathcal S^r.
\]
Observe that $\text{dim}(\mathcal M) = {r \choose 2}$. We also have that $\text{span} \{v_{ij} v_{ij}^{\sf T}: 1 \leq i < j \leq r\}$ is contained in
\[
\mathrm{span}( \{u_k u_k^{\sf T}: k \in [r]\} \cup \{v_{ij} v_{ij}^{\sf T}: 1 \leq i < j \leq r\}) \subseteq \mathcal M,
\]
and that
\[
\text{span} \{v_{ij} v_{ij}^{\sf T}: 1 \leq i < j \leq r\} = \text{span} \{ (e_i - e_j) (e_i -e_j)^{\sf T}: 1 \leq i < j \leq r\}
\] has dimension ${r \choose 2}$. Therefore, equality holds throughout:
\[
\text{span}( \{u_k u_k^{\sf T}: k \in [r]\} \cup \{v_{ij} v_{ij}^{\sf T}: 1 \leq i < j \leq r\} )= \mathcal M,
\]
and thus $E_2$ is an extreme point of $\mathcal E_{r+n}$.
This shows that the conditions of Theorem \ref{theoLV} are satisfied and we can conclude that
the matrix $C_2=\pi(E_2)$ is an extreme point of $\Cor(r,n)$. So we have shown part (ii) in Theorem \ref{lemExtremePoint} below.
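The defining identities of the second construction can be checked numerically in the same spirit; the sketch below (for $r=4$, so $n={4\choose 2}=6$; names are ours) verifies $\widehat B\widehat B^{\sf T}\widehat B=r\widehat B$, $\Omega_2E_2=0$, $\rank(\Omega_2)=n+1$, $\rank(E_2)=r-1$, and that $E_2$ has an all-ones diagonal:

```python
import itertools
import numpy as np

r = 4
pairs = list(itertools.combinations(range(r), 2))
n = len(pairs)                               # n = binom(r, 2)
Bhat = np.zeros((r, n))
for col, (i, j) in enumerate(pairs):
    Bhat[i, col], Bhat[j, col] = 1, -1

Omega2 = np.block([[np.sqrt(n) * np.eye(r), Bhat],
                   [Bhat.T, r / np.sqrt(n) * np.eye(n)]])
E2 = np.block([[Bhat @ Bhat.T / (r - 1), -r / (2 * np.sqrt(n)) * Bhat],
               [-r / (2 * np.sqrt(n)) * Bhat.T, Bhat.T @ Bhat / 2]])

# B_hat B_hat^T B_hat = (r I - J) B_hat = r B_hat (columns sum to zero).
assert np.allclose(Bhat @ Bhat.T @ Bhat, r * Bhat)
assert np.allclose(Omega2 @ E2, 0)           # complementarity Omega_2 E_2 = 0
assert np.linalg.matrix_rank(Omega2) == n + 1
assert np.linalg.matrix_rank(E2) == r - 1
assert np.allclose(np.diag(E2), 1)           # E_2 lies in the elliptope
```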
\begin{theorem} \label{lemExtremePoint}
Consider an integer $r\ge 1$ and let $e_1,\ldots,e_r$ denote the standard unit vectors in $\mathbb{R}^r$.
\begin{itemize}
\item[(i)]
There exists a matrix $C_1$ which is an extreme point of $\smash{\Cor(r,{r\choose 2}+1)}$ and has rank $r$. We can take $C_1$ to be the matrix with columns $(e_i-e_j)/\sqrt 2$ (for $1\le i<j\le r$) and $(e_1+\ldots+e_r)/\sqrt r$.
\item[(ii)]
There exists a matrix $C_2$ which is an extreme point of $\Cor(r,{r\choose 2})$ and has rank $r-1$.
We can take $C_2$ to be the matrix whose columns are $- \sqrt{r/(2(r-1))}(e_i-e_j)$ for $1\le i<j\le r$.
\end{itemize}
\end{theorem}
We conclude this section by explaining how our second construction allows us to recover a lower bound of Slofstra \cite{Slofstra11} on the amount of entanglement needed by any optimal quantum strategy for the XOR-game he considers in \cite[Section 7.2]{Slofstra11}.
The goal of an XOR-game is to find a quantum strategy with maximal winning probability, or, equivalently, a strategy that maximizes the bias of the game. For an introduction to XOR-games we refer to, e.g., \cite{JopThesis,CHTW04}. An XOR-game is given by a game matrix, and the game presented in \cite[Section 7.2]{Slofstra11} has game matrix $\smash{\widehat B}$ as defined above. An optimal quantum strategy corresponds to an optimal solution of the following optimization problem:
\begin{equation} \label{slofstraopt1}
\max \{\langle \widehat B, C \rangle: C \in \Cor(m,n)\}.
\end{equation}
Slofstra \cite{Slofstra11} showed (using the notion of `solution algebra' of the game) that any tensor operator representation of any optimal solution $C$ of (\ref{slofstraopt1}) has local dimension at least $2^{\lfloor (r-1)/2\rfloor}$ (see Section~\ref{secquantumcor} for the definition of a tensor operator representation). As we now point out, this can also be derived from Tsirelson's results using our treatment.
For this note first that problem (\ref{slofstraopt1}) is equivalent to
\begin{equation} \label{slofstraopt1b}
\min \{ \langle \widehat B, C \rangle: C \in \Cor(m,n)\}
\end{equation}
(since $C \in \Cor(m,n)$ if and only if $-C \in \Cor(m,n)$).
Problem (\ref{slofstraopt1b})
is in turn equivalent to the following optimization problem over the elliptope:
\begin{equation} \label{slofstraopt}
\min \{\langle \Omega_2, E \rangle: E \in \mathcal E_{m+n}\},
\end{equation}
with $\Omega_2$ being defined as above (since $E\in \mathcal E_{m+n}$ is optimal for (\ref{slofstraopt}) if and only if $C=\pi(E)\in \Cor(m,n)$ is optimal for
(\ref{slofstraopt1b})).
As $\Omega_2$ is positive semidefinite and $\langle \Omega_2,E_2\rangle =0$, it follows that $E_2$ is optimal for (\ref{slofstraopt}) and thus $C_2=\pi(E_2)$ is optimal for (\ref{slofstraopt1b}).
Moreover, as $\rank(E_2)=m+n-\rank(\Omega_2)$ is the largest possible rank of an optimal solution of (\ref{slofstraopt}), it follows from a geometric property of semidefinite programming that $E_2$ must lie in the relative interior of the set of optimal solutions of (\ref{slofstraopt}). This, combined with the fact that $E_2$ is an extreme point of ${\mathcal E}_{m+n}$, implies that $E_2$ is the unique optimal solution of (\ref{slofstraopt}) and thus $C_2$ is the unique optimal solution of (\ref{slofstraopt1b}). Finally, as $C_2$ is an extreme point of $\Cor(m,n)$ with rank $r-1$, we can conclude using Corollary~\ref{cortensordim} below
that any tensor operator representation of $C_2$ uses local dimension at least $2^{\lfloor (r-1)/2\rfloor}$, and the same holds for the unique optimal solution $-C_2$ of \eqref{slofstraopt1}.
\section{Lower bounding the size of operator representations}
\label{secquantumcor}
We start with recalling, in Theorem~\ref{Tsir}, some equivalent characterizations for bipartite correlations in terms of operator representations, due to Tsirelson.
For this consider a matrix $C\in \mathbb{R}^{m\times n}$. We say that $C$ admits a \emph{tensor operator representation} if there exist an integer $d$ (the \emph{local dimension}), a unit vector $\psi \in \mathbb{C}^d \otimes \mathbb{C}^d$, and Hermitian $d \times d$ matrices $\{X_s\}_{s=1}^m$ and $\{Y_t\}_{t=1}^n$ with spectra contained in $[-1,1]$, such that $C_{s,t} = \psi^* (X_s \otimes Y_t) \psi$ for all $s$ and $t$.
Moreover we say that $C$ admits a (finite dimensional) {\em commuting operator representation} if there exist an integer $d$, a Hermitian positive semidefinite $d \times d$ matrix $W$ with $\mathrm{trace}(W) = 1$, and Hermitian $d \times d$ matrices $\{X_s\}$ and $\{Y_t\}$ with spectra contained in $[-1,1]$, such that $X_s Y_t = Y_t X_s$ and $C_{s,t} = \Tr(X_s Y_t W)$ for all $s$ and $t$. A commuting operator representation is said to be \emph{pure} if $\rank(W) = 1$.
Existence of these various operator representations relies on using Clifford algebras. For an integer $r\ge 1$ the \emph{Clifford algebra} $\Clifford(r)$ of order $r$ can be defined as the universal $C^*$-algebra with Hermitian generators $a_1,\ldots,a_r$ and relations
\begin{equation} \label{eqCliffordrelations}
a_i^2 = 1 \quad \text{and} \quad a_ia_j + a_ja_i = 0 \quad \text{for} \quad i \neq j.
\end{equation}
We call these relations the \emph{Clifford relations}.
To represent the elements of $\Clifford(r)$ by matrices we can use the following map, which is a $*$-isomorphism onto its image:
\begin{equation} \label{eqClifford}
\varphi_r \colon \Clifford(r) \to \mathbb{C}^{2^{\lceil r/2 \rceil} \times 2^{\lceil r/2 \rceil}}, \, \varphi_r(a_i) = \begin{cases}
Z^{\otimes \frac{i-1}{2}} \otimes X \otimes I^{\otimes \lceil \frac{r}{2} \rceil - \frac{i+1}{2}} & \text{for $i$ odd},\\
Z^{\otimes \frac{i-2}{2}} \otimes Y \otimes I^{\otimes \lceil \frac{r}{2}\rceil - \frac{i}{2}} & \text{for $i$ even}.
\end{cases}
\end{equation}
Here we use the \emph{Pauli matrices}
\[
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -\mathbf{i} \\ \mathbf{i} & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
\]
For even $r$ the representation $\varphi_r$ is irreducible and thus $\Clifford(r)$ is isomorphic to the full matrix algebra with matrix size $2^{r/2}$.
For odd $r$ the representation $\varphi_r$ decomposes as a direct sum of two irreducible representations, each of dimension $2^{\lfloor r/2 \rfloor}$. Therefore, if $X_1,\ldots,X_r$ is a set of Hermitian matrices satisfying the relations $X_i^2 = I$ and $X_i X_j + X_j X_i = 0$ for $i \neq j$, then they must have size at least $2^{\lfloor r/2 \rfloor}$.
We refer to \cite[Section 5.4]{Procesi} for details about (representations of) Clifford algebras.
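The explicit representation \eqref{eqClifford} can be checked directly against the Clifford relations \eqref{eqCliffordrelations}. The following Python/NumPy sketch (helper names are ours) builds the matrices $\varphi_r(a_i)$ from the Pauli matrices, here for $r=5$ where they have size $2^{\lceil 5/2\rceil}=8$, and verifies that they are Hermitian, square to the identity, and pairwise anticommute:

```python
import numpy as np

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron_all(mats):
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def clifford_generators(r):
    """Matrices phi_r(a_1), ..., phi_r(a_r), each of size 2^ceil(r/2)."""
    k = (r + 1) // 2  # number of tensor factors, ceil(r/2)
    gens = []
    for i in range(1, r + 1):
        if i % 2 == 1:   # i odd: Z^{(i-1)/2} tensor X tensor identities
            mats = [Z] * ((i - 1) // 2) + [X] + [I2] * (k - (i + 1) // 2)
        else:            # i even: Z^{(i-2)/2} tensor Y tensor identities
            mats = [Z] * ((i - 2) // 2) + [Y] + [I2] * (k - i // 2)
        gens.append(kron_all(mats))
    return gens

r = 5
A = clifford_generators(r)
d = 2 ** ((r + 1) // 2)
for i in range(r):
    assert np.allclose(A[i].conj().T, A[i])               # Hermitian
    assert np.allclose(A[i] @ A[i], np.eye(d))            # a_i^2 = 1
    for j in range(i + 1, r):
        assert np.allclose(A[i] @ A[j] + A[j] @ A[i], 0)  # anticommutation
```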
\begin{theorem}[\cite{Tsirelson:87}] \label{Tsir}
Let $C \in \mathbb{R}^{m \times n}$. The following statements are equivalent:
\begin{enumerate}
\item $C$ is a bipartite correlation.
\item $C$ admits a tensor operator representation.
\item $C$ admits a pure commuting operator representation.
\item $C$ admits a commuting operator representation.
\end{enumerate}
\end{theorem}
\begin{proof}
$(1) \Rightarrow (2)$ Let $C \in \Cor(m,n)$. That means there exist unit vectors $\{x_s\}$ and $\{y_t\}$ in $\mathbb{R}^r$, where $r = \rank(C)$, such that $C_{s,t} = \ip{x_s, y_t}$ for all $s$ and $t$.
Set $d = 2^{\lfloor r/2 \rfloor}$ and define
\begin{equation*}
X_s = \sum_{i=1}^r (x_s)_i \pi(a_i), \quad Y_t = \sum_{i=1}^r (y_t)_i \pi(a_i)^{\sf T},
\end{equation*}
where $\pi$ is an irreducible representation of $\mathcal C(r)$ by matrices of size $d$ (note that for $r$ even we could use the explicit representation $\varphi_r$).
With $\psi = \frac{1}{\sqrt{d}} \sum_{i=1}^d e_i \otimes e_i$ one can use the Clifford relations to derive the following identity (see for example \cite{JopThesis}):
\[
C_{s,t} = \ip{x_s, y_t} = \Tr(X_s Y_t^{\sf T})/d = \psi^* (X_s \otimes Y_t) \psi \quad \text{for all} \quad s\in S, t \in T.
\]
The eigenvalues of the matrices $\pi(a_1),\ldots,\pi(a_r)$ lie in $\{-1,1\}$, and the Clifford relations \eqref{eqCliffordrelations} can be used to derive that the eigenvalues of $X_s$ and $Y_t$ also lie in $\{-1, 1\}$. Thus, $(\{X_s\}, \{Y_t\}, \psi)$ is a tensor operator representation of $C$.
$(2) \Rightarrow (3)$ If $(\{X_s\}, \{Y_t\}, \psi)$ is a tensor operator representation of $C$, then the operators $X_s \otimes I$ and $I \otimes Y_t$ commute, and by using the identity \[\psi^* (X_s \otimes Y_t) \psi = \Tr((X_s \otimes I)(I \otimes Y_t) \psi \psi^*)\]
we see that $(\{X_s \otimes I\}, \{I \otimes Y_t\}, \psi\psi^*)$ is a pure commuting operator representation.
$(3) \Rightarrow (4)$ This is immediate.
$(4) \Rightarrow (1)$
Suppose $(\{X_s\}, \{Y_t\}, W)$ is a commuting operator representation of $C$. Since $W$ is positive semidefinite and has trace $1$, there exist nonnegative scalars $\lambda_i$ and orthonormal vectors $\psi_i \in \mathbb{C}^d$ such that $W = \sum_i \lambda_i \psi_i\psi_i^*$ and $\sum_i \lambda_i = 1$. Then,
\[
C_{s,t} = \Tr(X_s Y_t W) = \sum_i \lambda_i \Tr(X_s Y_t \psi_i \psi_i^*) = \sum_i \lambda_i \psi_i^* X_s Y_t \psi_i.
\]
So, with
\[
x_s = \bigoplus_i \sqrt{\lambda_i} \begin{pmatrix} \Re(X_s \psi_i) \\ \Im(X_s \psi_i) \end{pmatrix} \quad \text{and} \quad y_t = \bigoplus_i \sqrt{\lambda_i} \begin{pmatrix} \Re(Y_t \psi_i) \\ \Im(Y_t \psi_i) \end{pmatrix}
\]
we have $C_{s,t} = \ip{x_s, y_t}$ and $\|x_s\|, \|y_t\| \leq 1$, and by using the observation in the proof of Lemma~\ref{lemExVec} we can extend the vectors $x_s$ and $y_t$ to unit vectors.
\end{proof}
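The key identity in the proof of $(1)\Rightarrow(2)$, namely $\ip{x_s,y_t}=\Tr(X_sY_t^{\sf T})/d=\psi^*(X_s\otimes Y_t)\psi$ for the maximally entangled state $\psi$, can also be tested numerically. The sketch below (names ours) uses the irreducible Clifford generators $X\otimes I$, $Y\otimes I$, $Z\otimes X$ for $r=3$, of size $d=4$, together with randomly chosen unit vectors:

```python
import numpy as np

# Irreducible Clifford generators for r = 3 (as in phi_3), size d = 4.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
a = [np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X)]
d = 4

rng = np.random.default_rng(0)
xs = rng.standard_normal((3, 3))            # three unit vectors x_s in R^3
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
ys = rng.standard_normal((2, 3))            # two unit vectors y_t in R^3
ys /= np.linalg.norm(ys, axis=1, keepdims=True)

psi = np.eye(d).ravel() / np.sqrt(d)        # maximally entangled state
for x in xs:
    Xs = sum(x[i] * a[i] for i in range(3))
    assert np.allclose(Xs @ Xs, np.eye(d))  # spectrum contained in {-1, 1}
    for y in ys:
        Yt = sum(y[i] * a[i].T for i in range(3))
        lhs = psi.conj() @ np.kron(Xs, Yt) @ psi
        assert np.isclose(lhs.real, x @ y) and abs(lhs.imag) < 1e-9
```

Since $\Tr(\pi(a_i)\pi(a_j))=d\,\delta_{i,j}$ by the Clifford relations, the identity holds for any choice of unit vectors $x_s$, $y_t$.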
\begin{corollary} \label{remTsi}
If $C$ is a bipartite correlation matrix of rank $r$, then it admits a tensor operator representation in local dimension $2^{\lfloor r/2 \rfloor}$. If $C$ is a bipartite correlation matrix that admits a tensor operator representation in local dimension $d$, then it has a commuting operator representation by matrices of size $d^2$.
\end{corollary}
The remainder of this section is devoted to showing that there are bipartite correlation matrices for which every operator representation requires a large dimension.
For this we need two more definitions. A commuting operator representation $(\{X_s\}, \{Y_t\} ,W)$ is \emph{nondegenerate} if there does not exist a projection matrix $P \neq I$ such that $PW\mkern-3.1muP = W$, $X_s P = PX_s$, and $Y_t P = P Y_t$ for all $s$ and $t$.
It is said to be \emph{Clifford} if there exist matrices $Q \in \mathbb{R}^{m \times m}$ and $R \in \mathbb{R}^{n \times n}$ with all-ones diagonals, such that
\begin{align*}
X_s X_{s'} + X_{s'} X_s &= 2 Q_{s,s'} I \quad \text{for all} \quad s,s' \in S,\\
Y_t Y_{t'} + Y_{t'} Y_t &= 2R_{t,t'} I \quad \text{for all} \quad t,t' \in T.
\end{align*}
We will use the following theorem of Tsirelson as a crucial ingredient.
\begin{theorem} [{\cite[Theorem 3.1]{Tsirelson:87}}] \label{Thrm3.1}
If $C$ is an extreme point of $\Cor(m,n)$, then any nondegenerate commuting operator representation of $C$ is Clifford.
\end{theorem}
We can now state and prove the main result of this section.
\begin{theorem} \label{RankRep}
Let $C$ be an extreme point of $\Cor(m,n)$ and let $r = \rank(C)$. Every commuting operator representation of $C$ uses matrices of size at least $(2^{\lfloor r/2 \rfloor})^2$.
\end{theorem}
\begin{proof}
Let $(\{X_s\}, \{Y_t\}, W)$ be a commuting operator representation of $C$ where $X_s, Y_t$ and $W$ are matrices of size $d$. We will show $d\ge (2^{\lfloor r/2 \rfloor})^2$. If this representation is degenerate, then there exists a projection matrix $P \neq I$ such that $PW\mkern-3.1muP = W$, $X_sP = PX_s$, and $Y_tP = PY_t$ for all $s$ and $t$. Let $P = \sum_{i=1}^k v_i v_i^*$ be its spectral decomposition, where the vectors $v_1,\ldots,v_k$ are orthonormal, and set $U = (v_1,\ldots,v_k)$. Then one can verify that $(\{ U^* X_s U \}, \{ U^* Y_t U \}, U^* W U)$ is a commuting operator representation of $C$ of smaller dimension. So, since we are proving a lower bound on the dimension, we may assume $(\{X_s\}, \{Y_t\}, W)$ to be a nondegenerate commuting operator representation.
By extremality of $C$ we may assume the operator representation is pure: writing $W = \sum_i \lambda_i \psi_i \psi_i^*$ expresses $C$ as a convex combination of bipartite correlation matrices (one for each $\psi_i$), which by extremality must all equal $C$, so we may replace $W$ by one of the $\psi_i\psi_i^*$. Hence, there is a unit vector $\psi$ such that $W = \psi \psi^*$. This gives
\[
C_{s,t} = \Tr(X_s Y_t W) = \psi^* X_sY_t \psi = \ip{x_s, y_t},
\]
where
\[
x_s = \begin{pmatrix} \Re(X_s \psi) \\ \Im(X_s \psi) \end{pmatrix} \quad \text{and} \quad y_t = \begin{pmatrix} \Re(Y_t \psi) \\ \Im(Y_t \psi) \end{pmatrix}.
\]
These vectors $x_s$ and $y_t$ are unit vectors because $C$ is extreme (see the proof of Lemma~\ref{lemExVec}), and therefore, they form a $C$-system.
By Theorem~\ref{Thrm3.1} the commuting operator representation $(\{X_s\}, \{Y_t\}, W)$ is Clifford. So, there exist matrices
$Q \in \mathbb{R}^{m \times m}$ and $R \in \mathbb{R}^{n \times n}$ with all-ones diagonals such that
\begin{align*}
X_s X_{s'} + X_{s'} X_s &= 2Q_{s,s'} I \quad \text{for all} \quad s,s' \in S,\\
Y_t Y_{t'} + Y_{t'} Y_t &= 2R_{t,t'} I \quad \text{for all} \quad t,t' \in T.
\end{align*}
We show that $E$ is an extension of $C$ in the elliptope, where
\[
E = \begin{pmatrix} Q & C \\ C^{\sf T} & R \end{pmatrix} .
\]
For this, we have to show $Q_{s,s'} = \ip{x_s,x_{s'}}$ and $R_{t,t'} = \ip{y_t,y_{t'}}$. Indeed,
\begin{align*}
\ip{x_s,x_{s'}} + \ip{x_{s'},x_s} &= \Re\big(\psi^*X_s X_{s'} \psi + \psi^* X_{s'} X_s \psi\big) \\
&= \Re\big(\psi^* (X_s X_{s'} + X_{s'} X_s) \psi\big) \\
&= \Re\big(\psi^* ( 2Q_{s,s'} I ) \psi\big) = 2Q_{s,s'},
\end{align*}
and in the same way $\ip{y_t,y_{t'}} + \ip{y_{t'},y_t} = 2 R_{t,t'}$.
By Theorem \ref{theoExCor} the matrix $E$ is the unique extension of $C$ to the elliptope. Furthermore, Lemma~\ref{lemExVec} tells us that $\mathrm{rank}(Q) = \mathrm{rank}(R) = \mathrm{rank}(C) = r$.
Consider the spectral decomposition $Q = \sum_{i=1}^r \alpha_i v_i v_i^*$, where the vectors $v_1,\ldots,v_r$ are orthonormal, and consider the algebra $\mathbb{C}\langle A_1,\ldots,A_r \rangle$, where
\[
A_i = \frac{1}{\sqrt{\alpha_i}} \sum_{s = 1}^m (v_i)_s X_s \quad \text{for} \quad i \in [r].
\]
We have
\begin{align*}
A_i A_j + A_j A_i &= \frac{1}{\sqrt{\alpha_i\alpha_j}} \sum_{s,s' = 1}^m \left((v_i)_s (v_j)_{s'} X_s X_{s'} + (v_j)_s (v_i)_{s'} X_s X_{s'}\right)\\
&= \frac{1}{\sqrt{\alpha_i\alpha_j}} \sum_{s,s' = 1}^m (v_i)_s (v_j)_{s'} \left(X_s X_{s'} + X_{s'} X_s \right)\\
&= \frac{1}{\sqrt{\alpha_i\alpha_j}} \sum_{s,s' = 1}^m (v_i)_s (v_j)_{s'} 2Q_{s,s'} I
= \frac{2}{\sqrt{\alpha_i\alpha_j}} v_i^* Q v_j I = 2 \delta_{i,j} I,
\end{align*}
which means that we have the representation $\pi_A \colon \mathcal C(r) \to \mathbb{C}\langle A_1,\ldots,A_r \rangle$ defined by $\pi_A(a_i) = A_i$, where the $a_i$ are the generators of $\mathcal C(r)$. In the same way we can define matrices $B_1,\ldots,B_r$ by taking linear combinations of the matrices $Y_t$ so that we obtain the representation $\pi_B \colon \mathcal C(r) \to \mathbb{C}\langle B_1,\ldots, B_r \rangle$ defined by $\pi_B(a_i) = B_i$.
By assumption, the algebras $\mathbb{C}\langle X_1,\ldots,X_m \rangle$ and $\mathbb{C}\langle Y_1,\ldots, Y_n \rangle$ commute. This implies that the algebras $\mathbb{C}\langle A_1,\ldots,A_r \rangle$ and $\mathbb{C}\langle B_1,\ldots,B_r \rangle$ also commute and that
$
\mathbb{C}\langle A_1,\ldots,A_r \rangle \mathbb{C}\langle B_1,\ldots,B_r \rangle
$
is an algebra. Moreover, we have
\[
[\pi_A(a), \pi_B(b)] = \pi_A(a)\pi_B(b) - \pi_B(b)\pi_A(a) = 0 \quad \text{for all} \quad a,b \in \mathcal C(r).
\]
By the universal property of the tensor product of algebras (see, e.g., \cite[Proposition II.4.1]{Kas}), there exists a (unique) algebra homomorphism
$$\pi: \mathcal C(r) \otimes \mathcal C(r)\to \mathbb{C}\langle A_1,\ldots,A_r \rangle \mathbb{C}\langle B_1,\ldots,B_r \rangle$$ such that
$\pi(a\otimes 1)=\pi_A(a)$ and $\pi(1\otimes a)=\pi_B(a)$ for all $a\in \mathcal C(r)$.
Moreover, each finite dimensional, irreducible representation of a tensor product of algebras is the tensor product of two irreducible representations of those algebras (see, e.g., \cite[Remark 2.27]{EGHLSVY11}). This means that each irreducible representation of $\mathcal C(r) \otimes \mathcal C(r)$ is the tensor product of two irreducible representations of $\mathcal C(r)$. Since irreducible representations of $\mathcal C(r)$ have size at least $2^{\lfloor r/2 \rfloor}$, it follows that irreducible representations of the tensor product $\mathcal{C}(r) \otimes \mathcal{C}(r)$ must have size at least $(2^{\lfloor r/2 \rfloor})^2$. Since $\pi$ is a representation of $\mathcal C(r) \otimes \mathcal C(r)$, this means that the matrices $A_i$ and $B_j$ must have size at least $(2^{\lfloor r/2 \rfloor})^2$, which shows $d \geq (2^{\lfloor r/2 \rfloor})^2$.
\end{proof}
\begin{corollary}\label{cortensordim}
Let $C$ be an extreme point of $\Cor(m,n)$ and let $r = \rank(C)$. The minimum local dimension of a tensor operator representation of $C$ is $2^{\lfloor r/2 \rfloor}$.
\end{corollary}
\begin{proof}
The proof follows directly from Corollary~\ref{remTsi} and Theorem~\ref{RankRep}.
\end{proof}
\section{Matrices with high completely positive semidefinite rank}
\label{secfinal}
In this section we prove our main result and construct completely positive semidefinite matrices with exponentially large $\cpsd$. In order to do so we are going to use an additional link between bipartite correlations and quantum correlations, combined with the fact that quantum correlations arise as projections of completely positive semidefinite matrices. We start with recalling the facts that we need about quantum correlations.
Let $A$, $B$, $S$, and $T$ be finite sets. A function $p \colon A \times B \times S \times T \to [0,1]$ is called a \emph{quantum correlation}, realizable in {\em local dimension} $d$, if there exist a unit vector $\psi \in \mathbb{C}^d \otimes \mathbb{C}^d$ and Hermitian positive semidefinite $d \times d$ matrices $X_s^a$ ($s\in S$, $a\in A$) and $Y_t^b$ ($t\in T$, $b\in B$) satisfying the following two conditions:
\begin{equation}\label{eqqc1}
\sum_{a \in A} X_s^a = \sum_{b \in B} Y_t^b = I \quad \text{for all} \quad s \in S, t \in T,
\end{equation}
\begin{equation}\label{rqqc2}
p(a,b|s,t) = \psi^* (X_s^a \otimes Y_t^b) \psi \quad \text{for all} \quad a \in A, b \in B, s \in S, t \in T.
\end{equation}
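Positivity and normalization of $p$, as well as the fact that each party's marginals are independent of the other party's measurement setting, follow directly from condition \eqref{eqqc1}. These consequences are easy to verify numerically; the sketch below uses randomly generated states and two-outcome POVMs (a purely illustrative instance, not tied to any particular correlation in the text):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3

def two_outcome_povm(d, rng):
    # {E, I - E} with 0 <= E <= I, built from a random PSD contraction
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    E = A.conj().T @ A
    E = E / (np.linalg.eigvalsh(E).max() + 1e-12)
    return [E, np.eye(d) - E]

X = {s: two_outcome_povm(d, rng) for s in range(2)}   # settings s in S
Y = {t: two_outcome_povm(d, rng) for t in range(2)}   # settings t in T
psi = rng.standard_normal(d * d)
psi /= np.linalg.norm(psi)                            # unit state vector

def p(a, b, s, t):
    return (psi @ np.kron(X[s][a], Y[t][b]) @ psi).real

# nonnegativity and normalization of the correlation
for s in range(2):
    for t in range(2):
        vals = [p(a, b, s, t) for a in range(2) for b in range(2)]
        assert all(v >= -1e-12 for v in vals) and abs(sum(vals) - 1) < 1e-9

# Alice's marginal is independent of Bob's setting t (no-signaling)
for s in range(2):
    for a in range(2):
        m = [sum(p(a, b, s, t) for b in range(2)) for t in range(2)]
        assert abs(m[0] - m[1]) < 1e-9
```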
The next theorem shows a link between quantum correlations and $\mathcal{CS}_{\hspace{-0.065em}+}$-matrices. This result can be found in \cite[Theorem 3.2]{Antonios:2015} (see also \cite{MR14}). This link allows us to construct $\mathcal{CS}_{\hspace{-0.065em}+}$-matrices with large complex completely positive semidefinite rank by finding quantum correlations that cannot be realized in a small local dimension.
\begin{theorem} \label{thrm1}
A function $p \colon A \times B \times S \times T \to [0,1]$ is a quantum correlation that can be realized in local dimension $d$ if and only if there exists a completely positive semidefinite matrix $M$, with rows and columns indexed by the disjoint union $(A \times S) \sqcup (B \times T)$,
satisfying the following conditions:
\begin{equation} \label{eqrank}
\hcpsd(M) \leq d,
\end{equation}
\begin{equation} \label{eqMprob}
M_{(a,s), (b,t)} = p(a,b|s,t) \quad \text{for all} \quad a\in A, b \in B, s \in S, t \in T,
\end{equation}
and
\begin{equation} \label{eqMsums}
\sum_{a \in A, b \in B} M_{(a,s),(b,t)} = \sum_{a,a' \in A} M_{(a,s),(a',s')} = \sum_{b,b' \in B} M_{(b,t),(b',t')} = 1
\end{equation}
for all $s,s' \in S$ and $t,t'\in T$.
\end{theorem}
Next we show how to construct from a bipartite correlation $C\in \Cor(m,n)$ a quantum correlation $p$, with $|A|=|B|=2$ and $S=[m]$, $T=[n]$, having the property that the smallest local dimension in which $p$ can be realized is lower bounded by the smallest local dimension of a tensor representation of $C$.
\begin{lemma} \label{borp}
Let $C \in \Cor(m, n)$ and assume $C$ admits a tensor operator representation in local dimension $d$, but does not admit a tensor operator representation in smaller dimension.
Then there exists a quantum correlation $p$ defined on $\{0,1\} \times \{0,1\} \times [m] \times [n]$,
satisfying the relations
\begin{equation}\label{corp}
C(s,t) = p(0,0|s,t) + p(1,1|s,t) - p(0,1|s,t) - p(1,0|s,t) \text{ for } s \in [m], t \in [n],
\end{equation}
that can be realized in local dimension $d$, but cannot be realized in smaller dimension.
\end{lemma}
\begin{proof}
We first show the existence of a quantum correlation that satisfies \eqref{corp}. Let $C \in \Cor(m,n)$. By assumption there exists a unit vector $\psi \in \mathbb{C}^{d} \otimes \mathbb{C}^{d}$, and Hermitian $d \times d$ matrices $X_1,\ldots,X_m,Y_1,\ldots,Y_n$, whose spectra are contained in $[-1, 1]$, such that $C_{s,t} = \psi^* (X_s \otimes Y_t) \psi$ for all $s$ and $t$. We define the Hermitian positive semidefinite matrices
\begin{equation}\label{eqXY}
X^a_s={I+(-1)^a X_s\over 2},\ Y^b_t={I+(-1)^b Y_t\over 2}\ \text{ for } a,b\in \{0,1\}.
\end{equation}
Using the fact that $X_s^0 + X_s^1 = Y_t^0 + Y_t^1 = I$, $X_s = X_s^0 - X_s^1$, and $Y_t = Y_t^0 - Y_t^1$, it follows that the function $p(a,b|s,t) = \psi^* (X_s^a \otimes Y_t^b) \psi$ is a quantum correlation that can be realized in local dimension $d$ and satisfies \eqref{corp}.
Assume now that $p$ can be realized in dimension $k$; we show that $k\ge d$. As $p$ is realizable in dimension $k$ there exist a unit vector $\tilde\psi \in \mathbb{C}^k \otimes \mathbb{C}^k$ and Hermitian positive semidefinite $k \times k$ matrices $\{\tilde X_s^a\}$ and $\{\tilde Y_t^b\}$ such that
\[
\sum_{a \in \{0,1\}} \tilde X_s^a = \sum_{b \in \{0,1\}} \tilde Y_t^b = I \quad \text{for all} \quad s \in S, t \in T,
\]
for which we have $p(a,b|s,t) = \tilde \psi^* ( \tilde X_s^a \otimes \tilde Y_t^b) \tilde\psi$. Observe that the spectrum of the operators $ \tilde X_s^a$ and $ \tilde Y_t^b$ is contained in $[0,1]$. We define $ \tilde X_s = \tilde X_s^0 - \tilde X_s^1, \tilde Y_t = \tilde Y_t^0- \tilde Y_t^1$. Then, using \eqref{corp}, we can conclude \[C_{s,t} = \tilde \psi^*( \tilde X_s \otimes \tilde Y_t) \tilde \psi.\] This means that $C$ has a tensor operator representation in local dimension $k$ and thus, by the assumption of the lemma, $k \geq d$.
\end{proof}
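Both halves of the proof are easy to check numerically. The sketch below (on a randomly generated instance; the matrix size and state are hypothetical) verifies that the operators in \eqref{eqXY} are positive semidefinite and that the signed combination \eqref{corp} of the resulting probabilities recovers $\psi^*(X_s \otimes Y_t)\psi$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def random_observable(d, rng):
    # hypothetical Hermitian matrix rescaled so its spectrum lies in [-1, 1]
    H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    H = H + H.conj().T
    return H / np.abs(np.linalg.eigvalsh(H)).max()

Xs, Yt = random_observable(d, rng), random_observable(d, rng)
psi = rng.standard_normal(d * d) + 1j * rng.standard_normal(d * d)
psi /= np.linalg.norm(psi)

I = np.eye(d)
Xa = [(I + (-1)**a * Xs) / 2 for a in (0, 1)]   # X_s^a as in the proof
Yb = [(I + (-1)**b * Yt) / 2 for b in (0, 1)]   # Y_t^b as in the proof

# each POVM element is positive semidefinite
assert all(np.linalg.eigvalsh(E).min() >= -1e-10 for E in Xa + Yb)

# sum of p(a,b|s,t) with signs (-1)^(a+b) recovers the correlation entry
corr = sum((-1)**(a + b) * (psi.conj() @ np.kron(Xa[a], Yb[b]) @ psi).real
           for a in (0, 1) for b in (0, 1))
direct = (psi.conj() @ np.kron(Xs, Yt) @ psi).real
assert abs(corr - direct) < 1e-10
```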
We can now prove our main theorem:
\begin{theorem*}
For each positive integer $k$, there exists a completely positive semidefinite matrix $M$ of size $4k^2 +2k +2$ with $\hcpsd(M) = 2^k$.
\end{theorem*}
\begin{proof}
Let $k$ be a positive integer, let $r = 2k$, and set $n = \binom{r}{2}+1$. By Theorem~\ref{lemExtremePoint}(i) there exists an extreme point $C$ of $\Cor(r,n)$ with $\rank(C) = r$. Corollary~\ref{cortensordim} tells us there exists a tensor operator representation of $C$ using local dimension $d =2^{\lfloor r/2\rfloor}= 2^k$, and that there does not exist a smaller tensor operator representation.
Then, by Lemma~\ref{borp}, there exists a quantum correlation $p \colon \{0,1\} \times \{0,1\} \times [r] \times [n] \to [0, 1]$ that can be realized in local dimension $d$ and not in smaller dimension. Let $M$ be a completely positive semidefinite matrix constructed from $p$ as indicated in Theorem~\ref{thrm1}, so that $\hcpsd(M) = d$ and the size of $M$ is $2r + 2n = r^2 + r + 2 = 4k^2 + 2k + 2$.
\end{proof}
We note that by using Theorem~\ref{lemExtremePoint}(ii) we would get a matrix with the same completely positive semidefinite rank $2^k$, but with larger size $4k^2+6k+2$. Likewise, the result of \cite{Ji13} combined with Theorem \ref{thrm1} also leads to a matrix with the same completely positive semidefinite rank, but with larger size ($148k^2-58k$). It is an open problem to find a family of completely positive semidefinite matrices where the ratio of the completely positive semidefinite rank to the matrix size is larger than in the above theorem. It is not possible to obtain such an improved family by the above method. Indeed, if $M$ is a completely positive semidefinite matrix with $
\hcpsd(M) = 2^k$, constructed from an extreme bipartite correlation matrix $C \in \Cor(m,n)$ as in the above theorem, then the size $2m+2n$ of $M$ is at least $4k^2 +2k+2$. To see this, note that, by Corollary~\ref{cortensordim} and the results in this section, $C$ has to have rank $2k$. Then, by Tsirelson's bound, $m+n-1 \geq {2k+1 \choose 2}$ and therefore $2m+2n \geq 4k^2+2k+2$.
\bigskip
\noindent {\bf Acknowledgements.} We are grateful to an anonymous referee for his/her careful reading and helpful comments, and for bringing the works \cite{Ji13,Slofstra11} to our attention.
\section{Introduction \& Outline}
\label{sec:intro}
Our poor understanding of clouds has been identified as a major source of uncertainty in the prediction of future climate scenarios with the Earth-system models used for that purpose \citep{IPCC_AR5}.
This unfortunate state of affairs concerns the radiative properties of all cloud types as well as their role in the hydrological cycle, including their complex interactions with natural and anthropogenic aerosols.
Due to their propensity for producing precipitation, clouds that form in convective dynamical regimes, both shallow and deep, over both water and land, are particularly problematic.
Normally, active (i.e., lidar and radar) and passive remote sensing would be a reliable source of knowledge about clouds and precipitation, with the latter modality providing the spatial coverage and the former the vertical structure.
Clearly, no single observational technology reveals all we need to know.
NASA is therefore proactively selecting, under a fixed budget and in consultation with the scientific community, the optimal sensor suite for its next generation of satellites, in pursuit of science goals spelled out in the 2017 Decadal Survey (DS17) for Earth Science \citep{DS2017}.
The DS17 ``Aerosol/Cloud-Convection-Precipitation'' track is confronted with conflicting needs to maintain a robust program-of-record (POR) and for innovation.
Operational cloud sensing in reflected solar light is currently limited by its assumption of plane-parallel cloud geometry, hence 1D radiative transfer (RT) to predict visible/near-IR (VNIR) and shortwave-IR (SWIR) observations \citep{platnick2017}.
Biases ensue in the retrieved cloud properties (cloud optical thickness, effective particle size), even under the best circumstances, i.e., extended stratiform cloud layers that approximate the assumed plane-parallel geometric model \citep{NK_bispectral_90}.
Due to their inherently 3D nature, vertically-developed convectively-driven clouds are all but forsaken in the sense that retrievals are performed on them but rarely pass tests for reliability \citep{cho2015}.
We believe that, to break out of the paradigm of pixel-by-pixel 1D RT-based retrievals that operational passive cloud sensing is committed to---now because of additional pressures of POR continuity---we need to improve our fundamental understanding of how images of optically thick 3D clouds are formed.
Out of this quest, a tale emerges where two intertwined radiative diffusion processes unfold in the cloud's ``veiled core'' (VC) and ``outer shell'' (OS).
In essence, the OS is the radiative equivalent of a boundary layer as defined in fluid dynamics, that is, where the presence of boundary sources and sinks of solar radiation are strongly felt.
In the OS, radiation-matter interactions thus shape the structure of the radiance field, including multi-angle imaging by remote sensors.
The VC is defined empirically in a previous study \citep{Forster_etal21} as the potentially large inner region of the cloud where small-scale details of the extinction field have a negligible impact on the remotely-observed escaping radiance fields, i.e., multi-view images.
A ``negligible'' effect is defined here as not exceeding the level of sensor noise.
Figure~\ref{fig:Koch_in+out_VC} uses a fractal cloud model to visualize the VC and OS.
Although Fig.~\ref{fig:Koch_in+out_VC} shows a sharp boundary between the OS and VC, the physically-correct definition presented further on leads to a more gradual transition.
\begin{figure}
\begin{center}
\includegraphics[width=3.14in]{Koch_in+out_VC.jpg}
\end{center}
\caption{
OS and VC of a 2D toy cloud model described by \cite{Forster_etal21} (e-supplement) based on the Koch fractal for its outer shape, and fractional Brownian terrain for its inner structure, tuned to mimic turbulence.
There is also a cloud-scale vertical gradient to capture how an adiabatic parcel-model would distribute liquid water content (LWC) vertically.
Optical thickness along the central column is set to 40.
{\bf (a)} This version illustrates solar illumination for $\theta_0$ = 60$^\circ$.
Numbered asperities are discussed further on (Fig.~\ref{fig:Koch_R_vs_T} in \S\ref{sec:VC_processes}b).
{\bf (b)} This version visualizes the whole VC by accounting for all 9 of MISR's viewing angles: 0$^\circ$, $\pm$26.1$^\circ$, $\pm$45.6$^\circ$, $\pm$60.0$^\circ$, and $\pm$70.5$^\circ$, respectively, for An, Aa/f, Ba/f, Ca/f, and Da/f cameras.
Any point within the colored areas is in the OS, while the VC is the grey zone.
}
\label{fig:Koch_in+out_VC}
\end{figure}
Cloud tomography (CT) using fused multi-angle/multi-spectral imaging from the likes of the Multi-angle Imaging SpectroRadiometer \citep[MISR,][]{Diner_etal98} and MODerate resolution Imaging Spectrometer \citep[MODIS,][]{Salomonson_etal02} on Terra is a much-needed breakthrough in passive VNIR-SWIR cloud remote sensing from space.
CT has been demonstrated so far \citep{Levis_etal15,Levis_etal17,Levis_etal20} only with synthetic and real airborne imaging sensor data where image pixels and grid voxels are relatively small, hence optically thin and arguably homogeneous, which is consistent with assumptions in the forward model at the core of the retrieval, namely, SHDOM \citep{evans1998spherical}.\footnote{
An alternative 3D RT-based approach to CT has been formulated mathematically \citep{martin2014adjoint} and demonstrated \citep{martin2018demonstration}, but so far only on a 2D transect through an LES cloud field.}
Moreover, the clouds subjected to CT so far are relatively small, so no particular attention was paid to the OS/VC partition.
Specifically, data from the Airborne Multi-Spectral Imaging Polarimeter (AirMSPI) \citep{diner2013} was used, with $\sim$20~m pixels, after rigorous uncertainty quantification (UQ) of the retrieval methodology using synthetic data from Large-Eddy Simulation (LES) of vertically-developed 3D clouds and high-fidelity 3D RT \citep{Levis_etal15}.
Again, key to the success of the CT demonstration is that voxels and pixels are comparable in scale at a few 10s of meters, and can reasonably be assumed optically thin and internally uniform.
The situation changes radically for the above-mentioned space-based sensors, whose pixels are 14 (MISR-red) to 25 (MODIS-SWIR) times larger than those of AirMSPI.
The cloudy atmospheric columns defined by MISR or MODIS pixels will almost surely be optically thick in \emph{horizontal} directions as well as internally variable.
These simple facts are problematic for current SHDOM-based CT codes because SHDOM assumes grid cells to be uniform and optically thin in all directions.
Figure~\ref{fig:EUREC4A} (top panel) displays a 250~m resolution MODIS ``true color'' image captured during the EUREC$^4$A (ElUcidating the Role of Clouds-Circulation Coupling in ClimAte) field campaign \citep{Bony_etal17,Stevens_etal21}.
The bottom panel in Fig.~\ref{fig:EUREC4A} shows a subset of the MODIS data in the top panel, along with colocated data from MISR (red channel, 275~m resolution).
Also shown is collocated imagery from the spectrometer of the Munich Aerosol Cloud Scanner \citep[specMACS,][]{Ewald_etal16}, which was onboard the DLR High Altitude and Long Range Aircraft (HALO) during the Terra under-flight, rendered at $\approx$20~m resolution.
This deliberate deployment during EUREC$^4$A will create an opportunity for validation of CT using relatively coarse-scale data from space-based sensors, which is currently under development.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.7]{MODIS+MISR+HALO.jpg}
\caption{
VIS imagery from MODIS and MISR and the airborne specMACS polarization resolving stereo camera captured on Feb 5$^\text{th}$, 2020, while the DLR HALO (High Altitude and LOng range research) aircraft was under-flying the Terra satellite during the EUREC$^4$\!A field campaign.
Opaque 3D clouds of various sizes are observed by all 3 sensors and are thus candidates for reconstruction using CT, with retrievals from the fine-scale airborne imagery offering a form of validation for CT methods now under development for coarse-scale satellite data.
}
\label{fig:EUREC4A}
\end{center}
\end{figure}
Fortunately for future development of CT from sensors in space, we will show herein that there is a clear spatial separation of transport regimes:
\begin{itemize}
\item
\emph{standard 3D RT} in the OS controls the gradual blurring of internal cloud structure in the imagery;
\item
\emph{radiative diffusion} in the VC controls the cloud-scale gradient in brightness between the illuminated and self-shaded sides of the cloud.
\end{itemize}
The formulation of space-based CT as a large ill-posed inverse problem will be informed by these insights, and new hybrid forward models for CT will make judicious use of them.
Even the initialization of necessarily-iterative CT schemes can be expedited using the diffusion theoretical prediction for the contrast ratio of mean radiances for the sunny and shady sides of vertically-developed convective clouds \citep[cf.][]{davis2002}.
The paper is organized as follows.
In upcoming Section~\ref{sec:OS_processes}, we investigate quantitatively radiative processes that unfold in the OS, with the theoretical background being covered in e\nobrkhyph{}Supplement~``A'' (hereafter referred to as ``Appendix~A'').
Section~\ref{sec:VC_processes} addresses radiation transport across the VC, with the underlying theory being covered in e\nobrkhyph{}Supplement~``B'' (hereafter referred to as ``Appendix~B'').
In Section~\ref{sec:OS_VC_interaction}, we examine how the OS and VC interact radiatively and, ultimately, how entire cloud images are formed.
We summarize and discuss our findings in Section~\ref{sec:discuss} and apply them to a case study in Section~\ref{sec:Case_study}.
We draw our conclusions in Section~\ref{sec:concl} and describe future research in support of cloud CT using merged MISR and MODIS data from NASA's flagship Terra platform.
\section{Directional smearing and spatial drift with lateral dispersion in the OS}
\label{sec:OS_processes}
\subsection{Connection between image features and cloud structures goes from sharp to fuzzy}
An important concept at the core of CT is the causal connection between the 2D spatial variability of cloud images and the 3D spatial structure of the cloud.
Figure~\ref{fig:emerging_features} displays a series of high-precision numerical experiments using the MYSTIC 3D RT solver \citep{mayer2009,buras2010} that shows how an image feature is associated with a simple internal cloud structure.
A geometrically plane-parallel cloud is divided into 9 equally-spaced layers, each of optical thickness unity.
There is a Lambertian surface with an albedo of 2/3 at the lower boundary while, at the upper boundary, the solar beam impinges on the medium at solar zenith angle (SZA) 60$^\circ$.
The top 5 or 6 layers can be thought of as the OS, and the lower ones along with the partially reflective surface as the VC.
The domain is divided into 9$\times$9 = 81 MISR-like pixels.
An ``object'' is embedded in the cloudy medium that covers the central 3$\times$3 = 9 pixels, and has the same \emph{physical} thickness as the layer (see upper right panel of Fig.~\ref{fig:emerging_features}), but its \emph{optical} thickness exceeds that of the background by factors of 2, 5 and 20, from top to bottom.
As expected, we see in Fig.~\ref{fig:emerging_features} that the stronger the internal extinction gradients, the stronger the resulting image feature.
Moreover, the presence of the object in the cloudy medium is easily detectable in the observations if it is at sufficiently shallow optical depth.
However, the image feature that the object creates eventually blends into the sensor noise as it sinks to ever larger optical depths at a rate that depends on the ``strength'' of the cloud structure, i.e., how opaque (or tenuous) it is compared to the background.
Objects of optical thickness $\tau_\text{object}$ up to 20$\times$ the background create image gradients that vanish into the radiometric noise at an optical depth $\lesssim$5.
As expected, even the most opaque objects can go undetected after they reach the OS/VC interface at $\tau_\text{layer} \approx 5$, and sink further into the VC.
The upper right panel in Fig.~\ref{fig:emerging_features} regroups the results in the right-hand plots of Fig.~\ref{fig:emerging_features} as well as intermediate cases in a 2D density plot in ($\tau_\text{layer},\tau_\text{object}$) space.
The feature ``emergence'' threshold of 0.05 for the adopted contrast ratio (the max-to-min radiance difference normalized by the image-transect mean) is highlighted.
To be detectable as an image feature, the object's parameters must be to the left of the white line: as anticipated, opaque and/or shallow objects generate observable features.
This experiment reinforces the choice of $\approx$5 as the threshold optical thickness of the OS.
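The contrast-ratio metric used in this experiment takes one line to implement; the following sketch (with hypothetical 9-pixel radiance transects) illustrates it along with the adopted 5\% detectability threshold:

```python
import numpy as np

def contrast_ratio(transect):
    """Feature-strength metric of the experiment: (max - min) radiance
    difference along a transect, divided by the mean radiance."""
    t = np.asarray(transect, dtype=float)
    return (t.max() - t.min()) / t.mean()

# hypothetical 9-pixel nadir transects (arbitrary radiance units)
flat = [0.42] * 9                               # no embedded object
bump = [0.40] * 3 + [0.44] * 3 + [0.40] * 3     # weak embedded object
assert contrast_ratio(flat) == 0.0
assert contrast_ratio(bump) > 0.05              # emerges above the 5% threshold
```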
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.14in]{emerging_features_2+5+20+summary.jpg}
\end{center}
\caption{
The upper right panel schematic depicts an embedded ``object'' in an otherwise uniform plane-parallel medium of optical thickness 9: a $3\times3\times1$ voxel disturbance viewed from above; SZA is 60$^\circ$ coming from the left, as indicated with the dashed line.
The three lower left-hand panels show the resulting 3 nadir pixel-level radiances across the middle of the object (dashed line) along with 3 more pixels on either side of it, hence 9 pixels in all.
From top to bottom, 3 selected optical thicknesses of the object are used: $\tau_\text{object}$ = 2, 5, and 20, in an environment where the optical thickness of a single layer is unity.
All radiances are computed to high precision, with $\approx$0.5\% numerical error.
The image transects are displayed for optical depths from the upper boundary down to the top of the object (denoted $\tau_\text{layer}$) varying from 0 to 8 (color coding in first panel applies to all).
Where perceptible, the width of the lines captures the Monte Carlo noise level (2 standard deviations).
The three lower right-hand panels show the contrast ratio defined as the maximum-to-minimum radiance difference (denoted as ``feature'') divided by the mean radiance along the transect in the left panel (denoted as ``background'').
This ratio is a metric for the strength of the apparent image feature, and is plotted as a function of $\tau_\text{layer}$.
The horizontal grey line indicates the highlighted 5\% contrast level at which the feature barely emerges from the radiometric sensor noise (i.e., $\approx$3\% for each radiance in the difference).
Finally, the upper right panel compiles the contrast ratio as a function of both $\tau_\text{layer}$ and $\tau_\text{object}$, with the critical 5\% contrast level highlighted in white.
}
\label{fig:emerging_features}
\end{figure}
In this case of oblique illumination (SZA = 60$^\circ$), we also see that change in radiance value across pixels is invariably steeper on the illuminated side of the object than on its shaded side, even when the object is not particularly opaque.
This is because radiation flows across more pixels (i.e., cloud columns) when the light is more in the diffuse field and less in the direct beam---a phenomenon known as radiative smoothing \citep{Marshak_etal1995,Davis_etal1997}.
That said, all gradients are naturally flattened as the object goes deeper into the OS and eventually the VC.
At any rate, the sharper pixel-to-pixel gradients are more conducive to accurate feature detection and tracking in a horizontal wind and/or stereo height determination.
The conventional definition of ``cloud top'' in stereographic cloud top height (CTH) retrievals \citep[e.g.,][]{marchand2001,HorvathDavies2004} is $\approx$1 optical depth below the actual cloud top, as defined by the highest occurrence of any condensed water particles, which can be viewed as the ``microphysical'' definition of cloud top.
Figure~\ref{fig:emerging_features} shows that there is in fact a range of possible optical depths below the microphysical cloud top.
If we could determine for each detected feature a robust metric for its ``strength,'' then level curves in a 2D plot such as the upper right panel of Fig.~\ref{fig:emerging_features} would help to constrain the actual optical depth of the physical heterogeneity that caused the image feature.
From there, we could learn how to relate optical to physical depth, and thus refine the determination of actual CTH and the height assignment of the horizontal wind retrieval.
{\bf The OS of any cloud, cumuliform or stratiform, is therefore a region of gradual transition where sunlight goes from streaming ballistically outside the cloud to wandering randomly in space when it reaches the VC.}
That means that we have to pay attention to how directional properties interact with spatial counterparts.
From the remote sensing standpoint, the OS is where we will find the internal cloud structures that lead to ``features'' in the satellite or airborne imagery, e.g., for cloud top height (CTH) determination using stereographic methods.
From the cloud tomography perspective, the OS is where we should focus the bulk of the computational effort to reconstruct 3D cloud structure since that is where the spatial information encoded in the multi-angle imagery is connected with inherent 3D cloud structure.
In short, this is the realm of RT per se in the most 3D sense: details of the phase function (PF) matter, as do those of the spatial structure of the extinction field.
Notwithstanding, we show in the following that insights into the gradual transition are gained by focussing primarily on the angular distribution of the radiance field in the case of unbounded uniform 3D media.
\subsection{Angular distribution of radiance in the OS from a random walk on the 2D sphere}
Figure~\ref{fig:PF_convolve} shows the rotationally invariant distribution of radiance as a function of polar angle $\theta$ after $N$ scatterings, $0 \le N \le 10$.
In App.~A, we are reminded that, at every scattering event, the prevailing angular distribution of radiance is convolved (in the spherical sense) with the PF.
Therefore, as far as direction of propagation is concerned, the radiance field is represented by successive convolutions of the PF with itself.
At $N = 0$, the light is confined to a perfectly collimated beam.
At $N = 1$, we simply have an image of the adopted cloud droplet PF, with over 5~orders-of-magnitude in range.
After only $N = 5$ scatterings, the range is drastically reduced to only one order-of-magnitude.
At $N = 10$, the field is nearly isotropic.
Following the formulation of RT in App.~A, we can recast this radiance field evolution as a discrete-time random walk (RW) on the unit sphere, i.e., the transport direction sub-space mapped out with polar coordinates $(\theta,\phi)$, where $\theta \in [0,\pi]$ and $\phi \in [0,2\pi)$.
Let $p(\mu_{\rm s})$ be the PF, where $\mu_{\rm s} = \cos\theta_{\rm s}$, with $\theta_{\rm s}$ denoting the scattering angle.
We normalize $p(\mu_{\rm s})$ such that $2\pi \int_{-1}^{+1} p(\mu_{\rm s}) {\rm d}\mu_{\rm s}$ is unity.
As usual in RW theory, all the ``particles'' are released at the North pole ($\theta = 0$) in uniformly random azimuthal directions.
At time $N = 1$, the particles are thus distributed across the sphere uniformly in azimuthal angle, as a reflection of the rotational symmetry of the PF.
In sharp contrast with azimuthal angle $\phi$, particles are now distributed very unevenly in polar angle $\theta$, specifically, according to the highly variable PF in Fig.~\ref{fig:PF_convolve}.
At time $N = 1$, the polar angle is simply $\theta = \cos^{-1}\mu_{\rm s}$.
Of special interest in RW theory is the mean step size, a.k.a. mean-free-path (MFP),
\begin{eqnarray}
\langle \theta_{\rm s} \rangle &=& 2\pi \int_0^\pi \theta_{\rm s} p(\cos\theta_{\rm s}) \sin\theta_{\rm s} {\rm d}\theta_{\rm s} \nonumber \\
&=& 2\pi \int_{-1}^{+1} \cos^{-1}\mu_{\rm s} p(\mu_{\rm s}) {\rm d}\mu_{\rm s}.
\label{eq:mean_theta_s}
\end{eqnarray}
For the PF in Fig.~\ref{fig:PF_convolve}, we have $\langle \theta_{\rm s} \rangle$ = 18.9$^\circ$.
For comparison, the full-width-half-max (FWHM) of the forward peak is only $\approx$1.7$^\circ$, and one half of the scattering events are in the cone defined by $\theta \lesssim 5.5^\circ$.
In RT theory, we are more familiar with the PF's asymmetry factor
\begin{equation}
g = \langle \mu_{\rm s} \rangle = 2\pi \int_{-1}^{+1} \mu_{\rm s} p(\mu_{\rm s}) {\rm d}\mu_{\rm s},
\label{eq:g_factor}
\end{equation}
which is 0.86 for the PF in Fig.~\ref{fig:PF_convolve}.
An alternative metric for the RW step size is therefore $\cos^{-1}g$ = 30.5$^\circ$ for the PF in Fig.~\ref{fig:PF_convolve}.
So much for $N = 1$.
After $N = 5$ scatterings, the \emph{effective} asymmetry factor of the iterated PF is $g^5 \approx 0.47$ from App.~A, Eq.~(A2), and the associated polar angle has already more than doubled on average to 61.7$^\circ$.
After $N = 10$ scatterings, we have $g^{10} \approx 0.22$ and we are 77$^\circ$ away from the North pole.
In other words, memory of the original direction of propagation is all but lost, almost as it would be after a single isotropic scattering.
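The $g^N$ decay of directional memory is easy to verify by direct simulation of the RW on the sphere. The sketch below uses the Henyey-Greenstein (HG) phase function with $g = 0.86$ as a smooth, analytically invertible stand-in for the Mie PF of Fig.~\ref{fig:PF_convolve}; since $\langle \cos\theta_N \rangle = g^N$ holds for any PF with asymmetry factor $g$, the HG substitution does not affect the check:

```python
import numpy as np

rng = np.random.default_rng(0)
g, n_steps, n_phot = 0.86, 10, 200_000

def sample_hg_mu(g, xi):
    # standard inversion of the HG scattering-angle cosine
    t = (1 - g * g) / (1 - g + 2 * g * xi)
    return (1 + g * g - t * t) / (2 * g)

# step 1: all photons released at the North pole, direction (0, 0, 1)
mu = sample_hg_mu(g, rng.random(n_phot))
phi = 2 * np.pi * rng.random(n_phot)
st = np.sqrt(np.clip(1 - mu * mu, 0, None))
u = np.stack([st * np.cos(phi), st * np.sin(phi), mu], axis=1)
means = [u[:, 2].mean()]

for _ in range(n_steps - 1):
    mu = sample_hg_mu(g, rng.random(n_phot))
    phi = 2 * np.pi * rng.random(n_phot)
    st = np.sqrt(np.clip(1 - mu * mu, 0, None))
    ux, uy, uz = u[:, 0], u[:, 1], u[:, 2]
    d = np.sqrt(np.clip(1 - uz * uz, 1e-24, None))
    # standard rotation of the propagation direction by (theta_s, phi)
    nx = st * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / d + ux * mu
    ny = st * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / d + uy * mu
    nz = -st * np.cos(phi) * d + uz * mu
    u = np.stack([nx, ny, nz], axis=1)
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    means.append(u[:, 2].mean())

# mean of cos(theta) after N scatterings vs. the g**N prediction
print([(N, round(means[N - 1], 3), round(g**N, 3)) for N in (1, 5, 10)])
```

With $2\times10^5$ photons the Monte Carlo standard error on each mean is of order $10^{-3}$, well below the $g^N$ values being tested.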
In App.~A, we show that for this RW on the sphere, positional correlations decay exponentially.
Technically speaking, it is a Markov process with \emph{short-term} memory.
Fundamentally, this is because the sphere is a finite manifold in 3D space.
{\bf The characteristic (a.k.a. ``e-folding'') time for the $g^n$ series is}
\begin{equation}
n_\text{smear} = -1/\log g.
\label{eq:n_smear}
\end{equation}
If $g \lesssim 1$, then $n_\text{smear} \approx g/(1-g)$; see App.~A, Eq.~(A7).
Numerically, these two estimators lead to $n_\text{smear}$ = 6.7 and $\approx$ 6.2, respectively, for $g$ = 0.86.
\begin{figure}
\begin{center}
\includegraphics[width=3.14in]{PF_N.pdf}
\end{center}
\caption{
A typical liquid water cloud phase function convolved $N$ times with itself for $N$ = 0, 1,..., 10, on a log-scale.
Specifically, we assume $r_{\rm e}$ = 10~$\mu$m, $v_{\rm e}$ = 0.1 in a Gamma particle size distribution (PSD) from \cite{cahalan2005}, for $\lambda$ = 670~nm (MISR's red channel).
Outcomes for increasing $N$ are color-coded: $N = 0$, black; $N = 1$, gray; $N \ge 2$, rainbow colors with increasing wavelengths.
Thus, ballistic motion is in black, singly-scattered radiance is in gray, multiply-scattered radiance is in color, going from cool to warm.
We clearly see the gradual erasing of identifiable single-scattering features such as the strong forward peak at $\theta = 0$, the back-scatter/glory peak at $\theta = 180^\circ$, as well as the primary and secondary rainbows at $\theta \approx 142^\circ$ and $126^\circ$, respectively.
}
\label{fig:PF_convolve}
\end{figure}
\subsection{Thickness of the OS, from longitudinal drift due to forward scattering}
What are the spatial ramifications of the RW on the sphere for the associated discrete-time RW in 3D space?
The strong forward peak of the PF forces the position of the particle to stay, on average, close to the axis defined by its original direction of propagation, say, the $z$-axis.
We show in App.~A that after $N$ steps, the mean position along the $z$-axis, denoted $\langle z_N \rangle$, is $\ell \, (1-g^{N+1})/(1-g)$, where $\ell$ is the MFP in 3D space.
In a uniform optical medium, $\ell = 1/\sigma$, where $\sigma$ is the extinction coefficient.
In optical distance, that systematic drift is thus given by
\begin{equation}
\langle \tau_N \rangle = \sigma\langle z_N \rangle = \frac{\langle z_N \rangle}{\ell} = \frac{1-g^{N+1}}{1-g}.
\label{eq:mean_z_drift}
\end{equation}
How far has the 3D space RW drifted after $N = n_\text{smear}$ steps?
For a typical cloud droplet PF with $g$ = 0.86, we estimated that $n_\text{smear} \approx 6.5$, which yields $\langle \tau_{n_\text{smear}} \rangle \approx 4.8$.
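These estimates reduce to a few lines of arithmetic; the sketch below evaluates \eqref{eq:n_smear} and \eqref{eq:mean_z_drift} assuming $g = 0.86$ exactly (small differences from the quoted values reflect rounding of $g$):

```python
import math

g = 0.86                                  # typical cloud droplet asymmetry factor

n_smear = -1.0 / math.log(g)              # e-folding time of the g**n series
n_approx = g / (1.0 - g)                  # approximation for g close to 1

def mean_tau(N, g):
    # mean optical-depth drift after N scattering events
    return (1.0 - g ** (N + 1)) / (1.0 - g)

print(f"n_smear = {n_smear:.2f} (approx. {n_approx:.2f})")
print(f"<tau> after ~6.5 scatterings = {mean_tau(6.5, g):.2f}")
```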
The proximity of this optical distance to the optical depth of the OS/VC interface determined empirically by \cite{Forster_etal21}, namely $\approx$5, is not coincidental.
Those authors define the 3D cloud's VC as the inner region where the extinction field can be rearranged in physically plausible ways, but the resulting changes in the remotely measured radiances are commensurate with sensor noise.
Approaching the VC from the opposite direction, one can ask:
{\bf How \emph{optically} deep can a reasonably strong fluctuation in the cloud's 3D extinction field be, and still be detectable in the outgoing 2D radiance field (i.e., an image of the cloud)?}
This is, in essence, a question about visibility through cloudy air.
It is also a key question in CT in the same way that the VC is, because any extinction field structure deeper than this threshold has little impact on the outgoing radiance fields, and is not worth trying too hard to retrieve.
The numerical experiment summarized in Fig.~\ref{fig:emerging_features} investigated this question empirically, but now we can address it theoretically with the above general results in RT.
Considering only the directionality of the light that leaves a region of fluctuation in the extinction field heading toward a remote imaging sensor, that light becomes more and more dispersed as the region sinks deeper into the cloudy medium.
This means there is less and less light heading toward the pixels that would sharply image that region in the absence of cloud.
So, apart from being extinguished, the remaining light is spread over more and more pixels at the focal plane.
We venture that the threshold optical depth is roughly $\langle \tau_N \rangle$ in (\ref{eq:mean_z_drift}), which works out to be $\approx$5 for $N = n_\text{smear}$ in (\ref{eq:n_smear}) when $g$ = 0.86.
Now, a perfect sensor would always register some difference as the optical distance to the region of interest increases, but real sensors have noise floors that determine change detectability limits.
Typical signal-to-noise ratios (SNRs) for good VNIR/SWIR sensors are in the 100s.
However, direct or near-direct transmission across an optical distance of $\langle \tau_{n_\text{smear}} \rangle \approx 5$ is ${\rm e}^{-5} \approx 6.7\times10^{-3}$, which would be approaching the noise level (assuming a typical cloud reflectivity does not saturate the sensor).
This prediction should be quite robust since two extreme limits lead to logical answers.
First, in the theoretical limit of no scattering at all ($g \to 1$), then $n_\text{smear} \to \infty$ and $\langle \tau_{n_\text{smear}} \rangle \to n_\text{smear}+1 = \infty$ in (\ref{eq:mean_z_drift}), which is logical (at least in the absence of absorption, i.e., pure unattenuated ballistic propagation).
Second, in the limit of isotropic scattering ($g \to 0^+$), then $n_\text{smear} \to 0$ and $\langle \tau_{n_\text{smear}} \rangle \to n_\text{smear}+1 = 1$, which is also logical since there is no forward scattering trend for light to get any deeper than a MFP on average before complete reorientation.
From (\ref{eq:mean_z_drift}), we can compute $\langle z_\infty \rangle = \ell/(1-g)$, which is known as the ``transport'' MFP (tMFP) in the case of non-absorbing optical media.
This quantity resurfaces and plays a crucial role in \S\ref{sec:VC_processes} below.
As carefully worded in the above, the answer to our question depends entirely on the scattering PF, and our theoretical arguments even bring that down to $g$.
Now, if asking about \emph{physical} rather than \emph{optical} depth, then the answer would also call for an effective extinction value, equivalently, an effective MFP, for the variable medium.
\subsection{Spatial resolution in the OS, from lateral dispersion based on sideways and forward scattering}
In App.~A, we investigate lateral dispersion of the scattered light field as well as the longitudinal drift examined in the above.
This enables us to quantify in 3D physical space the ``smearing'' effect discussed so far only as a directional phenomenon.
Returning to the RW-based formulation of RT in the OS, we computed exactly the mean position of the RWing particle along the original direction of propagation, namely, $\langle z_N \rangle$, after $N \ge 0$ scatterings.
We must note however that the analytical estimate is strictly for a uniform \emph{unbounded} 3D space.
In fact, it is the possibility of large excursions of the RW into the $z < 0$ region that keeps $\langle z_N \rangle$ strongly bounded, and indeed makes it converge to a finite value $\langle z_\infty \rangle$ that we equated to the tMFP in the above.
To gain further insights, we also investigated in App.~A RWs in a uniform unbounded 3D \emph{half-}space, this time numerically: RWs are simply terminated if they enter $z < 0$ territory (see Fig.~A3 and corresponding discussion).
The most drastic outcome is that, without the potentially very large negative-$z$ excursions, $\langle z_N \rangle$ now increases without bound with increasing $N$ (cf. Fig.~A3a).
That said, the analytical result we used in the above for $\langle z_N \rangle$ remains reasonably accurate up to $N \approx 6$ (hence $\langle \tau_N \rangle \approx 5$), which is roughly $n_\text{smear}$ when $g$ = 0.86.
This finding reinforces our prediction of the optical thickness of the OS based on $\langle \tau_{n_\text{smear}} \rangle$.
We can do the same \emph{mean} position estimation for horizontal directions, and indeed very easily: $\langle x_N \rangle = \langle y_N \rangle = 0$, after $N \ge 0$ scatterings, simply because of the rotational symmetry of the PF and its iterations by convolution in Fig.~\ref{fig:PF_convolve}.
Thus, to get to the interesting quantities, we now need to move on from 1$^\text{st}$-order statistics to their 2$^\text{nd}$-order counterparts, namely, variances $\langle z_N^2 \rangle-\langle z_N \rangle^2$ and $\langle x_N^2 \rangle$.
As seen in Figs.~A3b,c, we succeeded in getting exact results for the (full) unbounded 3D space only for $N$ = 0,1 and, for $\langle x_N^2 \rangle$, $N$ = 2.
The numerical investigation in App.~A shows that the 2$^\text{nd}$-order analytical model deteriorates much faster in $x$ than in $z$, in the sense of underestimation (as rationalized in App.~A).
The upgrade in the numerics from the full 3D space to the $z > 0$ half-space naturally makes no difference along the $x$- and $y$-axes.
However, by eliminating deviations into $z < 0$ space, it reduces the variance along the $z$-axis, which slightly favors the 2$^\text{nd}$-order analytical model.
It is noteworthy that, like the model for the mean of $z$, the analytical models for variances require knowledge of $g$, which comes from the 1$^\text{st}$-order coefficient in the Legendre expansion of the PF; they additionally require the 2$^\text{nd}$-order coefficient.
Quantitatively, variances in the position of the particles are nearly equal in all three axes for all but the first couple of steps when the full 3D space is accessible to their RWs.
The same is still true in the case of the half-space for as long as the mean $z_N$ is still in the radiative boundary layer (i.e., the analog of the OS for a uniform medium), specifically, for $N \lesssim n_\text{smear}$.
For larger values of $N$, the half-space variance in vertical direction becomes smaller than that of its horizontal counterparts.
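The drift and dispersion statistics discussed above are straightforward to reproduce with a minimal Monte Carlo sketch in Python. It assumes a Henyey-Greenstein PF with the same $g$ as a stand-in for the Mie PF, exponential free paths with unit MFP, and an unbounded 3D space, tallying position at the $(N+1)$-th collision:

```python
import math, random

def hg_cos(g, rng):
    """Sample the scattering-angle cosine from a Henyey-Greenstein
    phase function with asymmetry factor g (standard inversion formula)."""
    s = (1.0 - g*g) / (1.0 - g + 2.0*g*rng.random())
    return (1.0 + g*g - s*s) / (2.0*g)

def rotate(u, ct, phi):
    """New unit direction after scattering by arccos(ct) about old
    direction u, with azimuth phi (standard local-frame rotation)."""
    st = math.sqrt(max(0.0, 1.0 - ct*ct))
    x, y, z = u
    if abs(z) > 0.99999:                  # near-vertical: simple frame
        return (st*math.cos(phi), st*math.sin(phi), ct if z > 0.0 else -ct)
    d = math.sqrt(1.0 - z*z)
    return (st*(x*z*math.cos(phi) - y*math.sin(phi))/d + x*ct,
            st*(y*z*math.cos(phi) + x*math.sin(phi))/d + y*ct,
            -st*math.cos(phi)*d + z*ct)

def walk_stats(n_scat=6, g=0.86, n_walk=100_000, seed=1):
    """Mean z and std of x (in MFPs) at the (n_scat+1)-th collision of a
    persistent RW in unbounded 3D space with exponential free paths."""
    rng = random.Random(seed)
    sz = sx = sx2 = 0.0
    for _ in range(n_walk):
        pos = [0.0, 0.0, 0.0]
        u = (0.0, 0.0, 1.0)               # initial propagation along +z
        for k in range(n_scat + 1):
            step = -math.log(1.0 - rng.random())   # exponential, MFP = 1
            pos = [p + step*c for p, c in zip(pos, u)]
            if k < n_scat:
                u = rotate(u, hg_cos(g, rng), 2.0*math.pi*rng.random())
        sz += pos[2]; sx += pos[0]; sx2 += pos[0]**2
    mz = sz/n_walk
    std_x = math.sqrt(sx2/n_walk - (sx/n_walk)**2)
    return mz, std_x

mz, std_x = walk_stats()
print(mz)      # close to (1 - 0.86**7)/0.14 ~ 4.66, only g matters
print(std_x)   # lateral dispersion of ~2.5 MFPs, cf. Fig. A3c
```

The mean drift matches $\ell\,(1-g^{N+1})/(1-g)$ exactly, since only $g$ enters that first moment, while the lateral standard deviation of roughly 2.5 MFPs is consistent with the numerical results of App.~A; a half-space variant would simply terminate walkers entering $z < 0$.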
{\bf What does this tell us about visibility in the cloudy medium and for guidance in CT algorithm development?}
We are primarily interested here in the value of the standard deviation $\sqrt{\langle x_N^2 \rangle}$ while $\langle z_N \rangle$ is still in the radiative boundary layer, i.e., $\lesssim 5\ell$, hence $N \lesssim n_\text{smear}$ when $g \approx 0.86$.
That is a metric of lateral dispersion of a light beam that is not only collimated but also localized.
It therefore informs us about the spatial resolution that can be achieved when imaging through the cloudy optical medium.
To see this, visualize a laser source at the upper boundary of the half-space pointing vertically into the medium.
The resulting radiance field is, for all practical purposes, the Green function for the 3D RT problem, having a unitary Dirac-delta source in both physical and directional sub-spaces.
Inside the half-space, this laser light is broken into two components: ({\it i}) a directly-transmitted contribution that is still a $\delta$-function in the original direction, with a weight $\exp(-z/\ell)$ that decays rapidly with penetration depth $z$; ({\it ii}) a once- and more-scattered light field that can be identified as the point-spread function (PSF) of the cloudy medium, with a complementary weight that rapidly approaches unity when $z$ much exceeds the MFP $\ell$.
Both components are key to questions about imaging through the scattering medium.
Indeed, the imaginary laser source can be viewed as emitting ``reciprocal'' light propagating in reverse direction away from a given pixel at the sensor.
The direct component captures what the sensor can see in the absence of scattering, although reduced in strength by Beer's brutal law of exponential extinction.
The diffuse PSF component quantifies the ``importance'' for that pixel of all of the volume not along the direct beam, which quickly becomes the dominant source of light at the pixel of interest as $\tau = z/\ell$ increases well past unity.
In other words, at optical depths where the PSF overwhelms the direct light, spatial resolution is inherently limited to structures that are larger than the root-mean-square (RMS) of the lateral transport (in MFPs) via RW at optical depth $\tau$, namely, $\sqrt{\langle x^2 \rangle(\tau)}/\ell$.
Unfortunately, we do not have a closed-form expression for this important quantity.
However, the computational experiment resulting in Fig.~A3 enables us to make a statement about anticipated spatial resolution for CT in the OS.
The blue dots in the three panels indicate respectively: (a) $N \approx n_\text{smear} \approx 6$ at which $\langle \tau_N \rangle$ reaches 5, the predicted optical thickness of the OS; (b) the standard deviation of $\tau_N = z_N/\ell$ for the same $N$, which is $\approx$2.5; (c) same as (b) but for $x_N/\ell$, which is also $\approx$2.5.
Doubling that number yields an estimate of the full width of the PSF at the OS/VC interface.
That in turn is an indication in optical units of the size of a fluctuation in the extinction field that can be retrieved robustly by tomography at the edge of the VC, assuming that a large sensor pixel size is not a prior limitation.
At shallower optical depths, we anticipate that smaller structures can be reconstructed accurately.
At the edge of the cloudy medium, i.e., $\lesssim$1 optical depth, directly transmitted light is by definition a significant part of the signal.
Consequently, tomographic reconstruction methods will work the best in these optically shallow regions, and the spatial resolution of the 3D cloud imaging is determined only by the 2D sensor pixel scale.
In this last case of optically shallow structures in the extinction field, they are naturally the ones that drive the sharpest gradients in the cloud images (see Fig.~\ref{fig:emerging_features}).
In turn, those are precisely the ones picked up by feature-matching algorithms, and then used in CTH determination via stereo.
\subsection{Experimental validation}
Bohren et al. (\citeyear{Bohren_etal95}) ask: {\bf At what optical thickness does a cloud completely obscure the sun?}
We will argue that this question is closely related to the one about the optical depth of the effective OS/VC interface, to which we answer: $\approx$5, both empirically \citep{Forster_etal21} and theoretically (here).
Bohren and coworkers address their question experimentally using human subjects looking at a surrogate sun through a homogeneous artificial cloud.
Their ``cloud'' was composed of neutrally-buoyant spheres made of polystyrene (refractive index $\approx$1.59) with various size distributions (radii 0.652$\pm$0.0048~$\mu$m, 5.3$\pm$1.2~$\mu$m, 15.9$\pm$2.9~$\mu$m) suspended in a 0.26$\times$0.26$\times$0.50~m$^3$ tank filled with distilled water.
Mie computations (for the relative refractive index of 1.2) yield $g$-values of roughly 0.8, 0.85, and 0.9, respectively, for these PSDs.
Cloud optical thickness (COT) was gradually increased from 0 to the point where all three human observers agreed that they could no longer determine where the ``sun'' was located based on the transmitted light.
They did this by adding known numbers of spheres, and thoroughly mixing the contents of the tank.
They carefully measured COT (using Beer's law) at their working wavelength of 700~nm while particle concentrations were still low, and extrapolated to COT at higher concentrations using the calibrated linear relation.
The rule-of-thumb established by the authors was that a cloud with COT $\approx$ 10 will completely obscure the solar disk.
They found a weak trend toward larger COTs with larger particles (hence increased $g$), due to the enhanced forward scattering, but not enough to change the rule-of-thumb.
Bohren et al.'s ``solar disk vanishes at COT $\approx$ 10'' rule-of-thumb is consistent with our determination of an optical depth of $\approx$5 from the solar source forward and from the sensors backward to the OS/VC interface.
Indeed, for any identifiable structure in the cloud's extinction field to have a measurable (i.e., $>$noise) impact on remotely observable radiances it needs to be illuminated with at least partially collimated incoming sunlight and, similarly, be viewed with reflected light with at least residual directionality.
We determined in the above that, by examining iterated self-convolutions of the PF in both directions, the measured light cannot have suffered more than $\approx n_\text{smear}$ scatterings lest its directionality be seriously degraded.
That is precisely what led to the prediction of $\approx$5 as the optical thickness of the OS (equivalently, optical depth of the VC).
It is therefore not a surprise that \emph{all} memory of directionality is lost (the solar disk is no longer detectable) at precisely twice that optical distance, namely, 10.
In short, we regard this alignment with Bohren et al. (\citeyear{Bohren_etal95}) as validation of our predictions based on a controlled laboratory experiment.
\subsection{Partition of reflected sunlight between the OS and the VC}
Not all of the incoming sunlight reaches the OS/VC interface.
Some fraction is reflected back to space before that happens, and it can be estimated roughly, on the conservative side, by using the expression for transmittance from the $\delta$-Eddington version of the two-stream model \citep[e.g.,][]{MeadorWeaver80} in the absence of absorption:
\begin{equation}
\label{eq:T_deltaEdd}
T_g(\tau,\mu_0) = \frac{(2+3\mu_0)+(2-3\mu_0){\rm e}^{-\tau/\mu_0}}{4+3(1-g)\tau},
\end{equation}
where $\mu_0$ is the cosine of the angle between the solar direction and the local outgoing normal to the plane-parallel cloud's outer boundary.
Now, a real cloud's outer boundary is a flimsy fractal entity where droplets are condensing and evaporating all the while being flung around by turbulent motions over a wide range of scales.
Such a fractal surface has no well-defined normal.
However, one can imagine a ``running mean'' surface that cuts through the most tenuous outer layers of the cloud.
Alternatively, one can define the outer surface as the convex hull of the fractal boundary, allowing for regions of null extinction to exist inside the convex optical medium.
At any rate, we're interested in the value of $T_g(\tau,\mu_0)$ when $g = 0.85$ and $\tau = 5$.
It ranges from $\approx$0.32 to $\approx$0.80 for $0 < \mu_0 \le 1$, recalling that the local irradiance is $\propto\mu_0$.
So, to get our rough estimate of how much sunlight ever reaches the VC, we are interested in an average of $T_{0.85}(5,\mu_0)$ over the illuminated portion of the cloud's surface.
If the distribution of $\mu_0$ across the surface is uniform on (0,1], then the mean $T$ is $\approx$0.57.
The proverbial spherical cloud leads to $\approx$0.64.
A reasonable rule-of-thumb is thus $\approx$60\%.
Factoring in the spatial variability of the OS, which is known to lead to an underestimate of transmittance \citep[e.g.,][]{cahalan94}, we can settle on $\approx$2/3.
{\bf Consequently, we reckon that $\approx$1/3 of the incoming sunlight never enters the VC.}
To a first approximation, that is the light that forms the ``features'' in the cloud imagery on (or near) the illuminated side of the cloud.
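These back-of-the-envelope averages are easy to reproduce numerically from (\ref{eq:T_deltaEdd}); a sketch using midpoint quadrature, with $g$ = 0.85 and $\tau$ = 5 (the exponential term only matters near $\mu_0 = 1$):

```python
import math

def T(tau, mu0, g):
    """Two-stream transmittance of Eq. (T_deltaEdd), no absorption."""
    return ((2 + 3*mu0) + (2 - 3*mu0)*math.exp(-tau/mu0)) / (4 + 3*(1 - g)*tau)

n, tau, g = 100_000, 5.0, 0.85
mus = [(i + 0.5)/n for i in range(n)]      # midpoints on (0,1]
Ts = [T(tau, m, g) for m in mus]

# Mean over a uniform distribution of mu0 on (0,1]:
uniform_mean = sum(Ts)/n
# Irradiance (cosine-weighted) mean, appropriate for a spherical cloud:
cos_mean = sum(t*m for t, m in zip(Ts, mus)) / sum(mus)
print(round(uniform_mean, 2), round(cos_mean, 2))   # ~0.56-0.57 and ~0.64
```

Both numbers agree with the text's estimates ($\approx$0.57 for a uniform $\mu_0$ distribution, $\approx$0.64 for the spherical cloud), supporting the $\approx$2/3 rule-of-thumb once OS variability is factored in.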
Let us further examine the fate of the light that does \emph{not} enter the VC by revisiting Fig.~\ref{fig:Koch_in+out_VC}.
Figure~\ref{fig:Koch_in+out_VC} builds on results from \cite{Forster_etal21} to visualize the VC and OS in a toy cloud model based on a fractal outer shape and a scale-invariant stochastic model for the inner structure that mimics turbulence overlaid with cloud-scale vertical gradient that captures convectively-driven stratification of the extinction field.
COT down the central column is 40.
Version (a) shows the solar rays penetrating the cloud from the West ($\theta_0$ = 60$^\circ$).
They are color-coded by optical distance $\tau_0$ to the solar source, and terminated at $\tau_0$ = 5.
Version (b) shows the OS in color and the VC in grey, as defined by the nine MISR view angles, although the three dominant cameras are Df, An and Da.
We see that there are lots of locations where the optical distance to the sun and to the cameras are both relatively small.
This is an opportunity for singly-scattered light to reach the sensor and convey the sharpest possible imprint of physical cloud structure as an image feature.
We note however that there is not much incoming/outgoing overlap for the most oblique forward-looking camera (Df) and, due to this particular cloud and illumination geometry, it occurs only very near the top of the cloud.
There is increasing overlap for the less oblique forward-looking cameras, and maximal overlap for all the aft-looking cameras.
The strong forward peak of the PF allows for still quite sharp imaging of structures in somewhat deeper layers thanks to light that has suffered as much as a handful of scatterings.
However, the more scatterings suffered, the less structured the light field.
It will therefore convey less of the information about cloud structure that is exploited in the inverse imaging problem in CT.
\section{Diffusional transport across the VC}
\label{sec:VC_processes}
So far, we have defined and visualized the VC as a complex domain inside the Koch cloud that inherits a fractal structure from it due to the solar and reciprocal sensor beams drilling into the medium, as illustrated in Fig.~\ref{fig:Koch_in+out_VC}.
In sharp contrast, \cite{Forster_etal21} performed their numerical experiments using a simple geometric shape to approximate the VC inside the fractal Koch cloud.
Figure~\ref{fig:adaptive_VC} shows an adaptive compromise between these two operative definitions.
First, the VC definition threshold is lowered from 5 to 4; the resulting VC is then shrunk into a \emph{convex} shape circumscribed by a smoothed version of the original fractal VC.
This brings the optical distance from the solar source and all 9 MISR cameras to the adaptive VC boundary back to $\approx$5.
The procedure is illustrated in Figs.~\ref{fig:adaptive_VC}a--c.
Figure~\ref{fig:adaptive_VC}d shows that the radiance field differences between the original cloud and the simplified version in Fig.~\ref{fig:adaptive_VC}a are inside the 5\% tolerance.
Imposition of a convex shape onto the VC is rationalized further on, based on computational considerations.
By the same token, we see that the yellow version of the VC, along with some of the regularly-shaped VCs used by \cite{Forster_etal21}, reaches the very bottom of the cloud, as follows from its definition based on the solar source and overhead sensors.
The final orange version, however, does not touch cloud base.
That is also a desirable property of the VC for physical and computational reasons since, there too, the RT will be influenced by the presence of the lower boundary.
3D RT codes need to account correctly for the gradual transition from multiple scattering to pure streaming, even if the light is diffuse rather than collimated.
However, there are neither strongly-directional sources nor well-collimated sensors that see the cloud base in the present context of passive space-based cloud remote sensing.
Consequently, the required optical thickness of the OS between cloud base and VC will probably not need to be 5-to-6, but some lower number to be determined in a future study.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.14in]{adaptive_VC.jpg}
\caption{
(a) The Koch \emph{fractal} cloud by \cite{Forster_etal21} in Fig.~\ref{fig:Koch_in+out_VC}, but with optical thickness 20 (down from 40) along the central column, and its \emph{adaptive} VC identified as the \emph{regular} internal region; the uniform extinction value therein is the mean taken inside the VC.
(b) Genesis of the adaptive VC, from left to right:
a VC mask in yellow, based on the threshold optical depth of 4 (down from the usual 5) and, in orange, a convex set circumscribed by the yellow VC mask after a couple of ``erosion/dilation'' operations \citep[e.g.,][]{JackwayDeriche96} that boost the \emph{mean} optical depth of the VC back to 5-or-so;
internal turbulence field inside the fractal yellow VC;
the same, but for the convex orange VC.
(c) Same internal turbulence field inside the convex orange VC, with mean from panel (a) removed.
(d) MYSTIC-based high-precision estimate of radiance differences between original Koch cloud and its counterpart with turbulent extinction field replaced by its mean value inside the convex orange VC; phase function for $r_{\rm e} = 10~\mu$m, SZA = 0$^\circ$.
}
\label{fig:adaptive_VC}
\end{center}
\end{figure}
\subsection{Angular distribution of radiance in the VC and associated RW in its 3D volume}
\label{sec:DA_from_RT}
Looking back at Fig.~\ref{fig:PF_convolve}, we clearly see that the sunlight that has filtered through the OS, i.e., predominantly with $N \gtrsim n_\text{smear}$ ($\approx$6.5 for $g$ = 0.86), and is impinging on the VC has been highly smoothed angularly after multiple scatterings.
This radiance distribution in direction space can be represented by ever more truncated Legendre series.
For single scattering (gray curve), we summed 627 terms while, after the 10$^\text{th}$ scattering (red curve), the iterated phase function is already approximated reasonably well with the first two terms.
Formally, we are stating that $p(\theta)^{(*10)} \approx 1+3g_{10}\cos\theta$, where $g_{10} = g^{10} = 0.22$ and the superscript ``$(*10)$'' means ``10 times self-convolved.''
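The rapid collapse of the Legendre series under iterated self-convolution follows from the spherical convolution theorem: each normalized Legendre coefficient is simply raised to the $N$-th power. A sketch of the bookkeeping, assuming a Henyey-Greenstein stand-in PF (whose coefficients are exactly $(2n+1)g^n$) rather than the Mie PF of Fig.~\ref{fig:PF_convolve}:

```python
g, N = 0.86, 10
gN = g**N                      # asymmetry factor of the N-times-convolved PF
print(round(gN, 2))            # -> 0.22, the g_10 quoted in the text

# Legendre coefficients (2n+1) * gN**n of the 10-times self-convolved PF:
coeffs = [(2*n + 1) * gN**n for n in range(6)]
# Terms beyond n = 1 decay geometrically, so 1 + 3*gN*cos(theta) dominates
print([round(c, 3) for c in coeffs])
```

After 10 scatterings the $n = 2$ coefficient is already below 0.25 and subsequent terms shrink geometrically, which is why the two-term (diffusion) representation becomes adequate.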
This suggests that (optically) far enough from collimated light sources and other regions where we anticipate the light will be streaming ballistically (near boundaries), we can approximate the radiance field $I(\mathbf{x},\boldsymbol \Omega)$ with just two spherical harmonics:
\begin{equation}
\label{eq:I_diffusion}
I(\mathbf{x},\boldsymbol \Omega) \approx \frac{1}{4\pi} \left[ J(\mathbf{x}) + 3\boldsymbol \Omega\cdot\bF(\mathbf{x}) \right],
\end{equation}
where $\mathbf{x}$ is position in the optical medium M, $\boldsymbol \Omega$ is direction of propagation, and
\begin{eqnarray}
\label{eq:J_scalarF}
J(\mathbf{x}) &=& \int_{4\pi} I(\mathbf{x},\boldsymbol \Omega) \, {\rm d}\boldsymbol \Omega \\
\label{eq:F_vectorF}
\bF(\mathbf{x}) &=& \int_{4\pi} \boldsymbol \Omega \, I(\mathbf{x},\boldsymbol \Omega) \, {\rm d}\boldsymbol \Omega
\end{eqnarray}
\end{eqnarray}
are respectively the scalar and vector radiative flux fields.
Similarly, we use a 2-term harmonic expansion of the PF:
\begin{equation}
\label{eq:p_diffusion}
p(\boldsymbol \Omega\cdot\boldsymbol \Omega^\prime) \approx \frac{1}{4\pi} \left[ 1 + 3g \boldsymbol \Omega\cdot\boldsymbol \Omega^\prime \right].
\end{equation}
These natural assumptions have far-reaching ramifications for 3D RT in clouds that are sufficiently opaque to justify them.
The general steady-state 3D RT equation in a uniform but arbitrarily shaped scattering medium M is \citep{Chandra1950,mishchenko2002vector}
\begin{equation}
\label{eq:3D_RTE}
\left[ \boldsymbol \Omega\cdot\boldsymbol \nabla + \sigma_{\rm e} \right] I(\mathbf{x},\boldsymbol \Omega) =
\sigma_{\rm s} \int_{4\pi} p(\boldsymbol \Omega\cdot\boldsymbol \Omega^\prime) I(\mathbf{x},\boldsymbol \Omega^\prime) \frac{{\rm d}\boldsymbol \Omega^\prime}{4\pi},
\end{equation}
where $\sigma_{\rm e}$ and $\sigma_{\rm s} \, (\le \sigma_{\rm e})$ are respectively extinction and scattering coefficients.
This integro-differential RT equation (RTE) balances sinks (left-hand side, advection and extinction) and sources (right-hand side, in-scattering) of light in a small volume around $\mathbf{x}$ in 3D space propagating into a small cone centered on direction $\boldsymbol \Omega$ on the 2D unit sphere.
For the present application, we assume that all light sources are isotropic and confined to the boundary of M, denoted ``$\partial$M:''
\begin{equation}
\label{eq:iso_BCs}
I(\mathbf{x},\boldsymbol \Omega) = f(\mathbf{x}) |\boldsymbol \Omega\cdot\bn(\mathbf{x})|/\pi,\text{ for }\mathbf{x}\in\partial{\rm M}\text{ and }\boldsymbol \Omega\cdot\bn(\mathbf{x})<0
\end{equation}
where $\bn(\mathbf{x})$ is the outgoing normal of $\partial$M at point $\mathbf{x}$, and $f(\mathbf{x})$ is the given incoming irradiance.
Taking the integrals $\int_{4\pi}[\cdots]{\rm d}\boldsymbol \Omega$ and $\int_{4\pi}\boldsymbol \Omega[\cdots]{\rm d}\boldsymbol \Omega$ of the RTE in (\ref{eq:3D_RTE}), using (\ref{eq:I_diffusion})--(\ref{eq:p_diffusion}) and the PF normalization, we obtain coupled 1$^\text{st}$-order partial differential equations (PDEs):\footnote{
There are other paths from the RT equation in (\ref{eq:3D_RTE}), potentially with spatially variable optical properties $(\sigma_{\rm e},\sigma_{\rm s})$ and $p(\cdot)$, to the diffusion model in (\ref{eq:conservation})--(\ref{eq:constitutive}).
The most illuminating one is probably via asymptotic analysis of the RT equation in the limit of scaling $(\sigma_{\rm e}/\epsilon,\sigma_{\rm s}/\epsilon)$ when $\epsilon \to 0$ \citep{Larsen1980,Pomraning1989}.
}
\begin{eqnarray}
\label{eq:conservation}
\boldsymbol \nabla\cdot\bF(\mathbf{x}) &=& -\sigma_{\rm a} J(\mathbf{x}) \\
\label{eq:constitutive}
\bF(\mathbf{x}) &=& -\frac{\ell_{\rm t}}{3} \boldsymbol \nabla J(\mathbf{x})
\end{eqnarray}
where $\sigma_{\rm a} = \sigma_{\rm e}-\sigma_{\rm s} \in [0,\sigma_{\rm e}]$ is the absorption coefficient, and $\ell_{\rm t}$ is the \emph{transport} MFP (tMFP) introduced in App.~A using RWs on the 2D sphere in the absence of absorption ($\sigma_{\rm a} = 0$).
Here, we are including the potential for absorption, and find
\begin{equation}
\label{eq:MFP_t}
\ell_{\rm t} = \frac{1}{\sigma_{\rm t}} = \frac{1/\sigma_{\rm e}}{1-\omega g} = \frac{\ell}{1-\omega g},
\end{equation}
where $\sigma_{\rm t} = (1-\omega g)\sigma_{\rm e}$ is the \emph{transport} or ``scaled'' extinction coefficient, and $\omega = \sigma_{\rm s}/\sigma_{\rm e} \in [0,1]$ is the single scattering albedo (SSA).
In (\ref{eq:conservation}), we have the \emph{exact} expression of radiant energy conservation.
In (\ref{eq:constitutive}), we have the constitutive law (a.k.a. Fick's law) that defines the radiative diffusion \emph{approximation}.
Thus, in (\ref{eq:constitutive}), we identify $D = \ell_{\rm t}/3$ as the (steady-state) diffusivity coefficient.
Finally, based on (\ref{eq:iso_BCs}), the diffusion transport model in (\ref{eq:conservation})--(\ref{eq:constitutive}) is completed with the BC
\begin{equation}
\label{eq:BCs_DA}
\frac{1}{4}\left[ J(\mathbf{x}) - 3\chi\bn(\mathbf{x})\cdot\bF(\mathbf{x}) \right] = f(\mathbf{x}),
\end{equation}
where $\chi$ is the so-called ``extrapolation length'' factor.
The common (Marshak) assumption\footnote{
We assume for simplicity a purely absorbing BC in (\ref{eq:iso_BCs}) and (\ref{eq:BCs_DA}), i.e., $f(\mathbf{x}) = 0$.
Without loss of generality, we can take $\bn$ along the $z$-axis at any given boundary point.
Then the incoming/outgoing radiant energy fluxes at said boundary point are, based on (\ref{eq:I_diffusion}), $J/4 \mp F/2$, where $F = \|\bF\|$.
Thus, in the case of no incoming flux, $J/F = 2$.
Substituting Fick's law (\ref{eq:constitutive}), we find $J/|\nabla J| = (2/3)\ell_{\rm t}$.
By identifying with the definition of the extrapolation length factor $\chi$ at a boundary point, $J/|\nabla J| = \chi\ell_{\rm t}$, we find $\chi$ = 2/3.
In practice, this tells us the distance from the boundary at which $J$ would vanish if it followed a linear trend.
This, in turn, tells how much extra volume we have to add to the medium M if we wanted to replace the exact Robin BCs in (\ref{eq:BCs_DA}) by Dirichlet counterparts, which is sometimes a convenient approximation.
Other arguments lead to other values for $\chi$; see \cite{CaseZweifel1967}.
}
is $\chi = 2/3$.
Equation (\ref{eq:BCs_DA}) expresses a 3$^\text{rd}$-type (a.k.a. Robin) BC that combines both density $J(\mathbf{x})$ and current $\bn(\mathbf{x})\cdot\bF(\mathbf{x})$ at the domain's boundary ($\mathbf{x}\in\partial$M).
The (charge) ``density'' and (electrical) ``current'' terminology respectively for $J$ and for $\bF$ is widely used but clearly borrowed from electrodynamics where (\ref{eq:conservation}) expresses conservation of charge, and (\ref{eq:constitutive}) is Ohm's law.
Although we are advocating for particle diffusion and RWs, transport of charge through a conductor is a valid analog of light transport in the VC where the gradient in illumination between the sun-exposed and self-shading sides replaces the imposed drop in potential.
An even better analog is the transport of a liquid through a porous medium, where (\ref{eq:conservation}) expresses conservation of mass, and (\ref{eq:constitutive}) is Darcy's law, with pressure head playing the role of the illumination differential across the VC.
A good example in geophysics is ground water flow in the vadose zone \citep[e.g.,][]{groundwater}.
Substituting (\ref{eq:constitutive}) into (\ref{eq:conservation}), we get the standard 2$^\text{nd}$-order elliptic PDE of the modified Helmholtz type:
\begin{equation}
\label{eq:Helmholtz_PDE}
\left[ \boldsymbol \nabla^2 - \frac{3\sigma_{\rm a}}{\ell_{\rm t}} \right] J = 0,
\end{equation}
\end{equation}
subject to BCs $\left. \left[ 1 + \chi\ell_{\rm t}\bn(\mathbf{x})\cdot\boldsymbol \nabla \right] J(\mathbf{x}) \right|_{\mathbf{x}\in\partial{\rm M}} = 4f(\mathbf{x})$, from (\ref{eq:BCs_DA}) and (\ref{eq:constitutive}).
In the absence of absorption ($\sigma_{\rm a} = 0$), that PDE reduces to the classic Laplace PDE, $\boldsymbol \nabla^2 J = 0$, which describes the steady-state 3D RW (a.k.a. Brownian motion) of particles from boundary sources (where $f(\mathbf{x}) > 0$) to boundary sinks (anywhere, but especially on the non-illuminated side of the VC).
The PDE in~(\ref{eq:Helmholtz_PDE}) defines an important length scale in radiation transport theory, namely, the diffusion length:
\begin{equation}
\label{eq:Diffusion_length}
L_{\rm d} = \sqrt{\frac{\ell_{\rm t}}{3\sigma_{\rm a}}} = \frac{\ell}{\left[ 3(1-\omega)(1-\omega g) \right]^{1/2}}.
\end{equation}
$L_{\rm d}$ is the characteristic scale of the exponential terms in the fundamental solution of (\ref{eq:Helmholtz_PDE}).
In essence, $L_{\rm d}$ is the distance from (in this case, boundary) sources at which light is severely extinguished by droplet absorption at every (now inelastic) scattering.
Table~\ref{tab:MODIS_SWIR_chans} lists the optical properties in (\ref{eq:Diffusion_length}) across 3 MODIS SWIR channels, along with the extinction coefficient for a nominal LWC of 0.1~g/m$^3$ (assuming the same PSD as in Fig.~\ref{fig:PF_convolve}); MODIS's red channel is also displayed for reference.
While neither the extinction (hence COT or MFP) nor $g$ varies much across the VNIR-SWIR spectral range, SSA $\omega$ does, in the sense that the single scattering co-albedo $(1-\omega)$ that appears in (\ref{eq:Diffusion_length}) varies by orders of magnitude.
Thus $L_{\rm d}$ varies significantly across the MODIS SWIR channels, while $\ell$ does not.
\begin{table}[htp]
\caption{Wavelength, MODIS channel \#, extinction coefficient (assuming LWC = 0.1~g/m$^3$), SSA, asymmetry factor, and diffusion length in MFPs from (\ref{eq:Diffusion_length}) for the cloud droplet PSD used in Fig.~\ref{fig:PF_convolve}. First column is color-coded to reflect choices in Fig.~\ref{fig:Diffusion_length_Koch}.}
\small
\begin{center}
\begin{tabular}{|c|c|cccr|}
\hline
\hline
$\lambda$ & channel & $\sigma_{\rm e}$ & $\omega$ & $g$ & $L_{\rm d}/\ell$ \\
[nm] & \# & [1/km] & [-] & [-] & [-] \\
\hline
\tb{645} & 1 & 15.82 & 0.999996 & 0.8610 & 774.3 \\
\tg{1240} & 5 & 16.14 & 0.998042 & 0.8499 & 33.5 \\
\ty{1640} & 6 & 16.51 & 0.991238 & 0.8456 & 15.3 \\
\tr{2130} & 7 & 16.70 & 0.971810 & 0.8418 & 8.1 \\
\hline
\end{tabular}
\end{center}
\normalsize
\label{tab:MODIS_SWIR_chans}
\end{table}
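As a quick consistency check, the last column of Table~\ref{tab:MODIS_SWIR_chans} can be reproduced directly from (\ref{eq:Diffusion_length}), using the SSA and asymmetry values copied from the table:

```python
import math

# Consistency check of the table's last column against Eq. (diffusion
# length): L_d / l = [3 (1 - omega) (1 - omega*g)]^(-1/2).
def Ld_over_mfp(omega, g):
    return 1.0 / math.sqrt(3.0 * (1.0 - omega) * (1.0 - omega * g))

# (omega, g) pairs copied from the table, keyed by wavelength in nm
channels = {
    645:  (0.999996, 0.8610),
    1240: (0.998042, 0.8499),
    1640: (0.991238, 0.8456),
    2130: (0.971810, 0.8418),
}
for lam, (omega, g) in channels.items():
    print(f"{lam} nm: L_d/l = {Ld_over_mfp(omega, g):.1f}")
```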
Figure~\ref{fig:Diffusion_length_Koch} illustrates $L_{\rm d}$ across MODIS SWIR wavelengths in the present context of VC and OS characterization, as a prelude to CT with MISR and MODIS.
We see that, while $L_{\rm d}$ at the essentially non-absorbing wavelength of 645~nm is off the charts, its magnitude quickly shrinks as wavelength increases into the SWIR region.
Specifically, at 1240~nm, $L_{\rm d}$ is seen in panel (a) to be commensurate with the overall size of the cloud, so we anticipate very little difference in the already low sensitivity to VC structure obtained at 645~nm; panel (b) confirms this prediction.
At 1640~nm, $L_{\rm d}$ is roughly halved from 1240~nm, and as a result the light is strongly absorbed in the OS and as it enters the VC; sensitivity to VC structure takes a major hit, as is seen in panel (b).
Finally, at 2130~nm, $L_{\rm d}$ is just a little more than the typical thickness of the OS (hence depth of the VC); sensitivity to a change in the VC is all but gone because the VC will be essentially dark due to the intense absorption in the outer layers.
The Koch cloud used here has a COT of 22.4 on average along the $x$ dimension.
Optically thinner clouds won't have much of a VC.
In optically thicker clouds, $L_{\rm d}$ values across the SWIR channels are reduced in proportion, while the VC increases in volume.
All the sensitivities to VC details are therefore further diminished.
\begin{figure}
\begin{center}
\includegraphics[width=3.14in]{Diffusion_length_Koch.jpg}
\end{center}
\caption{
{\bf (a)} Various length scales are shown graphically for the Koch fractal cloud in Fig.~\ref{fig:Koch_in+out_VC} with COT = 40 along the central column (denoted $\tau_\text{central}$), which is highlighted with a 4-km long black diamond-ended line.
For comparison, the horizontally-averaged COT is 22.4 and represented with the accordingly shorter white diamond-ended line.
Taking $\ell$ as the inverse of the mean extinction in (\ref{eq:Diffusion_length}), we use 4 lines to show the extents of $L_{\rm d}$ color-coded as in Table~\ref{tab:MODIS_SWIR_chans} and in panel (b).
{\bf (b)} Reproduced for context from \cite{Forster_etal21}, Fig.~9b, SZA = 0$^\circ$ and VZA = 9$^\circ$ (average for MODIS across the MISR swath).
It shows the along-track relative differences between radiances observed by MODIS for two Koch clouds that differ only in the detailed structure of the extinction field inside the grey rectangle in panel (a).
Differences are well within the noise-determined threshold for significance in CT at 5\%.
The grey rectangle (and variations thereof) is used extensively by \cite{Forster_etal21} as a proxy for the VC.
}
\label{fig:Diffusion_length_Koch}
\end{figure}
In summary, we can think of the VC as a region M inside the cloud where ``particles'' representing units of radiant energy (at a non-absorbing wavelength) wander randomly in the optical medium M from an entry point on $\partial$M to an escape point on $\partial$M.
From that standpoint, the VC can also be called the ``diffusion zone'' in an opaque 3D cloud where the light arrives after enough scatterings to smooth radiance directionality down to an isotropic term and a dipole term in (\ref{eq:I_diffusion}), and then scatters many more times before emerging.
The RW in the VC is characterized by effectively isotropic scatterings and steps of mean length $\ell_{\rm t}$, which is $(1-g)^{-1}$ times longer than the usual MFP $\ell = 1/\sigma_{\rm e}$, which in turn is the mean of the exponential distribution of steps (a.k.a. free paths) associated with Beer's law of extinction.
Consequently, each \emph{effective} isotropic scattering counts, on average, for $(1-g)^{-1}$ scatterings according to the forward-peaked PF.
Recall that, for $g$ = 0.86 (from Fig.~\ref{fig:PF_convolve}), we have $(1-g)^{-1}$ = 7.14 as scaling factor for the RW, in both the size of the steps across 3D space and the (discrete) time it takes.
In the presence of absorption by cloud particles, the SSA $\omega$ multiplies $g$ in the above forward-peaked PF considerations: $(1-g)^{-1}$ becomes $(1-\omega g)^{-1}$, which changes little since $\omega$ is only slightly less than unity in the SWIR region.
However, in the full-blown diffusion regime the effect of absorption is highly amplified in the highly scattered radiation.
Light is quickly extinguished at distances from sources (in our case, the boundary of the VC that faces the Sun) that exceed the ``diffusion length'' $L_{\rm d}$, which scales with $(1-\omega)^{-1/2}$ in (\ref{eq:Diffusion_length}).
\subsection{Control of cloud-scale contrast in observable radiances by VC}
To understand the primary imprint of the VC on cloud images, we use in App.~B a tutorial cloud model that is \emph{all} VC.
In other words, the transport regime is diffusive in the entire cloud, a reasonable approximation for clouds with very large ($\gg$5) optical thickness in all three dimensions.
To explore remote sensing of broken clouds from potentially very oblique view angles, \cite{davis2002} sought a 3D cloud geometry that is not plane-parallel yet remains analytically tractable, at least in the diffusion limit, which he found in the perfect sphere.
The author adopted the natural definitions of reflectivity $R$ and transmissivity $T$ as outgoing fluxes integrated respectively over the illuminated and shaded hemispheres, as a special case of the general formulation by \cite{DavisKnyazikhin2005}, and normalized by the solar flux through the area of the cloud projected perpendicular to the incoming beam.
\cite{davis2002} showed that, in the absence of absorption,
\begin{equation}
\label{eq:RoT_ratio}
\frac{R}{T} = \frac{(1-g)\tau}{2\chi},
\end{equation}
where $\tau$ is the optical diameter of the spherical cloud.
This diffusion-based prediction was validated with Monte Carlo simulations of RT in spherical media with varying $\tau$.
Indeed, as $\tau_{\rm t} = (1-g)\tau$ reaches and exceeds unity, the predicted linear increase of $R/T$ with $\tau$ proves accurate when $\chi$ is set to 2/3.
The \cite{HenyeyGreenstein41} PF was adopted, with $g = 0.85$.
The uniform collimated solar beam impinges on the sunny side of the cloud boundary at an angle varying between 0 and $\pi$/2 depending on the angular distance $\vartheta$ from the sub-solar point, as viewed from the center of the sphere.
Two source models were then used: the light either continues into the cloud without change in direction, or it is sent into the cloud at a random direction, as in (\ref{eq:iso_BCs}) with $f(\mathbf x) \propto \cos\vartheta(\mathbf x)$.
As $\tau$ increases, the diffusion-theoretical prediction for $R/T$ in (\ref{eq:RoT_ratio}) is approached at similar rates for both boundary sources, but from opposite directions.
\cite{davis2002} notes that, if the cloud is an infinite plane-parallel slab, then the $R/T$ contrast ratio is again as in (\ref{eq:RoT_ratio}) where $\tau$ is then the optical thickness of the cloud.
Spheres and slabs are, in essence, ellipsoids with, respectively, three identical finite semi-axes, and one finite and two infinite semi-axes.
This leads to the prediction that cylinders (ellipsoids with two identical finite semi-axes and one infinite one) should have the same $R/T$ contrast ratio.
This prediction is confirmed in App.~B using, for simplicity, circular media in 2D RT and 2D diffusion theory.
Therein, 2D Monte Carlo simulations are used to verify that the accuracy of the associated 2D diffusion-theoretical prediction becomes reasonable when $(1-g)\tau$ becomes {\it O}(1) and continues to improve as $\tau$ continues to increase (cf. Fig.~B2).
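For a feel of the numbers, a short script (illustrative only) evaluates (\ref{eq:RoT_ratio}) with $\chi = 2/3$ and the Henyey-Greenstein $g = 0.85$ of the spherical-cloud study; with no absorption, $R + T = 1$ recovers $R$ and $T$ separately:

```python
# Illustrative evaluation of Eq. (RoT_ratio): R/T = (1-g)*tau / (2*chi),
# with chi = 2/3 and g = 0.85; R + T = 1 (conservative scattering) then
# yields R and T individually.
def contrast_ratio(tau, g=0.85, chi=2.0 / 3.0):
    return (1.0 - g) * tau / (2.0 * chi)

for tau in (10, 40, 160):
    RoT = contrast_ratio(tau)
    T = 1.0 / (1.0 + RoT)
    print(f"tau = {tau:3d}: R/T = {RoT:5.2f}, R = {1.0 - T:.2f}, T = {T:.2f}")
```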
How does the sunny-to-shady-side contrast ratio in (\ref{eq:RoT_ratio}) fare with more realistic clouds in terms of both outer and inner structure?
Figure~\ref{fig:Koch_R_vs_T} uses the 2D Koch fractal cloud used by \cite{Forster_etal21} with SZA = 60$^\circ$, coming from the West, and VZA = $\pm 60^\circ$, i.e., the equivalents of MISR's Cf and Ca cameras.
In this configuration, the Cf camera sees only the shaded side of the cloud, and the cloud image is registered to ground pixels West of the cloud.
By the same token, the Ca camera sees all of the illuminated side of the cloud in exact retro-scattering geometry, and the cloud image is registered to ground pixels East of the cloud.
Although we are dealing with radiances rather than fluxes, we can still integrate them across the two cloud images in Fig.~\ref{fig:Koch_R_vs_T}a: Cf yields a proxy for $T$ while Ca yields one for $R$.
Figure~\ref{fig:Koch_R_vs_T}b displays the proxy $R/T$ ratio, and we can see that it is a monotonic function of $\tau$ (averaged horizontally) that is not far from a linear trend as soon as the cloud is optically thick enough to develop a VC ($\tau_\text{central} \gtrsim 10$).
$R/T$ is off-trend for the most opaque incarnation ($\tau_\text{central} = 320$), but that may be due to the Monte Carlo noise in the estimation of $T$.
The slope is shallower than for the diffusion theory predictions for circular or spherical scattering media.
However, there is no reason to expect them to agree since, apart from the fractal outer shape here, the theory is for angularly integrated radiance while the simulation yields a single sampled viewing direction each for the $R$ and $T$ proxies.
Moreover, the analytical predictions are for strictly uniform clouds.
We know that, for a fixed optical thickness, heterogeneous clouds like this one yield lower $R$ and higher $T$, hence lower $R/T$, as observed here.
\begin{figure}
\begin{center}
\includegraphics[width=3.14in]{RoT_Koch.jpg}
\end{center}
\caption{
{\bf (a)} The same 2D Koch fractal cloud model as in Fig.~\ref{fig:Koch_in+out_VC} is used with an opacity that is doubled 6 times, starting at $\tau_\text{central}$ (integrated extinction down the middle column) = 5.
To the right, $\tau_\text{central}$ is translated into $\tau_\text{avg}$ (horizontally averaged optical thickness), and its scaled value $(1-g)\tau_\text{avg}$, where $g = 0.86$.
Numbered image ``features'' will be discussed further on (\S\ref{sec:concl}).
{\bf (b)} Rough estimates of $R$ and $T$ are obtained by averaging the radiances (expressed in non-dimensional Bidirectional Reflection Function or ``BRF'' units) across the cloudy pixels, respectively in the Ca and Cf cameras.
Thus a proxy for the $R/T$ contrast ratio is computed and plotted against $(1-g)\tau_\text{avg}$ yielding a near-linear relation, as predicted for clouds with simple shapes (slabs, spheres, cylinders) that are all VC.
Predicted slopes for 3D and 2D diffusion models are indicated for reference, but not expected to apply directly to this $R/T$ metric, as explained in the main text.
}
\label{fig:Koch_R_vs_T}
\end{figure}
Radiances from real 3D clouds with complex outer geometries cannot be so easily partitioned into ``$R$'' and ``$T$'' pixels.
Looking ahead, such clouds observed by present and future space-based multi-angle sensors will have \emph{effective} ``$R$'' and ``$T$'' values obtained from the appropriate image segmentation algorithms tuned to separate sunny and shady cloud boundaries at the sensor's pixel scale.
From there, an effective cloud-scale $R/T$ ratio can be formed.
Realistic LES-generated clouds can be used to train the segmentation algorithm and calibrate the monotonic relation between observed $R/T$ and the mean $\tau$.
From the CT perspective, this yields an initial estimate of mean cloud extinction since the cloud's volume will have been constrained, for instance, by ``space-carving'' (i.e., the volumetric intersection of the 2D cloud masks across all observing angles, cf. \cite{Lee_etal18}).
\subsection{Lateral radiation transport along the OS/VC interface}
We have established that a surplus of \emph{diffuse} light impinges on the part of the VC boundary that is exposed to the Sun, and that there is a deficit on the opposite side of the VC facing the self-shaded part of the cloud.
This cloud-scale gradient in radiant energy density drives an overall flow through the cloud's VC and, for that matter, the whole cloud.
That said, for the purposes of physics-informed CT algorithm design, we need to characterize the lateral transport of radiant energy along the boundary of the VC since that will define the spatial smoothing of the radiance field by the VC itself.
To answer this question quantitatively, we will mine the literature on cloud PSFs in the diffusion regime.
Interestingly, there is a bifurcation between research interested in reflected and transmitted radiation due to the different remote sensing technologies of interest.
On the reflection/back-scattering side of the cloud, the technology driver was MUltiple-Scattering Cloud Lidar (MUSCL), an active cloud probing methodology that was being explored in the early 2000s.
Before that, multiple scattering lidar experiments \citep{Flesia1995MUSCLE} were primarily about capturing the effects of one or more forward-scattering events not accounted for in the standard lidar equation but present in lidar returns due to the finite field-of-view (FOV) of the receiver.
This gradual deviation from direct propagation is not unlike our OS concern in App.~A and \S\ref{sec:OS_processes}.
Breaking away from that trend, Davis et al. (\citeyear{davis1999,davies1999}) developed the diffusion-theoretical framework for predicting MUSCL signals and their information content in terms of cloud properties.
A quantity of particular interest to the present investigation is the RMS radius of the reflected Green function, equivalently, the RMS distance between entry and escape positions of the RWing ``particle'' carrying units of radiant energy in the diffusion model, which is tantamount to an estimation of the cloud PSF half-width.
Let $\rho_{\text{rms},R}$ be the RMS entry-to-escape distance along the illuminated boundary.
For sufficiently opaque media, diffusion theory predicts that, in the absence of absorption, we have:
\begin{equation}
\label{eq:RMSrho_R}
\rho_{\text{rms},R} \propto \sqrt{\ell_{\rm t} H} = H/\sqrt{\tau_{\rm t}},
\end{equation}
where $H$ is the physical thickness of the medium.
The exact prefactor and pre-asymptotic corrections in $\ell_{\rm t}/H = 1/\tau_{\rm t}$ for (\ref{eq:RMSrho_R}) were derived in closed-form for plane-parallel slabs, and these expressions were validated using Monte Carlo simulation.
Here, in arbitrary VC geometry, we only expect the first-order scaling expressed in (\ref{eq:RMSrho_R}) to apply, with $H$ interpreted as a measure of VC size.
The easiest derivation of (\ref{eq:RMSrho_R}) builds on discrete-time RW theory.
We know that in boundless 3D space the variance of the distance from the origin, $\langle \mathbf r_n^2 \rangle$, scales linearly with the number $n$ of steps taken and with the variance of each step.
Now, the latter is in essence the MFP squared, in this case, equated to $\ell_{\rm t}^2$.
The same scaling applies to all three coordinates individually.
Thus, in a finite space of size $H$, such as the VC, the characteristic number of steps needed to cross that distance is $n_H \sim (H/\ell_{\rm t})^2$.
To model reflection by the medium, we need to visualize RW in a 3D half-space (rather than fully unbounded 3D space), say $z > 0$.
We then invoke the ``law of first passage'' that gives the probability that a RW that starts in the positive $z$ direction will return to the $z = 0$ plane after exactly $n \ge 1$ steps, which goes as $p_n \propto n^{-3/2}$ \citep{Redner2001}.
Interestingly, the mean value of $n$ is infinite, in view of the very long tail of $p_n$.
However, in finite spaces, the law is truncated at the above characteristic value of $n_H$.
Therefore, we can compute the mean value of $n$ in reflected light as $\langle n \rangle_R \approx (\sum_1^{n_H} n p_n)/(\sum_1^{n_H} p_n) \sim n_H^{1/2}$, as long as $n_H \gg 1$ (denominator approaches unity).
From there, we can estimate $\rho_{\text{rms},R}^2 \sim \ell_{\rm t}^2 \langle n \rangle_R \sim \ell_{\rm t}^2 n_H^{1/2} \sim \ell_{\rm t} H$, hence (\ref{eq:RMSrho_R}).
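The truncated first-passage argument is easy to verify numerically; the following sketch checks that, with $p_n \propto n^{-3/2}$ cut off at $n_H$, the mean return time $\langle n \rangle_R$ indeed grows like $n_H^{1/2}$ (quadrupling $n_H$ should double $\langle n \rangle_R$):

```python
# Deterministic check of the truncated first-passage argument: with
# p_n ∝ n^{-3/2} cut off at n_H, the mean number of steps before escape,
# <n>_R = (sum_1^{n_H} n*p_n) / (sum_1^{n_H} p_n), grows like sqrt(n_H).
def mean_n_reflected(n_H):
    num = sum(n ** -0.5 for n in range(1, n_H + 1))  # n * n^{-3/2}
    den = sum(n ** -1.5 for n in range(1, n_H + 1))
    return num / den

ratio = mean_n_reflected(40000) / mean_n_reflected(10000)
print(ratio)  # ~2: quadrupling n_H doubles <n>_R, hence rho_rms,R ~ sqrt(l_t*H)
```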
The scaling relation in (\ref{eq:RMSrho_R}) was derived with active remote sensing in mind, i.e., a laser beam actually exciting a diffuse reflected radiance field with that RMS radius.\footnote{
This experiment was successfully implemented with real-world stratiform clouds from below \citep{polonsky2005wide} and from above \citep{cahalan2005thor} with, moreover, consideration of laser pulse stretching (i.e., temporal smoothing by the multiple scattering before reflection).}
Equation (\ref{eq:RMSrho_R}) can however be interpreted in passive remote sensing as a cutoff scale in a low-pass spatial filter or smoothing kernel for light reflected off the VC.
In fact, the expression was first used to interpret quantitatively the phenomenon of radiative smoothing observed as a break in the scaling of wavenumber spectra of LANDSAT images of marine stratocumulus \citep{Davis_etal1997} and reproduced in early Monte Carlo simulations \citep{Marshak_etal1995} where the internal cloud structure was modeled using a turbulence-like stochastic model \citep{Marshak_etal94}.
Due to the spatial heterogeneity in the OS, the light impinging upon the VC is structured, a priori across a wide range of scales.
A fraction $R$ of all that light is reflected (``DC'' component).
Furthermore, it will be reflected back into the OS spatially smoothed: only structures larger than $\rho_{\text{rms},R}$ in (\ref{eq:RMSrho_R}) will be present (``AC'' component).
We can now revisit an aspect of Fig.~\ref{fig:Koch_R_vs_T}a that was overlooked.
The experiment therein was to increase and decrease the overall opacity of the tutorial 2D Koch cloud model concocted by \cite{Forster_etal21} (e-supplement) and learn from the optically thin-to-thick sequence.
Solar and viewing directions were carefully chosen to show entirely reflected and entirely transmitted radiance fields, and thus support an investigation of \emph{integrated} cloud reflection and transmission inspired by a diffusion-theoretical prediction for how the ratio of those cloud-scale quantities varies with horizontally-averaged COT.
Mean COT was thus boosted by factors of 2 from 2.8 to almost 180.
The interesting smaller-scale phenomenon worthy of belated consideration is that there are five distinct features in the image of the sunlit side of the cloud (in this case, MISR's Ca camera).
Individually numbered in Fig.~\ref{fig:Koch_R_vs_T}a, they are mapped to 1$^\text{st}$- and 2$^\text{nd}$-generation growths in the construction of the Koch fractal in Fig.~\ref{fig:Koch_in+out_VC}a.
The strength of these image features clearly increases with cloud opacity.
This is a clear illustration of how radiative smoothing works.
Extending the application of the radiative smoothing scale for reflected light in (\ref{eq:RMSrho_R}) from the VC to the whole cloud, we can take $H$ = 4~km as the cloud's size.
Dividing $H$ by the square root of $\tau_{\rm t} = (1-g)\tau_\text{avg}$ in Fig.~\ref{fig:Koch_R_vs_T} yields estimates of $\rho_{\text{rms},R}$.
They start at values $\gtrsim H$ for the optically thinnest cases, hence all structures are smoothed.
In sharp contrast, $\rho_{\text{rms},R} \lesssim 1$~km for the most opaque incarnations, hence the resolution of roughly $H/\rho_{\text{rms},R} \gtrsim 4$ equally-spaced structures along the sunlit side of the Koch cloud, which projects onto $\approx$36 MISR pixels in the C cameras used here.
If the cloud's opacity were further boosted, ever finer details would become resolvable, assuming the camera's pixel size remains somewhat smaller than the finest cloud detail.
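These estimates follow from a one-line evaluation of (\ref{eq:RMSrho_R}); here is the sketch for $H = 4$~km, $g = 0.86$, and the doubling sequence of horizontally averaged COTs:

```python
import math

# Rough numbers behind the discussion above: the radiative smoothing scale
# rho_rms,R ~ H / sqrt((1-g)*tau_avg), for H = 4 km, g = 0.86, and the
# doubling sequence of mean COTs from 2.8 to ~180.
H, g = 4.0, 0.86
for k in range(7):
    tau_avg = 2.8 * 2 ** k
    rho = H / math.sqrt((1.0 - g) * tau_avg)
    print(f"tau_avg = {tau_avg:6.1f}  ->  rho_rms,R ~ {rho:.2f} km")
```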
Similar questions about radiative smoothing have been asked about light transmitted through optically thick clouds.
\cite{davis2002space} showed that
\begin{equation}
\label{eq:RMSrho_T}
\rho_{\text{rms},T} \propto H,
\end{equation}
irrespective of $\ell_{\rm t}$ (hence of $\tau_{\rm t}$).
Again, this is only the first-order scaling, and all that is of interest in the present context.
The authors derived the exact prefactor and pre-asymptotic corrections in $\ell_{\rm t}/H = 1/\tau_{\rm t}$ for (\ref{eq:RMSrho_T}) in closed-form for plane-parallel slabs, and performed extensive Monte Carlo simulations for numerical validation.
\cite{VonSavigny2002time} further validated observationally the theoretical prediction of \cite{davis2002space} by analyzing across a wide range of scales zenith radiance transmitted to the ground through an extended stratus layer as it was advected across a vertically-pointing narrow-FOV sensor.
The stratus cloud's internal structure is known to have a power-law scaling power spectrum in $k^{-5/3}$ from several km down to scales of a few meters \citep[e.g.,][]{davis1999horizontal}, much less than cloud thickness $H$, which is $\sim$100s of meters.
However, in the observed zenith radiance time-series, once converted into a spatial field using Taylor's frozen turbulence hypothesis, structures less than $\sim$$H$ in scale had been smoothed out.
What does (\ref{eq:RMSrho_T}) tell us about radiation transport across the VC?
Again, a fraction $T = 1-R$ of the sunlight impinging on the VC is transmitted.
We note that, since it was argued that (\ref{eq:RoT_ratio}) applies to the VC, we have $R/T = (H/\ell_{\rm t})/2\chi$ in the present notation; we can thus solve for $T = 1/[1+(H/\ell_{\rm t})/2\chi]$, and $R = 1-T$ follows.
So much for the ``DC'' component.
As for the ``AC'' component, it will be smoothed in transmission across a length scale $\rho_{\text{rms},T}$, which is commensurate with $H$, the outer size of the VC.
That is to say that there is really only a DC component: the light transmitted through the VC is spread evenly over the non-illuminated side.
Extending the application of (\ref{eq:RMSrho_T}) from the VC to the whole cloud, we can again look back at Fig.~\ref{fig:Koch_R_vs_T}a, this time focusing on the Cf camera image.
In this case, the radiance field is dominated by transmitted light for all but the leftmost pixels that are associated with the very top of the cloud where the OS is seen directly by both the sun and the camera (cf. Fig.~\ref{fig:Koch_R_vs_T}).
That cloud-top spike in radiance (present at all COTs) is akin to seeing from above a cloud's ``silver lining,'' a forward-scattering phenomenon familiar to any ground observer (scattering angle here is 60$^\circ$).
Otherwise, the image is quite bland at all COTs, in accordance with (\ref{eq:RMSrho_T}) as a cutoff for distinguishable features.
\subsection{Spatial variability in the VC that may matter for CT}
So far, we have treated the VC as if it was a spatially uniform region inside the 3D cloud of interest.
In real clouds it is of course far from that.
Are we at risk of biasing future MISR+MODIS CT outcomes if we enforce VC spatial uniformity in the forward model, or as a regularization?
At this stage, we need to further mine the literature for analogous situations and, again, MUSCL comes to mind.
\cite{davis2008multiple} addresses the impact of internal cloud structure on MUSCL spatial and temporal signals, with a concern about what happens to cloud property retrievals if it is neglected.
Being motivated by lidar, only \emph{reflected} light is investigated.
\cite{Davis_etal2009} survey systematically the application of spatial and/or temporal Green functions in cloud remote sensing, both passive and active, and they add new results about the impact of internal cloud structure for light \emph{transmitted} all the way through the cloud.
In both of these studies, the two kinds of internal variability in the 3D cloud extinction field addressed are: (1) cloud-scale vertical gradients, and (2) sub-cloud scale random turbulence.
As far as MUSCL (hence reflection) is concerned, the conclusion is that cloud-scale gradient effects dominate those of the smaller-scale internal turbulence.
\cite{Forster_etal21} state emphatically in connection with their Figs.~3 and 4 that simply replacing a surrogate VC (grey square region in Fig.~\ref{fig:Diffusion_length_Koch}a) by the average extinction therein leads to noticeably different cloud images, with up to 20\% changes in pixel-scale radiances.
They therefore proceed in their computational experimentation on the sensitivity of MISR data to internal VC structure by replacing the turbulent extinction field inside the surrogate VC with alternative realizations.
These alternatives must however comply with stochastic continuity conditions at the VC/OS interface.
That requires keeping the same cloud-scale vertical gradient across realizations.
Apart from being physically incorrect, not doing so leads to observable differences in the synthetic MISR imagery.
To quantify the impact of multi-scale random turbulence in the VC in the absence of a cloud-scale gradient, we performed Monte Carlo simulations using MYSTIC on a plane-parallel but internally turbulent cloud model.
We ask the defining question of the VC/OS:
{\it At what OS thickness does the detailed internal structure of the VC no longer affect the observed radiance field in a measurable way?}
Figure~\ref{fig:VC_turbulence_vs_mean} displays the outcome where, following \cite{Forster_etal21}, we define ``measurable difference'' as one that exceeds 5\% (to be confident that it is not just sensor noise fluctuations).
The pixel-scale Monte Carlo noise is kept at a much lower level.
The VC for a plane-parallel cloud is defined similarly as for the fractal cloud, as shown in Fig.~\ref{fig:VC_turbulence_vs_mean}a, where three selected thresholds are illustrated.
The ``reference'' cloud is 100\% turbulence, while in the OS/VC partitioned clouds the grey region is made uniform at the mean extinction level.\footnote{
We note that the adaptive VC defined here has, by construction, an extinction discontinuity almost everywhere at its boundary.
This contrasts with the surrogate VCs, such as the square in Fig.~\ref{fig:Diffusion_length_Koch}a, where continuity is required to bring image differences down to the sensor noise level.
We attribute this to the fact that not all parts of the square's perimeter are at optical distances in excess of 5 from all the sensors as well as the solar source.
}
Figure~\ref{fig:VC_turbulence_vs_mean}b shows the dispersion of the relative radiance differences across all MISR-scale pixels, and, unsurprisingly, the 5\% tolerance is crossed at $\tau_\text{thres} \approx 5$--6.
There is a systematic negative bias in the radiance differences in Fig.~\ref{fig:VC_turbulence_vs_mean}b, especially for the optically shallow definition of the VC.
That is traceable to the fact that heterogeneous optical media are always less reflective than homogeneous ones for a fixed average extinction in the Independent Pixel Approximation \citep[IPA, e.g.,][]{cahalan94}.
Moreover, in this case of azimuthally symmetric illumination, it was shown that cross-pixel transport deepens the so-called plane-parallel bias \citep{DavisMarshak2001}.
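The plane-parallel (IPA) bias is a direct consequence of Jensen's inequality, since slab reflectance is concave in COT. The sketch below is illustrative only: the two-stream formula $R = \tau_{\rm t}/(\tau_{\rm t} + 2\chi)$ is consistent with (\ref{eq:RoT_ratio}) for a conservative slab, and the lognormal COT distribution is an arbitrary choice.

```python
import random

# Illustration (not from the paper) of the plane-parallel albedo bias via
# Jensen's inequality. For a conservative slab, the two-stream reflectance
#   R(tau) = (1-g)*tau / ((1-g)*tau + 2*chi)
# is concave in tau, so the IPA average of R over a variable COT field
# falls below R evaluated at the mean COT.
def R_slab(tau, g=0.86, chi=2.0 / 3.0):
    t = (1.0 - g) * tau
    return t / (t + 2.0 * chi)

random.seed(0)
# arbitrary heterogeneous COT field: lognormal, mean ~ 28
taus = [random.lognormvariate(3.0, 0.8) for _ in range(100_000)]
mean_tau = sum(taus) / len(taus)
ipa_R = sum(R_slab(t) for t in taus) / len(taus)
print(ipa_R, R_slab(mean_tau))  # the IPA mean reflectance is the smaller one
```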
\begin{figure}
\begin{center}
\includegraphics[width=2.7in]{VC_turbulence_vs_mean.jpg}
\end{center}
\caption{
{\bf (a)} A plane-parallel cloud with a nominal COT of 20 is internally randomized with a single realization of a zero-mean 2D scale-invariant stochastic model known as ``fractional Brownian terrain;'' see \cite{Forster_etal21} (e-supplement) for details and references.
Assuming SZA = VZA = 0$^\circ$, the OS/VC interface is determined by setting its optical depth $\tau_\text{thres}$ at integer values from 1 to 10; $\tau_\text{thres}$ = 1, 5, 10 cases are illustrated here.
{\bf (b)} Relative differences in \% cumulated across pixels between the original cloud with 2D spatial variability everywhere and its counterpart where the extinction in the VC (defined as optical depth $>\tau_\text{thres}$) is replaced by its mean value.
We see that, to pass the $\pm 5\%$ bar rationalized by \cite{Forster_etal21}, we need to set $\tau_\text{thres}$ in the 5-to-6 range.
}
\label{fig:VC_turbulence_vs_mean}
\end{figure}
In summary, from the standpoint of CT with MISR and MODIS data, small-scale spatial variability of the 3D extinction field inside the VC is, by definition, in the noise.
However, some large-scale features demand some attention:
({\it i}) at a minimum, there will be a cloud state parameter for the mean extinction in the VC and in the inversion it will likely come out biased low because spatially variable media are always less reflective than uniform ones with the same mean, but there are methods for correcting this bias \citep[e.g.,][]{cahalan94,cairns2000};
({\it ii}) if there is reason to believe that there is a cloud-scale vertical gradient in the VC, for instance in the convective core of a vertically-developing cumulus, then another parameter should be assigned to capture it in the forward model.
\section{OS/VC interaction}
\label{sec:OS_VC_interaction}
We have shown so far that features in images of opaque convective clouds originate by-and-large in extinction field structures in the OS, while the overall brightness contrast between the sunny and shaded sides of the cloud is controlled by the VC.
We have estimated that roughly 1/3 of the incoming sunlight is reflected back by the OS, and never enters the VC.
Therefore, roughly 2/3 of the sunlight impinging on the cloud does enter the VC, of which a fair fraction is reflected by the VC back into the part of the OS facing the sun.
If the $R/T$ contrast ratio is $\sim$2, which is at the lower end of opaque clouds of interest here (cf. Fig.~\ref{fig:Koch_R_vs_T}), then the VC itself roughly reflects and transmits evenly.
More opaque clouds reflect more, implying that their VCs reflect more than transmit, which is typical of media dominated by diffusive radiation transport.
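The arithmetic behind the even-split statement can be spelled out; this is a back-of-envelope sketch that ignores multiple OS/VC exchanges, with the $\sim$1/3 OS reflection and $R/T \sim 2$ taken from the discussion above:

```python
# Back-of-envelope check (ignores multiple OS/VC exchanges): if the OS
# reflects ~1/3 of the sunlight and the whole-cloud contrast ratio is
# R/T ~ 2, then the VC must split the light it receives evenly.
R_OS = 1.0 / 3.0                   # reflected by the OS, never enters the VC
RoT_cloud = 2.0                    # whole-cloud R/T contrast ratio
T_cloud = 1.0 / (1.0 + RoT_cloud)  # R + T = 1  =>  T = 1/3
T_VC = T_cloud / (1.0 - R_OS)      # all transmitted light crossed the VC
print(T_VC)  # 0.5: the VC reflects and transmits evenly
```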
Here, we examine more closely how the three radiation transport regimes intertwine: streaming (outside the cloud), multiple scattering (in the OS), and diffusion (in the VC).
\subsection{Coupling 3D RT and its diffusion approximation (DA) at the OS/VC interface}
Figure~\ref{fig:hybrid_RT_3D} sets up the CF forward modeling problem in terms of computational physics.
The complete optical medium is M$_\text{RT}\cup$M$_\text{DA}$, union of an outer region M$_\text{RT}$ and an inner one M$_\text{DA}$, assumed to be convex.
M$_\text{RT}$ is further divided into a surrounding region of optical void where radiation just streams unobstructed, and the cloud's OS where extinction is non-vanishing, and where transport is therefore modeled by RT with multiple scattering.
M$_\text{DA}$ is the VC and transport therein is accurately modeled with the diffusion approximation (DA).
$\partial$M$_\text{RT}$ is the boundary surrounding the whole computational domain, while $\partial$M$_\text{DA}$ is the OS/VC interface.
The 3D RT problem to be solved in the cloud's OS and surrounding vacuum is defined by the coupled integral equations (US) for an ``upwind sweep'' and (SF) for the ``source \emph{function}'' definition.
That pair of integral equations determines everywhere the diffuse radiance field $I(\mathbf{x},\boldsymbol{\Omega})$.
The homogeneous (i.e., ``no incoming radiation'') BC for the 3D RT problem on $\partial$M$_\text{RT}$ is expressed in Eq.~(HBC), and it is echoed in (US) as the upper option in the 2$^\text{nd}$ line.
Equations~(BL) and (ST) are the required definitions respectively of the Beer's law-based propagation kernel and of the solar source \emph{term}.
The mechanics of (US) are illustrated with two instances of $(\mathbf{x},\boldsymbol{\Omega})$ in the upper-left corner.
Note that this key upwind sweep gets far more complicated if the VC is not a convex domain since some beams would then be exiting the VC, then re-entering and re-exiting it, once or more.
In the above, we have shown how an adaptive VC can be constrained to be convex; see Fig.~\ref{fig:adaptive_VC}.
The 3D DA model used inside the VC is defined by Eqs.~(PDE)/(\ref{eq:Helmholtz_PDE}) and (RBC)/(\ref{eq:BCs_DA}), the ``Robin BC,'' where we can set $\chi$ = 2/3.
Thus there are only two parameters in the DA problem to be solved: $k^2$ and $\ell_{\rm t}$. They are expressed in Fig.~\ref{fig:hybrid_RT_3D} in terms of cloud optical properties $(\omega,g)$, themselves dependent on wavelength $\lambda$ and PSD (where $g$ is, however, essentially invariant, cf. Table~\ref{tab:MODIS_SWIR_chans}), and one statistical parameter, $\langle\sigma_{\rm e}\rangle$, the VC-averaged extinction coefficient.
Of these quantities, only the latter is a new CT unknown, via $\ell_{\rm t}$ in (RBC) when using MISR data, since $k^2$ in (PDE) vanishes when $\omega$ = 1 at 670~nm.
However, when factoring in the MODIS SWIR wavelengths to gain sensitivity to cloud microphysics, there are three extra unknowns to describe the VC, namely, $\{\omega_{1240},\omega_{1640},\omega_{2130}\}$ via $k_\lambda^2$ in (PDE).
As for MODIS operational retrievals, each SSA translates to a potentially different effective radius $r_{\rm e}(\lambda)$ that is representative of an average over the diffusion length-scale $L_{\rm d} = 1/k_\lambda$ in (\ref{eq:Diffusion_length}), as illustrated in Fig.~\ref{fig:Diffusion_length_Koch}.
In a growing convective cloud, we would normally expect $r_{\rm e}(1240) < r_{\rm e}(1640) < r_{\rm e}(2130)$ since $r_{\rm e}$ grows with height above cloud base, and $L_{\rm d}$ decreases with increasing $\lambda$.
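The spectral ordering of probing depths can be illustrated with a short sketch that assumes the standard diffusion-theory scaling $k_\lambda^2 = 3(1-\omega)(1-\omega g)\langle\sigma_{\rm e}\rangle^2$ (our reading of the convention behind (\ref{eq:Diffusion_length})); the SSA and extinction values below are hypothetical placeholders, not the entries of Table~\ref{tab:MODIS_SWIR_chans}:

```python
from math import sqrt

def diffusion_length(omega, g, sigma_e):
    """L_d = 1/k with k^2 = 3*(1-omega)*(1-omega*g)*sigma_e**2 (standard
    diffusion-theory scaling; L_d diverges in the conservative limit)."""
    return 1.0 / (sqrt(3.0 * (1.0 - omega) * (1.0 - omega * g)) * sigma_e)

g = 0.86          # asymmetry factor, essentially wavelength-invariant
sigma_e = 30.0    # VC-averaged extinction [1/km], hypothetical value

# Hypothetical SSAs decreasing with wavelength (liquid water absorbs more):
omegas = {1240: 0.999, 1640: 0.99, 2130: 0.98}
L_d = {lam: diffusion_length(w, g, sigma_e) for lam, w in omegas.items()}
print(L_d)  # [km]; L_d shrinks with wavelength: deeper SWIR probes shallower
```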
Finally, in the lower-right corner, we formalize the radiative coupling between the VC/DA zone and the OS/RT zone at their interface $\partial$M$_\text{DA}$.
Specifically, RT$\rightarrow$DA shows how incoming irradiance for the VC is computed from outgoing radiance from the OS.
At the same boundary point, the two expressions in DA$\rightarrow$RT show how outgoing hemispherical flux from the VC becomes an isotropic boundary source for the OS.
To the best of our knowledge, no 3D RT solver has implemented this hybridization, and we believe it would be well worth doing.
In principle, it should greatly accelerate the solution of the forward model for multi-angle imaging signals with a tolerable loss of accuracy, commensurate with sensor noise.
Indeed, there are very efficient methods for solving the boundary-value PDE problem in (PDE)--(RBC), e.g., using sparse matrix inversion.\footnote{
We note here that, in numerical analysis, typically much effort (and CPU time) is spent on ensuring very high accuracy of the PDE solution, possibly verging on machine precision.
Such ultra-high numerical accuracy may be important for problems where the Helmholtz equation describes the physics exactly, e.g., in electrostatics or electrodynamics.
However, in the present context, it is patently an approximation of the true physics, which is RT.
Therefore a faster lower-precision solver is desirable.}
From the CT inverse problem standpoint, we also anticipate a major acceleration since all the values of extinction on the 3D grid inside the VC are replaced by just two unknowns (assuming, for simplicity, a uniform VC and prescribed microphysics).
Thus, as candidate clouds for CT-based reconstruction get bigger, the number of unknowns increases as the cloud's surface rather than its volume.
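Sparse inversion of the VC diffusion problem is easiest to demonstrate in a minimal 1D analogue of (PDE)--(RBC): steady, sourceless diffusion on a slab with Robin BCs of the form $\phi \mp \chi\ell_{\rm t}\phi' = {}$source at the two faces. The grid, normalization, and slab geometry are illustrative assumptions, not the paper's implementation; with $k = 0$ (conservative scattering) the tridiagonal solve recovers the linear diffusion profile exactly.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def solve_slab(H=1.0, l_t=0.1, k=0.0, chi=2.0/3.0, S_in=1.0, N=2000):
    """Solve -phi'' + k^2 phi = 0 on [0, H] with Robin BCs:
    phi - chi*l_t*phi' = S_in at z=0 (sunlit face), and
    phi + chi*l_t*phi' = 0 at z=H, via a sparse tridiagonal system."""
    h = H / N
    main = np.full(N + 1, 2.0 / h**2 + k**2)
    off = np.full(N, -1.0 / h**2)
    A = diags([off, main, off], [-1, 0, 1], format="lil")
    b = np.zeros(N + 1)
    # Robin BC rows (one-sided first differences):
    A[0, 0] = 1.0 + chi * l_t / h; A[0, 1] = -chi * l_t / h; b[0] = S_in
    A[N, N] = 1.0 + chi * l_t / h; A[N, N - 1] = -chi * l_t / h
    return spsolve(A.tocsr(), b)

phi = solve_slab()   # conservative case with tau_t = H/l_t = 10
print(phi[0], phi[-1])
```

For this conservative case the far-face value obeys $\phi(H)/S_{\rm in} = \chi\ell_{\rm t}/(H + 2\chi\ell_{\rm t})$, consistent with the diffusion-limit albedo $R_\text{dif} = 1/(1+2\chi/\tau_{\rm t})$ used below.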
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=6.5in]{diffusion+RT_math.pdf}
\end{center}
\caption{Formulation of the computational physics problem where a 3D RTE solver is implemented in the OS and surrounding optical void, and an efficient 3D elliptical PDE solver is implemented in the VC, assumed to be convex.
Special attention is paid to how the two solvers are coupled as radiant energy flows in both directions through the OS/VC interface.
Detailed discussion in the main text.}
\label{fig:hybrid_RT_3D}
\end{figure*}
\subsection{Hybrid forward RT modeling illustrated in uniform plane-parallel media}
Implementation of the hybrid 3D RT forward model described in Fig.~\ref{fig:hybrid_RT_3D} is out of scope for the present study.
However, a limited test of the basic idea can be easily performed in 1D RT; see Fig.~\ref{fig:hybrid_RT_1D}a.
We only need to have access to an accurate and flexible 1D RT solver such as DISORT \citep{Stamnes_etal_1988} to produce benchmark results and play the role of the RT model in the OS.
We will also need two closed-form expressions from diffusion theory to emulate the computational solution of the diffusion problem in the VC.
One expression is the albedo $R_g(\tau_\text{VC},\mu_0) = 1-T_g(\tau_\text{VC},\mu_0)$ from (\ref{eq:T_deltaEdd}), itself coming from the $\delta$-Eddington version of two-stream theory for non-absorbing (a.k.a. conservative) plane-parallel optical media.
The optical thicknesses of the VC and OS are denoted $\tau_\text{VC}$ and $\tau_\text{OS}$, respectively.
The other expression is $R_\text{dif}(\tau_{\rm t}/2\chi) = 1/(1+2\chi/\tau_{\rm t})$, where $\tau_{\rm t} = (1-g)\tau_\text{VC}$ is the scaled optical thickness of the VC, and $\chi$ can be set to 2/3.
In contrast with $R_g(\tau_\text{VC},\mu_0)$, which models a collimated solar beam, this last expression applies to diffuse illumination at the upper boundary.
It results directly from (\ref{eq:RoT_ratio}) for the $R/T$ ratio in the diffusion limit, which applies to plane-parallel media \citep{davis2002} combined with radiant energy conservation, i.e., $R + T = 1$.
DISORT is configured for a collimated solar beam (SZA = $\cos^{-1}\mu_0$) at the upper boundary, and a Lambertian reflector at the lower boundary with an effective albedo
\begin{eqnarray}
\alpha &=& f_\text{dir} \, R_g(\tau_\text{VC},\mu_0) \nonumber \\
&+& (1-f_\text{dir}) \, R_\text{dif}\left( (1-g)\tau_\text{VC}/2\chi \right),
\label{eq:hybrid_RT_1D}
\end{eqnarray}
where $f_\text{dir} = \exp(-\tau_\text{OS}/\mu_0)$.
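Equation~(\ref{eq:hybrid_RT_1D}) can be sketched in a few lines; here $R_\text{dif}$ is the diffusion-limit expression given above, while the $\delta$-Eddington $R_g$ of (\ref{eq:T_deltaEdd}) is replaced, for illustration only, by a simple conservative two-stream stand-in:

```python
from math import exp

def R_dif(tau_vc, g, chi=2.0/3.0):
    """Diffusion-limit albedo under diffuse illumination: 1/(1 + 2*chi/tau_t)."""
    tau_t = (1.0 - g) * tau_vc          # similarity-scaled optical thickness
    return 1.0 / (1.0 + 2.0 * chi / tau_t)

def R_g(tau_vc, mu0, g):
    """Albedo of a conservative slab under a collimated beam at cos(SZA)=mu0.
    Simple two-stream stand-in, NOT the paper's delta-Eddington expression."""
    tau_t = (1.0 - g) * tau_vc
    return tau_t / (tau_t + 2.0 * mu0)

def effective_albedo(tau_os, tau_vc, mu0, g):
    """Lambertian albedo handed to DISORT's lower boundary: the surviving
    direct beam sees R_g, the diffused remainder sees R_dif."""
    f_dir = exp(-tau_os / mu0)
    return f_dir * R_g(tau_vc, mu0, g) + (1.0 - f_dir) * R_dif(tau_vc, g)

alpha = effective_albedo(tau_os=5.0, tau_vc=15.0, mu0=1.0, g=0.86)
print(alpha)
```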
DISORT uses Mie scattering phase functions similar to that shown in Fig.~\ref{fig:PF_convolve} (with an asymmetry factor $g$ = 0.86), for effective droplet radii $r_{\rm e}$ = 5, 10, and 15 $\mu$m and wavelengths specified below.
For MODIS SWIR wavelengths, an expression similar to (\ref{eq:hybrid_RT_1D}) is used at the lower boundary, but where the closed-form diffusion theoretical models have an extra argument, namely, SSA $\omega$ from Table~\ref{tab:MODIS_SWIR_chans}.
We refer to \cite{MeadorWeaver80} or \cite{Davis_etal2009} for formulas for $R_{\{\omega,g\}}(\tau_\text{VC},\mu_0)$ and $R_\text{dif}\left( (1-g)\tau_\text{VC}/2\chi, L_{\rm d}/\ell \right)$, where the 2$^\text{nd}$ non-dimensional argument is found in (\ref{eq:Diffusion_length}) and Table~\ref{tab:MODIS_SWIR_chans}.
Finally, DISORT is run for a single layer of optical depth $\tau_\text{OS} = \tau_\text{tot} - \tau_\text{VC}$, where $\tau_\text{tot}$ is set to 10, 20, and 40.
Accuracy benchmark results are obtained by setting $\tau_\text{VC}$ = 0, hence $\tau_\text{OS} = \tau_\text{tot}$ (pure DISORT for each case).
Figures~\ref{fig:hybrid_RT_1D}bc display prediction errors of the above 1D RT hybrid model for given $\tau_\text{OS}$ between 0 and 10, and a broad range of parameters: $\tau_\text{tot} \in \{10, 20, 40\}$; $r_{\rm e} \in \{5, 10, 15\}$ [$\mu$m]; and SZA = \{0,60\} [$\null^\circ$].
In addition, we scan VZA across MISR's nine viewing directions (cf. Fig.~\ref{fig:Koch_in+out_VC}) for its red channel ($\lambda$ = 670~nm) in panel (b).
In panel (c), VZA = 9$^\circ$ and four MODIS spectral channels are sampled ($\lambda$ = 645, 1240, 1640, and 2130 [nm]), with the three latter SWIR channels having significant liquid water absorption (cf. Fig.~\ref{fig:Diffusion_length_Koch}) and, from there, cloud microphysical sensitivity to be exploited.
In Figs.~\ref{fig:hybrid_RT_1D}bc, one representative case ($\tau_\text{tot}$ = 20, $r_{\rm e}$ = 10~$\mu$m, SZA = 0$^\circ$) is highlighted; errors over the 9 MISR views (same answer for fore- and aft-VZAs when SZA = 0$^\circ$) and the 4 MODIS wavelengths are plotted in different colors.
The full spread of the outcomes across parameter space is captured with the grey region around those selected means.
We see that the maximal error for all MISR views becomes less than the highlighted $\pm$5\% when $\tau_\text{OS} \gtrsim 6$, while the counterpart for MODIS across all spectral channels crosses that max-error threshold at $\tau_\text{OS} \gtrsim 6.5$.
In essence, Fig.~\ref{fig:hybrid_RT_1D} does for angular and spectral diversity what Fig.~\ref{fig:VC_turbulence_vs_mean} does for spatial variability across pixels.
Interestingly, the two experiments independently put the optical depth of the OS in the 5-to-6 range, yet again.
Both studies transpose the notions of OS and VC from vertically-developed clouds to a plane-parallel setting, reminding us that the OS is fundamentally the \emph{radiative boundary layer} of the optical medium, irrespective of shape and internal structure.
Across this boundary layer, radiation transport transitions smoothly from a diffusion process in the VC to the streaming that occurs outside the cloud and onto the solar source and the overhead sensors.
This transition has to be modeled with bona fide 3D RT, but the transport of solar radiation across the VC can be vastly simplified in the interest of computational efficiency.
We have advocated a standard 3D diffusion model in this paper, but it is not the only option.
For instance, \emph{generalized} radiative transfer theory \citep{davis2014generalized,xu2016markov} and non-classical linear transport \citep{LarsenVasques11,frank2010generalized} are related but distinct homogenizations of 3D RT in spatially variable optical media, and the latter has its own diffusion limit \citep{frank2015nonclassical}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=2.7in]{hybrid_RT_1D.jpg}
\end{center}
\caption{
(a) Schematic of the hybrid 1D RT model: DISORT on the top (in OS); diffusion model at the bottom (in VC).
(b) Performance of the hybrid model w.r.t. DISORT benchmark for all nine MISR view angles.
(c) Same as (b) but for all four MODIS spectral channels of interest here (red and 3$\times$SWIR).
If $\tau_\text{OS}$ = 0, the model is pure diffusion; if $\tau_\text{OS}$ = 10, the model is pure DISORT but only in the special case where $\tau_\text{tot}$ = 10 too.
Colored curves are for a representative configuration: $\tau_\text{tot}$ = 20; $r_{\rm e}$ = 10~$\mu$m; and sun at zenith (hence same answers for MISR's fore- and aft-views).
Grey regions are the spread in model error for the whole parameter set described in the main text.
}
\label{fig:hybrid_RT_1D}
\end{figure}
\section{Summary and discussion of our findings}
\label{sec:discuss}
\cite{Forster_etal21} discovered the VC of opaque convective clouds by using synthetic multi-angle imagery of a toy model to search for that part of the cloud that paradoxically does nothing measurable for the image of the cloud, at least not the finer details of the cloud image (as observed by realistic sensors with $\approx$3\% radiometric noise levels).
Those finer details are not only what makes clouds fascinating phenomena to contemplate and ponder.
They are also the image ``features'' that enable automated retrievals of cloud top heights and winds using stereoscopic methods to process satellite imagery, e.g., from MISR.
Moreover, the physical linkage between spatial structures across multi-angle/multi-spectral imaging of vertically-developed cumulus clouds is key to their 3D imaging using optical tomographic methods.
That is indeed the challenge of CT, a promising new technique in passive cloud remote sensing in the VNIR-SWIR, which has already motivated a multi-nanosat mission, CloudCT, currently under development in Germany and Israel \citep{cloudct}.
CloudCT will use \emph{high} spatial resolution cameras covering a narrow swath to build directly on recent CT demos \citep{Levis_etal15,Levis_etal17,Levis_etal20} using LES clouds and AirMSPI data \citep{diner2013}, where both grid-scales and pixels are $\sim$20~m.
Here, we pursue CT using existing space-based imager data, specifically, with MISR and MODIS on Terra, which have relatively modest spatial resolutions, between 250 and 500~m pixels.
This endeavor is not just about legacy data.
The next generation of NASA's Earth observing assets in space will have comparable or improved capabilities; the Multi-Angle Imager for Aerosols investigation \citep[MAIA,][]{diner2018advances} comes to mind, where the main improvement for cloud remote sensing is access to microphysics through both polarization and (now multi-angle) SWIR.
At any rate, graduating CT from airborne to satellite sensing is a quantum leap in computational methodology for both forward and inverse problems because there will be strong sub-pixel spatial variability to reckon with and, moreover, pixel-scale optical thickness can far exceed unity.
We recall that efficient (hence deterministic) numerical solvers for 3D RT assume uniformity and mandate at least semi-transparency inside each grid-cell.
We are convinced that discovery of the VC will be key to meeting this new forward-modeling challenge in computational 3D RT, as well as in the proper definition of the large inverse problem in CT using a combination of MISR's multi-angle and MODIS's multi-spectral data.
Having found in the VC what does \emph{not} affect the intricate texture of cloud imagery, we set out in the present study to understand on the basis of physics what does encode 3D cloud structure into 2D imagery, respectively, output and input of CT.
Here are our main findings along the way:
\begin{enumerate}
\item
We established that, whatever structure there is in the VC, it will not influence the fine-scale structure of the cloud images in a detectable way.
However, {\bf the overall optical ``mass'' (mean extinction and size) of the VC does control the cloud-scale radiance contrast} between the part of the cloud's outer boundary that is directly illuminated (reflected light) and its self-shaded side (transmitted light).
\begin{itemize}
\item Moreover, there is a near-linear relationship between the reflected/transmitted radiance contrast ratio and the mean optical thickness of the whole 3D cloud---a fact that will prove helpful in the initialization of the CT optimization procedure.
\end{itemize}
\item
We showed that the {\bf 3D RT modeling in the VC can be vastly simplified} by assuming that it is uniform, with the possible exception of a cloud-scale vertical gradient, which either way reduces significantly the number of unknowns in the CT inverse problem.
\begin{itemize}
\item In addition, the 3D RT in the VC can be modeled with sufficient accuracy by using the 3D diffusion approximation, for which there exist extremely fast numerical solvers; this is good news for the forward model embedded in the CT optimization problem.
\item The RT physics in the VC is therefore analogous to a steady-state diffusion process where RWs start at the illuminated side of the VC and can end anywhere but, if ending on the illuminated side, generally not so far from where they started.
\item RW theory was used to derive the scaling of the radiative smoothing scale as the geometric mean of the only two lengths in the diffusion problem in the absence of absorption: the size of the medium and the size of the steps, a.k.a. the mean-free-path (MFP) but, in this case, the ``transport'' MFP, the distance over which, on average, an effectively isotropic scattering happens (factoring in the forward scattering).
\end{itemize}
\item
Outside the VC, we find the cloud's OS, which \cite{Forster_etal21} effectively defined as the first 5-to-6 optical depths into the medium, coming in from the solar source or any of the imaging sensors, before reaching the VC.
Forster and coworkers found that 5-to-6 optical depth for the OS/VC interface empirically, by requiring that the impact of any physically reasonable rearrangement of the extinction field inside the VC on the observed imagery does not exceed the sensor noise threshold for a radiance difference.
Here, {\bf we view the OS as the radiative analog of the boundary layer in fluid dynamics}---the region where the presence of the boundary strongly influences the structure of the radiance field.
\begin{itemize}
\item We confirmed numerically the 5-to-6 optical depth of the OS, but coming from the opposite direction to Forster and coworkers; we ask: At what optical depth into the cloud does an opaque object cease to leave a detectable imprint in the remotely-sensed cloud image?
\item Furthermore, we use Green function theory for 3D RT in an infinite medium to predict the 5-to-6 optical thickness of the OS based only on phase function characteristics (indeed only its asymmetry factor) by recasting 3D RT as a RW on the 2D unit sphere of directions and an associated RW in 3D space, but with angularly-correlated steps.
\item Steps in the RW on the 2D sphere are isotropic, but the fact that they are relatively small compared to the radius of curvature strongly impacts the related RW in 3D space, where step lengths are independent, but directions are not; this leads to a finite drift of the RW position in the direction of the 1st step as well as to a lateral dispersion.
\item Both longitudinal drift and lateral dispersion are computed analytically and numerically in unbounded 3D space and half-space for every step; in the limit of an infinite number of steps, the drift is identical to the ``transport'' or scaled MFP.
\item Three other experiments independently confirm the same 5-to-6 prediction: two were conducted herein numerically (in pixel- and angle-space, respectively) while the third path was to relate our theoretical and computational results to a controlled laboratory investigation by \cite{Bohren_etal95} on how optically thick a cloud needs to be to totally obscure the sun.
\end{itemize}
\item
Finally, motivated by securing for CT highly-desirable sensitivity to cloud particle size, we examined how cloud images are impacted by absorption in liquid water droplets at three SWIR wavelengths sampled by MODIS.
Based on the diagnostic diffusion length scale, we find that for moderately opaque cumulus clouds (mean optical thickness $\sim$22), {\bf the effective microphysical probing depth is commensurate with cloud size at 1240~nm, reaches into the VC at 1640~nm, but barely covers the OS at 2130~nm}; see Fig.~\ref{fig:Diffusion_length_Koch}.
\end{enumerate}
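The drift result in item 3 can be verified with a small Monte Carlo sketch in unbounded 3D space (no boundaries and no absorption, both simplifications relative to the text): a walk with unit-mean exponential step lengths and Henyey--Greenstein (HG) scattering of asymmetry $g$ drifts, on average, $\sum_{n\ge 0} g^n = 1/(1-g)$ MFPs along the initial beam direction, i.e., exactly one transport MFP ($\approx$7 MFPs for $g$ = 0.86).

```python
import numpy as np

rng = np.random.default_rng(0)

def hg_mu(g, n):
    """Sample n scattering-angle cosines from the Henyey-Greenstein function."""
    u = rng.random(n)
    return (1.0 + g**2 - ((1.0 - g**2) / (1.0 - g + 2.0 * g * u))**2) / (2.0 * g)

def drift(g, n_walkers=20000, n_steps=200):
    """Mean displacement along the initial (+z) direction, in MFP units, for
    walks with exponential step lengths and HG-correlated directions."""
    d = np.zeros((n_walkers, 3)); d[:, 2] = 1.0   # all walkers start along +z
    z = np.zeros(n_walkers)
    for _ in range(n_steps):
        z += rng.exponential(1.0, n_walkers) * d[:, 2]   # advance, then scatter
        mu = hg_mu(g, n_walkers)
        phi = 2.0 * np.pi * rng.random(n_walkers)
        s = np.sqrt(np.maximum(0.0, 1.0 - mu**2))
        dx, dy, dz = d[:, 0], d[:, 1], d[:, 2]
        pole = np.abs(dz) > 0.99999                      # avoid 0/0 at poles
        den = np.sqrt(np.maximum(1e-12, 1.0 - dz**2))
        nx = np.where(pole, s * np.cos(phi),
                      s * (dx * dz * np.cos(phi) - dy * np.sin(phi)) / den + dx * mu)
        ny = np.where(pole, s * np.sin(phi),
                      s * (dy * dz * np.cos(phi) + dx * np.sin(phi)) / den + dy * mu)
        nz = np.where(pole, np.sign(dz) * mu, -s * np.cos(phi) * den + dz * mu)
        d = np.stack([nx, ny, nz], axis=1)
        d /= np.linalg.norm(d, axis=1, keepdims=True)    # roundoff guard
    return z.mean()

g = 0.86
print(drift(g), 1.0 / (1.0 - g))   # MC drift vs. transport MFP = 1/(1-g)
```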
The last item will be critical in the study of cumuliform clouds in convective dynamical regimes using CT to recover information on the droplet size profile.
At a fundamental level, VNIR-SWIR cloud images are formed by two intertwined radiative diffusion processes.
\begin{itemize}
\item One unfolds in the OS:
{\bf a RW on the 2D unit sphere that models the gradual loss of directional memory} in the sunlight as it penetrates the cloud, and that drives a spatial RW with strongly correlated steps in direction space (due to the forward-peaked phase function).
\item The other unfolds in the VC:
{\bf a standard RW in 3D space akin to particles in Brownian motion} that start at the OS/VC interface facing the sun and end anywhere on it, but generally not very far from the starting point if it ends in a reflection rather than a transmission event.
\end{itemize}
We reckoned that $\sim$1/3 of the red-channel sunlight impinging on a cloud opaque enough to have a sizable VC never reaches it, as it escapes back to space before crossing the OS/VC interface.
In the absence of absorption (i.e., MISR- and MODIS-red channels), the VC partitions the remaining $\sim$2/3 of the incoming light between reflection and transmission, using the solar direction as a discriminator, with the former dominating the latter more-and-more as the VC increases in opacity.\footnote{
Assigning light deflected ``sideways'' by the VC to either of the two main categories, reflected or transmitted, is essentially a subjective call.}
Finally, the sunlight escapes the cloud---with or without reaching the VC---streaming into directions defined by individual pixels at the focal plane of the space-based imaging sensor.
Since it is (still or back) in the OS, this outgoing light is again following a RW of the first kind---on the 2D sphere, driving an angularly-correlated one in 3D space.
However, {\bf this last RW is best visualized as starting at the sensor's pixel and sending well-collimated ``reciprocal'' or ``adjoint'' light into the cloud}.
Therein this backtracking light diffuses across direction space and propagates in 3D space until it crosses paths \emph{at every step} with forward-propagating sunlight, either in the OS or at the OS/VC interface.
At those points, all physically realizable paths from the sun to the sensor through the cloud have thus been accounted for.\footnote{
Certain Monte Carlo 3D RT codes used in photorealistic computer graphics implement literally this forward-and-backward methodology, for instance, using the powerful ``photon mapping'' approach \citep{jensen2009realistic}.}
\section{Case study}
\label{sec:Case_study}
To close out the present study, we offer Fig.~\ref{fig:LES_big_SHDOM} where we revisit the protocol used in Fig.~\ref{fig:Koch_R_vs_T} in which cloud opacity is iteratively boosted.
However, we start here at an overall smaller optical thickness ($\sim$1 on average, down from $\sim$3) and we use two factors of 10 rather than six factors of 2.
Rather than our ``Koch fractal'' toy model, this time we use a moderate-size 3D cloud from an LES simulation using RICO \citep{rauber2007rain} forcing conditions (left panels), and the 3D RT is performed using deterministic rather than Monte Carlo code to generate the 3 nadir images (right panels).
The image pixel size is the same as the LES horizontal grid scale, namely, 20~m; for the present context, we also show (lower right) the areas covered by MISR-red, MODIS-red and MODIS-SWIR pixels, respectively, 275, 250, and 500~m, from Fig.~\ref{fig:EUREC4A}.
That makes clear the challenge of tomographic reconstruction of the 3D cloud structure from MISR and MODIS data.
This is indeed one of the two LES clouds successfully reconstructed by \cite{Levis_etal15} at the native LES resolution (similar to what can be achieved with airborne sensors)\footnote{
Such resolutions can of course be achieved from space, but at the expense of image swath and SNR.}
where the pixels and voxels are optically thin and internally uniform, as required for deterministic 3D RT modeling.
However, the two Terra-based imagers are plagued with pixels that are potentially opaque and certainly heterogeneous, in blatant contradiction with said requirements.
We therefore set out to help overcome these challenges by improving our fundamental understanding of how cloud images are formed in the first place.
How can this advance help to comprehend the three images to the right in Fig.~\ref{fig:LES_big_SHDOM}?
We first notice that the top and bottom images show the sharpest features, but we will argue that it is for opposite reasons.
\begin{itemize}
\item
We start with the optically thinner (bottom) case where the maximum optical thickness is $\approx$6 so, in essence, this cloud is all OS.
The MFP (corresponding to one unit of optical thickness) in fact exceeds cloud thickness in all of the many pixels near the meandering cloud edges.
These parts of the image are thus dominated by single-scattered light, which never leaves the column it first hits when both sun and sensor are at zenith, as is the case here.
In the most optically thick columns around the center, there is however clear evidence of multiple scattering.
That said, the strongly forward-peaked phase function keeps, on average, the light relatively close to the column it first enters, until it is finally backscattered.
In short, this image is almost like an X-ray of the cloud if one compares it with the vertically-integrated extinction map to its right, plotted on a log scale.
\item
The middle panel shows the cloud at its ``natural'' opacity, where the average cloudy column has a respectable optical thickness of nearly 10 and, in the center of the cloud, we find optical thicknesses in the many 10s.
This cloud therefore has a well-developed VC, although large portions of the cloud with low COT will not be part of it and, moreover, the VC itself will probably not be optically thick enough to use the diffusion approximation.
Nonetheless, since there is a VC, we can invoke the above-mentioned idea of a radiative smoothing scale, which quantifies to first order the lateral distance between entry and escape of the sunlight.
Factoring in the $\approx$45$^\circ$ inclination of this cloud (upper left panel), its physical thickness along the diagonal is on the order of 0.6~km and, in the central region, its optical thickness along the vertical is now in the 30--60 range, according to the legend (lower left panel), hence 20--40 projected along the diagonal.
With these rough estimates, our Eq.~(\ref{eq:RMSrho_R}) for the reflective smoothing scale yields 0.25--0.35~km, hence 0.17--0.25~km projected back onto the horizontal plane.
To the eye, that looks like a reasonable estimate of the resolvable level of detail in the optically thick center of the cloud, in comparison with the optical thickness map (lower left).
\item
In the optically thicker (top) case, the maximum optical thickness is now $\approx$600.
As evidenced by the logarithmic vertical optical thickness scale, all but the most peripheral columns have COT in excess of 100, so this cloud is practically all VC, thus making radiative smoothing theory more accurate.
According to the reflective smoothing scale in (\ref{eq:RMSrho_R}), all lateral transport distances in the middle panel are reduced by a factor of $\sqrt{10}$.
Therefore, we should be able to resolve details between 55 and 80~m in size, about 3 or 4 pixels, respectively, which to the eye seems about right.
We note that LES cloud structure is not expected to be realistic at these small scales, but somewhat too smooth due to the nature of the numerics in the fluid mechanics.
This deficit in small-scale variability is in addition to the radiative smoothing, and explains the ``blobby'' appearance of the cloud in the simulated image.
Interestingly, and in sharp contrast with the lower-right image, there is only a vague resemblance here between the COT map (lower left) and this simulated cloud image.
That is because it is clearly the variable height of the cloud's upper surface that shapes this image.
One can count 6 ``levels'' leading from cloud base to the top of the highest convective ``plume,'' as highlighted in the two top panels.
\end{itemize}
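The smoothing-scale arithmetic in the last two bullets can be reproduced in a few lines, assuming the reflective smoothing scale takes the geometric-mean form $\rho_R \approx H/\sqrt{(1-g)\tau} = \sqrt{H\,\ell_{\rm t}}$ (our reading of Eq.~(\ref{eq:RMSrho_R}); it reproduces the numbers quoted above):

```python
from math import sqrt, cos, pi

g = 0.86
H_diag = 0.6     # cloud thickness along the ~45-deg tilt direction [km]

def rho_R(H, tau):
    """Reflective smoothing scale ~ H/sqrt((1-g)*tau), the geometric mean of
    cloud thickness H and transport MFP l_t = H/((1-g)*tau) (assumed form)."""
    return H / sqrt((1.0 - g) * tau)

proj = cos(pi / 4.0)   # project diagonal estimates back onto the horizontal
mid = [rho_R(H_diag, tau) * proj for tau in (40.0, 20.0)]   # natural opacity
top = [r / sqrt(10.0) for r in mid]                         # extinction x10
print(mid)   # ~0.18-0.25 km resolvable detail (middle panel)
print(top)   # ~0.055-0.080 km, about 3-4 native 20-m pixels (top panel)
```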
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.14in]{LES_big_SHDOM.jpg}
\end{center}
\caption{
{\bf Left panels: }
A single vertically-developed cloud generated by the JPL LES \citep{matheou2014large} viewed as two zero-masked optical thickness fields plotted on a log-scale:
lower panel is the usual vertical integration;
upper panel shows integrals of the extinction coefficient along the $y$-axis.
An effective radius of 10~$\mu$m was assumed to convert the gridded LWC field into a 3D distribution of extinction values for $\lambda$ = 670~nm (MISR red); the mean COT across cloudy columns is then 9.72.
{\bf Right panels: }
The middle panel shows a synthetic image of the cloud at the native LES resolution computed with the popular open-source 3D RT solver SHDOM \citep{evans1998spherical}.
Solar and view angles are both set to 0$^\circ$, hence no shadows are visible, no cloud surface hidden (except by overhangs); surface is black; no aerosol nor Rayleigh scattering is considered.
Top and bottom panels are images of the same cloud, but with extinction everywhere multiplied or divided by 10, respectively, bringing the mean COT to 97.2 (top) and 0.972 (bottom).
Radiometric scales are more than doubled at each step to accommodate the increasing range of radiance values from bottom to top.
For context, the footprints of MISR-red, MODIS-red and -SWIR pixels are shown as insets in the bottom panel.
Numbers and pointers in the top two panels are discussed in the main text.
}
\label{fig:LES_big_SHDOM}
\end{figure}
\section{Conclusion \& outlook}
\label{sec:concl}
We set out to uncover the physical principles at work in the formation of convective cloud images, as observed from space by MISR and MODIS on Terra, and kindred imaging sensors operating in the solar spectrum, present and future.
We were motivated by the challenges that arise, due to pixel size and spatial variability, when migrating 3D cloud tomography (CT) algorithms from high to moderate spatial resolution (i.e., airborne to satellite platforms).
We did not resolve these challenges, but we are now in a better position to do so, building on a new understanding of cloud image formation.
That said, the present study applies not only to CT but, broadly-speaking, to any remote sensing operation that uses cloud image ``features,'' e.g., stereographic determination of cloud-top heights and winds.
We have indeed reduced the complex radiation-cloud interactions that culminate in the formation of an image to a single unifying concept in statistical physics, namely, diffusion processes, equivalently, random walk (RW) theory.\footnote{
Not to be confused with the so-called ``Markov chain'' formulation of computational radiative transfer that is closely related to RW theory in ``transport'' space, a merger of 1D or 3D space of positions and the 2D space of directions.
See \cite{esposito1978radiative} and \cite{xu2011markov} for implementations in plane-parallel media, without and with polarization, respectively.}
Furthermore, we have identified a three-part storyline where RW theory plays a central role and that unfolds as sunlight travels from the illuminated cloud boundary to the imaging sensors in space.
\begin{itemize}
\item
The {\it First Act} is about how the spatially uniform and unidirectional incoming solar beam gradually loses its directionality over the first 5-to-6 optical depths into the cloud; uniformity is of course also broken as the sunlight flows in and around the spatially variable extinction field.
In this first segment, light travels across the region we call the illuminated section of the ``outer shell'' (OS).
The deep RW here is on the 2D unit sphere of directions that, in turn, drives a \emph{non-standard} RW in 3D space with directionally-correlated steps.
The two consequences of these angular correlations are that:
({\it i}) the mean position of the RW drifts systematically in the direction of the incoming solar illumination;
({\it ii}) simultaneously, light coming in at a specific point on the cloud boundary spreads laterally as it penetrates the medium.
This first diffusion process ends when the light is well on its way to ``forgetting'' its original direction of propagation.
\item
As argued next, {\it Act Two} is in fact optional, but only under some special circumstances.
It starts whenever the now very diffuse sunlight reaches the ``veiled core'' (VC) of the cloud, as defined empirically in our previous study \citep{Forster_etal21}; therein sunlight executes a \emph{standard} (isotropic step direction) RW in the available 3D space, which is necessarily finite in vertically-developed convective clouds.
Steps in this RW are isotropic because the extinction field has been scaled back to account for the forward-peaked phase function following, e.g., the similarity theory of \cite{joseph76delta}.
Based on RW theory, we can compute how far the light travels on average along the VC boundary from entry to escape, and relate this lateral radiation transport to the phenomenon of radiative smoothing \citep{Marshak_etal1995}.
We can also compute from RW/diffusion theory the net radiative flux across the VC that, in turn, determines the cloud-scale contrast in radiance between the brighter sunlit side of the cloud to its dimmer self-shaded side.
\item
The {\it Final Act} is like the first, but in reverse: we are now interested in how ``reciprocal'' or ``adjoint'' light propagates when it starts at a specific image pixel, i.e., a small area in 3D space combined with a specific direction on the 2D sphere.
By reciprocity, this backward propagation from the sensor into the cloud is affected by the same angular smearing and spatial spreading as the incoming solar beam.
Spatially, this last RW quantifies how, in principle, the entire cloudy medium can affect the radiance in a single pixel/angle.
In fact, the intensity of this adjoint light has been called ``importance,'' following \cite{Marchuk1964}.
Where it is high, the ambient light coming from the solar source, directly or not, contributes a lot to the pixel-scale radiance and, conversely, where it is low, the ambient light contributes little.
\quad Depending on the pixel and viewing direction, there can be strong overlap between the forward and backward propagating light.
If that happens in the (illuminated part of the) OS, typically at relatively low orders-of-scattering, then it is an opportunity for the sunlight to flow from entry to escape without crossing the OS/VC interface, thus making the Second Act unnecessary.
This shortened playbook is conducive to generating strong image features, and we reckoned that it occurs for $\sim$1/3 of the incoming sunlight.
\quad On top of the $\sim$2/3 that reaches the (illuminated side of the) VC, there are always MISR viewing angles where many if not all pixels have no such overlap with incoming light.
Either way, there is a significant chance that the adjoint light reaches the OS/VC interface.
In that case, the highly diffuse outgoing light field produced across the OS/VC interface during Act Two is what the back-propagating adjoint light collects for the pixel/direction of interest.
Cloud structures in the OS, as seen from space, are then essentially backlit with the relatively dim and diffuse light that has filtered through the VC.
Image features here will therefore not be as sharp as on the sunlit side of the cloud, where backscattered light that never entered the VC prevails.
\end{itemize}
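The standard RW of Act Two lends itself to a minimal Monte Carlo sketch. In the illustrative Python code below, walkers enter a homogeneous sphere whose radius is given in scaled optical units (transport mean free paths), take exponentially distributed steps in fresh isotropic directions, and we record how far from the entry point they escape, a simple proxy for lateral radiation transport along the VC boundary. The spherical geometry, step-length distribution, and walker counts are assumptions made for illustration only, not the computation used in this study.

```python
import numpy as np

rng = np.random.default_rng(42)

def isotropic_direction():
    """Uniform random direction on the 2D unit sphere of directions."""
    u = rng.uniform(-1.0, 1.0)              # cosine of the zenith angle
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - u * u)
    return np.array([s * np.cos(phi), s * np.sin(phi), u])

def entry_to_exit_distance(tau_radius):
    """One standard RW through a sphere of scaled optical radius tau_radius.

    The walker enters at the 'north pole' heading straight down, then takes
    exponentially distributed steps (in units of the transport mean free
    path) in fresh isotropic directions until it crosses the boundary.
    Returns the straight-line entry-to-escape distance, a proxy for
    lateral transport along the VC boundary.
    """
    entry = np.array([0.0, 0.0, tau_radius])
    pos = entry.copy()
    direction = np.array([0.0, 0.0, -1.0])
    while True:
        pos = pos + rng.exponential(1.0) * direction
        if pos @ pos >= tau_radius ** 2:
            return float(np.linalg.norm(pos - entry))
        direction = isotropic_direction()

for tau in (5.0, 10.0, 20.0):
    d = [entry_to_exit_distance(tau) for _ in range(2000)]
    print(f"scaled optical radius {tau:5.1f}: "
          f"mean entry-to-exit distance = {np.mean(d):5.2f} transport mfp")
```

As expected from diffusion theory, the mean lateral displacement at escape grows with the scaled optical radius of the medium, which is the essence of radiative smoothing.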
3D cloud structure is thus encoded in MISR's multi-angle imagery using the rules of 3D radiative transfer (RT), and the goal of CT is to invert that map.
However, to do this with the larger pixels and larger clouds, we need to reformulate the CT problem based on insights from physics (here) and computational experimentation \citep{Forster_etal21}, bearing in mind the finite radiometric precision of our sensors.
Forster and coworkers defined a cloud's VC by locating the in-cloud region where the impact of physically plausible structural changes inside the VC is not detectable, i.e., where such changes result in radiance changes at best commensurate with the noise level.
They thereby established that the VC is the in-cloud region where only the mean extinction should be a CT target, thus drastically reducing the number of unknowns in the inverse CT problem and avoiding overfitting.
Here, we furthermore suggest that, into the future, the forward 3D RT model in the CT inversion scheme can be vastly simplified in the VC by using the diffusion approximation.
MISR's red channel provides the multi-angle imagery required for CT at 275~m resolution, and we have so far used prescribed uniform cloud droplet sizes for our sensitivity studies.
However, for actual CT, experience tells us that we want to do better than that, and we therefore plan to include in the CT optimization MODIS's SWIR channels where liquid water absorbs in proportion to the droplet's volume.
Even if there is just one viewing direction, we thus gain sensitivity to cloud microphysics, at least at the cloud top and edges.
Implementation details (e.g., prescribed or prior profile of effective radius) are a subject for future study, but we showed here already that, at SWIR wavelengths, the VC will be larger than for non-absorbing VNIR wavelengths.
That is more good news for future forward 3D RT modeling at the core of CT inversion algorithms.
\acknowledgments
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
LF received funding from the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sk\l{}odowska-Curie Grant Agreement No. 754388 and from LMU Munich's Institutional Strategy LMUexcellent within the framework of the German Excellence Initiative (No. ZUK22).
AD was funded primarily by NASA's SMD/ESD Radiation Sciences Program under the ROSES TASNPP element (contract \#17-TASNPP17-0165).
He also acknowledges a strong influence on his thinking from the ``Planetary Boundary Layer'' working group at JPL supported by the SRTD program.
We are thankful for many fruitful conversations on cloud CT with Aviad Levis, Katie Bouman, Masada Tzemach, Yoav Schechner, Jesse Loveridge, and Larry Di Girolamo.
\bibliographystyle{ametsoc2014}
\section{Introduction}
The flavour content of atmospheric neutrino interactions has previously
been measured
in four underground experiments \cite{kam,imb,frejus,nusex}.
The first two experiments, those performed in water Cherenkov detectors, found
that the ratio of $\nu_\mu$ to $\nu_e$ events (tagged by the outgoing lepton)
was different from that expected from their Monte Carlo calculations.
On the other hand
the latter two experiments, carried out in iron calorimeters, found consistency with expectation, albeit with inferior statistical precision.
\par In order to cancel the uncertainties in the overall cosmic
ray flux it is desirable to present the result in the form of the
double ratio
\begin{displaymath}
R_t=
\frac{\left(\frac{\nu_\mu}{\nu_e}\right)_{data}}
{\left(\frac{\nu_\mu}{\nu_e}\right)_{MC}} \,.
\end{displaymath}
The water Cherenkov experiments have selected single track (muon) events
as a measure of the $\nu_\mu$ rate and single shower (electron) events
as a measure of the $\nu_e$ rate. One can then form the experimental ratio
\begin{displaymath}
R=
\frac{\left(\frac{tracks}{showers}\right)_{data}}
{\left(\frac{tracks}{showers}\right)_{MC}} \,.
\end{displaymath}
The water Cherenkov detectors found values of $R$ between
$0.54 \pm 0.07$ and $0.62 \pm 0.08$ \cite{Kamneutron}.
The Frejus iron calorimeter
experiment, using all events rather than only single prong events and
including uncontained events, found a double ratio consistent with 1.0.
\par Since the first reports of this anomaly much effort has been invested in
verification of the Monte Carlo calculations \cite{Gaisser} and in checking
the experimental procedures \cite{Kamtest}.
No convincing explanation for the water Cherenkov anomaly
not involving new physics has been put forward. However
there still may be undetected backgrounds or experimental problems.
In particular
it has been postulated \cite{Ryaz} that the effect may be due to a
background of neutron produced events, though evidence against this has been
produced by the Kamiokande experiment \cite{Kamneutron}.
The possibility remains that the flavour content has changed
between the $\nu$ production in the upper atmosphere and their
interaction in the underground detectors, implying that $\nu$ flavour
oscillations have taken place, that neutrinos have mass and that physics beyond
the standard model is being observed.
\par In this letter we report a measurement of the flavour ratio in Soudan~2
from an exposure of 1.52 fiducial kton-years. The value of $R$ obtained is:
\par $R=0.72\pm0.19 ^{+0.05}_{-0.07}$
\par Although on its own the deviation from unity
is not significant,
the agreement of the sign of the discrepancy with
the water Cherenkov data adds weight to the hypothesis of a real effect.
\par Soudan 2
is an iron calorimeter with
different experimental systematics from the water Cherenkov detectors
and with a different geometry and detection
technique to the Frejus experiment.
Background events produced by neutral particles
entering the detector from the interactions of cosmic ray muons in the
surrounding rock are tagged by a hermetic active shield.
We show that our low value of $R$ is not due to a
contamination from such events. Our measured value of the track/shower ratio
for neutron produced events does not support
the hypothesis that the anomaly in the Kamiokande and IMB
experiments is due to such a contamination.
\section{The Soudan 2 detector}
The Soudan 2 experiment is located 710 meters underground in the
Soudan Underground
Mine State Park, Soudan, Minnesota, USA. The main detector is a
time projection, tracking
calorimeter with a total mass of 963 metric tons. It consists of 224
modules each weighing 4.3 tons and having an average density of 1.6
g/cc. It is surrounded by an
active shield of aluminum proportional tubes.
\par About 85\% of the mass of a module
is provided by 1.6 mm thick sheets of corrugated
steel. The sheets are stacked to form a hexagonal `honeycomb' structure.
Plastic
drift tubes (1.0 m long and 15 mm in diameter) fill the spaces
in the honeycomb.
An 85\% argon/15\% CO$_{2}$ gas mixture is recirculated through the modules.
Ionization deposited in the gas
drifts toward the closer end of
the tube in a 180 volt/cm electric field.
The drift velocity is approximately $0.6$ cm/$\mu$sec,
which yields a
maximum drift time of 83 $\mu$sec.
\par On reaching the end of the tube, the
charge is detected by vertical anode wires and horizontal
cathode strips. The signals from widely separated wires and strips are
summed to reduce the number
of readout channels. The summing is designed
such that matching a pulse from
an anode and cathode channel uniquely identifies
the module and tube from which the ionization drifted.
The
signals are digitized every 200 nsec and are stored in a 1024 word buffer.
The
primary trigger condition requires activity at different times in any 7 anode
OR 8 cathode channels out of any block of 16 channels within a total gate
of 72 $\mu$sec.
Further details of module construction may be found in reference
\cite{modcon}, its performance in a cosmic ray
test stand in \cite{modperf} and the performance at the Soudan mine in
\cite{gallagher}.
\par The calorimeter is surrounded by a 1700 m$^2$ active shield
designed to identify particles which enter or exit the
detector cavern. The shield covers about 97\% of the total solid
angle.
The basic element is an extruded aluminium manifold, up to 8m long,
consisting of eight hexagonal proportional tubes
arranged in two layers of four.
The four tubes in each layer are connected together and
read out as one signal. The random rate in a tube layer coming from
natural radioactivity ($\sim300$~Hz~m$^{-2}$) would produce an unacceptably high
rejection rate. Thus a coincidence of an adjacent
inner and outer layer is required to signal a high energy
particle entering or leaving the cavern. The measured efficiency of a
coincidence for a single, high energy particle traversing a shield element
is 95\%.
More details of the shield construction
and performance can be found in reference \cite{shield}.
The completed detector runs at a trigger rate of $\approx$ 0.5
Hz. Approximately two thirds of triggers come from cosmic ray muons passing
through the detector. Most of the remainder are due to electrical noise
or naturally occurring radioactivity.
The detector routinely runs with an overall efficiency of $\sim80$\% which
rises to over $90\%$ during nights and weekends when the laboratory is not
occupied.
Immediately after completion of a run the data are processed to
reconstruct the events and sort them into output files of candidate
events for various physics analyses.
Every 240 seconds a data acquisition sequence is initiated,
irrespective of detector activity. These `pulser' events provide a snapshot
of the background levels in the main detector and
are used as underlying events to add detector noise to
Monte Carlo events.
\section{Data Analysis}
\subsection{Data reduction}
\label{sec:dataanal}
The data considered in this letter come from a 1.52 kton-year exposure
between April 1989 and December 1993. During this
period the detector was under construction, starting with a total mass of
275 tons and ending with the complete 963 tons. A total of 43 million
triggers was taken.
The goal of the data reduction is to obtain a sample of
`contained events' which will be used both for the atmospheric neutrino
analysis described here and for a search for proton decay. A contained
event is defined as one in which no primary particle in the event
leaves the fiducial volume of the detector, defined by a 20 cm depth cut on
all sides of the detector.
\par The events are passed through a
software filter to reject events with tracks entering or leaving the
fiducial volume
(mostly cosmic ray muons) or events which have the characteristics
of radioactive background or electronic noise. Approximately 1 event
per 1500 triggers passes this filter.
\par The selected events are then double scanned to check containment and
to reject background events, using an interactive graphics program.
The
main backgrounds are residual radioactive and electronic noise,
badly reconstructed cosmic ray muons
and events where muons
pass down the gaps between individual modules, either finally entering a
module and stopping or interacting in material
in the gap and sending secondary tracks
into the modules. Any event with a track which starts or ends on a
gap, or which can be projected through
a gap to the exterior of the detector is rejected. In addition, events with
a vertex in the crack region are rejected.
Differences between scanners are resolved by a second level scan.
Approximately 1 event in 40 passed by the program filter is finally
selected as contained.
The average efficiency of individual scanners in selecting contained events
was 93.5\%. Further details of the event selection procedure can be found
in reference \cite{gallagher}.
\subsection{Monte Carlo analysis}
\par A Monte Carlo simulation of the experiment has been developed
which reproduces as closely as possible the experimental data. In particular,
Monte Carlo events have been made visually
indistinguishable from true data events
to experienced physicist scanners. This currently
enables Monte Carlo events to be inserted randomly
into the data stream
and to be processed simultaneously with the data events,
ensuring that they are treated identically.
This version of the Monte Carlo program was not available at
the beginning of the experiment. The last third of the
data set reported here had
Monte Carlo events inserted at the scanning level. The first
two thirds were initially processed independently of the Monte Carlo.
Although
the Monte Carlo corresponding to this earlier data was processed and scanned
separately great care was taken to follow the same procedures as for the
real data and thus avoid biases.
\par Monte Carlo events equivalent to 5.9 times the exposure of the real data
were generated and passed
through exactly the same data analysis procedure as described in section
\ref{sec:dataanal}.
\par The neutrinos were generated using the BGS flux\cite{barr}.
The variation of the $\nu$ intensity with the solar cycle was corrected
using neutron monitor data\cite{gallagher,beiber}.
\par At the low $\nu$ energies characteristic of the atmospheric flux the
predominant interactions are quasi-elastic or resonance production. Full
details of the event generation process and a detailed comparison with
all available low energy data are given in reference
\cite{gallagher}. Nuclear physics effects were represented by the Fermi gas
model. Rescattering of pions within the nucleus was applied
using data obtained by comparison of bubble chamber $\nu$
interactions on deuterium and neon \cite{intranuke}.
\par Events were generated to simulate the exact
size and configuration of the detector as it grew during this exposure.
Particles produced in
the neutrino interactions were tracked through the detector geometry using
the EGS and GHEISHA codes.
Particles crossing the drift tubes had amounts
of ionization deposited in the gas selected from the distribution of
reference \cite{allison}. The
ionization was drifted, with appropriate attenuation and diffusion, to the
anode wires where the effects of the
avalanche and electronics response were closely simulated.
The generated
event was superimposed on a pulser trigger which
reproduces noise and background in the detector as
they vary with calendar time.
\par A comparison of physical quantities, including
topologies and energy distributions, between data and Monte Carlo
showed no discrepancies outside the
possible effects of the atmospheric neutrino flavour anomaly discussed in
this paper. In addition the Monte Carlo representation of tracks and showers
has been tested against data taken at the Rutherford Appleton Laboratory ISIS
test beam facility which provided electrons, pions and muons
up to a momentum of 400 MeV/c and protons up to 800 MeV/c.
\subsection{Event classification and reconstruction}
\par The aim of this analysis is to measure the flavour content of
neutrinos incident on the detector after their
passage from the upper atmosphere. Given the
predominance of
quasi-elastic scattering the relative rate of single shower (electron)
and single track (muon) events is a good measurement of the flavour
content. It is also the measurement made in the water Cherenkov detectors.
We expect in the future to use the superior track separation
and reconstruction properties of Soudan 2 to flavour classify events with
multiple tracks but the objective of this paper is to repeat
the earlier measurements.
\par The lepton flavour of each event is determined by the second level
scanners who flag them as `track', `shower' or `multiprong'. Tracks
which have heavy ionization and are straight are further classified as
`protons'. Proton recoils accompanying tracks and showers are an additional
tag of quasi-elastic scattering and are ignored in the classification. Any
second track or shower in the event results in a multiprong
classification. As a test of the systematic uncertainties introduced by
the classification process, all scanning was done independently by two groups
prior to merging for the final results.
\par The quality
of the flavour assignment was measured using the Monte Carlo data.
Table \ref{mismatrix} gives the
identification matrix for Monte Carlo events selected as contained.
\begin{table}[h]
\caption[Monte Carlo identification matrix]
{Monte Carlo identification matrix.\\}
\label{mismatrix}
\begin{tabular}{|l|cccc|}
\hline
 & \multicolumn{4}{c|}{Assigned}\\
Generated & Track & Shower & Multiprong & Proton \\
\cline{1-5}
$\nu_\mu$ cc & 242 & 3 & 98 & 6 \\
$\nu_e$ cc & 15 &255 & 110 & 1 \\
Neutral current & 21 & 9 & 44 & 18 \\
\hline
\end{tabular}
\end{table}
It can be seen that 87\% of events assigned as
tracks have muon flavour and 96\% of showers
electron flavour. The identification matrix is consistent between the
first two thirds of the data when the scanners were aware that they
were scanning MC events and the last third when the events were
randomly mixed. The ratio of accepted muon to electron charged current
events is
approximately 1:1, different from the expected ratio of 2:1 from the
$\pi \rightarrow \mu \rightarrow e$ decay chain. At these low
energies threshold effects due to the difference in the muon and electron
masses cause the generated event
ratio to be approximately
1.5:1. Acceptance differences for high energy muons and electrons and
the cuts required to remove background produced by cosmic ray muons
passing down the gaps between modules further
reduce the ratio.
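The quoted purities follow directly from the identification matrix. The short Python sketch below reproduces them from the counts of Table \ref{mismatrix}; it is purely illustrative arithmetic, not part of the analysis chain.

```python
import numpy as np

# Rows: generated class (nu_mu CC, nu_e CC, neutral current);
# columns: assigned class (track, shower, multiprong, proton),
# with the counts of the Monte Carlo identification matrix.
matrix = np.array([[242,   3,  98,  6],    # nu_mu CC
                   [ 15, 255, 110,  1],    # nu_e CC
                   [ 21,   9,  44, 18]])   # neutral current

# Purity: fraction of the assigned track (shower) sample that is
# genuinely muon (electron) flavour charged current.
track_purity = matrix[0, 0] / matrix[:, 0].sum()
shower_purity = matrix[1, 1] / matrix[:, 1].sum()

print(f"track purity  = {track_purity:.0%}")   # 87%
print(f"shower purity = {shower_purity:.0%}")  # 96%
```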
\par
Each contained event is reconstructed, using the interactive graphics system,
to determine its position and the energy of the identified particles.
A vertex is assigned, and the
location of the ends of any tracks is marked.
\par Electron showers are reconstructed using a clustering algorithm to
select all hits lying within 60 cm of their nearest
neighbour. The shower
direction is determined using these hits and the vertex defined by the scanner.
The shower energy is calculated from the number
of hits. The energy is calibrated using the results of a test beam exposure
of a module to electrons below 400 MeV
at the Rutherford
Appleton Laboratory ISIS facility and
to Monte Carlo showers at higher energies.
Early data had some contamination of the shower sample from electrical
breakdown in some modules. This was much improved as the experiment
progressed by optimization
of the wireplane voltages and refurbishment of the worst
modules. In order to remove this contamination a cut which required
$\ge9$ hits was applied to the showers, corresponding to an energy cut of
approximately 150 MeV. Raising this cut
had no significant effect on the ratio $R$.
\par Tracks are reconstructed by fitting a polynomial
to the hits belonging to the track. The amount of material traversed by
the particle along the fit trajectory is calculated by tracking the polynomial
through the detailed geometry of the module. The range
is then converted into
a particle energy by integrating the Bethe-Bloch equation, assuming a muon
mass.
The energy calibration has again been checked using data from the test beam
exposure.
A minimum of
6 hits on the track was required, corresponding to a muon
kinetic energy cut-off of approximately 40 MeV.
Tracks produce a very regular pattern of hits in the honeycomb geometry which
breakdown processes do not reproduce, and there is no evidence of such
contamination in the track sample.
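The range-to-energy conversion described above can be sketched in a few lines of Python. The block below uses a simplified Bethe formula with no density or shell corrections, and medium parameters ($Z/A$, mean excitation energy, the 1.6 g/cc average density) that only approximate the mostly-iron calorimeter, so it is illustrative rather than the calibrated procedure used in the analysis.

```python
import math

ME = 0.511      # electron mass [MeV]
MMU = 105.66    # muon mass [MeV]
K = 0.307075    # Bethe coefficient [MeV cm^2 / mol]

def dedx(T, Z_over_A=0.466, I=286e-6, rho=1.6):
    """Approximate muon stopping power -dE/dx [MeV/cm] at kinetic energy T [MeV].

    Simplified Bethe formula (no density or shell corrections); the medium
    parameters are assumptions standing in for the mostly-iron modules at
    their 1.6 g/cc average density.
    """
    E = T + MMU
    gamma = E / MMU
    beta2 = 1.0 - 1.0 / gamma ** 2
    tmax = 2.0 * ME * beta2 * gamma ** 2 / (
        1.0 + 2.0 * gamma * ME / MMU + (ME / MMU) ** 2)
    arg = 2.0 * ME * beta2 * gamma ** 2 * tmax / I ** 2
    return rho * K * Z_over_A / beta2 * (0.5 * math.log(arg) - beta2)

def range_cm(T, T_min=5.0, steps=800):
    """Muon range [cm] from kinetic energy T down to T_min, by trapezoidal
    integration of dT / (dE/dx)."""
    ts = [T_min + (T - T_min) * i / steps for i in range(steps + 1)]
    fs = [1.0 / dedx(t) for t in ts]
    h = (T - T_min) / steps
    return h * (0.5 * fs[0] + sum(fs[1:-1]) + 0.5 * fs[-1])

def energy_from_range(r, T_lo=5.0, T_hi=2000.0):
    """Invert range -> kinetic energy by bisection (range is monotonic in T)."""
    for _ in range(60):
        T_mid = 0.5 * (T_lo + T_hi)
        if range_cm(T_mid) < r:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

for T in (50, 100, 200, 400):
    print(f"T = {T:4d} MeV: range ~ {range_cm(float(T)):6.1f} cm")
```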
\subsection{Shield data and the identification of $\nu$ events}
\label{sec:shield}
A total of 723 data events are classified as contained.
This is much greater than the expected neutrino rate of about 100
events/kton-year.
We conclude that the majority of these events
are due to the interactions of neutral particles (neutrons or photons)
produced by muon interactions in the rock around the detector.
The active shield
is designed to flag such events by detecting the muon and/or other charged
particles which are produced in the muon interaction but do not enter the
main detector.
It was placed as close to
the cavern wall and as far away from the detector as possible to maximize
the probability
of detecting the accompanying charged particles. Calculations
\cite{pdk642} indicate that only a few per cent
of such events will not have charged
particles traversing the shield.
\begin{figure}
\leavevmode\psfig{file=adjcs.ps,width=5.5in}
\caption
{\label{adjc}
Histogram of the number of shield hits for the data events (top) and Monte
Carlo events (bottom). The data is a mixture of neutrino events and rock
background. The Monte Carlo plot contains only generated neutrino events.}
\end{figure}
\par Figure \ref{adjc}(top) is a histogram of the number of coincident
shield hits accompanying each track or shower event. The events with no shield
hits are defined as our $\nu$ sample (`gold' events) and the events with
shield hits are defined as arising from muon interactions (`rock' events).
\par Figure \ref{adjc}(bottom) shows the same plot for Monte Carlo
contained events.
The events
with shield hits are due to random shield hits in the background
pulser events during the allowed time window of the MC event.
A total of 53 out of 598 (8.9\%) Monte Carlo events had random
shield coincidences.
The random veto events are almost all of multiplicity 1, consistent with
the veto being due to Compton electrons produced by photons
from the natural radioactivity in the rock. The
random vetoing of real events is simulated by selecting only
events with no shield hits for the Monte Carlo gold sample.
\par Rock events produced by a muon which passes through the cavern should
give at least two shield hits.
The one shield hit events are a combination of zero shield hit events with
random hits, genuine one shield hit events where the entering charged
particle is stopped in the cavern and potential two shield hit events
with a missing hit due to shield inefficiency. The efficiency of the shield
has been measured using cosmic ray muons detected in the main detector. It
ranges from 81\% during the early data runs before the geometrical coverage
was complete
to 93\% at the end of this data period, equal to the convolution of the
geometrical coverage and the single tube efficiency.
Using the number of 0, 1 and 2 hit
events we estimate that $7\pm2$ gold events are due to muon interactions
with a charged particle passing through the shield which was not recorded
due to shield inefficiency.
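The order of magnitude of this estimate can be reproduced with a toy model in which every rock muon offers exactly two shield-layer crossings, each detected independently with efficiency $\varepsilon$. The actual estimate also uses the one-hit counts and the time-varying efficiency, so the Python sketch below, which treats the whole $\ge2$-hit rock sample as two-hit events, only brackets the quoted $7\pm2$.

```python
def gold_leakage(n_two_hit, efficiency):
    """Toy estimate of rock events leaking into the zero-shield-hit sample.

    Model assumption: every rock muon crosses exactly two shield layers
    that fire independently with probability `efficiency`.  The observed
    two-hit count is then N_true * eps^2, so
        N_true  = n_two_hit / eps^2,
        leakage = N_true * (1 - eps)^2.
    """
    n_true = n_two_hit / efficiency ** 2
    return n_true * (1.0 - efficiency) ** 2

# 475 is the size of the >=2-shield-hit rock sample; the shield
# efficiency ranged from 81% (early running) to 93% (late running).
for eps in (0.81, 0.87, 0.93):
    print(f"eps = {eps:.2f}: expected leakage ~ {gold_leakage(475, eps):5.1f} events")
```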
\par Our sample of rock events, used to determine
the properties of any potential non-neutrino background, was defined as
those with $\ge2$
shield hits since the one shield hit event sample also contains randomly vetoed
neutrino events.
\par Table \ref{raw_numbers} gives the raw numbers of gold, rock and gold
MC events
in our sample, divided into track, shower, multiprong and proton. The ratio
of single prong to multiprong events in our data is higher than in previous
experiments. Our track energy threshold is lower than the Cherenkov
threshold in water and both track and shower thresholds are considerably
lower than those of Frejus. Also the requirement that no track ends on a
gap between modules preferentially rejects multiprongs.
\begin{table}[h]
\caption[Classifications for the contained events
before corrections. ]{Classifications for the contained events
before corrections.\\}
\label{raw_numbers}
\begin{tabular}{|l|c|c|c|c|}
\hline
& Track & Shower & Multiprong & Proton \\ \hline
Data: gold & 47 & 60 & 51 & 10 \\
Data: rock & 160 & 169 & 90 & 56 \\
MC & 278 & 267 & 252 & 25 \\
\hline
\end{tabular}
\end{table}
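Ignoring backgrounds, the classification impurities of Table \ref{mismatrix}, and the relative data/MC exposures (the double ratio cancels the overall normalization), the raw counts of Table \ref{raw_numbers} already determine an uncorrected double ratio with a naive Poisson error. The Python sketch below is arithmetic for illustration only; the corrected value is derived in the next section.

```python
import math

def double_ratio(tracks_data, showers_data, tracks_mc, showers_mc):
    """R = (tracks/showers)_data / (tracks/showers)_MC, with the naive
    Poisson error obtained by adding relative errors in quadrature
    (adequate when all four counts are reasonably large)."""
    r = (tracks_data / showers_data) / (tracks_mc / showers_mc)
    rel = math.sqrt(1.0 / tracks_data + 1.0 / showers_data
                    + 1.0 / tracks_mc + 1.0 / showers_mc)
    return r, r * rel

# Raw gold and MC counts from the table above, before any correction:
R, dR = double_ratio(tracks_data=47, showers_data=60,
                     tracks_mc=278, showers_mc=267)
print(f"uncorrected R = {R:.2f} +/- {dR:.2f}")  # roughly 0.75 +/- 0.16
```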
\section{Measurement of the flavour ratio}
\subsection{Background determination}
\par In section \ref{sec:shield} it was estimated that a background
of $7\pm2$ rock events was expected in the gold sample
because of shield inefficiency.
There is also the possibility
that neutrons or photons may enter the detector without being accompanied
by charged particles in the shield.
Our large sample of rock events enables
us to investigate this potential background
by studying the depth
distribution of the events in the detector.
\par The
events produced by photons and neutrons will be attenuated towards the centre
of the detector,
whilst the neutrino
events will be uniformly distributed through the detector.
Since the directions of the particles produced in neutron interactions
will not in general be
the same as that of the incident neutron we cannot directly measure the
distance that the neutron travelled through the detector. Instead
we define a measure of the
proximity of the event to the detector exterior by calculating
the minimum perpendicular distance from the event vertex to the detector edge.
Since few rock photons and neutrons are expected to travel upwards and
the base of the detector does not have an excess
of rock vertices, the floor is not considered to be an `edge' for the
purposes of this calculation. Figure \ref{depth} shows this
depth distribution for gold,
Monte Carlo and rock tracks and showers.
The Monte Carlo distributions
are normalized to the exposure of the experiment and the rock sample
is normalized to the same number of events as the data sample.
\begin{figure}
\leavevmode\psfig{file=depths.ps,width=5.5in}
\caption
{\label{depth}
The depth distributions for tracks (top) and showers
(bottom).
The data points are the gold data, the shaded histogram is
the gold Monte Carlo, normalized to the experiment exposure,
and the unshaded histogram
is the rock data, normalized to the same number of events as the data sample.}
\end{figure}
\par The rock shower and rock track depth
distributions are
different. The track distribution is consistent with being produced by
incoming neutrons with an interaction length of approximately
80 cm. The shower distribution appears to have two components,
a long range component consistent with neutrons and a short range component
which we attribute to photons. The short range component has a depth
distribution consistent with the photon conversion length of 15~cm measured in
a module at the ISIS test beam \cite{garcia}.
Figure \ref{ratio}(bottom) shows the integral
track/shower ratio as a function of
depth cut. The ratio rises as events near the edge of the detector are
removed, reaching a plateau at a depth cut of around 60 cm when the photon
component has been fully attenuated.
\begin{figure}
\leavevmode\psfig{file=fig3.ps,width=5.5in}
\caption
{\label{ratio}
The track/shower ratio for rock events
as a function of the number of shield hits (top)
and the depth cut (bottom).}
\end{figure}
\par Comparison of the Monte Carlo and the gold data
depth distributions indicates that there may be a small excess of events
at small depth in the shower sample whilst the track distribution closely
follows the expected neutrino distribution. However the discrimination
between rock and MC distributions is better in the shower sample because
of the short distance photon component.
\par The interactions of the neutral particles in the detector are not
expected to be strongly correlated with the number of shield hits. This is
verified in figure \ref{ratio}(top) which shows the track/shower ratio to be
constant as a function of number of shield hits. We therefore use the full
rock sample to estimate the amount of zero shield hit rock background in the
gold sample.
We have made $\chi^2$
fits of the shape of the gold distributions of figure \ref{depth}
to the sum of the shapes of the
rock and MC distributions with one free parameter, the fraction of the rock
background required in the gold distribution. The bins at large depths were
combined to give at least 5 gold events per bin, giving a total of 6 bins
in each distribution.
A fit with the background set to zero gave $\frac{\chi^2}{NDF}$ of 0.22 and 1.42
for the track and shower distributions respectively.
The fit of the tracks is very good
while the shower fit probability is about 20\%. To this level the fits do not
require any background contribution. However as described above we expect
some rock contamination so we continue with a second fit
which allows a free amount of background
in each distribution. The $\frac{\chi^2}{NDF}$ are now 0.19 and 0.54.
The track fit is not improved but there is
a significant drop in $\frac{\chi^2}{NDF}$ for the shower distribution.
As expected
from examination of figure \ref{depth} more background is suggested in
the shower depth distribution than in the track distribution.
We find $4.5\pm6.9$ background events in the
track sample and $14.2\pm5.9$ in the shower sample, yielding a
track/shower ratio of $0.3 \pm 1.4$ for background events.
This is consistent with, though numerically rather different from, the measured
rock event ratio of $0.95 \pm 0.10$.
In a third fit we constrain the ratio of the
background in the gold tracks and showers to be equal to the measured ratio
in the rock events. The $\frac{\chi^2}{NDF}$ for the combined track and shower fit
is 0.42 and it gives a total background of $20.6\pm8.9$ events, which are
to be divided between tracks and showers in the
measured rock ratio.
The number of background
events is consistent with those found in the unconstrained fits
and the fit quality is equally good. We use the constrained fit
in the calculation of $R$ since
it is consistent with the other fits,
it uses the maximum amount of measured information,
and it produces the smallest errors on $R$.
The small systematic errors which are
introduced by the assumption that the background in the gold sample is
represented by the rock sample are considered in the next section.
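The one-parameter shape fit described above can be sketched as follows. The binned shapes here are illustrative toy distributions, not the actual gold, rock, and Monte Carlo histograms:

```python
def chi2_fit_fraction(gold, mc_shape, rock_shape):
    """Fit the shape of the gold distribution to (1-f)*MC + f*rock,
    scanning the single free parameter f (the rock-background fraction).
    mc_shape and rock_shape are normalised to unit area."""
    n_gold = sum(gold)
    best_f, best_chi2 = 0.0, float("inf")
    for step in range(101):
        f = step / 100.0
        chi2 = 0.0
        for g, m, r in zip(gold, mc_shape, rock_shape):
            expected = n_gold * ((1.0 - f) * m + f * r)
            chi2 += (g - expected) ** 2 / expected
        if chi2 < best_chi2:
            best_f, best_chi2 = f, chi2
    return best_f, best_chi2
```

With the best-fit fraction in hand, the number of background events in the gold sample is simply the fraction times the number of gold events.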
\subsection{Calculation of $R$}
\par To calculate $R$ we correct the raw numbers of gold events using the
background estimated in the constrained fit. Note that the error on the
correction depends on the errors on the fraction of rock events in the
gold sample and the
measured rock ratio, not on the uncorrelated errors on the number of
background events.
The numbers entering the calculation and the corrected and uncorrected
values of $R$ are given in table \ref{results}. The error on $R$ includes the
error due to the background subtraction as well as the statistical errors
on the numbers of data and Monte Carlo events. The
background correction has only a small effect on the value of $R$ but adds to
the error.
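The background correction and the value of $R$ can be reproduced directly from the entries of Table~\ref{results} (a sketch in Python):

```python
# Values taken from the results table
gold_tracks, gold_showers = 47, 60
mc_tracks, mc_showers = 47.1, 45.3      # Monte Carlo, scaled by the nominal 5.9
n_bkg = 20.6                            # total background from the constrained fit
rock_ratio = 0.95                       # measured rock track/shower ratio

# Divide the background between tracks and showers in the measured rock ratio
bkg_tracks = n_bkg * rock_ratio / (1.0 + rock_ratio)
bkg_showers = n_bkg / (1.0 + rock_ratio)

nu_tracks = gold_tracks - bkg_tracks     # -> 37.0
nu_showers = gold_showers - bkg_showers  # -> 49.4

r_raw = (gold_tracks / gold_showers) / (mc_tracks / mc_showers)  # -> 0.75
r_cor = (nu_tracks / nu_showers) / (mc_tracks / mc_showers)      # -> 0.72
```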
\begin{table}
\caption[]{ Values of the various quantities used in the calculation
of $R$. The Monte Carlo numbers in parentheses are scaled by the nominal factor
of 5.9.\\}
\label{results}
\begin{tabular}{|l|l|}
\hline
Number of gold tracks & 47 \\
Number of gold showers & 60 \\
Number of MC tracks & 278 (47.1)\\
Number of MC showers & 267 (45.3)\\
Number of rock tracks & 160\\
Number of rock showers & 169\\
Rock track/shower ratio & $0.95\pm0.10$ \\
Fraction of rock events in gold sample & $0.062\pm0.027$\\
Corrected number of $\nu$ tracks & 37.0\\
Corrected number of $\nu$ showers & 49.4\\
& \\
\hline
Raw value of $R$ (no background correction) &$ 0.75\pm0.16$\\
\hline
Corrected value of $R$ & $0.72\pm0.19$\\
\hline
\end{tabular}
\end{table}
\par The systematic errors which could affect the value
of $R$ may be divided into the following categories:
\begin{itemize}
\item Systematic uncertainties in the incident neutrino flux ratio. A number
of calculations have been made of the neutrino fluxes, summarized and
corrected in a recent review \cite{Gaisser}. There is agreement that
although the absolute rate is uncertain to the order of $\pm20\%$ the flux
ratio is much better known.
We take an
uncertainty of $\pm5\%$ in $\frac{\delta R}{R}$.
\item Systematic uncertainties in the neutrino generator. These include
factors such as the uncertainty in the axial vector mass, the various cross
sections, the treatment of Fermi motion, the uncertainty in the intranuclear
absorption etc. All these factors are considered in more detail in
reference \cite{gallagher}. It should be remembered that neutrino
universality constrains the $\nu_e$ and $\nu_\mu$ cross sections to be equal
up to mass effects.
We estimate they contribute an amount $\pm0.03$
in $\delta R$.
\item Systematic uncertainties introduced by the scanning process. In order
to estimate this contribution the data was independently scanned and
classified by two different groups before the groups merged.
A value of $R$ was calculated by each group independently.
The difference in the raw $R$ value was 0.02.
Since
the final result was obtained by combining the two groups and resolving
differences we expect the final
error due to systematic scanning differences to be smaller than this.
However we take the full difference and assign a systematic error on $R$ of
$\pm0.02$.
\item Systematic uncertainties on the background subtraction.
The main systematic
error lies in
the assumption that the
track/shower ratio of the zero shield hit rock background is the same as
that of the $\ge2$ shield hit rock events. It was shown in
figure \ref{ratio}(top) that this ratio is constant as a function of number
of shield hits.
However, it might be expected that zero shield hit events
arise from interactions deeper in the rock than
those giving shield hits since both the muon and any associated charged
particles have to miss the shield. Neutrons and photons produced
in these interactions would have to pass through more rock absorber.
The photon component would be attenuated faster than the neutron and the
resulting events would contain a reduced
fraction of shower events.
\par The effect of absorption in the rock may be simulated by
calculating the track/shower ratio for different depth cuts in the main
detector. As in the rock,
the photon component is attenuated faster than the neutron
component. The track/shower ratio, plotted as a function of depth cut (figure
\ref{ratio}(bottom)), rises to a plateau when the photon component
is completely absorbed. We take $1.75\pm0.41$, the value at a depth of
80 cm, as our measurement of the ratio for pure
neutrons. We estimate the systematic error on $R$ by
repeating the calculation with background ratios having values between those
measured for the full rock sample and the pure neutron sample,
including allowance
for the errors on these numbers. This produces a
variation of $R$ from 0.74 to 0.67. We take this variation
as an estimate of the systematic error.
Note that the possible rise in the background ratio due
to absorption of the photon component
results in a shift towards
smaller $R$, i.e. further from the expected value of 1.0.
\par We have studied the effects of applying an extra cut on the depth
distribution to remove events closest to the exterior of the detector.
Within the statistical limits on the data
we see no significant change in $R$. The uncut data
provides the maximum statistics and the best determination of the background
fraction and thus the smallest error on $R$.
\end{itemize}
\begin{table}
\caption[]{ Values of the components of the systematic error on R.\\}
\label{system}
\begin{tabular}{|l|c|}
\hline
Error & $\delta R$ \\
\hline
Neutrino flux & $\pm 0.038$ \\
Monte Carlo systematics & $\pm 0.03$ \\
Scanning systematics & $\pm0.02$ \\
Background subtraction & +0.02 -0.05 \\
& \\
\hline
Total systematic error & +0.05 -0.07 \\
\hline
\end{tabular}
\end{table}
\vskip 0.5cm
\par The systematic errors are summarized in table \ref{system}.
\subsection{Absolute rates}
The rate of track events is $0.79 \pm 0.18$
of the expected rate and of showers $1.09 \pm 0.21$. The errors do not
include the systematic error on the flux calculation, estimated to be $\pm20\%$
by reference \cite{Gaisser}.
If the BGS flux\cite{barr} is accurate we would support
the hypothesis
that the anomaly results more from a loss of $\nu_\mu$ events than a gain
of $\nu_e$ events.
\par We have investigated other possible systematic effects on the absolute
rates. These include uncertainties in the Fermi gas model, particularly
in the Pauli blocking of inelastic interactions producing a low energy
nucleon, uncertainties in the cross-sections,
biases introduced by the detector trigger and biases in
the scanning process. We estimate that these produce a further 6.4\%
systematic error on the ratio of measured to expected tracks and 7.5\% on
the showers.
\section{Conclusions}
\par We have measured the flavour ratio of ratios ($R$)
in atmospheric neutrino
interactions using a 1.52 kton-year exposure of Soudan~2.
We find $R=0.72\pm0.19^{+0.05}_{-0.07}$. This value is
about $1.5\sigma$ from the expected value of 1.0 and is
consistent with the anomalous ratios measured by the Kamiokande and IMB
experiments. However we note that since our acceptance matrix is different
from those of the water Cherenkov experiments we would not expect to measure
the same value of $R$, particularly
if physics processes are occurring which are not simulated
in our Monte Carlo. There is
approximately a 7\% chance that our measurement would statistically give
0.72 or less if the true answer is 1.0. To this level we support the
observation of an anomaly in the atmospheric neutrino flavour ratio
in a detector using a completely different
detection technique and with different systematic biases.
Data taking in Soudan~2 is continuing and completion of our planned 5
kton-year exposure in 1999 should definitively resolve the question of
the presence or otherwise of an anomaly.
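The quoted probability corresponds to the one-sided Gaussian tail beyond $1.5\sigma$ (a quick numerical check):

```python
import math

def one_sided_tail(n_sigma):
    """P(x < mu - n_sigma*sigma) for a Gaussian-distributed measurement."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p = one_sided_tail(1.5)   # ~0.067, i.e. approximately 7%
```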
\par We have investigated and corrected for backgrounds due to the interaction
of neutrons or photons produced by
$\mu$ interactions in the rock surrounding
our detector.
We have measured the track/shower ratio
for neutrons entering Soudan 2 and find a value of $1.75\pm0.41$. Making
allowance for the fraction of the tracks (pions or protons) which would be
below Cherenkov threshold in water we cannot reduce this ratio significantly
below 1.0. This is
in contradiction to the hypothesis \cite{Ryaz} that the
anomaly in the Kamiokande and IMB detectors could be due to a substantial
excess of shower events in neutron background.
\begin{ack} This work was undertaken with the support of
the U.S. Department of Energy, the State and University of
Minnesota and the U.K. Particle Physics and
Astronomy Research Council.
We wish to thank the following for their invaluable help with
the Soudan 2 experiment: the staffs of the collaborating laboratories; the
Minnesota Department of Natural Resources for allowing us to use the facilities
of the Soudan Underground Mine State Park; the staff of the Park, particularly
Park Managers D. Logan and P. Wannarka, for their day to day support; and Messrs
B. Anderson, J. Beaty, G. Benson, D. Carlson, J. Eininger and J. Meier of the
Soudan Mine Crew for their work in the installation and running of the
experiment.
\end{ack}
\subsection{Datasets and Evaluation Metrics}
\subsubsection{Datasets}
\vspace{-0.2cm}
We employ a publicly available dataset \cite{kermany2018identifying} to evaluate the performance of our Sparse-GAN. The dataset was acquired with a Spectralis OCT device (Heidelberg Engineering, Germany) and contains data with three different lesions: drusen, DME (diabetic macular edema), and CNV (choroidal neovascularization).
A detailed description of this dataset can be found in \cite{kermany2018identifying}.
To train the proposed Sparse-GAN and determine the threshold of the anomaly score, we divide the original training set into two parts: a new training set with 50,140 normal images and a validation set consisting of 3,000 disease images and 1,000 normal images. The test set is the same as in the original dataset.
\vspace{-0.3cm}
\subsubsection{Evaluation Metrics}
\vspace{-0.2cm}
For a given test image $\mathbf{I}_{in}$, we use $\mathcal{A}(\mathbf{I}_{in})$ given in Eq. (\ref{eq:anomaly}) to compute the anomaly score. Further, we use $\mathcal{C}(\mathbf{I}_{in})$ given in Eq. (\ref{eq:classifcation}) for diagnosis.
Based on the anomaly score, we mainly use \textbf{AUC} (Area under the ROC Curve) to evaluate our method.
To compute accuracy (\textbf{Acc}), we need to determine the threshold $\phi$ of the anomaly score on the validation set, which consists of 75\% disease images and 25\% normal images.
We adopt sensitivity (\textbf{Sen}) as the third evaluation metric.
Finally, the threshold $\phi$ is then used for testing.
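One simple way to determine $\phi$ on the validation set is to scan candidate thresholds and keep the one maximising validation accuracy. The exact selection rule is our assumption, and the scores below are toy values:

```python
def choose_threshold(scores_normal, scores_disease):
    """Scan candidate thresholds and keep the one maximising validation accuracy."""
    candidates = sorted(scores_normal + scores_disease)
    n = len(scores_normal) + len(scores_disease)
    best_phi, best_acc = candidates[0], 0.0
    for phi in candidates:
        # normal images should score below phi, disease images at or above it
        correct = (sum(s < phi for s in scores_normal)
                   + sum(s >= phi for s in scores_disease))
        acc = correct / n
        if acc > best_acc:
            best_phi, best_acc = phi, acc
    return best_phi
```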
\vspace{-0.3cm}
\subsection{Training Details}
The proposed Sparse-GAN is implemented in PyTorch with NVIDIA graphics processing units (GeForce TITAN V). The input image size is $224 \times 224$, while the batch size is 32. The optimizer is Adam and the learning rate is 0.001. Empirically, we let $\lambda_{re}=20, \lambda_{adv}=1$, and $\lambda_{sp}=50$.
\vspace{-0.3cm}
\subsection{Quantitative Experimental Results}
\vspace{-0.6cm}
\begin{table}[htb]
\normalsize
\setlength{\tabcolsep}{3mm}
\centering
\caption{Quantitative results for ablation studies and comparison with state-of-the-art methods.}
\begin{tabular}{c|c|ccc}
\hline
\multirow{2}{*}{Method} & Val-set & \multicolumn{3}{c}{Test-set} \\ \cline{2-5}
& AUC & AUC & Acc & Sen \\ \hline
Auto-Encoder & 0.729 & 0.783 & 0.751 & 0.834 \\
AnoGAN\cite{schlegl2017unsupervised} & 0.815 & 0.846 & 0.789 & 0.917 \\
f-AnoGAN\cite{schlegl2019f} & 0.849 & 0.882 & 0.808 & 0.871 \\ \hline
pix2pix \cite{isola2017image} \#1 & 0.805 & 0.861 & 0.818 & 0.879 \\
pix2pix \cite{isola2017image} \#2 & 0.837 & 0.874 & 0.815 & 0.900 \\
Sparse-GAN & \textbf{0.885} & \textbf{0.925} & \textbf{0.841} & \textbf{0.951} \\ \hline
\end{tabular}
\vspace{-0.2cm}
\begin{flushleft}
\small \#1, image level \\
\small \#2, latent space \\
\end{flushleft}
\vspace{-0.3cm}
\label{tabel:result}
\end{table}
\vspace{-1.0cm}
\subsubsection{Ablation Study.}
\vspace{-0.2cm}
To justify the benefits of predicting the anomaly score in latent space and of the Sparsity Regularization Net, we conduct the following ablation studies: \#1 denotes the Image-to-Image GAN \cite{isola2017image} predicting the anomaly score at image level, and \#2 denotes the Image-to-Image GAN \cite{isola2017image} predicting the anomaly score $\mathcal{A}(\mathbf{I}_{in})$ in the latent feature space.
By including the $\mathcal{L}_{adv}$ loss on top of the Auto-Encoder, we improve the AUC from 0.729 to 0.805 on the validation set; that is, adversarial learning is helpful. By transforming the reconstructed image into latent space, the result improves from 0.805 to 0.837 on the validation set, since the noise in the images is harmful to diagnosis. Finally, by regularizing the latent features with our proposed Sparsity Regularization Net, the result improves from 0.837 to 0.885, which shows that the sparsity regularization is effective. The ablation studies on the test set likewise validate the effectiveness of the different modules.
Table \ref{tabel:result} summarized the results.
\vspace{-0.2cm}
\subsubsection{Performance Comparison.}
\vspace{-0.2cm}
We further compare the proposed method with state-of-the-art methods, including the Auto-Encoder, AnoGAN \cite{schlegl2017unsupervised} and f-AnoGAN \cite{schlegl2019f}.
Comparing our adopted Image-to-Image GAN (i.e. \#1) with the original AnoGAN \cite{schlegl2017unsupervised}, the AUC improves from 0.846 to 0.861 on the test set; that is, the end-to-end optimized generator is better than the two-stage trained generator.
Our method achieves the highest AUC on both the validation set and the test set. The accuracy of our method on the test set is comparable to that of supervised deep learning methods, and the sensitivity of 0.951 indicates that the missed-diagnosis rate of our model is very low, which is particularly meaningful for clinicians. The results are also summarized in Table \ref{tabel:result}.
\vspace{-0.3cm}
\subsection{Qualitative Analysis with Anomaly Activation Map}
To further understand the role of lesions in clinical diagnosis, some example images are shown in Fig.~\ref{fig:abnormal}. When Sparse-GAN classifies a given image as abnormal, the AAM is computed. In addition to the anomaly heatmap, we also show the output images and the difference between the input image and the output one.
Since Sparse-GAN is trained only on the normal set, the model cannot reconstruct abnormal patterns. The \textit{Diff} images show that noise in the images is harmful to reconstruction.
The heatmap can localize the lesion in general, which validates the effectiveness of our proposed AAM for the anomaly detection framework.
\begin{figure}[ttt]
\centering
\includegraphics[width=3.2in]{figures/abnormal}
\caption{Anomaly heatmap on abnormal images. \textit{Diff} images show that noise in images is harmful to reconstruction, and AAM images show that lesions play an important role in the diagnosis made by Sparse-GAN. (Best viewed in color.)
} \label{fig:abnormal}
\vspace{-0.4cm}
\end{figure}
\subsection{Image-to-Image GAN for Anomaly Detection}
\vspace{-0.3cm}
As discussed earlier, we adopt the image-to-image \cite{isola2017image} generator as the $G$ in the GAN, which consists of an encoder $G_{en}$ and a decoder $G_{de}$, while $D$ denotes the discriminator. Let $\mathbf{I}_{in}$ be an input image; its latent feature is $\mathbf{H}_{in} = G_{en}(\mathbf{I}_{in})$, and the latent feature is transformed into the reconstructed image $\mathbf{I}_{re}= G_{de}(\mathbf{H}_{in})$. The Image-to-Image GAN \cite{isola2017image} is optimized with a loss combining a reconstruction term and an adversarial term,
\begin{equation}
\min\limits_G \max\limits_D \mathcal{L}_G = \min\limits_G \left( \lambda_{adv} \max\limits_D(\mathcal{L}_{adv}) + \lambda_{re} \mathcal{L}_{re} \right),
\end{equation}
where $\lambda_{adv}$ and $\lambda_{re}$ are regularization parameters. The adversarial loss and reconstruction loss are defined as,
\begin{equation}
\mathcal{L}_{adv} = \mathbb{E}_{\mathbf{I}_{in}}[\log D(\mathbf{I}_{in})] + \mathbb{E}_{\mathbf{I}_{in}, \mathbf{H}_{in}}[\log(1 - D(G(\mathbf{I}_{in}), \mathbf{H}_{in}))],
\end{equation}
\begin{equation}
\mathcal{L}_{re} = \frac{1}{m}\sum_{i=1}^m(\mathbf{I}_{in}^{(i)} - \mathbf{I}_{re}^{(i)})^2,
\end{equation}
where $m$ is the batch-size.
\vspace{-0.2cm}
\subsection{Predict Anomaly Score in Latent Space}
One challenge in reconstructing the OCT images is speckle noise. To reduce the influence of speckle noise, we propose to transform the reconstructed image $\mathbf{I}_{re}$ into latent space by an encoder $E$, i.e. $\mathbf{H}_{re} = E(\mathbf{I}_{re})$. To cut down the computational cost, the encoder $E$ shares its weights with $G_{en}$. In latent space, the model predicts the anomaly score $\mathcal{A}(\mathbf{I}_{in})$ and the diagnosis result $\mathcal{C}(\mathbf{I}_{in})$ as follows:
\begin{equation}
\label{eq:anomaly}
\mathcal{A}(\mathbf{I}_{in}) = \left\| \mathbf{H}_{in} - \mathbf{H}_{re} \right\|_2
= \left\| G_{en}(\mathbf{I}_{in}) - E(G(\mathbf{I}_{in})) \right\|_2 ,
\end{equation}
\begin{equation}
\label{eq:classifcation}
\text{and }\mathcal{C}(\mathbf{I}_{in}) =
\begin{cases}
\text{normal,} \quad \text{if} \ \mathcal{A}(\mathbf{I}_{in}) < \phi \\
\text{disease,} \quad \text{if} \ \mathcal{A}(\mathbf{I}_{in}) \geqslant \phi
\end{cases}
\end{equation}
where $\phi$ is the anomaly score threshold determined on the validation set.
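Equations (\ref{eq:anomaly}) and (\ref{eq:classifcation}) amount to a Euclidean distance in latent space followed by a threshold. A minimal sketch, treating the latent features as flattened vectors:

```python
import math

def anomaly_score(h_in, h_re):
    """A(I_in) = ||H_in - H_re||_2, with latent features flattened to vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h_in, h_re)))

def classify(h_in, h_re, phi):
    """Disease if and only if the anomaly score reaches the threshold phi."""
    return "disease" if anomaly_score(h_in, h_re) >= phi else "normal"
```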
\subsection{Sparse Regularization on Latent Feature}
On the one hand, without additional regularization, the generator $G$ may learn an approximation to the identity function, which cannot distinguish disease images from normal images. On the other hand, sparse coding is interpretable and has the capability for anomaly detection \cite{luo2017revisit, luo2019video}.
Based on this observation, we propose a novel Sparsity Regularization Net which recasts the solution of sparse coding as a novel convolutional long short-term memory (LSTM) unit. Moreover, we regularize the sparsity of the latent feature $\mathbf{H}_{in}$ with the proposed Sparsity Regularization Net (i.e., $S(\cdot)$) as shown in Fig.~\ref{fig:overall_arch}. Letting $S$ denote the Sparsity Regularization Net, we propose a novel Sparsity-constrained GAN (Sparse-GAN) with sparsity regularization $\mathcal{L}_{sp} = S(\mathbf{H}_{in})$.
The proposed Sparsity Regularization Net is inspired by Sparse LSTM \cite{zhou2018sc2net}, but differs from it in two aspects. First, we replace the element-wise multiplication in Sparse LSTM with a convolutional operation, which accelerates the computation. Second, the input of the Sparsity Regularization Net is the latent feature rather than the original image.
The loss to train Sparsity Regularization Net is defined as follows,
\begin{eqnarray}
\mathcal{L}_{scl}({\mathbf W}_d, {\mathbf s}) = \left\| \mathbf{H}_{in} - {\mathbf W}_d^T {\mathbf s} \right\|_F^2 + \left\| {\mathbf s} \right \|_1
\end{eqnarray}
where ${\mathbf s}$ is the sparse code w.r.t. $\mathbf{H}_{in}$ and ${\mathbf W}_d$ is the dictionary.
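The objective $\mathcal{L}_{scl}$ is the classical sparse-coding problem, whose iterative shrinkage-thresholding (ISTA) solution is closely related to what the Sparse LSTM of \cite{zhou2018sc2net} unrolls. A plain (non-recurrent, non-convolutional) sketch for a flat feature vector:

```python
import math

def soft(x, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def ista(h, wd, lam=0.1, eta=0.25, iters=200):
    """Minimise ||h - wd^T s||_2^2 + lam*||s||_1 by iterative
    shrinkage-thresholding; the rows of wd are dictionary atoms."""
    n, m = len(wd), len(h)
    s = [0.0] * n
    for _ in range(iters):
        # residual r = h - wd^T s
        r = [h[j] - sum(wd[i][j] * s[i] for i in range(n)) for j in range(m)]
        # gradient step (factor 2 from the squared norm) followed by shrinkage
        s = [soft(s[i] + 2.0 * eta * sum(wd[i][j] * r[j] for j in range(m)),
                  eta * lam) for i in range(n)]
    return s
```

With an identity dictionary the solution reduces to soft-thresholding of the input, which provides a direct check of the implementation.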
Overall, the final loss of Sparse-GAN is given as the following:
\begin{equation}
\mathcal{L} = \lambda_{re}\mathcal{L}_{re} + \lambda_{adv} \max\limits_D(\mathcal{L}_{adv}) + \lambda_{sp}\mathcal{L}_{sp},
\end{equation}
where $\lambda_{re}, \lambda_{adv} \text{ and } \lambda_{sp}$ are regularization parameters.
\subsection{Anomaly Activation Map for Visualization}
Since anomaly detection is significantly different from supervised classification, Class Activation Map (CAM) \cite{zhou2016learning} is not suitable in our framework to show the role of lesions for diagnosis. To address the weakness of CAM, we propose Anomaly Activation Map (AAM) to visualize lesions in anomaly detection framework.
We first perform Global Average Pooling ($GAP$) on the latent features $\mathbf{H}_{in} \in \mathbb{R}^{1024 \times 7 \times 7}$ and $\mathbf{H}_{re} \in \mathbb{R}^{1024 \times 7 \times 7}$. We then obtain the anomaly vector $\mathbf{W}_{aam} = (w_1, w_2, \cdots, w_n)$ as follows,
\begin{equation}
\mathbf{W}_{aam} = \left \| GAP(\mathbf{H}_{in}) - GAP(\mathbf{H}_{re}) \right \|_1,
\end{equation}
where $\mathbf{W}_{aam} \in \mathbb{R}^{1024 \times 1 \times 1}$ and $n$ is the number of channels of the latent feature.
Finally, we multiply the feature map $\mathbf{H}_{in}$ by anomaly vector in channel-wise fashion and get the anomaly activation map.
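The AAM computation can be sketched as follows. The final summation of the weighted channels into a single 2-D map is our assumption about the aggregation step:

```python
def gap(feat):
    """Global average pooling of a [C][H][W] feature map, one value per channel."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feat]

def anomaly_activation_map(h_in, h_re):
    # Channel weights w_k = |GAP(H_in)_k - GAP(H_re)_k|
    w = [abs(a - b) for a, b in zip(gap(h_in), gap(h_re))]
    c, h, wd = len(h_in), len(h_in[0]), len(h_in[0][0])
    # Weight H_in channel-wise, then sum the channels into a 2-D map
    # (the summation step is an assumption about the final aggregation)
    return [[sum(w[k] * h_in[k][i][j] for k in range(c)) for j in range(wd)]
            for i in range(h)]
```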
\section{Introduction}
\vspace{-0.3cm}
\label{introduction}
\input{introduction}
\vspace{-0.5cm}
\section{Method}
\vspace{-0.3cm}
\label{method}
\input{method}
\vspace{-0.3cm}
\section{Experiments}
\vspace{-0.3cm}
\label{experiments}
\input{experiment}
\vspace{-0.2cm}
\section{CONCLUSION}
\vspace{-0.2cm}
\label{Coucusion}
In this work, we propose a novel Sparse-GAN for anomaly detection, which detects anomalies in latent space and the feature in latent space is constrained by a novel Sparsity Regularizer Net.
The quantitative experimental results on a public dataset validate the feasibility of anomaly detection for OCT images and also validate the effectiveness of our method. Further, we also show the anomaly activation maps of the lesion to make our results more explainable.
\vspace{-0.2cm}
\section{Acknowledgments}
\vspace{-0.2cm}
The project is partially supported by the ShanghaiTech-Megavii Joint Lab, in part by the National Natural Science Foundation of China
(NSFC) under Grant No. 61932020, and by the ShanghaiTech-UnitedImaging Joint Lab, the Ningbo ``2025 S\&T Megaprojects'' and Ningbo 3315 Innovation Team grants. We also acknowledge the contribution of Weixin Luo and Wen Liu for their insightful comments regarding the reconstruction-based anomaly detection method.
\bibliographystyle{IEEEbib}
\section{Introduction}
Reactors are a copious source of antineutrinos, and there is a long history of exploiting them for measurements of fundamental neutrino properties. The first reactor antineutrino experiment was in fact the discovery of the neutrino by \cite{ReinesCowan}. More recently, the KamLAND experiment, \cite{KamLAND}, used the reactors of Japan to prove that the effects seen in solar experiments, \cite{SNOIII,SK,Borexino}, are due to a non-zero neutrino mass and the resulting oscillation phenomenon. To complete our understanding of neutrino oscillation, several experiments are using reactors to measure the only unknown mixing angle $\theta_{13}$. These experiments include Double Chooz, the experiment that motivates this work, \cite{DC2006}, as well as the Daya Bay, \cite{DB2007}, and RENO, \cite{RENO2010}, experiments. Reactors are also good sources for experiments searching for sterile neutrinos, \cite{scraam}, coherent nuclear scattering, \cite{CoherentScatter2004}, and the neutrino magnetic moment, \cite{MUNU2004}. In addition, there is also a large effort to use antineutrinos to monitor reactors for nonproliferation purposes, \cite{NonProlif2007}. All of these efforts require simulations for their particular reactor cores, and therefore need a code like DRAGON, \cite{DRAGON1994}.
Antineutrinos are produced when fission fragments $\beta$-decay in the reactor core. There are four primary fissile isotopes that produce antineutrinos in the energy range most relevant to experiment. They are $^{235}$U, $^{238}$U, $^{239}$Pu, and $^{241}$Pu, and they produce neutrinos with energies $\lesssim$~8.5~MeV. The flux of antineutrinos that arrives at a detector can be expressed as
\begin{equation}\label{theEquation}
\frac{d^2 n_{\bar{\nu}}}{dE_{\bar{\nu}} dt} = \sum_i^{\text{isotopes}} f_i(t) S_i(E_{\bar{\nu}})
\end{equation}
where $f_i(t)$ is the fission rate of one of the four primary fissile isotopes and $S_i(E_{\bar{\nu}})$ is the spectrum of neutrinos emitted per fission of that isotope. These spectra are extracted from $\beta$-spectrum measurements at the ILL research reactor by \cite{Spec1981,Spec1985,Spec1982,Spec1989}. It is the re-analysis of these spectra by \cite{Mueller2011} and \cite{HuberReactor} that has led to the ``Reactor Anomaly", \cite{Anomoly2011}. The DRAGON simulation provides the fission rates, $f_i(t)$, based on a detailed description of the reactor design and operating conditions during a given time period.
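Equation (\ref{theEquation}) is a weighted sum over the four fissile isotopes. The sketch below uses an exponential-of-polynomial spectrum parameterisation with indicative coefficient values and illustrative relative fission rates in place of DRAGON output; for quantitative work one would use the re-analysed spectra of \cite{Mueller2011,HuberReactor}:

```python
import math

# Illustrative relative fission rates (DRAGON supplies the real f_i(t))
fission_rates = {"U235": 0.58, "U238": 0.07, "Pu239": 0.30, "Pu241": 0.05}

# S_i(E) = exp(a0 + a1*E + a2*E^2); coefficients are indicative values only
spectrum_coeffs = {
    "U235":  (0.870, -0.160, -0.0910),
    "U238":  (0.976, -0.162, -0.0790),
    "Pu239": (0.896, -0.239, -0.0981),
    "Pu241": (0.793, -0.080, -0.1085),
}

def spectrum(iso, e_nu):
    """Antineutrinos per fission per MeV at energy e_nu (MeV)."""
    a0, a1, a2 = spectrum_coeffs[iso]
    return math.exp(a0 + a1 * e_nu + a2 * e_nu ** 2)

def flux(e_nu):
    # d^2 n / (dE dt) = sum_i f_i * S_i(E)
    return sum(f * spectrum(iso, e_nu) for iso, f in fission_rates.items())
```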
To understand the accuracy of the DRAGON predictions, it is necessary to compare the simulation results to data from nuclear reactors. In \cite{Takahama2001}, the Japanese Atomic Energy Research Institute has published the mass inventories of the key isotopes from spent fuel rods used in the Takahama-3 reactor core. This data has been used to benchmark many widely-used simulation codes. We can use this data to compare DRAGON's results to those from these other codes, and to understand the systematic uncertainties in these results. This work is a subset of the effort presented by \cite{Jones2011}.
\section{Reactor Basics}
There are two main types of nuclear reactors: boiling water reactors (BWR) and pressurized water reactors (PWR). Most modern reactors, including Takahama-3, are PWRs. The cores of such reactors are formed by fuel assemblies. Assemblies in turn are made up of fuel rods arranged in a grid. In this grid, some locations are used for instrumentation and some for shaping the neutron flux with special gadolinium rods. A fuel rod is typically 4~m long, and fashioned out of tiny cylindrical fuel pellets measuring 1~cm in diameter and 1~cm in height. The fuel pellet is composed of UO$_2$ enriched to a few percent in $^{235}$U by weight. The structure of the rod is maintained by Zircaloy cladding. Zircaloy is a zirconium alloy chosen for its high melting point.
The assembly is the elementary unit of fuel in the core. It arrives at the power plant assembled with fresh fuel, ready to be installed. Reactors refuel approximately once a year. The time between refueling is called a fuel cycle, and during each fuel cycle approximately a third of the fuel assemblies will be fresh. During each refueling, the assemblies are arranged very precisely to create an approximately uniform neutron flux across the core while extracting the most power out of the fuel.
\section{DRAGON}
Since the fuel assembly is the building block of the reactor core, it is logical that DRAGON simulates individual assemblies. To model the whole reactor core with DRAGON, one needs to assume that the spill-in of neutrons from neighboring assemblies matches the loss of neutrons from spill-out. The results presented here indicate that this is a good assumption. For those interested in modeling the full core more accurately, DRAGON results can be interfaced with the full-core simulation DONJON, \cite{DONJON2004}. The DRAGON code is open source\footnote{http://www.polymtl.ca/nucleaire/DRAGON/en/index.php}, making it attractive for use in antineutrino studies. In fact, small modifications are made in DRAGON for this work to allow the extraction of the fission rates for the antineutrino flux calculation.
Codes that simulate reactors must solve the neutron transport equations. The deterministic approach uses simplifications to allow the direct solution of these equations. In contrast, Monte Carlo based codes use the random generation of a large neutron sample to solve the equations statistically. DRAGON is a deterministic 2D lattice code. This means that it simulates the assembly by solving the neutron transport equation explicitly on a 2D lattice of fuel cells roughly corresponding to the fuel rods. For a given time step in the evolution of an assembly, the neutron transport equation is solved, and the solution is used to evolve the fuel composition according to the Bateman equations. The deterministic technique is fast compared to Monte Carlo based approaches. Symmetries in the assembly geometry can be exploited to further simplify and accelerate these calculations.
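The coupling between transport and depletion can be illustrated with a minimal Bateman pair, plutonium production by capture on $^{238}$U balanced against its own destruction, integrated with a simple Euler step (a toy sketch, not DRAGON's actual depletion solver):

```python
import math

def evolve_pu239(n8, prod, dest, t, steps=10000):
    """Euler integration of a minimal Bateman pair:
       dN9/dt = prod*N8 - dest*N9, with the U-238 inventory N8 held fixed."""
    dt = t / steps
    n9 = 0.0
    for _ in range(steps):
        n9 += dt * (prod * n8 - dest * n9)
    return n9

# The analytic solution (prod*N8/dest)*(1 - exp(-dest*t)) provides a check.
```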
\section{The Takahama Benchmark}
Takahama-3 is a PWR reactor located in Japan. Three fuel rods in two fresh assemblies were part of the study outlined in \cite{Takahama2001}. We focus on fuel rod SF97 because it has the largest exposure, or burnup, so any cumulative systematic effects will be maximized. The assembly containing SF97 was present in three consecutive fuel cycles of 385, 402, and 406 days, with 88 days and 62 days of cool-down time between cycles. At the end of the exposure, six sample disks, 0.5~mm in width, were removed at the positions indicated in Table~\ref{tab:zaxismod}. The samples were dissolved and a chemical separation was performed. Mass spectroscopy was used to determine the mass inventories of these samples. For the four isotopes of greatest interest to us, $^{235}$U, $^{238}$U, $^{239}$Pu, and $^{241}$Pu, the uncertainty is $<$0.1\% for uranium isotopes and $<$0.3\% for plutonium isotopes, \cite{Takahama2001}.
\begin{table}
\caption{\label{tab:zaxismod}The position of samples taken from rod SF97, with the corresponding moderator temperatures, and burnup values for that sample. Measurements are in
mm from the top of the rod. The bottom of the rod is at 3863 mm.}
\begin{center}
\begin{small}
\begin{tabular}{| c | c | c | c |}
\hline
\textbf{Sample} & \textbf{Position} & \textbf{Moderator Temp.} & \textbf{Burnup} \\
& [mm] & [K] & [GW-days/ton] \\
\hline
1 & 163 & 593.1 & 17.69 \\
\hline
2 & 350 & 592.8 & 30.73 \\
\hline
3 & 627 & 591.5 & 42.16\\
\hline
4 & 1839 & 575.8 & 47.03 \\
\hline
5 & 2926 & 559.1 & 47.25\\
\hline
6 & 3556 & 554.2 & 40.79 \\
\hline
\end{tabular}
\end{small}
\end{center}
\end{table}
Detailed information on the geometry of the assembly and the basic operation of the reactor is provided. This information encompasses most of the inputs needed to simulate the assembly containing SF97. These inputs are summarized in Table~\ref{tab:inputs}. However, there is insufficient information to simulate the full core, so by definition all Takahama simulations are assembly simulations. The fuel density used in simulations is an effective density to account for the variations in fuel pellet packing. This effective density must be less than 10.96~g/cm$^{3}$, the theoretical density of UO$_{2}$, but no more details are provided by \cite{Takahama2001}. For this reason, we use the value 10.07~g/cm$^3$ suggested by \cite{Roque2004}. Another popular assumption is 95\% of the theoretical density of UO$_2$.
\begin{figure*}
\centering
\includegraphics[width=80mm]{PowerHistory.eps}
\caption{The burnup of the six SF97 samples as determined using the $^{148}$Nd technique.} \label{power}
\end{figure*}
\begin{table}
\caption{\label{tab:inputs} Key parameters for the DRAGON simulation.}
\begin{center}
\begin{small}
\begin{tabular}{| c | c |}
\hline
Parameter & Value \\
\hline
Moderator Density & 0.72 g/cm$^3$ \\
\hline
Moderator Temperature & 600.0 K \\
\hline
Cladding Temperature & 600.0 K \\
\hline
Fuel Temperature & 900.0 K \\
\hline
Fuel Density & 10.07 g/cm$^3$ \\
\hline
Fuel Cell Mesh & 1.265 cm \\
\hline
Fuel Rod Radius & 0.4025 cm \\
\hline
Fuel Cladding Radius & 0.475 cm \\
\hline
Guide Tube Inner Radius & 0.573 cm \\
\hline
Guide Tube Outer Radius & 0.613 cm \\
\hline
\end{tabular}
\end{small}
\end{center}
\end{table}
The last important input to the simulation is the time-dependent power of the reactor core. The type of power information provided by the Takahama benchmark is unusual: simulations typically rely on the individual assembly power densities together with the full-core thermal power. Here, the destructive analysis allows the $^{148}$Nd concentration to be used to produce a detailed power history along the rod, as displayed in Fig.~\ref{power}. The $^{148}$Nd technique has a 3\% uncertainty, compared with uncertainties of $<$2\% for the full-core thermal power \cite{Zelimir2009}. The impact of this and other key uncertainties is discussed in the following section.
\section{Results for Rod SF97}
DRAGON uses the description of the fuel assembly from Table~\ref{tab:inputs} to simulate the Takahama assembly. This simulation is performed in time steps with different power and duration as outlined in Fig.~\ref{power}. To simplify the model, we assume a constant temperature along the rod of 600~K instead of the detailed temperature gradient shown in Table~\ref{tab:zaxismod}. We also assume a constant non-burnable concentration of boron in the moderator of 630~ppm. To increase the speed of the calculation, two gadolinium rods are added to the assembly bringing the total to 16. This creates an eight-fold symmetry, and allows the simulation of a 1/8 segment of the assembly. A nuclear cross-section library must be provided for the calculation. Our nominal simulation is performed using ENDF/B-VI, \cite{ENDFVI}, and we compare these results to those obtained with JENDL 3.2, \cite{JENDL}.
\begin{figure*}
\centering
\includegraphics[width=135mm]{Summary_SF97_B630.eps}
\includegraphics[width=135mm]{Sensitivity.eps}
\caption{The comparison of DRAGON simulation results to the measured mass inventories. Top: the DRAGON results are compared to other standard codes for reactor core simulation. Bottom: the critical inputs to the simulation are varied within their uncertainties. } \label{results}
\end{figure*}
The results of this DRAGON simulation are compared to the results from the destructive assay of the fuel rod. They are presented in Fig.~\ref{results} as a function of the sample's position along the rod. The codes used for comparison range from ORIGEN 2.1, a simple fuel depletion model~\cite{ORIGEN}, and SCALE4.4a, a code that uses an effective 1D geometry to model the neutron transport~\cite{ScaleHelios1998}, to the full Monte Carlo treatment employed by MONTEBURNS~\cite{Monteburns2009}. The code most similar to DRAGON is HELIOS~\cite{ScaleHelios1998}, another 2D deterministic lattice code.
We only present the results for $^{235}$U, $^{239}$Pu, and $^{241}$Pu in Fig.~\ref{results} because the depletion of $^{238}$U is of the same order as the uncertainty in the destructive assay, and therefore a useful comparison is not possible. For sample SF97-1, all codes show substantial deviations in the inventories of the plutonium isotopes. This sample is near the top of the rod, where the modeling of neutron leakage is difficult, leading to these large deviations. Neglecting sample SF97-1, we calculate the average deviation along the rod. For $^{235}$U, DRAGON's deviation is 3.2\%; for $^{239}$Pu and $^{241}$Pu, the deviations are -1.3\% and -3.9\%, respectively. For comparison, the range for the other codes is -2.2\% to 4.5\% for $^{235}$U, -0.6\% to 6.5\% for $^{239}$Pu, and -4.0\% to 3.4\% for $^{241}$Pu. The DRAGON results are of comparable accuracy to those from the other standard codes.
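The averages quoted above are straightforward to reproduce from per-sample inventories. A minimal sketch of that bookkeeping follows; the inventories below are placeholders for illustration, not the actual SF97 assay values.

```python
def percent_deviations(simulated, measured):
    """Per-sample relative deviation (simulation - measurement), in percent."""
    return [100.0 * (s - m) / m for s, m in zip(simulated, measured)]

def average_deviation(simulated, measured, skip=()):
    """Average percent deviation, optionally skipping samples by index
    (e.g. SF97-1, where neutron-leakage modeling is unreliable)."""
    devs = [d for i, d in enumerate(percent_deviations(simulated, measured))
            if i not in skip]
    return sum(devs) / len(devs)

# Placeholder inventories for six samples (arbitrary units):
measured = [10.0, 9.0, 8.0, 7.5, 7.4, 8.2]
simulated = [11.0, 9.2, 8.1, 7.7, 7.5, 8.4]
print(round(average_deviation(simulated, measured, skip={0}), 2))
```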
Most work benchmarking reactor simulation codes stops here, but this neglects the uncertainty in these simulated results. To understand the effect of the input uncertainties on the simulation, we vary key input parameters within their uncertainties. We confirm that these key parameters are the power, moderator temperature, boron concentration and fuel density, as originally outlined in \cite{Zelimir2009}.
We vary the power by 3\%, the uncertainty in the $^{148}$Nd method. We vary the moderator temperature by $\pm$50~K, the variation along the length of the rod, and the boron concentration by 10\%. The fuel density is varied by 1.5\%, but we note that a deviation as large as 3\% is needed to cover the range of values suggested for this effective density.
The largest effect for $^{235}$U is the power variation, a 6\% effect. Since $^{239}$Pu and $^{241}$Pu are the result of fast neutron reactions on $^{238}$U, these isotopes are sensitive to the total amount of fuel and therefore to the fuel density, a 2.5\% effect. The results of these variations are summarized in the bottom panel of Fig.~\ref{results}. If the larger 3\% fuel density uncertainty is assumed, then the DRAGON results are consistent with both the measured mass inventories and the results of the other codes. We also find that the choice of cross-section library induces a 1.2\% change in the $^{235}$U prediction, a 0.8\% change for $^{241}$Pu, and no change for $^{239}$Pu.
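The study above is a one-at-a-time (OAT) sensitivity scan: perturb each input by its uncertainty, rerun the simulation, and record the relative change in the prediction. A generic sketch of the procedure follows; the toy model here merely stands in for a full DRAGON run and carries no physical content.

```python
def oat_sensitivity(model, nominal, fractional_uncertainty):
    """One-at-a-time scan: perturb each input upward by its fractional
    uncertainty and return the percent change in the model output."""
    base = model(**nominal)
    effects = {}
    for name, frac in fractional_uncertainty.items():
        perturbed = dict(nominal)
        perturbed[name] = nominal[name] * (1.0 + frac)
        effects[name] = 100.0 * (model(**perturbed) - base) / base
    return effects

# Toy stand-in for the depletion calculation (NOT the DRAGON model):
def toy_inventory(power, fuel_density, boron_ppm):
    return power * fuel_density / (1.0 + 1e-4 * boron_ppm)

nominal = {"power": 1.0, "fuel_density": 10.07, "boron_ppm": 630.0}
uncert = {"power": 0.03, "fuel_density": 0.015, "boron_ppm": 0.10}
print(oat_sensitivity(toy_inventory, nominal, uncert))
```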
The Takahama benchmark focuses on the measured mass inventories. However, the quantities of interest for experiment are the fission rates, $f_i(t)$ in Eq.~\ref{theEquation}, used to calculate the antineutrino flux. The mass inventories are proportional to the integrated number of fissions and the instantaneous fission rates. Because the systematic uncertainties on the Takahama inputs are large and the provided power is in a form not available for most reactor cores, calculating systematic uncertainties on fission rates directly from the Takahama results is not possible. However, it is possible that an experiment could use these results to constrain the uncertainties in the flux calculation in its analysis. Regardless, this work has outlined a procedure for evaluating the major uncertainties and their effect on the fission rates. This work is applicable to all experiments that would use reactors as their source of antineutrinos.
\section{Conclusion}
The comparison of measured mass inventories from the Takahama-3 reactor to the DRAGON simulation demonstrates that the results obtained with DRAGON are accurate and of equal quality to other widely used codes for reactor modeling. We outline the procedure for evaluating the input uncertainties and their effect on the fission rates that are critical to the antineutrino flux calculation. There are many interesting measurements to be done with reactor antineutrinos, and DRAGON is an excellent tool that should be used to aid these measurements.
\begin{acknowledgments}
The author thanks the NSF for their generous support. This work is done in collaboration with the greater Double Chooz Reactor Group, and the author thanks them for their valuable input. The author especially thanks Christopher Jones for the DRAGON simulations presented here.
\end{acknowledgments}
\bigskip
\section{Introduction}
\label{sec:intro}
In many automatic speech recognition (ASR) applications, the audio signal originates
from an audio-visual source (e.g. YouTube video, TV broadcast).
The visual signal
provides both contextual information that may be related to the spoken content
(e.g. what is being discussed may be visually present on the
video) as well as information conveyed by the motion
of the speaker's mouth.
This can be integrated into conventional ASR, leading to \emph{audio-visual automatic speech recognition}~(AV-ASR)~\cite{Neti2000-ca,Gupta2017-lz,Makino2019-lm}.
In this work, we exploit the information provided by the
visible motion of the speaker's speech production articulators, using video crops centered around the speaker's mouth.
When this visual information alone is being used for recognition, the task
is known as \emph{lip reading}~\cite{Afouras2019-jp}.
Because the visual information is not affected by ambient noise, it has been
shown~\cite{Makino2019-lm} to improve the robustness of ASR systems under adverse conditions.
Lip-reading and AV-ASR require extracting visual features from the video stream.
This is often done
using a trainable 3D convolutional network operating both in time and spatially
over the image axes, similar to VGG features used in computer
vision~\cite{Simonyan2015-tq}, and has led to the design of state-of-the-art AV-ASR and lip-reading systems~\cite{Makino2019-lm,Afouras2018-gl}.
Recently, a series of breakthroughs were made
due to the development of the attention-based transformer architecture~\cite{Vaswani2017-di}.
It has been shown that transformer models are beneficial
to a variety of sequence-to-sequence learning tasks such as NLP~\cite{Devlin2018-gz, Radford2019-wr} and
speech recognition~\cite{Zhang2020-nr}.
This was adopted in computer vision as well, as illustrated by the work on
\emph{vision transformers} (ViT,~\cite{Dosovitskiy2020-nh}).
In a nutshell, the ViT model extracts non-overlapping 16x16 image patches,
embeds them with a linear transform, and runs a transformer.
That work emphasises that the ViT model has fewer inductive biases
than convolutional networks.
Therefore, given enough training data, ViT is able to learn features that
improve the performance on a variety of image processing tasks.
The ViT architecture was extended to video inputs in~\cite{Arnab2021-mq, Bertasius2021-ke}.
Hence, a number of hard vision tasks that were thought to require convolutions
were tackled using transformers.
In this work, we focus on the vision task of embedding the visual features for AV-ASR.
Our paper aims to test and explore the viability of using a \emph{fully transformer-based} architecture, where both the video and audio front-ends are transformer networks.
We believe that such architectures are important to research because
they have fewer inductive biases and should be able to better fit the training data.
Therefore, we propose to use a transformer video encoder to learn visual features for AV-ASR and lip reading, keeping the rest of the model architecture unchanged.
We compare the proposed approach with our baseline system that uses a VGG convolutional network to extract visual features, similar to the work done in~\cite{Makino2019-lm}.
We design a video transformer akin to~\cite{Arnab2021-mq}, but tailored specifically for an AV-ASR task.
We extract \emph{tubelets} (voxels, or in other words, 3D~patches) from the video input and embed each of them with a shared linear transform.
The transformer is applied to the embedded patches combined with the positional information.
\newpage
The contributions of this paper are:
\begin{itemize}
\item The design and use of a fully transformer-based architecture for visual feature extraction for lip reading and A/V ASR applications. The design choices for our model, as well as
a strong convolutional baseline are reported in the Section~\ref{sec:model};
\item The evaluation of our models against a competitive baseline and prior works (Section~\ref{sec:exp}).
In particular, we use a lip-reading task to assess the model's ability to encode video
(Section~\ref{ssec:exp:lip_reading}), and report
up to 8\% relative improvement over a convolutional baseline.
In addition, we evaluate the proposed model on a real-world AV-ASR task (Section~\ref{ssec:exp:av_asr}),
where the transformer performs at least as well as the convolutional front-end.
\section{Related Work}
\label{sec:related}
\paragraph*{Audio-Visual Automatic Speech Recognition.}
Audio-visual speech recognition~\cite{Neti2000-ca} made significant
progress thanks to the introduction of the end-to-end approaches~\cite{Assael2016-zg, Chung2016-bd, Makino2019-lm}.
Using deep neural networks and end-to-end training allowed these and other works
to tackle the audio-visual speech recognition ``in the wild'',
i.e. unconstrained open-world utterances.
Recently, an outstanding work~\cite{Ma2021-al} achieved the state of the art on the tasks
of audio-visual speech recognition and lip reading.
The work used a combination of the CTC~\cite{Graves2006-vf} and seq2seq~\cite{Bahdanau2014-vc}
losses with a conformer network~\cite{Gulati2020-jh}.
Our work matches this state of the art on the LRS3-TED eval set.
\paragraph*{Transformer-based Models for Video.}
Since the attention-based~\cite{Bahdanau2014-vc} transformer architecture was introduced in~\cite{Vaswani2017-di},
it quickly became a model of choice for natural language processing~\cite{Devlin2018-gz, Radford2019-wr}.
Later, it was employed for other sequential tasks, such as speech recognition~\cite{Zhang2020-nr}.
A highly influential paper~(ViT,~\cite{Dosovitskiy2020-nh}) was the first work demonstrating that
the transformer architecture performs at least as well as convolutions.
The visual transformer was extended to video~\cite{Arnab2021-mq, Sharir2021-mh, Neimark2021-md}, in particular for tasks such as video classification and action recognition.
In contrast to these works, this paper focuses on a sequence-to-sequence task.
While the sequence output provides a stronger training signal than a classification label,
the task is harder because the network must also learn the alignment.
\section{Model}
\label{sec:model}
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{avasr2.pdf}
\caption{An overview of end-to-end AV-ASR and lip reading models.
The video is encoded with a video front-end.
The visual features and the acoustic features are concatenated
and fed through the AV encoder to be used for the RNN-T loss.}
\label{fig:avasr}
\end{figure}
In this section, we present an overview of the shared ASR model architecture used in all of our experiments, as well as the two video front-ends we are comparing in the present work: a state-of-the art baseline based on purely (2+1)D convolutional blocks~\cite{Tran2017-nn}, and our proposed front-end, which employs self-attention on video patches through transformer blocks instead of convolutions.
\subsection{Common A/V ASR Model Architecture}
Besides the video front-end, we share the same model architecture illustrated in Figure~\ref{fig:avasr} in all our experiments, and we outline the main components next.
\
\noindent
{\bf Acoustic Features.} We employ mel filterbank features as acoustic features. The 16kHz-sampled input audio is framed with 25ms windows smoothed with the Hann window function, with steps of 10ms between consecutive frames. We compute energies in 80 mel filter bank channels at each frame, compressing their range with a $\log$ function. We then fold every 3 consecutive feature vectors together, yielding a 240 dimensional feature vector every 30ms, which corresponds to acoustic features at about 33.3Hz. We denote the input acoustic features tensor by ${\tens{A}} \in \mathbb{R}^{B\times T \times D_A}$, where $B$ is the batch size, $T$ is the number of time steps and $D_A$ ($=240$) the dimension of the acoustic features.
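The folding of three consecutive feature vectors reduces to a reshape. A minimal sketch of this step (with random data; this is an illustration, not the production feature extractor):

```python
import numpy as np

def fold_frames(features, k=3):
    """Stack every k consecutive feature vectors into one wider vector:
    (T, D) at 100Hz -> (T // k, k * D) at 100/k Hz."""
    t, d = features.shape
    t = (t // k) * k          # drop any trailing partial group
    return features[:t].reshape(t // k, k * d)

log_mel = np.random.randn(300, 80)   # 3s of 80-channel log-mel at 100Hz
stacked = fold_frames(log_mel)       # -> (100, 240) at ~33.3Hz
print(stacked.shape)
```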
\
\noindent
{\bf Visual Features.} The videos in our training set have frame rates ranging from around 23 to 30 fps, so in order to make the input uniform we synchronize the videos with the acoustic features by resampling the video with nearest neighbor interpolation in time at the acoustic features sample rate (33.3Hz). In the spatial dimension, we crop the full face tracks around the mouth region to generate images of resolution $128\times128$, with RGB channels normalized between $-1$ and $1$.
We then extract visual features from the synchronized mouth track video using a \emph{video front-end}, which outputs a tensor ${\tens{V}} \in \mathbb{R}^{B\times T\times D_v}$. An example of a video front-end is a 3D ConvNet \cite{LeCun1998-pe}.
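The nearest-neighbor synchronization amounts to index selection in time. A sketch under assumed frame rates (the helper below is our own illustration, not the actual pipeline code):

```python
import numpy as np

def resample_nearest(frames, src_fps, dst_fps):
    """Nearest-neighbor temporal resampling of a (T, H, W, C) frame stack."""
    t = frames.shape[0]
    duration = t / src_fps
    n_out = int(round(duration * dst_fps))
    # For each output step, pick the closest source frame index.
    src_idx = np.clip(np.round(np.arange(n_out) * src_fps / dst_fps).astype(int),
                      0, t - 1)
    return frames[src_idx]

video = np.zeros((75, 128, 128, 3))                   # 3s of video at 25 fps
synced = resample_nearest(video, 25.0, 100.0 / 3.0)   # -> ~33.3 fps
print(synced.shape[0])
```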
\
\noindent
{\bf Modality fusion.} We combine the visual and acoustic features, which have the same temporal resolution, by simple concatenation. The output of the combined audio-visual front-end is thus a tensor ${\tens{F}} = [{\tens{A}}; {\tens{V}}] \in \mathbb{R}^{B\times T\times (D_A + D_v)}$.
\
\noindent
{\bf Encoder.} The resulting merged audio-visual features ${\tens{F}}$ are then fed into the \emph{audio-visual encoder}, which consists of a standard $14$-layer Transformer encoder~\cite{Vaswani2017-di}.
\
\noindent
{\bf Decoder.} The output of the AV encoder is fed into an RNN-T~\cite{Graves2006-vf, Graves2012-ad} decoder, consisting of 2 LSTM layers of 2048 units with character tokens.
\subsection{Video Front-Ends}
\subsubsection{(2+1)D ConvNet Baseline}
\label{ssec:model:baseline}
We use a convolutional video front-end as the baseline for all our experiments,
building upon prior work~\cite{Makino2019-lm} with several modifications.
\cite{Makino2019-lm} uses a 3D~convnet to capture both spatial and temporal features of the video.
We improve the efficiency by decomposing each 3D~convolution into separate spatial and temporal kernels~\cite{Tran2017-nn,Xie2017-vb} (i.e.\ a kernel of dimension [3,~3,~3] becomes a kernel of dimension [1,~3,~3] followed by another kernel of dimension [3,~1,~1]).
Thus, we construct a 5-layer VGG-like network and decompose it into a 10-layer\footnote{
Kernel sizes are: 23, 64, 230, 128, 460, 256, 921, 512, 460, 512.} VGG~(2+1)D net.
We apply max pooling of size 2 after each pair of layers (with an exception for layer 4).
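A quick parameter count illustrates the saving from the (2+1)D factorization. This is our own illustration; the intermediate channel width is a free choice (here set equal to the output width), not necessarily the one used in the network.

```python
def conv3d_params(c_in, c_out, kt=3, kh=3, kw=3):
    """Weights in a full 3D convolution (biases ignored)."""
    return kt * kh * kw * c_in * c_out

def conv21d_params(c_in, c_out, c_mid=None, kt=3, kh=3, kw=3):
    """Weights after factorizing into a [1, kh, kw] spatial kernel
    followed by a [kt, 1, 1] temporal kernel."""
    if c_mid is None:
        c_mid = c_out
    return kh * kw * c_in * c_mid + kt * c_mid * c_out

print(conv3d_params(64, 128))    # 3*3*3*64*128 = 221184
print(conv21d_params(64, 128))   # 9*64*128 + 3*128*128 = 122880
```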
\subsubsection{Video Transformer Front-end}
\label{ssec:model:vit}
The proposed architecture is inspired by~\cite{Dosovitskiy2020-nh} and~\cite{Arnab2021-mq}.
We construct a transformer for the feature extraction from the mouth tracks.
The architecture considered in this work is depicted in the Figure~\ref{fig:avasr_vit}.
First, we extract 4-dimensional `tubelets' from the video input (a 3D version of the image patches).
The tubelets are flattened and fed through an affine projection.
These embeddings for each tubelet are combined with a positional embedding~\cite{Shaw2018-px} and fed
into an off-the-shelf transformer.
We use a 6-layer version of the transformer commonly used for the sequence tasks.
This transformer has 8-headed attention and 512-dimensional features.
Finally, we take the first output of the last layer of the transformer and forward it into
the rest of the AV-ASR or lip reading network.
We extract $32\times32\times8$ (H$\times$W$\times$T) tubelets at each time-step.
This corresponds to the temporal stride of 1.
The tubelets do not intersect in the spatial dimensions.
In total, we use 16 (4x4) tubelets at each time-step.
This set of tubelets is fed into the transformer.
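The tubelet extraction and embedding for one time-step can be sketched as follows. This is a simplified illustration of the front-end, not its implementation: the projection weights are random, and positional embeddings are omitted.

```python
import numpy as np

def extract_tubelets(clip, ph=32, pw=32):
    """Split a (T, H, W, C) clip into non-overlapping ph x pw spatial
    patches spanning all T frames, flattened to vectors."""
    t, h, w, c = clip.shape
    tubelets = []
    for i in range(0, h, ph):
        for j in range(0, w, pw):
            tubelets.append(clip[:, i:i + ph, j:j + pw, :].reshape(-1))
    return np.stack(tubelets)            # (num_tubelets, t * ph * pw * c)

clip = np.random.randn(8, 128, 128, 3)   # 8-frame window of the mouth track
tubelets = extract_tubelets(clip)        # 16 tubelets of dim 8*32*32*3 = 24576
proj = np.random.randn(tubelets.shape[1], 512) * 0.01
embedded = tubelets @ proj               # (16, 512), fed to the transformer
print(tubelets.shape, embedded.shape)
```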
\begin{figure*}
\centering
\includegraphics[width=\textwidth, trim=0 350 0 0,clip]{avasr_vit.pdf}
\caption{An overview of the proposed architecture for the video-encoding transformer.
The input video is split into `tubelets'.
The tubelets are embedded with a linear projection and fed into a transformer.}
\label{fig:avasr_vit}
\end{figure*}
\section{Experiments}
\label{sec:exp}
In this section, we first outline our training procedure, with a brief description of our datasets, followed by an evaluation of the proposed transformer visual frontend on two scenarios: First, we evaluate our model on the lip reading scenario, where only the visual signal is present. Then, we evaluate on the audio-visual scenario, where we use both the audio and video signals for speech recognition.
\
\noindent
{\bf Datasets.} In all our experiments we use a training dataset derived from around 90k hours of transcribed, public YouTube videos. We train on short video segments, limited to 512 frames (around 15 seconds), using the semi-supervised procedure originally proposed in \cite{Liao2013-em} and extended in \cite{Makino2019-lm,Shillingford2019-sc} to include video. We extract short segments where the force-aligned user uploaded transcription matches the transcriptions from a production quality ASR system. From these segments we then keep the ones in which the face tracks match the audio with high confidence. Please consult the original references for more details.
In order to obtain the development and test sets, we use a separate set of YouTube videos
transcribed by professionals, the YTDEV18 set.
In addition to YTDEV18, we evaluate on the LRS3-TED corpus~\cite{Afouras2018-pq},
running the same pipeline to extract the mouth tracks and the acoustic features.
We found that the LRS3-TED corpus is significantly easier than YTDEV18;
furthermore, the results on LRS3-TED tend to have high variance.
Despite these downsides, we use this public evaluation set to be able to compare
to prior publications.
\
\noindent
{\bf Training.} For all our models we use the following training schedule.
First, we increase the learning rate linearly up to $10^{-4}$ over the first 30,000 iterations
and keep it constant for the next 170,000 iterations.
Then, the learning rate is annealed exponentially down to $10^{-6}$ over the next 100,000 iterations.
We use a batch size of 1024 and the Adam~\cite{Kingma2014-dh} optimizer.
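The schedule above is piecewise: linear warmup, a constant plateau, then exponential decay. A sketch follows; the exact shape of the decay curve is our interpolation of the description.

```python
def learning_rate(step, peak=1e-4, floor=1e-6,
                  warmup=30_000, hold=170_000, decay=100_000):
    """Linear warmup to `peak`, constant hold, then exponential
    anneal down to `floor`."""
    if step < warmup:
        return peak * step / warmup
    if step < warmup + hold:
        return peak
    t = min((step - warmup - hold) / decay, 1.0)
    return peak * (floor / peak) ** t

for s in (0, 30_000, 200_000, 300_000):
    print(s, learning_rate(s))
```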
\subsection{Lip Reading}
\label{ssec:exp:lip_reading}
The lip reading models are trained on YouTube videos with audio omitted.
The results for the lip reading experiments are summarized in the Table~\ref{tab:lip_reading}.
We observe that the transformer based model ViT 3D outperforms our convolutional baseline.
Furthermore, our models outperform the prior published models~\cite{Afouras2018-gl, Ma2021-al},
although we should emphasize that our training data is different from these publications.
The ViT model shows a 4\% relative improvement over
the VGG~(2+1)D baseline on the YTDEV18 set and 8\% on LRS3-TED.
There is a 23\% relative improvement compared to~\cite{Makino2019-lm}
and 40\% compared to~\cite{Ma2021-al}.
Therefore, we conclude that the video transformer is able to extract rich features from the mouth tracks
and these features can be leveraged by the decoder.
\subsection{Audio-Visual Automatic Speech Recognition}
\label{ssec:exp:av_asr}
Inspired by the positive results for the lip reading, we continue to the AV-ASR task.
\paragraph*{First, we evaluate our AV-ASR models on the YTDEV18 set.}
The results are summarized in the Table~\ref{tab:av_asr}.
The audio is a much stronger signal for the recognition model.
Therefore, despite the fact that the ViT 3D is able to extract better visual features,
it performs just as well as the VGG baseline when the audio is clean.
In order to demonstrate the strength of the transformer video front-end,
we introduce noise into the eval data.
We artificially apply additive noise at levels of 20, 10, and 0~dB.
Finally, we introduce noise by overlapping a random short utterance ($<$5s)
at the start or end of the evaluated utterance.
We observe that the transformer model matches the performance of
the VGG baseline with low amounts of noise.
The ViT model outperforms the baseline in the case of high level of noise (0dB).
Finally, in the case of the overlapped audio, the ViT performance degrades.
We hypothesise that this is due to the more drastic domain shift between
the train data and the test data.
More specifically, the train data is augmented with MTR~\cite{Cui2015-ox},
therefore it is easy to generalize to the additive noise.
The first row of the table reports the performance of an audio-only model.
By removing the video input and the video front-end, we demonstrate
the importance of the visual information, which helps especially for highly noisy data (the 0~dB set).
\paragraph*{Second, we test our model on a public dataset LRS3-TED.}
This dataset consists of recordings of TED talks.
The LRS3-TED eval set is smaller than YTDEV18,
which usually leads to a higher variance in the results.
On the other hand, this eval set is considerably simpler than YTDEV18.
One of the reasons is that the audio quality is high and the video is clean,
high definition, and almost always centered on the speaker.
We use the ``0.4'' version of this dataset,
where the sets of speakers in the train and eval sets are disjoint
and we additionally ensure that the TED videos are excluded from our train set.
We report the results on the LRS3-TED in the Table~\ref{tab:ted_results}.
The proposed model outperforms the convolutional baseline.
However, due to the differences in the training and testing conditions,
the performance of our vanilla models (VGG~(2+1)D and AV~ViT) is inferior to
the previous results~\cite{Ma2021-al}.
Therefore, we fine-tune our best model on the 50-50 mix of
our training data and the LRS3-TED train set.
More precisely, we fine-tune the model for 10,000 steps with a mini-batch size of 4096.
We use a learning rate of $10^{-5}$ and anneal it exponentially down to~$5\times10^{-8}$.
This fine-tuning process leads to a result comparable to~\cite{Ma2021-al} matching
the state of the art.
\subsection{Computational Performance Analysis}
\label{ssec:exp:performance}
We run a series of benchmarks to measure the computational performance
of the proposed model compared to the baseline.
In order to assess the speed of training, we run the forward propagation
20 times on a mini-batch and report the average metric.
We use the standard TensorFlow utilities to measure the number of floating point operations
and the wall clock time to measure the latency on a TPU.
The results are summarized in the Table~\ref{tab:performance}.
Despite the fact that the transformer front-end requires significantly more
floating point operations, the increase in latency is small.
This can be attributed to the fact that many transformer computations
can be performed in parallel.
Another important point is that the transformer fits a much higher number of trainable parameters
into the same memory, since the memory footprint of the convolutional front-end is dominated by its large feature maps rather than by its weights.
\begin{table}
\centering
\caption{Lip-reading performance, \%WER.}
\begin{tabular}{lS[table-format=2.2]S[table-format=2.2]}
\toprule
\bfseries Model & {\bfseries YTDEV18} & {\bfseries LRS3-TED} \\
\midrule
TM-seq2seq~\cite{Afouras2018-gl} & {--} & 58.9 \\
ResNet+Conf~\cite{Ma2021-al} & {--} & 43.3 \\
RNN-T~\cite{Makino2019-lm} & 48.5 & 33.6 \\
\midrule
VGG (2+1)D & 40.5 & 28.2 \\
ViT 3D & 38.8 & 25.9 \\
\bottomrule
\end{tabular}
\label{tab:lip_reading}
\end{table}
\begin{table}
\centering
\caption{Audio-visual ASR performance, \%WER. $\infty$dB is the clean subset;
20dB, 10dB, 0dB -- data with artificial noise added;
``Overlap'' -- contains overlapped utterances.
}
\begin{tabular}{lccccc}
\toprule
\bfseries Model & $\infty$dB & 20dB & 10dB & 0dB & Overlap \\
\midrule
Audio-only & 16.5 & 17.0 & 19.8 & 42.9 & 35.0 \\
\midrule
VGG & 14.4 & 14.5 & 15.6 & 23.4 & \bfseries 31.2 \\
AV ViT & 14.4 & 14.6 & 15.6 & \bfseries 23.1 & 31.9 \\
\bottomrule
\end{tabular}
\label{tab:av_asr}
\end{table}
\begin{table}
\centering
\caption{AV-ASR performance on the LRS3-TED dataset.
Models denoted with $^*$ are trained on a large dataset of YouTube videos.
The last row corresponds to a fine-tuned AV ViT model.}
\label{tab:ted_results}
\begin{tabular}{lS[table-format=2.1]}
\toprule
\bfseries Model & {WER, \%} \\
\midrule
TM-CTC~\cite{Afouras2018-gl} & 27.7 \\
EG-s2s~\cite{Xu2020-sf} & 6.8 \\
RNN-T~\cite{Makino2019-lm}$^*$ & 4.5 \\
ResNet+Conf~\cite{Ma2021-al} & 2.3 \\
\midrule
VGG (2+1)D$^*$ & 3.9 \\
AV ViT$^*$ & 3.6 \\
\hspace{0.3cm} + fine-tune & 2.3 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Comparison of the computational performance.
We measure the speed in terms of the number of
floating point operations and in terms of wall clock time,
benchmarking 20 times and reporting the average.
Additionally, we report the number of parameters.
Notice that significantly more parameters fit in memory for the transformer model.}
\label{tab:performance}
\begin{tabular}{lcS[table-format=2.1]S[table-format=3.1]}
\toprule
\bfseries Video front-end & GFLOPS & {Params, M} & {Latency, ms} \\
\midrule
VGG (2+1)D & 299.3 & 7.0 & 120.7 \\
ViT & 520.7 & 37.2 & 162.3\\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
In this work we designed a transformer visual front-end for the AV-ASR task. To the best of our knowledge, the use of a purely transformer-based visual front-end in combination with a transformer encoder makes this the first fully-transformer end-to-end architecture for AV-ASR.
Our proposed model outperforms a strong baseline employing convolutions for the visual frontend on the lip-reading scenario, and matches its performance on the audio-visual ASR scenario.
Finally, we fine-tuned our model on the publicly available LRS3-TED dataset and were able to achieve a new state-of-the-art word error rate on the lip-reading scenario, while matching the performance of the best published model when both acoustic and visual signals are employed.
As a side note, we were unfortunately not able to evaluate on the test sets with BBC data (LRS2 and LRW), datasets which are widely reported in the literature but whose license prohibits their use by individual scientists and by researchers in industry.
The future work includes using a conformer front-end to further improve the performance.
Other possible directions include extending the architecture to online recognition
and multiple speakers.
For the online recognition, the video transformer would need to take into account
only the past frames.
Tackling multiple speakers would require employing an architecture that can handle multiple video inputs, such as~\cite{Braga2020-yw,Braga2021-lj}.
\section{Safe AI Principles}
\label{sec:safe_ai}
We are fully aware of the sensitive nature of the audio-visual speech recognition research
and other AI technologies used in this work.
Therefore, we ensure that this work abides by the Google AI Principles~\cite{noauthor_undated-lg}.
\bibliographystyle{IEEEbib}
\section{Introduction}
Inspired by the appearance of multiple zeta values in quantum field theories \cite{BK,CENSUS}, Kontsevich informally conjectured in 1997 that for every graph the number of zeros of the
graph polynomial (see Sect.\ \ref{s21} for a definition) over a finite field ${\mathbb F}_q$ is a polynomial in $q$ \cite{KONT}. This conjecture puzzled graph theorists for quite a while.
In 1998 Stanley proved that a dual version of the conjecture holds for complete as well as for `nearly complete' graphs \cite{STAN}. The result was extended in 2000 by
Chung and Yang \cite{CY}. On the other hand, in 1998 Stembridge verified the conjecture by the Maple-implementation of a reduction algorithm for all graphs
with at most 12 edges \cite{STEM}. However, in 2000 Belkale and Brosnan were able to disprove the conjecture (in fact the conjecture is maximally false in a certain sense) \cite{BB}.
Their proof was quite general in nature and in particular relied on graphs with an apex (a vertex connected to all other vertices). This is not compatible with physical
Feynman rules, which allow only low vertex degrees (3 or 4). It thus remained possible that the conjecture holds for the `physical' graphs where it originated.
Moreover, explicit counter-examples were not known.
We show that the first counter-examples to Kontsevich's conjecture are graphs with 14 edges (all graphs with $\leq13$ edges are of polynomial type).
Moreover, these graphs are `physical': Amongst all `primitive' graphs with 14 edges in $\phi^4$-theory we find six graphs for which the number $\bar N(q)$
of points in the projective complement of the graph hypersurface (the zero locus of the graph polynomial) is not a polynomial in $q$.
Five of the six counter-examples fall into one class that has a polynomial behavior $\bar N(q)=P_2(q)$ for $q=2^k$ and $\bar N(q)=P_{\neq 2}(q)$ for all $q\neq2^k$ with
$P_2\neq P_{\neq 2}$ (although the difference between the two polynomials is minimal [Eqs.\ (\ref{22a1}) -- (\ref{22b2})]).
Of particular interest are three of the five graphs because for these the physical period is known to be a weight 11 multiple zeta value [Eq.\ (\ref{23})].
The sixth counter-example is of a new kind. One obtains three mutually (slightly) different polynomials $\bar N(q)=P_i(q)$, $i=-1,0,1$ depending on the remainder of $q$ mod 3 [Eq.\ (\ref{22c})].
At 14 edges the breaking of Kontsevich's conjecture by $\phi^4$-graphs is soft in the sense that after eliminating the exceptional prime 2 (in the first case) or
after a quadratic field extension by cube roots of unity (leading to $q=1$ mod 3) $\bar N(q)$ becomes a polynomial in $q$.
At 16 edges we find two new classes of counter-examples. One resembles what we have found at 14 edges by providing three different polynomials this time depending
on the remainder of $q$ mod 4 [Eq.\ (\ref{22d})]. The result is of polynomial type after a quadratic field extension by fourth roots of unity (leading to $q=1$ mod 4).
The second class is of an entirely new type. A formula for $\bar N(q)$ can be given that entails a polynomial in $q$ together with the number of points in the complement of
a surface in ${\mathbb P}^3$ [Eqs.\ (\ref{22e1}) -- (\ref{22e6})]. (The surface has been identified as a singular K3. In fact it is a Kummer surface with respect to the elliptic curve
$y^2+xy=x^3-x^2-2x-1$, corresponding to the weight 2 level 49 newform \cite{BS}.)
This implies that the motive of the graph hypersurface is of non-mixed-Tate type.
The result was found by computer algebra using the reduction Thm.\ \ref{thm1} which is proved with geometrical
tools that lift to the Grothendieck ring of varieties $K_0($Var$_k)$. This allows us to state the result as a theorem in the Grothendieck ring:
The equivalence class of the graph hypersurface $X$ of graph Fig.\ 1(e) minus vertex 2 is given by the Lefschetz motive ${\mathbb L}=[{\mathbb A}^1]$ and the class $[F]$ of
the singular degree 4 surface in ${\mathbb P}^3$ given by the zero locus of the polynomial
\begin{equation*}
a^2b^2+a^2bc+a^2bd+a^2cd+ab^2c+abc^2+abcd+abd^2+ac^2d+acd^2+bc^2d+c^2d^2\!,
\end{equation*}
namely (Thm.\ \ref{thm2})
\begin{eqnarray*}
[X]&=&{\mathbb L}^{14}+{\mathbb L}^{13}+4{\mathbb L}^{12}+16{\mathbb L}^{11}-8{\mathbb L}^{10}-106{\mathbb L}^9+263{\mathbb L}^8-336{\mathbb L}^7\nonumber\\
&&\quad+\,316{\mathbb L}^6-199{\mathbb L}^5+45{\mathbb L}^4+19{\mathbb L}^3+[F]{\mathbb L}^2+{\mathbb L}+1.
\end{eqnarray*}
Although Kontsevich's conjecture does not hold in general, for physical graphs there is still a remarkable connection between $\bar N(q)$ and the quantum field theory period, Eq.\ (\ref{1a}).
In particular, in the case that $\bar N(q)$ is a polynomial in $q$ (after excluding exceptional primes and finite-degree field extensions) we are able to predict the weight
of the multiple zeta value from the $q^2$-coefficient of $\bar N$ (see Remark \ref{rem1}).
Likewise, a non mixed-Tate ${\mathbb L}^2$-coefficient $[F]$ in the above equation could indicate that the (yet unknown) period of the corresponding graph is not a multiple zeta value.
In an outlook we make the attempt to define a perturbative quantum field theory over ${\mathbb F}_q$.
We keep the algebraic structure of the Feynman-amplitudes, interpret the integrands as ${\mathbb F}_q$-valued functions and replace integrals by sums over ${\mathbb F}_q$.
We prove that this renders many amplitudes zero (Lemma \ref{lem4}). In bosonic theories with momentum-independent vertex-functions only superficially convergent amplitudes survive.
The perturbation series terminates for renormalizable and non-renormalizable quantum field theories.
Only super-renormalizable quantum field theories may provide infinite (formal) power series in the coupling.
\vskip1ex
\noindent{\it Acknowledgements.}
The author is grateful for very enlightening discussions with S. Bloch and F.C.S. Brown on the algebraic nature of the counter-examples.
The latter carefully read the manuscript and made many valuable suggestions.
More helpful comments are due to S. Rams, F. Knop and P. M\"uller. H. Frydrych provided the author with a C$++$ class that facilitated
the counting in ${\mathbb F}_4$ and ${\mathbb F}_8$. Last but not least the author is grateful to J.R. Stembridge for making his beautiful programs publicly available
and for the support of the Erlanger RRZE Computing Cluster with its friendly and helpful staff.
\pagebreak[4]
\section{Kontsevich's Conjecture}
\subsection{Fundamental Definitions and Identities}\label{s21}
Let $\Gamma$ be a connected graph, possibly with multiple edges and loops (edges connecting to a single vertex). We use $n$ for the number of edges of $\Gamma$.
The graph polynomial is a sum over all spanning trees $T$. Each spanning tree contributes by the product of variables corresponding to edges not in $T$,
\begin{equation}\label{1}
\Psi_\Gamma(x)=\sum_{T\,\rm span.\,tree}\;\prod_{e\not \in T}x_e.
\end{equation}
The graph polynomial was introduced by Kirchhoff who considered electric currents in networks with batteries of voltage $V_e$ and resistance $x_e$ at
each edge $e$ \cite{KIR}. The current through any edge is a rational function in the $x_e$ and the $V_e$ with
common denominator $\Psi_\Gamma(x)$. In a tree where no current can flow the graph polynomial is 1.
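The sum in Eq.\ (\ref{1}) can be enumerated directly for small graphs. The following Python sketch (all names are illustrative; the computations in this paper were done in Maple, not with this code) represents $\Psi_\Gamma$ as a set of squarefree monomials, each a frozenset of the edge indices outside a spanning tree:

```python
from itertools import combinations

def graph_polynomial(edges, nv):
    """Monomials of Psi_Gamma, Eq. (1): for each spanning tree T of the
    graph on nv vertices, record the set of edge indices NOT in T."""
    n = len(edges)
    monomials = set()
    for T in combinations(range(n), nv - 1):
        parent = list(range(nv))           # union-find to test acyclicity
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for i in T:
            ru, rv = find(edges[i][0]), find(edges[i][1])
            if ru == rv:
                acyclic = False            # T contains a cycle
                break
            parent[ru] = rv
        if acyclic:                        # nv-1 acyclic edges = spanning tree
            monomials.add(frozenset(range(n)) - frozenset(T))
    return monomials

# Example: the 3-cycle C_3 gives Psi_{C_3} = x_1 + x_2 + x_3
c3 = [(0, 1), (1, 2), (2, 0)]
print(sorted(sorted(m) for m in graph_polynomial(c3, 3)))  # [[0], [1], [2]]
```

The dual polynomial of Eq.\ (\ref{2}) is obtained the same way by recording `frozenset(T)` instead of its complement.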
The graph polynomial is related by a Cremona transformation $x\mapsto x^{-1}:=(x_e^{-1})_e$ to a dual polynomial built from the edges in $T$,
\begin{equation}\label{2}
\bar \Psi_\Gamma(x)=\sum_{T\,\rm span.\,tree}\;\prod_{e \in T}x_e\;=\;\Psi_\Gamma(x^{-1}) \prod_ex_e.
\end{equation}
The polynomial $\bar \Psi$ is dual to $\Psi$ in a geometrical sense: If the graph $\Gamma$ has a planar
embedding then the graph polynomial of a dual graph is the dual polynomial of the original graph.
Both polynomials are homogeneous and linear in their coordinates and for any simple graph we have
\begin{equation}\label{2a}
\Psi_\Gamma=\Psi_{\Gamma-1}x_1+\Psi_{\Gamma/1},\quad\bar\Psi_\Gamma=\Psi_{\Gamma/1}x_1+\Psi_{\Gamma-1},
\end{equation}
where $\Gamma-1$ means $\Gamma$ with edge 1 removed whereas $\Gamma/1$ is $\Gamma$ with edge 1 contracted (keeping double edges).
The degree of the graph polynomial equals the number $h_1$ of independent cycles of $\Gamma$ whereas $\deg(\bar\Psi)=n-h_1$.
In quantum field theory graph polynomials appear as denominators of period integrals
\begin{equation}\label{1a}
P_\Gamma=\int_0^\infty\cdots\int_0^\infty\frac{{\rm d}x_1\cdots{\rm d}x_{n-1}}{\Psi_\Gamma(x)^2|_{x_n=1}}
\end{equation}
for graphs with $n=2h_1$. The integral converges for graphs that are primitive for the Connes-Kreimer coproduct
which is a condition that can easily be checked for any given graph (see Lemma 5.1 and Prop.\ 5.2 of \cite{BEK}).
If the integral converges, the graph polynomial may be replaced by its dual due to a Cremona transformation.
A necessary and sufficient condition for a graph to be primitive is given in \cite{BEK} (Prop.\ 5.2).
The polynomials $\Psi$ and $\bar \Psi$ have very similar (dual) properties. To simplify notation we mainly restrict
ourselves to the graph polynomial although for graphs with many edges its dual is more tractable and was hence used in \cite{BB}, \cite{CY}, \cite{STAN}, and \cite{STEM}.
The graph polynomial (and also $\bar\Psi$) has the following basic property
\begin{lem}\label{lem1}
Let $\Psi=ax_ex_{e'}+bx_e+cx_{e'}+d$ for some variables $x_e$, $x_{e'}$ and polynomials $a,b,c,d$, then
\begin{equation}\label{3}
ad-bc=-\Delta_{e,e'}^2
\end{equation}
for a homogeneous polynomial $\Delta_{e,e'}$ which is linear in its variables.
\end{lem}
\begin{proof}
For the dual polynomial this is Theorem 2.8 in \cite{STEM}. The result for $\Psi$ follows by a Cremona transformation, Eq.\ (\ref{2}).
\end{proof}
As a simple example we may take a cycle $C_3$ with 3 edges.
\begin{ex}\label{ex1}
\begin{eqnarray*}
\Psi_{C_3}(x)&=&x_1+x_2+x_3,\quad \Delta_{1,2}=1,\\
\bar\Psi_{C_3}(x)&=&x_1x_2+x_1x_3+x_2x_3,\quad \Delta_{1,2}=x_3.
\end{eqnarray*}
The dual of $C_3$ is a triple edge with graph polynomial $\bar\Psi_{C_3}$ and dual polynomial $\Psi_{C_3}$.
\end{ex}
The zero locus of the graph polynomial defines an in general singular projective variety (the graph hypersurface) $X_\Gamma\subset{\mathbb P}^{n-1}$.
In this article we consider the projective space over the field ${\mathbb F}_q$ with $q$ elements.
Counting the number of points on $X_\Gamma$ means counting the number $N(\Psi_\Gamma)$ of zeros of $\Psi_\Gamma$. In this paper we prefer to (equivalently) count the points in the complement of the graph hypersurface.
In general, if $f_1,\ldots,f_m$ are homogeneous polynomials in ${\mathbb Z}[x_1,\ldots,x_n]$ and $N(f_1,\ldots,f_m)_{{\mathbb F}_q^n}$ is the number of their common zeros
in ${\mathbb F}_q^n$ we obtain for the number of points $\bar N$ in the projective complement of their zero locus
\begin{eqnarray}\label{5}
\bar N(f_1,\ldots,f_m)_{{\mathbb P}{\mathbb F}_q^{n-1}}&=&|\{x\in{\mathbb P}{\mathbb F}_q^{n-1}|\exists i:f_i(x)\neq 0\}|\nonumber\\
&=&\frac{q^n-N(f_1,\ldots,f_m)_{{\mathbb F}_q^n}}{q-1}.
\end{eqnarray}
If $\bar N$ is a polynomial in $q$ so is $N$ (and vice versa). We drop the subscript ${\mathbb P}{\mathbb F}_q^{n-1}$ if the context is clear.
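For prime $q$ (so that ${\mathbb F}_q$ is the integers mod $q$) Eq.\ (\ref{5}) can be checked by brute force. The sketch below uses illustrative names and represents each $f_i$ as a callable; it is not meant to be efficient:

```python
from itertools import product

def nbar(polys, n, q):
    """Points in the projective complement, Eq. (5): (q^n - N)/(q - 1).
    polys is a list of callables x -> integer, reduced mod the prime q."""
    zeros = sum(1 for x in product(range(q), repeat=n)
                if all(f(x) % q == 0 for f in polys))
    return (q**n - zeros) // (q - 1)

# Psi_{C_3} = x_1 + x_2 + x_3 defines a hyperplane, so Nbar = q^2
psi_c3 = [lambda x: x[0] + x[1] + x[2]]
for q in (2, 3, 5, 7):
    assert nbar(psi_c3, 3, q) == q**2
```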
The duality between $\Psi$ and $\bar\Psi$ leads to the following Lemma (which we will not use in the following).
\begin{lem}\label{lem2}
The number of points in the complement of the graph hypersurface can be obtained from the dual surface of the graph and its minors. Namely,
\begin{equation}\label{9}
\bar N(\Psi_\Gamma)=\sum_{T,S}(-1)^{|S|}\bar N(\bar\Psi_{\Gamma/T-S})
\end{equation}
where $T\sqcup S\subset E$ is a partition of an edge subset into a tree $T$ and an arbitrary edge set $S$ and $\Gamma/T-S$ is the contraction of $T$ in $\Gamma-S$.
\end{lem}
\begin{proof}
The proof is given in \cite{STEM} (Prop.\ 3.1) following an idea of \cite{STAN}.
\end{proof}
Calculating $\bar N(\Psi_\Gamma)$ is straightforward for small graphs. Following Ex.\ \ref{ex1} we find that $\Psi_{C_3}$ has $q^2$ zeros in ${\mathbb F}_q^3$ (defining a hyperplane).
Therefore $\bar N(\Psi_{C_3})=(q^3-q^2)/(q-1)=q^2$.
The same is true for $\bar\Psi_{C_3}$, but here the counting is slightly more difficult. A way to find the result is to observe that whenever $x_2+x_3\neq0$ we
can solve $\bar\Psi_{C_3}=0$ uniquely for $x_1$. This gives $q(q-1)$ zeros. If, on the other hand, $x_2+x_3=0$ we conclude that $x_2=-x_3=0$ while $x_1$ remains arbitrary.
This adds another $q$ solutions such that the total is $q^2$.
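The case distinction above ($x_2+x_3\neq0$ versus $x_2=x_3=0$) is easy to confirm numerically for prime $q$; the helper below is purely illustrative:

```python
from itertools import product

def affine_zeros_dual_c3(q):
    """Zeros of x_1*x_2 + x_1*x_3 + x_2*x_3 in F_q^3 (q prime)."""
    return sum(1 for x in product(range(q), repeat=3)
               if (x[0]*x[1] + x[0]*x[2] + x[1]*x[2]) % q == 0)

# q(q-1) solutions with x_2+x_3 != 0 plus q solutions with x_2 = x_3 = 0
for q in (2, 3, 5, 7, 11):
    assert affine_zeros_dual_c3(q) == q*(q - 1) + q   # = q^2, so Nbar = q^2
```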
A generalization of this method was the main tool in \cite{STEM}, basically augmented only
by the inclusion-exclusion formula $N(fg)=N(f)+N(g)-N(f,g)$. We add coordinate rescalings to the toolbox and obtain the following proposition.
\begin{prop}\label{prop1}
Let $f_1,\ldots,f_m={\bf f}={\bf f}_{1...m}$ be homogeneous polynomials in ${\mathbb Z}[x_1,\ldots,x_n]$. Then
\begin{enumerate}
\item
\begin{equation}\label{6}
\bar N(f_1f_2,{\bf f}_{3...m})=\bar N(f_1,{\bf f}_{3...m})+\bar N(f_2,{\bf f}_{3...m})-\bar N(f_1,f_2,{\bf f}_{3...m})|_{{\mathbb P}{\mathbb F}_q^{n-1}}.
\end{equation}
\item
Let $f_1=g_1x_1-g_0$, with $g_1,g_0\in{\mathbb Z}[x_2,\ldots ,x_n]$, $\bar h=h_kg_0^k+
h_{k-1}g_0^{k-1}g_1+\ldots +h_0g_1^k$ be the resultant of $f_1$ and $h=h_kx_1^k+h_{k-1}x_1^{k-1}+\ldots+h_0$, and $\hat h=h_kg_0$ if $k>0$ while $\hat h=h_0$ if $k=0$. We have ($\bar{\bf f}_{2...m}=(\bar f_2,\ldots,\bar f_m)$, $\hat{\bf f}_{2...m}=(\hat f_2,\ldots,\hat f_m)$)
\begin{equation}\label{7}
\bar N({\bf f})=\bar N(g_1,g_0,{\bf f}_{2...m})_{{\mathbb P}{\mathbb F}_q^{n-1}}+\bar N(\bar{\bf f}_{2...m})_{{\mathbb P}{\mathbb F}_q^{n-2}}-\bar N(g_1,\hat{\bf f}_{2...m})_{{\mathbb P}{\mathbb F}_q^{n-2}}.
\end{equation}
\item
If, for $I\subset \{1,\ldots ,n\}$ and polynomials $g,h\in{\mathbb Z}[(x_j)_{j\not\in I}]$, a coordinate transformation (re\-scal\-ing) $x_i\mapsto x_ig/h$ for
$i\in I$ maps ${\bf f}$ to $\tilde {\bf f}g^k/h^\ell$ with (possibly non-homogeneous) polynomials $\tilde {\bf f}$ and integers $k,\ell$ then ($\tilde{\bf f}=(\tilde f_1,\ldots,\tilde f_m)$),
\begin{equation}\label{8a}
\bar N({\bf f})_{{\mathbb F}_q^n}=\bar N(gh,{\bf f})_{{\mathbb F}_q^n}+\bar N(\tilde{\bf f})_{{\mathbb F}_q^n}-\bar N(gh,\tilde{\bf f})_{{\mathbb F}_q^n}.
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
Inclusion-exclusion, Prop.\ 1.3, and Remark 1.4 of \cite{STEM} together with Eq.\ (\ref{5}) lead
to (1) and (2). Equation (\ref{8a}) is another application of inclusion-exclusion.
On $gh\neq0$ the rescaling gives an isomorphism between the varieties defined by $\bf f$ and $\tilde{\bf f}$. Hence in ${\mathbb F}_q^n$ we have $N({\bf f})=N(gh,{\bf f})+
N(\tilde{\bf f}|_{gh\neq0})$ and $N(\tilde{\bf f}|_{gh\neq0})=N(\tilde{\bf f})-N(gh,\tilde{\bf f})$. Translation to complements leads to the result.
\end{proof}
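Identity (1) is ordinary inclusion-exclusion on complements and can be spot-checked by brute force over a prime field; the polynomials below are an arbitrary illustrative choice:

```python
from itertools import product

def nbar(polys, n, q):
    """Nbar over P F_q^{n-1}, Eq. (5), brute force for prime q."""
    zeros = sum(1 for x in product(range(q), repeat=n)
                if all(f(x) % q == 0 for f in polys))
    return (q**n - zeros) // (q - 1)

f1 = lambda x: x[0] + x[1]        # illustrative homogeneous polynomials
f2 = lambda x: x[1] + x[2]
f1f2 = lambda x: f1(x) * f2(x)
for q in (2, 3, 5, 7):            # Eq. (6) with m = 2
    assert nbar([f1f2], 3, q) == nbar([f1], 3, q) + nbar([f2], 3, q) - nbar([f1, f2], 3, q)
```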
In practice, one first tries to eliminate variables using (1) and (2). If no more progress is possible (3) is the next best chance to proceed (see the proof of Thm.\ \ref{thm2}).
In this case it may be convenient to work with non-homogeneous polynomials in affine space. One can always swap back to projective space by
\begin{equation}\label{8b}
\bar N({\bf f})_{{\mathbb P}{\mathbb F}_q^{n-1}}=\bar N({\bf f}|_{x_1=0})_{{\mathbb P}{\mathbb F}_q^{n-2}}+N({\bf f}|_{x_1=1})_{{\mathbb F}_q^{n-1}}.
\end{equation}
This equation is clear by geometry. Formally, it can be derived from Eq.\ (\ref{8a}) by
the transformation $x_i\mapsto x_ix_1$ for $i>1$ leading to $\tilde{\bf f}={\bf f}|_{x_1=1}$.
In the case of a single polynomial ${\bf f}=f$ we have the following corollary:
\begin{cor}\label{cor1}
Fix a variable $x_k$. Let $f=f_1 x_k+f_0$ be homogeneous, with $f_1,f_0\in{\mathbb Z}[x_1,\ldots ,\hat{x_k},\ldots ,x_n]$.
If $\deg(f)>1$ then
\begin{equation}\label{11}
\bar N(f)=q\bar N(f_1,f_0)_{{\mathbb P}{\mathbb F}_q^{n-2}}-\bar N(f_1)_{{\mathbb P}{\mathbb F}_q^{n-2}}.
\end{equation}
If $f$ is linear in all $x_k$ and $0<\deg(f)<n$ then $\bar N(f)\equiv0\mod q$.
\end{cor}
\begin{proof}
We use Eq.\ (\ref{7}) for $f_1=f$. Because $\deg(f)>1$ neither $f_1$ nor $f_0$ is a constant $\neq0$ in the first term on the right hand side.
Hence, a point in the complement of $f_1=f_0=0$ in ${\mathbb P}{\mathbb F}_q^{n-1}$ has coordinates $x$ with $(x_2,\ldots,x_n)\neq0$.
Thus $(x_2:\ldots:x_n)$ are coordinates in ${\mathbb P}{\mathbb F}_q^{n-2}$ whereas $x_1$ may assume arbitrary values in ${\mathbb F}_q$. The second term
in Eq.\ (\ref{7}) is absent for $m=1$ and we obtain Eq.\ (\ref{11}). Moreover, modulo $q$ we have $\bar N(f)=-\bar N(f_1)_{{\mathbb P}{\mathbb F}_q^{n-2}}$.
We may proceed until $f_1=h$ is linear yielding $\bar N(f)=\pm\bar N(h)_{{\mathbb P}{\mathbb F}_q^{n-\deg(f)}}=\pm q^{n-\deg(f)}\equiv0$ mod $q$, because $\deg(f)<n$.
\end{proof}
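Eq.\ (\ref{11}) can be tested on the dual polynomial of $C_3$, written as $\bar\Psi_{C_3}=(x_2+x_3)x_1+x_2x_3$ with $x_k=x_1$, so $f_1=x_2+x_3$ and $f_0=x_2x_3$ (brute force for prime $q$; names are illustrative):

```python
from itertools import product

def nbar(polys, n, q):
    """Nbar over P F_q^{n-1}, Eq. (5), brute force for prime q."""
    zeros = sum(1 for x in product(range(q), repeat=n)
                if all(f(x) % q == 0 for f in polys))
    return (q**n - zeros) // (q - 1)

f  = lambda x: (x[1] + x[2]) * x[0] + x[1] * x[2]   # dual Psi_{C_3}, degree 2
f1 = lambda y: y[0] + y[1]                          # coefficient of x_1
f0 = lambda y: y[0] * y[1]                          # constant part in x_1
for q in (2, 3, 5, 7):
    # Eq. (11): Nbar(f) = q*Nbar(f1, f0) - Nbar(f1), right side in P^1
    assert nbar([f], 3, q) == q * nbar([f1, f0], 2, q) - nbar([f1], 2, q)
```

Here $\bar N(f_1,f_0)=q+1$ and $\bar N(f_1)=q$, so the right hand side is $q(q+1)-q=q^2$, in agreement with the direct count above.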
In the case of two polynomials $f_1,f_2$ we obtain a result analogous to Lemma 2.3 in \cite{STEM}:
\begin{cor}\label{cor2}
Fix a variable $x_k$. Let $f_1=f_{11} x_k+f_{10}$, $f_2=f_{21} x_k+f_{20}$ be homogeneous, with $f_{11},f_{10},f_{21},f_{20},\in{\mathbb Z}[x_1,\ldots,\hat{x_k},\ldots ,x_n]$.
If $\deg(f_1)>1$, $\deg(f_2)>1$ then
\begin{equation}\label{12}
\bar N(f_1,f_2)=q\bar N(f_{11},f_{10},f_{21},f_{20})+\bar N(f_{11}f_{20}-f_{10}f_{21})-\bar N(f_{11},f_{21})|_{{\mathbb P}{\mathbb F}_q^{n-2}}.
\end{equation}
If $f_1,f_2$ are linear in all their variables, $f_{11}f_{20}-f_{10}f_{21}=\pm\Delta^2$, $\Delta\in{\mathbb Z}[x_1,\ldots,\hat{x_k},\ldots ,x_n]$
for all choices of $x_k$, $0<\deg(f_1)$, $0<\deg(f_2)$, and $\deg(f_1f_2)<2n-1$ then $\bar N(f_1,f_2)\equiv0\mod q$.
\end{cor}
\begin{proof}
Double use of Eq.\ (\ref{7}) and Eq.\ (\ref{6}) lead to
\begin{eqnarray}\label{12a}
\bar N(f_1,f_2)&=&\bar N(f_{11},f_{10},f_{21},f_{20})_{{\mathbb P}{\mathbb F}_q^{n-1}}\nonumber\\
&&\quad+\,\bar N(f_{11}f_{20}-f_{10}f_{21})_{{\mathbb P}{\mathbb F}_q^{n-2}}-\bar N(f_{11},f_{21})_{{\mathbb P}{\mathbb F}_q^{n-2}}.
\end{eqnarray}
If $\deg(f_1)>1$, $\deg(f_2)>1$ we obtain Eq.\ (\ref{12}) in a way analogous to the proof of the previous corollary.
If $f_{11}f_{20}-f_{10}f_{21}=\pm\Delta^2$ and $\deg(f_1f_2)<2n-1$ then $\deg(\Delta)<n-1$ and the second term on the right hand side is 0 mod $q$ by
Cor.\ \ref{cor1}. We obtain $\bar N(f_1,f_2)\equiv-\bar N(f_{11},f_{21})_{{\mathbb P}{\mathbb F}_q^{n-2}}$ mod $q$.
Without loss of generality we may assume that $d_1=\deg(f_1)<d_2=\deg(f_2)$ and continue eliminating variables until $f_{11}\in{\mathbb F}_q^\times$. In this situation
Eq.\ (\ref{12a}) leads to
\begin{equation}\label{12b}
\bar N(f_1,f_2)\equiv\pm[\bar N(1)_{{\mathbb P}{\mathbb F}_q^{n-d_1}}+\bar N(\Delta)_{{\mathbb P}{\mathbb F}_q^{n-d_1-1}}-\bar N(1)_{{\mathbb P}{\mathbb F}_q^{n-d_1-1}}]\mod q.
\end{equation}
Still $0<\deg(\Delta)=(d_2-d_1+1)/2<n-d_1$ such that the middle term vanishes modulo $q$. The first and the third term add up to $q^{n-d_1}\equiv0\mod q$
because $d_1<n-1$.
\end{proof}
We combine both corollaries with Lemma \ref{lem1} (Eq.\ (\ref{13}) is basically Thm.\ 2.4 in \cite{STEM}).
\begin{cor}\label{cor3}
Let $f=f_{11} x_1x_2+f_{10} x_1+f_{01} x_2+f_{00}$ be homogeneous with $f_{11}$, $f_{10}$, $f_{01}$, $f_{00}\in{\mathbb Z}[x_3,\ldots,x_n]$.
If $\deg(f)>2$ and $f_{11}f_{00}-f_{10}f_{01}=-\Delta_{12}^2$, $\Delta_{12}\in{\mathbb Z}[x_3,\ldots,x_n]$ then
\begin{eqnarray}\label{13}
\bar N(f)&=&q^2\bar N(f_{11},f_{10},f_{01},f_{00})\nonumber\\
&&\quad+\,q[\bar N(\Delta_{12})-\bar N(f_{11},f_{01})-\bar N(f_{11},f_{10})]+\bar N(f_{11})|_{{\mathbb P}{\mathbb F}_q^{n-3}}.
\end{eqnarray}
If $f$ is linear in all its variables, if the statement of Lemma \ref{lem1} holds for $f$ and any choice of variables $e,e'$, and if $0<\deg(f)<n-1$
then $\bar N(f)\equiv0\mod q^2$. In particular $\bar N(\Psi_\Gamma)\equiv0\mod q^2$ for every simple graph with $h_1>0$.\end{cor}
\begin{proof}
Eq.\ (\ref{13}) is a combination of Eqs.\ (\ref{11}) and (\ref{12}). The second statement is trivial for $\deg(f)=1$ and straightforward for $\deg(f)=2$ using Cors.\ \ref{cor1} and \ref{cor2}.
To show it for $\deg(f)>2$ we observe that modulo $q^2$ the second term on the right hand side of Eq.\ (\ref{13}) vanishes due to Cor.\ \ref{cor1} while
the third and fourth term vanish due to Cor.\ \ref{cor2}. We thus have $\bar N(f)\equiv\bar N(f_{11})_{{\mathbb P}{\mathbb F}_q^{n-3}}$ mod $q^2$
and by iteration we reduce the statement to $\deg(f)=2$. Any simple non-tree graph fulfills the conditions of the corollary by Lemma \ref{lem1}.
\end{proof}
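Combining a spanning-tree enumeration with brute-force counting confirms the last statement of the corollary for small simple graphs, e.g.\ $K_4$ (prime $q$ only; all names are illustrative and unrelated to the Maple code used for the actual computations):

```python
from itertools import combinations, product

def psi_monomials(edges, nv):
    """Monomials of Psi_Gamma, Eq. (1), as sets of edge indices."""
    n, mons = len(edges), []
    for T in combinations(range(n), nv - 1):
        parent = list(range(nv))           # union-find acyclicity test
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for i in T:
            ru, rv = find(edges[i][0]), find(edges[i][1])
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            mons.append(set(range(n)) - set(T))
    return mons

def nbar_psi(edges, nv, q):
    """Nbar(Psi_Gamma) over P F_q^{n-1}, Eq. (5), brute force for prime q."""
    mons, n = psi_monomials(edges, nv), len(edges)
    zeros = 0
    for x in product(range(q), repeat=n):
        s = 0
        for m in mons:
            p = 1
            for i in m:
                p = (p * x[i]) % q
            s += p
        zeros += (s % q == 0)
    return (q**n - zeros) // (q - 1)

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
for q in (2, 3):
    assert nbar_psi(K4, 4, q) % q**2 == 0   # Cor. 3 for the simple graph K_4
```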
Now, we formulate the main theorem of this subsection
\begin{thm}\label{thm1}
Let $\Gamma$ be a simple graph with vertex-connectivity $\geq 2$. Then
\begin{eqnarray}\label{13c}
\bar N(\Psi_\Gamma)&=&q^{n-1}+O(q^{n-3}),\\
\bar N(\Psi_\Gamma)&\equiv&0\mod q^2.\nonumber
\end{eqnarray}
If $\Gamma$ has a 3-valent vertex $v$ with attached edges $1$,$2$,$3$ then
($\Gamma-1/23$ means edge $1$ removed, edges $2$,$3$ contracted, etc.)
\begin{eqnarray}\label{13b}
\bar N(\Psi_\Gamma)&=&q^3\bar N(\Psi_{\Gamma-123},\Psi_{\Gamma-1/23},\Psi_{\Gamma-2/13},\Psi_{\Gamma/123})\\
&&-\,q^2\bar N(\Psi_{\Gamma-123},\Psi_{\Gamma-1/23},\Psi_{\Gamma-2/13})|_{{\mathbb P}{\mathbb F}_q^{n-4}}.\nonumber
\end{eqnarray}
This expresses the number of points in the projective complement of the graph hypersurface in terms of graph polynomials of
minors. Alternatively, with
\begin{eqnarray}\label{14}
\Delta_{12}&=&\Psi_{\Gamma-123}x_3+\Delta,\\\label{14b}
\Delta&=&\frac{\Psi_{\Gamma-1/23}+\Psi_{\Gamma-2/13}-\Psi_{\Gamma-3/12}}{2}\in{\mathbb Z}[x_4,\ldots,x_n],
\end{eqnarray}
we have
\begin{equation}\label{15}
\bar N(\Psi_\Gamma)=q\bar N(\Psi_{\Gamma/3})_{{\mathbb P}{\mathbb F}_q^{n-2}}+q\bar N(\Delta_{12})_{{\mathbb P}{\mathbb F}_q^{n-3}}-q^2\bar N(\Delta)_{{\mathbb P}{\mathbb F}_q^{n-4}}.
\end{equation}
In particular,
\begin{equation}\label{15a}
\bar N(\Psi_\Gamma)\equiv q\bar N(\Delta_{12})_{{\mathbb P}{\mathbb F}_q^{n-3}}\equiv q^2\bar N(\Psi_{\Gamma-123},\Delta)_{{\mathbb P}{\mathbb F}_q^{n-4}} \mod q^3.
\end{equation}
If, additionally, there exists an edge $4$ such that edges $2$,$3$,$4$ form a triangle we have
\begin{equation}\label{14a}
\delta=\frac{\Psi_{\Gamma-123/4}+\Psi_{\Gamma-24/13}-\Psi_{\Gamma-34/12}}{2}\in{\mathbb Z}[x_5,\ldots,x_n]
\end{equation}
and
\begin{eqnarray}\label{15b}
\bar N(\Psi_\Gamma)&\!\!=&\!q(q-2)\bar N(\Psi_{\Gamma-2/3})|_{{\mathbb P}{\mathbb F}_q^{n-3}}\nonumber\\
&&\!+\,q(q-1)[\bar N(\Psi_{\Gamma-123})+\bar N(\Psi_{\Gamma-24/3})]+q^2\bar N(\Psi_{\Gamma-2/34})|_{{\mathbb P}{\mathbb F}_q^{n-4}}\\
&&\!+\,q^2[\bar N(\Psi_{\Gamma-1234})+\bar N(\Psi_{\Gamma-123/4})\nonumber\\
&&\quad\;-\,\bar N(\Psi_{\Gamma-1234},\delta)-\bar N(\Psi_{\Gamma-123/4},\delta)-(q-2)\bar N(\delta)]|_{{\mathbb P}{\mathbb F}_q^{n-5}}.\nonumber
\end{eqnarray}
\end{thm}
\begin{proof}
From the definition it is clear that the graph polynomial can factorize only if the graph has vertex-connectivity $\leq1$.
Hence, $\Psi_\Gamma$ is irreducible and $N(\Psi_\Gamma)_{{\mathbb F}_q^n}=q^{n-1}+O(q^{n-2})$.
For the projective complement we obtain Eq.\ (\ref{13c}) while $\bar N(\Psi_\Gamma)\equiv0$ mod $q^2$
is Cor.\ \ref{cor3}.
Every spanning tree has to reach $v$. Hence $\Psi_\Gamma$ cannot have a term proportional to $x_1x_2x_3$.
Similarly, the coefficients of $x_1x_2$, $x_1x_3$, and $x_2x_3$ have to be equal to the graph polynomial of $\Gamma-123$.
Hence $\Psi_\Gamma$ has the following shape
\begin{equation*}
\Psi_{\Gamma-123}(x_1x_2\!+\!x_1x_3\!+\!x_2x_3)+\Psi_{\Gamma-1/23}x_1+\Psi_{\Gamma-2/13}x_2+\Psi_{\Gamma-3/12}x_3+\Psi_{\Gamma/123},
\end{equation*}
From this we calculate $\Delta_{12}$ to
\begin{equation*}
\Delta_{12}^2=(\Psi_{\Gamma-123}x_3+\Delta)^2-\Delta^2+\Psi_{\Gamma-1/23}\Psi_{\Gamma-2/13}-\Psi_{\Gamma-123}\Psi_{\Gamma/123},
\end{equation*}
with Eq.\ (\ref{14b}) for $\Delta$ and non-zero $\Psi_{\Gamma-123}$ (because $\Gamma$ is simple, two-connected).
The left hand side of the above equation is a square by Lemma \ref{lem1} which leads to Eq.\ (\ref{14}) plus
\begin{equation}\label{18}
\Psi_{\Gamma-123}\Psi_{\Gamma/123}-\Psi_{\Gamma-1/23}\Psi_{\Gamma-2/13}=-\Delta^2
\end{equation}
(which is Eq.\ (\ref{3}) for $\Gamma/3$). This leads to
\begin{equation}\label{19}
\Psi_{\Gamma-1/23}\Psi_{\Gamma-2/13}\equiv\Delta^2 \mod \Psi_{\Gamma-123}.
\end{equation}
Substituting Eq.\ (\ref{14b}) into four times Eq.\ (\ref{18}) leads to
\begin{equation}\label{20}
\Psi_{\Gamma-3/12}\equiv\Psi_{\Gamma-2/13} \mod (\Psi_{\Gamma-123},\Psi_{\Gamma-1/23}),
\end{equation}
where $(\Psi_{\Gamma-123},\Psi_{\Gamma-1/23})$ is the ideal generated by $\Psi_{\Gamma-123}$ and $\Psi_{\Gamma-1/23}$.
A straightforward calculation eliminating $x_1,x_2,x_3$ using Eq.\ (\ref{13}) and Prop.\ \ref{prop1}
(one may modify the Maple program available on the homepage of J.R. Stembridge to do this) leads to
\begin{eqnarray*}
\bar N(\Psi_\Gamma)&=&q^3\bar N(\Psi_{\Gamma-123},\Psi_{\Gamma-1/23},\Psi_{\Gamma-2/13},\Psi_{\Gamma-3/12},\Psi_{\Gamma/123})\\
&&+\,q^2\big[-\bar N(\Psi_{\Gamma-123},\Psi_{\Gamma-1/23},\Psi_{\Gamma-2/13},\Psi_{\Gamma-3/12})\\
&&\quad\quad+\,\bar N(\Psi_{\Gamma-123},\Psi_{\Gamma-1/23},\Psi_{\Gamma-2/13})+\bar N(\Psi_{\Gamma-123},\Delta)\\
&&\quad\quad-\,\bar N(\Psi_{\Gamma-123},\Psi_{\Gamma-2/13})-\bar N(\Psi_{\Gamma-123},\Psi_{\Gamma-1/23})\big]\Big|_{{\mathbb P}{\mathbb F}_q^{n-4}}.
\end{eqnarray*}
From this equation one may drop $\Psi_{\Gamma-3/12}$ by Eq.\ (\ref{20}). Now, replacing $\Delta$ by $\Delta^2$ and Eq.\ (\ref{19}) with inclusion-exclusion (\ref{6})
proves Eq.\ (\ref{13b}).
Alternatively, we may use Eqs.\ (\ref{11}) and (\ref{13}) together with Eq.\ (\ref{14}) to obtain Eq.\ (\ref{15}).
By Cor.\ \ref{cor3} we have $\bar N(\Psi_{\Gamma/3})$, $\bar N(\Psi_{\Gamma-123})\equiv0$ mod $q^2$ and by Cor.\ \ref{cor1} we have $\bar N(\Delta)\equiv0 $ mod $q$ which makes
Eq.\ (\ref{15a}) a consequence of Eqs.\ (\ref{11}) and (\ref{15}).
The claim in case of a triangle 2,3,4 follows in an analogous way from Eq.\ (\ref{13b}): With the identities
\begin{eqnarray*}
&&\Psi_{\Gamma-123}=\Psi_{\Gamma-1234}x_4+\Psi_{\Gamma-123/4},\quad\Psi_{\Gamma-1/23}=\Psi_{\Gamma-123/4}x_4,\\
&&\Psi_{\Gamma-2/13}=\Psi_{\Gamma-24/13}x_4+\Psi_{\Gamma-2/134},\quad\Psi_{\Gamma/123}=\Psi_{\Gamma-2/134}x_4,
\end{eqnarray*}
which follow from the definition of the graph polynomial, we use Prop.\ \ref{prop1} and
\begin{equation*}
\Psi_{\Gamma-1234}\Psi_{\Gamma-2/134}-\Psi_{\Gamma-123/4}\Psi_{\Gamma-24/13}=-\delta^2
\end{equation*}
which is analogous to Eq.\ (\ref{18}) to prove Eq.\ (\ref{15b}).
\end{proof}
Every primitive $\phi^4$-graph comes from deleting a vertex in a 4-regular graph. Hence, for these graphs Eqs.\ (\ref{13b}) -- (\ref{15a}) are always applicable.
In some cases a 3-valent vertex is attached to a triangle. In this case it is best to apply Prop.\ \ref{prop1} to Eq.\ (\ref{15b}) although
this equation is somewhat lengthy (see Thm.\ \ref{thm2}).
Note that Eq.\ (\ref{15a}) gives quick access to $\bar N(\Psi_\Gamma)$ mod $q^3$. In particular, we have the following corollary.
\begin{cor}\label{cor4}
Let $\Gamma$ be a simple graph with $n$ edges and vertex-connectivity $\geq2$.
If $\Gamma$ has a 3-valent vertex and $2h_1(\Gamma)<n$ then $\bar N(\Psi_\Gamma)\equiv0\mod q^3$.
\end{cor}
\begin{proof}
We have $\deg(\Psi_{\Gamma-123})=h_1-2$ and $\deg(\Delta)=h_1-1$ in Eq.\ (\ref{15a}), hence $\deg(\Psi_{\Gamma-123})+\deg(\Delta)<n-3$.
By the Ax-Katz theorem \cite{AX}, \cite{KATZ} we obtain $N(\Psi_{\Gamma-123},\Delta)_{{\mathbb F}_q^{n-3}}\equiv0\mod q$ such that the corollary follows from Eq.\ (\ref{5}).
\end{proof}
If $2h_1=n$ we will be able to trace $\bar N$ mod $q^3$ by following a single term in the reduction algorithm (details will be published in \cite{BS}):
Because in the rightmost term of Eq.\ (\ref{15a}) the sum over the degrees equals
the number of variables we can apply Eq.\ (\ref{12}) while keeping only the middle term on the right hand side. Modulo $q$ the first term vanishes
trivially whereas the third term vanishes due to the Ax-Katz theorem. As long as $f_{11}f_{20}-f_{10}f_{21}$ factorizes we can continue using Eq.\ (\ref{12}) which
leads to the `denominator reduction' method in \cite{BROW}, \cite{BY}, see Eq.\ (\ref{20a}).
In the next subsection we will see that $\bar N(\Psi_\Gamma)$ mod $q^3$ starts to become non-polynomial for graphs with 14 edges (and $2h_1=n$) whereas higher powers
of $q$ stay polynomial (see Result \ref{res3}). On the other hand $\bar N$ mod $q^3$ is of interest in quantum field theory. It gives access to the most singular part
of the graph polynomial delivering the maximum weight periods and we expect the (relative) period Eq.\ (\ref{1a}) amongst those.
Moreover, $\Delta_{12}^2$ [as in Eq.\ (\ref{15a})] is the denominator of the integrand after integrating over $x_1$ and $x_2$ \cite{BROW}.
For graphs that originate from $\phi^4$-theory we make the following observations:
\begin{remark}\label{rem1}
Let $\Gamma$ be a 4-regular graph minus one vertex, such that the integral Eq.\ (\ref{1a}) converges.
Let $c_2(f,q)\equiv\bar N(f)/q^2\mod q$ for $f$ the graph polynomial $\Psi_\Gamma$ or its dual $\bar\Psi_\Gamma$. We make the following empirical observations:
\begin{enumerate}
\item $c_2(\Psi_\Gamma,q)\equiv c_2(\bar\Psi_\Gamma,q)\mod q$.
\item If $\Gamma'$ is a graph with period $P_{\Gamma'}=P_\Gamma$ [Eq.\ (\ref{1a})] then $c_2(\Psi_\Gamma,q)\equiv c_2(\Psi_{\Gamma'},q)\mod q$.
\item If $c_2(\Psi_\Gamma,q)=c_2$ is constant in $q$ then $c_2=0$ or $-1$.
\item If $c_2(\Psi_\Gamma,p^k)$ becomes a constant $c_2$ after a finite-degree field extension and excluding a finite set of primes $p$ then $c_2=0$ or $c_2=-1$.
\item If $c_2=-1$ (even in the sense of {\rm (4)}) and if the period is a multiple zeta value then it has weight $n-3$, with $n$ the number of edges of $\Gamma$.
\item If $c_2=0$ and if the period is a multiple zeta value then it may mix weights. The maximum weight of the period is $\leq n-4$.
\item One has $c_2(\Psi_\Gamma,q)\equiv\bar N(\Delta_{e,e'})/q\mod q$ for any two edges $e,e'$ in $\Gamma$ (see Eq.\ (\ref{3}) for the definition of $\Delta$).
An analogous equivalence holds for the dual graph polynomial $\bar\Psi_\Gamma$ which is found to give the same $c_2\mod q$ by observation {\rm (1)}.
\end{enumerate}
\end{remark}
\begin{proof}[Proof of the first statement in {\rm (7)}]
By the arguments in the paragraph following Cor.\ \ref{cor4} we can eliminate variables starting from $\bar N(\Delta_{e,e'})$ keeping
only one term mod $q^2$. In \cite{BROW} it is proved that one can always proceed until five variables (including $e,e'$) are eliminated
leading to the `5-invariant' of the graph. This 5-invariant does not depend on the order in which the variables are eliminated.
This shows that $\bar N(\Delta_{e,e'})=\bar N(\Delta_{f,f'})$ mod $q^2$ for any four edges $e,e',f,f'$ in $\Gamma$. The equivalence in (7) follows from
Eq.\ (\ref{15a}) and the fact that $\Gamma$ has four 3-valent vertices. In fact, every `primitive' graph has at least four 3-valent vertices such that
observation (7) holds for those graphs in general.
\end{proof}
By the proven part of (7) we know that `denominator reduction' \cite{BROW} of a primitive graph $\Gamma$ gives $\bar N(\Gamma)$ mod $q^3$:
If a sequence of edges leads to a reduced denominator $\psi$ in $m$ (non-reduced) variables we have
\begin{eqnarray}\label{20a}
\bar N(\Psi)&\equiv&(-1)^m\bar N(\psi)_{{\mathbb P}{\mathbb F}_q^{m-1}},\hbox{ if }m\geq1,\\
\bar N(\Psi)&\equiv&-\bar N(\psi)\quad\quad\quad\quad\;,\hbox{ if }\psi\in{\mathbb Z},\nonumber
\end{eqnarray}
where $\bar N(z)$ for $z\in{\mathbb Z}$ is 1 if gcd$(z,q)=1$ and 0 otherwise. This explains observations (3) and (4) for `denominator reducible' graphs (for which there
exists a sequence of edges, such that $\psi\in{\mathbb Z}$). In this situation observations (5) and (6) are proved in \cite{BROW}.
Moreover, for a class of not too complicated graphs (6) can be explained by means of \'etale cohomology and Lefschetz's fixed-point formula \cite{DORY}.
Of particular interest will be the case when $\bar N$ is a polynomial in $q$. In this situation we have the following statement.
\begin{lem}\label{lem3}
For homogeneous $f_1,\ldots,f_m$ let $\bar N(f_1,\ldots,f_m)_{{\mathbb P}{\mathbb F}_q^{n-1}}=c_0+c_1q+\ldots+c_{n-1}q^{n-1}$ be a polynomial in $q$.
We obtain for the local zeta-function $Z_q(t)$ of the projective zero locus $f_1=\ldots=f_m=0$,
\begin{equation}\label{10}
Z_q(t)=\prod_{k=0}^{n-1}(1-q^kt)^{c_k-1}.
\end{equation}
By rationality of $Z_q$ \cite{DWOR} we see that all coefficients $c_k$ are integers, hence $\bar N\in{\mathbb Z}[q]$.
\end{lem}
\begin{proof}
A straight forward calculation using Eq.\ (\ref{5}) shows that $Z_q(t)=\exp(\sum_{k=1}^\infty N_{{\mathbb P}{\mathbb F}_{q^k}^{n-1}}t^k/k)$ leads to Eq.\ (\ref{10}).
\end{proof}
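As a consistency check of Eq.\ (\ref{10}), take $f=x_1$ in ${\mathbb P}^2$, so $\bar N=q^2$, i.e.\ $(c_0,c_1,c_2)=(0,0,1)$; the zero locus is a ${\mathbb P}^1$ with $Z_q(t)=(1-t)^{-1}(1-qt)^{-1}$. The pure-Python sketch below (illustrative names, exact rational arithmetic) compares the two power series coefficientwise:

```python
from fractions import Fraction

q, order = 3, 8
c = (0, 0, 1)                 # Nbar(x_1) = q^2 over P^2
# points on the zero locus over F_{q^k}: sum_j (1 - c_j) q^{jk} (= q^k + 1 here)
N = lambda k: sum((1 - cj) * q**(j * k) for j, cj in enumerate(c))

# left side: exp(sum_k N(k) t^k / k), expanded via n*a_n = sum_k k*b_k*a_{n-k}
b = [Fraction(0)] + [Fraction(N(k), k) for k in range(1, order)]
a = [Fraction(1)]
for n in range(1, order):
    a.append(sum(k * b[k] * a[n - k] for k in range(1, n + 1)) / n)

# right side: prod_j (1 - q^j t)^(c_j - 1) = 1/((1-t)(1-qt)); coeff of t^n
rhs = [Fraction(sum(q**i for i in range(n + 1))) for n in range(order)]
assert a == rhs
```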
We end this subsection with the following remark that will allow us to lift some results to general fields (see Thm.\ \ref{thm2}).
\begin{remark}\label{rem2}
All the results of this subsection are valid in the Gro\-then\-dieck ring of varieties over a field $k$ if $q$ is replaced by the equivalence class of the affine line $[{\mathbb A}_k^1]$.
\end{remark}
\begin{proof}
The results follow from inclusion-exclusion, cartesian products, and ${\mathbb F}_q^\times$-fibrations, which behave analogously in the Grothendieck ring.
\end{proof}
\subsection{Methods}
Our main method is Prop.\ \ref{prop1} applied to Thm.\ \ref{thm1}. Identities (1) and (2) of Prop.\ \ref{prop1} have been implemented by J.R. Stembridge in a nice
Maple worksheet which is available on his homepage. Stembridge's algorithm tries to partially eliminate variables and expand products in a balanced way
(so as not to generate overly large expressions). It turned out, however, to be more efficient to completely eliminate variables and expand all products once the sequence of
variables is chosen in an efficient way. Thm.\ \ref{thm1} reflects this strategy by providing concise formulae for completely eliminating variables that are attached to a vertex
(and a triangle). A good sequence of variables will be a sequence that tries to complete vertices or cycles.
Such a sequence is related to \cite{BROW} by providing a small `vertex-width'. So, in fact, the author modified Stembridge's algorithms to work in a less intelligent way.
\begin{method}\label{met1}
Choose a sequence of edges $1,2,\ldots,n$ such that every sub-sequence $1,2,\ldots,k$ contains as many complete vertices and cycles as possible.
Start from Thm.\ \ref{thm1} (if possible). Pick the next variable in the sequence that can be eliminated completely (if any) and apply Prop.\ \ref{prop1} (2).
Factor all polynomials. Expand all products by Prop.\ \ref{prop1} (1).
Continue until no more variables can be eliminated completely (because no variable is linear in all polynomials).
Next, apply the above algorithm to each summand. Continue until Prop.\ \ref{prop1} (2) can no longer be applied (because no variable is linear in any polynomial).
Finally (if necessary), try to use Prop.\ \ref{prop1} (3) to modify a polynomial in such a way that it becomes linear in (at least) one variable.
If successful continue with the previous steps.
\end{method}
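Since Prop.\ \ref{prop1} is not restated here, we illustrate the elementary counting identity that underlies the complete elimination of a variable which is linear in all polynomials: writing $f=gx+h$ with $g$, $h$ independent of $x$, each point of the remaining affine space contributes one zero of $f$ if $g\neq0$, $q$ zeros if $g=h=0$, and none otherwise, hence $N(f)=q\,N(g,h)+q^m-N(g)$, where $N(\cdot)$ counts common affine zeros in the $m$ remaining variables. A brute-force Python check for prime $q$ (function names are ours):

```python
from itertools import product

def N(polys, m, q):
    """Number of common zeros in F_q^m (q prime) of a list of integer polynomials."""
    return sum(1 for x in product(range(q), repeat=m)
               if all(p(*x) % q == 0 for p in polys))

# f = g*x + h with g = a and h = b + c, in the variables (a, b, c, x)
g = lambda a, b, c: a
h = lambda a, b, c: b + c
f = lambda a, b, c, x: g(a, b, c) * x + h(a, b, c)

for q in (2, 3, 5):
    lhs = N([f], 4, q)                                  # zeros of f in F_q^4
    rhs = q * N([g, h], 3, q) + q**3 - N([g], 3, q)     # elimination identity
    assert lhs == rhs
```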
In most cases (depending on the chosen sequence of variables) graphs with up to 14 edges reduce completely and the above method provides a polynomial in $q$.
Occasionally one may have to stop the algorithm because it becomes too time-consuming. This depends on Maple's ability to factorize polynomials and to handle large expressions.
But working over finite fields we do not have to quit where the algorithm stops: We can still count for small $q$.
A side effect of the algorithm is that it eliminates many variables completely before it stops. This makes counting significantly faster.
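The brute-force count itself is elementary: for a homogeneous $f$ one counts affine points with $f\neq0$ and divides by $q-1$. A minimal Python sketch for prime $q$ (the triangle graph with $\Psi=x_1x_2+x_1x_3+x_2x_3$, whose projective complement has exactly $q^2$ points, serves as a test):

```python
from itertools import product

def projective_complement_count(f, n, q):
    """Points of P^{n-1}(F_q), q prime, where the homogeneous polynomial f != 0."""
    affine_nonzero = sum(1 for x in product(range(q), repeat=n) if f(*x) % q != 0)
    # the complement avoids the origin and is stable under scaling by F_q^x
    return affine_nonzero // (q - 1)

# graph polynomial of the triangle: Psi = x1*x2 + x1*x3 + x2*x3
psi = lambda x1, x2, x3: x1 * x2 + x1 * x3 + x2 * x3
for q in (2, 3, 5, 7):
    assert projective_complement_count(psi, 3, q) == q**2
```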
If $\bar N$ is a polynomial, by Eq.\ (\ref{13c}) we have to determine the coefficients $c_2,c_3,\ldots,c_{n-3}$. We can do this for $n=14$ edges
by considering all prime powers $q\leq16$. By Lemma \ref{lem3} the coefficients have to be integers. Conversely, if interpolation does not provide
integer coefficients we know that $\bar N$ cannot be a polynomial in $q$. For graphs with 14 edges this is a time-consuming though possible method even if hardly any variables
were eliminated. D. Doryn used a similar method to prove (independently) that one of the graphs obtained by deleting a vertex from Fig.\ 1(a) is a counter-example to Kontsevich's
conjecture \cite{DORY}.
We implemented a more efficient polynomial-test that uses the fact that the coefficients are not only integers but also have small absolute value. This determines
the coefficients by the Chinese Remainder Theorem if $\bar N$ is known for a few small primes. For graphs with 14 edges it was sufficient to use $q=2$, 3, 5, and 7
because the coefficients are two-digit integers (and to test the result with $q=4$). For graphs with 16 edges we additionally had to count for $q=11$.
\begin{method}\label{met2}
Select a set of small primes $p_1,p_2,\ldots,p_k$.
Evaluate $d_2(i)=\bar N(p_i)/p_i^2$ for these primes. Determine the smallest (by absolute value) common representatives $c_2$ of $d_2(i)\mod p_i$ (usually take the smallest one and maybe
the second smallest if it is not much larger than the smallest representative).
For each of the $c_2$ calculate $d_3(i)=(d_2(i)-c_2)/p_i$. Proceed as before to obtain a set of sequences $c_2$, $c_3$, $\ldots$, $c_{n-1}$.
If for one of the sequences one has $d_n(i)=0$ for all $i$ and [see Eq.\ (\ref{13c})] $c_{n-2}=0$, $c_{n-1}=1$ (and the set of sequences was not too large) then it is likely that
$\bar N(q)$ is a polynomial in $q$, namely $c_2q^2+c_3q^3+\ldots+c_{n-3}q^{n-3}+q^{n-1}\mod(q-p_1)(q-p_2)\cdots(q-p_k)$.
If $\bar N$ is a polynomial with coefficients $c_i$ such that $|c_i|<p_1p_2\cdots p_k/2$ then it is determined uniquely by the smallest representative for each $c_i$.
\end{method}
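The iteration in Method \ref{met2} amounts to $p$-adically peeling off one coefficient at a time, gluing the residues with the Chinese Remainder Theorem. The following Python sketch shows the core step in a simplified form of our own: it recovers a small-coefficient polynomial directly from its values at a few primes, rather than from $\bar N(p_i)/p_i^2$:

```python
from math import prod

def crt_symmetric(residues, moduli):
    """Smallest-absolute-value integer congruent to residues[i] mod moduli[i]."""
    P = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        q = P // m
        x += r * q * pow(q, -1, m)   # standard CRT reconstruction
    x %= P
    return x - P if x > P // 2 else x

def recover_coefficients(values, primes, max_deg):
    """Peel off coefficients c_0, c_1, ... of a polynomial with small integer
    coefficients from its values at a few distinct primes."""
    d = list(values)
    coeffs = []
    for _ in range(max_deg + 1):
        c = crt_symmetric([di % p for di, p in zip(d, primes)], primes)
        coeffs.append(c)
        d = [(di - c) // p for di, p in zip(d, primes)]   # exact division
    assert all(di == 0 for di in d), "not a polynomial with small coefficients"
    return coeffs

# f(q) = q^3 - 5q + 7, evaluated at the primes 2, 3, 5, 7
assert recover_coefficients([5, 19, 107, 315], [2, 3, 5, 7], 3) == [7, -5, 0, 1]
```

As in Method \ref{met2}, the recovery is unique as long as all coefficients are smaller in absolute value than half the product of the primes used.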
Note that one can use the above method to either test if $\bar N(q)$ is a polynomial in $q$ (this test may occasionally give a wrong answer in both directions)
or to completely determine a polynomial $\bar N(q)$ with a sufficient number of primes counted.
Normally, one would use the smallest primes, but because (as we will see in the next subsection) $p=2$ may be an exceptional prime it is useful to try the method without $p=2$
if it fails when $p=2$ is included. Similarly one may choose certain subsets of primes (like $q=1$ mod 3) to identify a polynomial behavior after finite field extensions.
Because only a few primes are needed to apply this method it can be used with no reduction beyond Thm.\ \ref{thm1} for graphs with up to 16 edges. Calculating modulo small
primes is fast in C++ and counting can easily be parallelized, which makes this method a quite practical tool.
The main problem is to find a result for $\bar N(q)$ if it is not a polynomial in $q$. It turned out that for $\phi^4$-graphs with 14 edges the deviation from being polynomial
can be completely determined mod $q^3$. This is no longer true for graphs with 16 edges, but at higher powers of $q$ we only find terms that we already had in graphs
with 14 edges (see Result \ref{res3}). Therefore a quick access to $\bar N(q)$ mod $q^3$ is very helpful.
\begin{method}\label{met3}
Determine $c_2(q)\equiv\bar N(q)/q^2 \mod q$ using Eq.\ (\ref{15a}) together with Eq.\ (\ref{12}) [or Eq.\ (\ref{20a})] and Remark \ref{rem1}.
Afterwards check if $\bar N(q)/q^2-c_2(q)$ is a polynomial in $q$.
\end{method}
In practice it is often useful to combine the methods. Typically one would first run Method \ref{met1}. If it fails to deliver a complete reduction one may
apply Method \ref{met3} to determine its polynomial discrepancy and eventually Method \ref{met2} to determine the result.
\subsection{Results}
\begin{figure}[t]
\epsfig{file=QFT_Fqfig1.eps,width=\textwidth}
\caption{4-regular graphs that deliver primitive $\phi^4$-graphs by the removal of a vertex. Every such $\phi^4$-graph is a counter-example to Kontsevich's conjecture.
Graphs (a) -- (c) give a total of six non-isomorphic counter-examples with 14 edges. Graphs (d), (e) provide another seven counter-examples with 16 edges.
The graph hypersurface of (e) minus any vertex entails a degree 4 non-mixed-Tate two-fold (a K3 \cite{BS}). The graphs are taken from \cite{CENSUS} where they have the names
$P_{7,8}$, $P_{7,9}$, $P_{7,11}$, $P_{8,40}$, and $P_{8,37}$, respectively. See Eqs.\ (\ref{22a1}) -- (\ref{22e6}) for the results.}
\end{figure}
First, we applied our methods to the complete list of graphs with 13 edges that are potential counter-examples to Kontsevich's conjecture. This list is due to the
1998 work of Stembridge and is available on his homepage. We found that for all of these graphs $\bar N$ is a polynomial in $q$. This extends Stembridge's
result \cite{STEM} from 12 to 13 edges.
\begin{result}\label{res1}
Kontsevich's conjecture holds for all graphs with $\leq13$ edges.
\end{result}
Second, we looked at all graphs with 14 edges that originate from primitive $\phi^4$-graphs [graphs with finite period Eq.\ (\ref{1a})].
These graphs come as 4-regular graphs with one vertex removed. They have $n=2h_1$ edges; four of their vertices are 3-valent whereas all others are 4-valent.
A complete list of 4-regular graphs that lead to primitive $\phi^4$-graphs with up to 16 edges can be found in \cite{CENSUS}.
\begin{result}\label{res2}
Kontsevich's conjecture holds for all primitive $\phi^4$-graphs with 14 edges with the exception of the graphs obtained from Figs.\ 1(a) -- (c) by the removal of a vertex.
\end{result}
The counter-examples Fig.\ 1(a) -- (c) fall into two classes: first, Figs.\ 1(a), (b) with exceptional prime 2; second, Fig.\ 1(c) with a quadratic field extension.
These counter-examples are the smallest counter-examples to Kontsevich's conjecture by Result \ref{res1}.
Next, we tested the power of our methods on primitive $\phi^4$-graphs with 16 edges. We scanned through the graphs with Method \ref{met3}
to see whether we would find some new behavior. Only in the last five graphs of the list in \cite{CENSUS} do we expect something new.
We were able to pin down the result for graphs coming from Fig.\ 1(d), (e). Figure 1(d) features a fourth root of unity extension together with an
exceptional prime 2 whereas Fig.\ 1(e) leads to a degree 4 surface in ${\mathbb P}^3$ which is non-mixed-Tate.
\begin{result}\label{res3}
All graphs coming from Fig.\ 1 by removal of a vertex are counter-examples to Kontsevich's conjecture (six with $14$ edges, seven with $16$ edges).
We list $\bar N(\Psi)/q^2$, the number of points in the projective complement of the graph hypersurface divided by $q^2$.
The second expression [in brackets] contains the result $\bar N(\bar\Psi)/q^2$ for the dual graph hypersurface.
In the following $\bar N(2)=\bar N(2)_{{\mathbb P}{\mathbb F}_q^0}=0$ if $q=2^k$ and $1$ otherwise,
$\bar N(a^2+ab+b^2)=\bar N(a^2+ab+b^2)_{{\mathbb P}{\mathbb F}_q^1}=q-\{1,0,-1\}$ if
$q\equiv1,0,-1\mod 3$, $\bar N(a^2+b^2)=\bar N(a^2+b^2)_{{\mathbb P}{\mathbb F}_q^1}=q-\{1,0,-1\}$ if
$q\equiv1,0$ or $2,-1\mod 4$, respectively, and
\begin{eqnarray}\label{22}
f\;=\;f(a,b,c,d)&=&a^2b^2+a^2bc+a^2bd+a^2cd+ab^2c+abc^2\nonumber\\
&&\quad +\,abcd+abd^2+ac^2d+acd^2+bc^2d+c^2d^2.
\end{eqnarray}
\begin{eqnarray}\label{22a1}
&&\quad\quad\quad(1)\hbox{ Fig.\ 1(a) $-$ vertex $1$}\\
&&\hspace*{-14pt}q^{11}\!-\!q^8\!-\!24q^7\!+\!54q^6\!-\!36q^5\!-\!2q^4\!+\!34q^2\!-\!32q\!-\!\bar N(2)\nonumber\\
&&\hspace*{-14pt}[q^{11}\!-\!5q^8\!-\!11q^7\!+\!24q^6\!+\!q^5\!-\!50q^4\!+\!83q^3\!-\!47q^2\!-\bar N(2)]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22a2}
&&\quad\quad\quad(2)\hbox{ Fig.\ 1(a) $-$ vertex $2$, $3$, $4$, or $5$}\\
&&\hspace*{-14pt}q^{11}\!-\!3q^8\!-\!13q^7\!+\!34q^6\!-\!26q^5\!+\!13q^4\!-\!14q^3\!+\!13q^2\!-\!4q\!-\bar N(2)\nonumber\\
&&\hspace*{-14pt}[q^{11}\!-\!6q^8\!-\!6q^7\!+\!23q^6\!-\!9q^5\!-\!11q^4\!+\!10q^3\!+\!9q^2\!-\!12q\!-\bar N(2)]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22a3}
&&\quad\quad\quad(3)\hbox{ Fig.\ 1(a) $-$ vertex $6$, $7$, $8$, or $9$}\\
&&\hspace*{-14pt}q^{11}\!-\!4q^8\!-\!11q^7\!+\!38q^6\!-\!39q^5\!+\!24q^4\!-\!16q^3\!+\!11q^2\!-\!4q\!-\bar N(2)\nonumber\\
&&\hspace*{-14pt}[q^{11}\!-\!6q^8\!-\!6q^7\!+\!26q^6\!-\!12q^5\!-\!8q^4\!-\!7q^3\!+\!28q^2\!-\!16q\!-\bar N(2)]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22b1}
&&\quad\quad\quad(4)\hbox{ Fig.\ 1(b) $-$ vertex $1$, $2$, or $3$}\\
&&\hspace*{-14pt}q^{11}\!-\!3q^8\!-\!16q^7\!+\!41q^6\!-\!27q^5\!+\!q^4\!-\!5q^3\!+\!24q^2\!-\!18q\!-\bar N(2)\nonumber\\
&&\hspace*{-14pt}[q^{11}\!-\!5q^8\!-\!9q^7\!+\!28q^6\!-\!11q^5\!-\!10q^4\!+\!5q^3\!+\!13q^2\!-\!14q\!-\bar N(2)]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22b2}
&&\quad\quad\quad(5)\hbox{ Fig.\ 1(b) $-$ vertex $4$, $5$, $6$, $7$, $8$, or $9$}\\
&&\hspace*{-14pt}q^{11}\!-\!4q^8\!-\!13q^7\!+\!44q^6\!-\!46q^5\!+\!32q^4\!-\!29q^3\!+\!24q^2\!-\!9q\!-\bar N(2)\nonumber\\
&&\hspace*{-14pt}[q^{11}\!-\!5q^8\!-\!9q^7\!+\!34q^6\!-\!26q^5\!+\!5q^4\!-\!8q^3\!+\!18q^2\!-\!11q\!-\bar N(2)]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22c}
&&\quad\quad\quad(6)\hbox{ Fig.\ 1(c) $-$ any vertex}\\
&&\hspace*{-14pt}q^{11}\!-\!3q^8\!-\!15q^7\!+\!41q^6\!-\!32q^5\!+\!7q^4\!-\!3q^3\!+\!15q^2\!-\!15q\!+\bar N(a^2\!+\!ab\!+\!b^2)\nonumber\\
&&\hspace*{-14pt}[q^{11}\!-\!5q^8\!-\!9q^7\!+\!28q^6\!-\!7q^5\!-\!18q^4\!+\!3q^3\!+\!22q^2\!-\!17q\!+\bar N(a^2\!+\!ab\!+\!b^2)]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22d}
&&\quad\quad\quad(7)\hbox{ Fig.\ 1(d) $-$ any vertex}\\
&&\hspace*{-14pt}q^{13}\!-\!3q^{10}\!-\!11q^9\!+\!2q^8\!+\!90q^7\!-\!191q^6\!+\!208q^5\!-\!153q^4\!+\!79q^3\nonumber\\
&&\quad-[25+\bar N(2)]q^2\!-\!q\!+\!\bar N(a^2+b^2)\nonumber\\
&&\hspace*{-14pt}[q^{13}\!-\!7q^{10}\!-\!5q^9\!+\!9q^8\!+\!46q^7\!-\!108q^6\!+\!197q^5\!-\!294q^4\!+\!253q^3\nonumber\\
&&\quad-[105\!+\!\bar N(2)]q^2\!-\![q\!+\!8\bar N(2)]q\!+\!\bar N(a^2+b^2)]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22e1}
&&\quad\quad\quad(8)\hbox{ Fig.\ 1(e) $-$ vertex $1$}\\
&&\hspace*{-14pt}q^{13}\!-\!2q^{10}\!-\!19q^9\!+\!14q^8\!+\!103q^7\!-\!266q^6\!+\!374q^5\!-\!410q^4\!+\!322q^3\nonumber\\
&&\quad-97q^2\!-\!43q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}\nonumber\\
&&\hspace*{-14pt}[q^{13}\!-\!5q^{10}\!-\!11q^9\!+\!8q^8\!+\!84q^7\!-\!187q^6\!+\!267q^5\!-\!386q^4\!+\!427q^3\nonumber\\
&&\quad-221q^2\!-\![11\!-\!2\bar N(a^2\!+\!ab\!+\!b^2)]q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22e2}
&&\quad\quad\quad(9)\hbox{ Fig.\ 1(e) $-$ vertex $2$ or $4$}\\
&&\hspace*{-14pt}q^{13}\!-\!3q^{10}\!-\!15q^9\!+\!9q^8\!+\!107q^7\!-\!262q^6\!+\!337q^5\!-\!315q^4\!+\!199q^3\nonumber\\
&&\quad-45q^2\!-\!19q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}\nonumber\\
&&\hspace*{-14pt}[q^{13}\!-\!5q^{10}\!-\!12q^9\!+\!19q^8\!+\!63q^7\!-\!174q^6\!+\!229q^5\!-\!241q^4\!+\!181q^3\nonumber\\
&&\quad-50q^2\!-\![20\!-\!\bar N(a^2\!+\!ab\!+\!b^2)]q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22e3}
&&\quad\quad\quad(10)\hbox{ Fig.\ 1(e) $-$ vertex $3$ or $5$}\\
&&\hspace*{-14pt}q^{13}\!-\!3q^{10}\!-\!18q^9\!+\!25q^8\!+\!71q^7\!-\!214q^6\!+\!282q^5\!-\!246q^4\!+\!133q^3\nonumber\\
&&\quad-13q^2\!-\!24q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}\nonumber\\
&&\hspace*{-14pt}[q^{13}\!-\!5q^{10}\!-\!13q^9\!+\!24q^8\!+\!56q^7\!-\!177q^6\!+\!255q^5\!-\!283q^4\!+\!212q^3\nonumber\\
&&\quad-54q^2\!-\!22q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22e4}
&&\quad\quad\quad(11)\hbox{ Fig.\ 1(e) $-$ vertex $6$}\\
&&\hspace*{-14pt}q^{13}\!-\!3q^{10}\!-\!21q^9\!+\!41q^8\!+\!36q^7\!-\!168q^6\!+\!237q^5\!-\!208q^4\!+\!93q^3\nonumber\\
&&\quad+24q^2\!-\!37q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}\nonumber\\
&&\hspace*{-14pt}[q^{13}\!-\!5q^{10}\!-\!14q^9\!+\!27q^8\!+\!48q^7\!-\!161q^6\!+\!215q^5\!-\!199q^4\!+\!115q^3\nonumber\\
&&\quad-3q^2\!-\![29\!+\!2\bar N(2)]q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22e5}
&&\quad\quad\quad(12)\hbox{ Fig.\ 1(e) $-$ vertex $7$ or $8$}\\
&&\hspace*{-14pt}q^{13}\!-\!4q^{10}\!-\!16q^9\!+\!33q^8\!+\!38q^7\!-\!157q^6\!+\!214q^5\!-\!185q^4\!+\!96q^3\nonumber\\
&&\quad-7q^2\!-\!15q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}\nonumber\\
&&\hspace*{-14pt}[q^{13}\!-\!5q^{10}\!-\!14q^9\!+\!32q^8\!+\!42q^7\!-\!170q^6\!+\!234q^5\!-\!200q^4\!+\!91q^3\nonumber\\
&&\quad+10q^2\!-\!22q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}]\nonumber
\end{eqnarray}
\begin{eqnarray}\label{22e6}
&&\quad\quad\quad(13)\hbox{ Fig.\ 1(e) $-$ vertex $9$ or $10$}\\
&&\hspace*{-14pt}q^{13}\!-\!3q^{10}\!-\!15q^9\!+\!11q^8\!+\!99q^7\!-\!252q^6\!+\!333q^5\!-\!318q^4\!+\!213q^3\nonumber\\
&&\quad-61q^2\!-\!18q\!+\!\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}\nonumber\\
&&\hspace*{-14pt}[q^{13}\!-\!5q^{10}\!-\!11q^9\!+\!13q^8\!+\!81q^7\!-\!210q^6\!+\!290q^5\!-\!329q^4\!+\!269q^3\nonumber\\
&&\quad-90q^2\!-\![24+2\bar N(2)]q+\bar N(f)_{{\mathbb P}{\mathbb F}_q^3}]\nonumber
\end{eqnarray}
\end{result}
Interestingly, the period Eq.\ (\ref{1a}) associated to Fig.\ 1(a), Eqs.\ (\ref{22a1}) -- (\ref{22a3}), has been determined by `exact numerical methods' as a weight 11 multiple zeta value \cite{CENSUS}, namely
\begin{eqnarray}\label{23}
P_{7,8}&=&\frac{22383}{20}\zeta(11)-\frac{4572}{5}[\zeta(3)\zeta(5,3)-\zeta(3,5,3)]-700\zeta(3)^2\zeta(5)\nonumber\\
&&\quad+\,1792\zeta(3)\left[\frac{27}{80}\zeta(5,3)+\frac{45}{64}\zeta(5)\zeta(3)-\frac{261}{320}\zeta(8)\right],
\end{eqnarray}
where $\zeta(5,3)=\sum_{i>j}i^{-5}j^{-3}$ and $\zeta(3,5,3)=\sum_{i>j>k}i^{-3}j^{-5}k^{-3}$. So, a multiple zeta period does not imply
that $\bar N$ is a polynomial in $q$. The converse may still be true: If $\bar N$ is a polynomial in $q$ then the period (\ref{1a}) is a multiple zeta value.
It would be interesting to know if the period of Fig.\ 1(e) is a multiple zeta value, but regrettably this is beyond the power of the present `exact numerical
methods' used in \cite{BK} and \cite{CENSUS}.
Most of the above results were found applying the counting Method \ref{met2} at some stage. We mainly used the prime powers $q=2$, 3, 4, 5, 7, 8, and 11.
The counting for $q=8$ and $q=11$ for graphs with 16 edges [using Eq.\ (\ref{13b}) or similar equations for the dual graph polynomial and in case of an extra triangle]
was performed on the Erlanger RRZE Computing Cluster.
Resorting to the counting Method \ref{met2} is not necessary for most graphs with 14 edges. Eqs.\ (\ref{15}) and (\ref{15b}) of Thm.\ \ref{thm1} are powerful enough
to determine the results by pure computer-algebra. But in some cases finding good sequences can be time-consuming, and the 14-edge results had been
found by the author prior to Eqs.\ (\ref{15}) and (\ref{15b}). The results have been checked by pure computer-algebra for Fig.\ 1(a) minus vertex 2, 3, 4, or 5
[Eq.\ (\ref{22a2})] and Fig.\ 1(e) minus vertex 2 or 4 [Eq.\ (\ref{22e2})], because the latter may be of interest as it exhibits a presumably non-mixed-Tate
surface in ${\mathbb P}^3$. In connection with Remark \ref{rem2} we can state the following theorem:
\begin{thm}\label{thm2}
Let $\Gamma$ be the graph of Fig.\ 1(e) minus vertex $2$ (or minus vertex $4$) and $X$ its graph hypersurface in ${\mathbb P}^{15}$ defined by the vanishing locus of graph polynomial $\Psi_\Gamma$.
Let $[X]$ be the image of $X$ in the Grothendieck ring $K_0($Var$_k)$ of varieties over a field $k$, let ${\mathbb L}=[{\mathbb A}_k^1]$ be the equivalence class of the
affine line, and $1=[$Spec $k]$. With $[F]$ the image of the (singular) zero locus of $f$ [given by Eq.\ (\ref{22})] in ${\mathbb P}^3$ we obtain the identity
\begin{eqnarray}\label{23a}
[X]&=&{\mathbb L}^{14}+{\mathbb L}^{13}+4{\mathbb L}^{12}+16{\mathbb L}^{11}-8{\mathbb L}^{10}-106{\mathbb L}^9+263{\mathbb L}^8-336{\mathbb L}^7\nonumber\\
&&\quad+\,316{\mathbb L}^6-199{\mathbb L}^5+45{\mathbb L}^4+19{\mathbb L}^3+[F]{\mathbb L}^2+{\mathbb L}+1.
\end{eqnarray}
\end{thm}
\begin{proof}
By Remark \ref{rem2} and translation from complements to hypersurfaces in projective space Eq.\ (\ref{23a}) is equivalent to Eq.\ (\ref{22e2}).
To prove Eq.\ (\ref{22e2}) we use Eq.\ (\ref{15b}) in Thm.\ \ref{thm1} with edges 1, 2, 3, 4 corresponding to (1,3), (1,4), (1,5), (4,5) (edge (1,3) connects
vertex 1 with vertex 3 in Fig.\ 1(e), etc.). Terms without $\delta$ in Eq.\ (\ref{15b}) refer to minors of $\Gamma$. The most complicated of these is the first one
which has 14 edges and is isomorphic to Fig.\ 1(a) minus vertex 2. This minor has again a triangle with a 3-valent vertex such that Eq.\ (\ref{15b})
applies to it. Having two edges less than $\Gamma$ it is relatively easy to calculate $\bar N$ for this minor by Method \ref{met1} with the result given in Eq.\ (\ref{22a2})
[use e.g.\ the sequence (1,3), (1,4), (1,5), (4,5), (3,9), (3,8), (5,8), (5,9), (4,6), (6,8), (7,8), (4,7), (6,9), (7,9)]. The other minors have 13 edges or less.
They give polynomial contributions to $\bar N(\Psi_\Gamma)$ by Result \ref{res1}. These are easy to determine.
The first of the three terms containing $\delta$ in Eq.\ (\ref{15b}) can be reduced by Method \ref{met1} using the sequence
(4,7), (4,6), (3,7), (3,9), (6,9), (6,10), (9,10), (7,10), (7,8), (8,9), (5,8), (5,10). With the Maple 9.5-implementation used by the author (a modified version
of Stembridge's programs) it takes somewhat less than a day on a single core to produce the result which is the polynomial $q^{11}+q^{10}-q^9-6q^8-7q^7+51q^6-95q^5+101q^4-59q^3+11q^2+4q$.
The third term with $\delta$ is much simpler and produces $q^{11}-2q^9-10q^8+28q^7-25q^6+13q^5-18q^4+27q^3-16q^2-\bar N(2)q$ within two minutes using the sequence
(4,6), (6,9), (6,10), (9,10), (4,7), (3,9), (3,7), (5,10), (7,10), (7,8), (8,9), (5,8). Interestingly it cancels the $\bar N(2)$-dependence coming
from the 14-edge minor, Eq.\ (\ref{22a2}).
Only the second term with $\delta$ contains the degree 4 surface in ${\mathbb P}^3$.
Eliminating variables according to the sequence (3,7), (3,9), (4,7), (4,6), (6,9), (6,10), (9,10), (5,10), (5,8), (8,9), (7,10), (7,8) (if possible) leaves us
(after about one day of computer algebra) with a degree 5 three-fold and two simpler terms which add to an expression
polynomial in $q$ after applying a rescaling, Eq.\ (\ref{8a}), to one of them.
The three-fold depends on the variables $x_{5,10}$, $x_{5,8}$, $x_{8,9}$, $x_{7,10}$, $x_{7,8}$ corresponding to the last five edges of the sequence.
To simplify the three-fold we first go to affine space using Eq.\ (\ref{8b}) with $x_1=x_{7,8}$. Afterwards we rescale $x_{5,10}$ and $x_{7,10}$ by the factor
$x_{5,8}x_{8,9}+x_{5,8}+x_{8,9}$ to obtain a degree 4 two-fold. We decided to apply another rescaling, namely $x_{7,10}\mapsto x_{7,10}(x_{8,9}+1)/x_{8,9}$, to
eliminate powers of 3 from the two-fold that otherwise would have appeared after going back to projective space using Eq.\ (\ref{8b}) backwards.
The variables $a$, $b$, $c$ in Eq.\ (\ref{22}) correspond to $x_{5,10}$, $x_{8,9}$, $x_{7,10}$, respectively. The variable $d$ is introduced by homogenizing the polynomial.
\end{proof}
Counting $\bar N(f)_{{\mathbb P}{\mathbb F}_p^3}$ mod $p$ for all primes $<10000$ we observe the following behavior (this result is an immediate consequence of the fact that
$F$ is a Kummer surface \cite{BS}):
\begin{result}\label{res4}
For $p>2$ we have $\bar N(f)_{{\mathbb P}{\mathbb F}_p^3}\equiv 28k(p)^2\mod p$ with $k(p)=0$ if $p=7$ or $p\equiv3$, $5$, $6$ mod $7$ ($-7$ is not a square in ${\mathbb F}_p$) and
$k(p)\in \{1,2,\ldots,\lfloor \sqrt{p/7}\rfloor\}$ otherwise. We have (confirmed to 4 digits)
\begin{equation}\label{23b}
\sup_p \frac{7k(p)^2}{p}=1.
\end{equation}
\end{result}
Equation (\ref{23b}) gives us a hint that the surface $f=0$ cannot be reduced to a curve (or a finite field extension) because from the local zeta-function
and the Riemann hypothesis for finite fields we know \cite{HART} that the number of points on a projective non-singular curve of genus $g$ over ${\mathbb F}_q$ is given by $q+1+\alpha$ with
$|\alpha|\leq 2g\sqrt{q}$. Thus, modulo $q$ this number is relatively close to 0 for large $q$. We do not see such behavior in Eq.\ (\ref{23b}).
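Result \ref{res4} can also be spot-checked by brute force for the smallest odd primes: for $p=3,5,7$ we have $k(p)=0$ ($p=7$, or $p\equiv3,5\mod7$), so $\bar N(f)_{{\mathbb P}{\mathbb F}_p^3}$ must vanish mod $p$. A Python sketch using Eq.\ (\ref{22}):

```python
from itertools import product

def f(a, b, c, d):
    # the polynomial of Eq. (22)
    return (a*a*b*b + a*a*b*c + a*a*b*d + a*a*c*d + a*b*b*c + a*b*c*c
            + a*b*c*d + a*b*d*d + a*c*c*d + a*c*d*d + b*c*c*d + c*c*d*d)

def complement_count(p):
    """bar N(f) over P^3(F_p), p prime: affine points with f != 0, divided by p-1."""
    nonzero = sum(1 for x in product(range(p), repeat=4) if f(*x) % p != 0)
    return nonzero // (p - 1)

for p in (3, 5, 7):        # here k(p) = 0, so the count must vanish mod p
    assert complement_count(p) % p == 0
```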
We expect that the graphs derived from $P_{8,38}$, $P_{8,39}$, $P_{8,41}$ in \cite{CENSUS} also lead to 16-edge graphs which are counter-examples to Kontsevich's conjecture,
none of which is expressible in terms of exceptional primes and finite field extensions. By an argument similar to the one above it seems that the
graph hypersurfaces of these graphs reduce to varieties of dimension $\geq2$. The (likely) absence of curves was not expected by the author.
\section{Outlook: Quantum Fields over ${\mathbb F}_q$}
In this section we try to take the title of the paper more literally. The fact that the integrands in Feynman-amplitudes are of algebraic nature allows us to make an attempt to define
a quantum field theory over a finite field ${\mathbb F}_q$. Our definition will not have any direct physical interpretation. In particular, it should not
be understood as a kind of lattice regularization. In fact, the significance of this approach is unclear to the author.
We start from momentum space. The parametric space used in the previous section is not a good starting point
because it is derived from momentum or position space by an integral transformation that does not translate literally to finite fields.
We work in general space-time dimension $d$ and consider a bosonic quantum field theory with momentum independent vertex-functions. A typical candidate of such
a theory would be $\phi^k$-theory for any integer $k\geq 3$. In momentum space the `propagator' (see \cite{IZ}) is the inverse of a quadric in $d$ affine variables.
Normally one uses $Q=|p|^2+m^2$, where $|p|$ is the euclidean norm of $p\in{\mathbb R}^d$ and $m$ is the mass of the particle involved.
One may use a Minkowskian metric (or any other metric) as well.
The denominator of the integrand in a Feynman amplitude is a product of $n$ quadrics $Q_i$ for a graph $\Gamma$ with $n$ (interior) edges.
The momenta in these propagators are sums or differences of $h_1$ momentum vectors, with $h_1$ the number of independent cycles of $\Gamma$.
The Feynman-amplitude of $\Gamma$ has the generic form
\begin{equation}\label{25}
A(\Gamma)=\int_{{\mathbb R}^{dh_1}}{\rm d}^dp_1\cdots{\rm d}^dp_{h_1}\frac{1}{\prod_{i=1}^n Q_i(p)}.
\end{equation}
The asymptotic behavior of the differential form on the right hand side for large momenta is $\sim|p|^c$, where
\begin{equation}\label{26}
c=dh_1-2n
\end{equation}
is called the `superficial degree of divergence' (if $h_1>0$). It is clear that (at least) the amplitudes of graphs with $c\geq0$ are ill-defined and need regularization.
From these amplitudes $A(\Gamma)$ we can construct a correlation function as a sum over certain classes of graphs weighted by the inverse order
of the automorphism group,
\begin{equation}\label{27}
\Pi=\sum_\Gamma\frac{g^{|\Gamma|}A(\Gamma)}{|{\rm Aut}(\Gamma)|},
\end{equation}
where $g$ is the coupling and $|\Gamma|$ is an integer that grows with the size of $\Gamma$ (like $h_1$).
The correlation function demands renormalization to control the regularization of the single graphs.
For a renormalizable quantum field theory all graphs $\Gamma$ in the sum have the same superficial degree of divergence.
In a super-renormalizable theory (at low dimensions $d$) the degree of divergence decreases
for larger graphs, whereas the converse is true for a non-renormalizable theory (like quantum gravity).
Working over a finite field it seems natural to replace the integral in Eq.\ (\ref{25}) by a sum
\begin{equation}\label{28}
A(\Gamma)_{{\mathbb F}_q}=\sum_{p\in{\mathbb F}_q^{dh_1}\!:\,Q_i(p)\neq0}\frac{1}{\prod_{i=1}^n Q_i(p)}.
\end{equation}
The amplitude is well-defined (whereas $|{\rm Aut}(\Gamma)|$ in the denominator of Eq.\ (\ref{27}) causes problems for small $q$).
It is zero in many cases.
\begin{lem}\label{lem4}
Let $\Gamma$ be a graph with $n$ edges, $h_1>0$ independent cycles and superficial degree of divergence $c$. If $q>2$ then
\begin{equation}\label{29}
A(\Gamma)_{{\mathbb F}_q}=0\quad\hbox{if}\quad(q-1)c+2n>0.
\end{equation}
\end{lem}
\begin{proof}
For all $x\in{\mathbb F}_q^\times$ we have $x^{q-1}=1$. Hence the amplitude (\ref{28}) can be written as
\begin{equation}\label{30}
A(\Gamma)_{{\mathbb F}_q}=\sum_{p\in{\mathbb F}_q^{dh_1}}\prod_{i=1}^n Q_i(p)^{q-2}
\end{equation}
where the restriction to non-zero $Q_i$ can be dropped for $q>2$. The right hand side is a polynomial in the coordinates of the $p_i$ of degree $2n(q-2)$.
On the other hand we have (we use $0^0:=1$)
\begin{equation}\label{31}
\sum_{x\in{\mathbb F}_q}x^k=\left\{\begin{array}{rl}-1&\hbox{if }0<k\equiv0\mod(q-1)\\
0&\hbox{else}\end{array}\right.
\end{equation}
which is obvious if one multiplies both sides of the equation by any $1\neq y^k\in{\mathbb F}_q^\times$ [if $k\not\equiv0$ mod $(q-1)$].
In particular, the sum over a polynomial in $x$ vanishes unless the
polynomial has degree at least $q-1$. In case of $dh_1$ variables we need a total degree of at least $dh_1(q-1)$.
The right hand side of (\ref{30}) does not have this minimum degree if $2n(q-2)<dh_1(q-1)$, which by Eq.\ (\ref{26}) gives Eq.\ (\ref{29}).
\end{proof}
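For prime $q$, both Eq.\ (\ref{31}) and the vanishing (\ref{29}) are easily confirmed numerically. The Python sketch below checks Eq.\ (\ref{31}) and then evaluates the amplitude (\ref{28}) for a one-loop tadpole in $d=2$ with Euclidean propagator $Q(p)=p_1^2+p_2^2+m^2$ ($h_1=1$, $n=1$, so $c=0$ and $(q-1)c+2n=2>0$, forcing $A_{{\mathbb F}_q}=0$):

```python
# check Eq. (31) over prime fields F_p (Python's 0**0 == 1 matches the convention)
for p in (3, 5, 7, 11):
    for k in range(3 * (p - 1) + 2):
        s = sum(pow(x, k, p) for x in range(p)) % p
        expected = (p - 1) if k > 0 and k % (p - 1) == 0 else 0   # -1 mod p
        assert s == expected

# one-loop tadpole amplitude, Eq. (28): d = 2, h_1 = 1, n = 1, hence c = 0
def amplitude(q, m2=1):
    total = 0
    for p1 in range(q):
        for p2 in range(q):
            Q = (p1 * p1 + p2 * p2 + m2) % q
            if Q:
                total += pow(Q, q - 2, q)   # 1/Q in F_q
    return total % q

for q in (3, 5, 7, 11, 13):
    assert amplitude(q) == 0    # Lemma 4: (q-1)*0 + 2 > 0 forces A = 0
```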
We see that only superficially convergent graphs (with $c<0$) can give a non-zero amplitude. The complexity of the graph is limited by $q-1$ times
the degree of convergence. This means for the three possible scenarios of quantum field theory:
\begin{enumerate}
\item If the quantum field theory is non-renormalizable then $c$ becomes positive for sufficiently large graphs.
All correlation functions are polynomials in the coupling $g$ of universal ($q$-independent) maximum degree.
\item If the quantum field theory is renormalizable then $c$ is constant for all graphs that contribute to the correlation function.
The correlation function becomes a polynomial in the coupling with degree that may grow with $q$.
If the correlation function has $c\geq0$ only the tree level (with $h_1=0$) contributes.
\item If the quantum field theory is super-renormalizable then $c$ becomes negative for sufficiently large graphs.
In this case all correlation functions may be infinite (formal) power series.
\end{enumerate}
It is interesting to observe that finite fields give an upside down picture of normal quantum field theories. The most problematic non-renormalizable
quantum field theories give the simplest results, whereas the most accessible super-renormalizable theories may turn out to be the most complicated ones
over finite fields. In between we have the renormalizable quantum field theories that govern the real world.
Another theme of interest could be an analogous study of $p$-adic quantum field theories.
\bibliographystyle{plain}
\renewcommand\refname{References}
\section{Introduction}
The recent technology revolution has infiltrated most aspects of modern day life,
and has led to an increase in demand for computing resources. Between 2010 and 2018 there was an estimated 550\% increase globally in the number of data center workloads and computing instances \cite{masanet2020recalibrating}. This increased computing demand has commanded a shift from smaller data centers to large-scale, highly optimized and efficient facilities. These facilities are referred to as \emph{hyper-scalar} data centers and are operated by large technology companies such as Amazon, Facebook, Google, Microsoft and Alibaba. These companies operate vast networks of these data centers which are dispersed geographically throughout the world \cite{nrdc_2014, yevgeniy_analysts}.
Data centers currently consume
around 1-2\% of electricity
globally \cite{masanet2020recalibrating}. This low percentage is largely due to the increased efficiency of hyper-scalar data centers, which have processed the increased computing demand while only increasing electricity use by an estimated 6\% \cite{masanet2020recalibrating}. However,
as the scope for further efficiency gains is
largely exhausted, it is projected that electricity use for computing will increase rapidly in the future.
Hyper-scalar data centers are large loads on electric power networks with unique characteristics.
For example, data center loads can defer when computing tasks are processed or process them at different locations. These properties equip data center operators with the unique ability to participate in both geographic and temporal load shifting.
Motivated by recent pledges made by technology companies to reduce their carbon emissions \cite{googleenvironmentalreport, amazonenvironment,googleblog}, we are interested in understanding how these hyper-scale data centers can effectively interact with electricity markets in a way that minimizes the carbon emissions caused by their computing loads. Start-ups such as Lancium consider computing that adapts to the operation of the electric grid \cite{lancium-press}, an idea that plays a critical role in the vision of zero carbon cloud computing \cite{yang2016zccloud, chien2019zero}.
When shifting load, data centers interact with electricity markets operated by independent system operators (ISOs).
Previous research has examined the impact of integrating data centers and demand response \cite{dcdr_survey,liu2013data,liu2014pricing, zhou2016bilateral, zhou2020a, zhou2015when, chen2019an} or has considered geographical load shifting to reduce electricity costs \cite{ li_bao_li_2015, rao_liu_ilic_liu_2012, dou2017carbon}, including load shifting between different electricity markets \cite{rao_liu_xie_liu_2010}. Other work has modelled data center flexibility through the use of virtual links in time and space \cite{zhang2019flexibility}.
Much of this body of work considers the shifting of computing load to reduce the carbon emissions of data centers \cite{goiri2013parasol, liu2012renewable, dou2017carbon} and increase absorption of renewable energy \cite{kim_yang_zavala_chien_2017, zheng2020mitigating}.
There are, however, several major challenges to effective collaboration between the ISO and data centers, such as barriers to the exchange of possibly sensitive information. Another challenge is that ISOs and data centers have different objectives, namely, ISOs focus on minimizing electricity cost while data centers would like to minimize carbon emissions.
While it is a common assumption that those two objectives are well aligned,
our previous research \cite{lindberg2021a} found that if data centers are specifically targeting the objective of reducing carbon emissions (rather than reducing cost) they can achieve better results by
doing the load shifting outside of the electricity market rather than collaborating with the ISO. Since ISOs are unlikely to change the market to directly minimize carbon emissions in the near future, we focus on a method to identify geographical load shifts that reduce carbon emissions within the existing electricity market.
To effectively shift load geographically, we require information about the locational variation in the carbon footprint of electricity, similar to the way locational marginal prices describe the locational variations in the cost of electricity.
While data on electricity prices is easily available, similar information on the carbon emissions associated with electricity usage is not. As a result, it is less straightforward to develop metrics that provide good guidance for shifting load to reduce carbon emissions.
Previous work in this vein has assumed that prices are directly tied to the fraction of non-renewable energy \cite{2015liu}, or considered average carbon emissions for electricity in a region and/or renewable energy curtailment \cite{chien2015zero, yang2017large, zheng2020mitigating}. A benefit of these metrics is that several companies provide the information needed to compute them \cite{tomorrow, caisoemissions,caisocurtailment}. A downside is that they all fail to consider necessary aspects of electric grid operation, such as the impact of a marginal change in load and transmission capacity limits. Specifically, there is no consideration of which generators in the system will provide the additional generation or whether there is sufficient transmission capacity available to deliver it.
To effectively guide load shifting, we need
a measure of marginal emissions \cite{lindberg2021a}.
The use of marginal emissions to assess energy efficiency or the potential impact of renewable energy has been considered in \cite{siler2012marginal, callaway2018location}, and marginal emissions have also been proposed to guide renewable energy investments \cite{2010Ruiz, 2011Rudkevich, siler2012marginal} and to assess the impact of marginal emissions rates at cogeneration facilities \cite{tabors2021methodology}.
More recently, a measure of locational marginal carbon emissions was proposed for data center geographical load shifting in \cite{lindberg2021the}. The superiority of this metric over others such as average carbon emissions, curtailment or LMPs for data center geographic load shifting was demonstrated in \cite{lindberg2021a}.
While the previous load shifting models in \cite{lindberg2021the, lindberg2021a} presented promising results, they included several unrealistic assumptions, such as the electricity market being solved twice in each time step. Here, we present a model that addresses these issues and also leverages regularization to increase the accuracy of the model.
Further, due to the fact that the shifting metric derived in \cite{lindberg2021the} is a linear sensitivity, we expect that at each time step this metric is not necessarily finding the best load shift. For this reason, we present a new benchmark model for optimal load shifting. This bilevel program finds the best load shift a data center can perform, given the current electricity market.
In summary, the three main contributions of this paper are the following:
\noindent
\emph{1) Realistic model for data center load shifting:} We address the unrealistic assumptions in our previous model by implementing load shifting in a cumulative fashion and incorporating a regularization parameter to increase the accuracy of shifting.
\noindent
\emph{2) Benchmark model for optimal data center shifting:} We develop a new benchmark model that computes the optimal data center load shift, given access to all system information. This bilevel optimization method determines the optimal load shift to reduce carbon emissions, subject to the operation of the current electricity market.
\noindent
\emph{3) Computational Analysis:} We compare both models using the RTS-GMLC system with one year of 5 minute load and generation data, and observe that the shifting model based on locational marginal carbon emissions performs quite well.
The remainder of the paper is organized as follows.
In Section~\ref{sec:2} we review the existing data center driven shifting model, while Section~\ref{sec:3} presents the proposed improvements to this model. In Section~\ref{sec:4}, we present a new benchmark model for optimal load shifting. Section~\ref{sec:5} demonstrates the efficacy of our model in a case study, and Section \ref{sec:6} concludes.
\section{Review of Existing Load Shifting Model}\label{sec:2}
Previous work has considered ISO independent load shifting to reduce carbon emissions via a value called the \textit{locational marginal carbon emissions}, $\lambda_{\text{CO}_2}$ \cite{lindberg2021the}. This value is calculated as a linear sensitivity around the optimal solution to the DC OPF. We give a brief outline of this load shifting model here, but suggest \cite{lindberg2021the} as a more detailed reference.
\noindent \textbf{Step 1:} This model begins by assuming the ISO solves a DC OPF \cite{christie2000transmission,litvinov2010}. The DC OPF is a linear optimization problem that seeks to minimize generation costs subject to network and demand constraints, with decision variables $x = [\theta \ P_g]$, where $P_g$ are the generation variables and $\theta$ are the voltage angles at each node. Let $\mathcal{N}$ be the set of all nodes, $\mathcal{L}$ the set of lines and $\mathcal{G}$ the set of generators. The DC OPF is defined as
\begin{subequations}\label{dcopf}
\begin{align}
\min_{\theta, P_g} ~~&c^T P_g \label{dcopfcost} \\
\text{s.t.} ~~& \textstyle \sum_{\ell\in\mathcal{G}_i} \!P_{g,\ell} -\! \textstyle \sum_{\ell\in\mathcal{D}_i} P_{d,\ell} = &&
\nonumber\\
&\qquad\quad\textstyle\sum_{j:(i,j)\in\mathcal{L}} \!\!\!\!-\beta_{ij}(\theta_i \!- \!\theta_j), &&\forall i\in\mathcal{N} \label{balance}\\
-&P^{lim}_{ij} \!\leq\! -\beta_{ij}(\theta_i \!-\!\theta_j) \!\leq\! P^{lim}_{ij}, &&\forall (i,j)\in\mathcal{L}
\label{lineineq}\\
& P^{min}_{g,i} \leq P_{g,i} \leq P^{max}_{g,i}, && \forall i\in\mathcal{G}
\label{genineq}\\
& \theta_{ref} = 0. \label{refnode}
\end{align}
\end{subequations}
The cost function \eqref{dcopfcost} minimizes the cost of generation, where $c_i$ is the cost of generation at generator $i$. Constraints \eqref{balance}--\eqref{genineq} enforce the nodal power balance, transmission line capacity limits and generator capacity limits, respectively. Finally, \eqref{refnode} sets the voltage angle at the reference node to zero.
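The DC OPF \eqref{dcopf} can be written directly as a linear program. As a minimal sketch (not the system studied in this paper), the following solves it for a hypothetical symmetric 3-bus network with invented susceptances, costs, limits and loads, using SciPy's \texttt{linprog}:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-bus example (all values invented): lines (0,1), (1,2), (0,2)
# with susceptance 10 p.u., generators at buses 0 and 2, 1.5 p.u. load at bus 1.
lines = [(0, 1), (1, 2), (0, 2)]
beta, f_lim = 10.0, 1.1              # susceptance and line flow limit (p.u.)
gens = [0, 2]                        # generator buses
c = np.array([10.0, 30.0])           # generation costs
pg_max = np.array([2.0, 2.0])
pd = np.array([0.0, 1.5, 0.0])       # nodal demand

n, m, ng = 3, len(lines), len(gens)
# Decision vector x = [theta_0, theta_1, theta_2, Pg_0, Pg_2].
# Equalities: nodal balance (generation minus demand equals net flow out)
# plus the reference angle theta_0 = 0.
A_eq = np.zeros((n + 1, n + ng))
for i, j in lines:
    A_eq[i, i] -= beta; A_eq[i, j] += beta
    A_eq[j, j] -= beta; A_eq[j, i] += beta
for k, b in enumerate(gens):
    A_eq[b, n + k] = 1.0
A_eq[n, 0] = 1.0
b_eq = np.concatenate([pd, [0.0]])

# Inequalities: -f_lim <= beta*(theta_i - theta_j) <= f_lim for each line.
A_ub = np.zeros((2 * m, n + ng))
for r, (i, j) in enumerate(lines):
    A_ub[r, i], A_ub[r, j] = beta, -beta
    A_ub[m + r, i], A_ub[m + r, j] = -beta, beta
b_ub = np.full(2 * m, f_lim)

bounds = [(None, None)] * n + [(0.0, pg_max[k]) for k in range(ng)]
res = linprog(np.concatenate([np.zeros(n), c]), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
pg = res.x[n:]                       # optimal dispatch
```

In this toy instance the cheap unit at bus 0 serves the entire load; the same LP structure is what an ISO would solve on a full network such as RTS-GMLC.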
\noindent \textbf{Step 2:} Independent of any ISO collaboration, data center operators shift their load to minimize carbon emissions. To guide this effort, a metric known as the \textit{locational marginal carbon emissions} was proposed in \cite{lindberg2021the}. In \cite{lindberg2021a} the authors demonstrated the superiority of this metric to other more commonly studied metrics such as the average carbon emissions or excess low carbon power.
To calculate the locational marginal carbon emissions, we first consider a basic optimal solution $x^* = [\theta^*, P_g^*] \in \mathbb{R}^n$ to the DC OPF \eqref{dcopf}. From sensitivity analysis in linear optimization theory \cite{bertsimas1997introduction}, a basic optimal solution can be written as $Ax^* = b$, where $A \in \mathbb{R}^{n \times n}$ is a full rank matrix consisting of all the active constraints of \eqref{dcopf} at the optimal solution $x^*$. In the case of the DC OPF, the rows of $A$ consist of the equality constraints \eqref{balance} and \eqref{refnode} as well as a subset of the inequality constraints \eqref{lineineq}, \eqref{genineq} that are satisfied at equality for $x^*$.
A small change in load can be represented as a small change in the right hand side $b$, given by $\Delta b = \begin{bmatrix} \Delta P_d & 0 \end{bmatrix}^T$. Assuming that the change is small enough as not to alter the set of active constraints, we compute the optimal change in generation as $A \cdot \Delta x = \Delta b$ where $\Delta x = [\Delta \theta \ \Delta P_g]$. This gives rise to the linear relationship
\vspace{-1mm}
\begin{align}
\begin{bmatrix} \Delta \theta \\ \Delta P_g \end{bmatrix} &= A^{-1} \cdot \begin{bmatrix} \Delta P_{d} \\ 0 \end{bmatrix}
\end{align}
If we denote the matrix consisting of the last $|\mathcal{G}|$ rows and first $|\mathcal{N}|$ columns of $A^{-1}$ by $B$, this gives the linear relationship between load and generation changes, $\Delta P_g = B \cdot \Delta P_d$.
Consider the cost vector $g \in \mathbb{R}^{| \mathcal{G}|}$ that measures the carbon emissions of each generator per MW. Specifically, the $i$th component of $g$ is the carbon intensity of generator $i$. Multiplying each side of $\Delta P_g = B \cdot \Delta P_d$ on the left by $g$ gives us the change in carbon emissions
\vspace{-1mm}
\begin{align}
\Delta CO_2 = g \cdot \Delta P_g = g \cdot B \cdot \Delta P_d = \lambda_{\text{CO}_2} \Delta P_d, \label{newobj}
\end{align}
where $\lambda_{\text{CO}_2} = g \cdot B$. Intuitively, the $k$th component of $\lambda_{\text{CO}_2}$ measures how an increase of $1$ MW of load at node $k$ will affect the total carbon emissions of the system.
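To make the construction of $\lambda_{\text{CO}_2}$ concrete, the sketch below uses a hypothetical 3-bus system (all values invented) with a dirty unit at bus 0 and a clean unit at bus 2, where at the assumed optimum the active set consists of the three nodal balance equations, the reference-angle constraint and the lower generation bound of the clean unit, so $A$ is square and invertible:

```python
import numpy as np

# Hypothetical 3-bus system (all values invented): variables ordered as
# x = [theta_0, theta_1, theta_2, Pg_0, Pg_2], susceptance 10 p.u. on each
# of the lines (0,1), (1,2), (0,2).  The assumed active set: three balance
# equations, the reference angle, and the lower bound of the unit at bus 2.
A = np.array([
    [-20.0,  10.0,  10.0, 1.0, 0.0],   # balance, bus 0
    [ 10.0, -20.0,  10.0, 0.0, 0.0],   # balance, bus 1
    [ 10.0,  10.0, -20.0, 0.0, 1.0],   # balance, bus 2
    [  1.0,   0.0,   0.0, 0.0, 0.0],   # theta_ref = 0
    [  0.0,   0.0,   0.0, 0.0, 1.0],   # Pg_2 = 0 (active lower bound)
])
B = np.linalg.inv(A)[3:, :3]           # Delta Pg = B * Delta Pd

# Invented carbon intensities: dirty unit at bus 0, clean unit at bus 2.
g = np.array([0.9, 0.1])
lam_co2 = g @ B                        # locational marginal carbon emissions
```

Since the clean unit is pinned at its lower bound here, every marginal MW is served by the dirty unit and $\lambda_{\text{CO}_2} = 0.9$ at every bus; once a line limit or a different generator bound enters the active set, the entries of $\lambda_{\text{CO}_2}$ differentiate across buses.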
We let $\mathcal{C}$ denote the set of data center loads that can geographically shift load and consider optimization variables that denote the change in load at data center $i$, $\Delta P_{d,i}$, and the shift in load from data center $i$ to $j$, $s_{ij}$. The geographic load shifting optimization problem is given by:
\begin{subequations}\label{datacenteropt}
\begin{align}
\min_{\Delta P_{d}, s} \ \ & \sum_{i \in \mathcal{C}} \lambda_{\text{CO}_2,i} \Delta P_{d,i} \label{loadshiftobj} \\
\text{s.t. } \ \ &\Delta P_{d,i} = \textstyle\sum_{j\in\mathcal{C}} s_{ji} - \textstyle\sum_{k\in\mathcal{C}} s_{ik} \quad &&\forall i\in\mathcal{C} \label{datacenterflex1} \\
\textstyle &\sum_{i\in\mathcal{C}} \Delta P_{d,i}= 0 \label{datacenterflex2} \\
&- \epsilon_i \cdot \text{Cap}_i \leq \Delta P_{d,i} \leq \epsilon_i \cdot \text{Cap}_i \quad \!\!&&\forall i\in\mathcal{C} \label{datacenterlim1} \\
&0 \leq \Delta P_{d,i} + P_{d,i} \leq \text{Cap}_i \quad &&\forall i\in\mathcal{C} \label{datacenterlim3} \\
& 0 \leq s_{ij} \leq M_{ij} \quad &&\forall ij\in\mathcal{C}\!\times\!\mathcal{C}. \label{datacenterlim2}
\end{align}
\end{subequations}
The objective \eqref{loadshiftobj} minimizes the change in carbon emissions as a function of the change in load, \eqref{datacenterflex1} enforces that the change in load at a given data center is equal to the total load shifted in minus the total load shifted out, while \eqref{datacenterflex2} says the sum of all load shifts must be zero. Constraint \eqref{datacenterlim1} limits the amount each data center can shift to a fraction $\epsilon$ of the data center capacity $\text{Cap}$. Constraint \eqref{datacenterlim3} keeps the total load at each data center between zero and its capacity, and \eqref{datacenterlim2} puts an upper bound $M_{ij}$ on how much load data center $i$ can send to data center $j$.
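A minimal instance of \eqref{datacenteropt} can be solved as a small linear program. The sketch below uses two data centers with invented values of $\lambda_{\text{CO}_2}$, load, capacity, $\epsilon$ and $M$:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-data-center instance of the shifting LP (values invented):
# lambda_CO2 = [0.9, 0.1] tons/MW, both sites at 250 MW with 300 MW capacity,
# epsilon = 0.2, and a loose pairwise shift bound M.
lam = np.array([0.9, 0.1])
pd, cap, eps, M = np.array([250.0, 250.0]), np.array([300.0, 300.0]), 0.2, 1e3

# x = [dP_1, dP_2, s_12, s_21]
c = np.concatenate([lam, [0.0, 0.0]])
A_eq = np.array([
    [1.0, 0.0,  1.0, -1.0],          # dP_1 = s_21 - s_12
    [0.0, 1.0, -1.0,  1.0],          # dP_2 = s_12 - s_21
    [1.0, 1.0,  0.0,  0.0],          # shifts sum to zero
])
# Box bounds fold the shift limit and the capacity limit into one interval.
bounds = [(max(-eps * cap[i], -pd[i]), min(eps * cap[i], cap[i] - pd[i]))
          for i in range(2)] + [(0.0, M)] * 2
res = linprog(c, A_eq=A_eq, b_eq=np.zeros(3), bounds=bounds)
dP = res.x[:2]
```

Here $50$ MW moves from the dirty site to the clean one (the capacity bound binds before the $\epsilon$ bound), for a predicted reduction of $40$ tons.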
\noindent \textbf{Step 3:} Finally, the ISO resolves the DC OPF \eqref{dcopf} with the new load profile, $P_{d,i}' = P_{d,i} + \Delta P_{d,i}^*$, where $\Delta P_{d,i}^*$ is the optimal solution to \eqref{datacenteropt} for all $i \in \mathcal{N}$.
\section{Realistic Data Center Load Shifting Model}\label{sec:3}
The above model has several drawbacks. First, it is unrealistic to assume that the ISO resolves the market clearing twice, once before and once after the shifting has happened. Second, since the model is linear, we tend to see large load shifts even with small differences in $\lambda_{\text{CO}_2}$ between data center locations. Since $\lambda_{\text{CO}_2}$ is a local sensitivity factor that is only accurate near the previous optimal solution, these large shifts lead to inaccurate results that sometimes increase carbon emissions. To address these issues, we introduce two improvements to the model: cumulative load shifting and regularization.
\subsection{Cumulative load shifts}
We refine the model defined in \cite{lindberg2021the} by considering \emph{cumulative load shifts}.
Instead of resolving the DC OPF in Step $3$ of the above model, the load shift is applied to the market clearing in the next time step.
Specifically, the algorithm runs as follows:
\noindent\textbf{Step 1}: At time $t$, the ISO solves the DC OPF \eqref{dcopf} with data center load set to $P_d^{t}$.
\noindent\textbf{Step 2}: Given information about $\lambda_{\text{CO}_2}$ as described above, the data center operator computes a load shift $\Delta P_d^{t}$ according to \eqref{datacenteropt}. Then, the data center load for time $t+1$ is set to
$P_d^{t+1}= P_d^{t} + \Delta P_d^{t}$, and the algorithm proceeds to Step $1$ of the next time step.
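The two-step loop can be sketched as follows, with the market clearing and the shifting optimization replaced by toy stand-in functions (all behavior invented for illustration); the point is only that the shift computed at time $t$ carries into the load used at time $t+1$, rather than resetting to the original profile:

```python
# Minimal sketch of the cumulative shifting loop (Steps 1-2), with the DC OPF
# and the shifting LP replaced by invented stand-ins for two sites.
def clear_market(pd):
    """Stand-in for the DC OPF: returns toy lambda_CO2 values per site."""
    # Site 0 is assumed dirtier whenever it carries at least as much load.
    return [0.9, 0.1] if pd[0] >= pd[1] else [0.5, 0.5]

def compute_shift(lam, pd, eps=0.02, cap=300.0):
    """Stand-in for the shifting LP: move eps*cap from dirty to clean site."""
    step = eps * cap
    if lam[0] > lam[1] and pd[0] - step >= 0 and pd[1] + step <= cap:
        return [-step, step]
    return [0.0, 0.0]

pd = [250.0, 250.0]
for t in range(10):
    lam = clear_market(pd)               # Step 1: ISO clears the market
    dP = compute_shift(lam, pd)          # Step 2: data centers choose a shift
    pd = [pd[0] + dP[0], pd[1] + dP[1]]  # shift applied at time t+1
```

In this invented example the loop shifts once and then settles, since the stand-in $\lambda_{\text{CO}_2}$ values equalize after the first shift.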
While the cumulative load shifting model more accurately reflects the current market set up, it introduces an additional inaccuracy in our model. The locational marginal carbon emission value $\lambda_{\text{CO}_2}$ at each data center is derived as a linearization from the operating point at time $t$, but the internal data center shifting optimization will only affect the market clearing at time $t+1$. The expectation is that since operating conditions remain similar between time steps, shifting with respect to $\lambda_{\text{CO}_2}$ will still lead to a decrease in total system carbon emissions.
We also note that cumulative load shifting can increase accuracy relative to the existing model, particularly when the load shift allowed in each time step is small (i.e., only a small fraction $\epsilon$ can be shifted). In this case, changes in the data center load build up slowly over time. This is in contrast to our previous model, where the data center load was reset to the original value $P_d$ in each time step.
\subsection{Regularizing load shifts}
To discourage large load shifts, which can cause oscillations and increased emissions,
we propose to add a regularization term (i.e., a quadratic penalty) to the objective.
Specifically, this model replaces the objective value \eqref{loadshiftobj} with
\[
\sum_{i \in \mathcal{C}} \lambda_{\text{CO}_2, i} \Delta P_{d,i} + \gamma \| \Delta P_d \|_2^2
\]
where $\gamma \in \mathbb{R}$ is a regularization parameter. The goal in using this regularization term is to discourage large shifts that lead to an increase in carbon emissions as well as increase the accuracy of the data center driven shifting model. However, the regularization term can also be interpreted as a quadratic cost on load shifting. This ensures that while small shifts are cheap and frequent, we only shift a large amount of load when there will be a large reduction in carbon emissions.
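For a two-site toy instance (invented $\lambda_{\text{CO}_2}$ values), the regularized problem becomes a small quadratic program; the sketch below shows how the penalty shrinks the optimal shift from the bound down to the analytic value $-(\lambda_1-\lambda_2)/(4\gamma)$:

```python
import numpy as np
from scipy.optimize import minimize

# Two-site toy instance (invented values): lambda_CO2 = [0.9, 0.1], gamma = 1.5.
# Without the penalty the LP pushes the shift to its 50 MW bound; with it the
# optimum shrinks to -(lam_1 - lam_2) / (4 * gamma), about -0.133 MW.
lam, gamma = np.array([0.9, 0.1]), 1.5

obj = lambda dP: lam @ dP + gamma * np.sum(dP ** 2)
res = minimize(obj, x0=np.zeros(2), bounds=[(-50.0, 50.0)] * 2,
               constraints=({'type': 'eq', 'fun': lambda dP: dP.sum()},))
```

With the shift balance constraint $\Delta P_2 = -\Delta P_1$, the objective reduces to $(\lambda_1-\lambda_2)\Delta P_1 + 2\gamma \Delta P_1^2$, whose minimizer is the small interior point above rather than a bound.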
Throughout the rest of this paper we refer to the model outlined in
this section
as ($\lambda_{\text{CO}_2}-$shift).
\section{Benchmark model for optimal shifting}\label{sec:4}
Our next contribution is to introduce a new model to benchmark the data center driven shifting model. Since the shifts provided by $\lambda_{\text{CO}_2}$ are calculated by a linear sensitivity, they can be inaccurate, even giving shifting profiles that increase carbon emissions.
The problem of identifying the optimal load shift data center operators should employ to minimize carbon emissions can be modelled as a bilevel linear program.
The upper level problem identifies the optimal choice of load shift $\Delta P_d$ to minimize carbon, i.e.,
\begin{align}
\min_{\Delta P_d,s, P_g^*} &\ g^T P_g^*
\nonumber \\
\text{s.t. }~~&P_g^* = \arg \min \eqref{opt:dcopf_with_flex} \label{opt:bilevel} \tag{Opt-shift}\\
&(\Delta P_d, s) \in \mathcal{P} \nonumber
\end{align}
Here, the last constraint represents the set of feasible load shifts from the data center perspective, i.e., $\mathcal{P}$ is the polytope of permissible load shifts defined by the constraints in \eqref{datacenteropt}.
The first constraint states that the generation values $P_g^*$ are the solution to the lower-level optimization problem \eqref{opt:dcopf_with_flex}. This problem is a version of the standard DC OPF \eqref{dcopf} in which the nodal balance constraints include the change in demand. Formally, we write it as
\begin{align*}
&\min_{P_g, \theta} \ c^T P_g \quad
\text{subject to} \\
& \text{Constraints } \eqref{lineineq},\eqref{genineq},\eqref{refnode} \tag{DC-shift} \label{opt:dcopf_with_flex} \\
& \sum_{\ell\in\mathcal{G}_i} \! P_{g,\ell} -\!\!\! \sum_{\ell\in\mathcal{D}_i} \!(P_{d,\ell} + \Delta P_{d,\ell}) = \!\!\!\!\!\!\! \sum_{j:(i,j)\in\mathcal{L}} \!\!\!\!\!\!\!\!-\beta_{ij}(\theta_i \!- \! \theta_j), && \!\!\!\forall i\!\in\!\mathcal{N}
\end{align*}
As in $(\lambda_{\text{CO}_2}$-shift), we also consider cumulative load shifting in \eqref{opt:bilevel}.
Specifically, at each data center $\ell \in \mathcal{C}$ and time step $t$, we assume the load $P_{d,\ell}$ in \eqref{opt:dcopf_with_flex} reflects the sum of the new load at time $t$ and the load shift $\Delta P_{d, \ell}$ computed at time $t-1$. From here on, when we refer to the model \eqref{opt:bilevel}, we assume it is employed with this cumulative load shifting.
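Although \eqref{opt:bilevel} is a bilevel program, its logic can be illustrated on a small instance by brute force: enumerate candidate shifts, solve the lower-level DC OPF for each, and keep the shift with the lowest resulting emissions. The sketch below does this for a hypothetical congested 3-bus system (all parameters invented); it is only an illustration of the benchmark's structure, not the solution method used in this paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical congested 3-bus system (all parameters invented): cheap dirty
# unit at bus 0, expensive clean unit at bus 2, shiftable loads at buses 1
# and 2, line limit 0.7 p.u.  We enumerate shifts s of load from bus 1 to
# bus 2 (negative s shifts the other way).
lines, beta, f_lim = [(0, 1), (1, 2), (0, 2)], 10.0, 0.7
c, g = np.array([10.0, 30.0]), np.array([0.9, 0.1])
gens, pg_max = [0, 2], np.array([2.0, 2.0])

def solve_dcopf(pd):
    """Lower level: cost-minimizing dispatch for nodal demand pd."""
    n, m, ng = 3, len(lines), len(gens)
    A_eq = np.zeros((n + 1, n + ng))
    for i, j in lines:
        A_eq[i, i] -= beta; A_eq[i, j] += beta
        A_eq[j, j] -= beta; A_eq[j, i] += beta
    for k, b in enumerate(gens):
        A_eq[b, n + k] = 1.0
    A_eq[n, 0] = 1.0                 # reference angle
    A_ub = np.zeros((2 * m, n + ng))
    for r, (i, j) in enumerate(lines):
        A_ub[r, i], A_ub[r, j] = beta, -beta
        A_ub[m + r, i], A_ub[m + r, j] = -beta, beta
    bnds = [(None, None)] * n + [(0.0, pg_max[k]) for k in range(ng)]
    res = linprog(np.concatenate([np.zeros(n), c]), A_ub=A_ub,
                  b_ub=np.full(2 * m, f_lim), A_eq=A_eq,
                  b_eq=np.concatenate([pd, [0.0]]), bounds=bnds)
    return res.x[n:]

# Upper level by enumeration: pick the shift minimizing g' * Pg*.
emissions, s_star = min(
    (float(g @ solve_dcopf(np.array([0.0, 1.0 - s, 0.5 + s]))), float(s))
    for s in np.linspace(-0.3, 0.3, 61)
)
```

In this invented case the benchmark moves load toward bus 1: the added congestion forces the clean unit on, cutting emissions even though cost rises, mirroring the cost-emissions tension observed in our case study.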
\section{Computational Results}\label{sec:5}
We next analyze the efficacy in carbon reduction of $(\lambda_{\text{CO}_2}$-shift) versus \eqref{opt:bilevel}.
\subsection{Accuracy of $(\lambda_{\text{CO}_2}$-shift)}
An important aspect of $(\lambda_{\text{CO}_2}$-shift) that needs to be considered is the accuracy of the predicted change to carbon emissions and generation cost. Specifically, using the relation
$\Delta P_g = B \cdot \Delta P_d $,
we can compute the predicted change in generation resulting from the load shift $\Delta P_d$. Using this predicted generation change, we can derive the predicted change in cost and carbon emissions to the system as a result of the load shift.
We compare this predicted change to the true change in carbon emissions.
\subsection{Test Case}
We perform an extensive year-long analysis of the carbon reduction methods mentioned above using the RTS-GMLC system \cite{barrows2020the}. This system has $73$ buses, $158$ generators and $120$ lines. Since the original system does not designate data center loads, we assign data centers at buses $103, 107, 204$ and $322$. We assume that cumulatively, the four data centers consume a fixed power of $1000$ MW at each time step throughout the year, although the distribution of that power among the four data centers varies. We assume at time step $0$, each data center starts with $250$ MW of load. For all other loads and renewable generation, we use the real-time, i.e., $5$ minute, load and generation data provided with \cite{barrows2020the}. Over the course of the year, this system serves $526,220,000$ MW of load, of which $105,408,000$ MW, or roughly $20.03\%$, is data center load.
Adding these large data center loads to the network greatly increases the total system load and results in time steps where the original DC OPF is infeasible. To remedy this we change the generation limits by setting $P_g^{\text{min}} = 0$ for all $g \in \mathcal{G}$, and increase the maximum generation limits by $50\%$. At each time step we allow each data center to shift up to $20\%$ of its total capacity, i.e. $50$ MW, and enforce that data center capacities remain between $0$ and $300$ MW. Further, we put no limitations on how much load each data center can shift to one another.
\subsection{The effect of regularization}
We first investigate the effect of the regularization parameter $\gamma$. The effect of various regularization parameters on generation cost, total system carbon emissions and total load shift is shown in Figure~\ref{fig:gamma_varies}, where the orange and blue lines represent the predicted and actual values, respectively. Figure~\ref{fig:gamma_varies}(a) shows that the minimum total system carbon emissions occurs when the regularization parameter $\gamma = 1.5$ is used. In addition, we see in both Figure~\ref{fig:gamma_varies}(a) and (b) that as the regularization parameter $\gamma$ increases, the difference between the predicted and actual carbon emissions and generation costs decreases. This indicates that including a regularization term helps not only the efficacy of the data center driven shifting model, but also its accuracy.
The purpose of regularization is to discourage load shifting in cases where it is not predicted to make a large difference. Figure~\ref{fig:gamma_varies}(c) shows that as the regularization parameter increases, the total load shifted throughout the year decreases.
We see that when the regularization parameter is set at $\gamma = 1.5$, the total amount of load shifted is less than half of the amount of load shifted when $\gamma = 0$. Considering that the carbon emissions and generation cost when $\gamma = 1.5$ are lower than when $\gamma = 0$, this demonstrates that shifting less load more strategically can lead to a larger reduction in carbon emissions and a smaller increase in generation costs.
\begin{figure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.8\linewidth]{carbon_emissions.png}
\caption{Change in carbon emissions.}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.8\linewidth]{generation_cost.png}
\caption{Change in generation cost.}
\label{fig:sfig2}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.8\linewidth]{total_shift.png}
\caption{Change in total load shift.}
\label{fig:sfig3}
\end{subfigure}
\caption{Change in carbon emissions, generation cost and load shift as the regularization parameter $\gamma$ varies.}
\label{fig:gamma_varies}
\end{figure}
\begin{table*}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& DC OPF & \eqref{opt:bilevel} & $\lambda_{\text{CO}_2}-$shift: $\gamma = 0$ & $\lambda_{\text{CO}_2}-$shift: $\gamma = 1.5$ \\
\hline
Generation Cost& $3,802,706,000$ & $4,981,076,000$ & $3,847,332,000$ & $3,843,847,000$ \\
CO$_2$ Emissions & $164,402,000$ & $110,444,000$ & $161,522,000$ & $161,427,000$ \\
Total Shifts & 0 & $1,048,000$ & $6,199,000$ & $2,245,000$\\
\hline
\end{tabular}
\vspace{1mm}
\caption{Summary of results from all models}
\label{tab:result_summary}
\end{table*}
\subsection{Comparison with Opt-Shift and Original DC OPF solution}
We next compare the solutions for $(\lambda_{\text{CO}_2}$-shift) with regularization parameters $\gamma=0$ and $\gamma=1.5$ with the original DC OPF solution and the solution obtained using our benchmark model \eqref{opt:bilevel}.
These results are given in Table~\ref{tab:result_summary}. We see that when considering ($\lambda_{\text{CO}_2}$-shift) with no regularization, carbon emissions relative to the original DC OPF decrease by around 2.8 million tons or $1.75\%$. This reduction is achieved while shifting around $6.2$ million MW of load. Conversely, once the regularization term $\gamma=1.5$ is added, we achieve an even greater reduction in carbon emissions, namely $2,975,000$ tons or $1.81\%$, while only shifting around $2.25$ million MW of load. In addition, when considering regularization, total system generation costs only increase by $1.08\%$, while without regularization they increase by $1.17\%$.
In contrast to the above results, we see dramatic carbon savings when using the benchmark \eqref{opt:bilevel}. In this case we save $53,958,000$ tons of carbon, i.e. $32.82\%$, while shifting only a little over $1$ million MW. However, these savings come at a major increase in generation costs: \eqref{opt:bilevel} increases generation costs by $\$ 1,178,370,000$, or $30.99\%$, over the original DC OPF. This benchmark model suggests that dramatic reductions in carbon emissions are possible even with limited data center flexibility, but come at a large increase in generation costs.
\subsection{Carbon Emissions vs Generation Costs}\label{ss:first_results}
As seen above,
minimizing carbon emissions can lead to an increase in generation cost.
To better understand the trade-off between carbon emissions and cost, we consider the benchmark model \eqref{opt:bilevel} with objective function
\[
(\alpha c^T + (1 - \alpha)g^T)P_g^*
\]
and $(\lambda_{\text{CO}_2}-$shift) with objective function
\[
(\alpha \text{LMP} + (1 - \alpha)\lambda_{\text{CO}_2})\Delta P_d + 1.5 \cdot \| \Delta P_d \|_2^2
\]
in place of \eqref{loadshiftobj}
where $\alpha \in [0,1]$ is a trade-off parameter that weights the emphasis on minimizing carbon emissions versus generation costs, and LMP is the vector of locational marginal prices at each node.
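As a toy illustration of the blended objective (invented LMP and $\lambda_{\text{CO}_2}$ values for two sites, with the $\gamma = 1.5$ penalty), the sketch below sweeps $\alpha$ and shows the optimal shift flipping direction as the emphasis moves from emissions to cost:

```python
import numpy as np
from scipy.optimize import minimize

# Invented two-site values: LMP in $/MWh and lambda_CO2 in tons/MWh.  The
# cheap site (bus 0) is assumed dirty and the expensive site (bus 1) clean.
lmp, lam, gamma = np.array([20.0, 35.0]), np.array([0.9, 0.1]), 1.5

def best_shift(alpha):
    """Optimal regularized shift under the alpha-blended coefficients."""
    coef = alpha * lmp + (1 - alpha) * lam
    res = minimize(lambda dP: coef @ dP + gamma * np.sum(dP ** 2),
                   np.zeros(2), bounds=[(-50.0, 50.0)] * 2,
                   constraints=({'type': 'eq', 'fun': lambda d: d.sum()},))
    return res.x

shifts = {a: best_shift(a) for a in (0.0, 0.5, 1.0)}
```

At $\alpha = 0$ load moves toward the low-emission site; at $\alpha = 1$ it moves toward the low-price site, so in this instance the two objectives pull the shift in opposite directions.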
The trade off between minimizing carbon emissions and generation cost is shown graphically in Figure~\ref{fig:gen_constrained_bilevel}.
\begin{figure}
\centering
\includegraphics[width = 0.4\textwidth]{tradeoff.png}
\caption{Trade off between carbon emissions and generation cost.}
\label{fig:gen_constrained_bilevel}
\end{figure}
When considering ($\lambda_{\text{CO}_2}-$shift), shown in yellow, we see a small variation in the overall system generation cost and carbon emissions that remains close to the carbon emissions and generation cost of the original DC OPF. This is consistent with the results shown above, and is due to the fact that this model considers small shifts away from an operating point that minimizes generation costs.
The benchmark model \eqref{opt:bilevel} produces a much larger variation in operating points as we change the trade-off parameter $\alpha$.
As $\alpha$ increases, the model produces a large increase in carbon emissions for only a moderate cost savings. In addition, we see that for \eqref{opt:bilevel} to achieve lower carbon emissions than the DC OPF and the $(\lambda_{\text{CO}_2}-$shift), a major increase to generation cost is needed. This demonstrates that even with limited geographic load shifting flexibility, a large reduction in carbon emissions is possible but it comes at the price of significantly higher generation costs.
Figure~\ref{fig:gen_constrained_bilevel} also demonstrates an interesting phenomenon, namely the greedy nature of \eqref{opt:bilevel}.
When only trying to minimize carbon emissions, \eqref{opt:bilevel} is able to reduce total system carbon emissions by roughly $33 \%$ but this comes at a significant increase to total system generation cost.
However, for the same generation cost, \eqref{opt:bilevel} gives a solution with higher carbon emissions than the DC OPF or $(\lambda_{\text{CO}_2}$-shift). This demonstrates that the greedy nature of \eqref{opt:bilevel} is not necessarily an optimal way to shift load over a long time span. Specifically, \eqref{opt:bilevel} finds the load shift that gives the largest reduction in carbon emissions at that time step, with no consideration of how the shift will affect the carbon emissions of the system at the next time step. Using forecasts of future load and generation information to aid in a long term load shifting strategy is left as future work.
\subsection{Data Center Operating Load}
Finally, we consider the impact of each model on the data center operating load. We consider $(\lambda_{\text{CO}_2}$-shift) with regularization parameter $\gamma = 1.5$ and \eqref{opt:bilevel}, and two different limits on the amount of load that can be shifted in each time step, $\epsilon = 0.01$ and $\epsilon = 0.2$.
In Figure~\ref{fig:dc_load_lambda_CO2} we see the operating conditions of each data center over the course of the first day when using $(\lambda_{\text{CO}_2}$-shift) when $\epsilon = 0.01$ (left) and $\epsilon = 0.2$ (right). In both cases we see similar overall trends in operating load.
However, $\epsilon = 0.2$ leads to much quicker changes and also dramatic oscillations in the load at data centers $1$ and $3$ towards the end of the day.
Similarly, in Figure~\ref{fig:bilevel_load} we see the operating conditions of each data center over the course of the first day using \eqref{opt:bilevel} when $\epsilon = 0.01$ (left) and $\epsilon = 0.2$ (right). Again, we see similar trends in data center load for both values of $\epsilon$, and, as with $(\lambda_{\text{CO}_2}$-shift), $\epsilon = 0.2$ leads to more oscillations in data center operating load.
Interestingly, there are some differences between $(\lambda_{\text{CO}_2}$-shift) and \eqref{opt:bilevel}. In both cases we see an initial pull for data center $4$ to operate at maximum capacity while the other data centers operate at lower capacities. This implies that the $\lambda_{\text{CO}_2}$ value for data center $4$ is accurately dictating that it is the most carbon neutral data center. However, under $(\lambda_{\text{CO}_2}$-shift) data center $2$ also operates at maximum capacity, whereas under \eqref{opt:bilevel} it initially drops to the lowest operating load among the data centers. This discrepancy highlights the inaccuracy of shifting with respect to $\lambda_{\text{CO}_2}$.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.2\textwidth]{load_day1_01_no_legend.png}
\includegraphics[width = 0.2\textwidth]{load_day1_2_no_legend.png}
\includegraphics[width = 0.075\textwidth]{legend.png}
\caption{Load at each data center during the first $24$ hours using $\lambda_{\text{CO}_2}$-shift with $\gamma = 1.5$ and $\epsilon = 0.01$ (left) and $\epsilon = 0.2$ (right).}
\label{fig:dc_load_lambda_CO2}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width = 0.20\textwidth]{bilevel_load_day1_01.png}
\includegraphics[width = 0.20\textwidth]{bileve_load_day1_2.png}
\includegraphics[width = 0.075\textwidth]{legend.png}
\caption{Load at each data center during the first $24$ hours using \eqref{opt:bilevel} with $\epsilon = 0.01$ (left) and $\epsilon = 0.2$ (right).}
\label{fig:bilevel_load}
\end{figure}
Finally, we investigate how using different $\epsilon$ values impacts the overall effect on carbon emissions.
In Figure~\ref{fig:vary_epsilon} we see the change in total system carbon emissions as $\epsilon$ varies for both $(\lambda_{\text{CO}_2}$-shift) with $\gamma=1.5$ and \eqref{opt:bilevel}. In both cases we see only a very mild decrease in total carbon emissions as $\epsilon$ increases. Further, for $(\lambda_{\text{CO}_2}$-shift), the accuracy of the model decreases as $\epsilon$ increases, and once $\epsilon > 0.1$ the carbon emissions start to increase. This indicates that allowing only small shifts is not only more desirable from an operational standpoint, avoiding rapid changes and oscillations in data center loading, but also yields carbon savings similar to those of larger shifts.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.22 \textwidth]{co2_emissions_vary_epsilon.png}
\includegraphics[width = 0.22 \textwidth]{bilevel_emissions_epsilon_vary.png}
\caption{Predicted and actual change in carbon emissions using $(\lambda_{\text{CO}_2}$-shift) (left) and the change in carbon emissions using \eqref{opt:bilevel} (right) for varying values of $\epsilon$.}
\label{fig:vary_epsilon}
\end{figure}
\section{Conclusion}\label{sec:6}
In this paper we presented an improved model for data center load shifting to reduce carbon emissions. This model shifts load independently of ISO collaboration via a measure known as locational marginal carbon emissions. We built on existing work, making several improvements that increase the realism and accuracy of our model.
We also proposed a new benchmark model which gives the best load shift at each time step, and compared the results.
The main conclusion from the paper is that smaller load shifts, limited by regularization and shifting caps, are quite effective in reducing carbon emissions. Larger load shifts tend to decrease accuracy of the model and produce less carbon savings. Further, while our benchmark model is able to achieve large carbon reductions, it also significantly increases cost.
This paper suggests many natural directions for future work. First, this work shows that shifting load in a greedy way, i.e., shifting to reduce the maximal amount of carbon at each time step, is not necessarily the best approach when current load shifts impact the load profile in future time steps.
This demonstrates that forecasting future load and generation patterns is important to obtain better solutions.
Finally, the information needed to calculate the locational marginal carbon emissions is currently not made publicly available. Therefore, finding ways to infer and predict $\lambda_{\text{CO}_2}$ at the data center nodes from publicly available data is needed in order to implement $(\lambda_{\text{CO}_2}$-shift) in practice.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
Active Galactic Nuclei (AGN) are known to often be triggered by interactions and mergers between their host galaxies \citep[e.g.][]{2000MNRAS.311..576K, comerford2015, Barrows201703, Goulding201706} which drive large amounts of gas towards the galaxy cores and Massive Black-Holes (MBHs) within \citep{barnes1992}. Many examples of ``dual-AGN'', pairs of observably accreting MBHs in the same system, have been identified in radio, optical and X-ray surveys \citep[e.g.][]{rodriguez2006,Komossa2006,koss2012,comerford2012}. After a galaxy merger, the two MBHs are expected to sink towards the center of the post-merger galaxy due to dynamical friction, which is very effective on $\sim 10^3$ pc scales \citep{Begelman1980, am12}. Once the MBHs reach $\sim$ pc separations and smaller, and eventually become gravitationally bound as a MBH Binary (MBHB), the continued merging of the system depends sensitively on individual stellar scatterings extracting energy from the binary \citep[e.g.][]{Sesana200612, Merritt200705}.
The effectiveness of stellar scattering in `hardening' MBH binaries remains unresolved, although studies are beginning to reach a consensus that the population of stars available for scattering (the `loss-cone') is efficiently refilled \citep[e.g.][]{sesana2015, vasiliev2015}. Of particular interest is whether and which systems are able to reach the $\lesssim 10^{-3}$--$10^{-1}$ pc separations\footnote{for masses $\sim 10^6$ -- $10^9 \, \tr{M}_{\odot}$.} at which point Gravitational Wave (GW) emission can drive the systems to coalesce within a Hubble time \citep{Begelman1980}. While dual-AGN have been observed, there are no known AGN in confirmed gravitational bound binaries. If MBHBs are able to reach periods of $\sim$ yr (frequencies $\sim$ nHz), their GW emission should be detectable by pulsar timing arrays \citep[PTAs;][]{hellings1983, foster1990}---the European \citep[EPTA,][]{desvignes2016}, NANOGrav \citep{arzoumanian201505}, Parkes \citep[PPTA,][]{reardon2016}, and the International PTA \citep[IPTA,][]{Verbiest201602}. The most recent and comprehensive models for the cosmological population of merging MBHBs suggest that PTAs will plausibly make a detection within roughly a decade \citep[e.g.][]{taylor2015, Rosado1503, paper2}, and indeed, the most recent PTA upper-limits on GW signals---particularly on the presence of a power-law, Gravitational-Wave Background (GWB) of unresolved, cosmological sources---have already begun to inform the astrophysical models (\citealt{Simon201603}, \citealt{Taylor201612}; but also, \citealt{middleton201707}).
MBH Binaries form on sub-parsec scales, which, even using VLBI, can only be spatially resolved at relatively low redshifts \citep[e.g.][]{DOrazio201712}. Spectroscopic, and especially photometric, methods, which do not require binaries to be spatially resolved, have recently put forward large numbers of binary candidates \citep{Eracleous201106, Tsalmantza201106, graham2015, charisi2016}. On the theoretical side, few predictions have been made for the expected observability and detection rates of AGN in binary systems. On kpc scales, \citet{2016MNRAS.458.1013S} study dual- and offset- AGN during the (relatively) early stages of a galaxy merger, providing interesting results on the nature and properties of the MBHs in these systems. \citet{Volonteri2009} make rate predictions for tight MBHBs, especially those with large orbital velocities that could be observable as dual broad-line AGN \citep[e.g.][]{Boroson2009}. The authors predict an upper-limit to the detection rate of such systems between $0.6$ -- $1.0\E{-3}$ per unabsorbed AGN.
The focus of this investigation is binaries and candidates identified by periodic variability in photometric surveys of AGN. In particular, \citet{graham2015} find 111 candidates in $\sim 240{,}000$ AGN using the CRTS survey; \citet{charisi2016} find 33 in $\sim 35{,}000$ AGN using PTF; and \citet{Liu201609} initially identify 3 candidates in 670 AGN using PanSTARRS; however, none are persistent in archival data. These three detection rates are roughly consistent at $5\E{-4}$, $9\E{-4}$ and $\lesssim 4\E{-3} \, \textrm{AGN}^{-1}$. It is worth noting that these detection rates are very similar to the limits from orbital-velocity selected populations in \citet{Volonteri2009}, which have similar binary parameters to the orbital-period selection for variability candidates.
The connection between EM and GW observations of MBHBs has already begun to be leveraged using these photometric-variability candidates. While none of the individual candidate systems can be excluded by PTA measurements, \citet{sesana201703} demonstrate that the \textit{population} of MBHBs that they imply leads to a GWB amplitude in tension with existing PTA upper-limits. In \citet{sesana201703}, a phenomenological approach, with as few physical assumptions as possible, is used to connect EM and GW observations. In this follow-up analysis, we rely instead on physically motivated, theoretical models to explore the repercussions on EM observations. Specifically, we use binary populations based on the Illustris hydrodynamic, cosmological simulations \citep[e.g.][]{vogelsberger2014b, nelson2015} coupled with comprehensive semi-analytic merger models \citep{paper1, paper2} and synthetic AGN spectra to make predictions for the occurrence rates of periodically variable AGN. In \secref{sec:meth} we summarize the binary population, the AGN spectra we use to illuminate them, and the models of variability we consider. In \secref{sec:res} we present our results of expected detection rates and the parameters which determine binary observability. Finally, in \secref{sec:disc}, we discuss the limitations of our study and its implications for identifying and confirming MBH binaries through photometric variability studies.
\section{Methods}
\label{sec:meth}
\subsection{MBH Binary Population and Evolution}
\label{sec:meth_mbhb}
Our MBHB populations are based on the MBHs and galaxies in the Illustris simulations. Illustris is an $\left(108 \, \mathrm{Mpc}\right)^3$ volume of gas-cells and particles representing dark matter, stars, and MBHs which is evolved from the early universe to redshift zero \citep[e.g.][]{vogelsberger2014a, vogelsberger2014b, genel2014, torrey2014, rodriguez-gomez2015, nelson2015}. The simulations include sub-grid models for star formation, stellar nucleosynthesis \& metal enrichment, and stellar \& AGN feedback. MBH particles are initialized with a seed mass of $\sim 10^5 \, \tr{M}_{\odot}$ in massive halo centers, after which they grow via accretion of local gas using a Bondi model. Details of the BH prescription and resulting MBH and AGN populations are presented in \citet{sijacki2015}. In the Illustris simulations, after or during a galaxy merger, once MBHs come within $\sim 10^2$ -- $10^3 \textrm{ pc}$ of one-another---roughly their gravitational smoothing length---they are manually merged and moved to the potential minimum of the halo. To more carefully examine the MBHB merger process and dynamics, we `post-process' the MBH mergers using semi-analytic models.
In this section we outline some of the key components of the merger models and the resulting merger dynamics that are described thoroughly in \citet{paper1, paper2}. MBH-MBH ``merger-events" are identified in Illustris on $\sim \textrm{kpc}$ scales. We then consider each of these events independently by extracting the MBH masses, and spherically-averaged galaxy density and velocity profiles for each constituent (dark matter, stars, gas) of the host. These profiles are then used to calculate hardening rates of the semi-major axis ($da/dt$) based on prescriptions for dynamical friction \citep{chan42, bt87}, stellar `loss-cone' scattering \citep{magorrian1999}, viscous drag from a circumbinary disk \citep{hkm09, Tang201703}, and GW emission \citep{peters1963, Peters1964}. Dynamical friction is required to harden the system on \mbox{$10$ -- $10^3 \textrm{ pc}$} scales, after which stellar scattering is typically dominant until the GW-dominated regime on \mbox{$\sim 10^{-2}$ -- $10^{-4} \textrm{ pc}$}. In some systems, viscous drag can be the primary hardening mechanism near $\sim 10^{-2} \textrm{ pc}$. Our population of binaries is roughly $10^4$ systems with total masses between $2\E{6} \, \tr{M}_{\odot}$ and $2\E{10} \, \tr{M}_{\odot}$, with a steeply declining mass-function. The mass ratio of the systems is inversely correlated with total mass: systems with low total-masses can only have near-equal mass ratios due to the minimum MBH mass, and high total-mass systems are dominated by extreme mass-ratio mergers.
Both stellar scattering and viscous drag remain highly uncertain processes. The largest uncertainty affecting merger outcomes is likely the effectiveness of stellar scattering: in particular, how efficiently the stellar `loss-cone'---those stars able to interact with the binary---is repopulated. Typical coalescence lifetimes are gigayears. Binaries which are both very massive $M \equiv M_1 + M_2 \gtrsim 10^9 \, \tr{M}_{\odot}$, and near equal mass ratio $q \equiv M_2/M_1 \gtrsim 0.1$, are generally able to coalesce within a Hubble time. Systems with both lower total masses ($M \lesssim 10^8 \, \tr{M}_{\odot}$), and more extreme mass ratios ($q \lesssim 10^{-2}$) often stall at either kpc or pc separations\footnote{E.g.~for binaries with $M \lesssim 10^7 \, \tr{M}_{\odot}$, $\sim 30\%$ of $q \gtrsim 0.3$ systems coalesce before redshift zero, and only $\sim 10\%$ of those with $q \lesssim 0.3$. For the latter, low mass-ratio systems, this is unsurprising as dynamical friction is often ineffective at hardening to below $\sim$kpc separations \citep[e.g.][]{mcwilliams2014}. In our models, we find that more comparable mass systems are more likely to stall at $\sim$pc scales, with stellar scattering becoming ineffective---likely due to less centrally-concentrated stellar mass distributions (see below).}. The fate of the remaining, intermediate systems depends more sensitively on the assumed dynamical parameters (i.e.~the loss-cone refilling rate). Note that this differs from some previous studies finding that more-massive systems merge \textit{less} effectively \citep[e.g.][]{yu2002, Cuadra200809, Dotti2015}\footnote{The discrepancy between Illustris-based models and those of some semi-analytic merger populations likely has to do with typical stellar densities in the cores of low mass galaxies. In particular, the densities from Illustris may be systematically lower, but this has yet to be examined in detail. 
The fraction of observable systems with low masses ($M \lesssim 10^7 \, \tr{M}_{\odot}$) drops rapidly (e.g.~\figref{fig:10_rate-comp}), implying that this difference between models has little effect on our results.}. The systems which reach $\sim \yr$ periods are thus somewhat biased against low total-masses, and extreme mass-ratios.
Predictions for the GWB and its prospects for detection by PTAs are presented in \citet{paper2}, along with a description of our formalism for eccentric binary evolution. Most models predict GWB amplitudes at periods of $1$ yr, $\ayr \approx 0.5$ -- $0.7 \E{-15}$, roughly a factor of 2 below current sensitivities, and detectable within about a decade. Predictions for GW signals from individually resolvable `single-sources' are presented in \citet{paper3}, and are comparable in detectability to the GWB. The results we present in this paper are relatively insensitive to variations in binary evolution parameters, compared to those of the electromagnetic and observational models we describe below. For reference, the evolutionary model used here assumes an always full loss-cone and initial binary eccentricities of $e_0 = 0.5$.
\subsection{MBH Accretion and AGN Spectra}
\label{sec:meth_spectra}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{{figs/1_acc-ratio}}}
\caption{Accretion-ratio data points from hydrodynamic simulations of MBHBs in circumbinary accretion disks by \citet{farris201310}. The line shows the fit, with the function and parameters given in \refeq{eq:acc_rat}.}
\label{fig:1_acc_ratio}
\end{figure}
Our merger models follow the constituent MBHs of a given binary for long after it has ``merged" in Illustris. After the ``merger'', Illustris records the accretion rate of ambient gas onto the single, remnant MBH. We use this accretion rate as a measure of the fueling to the binary system as a whole, feeding the circumbinary disk: $\dot{M} = \dot{M}_1 + \dot{M}_2$. In our post-processing models, however, the two MBH are \textit{un}merged, leaving an ambiguity in the feeding rate to each individual component. To resolve this, we use the results from the detailed circumbinary-disk simulations in \citet{farris201310}, which give the ratio of accretion rates for a variety of binary mass-ratios: $\lambda = \lambda(q) \equiv \dot{M}_2 / \dot{M}_1$. The simulation data-points are plotted in \figref{fig:1_acc_ratio}, along with a fit described by the function,
\begin{align}
\label{eq:acc_rat}
\begin{split}
\lambda = q^{a_1} e^{-a_2/q} + \frac{a_3}{\left(a_4 q\right)^{a_5} + \left( a_4 q \right)^{-a_5}} \\
a_1 = -0.25, \,\, a_2 = 0.1, \,\, a_3 = 50, \,\, a_4 = 12, \,\, a_5 = 3.5.
\end{split}
\end{align}
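This fit is easy to evaluate; the short sketch below (our own code, with a hypothetical function name, not part of our analysis pipeline) reproduces the limiting behaviors quoted in the text:

```python
import math

# Sketch (ours, not the paper's code) of the accretion-ratio fit,
# lambda(q) = Mdot_2 / Mdot_1, with the parameters quoted in Eq. (acc_rat).
A1, A2, A3, A4, A5 = -0.25, 0.1, 50.0, 12.0, 3.5

def accretion_ratio(q):
    """Fit to the circumbinary-disk simulation data of Farris et al."""
    term1 = q**A1 * math.exp(-A2 / q)
    term2 = A3 / ((A4 * q)**A5 + (A4 * q)**(-A5))
    return term1 + term2

# The secondary out-accretes the primary (lambda > 1) for q >~ 0.03,
# peaking at lambda ~ 25 near q ~ 0.08, as quoted in the text.
print(round(accretion_ratio(0.08), 1))  # ~25
```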
We assume that the system is Eddington limited on large scales, i.e.~\mbox{$\dot{M} \leq \dot{M}_\trt{Edd} \equiv \dot{M}_{\textrm{Edd},1} + \dot{M}_{\textrm{Edd},2}$}, where $\dot{M}_\trt{Edd} = 1.4\E{18} \textrm{ g s}^{-1} \, \scale{M}{\tr{M}_{\odot}} \, \scale[-1]{\varepsilon_\trt{rad}}{0.1}$, and $\varepsilon_\trt{rad}$ is the radiative efficiency which we take as $0.1$. We let each MBH individually exceed Eddington\footnote{The alternative is explored in \secref{sec:app_pars}.} \citep[e.g.][]{Jiang2014} which can occur for the secondary when $\lambda > 1.0$, corresponding to $q \gtrsim 0.03$. The secondary accretion rate is maximized at $\lambda_\textrm{max} = \lambda(q \approx 0.08) \approx 25$.
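As a concrete sketch of this prescription (our own illustrative code, with hypothetical function names; the accretion ratio $\lambda$ is passed in as an input rather than recomputed):

```python
def mdot_edd(mass_msun, eps_rad=0.1):
    """Eddington accretion rate in g/s: Mdot_Edd = 1.4e18 (M/Msun)(0.1/eps)."""
    return 1.4e18 * mass_msun * (0.1 / eps_rad)

def split_feeding(mdot_total, m1_msun, m2_msun, lam):
    """Cap the *total* feeding rate at the combined Eddington rate, then
    split it between primary and secondary using lam = Mdot_2 / Mdot_1.
    Either component may individually exceed its own Eddington rate,
    as allowed in the text."""
    mdot = min(mdot_total, mdot_edd(m1_msun) + mdot_edd(m2_msun))
    mdot1 = mdot / (1.0 + lam)
    return mdot1, lam * mdot1
```

For $M = 10^9 \, \tr{M}_{\odot}$ and $\varepsilon_\trt{rad} = 0.1$ this gives $\dot{M}_\trt{Edd} = 1.4\E{27} \textrm{ g s}^{-1} \approx 22 \, \tr{M}_{\odot} \, \yr^{-1}$.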
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{{figs/2_agn-spectra}}}
\caption{AGN spectra for an MBH of mass $M = 10^9 \, \tr{M}_{\odot}$, and a variety of accretion rates. For Eddington ratios $\fedd \equiv \dot{M} / \dot{M}_\trt{Edd} < 10^{-2}$, we use the ADAF emission model from \citet{Mahadevan1997}, while for larger accretion rates we use a thermal, Shakura-Sunyaev spectrum. For reference, the vertical colored lines are the optical bands: $[i, r, g, v, b, u]$.}
\label{fig:2_agn_spec}
\end{figure}
The parameters for MBH evolution in Illustris are calibrated to match the observed M--$\sigma$ relation, and the AGN (bolometric) luminosity function based on a constant radiative efficiency of $0.05$ \citep[see][]{sijacki2015}. For our analysis, we calculate full spectra for each MBH based on its mass and Eddington ratio, $\fedd \equiv \dot{M} / \dot{M}_\trt{Edd}$, obtained from the accretion rates described above. For $\fedd \geq 10^{-2}$, we assume the accretion flow is radiatively efficient and use a \citet{shakura1973} `thin'-disk solution, which assumes emission is purely thermal from each annulus of the disk. For $\fedd < 10^{-2}$ we assume radiatively inefficient accretion in the form of an advection-dominated accretion flow \citep[ADAF;][]{Narayan1995b}, and use the emission model from \citet{Mahadevan1997}. The ADAF model includes self-absorbed synchrotron emission, bremsstrahlung, and inverse-Compton scattering of synchrotron photons. We thus calculate AGN spectra as,
\begin{align}
\label{eq:spec}
F_\nu & = F_\nu^\trt{thin}(M, \fedd) \hspace{0.01in} & \fedd \geq 10^{-2}, \\
& = F_\nu^\trt{ADAF}(M, \fedd) \hspace{0.01in} & \fedd < 10^{-2}.
\end{align}
Spectra for a variety of accretion rates onto an $M = 10^9 \, \tr{M}_{\odot}$ BH are shown in \figref{fig:2_agn_spec}. The bolometric luminosity and the luminosities in the B- and V- band are shown in \figref{fig:3_agn_lbol}, along with the effective radiative-efficiency ($L_\textrm{bol} / L_\mathrm{Edd}$) and luminosity fractions (the inverse of the bolometric corrections). The change in spectral shape and optical luminosity at the transition of $\fedd = 10^{-2}$, corresponding to the change from thick to thin accretion flows, is clear in both \figref{fig:2_agn_spec}~\&~\figref{fig:3_agn_lbol}. While this transition is likely artificially abrupt, it is consistent with transitions in the state of X-ray binaries, associated with the same changes in accretion regime \citep[e.g.][]{Esin1997}. Regardless, the specific location and sharpness of this transition has little effect on our results as observable systems are predominantly at $\fedd \gtrsim 0.1$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{{figs/3_lum-bol-agn_band-bv_cols}}}
\caption{Luminosity and radiative efficiency versus Eddington ratio. The left panel shows bolometric (dashed), B-band and V-band luminosities (solid lines), calculated for a $M=10^9 \, \tr{M}_{\odot}$ MBH, against a variety of accretion rates. The right panel gives the overall radiative efficiency \mbox{$\varepsilon_\trt{rad} = L_\textrm{bol} / \ledd$}, as well as the fraction of energy emitted in the B- and V- bands (i.e.~the inverse of the bolometric corrections).}
\label{fig:3_agn_lbol}
\end{figure}
The luminosity function of Illustris AGN, using our spectral models, is compared to the observationally determined quasar luminosity function (QLF) from \citet{Hopkins2007} in \figref{fig:4_agn_lumfunc}. Comparing observed and simulated AGN populations is non-trivial. On the observational side there are extinction, bolometric corrections, K-corrections, and selection biases; on the simulation side, we are using disk-integrated quantities based on semi-analytic models instead of either radiative transfer calculations or full disk-simulations. Nonetheless, our models agree with observations to well within an order of magnitude, although the Illustris AGN tend to be systematically over-luminous compared to the observed QLF. Throughout our analysis we present our results primarily in terms of observable binary detections normalized to the predicted number of observable AGN. This reduces errors arising from our reproduction of the luminosity function. In \secref{sec:app_pars} we also present results for a model in which all accretion rates have been lowered by a factor of three, which does produce a better match to the observed QLF.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{{figs/4_agn-func_lum}}}
\caption{AGN luminosity functions constructed from Illustris MBH (solid lines) and the observationally-derived quasar luminosity function (QLF) from \citet[][points / dashed-lines]{Hopkins2007}. The top panel shows the luminosity functions at three different redshifts, \mbox{$z = 0.5$, $1.0$ \& $2.0$}, and the bottom panel shows the ratio of QLFs between Illustris and observations. While our models are consistent with observations to an order of magnitude, our MBH populations and synthetic spectra noticeably over-predict the luminosity function. Note that this comparison is shown for the B-band, while most of our analysis focuses on the V-band.}
\label{fig:4_agn_lumfunc}
\end{figure}
The luminosity functions from \citet{Hopkins2007}, shown in \figref{fig:4_agn_lumfunc}, correct for obscuration. In our analysis, we use the same model which assumes that a luminosity-dependent fraction of systems are observable,
\begin{equation}
\begin{split}
f(L) = \min \left[ 1, \, f_{46} \left( \frac{L}{10^{46} \textrm{ erg s}^{-1}}\right)^\beta\right], \\
\textrm{\textit{B-band}:} \,\,\,\, f_{46} = 0.260, \,\, \beta = 0.082,
\end{split}
\end{equation}
where $L$ is the bolometric luminosity, and the fit values are for the B-band. We convert between bolometric and spectral luminosity using a luminosity-dependent bolometric correction \citep{Hopkins2007},
\begin{equation}
\begin{split}
\frac{L}{L_i} = c_1 \left( \frac{L}{L_\odot}\right)^{k_1} + c_2 \left( \frac{L}{L_\odot}\right)^{k_2}, \\
\textrm{\textit{B-band}:} \,\,\,\, c_1 = 7.40, \,\, c_2 = 10.66, \,\, k_1 = -0.37, \,\, k_2 = -0.014.
\end{split}
\end{equation}
This bolometric correction, which would differ in general from that in our spectral models, is used here for the sole purpose of computing the obscuration fraction in a way consistent with \citet{Hopkins2007}. It is not used elsewhere in our analysis.
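Both fits are simple to evaluate; the sketch below is our own (hypothetical helper names, with an assumed $L_\odot = 3.846\E{33} \textrm{ erg s}^{-1}$):

```python
L_SUN = 3.846e33  # erg/s (assumed solar luminosity)

def bol_correction_B(l_bol):
    """Bolometric correction L / L_B from the quoted B-band fit."""
    x = l_bol / L_SUN
    return 7.40 * x**-0.37 + 10.66 * x**-0.014

def observable_fraction_B(l_bol):
    """Luminosity-dependent unobscured fraction, B-band fit values."""
    return min(1.0, 0.260 * (l_bol / 1e46)**0.082)
```

At $L = 10^{46} \textrm{ erg s}^{-1}$, these give an observable fraction $f \approx 0.26$ and a B-band bolometric correction $L/L_B \approx 7$.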
One additional adjustment is required for spectra from disks in binary systems: the presence of a companion leads to truncation of each ``circum-single'' disk at a radius comparable to the Hill-radius. Specifically we set the outer-edge of each disk to,
\begin{align}
\label{eq:disk-truncate}
r_\trt{max} & = \frac{a}{2} \, \scale[1/3]{M_i}{M}, \\
& \sim 74 \, \rs \scale[2/3]{p}{5 \, \yr} \, \scale[-2/3]{M}{10^9 \, \tr{M}_{\odot}}
\end{align}
where $M_i$ is the mass of the primary or secondary, $a$ is the semi-major axis, $p$ is the orbital period, and $\rs$ is the Schwarzschild radius of the MBH in question. A typical geometry for the binary is shown schematically in \figref{fig:0_schematic}, showing the two circum-single disks around each MBH, within a larger circumbinary disk. While the bright, optical emission in AGN tends to come from relatively small radii, disk truncation can be important for especially massive BHs and those in short-period binaries. For example, the optical luminosity of a $10^9 \, \tr{M}_{\odot}$ MBH, in a $5 \, \yr$ period binary, can be decreased by $\sim 10$ -- $20\%$. When calculating the total luminosities of binaries, we also include contributions from the circumbinary portion of the accretion disk, where, fiducially, we assume that the inner-edge occurs at twice the binary separation, and the inner-edge of each circum-single disk is located at $3 \, \rs$ (i.e.~the inner-most stable circular orbit for a non-spinning BH).
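Combining \refeq{eq:disk-truncate} with Kepler's law recovers the quoted prefactor of $\sim 74 \, \rs$; the short sketch below (ours, with assumed cgs constants) illustrates the calculation:

```python
import math

# cgs constants (assumed values)
G, C = 6.674e-8, 2.998e10
MSUN, YR = 1.989e33, 3.156e7

def truncation_radius_rs(p_yr, m_total_msun, mass_frac=1.0):
    """Outer edge of a circum-single disk, Eq. (disk-truncate), in units of
    the Schwarzschild radius of the total mass M; mass_frac = M_i / M."""
    m = m_total_msun * MSUN
    p = p_yr * YR
    a = (G * m * p**2 / (4.0 * math.pi**2))**(1.0 / 3.0)  # Kepler's law
    r_s = 2.0 * G * m / C**2
    return (a / 2.0) * mass_frac**(1.0 / 3.0) / r_s

print(round(truncation_radius_rs(5.0, 1e9)))  # ~74
```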
\subsection{Populations and Observations}
\label{sec:meth_obs}
We wish to calculate the number of binaries observable at a given log-interval of period, and a particular interval of redshift, $d^2N / dz \, d\log_{10} p$. By using the number density in a comoving volume, $n \equiv dN / dV_c$, we can write,
\begin{equation}
\frac{d^2 N}{dz \, d\log_{10} p} = f_\trt{obs} \, \frac{p}{\ln 10} \, \frac{dn}{dp} \frac{dV_c}{dz}.
\end{equation}
We have introduced the factor $f_\trt{obs}$, the fraction of systems that are observable at a given redshift, which is generally a function of any binary parameter (e.g.~mass, orbital period, inclination, etc).
We consider particular periods of interest $p_j$, and for each simulated binary $i$ in our population, we find the redshift at which it reaches those periods: $z_{ij}$. To calculate the number or number density of sources, we consider discrete bins in redshift, $\Delta z'_k \in [z'_k, z'_{k+1})$, and identify all binaries reaching the period of interest in that bin. A given binary will be counted at all periods that it reaches before redshift zero. In this way, we effectively treat each moment in time, for each binary, as a separate data sample---i.e.~each represents an independent population of astrophysical binaries (at a slightly different redshift).
Because we are sampling in orbital period, instead of evenly in time, we must explicitly account for the time evolution of binaries by considering the fraction of time binaries spend emitting at each period. The temporal evolution of binaries can be written as,
\begin{equation}
\frac{dn}{dp} = \frac{dn}{dt} \, \frac{dt}{df} \frac{df}{dp}.
\end{equation}
Using Kepler's law along with the hardening time, $\bigt_\tr{h} \equiv a / (da/dt)$, we can write,
\begin{equation}
\frac{dt}{df} = \frac{2}{3} \bigt_\tr{h} \, p.
\end{equation}
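As a toy consistency check (not code from our analysis), one can verify this relation numerically for a hardening law with constant $\bigt_\tr{h}$, in which case $a(t) = a_0 e^{-t/\bigt_\tr{h}}$ and the Keplerian frequency scales as $f \propto a^{-3/2}$:

```python
import math

# Toy check: compare a finite-difference dt/df against (2/3) * t_h * p
# for a constant hardening time T_H (arbitrary units; a0 = f0 = 1).
T_H = 1.0e9

def freq(t):
    a = math.exp(-t / T_H)   # a(t) = exp(-t / t_h)
    return a**-1.5           # Kepler: f ~ a^(-3/2)

t, dt = 2.0e8, 1.0
dfdt = (freq(t + dt) - freq(t - dt)) / (2.0 * dt)  # numerical df/dt
analytic = (2.0 / 3.0) * T_H / freq(t)             # (2/3) t_h p, p = 1/f
print(abs((1.0 / dfdt) / analytic - 1.0) < 1e-5)   # True
```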
The number of binaries at period $p = p_j$, in redshift bin $k$, is then,
\begin{align}
\begin{split}
\frac{d N_{jk}}{d\log_{10} p_j} & = \sum_i \frac{f_{\trt{obs},i}}{\ln 10} \cdot \delta(z'_k \leq z_{ij} < z'_{k+1}) \cdot \mathcal{T}_{ijk} \cdot \mathcal{V}_{k}, \\
\mathcal{T}_{ijk} & \equiv \min\left(\frac{2}{3} \frac{t_{\textrm{\tiny h},ij}}{\Delta t_k}\, , \, 1\right) ,\\
\mathcal{V}_{k} & \equiv \frac{1}{\volill} \frac{dV_c}{dz} \Delta z'_k.
\end{split}
\end{align}
The differential, comoving volume of the universe as a function of redshift, $dV_c / dz$, is given in \citet[][Eq.~28]{hogg1999}.
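For concreteness, $dV_c/dz$ can be evaluated with a simple numerical integration; the sketch below (our own) assumes a flat $\Lambda$CDM cosmology with illustrative values $H_0 = 70 \textrm{ km s}^{-1} \, \textrm{Mpc}^{-1}$ and $\Omega_m = 0.3$:

```python
import math

# Sketch of dV_c/dz (Hogg 1999, Eq. 28) for a flat LCDM cosmology.
# H0 and Omega_m are illustrative assumptions, not fixed model values.
C_KMS, H0, OMEGA_M = 299792.458, 70.0, 0.3
D_H = C_KMS / H0   # Hubble distance in Mpc

def efunc(z):
    return math.sqrt(OMEGA_M * (1.0 + z)**3 + (1.0 - OMEGA_M))

def comoving_distance(z, n=1000):
    """Trapezoidal integration of D_C = D_H * int_0^z dz'/E(z')."""
    dz = z / n
    total = 0.5 * (1.0 + 1.0 / efunc(z))   # E(0) = 1
    for i in range(1, n):
        total += 1.0 / efunc(i * dz)
    return D_H * total * dz

def dvc_dz(z):
    """All-sky differential comoving volume in Mpc^3 per unit redshift."""
    return 4.0 * math.pi * D_H * comoving_distance(z)**2 / efunc(z)
```

At $z = 1$ this gives $D_C \approx 3.3 \textrm{ Gpc}$ and an all-sky $dV_c/dz \approx 3\E{11} \textrm{ Mpc}^3$.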
The fraction of systems observable at a given redshift and period, $f_\trt{obs}$, depends on the emission and variability model. We require that the flux from the source is above the flux threshold, $F_{\nu} \geq F_{\nu,\trt{sens}}$, and that the variability, $\delta_{\tiny F} \equiv \Delta F_{\nu} / F_{\nu}$, is above the variability threshold, $\delta_{\tiny F} \geq \delta F_{\nu,\textrm{\tiny sens}}$, i.e.,
\begin{equation}
f_\trt{obs} = \Theta(F_{\nu} \geq F_{\nu,\trt{sens}}) \cdot \Theta(\delta_{\tiny F} \geq \delta F_{\nu,\textrm{\tiny sens}}).
\end{equation}
Here $\Theta$ is the Heaviside function: unity when the argument is true, and zero otherwise. The variability threshold depends on the SNR and a minimum variability floor, $\delta_{{\tiny F}, \textrm{\tiny min}}$. Based on the smallest variability amplitudes seen in \citet{graham2015}, we use a fiducial value of $\delta_{{\tiny F}, \textrm{\tiny min}} = 0.05$. We calculate the variability threshold as,
\begin{equation}
\delta F_{\nu,\textrm{\tiny sens}} = \mathrm{SNR}^{-1} + \delta_{{\tiny F}, \textrm{\tiny min}},
\end{equation}
which is consistent with the results of variability studies in HST by \citet[][see their Fig.~3]{Sarajedini200308} and in PanSTARRS by \citet[][see their Fig.~4]{Liu201609}. In both cases, the authors find minimum detectable variabilities of $\sim 1$ -- $2\%$, and then select systems using a cut which is some factor larger, at $\sim 5\%$. Similarly, we calculate the SNR by assuming the flux-sensitivity threshold is a factor of five above the noise, i.e.~\mbox{$\mathrm{SNR} = 5 F_{\nu} / F_{\nu,\trt{sens}}$}.
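The resulting selection can be written as a single predicate; the sketch below (hypothetical function name, our own code) combines the flux cut, the SNR-dependent variability threshold, and the fiducial floor $\delta_{F,\mathrm{min}} = 0.05$:

```python
def is_observable(f_nu, delta_f, f_sens, delta_min=0.05):
    """f_obs of the text: flux above threshold AND fractional variability
    delta_f above 1/SNR + delta_min, where SNR = 5 * f_nu / f_sens."""
    if f_nu < f_sens:
        return False
    snr = 5.0 * f_nu / f_sens
    return delta_f >= 1.0 / snr + delta_min
```

At the flux threshold itself ($\mathrm{SNR} = 5$), the required fractional variability is $0.2 + 0.05 = 0.25$, approaching the floor of $0.05$ for bright sources.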
Our results are calculated for a range of detector sensitivities which encompass current and near-future instruments. For convenience, we frequently present our results in terms of two characteristic values based on CRTS and LSST sensitivities. In particular, we use V-band sensitivities of \mbox{$F_{\nu,\trt{sens}}^\trt{CRTS} = 4\E{-28} \textrm{ erg/s/Hz/cm}^2$} ($m_\trt{V} \approx 19.5$) and \mbox{$F_{\nu,\trt{sens}}^\trt{LSST} = 3\E{-30} \textrm{ erg/s/Hz/cm}^2$} ($m_\trt{V} \approx 24.9$) for CRTS and LSST respectively\footnote{The CRTS value we get from the cutoff in the flux distribution of candidates from \citet{graham2015}, while the LSST value is from \citet{LSST2008}. Our detection rates are not strongly dependent on the particular flux-threshold, as discussed in \secref{sec:app_pars}.}. CRTS is considered in particular because of their large sample of binary candidates, but we consider this to be representative of current survey capabilities in general.
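As a rough consistency check on the quoted magnitudes (our own sketch), converting these flux densities with the AB zero point of $3631 \textrm{ Jy}$ lands within a few tenths of a magnitude of the quoted V-band values; the residual offset reflects the difference between the AB and Vega V-band zero points:

```python
import math

JY = 1.0e-23  # erg/s/Hz/cm^2 per jansky

def ab_mag(f_nu_cgs):
    """AB magnitude of a flux density given in erg/s/Hz/cm^2."""
    return -2.5 * math.log10(f_nu_cgs / (3631.0 * JY))

print(round(ab_mag(4e-28), 1))  # ~19.9 (CRTS threshold; quoted m_V ~ 19.5)
print(round(ab_mag(3e-30), 1))  # ~25.2 (LSST threshold; quoted m_V ~ 24.9)
```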
The overall number-density of sources (i.e.~in units of $\textrm{Mpc}^{-3}$) is the metric most directly extracted from our models. To better compare with observations, and to reduce the impact of systematic uncertainties in our luminosity functions, we focus on the number of simulated, observable binaries ($N^\trt{sim}_\trt{bin}$) as a fraction of the expected number of all observable AGN ($N^\trt{sim}_\trt{AGN}$), which we report in units of $\textrm{AGN}^{-1}$. To compute $N^\trt{sim}_\trt{AGN}$, we calculate the observability of all Illustris MBHs, using the same spectral models described in \secref{sec:meth_spectra}, out to a redshift $z_\trt{max} = 4.0$. For the aforementioned sensitivities, our fiducial simulations predict $10^6$ all-sky AGN to be observable by CRTS, and $4\E{7}$ by LSST. CRTS actually observes $\approx 3\E{5}$ spectroscopically-confirmed AGN \citep{graham2015}. The spectroscopic data come primarily from SDSS, which covers only about one-quarter of the sky, suggesting an all-sky number a little above $10^6$, consistent with our calculation\footnote{We emphasize, however, that this degree of consistency is largely fortuitous, and even somewhat surprising as Illustris tends to overestimate the AGN luminosity function (e.g.~\figref{fig:4_agn_lumfunc}).}.
For a given sensitivity, our simulations predict an expected number of detected AGN and periodically-variable binaries. A more robust prediction for a given survey can be calculated from the binary detection rate per AGN, along with the actual number of monitored AGN in the survey. We refer to this prediction as a ``rescaled'' number of expected detections,
\begin{equation}
\label{eq:rescale}
N^\trt{rescaled}_\trt{bin} = N^\trt{sim}_\trt{bin} \, \frac{N^\trt{obs}_\trt{AGN}}{N^\trt{sim}_\trt{AGN}}.
\end{equation}
When making predictions for LSST, we assume a completeness $f_\trt{complete}^\trt{LSST} = 2$, relative to CRTS\footnote{CRTS binary candidates are identified from SDSS spectroscopically confirmed AGN, which have a high completeness over roughly 1/4 of the sky \citep[e.g.][]{2009ApJS..180...67R}. Studies suggest that through a combination of photometric and variability selection, LSST can achieve $>90\%$ completeness over its entire field of view \citep[e.g.][]{2010ApJ...714.1194S, 2011AJ....141...93B, 2011ApJ...728...26M}---roughly half of the sky, or twice that of SDSS (and thus the CRTS sample).}, i.e.,
\begin{equation}
N^\trt{obs,LSST}_\trt{AGN} = N^\trt{sim,LSST}_\trt{AGN} \frac{N^\trt{obs,CRTS}_\trt{AGN}}{N^\trt{sim,CRTS}_\trt{AGN}} f_\trt{complete}^\trt{LSST}.
\end{equation}
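Both rescalings amount to simple ratios; a minimal sketch, with the AGN counts taken from the numbers quoted in this section:

```python
# All-sky simulated vs. actually-monitored AGN counts quoted in the text
N_AGN_SIM_CRTS = 1.0e6   # simulated AGN observable at CRTS sensitivity
N_AGN_OBS_CRTS = 3.3e5   # spectroscopically confirmed CRTS AGN

def rescaled_detections(n_bin_sim, n_agn_sim, n_agn_obs):
    """Eq. (rescale): expected binary detections, scaled from the simulated
    all-sky value by the ratio of monitored to simulated AGN."""
    return n_bin_sim * n_agn_obs / n_agn_sim

def n_agn_obs_lsst(n_agn_sim_lsst, f_complete=2.0):
    """Effective number of monitored LSST AGN, assuming a completeness
    factor of 2 relative to CRTS."""
    return n_agn_sim_lsst * (N_AGN_OBS_CRTS / N_AGN_SIM_CRTS) * f_complete

# e.g. 0.5 simulated all-sky detections rescale, for CRTS, to:
print(rescaled_detections(0.5, N_AGN_SIM_CRTS, N_AGN_OBS_CRTS))  # 0.165
```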
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{{{figs/0_schematic}}}
\caption{Schematic representation of the binary, disk, and accretion geometries assumed in our models; informed by the results of hydrodynamic simulations \citep[e.g.][]{farris201310}. \textit{Left:} between the binary and the circumbinary disk is a `gap' with a radius roughly twice the binary separation. Around each MBH is a `circum-single' disk, fed by time-variable accretion streams extending from the circumbinary disk. Because the secondary MBH is farther from the center-of-mass, and closer to the circumbinary disk, it tends to receive a disproportionate share of the accretion rate. \textit{Right:} the hydrodynamic and Doppler mechanisms for producing photometric variability are depicted on the top and bottom respectively. The circumbinary disk orbits at longer periods than the circum-single disks that it feeds, causing periodic variations in accretion rate, and thus luminosity. For observers oriented near the orbital plane, Doppler boosting of the faster moving, and typically more luminous, secondary MBH can also produce brightness variations.}
\label{fig:0_schematic}
\end{figure*}
\subsection{Models of Variability}
\label{sec:meth_var}
The luminosity of an object in a binary system will not necessarily vary on the orbital period, or indeed at all. The premise of photometric identification of MBH binaries is that the binary period is somehow imprinted in variations of the observed luminosity. In the particularly convincing example of PG 1302-102 \citep{Graham201501}, sinusoidal variations in the light-curve can be well explained by Doppler boosting from a mildly relativistic orbital velocity (\citealt{D'Orazio201509}; but see also \citealt{Liu201803}). Additionally, purely hydrodynamic modulations of accretion rates have been observed in simulations \citep[e.g.][]{farris201310}. Here we describe models for both types of variability mechanism. Throughout our analysis, each variability mechanism is considered entirely independently, i.e.~we do not consider systems which may be both hydrodynamically \textit{and} Doppler variable.
The physical scenario is depicted schematically in \figref{fig:0_schematic}, assuming thin-disk geometries and a mass-ratio $q \gtrsim 10^{-2}$ (see \secref{sec:meth_var_hyd} below). On the left, the two MBHs, each with a circum-single disk, are shown within a cavity (`gap') of material evacuated by the binary orbit, which separates them from the circumbinary disk. The radii of the circum-single disks are determined by the Hill radius of each MBH (see \refeq{eq:disk-truncate}). Despite the presence of the gap, the circumbinary disk continues to transport angular momentum outwards, requiring material to be accreted inwards. Material from the disk overflows across the gap as accretion streams onto each circum-single disk. The orbital period at the inner edge of the circumbinary disk is longer than that of the binary, causing periodic variations in the accretion rate onto each circum-single disk (\figref{fig:0_schematic}, upper-right). The secondary MBH is farther from the center of mass and closer to the circumbinary disk edge, which leads to it receiving a disproportionate fraction of the accreting material (as described by \refeq{eq:acc_rat} and shown in \figref{fig:1_acc_ratio}). Each circum-single disk, in addition to the circumbinary disk (although typically to a lesser extent), will produce AGN-like emission.
\subsubsection{Doppler Variability}
\label{sec:meth_var_dop}
Any component of the binary orbital velocity along the observer's line of sight will lead to both a relativistic boost and a Doppler shift in the observed spectrum of each circum-single disk. The boost is calculated using the Doppler factor, \mbox{$D \equiv \left[ \gamma \left(1 - v_{||}/c\right)\right]^{-1}$}, where the Lorentz factor \mbox{$\gamma \equiv \left(1 - v^2/c^2\right)^{-1/2}$}. The line-of-sight velocity, \mbox{$v_{||} \equiv v \sin(i)$}, depends on the binary inclination $i$, which we define as zero for face-on systems. The observed flux is calculated as \citep{D'Orazio201509},
\begin{equation}
\label{eq:dop_1}
F_{\nu} = D^3 F'_{\nu'},
\end{equation}
where the observed frequency $\nu$ is related to the rest-frame frequency as, $\nu = D \, \nu'$. Assuming a power-law of index $\alpha_\nu$ for the section of the spectrum being observed, the Doppler variation in flux from the source will be \citep{Charisi2018},
\begin{equation}
\label{eq:dop_2}
\frac{\Delta F_\nu^d}{F_\nu} = \left(3 - \alpha_\nu\right) \left|\frac{v}{c}\right| \sin i,
\end{equation}
where $v$ is the orbital velocity and $c$ is the speed of light. The sensitivity of Doppler boosting to frequency and thus spectral shape offers a powerful method of testing it as a variability mechanism. \citet{D'Orazio201509} have shown that, in both the optical and ultraviolet, this model explains the periodic variations observed in PG 1302-102.
In full generality, an AGN spectrum may not be a power law at the frequency of interest, so we construct a full spectrum for each MBH in our simulations and numerically calculate the change in flux using \refeq{eq:dop_1}. Additionally, the Doppler boosting of each MBH in a binary is necessarily $\pi$ out of phase with that of its companion, so we determine the overall system variation as,
\begin{equation}
\label{eq:var_doppler}
\delta_{\tiny F}^\trt{dop} \equiv \frac{\Delta F_{\nu,1}^d - \Delta F_{\nu,2}^d}{F_{\nu,1} + F_{\nu,2}}.
\end{equation}
To handle the inclination dependence, for each simulated binary we calculate the SNR based on the un-boosted flux, $F_{\nu}$, and determine the minimum inclination $i_\trt{min}$ at which the variability is observable. The fraction of solid angles at which the system is observable, which, for randomly oriented inclinations, is $\cos(i_\trt{min})$, then contributes linearly to the observability fraction, i.e.,
\begin{equation}
f_\trt{obs}^\trt{dop} = \delta(F_{\nu} \geq F_{\nu,\trt{sens}}) \cdot \delta(\delta_{\tiny F} \geq \delta F_{\nu,\textrm{\tiny sens}}) \cdot \cos(i_\trt{min}).
\end{equation}
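For a single power-law source, the ingredients above---the amplitude of \refeq{eq:dop_2}, the minimum observable inclination, and the orientation fraction $\cos(i_\trt{min})$---can be sketched as follows (the input values are illustrative, not drawn from our simulations):

```python
import math

def doppler_amplitude(v_over_c, incl, alpha_nu):
    """Fractional flux variation, (3 - alpha_nu) |v/c| sin(i)  [eq:dop_2]."""
    return (3.0 - alpha_nu) * abs(v_over_c) * math.sin(incl)

def min_inclination(v_over_c, alpha_nu, delta_sens):
    """Smallest inclination (0 = face-on) at which the variation exceeds
    the survey's variability threshold; None if it never does."""
    s = delta_sens / ((3.0 - alpha_nu) * abs(v_over_c))
    return math.asin(s) if s <= 1.0 else None

# Illustrative values: v/c = 0.03, alpha_nu = 1, and a 5% variability floor.
i_min = min_inclination(0.03, 1.0, 0.05)
if i_min is not None:
    # Fraction of randomly oriented systems observable: cos(i_min) ~ 0.55
    print(math.cos(i_min))
```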
\subsubsection{Hydrodynamic Variability}
\label{sec:meth_var_hyd}
Periodic variations in accretion rates are frequently observed in hydrodynamic simulations of circumbinary disks \citep[e.g.][]{1994ApJ...421..651A, 1996ApJ...467L..77A, Hayasaki200609, Roedig201202, 2013MNRAS.436.2997D, farris201310, Munoz2016}. While significant uncertainties remain in understanding these accretion flows, the general pattern emerging is that three distinct mass-ratio regimes exist. For extreme mass-ratios, $q \lesssim q_\trt{min}$, where $q_\trt{min} \sim 10^{-2}$, the secondary is a minor perturbation to the circumbinary disk, and the accretion flow remains steady. At intermediate mass ratios, $q_\trt{min} \lesssim q \lesssim q_\trt{crit}$, where $q_\trt{crit} \approx 1/3$, a gap is opened by the secondary and the accretion rate onto it varies by a factor of order unity, on the binary orbital period \citep{D'Orazio2016}.
For near-equal mass ratio systems ($q \gtrsim q_\trt{crit}$), a highly distorted cavity is evacuated around the binary, out to roughly twice the binary separation. At the outer edge of the cavity, a significant overdensity of material develops \citep[see also:][]{2015ApJ...807..131S, 2012ApJ...755...51N}. The Keplerian orbital period of that overdensity sets the variation timescale at $5$--$6$ times the binary period. The binaries we are considering (i.e.~$M > 10^6 \, \tr{M}_{\odot}$, $\bigt_\trt{orb} \sim $ yr) are almost always in the GW-dominated regime, in which the hardening timescale---the duration a given binary spends at that separation---decreases rapidly with decreasing orbital period. Thus, if a given variational timescale probes binaries at shorter orbital periods, the number of observable systems decreases \citep{2015MNRAS.452.2540D}.
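A minimal encoding of these regime rules, using the fiducial thresholds adopted below ($q_\trt{min} = 0.05$, $q_\trt{crit} = 1/3$, and a factor-of-five variability period for near-equal masses):

```python
Q_MIN, Q_CRIT = 0.05, 1.0 / 3.0  # fiducial mass-ratio thresholds

def variability_period(q, t_orb):
    """Observed hydrodynamic-variability period under the fiducial rules:
    no variability for q < Q_MIN; variation on the orbital period for
    intermediate q; five times the orbital period for q >= Q_CRIT."""
    if q < Q_MIN:
        return None          # secondary is a minor perturber: steady flow
    if q >= Q_CRIT:
        return 5.0 * t_orb   # overdense lump at the cavity edge
    return t_orb
```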
We assume that hydrodynamic variability takes place for binaries in the thin-disk state, i.e.~$\fedd \geq 10^{-2}$. While fluctuations in the accretion rate are also likely to occur for ADAF disks, the simulations exploring this phenomenon \citep[e.g.][]{D'Orazio2016} are primarily applicable to thin disks. Modeling only variations in high-accretion rate systems has negligible effects on our overall results, as the low accretion rate systems are much harder to observe. Based on \citet{farris201310}, we assume for our fiducial models that all binaries with mass ratios above $q_\trt{min} = 0.05$ exhibit hydrodynamic variability, and those above $q_\trt{crit} = 1/3$ are observable at $\bigt_\trt{var} = 5 \, \bigt_\trt{orb}$. The accretion-rate variations in simulations predominantly affect the secondary MBH \citep[e.g.][]{farris201310}, so we model the overall hydrodynamic variations as,
\begin{equation}
\label{eq:var_hydro}
\delta_{\tiny F}^\trt{hyd} \equiv \frac{\Delta F_{\nu,2}^h}{F_{\nu,1} + F_{\nu,2}} = \frac{ F_{\nu,2}(M_2, \chi f_{\trt{Edd},2}) - F_{\nu,2}(M_2, f_{\trt{Edd},2})}{F_{\nu,1} + F_{\nu,2}},
\end{equation}
where we take the effective enhancement to the accretion rate as $\chi = 1.5$. In \secref{sec:app_pars} we explore alternative values of $\chi$, and the importance of the $\bigt_\trt{var}$ assumption.
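Under the simplifying assumption that the secondary's flux scales linearly with its accretion rate (our actual calculation instead uses the full spectral model), \refeq{eq:var_hydro} reduces to a one-line estimate:

```python
def hydro_amplitude(f_nu_1, f_nu_2, chi=1.5):
    """delta_F^hyd from eq:var_hydro, assuming the secondary's flux scales
    linearly with its accretion rate (an assumption made here only for
    illustration; the full spectral model is nonlinear)."""
    delta_f2 = chi * f_nu_2 - f_nu_2  # F(M2, chi*fEdd) - F(M2, fEdd)
    return delta_f2 / (f_nu_1 + f_nu_2)

# Equal primary/secondary fluxes with the fiducial chi = 1.5:
print(hydro_amplitude(1.0, 1.0))  # 0.25
```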
\section{Results}
\label{sec:res}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{{{figs/5_var-grid_fe-1.00_per3.14_band-v_trunc-True_circum-2.0_var-adaf-False_annot-False}}}
\caption{\textbf{Variability amplitudes in total-mass--mass-ratio space for Doppler and hydrodynamic variability.} In this example, we use a fixed orbital period of \mbox{$\bigt_\trt{orb} = 3.14 \, \yr$}, and a system accretion-rate of \mbox{$f_\trt{Edd,sys} = 10^{-1}$}. The accretion rate onto the secondary is shown in the green color-bar to the right. The hashed regions correspond to luminosities below the detection threshold for LSST (larger circles) and CRTS (smaller dots) at a uniform redshift \mbox{$z = 1$}. Note that fiducially, based on the results of \citet{farris201310}, only systems with $q > q_\trt{min} = 0.05$ (red, dashed line) are considered as variables, although here we show variability at lower mass-ratios. The contours correspond to the indicated variability amplitudes.}
\label{fig:5_var-dop_grid}
\end{figure*}
Flux variability amplitudes for a grid of mass-ratio and total-mass are shown in \figref{fig:5_var-dop_grid} for both Doppler (left) and hydrodynamic variability (right). Here we use fixed values of the orbital period (\mbox{$\bigt_\trt{orb} = 3.14 \, \yr$}), the system accretion rate (\mbox{$f_\trt{Edd,sys} = 10^{-1}$}), and the redshift (\mbox{$z = 1$}). The accretion rate onto the secondary MBH, determined by the accretion partition function (\refeq{eq:acc_rat}), is illustrated by the green color-bar. For both variability mechanisms, the secondary's variations tend to dominate across the parameter space, although for shorter periods and very high total-mass systems, the primary can occasionally dominate.
Figure~\ref{fig:5_var-dop_grid} shows a sharp discontinuity in variability amplitudes at $q \approx 2.5\E{-3}$, corresponding to the point at which the secondary's accretion rate drops below $\fedd = 10^{-2}$, and its disk transitions to the ADAF state. Note that the location at which this transition occurs is sensitive to the two simulated data points constraining the fitting function in this $q \ll 1$ domain (see \figref{fig:1_acc_ratio}). In the case of hydrodynamic variability, recall that our model considers only variations occurring in the thin-disk state, and additionally, only systems with $q \ge q_\trt{min} = 0.05$ are considered variable in our fiducial configuration. Systems near this transitional mass ratio will alternate between thin and ADAF disks, which produces very high variability amplitudes\footnote{This is the case even allowing for variability in the ADAF state (not shown).}. Doppler variability shows a similar, although milder, discontinuity at the same transition mass-ratio, due entirely to the drop in optical luminosity of the secondary as it transitions to the ADAF state.
Doppler variability amplitudes are significantly larger for higher total-mass systems which have larger orbital velocities. At the largest total masses, however, truncation of the secondary's accretion disk becomes significant, and overall variability amplitudes again decline. Besides the mass-ratio and accretion-rate cutoffs imposed in our model, the hydrodynamic variability amplitudes have no explicit dependence on total mass or mass ratio. The trends seen in the right panel of \figref{fig:5_var-dop_grid} are instead due to the relative brightness of the secondary compared to that of the primary and circumbinary disk. At large total masses, the secondary's disk is significantly truncated, which decreases its luminosity and thus the overall variability amplitude. When most of the emitting region of the secondary's accretion disk is preserved (i.e.~at lower total masses), the relative accretion rate onto the secondary determines its brightness and the variability amplitude.
The total luminosity of the system is tied closely to its total mass. Near $q \approx 0.1$, the accretion rate onto the secondary is enhanced by more than the inverse mass-ratio, leading to the secondary outshining the primary. Near this mass-ratio sweet spot, systems remain observable even at lower total-masses. The hatched regions of \figref{fig:5_var-dop_grid} show systems with V-band spectral fluxes below the LSST (circles) and CRTS (dots) thresholds for systems at a fixed luminosity distance of \mbox{$\dl(z=1) \approx 6.5 \textrm{ Gpc}$}. At this distance, systems with $\fedd = 10^{-1}$ can be seen down to $M \approx 2\E{8} \, \tr{M}_{\odot}$ and $M \approx 3\E{6} \, \tr{M}_{\odot}$ for CRTS and LSST respectively.
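The quoted luminosity distance follows from a standard flat-$\Lambda$CDM integral; a sketch with assumed (Illustris-like, WMAP-9) cosmological parameters, which gives a value consistent with the $\approx 6.5$ Gpc quoted above to within the choice of parameters:

```python
import math

# Flat LCDM; H0 [km/s/Mpc] and Omega_m are assumed, Illustris-like values
H0, OM, C = 70.4, 0.2726, 299792.458

def lum_dist(z, n=10000):
    """Luminosity distance [Mpc] from a midpoint-rule comoving integral,
    D_L = (1 + z) * (c/H0) * Int_0^z dz' / E(z')."""
    dz = z / n
    integ = sum(dz / math.sqrt(OM * (1.0 + (i + 0.5) * dz) ** 3 + 1.0 - OM)
                for i in range(n))
    return (1.0 + z) * (C / H0) * integ

print(lum_dist(1.0) / 1.0e3)  # ~6.7 Gpc at z = 1
```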
\subsection{Event Rates}
\begin{figure*}
\includegraphics[width=2\columnwidth]{{{figs/6_num-variables_vs_redshift-sensitivity__dop-hyd_t0.5-5.0_rescale-False}}}
\caption{\textbf{Cumulative number of detectable, periodically variable MBH Binaries in sensitivity versus redshift space}. Doppler variable binaries (left) are considered independently from hydrodynamic ones (right). The plotted values are all-sky detection rates for binaries with observer-frame periods between $0.5$ and $5.0 \, \yr$. The sensitivities of CRTS, \mbox{$F_{\nu,\trt{sens}}^\trt{CRTS} = 4\E{-28} \textrm{ erg/s/Hz/cm}^2$} ($m_v \approx 19.5$), and LSST, \mbox{$F_{\nu,\trt{sens}}^\trt{LSST} = 3\E{-30} \textrm{ erg/s/Hz/cm}^2$} ($m_v \approx 24.9$), are marked with dashed, grey lines.}
\label{fig:6_num-var_sens_redshift}
\end{figure*}
The number of detectable, periodically variable MBH binaries is shown as a function of survey sensitivity and redshift in \figref{fig:6_num-var_sens_redshift}. The detection of Doppler variable binaries (left panel) becomes plausible only for sensitivities $m_v \gtrsim 21$. Hydrodynamic variables are much more common, and models predict that sources could be identifiable even near $m_v \approx 18.5$. At low sensitivities, observable sources from both models of variability are likely to appear near $z \approx 0.1$. With much higher sensitivities, sources could be identified out to $z \approx 1.0$, but likely not much farther. To study the observability of variables in more detail, we focus on two sensitivities representative of CRTS and LSST, \mbox{$F_{\nu,\trt{sens}}^\trt{CRTS} = 4\E{-28} \textrm{ erg/s/Hz/cm}^2$} ($m_v \approx 19.5$) and \mbox{$F_{\nu,\trt{sens}}^\trt{LSST} = 3\E{-30} \textrm{ erg/s/Hz/cm}^2$} ($m_v \approx 24.9$) respectively, which are marked in \figref{fig:6_num-var_sens_redshift} with dashed, grey lines. At the sensitivities of CRTS, our models predict $0.5$ Doppler and $10$ hydrodynamic variables to be observable on the full sky; for LSST those numbers increase substantially to $30$ and $200$.
\begin{figure}
\includegraphics[width=\columnwidth]{{{figs/7_num-variables_vs_redshift__dop-hyd_crts-lsst_t0.5-5.0}}}
\caption{\textbf{Observability of MBH Binaries versus redshift} for CRTS (top) and LSST (bottom). Observable AGN are shown in black, while all binaries are in purple, and those above the flux limit of each instrument are in red. Binaries with observable variability are shown in light blue for Doppler variables, and dark blue for hydrodynamic variables. The left panels give the absolute number of systems in the full sky, while the right panels give the number of binaries as a fraction of observable AGN. Differential distributions are shown with solid lines, and cumulative ones with dashed lines. Values for binaries include all systems with observer-frame periods between $0.5$ and $5.0 \, \yr$.}
\label{fig:7_num-var_redshift}
\end{figure}
We compare the populations of AGN, MBHBs, and observably variable binaries in \figref{fig:7_num-var_redshift}. The black curves show the distribution of AGN, while all binaries are shown in purple, and those above the flux limit of each instrument are shown in red. Binaries which are observably variable are plotted in light and dark blue for Doppler and hydrodynamic variability respectively. The most accurate predictions from our models are likely the number of binaries per observable AGN (right panels) which should reduce systematic uncertainties in the typical luminosities of our simulated AGN and binaries. For convenience, we also include a second, left y-axis with the predicted number of sources rescaled to the number of studied, spectroscopically confirmed AGN in CRTS\footnote{In other words, the `CRTS scaled' y-axis is shifted such that the total number of predicted AGN matches the observed $3.3\E{5}$ of CRTS; see \refeq{eq:rescale}.}.
From \figref{fig:7_num-var_redshift}, we can see that the number of all binaries (purple) closely traces that of AGN, though with roughly two orders of magnitude fewer systems. For $z \lesssim 0.6$, our models predict that most MBHBs are above the flux limit for CRTS (top, red), which implies that on the order of $1\%$ of AGN out to similar distances could be in binaries---$\approx 10^3$ systems. The number of observably bright binaries falls off rapidly above $z \approx 1$ for CRTS, and above $z \approx 1.5$ for LSST. In the full volume ($z \approx 4.0$), the fraction of AGN in binaries decreases to about $10^{-3}$ for CRTS, and $10^{-4}$ for LSST.
Out to a redshift of $z \approx 0.2$, roughly $10\%$ of binaries observable above the CRTS flux limit are also identifiable as hydrodynamic variables (dark blue), corresponding to $\approx 10^{-3} \, \textrm{AGN}^{-1}$ hydrodynamically variable binaries. This decreases to $\approx 10^{-4.5} \, \textrm{AGN}^{-1}$ at $z\approx 1$, and $\approx 10^{-5} \, \textrm{AGN}^{-1}$ in the full volume. Scaling to the number of monitored CRTS AGN, our models predict $5$ hydrodynamically variable binaries should be visible. For LSST, $\approx 25\%$ of binaries above the flux limit are also identifiable as hydrodynamically variable systems to $z \approx 0.2$, and $\approx 10\%$ to $z \approx 1$. The total number of systems with hydrodynamic variability above the LSST threshold is predicted to be $\approx 2\E{2}$ in the full sky, or $\approx 100$ assuming twice the completeness of CRTS (i.e.~complete for roughly half of the sky).
The number of binaries with Doppler variability above the variability threshold (light blue) is drastically fewer than hydrodynamic ones, as only binaries with nearly-relativistic velocities produce sufficiently large brightness modulations. Our models predict that, optimistically, of order unity Doppler-variable binaries should be present in the \citet{graham2015} CRTS data set, specifically an expectation value of $0.5$ all sky, and $0.2$ after rescaling to the number of CRTS AGN. While Doppler variables are expected at a rate of $\sim 10^{-6} \, \textrm{AGN}^{-1}$ for both CRTS and LSST, this corresponds to a much more promising number of LSST sources: $30$ Doppler variable binaries all-sky, or $20$ after rescaling.
\begin{figure}
\includegraphics[width=\columnwidth]{{{figs/8_num-variables_vs_period_for_mtot__dop-hyd_crts-lsst_per-agn-True}}}
\caption{\textbf{Observability of Variable MBH Binaries versus period} for CRTS (top) and LSST (bottom). Doppler variable systems are shown in the left column while hydrodynamic variables are shown on the right. Binaries are broken into bins of different total masses indicated by the different colored lines, while the distribution of all binaries is in black. Solid lines indicate distributions per log-decade of period, while dashed lines are cumulative. The grey, horizontal dashed lines provide an estimate of the minimum observable occurrence rates, scaled to the CRTS completeness and twice that for LSST.}
\label{fig:8_num-var_dop_period}
\end{figure}
The predicted number of detectable variables is shown versus period in \figref{fig:8_num-var_dop_period}, grouped in bins of different total masses. The horizontal dashed lines provide an estimate of the minimum plausibly-observable event rates. For CRTS, this is the inverse of the number of monitored AGN from the variability survey, and for LSST we again rescale to twice the completeness of CRTS (see \refeq{eq:rescale}). The shaded region highlights a decade of period between $0.5$ and $5.0 \, \yr$, corresponding to the values plotted in \figref{fig:7_num-var_redshift} and representative of the identified CRTS candidates. The number of binaries declines sharply with decreasing orbital period for all masses, reflecting the GW-hardening timescale, \mbox{$\bigt_\tr{gw} \propto \bigt_\trt{orb}^{8/3}$}, which dominates the evolution of nearly all binaries at these periods \citep[see, e.g.][]{hkm09, paper1}.
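The steepness of this scaling can be made concrete: in a steady state, the expected number of binaries per logarithmic period interval is roughly proportional to the residence time $\bigt_\tr{gw} \propto \bigt_\trt{orb}^{8/3}$, so:

```python
def residence_time_ratio(p_ref, p):
    """Ratio of GW-hardening residence times, T_gw ~ P^(8/3); in steady
    state this is also the ratio of expected systems per log-period."""
    return (p / p_ref) ** (8.0 / 3.0)

# Going from 5 yr to 0.5 yr orbital periods suppresses the expected number by:
print(1.0 / residence_time_ratio(5.0, 0.5))  # ~464
```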
The most obvious feature of \figref{fig:8_num-var_dop_period} is that binaries with both types of variability, whether observed by current CRTS-like instruments or by future LSST-like surveys, will be dominated by systems at the longest orbital periods. This is unfortunate, as the same is naturally expected from red-noise-contaminated single AGN\footnote{Most AGN variability is well fit by a damped random walk, i.e.~red noise at high frequencies, flattening to white below some critical frequency. While the transition is typically at inverse-months, the distribution of damping timescales has tails extending to many years, over which single-AGN variability would still appear red at binary-candidate periods \citep{MacLeod2010}.}. Our models suggest that Doppler-variable systems are only marginally observable by CRTS, with sources only likely to occur near and above $p \approx 5 \, \yr$. LSST, on the other hand, will likely detect systems down to $p \approx 2 \, \yr$, where multiple orbital cycles could be observed, thereby decreasing the chance of red-noise contamination. Hydrodynamic variables are far more common, and could be observed down to $p \approx 1 \, \yr$ by CRTS and $p \approx 0.5 \, \yr$ by LSST.
Doppler variability depends on the orbital velocity, and thus favors larger total masses at a fixed orbital period. This is clearly seen in \figref{fig:8_num-var_dop_period}, where systems detectable by CRTS are dominated by those in the $10^8 - 10^9 \, \tr{M}_{\odot}$ and $10^9 - 10^{10} \, \tr{M}_{\odot}$ mass bins. Because LSST is much more sensitive, it begins to probe lower total masses, especially at smaller periods where the orbital velocity increases at fixed mass. As is apparent in the Doppler-variable LSST population (bottom left), lower-mass systems are relatively more observable at shorter periods, where their velocities are larger. Hydrodynamic variability has no explicit total-mass dependence, and thus those binaries tend to be dominated by the much more numerous, lower-mass systems. Hydrodynamic variables are still not representative of all binary masses, as the more massive systems can be more luminous, and are thus observable in a larger volume.
\subsection{Observation Efficiency}
\label{sec:res_dop_eff}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{{figs/9_det-eff_mtot-mrat_dop-hyd_crts-lsst_t0.5-5.0}}}
\caption{\textbf{Detection Efficiency of MBH Binaries as periodic variables} in bins of total mass (left) and mass ratio (right) for sensitivities of both CRTS (solid) and LSST (dashed). The top panels show binary occurrence rates per observable AGN, and the bottom panels show the fraction of binaries which are observable: above each survey's flux-limit (red), and hydrodynamically variable (dark blue) and Doppler variable (light blue) above each survey's variability threshold. Values are for binaries at all redshifts with observer-frame periods between $0.5$ and $5.0 \, \yr$.}
\label{fig:9_det-eff}
\end{figure}
To get a better sense of the fraction of binaries that are detectable, and their parametric dependencies, we plot detection efficiency versus total mass and mass ratio in \figref{fig:9_det-eff}. The top panels, which show the number of binaries per AGN, demonstrate that the overall number of binaries falls quickly with increasing total mass (left), and thus also with decreasing mass ratio (right), as these quantities are strongly inversely correlated. Not only is the mass function of MBHs sharply decreasing with mass, but also, for a fixed orbital period, higher mass systems harden faster---further decreasing their number\footnote{While this point does not affect our results, note again that some studies find that more massive binaries do not harden faster.}. The lowest total-mass bin is an outlier, with noticeably fewer binaries due primarily to the mass cut at $10^6 \, \tr{M}_{\odot}$, but also to the difficulty for low-mass systems to merge effectively and reach the binary separations corresponding to $\sim \yr$ periods.
The lower panels of \figref{fig:9_det-eff} show the number of observable binaries (red and blues) divided by the total number of binaries in each bin. This highlights that while observable binaries tend to follow the distribution of all binaries, their completeness drops significantly at lower masses. Doppler-variable binaries (light blue) have an especially strong total-mass dependence, such that the efficiency of CRTS falls from $\approx 10^{-1}$ at $10^{10} \, \tr{M}_{\odot}$ to well below $10^{-2}$ at $10^9 \, \tr{M}_{\odot}$. LSST maintains an efficiency of $\approx 10^{-1}$ down to roughly $10^{8.5} \, \tr{M}_{\odot}$ before dropping. While the peak recovery fraction of hydrodynamic variables (dark blue) is no better than for Doppler ones, the orders of magnitude higher efficiency for the most numerous binaries at $\lesssim 10^{8} \, \tr{M}_{\odot}$ leads to vastly more detectable hydrodynamically-variable binaries overall.
Both variability mechanisms have strong and similarly shaped mass-ratio dependence, but largely for different reasons. Our models assume that for mass ratios $q < q_\trt{min}$ (fiducially, $q_\trt{min} = 0.05$) there is no hydrodynamic variability as the secondary acts as a minor perturber \citep[see][]{farris201310}. For mass ratios $q > q_\trt{min}$, however, there is no dependence in our model between the amplitude of variability and binary mass ratio, though a strong trend is evident in the detection efficiencies (dark blue, lower-right panel of \figref{fig:9_det-eff}). While mass ratio does not affect the amplitude of variability, at high mass-ratios, $q > q_\trt{crit}$ (fiducially, $q_\trt{crit} = 0.3$), the period of variability is taken to be five times longer than the orbital period. This means that high mass-ratio hydrodynamic variables observed at $p \sim 5 \, \yr$, for example, are actually produced by the much smaller population of systems at $p \sim 1 \, \yr$. That shift in period leads to a significant drop in the number of hydrodynamic variables observed in the highest mass-ratio bin. The tendency for higher mass-ratio systems to be lower in total mass likely also contributes to the high mass-ratio decline, as evidenced by a similar (though more subtle) decline for systems above the flux-limit (red).
Binaries are easier to observe as Doppler variables when they have larger orbital velocities, and thus smaller mass-ratios at a fixed total mass. At the same time, at very low mass ratios, the variability produced by the secondary becomes washed out by the brighter primary. This leads to a strong peak in the detection efficiency of Doppler variables with mass ratio near $q \approx 0.03$ -- $0.07$. For both Doppler and hydrodynamic variability, there is also a boost for systems near $q \approx 0.05$, where the accretion rate onto the secondary peaks (see \figref{fig:1_acc_ratio}).
\subsection{Model Effects and Parameters}
\label{sec:app_pars}
\newcommand{`$q$-accrete'}{`$q$-accrete'}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{{figs/10_simple-comparisons}}}
\caption{\textbf{Comparison of detection rates for parametric changes from our fiducial model} for CRTS (colored faces) and LSST (colored edges). The left panel shows all-sky expected detection rates, and the right panel shows rates normalized to the number of observable AGN by each instrument. Red points indicate Doppler variables while blue points indicate hydrodynamic ones. The vertical lines denote the detection rates of the fiducial models. See the surrounding text for a description of each alternative model. The change in detection rates is typically well under an order of magnitude, except for the `$q$-accrete'{} model, which eliminates all Doppler detections and decreases hydrodynamic ones by $\approx 10\times$ and $\approx 3 \times$ for CRTS and LSST respectively.}
\label{fig:10_rate-comp}
\end{figure}
Our models of binary AGN variability include a variety of effects and uncertain parameters. Figure~\ref{fig:10_rate-comp} compares our fiducial detection rates (top row and vertical lines) with variations from different parametric changes to our models. Each row gives the expected all-sky (left) and per-AGN (right) detection rates for each alternative configuration. Doppler variables are shown in red and hydrodynamic variables in blue. The variations we consider are meant to illustrate both parametric uncertainties and the significance of particular physical effects. Each alternative model is described below.
\begin{itemize}
\item \textbf{1) `$q$-accrete'{}}: the accretion rate of each component MBH is scaled proportionally to its mass, i.e.~\mbox{$\lambda \equiv \dot{M}_2 / \dot{M}_1 = q$}. While the relative accretion rate has been studied extensively in the planetary and binary communities, a consensus has not been reached. For both Doppler and hydrodynamic variability, the secondary MBH is almost always the source of observable periodicity, making the particular form of the accretion partition function (\figref{fig:1_acc_ratio}) important. We test this very simple model to explore our sensitivity to the accretion function. Scaling the accretion rate of each MBH to its mass produces significantly lower secondary accretion rates than using the \citet{farris201310} model. Because Doppler variable sources are preferentially lower mass-ratio, the decreased accretion rate onto the secondary means that its variability is hidden by the brighter primary, and effectively never detectable. The rate of hydrodynamic variable detections is decreased by an order of magnitude for CRTS and roughly a factor of three for LSST.
\item \textbf{2) `un-obscur'}: neglecting AGN obscuration. Interestingly, the overall number of detectable variables is virtually unaffected by the presence of obscuration, while, without obscuration, the total number of observable AGN is increased by a factor of a few. This highlights the fact that the limiting condition for observing binaries photometrically tends to be in the variability sensitivity as opposed to the overall flux sensitivity. Improving the minimum detectable variation amplitudes (i.e.~the floor variability sensitivity, $\delta_{{\tiny F}, \textrm{\tiny min}}$) could significantly increase the number of binary detections, even at a fixed flux sensitivity.
\item \textbf{3) `un-trunc'}: each MBH's accretion disk extends indefinitely, instead of being truncated by the presence of the companion. By definition this model only affects the observability of binaries and not single AGN. All binary detection rates are increased by the presence of larger emitting regions, although Doppler-variable binaries detected by LSST are only slightly increased, implying that the intrinsic variability amplitude is the limiting factor and not the brightness of the secondary. Doppler and hydrodynamic detections by CRTS are increased by about a factor of four and two respectively, while LSST hydrodynamic detections are increased by roughly a factor of three. This model demonstrates the importance of disk truncation. While AGN in the ADAF state (and thus, typically at low accretion rates) are unlikely to contribute significantly to the detectable population, better understanding if and how truncation occurs will be important in determining their relative observability.
\item \textbf{4) `high-hydro'}: the amplitude of hydrodynamic variability is increased from $\chi = 1.5$ to $2.0$. In the simulations of \citet{farris201310}, the authors find different variability amplitudes for different mass-ratios, generally varying from $\approx 1.5$ to $\approx 3.0$. In this model, we double the brightness increase that occurs from hydrodynamic variability and find that it produces only a moderate increase in hydrodynamic detections, $\approx 90\%$. The systems which become detectable in this model are typically either in small mass-ratio binaries or at shorter periods, where truncation of each MBH's circum-single disk hampers its observability.
\item \textbf{5) `Edd-limit'}: the accretion rate in each circum-single disk is limited to Eddington. In our fiducial model, only the overall accretion rate to both MBHs is Eddington limited. Because the secondary accretion rate is larger than the primary's for $q \sim 0.1$, it can exceed the Eddington limit individually. Conceptually, the `Edd-limit' model assumes that the gas inflow is regulated effectively even at small scales, or alternatively that the radiative efficiency does not increase for Eddington fractions above unity\footnote{Some theoretical and numerical work has shown that the radiative efficiency increases only logarithmically for super-Eddington accretion rates in the `slim'-disk regime \citep[e.g.][]{1980AcA....30....1J, 2005gbha.conf..257A, 2009ApJS..183..171S}.}. Estimated Doppler variable detections by CRTS and LSST are both decreased negligibly. Hydrodynamic variable detection rates are decreased by $\approx 50\%$ for both instruments. Hydrodynamically variable systems are more sensitive to this limit likely due to their tendency to be at slightly higher mass-ratios ($q \approx 0.1$), which are closer to the peak of the accretion partition function.
\item \textbf{6) `hydro-$q$'}: The minimum mass-ratio for variability is set to $q_\trt{min} = 0.0$ (fiducial $q_\trt{min} = 0.05$). Because the contribution from the secondary MBH in lower mass ratio systems is very small, they tend to be unobservable even if they produce hydrodynamic variability, and thus this model shows an effectively negligible change to detection rates.
\item \textbf{7) `$\dot{M}/3$'}: The accretion rates from Illustris are uniformly scaled down by a factor of three. The AGN luminosity function constructed from Illustris is generally consistent with observations, but noticeably over-predicts the number of observable systems. The luminosity functions fit better if the accretion rates (or radiative efficiencies) are systematically decreased by $\approx 50\%$, but in that case the predicted number of AGN detectable by CRTS becomes too low. This model attempts to quantify the uncertainty in the normalization of the Illustris luminosity function by systematically and significantly decreasing MBH accretion rates.
For CRTS, the all-sky predicted number of identifiably variable systems decreases from $0.51$ to $0.39$ (Doppler) and $14$ to $6.7$ (hydrodynamic). At the same time, the predicted number of CRTS detectable AGN decreases from $1.1\E{6}$ to $2.5\E{5}$, which is inconsistently low compared to the number monitored by CRTS. Rescaling to the actual number of CRTS observed AGN leads to an increase in the predicted variable detection rates: from $0.16$ to $0.51$ (Doppler) and $4.6$ to $8.9$ (hydrodynamic). Systematically decreasing accretion rates leads to fewer observable variables in our simulated sky, and far fewer AGN. Assuming that the number of observable variables \textit{per AGN} is the most accurate predictor, however, actually leads to an increase in predicted detection rate for CRTS and LSST from the low accretion-rate model.
\item \textbf{8) `orb-time'}: hydrodynamic variability always occurs at the orbital period, instead of shifted to longer periods for $q > q_\trt{crit} = 0.3$. Some hydrodynamic simulations suggest that high mass-ratio systems will exhibit variability at periods roughly five times longer than the binary period, corresponding to the orbital time of an over-density of material at the outer edge of the gap in the disk. For these systems, observed periodicity at $\sim 1 \, \yr$ is actually coming from sources with orbital periods of $\sim 0.2 \, \yr$, which are far less numerous. If variability always occurs at the orbital period, the number of hydrodynamically variable detections doubles for both CRTS and LSST. Specifically, rescaled detection rates increase from $4.6$ to $11$ for CRTS, and from $130$ to $400$ for LSST. It is worth noting that in the \citet{farris201310} simulations, while variations are \textit{predominantly} at $\approx 5 \times$ the orbital period, there is still a component directly at the orbital period itself which could be identifiable for sufficiently high signal-to-noise systems. To be conservative, we use the time-shifted configuration as our fiducial model.
\item \textbf{9) `no-circum'}: emission from the circumbinary portion of the accretion disk is neglected. Without the circumbinary disk, fractional brightness variations increase as there is less steady emission to compete with. This increases Doppler detection rates by only $\approx 20\%$ for CRTS and negligibly for LSST, but increases hydrodynamic detection rates by almost a factor of four for both CRTS and LSST. Circumbinary emission seems to primarily hamper the detection of near equal-mass binary systems, which are significantly less important for Doppler detection rates. This model highlights that circumbinary emission is an important source of flux to consider. Additionally, recent simulations suggest that the circumbinary portion of the disk can also contribute to periodic variability signatures \citep{2018MNRAS.476.2249T}, which is not considered here.
\item \textbf{10) `B-Band'} and \textbf{11) `I-Band'}: spectra are sampled in the rest-frame B-Band and I-Band instead of the V-Band. For both CRTS and LSST, more variables are detectable in the B-Band and fewer in the I-Band. Doppler detections by CRTS increase by almost a factor of two, and negligibly for LSST, while hydrodynamic detections increase by $\approx 40\%$ for both instruments. Keeping in mind that our models do not take into account factors such as color-dependent extinction, this effect seems to be driven by the AGN circum-single disks being intrinsically brighter on the bluer side of the spectrum.
\end{itemize}
\section{Discussion}
\label{sec:disc}
In this paper, we make predictions for the electromagnetic detection of MBH binaries as photometrically periodic AGN. We use a population of binaries drawn from cosmological hydrodynamic simulations of MBHs and galaxies, evolved using semi-analytic, post-processing models of the detailed merger physics. Employing synthetic AGN spectra, along with models of both Doppler and hydrodynamic variability, we have calculated detection rates for the flux and variability sensitivities of CRTS and LSST. Here we present the results and implications of our study, after first discussing their limitations.
\subsection{Caveats}
\label{sec:disc_cavs}
Numerous limitations exist in our current methods, both in terms of our binary populations and our models of variability. In the former class, while the masses of MBHs in Illustris nicely reproduce the observed, redshift zero BH--galaxy scaling relations \citep{sijacki2015}, there is still significant uncertainty in the full distribution of MBH masses \citep[e.g.][]{mcconnell2013}. The MBH accretion rates have been calibrated to produce accurate masses and reproduce observational, bolometric luminosity functions \citep{sijacki2015}. Using spectral models, and a simple model of obscuration, applied to the entire population of Illustris MBHs, we predict a total number of observable AGN (see \tabref{tab:rates}) which is consistent with CRTS observations, but the AGN luminosity function from our model over-predicts the observed relation from \citet[][reproduced in \figref{fig:4_agn_lumfunc}]{Hopkins2007}. While we overestimate the luminosity function even for the brightest systems, we still suffer from small number statistics and incompleteness in the most massive MBHs, due to the finite volume of the Illustris box.
Instead of using bolometric luminosities and corrections, or characteristic spectral indices, we have constructed full spectral models for each of the MBHs in our binary populations. Still, these spectra are highly simplified relative to the complex and actively developing field of AGN emission modelling. Perhaps the most important deficiency of our spectral models is the lack of any lines, color-dependent extinction, or non-thermal contributions from outside of the disk. We also do not consider any intrinsic AGN variability. Full spectroscopic observations of AGN, including variations between observing epochs, should be incorporated into calculations to carefully consider the effectiveness with which periodic photometric variability can be accurately classified.
We have also relied very strongly on the results of \citet{farris201310}, which use 2D, isolated, purely-hydrodynamic simulations. Other groups \citep[e.g.][]{Cuadra200809, Roedig201202, 2012ApJ...749..118S} have supported the \citet{farris201310} results, whose conclusions seem robust for their simulated conditions. Accounting for thick-disk accretion (and mass flow out of the disk plane, possibly enhanced by magnetic fields and radiation), and turbulent flows with varying inflow rates on large scales, are likely to affect detailed predictions. Using the \citet{farris201310} accretion rates also produces an inconsistency in our models: the large accretion rates to $q \sim 0.1$ binaries, combined with the typically large systemic accretion rates in Illustris, imply that binaries will grow towards $q \sim 1$ relatively quickly. Naively, the e-folding time for the secondary at these mass ratios is often as short as $10$ Myr. At the same time, the accretion is typically super-Eddington in these systems, and whether those accretion rates accurately correspond to mass-growth rates (i.e.~neglecting outflows) is unclear. If the secondary MBHs were allowed to grow self-consistently, it would likely decrease our predictions for both types of variables. At the same time, some studies in the context of planetary systems \citep[e.g.][]{1999ApJ...526.1001L} find that even at relatively extreme mass-ratios (e.g.~$q \sim 10^{-3}$), the accretion rate onto the secondary can still be a significant fraction of the total accretion rate. If accretion rates remain high at low mass-ratios, it could significantly increase the incidence of detectable systems.
We expect many of the simplifications and uncertainties in our models to tend towards \textit{fewer} systems being observable as variables. For example, thick-disk and turbulent accretion flows are more likely to smooth out the periodic variations in emission rather than enhance them. Light from AGN host galaxies will also produce an additional background from which variability must be disentangled. AGN are also known to exhibit strong intrinsic variability, especially at long periods, which can easily mimic periodicity. While our model for variability sensitivity is based on observational studies of signal identification, it is an extremely simplistic accounting of a very difficult task, as shown in the detailed analyses of \citet{graham2015}, \citet{charisi2016}, and \citet{Liu201609} which include careful treatments of noise.
\subsection{Conclusions}
\label{sec:disc_conc}
A summary of expected detection rates for variability periods from $0.5$ -- $5$ yr are presented in \tabref{tab:rates}. Our models predict that MBH binaries should be detectable at rates of roughly $5\E{-7} \, \textrm{AGN}^{-1}$ and $10^{-5} \, \textrm{AGN}^{-1}$ for Doppler and hydrodynamic variability, respectively. In our simulations, this corresponds to all-sky rates of $0.5$ and $10$ sources at the CRTS sensitivity, while scaling to the number of CRTS-monitored AGN yields $0.2$ and $5$ binaries. For the expected sensitivity of LSST, and assuming twice the completeness of AGN confirmation, we predict $20$ and $100$ Doppler and hydrodynamic binaries to be observable. Additional data is presented in \secref{sec:app} and online to facilitate detection-rate predictions for optical surveys with different sensitivities and durations.
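The three normalisations in the table below are related by simple rescalings. As a quick numerical illustration of that bookkeeping (a sketch of our own, using the round-number CRTS rates quoted above; the monitored-AGN count is from \citealt{graham2015}):

```python
# Relating the rows of the detection-rate table (CRTS column):
# all-sky counts = (rate per AGN) x (number of detectable AGN), and the
# "scaled" row renormalizes to the AGN actually monitored by the survey.

RATE_PER_AGN = {"doppler": 5e-7, "hydro": 1e-5}  # detections per AGN (CRTS)
N_AGN_SIMULATED = 1.1e6   # AGN detectable at CRTS sensitivity (simulation)
N_AGN_MONITORED = 3.3e5   # AGN monitored in the CRTS variability study

all_sky = {k: r * N_AGN_SIMULATED for k, r in RATE_PER_AGN.items()}
scaled = {k: r * N_AGN_MONITORED for k, r in RATE_PER_AGN.items()}

print(all_sky)  # ~0.5 Doppler and ~11 hydrodynamic binaries, full sky
print(scaled)   # ~0.2 Doppler and ~3 hydrodynamic among monitored AGN
```

The tabulated entries are rounded to one significant figure, so the products above agree with them only to that precision.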
\renewcommand{\arraystretch}{2.0}
\setlength{\tabcolsep}{4pt}
\begin{table*}
\begin{center}
\begin{tabular}{ r | l l | l l | l l }
& \multicolumn{4}{c|}{Observable, Variable Binaries} & \multicolumn{2}{c}{AGN} \\
& \multicolumn{2}{c|}{Doppler} & \multicolumn{2}{c|}{Hydrodynamic} & & \\
Number & CRTS & LSST & CRTS & LSST & CRTS & LSST \\ \hline
All Sky & $5\E{-1}$ $\left(_{3\E{-1}}^{2\E{+0}}\right)$ & $3\E{+1}$ $\left(_{2\E{+1}}^{4\E{+1}}\right)$ & $1\E{+1}$ $\left(_{6\E{+0}}^{7\E{+1}}\right)$ & $2\E{+2}$ $\left(_{8\E{+1}}^{1\E{+3}}\right)$ & $1\E{6}$ $\left(_{3\E{5}}^{4\E{6}}\right)$ & $4\E{7}$ $\left(_{2\E{7}}^{2\E{8}}\right)$ \\
$\textrm{AGN}^{-1}$ & $5\E{-7}$ $\left(_{1\E{-7}}^{2\E{-6}}\right)$ & $8\E{-7}$ $\left(_{2\E{-7}}^{1\E{-6}}\right)$ & $1\E{-5}$ $\left(_{4\E{-6}}^{6\E{-5}}\right)$ & $5\E{-6}$ $\left(_{1\E{-6}}^{2\E{-5}}\right)$ & $1\E{0}$ $\left(-\right)$ & $1\E{0}$ $\left(-\right)$ \\
Scaled & $2\E{-1}$ $\left(_{5\E{-2}}^{6\E{-1}}\right)$ & $2\E{+1}$ $\left(_{6\E{+0}}^{5\E{+1}}\right)$ & $5\E{+0}$ $\left(_{1\E{+0}}^{2\E{+1}}\right)$ & $1\E{+2}$ $\left(_{4\E{+1}}^{6\E{+2}}\right)$ & $3\E{5}$ $\left(-\right)$ & $3\E{7}$ $\left(_{3\E{7}}^{4\E{7}}\right)$ \\
\end{tabular}
\caption{\textbf{Expected observability of MBH binary, periodically variable AGN}.
The first four columns give the expected detection rates of Doppler and hydrodynamic variable binaries with periods between $0.5$ and $5.0\,\yr$. The last two columns give the expected detection rates for (single) AGN. Each cell includes in parentheses the range of values from the models discussed in \secref{sec:app_pars}. The first row is the all-sky prediction from our simulations, while the second row is normalized to the predicted number of detectable AGN for each instrument. The last row rescales our results based on the number of AGN monitored in the CRTS variability study \citep[$\approx 3.3\E{5}$;][]{graham2015}, and assuming twice that completeness for LSST.}
\label{tab:rates}
\end{center}
\end{table*}
Our predictions for current instruments are significantly lower than the rate of candidates put forward by \citet{graham2015} and \citet{charisi2016}. Our models suggest that multiple Doppler variables in CRTS are unlikely, and that only a small fraction of the candidates can be explained as hydrodynamic variables. This is consistent with gravitational wave limits which imply that the published candidates contain false positives \citep{sesana201703}. But our predictions do indicate that there should exist numerous true MBH binaries within the candidate populations. The systems painstakingly identified by \citet{graham2015} and \citet{charisi2016} provide an extremely valuable opportunity to identify examples of MBHBs. The candidates deserve significant followup to find the binaries they contain, as no examples of gravitationally-bound MBHBs have been confirmed to date. Additionally, the candidates put forward are very convincing in demonstrating periodic variability \textit{above that produced by the best fitting models of intrinsic AGN variability}. The characterization and study of the mechanisms producing false positives thus presents an interesting opportunity to not only better identify binaries, but also to explore the fundamental accretion processes at play.
Simply extending the temporal baselines of candidate observations will provide a means of distinguishing red-noise-contaminated systems. Candidates exhibiting red-noise fluctuations misconstrued as periodicity are expected to eventually deviate from sinusoidal behavior. Unfortunately, this test is not without issue. Disk turbulence and time-varying feeding rates of gas can not only introduce their own luminosity fluctuations, but also decrease the coherent, periodic variations from a binary. Some simulations also see accretion alternate from primarily feeding one MBH to then predominantly feeding the other \citep[e.g.][]{2015MNRAS.448.3545D, 2018ApJ...853L..17B}, even in otherwise smooth disks. These factors introduce a significant complication in separating AGN with significant red-noise from those which are binaries, but exhibit excursions from periodicity. While we find Doppler variables to be intrinsically very rare, the characteristic spectral dependence of their variations is a valuable test of their origin \citep{D'Orazio201509}. Systems which may be seen edge-on, like Doppler variables, can also produce periodic lensing spikes \citep{2017PhRvD..96b3004H, 2018MNRAS.474.2975D}. While our analysis does not consider identification of systems simultaneously exhibiting both Doppler and hydrodynamic variability, these phenomena could be coincident and observationally distinguishable. It is also worth considering `triggered' searches, for example: from a candidate Doppler-variable binary, a survey could search for signs of hydrodynamic variability at multiples of the orbital period with boosted signal-to-noise. Ultimately, we expect that time-varying spectroscopic features of binarity (e.g., \citealt{Comerford200810}; but see also \citealt{Eracleous1997}) may be the most robust identifier of spatially unresolved MBHBs.
Our results offer hints at what candidate system parameters may be most indicative of true binarity. We find that neither Doppler nor hydrodynamic variables are likely to be observed much beyond $z \approx 1$, as variability amplitudes are simply too low to be distinguished at larger redshifts. Out to a redshift $0.6$, our models suggest that almost $1\%$ of AGN could harbor binaries, although only a small fraction are identifiable as such. To plausibly detect hydrodynamically variable binaries, a survey must have a sensitivity of $m_v \gtrsim 18.5$, and $m_v \gtrsim 21$ for Doppler variable binaries. Owing to the strong period dependence of the GW hardening rate, binaries are most likely to be observed at longer periods---unfortunately the same trend as for red-noise contamination.
The Doppler variability model depends on nearly relativistic velocities, which means that more massive binaries are strongly favored. We expect both current and future instruments to detect Doppler variables predominantly above $10^8 \, \tr{M}_{\odot}$, and LSST will likely see mostly $10^8$ -- $10^9 \, \tr{M}_{\odot}$ binaries. Hydrodynamic variability is insensitive to the total binary mass, and thus is dominated by the far more numerous systems at lower masses. Below $\approx 10^7 \, \tr{M}_{\odot}$ however, the lower luminosities begin to limit the sensitive volume. For both CRTS and LSST, hydrodynamically variable binaries should be mostly between $\approx 10^7$ and $10^8 \, \tr{M}_{\odot}$, although systems between $10^{6.5}$ and $10^9 \, \tr{M}_{\odot}$ should be observable, and higher mass systems are likely too rare.
Both Doppler and hydrodynamic variability strongly favor systems with mass-ratios $q \sim 0.1$, in our fiducial model. In the case of Doppler variability, this trend is due to the larger orbital velocities of secondary MBHs with lower mass-ratios. The peak mass-ratio sensitivity occurs near $q \approx 0.05$, as secondaries become too faint to produce observable variability when their masses become much lower. The bias towards $q \approx 0.1$ is enhanced by the heightened accretion rates for secondary MBHs near that mass ratio seen by \citet{farris201310}. Hydrodynamically variable binaries are most apparent near $q \approx 0.1$, both because of the enhanced accretion rate, and because systems with $q \sim 1$ have variability periods shifted largely to longer periods. In binaries with $q \ll 1$, the secondary again becomes quite faint, and the variability induced in the accretion flow also begins to be negligibly small.
Selection biases in the binary parameters identifiable as variables are important in estimating the GWB implied by a given binary population. \citet{sesana201703} show that the CRTS candidates are in tension with current GWB upper limits from PTAs. The authors show that the tension can be decreased if the mass-ratio distribution of MBHBs is biased towards lower values, as systems with low $q$ produce weaker GW signals. Our full binary populations have relatively flat mass-ratio distributions in log-space, but the selection effects for detecting variability prefer binaries with lower mass-ratios. This observational bias has the opposite effect of an intrinsically low-$q$ population: the high-$q$ systems, which dominate the GWB, are simply absent from the observational candidates. The amplitude of this effect can be estimated as follows. For a given population of candidates, their most likely binary parameters can be used to estimate the GW strain that they produce directly, $h_c^\mathrm{direct}$. Our results suggest that over the full range of masses, less than $10\%$ of binaries are observable. Furthermore, these systems should have $q \lesssim 0.1$, while binaries with $q \approx 1$ are intrinsically at least three times more common. Those nearer equal-mass binaries produce GWs with strains roughly ten times larger\footnote{Strain is related to chirp-mass as $h \propto \mchirp^{5/3}$, and the chirp-mass is $\mchirp \equiv M q^{3/5} / (1 + q)^{6/5}$.}. All together, because GW strains add in quadrature, this implies that the true GWB amplitude is $h_c \approx 300^{1/2} h_c^\mathrm{direct}$.
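The closing quadrature estimate can be reproduced in a few lines. This is only the order-of-magnitude arithmetic stated above (the factors of three and ten are the round numbers from the text, not new measurements):

```python
import math

# GW strains add in quadrature: h_c^2 is a sum of individual h^2 terms.
# Relative to the observed low-q candidates, the unobserved q ~ 1 systems
# are ~3x as numerous and each has ~10x the strain (h ∝ Mchirp^(5/3)),
# so their h_c^2 contribution is larger by 3 * 10^2 = 300.
n_ratio = 3.0    # number of q ~ 1 binaries per detected q ~ 0.1 binary
h_ratio = 10.0   # individual strain ratio between the two populations

boost_sq = n_ratio * h_ratio**2
print(math.sqrt(boost_sq))  # ≈ 17.3, i.e. h_c ≈ 300^(1/2) h_c^direct
```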
In conclusion, our models are able to explain only a fraction of previously identified, periodically variable binary candidates. The distributions of detectable-binary parameters that we find suggest that existing PTA constraints on the GWB also require a large fraction of candidates to be false positives. On the other hand, our results suggest that many of the candidates in CRTS and PTF could indeed be true MBH binaries. We have presented the parameters of variables which we expect to be MBHBs in the hope that the candidates can be followed up with additional photometric and spectroscopic monitoring to find the binaries they contain. Confirmed examples of year-period MBHBs would present a boon to AGN and MBH binary astrophysics. Such systems contain key information on MBH growth, MBH binary evolution, and offer stringent constraints and insights into our predictions for low-frequency GW signals soon to be detectable by PTAs, and eventually by LISA.
\section*{Acknowledgments}
The authors are grateful for insightful conversations and comments from Laura Blecha and Daniel D'Orazio, and helpful feedback from Edo Berger, Daniel Eisenstein, and Avi Loeb.
This research made use of \texttt{Astropy}, a community-developed core Python package for Astronomy \citep{astropy2013}, in addition to \texttt{SciPy}~\citep{scipy}, \texttt{ipython}~\citep{ipython}, \& \texttt{NumPy}~\citep{numpy2011}. All figures were generated using \texttt{matplotlib}~\citep{matplotlib2007}.
ZH was supported in part by NASA through grants 16-SWIFT16-0015 and NNX17AL82G and by NSF through grant 1715661. AS is supported by the Royal Society.
\let\oldUrl\url
\renewcommand{\url}[1]{\href{#1}{Link}}
\quad{}
\bibliographystyle{mnras}
\section{Introduction}
We are concerned with the recurrence properties of two repelling
random walks $\{S^i_n;\ i=1,2,\ n\geq 0\}$ taking values on $\mathbb Z$ in
which the repulsion is determined by the full previous history of the
joint process. Formally, assume that $S_0^i,\ldots S_{n_0}^i \in \mathbb Z$
are known for given but arbitrary $n_0 \geq 1$, and let $\mathfrak F_n =
\sigma(\{S_k^1, S_k^2 : 0 \leq k \leq n\})$ be the natural filtration
generated by both walks. The transition probability for each process
is defined as
\begin{equation}\label{eqn:transProb}
\mathbb P\big(S_{n+1}^i = S_n^i + 1 \, \big{|} \, \mathfrak F_n \big)
=
\psi\big((S_{n}^j - S_{0}^j)/n\big)
=
1 - \mathbb P\big(S_{n+1}^i = S_n^i - 1 \, \big{|} \, \mathfrak F_n \big),
\end{equation}
with $i=1,2$, $j=3-i$, $n \geq n_0$, and $\psi: [-1,1] \to [0,1]$,
defined by
\begin{equation}\label{eqn:psi}
\psi(y) = \frac{1}{1 + \exp(\beta y)},
\quad\beta \geq 0.
\end{equation}
If $\beta = 0$, then $\psi(y) = \frac{1}{2}$ for all $y \in [-1,1]$
and both $S^1_n$ and $S^2_n$ form two independent simple random walks
on $\mathbb Z$. To analyse the behaviour for $\beta > 0$, note that the
quantity $y = (S^j_n - S^j_0)/n$ represents the difference between the
proportions of times the $j$-th walk made a right and a left
transition up to time $n$. Thus, if $y$ is positive, then $S_{n+1}^i$
transits with highest probability $1 - \psi(y) > \frac12$ to the left. By
contrast if $y < 0$, that is, if $S_n^j$ has moved more to the left
than to the right, then $S_{n+1}^i$ moves to right with highest
probability $\psi(y) > \frac12$. It is worth mentioning that $\psi$
satisfies the following symmetry relation $\psi(-y) = 1-\psi(y)$, and
hence it is not biased in any direction, left or right. The parameter
$\beta$ strengthens the repulsion between the walks: the larger the
value of $\beta$, the higher the probability that each walk moves in the
direction less transited by the other walk. For given arbitrary
initial conditions, the coordination of the walks towards a limiting
direction, if any, is far from trivial.
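The symmetry relation used above follows in one line from (\ref{eqn:psi}):

```latex
\[
\psi(-y)
= \frac{1}{1 + e^{-\beta y}}
= \frac{e^{\beta y}}{1 + e^{\beta y}}
= 1 - \frac{1}{1 + e^{\beta y}}
= 1 - \psi(y).
\]
```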
We regard a walk $S_n^i$ as recurrent (transient) if every vertex of
$\mathbb Z$ is visited by $S_n^i$ infinitely (only finitely) many times
almost surely. Our main results are stated as follows.
\begin{theorem}\label{th:trans} If $\beta > 2$, both random walks
$S^1_n$ and $S^2_n$ are transient and
\[
\lim_{n\to\infty} S^1_n =
-\lim_{n\to\infty} S^2_n = \pm \infty \quad \textup{a.s.}
\]
\end{theorem}
\begin{theorem}\label{th:rec}
If $\beta \in [0, 1]$, then both $S_n^1$ and $S_n^2$ are recurrent.
\end{theorem}
\begin{remark}\label{conj:recurrence} The case $\beta = 0$ is
trivial. Indeed, when $\beta=0$, both $S_n^1$ and $S_n^2$ are two
independent simple symmetric random walks and hence recurrent. The
case $\beta \in (1, 2]$ remains widely open. The problem that arises
in this case is mentioned at the end of this article in
Remark~\ref{remark:speed}.
\end{remark}
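The behaviour described by Theorems~\ref{th:trans} and \ref{th:rec} is easy to explore numerically. The sketch below is our own illustration, not part of any proof; it implements the transition rule (\ref{eqn:transProb})--(\ref{eqn:psi}) with both walks started at the origin and an unbiased first step standing in for the initial history:

```python
import math
import random

def psi(y, beta):
    # The repulsion function psi(y) = 1 / (1 + exp(beta * y)).
    return 1.0 / (1.0 + math.exp(beta * y))

def simulate(n_steps, beta, seed=1):
    """Evolve both walks from S_0^1 = S_0^2 = 0; return final positions."""
    rng = random.Random(seed)
    s = [0, 0]
    for n in range(1, n_steps + 1):
        # Displacement rates y_j = (S_{n-1}^j - S_0^j)/(n-1), frozen for
        # this step; the very first step uses y = 0 (an unbiased coin).
        m = max(n - 1, 1)
        y = [s[0] / m, s[1] / m]
        for i in (0, 1):
            j = 1 - i
            p_right = psi(y[j], beta)  # step right with probability psi(y_j)
            s[i] += 1 if rng.random() < p_right else -1
    return s

# beta = 0 gives two independent simple random walks (recurrent), while
# for beta > 2 the walks typically escape to opposite infinities.
print(simulate(10_000, beta=0.0))
print(simulate(10_000, beta=4.0))
```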
According to (\ref{eqn:transProb}) and (\ref{eqn:psi}), the
probability of a transition in a given direction decreases with the
number of previous transitions made by the opponent walk in that
direction. This allows us to recognise the process studied throughout as
being formed by two interacting reinforced random walks, namely one in
which the reinforcement is set by the repulsive behaviour of each
walk. Self-attracting reinforced random walks were formally introduced
in an unpublished paper by D. Coppersmith and P. Diaconis and have
since been the subject of intense research, see for instance
\cite{D90}, \cite{P92}, \cite{B97}, \cite{V01}, \cite{T04},
\cite{MR09}, \cite{ACK14}, and \cite{CT17}. Self-repelling walks have
also received some attention, see \cite{T95}, \cite{T01} and
references therein. The recurrence properties of self-attracting walks
have been considered among others by \cite{S06}, \cite{MR09},
\cite{S14} and \cite{CK14}. With the exception of \cite{C2014}, there
are relatively few studies of interacting vertex reinforced random
walks with `competition' or `cooperation'. \cite{C2014} considers two
random walks that compete for the vertices of finite complete graphs
and focuses on the asymptotic properties of the overlap of their
vertex occupation measures.
In this article, we study the recurrence properties of $S_n^i$,
$i=1,2$, by analysing the proportions of times each walk $i$ makes a
left and a right transition up to time $n$. To do so, we identify the
vector of empirical measures defined by these proportions with a
stochastic approximation process. The latter have been quite
effective in dealing with several reinforced processes such as
vertex reinforced walks and generalized P\'olya urns, see \cite{P07}
for a survey and further references. More precisely, we study the
asymptotic behaviour of the involved stochastic approximation by
considering the dynamical system approach described in \cite{B96} and
\cite{B99}. The rest of this article is organised as
follows. Section~\ref{sec:DS} shows that the vector of empirical
measures of the times that each walk makes a left and a right
transition forms a stochastic approximation process. This process is
related to the flow induced by a smooth vector field defined on the
product of two 1-simplices. It is therefore sufficient to consider the
planar dynamics defined by the restriction of the field to the unit
square. This together with the fact that the vector field has negative
divergence suffices to show that the limit set of the stochastic
approximation process corresponds to the set of equilibria of the
vector field. Section~\ref{sec:DS} presents a characterisation of the
equilibria in terms of the repulsion parameter $\beta$, and then shows
that the stochastic approximation process converges to stable
equilibria and does not converges to unstable
equilibria. Section~\ref{sec:recurrence} finally presents the proof of
Theorems~\ref{th:trans} and \ref{th:rec}. The proof of
Theorem~\ref{th:trans} is a straightforward application of the results
in Section~\ref{sec:DS}. By contrast, the proof of
Theorem~\ref{th:rec} is more involved. Beyond showing that the
proportion of times each walk makes a left and a right transition
converges toward $\frac{1}{2}$, the proof of Theorem~\ref{th:rec}
relies on an estimate for the
speed of convergence, a zero-one law and a coupling argument.
\section{The dynamical system approach}\label{sec:DS}
\subsection{Stochastic approximations}
For $n \geq 0$, $i =1,2$, define
\begin{equation}\label{eqn:xis}
\xi(n) = \big (\xi^1_l(n), \xi^1_r(n),
\xi^2_l(n), \xi^2_r(n) \big),
\,\,\,
\xi^i_{l}(n) = \mathbf 1_{\{S_{n+1}^i - S_n^i = - 1\}},
\,\,\,
\xi^i_{r}(n) = \mathbf 1_{\{S_{n+1}^i - S_n^i = 1\}},
\end{equation}
and then let
\begin{equation}\label{eqn:xxis}
X^i_{l}(n) = \frac{1}{n}\sum_{k=0}^{n-1} \xi^i_{l}(k),
\qquad
X^i_{r}(n) = \frac{1}{n}\sum_{k=0}^{n-1} \xi^i_{r}(k),
\end{equation}
be the proportions of left and right transitions of the $i$-th walk up
to time $n$. Hereafter we denote by $X = \{X(n)\}_{n \geq 0}$ the
process determined by $X(n) = (X^1_l(n), X^1_r(n),
X^2_l(n), X^2_r(n))$, defined on a suitable probability space
$(\Omega, \mathfrak F, \mathbb P)$.
The process $X$ takes values on the set $\D =
\triangle\times\triangle$, which equals the two-fold Cartesian product
of the one-dimensional simplex $\triangle = \{x \in \mathbb R^2 \mid x_v \geq
0, \sum_v x_v = 1\}$. We will hereafter use $(x^1_l, x^1_r,
x^2_l, x^2_r)$ to denote the coordinates of any point $x \in
\D$ and also $x^i = (x^i_l, x^i_r)$ for $i=1, 2$. Let $\textup{T}\D =
\{(x^1, x^2) \in \mathbb R^{2\times 2} \mid x^i_l+x^i_r = 0, i=1,2\}$
be the tangent space of $\D$. Now, let $\pi : \D \to\D$ be the map
\begin{equation}\label{eqn:pifirst}
x \mapsto \pi(x)
=
\big(\pi^1_l(x), \pi^1_r(x),
\pi^2_l(x), \pi^2_r(x)\big)
\end{equation}
where for $i=1,2$ and $v = l, r$,
\begin{equation}
\label{eqn:pipsi}
\pi^i_v(x) = \psi(2 x_v^j -1), \qquad j = 3 - i.
\end{equation}
For further computations, it is also worth observing that, since
$x^j \in \triangle$,
\begin{equation}\label{eqn:pipsi-novo}
\pi^i_v(x) = \frac{e^{-\beta x^j_v}}{e^{-\beta x^j_l}
+ e^{-\beta x^j_r}}.
\end{equation}
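The equivalence of \eqref{eqn:pipsi} and \eqref{eqn:pipsi-novo} on the simplex is easy to check numerically. The following Python sketch (illustrative only, not part of the text) uses the closed form $\psi(u) = 1/(1+e^{\beta u})$, which can be read off from the expression $g(w) = \psi(2w-1) = 1/(1+e^{2\beta w - \beta})$ appearing later in this section:

```python
import math

def psi(u, beta):
    # Closed form inferred from g(w) = psi(2w - 1) = 1/(1 + e^{2 beta w - beta}).
    return 1.0 / (1.0 + math.exp(beta * u))

def pi_softmax(x_l, x_r, beta):
    # Softmax form (eqn:pipsi-novo): e^{-beta x_v} / (e^{-beta x_l} + e^{-beta x_r}).
    zl, zr = math.exp(-beta * x_l), math.exp(-beta * x_r)
    return zl / (zl + zr), zr / (zl + zr)

# On the 1-simplex (x_l + x_r = 1) the two expressions coincide,
# since x_v - x_{v'} = 2 x_v - 1.
beta = 3.0
for x_l in [0.0, 0.1, 0.5, 0.9, 1.0]:
    x_r = 1.0 - x_l
    sl, sr = pi_softmax(x_l, x_r, beta)
    assert abs(sl - psi(2 * x_l - 1, beta)) < 1e-12
    assert abs(sr - psi(2 * x_r - 1, beta)) < 1e-12
```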
\begin{lemma}\label{lem:X_is_SA}
The process $X=\{X(n)\}_{n\geq 0}$ satisfies the following recursion
\begin{equation}
\label{eqn:SA}
X(n+1) - X(n) = \gamma_n(F(X(n))+U_n)
\end{equation}
where
\begin{equation}
\label{eqn:gamma_and_U}
\gamma_n = \frac{1}{n+1},
\qquad\qquad
U_n = \xi(n) - \E[\xi(n)\mid\mathfrak F_n]
\end{equation}
and $F:\D \to \textup{T}\D$ is the vector field $F= (F^1_l, F^1_r,
F^2_l, F^2_r)$ defined by
\begin{equation}
\label{eqn:TheField}
F(X(n)) = -X(n) + \pi(X(n)).
\end{equation}
\end{lemma}
The proof of Lemma~\ref{lem:X_is_SA} is presented in the Appendix. A
discrete time process whose increments are recursively computed
according to (\ref{eqn:SA}) is known as a stochastic
approximation. Provided the random term $U_n$ can be damped by
$\gamma_n$, (\ref{eqn:SA}) may be thought of as a Cauchy-Euler
approximation scheme, $x(n+1) - x(n) = \gamma_n F(x(n))$, for the
numerical solution of the autonomous ODE
\[
\dot x = F(x).
\]
Under this perspective, a natural approach to determine the limit
behaviour of the process $X$ consists in studying the asymptotic
properties of the related ODE. This heuristic, known as the ODE
method, has been rather effective while studying various reinforced
stochastic processes.
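The ODE-method heuristic can be illustrated by simulation. The Python sketch below (illustrative only; each walk takes a single uniform initial step, standing in for the unspecified first $n_0$ steps) simulates the two walks with transition probabilities $\pi^i_v(X(n))$ and checks that for $\beta = 1$ the empirical frequencies approach $\frac12$:

```python
import math, random

def simulate(beta, n_steps, seed=0):
    # Simulate the two repelling walks: at time n, walk i steps left with
    # probability psi(2 X^j_l(n) - 1), i.e. it is biased away from the
    # direction favoured so far by the other walk j = 3 - i.
    rng = random.Random(seed)
    left = [0, 0]                         # number of left transitions per walk
    for i in range(2):
        left[i] += rng.random() < 0.5     # uniform initial step (assumption)
    for n in range(1, n_steps):
        frac_l = [left[0] / n, left[1] / n]          # X^i_l(n)
        for i in range(2):
            j = 1 - i
            p_left = 1.0 / (1.0 + math.exp(beta * (2 * frac_l[j] - 1)))
            left[i] += rng.random() < p_left
    return [left[0] / n_steps, left[1] / n_steps]

# For beta = 1 the ODE method predicts X^i_l(n) -> 1/2; the fluctuations
# are of order 1/sqrt(n), far below the tolerance used here.
x1, x2 = simulate(beta=1.0, n_steps=20000)
assert abs(x1 - 0.5) < 0.05 and abs(x2 - 0.5) < 0.05
```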
Let $x = (x^1_l, x^1_r, x^2_l, x^2_r)$ be a generic
point of $\D$. By (\ref{eqn:TheField}), the ODE determined
by the stochastic approximation in our case is given by the equation
\begin{equation}
\label{eqn:ODE}
\dot x = F(x) = -x + \pi(x).
\end{equation}
By using (\ref{eqn:pipsi-novo}), equation (\ref{eqn:ODE}) explicitly
reads as
\begin{equation}
\label{eqn:ODEexplicit}
\begin{aligned}
&\frac{d}{dt} x^1_v
=
-x^1_v + \frac{e^{-\beta x^2_v}}{e^{-\beta x^2_l}
+ e^{-\beta x^2_r}},
\\
&\frac{d}{dt} x^2_v
=
-x^2_v + \frac{e^{-\beta x^1_v}}{e^{-\beta x^1_l}
+ e^{-\beta x^1_r}},
\end{aligned}
\qquad
v = l, r.
\end{equation}
Because each of $x^1$ and $x^2$ takes values on the one-dimensional
simplex $\triangle$, the system of four equations described by
\eqref{eqn:ODEexplicit} can be reduced to two equations; for instance,
those governing the evolution of $x^1_l$ and
$x^2_l$. The dynamics of the ODE in \eqref{eqn:ODE} can therefore be
described on the unit square $[0, 1]^2$ by identifying the field
with its projection $F \equiv (F^1_l, F^2_l)$. This
observation allows us to use the dynamical system approach to planar
stochastic approximations described in \cite{BH99} and
\cite{B99}. Theorem~\ref{th:X_as_convergence}, stated below, is a
consequence of this. It provides a crucial characterisation for the
asymptotic behaviour of the process $X$.
A point $x \in \D$ is an equilibrium of $F$ if $F(x) =
0$. The set of equilibria of $F$ will hereafter be denoted by
$\boocal{E}$.
\begin{theorem}\label{th:X_as_convergence} Let $X = \{X(n)\}_{n\geq
0}$ be a process satisfying the recursion \eqref{eqn:SA}. For any
$\beta \in [0, \infty){\setminus}\{2\}$, the process $X$ converges
almost surely toward an equilibrium of the vector field $F$ defined in
\eqref{eqn:TheField}.
\end{theorem}
The proof of Theorem~\ref{th:X_as_convergence} is presented in
Section~\ref{sec:ProofTh3}. We present first a description of the
equilibria of the vector field $F$.
\subsection{Equilibria}
This section identifies the equilibria of vector field defined by
(\ref{eqn:TheField}) and further studies their stability depending on
the repulsion parameter $\beta$. For any point $x \in \D$, let
$\JF{x}$ be the Jacobian matrix of the vector field $F$ at $x$ and let
$\sigma(\JF{x})$ be the set of its eigenvalues. The equilibrium $x$
is hyperbolic if all the eigenvalues in $\sigma(\JF{x})$ have non-zero
real parts. The hyperbolic equilibrium $x$ is linearly stable if
$\sigma(\JF{x})$ contains only eigenvalues with negative real parts;
otherwise $x$ is said to be linearly unstable.
\begin{lemma}\label{lem:EquilibriaForm}
Let $g : [0,1]\to [0,1]$ be a strictly decreasing function such
that $g(1-w)=1-g(w)$ for all $w \in [0,1]$. Let $E_1$ and $E_2$ be
the sets defined as
\begin{align*}
&E_1 = \Big\{x\in\D\ \Big|\ x_v^i = g(x_v^j)\ \text{for all $i$,
$v$ and $j=3-i$} \Big\}, \\
&E_2 = \Big\{x\in\D\ \Big|\ x = (w, 1-w, 1-w, w) \text{ where }
w=g(1-w) \Big\}.
\end{align*}
Then $E_1 = E_2$. In particular, $\big(\frac{1}{2}, \frac{1}{2},
\frac{1}{2}, \frac{1}{2}\big) \in E_2$, and $(w, 1-w, 1-w, w) \in E_2$
if and only if $(1-w, w, w, 1-w) \in E_2$.
\end{lemma}
\begin{proof}
Assume $x \in E_1$. First we show that $x_l^1 =
x_r^2$. Suppose by contradiction, and without loss of generality,
that $x_r^2 < x_l^1$. Since $g$ is strictly decreasing, we
would have that $1 - x_r^2 = x_l^2 = g(x_l^1) <
g(x_r^2) = x_r^1 = 1- x_l^1$, contradicting the hypothesis
that $x_r^2 < x_l^1$. Since $x_l^1 = x_r^2$, by
setting $w = x_l^1 = x_r^2$, we have that $x_r^1 =
x_l^2 = (1 - w)$. To conclude that $x \in E_2$, it is sufficient to
observe that $w = g(1 -w)$. Indeed,
\[
w=x^1_l = g(x^2_l) = g(1 - x^2_r) = g(1 -w).
\]
The second equality holds because $x \in E_1$ and the third, because
$x^2_l = 1 - x^2_r$.
Conversely, assume that $x \in E_2$. Then $(x^1_l, x^1_r,
x^2_l, x^2_r) = (w, 1-w, 1-w, w)$ for some $w$ with $w = g(1-
w)$. As an immediate consequence, we have that $x_l^1 =
g(x_l^2)$ and $x_r^2 = g(x_r^1)$. To conclude, we show
next that $x_r^1 = g(x_r^2)$ and $x_l^2 =
g(x_l^1)$. Indeed,
\[
x_r^1 = x_l^2 = 1 - w = 1 - g(1 - w) = 1 - (1 - g(w)) = g(w)
= g(x_l^1) = g(x_r^2).
\]
The third equality holds because $x \in E_2$ and hence $w = g(1 -
w)$. The fourth equality holds by hypothesis on $g$, that is, $g(1 -
w) = 1 - g(w)$ for all $w$. The last two equalities follow because $w
= x_l^1 = x_r^2$.
\end{proof}
\begin{lemma}\label{lem:eqtypes} For $\beta \in [0,2]$, the point
$\big(\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}\big)$
is the only equilibrium for the vector field $F$ given by
\eqref{eqn:TheField}. For any $\beta > 2$, the field has three
equilibria,
\begin{equation}\label{eqtypes}
\big(\textstyle\frac{1}{2}, \frac{1}{2}, \frac{1}{2},
\frac{1}{2}\big),
\quad
(w, 1-w, 1-w, w)
\quad \mbox{and}\quad
(1-w, w, w, 1-w),
\end{equation}
where $w \in (0, \frac{1}{2})$ is uniquely determined by $\beta$. The
equilibrium $\big(\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}$,
$\frac{1}{2}\big)$ is linearly stable for $\beta \in [0, 2)$ and linearly
unstable for $\beta > 2$. The equilibria $(w, 1-w, 1-w, w)$ and $(1-w,
w, w, 1-w)$ are linearly stable for $\beta > 2$.
\end{lemma}
\begin{proof}
Let $\boocal{E}$ be the set of equilibria of the vector field given by
(\ref{eqn:TheField}), and $\psi$ be given as in
(\ref{eqn:psi}).
First we show that $\boocal{E} = E_2$, where $E_2$ is defined
as in Lemma \ref{lem:EquilibriaForm} for $g(w) = \psi(2 w -1)$. To
that end, note that $x \in \boocal{E}$ if and only if $x_v^i = \pi_v^i(x) =
\psi(2 x_v^j -1) = g(x_v^j)$ for all $i$ and $v$, where $j = 3
-i$. This shows that $\boocal{E} = E_1$. Next, we show that $E_1 = E_2$. The
previous equality is ensured by Lemma \ref{lem:EquilibriaForm},
provided that $g$ is strictly decreasing and $g(1-w)=1-g(w)$ for all
$w \in [0,1]$. These two assertions follow immediately by inspection
of $g(w)$, where
\[
g(w) = \frac{1}{1+e^{2 \beta w - \beta}}.
\]
This shows that $\boocal{E} = E_2$ with $g(w) = \psi(2w -1)$. In particular,
for all $\beta \geq 0$,
\begin{equation}
\label{eqn:Def_of_E}
\boocal{E} =
\Big\{x\in\D\ \Big|\ x = (w, 1-w, 1-w, w), \text{ where } w
= g(1 - w)\Big\}.
\end{equation}
By Lemma \ref{lem:EquilibriaForm}, it follows that $(\frac{1}{2}$,
$\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}$) $\in \boocal{E}$ and $(w, 1-w,
1-w, w) \in \boocal{E}$ if and only if $(1-w, w, w, 1-w) \in \boocal{E}$. To conclude,
it is sufficient to show two things. First, if $\beta \in [0,2]$, then
there is no $w \in [0,\frac{1}{2})$ such that $w = g(1-w)$; and
second, if $\beta \in (2,\infty)$, then there is only one $w \in
[0,\frac{1}{2})$ such that $w = g(1-w)$.
If $\beta = 0$, then the first assertion holds because $g(1 - w) =
\frac{1}{2}$ for all $w \in [0,\frac{1}{2})$. If $\beta > 0$, then
both assertions hold because $g(1 - w)$ is bounded from below by zero,
increasing, strictly convex on $[0,\frac{1}{2})$, and such that
\[
\frac{\partial}{\partial w} g(1 - w)\big{|}_{w=\frac{1}{2}} > 1
\qquad \text{if and only if $\ \ \beta > 2$}.
\]
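The fixed-point equation $w = g(1-w)$ is elementary to explore numerically. The following Python sketch (a sanity check, not part of the argument) locates the nontrivial root by bisection for $\beta = 4$ and confirms that no root below $\frac12$ exists for $\beta = 1$:

```python
import math

def g(w, beta):
    # g(w) = psi(2w - 1) = 1 / (1 + e^{2 beta w - beta})
    return 1.0 / (1.0 + math.exp(2.0 * beta * w - beta))

def f(w, beta):
    # Equilibrium values below 1/2 solve f(w) = 0.
    return g(1.0 - w, beta) - w

# beta = 1: f > 0 on [0, 1/2), so w = 1/2 gives the only equilibrium.
assert all(f(k / 100.0, 1.0) > 0 for k in range(50))

# beta = 4: f changes sign on (0, 1/2); bisection finds the nontrivial root.
beta = 4.0
lo, hi = 0.0, 0.25            # f(0) > 0 and f(0.25) < 0 for beta = 4
assert f(lo, beta) > 0 and f(hi, beta) < 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid, beta) > 0 else (lo, mid)
w = 0.5 * (lo + hi)
assert abs(g(1.0 - w, beta) - w) < 1e-12 and 0.0 < w < 0.5
```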
The stability of an isolated equilibrium point is determined by
studying a linearization of the vector field provided by the Jacobian
matrix at that point. For any $x \in \D$ let $\JF{x} = [\partial
F^i_k(x)/\partial x^j_s]$ for $i=1,2$, $j=1,2$, and $k, s \in
\{l,r\}$. For $x_* =(\frac{1}{2}, \frac{1}{2}, \frac{1}{2},
\frac{1}{2}) \in \boocal{E}$, the Jacobian matrix is
\[
\JF{x_*}
=
\begin{bmatrix*}[r]
-1 & 0 & -\frac{\beta}{4} & \frac{\beta}{4} \\[.4em]
0 & -1 & \frac{\beta}{4} & -\frac{\beta}{4} \\[.4em]
-\frac{\beta}{4} & \frac{\beta}{4} & -1 & 0 \\[.4em]
\frac{\beta}{4} & -\frac{\beta}{4} & 0 & -1
\end{bmatrix*}.
\]
The four eigenvalues of $\JF{x_*}$ are easily computed and equal
\begin{equation}\label{eqn:eingenv}
-1, \quad -1, \quad -1 - \frac{\beta}{2}, \quad\text{and }\ -1 +
\frac{\beta}{2}.
\end{equation}
This shows that the equilibrium $x_* =(\frac{1}{2}, \frac{1}{2},
\frac{1}{2}, \frac{1}{2})$ is linearly stable if $\beta < 2$ and
linearly unstable if $\beta > 2$.
Now, suppose that $\beta > 2$, and let $x_w = (w, 1-w, 1-w, w) \in
\boocal{E}$, where $w \in (0,\frac{1}{2})$. The Jacobian of the vector field
at $x_w$ is given in this case by the matrix
\[
\JF{x_w}
=
\begin{bmatrix*}[r]
-1 & 0 & -h(w,\beta) & h(w,\beta) \\[.4em]
0 & -1 & h(w,\beta) & -h(w,\beta) \\[.4em]
-h(w,\beta) & h(w,\beta) & -1 & 0 \\[.4em]
h(w,\beta) & -h(w,\beta) & 0 & -1
\end{bmatrix*},
\]
where
\[
h(w, \beta) = \frac{\beta}{2+2\cosh(\beta-2 w \beta)}.
\]
Two eigenvalues of this matrix equal $-1$. The two other eigenvalues
are $-1 \mp 2 h(w,\beta)$. Simple analysis shows that the eigenvalue
$-1-2h(w,\beta)$ is negative for any $\beta > 2$ and $w \in
\big[0,\frac{1}{2}\big)$. To conclude that $x_w$ is stable, it remains
to show that $-1 + 2h(w,\beta)$ is negative. Note that, for $\beta >
2$ and $v \in \big[0, \frac{1}{2}\big)$, the map $v\mapsto
-1+2h(v,\beta)$ is increasing and equals 0 at a single value $w_*$
determined by
\[
w_* = \frac{\beta - \text{arcosh}(\beta-1)}{2\beta}.
\]
To conclude that $-1+2h(w,\beta)$ is negative, we show that $w <
w_*$. A straightforward computation shows that $w_*$ is the unique
solution to
\[
\frac{\partial}{\partial w} g(1-w)\Big|_{w = w_*} = 1 \quad\text{for}\ w_* \in
\Big[0,\frac{1}{2}\Big).
\]
Since $x_w = (w, 1-w, 1-w, w) \in \boocal{E}$ and $w \in
\big[0,\frac{1}{2}\big)$, we have that $g(1-w) = w$, where $g$ is the
map used in the definition of $\boocal{E}$ in (\ref{eqn:Def_of_E}). Since
$g\big(\frac{1}{2}\big) = \frac{1}{2}$, $w \mapsto g(1- w)$ is
continuous, strictly increasing and strictly convex for $w \in \big[0,
\frac{1}{2}\big]$, it follows that $w < w_*$.
An analogous argument shows that the equilibrium $(1-w, w, w, 1-w)$ is
stable when $\beta > 2$, because the Jacobian of the vector field at
this point has the same spectrum as the Jacobian at $(w, 1-w, 1-w,
w)$.
\end{proof}
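Both Jacobians in the proof share the same structure, so the stated spectra can be verified directly by checking $\JF{x}v = \lambda v$ on four explicit eigenvectors. A pure-Python sanity check (illustrative, not part of the proof):

```python
def jacobian(c):
    # Both Jacobians in the proof have this form: c = beta/4 at the symmetric
    # equilibrium x_*, and c = h(w, beta) at (w, 1-w, 1-w, w).
    return [[-1, 0, -c, c],
            [0, -1, c, -c],
            [-c, c, -1, 0],
            [c, -c, 0, -1]]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

def check(c):
    # Eigenpairs: two eigenvalues -1, plus -1 - 2c and -1 + 2c.
    pairs = [([1, 1, 1, 1], -1),
             ([1, 1, -1, -1], -1),
             ([1, -1, 1, -1], -1 - 2 * c),
             ([1, -1, -1, 1], -1 + 2 * c)]
    for v, lam in pairs:
        Av = matvec(jacobian(c), v)
        assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(4))

beta = 3.0
check(beta / 4)   # at x_*: eigenvalues -1, -1, -1 - beta/2, -1 + beta/2
check(0.1)        # generic h(w, beta): eigenvalues -1, -1, -1 -+ 2h
```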
\subsection{Convergence to equilibria}
\label{sec:ProofTh3}
This section presents the proof of Theorem~\ref{th:X_as_convergence}.
Its proof relies on Lemma~\ref{lem:LimitSet-second-part} stated
below. This lemma allows us to relate the limiting behaviour of the
random process $X$ to the one of the flow induced by the vector field
in \eqref{eqn:TheField}. We will make use of the following
terminology, mostly taken from \cite{B99}, to state this result.
A semi-flow on $\D$ is a continuous map $\phi:\mathbb R_+\times\D\to\D$ such
that $\phi_0$ is the identity on $\D$, and $\phi_{t+s} =
\phi_t\circ\phi_s$ for any $t, s \geq 0$. To simplify notation we write
$\phi_t(x)$ instead of $\phi(t, x)$. A subset $A\subset \D$ is said to
be positively invariant if $\phi_t(A)\subset A$ for all $t\geq 0$. Let
$F$ be a continuous Lipschitz vector field on $\D$. The semi-flow
induced by $F$ is the unique smooth map $\Phi=\{\phi_t\}$ such that:
1. $\phi_0(x_0) = x_0$ for any $x_0 \in \D$, and 2.
$\frac{d}{dt}\phi_t(x_0) = F(\phi_t(x_0))$ for all $t \geq 0$.
A simple verification shows that the vector field $F$ in
(\ref{eqn:TheField}) is Lipschitz continuous, hence the induced
semi-flow $\Phi$ is uniquely determined by $F$. Moreover, the
following lemma shows that $\D$ is positively invariant by $\Phi$.
\begin{lemma}\label{lem:DomainInvariance}
$\D$ is positively invariant for the semi-flow $\Phi$ induced by the
vector field $F$ in (\ref{eqn:TheField}).
\end{lemma}
\begin{proof} Let $z = (z_l^1, z_r^1, z_l^2, z_r^2)$
be a generic point in $\D$. Suppose that $z \in \partial\D$ and hence,
without loss of generality, that $z_l^1 = 0$ and $z_r^1 =
1$. Suppose $\phi_t$ is a solution of (\ref{eqn:ODE}) with $\phi_0 =
z$. For any $t \geq 0$, write $\phi(t, z) = \phi_t(z)$. Since $F(z)
\in \textup{T}\D$, it is sufficient to show that $\frac{d }{dt}\phi_l^1(t,
z)\big|_{t=0} > 0$, in which case, it holds also that $\frac{d }{dt}
\phi_r^1(t, z)\big|_{t=0} = - \frac{d }{dt} \phi_l^1(t, z)\big|_{t=0}
< 0$. By (\ref{eqn:pifirst}), (\ref{eqn:pipsi-novo}), and
(\ref{eqn:ODE}), it follows that
\[
\frac{d }{dt} \phi_l^1(t, z)|_{t=0} = \psi(2 z_l^2 - 1) \geq
\inf_y \psi(2 y - 1)
=
\frac{1}{1+e^{\beta}} > 0.
\]
This shows that $F(z)$ points inwards whenever $z \in \partial\D$,
and hence that $\phi_t \in \D$ for all $t>0$ if $\phi_0 \in \D$.
\end{proof}
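The inward-pointing estimate at the boundary can be corroborated numerically; a small Python sketch (illustrative only) evaluates $F^1_l$ on the face $z^1_l = 0$ and confirms the uniform lower bound $1/(1+e^{\beta})$:

```python
import math

beta = 3.0
psi = lambda u: 1.0 / (1.0 + math.exp(beta * u))

# On the face z^1_l = 0 (so z^1_r = 1), F^1_l(z) = psi(2 z^2_l - 1) > 0:
# the field pushes the trajectory back into the interior.
lower_bound = 1.0 / (1.0 + math.exp(beta))
for z2l in [k / 20.0 for k in range(21)]:
    f1l = -0.0 + psi(2 * z2l - 1)
    assert f1l >= lower_bound - 1e-12 > 0.0
```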
In terms of the semi-flow $\Phi$, a point $x\in \D$ is said to be an
equilibrium if $\phi_t(x) = x$ for all $t \geq 0$. A point $x \in \D$
is periodic if $\phi_T(x) = x$ for some $T>0$. The set $\gamma(x) =
\{\phi_t(x)\, :\, t \geq 0\}$ is the orbit of $x$ by $\Phi$. A subset
$\Gamma \subset \D$ is an orbit chain for $\Phi$ provided that for some
natural number $k\geq 2$, $\Gamma$ can be expressed as the union
$\Gamma = \{e_1, \ldots, e_k\}\bigcup \gamma_1 \bigcup \ldots \bigcup
\gamma_{k-1}$ of equilibria $\{e_1, \ldots, e_k\}$ and nonsingular
orbits $\gamma_1$, $\ldots$, $\gamma_{k-1}$ connecting them. If $e_1 =
e_k$, $\Gamma$ is called a cyclic orbit chain.
Let $\delta >0$, $T>0$. A $(\delta, T)$-pseudo orbit from $x \in \D$
to $y \in \D$ is a finite sequence of partial orbits $\{\phi_t(y_i) :
0 \leq t \leq t_i\}$; $i=0, \ldots, k-1$; $t_i\geq T$ of the semi-flow
$\Phi= \{\phi_t\}_{t\geq 0}$ such that
\[
\|y_0 - x\| < \delta,
\qquad
\|\phi_{t_i}(y_i) - y_{i+1}\| < \delta,\ \ i=0, \ldots, k-1,
\quad\text{and}\quad
y_k = y.
\]
A point $x \in \D$ is chain-recurrent if for every $\delta>0$
and $T>0$ there is a $(\delta, T)$-pseudo orbit from $x$ to
itself. The set of chain-recurrent points of $\Phi$ is denoted by
$\mathcal{R}(\Phi)$. The set $\mathcal{R}(\Phi)$ is closed, positively invariant by $\Phi$ and
such that $\boocal{E} \subset \mathcal{R}(\Phi)$.
Let $\mathfrak L\big(\{X(n)\}\big)$ be the limit set of the stochastic
approximation process $X = \{ X(n)\}_{n\geq 0}$. That is, for any point
$\omega \in \Omega$, the value of $\mathfrak L\big(\{X(n)\}\big)$ at $\omega$
is given by the set of points $x \in \D$ for which $\lim_{k\to
\infty} X(n_k, \omega) = x$, for some strictly increasing sequence of
integers $\{n_k\}_{k \in \mathbb N}$.
Next we show that $\mathfrak L\big(\{X(n)\}\big)$ is almost surely connected
and included in $\mathcal{R}(\Phi)$. This is the content of
Lemma~\ref{lem:LimitSet-second-part}. To show
Lemma~\ref{lem:LimitSet-second-part}, we use the following lemma.
\begin{lemma}\label{lem:Kushner}
Let $X = \{X(n)\}_{n\ge 0}$ be a process satisfying the recursion in
\eqref{eqn:SA} such that $F$ defined by \eqref{eqn:TheField} is a
continuous vector field with unique integral curves. Then
\begin{enumerate}[(i), nosep]
\item $\{X(n)\}_{n\geq 0}$ is bounded,
\item $\lim_{n\to\infty}\gamma_n=0$, $\sum_{n\geq 0}
\gamma_n = \infty$,
\item\label{as:KushnerLemma} for each $T>0$, almost surely it holds
that
\[
\lim_{n\to\infty}
\Bigg(\sup_{\{\, r\, :\, 0\, \leq\, \tau_r - \tau_n\, \leq\, T\, \}}
\Bigg\|\sum_{k=n}^{r-1} \gamma_k U_k\Bigg\|
\Bigg) = 0,
\]
where $\tau_0=0$ and $\tau_n = \sum_{k=0}^{n-1} \gamma_k$.
\end{enumerate}
\end{lemma}
\begin{proof}
Item (\emph{i}) follows from the definition of $\{X(n)\}_{n\geq 0}$ in
\eqref{eqn:xxis}. Item (\emph{ii}) is immediate from the form of
$\gamma_n$ in (\ref{eqn:gamma_and_U}). The proof of (\emph{iii}) is
presented in the Appendix.
\end{proof}
\begin{lemma}\label{lem:LimitSet-second-part}
Let $X = \{X(n)\}_{n\ge 0}$ be a process satisfying the recursion in
\eqref{eqn:SA} and $\mathcal{R}(\Phi)$, the chain-recurrent set of the semi-flow
induced by the vector field $F$ in \eqref{eqn:TheField}. Then,
$\mathfrak L\big(\{X(n)\}\big)$ is almost surely connected and
included in $\mathcal{R}(\Phi)$.
\end{lemma}
\begin{proof} Since $X$ satisfies the properties
(\emph{i})-(\emph{iii}) in Lemma~\ref{lem:Kushner}, the proof of the
lemma follows from Theorem 1.2 in \cite{B96}.
\end{proof}
We are now in the position to present the proof of
Theorem~\ref{th:X_as_convergence}.
\begin{proof}[Proof of Theorem~\ref{th:X_as_convergence}]
We show first that $\mathcal{R}(\Phi) \subset \boocal{E}$. Let $\Phi=\{\phi_t\}_{t\geq 0}$
denote the planar semi-flow induced by the vector field $F\equiv
(F^1_l, F^2_l)$, where $F^1_l$ and $F^2_l$ are two of
the coordinate functions of the field defined by
\eqref{eqn:TheField}. By Lemma~\ref{lem:eqtypes}, we have that the
field $F$ has isolated equilibria. It then follows from Theorem 6.12
in \cite{B99} that for any point $p \in \mathcal{R}(\Phi)$ one of the following
holds:
\begin{enumerate}[($a$), nosep]
\item $p$ is an equilibrium
\item $p$ is periodic
\item There exists a cyclic orbit chain $\Gamma \subset \mathcal{R}(\Phi)$ which
contains $p$.
\end{enumerate} A simple computation shows that $\text{div} F$, the
divergence of $F$, is negative; indeed,
$
\text{div} F(x) = \partial F_l^1(x)/\partial x_l^1
+ \partial F_l^2(x)/\partial x_l^2 = - 2.
$
This implies that $\phi_t$ decreases area for $t > 0$. In this case,
according to Theorem 6.15 in \cite{B99}, it follows that:
\begin{enumerate}[1., nosep]
\item $\mathcal{R}(\Phi)$ is a connected set of equilibria which
is nowhere dense and which does not separate the plane
\item If $\Phi$ has at most countably many equilibrium points, then
$\mathcal{R}(\Phi)$ consists of a single stationary point.
\end{enumerate}
Both options, ($b$) and ($c$), are therefore ruled out and hence $\mathcal{R}(\Phi)
\subset \boocal{E}$.
Observe now that, by Lemma~\ref{lem:LimitSet-second-part}, we have
that $\mathfrak L\big(\{X(n)\}\big)$ is almost surely connected and
included in $\mathcal{R}(\Phi)$. Since $\mathcal{R}(\Phi) \subset \boocal{E}$ and $\boocal{E}$ is formed by
isolated points it follows that $X(n)$ converges almost surely towards
a point of $\boocal{E}$.
\end{proof}
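The divergence computation used in the proof is easy to corroborate numerically; the following Python sketch (illustrative only) evaluates $\text{div} F$ for the planar restriction by central finite differences and recovers $-2$ at several points and values of $\beta$:

```python
import math

def planar_F(x1l, x2l, beta):
    # Planar restriction (F^1_l, F^2_l) of the field in (eqn:TheField),
    # with psi(u) = 1 / (1 + e^{beta u}).
    psi = lambda u: 1.0 / (1.0 + math.exp(beta * u))
    return (-x1l + psi(2 * x2l - 1), -x2l + psi(2 * x1l - 1))

def divergence(x1l, x2l, beta, eps=1e-6):
    # Central finite differences for dF^1_l/dx^1_l + dF^2_l/dx^2_l.
    d1 = (planar_F(x1l + eps, x2l, beta)[0]
          - planar_F(x1l - eps, x2l, beta)[0]) / (2 * eps)
    d2 = (planar_F(x1l, x2l + eps, beta)[1]
          - planar_F(x1l, x2l - eps, beta)[1]) / (2 * eps)
    return d1 + d2

for beta in [0.5, 2.0, 5.0]:
    for x1l in [0.1, 0.5, 0.9]:
        for x2l in [0.2, 0.5, 0.8]:
            assert abs(divergence(x1l, x2l, beta) - (-2.0)) < 1e-6
```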
\subsection{Non-convergence to the unstable equilibrium}
\label{sec:NonConvrg}
A step to characterise the asymptotic behaviour of the stochastic
approximation $X$ consists in establishing that this process does not
converge toward linearly unstable equilibria of $F$. This is
accomplished here by using Theorem 1 in~\cite{P90}, invoked in the
proof of the following lemma.
\begin{lemma}\label{non-covergence-int} Let $X=\{X(n)\}_{n\ge 0}$ be a
process satisfying the recursion in (\ref{eqn:SA}). Then, if $\beta
> 2$,
\[
\mathbb P\Big(\lim_{n\to\infty} X(n)
=
\big(\textstyle\frac{1}{2}, \frac{1}{2}, \frac{1}{2},
\frac{1}{2}\big)\Big)
= 0.
\]
\end{lemma}
\begin{proof} Let $x_*=\big(\frac12, \frac12, \frac12, \frac12\big)$
and $U_n$ be defined as in Lemma~\ref{lem:X_is_SA}. Throughout,
$\|\cdot\|$ stands for the $L^1$ norm in $\mathbb R^4$. The proof follows
from Theorem 1 in \cite{P90}, provided that the following conditions
are satisfied:
\begin{enumerate}[(i), nosep]
\item $x_*$ is a linearly unstable critical point of $F$,
\item $\|U_n\| \leq c_1$ for some positive constant $c_1$, and
\item For every $x \in \mathcal B(x_*)$, $n > n_0$, and $\theta
\in \textup{T}\D$ with $\|\theta\|=1$, there is a positive constant $c_2$
such that
\[
\E\Big[\textup{max}\big\{\big\langle\theta, U_n\big\rangle,0\big\}
\,\Big|\, X(n) = x, \mathfrak F_n\Big] \geq c_2.
\]
\end{enumerate}
Condition (\emph{i}) follows immediately from Lemma~\ref{lem:eqtypes}
and Condition (\emph{ii}), by the definition of $U_n$ in
(\ref{eqn:gamma_and_U}). The rest of the proof concerns the
verification of (\emph{iii}).
Let $\textup{T}\D_1 = \{ \theta \in \textup{T}\D \, : \, \|\theta\| = 1 \}$ and $n_0$
be defined as in the first paragraph of the introduction. For $w \in
\mathbb R$, let $w^+ = \textup{max}\{w,0\}$. It is sufficient to show that,
for all $n > n_0$, $x \in \D$, and $\theta \in \textup{T}\D_1$, we have that
\begin{equation}\label{Edesig}
\E \Big[ \big\langle\theta, U_n
\big\rangle^+ \Big{|} \,X(n) = x, \, \mathfrak F_n \Big] \geq s(x)
\end{equation}
where $s:\D \to \mathbb{R}$ is a continuous function with $s(x_*) >
0$.
Let
\begin{equation}
s(x) = \frac{1}{2} \Big(\min_{i,v} \pi_{v}^{i} \big(x\big)
\Big)^{3}.
\end{equation}
Clearly $s$ is continuous because $\pi_v^i$ are continuous. Since
$F(x) = -x + \pi(x)$ and since $F(x_*) = 0$, we have that $\pi(x_*) =
x_* = \big(\frac12, \frac12, \frac12, \frac12\big)$ and, therefore,
$s(x_*) > 0$.
It remains to show (\ref{Edesig}). Let $\theta \in \textup{T}\D_1$. For each
walk $i \in \{1,2\}$, choose a vertex $v^i \in \{l, r\}$, such
that
\[
\theta_{v^i}^i = \max_v \theta_v^i.
\]
Next, define the event $A = \bigcap_{i = 1, 2} \{\xi_{v^i}^i(n) =
1\}$, with $\xi$ as defined by (\ref{eqn:xis}). That is, $A$ is the
event in which each walk $i \in \{1,2\}$ makes a transition to vertex $v^i$
at time $n+1$. For all
$n \geq n_0$ and $\theta \in \textup{T}\D_1$, we have that
\begin{equation}\label{Edesig33}
\E\Big[\big\langle\theta, U_n\big\rangle^+ \Big | \, X(n) = x, \mathfrak F_n
\Big ]
=
\E\Big[\big\langle\theta, U_n\big\rangle^+ \Big | \, X(n) = x \Big
]
\geq
q(x, \theta)
\end{equation}
where
\begin{equation}
\label{eqn:def_of_q}
q(x, \theta)
=
\E\Big[\big\langle\theta, U_n\big\rangle^+ \Big | \, A,\, X(n) = x
\Big]\mathbb P\big(A\, | \, X(n) = x\big).
\end{equation}
Note that the first equality
follows because the distribution of $U_n$ is uniquely
determined by $X(n)$ according to (\ref{eqn:gamma_and_U}). The
inequality in (\ref{Edesig33}) holds because $\langle\theta,
U_n\rangle^+$ is non-negative. Now, to show (\ref{Edesig}), it is
sufficient to prove that for all $\theta \in \textup{T}\D_1$ and $x \in \D$
\begin{equation}\label{inetheta}
q(x, \theta) \geq s(x).
\end{equation}
Assume, without loss of generality, that $v^i = l$, $i = 1,
2$. That is, $\theta \in \textup{T}\D_1$ is of the form $(\theta_{l}^1,
\theta_{r}^1, \theta_{l}^2, \theta_{r}^2) = \big(a, -a,
\frac12 - a, a -\frac12\big)$ for some $a \in \big[0,
\frac12\big]$. In that case, $A = \{\xi_{l}^1(n) = 1, \,
\xi_{r}^1(n) = 0, \,\xi_{l}^2(n) = 1, \, \xi_{r}^2(n) =
0\}$. According to (\ref{eqn:gamma_and_U}), we have $(U_n)_v^i =
\xi_v^i(n) - \E[\xi_v^i(n)\mid\mathfrak F_n] = \xi_v^i(n) -
\pi_v^i(X(n))$. Using the previous equality and the particular
form of $\theta$ and $A$, it follows by the definition of $q$ in
\eqref{eqn:def_of_q} that
\begin{align*}
q(x, \theta)
%
%
%
&=
\Big[\sum_{i} \theta_{l}^i - \sum_{i,v} \theta_v^i
\pi_v^i(x)\Big ]^+ \mathbb P\big(A\, | \, X(n) = x \big) \\
&=
\Big[\frac{1}{2} - \sum_{i,v} \theta_v^i \pi_v^i(x)\Big]^+
\prod_{i=1}^2\pi_{l}^{i}(x) \\
&\geq
\Big[\frac{1}{2} - \sum_{i,v} \theta_v^i \pi_v^i(x)\Big ]^+
\Big(\min_{i,v}\pi_v^{i}(x) \Big)^2,
\end{align*}
where the second equality uses the fact that the transitions of the
walks are independent given the event $\{X(n) = x\}$ and, therefore,
$\mathbb P\big(A\, | \, X(n) = x \big) = \prod_{i=1}^2 \pi_{l}^{i}
(x)$.
To show (\ref{inetheta}), it is sufficient to show that $(\frac{1}{2}
- \sum_{i,v} \theta_v^i \pi_v^i(x))^+ \geq \frac{1}{2} \min_{i,v}
\pi_v^{i}(x)$. To simplify notation, set $\pi_v^{i} =
\pi_v^{i}(x)$. Since $(\theta_{l}^1, \theta_{r}^1,
\theta_{l}^2, \theta_{r}^2) = \big(a, -a, \frac12 - a, a
-\frac12\big)$, it
follows that
\begin{align*}
\frac{1}{2} - \sum_{i,v} \theta_v^i \pi_v^i
&= \frac{1}{2} - \Big(a
\pi_{l}^1 + \Big(\frac{1}{2} - a\Big)\pi_{l}^2\Big) + a
\pi_{r}^1 + \Big(\frac{1}{2} - a\Big)\pi_{r}^2 \\
&\geq \frac{1}{2} -
\Big(a \pi_{l}^1 + \Big(\frac{1}{2} - a\Big)\pi_{l}^2\Big)
+ a \min_{i,v}\pi_v^i + \Big(\frac{1}{2} - a\Big)
\min_{i,v}\pi_v^i \\
&\geq
\frac{1}{2} \min_{i,v}\pi_v^i,
\end{align*}
where the last inequality uses the fact that $a \in \big[0, \frac12\big]$,
$\pi_l^1, \pi_l^2 \in [0, 1]$, and, therefore, $\frac{1}{2} -
\big(a \pi_{l}^1 + (\frac{1}{2} - a)\pi_{l}^2 \big) \geq
\frac{1}{2} - \big(a + (\frac{1}{2} - a) \big) = 0$.
\end{proof}
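The final estimate $\big(\frac12 - \sum_{i,v} \theta_v^i \pi_v^i\big)^+ \geq \frac12 \min_{i,v} \pi_v^i$ can also be probed over a grid. A Python sketch (illustrative; it parametrises $\theta$ by $a \in \big[0, \frac12\big]$ exactly as in the proof):

```python
import math

def check_inequality(beta, a, x1l, x2l):
    # pi^i_v computed from the other walk's frequencies via psi(2x - 1),
    # with psi(u) = 1 / (1 + e^{beta u}).
    psi = lambda u: 1.0 / (1.0 + math.exp(beta * u))
    p1l, p1r = psi(2 * x2l - 1), psi(2 * (1 - x2l) - 1)
    p2l, p2r = psi(2 * x1l - 1), psi(2 * (1 - x1l) - 1)
    theta = (a, -a, 0.5 - a, a - 0.5)        # theta in TD with ||theta|| = 1
    lhs = 0.5 - (theta[0] * p1l + theta[1] * p1r
                 + theta[2] * p2l + theta[3] * p2r)
    return max(lhs, 0.0) >= 0.5 * min(p1l, p1r, p2l, p2r) - 1e-12

beta = 3.0
grid = [k / 10.0 for k in range(11)]
assert all(check_inequality(beta, a, u, v)
           for a in [0.0, 0.1, 0.25, 0.4, 0.5] for u in grid for v in grid)
```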
\section{Proof of Theorems~\ref{th:trans} and
\ref{th:rec}}\label{sec:recurrence}
This section presents the proof of the transience of both walks
$S^i_n$, $i=1, 2$, when $\beta \in (2,\infty)$, and the recurrence
when $\beta \in [0,1]$. The problem that arises when $\beta \in (1,2)$
is mentioned in Remark~\ref{remark:speed} at the end of this section.
The transience will make use of the following lemma.
\begin{lemma}\label{th:differenceConv} There is a unique point $x
\in [0,1]$, depending on $\beta$, such that,
\[
\lim_{n\to\infty}\frac{1}{n} \Big(S_n^1-S_{n_0}^1,\,
S_n^2-S_{n_0}^2\Big)\ \in\ \Big\{ (x, -x),\, (-x, x)\Big\}
\qquad
\textup{a.s.}
\]
In addition, if\ \ $0 \leq \beta \leq 2$, then $x = 0$, and if $\beta >
2$, then $0 < x < 1$.
\end{lemma}
\begin{proof}
From Theorem~\ref{th:X_as_convergence}, $X$ converges almost surely
towards one element of the set $\boocal{E}$, the set of equilibria of the
vector field $F$ in \eqref{eqn:TheField}. This set is characterised by
Lemma~\ref{lem:eqtypes}. Noting that
\begin{equation}
\label{eqn:Sn_increment}
(S^i_n - S^i_{n_0})/n = 2 X^i_r(n) - 1,
\end{equation}
it follows by Lemma~\ref{lem:eqtypes} that $X^i_r(n)
\longrightarrow \frac{1}{2}$ a.s. for $0 \leq \beta < 2$. As a
consequence, when $0 \leq \beta < 2$ we have that
\[
\frac{S_n^i - S_{n_0}^i}{n} \longrightarrow 0 \,\, \,\, \text{a.s.}
\]
For the case $\beta > 2$, by Lemma~\ref{lem:eqtypes} and
Lemma~\ref{non-covergence-int} there is $w \in (0, \frac12)$ such that
\[
\lim_{n\to\infty}
\big(X^1_r(n), X^2_r(n)\big)
\ \ \in\ \
\big\{(w, 1-w), (1-w, w)\big\}\qquad\textup{a.s.}
\]
Using \eqref{eqn:Sn_increment} and setting $x=1-2w$, it follows
that $\lim_{n\to\infty} (S_n^i-S_{n_0}^i)/n \in \big\{(x,-x),
(-x,x)\big\}$ with $0 < x < 1$. This concludes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:trans}]
The proof of the theorem is an immediate consequence of
Lemma~\ref{th:differenceConv}. Indeed, if $\beta > 2$, by
Lemma~\ref{th:differenceConv} it follows that $\big(S_n^1/n,
S_n^2/n\big)$ converges a.s. to $(x, -x)$ or $(-x, x)$ where $x >
0$. Therefore we have that either $(S_n^1, S_n^2) \to (+\infty,
-\infty)$ a.s. or $(S_n^1, S_n^2) \to (-\infty, +\infty)$ a.s.
\end{proof}
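A simulation illustrates the polarisation behind Theorem~\ref{th:trans}. The Python sketch below (illustrative; each walk takes one uniform initial step, standing in for the unspecified first $n_0$ steps) computes the empirical drifts for $\beta = 4$; their sum is close to $0$, consistent with the limit set $\big\{(x,-x), (-x,x)\big\}$, and typically each drift has magnitude close to $x = 1-2w$:

```python
import math, random

def drifts(beta, n_steps, seed):
    # Simulate both walks and return the empirical drifts (S^1_n/n, S^2_n/n).
    rng = random.Random(seed)
    right = [0, 0]                        # number of right transitions per walk
    for i in range(2):
        right[i] += rng.random() < 0.5    # uniform initial step (assumption)
    for n in range(1, n_steps):
        xr = [right[0] / n, right[1] / n]            # X^i_r(n)
        for i in range(2):
            j = 1 - i
            p_right = 1.0 / (1.0 + math.exp(beta * (2 * xr[j] - 1)))
            right[i] += rng.random() < p_right
    return [2.0 * right[i] / n_steps - 1.0 for i in range(2)]

d1, d2 = drifts(beta=4.0, n_steps=50000, seed=1)
# The drifts are close to opposite, consistent with {(x, -x), (-x, x)};
# typically |d_i| is near 1 - 2w (about 0.96 for beta = 4).
assert abs(d1 + d2) < 0.1
```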
The rest of this section is devoted to the proof of
Theorem~\ref{th:rec}, that is, of the recurrence of $S^1_n$ and
$S^2_n$ when $\beta \in [0, 1]$. Observe that in this case, according
to Lemma~\ref{lem:eqtypes}, the only equilibrium of $F$ is the point
$x_* = \big(\frac{1}{2}, \frac{1}{2}, \frac{1}{2},
\frac{1}{2}\big)$. We argue that both walks $S_n^i$ are recurrent
provided the process $X$ converges
sufficiently fast towards $x_*$. To this end, we will make use of
several lemmas. The first of these provides a rate of
convergence of $X$ towards $x_*$ when $\beta \in [0, 1]$. This is
obtained by considering the rate at which $\phi_t(x)$ converges
towards $x_*$ and the rate for the almost sure convergence of $X$
toward the trajectories of $\Phi$. The latter relies on the shadowing
techniques described in Section 8 of \cite{B99}. The proof follows
along the lines of the proof of Lemma 3.13 in \cite{BRS13}.
\begin{lemma}\label{lem:speedconv}
If $\beta \in [0, 1]$, then
\[
\big\|X(n) - x_*\big\| = \mathcal O\Big(\frac{1}{\sqrt{n}}\Big)
\qquad
\textup{a.s.}
\]
\end{lemma}
\begin{proof} Lemma~\ref{lem:eqtypes} shows that $x_*$ is the only
equilibrium of $F$ when $\beta \in [0,1]$. This lemma also shows that
$x_*$ is hyperbolic and linearly stable. By
Theorem~\ref{th:X_as_convergence} it then follows that a.s. $X(n) \to
x_*$. Further, according to Theorem 5.1 in \cite{R99}, p. 153, we have
that the equilibrium $x_*$ is exponentially attracting. More
precisely, there is a neighbourhood $\mathcal{U} \subset \D$ of $x_*$
and two constants $C\geq 1$, $\zeta > 0$, such that for any initial
condition $x \in \mathcal{U}$, the solution $\phi_t$ of
(\ref{eqn:ODE}) satisfies
\begin{equation}
\label{eqn:exp_rate}
\big\|\phi_t(x) - x_*\big\| \leq Ce^{-t \zeta}
\big\|x - x_*\big\| \quad
\text{ for all }\ t \geq 0.
\end{equation}
The constant $\zeta$ is such that $-\zeta$ is an upper bound for all
the eigenvalues $\lambda$ of $\JF{x_*}$, that is,
$\Re\text{e}(\lambda) \leq -\zeta < 0$. We observe that because of
(\ref{eqn:eingenv}), here we have the explicit expression $\zeta = 1 -
\beta/2$.
Let $\tau_n = \sum_{k=1}^n \gamma_k$ and let $Y:\mathbb R^+ \to \D$ be a
continuous time piecewise affine process defined such that: (\emph{i})
$Y(\tau_n) = X(n)$ and (\emph{ii}) $Y$ is affine on $[\tau_n,
\tau_{n+1}]$. $\{Y(t)\}_{t\geq 0}$ may be defined as the following
linear interpolation of $X(n)$,
\[
Y(\tau_n + s) = X(n) + s\frac{X(n+1)-X(n)}{
\tau_{n+1} - \tau_{n}},
\quad\text{for}\quad
0 \leq s \leq \gamma_{n+1}, \quad n \geq 0.
\]
By Proposition 8.3 in \cite{B99}, the interpolated process
$\{Y(t)\}_{t \geq 0}$ is almost surely a
$-\frac{1}{2}$-pseudotrajectory of $\Phi$, that is,
\begin{equation}
\label{eqn:shadow_rate}
\limsup_{t\to\infty} \frac1t \log\bigg(\sup_{0\leq h
\leq T}
\big\| \phi_h\big(Y(t)\big)-Y(t+h)\big\|\bigg) \leq -\frac{1}{2}
\end{equation}
for all $T > 0$.
In view of (\ref{eqn:exp_rate}) and (\ref{eqn:shadow_rate}), by Lemma
8.7 in \cite{B99}, it follows that
\[
\limsup_{t\to\infty} \frac1t \log\big(\big\| Y(t) - x_*\big\|\big)
\leq -\min\Big\{\frac12, \zeta\Big\}.
\]
This in turn implies that
\[
\|X(n) - x_*\| = \mathcal{O}\bigg(n^{-\min\big\{\frac12,\,
\zeta\big\}}\bigg)
\]
and hence concludes the proof because $\zeta \in
\big[\frac12, 1\big]$ when $\beta \in [0, 1]$.
\end{proof}
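The rate $\zeta = 1 - \beta/2$ can be checked on the ODE itself. A Python sketch (illustrative; explicit Euler integration stands in for the exact flow $\phi_t$) perturbs $x_*$ along the slow eigendirection $(1,-1)$ of the planar linearisation, whose eigenvalue is $-1+\beta/2 = -\zeta$, and recovers the decay factor $e^{-\zeta t}$:

```python
import math

def planar_F(x, beta):
    # Planar restriction (F^1_l, F^2_l) of the field in (eqn:TheField).
    psi = lambda u: 1.0 / (1.0 + math.exp(beta * u))
    return (-x[0] + psi(2 * x[1] - 1), -x[1] + psi(2 * x[0] - 1))

def flow(x, beta, T, dt=1e-3):
    # Explicit Euler integration, a crude stand-in for the exact flow phi_t.
    for _ in range(round(T / dt)):
        f = planar_F(x, beta)
        x = (x[0] + dt * f[0], x[1] + dt * f[1])
    return x

beta, zeta, eps = 1.0, 0.5, 0.01       # zeta = 1 - beta/2
x0 = (0.5 + eps, 0.5 - eps)            # perturbation along (1, -1)
xT = flow(x0, beta, T=2.0)
ratio = abs(xT[0] - 0.5) / eps
assert abs(ratio - math.exp(-zeta * 2.0)) < 0.02
```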
\begin{lemma}\label{lem:Pn_rate}
If $\beta \in [0, 1]$, then, for any $i =1,2$, $v=r, l$,
\[
\Big|\pi_v^i \big (X(n) \big ) - \frac{1}{2}\Big| =
\mathcal{O}\Big(\frac{1}{\sqrt{n}}\Big)
\qquad
\textup{a.s.}
\]
\end{lemma}
\begin{proof}
Let $x_* = \big(\frac{1}{2}, \frac{1}{2}, \frac{1}{2},
\frac{1}{2}\big)$. By the definition of $\pi$, namely by equations
(\ref{eqn:psi}) and (\ref{eqn:pipsi}), we have that $\| \nabla \pi_v^i
\big (x_* \big )\|_{\infty} = \beta/2 \leq \frac12$. By linearization
of $\pi_v^i \big (X(n) \big )$ at $x_*$ it follows that $\pi_v^i \big
(X(n) \big ) - \pi_v^i \big (x_* \big ) = \big\langle \nabla
\pi_v^i(x_*) ,X(n) - x_*\big\rangle + R(X(n))$, where $R(X(n))$ is the
error of the approximation. Therefore
\begin{align*}
\Big|\pi_v^i \big (X(n) \big ) - \frac{1}{2} \Big|
&=
\big|\pi_v^i \big (X(n) \big ) - \pi_v^i \big (x_* \big ) \big| \\
&\leq
\big\|\nabla \pi_v^i(x_*)\big\|_{\infty} \big\|X(n) - x_*\big\|
+ \big\|R(X(n)) \big\| \\
&\leq
\bigg(\frac12 + \frac{\big\|R(X(n)) \big\|}{\big\| X(n)
- x_*\big\|}\bigg)\big\|X(n) - x_*\big\|.
\end{align*}
The proof is concluded by applying Lemma~\ref{lem:speedconv}
and observing that $\|R(X(n))\|/\|X(n) - x_*\|$ converges to zero as
$X(n)$ approaches $x_*$.
\end{proof}
\begin{corollary}\label{cor:Aepsilon}
Let $P_n = \pi_r^i \big (X(n) \big )$ for some fixed $i \in
\{1,2\}$. For each $\varepsilon > 0$, there are sufficiently large $b$
and $m$, depending on $\varepsilon$, such that
\begin{equation*}
\mathbb P(A) > 1 - \varepsilon, \, \text{ where } \, A =
\bigg \{ \Big |
P_n - \frac{1}{2} \Big | \leq \frac{b}{\sqrt{n}} \,\,\, \text{for
all} \,\,\, n > m \bigg\}.
\end{equation*}
\end{corollary}
\begin{proof}
Let $\varepsilon > 0$ be arbitrary. According to Lemma
\ref{lem:Pn_rate}, there is a set $\Omega^1 \subset \Omega$ with
$\mathbb P(\Omega^1) = 1$, such that $ \big|P_n(\omega) - \frac{1}{2}\big| =
\mathcal{O}\big(\frac{1}{\sqrt{n}}\big)$ for each $\omega \in
\Omega^1$. Therefore, for each $\omega \in \Omega^1$, there are well
defined constants $n(\omega) > 0$ and $b(\omega) > 0$ such that
\[
\Big|P_n(\omega) - \frac{1}{2}\Big| \leq
\frac{b(\omega)}{\sqrt{n}} \,\, \text{ for all } \,\, n >
n(\omega).
\]
Define $\Omega_k = \big\{ \omega \in \Omega^1 \, \big | \, \max
\{b(\omega), n(\omega) \} \leq k \big\}$, $k =1, 2, 3,\ldots$ Note
that $(\Omega_k)_{k =1}^\infty$ is an increasing sequence of sets that
converges to $\Omega^1$. Since $\mathbb P(\Omega^1) = 1$, by continuity of
the probability measure, there is a sufficiently large $k_\varepsilon
> 0$ such that $\mathbb P(\Omega_{k_\varepsilon}) > 1 - \varepsilon$. Since
$A \supset \Omega_{k_\varepsilon}$ provided that $b > k_\varepsilon$
and $m > k_\varepsilon$, it follows that $\mathbb P(A) > 1 - \varepsilon$
for $b > k_\varepsilon$ and $m > k_\varepsilon$.
\end{proof}
\begin{lemma}\label{lemma:recurrence_pnn} Let $b > 0$ and $m > 0$,
and define $\{Z_n\}_{n \geq 0}$ as a non-homogeneous random walk with
independent increments, parametrized by $b$ and $m$, as follows. Set
$Z_{n} = Z_0 + \sum_{k=1}^{n} Y_k$, where $Z_0 \in \mathbb Z$ and
$Y_1, Y_2, \ldots $ are independent random variables such that
$\mathbb{P}(Y_{n+1}=1) = p_n = 1 - \mathbb{P}(Y_{n+1}=-1)$. Let $c>0$
and $\sigma_n = \text{Var}(Z_n)^{\frac12}$. The following implications hold:
\begin{align}
\label{eq:limsupgeq}
p_n
&=
\begin{cases}
0,&\text{if } n \leq m \\
\frac{1}{2} - \min\Big\{\frac12,
\frac{b}{\sqrt{n}}\Big\},&\text{otherwise}
\end{cases}
\qquad \Rightarrow \qquad
\mathbb P\bigg(\limsup_{n} \frac{Z_n}{\sigma_n} \geq c \bigg) = 1,
\\[1em]
\label{eq:limsupgeqCOR}
p_n
&=
\begin{cases}
1,&\text{if } n \leq m \\
\frac{1}{2} + \min\Big\{\frac12,
\frac{b}{\sqrt{n}}\Big\},&\text{otherwise}
\end{cases}
\qquad \Rightarrow \qquad
\mathbb P\bigg (\liminf_{n} \frac{Z_n}{\sigma_n} \leq -c \bigg)
= 1.
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:recurrence_pnn}]
We will only present the proof of (\ref{eq:limsupgeq}). The proof of
(\ref{eq:limsupgeqCOR}) goes analogously. Let $A_c = \{\limsup_n
Z_n/\sigma_n \geq c\}$ and $\boocal{Z}_n = \sigma\big(\{Z_k: k \geq
n\}\big)$. Define $\boocal Z = \bigcap_n \boocal{Z}_n$, the tail
sigma algebra generated by $Z_n$. Observe that, by the definition of
$Z_n$, we have that $\sigma_n = 2\big( \sum_{k=1}^{n}
p_k(1-p_k)\big)^{\frac12}$, thus the event $A_c$ belongs to $\boocal Z$
because $\sigma_n \to\infty$ as $n\to\infty$. Therefore, in order to
show that $\mathbb P(A_c) = 1$, by Kolmogorov's zero-one law, it suffices to
show that $\mathbb P(A_c) > 0$.
Since $A_c = \{\limsup_n Z_n/\sigma_n \geq c\} \supseteq \limsup_n
\{Z_n/\sigma_n \geq c\}$, we have that
\begin{equation}
\label{eqn:lowerbound}
\begin{aligned}
\mathbb P(A_c)
&\geq
\mathbb P\bigg(\limsup_n \Big\{\frac{Z_n}{\sigma_n} \geq c\Big\}\bigg) \\
&\geq
\limsup_n \mathbb P\bigg(\frac{Z_n}{\sigma_n} \geq c\bigg) \\
&=
\limsup_n \mathbb P\bigg(\frac{Z_n-\E[Z_n]}{\sigma_n} \geq c -
\frac{\E[Z_n]}{\sigma_n}\bigg).
\end{aligned}
\end{equation}
Let $Z$ be a standard normal random variable and let
$\overset{d}{\longrightarrow}$ stand for convergence in
distribution. Let us assume, and argue later on, that the limit $\ell =
\lim_{n\to\infty} \mathbb{E}(Z_n)/\sigma_n$ exists and that $|\ell| <
\infty$. By observing that the random variables $Y_n$ are uniformly
bounded and that $\sigma_n = \text{Var}(Z_n)^{\frac12} \to \infty$ as
$n\to\infty$, it follows that Lindeberg's conditions are met and thus
the following central limit theorem holds
\[
\frac{Z_n - \mathbb{E}(Z_n)}{\sigma_n}
\overset{d}{\longrightarrow}Z.
\]
Combining this central limit theorem and the limit $\ell$ with the
bound in (\ref{eqn:lowerbound}) gives
\[
\mathbb P(A_c)
\geq
\mathbb P\big(Z > c - \ell\big) > 0.
\]
To conclude the proof, it remains to verify that the limit $\ell$
exists and is finite. By definition of $Z_n$, it follows that
\begin{equation}\label{conclude}
\frac{\mathbb{E}(Z_n)}{\sigma_n}
=
\frac{Z_0/2 + \sum_{k=1}^{n} p_k - n/2}
{\sqrt{\sum_{k=1}^{n} p_k(1-p_k)}},
\end{equation}
where $p_k = \frac{1}{2}- b/\sqrt{k}$ for
sufficiently large $k$. A straightforward computation shows that the
right hand-side of (\ref{conclude}) converges to $-4b$ as $n$
goes to infinity.
\end{proof}
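The limit computed at the end of the proof is easy to check numerically. A small Python sketch (with the illustrative choices $Z_0 = 0$ and $b = 1/2$, which are not tied to the model) evaluates the right-hand side of (\ref{conclude}) directly:

```python
import math

def ratio(n, b, Z0=0.0):
    # E(Z_n)/sigma_n = (Z0/2 + sum_k p_k - n/2) / sqrt(sum_k p_k (1 - p_k))
    # with p_k = 1/2 - b / sqrt(k), as in the proof.
    num = Z0 / 2.0
    den = 0.0
    for k in range(1, n + 1):
        p = 0.5 - b / math.sqrt(k)
        num += p - 0.5
        den += p * (1.0 - p)
    return num / math.sqrt(den)

b = 0.5
# the ratio approaches -4b as n grows
assert abs(ratio(10 ** 6, b) + 4 * b) < 0.01
```

The observed value at $n = 10^6$ is already within about $10^{-3}$ of $-4b$, consistent with the $O(1/\sqrt{n})$ correction terms.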
We are ready for the proof of Theorem~\ref{th:rec}.
\begin{proof}[Proof of Theorem~\ref{th:rec}]
Throughout the proof, $i \in \{1,2\}$ will be fixed. It is sufficient
to show that $\mathbb P\big(\limsup_{n} \{S^i_n = 0\} \big) = 1$. This will
be achieved by proving that $\mathbb P\big(\limsup_{n} \{S^i_n = 0\} \big) >
1- \epsilon$ for arbitrary $\epsilon > 0$. Recall that
$\pi^i_r(X(n))$ is the probability of the event $\{S^i_{n+1} =
S^i_{n} + 1\}$ given $X(n)$ for $n \geq n_0$, where $n_0$ is as
defined in the Introduction. Let $\big\{U_n; n\geq 0 \big\}$ be a
sequence of independent and identically distributed uniform random
variables taking values on the open interval $(0, 1)$ and couple
$S^i_n$ with $U_n$ such that
\[
S^i_{n+1} = S^i_{n} + 1 \quad\text{if and only if}\quad
U_{n} \leq P_n \quad \text{ for } \,\, n \geq
n_0,
\]
where $P_n = \pi^i_r(X(n))$.
Choose $\varepsilon > 0$ arbitrarily. In accordance with
Corollary~\ref{cor:Aepsilon}, choose $b > 0$ and $m > n_0$ such that
\begin{equation}\label{eq:Aoneminusepsilon}
\mathbb P( A ) > 1 - \varepsilon, \, \text{ where } \, A =
\bigg \{ \Big |
P_n - \frac{1}{2} \Big | \leq \frac{b}{\sqrt{n}} \,\,\, \text{for
all} \,\,\, n > m \bigg\}.
\end{equation}
Now, let $Z_n$ be another walk with independent increments such that
$Z_0 = S^i_0$ and
\[
Z_{n+1} = Z_n + 1 \,\,\, \text{ if and only if }
\,\,\, U_n \leq
p_n \,\,\, \mbox{ for all } \,\,\, n \geq 0,
\]
where
\[
p_n =
\begin{cases}
0, &\text{if } 0 \leq n \leq m, \\
\frac{1}{2} - \min\Big\{\frac12, \frac{b}{\sqrt{n}}\Big\},
&\text{if } n > m.
\end{cases}
\]
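The coupling of the two walks through the common uniforms $U_n$ can be illustrated with a toy simulation; here $P_n \equiv 1/2$ is a hypothetical stand-in for $\pi^i_r(X(n))$, chosen so that $p_n \leq P_n$ holds by construction:

```python
import random

def coupled_walks(p, P, n_steps, seed=0):
    # Couple two +/-1 walks through the same uniforms U_n:
    # each walk steps +1 iff U_n <= its success probability.
    rng = random.Random(seed)
    S = Z = 0
    path_S, path_Z = [0], [0]
    for n in range(n_steps):
        u = rng.random()
        S += 1 if u <= P(n) else -1
        Z += 1 if u <= p(n) else -1
        path_S.append(S)
        path_Z.append(Z)
    return path_S, path_Z

b, m = 1.0, 10  # illustrative parameters
p = lambda n: 0.0 if n <= m else 0.5 - min(0.5, b / n ** 0.5)
P = lambda n: 0.5  # any walk with p(n) <= P(n) pathwise
S, Z = coupled_walks(p, P, 5000)
assert all(s >= z for s, z in zip(S, Z))  # monotone coupling: S_n >= Z_n
```

Since $p_n \leq P_n$ pathwise, the difference $S_n - Z_n$ can never decrease, which is exactly the monotonicity exploited in the proof.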
Observe that the walks $S^i_n$ and $Z_n$ are coupled through $U_n$ as
follows. Given that $p_n \leq P_n$, it follows that $Z_{n+1} = Z_n +
1$ implies that $S^i_{n+1} = S^i_{n} + 1$. Indeed, given that
$Z_{n+1} = Z_n + 1$ and $p_n \leq P_n$, we have that $U_n \leq p_n
\leq P_n$ and therefore $S^i_{n+1} = S^i_{n} + 1$. Since $Z_0 = S^i_0$
and $p_n = 0$ for all $n = 0, 1, 2, \ldots, m$, it follows that $S^i_n
\geq Z_n$ for all $n \geq 0$ on the event $B = \big\{p_n \leq P_n$
for all $n > m \big\}$. As a consequence, we have that
\begin{equation}\label{eqn:limsupB}
\begin{aligned}
\mathbb P\bigg(\limsup_{n} \frac{S^i_n}{\sigma_n} \geq c\bigg)
&\geq
\mathbb P\bigg(\limsup_{n} \frac{S^i_n}{\sigma_n} \geq c\, \bigg|\,
B\bigg)\mathbb P(B)
\\
&\geq
\mathbb P\bigg(\limsup_{n} \frac{Z_n}{\sigma_n} \geq c\, \bigg|\, B
\bigg)\mathbb P(B) \\
&=
\mathbb P(B),
\end{aligned}
\end{equation}
where $\sigma_n = \text{Var}(Z_n)^{\frac12}$. The second inequality in
\eqref{eqn:limsupB} follows by the coupling of $S^i_n$ and $Z_n$. The
equality in \eqref{eqn:limsupB} follows by (\ref{eq:limsupgeq}),
because $Z_n$ satisfies the hypotheses of
Lemma~\ref{lemma:recurrence_pnn}, and hence $\mathbb P\big(\limsup_n
Z_n/\sigma_n \geq c\big ) = 1$ as well as $\mathbb P\big(\limsup_n
Z_n/\sigma_n \geq c \, | \, B \big ) = 1$.
Now, using (\ref{eqn:limsupB}) and observing that $B \supseteq A$,
we have that
\begin{equation}\label{eqn:lssigmanome}
\mathbb P\bigg(\limsup_{n} \frac{S^i_n}{\sigma_n} \geq c\bigg) \geq
\mathbb P(B) \geq \mathbb P(A) \geq 1-\varepsilon,
\end{equation}
where the last inequality in (\ref{eqn:lssigmanome}) follows by
definition of $A$ in (\ref{eq:Aoneminusepsilon}).
Since $\varepsilon$ and $c > 0$ were arbitrarily chosen, we have,
by (\ref{eqn:lssigmanome}), that $\mathbb P\big(\limsup_{n} S^i_n/\sigma_n
\geq c\big) = 1$ for all $c > 0$. By using \eqref{eq:limsupgeqCOR}, we
can show analogously that $\mathbb P\big(\liminf_{n} S^i_n/\sigma_n
\leq -c\big) = 1$ for all $c > 0$. Using these two facts, and taking
into account that $\sigma_n$ converges to infinity as $n \to \infty$,
we conclude that
\begin{align*}
\mathbb{P}\Big(\limsup_{n} \{S_n^i = 0\}\Big)
&\geq
\mathbb{P}\bigg(\limsup_{n} \frac{S_n^i}{\sigma_n} = +\infty,\ \
\liminf_{n} \frac{S_n^i}{\sigma_n} = - \infty \bigg) \\
&=
\lim_{c \rightarrow \infty} \mathbb{P}\bigg(\limsup_{n}
\frac{S^i_n}{\sigma_n} \geq c,\ \ \liminf_{n} \frac{S_n^i}{\sigma_n}
\leq - c\bigg) = 1.
\qedhere
\end{align*}
\end{proof}
\begin{remark}\label{remark:speed}
The conclusion of Lemma~\ref{lemma:recurrence_pnn}, required in the
demonstration of Theorem~\ref{th:rec}, relies
on the fact that $p_n$ converges sufficiently fast towards
$\frac{1}{2}$. The computations involved in
Lemma~\ref{lemma:recurrence_pnn} show that the rate of convergence
must be such that $p_n \leq \frac{1}{2} - b n^{-\rho}$ for $b > 0$
and $\rho = \frac12$ for sufficiently large $n$. The exponent $\rho
= \frac12$ is critical in the sense that the conclusion of
Lemma~\ref{lemma:recurrence_pnn} does not hold if $\rho <
\frac12$. According to Lemma~\ref{lem:speedconv}, when $\beta \in
[0,1]$ we have exactly the critical rate $\rho = \frac12$. The same
arguments used throughout the proof of Lemma~\ref{lem:speedconv} also
give the estimate $\rho = \zeta$, for $\zeta = 1 - \beta/2$ when
$\beta \in (1, 2]$. This shows that the convergence of $p_n$ towards
$\frac12$ can be arbitrarily slow as $\beta \nearrow 2$, and in fact
too slow for any $\beta>1$. The question about the
recurrence/transience of both random walks remains therefore open when
$\beta \in (1, 2]$.
\end{remark}
\section*{Appendix}
\begin{proof}[Proof of Lemma~\ref{lem:X_is_SA}] By (\ref{eqn:xxis}),
it follows that
\begin{align*}
X^i_{l}(n+1) - X^i_{l}(n)
&=
\frac{1}{n+1}\big(-X^i_l(n) + \xi^i_l(n)\big).
\end{align*}
Likewise, an analogous expression for $X^i_r(n+1) - X^i_r(n)$
can be derived in terms of $\xi^i_r(n)$ and $X^i_r(n)$.
Hence, by using (\ref{eqn:gamma_and_U}) and (\ref{eqn:TheField}), it
follows that
\begin{equation}\label{eqn:increment}
X(n+1) - X(n)
=
\gamma_n\Big\{ F\big (X(n) \big ) + \E[\xi(n)\mid\mathfrak F_n] - \pi\big
(X(n) \big ) + U_n\Big\}.
\end{equation}
To conclude that (\ref{eqn:SA}) holds, we will show that
$\E[\xi(n)\mid\mathfrak F_n] - \pi(X(n)) = \mathbf{0}$. By using the
definition of the probabilities \eqref{eqn:transProb} and of $\xi(n)$
in (\ref{eqn:xis}), and further observing that $\psi$, defined in
(\ref{eqn:psi}), satisfies $1 - \psi(y) = \psi(-y)$ for all $y$, we
have that
\begin{align*}
\E[\xi(n)\mid\mathfrak F_n]
&=
\big(
\mathbb P(S^1_{n+1}-S^1_n=-1\mid\mathfrak F_n), \ldots,
\mathbb P(S^2_{n+1}-S^2_n= 1\mid\mathfrak F_n)
\big) \\
&=
\Big(
\psi \Big(\frac{S^2_{0}-S^2_n}{n} \Big),
\psi \Big( \frac{S^2_{n}-S^2_0}{n} \Big),
\psi \Big(\frac{S^1_{0}-S^1_n}{n} \Big),
\psi \Big( \frac{S^1_{n}-S^1_0}{n} \Big)
\Big).
\end{align*}
Now, since $(S^j_{0}-S^j_n)/n = 2 X_l^j(n) - 1$ and
$(S^j_{n}-S^j_0)/n = 2 X_r^j(n) - 1$, we conclude, by using the
definition of $\pi$, given by (\ref{eqn:pifirst}) and
(\ref{eqn:pipsi}), that
\begin{equation}\label{eqn:epi}
\E[\xi(n)\mid\mathfrak F_n] =
\big(
\pi_l^1 \big(X(n)\big),
\pi_r^1 \big(X(n)\big),
\pi_l^2 \big(X(n)\big),
\pi_r^2 \big(X(n)\big)\big) =
\pi \big(X(n)\big).
\end{equation}
In view of (\ref{eqn:epi}), equation (\ref{eqn:increment})
reduces to (\ref{eqn:SA}).
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:Kushner}.\ref{as:KushnerLemma}]
Let $M_n = \sum_{k=0}^n \gamma_k U_k$. The process $\{M_n\}_{n\geq 0}$
is a martingale with respect to $\{\mathfrak F_n\}_{n\geq 0}$, that is
\[
\E[M_{n+1}\mid\mathfrak F_{n+1}] = \sum_{k=0}^n \gamma_k U_k +
\gamma_{n+1}\E[U_{n+1}\mid\mathfrak F_{n+1}] = M_n.
\]
Observe that
\begin{align*}
\E\big[\|M_{n+1} - M_n\|^2\big|\ \mathfrak F_{n+1}\big]
&=
\gamma_{n+1}^2 \E\big[\|U_{n+1}\|^2\big|\ \mathfrak F_{n+1}\big] \\
&\leq
\gamma_{n+1}^2 \bigg(\sum_{v \in \{l, r\},\ i \in
\{1,2\}} \xi^i_v(n+1) \bigg)^2
\leq
16\, \gamma^2_{n+1}.
\end{align*}
By Doob's decomposition for the sub-martingale $M_n^2$, there is a
predictable increasing sequence $\{A_n\}_{n \geq 1}$, with $A_1=0$, such
that $M_n^2 - A_{n+1}$ is a martingale. The conditional variance formula
for the increment $M_{n+1} - M_n$ gives
\[
A_{n+2}-A_{n+1}
=
\E\big[M_{n+1}^2\big|\ \mathfrak F_{n+1}\big] - M_n^2 =
\E\big[\|M_{n+1}-M_n\|^2\big|\ \mathfrak F_{n+1}\big],
\]
and hence, for any $n$,
\[
A_{n+2}
=
\sum_{k=0}^n \E\big[\|M_{k+1}
- M_k\|^2\big|\ \mathfrak F_{k+1}\big]
\leq
16 \sum_{k=0}^n \gamma_{k+1}^2.
\]
Passing to the limit $n\to\infty$ shows that almost surely $A_\infty <
\infty$. According to Theorem~5.4.9 in \cite{D2010}, p. 254, this in
turn implies that $M_n$ converges almost surely to a finite limit in
$\mathbb R^{2\times2}$ and hence that $\{M_n\}_{n\geq 0}$ is a Cauchy
sequence. This is sufficient to conclude the proof.
\end{proof}
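As an illustration (not part of the proof), the martingale $M_n$ can be simulated with the concrete step sizes $\gamma_k = 1/(k+1)$ and bounded centered noise standing in for $U_k$; the square-summability of $\gamma_k$ makes the late fluctuations of $M_n$ negligible:

```python
import random

def martingale_path(N, seed=1):
    # M_n = sum_{k <= n} gamma_k U_k with gamma_k = 1/(k+1) and
    # bounded, mean-zero noise; sum_k gamma_k^2 < infinity forces
    # almost-sure convergence of M_n (a scalar toy version).
    rng = random.Random(seed)
    M, path = 0.0, []
    for k in range(1, N + 1):
        U = rng.uniform(-1.0, 1.0)  # bounded, mean zero
        M += U / (k + 1)
        path.append(M)
    return path

path = martingale_path(20000)
# the tail variance sum is O(1/n), so late increments are tiny
assert abs(path[-1] - path[9999]) < 0.1
```

The threshold $0.1$ is generous: the standard deviation of the tail $M_{20000} - M_{10000}$ is of order $10^{-3}$ here.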
\bibliographystyle{alpha}
\section{Strategy, Objectives and Summary}
Recently much attention has been paid to four dimensional conformal theories, since conformal theories or nearly conformal theories are attractive candidates for the beyond standard model.
To confront the nature, it is important to understand each conformal theory, and for this purpose,
it is urgent to clarify the global structure of conformal theories~\cite{review1}.
In this article, we investigate the global structure of $SU(3)$ conformal theories with $N_f$ fundamental fermions,
on a lattice using the Wilson fermion. Since we have reported the results in detail in Ref.~\cite{coll-full}, I will try to make
this writeup complementary to Ref.~\cite{coll-full}, giving a r\'{e}sum\'{e} without detailed discussion.
We first point out that
the following two categories in $SU(3)$ gauge theories with fundamental $N_f$ fermions possess an IR fixed point:
\begin{itemize}
\item
Large $N_f (N_f^{c} \le N_f \le 16)$ QCD within the conformal window (referred to as Conformal QCD); $N_f^c$ is the lower critical flavor number for the conformal window.
\item
Small $N_f (2 \le N_f \le N_f^{c}-1)$ QCD at temperature $T/T_c > 1$ with $T_c$ being the critical temperature (referred to as High Temperature QCD)
\end{itemize}
We then introduce a new concept ``conformal theories with an IR cutoff''~\cite{coll1}.
In the case of Conformal QCD in the continuum limit, the compact space and/or time gives an IR cutoff.
In the case of High Temperature QCD, the temperature $T$ plays the role of an IR cutoff together with a cutoff due to possible compact space, depending on how the continuum limit is taken.
We note that any lattice calculation is performed on a finite lattice. Thus any calculation on a lattice possesses an IR cutoff.
Finally, the objectives of this article are as follows (for details of the numerical simulations, see Ref.~\cite{coll-full}):
\begin{enumerate}
\item
Verify numerically on a lattice of the size $16^3\times 64$ that
the ``conformal region'' exists together with the confining region and the deconfining region
in the phase structure parametrized by $\beta$ and $K$,
both in Conformal QCD and High Temperature QCD.
Further verify that the vacuum of the conformal region is
the nontrivial $Z(3)$ twisted vacuum modified by non-perturbative effects
and that temporal propagators of mesons behave at large $t$ as a power-law corrected Yukawa-type decaying form.
The transition from the conformal region to the deconfining region or the confining region
is a transition between different vacua and therefore the transition is a first order transition
both in Conformal QCD and in High Temperature QCD.
\item
Verify that a precise correspondence between Conformal QCD and High Temperature QCD within the conformal region is realized under the change of a continuous parameter $T/T_c$ and a discrete parameter $N_f$, respectively: one boundary is close to meson states and the other is close to free quark states.
\end{enumerate}
We stress that High Temperature QCD is intrinsically accompanied by an IR cutoff. Therefore, understanding the vacuum structure and the properties of correlation functions is the key to resolving long-standing issues in High Temperature QCD.
\begin{figure*}[thb]
\includegraphics [width=7.5cm]{fig/conformal_region_small_Nf_nosim.pdf}
\hspace{1cm}
\includegraphics [width=7.5cm]{fig/conformal_region_large_Nf_nosim.pdf}
\caption{ Phase diagram on a finite lattice : (left) $1 \le N_f \le N_f^c -1$ ; (right) $N_f^c \le N_f \le 16$;
In the case $N_f^c \le N_f \le 16$, the massless quark line originating from the UV fixed point hits the bulk transition point at finite $\beta$ and no massless line exists in the confining region. The region above the bulk transition corresponds to the doublers of the Wilson fermion. On the other hand, in the case $1 \le N_f \le N_f^c -1$,
there is a chiral phase transition point on the massless quark line. Below the critical point the massless line is in the confining region.}
\label{phase diagram finite lattice}
\end{figure*}
\section{The existence of an IR fixed point}
The existence of an IR fixed point in Conformal QCD is well known as the Banks-Zaks IR fixed point
\cite{Banks1982}.
In High Temperature QCD the existence of an IR fixed point has been recently pointed out in Ref.~\cite{coll2}.
Define a running coupling constant $g(\mu; T)$ at temperature $T$ in the massless quark case~(see, for example, \cite{karsch}).
When $T/T_c > 1$, where the quark is not confined, the running coupling constant $g(\mu; T)$ cannot be arbitrarily large. This means that there is an IR fixed point with a non-trivial zero of the beta function when $T/T_c > 1$.
This is the key observation. For example,
numerical results of the running coupling constant $g(r; T)$ shown in Fig.~2 in \cite{karsch} are consistent with
the above: the running coupling constant $g(r; T)$ increases as $r$ increases up to some value and does not increase further beyond that.
The existence of an IR fixed point is not commonly accepted in the lattice community.
One possible reason might be the non-vanishing trace anomaly in High Temperature QCD.
To make clear the implication of vanishing of the beta function at finite temperature, we recall the relation between the trace anomaly of energy momentum tensor and the beta function with massless quarks:
$$
\langle T^{\mu}_{\ \mu} \rangle|_T = \mathcal{B}(g^{-2}(\mu)) \langle \mathrm{Tr}(F_{\mu\nu} (\mu))^2 \rangle|_T \ ,
$$
where $\mathcal{B}(g^{-2}(\mu))$ is the zero temperature beta function evaluated at $g = g(\mu)$, and
$\langle \mathrm{Tr}(F_{\mu\nu} (\mu))^2 \rangle|_T$ is the field strength squared at temperature $T$ renormalized at scale $\mu$
(Appendix B of~\cite{coll-full}).
\begin{figure*}[t]
\includegraphics [width=15cm]{fig/v4.pdf}
\caption{Correspondence between Conformal QCD and High Temperature QCD in terms of the beta function.
The horizontal line on the top represents the correspondence between the number of flavor $N_f$ and the temperature $T/T_c$. }
\label{betaf2}
\end{figure*}
In Lorentz invariant zero-temperature field theories, the vanishing
beta function means that the trace anomaly vanishes and the theory is conformally invariant.
However, when the beta
function at finite temperature vanishes,
this does not imply vanishing of the trace of the energy-momentum tensor.
Thus the vanishing beta function at $T>T_c$ does not contradict the non-vanishing of
the difference of energy density and three times the pressure.
Another reason why the existence of an IR fixed point is not commonly accepted might be the presence of an intrinsic IR cutoff in High Temperature QCD. We will discuss such cases below.
\section{Conformal theories with an IR cutoff and Conformal Region}
We first define the conformal field theories with an IR cutoff. The first assumption is that the beta function (either zero-temperature or finite temperature) vanishes. Of course, if there were no dimensionful quantities, this would imply that the theory is scale invariant and all the correlation functions show a strict power behavior.
Our new observation is that when such theories have a finite cutoff, they will show the universal behavior that we call ``conformal field theories with an IR cutoff'':
the ``conformal region'' exists together with the confining region and the deconfining region
in the phase structure.
We have verified numerically on a lattice of the size $16^3\times 64$ the existence of the conformal region
for $N_f=7, 8 , 12$ and $16$ as depicted on the right panel of Fig.~\ref{phase diagram finite lattice}
and for $N_f=2$ as on the left panel.
In the conformal region we find that the vacuum is
the nontrivial $Z(3)$ twisted vacuum modified by non-perturbative effects
and that temporal propagators of mesons behave at large $t$ as a power-law corrected Yukawa-type decaying form.
The transition from the conformal region to the deconfining region or the confining region
is a transition between different vacua and therefore the transition is a first order transition
both in Conformal QCD and in High Temperature QCD.
\begin{figure*}[htb]
\includegraphics [width=8cm]{fig/Fig_effm_B115K0125hNf16L16x64.pdf}
\includegraphics [width=8cm]{fig/Fig_effm_B115K0125lNf16L16x64.pdf}
\caption{ The effective mass for $N_f=16$ at $\beta=11.5$ and $K=0.125$: (left) starting from larger $K$ and (right) from smaller $K$;
see the main text for the three types of sources.}
\label{nf16_effm}
\end{figure*}
\begin{figure*}[htb]
\includegraphics [width=7.5cm]{fig/Fig_complex_nf16beta115k125_h.pdf}
\hspace{1cm}
\includegraphics [width=7.5cm]{fig/Fig_complex_nf16beta115k125_l.pdf}
\caption{ The scattered plots of Polyakov loops in the $x$, $y$ and $z$ directions overlaid;
both for $N_f=16$ at $\beta=11.5$ and $K=0.125$: (left) from larger $K$ and (right) from smaller $K$.}
\label{nf16_comp}
\end{figure*}
\begin{figure*}[thb]
\includegraphics [width=6.7cm]{fig/Fig_alpha_B65K146Nf2L16x64.pdf}
\hspace{1cm}
\includegraphics [width=6.7cm]{fig/Fig_alpha_B60K1459Nf7L16x64.pdf}
\includegraphics [width=6.7cm]{fig/Fig_alpha_B70K142Nf2L16x64.pdf}
\hspace{1cm}
\includegraphics [width=6.7cm]{fig/Fig_alpha_B60K1457Nf8L16x64.pdf}
\includegraphics [width=6.7cm]{fig/Fig_alpha_B80K139Nf2L16x64.pdf}
\hspace{1cm}
\includegraphics [width=6.7cm]{fig/Fig_alpha_B60K1440Nf12L16x64.pdf}
\includegraphics [width=6.7cm]{fig/Fig_alpha_B10K135Nf2L16x64.pdf}
\hspace{1.5cm}
\includegraphics [width=6.7cm]{fig/Fig_alpha_B115K13150Nf16L16x64.pdf}
\caption{
The correspondence of the local exponent $\alpha(t)$ for High Temperature QCD (left) and
for Conformal QCD (right).}
\label{correspondence of exponent}
\end{figure*}
Let us show the results demonstrating the existence of the conformal region and that the transition to the deconfining region is first order, in Figs.~\ref{nf16_effm} and \ref{nf16_comp} for the $N_f=16$ case.
Both panels of Fig.~\ref{nf16_effm} show effective mass plots at the same $\beta$ and $K$; $\beta=11.5$ and $K=0.125$ ($m_q=0.24$).
The left one is the result of taking a configuration at larger $K$ as the initial state, while the right one starts from smaller $K$.
We see that not only are the values at $t \sim 30$ quite different from each other, but the behaviors at large $t$ are also quite different. On the left there is no plateau up to $t=31$. We are able to fit the data with a power-law corrected Yukawa-type decaying form for the range $t=[15:31]$ with the exponent $\alpha=1.3(1)$.
Fig.~\ref{nf16_comp} shows the scattered plots of Polyakov loops in spatial directions overlaid.
The parameters are the same as in Fig.~\ref{nf16_effm}. The left panel shows that the arguments of the Polyakov loops are $2\pi /3$ with absolute values $0.18$, while the right panel shows that the arguments are $0$ and the absolute values are $0.05\sim 0.2$. These results clearly show that the vacua of the two states are different.
\section{ Correspondence between Conformal QCD and High Temperature QCD }
Based on our theoretical analysis of the RG flow and our
numerical simulations, we propose that there is a precise correspondence between
Conformal QCD and High Temperature QCD in the phase structure
under the change of the parameters $N_f$ and $T/T_c$
with the same anomalous mass dimension.
We show the two sets of $\alpha(t)$
side by side in Fig.~\ref{correspondence of exponent},
the Conformal QCD data on the right panel
and the High Temperature QCD data on the left panel in order to compare them directly.
Here $\alpha(t)$ is a local exponent defined by
parametrizing the propagator $G(t)$ as
\begin{equation}
G(t) = c\, \frac {\exp(-m(t)\, t)}{t^{\, \alpha(t)}},
\label{local}
\end{equation}
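As an illustration of Eq.~(\ref{local}), a local exponent can be extracted by fitting $\log G(t) = \log c - m\,t - \alpha \log t$ on triples of neighboring time slices. The following Python sketch (with a synthetic propagator and constant parameters, not the data behind the figures) recovers the exponent exactly:

```python
import math

def local_exponent(ts, Gs):
    # Fit log G = log c - m t - alpha log t on each triple of
    # consecutive points; returns the local exponent alpha(t).
    alphas = []
    for i in range(len(ts) - 2):
        t0, t1, t2 = ts[i:i + 3]
        y0, y1, y2 = (math.log(G) for G in Gs[i:i + 3])
        # Eliminate log c by differencing, then solve the 2x2 system
        # -m (t1-t0) - alpha (log t1 - log t0) = y1 - y0, etc.
        a11, a12, b1 = -(t1 - t0), -(math.log(t1) - math.log(t0)), y1 - y0
        a21, a22, b2 = -(t2 - t1), -(math.log(t2) - math.log(t1)), y2 - y1
        det = a11 * a22 - a12 * a21
        alphas.append((a11 * b2 - a21 * b1) / det)
    return alphas

# synthetic propagator with constant m and alpha
c, m, alpha = 2.0, 0.3, 0.8
ts = list(range(5, 32))
Gs = [c * math.exp(-m * t) / t ** alpha for t in ts]
est = local_exponent(ts, Gs)
assert all(abs(a - alpha) < 1e-8 for a in est)
```

On noisy lattice data one would instead fit over a window of several time slices and propagate statistical errors, but the local parametrization is the same.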
We observe that the correspondence in the $t$ dependence of $\alpha(t)$ between the two sets of data is excellent for each of the following pairs:
$T\sim 2 T_c$ and $N_f=7$;
$T\sim 4 T_c$ and $N_f=8$;
$T\sim 16 T_c$ and $N_f=12$;
$T\sim 256T_c$ and $N_f=16$.
Thus we plot schematically the correspondence between Conformal QCD and High Temperature QCD
as in Fig.~\ref{betaf2}.
The correspondence is a powerful tool to investigate the properties of conformal theories.
The parameter $T/T_c$ is a continuous variable, while $N_f$ is a discrete variable.
Therefore we are able to use the information in High Temperature QCD to understand the properties of Conformal QCD.
Since High Temperature QCD covers $0.0 \le \gamma^{*} \le 2.0$ and Conformal QCD takes
discrete values of $\gamma^*$ between $0.0$ and $2.0$ ($\gamma^*$ is the anomalous mass dimension),
the correspondence is realized between a continuous parameter $T/T_c$ and a discrete parameter $N_f$.
This is the precise origin of the correspondence between the two
observed in the local-analysis of propagators.
The plateau at $15 \le t \le 31$ in $\alpha(t)$ for $T \sim 2\, T_c$ disappears
as the temperature increases to $T \sim 4\, T_c$.
Translated to Conformal QCD, this means that the plateau in $\alpha(t)$ at $15 \le t \le 31$
observed as the IR behavior of $N_f=7$ disappears for $N_f=8$.
\subsection{$N_f=7$ and $T/T_c \simeq 2$}
We note that both in the $N_f=7$ case of Conformal QCD and
at $T\sim 2 T_c$ in High Temperature QCD,
we have
a plateau in the $\alpha(t)$ at large $t$ ($15\, \le t \le \, 31$).
In the both cases the IR behavior of the state is well described by a meson unparticle model~\cite{Cacciapaglia:2008ns}.
The value of $\alpha(t)$ at the plateau ($t=15\sim 31$) is $0.8(1)$ for $K=0.1452$
and $K=0.1459$ in the $N_f=7$ case.
Applying the formula $\alpha(t)=2 -\gamma^*$, we have
$\gamma^* = 1.2(1).$
Although this value should be refined in the future by taking the continuum limit,
this value implies the anomalous mass dimension is of order unity.
\section*{Acknowledgments}
I would like to express my gratitude to K.-I. Ishikawa, Yu Nakayama and T. Yoshie for a stimulating and
fruitful collaboration.
We would also like to thank
S. Aoki, H. Fukaya, E. Itou, K. Kanaya, T Hatsuda, Y. Taniguchi, A. Ukawa and N. Yamada
for useful discussion.
The calculations were performed using HA-PACS computer at CCS, University of Tsukuba and SR16000
at KEK. I would like to thank members of CCS and KEK for their strong support for this work.
\section{Introduction}
\label{sec:intro}
Understanding matter in extreme conditions is challenging. Especially warm dense matter (WDM), which is characterized by temperatures above a few electronvolts (eV) and solid densities, exhibits non-negligible degeneracy and strong correlations that must be treated in a quantum mechanical many-body framework~\cite{Graziani,Glenzer2009}. Experimentally, due to their high energy density, these states can only be created transiently and, therefore, must be probed on short time scales using intense short-wavelength radiation. Shock-compression experiments are among the premier ways extreme conditions can be reached in the laboratory. They have been used to study the high-pressure phase diagram of various geological materials~\cite{Duffy2019}, metals like silver, gold and platinum~\cite{Briggs2019,Sharma2019,Sharma2020}, iron at super-Earth conditions~\cite{Kraus2022}, and hydrocarbons~\cite{Kraus2018,Hartley2019,Luetgert2021}, even revealing novel phenomena like the formation of diamonds in the interior of Neptune~\cite{Kraus2017}.
In shock and ramp compression studies, copper itself is often used as a resistivity gauge~\cite{Golyshev2011}, yet remarkably little attention has been paid to the effect of shock compression on the resistivity of copper itself. Conversely, the conductivity of expanded liquid copper has been studied in rapid wire evaporation experiments~\cite{DeSilva1998} and isochoric heating experiments using a closed vessel apparatus~\cite{Clerouin2012}.
The dynamic structure factor (DSF) is vital for accurately describing the dynamics of matter under extreme conditions. For instance, the ion-ion DSF can be used to extract the phonon dispersion in solids. It can also be connected to the hydrodynamic model~\cite{Hansen2006} in the liquid regime, as was demonstrated recently in Ref.~\cite{Schoerner2022}. With novel improvements to the spectral resolution at X-ray free electron laser facilities, it is now becoming possible to measure the ion-ion DSF of transient WDM states~\cite{McBride2018,Descamps2020,Wollenweber2021}. This allows us to probe the ion dynamics of dynamically compressed targets, requiring sophisticated many-body simulations that take into account the quantum mechanical nature of WDM states to compare with the experimental observations. \textit{Ab~initio} simulations, like density functional theory~\cite{Hohenberg1964,Kohn1965} coupled with molecular dynamics (DFT-MD), have proven successful at describing the principal shock Hugoniot~\cite{Knudson2015,Millot2018,Ravasio2021} and the ion-ion DSF of various materials~\cite{Witte2017, Rueter2014, White2013, Gill2015}.
In this work, we calculate the ion-ion DSF of copper from DFT-MD simulations. First, we benchmark our results against experimental results for solid and liquid copper, and then compute the principal Hugoniot curve and study the change of the ion dynamics. A brief summary of the relevant equations and the details of the simulation methods are given in Sec.~\ref{sec:theory}. In Sec.~\ref{sec:solid-metal}, we determine the phonon spectrum of solid copper at ambient conditions from the ion-ion DSF and compute the dynamic electrical conductivity. For liquid copper at ambient pressure near the melting point, we compute the static and dynamic structure factor and compare to experimental results, see Sec.~\ref{sec:liquid-metal}. Subsequently, we compute the principal Hugoniot curve in Sec.~\ref{sec:hugo} and compare to shock compression experiments, and experimental and theoretical predictions for the melting line. Finally in Sec.~\ref{sec:shock-copper}, we study the evolution of the static and dynamic ion-ion structure factor during shock compression and extract the adiabatic speed of sound, which we compare to recent measurements by McCoy~\textit{et~al.}~\cite{McCoy2017}.
\section{Theoretical method}
\label{sec:theory}
Through DFT-MD simulations, we gain access to the time-dependent ion positions $\vec{r}_i(t)$ and velocities $\vec{v}_i(t)$.
By virtue of the Wiener-Khinchin theorem~\cite{Wiener, Khinchin}, the dynamic ion-ion structure factor
\begin{equation}\label{eq_Sii_WK}
S_{\mathrm{ii}} ( \vec{k},\, \omega) = \frac{1}{2\pi N} \int_{-\infty}^{\infty} \mathrm{d}t \, \langle n_{\vec{k}}(\tau) \, n_{-\vec{k}} (\tau + t) \rangle_{\tau} \, e^{i\omega t},
\end{equation}
which contains all information on the dynamics of the ion system, can be defined. Here $N$ is the number of ions in the system, $\vec{k}$ is the wave vector, $\omega$ is the frequency, and the spatial Fourier component of the ion density $n(\vec{r}, t)$ is given as
\begin{equation}\label{eq_ion_dens}
n_{\vec{k}} (t) = \int_{\mathbb{R}^3} \mathrm{d}^3r \, n(\vec{r}, \, t) \, e^{i\vec{k} \cdot \vec{r}} = \sum_{i=1}^{N} e^{i \vec{k} \cdot \vec{r}_i(t)} \, ,
\end{equation}
\begin{equation}
n (\vec{r}, t) = \sum_{i=1}^N \delta^3 (\vec{r} - \vec{r}_i (t)) \, ,
\end{equation}
with the time-dependent ion positions $\vec{r}_i(t)$. In Eq.~\eqref{eq_Sii_WK}, $\tau$ denotes the absolute reference time, while $t$ is the time delay. The ensemble average, denoted by the subscript $\tau$, is taken as the sample average over independent configurations with different values of $\tau$ but the same value of $t$.
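In practice, Eq.~\eqref{eq_Sii_WK} reduces to the power spectrum of $n_{\vec{k}}(t)$ over the stored MD trajectory. The following Python sketch shows a minimal periodogram estimator for a single wave vector; it is illustrative only (the function names are ours, not part of any analysis package) and omits the windowing and block averaging typically used in production analyses:

```python
import numpy as np

def dsf(positions, k, dt):
    """Periodogram estimate of S_ii(k, w): the power spectrum of the
    density Fourier component n_k(t) (Wiener-Khinchin theorem),
    normalized by 2*pi*N and the trajectory length.

    positions: array of shape (n_steps, N, 3) with ion positions,
    k: wave vector of shape (3,), dt: time between stored configurations."""
    n_steps, n_atoms, _ = positions.shape
    n_k = np.exp(1j * (positions @ k)).sum(axis=1)   # n_k(t)
    n_k = n_k - n_k.mean()                           # remove the elastic w = 0 line
    spectrum = np.abs(np.fft.fft(n_k))**2 * dt / (2.0*np.pi*n_steps*n_atoms)
    omega = 2.0*np.pi*np.fft.fftfreq(n_steps, d=dt)
    order = np.argsort(omega)
    return omega[order], spectrum[order]
```

In isotropic phases, such a spectrum is additionally averaged over all wave vectors of equal magnitude to improve the statistics.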
According to the hydrodynamic model~\cite{Hansen2006}, the dispersion of the collective side peak of $S_\mathrm{ii} ( \vec{k}, \omega )$, called sound mode in the hydrodynamic limit, is connected to the adiabatic speed of sound $c_\mathrm{s}$ via
\begin{equation}
\omega_\mathrm{sound} (\vec{k}) = c_\mathrm{s} |\vec{k}| \, .
\end{equation}
The position of the sound mode $\omega_\mathrm{sound}$ can be extracted from the DSF via a fitting scheme (for details, see Refs.~\cite{Rueter2014,Witte2017,Schoerner2022}).
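The final step of such a scheme, extracting $c_\mathrm{s}$ as the low-$k$ slope of the fitted peak positions, can be sketched as a least-squares fit of the linear dispersion through the origin (an illustrative simplification of the fitting procedures of Refs.~\cite{Rueter2014,Witte2017,Schoerner2022}):

```python
import numpy as np

def sound_speed(k_values, omega_peaks):
    """Least-squares slope c_s of omega = c_s * k through the origin,
    applied to the low-k part of the ion-acoustic dispersion."""
    k = np.asarray(k_values, dtype=float)
    w = np.asarray(omega_peaks, dtype=float)
    return float(k @ w / (k @ k))
```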
The dispersion of the longitudinal current-current correlation spectrum $J(\vec{k}, \omega)$, which is closely related to the DSF via
\begin{equation}\label{eq_current_correl}
J(\vec{k}, \, \omega) = \frac{\omega^2}{k^2} S_\mathrm{ii} ( \vec{k}, \, \omega ),
\end{equation}
defines the apparent sound speed $c_\mathrm{l}$.
The long-wavelength limit of the static ion-ion structure factor, which is defined as the frequency integrated ion-ion DSF $S_{\mathrm{ii}} (k) = \int_{-\infty}^{\infty} S_{\mathrm{ii}} (k,\omega) \, d\omega$, can also be determined from
\begin{equation}\label{eq-Sk0}
\lim_{k \to 0} S_{\mathrm{ii}} (k) = \kappa_T n_\mathrm{i} k_{\mathrm{B}} T
\end{equation}
via thermodynamic relations~\cite{Chaturvedi}. Here, $n_\mathrm{i}$ is the ion density, $T$ is the temperature, and $\kappa_T$ is the isothermal compressibility, which is also accessible via the thermodynamic relation
\begin{equation}\label{eq_th_compress}
\kappa_T = - \frac{1}{V} \left( \frac{\partial V}{\partial P} \right)_T \ ,
\end{equation}
where $V$ is the volume of the simulation box and $P$ is the pressure.
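As a sketch of how Eqs.~\eqref{eq-Sk0} and \eqref{eq_th_compress} are combined in practice, the compressibility can be estimated from a central finite difference over simulations at slightly different densities (the three-point stencil and function names are illustrative):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def isothermal_compressibility(volumes, pressures):
    """kappa_T = -(1/V)(dV/dP)_T from a central finite difference over
    three (V, P) points, e.g. simulations at -5%, 0% and +5% density."""
    d_v = volumes[2] - volumes[0]
    d_p = pressures[2] - pressures[0]
    return -d_v / (volumes[1] * d_p)

def s_ii_k0(kappa_t, n_ion, temperature):
    """Long-wavelength limit S_ii(k -> 0) = kappa_T * n_i * k_B * T."""
    return kappa_t * n_ion * K_B * temperature
```

For an ideal gas, where $\kappa_T = 1/P$, this construction recovers $S_\mathrm{ii}(k\to 0) = 1$, which serves as a simple consistency check.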
In order to analyze shock compression experiments, we employ the Hugoniot equation~\cite{Hugoniot1887, Hugoniot1889, Rankine1851, Rankine1870}
\begin{align}\label{eq_hugoniot}
\epsilon_1 - \epsilon_0 &= \frac{1}{2} \left( P_1 + P_0 \right) \left( V_0 - V_1 \right), \\
\epsilon_a &= \frac{E_a}{m_a}, \quad a = 0, 1 \, ,
\end{align}
which can be derived from the conservation of energy~$E$, momentum~$p$, and mass~$m$ at a propagating shock front. Here, the subscript $0$ indicates the conditions of the unshocked material while subscript $1$ indicates the conditions of the shocked material.
The DFT-MD simulations in this work were performed within the Vienna Ab-Initio Simulation Package (VASP)~\cite{VASP1,VASP2,VASP3}. The electron density at each time step is computed according to the finite-temperature DFT approach~\cite{Mermin2}, using the generalized gradient approximation of Perdew, Burke and Ernzerhof~\cite{PBE} for the exchange correlation functional (XC-functional). The MD is carried out using the Born-Oppenheimer approximation by solving Newton's equations of motion for the ion positions. The forces are determined by the electronic charge density, where the electrons always remain in instantaneous thermal equilibrium defined by the ion positions.
Within VASP the Kohn-Sham orbitals are expanded in a plane wave basis set up to a cutoff energy $E_\mathrm{cut}$, which we set at 800 eV. For the ion potential of copper a projector augmented-wave potential~\cite{Blochl1994} is used (PAW PBE Cu GW 19May2006), which treats the outer eleven electrons explicitly in the DFT framework, with the remaining electrons frozen in the core. For temperature control, the algorithm of Nos\'e{}-Hoover~\cite{Nose,Hoover} is used with a mass parameter corresponding to a temperature oscillation period of 40 time steps.
To allow for melting and freezing along the principal Hugoniot curve, the simulation box for copper is spanned by lattice vectors of the face-centered cubic~(fcc) structure.
The sampling of the Brillouin zone was carried out at the Baldereschi mean-value point~\cite{Baldereschi} for all DFT-MD simulations. For the conductivity calculations, at least ten uncorrelated snapshots are taken from the DFT-MD simulation and reevaluated using a more accurate energy convergence criterion and a $2\times2\times2$ Monkhorst-Pack grid~\cite{Monkhorst}. Additionally, these snapshots were evaluated using the hybrid XC-functional of Heyd, Scuseria and Ernzerhof~(HSE)~\cite{HSE1,HSE2}. For these calculations, however, only the Baldereschi mean-value point was considered due to the higher computational demand of HSE calculations. The electrical conductivity is computed from these simulations via the Kubo-Greenwood formula~\cite{Kubo_stat,Greenwood}, using the eigenstates and eigenenergies of the Kohn-Sham orbitals (for details, see Ref.~\cite{Holst}). We employ a complex shift of $0.1$ in the Kramers-Kronig transformation.
We have carefully checked the convergence of our results with regard to plane wave energy cutoff, length of the time step, number of particles and Brillouin zone sampling.
Additionally, we compute electronic transport properties using time-dependent DFT (TD-DFT) in the linear response regime~\cite{Gross1985}, which is based on the linear density-density response of the electronic system to an external, time-dependent perturbation.
Furthermore, we train high-dimensional neural network interatomic potentials to reproduce the DFT forces and energies, enabling us to perform neural-network-driven molecular dynamics (NN-MD) simulations with up to 32~000 copper atoms. This improves the resolution of the phonon dispersion in the solid due to the larger number of available wave vectors, and it enables access to the hydrodynamic limit in the liquid regime. We use the implementation in the n2p2 software package~\cite{n2p2,Morawietz2016,Singraber2019}, which employs Behler-Parrinello symmetry functions to describe the environment of each copper atom and subsequently passes these symmetry functions to the input layer of the neural network. A Kalman filter is used to adjust the weights and biases of the neural network during training. We use two hidden layers with 40 nodes each, and set the cutoff radius between 6~\AA{} at ambient conditions and 4~\AA{} at the highest pressure condition along the Hugoniot. The environment of each atom is described by 13 radial symmetry functions and 12 narrow angular symmetry functions chosen according to the scheme described in Ref.~\cite{Gastegger2018}. The remaining parameters are set to their default values. The trained potential is subsequently used in conjunction with the LAMMPS molecular dynamics simulation code~\cite{Thompson2022} to produce the MD simulations. The temperature in the NN-MD simulations is also controlled using a Nos\'e{}-Hoover thermostat.
\section{Results for solid copper at ambient conditions}
\label{sec:solid-metal}
First, we test our simulations against known results for ambient solid copper at the density $\rho=8.94$~g/cm$^3$ and temperature $T=303$~K. While the DSF described in Eq.~\eqref{eq_Sii_WK} can be averaged over all wave vectors with equal magnitude in liquid and warm dense copper, the orientation of $\vec{k}$ relative to the crystallographic axes is relevant in the solid phase.
In Fig.~\ref{fig:phonon}, we show the phonon dispersion of solid copper at ambient conditions along a high-symmetry path through the first Brillouin zone. The phonon positions are determined from the transverse and longitudinal current-current correlation spectrum $J_\mathrm{t} (k, \omega)$ and $J_\mathrm{l} (k, \omega)$ (see Ref.~\cite{Schoerner2022} for details) of the NN-MD simulations by the peak finding routine \textit{find\_peaks} implemented in the SciPy library for scientific computing in Python~\cite{scipy2020}. This analysis of phonon modes is fully dynamic and does not require a harmonic or quasi-harmonic oscillator model.
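A minimal sketch of this peak extraction, applying \textit{find\_peaks} to a toy spectrum (the damped-oscillator line shape and all parameter values below are illustrative, not simulation data):

```python
import numpy as np
from scipy.signal import find_peaks

def phonon_frequency(omega, j_spectrum, rel_prominence=0.05):
    """Frequency of the most prominent peak of a current-current
    correlation spectrum J(k, w) at fixed wave vector k."""
    peaks, props = find_peaks(j_spectrum,
                              prominence=rel_prominence * j_spectrum.max())
    if peaks.size == 0:
        return None
    return omega[peaks[np.argmax(props["prominences"])]]

# toy spectrum: a damped-oscillator line centred near 25 (arbitrary units)
omega = np.linspace(0.0, 60.0, 6001)
j = 1.0 / ((omega**2 - 25.0**2)**2 + (5.0 * omega)**2)
```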
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{Cu_phonon_dispersion.pdf}}
\caption{The phonon dispersion of solid copper at $\rho = 8.94$~g/cm$^3$ and $T=303$~K extracted from the longitudinal and transverse current-current correlation spectrum. The x-axis is scaled by the lattice constant $a$ of ambient copper. Experimental data from Nicklow~\textit{et al.} is given as a reference~\cite{Nicklow1967}.}
\label{fig:phonon}
\end{figure}
The observed agreement with experimental data by Nicklow~\textit{et~al.}~\cite{Nicklow1967} is very good, indicating that the lattice dynamics in solid copper are well described by our simulations. We show an example of the transverse and longitudinal current-current correlation spectrum along the high-symmetry path $\Gamma-K-X$ in Fig.~\ref{fig:phonon3D}. The contributions due to longitudinal density oscillations are colored green and the correlation spectrum of transverse currents is colored red.
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{Sii_k.pdf}}
\caption{The current-current correlation spectrum of solid copper at $\rho = 8.94$~g/cm$^3$ and $T=303$~K along the high-symmetry path $\Gamma-K-X$. The transverse part is colored red, while the longitudinal part is colored green.}
\label{fig:phonon3D}
\end{figure}
Furthermore, we investigate the dynamic electrical conductivity using the Kubo-Greenwood formula~\cite{Kubo_stat,Greenwood}. Results for PBE and HSE calculations are shown in Fig.~\ref{fig:solid-cond} compared to an experimental result by Henke \textit{et al.}~\cite{Henke} and predictions from TD-DFT linear response calculations. Experimentally, Henke~\textit{et al.} measured the absorption coefficient $\alpha(\omega)$ of ambient copper which can be translated to the real part of the dynamic electrical conductivity by Kramers-Kronig relations.
We also compute the dynamic electrical conductivity using linear response TD-DFT. Here, we compute the density-density response function due to an external perturbation potential within a simulation cell containing 4 atoms. The effect of electron-electron interactions is incorporated using the random-phase approximation, which accounts for local-field effects from the Coulomb interaction but neglects exchange-correlation effects. From the response function, transport properties, such as the electrical conductivity or the DSF, are extracted with the aid of the fluctuation-dissipation theorem. TD-DFT has been used to compute XRTS spectra using both the real-time~\cite{Baczewski} and linear response formalisms~\cite{Ramakrishna2021}.
The agreement in Fig.~\ref{fig:solid-cond} is generally good, although the theoretical models predict a significantly lower conductivity at $\approx 2$~eV. Also, the measurements are lower than the theory predictions at $\approx 7$~eV, and the feature at $\approx 25$~eV, which is observed in both theory and experiment, occurs at lower frequencies in the simulations. This shift corresponds to energy states that are shifted relative to the continuum of states: the DFT-MD simulation places the energy states responsible for the observed feature at higher energies than the experiment indicates. This underestimation of energy gaps is a well-known problem of DFT with commonly used XC-functionals like PBE~\cite{PBE}. Better agreement with experimental observations can be achieved using the hybrid functional HSE~\cite{HSE1, HSE2}, as shown, e.g., for aluminum~\cite{Witte2018}. However, in this case, the HSE calculations overestimate the position of the feature, shifting it to $\approx 28$~eV.
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{conductivity_exp.pdf}}
\caption{Dynamic electrical conductivity of solid copper at ambient conditions computed from a DFT simulation with 125 atoms using the Kubo-Greenwood formula and results of TD-DFT linear response calculations with 32 atoms and the adiabatic local density approximation. The black line indicates results achieved with the PBE XC-functional, while the grey line represents results using the HSE XC-functional. Measurements of Henke~\textit{et al.}~\cite{Henke} are given as reference.}
\label{fig:solid-cond}
\end{figure}
\section{Results for liquid copper at ambient pressure}
\label{sec:liquid-metal}
As another test of the method we perform simulations of liquid copper at $\rho = 7.69$~g/cm$^3$ and $T=1773$~K in order to compare to experimental data by Waseda and Ohtani~\cite{Waseda} who performed x-ray diffraction experiments at these conditions. We additionally compare to neutron diffraction data by Eder~\textit{et al.}~\cite{Eder1980}.
The atomic form factor must be taken into account if the ion dynamics are to be extracted from x-ray scattering. Waseda and Ohtani used form factors computed from relativistic Dirac-Slater wave functions~\cite{Cromer_Waber} with anomalous dispersion corrections~\cite{Cromer}, while for neutron diffraction merely the multiple scattering in the sample must be accounted for~\cite{Eder1980}. Fig.~\ref{fig:statS1770} shows static ion structure factors calculated from a DFT-MD simulation with 125 atoms and a NN-MD simulation with 32~000~atoms.
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{compression_1773.pdf}}
\caption{Static structure factor of liquid copper at ${\rho = 7.69}$~g/cm$^3$ and $T=1773$~K computed from a DFT-MD simulation with 125 atoms and a NN-MD simulation with 32~000~atoms. Two experimental results from x-ray diffraction (Waseda and Ohtani~\cite{Waseda}) and neutron diffraction (Eder \textit{et al.}~\cite{Eder1980}) are shown as reference. The right inset zooms in on the long-wavelength behavior and shows the $k \rightarrow 0$ prediction by DFT-MD and an experimental result of liquid copper near the melting point by Filippov~\cite{Filippov1966}. The top inset zooms in on the behavior around the first correlation peak.}
\label{fig:statS1770}
\end{figure}
The static structure factors inferred from Waseda and Ohtani ($T=1773$~K) and from the diffraction experiments of Eder \textit{et al.} ($T=1883$~K) are displayed for comparison. The general agreement is good, although there are noticeable differences in the first correlation peak at around $3\, \mathrm{\AA}^{-1}$ and for the low-$k$ limit, displayed in the insets of Fig.~\ref{fig:statS1770}. The first correlation peak is characterized by the length scale at which the minimum of the interatomic potential occurs. Here, longer simulations lead to better statistics, which better resolve the dynamics in this area, leading to a lowering of the peaks. Additionally, smaller simulation boxes artificially enhance the near-field order that is induced by the minimum of the interatomic potential.
Experimentally, the height of the first correlation peak is influenced by the angular resolution of the detector and the spectral width of the light/neutron source. Remarkably, the first correlation peak of Eder~\textit{et al.} lies higher than that of Waseda and Ohtani, although higher temperatures generally lead to diminished correlations, corresponding to a lower correlation peak.
The low-$k$ limit is of interest because the value for $k \to 0$ can be determined by the isothermal compressibility via Eq.~\eqref{eq-Sk0} which is accessible through DFT-MD simulations via Eq.~\eqref{eq_th_compress}. We perform additional simulations at 5\% higher and lower densities in order to evaluate the derivative.
The limit determined this way is indicated by the black diamond in the inset, while the violet asterisk is determined from inserting an experimental compressibility (inferred from the speed of sound)~\cite{Filippov1966} into Eq.~\eqref{eq-Sk0}. Eder \textit{et al.} artificially extended their structure factor for $k$-values smaller than $0.5\,\mathrm{\AA}^{-1}$ to match the value computed from the compressibility near the melting point by Filippov~\textit{et al.}~\cite{Filippov1966}, also used for the violet asterisk, and a density determined by Cahill and Kirshenbaum~\cite{Cahill}.
The extended structure factor of Eder~\textit{et al.} approaches a higher value than the two indicated limits due to the higher temperature of the experiment (see Eq.~\eqref{eq-Sk0}). The simulation with 125 atoms allows access to smaller $k$-values due to the larger simulation box. However, it can only be concluded that the values seem to agree qualitatively while no definite conclusion on the agreement of $S_\mathrm{ii}(k)$ with the predicted limit can be made without further investigation with more atoms. For wave vectors $k$ between $0.8\,\mathrm{\AA}^{-1}$ and $2.5\,\mathrm{\AA}^{-1}$, observations differ between Eder~\textit{et al.} and Waseda and Ohtani. The DFT-MD simulations agree well with the former while the latter is significantly lower in that region (see inset in the lower right corner of Fig.~\ref{fig:statS1770}). A reason for this difference could be the form factor which must be additionally considered for x-ray diffraction experiments.
\begin{figure}[!thb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{speed_sound_1773_final.pdf}}
\caption{Peak position of the ion acoustic mode taken from the DSF and from the longitudinal current-current correlation $J_\mathrm{l} \left(k,\,\omega\right)$ (see Eq.~\eqref{eq_current_correl}) dependent on the $k$-value. For wave numbers below the first correlation peak results from the NN-MD simulation are shown, while all other results are taken from the DFT-MD simulation. A best linear fit for the low $k$ limit of the ion acoustic mode of the DSF is indicated and the corresponding adiabatic speed of sound is presented. The inset shows a comparison between DFT-MD and NN-MD results for small $k$.}
\label{fig:peakpos1770}
\end{figure}
Furthermore, we compute the DSF~$S_\mathrm{ii} (k, \, \omega)$ and the closely related longitudinal current-current correlation spectrum~$J_\mathrm{l} (k, \, \omega)$. We extract various properties of the different contributing modes by fitting to a generalized collective modes approach~\cite{Bryk2012,Wax2013} with one diffusive and one propagating mode, see Ref.~\cite{Schoerner2022} for details.
Fig.~\ref{fig:peakpos1770} shows the peak positions of the ion acoustic mode extracted from the DSF as a function of the wave vector $k$. This can be considered as the dispersion relation of the ion acoustic mode. Also shown in this figure are the peak positions of the longitudinal current correlation given by Eq.~\eqref{eq_current_correl} which describes collective excitations via currents~\cite{Zwanzig} and determines the apparent speed of sound $c_\mathrm{l}$~\cite{Balucani}. The DSF, on the other hand, can be used to extract the adiabatic speed of sound $c_\mathrm{s}$ in the hydrodynamic limit~\cite{Hansen2006}. Both of these quantities are indicated in Fig.~\ref{fig:peakpos1770}, as well as the free-particle limit of a noninteracting classical system. In this limit, the peak position of $J_\mathrm{l}$ is determined by the noncollective mode which is described by
\begin{equation}
\label{eq:freeparticle}
\omega_J (\vec{k}) = \sqrt{ \frac{2}{m_\mathrm{i} \beta}} \; |\vec{k}| \, ,
\end{equation}
see Ref.~\cite{Scopigno2}. The different physical regimes corresponding to low and high $k$-values were recently discussed in detail in Ref.~\cite{Schoerner2022} for warm dense aluminum. As the ions behave like free particles in the high-$k$ limit, their dispersion is described by the free-particle limit, as can be seen from Fig.~\ref{fig:peakpos1770}.
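As a numerical illustration of Eq.~\eqref{eq:freeparticle}, the free-particle slope is simply the thermal speed $\sqrt{2 k_\mathrm{B} T / m_\mathrm{i}}$; for copper at $T = 1773$~K this amounts to roughly $0.7$~km/s (a back-of-the-envelope sketch, not a simulation result):

```python
import numpy as np

K_B = 1.380649e-23                    # Boltzmann constant in J/K
M_CU = 63.546 * 1.66053906660e-27     # mass of a copper atom in kg

def free_particle_slope(temperature, mass=M_CU):
    """Slope of the free-particle dispersion w_J(k) = sqrt(2/(m beta)) |k|,
    i.e. the thermal speed sqrt(2 k_B T / m), with beta = 1/(k_B T)."""
    return np.sqrt(2.0 * K_B * temperature / mass)
```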
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{peak_width_64_125.pdf}}
\caption{Peak width of the thermal mode of the ion-ion DSF dependent on the $k$-value. Experimental results from Hagen~\textit{et al.}~\cite{Hagen} and data calculated from experimentally determined static ion structure factors by Eder~\textit{et al}.~\cite{Eder1980} are given as comparison. The inset shows a comparison between the results of the DFT-MD~(125~atoms) and NN-MD~(32~000~atoms) simulations in the region where experimental data is available.}
\label{fig:peakwidth1770}
\end{figure}
Another feature that can be extracted from the DSF is the width of the diffusive thermal mode represented in Fig.~\ref{fig:peakwidth1770}.
It corresponds to the random thermal movement of the ions and its shape is connected to how much energy can be coupled to this mode. The wider the peak, the higher the energy transfer to an individual ion can be. The magnitude of the peak is determined by the static structure factor which accounts for how many ions are present on the length scale defined by $k$. Therefore, the width and the height of the peak determine the energy that can be transferred to the thermal mode. In Fig.~\ref{fig:peakwidth1770}, the widths for DFT-MD simulations with 125~atoms and NN-MD simulations with 32~000~atoms are shown and compared to an experimental result from inelastic neutron scattering by Hagen~\textit{et al.}~\cite{Hagen} and a result inferred from neutron scattering by Eder~\textit{et al.}~\cite{Eder1980}. While the former represents a direct measurement, the results by Eder~\textit{et al.} are calculated from the static structure factor in Fig.~\ref{fig:statS1770}. They use a simple model~\cite{Copley} with one free parameter to artificially introduce the dynamics. The \textit{ab~initio} DFT-MD approach describes the dynamics in a more consistent way than the model chosen by Eder~\textit{et al.} Since their static structure factor agreed well with DFT-MD simulations (see Fig.~\ref{fig:statS1770}), the observed deviations are attributed to the artificially introduced dynamics.
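The width extraction can be sketched by fitting a Lorentzian, the standard line shape of the central diffusive mode, to the $\omega \approx 0$ part of the DSF (a simplified stand-in for the generalized-collective-modes fit used in our analysis; function names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(omega, amplitude, gamma):
    """Diffusive (thermal) mode: a Lorentzian centred at omega = 0
    with half-width at half-maximum gamma."""
    return amplitude * gamma / (omega**2 + gamma**2)

def thermal_mode_width(omega, s_central):
    """Fit the central peak of S_ii(k, w) and return its half-width."""
    p0 = (s_central.max(), 1.0)
    popt, _ = curve_fit(lorentzian, omega, s_central, p0=p0)
    return abs(popt[1])
```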
\section{The principal Hugoniot curve}
\label{sec:hugo}
The Hugoniot equation~\eqref{eq_hugoniot} relates the pressure $P$, density $\rho$, and specific internal energy $\epsilon$. For each temperature, the equation of state (EOS), which provides $P$ and $\epsilon$ as functions of $\rho$, determines the state that satisfies the Hugoniot equation. We compute 16 isotherms ranging from 1700~K up to 60~000~K, with four to five different densities per isotherm.
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{hugo_compare_combined.pdf}}
\caption{Lower panel: Hugoniot curve for copper in the $P$-$\rho$ plane inferred from isotherms calculated using DFT-MD (black) and isotherms from the SESAME 3333 table~\cite{Trainor1983}~(grey). The temperatures for some data points are annotated. High pressure experimental values by McCoy~\textit{et~al.}~\cite{McCoy2017}, Glushak~\textit{et~al.}~\cite{Glushak1989}, Kormer~\textit{et~al.}~\cite{Kormer1962}, Altshuler~\textit{et~al.}~\cite{Altshuler1962} and Mitchell~\textit{et~al.}~\cite{Mitchell1991} are shown.\\
Upper panel: Zoomed in view of the low pressure range, where experimental data by Mitchell~\textit{et~al.}~\cite{Mitchell}, Altshuler~\textit{et~al.}~\cite{Altshuler}, McQueen~\textit{et~al.}~\cite{McQueen} are shown.}
\label{fig:hugo_exp}
\end{figure}
The pressure and internal energy are interpolated using cubic splines, and Eq.~\eqref{eq_hugoniot} is solved numerically to give the principal Hugoniot curve depicted in Fig.~\ref{fig:hugo_exp}. As a comparison, we compute the principal Hugoniot curve from the standard SESAME 3333 EOS table~\cite{Trainor1983}.
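The numerical solution can be sketched as a root search on each isotherm (the cubic-spline interpolation mirrors our procedure; the function names are illustrative):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def hugoniot_point(rhos, pressures, energies, rho0, p0, eps0):
    """Solve the Hugoniot relation on one isotherm:
    eps(rho) - eps0 = (P(rho) + p0) * (1/rho0 - 1/rho) / 2,
    with P and eps interpolated by cubic splines along the isotherm."""
    press = CubicSpline(rhos, pressures)
    eps = CubicSpline(rhos, energies)

    def residual(rho):
        return (eps(rho) - eps0
                - 0.5 * (press(rho) + p0) * (1.0/rho0 - 1.0/rho))

    rho1 = brentq(residual, rhos[0], rhos[-1])
    return rho1, float(press(rho1)), float(eps(rho1))
```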
Fig.~\ref{fig:hugo_exp} illustrates the results obtained from DFT-MD isotherms and from SESAME isotherms in the pressure-density plane.
Experimentally, the Hugoniot curve of copper in the pressure-density plane has been constrained well for pressures up to $4$~Mbar (see Fig.~\ref{fig:hugo_exp}). For higher pressures, the spread of experimental results is significantly larger~\cite{McCoy2017} and experimental uncertainties increase.
\begin{table}[tb]
\caption{Conditions for compressed copper predicted from DFT-MD isotherms.}
\label{tab:hugo_cond}
\begin{ruledtabular}
\begin{tabular}{cccc}
$P$ [GPa] & $T$ [K] & $\rho$ [g/cm$^3$] & $\epsilon$ [kJ/g] \\
\hline
113 & 1700 & 12.193 & $-3.9774$ \\
162 & 3000 & 12.959 & $-2.8628$ \\
212 & 4600 & 13.583 & $-1.6052$ \\
237 & 5400 & 13.862 & $-0.9597$ \\
264 & 6300 & 14.134 & $-0.2339$ \\
289 & 7200 & 14.373 & 0.4534 \\
304 & 7700 & 14.499 & 0.8458 \\
352 & 8200 & 14.793 & 2.1275 \\
393 & 9500 & 15.095 & 3.3089 \\
436 & 10900 & 15.373 & 4.5471 \\
476 & 12400 & 15.617 & 5.7083 \\
506 & 13500 & 15.805 & 6.6302 \\
586 & 16600 & 16.249 & 9.1182 \\
676 & 20000 & 16.694 & 11.8969 \\
941 & 30000 & 17.792 & 20.5124 \\
1799 & 60000 & 20.220 & 50.4514
\end{tabular}
\end{ruledtabular}
\end{table}
The SESAME EOS predicts consistently higher temperatures during the compression process. While the temperature difference at $\approx 1$~Mbar is only $\approx 150$~K, it grows to $\approx 3200$~K around $6.7$~Mbar. While the deviation in the pressure-density plane is small and difficult to assess experimentally, the temperature difference is significant and could be used to test the respective EOS.
The agreement in Fig.~\ref{fig:hugo_exp} is best between $2.5$~Mbar and $3.5$~Mbar which is the region in which both EOS predict the melting point as can be seen in Fig.~\ref{fig:hugo_melt}.
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{melting_line.pdf}}
\caption{Hugoniot curve for copper in the $T$-$P$-plane inferred from isotherms calculated using DFT-MD (black) and isotherms from the SESAME 3333 table (grey). Melting lines determined by Wu \textit{et al.}~\cite{Wu}, Moriarty \textit{et al.}~\cite{Moriarty1987} and Belonoshko \textit{et al.}~\cite{Belonoshko2000} using different variations of MD simulations as well as experimental results by Hayes \textit{et al.}~\cite{Hayes} are indicated.}
\label{fig:hugo_melt}
\end{figure}
Experimental results by Mitchell \textit{et al.}~\cite{Mitchell}, Altshuler \textit{et al.}~\cite{Altshuler} and McQueen \textit{et al.}~\cite{McQueen} are indicated in the upper panel of Fig.~\ref{fig:hugo_exp} and show better agreement with the SESAME data, while the DFT-MD Hugoniot lies slightly higher in the pressure-density plane.
The lower panel of Fig.~\ref{fig:hugo_exp} shows the available high-pressure Hugoniot compression data by McCoy~\textit{et~al.}~\cite{McCoy2017}, Glushak~\textit{et~al.}~\cite{Glushak1989}, Kormer~\textit{et~al.}~\cite{Kormer1962}, Altshuler~\textit{et~al.}~\cite{Altshuler1962} and Mitchell~\textit{et~al.}~\cite{Mitchell1991}. In this regime, both EOS studied here are compatible with the experimental data due to the large experimental uncertainties. The only exception is the high-pressure point by Mitchell~\textit{et~al.} around 1500~GPa, which agrees with neither of the theoretical predictions.
In the temperature-pressure plane, the melting point along the Hugoniot is easily identifiable by a kink, which is due to the latent heat needed for the phase transition. As can be seen from the inset in Fig.~\ref{fig:hugo_melt}, the melting point predicted by SESAME lies between $2.3$~Mbar and $3$~Mbar, while the melting point predicted by the DFT-MD calculations lies between $3$~Mbar and $3.5$~Mbar. While the MD simulations by Wu~\textit{et al.}~\cite{Wu} and Moriarty~\textit{et al.}~\cite{Moriarty1987}, as well as experiments by Hayes~\textit{et al.}~\cite{Hayes}, agree roughly with the SESAME results, the two-phase MD simulations by Belonoshko~\textit{et al.}~\cite{Belonoshko2000} display a trend that tends towards the melting point predicted by the DFT-MD simulations. However, their calculations did not cover the pressure range in question. For the subsequent examination of the material along the principal Hugoniot curve, the conditions indicated by horizontal dashed lines in the upper panel of Fig.~\ref{fig:hugo_exp} were used to perform extended simulations.
\section{Ion dynamics of shock-compressed copper}
\label{sec:shock-copper}
The static ion structure factors in the liquid phase are displayed in Fig.~\ref{fig-statS4568}. The correlation peaks exhibit a shift to higher $k$-values for increasing density. Shifts for constant pressure differences along the Hugoniot are expected to become smaller, as the density $\rho$ is a concave function of the pressure $P$ (as seen in Fig.~\ref{fig:hugo_exp}). A smoothing and lowering of the correlation peaks is also observable due to the higher kinetic energy of the ions at higher temperature which suppresses correlation effects caused by the Coulomb potential. The limits for $k \to 0$ are also indicated by diamonds in the inset of Fig.~\ref{fig-statS4568}. The limits are computed via the compressibility using Eqs.~\eqref{eq_th_compress} and \eqref{eq-Sk0}, analogous to Sec.~\ref{sec:liquid-metal}. The lowest wave number available through the DFT-MD simulations is $0.9~\mathrm{\AA}^{-1}$, due to the small simulation boxes with 64 atoms.
In order to test the computed limits, we perform NN-MD simulations with 32~000 atoms, enabling access to wave numbers down to $0.1~\mathrm{\AA}^{-1}$. The static structure factor at the additionally accessible wave numbers is given by the dashed lines in the inset of Fig.~\ref{fig-statS4568}, demonstrating good agreement with the DFT-MD data and the thermodynamically determined limit. The determined compressibilities will be used to compute the adiabatic speed of sound in the following.
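Equivalently to frequency-integrating the DSF, $S_\mathrm{ii}(k)$ follows directly from the time average of $|n_{\vec{k}}|^2/N$ over stored configurations; a minimal sketch (the function name is illustrative, and the wave vectors must be commensurate with the periodic box):

```python
import numpy as np

def static_structure_factor(positions, k_vectors):
    """S_ii(k) = <|n_k|^2> / N, averaged over stored MD configurations.
    positions: (n_steps, N, 3); k_vectors: (n_k, 3), commensurate with
    the periodic simulation box."""
    n_steps, n_atoms, _ = positions.shape
    phases = np.einsum('tnj,kj->tkn', positions, k_vectors)
    n_k = np.exp(1j * phases).sum(axis=-1)        # shape (n_steps, n_k)
    return (np.abs(n_k)**2).mean(axis=0) / n_atoms
```

For uncorrelated (ideal-gas-like) configurations this estimator approaches $S_\mathrm{ii}(k) = 1$, which provides a quick sanity check.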
\begin{figure}[!t]
\center{\includegraphics[angle=0,width=1.0\linewidth]{compare_statS_4_5_68.pdf}}
\caption{DFT-MD results of the static structure factor of liquid copper along the Hugoniot curve. In the inset the low $k$ behavior is shown on a log scale and the NN-MD results are shown for wave numbers that are not accessible to the DFT-MD results. The limits for $k \rightarrow 0$ are also indicated (see Eq.~\eqref{eq-Sk0}). Further information on the conditions is given in Fig.~\ref{fig:hugo_exp} and Tab.~\ref{tab:hugo_cond}.}
\label{fig-statS4568}
\end{figure}
\begin{figure}[!b]
\center{\includegraphics[angle=0,width=1.0\linewidth]{hugo_phonons.pdf}}
\caption{Phonon dispersion of solid copper along the principal Hugoniot curve along high-symmetry directions in the Brillouin zone. The x-axis is normalized by the lattice constant $a$ to make the results at different densities comparable. Further information on the conditions is given in Fig.~\ref{fig:hugo_exp} and Tab.~\ref{tab:hugo_cond}.}
\label{fig:phonon_hugo_solid}
\end{figure}
We study three points along the principal Hugoniot curve, where we expect solid conditions of copper according to Fig.~\ref{fig:hugo_exp} and Fig.~\ref{fig:hugo_melt}. The phonon dispersion of copper for ambient conditions and the Hugoniot conditions at $1.13$~Mbar, $2.12$~Mbar and $3.04$~Mbar are shown in Fig.~\ref{fig:phonon_hugo_solid}. While we only show the peak position here, a systematic broadening of the phonon modes is also observed, as expected due to the increasing temperature. Furthermore, a systematic hardening for most of the phonon branches and orientations can be observed. Along the $\Gamma-L$ direction, we observe a strong hardening of the longitudinal mode, while the transverse branch hardens significantly less than along the other shown orientations.
Once the copper melts, the effect of further compression on the ion acoustic mode can be investigated. Fig.~\ref{fig:dynS_hugo_liquid} shows the change of the dynamic ion structure factor at different $k$-values along the Hugoniot computed from the NN-MD simulations.
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{dynS_compare_liquid_1.pdf}}
\caption{DSF of liquid copper along the principal Hugoniot curve computed from the NN-MD simulations. The curves are shifted by $0.35$~fs with respect to the next lower $k$-value for readability. Further information on the conditions is given in Fig.~\ref{fig:hugo_exp} and Tab.~\ref{tab:hugo_cond}.}
\label{fig:dynS_hugo_liquid}
\end{figure}
Since the simulations are performed at different densities, but with the same number of atoms, the size of the simulation box varies, which leads to slightly different $k$-values in each case. However, the large-scale NN-MD simulations with $32~000$ atoms permit access to a dense $k$-grid in the considered range. Therefore, the $k$-values at all conditions are within $1$\% of the numbers given in Fig.~\ref{fig:dynS_hugo_liquid}. A similar study on aluminum~\cite{Schoerner2022} has shown that increased temperature leads to a more pronounced ion acoustic mode, but does not shift it to higher $\omega$. This effect is, therefore, attributed to the density increase along the principal Hugoniot curve. The more pronounced ion acoustic mode, as well as the generally elevated course of $S_\mathrm{ii} (k, \, \omega)$ at higher temperatures, is in accord with the observation in Fig.~\ref{fig-statS4568} that the static structure factor $S_\mathrm{ii} (k)$ is greater at high temperatures than at low temperatures for $k$-values smaller than $2.8 \, \mathrm{\AA}^{-1}$.
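The accessible $k$-grid follows directly from the box geometry: the smallest wave number in a cubic box of edge length $L$ is $k_\mathrm{min} = 2\pi/L$. The following sketch (ours, for orientation only) estimates $L$ and $k_\mathrm{min}$ for a $32~000$-atom box at an assumed ambient copper density; the actual simulation densities along the Hugoniot differ.

```python
import math

N = 32_000                    # atoms in the NN-MD simulation box
m_Cu = 63.546 * 1.66054e-27   # copper atomic mass in kg
rho = 8960.0                  # assumed (ambient) mass density in kg/m^3

L = (N * m_Cu / rho) ** (1.0 / 3.0)  # cubic box edge length in m
k_min = 2.0 * math.pi / L            # smallest accessible wave number in 1/m
print(f"L ~ {L * 1e10:.1f} A, k_min ~ {k_min * 1e-10:.3f} 1/A")
```

At this illustrative density the box edge is roughly $70$~\AA, giving $k_\mathrm{min}$ of order $0.1~\mathrm{\AA}^{-1}$, well below the wave numbers shown in Fig.~\ref{fig:dynS_hugo_liquid}.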
\begin{figure}[htb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{speed_sound_compare_liquid_2.pdf}}
\caption{The upper panel shows the peak position of the ion acoustic mode $\omega_\mathrm{ion}$ taken from the DSF's in Fig.~\ref{fig:dynS_hugo_liquid} dependent on the $k$-value and a linear dispersion computed from equation~\eqref{eq:cs}. The corresponding adiabatic speeds of sound are presented. The lower panel shows the dispersion of the longitudinal current-current correlation spectrum $J_\mathrm{l}$ extracted from the NN-MD simulations and the free-particle behavior. For readability, the curves in the upper panel are shifted by $0.2~\mathrm{\AA}^{-1}$ to the right with respect to the next lower pressure condition.}
\label{fig:dispersion_hugo}
\end{figure}
The adiabatic speed of sound can be computed from the thermodynamic relation
\begin{equation}
\label{eq:cs}
c_\mathrm{s} = \sqrt{\frac{\gamma}{\kappa_T \rho}},
\end{equation}
with the isothermal compressibilities $\kappa_T$ computed in Fig.~\ref{fig-statS4568}. The upper panel of Fig.~\ref{fig:dispersion_hugo} shows the linear trends computed from equation~\eqref{eq:cs}, as well as the dispersion of the ion acoustic mode taken from the DSF shown in Fig.~\ref{fig:dynS_hugo_liquid}. It is apparent that the observed peak positions converge to the linear behavior for all conditions, while for the more extreme conditions, the hydrodynamic regime is reached at smaller $k$-values. The bottom panel of Fig.~\ref{fig:dispersion_hugo} shows the peak position of the longitudinal current-current correlation spectrum across a wide range of wave numbers. The free-particle limit of a non-interacting system (see Eq.~\eqref{eq:freeparticle}) is given as a reference. The dispersion above $10$~\AA$^{-1}$ for all conditions is well approximated by this limiting behavior.
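Equation~\eqref{eq:cs} is straightforward to evaluate; the following minimal sketch uses illustrative ambient-copper input values (assumptions for demonstration, not the simulation results of this work) to show the order of magnitude obtained.

```python
import math

# Illustrative ambient-copper inputs (assumptions, not this work's data):
gamma = 1.03        # adiabatic/isothermal ratio of specific heats
kappa_T = 7.1e-12   # isothermal compressibility in 1/Pa
rho = 8960.0        # mass density in kg/m^3

c_s = math.sqrt(gamma / (kappa_T * rho))  # Eq. (cs), result in m/s
print(f"c_s ~ {c_s:.0f} m/s")
```

With these inputs the formula gives a sound speed of roughly $4$~km/s, consistent with the known magnitude for copper at ambient conditions.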
The adiabatic speed of sound is also accessible during shock wave experiments via VISAR measurements~\cite{Barker1972,Li2018}, where the time it takes the shock wave to travel through the target is recorded.
First measurements of the speed of sound in copper for this pressure region were reported by McCoy \textit{et al.}~\cite{McCoy2017} from a shock compression experiment. The inferred pressure-density conditions, amongst others, are shown in Fig.~\ref{fig:hugo_exp} and agree within error bars with our simulations. In Fig.~\ref{fig:sound_hugo}, we show the experimentally determined adiabatic speed of sound compared to the values computed through Eq.~\eqref{eq:cs}. All of the simulation data points lie slightly below the experimentally observed results. As the principal Hugoniot becomes steeper in the $P-\rho$ plane, the adiabatic speed of sound appears to flatten out towards higher pressures. Unfortunately, the experimental data does not extend to these pressures to verify this trend.
\begin{figure}[!thb]
\center{\includegraphics[angle=0,width=1.0\linewidth]{sound_speed_comparison.pdf}}
\caption{Adiabatic speed of sound computed from the thermodynamic relation~\eqref{eq:cs} compared to experimental VISAR measurements by McCoy~\textit{et~al.}~\cite{McCoy2017} for shock-compressed copper.
}
\label{fig:sound_hugo}
\end{figure}
\section{Conclusion}
\label{sec:concl}
In this work, we performed an extensive analysis of shock-compressed copper using DFT-MD simulations and MD simulations driven by high-dimensional neural network potentials.
By analyzing the ion-ion structure factor, we showed that our DFT-MD simulations are able to accurately describe the phonon spectrum of solid copper. Likewise, our analysis of the dynamic electrical conductivity in terms of the Kubo-Greenwood formula and linear response TD-DFT yielded close agreement with existing experimental data. Furthermore, we computed the static and dynamic ion-ion structure factor of liquid copper near the melting line. The agreement with diffraction data was observed to be excellent and the width of the thermal mode agreed well with experiments at wave numbers around the first correlation peak.
The Hugoniot curve was computed from several isotherms up to $60~000$~K and 18~Mbar and compared to predictions by the SESAME EOS. Good agreement in the pressure-density plane was achieved between DFT-MD, SESAME and experiments up to 4~Mbar. Differences in the temperature between DFT-MD and SESAME along the Hugoniot were identified, and the resulting shift of the melting point to higher pressures was highlighted. We observed the hardening of phonon spectra in the solid regime of the Hugoniot compression and, analogously, studied the shift of ion acoustic modes to higher excitation energies. Phonon hardening is currently the subject of lively debate, and recent work has shown that phonons can be resolved at free electron laser facilities~\cite{McBride2018,Descamps2020,Wollenweber2021}, enabling future direct observations of phonon hardening in shock compression experiments.
We found the adiabatic speed of sound along the Hugoniot to be slightly underestimated by DFT-MD relative to recent experimental results.
We provided \textit{ab initio} predictions for the evolution of phonon and ion acoustic modes during shock compression of copper, as well as adiabatic speeds of sound for pressures beyond those previously reached by McCoy \textit{et~al.} We hope this inspires further high-pressure shock compression studies, coupled with high resolution x-ray scattering to resolve the ion dynamics of copper under these conditions.
\begin{acknowledgments}
We thank Emma McBride, Luke Fletcher and Siegfried Glenzer for helpful discussions. The DFT-MD and NN-MD simulations and further analysis were performed at the North-German Supercomputing Alliance (HLRN) and the ITMZ of the University of Rostock. MS and RR thank the Deutsche Forschungsgemeinschaft (DFG) for support within the Research Unit FOR~2440. This work was partially supported by the Center of Advanced Systems Understanding (CASUS) which is financed by Germany’s Federal Ministry of Education and Research (BMBF) and by the Saxon State Government out of the state budget approved by the Saxon State Parliament.
\end{acknowledgments}
Let $\mu$ be a probability measure supported on an infinite subset $E$ of the real line. We assume that $\int_{E} |x^{n}| d\mu < \infty$ for every nonnegative integer $n$.
A sequence of monic polynomials $\{P_{n} (x)\}_{n\geq0}$ is said to be orthogonal with respect to $\mu$ when deg $P_{n} = n$ and $\int_{E} P_{n} (x)P_{m} (x) d\mu(x)= 0$ for $n\neq m$.
It is very well known that these polynomials satisfy a three term recurrence
relation that yields for the orthonormalized polynomials a symmetric tridiagonal
(Jacobi) matrix such that the eigenvalues of the $n$ leading principal submatrix
are the zeros of the polynomial $P_{n}$. As a straightforward consequence of
this fact the zeros of $P_{n}$ are real, simple and interlace with the zeros of
$P_{n-1}$. On the other hand, they are located in the interior of the convex
hull of $E$.\\
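The stated connection between the Jacobi matrix and the zeros can be illustrated numerically. The sketch below (ours, assuming NumPy; not part of the original text) builds the orthonormal Jacobi matrix for the Legendre case, where the off-diagonal recurrence coefficients are $b_k = k/\sqrt{4k^2-1}$, and checks that the eigenvalues of the $n\times n$ leading principal submatrix coincide with the zeros of $P_n$.

```python
import numpy as np

def legendre_jacobi(n):
    """n x n Jacobi matrix of the orthonormal Legendre polynomials."""
    k = np.arange(1, n)
    b = k / np.sqrt(4.0 * k**2 - 1.0)      # off-diagonal recurrence coefficients
    return np.diag(b, 1) + np.diag(b, -1)  # diagonal vanishes by symmetry

n = 7
eigs = np.sort(np.linalg.eigvalsh(legendre_jacobi(n)))
nodes, _ = np.polynomial.legendre.leggauss(n)  # Gauss nodes = zeros of P_n
assert np.allclose(eigs, np.sort(nodes))
```

The same construction underlies Gauss quadrature (Golub-Welsch), which is why the Legendre zeros are readily available here as quadrature nodes.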
The theory of orthogonal polynomials is strongly related to Sturm-Liouville problems. In particular, the so-called classical orthogonal polynomials (Hermite, Laguerre, and Jacobi) appear as eigenfunctions of second order linear differential operators with polynomial coefficients. Indeed, the corresponding measure of orthogonality is absolutely continuous and its derivative with respect to the Lebesgue measure (weight function) is the density function of the normal, gamma and beta distributions, respectively. Notice that this fact was pointed out by E. Routh in 1884 \cite{Routh} as well as by S. Bochner in 1929 \cite{Bo}, but the orthogonality plays no role therein. On the other hand, they are hypergeometric functions and, as a consequence, many analytic properties can be deduced from this fact. Moreover, certain properties of their zeros can be easily deduced using the classical Sturm theorems. Finally, a nice electrostatic interpretation of their zeros is deduced from the second order linear differential equation as an equilibrium problem for the logarithmic interaction of positive unit charges under an external field.\\
Exceptional orthogonal polynomials constitute a recent new approach
to spectral problems for second order linear differential operators
with polynomial eigenfunctions. Previously, a constructive theory of
orthogonal polynomials related to the classical ones has been done
in two directions. The first one is related to the spectral theory
of higher order linear differential operators with polynomial coefficients. For
fourth order differential operators the classification of their eigenfunctions,
which are sequences of orthogonal polynomials with respect to a nontrivial
probability measure supported on an infinite subset of the real line,
was done by H. L. Krall and A. M. Krall \cite{HKrall,AKrall1,AKrall2} and
essentially
yields the classical ones and perturbations of
some particular Laguerre weights ${\rm e}^{-x} + M \delta(x)$, Jacobi weights
$(1-x)^{\alpha} + M \delta(x)$ and Legendre weight $ 1 + M \delta(x-1) + M
\delta(x+1)$, $M\geq0$. For higher order, some examples are known but a general
theory and
classification constitutes an open problem. The second one appears when some
perturbations of the measure are considered. In particular, three cases are
considered in the literature in the framework of the so called spectral linear
transformations \cite{zhedanov97}. The Christoffel transformation (the multiplication of the
measure by a positive polynomial in the support of the measure), the Uvarov
transformation (the addition of mass points off the support of the measure) and
Geronimus transformation (the multiplication by the inverse of a positive
polynomial). They can be analyzed in terms of the discrete Darboux
transformation of the corresponding Jacobi matrices using the LU and UL
factorizations and commuting them \cite{BueMar}.\\
Exceptional orthogonal polynomials depart from the classical families in that the sequence of exceptional polynomials is not required to contain a polynomial of every degree, and as a consequence new differential operators exist, with \textit{rational} rather than polynomial coefficients. Despite this fact, the sequence of exceptional polynomial eigenfunctions is still dense in the corresponding weighted $L^2$ space and constitutes an orthogonal polynomial system. The measure of orthogonality for the exceptional families is a classical
measure divided by the square of a polynomial with zeros outside the support of the measure.
The first explicit examples of families of exceptional orthogonal
polynomials are the $\mathrm{X}_1$-Jacobi and $\mathrm{X}_1$-Laguerre polynomials,
which are of codimension one, and were first introduced in
\cite{GKM09,GKM10a}. In these papers, a characterization
theorem was proved for these orthogonal polynomial families, realizing
them as the unique complete codimension one families defined by a
Sturm-Liouville problem. One of the key steps in the proof was the
determination of normal forms for the flags of univariate polynomials
of codimension one in the space of all such polynomials, and the
determination of the second-order linear differential operators which
preserve these flags \cite{GKM05,GKM12b}.
Shortly after, Quesne \cite{Quesne1,Quesne2} observed the presence of a
relationship between exceptional orthogonal polynomials and the Darboux
transformation\footnote{By Darboux transformation, we do not mean here the
factorization of Jacobi matrices into upper triangular and lower triangular
matrices mentioned above, but the factorization of the second order
linear differential operator into two first order linear differential operators
\cite{GKM04a,GKM04b}.}. This enabled her to
obtain examples of potentials corresponding to orthogonal polynomial
families of codimension two, as well as explicit families of $\mathrm{X}_2$
polynomials. Higher-codimensional families were first obtained by
Odake and Sasaki \cite{Sasaki-Odake1}. The same authors further showed the
existence of two families of $\mathrm{X}_m$-Laguerre and $\mathrm{X}_m$-Jacobi
polynomials \cite{Sasaki-Odake4}, the existence of which was explained in \cite{GKM10b} for
$\mathrm{X}_m$-Laguerre polynomials and in \cite{GKM12b} for $\mathrm{X}_m$-Jacobi polynomials,
through the application of the isospectral algebraic Darboux
transformation first introduced in \cite{GKM04a,GKM04b}. These exceptional
orthogonal polynomials have been applied in a number of interesting physical
contexts, such as Dirac operators minimally coupled to external fields,
\cite{Ho11}, entropy measures in quantum information theory, \cite{DR12},
rational extensions of Morse and Kepler-Coulomb problems,
\cite{Grandati,Grandati2} or discrete quantum mechanics, \cite{SO6}.
The aim of our contribution is to explore analytic properties of these exceptional polynomials. In particular, we will focus our attention on the distribution of their zeros in terms of the support of the orthogonality measure, as well as on their limit behavior. On the other hand, we will analyze some asymptotic properties, such as the outer relative asymptotics in terms of the corresponding classical orthogonal polynomials and the Mehler-Heine type formulas. Some properties of the zeros have also been analyzed numerically in \cite{HS}.
\section{Exceptional orthogonal polynomials}
Let $W(z)$ be a positive weight function with finite moments. Usually,
orthogonal polynomials are defined by applying Gram-Schmidt orthogonalization
to the
standard flag $1,z,z^2,\ldots$ relative to an $L^2$ inner product associated
with the weight $W$. Moreover, if the resulting orthogonal
polynomials are eigenfunctions of a Sturm-Liouville problem, we
speak of classical orthogonal polynomials. By Bochner's theorem,
the range of such polynomials is limited to the classical families
of Hermite, Laguerre, and Jacobi (for positive weights) and Bessel (for
signed weights).
In order to go beyond the classical families, we consider orthogonal
polynomials spanning a non-standard polynomial flag, say with a
basis $p_m(z), p_{m+1}(z),\ldots, $ where $\deg p_j = j$. Once we
drop the assumption that the OP sequence contains a polynomial \textit{of
every degree}, we obtain new classes of orthogonal polynomials
defined by Sturm-Liouville problems, which are commonly referred to
as exceptional orthogonal polynomials (XOPs)\footnote{Note that the
requirement that the degree sequence starts at $m$ and contains every integer
$j>m$ is not essential either, although all the families treated in this paper
belong
to this class. There exist also XOPs where the degree sequence has gaps. They
are related to state-adding Darboux transformations (as opposed to
isospectral) and contain for instance the $X$-Hermite families, beside many
others.}.
In the last two years it has become clear that the Darboux
transformation, appropriately generalized to the polynomial context,
plays an essential part in the delineation of XOPs. To wit, let
$\mathcal{P}_n$ denote the vector space of polynomials of degree $\leq n$,
and consider a codimension $m$ polynomial flag
\[ \mathcal{U} = \{ U_k\}_{k=1}^\infty,\quad U_{k-1}\subset U_k \subset
\mathcal{P}_{m+k-1}\,.\] Furthermore, let
\begin{equation}
A[y] = b(z)(y'-w(z) y),\quad B[y]= \hat{b}(z)(y'-\hat{w}(z)y)
\end{equation}
be first order linear differential operators with rational coefficients such
that
\begin{equation}\label{eq:ABdef1}
B[\mathcal{P}_{k-1}] = U_k,\quad A[U_k] = \mathcal{P}_{k-1},\qquad k=1,2,\dots
\end{equation}
i.e. $B$ maps the standard flag into the codimension $m$ flag while $A$ maps
the codimension $m$ into the standard flag. Note that equation
\eqref{eq:ABdef1} implies that $\ker A=\ker B=0$ and the Darboux transformation
is isospectral.
Next, consider the second order differential operators
\begin{equation}
T = AB,\quad \hat{T} = BA.
\end{equation}
By construction, $T$ leaves invariant the standard polynomial flag, while
$\hat{T}$ leaves invariant the codimension-$m$ flag $\mathcal{U}$. Hence, by
Bochner's theorem,
\begin{equation}
T[y] = p(z) y'' + q(z) y' + r(z) y,\quad \hat{T}[y] = p(z) y''
+
\hat{q}(z) y' + \hat{r}(z) y
\end{equation}
where $p,q,r$ are polynomials with $\deg p \leq 2, \deg q\leq 1, \deg r= 0$
but where $\hat{q},
\hat{r}$ are in general rational functions.
It is then assured that $T$ and $\hat T$ have polynomial eigenfunctions
\footnote{Since no domains have been specified for $A$ and $B$, the term
\textit{eigenfunction} is not meant in the strict spectral
theoretic sense here, but rather as polynomial solutions to the eigenvalue
equation.}. Let $y_j,\; j\geq 0$, denote the polynomial eigenfunctions of
$T[y]$ and $\hat{y}_j,\; j\geq m$, denote the polynomial eigenfunctions of
$\hat{T}[y]$.
Again, by construction we have the following intertwining relations
\begin{equation}
TA = A \hat{T},\quad BT = \hat{T} B,
\end{equation}
which mean that
\begin{equation}
B[y_j] = \beta_j \hat{y}_{j+m},\quad A[\hat{y}_{j+m}] =
\alpha_j
y_j,\quad j=0,1,2,\ldots,
\end{equation}
where $\alpha_j, \beta_j$ are constants, i.e. operator $B$ maps
eigenfunctions of $T$ into eigenfunctions of $\hat T$ while $A$ does the
opposite transformation.
Furthermore, let
\begin{equation}
W(z) = \frac{1}{p}\exp \int^z \!\!\frac{q}{p},\qquad \hat{W} =
\frac{1}{p}\exp \int^z\!\!
\frac{ \hat{q}}{p}
\end{equation}
be the solutions of the Pearson's equations
\begin{equation}
\label{eq:pearson}
(p W)' = q W,\quad (p\hat{W})' = \hat{q} \hat{W}.
\end{equation}
This means that $T[y]$ is formally self-adjoint relative to $W$ while
$\hat{T}[y]$ is formally self-adjoint relative to $\hat{W}$. Consequently, the
eigenpolynomials $y_j$ are formally
orthogonal with respect to the weight $W(z)$, while $\hat{y}_j$ are
formally orthogonal relative to $\hat{W}(z)$. One can also
show
that
\[ \int A[f] g W dz + \int B[g] f \hat{W} dz = \text{ boundary
term};\] which means that operators $A$ and $-B$ are formally adjoint.
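As a quick numerical sanity check (ours, not from the original text), the first Pearson equation in \eqref{eq:pearson} can be verified for the classical Laguerre data $p(z)=z$, $q(z)={\alpha}+1-z$, $W(z)=z^{\alpha}e^{-z}$ by a central finite difference:

```python
import numpy as np

alpha = 2.5
z = np.linspace(0.5, 8.0, 50)
h = 1e-6

W = lambda t: t**alpha * np.exp(-t)          # Laguerre weight
pW = lambda t: t * W(t)                      # p(z) W(z) with p(z) = z
lhs = (pW(z + h) - pW(z - h)) / (2.0 * h)    # central difference for (pW)'
rhs = (alpha + 1.0 - z) * W(z)               # q(z) W(z)
assert np.allclose(lhs, rhs, rtol=1e-5, atol=1e-8)
```

The analogous check for $\hat{W}$ works the same way once the rational coefficient $\hat{q}$ of a concrete exceptional family is inserted.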
By a careful choice of the flags, and by imposing appropriate
boundary conditions one can construct examples where the above
formal relations hold in the $L^2$ setting (i.e. boundary conditions such that
the boundary terms vanish) and thereby obtain novel
classes of exceptional orthogonal polynomials.
In the present note we study asymptotic behaviour of XOPs of Laguerre and
Jacobi types. As we show, in the interval of orthogonality the exceptional
polynomials satisfy a variant of the classical Heine-Mehler formula. Outside
the interval of orthogonality, the convergence picture is less clear. However,
one can show that codimension $m$ exceptional orthogonal polynomials possess
$m$ extra zeros outside the interval of orthogonality, which we shall denote as
\textit{exceptional zeros}. These exceptional zeros have well-defined
convergence behaviour, and they converge to the zeros of some fixed
classical orthogonal polynomial.
\section{Type I Exceptional Laguerre polynomials}
Let us illustrate the above discussion with the particular example of
the so-called type I exceptional Laguerre polynomials. Let
$\Lag{{\alpha}}{n}(z)$ denote the classical Laguerre polynomial of
degree $n$ and
\[ \mathcal{L}_{\alpha}[y] =zy'' + ({\alpha}+1-z) y' \] the classical Laguerre operator.
Thus, $y=\Lag{{\alpha}}{n}$ is the unique polynomial solution of the equation
\[ \mathcal{L}_{\alpha}[y] = - n y, \quad\text{ with the normalization }\quad
y^{(n)}(0)=(-1)^n.\]
An equivalent boundary condition is
\begin{equation}
\label{eq:Lz=0}
\Lag{{\alpha}}{n}(0) = \binom{n+{\alpha}}{n}.
\end{equation}
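A short numerical check (ours, using SciPy's \texttt{eval\_genlaguerre} and the classical identity $\frac{d}{dz}\Lag{{\alpha}}{n}=-\Lag{{\alpha}+1}{n-1}$) confirms both the differential equation and the boundary condition \eqref{eq:Lz=0}:

```python
import numpy as np
from scipy.special import eval_genlaguerre, binom

alpha, n = 1.5, 6
z = np.linspace(0.2, 10.0, 40)

y = eval_genlaguerre(n, alpha, z)
yp = -eval_genlaguerre(n - 1, alpha + 1, z)   # first derivative
ypp = eval_genlaguerre(n - 2, alpha + 2, z)   # second derivative

# z y'' + (alpha + 1 - z) y' = -n y
assert np.allclose(z * ypp + (alpha + 1.0 - z) * yp, -n * y)
# boundary condition L_n^{(alpha)}(0) = binom(n + alpha, n)
assert np.isclose(eval_genlaguerre(n, alpha, 0.0), binom(n + alpha, n))
```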
For a fixed non-negative integer $m\geq0$, let us now define
\begin{align}
\label{eq:xidef} \xi_{{\alpha},m}(z) &= \Lag{{\alpha}}{m}(-z),\\
A^{{\rm{I}}}_{{\alpha},m}[y] &= \xi_{{\alpha},m} y'-\xi_{{\alpha}+1,m} y,\\
B^{{\rm{I}}}_{{\alpha},m}[y] &= \frac{z y' + (1+{\alpha})\,y}{\xi_{{\alpha},m}},\\
\XLagI{{\alpha}}{m}{m+j} &= -A^{{\rm{I}}}_{{\alpha}-1,m}\left[\Lag{{\alpha}-1}{j}\right],\qquad
j=0,1,2,\dots\\
\mathcal{L}^{{\rm{I}}}_{{\alpha},m}[y] &= \mathcal{L}_{\alpha}[y]+my -2
(\log\xi_{{\alpha}-1,m})'\left(zy'+{\alpha} y\right).
\end{align}
The following factorization relations follow from standard Laguerre
identities:
\begin{align}
\mathcal{L}_{{\alpha}} &= B^{{\rm{I}}}_{{\alpha},m} A^{{\rm{I}}}_{{\alpha},m} +{\alpha}+m+1,\\
\mathcal{L}^{{\rm{I}}}_{{\alpha},m} &= A^{{\rm{I}}}_{{\alpha}-1,m} B^{{\rm{I}}}_{{\alpha}-1,m} + {\alpha}+m.
\end{align}
Consider now the following codimension $m$ polynomial subspace
\begin{equation}
U^{\rm{I}}_{{\alpha},j} = \{ f \in \mathcal{P}_{j+m-1} : \xi_{{\alpha}-1,m} | (zf' +{\alpha}
f)\},\qquad j=1,2,\ldots,
\end{equation} where $f|g$ means polynomial $f(z)$
divides polynomial $g(z)$. At the level of flags, the above
factorizations correspond to the following linear isomorphisms:
\begin{equation}
B^{\rm{I}}_{{\alpha}-1,m}:U^{\rm{I}}_{{\alpha},j}\to\mathcal{P}_{j-1},\qquad
A^{\rm{I}}_{{\alpha}-1,m}:\mathcal{P}_{j-1}\to U^{\rm{I}}_{{\alpha},j}.
\end{equation}
The polynomials $\left\{\XLagI{{\alpha}}{m}{m+j}\right\}_{j=0}^\infty$ are known in
the literature as the\textit{ type I exceptional codimension $m$ Laguerre
polynomials} (for short,
type I $X_m$-Laguerre) \cite{Sasaki-Odake1,GKM10b}. By construction, the
$X_m$-Laguerre polynomials have
the following properties:
\begin{itemize}
\item they span the flag $ U^{\rm{I}}_{{\alpha},1}\subset U^{\rm{I}}_{{\alpha},2} \subset\cdots$
\item they satisfy the following second order linear differential equation:
\begin{equation}
\mathcal{L}^{\rm{I}}_{{\alpha},m}\left[ \XLagI{{\alpha}}{m}{m+j} \right]=- j
\XLagI{{\alpha}}{m}{m+j},\qquad j\geq 0,
\end{equation}
\item they are orthogonal with respect to the weight
\begin{equation}
\label{eq:WIkmdef}
W^{{\rm{I}}}_{{\alpha},m}(z) := \frac{z^{\alpha}
e^{-z}}{\xi_{\alpha-1,m}(z)^2}=\frac{z^{\alpha}
e^{-z}}{\left[\Lag{{\alpha}-1}{m}(-z)\right]^2},\quad z\in [0,\infty),
\end{equation}
\item they are dense in the Hilbert space $\mathrm{L}^2
\big([0,\infty),W^{{\rm{I}}}_{{\alpha},m}\big)$.
\end{itemize}
Note that for ${\alpha}\geq 0$ the polynomial
$\xi_{{\alpha}-1,m}(z)=\Lag{{\alpha}-1}{m}(-z)$ in the denominator of the weight
has its zeros on the negative real axis, and hence $W^{\rm{I}}_{{\alpha},m} dz$ is a
positive definite measure in $\mathbb R^+$, with well defined
moments of all orders. As a result, for ${\alpha}\geq 0$ the set $\{
\XLagI{{\alpha}}{m}{n}\}_{n=m}^\infty$ constitutes an orthogonal polynomial
basis of $\mathrm{L}^2 \big([0,\infty),W^{{\rm{I}}}_{{\alpha},m}\big)$. We observe that
the last property of the above list does not follow by the algebraic
construction and needs to be established by a separate argument. The
interested reader is referred to
\cite{GKM10b} for a direct proof of the
completeness of the $X_m$-Laguerre families.
\begin{prop}
The type I $X_m$-Laguerre polynomials $\XLagI{{\alpha}}{m}{n}$ can be expressed
in terms of classical associated Laguerre polynomials \emph{with the same
parameter}
${\alpha}$ as follows
\begin{equation}
\label{eq:L1altrep}
\XLagI{{\alpha}}{m}{m+j}= \xi_{{\alpha},m} \Lag{{\alpha}}{j} - \xi_{{\alpha},m-1}
\Lag{{\alpha}}{j-1} ,\qquad j\geq 0.
\end{equation}
\end{prop}
The above representation will be especially useful to
discuss the asymptotic properties of the zeros of type I $X_m$-Laguerre
polynomials. It is clear from it that $\XLagI{{\alpha}}{m}{m+j}$ has degree $m+j$.
This representation is reminiscent of the expansions obtained by rational
modifications of classical weights in the framework of
spectral linear transformations (see \cite{zhedanov97}). However, it should be
stressed that they are essentially different because in the case of exceptional
polynomials, although there is a rational modification of the weight, we
are not dealing with the standard flag.
\begin{proof}
Using elementary identities, we have
\begin{align*}
0&=\xi_{{\alpha},m}(\Lag{{\alpha}}{j} - \Lag{{\alpha}}{j-1} - \Lag{{\alpha}-1}{j}) +
\Lag{{\alpha}}{j-1}
(\xi_{{\alpha},m}-\xi_{{\alpha},m-1} - \xi_{{\alpha}-1,m}) \\
&= -(\xi_{{\alpha},m} \Lag{{\alpha}-1}{j} + \xi_{{\alpha}-1,m}
\Lag{{\alpha}}{j-1})+(\xi_{{\alpha},m}\Lag{{\alpha}}{j} -\xi_{{\alpha},m-1} \Lag{{\alpha}}{j-1}).
\end{align*}
Therefore, we can re-express the type I $X_m$-Laguerre polynomials as
\begin{align*}
\XLagI{{\alpha}}{m}{m+j} &= \xi_{{\alpha},m} \Lag{{\alpha}-1}{j} - \xi_{{\alpha}-1,m}
{\Lag{{\alpha}-1}{j}}'\\
&= \xi_{{\alpha},m} \Lag{{\alpha}-1}{j} +\xi_{{\alpha}-1,m} \Lag{{\alpha}}{j-1}\\
&= \xi_{{\alpha},m} \Lag{{\alpha}}{j} - \xi_{{\alpha},m-1} \Lag{{\alpha}}{j-1}.
\end{align*}
\end{proof}
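The representation \eqref{eq:L1altrep} and the defining expression via $A^{\rm{I}}_{{\alpha}-1,m}$ can be compared numerically; the following sketch (ours) evaluates both sides on a grid, using $(\Lag{{\alpha}-1}{j})' = -\Lag{{\alpha}}{j-1}$:

```python
import numpy as np
from scipy.special import eval_genlaguerre as L

a, m, j = 2.0, 3, 5
z = np.linspace(-4.0, 10.0, 60)
xi = lambda al, mm: L(mm, al, -z)            # xi_{al,mm}(z) = L_mm^{(al)}(-z)

# right-hand side of (L1altrep)
rep = xi(a, m) * L(j, a, z) - xi(a, m - 1) * L(j - 1, a, z)
# -A^I_{a-1,m}[L_j^{(a-1)}] written out with (L_j^{(a-1)})' = -L_{j-1}^{(a)}
defn = xi(a, m) * L(j, a - 1, z) + xi(a - 1, m) * L(j - 1, a, z)
assert np.allclose(rep, defn)
```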
We are now ready to prove an interlacing result for the zeros of type I
$X_m$-Laguerre polynomials, but before let us recall the following classical
identity
\begin{equation}
\label{eq:Lat0}
\Lag{{\alpha}}{n}(0) = ({\alpha}+1)_n /n!
\end{equation}
where
\[(x)_n = \begin{cases} x(x+1) \cdots (x+n-1) & \text{if $n\geq0$,}
\\
1 &\text{if $n= 0$,}
\end{cases}\]
is the usual Pochhammer symbol.
Using the above representation we obtain an analogous expression for
the type I $X_m$-Laguerre polynomials:
\begin{equation}
\label{eq:L1at0}
\XLagI{{\alpha}}{m}{m+j}(0) = \frac{{\alpha}+j+m}{{\alpha}}\, \frac{({\alpha})_m}{m!}\,\frac{({\alpha})_j}{j!}
\end{equation}
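The evaluation at the origin can be verified numerically; the sketch below (ours) assumes the factor in the denominator of \eqref{eq:L1at0} is ${\alpha}$, as the derivation from \eqref{eq:L1altrep} together with \eqref{eq:Lat0} yields:

```python
import math
from scipy.special import eval_genlaguerre as L

a, m, j = 1.5, 4, 6
poch = lambda x, n: math.gamma(x + n) / math.gamma(x)   # Pochhammer (x)_n

# value at the origin via the representation (L1altrep)
xlag0 = L(m, a, 0.0) * L(j, a, 0.0) - L(m - 1, a, 0.0) * L(j - 1, a, 0.0)
closed = (a + j + m) / a * poch(a, m) / math.factorial(m) \
         * poch(a, j) / math.factorial(j)
assert math.isclose(xlag0, closed, rel_tol=1e-10)
```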
\begin{prop}
For ${\alpha}>0$ the type I exceptional Laguerre polynomial
$\XLagI{{\alpha}}{m}{m+j}(z)$ has $j$ simple zeros
in $z\in (0,\infty)$ and $m$ simple zeros in $z\in (-\infty,0)$. The
positive zeros of $\XLagI{{\alpha}}{m}{m+j}(z)$ are located between
consecutive zeros of $\Lag{{\alpha}}{j}$ and $\Lag{{\alpha}}{j-1}$ with the smallest
positive zero of $\XLagI{{\alpha}}{m}{m+j}(z)$ located to the left of the
smallest zero of $\Lag{{\alpha}}{j}$. The negative zeros of
$\XLagI{{\alpha}}{m}{m+j}(z)$ are located between the consecutive zeros of
$\xi_{{\alpha},m-1}$ and $\xi_{{\alpha},m}$.
\end{prop}
\begin{proof}
Let $0<\ezeta{{\alpha}}{n}{1}<\ezeta{{\alpha}}{n}{2}<\cdots < \ezeta{{\alpha}}{n}{n}$ denote
the zeros of $\Lag{{\alpha}}{n}(z)$ listed in increasing order. According to the
interlacing property of the zeros of classical orthogonal polynomials, we have
\[ \ezeta{{\alpha}}{j}{1} < \ezeta{{\alpha}}{j-1}{1} < \ezeta{{\alpha}}{j}{2} <
\ezeta{{\alpha}}{j-1}{2}
< \cdots < \ezeta{{\alpha}}{j-1}{j-1} < \ezeta{{\alpha}}{j}{j}.\] Recall that
$\Lag{{\alpha}}{n}(z)>0$ for $z\leq 0$. This implies that $\xi_{{\alpha},m}(z),
\xi_{{\alpha},m-1}(z)>0$ for $z\geq0$. Hence, by \eqref{eq:L1altrep},
\begin{gather*}
\operatorname{sgn} \XLagI{{\alpha}}{m}{m+j}(\ezeta{{\alpha}}{j}{i}) = (-1)^{i},\quad i=1,\ldots,
j;\\
\operatorname{sgn} \XLagI{{\alpha}}{m}{m+j}(\ezeta{{\alpha}}{j-1}{i}) = (-1)^{i},\quad i=1,\ldots,
j-1.
\end{gather*}
It follows by \eqref{eq:L1at0} that there is a zero of
$\XLagI{{\alpha}}{m}{m+j}$ in the interval $(0,\ezeta{{\alpha}}{j}{1})$ and a zero in
the interval $(\ezeta{{\alpha}}{j-1}{i-1},\ezeta{{\alpha}}{j}{i})$ for every
$i=2,\ldots, j$. An analogous argument places zeros of
$\XLagI{{\alpha}}{m}{m+j}$ in the intervals $(-\ezeta{{\alpha}}{m}{1},0)$ and
$(-\ezeta{{\alpha}}{m}{i},-\ezeta{{\alpha}}{m-1}{i-1}),\; i=2,\ldots, m$. By
exhaustion, each of the above intervals contains one simple zero of
$\XLagI{{\alpha}}{m}{m+j}$.
\end{proof}
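The zero count in the proposition can be illustrated numerically by expanding \eqref{eq:L1altrep} in monomial form and computing the roots (sketch ours, assuming NumPy/SciPy):

```python
import numpy as np
from scipy.special import genlaguerre

a, m, j = 1.5, 3, 5

def xi_poly(al, mm):
    """Coefficients of L_mm^{(al)}(-z) as a numpy poly1d in z."""
    c = np.array(genlaguerre(mm, al).coeffs, dtype=float)
    deg = len(c) - 1
    for i in range(len(c)):
        if (deg - i) % 2 == 1:   # substitute x -> -z: flip odd powers
            c[i] = -c[i]
    return np.poly1d(c)

xlag = (xi_poly(a, m) * genlaguerre(j, a)
        - xi_poly(a, m - 1) * genlaguerre(j - 1, a))
roots = np.sort(xlag.roots.real)
assert np.all(np.abs(xlag.roots.imag) < 1e-6)        # all zeros are real
assert np.sum(roots > 0) == j and np.sum(roots < 0) == m
```

For these parameters the degree-$(m+j)$ polynomial indeed has $j$ positive and $m$ negative simple zeros, as the proposition asserts.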
We now study the distribution of the zeros of $\XLagI{{\alpha}}{m}{n}$ as
$n\to \infty$. To that end, we will use the classical Heine-Mehler
formula for Laguerre polynomials
\begin{equation}
\label{eq:hmclassic}
\Lag{{\alpha}}{n}\left(z/n\right) n^{-{\alpha}} \rightrightarrows z^{-{\alpha}/2}
J_{{\alpha}}(2\sqrt{z}),\qquad n\to \infty,
\end{equation}
where $J_{\alpha}(z)$ denotes the Bessel function of the first kind of order ${\alpha}$
$({\alpha}>-1)$ and the double arrow denotes uniform
convergence in compact domains of the complex plane.
The exceptional Laguerre polynomials admit a generalization of the classical
Heine-Mehler formula, given by the following:
\begin{prop}[Generalized Heine-Mehler formula]
We have
\begin{equation}
\label{eq:hmL1}
\XLagI{{\alpha}}{m}{n}\left(z/n\right) n^{-{\alpha}} \rightrightarrows
\binom{{\alpha}+m-1}{m}
z^{-{\alpha}/2} J_{\alpha}(2\sqrt{z}),\qquad n\to\infty.
\end{equation}
\end{prop}
A numerical representation of the convergence of the scaled exceptional
Laguerre polynomials to the Bessel function is given in Figure \ref{fig:HM}.
\begin{proof}
Multiply \eqref{eq:L1altrep} by $j^{-{\alpha}}$ and replace
$z\to z/j$. Taking the limit $j\to\infty$ and using the classical
Heine-Mehler formula \eqref{eq:hmclassic} leads to
\[
j^{-{\alpha}} \XLagI{{\alpha}}{m}{m+j} \rightrightarrows
\big(\xi_{{\alpha},m}(0)-\xi_{{\alpha},m-1}(0)\big)z^{-{\alpha}/2}J_{\alpha}(2\sqrt{z}).
\]
The final expression \eqref{eq:hmL1} is recovered by noting that
\[\xi_{{\alpha},m}(0)-\xi_{{\alpha},m-1}(0)=\binom{{\alpha}+m-1}{m}\]
as implied by \eqref{eq:xidef} and \eqref{eq:Lat0}.
\end{proof}
Note that for $m=0$ the classical Heine-Mehler formula is recovered as a
particular case.
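The convergence in \eqref{eq:hmL1} can also be observed numerically; the following sketch (ours) measures the uniform error on a compact $z$-interval for increasing $n$:

```python
import numpy as np
from scipy.special import eval_genlaguerre as L, jv, binom

a, m = 1.5, 3
z = np.linspace(0.5, 6.0, 20)
limit = binom(a + m - 1, m) * z**(-a / 2.0) * jv(a, 2.0 * np.sqrt(z))

def xlag1(j, t):  # type I X_m-Laguerre polynomial of degree m + j via (L1altrep)
    return L(m, a, -t) * L(j, a, t) - L(m - 1, a, -t) * L(j - 1, a, t)

errs = []
for n in (20, 80, 320):
    approx = n**(-a) * xlag1(n - m, z / n)
    errs.append(np.max(np.abs(approx - limit)))
assert errs[-1] < errs[0] and errs[-1] < 0.1   # uniform error shrinks with n
```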
\begin{figure}[h]
\includegraphics[width=0.75\textwidth]{HM-Lag1.pdf}
\caption{Plot of $n^{-{\alpha}}\XLagI{{\alpha}}{m}{n}(x/n)$ for $m=3$, ${\alpha}=5.5$ and
$20\leq n\leq 100$. The dashed red line corresponds to the limiting
Bessel function $x^{-{\alpha}/2} \binom{{\alpha}+m-1}{m} J_{\alpha}(2\sqrt{x})$ predicted by
the generalized Heine-Mehler formula \eqref{eq:hmL1}.}
\label{fig:HM}
\end{figure}
\begin{cor} Let $\left\{\tilde z^{({\alpha})}_i\right\}_{i\geq 1}$ be the sequence
of zeros of the Bessel function $J_{\alpha}(z)$ listed in increasing order and let
$\{x_{j,i}\}_{i=1}^j$ denote the regular
zeros of $\XLagI{{\alpha}}{m}{m+j}(z)$ in the interval $[0,\infty)$. Then we get
the following asymptotic behaviour
\begin{equation}\label{eq:asymbessel} \lim_{j\to\infty} j x_{j,i} =
\frac{(\tilde z^{({\alpha})}_i)^2}{4}.
\end{equation}
\end{cor}
\begin{proof}
The above result follows from \eqref{eq:hmL1} and Hurwitz's
theorem.
\end{proof}
We have already seen that for fixed $m$ the asymptotic behaviour of the regular
zeros of the exceptional Laguerre polynomials coincides with that of the
classical Laguerre. We now investigate in the same limit the behaviour of the
$m$ exceptional zeros of
$\XLagI{{\alpha}}{m}{m+j}(z)$.
\begin{prop}
As $j\to \infty$ the $m$ zeros of
$\XLagI{{\alpha}}{m}{m+j}$ in $(-\infty,0)$ converge to the $m$ zeros of
$\Lag{{\alpha}-1}{m}(-z)$.
\end{prop}
\begin{proof}
The classical Laguerre polynomials have the following outer ratio asymptotics
\begin{equation}
\label{eq:Lora}
\frac{\Lag{{\alpha}}{n+1}(z)}{ \Lag{{\alpha}}{n}(z)} \rightrightarrows 1,\qquad
z\notin
[0,\infty),\qquad n\to\infty.
\end{equation}
Hence, by \eqref{eq:L1altrep}, we have for $z\notin [0,\infty)$,
\begin{equation*}
\frac{\XLagI{{\alpha}}{m}{m+j}}{\Lag{{\alpha}}{j}} =\xi_{{\alpha},m}
-\xi_{{\alpha},m-1}\frac{\Lag{{\alpha}}{j-1}}{\Lag{{\alpha}}{j}}\rightrightarrows
\xi_{{\alpha},m}-\xi_{{\alpha},m-1}
= \xi_{{\alpha}-1,m}.
\end{equation*}
Therefore, by Hurwitz's theorem the $m$ exceptional zeros of
$\XLagI{{\alpha}}{m}{m+j}$ converge to the zeros of
$\xi_{{\alpha}-1,m}=\Lag{{\alpha}-1}{m}(-z)$.
\end{proof}
\begin{figure}[h]
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{xzeros-Lag1-2dim.pdf} &
\includegraphics[width=0.45\textwidth]{xzeros-Lag1-1dim.pdf}
\end{tabular}
\caption{\textit{Left}: Exceptional zeros of the polynomials
$\XLagI{{\alpha}}{m}{m+j}(z)$ for
$m=6$, ${\alpha}=3.5$ and $1\leq j\leq 22$. The squares denote the zeros of
$\Lag{{\alpha}-1}{m}(-z)$ to which the zeros of $\XLagI{{\alpha}}{m}{m+j}(z)$
converge for $j\to\infty$. \textit{Right}: Convergence is better seen by
plotting the position of the negative zeros as a function of $j$. Solid lines
show the limiting values.}\label{fig:Lag1zeros}
\end{figure}
\section{Type II Exceptional Laguerre polynomials}
\subsection{Definition and identities.}
Let $m\geq 0$ be an integer and ${\alpha}$ a real number. Let us introduce
the polynomials
\begin{align}
\eta_{{\alpha},m}(z) := \Lag{-{\alpha}}{m}(z).
\end{align}
For a fixed non-negative integer $m$ and a real number $\alpha$ let us define
the following first and second order operators
\begin{align}
\label{eq:A2def}
A^{\rm{II}}_{{\alpha},m}[y]&:= z \eta_{{\alpha},m} y'+({\alpha}-m)\eta_{{\alpha}+1,m} y,\\
B^{\rm{II}}_{{\alpha},m}[y]&:= (y'-y)/\eta_{{\alpha},m},\\
\mathcal{L}_{\alpha}[y] &:= z y'' + ({\alpha}+1-z) y',\\
\label{eq:L2def}
\mathcal{L}^{\rm{II}}_{{\alpha},m}[y] &:= \mathcal{L}_{\alpha}[y] + 2z
(\log\eta_{{\alpha}+1,m})' (y-y') -my.
\end{align}
The following factorizations follow from standard Laguerre identities:
\begin{align}
\label{eq:B2A2}
\mathcal{L}_{\alpha} &= B^{\rm{II}}_{{\alpha},m} A^{\rm{II}}_{{\alpha},m} +{\alpha}-m,\\
\label{eq:A2B2}
\mathcal{L}^{\rm{II}}_{{\alpha},m}&= A^{\rm{II}}_{{\alpha}+1,m} B^{\rm{II}}_{{\alpha}+1,m}+{\alpha}+1-m.
\end{align}
For a real number $\alpha$ and given integers $n\geq m\geq 0$, we
define the $n^{\text{th}}$ degree type II exceptional Laguerre polynomial by
\begin{align}
\label{eq:L2ndef}
\XLagII{{\alpha}}{m}{n}(z) &:= -A^{\rm{II}}_{\alpha+1,m}[\Lag{{\alpha}+1}{j}],\qquad
j=n-m\geq 0.
\end{align}
Expanding \eqref{eq:A2def} and applying standard identities,
the following dual representations of the type II polynomials hold:
\begin{align}
\label{eq:L2ndef1}
\XLagII{{\alpha}}{m}{m+j} &= z\Lag{-{\alpha}-1}{m} \Lag{{\alpha}+2}{j-1}
+(m-{\alpha}-1)\Lag{-{\alpha}-2}{m} \Lag{{\alpha}+1}{j} \\
\label{eq:L2ndef2}
&=-z\Lag{-{\alpha}}{m-1} \Lag{{\alpha}+1}{j}
-({\alpha}+1+j)\Lag{-{\alpha}-1}{m} \Lag{{\alpha}}{j}.
\end{align}
Hence, up to a constant
factor, the type II polynomials extend their classical counterparts, which are
recovered for the particular case $m=0$:
\begin{equation}
\label{eq:L2m=0}
\XLagII{{\alpha}}{0}{n} = -(1+{\alpha}+n) \Lag{{\alpha}}{n}.
\end{equation}
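The two representations \eqref{eq:L2ndef1} and \eqref{eq:L2ndef2}, together with the reduction \eqref{eq:L2m=0}, are easy to confirm numerically. The short Python sketch below (our own illustration; the helper names are not from any library) evaluates both expressions through the classical three-term recurrence, which remains valid for the negative parameter values $-{\alpha}-1$ and $-{\alpha}-2$.

```python
import math

def lag(n, a, x):
    # classical Laguerre recurrence; valid for any real parameter a
    if n < 0:
        return 0.0
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 1.0 + a - x
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1 + a - x)*p1 - (k + a)*p0) / (k + 1)
    return p1

def xlag2_rep1(a, m, j, z):
    # eq:L2ndef1
    return (z*lag(m, -a - 1, z)*lag(j - 1, a + 2, z)
            + (m - a - 1)*lag(m, -a - 2, z)*lag(j, a + 1, z))

def xlag2_rep2(a, m, j, z):
    # eq:L2ndef2
    return (-z*lag(m - 1, -a, z)*lag(j, a + 1, z)
            - (a + 1 + j)*lag(m, -a - 1, z)*lag(j, a, z))

a, m, j, z = 4.2, 3, 5, 1.7
r1, r2 = xlag2_rep1(a, m, j, z), xlag2_rep2(a, m, j, z)
# for m = 0 the polynomial must reduce to -(1 + a + n) L_n^{(a)}  (eq:L2m=0)
n = 6
red_lhs = xlag2_rep1(a, 0, n, z)
red_rhs = -(1 + a + n)*lag(n, a, z)
```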
The factorizations \eqref{eq:B2A2} and \eqref{eq:A2B2} yield the following
intertwining relations between the standard Laguerre operator $\mathcal{L}_{\alpha}$ and
the type II $X$-Laguerre operator $\mathcal{L}^{\rm{II}}_{{\alpha},m}$:
\begin{align}
\label{eq:LIIA}
&\mathcal{L}^{\rm{II}}_{{\alpha},m} A^{\rm{II}}_{{\alpha}+1,m} = A^{\rm{II}}_{{\alpha}+1,m}\mathcal{L}_{{\alpha}+1},\\
\label{eq:BLII}
&B^{\rm{II}}_{{\alpha}+1,m} \mathcal{L}^{\rm{II}}_{{\alpha},m} = \mathcal{L}_{{\alpha}+1} B^{\rm{II}}_{{\alpha}+1,m}.
\end{align}
The former relation provides the eigenvalue relation for the type II
exceptional Laguerre polynomials:
\begin{equation}
\label{eq:IIeigenval}
\mathcal{L}^{\rm{II}}_{{\alpha},m} \left[\XLagII{{\alpha}}{m}{m+j}\right]= -j
\XLagII{{\alpha}}{m}{m+j},\qquad j=0,1,2\dots.
\end{equation}
The latter gives the following ``lowering'' relation between the type
II exceptional Laguerre polynomials and their classical counterparts:
\begin{equation}
\label{eq:L2L}
\XLagII{{\alpha}}{m}{n}\,{}' - \XLagII{{\alpha}}{m}{n} = (1+{\alpha}+j-m) \Lag{-{\alpha}-1}{m}
\Lag{{\alpha}+1}{j},\quad j=n-m\geq 0.
\end{equation}
In order to find raising and lowering relations for the exceptional
Laguerre polynomials, let us introduce the following first order linear
differential operators
\begin{align}\label{sifact1}
{\hat{A}}^{\rm{II}}_{{\alpha},m}[y] &:= \frac{\eta_{{\alpha}+2,m}}{\eta_{{\alpha}+1,m}} \left(
y' -(\log \eta_{{\alpha}+2,m})' y\right),\\
{\hat{B}}^{\rm{II}}_{{\alpha},m}[y]&:= \frac{\eta_{{\alpha}+1,m}}{\eta_{{\alpha}+2,m}}
\left(zy'+({\alpha}+1-z) y\right) - z(\log \eta_{{\alpha}+2,m})' y.
\end{align}
In terms of these operators we have the following \textit{shape-invariant}
factorizations
\begin{align}
\mathcal{L}^{\rm{II}}_{{\alpha},m} &= {\hat{B}}^{\rm{II}}_{{\alpha},m} {\hat{A}}^{\rm{II}}_{{\alpha},m},\\
\mathcal{L}^{\rm{II}}_{{\alpha}+1,m} &= {\hat{A}}^{\rm{II}}_{{\alpha},m} {\hat{B}}^{\rm{II}}_{{\alpha},m}+1.\label{sifact2}
\end{align}
From these factorizations the following lowering and raising
relations for the exceptional polynomials easily follow:
\begin{align}
&{\hat{A}}^{\rm{II}}_{{\alpha},m}\left[\XLagII{{\alpha}}{m}{n}\right] =
\XLagII{{\alpha}+1}{m}{n-1},\quad n\geq m+1,\\
&{\hat{B}}^{\rm{II}}_{{\alpha},m}\left[\XLagII{{\alpha}+1}{m}{n}\right] = (n-m)
\XLagII{{\alpha}}{m}{n+1},\quad n\geq m.
\end{align}
The above equations can be conveniently re-written as
\begin{align}
\label{eq:L2lower}
& \left( \frac{\XLagII{{\alpha}}{m}{n}}{\eta_{{\alpha}+2,m}}\right)' =
\left(\frac{\eta_{{\alpha}+1,m}}{\eta_{{\alpha}+2,m}}\right)^2
\frac{\XLagII{{\alpha}+1}{m}{n-1}}{\eta_{{\alpha}+1,m}}\,,\\
\label{eq:L2raise}
&\left( \frac{e^{-z}
z^{{\alpha}+1}}{\eta_{{\alpha}+1,m}}\XLagII{{\alpha}+1}{m}{n}\right)' = (n-m+1)
\left(\frac{\eta_{{\alpha}+2,m}}{\eta_{{\alpha}+1,m}}\right)^2
\frac{e^{-z} z^{{\alpha}}}{\eta_{{\alpha}+2,m}}\XLagII{{\alpha}}{m}{n+1}\,.
\end{align}
\subsection{Orthogonality}
The type II exceptional Laguerre polynomials are formally orthogonal
with respect to the weight
\begin{equation}
W^{\rm{II}}_{{\alpha},m}(z) :=\frac{ e^{-z} z^{\alpha}}{\eta_{{\alpha}+1,m}^2}= \frac{ e^{-z}
z^{\alpha}}{\left[\Lag{-{\alpha}-1}{m}(z)\right]^2}.
\end{equation}
The above weight is the solution $W^{\rm{II}}= {\hat{W}}$ of Pearson's equation
\eqref{eq:pearson} where
\[ p=z,\quad \hat{q} = (1+{\alpha}-z) -2 z(\log \eta_{{\alpha}+1,m})' \] are extracted
from \eqref{eq:L2def}. As a consequence, \eqref{eq:IIeigenval} and
Green's formula imply
\begin{equation}
\label{eq:IIgreen}
(n_2-n_1)\int \XLagII{{\alpha}}{m}{n_2}\,\XLagII{{\alpha}}{m}{n_1}\,
W^{\rm{II}}_{{\alpha},m}\, dz = z W^{\rm{II}}_{{\alpha},m}\,
\operatorname{Wr}\left[ \XLagII{{\alpha}}{m}{n_2}, \XLagII{{\alpha}}{m}{n_1}\right],
\end{equation}
where $n_2>n_1\geq m$ and where
\[\operatorname{Wr}[f,g] = f'g - fg'\]
denotes the usual Wronskian operator. The following
crucial result is established in \cite[Ch. 6.73]{Sz}.
\begin{prop}
\label{prop:etazeros}
For ${\alpha}>m-1$ the polynomials $\eta_{{\alpha},m}(z)$ have no zeros
in $[0,\infty)$. The number of negative real zeros is either $0$ or $1$
according to whether $m$ is even or odd, respectively.
\end{prop}
\noindent
Thus, assuming $\alpha>m-1$ and restricting the interval
of orthogonality to $[0,\infty)$, $W^{\rm{II}}_{{\alpha},m}$ is a weight with
finite moments of all orders, and the RHS of \eqref{eq:IIgreen}
vanishes, which ensures genuine orthogonality in the $L^2$ sense.
\subsection{Zeros of the type II Laguerre polynomials}
Henceforth, let us assume that $\alpha > m-1$, where $m\geq 0$ is an
integer. As above, we will call the real positive zeros of
$\XLagII{{\alpha}}{m}{n}(z),\; n\geq m$ \emph{regular} and the negative and
complex zeros \emph{exceptional}. From \eqref{eq:L2ndef1} we have
\begin{equation}
\label{eq:L2z=0}
\XLagII{{\alpha}}{m}{m+j}(0) = (m+1) \binom{{\alpha}+j+1}{j} \binom{m-{\alpha}-1}{m+1}
,\quad j=0,1,2,\ldots
\end{equation}
Hence, $z=0$ is never a zero of such a polynomial.
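Formula \eqref{eq:L2z=0} can also be confirmed numerically; here is a minimal Python check (ours, not part of the text), where the binomial coefficient with real upper argument is computed as a falling-factorial product.

```python
import math

def lag(n, a, x):
    # classical Laguerre three-term recurrence, valid for real a
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 1.0 + a - x
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1 + a - x)*p1 - (k + a)*p0) / (k + 1)
    return p1

def binom(x, k):
    # binomial coefficient with real upper argument x and integer k >= 0
    prod = 1.0
    for i in range(k):
        prod *= x - i
    return prod / math.factorial(k)

a, m, j = 4.2, 3, 5
# eq:L2ndef1 at z = 0: only the second term survives
value = (m - a - 1)*lag(m, -a - 2, 0.0)*lag(j, a + 1, 0.0)
closed = (m + 1)*binom(a + j + 1, j)*binom(m - a - 1, m + 1)  # eq:L2z=0
```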
\begin{prop}
The zeros of $\XLagII{{\alpha}}{m}{n}(z),\; n\geq m$, are simple.
\end{prop}
\begin{proof}
This follows from \eqref{eq:L2ndef1}, \eqref{eq:L2ndef2}, and \eqref{eq:L2L},
together with the fact that the zeros of the classical Laguerre polynomials are simple.
\end{proof}
\begin{prop}
\label{prop:L2zeros}
The polynomial $\XLagII{{\alpha}}{m}{n}(z),\; n\geq m$, has exactly $n-m$
regular
zeros.
\end{prop}
\begin{proof}
We prove the existence of at least $j=n-m$ regular zeros by
induction on $j$. The case $j=0$ is trivial. Suppose now that the
proposition has been established for $j\geq 0$ and ${\alpha}>m-1$. Since
${\alpha}+1>m-1$, the proposition is also true for
$\XLagII{{\alpha}+1}{m}{n}$. Let $\zeta_1,\ldots, \zeta_k,\; k\geq j$,
be the regular zeros of $\XLagII{{\alpha}+1}{m}{n}(z)$. By
\eqref{eq:L2raise} and Rolle's theorem, $\XLagII{{\alpha}}{m}{n+1}(z)$
has at least one zero in each of the intervals $(\zeta_1,\zeta_2),
(\zeta_2,\zeta_3),\ldots, (\zeta_{k-1},\zeta_k)$. Also by
\eqref{eq:L2raise}, there is a zero in $(0,\zeta_1)$ and a zero in
$(\zeta_k,\infty)$, for a total of at least $k+1$ zeros.
We conclude by showing that $j$ is also an upper bound for the
number of regular zeros. The proof is again by induction on $j=n-m$.
By \eqref{eq:L2ndef1},
\begin{equation}
\label{eq:L2ground}
\XLagII{{\alpha}}{m}{m} = (m-1-{\alpha})\Lag{-{\alpha}-1}{m} = (m-1-{\alpha})\eta_{{\alpha}+1,m}.
\end{equation}
By Proposition \ref{prop:etazeros}, the latter has no real,
non-negative zeros. The lowering relation \eqref{eq:L2lower} shows
that between two regular zeros of $\XLagII{{\alpha}}{m}{n}$ there lies at least one
zero of $\XLagII{{\alpha}+1}{m}{n-1}(z)$. Hence, if we assume that the latter
has at most $j-1$ regular zeros, then the former has at most $j$
regular zeros.
\end{proof}
\begin{prop}
The type II polynomial $\XLagII{{\alpha}}{m}{n},\; n\geq m$, has either 0 or 1
negative zeros, according to whether $m$ is even or odd.
\end{prop}
\begin{proof}
Let
\[\epsilon_{m} =
\begin{cases}
0 & m \text{ even}\\
1 & m \text{ odd.}
\end{cases}\] Let $\epsilon_{{\alpha},m,n}$ be the number of negative
zeros of $\XLagII{{\alpha}}{m}{n}$. We wish to show that
\[ \epsilon_{{\alpha},m,n} = \epsilon_m,\quad {\alpha}>m-1,\; n\geq m.\]
Suppose that $m$ is odd. By \eqref{eq:L2z=0},
\[ \operatorname{sgn} \XLagII{{\alpha}}{m}{n}(0) = (-1)^{m+1}.\]
By \eqref{eq:L2ndef1},
\[ \XLagII{{\alpha}}{m}{n} = \frac{m-1-j-{\alpha}}{m! j!} (-z)^{m+j} +
\text{lower degree terms.} \]
As a consequence,
\[ \lim_{z\to -\infty} \operatorname{sgn} \XLagII{{\alpha}}{m}{n}(z) = -1 .\] Hence,
$\epsilon_{{\alpha},m,n} \geq 1$ if $m$ is odd, and therefore
$\epsilon_{{\alpha},m,n} \geq \epsilon_m$.
By Proposition \ref{prop:L2zeros}, $\XLagII{{\alpha}}{m}{m+j}$ has
$j+\epsilon_{\alpha,m,n}$ real zeros. Hence, by \eqref{eq:L2lower},
$\XLagII{{\alpha}+1}{m}{m+j-1}$ has at least $j-1+\epsilon_{\alpha,m,n}$
real zeros. Continuing inductively, $\XLagII{{\alpha}+j}{m}{m}$ has at
least $\epsilon_{{\alpha},m,n}$ real zeros. Hence, by
\eqref{eq:L2ground} and Proposition \ref{prop:etazeros},
$\epsilon_{{\alpha},m,n} \leq \epsilon_m$, as was to be shown.
\end{proof}
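The zero counts established above can be confirmed numerically by counting sign changes on a fine grid. In the Python sketch below (our own illustration, using representation \eqref{eq:L2ndef2} to evaluate the polynomial) one checks that with ${\alpha}=6.5$ and $j=5$ the polynomial $\XLagII{{\alpha}}{m}{m+j}$ has exactly $j$ positive zeros, and that it has no negative zero for $m=2$ but one negative zero for $m=3$.

```python
def lag(n, a, x):
    # classical Laguerre three-term recurrence, valid for real a
    if n < 0:
        return 0.0
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 1.0 + a - x
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1 + a - x)*p1 - (k + a)*p0) / (k + 1)
    return p1

def xlag2(a, m, j, z):
    # type II X-Laguerre polynomial via eq:L2ndef2
    return (-z*lag(m - 1, -a, z)*lag(j, a + 1, z)
            - (a + 1 + j)*lag(m, -a - 1, z)*lag(j, a, z))

def sign_changes(f, lo, hi, steps):
    # crude zero count: number of sign changes of f on a uniform grid
    vals = [f(lo + (hi - lo)*i/steps) for i in range(steps + 1)]
    return sum(1 for v, w in zip(vals, vals[1:]) if v*w < 0)

a, j = 6.5, 5
counts = {}
for m in (2, 3):  # a > m - 1 holds in both cases
    f = lambda z, m=m: xlag2(a, m, j, z)
    counts[m] = (sign_changes(f, 1e-6, 80.0, 4000),    # regular zeros
                 sign_changes(f, -60.0, -1e-6, 4000))  # negative zeros
```

The grid ranges are chosen generously; all real zeros of these low-degree examples lie well inside them.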
For the type II exceptional Laguerre polynomials, a Heine-Mehler
type formula also holds:
\begin{prop}
As $n\to \infty$, we have
\begin{equation}
\label{eq:hmL2}
\XLagII{{\alpha}}{m}{n}(z/n)n^{-{\alpha}-1} \rightrightarrows
-\binom{m-1-{\alpha}}{m}
z^{-{\alpha}/2} J_{\alpha}(2\sqrt{z}).
\end{equation}
\end{prop}
\begin{proof}
Multiplying \eqref{eq:L2ndef2} by $j^{-1-{\alpha}}$, replacing $z\to z/j$,
and applying the classical
Heine-Mehler formula, we get
\begin{gather*}
j^{-1-{\alpha}} \XLagII{{\alpha}}{m}{m+j}(z/j) +z^{-{\alpha}/2} J_{\alpha}(2\sqrt{z})
\Lag{-{\alpha}-1}{m}(z/j) \rightrightarrows 0,\quad j\to \infty.
\end{gather*}
The polynomials $\XLagII{{\alpha}}{m}{m+j}$ and $\Lag{-{\alpha}-1}{m}$ are uniformly
continuous on compact subsets of $\mathbb{C}$. Setting $n=m+j$, we have
by uniform continuity on compact subsets
\begin{gather*}
\Lag{-{\alpha}-1}{m}(z/n)\rightrightarrows \binom{m-{\alpha}-1}{m} \\
j^{-1-{\alpha}} \XLagII{{\alpha}}{m}{m+j}(z/j) -n^{-1-{\alpha}}
\XLagII{{\alpha}}{m}{n}(z/n)
\rightrightarrows 0
\end{gather*}
as $n\to \infty$. Equation \eqref{eq:Lat0} is needed to establish
the first statement.
\end{proof}
\noindent
Note that, as a consequence of \eqref{eq:L2m=0}, the above assertion reduces to
the classical Heine-Mehler formula for $m=0$.
\begin{cor} Let $0<\tilde{z}_1< \tilde{z}_2< \tilde{z}_3< \cdots$ denote the
positive zeros of the Bessel function of the first kind $J_{\alpha}(z)$ arranged
in an increasing order
and let $0<z_{n,1}< z_{n,2} <\cdots < z_{n,n-m}$ be the regular
zeros of $\XLagII{{\alpha}}{m}{n}$, also arranged in increasing order. Then,
\begin{equation}
\label{eq:asymbessel2}
\lim_{n\to\infty} n z_{n,i} = \tilde{z}^2_i/4.
\end{equation}
\end{cor}
\begin{proof}
The above result follows from \eqref{eq:hmL2} and Hurwitz's
theorem.
\end{proof}
Away from the interval of orthogonality, we can describe the asymptotic
behaviour as follows:
\begin{prop}
As $j\to \infty$ we have
\[-({\alpha}+1+j)^{-1} \frac{\XLagII{{\alpha}}{m}{m+j}(z)}{\Lag{{\alpha}}{j}(z)} =
\eta_{{\alpha}+1,m}(z)
+ O(j^{-1/2})\] on compact subsets of
$\mathbb{C}\backslash[0,\infty)$.
\end{prop}
\begin{proof}
For the outer ratio asymptotics of the classical Laguerre
polynomials, we have
\[ (-z)^{t/2} \frac{\Lag{{\alpha}+t}{j}(z)}{
\Lag{{\alpha}}{j}(z)} = O(j^{t/2}),\quad j\to \infty \] uniformly on compact
subsets of $\mathbb{C}\backslash[0,\infty)$. The desired
conclusion now follows by \eqref{eq:L2ndef2}.
\end{proof}
\begin{prop}
As $n\to \infty$ the exceptional zeros of $\XLagII{{\alpha}}{m}{n},\; n\geq m$,
converge to the zeros of $\eta_{{\alpha}+1,m}(z)=\Lag{-{\alpha}-1}{m}(z)$.
\end{prop}
\begin{proof}
The desired conclusion follows by the preceding Proposition and by
Hurwitz's theorem.
\end{proof}
\begin{figure}[h]
\includegraphics[width=0.75\textwidth]{Xzeros-Lag2-m15.pdf}
\caption{Exceptional zeros of the polynomials $\XLagII{{\alpha}}{m}{m+j}(z)$ for
$m=15$, ${\alpha}=14.01$ and $1\leq j\leq 22$. The squares denote the zeros of
$\Lag{-{\alpha}-1}{m}(z)$ to which the zeros of $\XLagII{{\alpha}}{m}{m+j}(z)$
converge for $j\to\infty$.}\label{fig:Lag2zeros}
\end{figure}
\section{Exceptional Jacobi polynomials}
\subsection{Definitions and identities}
Let $m\geq 0$ be a fixed integer, and $\alpha,\beta$ real numbers. Let
\begin{equation}
T_{{\alpha},{\beta}}[y] = (1-z^2)y'' +
(\beta-\alpha-(\alpha+\beta+2)z)y',
\end{equation}
denote the Jacobi differential operator. The classical Jacobi
polynomial of degree $n$ can be defined as the polynomial solution
$y=\Jac{\alpha}{\beta}{n}(z)$ of the second order linear differential equation
\begin{equation}
T_{{\alpha},{\beta}}[y] = -n(1+{\alpha}+{\beta}+n) y,\quad y(1) = \frac{({\alpha}+1)_n}{n!}.
\end{equation}
Next, define
\begin{align}
T_{\alpha,\beta,m}[y] &= T_{\alpha,\beta}[y]+ (\alpha-\beta-m+1)m y
\\ \nonumber &\qquad -(\log \Jac{-{\alpha}-1}{{\beta}-1}{m})'\,
\Big(\beta(1-z) y+(1-z^2)y'\Big),\\
\label{eq:Aabmdef}
A_{\alpha,\beta,m}[y] &= (1-z)\Jac{-{\alpha}}{{\beta}}{m}\,
y'+(m-\alpha)\Jac{-{\alpha}-1}{{\beta}-1}{m}\, y,\\
B_{\alpha,\beta,m}[y] &=
\frac{(1+z)y'+(1+\beta)y}{\Jac{-{\alpha}}{{\beta}}{m}}.
\end{align}
The following operator factorizations can be verified by the
application of elementary identities.
\begin{align}
\label{eq:TabmAB}
T_{\alpha,\beta,m} &= A_{\alpha+1,\beta-1,m} B_{\alpha+1,\beta-1,m}
-(m-\alpha-1)(m+\beta) ,\\
\label{eq:TabBA}
T_{\alpha,\beta} &= B_{\alpha,\beta,m} A_{\alpha,\beta,m}
-(m-\alpha)(m+\beta+1).
\end{align}
For $n\geq m$, we define the degree $n$ exceptional Jacobi polynomial
to be
\begin{align}
\label{eq:hPdef}
\XJac{\alpha}{\beta}{m}{n} &= \frac{(-1)^{m+1}}{\alpha+1+j}
\,A_{\alpha+1,\beta-1,m}\left[
\Jac{\alpha+1}{\beta-1}{j}\right],\qquad j=n-m\geq 0,\\
&= \frac{(-1)^m}{\alpha+1+j} \left(\frac{1}{2}(1+{\alpha}+{\beta}+j)(z-1)
\Jac{-{\alpha}-1}{{\beta}-1}{m}
\Jac{\alpha+2}{\beta}{j-1}\right.\\ \nonumber
&\qquad\qquad\qquad\left. +
(\alpha+1-m)\,\Jac{-{\alpha}-2}{{\beta}}{m}
\Jac{\alpha+1}{\beta-1}{j}\right).
\end{align}
The exceptional polynomials and operator extend their classical
counterparts
\begin{align}
T_{\alpha,\beta,0}[y] &= T_{\alpha,\beta}[y],\\
\XJac{\alpha}{\beta}{0}{n} &= \Jac{\alpha}{\beta}{n}.
\end{align}
By construction, these polynomials satisfy several identities,
which we enumerate below. The factorizations
\eqref{eq:TabmAB} \eqref{eq:TabBA} give the intertwining relations
\begin{gather}
\label{eq:ATTA}
A_{\alpha+1,\beta-1,m}T_{\alpha+1,\beta-1} =
T_{\alpha,\beta,m}A_{\alpha+1,\beta-1,m},\\
T_{\alpha+1,\beta-1} B_{\alpha+1,\beta-1,m} =
B_{\alpha+1,\beta-1,m}T_{\alpha,\beta,m}.
\end{gather}
From the above relations we can derive the
eigenvalue equation for the $X_m$-Jacobi polynomials
\begin{equation}
\label{eq:hPlambda}
T_{\alpha,\beta,m} \left[\XJac{\alpha}{\beta}{m}{n}\right] =
-(n-m)(1+\alpha+\beta+n-m)\hPud{\alpha,\beta}{m}{n},\quad n\geq m.
\end{equation}
The factorization \eqref{eq:TabBA} implies the following identity
\begin{gather}
\label{eq:BhP=P}
(-1)^m(\alpha+1+j)(\beta \hPud{\alpha,\beta}{m}{m+j}+(z+1)
\hPudp{\alpha,\beta}{m}{m+j}) =\\ \nonumber
\qquad\qquad (\alpha+1-m+j) (\beta+m+j) \Pud{-\alpha-1,\beta-1}{m}
\Pud{\alpha+1,\beta-1}{j}\,.
\end{gather}
It will be useful to express ${\hat{P}}$ in a way that is symmetric in the dimension $j$
and the codimension $m$. Namely,
\begin{align}
\nonumber
& (-1)^m (\alpha+j)\hPud{\alpha-1,\beta+1}{m}{m+j} \\
\label{eq:hPrel1}
&\qquad = (\alpha-m)
\Jac{-\alpha-1}{\beta+1}{m} \Jac{\alpha}{\beta}{j} + (z-1)
\Jac{-\alpha}{\beta}{m} \Jacp{\alpha}{\beta}{j}\\
\label{eq:hPrel2}
&\qquad = (\alpha+j) \Pud{\alpha-1,\beta+1}{j}
\Pud{-\alpha,\beta}{m} -
(z-1) \Jac{\alpha}{\beta}{j} \Jacp{-\alpha}{\beta}{m}\,.
\end{align}
The first equation is just a restatement of the definition
\eqref{eq:hPdef}, while the second identity follows from the
classical relation
\begin{align}
\label{eq:z-1P'}
(z-1) \Jacp{\alpha}{\beta}{j} &=
\alpha\Pud{\alpha,\beta}{j} -(\alpha+j) \Pud{\alpha-1,\beta+1}{j}\,.
\end{align}
At the endpoints of the interval of orthogonality we have the following classical identities
\begin{align}
\label{eq:Pab-1}
\Pud{{\alpha},{\beta}}{n}(-1) &= (-1)^n \frac{({\beta}+1)_n}{n!},\\
\label{eq:Pab+1}
\Pud{{\alpha},{\beta}}{n}(1) &= \frac{({\alpha}+1)_n}{n!},
\end{align}
which in the case of exceptional Jacobi polynomials yield the following
generalizations
\begin{align}
\label{eq:hP+1}
\hPud{\alpha,\beta}{m}{n}(1) &=
\binom{\alpha+n-m}{n} \binom{n}{m} ,\quad n\geq m,\\
& = \frac{(\alpha+1-m)_{m+j}}{m!\,j!} ,\quad j=n- m, \\
\label{eq:hP-1}
\hPud{\alpha,\beta}{m}{m+j}(-1) &=
(-1)^j\frac{(\beta+j+m)(1+\alpha-m+j)}{(1+\alpha+j)}
\frac{(\beta+1)_{m-1}(\beta)_{j}}{m!\,j!}.
\end{align}
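The endpoint evaluation \eqref{eq:hP+1}, together with the underlying representation \eqref{eq:hPdef}, can be spot-checked numerically. In the Python sketch below (our own illustration) the classical Jacobi polynomials are generated by their three-term recurrence, which extends to the negative parameters $-{\alpha}-1$, $-{\alpha}-2$ as long as no recurrence denominator vanishes.

```python
import math

def jac(n, a, b, x):
    # classical Jacobi three-term recurrence; an identity in a, b,
    # hence valid for the negative parameters used below
    if n < 0:
        return 0.0
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 0.5*(a - b) + 0.5*(a + b + 2.0)*x
    for k in range(2, n + 1):
        c1 = 2.0*k*(k + a + b)*(2*k + a + b - 2)
        c2 = (2*k + a + b - 1)*(a*a - b*b)
        c3 = (2*k + a + b - 1)*(2*k + a + b)*(2*k + a + b - 2)
        c4 = 2.0*(k + a - 1)*(k + b - 1)*(2*k + a + b)
        p0, p1 = p1, ((c2 + c3*x)*p1 - c4*p0) / c1
    return p1

def xjac(a, b, m, j, z):
    # exceptional Jacobi polynomial via the expanded form of eq:hPdef
    return ((-1.0)**m / (a + 1 + j)) * (
        0.5*(1 + a + b + j)*(z - 1)*jac(m, -a - 1, b - 1, z)*jac(j - 1, a + 2, b, z)
        + (a + 1 - m)*jac(m, -a - 2, b, z)*jac(j, a + 1, b - 1, z))

def binom(x, k):
    # binomial coefficient with real upper argument
    prod = 1.0
    for i in range(k):
        prod *= x - i
    return prod / math.factorial(k)

a, b, m, j = 3.2, 0.7, 2, 3        # class (B): b > 0 and a + 1 - m > 0
n = m + j
value = xjac(a, b, m, j, 1.0)
closed = binom(a + n - m, n)*binom(n, m)   # eq:hP+1
```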
Define the 1st-order operators
\begin{equation}
\label{eq:hAabmdef}
{\hat{A}}_{\alpha,\beta,m}[y]
=\frac{\Jac{-{\alpha}-2}{{\beta}}{m}}{\Jac{-{\alpha}-1}{{\beta}-1}{m}} \left(y'
-(\log\Jac{-{\alpha}-2}{{\beta}}{m})' \, y\right)\,,
\end{equation}
\begin{equation}
{\hat{B}}_{\alpha,\beta,m}[y]
=(1-z^2)\frac{\Jac{-{\alpha}-1}{{\beta}-1}{m}}{\Jac{-{\alpha}-2}{{\beta}}{m}} \left[
y'- \left((\log\Jac{-{\alpha}-1}{{\beta}-1}{m})'+
\frac{\alpha+1}{1-z}-\frac{\beta+1}{1+z}\right) y \right]\,.
\end{equation}
The following ``shape-invariant'' factorizations relate
exceptional operators of the same codimension at different values of
the parameters $\alpha,\beta$
\begin{align}
{\hat{B}}_{\alpha,\beta,m} {\hat{A}}_{\alpha,\beta,m} &=T_{\alpha,\beta,m} \\
{\hat{A}}_{\alpha,\beta,m} {\hat{B}}_{\alpha,\beta,m} &=T_{\alpha+1,\beta+1,m}
+2+\alpha+\beta.
\end{align}
The corresponding intertwining relations, namely,
\begin{align}
T_{\alpha+1,\beta+1,m} {\hat{A}}_{\alpha,\beta,m} &={\hat{A}}_{\alpha,\beta,m}
T_{\alpha,\beta,m}, \\
{\hat{B}}_{\alpha,\beta,m} T_{\alpha+1,\beta+1,m} &= T_{\alpha,\beta,m}
{\hat{B}}_{\alpha,\beta,m},
\end{align}
give rise to the lowering and raising relations for the exceptional Jacobi
polynomials
\begin{gather}
\label{eq:hPlower}
\left( \frac{ \XJac{\alpha}{\beta}{m}{n}}{\Jac{-{\alpha}-2}{{\beta}}{m}}\right)'
= \frac{1}{2}(n-m+\alpha+\beta+1)
\left(\frac{\Jac{-{\alpha}-1}{{\beta}-1}{m}}{\Jac{-{\alpha}-2}{{\beta}}{m}}\right)^2\,
\frac{\XJac{\alpha+1}{\beta+1}{m}{n-1}}{\Jac{-{\alpha}-1}{{\beta}-1}{m}},\quad n\geq m+1,\\
\label{eq:hPraise}
(1-z)^{-\alpha}(1+z)^{-\beta}\left((1-z)^{\alpha+1}
(1+z)^{\beta+1}\frac
{\XJac{\alpha+1}{\beta+1}{m}{n}}{\Jac{-{\alpha}-1}{{\beta}-1}{m}}\right)' = \\ \nonumber
\qquad \qquad -2(n-m+1)
\left(\frac{\Jac{-{\alpha}-2}{{\beta}}{m}}{\Jac{-{\alpha}-1}{{\beta}-1}{m}}
\right)^2\frac{\XJac{\alpha}{\beta}{m}{n+1}}{\Jac{-{\alpha}-2}{{\beta}}{m}}
,\quad n\geq m.
\end{gather}
As usual, we denote by $f'(z)$ the derivative of $f$ with respect to
the $z$ variable.
\subsection{Orthogonality}
The exceptional Jacobi polynomials are formally orthogonal with
respect to ${\hat{W}}_{\alpha,\beta,m}(z) dz,\; -1\leq z\leq 1$, where
\begin{equation}
\label{eq:hWjacdef}
{\hat{W}}_{\alpha,\beta,m}(z) = \frac{(1-z)^\alpha
(1+z)^\beta}{\left[\Jac{-{\alpha}-1}{{\beta}-1}{m}(z)\right]^2}.
\end{equation}
In order to have orthogonality in the $L^2$ sense, additional conditions need to
be imposed on the parameters
$\alpha, \beta$, and $m$. The condition $\alpha,\beta>-1$ is necessary for the
measure \eqref{eq:hWjacdef} to have
finite moments of all orders. Another requirement is that the denominator
$\Jac{-{\alpha}-1}{{\beta}-1}{m}$ does not vanish for $z\in(-1,1)$, which imposes extra
conditions on $\alpha, \beta$, and $m$.
An analysis of the zeros of classical Jacobi polynomials can be found in
Szeg\H{o}'s book \cite[Chapter 6.72]{Sz}. First, let us recall that
$\Pud{\alpha,\beta}{n}(z)$ has a zero of multiplicity $k$ at $z=1$ if
$\alpha=-k,\;
k=1,\ldots, n$, and a zero of multiplicity $j$ at $z=-1$ if $\beta=-j,\;
j=1,\ldots, n$.
We also mention the degenerate cases where
$$\deg \Pud{\alpha,\beta}{n} = k \quad \text{ when } \quad
n+\alpha+\beta+1=-k,\qquad k=0,1,\ldots, n-1.$$
For such parameter values the $n$th Jacobi polynomial has degree $<n$. In these degenerate cases (where ${\alpha}+{\beta}$
is a negative integer), we have
\begin{equation}
\label{eq:Pdegen}
\binom{{\alpha}+m}{m} \Pud{{\alpha},{\beta}}{n}(z) = \binom{{\alpha}+n}{n}
\Pud{{\alpha},{\beta}}{m}(z),\quad {\alpha}+{\beta}=-1-m-n.
\end{equation}
Since $\beta>-1$, the denominator has a zero at $z=-1$ if and only if
$\beta=0$. However, the latter condition gives a weight with an
overall factor of $(1+z)^{-2}$, which would violate the assumption that it has
finite moments of all orders. Therefore we must impose $\beta\neq 0$. The
condition that
$\Pud{-\alpha-1,\beta-1}{m}(z)\neq0$ for $z\in(-1,1)$ is satisfied in
exactly two cases
\begin{itemize}
\item[(A)]Both $\beta$ and $\alpha+1-m$ $\in (-1,0)$.
\item[(B)]Both $\beta$ and $\alpha+1-m$ $\in (0,\infty)$.
\end{itemize}
For (A) we have $ m-2<\alpha< m-1$, while for
(B) we have $\alpha>m-1$. Therefore, in both cases
$\Pud{-1-\alpha,\beta-1}{m}(1) \neq 0$.
According to identity \eqref{eq:Pdegen}, we also
require
\begin{equation}
\label{eq:al1mnotin}
\alpha+1-m-\beta\notin\{0,1,\ldots, m-1\}.
\end{equation}
If this condition is violated, then $\deg
\Pud{-\alpha-1,\beta-1}{m}(z)<m$ and, therefore, the codimension (see
below for discussion) is $\alpha+1-m-\beta$, rather than $m$. We
therefore append condition \eqref{eq:al1mnotin} to the assumptions in
(A) and (B), to complete the following
\begin{prop}
\label{prop:albe1}
Suppose that $m\geq 1$. The measure ${\hat{W}}_{\alpha,\beta,m} dz,\;
z\in(-1,1)$ is positive definite with finite moments of all orders if and only
if $\alpha,\beta,m$ satisfy one of the following conditions
\begin{itemize}
\item[(A)] $\beta,\alpha+1-m\in (-1,0)$.
\item[(B)] $\beta,\alpha+1-m\in (0,\infty)$.
\end{itemize}
In order to ensure that $\deg \Jac{-\alpha-1}{ \beta-1}{m} = m$ it is
also necessary to require that $\alpha+1-m-\beta\notin\{0,1,\ldots,
m-1\}$.
\end{prop}
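Proposition \ref{prop:albe1} admits a quick numerical confirmation: for parameters in either regime the denominator $\Jac{-{\alpha}-1}{{\beta}-1}{m}$ keeps a constant sign on $(-1,1)$. The Python sketch below (our own, with the classical recurrence extended to negative parameters) samples one admissible parameter choice from each case.

```python
def jac(n, a, b, x):
    # classical Jacobi three-term recurrence, valid for real a, b
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 0.5*(a - b) + 0.5*(a + b + 2.0)*x
    for k in range(2, n + 1):
        c1 = 2.0*k*(k + a + b)*(2*k + a + b - 2)
        c2 = (2*k + a + b - 1)*(a*a - b*b)
        c3 = (2*k + a + b - 1)*(2*k + a + b)*(2*k + a + b - 2)
        c4 = 2.0*(k + a - 1)*(k + b - 1)*(2*k + a + b)
        p0, p1 = p1, ((c2 + c3*x)*p1 - c4*p0) / c1
    return p1

def sign_changes_on_interval(a, b, m, steps=2000):
    # sign changes of P_m^{(-a-1, b-1)} on (-1, 1)
    xs = [-0.999 + 1.998*i/steps for i in range(steps + 1)]
    vals = [jac(m, -a - 1, b - 1, x) for x in xs]
    return sum(1 for v, w in zip(vals, vals[1:]) if v*w < 0)

m = 3
case_A = sign_changes_on_interval(1.5, -0.3, m)  # beta, alpha+1-m in (-1,0)
case_B = sign_changes_on_interval(4.5, 0.7, m)   # beta, alpha+1-m in (0,inf)
```

Both sample choices also satisfy the non-degeneracy condition \eqref{eq:al1mnotin}.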
\noindent
\subsection{Exceptional Flag}
Let us define the following codimension $m$ polynomial flag $\{ U_{\alpha,\beta,m,j} \}_{j=1}^\infty$ where
\[ U_{\alpha,\beta,m,j} = \{ f(z) \in \mathcal{P}_{m+j-1}(z) :
\Jac{-{\alpha}-1}{{\beta}-1}{m} | (1+z) f' + \beta f \}.\] At the level of
flags, the factorizations \eqref{eq:TabmAB} \eqref{eq:TabBA}
correspond to the linear isomorphisms
\[ A_{\alpha+1,\beta-1,m}: \mathcal{P}_{j-1} \to U_{\alpha,\beta,m,j},\quad
B_{\alpha+1,\beta-1,m} : U_{\alpha,\beta,m,j} \to \mathcal{P}_{j-1},\quad
j=1,2,\ldots. \] Thus, the exceptional polynomials
$\hPud{\alpha,\beta}{m}{m+j}$ give a basis of the flag
$\{ U_{\alpha,\beta,m,j} \}_{j=1}^\infty$.
\begin{prop}
Suppose that $\alpha,\beta,m$ satisfy either condition (A) or condition
(B). Then $\XJac{\alpha}{\beta}{m}{m+j}(z)$ is the
unique polynomial in $U_{\alpha,\beta,m,j}$, orthogonal to
$U_{\alpha,\beta,m,j-1}$ with respect to ${\hat{W}}_{\alpha,\beta,m}\,dz, \;
z\in (-1,1)$ that satisfies the normalizing condition
\eqref{eq:hP+1}.
\end{prop}
Having analyzed the factorizations that give rise to the $X_m$-Jacobi polynomials, and the conditions on the parameters that ensure their $L^2$-orthogonality, we now turn our attention to the properties of their zeros.
\subsection{Zeros of exceptional Jacobi polynomials}
Let us refer to the real zeros of $\hPud{\alpha,\beta}{m}{m+j}(z)$ in
$z\in (-1,1)$ as the \emph{regular zeros}. All other zeros, whether in
$(-\infty,-1)\cup (1,\infty)$, or complex, will be said to be
\emph{exceptional zeros}.
\begin{prop}
Suppose that $\alpha,\beta,m$ obey either condition (A) or
condition (B). Then the regular zeros of
$\hPud{\alpha,\beta}{m}{m+j}(z)$ are simple.
\end{prop}
\begin{proof}
This follows from \eqref{eq:hPdef} \eqref{eq:BhP=P} and the
simplicity of the zeros of classical Jacobi polynomials.
\end{proof}
\begin{prop}
Suppose that $\alpha,\beta,m$ obey either condition (A) or
condition (B). Then $\hPud{\alpha,\beta}{m}{m+j}$ has exactly $j$
regular zeros and $m$ exceptional zeros.
\end{prop}
\begin{proof}
We begin by showing that $\XJac{\alpha}{\beta}{m}{m+j}$ has at least $j$
regular zeros by induction on $j$. The case $j=0$ is
trivial. Suppose now that the proposition has been established for
$j\geq 0$. Note that $\alpha+1,\beta+1,m$ always belong to class B by hypothesis. We observe also \cite[Chapter 6.72]{Sz} that $\Jac{-{\alpha}-1}{{\beta}-1}{m}(z)$
and $\Jac{-{\alpha}-2}{{\beta}}{m}(z)$ have no zeros in $z\in [-1,1]$.
Let $\zeta_1,\ldots, \zeta_i,\; i\geq j$,
be the regular zeros of $\XJac{\alpha+1}{\beta+1}{m}{m+j}$.
By \eqref{eq:hPraise} and Rolle's theorem,
$\XJac{\alpha}{\beta}{m}{m+j+1}(z)$ has at least one zero in each of the intervals
$(\zeta_1,\zeta_2), (\zeta_2,\zeta_3),\ldots,
(\zeta_{i-1},\zeta_i)$. There will also be a zero in
$(-1,\zeta_1)$ and a zero in $(\zeta_i,1)$, for a total of at least
$i+1$ zeros.
We conclude by showing that $\XJac{\alpha}{\beta}{m}{m+j}$ has at most
$j$ regular zeros. The proof is again by induction on $j$. Observe
that
\[ \XJac{\alpha}{\beta}{m}{m} = (-1)^m \left(\frac{\alpha+1-m}{\alpha+1}\right)
\Jac{-{\alpha}-2}{{\beta}}{m},\] and the latter has no zeros in
$[-1,1]$. Relation \eqref{eq:hPlower} shows that between two regular
zeros of $\XJac{\alpha}{\beta}{m}{m+j}$ there is at least one zero of
$\XJac{\alpha+1}{\beta+1}{m}{m+j-1}$. Hence, if we assume that the
latter has at most $j-1$ regular zeros, then the former has at most
$j$ regular zeros.
\end{proof}
\subsection{Asymptotic behaviour of the zeros}
Our next goal is to derive a representation for the $X_m$-Jacobi polynomials that
is amenable to asymptotic analysis.
\begin{prop}
The following identity holds:
\begin{align}
\label{eq:xjacaltrep}
&(-1)^m \XJac{\alpha}{\beta}{m}{m+j} = \\ \nonumber &\qquad
\frac{\alpha-\beta-m+1}{1+\alpha+j}
\Pud{-\alpha,\beta}{m-1}\left(\frac{j}{\alpha+\beta+2j}
\Pud{\alpha,\beta}{j}-\frac{\alpha+j}{\alpha+\beta+2j}
\Pud{\alpha,\beta}{j-1} \right) + \\ \nonumber &\qquad
\left(\frac{1+\alpha-m}{1+\alpha+j} \Pud{-2-\alpha,\beta}{m} +
\frac{j}{1+\alpha+j} \Pud{-\alpha-1,\beta-1}{m} \right)
\Pud{\alpha,\beta}{j}\,.
\end{align}
\end{prop}
\begin{proof}
We begin with \eqref{eq:hPrel2} and apply \eqref{eq:z-1P'} as well
as the following classical identities:
\begin{align*}
\Pud{1+\alpha,\beta-1}{j} &= \Pud{\alpha+1,\beta}{j-1} +
\Pud{\alpha,\beta}{j}\\
(\alpha+\beta+2j)(z-1) \Pud{\alpha+1,\beta}{j-1} &= 2j
\Pud{\alpha,\beta}{j} - 2(\alpha+j) \Pud{\alpha,\beta}{j-1}
\end{align*}
Substituting these identities into \eqref{eq:hPrel2} and collecting terms
yields \eqref{eq:xjacaltrep}.
\end{proof}
\begin{prop}
Suppose that $\alpha,\beta>-1$. As $j\to \infty$ we have the
following asymptotic behaviour for $z$ in compact sets of $\mathbb
C\backslash[-1,1]$
\begin{equation}
\label{eq:hPasym}
\hPud{\alpha,\beta}{m}{m+j} -
(-1)^m\Jac{-{\alpha}-1}{{\beta}-1}{m} \Jac{{\alpha}}{{\beta}}{j}\rightrightarrows 0,\qquad j\to\infty.
\end{equation}
\end{prop}
\begin{proof}
We make use of the following well known ratio asymptotics formula
for classical Jacobi polynomials:
\begin{equation}
\frac{\Pud{\alpha,\beta}{j-1}(z)}{\Pud{\alpha,\beta}{j}(z)}
\rightrightarrows \frac{1}{z+\sqrt{z^2-1}},\quad z\notin [-1,1].
\end{equation}
The conclusion now follows directly from \eqref{eq:xjacaltrep}.
\end{proof}
As a straightforward consequence, the following corollary describes the
asymptotic behaviour of the
zeros of exceptional Jacobi polynomials.
\begin{cor}
The $j$ regular zeros of $ \hPud{\alpha,\beta}{m}{m+j}$ approach the zeros of
the classical Jacobi polynomial $\Pud{\alpha,\beta}{j}$ as $j\to\infty$, while
the $m$ exceptional zeros of $\hPud{\alpha,\beta}{m}{m+j}$ approach the zeros of
$\Pud{-\alpha-1,\beta-1}{m}$.
\end{cor}
The Heine-Mehler formula for the classical Jacobi polynomials states
\begin{equation}
\label{eq:hmjacclass}
n^{-\alpha} \Pud{\alpha,\beta}{n}(\cos(z/n)) \rightrightarrows
(z/2)^{-\alpha}J_\alpha(z).
\end{equation}
The $X_m$-Jacobi polynomials satisfy a generalized Heine-Mehler
formula, given by the following proposition.
\begin{prop}
When $n\to \infty$, we get
\begin{equation}
\label{eq:hmxjac}
n^{-\alpha} \hPud{\alpha,\beta}{m}{n}(\cos(z/n)) \rightrightarrows
(-1)^m(z/2)^{-\alpha}
\binom{m-1-\alpha}{m} J_\alpha(z),\quad n\geq m.
\end{equation}
\end{prop}
\begin{proof}
Taking the limit $j\to\infty$ in \eqref{eq:xjacaltrep} and using the
classical Heine-Mehler formula \eqref{eq:hmjacclass} leads to the desired
result,
keeping in mind also that
\[ \Pud{-\alpha-1,\beta-1}{m}(1) = \binom{m-1-\alpha}{m}.\]
\end{proof}
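Formula \eqref{eq:hmxjac} can again be verified numerically. The Python sketch below (ours; the parameter values are chosen to satisfy condition (B) and \eqref{eq:al1mnotin}) evaluates the exceptional polynomial via the expanded form of \eqref{eq:hPdef} and the Bessel function via its power series.

```python
import math

def jac(n, a, b, x):
    # classical Jacobi three-term recurrence, valid for real a, b
    if n < 0:
        return 0.0
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 0.5*(a - b) + 0.5*(a + b + 2.0)*x
    for k in range(2, n + 1):
        c1 = 2.0*k*(k + a + b)*(2*k + a + b - 2)
        c2 = (2*k + a + b - 1)*(a*a - b*b)
        c3 = (2*k + a + b - 1)*(2*k + a + b)*(2*k + a + b - 2)
        c4 = 2.0*(k + a - 1)*(k + b - 1)*(2*k + a + b)
        p0, p1 = p1, ((c2 + c3*x)*p1 - c4*p0) / c1
    return p1

def xjac(a, b, m, j, z):
    # exceptional Jacobi polynomial via the expanded form of eq:hPdef
    return ((-1.0)**m / (a + 1 + j)) * (
        0.5*(1 + a + b + j)*(z - 1)*jac(m, -a - 1, b - 1, z)*jac(j - 1, a + 2, b, z)
        + (a + 1 - m)*jac(m, -a - 2, b, z)*jac(j, a + 1, b - 1, z))

def bessel_j(a, z):
    # J_a(z) by its power series
    return sum((-1)**k / (math.factorial(k)*math.gamma(k + a + 1))*(z/2.0)**(2*k + a)
               for k in range(60))

def binom(x, k):
    prod = 1.0
    for i in range(k):
        prod *= x - i
    return prod / math.factorial(k)

a, b, m, z = 2.5, 0.7, 2, 1.0
target = (-1)**m * (z/2.0)**(-a) * binom(m - 1 - a, m) * bessel_j(a, z)
n = 2000
approx = n**(-a) * xjac(a, b, m, n - m, math.cos(z/n))
rel_err = abs(approx - target) / abs(target)
```

With $n=2000$ the deviation is small, consistent with the expected $O(1/n)$ rate.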
\section{Summary and Open problems}
We have provided suitable representations of exceptional polynomials in terms
of their classical counterparts by exploiting the isospectral Darboux
transformations that connect them. These representations allow us to derive
Heine-Mehler type formulas for the exceptional Jacobi and Laguerre polynomials,
which describe the asymptotic behaviour of their \textit{regular zeros} (those
lying in the interval of orthogonality). The behaviour of the regular zeros of
exceptional polynomials follows the same Bessel asymptotics as the zeros of
their classical counterparts. We have also proved interlacing between
the zeros of exceptional and classical polynomials, while the zeros of
consecutive exceptional polynomials also interlace according to their
Sturm-Liouville character. As for the \textit{exceptional zeros} (those lying
outside the interval of orthogonality) we have established their number and
location and we have proved that for fixed codimension $m$ and large degree $n$
they approach the zeros of a classical polynomial. We have performed a careful
analysis of the admissible ranges of the parameters that ensure a well defined
Sturm-Liouville problem. We have also given raising and lowering relations for the exceptional polynomials. These relations correspond to a \textit{shape-invariant} factorization, i.e. a Darboux transformation that falls within the same class of operators, with a shift in the parameters (see for instance \eqref{sifact1}--\eqref{sifact2}), and they imply that the associated potentials in quantum mechanics will be exactly solvable and shape invariant.
It was recently noticed that more families of exceptional orthogonal polynomials can be constructed through
\emph{multi-step} Darboux or Darboux-Crum transformations \cite{GKM12a}, an idea that has been further developed
in \cite{Grandati1,Quesne3,SO5}. In this work we have analyzed the zeros of exceptional orthogonal polynomials
that can be obtained from the classical ones by a 1-step Darboux transformation.
These polynomials can be written as a first order linear
differential operator acting on their classical counterparts and the
exceptional weight is a classical weight divided by the square of a classical
polynomial with zeros outside the interval of orthogonality. An open problem is
to extend this analysis to multi-step exceptional families, where exceptional
polynomials are obtained by the action of an $m$-th order differential operator.
We believe that all exceptional orthogonal polynomials can be obtained
from a classical system by a multi-step Darboux transformation, \cite{GKM12c},
and the exceptional weight for these systems will have in its denominator the
Wronskian of all the factorizing functions, which are essentially classical
orthogonal polynomials. The characterization of all such Wronskians whose zeros
lie outside the interval of orthogonality becomes then a crucial question.
The zeros of a classical polynomial are easy to locate, so its parameters can
readily be constrained to push them outside the interval of orthogonality,
making the exceptional weight regular. The question becomes much more involved when dealing with a Wronskian
of classical polynomials, as happens in the multi-step case. However, this question must be addressed in order to select those
multi-step weights that are non-singular.
The positions of the zeros of Wronskians of consecutive Hermite polynomials have
been investigated numerically by Clarkson \cite{clarkson1}, since these functions
appear as rational solutions to nonlinear differential equations of Painlev\'e
type \cite{clarkson2}. A further numerical analysis, together with some
conjectures in a more general case, has recently been put forward by Felder et
al.~\cite{felder}, in connection with the theory of
monodromy free potentials. We stress that a Wronskian of classical polynomials
might have no real zeros even if the polynomials themselves do. The Adler-Krein
theorem \cite{adler,krein} provides a useful criterion to identify these cases,
and it is actually a much more general result for eigenfunctions of a
Schr\"odinger operator, not just polynomials. A generalization of this result is
being carried out by Grandati, who is extending the analysis to factorizing
functions of isospectral Darboux transformations \cite{Grandati1}, as opposed to
the Adler-Krein case which refers only to state-deleting Darboux
transformations, for which the factorizing functions are true $L^2$
eigenfunctions. The most general problem of Wronskians that involve factorizing
functions of mixed type remains unsolved.
\vskip 0.6cm
\paragraph{\textbf{Acknowledgements}}
\thanks{
The research of the first author (DGU) has been
supported by Direcci\'on General de
Investigaci\'on, Ministerio de Ciencia e Innovaci\'on of Spain, under
grant MTM2009-06973. The work of the second author
(FM) has been supported by Direcci\'on General de
Investigaci\'on, Ministerio de Ciencia e Innovaci\'on of Spain, grant
MTM2009-12740-C03-01. The research of the third author (RM) was supported in
part by NSERC grant RGPIN-228057-2009.
}
\section{Introduction}
Pre-trained Transformer models~\cite{Vaswani2017AttentionIA, Devlin2019BERTPO} have become a standard building block for many natural language processing (NLP) tasks, providing robust language representations which can be specialized on ``downstream'' classification and generation tasks. Despite their success, these models have large parameter counts, which limits their usability. Techniques for reducing these parameter counts and the corresponding computational overheads have become vital, especially given the recent breakneck pace of model growth~\cite{Radford2019LanguageMA, MTNLG}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{media/Fig1_squad_F1.pdf}
\vspace{-1.2em}
\caption{Performance overview relative to current state-of-the-art unstructured pruning methods \cite{Chen2020TheLT, Xu2021RethinkingNP, Sanh2020MovementPA}, in this order, on the 12-layer BERT-base-uncased model and the question-answering SQuAD v1.1 dataset.}
\label{fig:squad_F1}
\vspace{-0.8em}
\end{figure}
Several compression approaches are known for large language models (LLMs). One example is Knowledge Distillation (KD)
\cite{Hinton2015DistillingTK}, which led to smaller models such as DistilBERT \cite{Sanh2019DistilBERTAD}, MobileBERT \cite{Sun2020MobileBERTAC}, and TinyBERT \cite{Jiao2020TinyBERTDB}, which approximate the original model with minor accuracy loss. Other work has leveraged lower-precision representations to produce quantized models such as Q8BERT \cite{Zafrir2019Q8BERTQ8}, TernaryBERT~\cite{zhang2020ternarybert}, and Q-BERT~\cite{shen2020q}. An orthogonal approach, which is our primary focus, has been to apply fine-grained methods such as unstructured pruning to produce model families such as SparseBERT \cite{Xu2021RethinkingNP} and PruneBERT \cite{Sanh2020MovementPA}, which compress by removing individual weights. Figure \ref{fig:squad_F1} provides a comparative overview of state-of-the-art results for unstructured pruning on a standard model (BERT-base).
We investigate improved methods for unstructured and semi-structured (block) pruning approaches at BERT scale, by leveraging the second-order (curvature) approach pioneered by the Optimal Brain Surgeon framework~\cite{LeCun1989OptimalBD}. We put our results in the context of a \emph{compound compression} framework, which combines unstructured pruning with structured pruning and quantization, showing that these methods can be complementary.
To realize these compression gains in practice, we tailor the resulting compressed models to execute efficiently on a sparsity-aware CPU-based runtime engine~\cite{deepsparse}, leading to order-of-magnitude speedups at tolerable accuracy loss.
\noindent In summary, our contributions are as follows:
\setlist{nolistsep}
\begin{itemize}[noitemsep]
\item We perform a thorough exploration of weight pruning approaches applied to LLMs, including lottery-ticket, movement pruning, gradual magnitude and second-order methods.
\item We introduce a general second-order pruning method called \emph{Optimal BERT Surgeon (O-BERT-S)}, which also supports block pruning, and is the first second-order method to be both highly-accurate, and scalable to the dimensionality of BERT models.
\item We illustrate the benefits of this pruning approach in two practical scenarios. The first is comparing accuracy-vs-parameter-count relative to existing pruning methods for LLMs. In this context, Optimal BERT Surgeon sets new state-of-the-art results, significantly improving upon existing pruning methods. For illustration, when pruning BERT-base, Optimal BERT Surgeon outperforms Movement Pruning (MvP), the most accurate prior approach, by more than 2\% absolute F1 score at the same sparsity, and can match the accuracy of MvP with 3x fewer parameters.
\item We investigate the applicability of this pruning approach in a framework which \emph{compounds} popular compression approaches for LLMs, i.e. applying pruning in combination with layer dropping and/or quantization.
In this context, we show that our method can provide order-of-magnitude speedups on a sparsity-aware inference at tolerable accuracy loss, on commodity CPUs. For illustration, on a popular question-answering benchmark, we observe 8.4x higher inference throughput at less than 1\% relative accuracy loss to the dense BERT-base model, 10x speedup at < 2\% accuracy loss, 15x speedup at < 3\% drop, and 29x speedup at < 7.5\% relative accuracy loss. Our compression recipes are simple and easily-extensible.
\end{itemize}
\section{Background and Related Work}
\noindent\textbf{Transformer LLM Models} are usually built from multiple Transformer layers, which capture long-term input dependencies via self-attention \cite{Vaswani2017AttentionIA}. Each Transformer layer usually has some variation of two sub-components: \textit{multi-head attention (MHA)} and a \textit{fully connected feed-forward network (FFN)}. MHA contains many self-attention heads, each of which has three sub-components: queries, keys, and values. The output of the attention component is the concatenation of the outputs of all attention heads, and is fed into the FFN.
The FFN is a fully connected feed-forward network consisting of linear transformations and an activation function such as ReLU or GeLU.
Given the massive size of well-performing models, there has been growing interest in LLM compression.
Existing approaches build on compression for computer vision (CV) models, and successful methods are usually able to scale between tasks.
While many successful compression approaches exist, Transformer models have been shown to be fragile~\cite{DBLP:journals/corr/abs-2105-06990}, as minor perturbations can lead to model collapse.
We now briefly overview existing compression approaches.
\noindent\textbf{Unstructured pruning} techniques remove individual weights by setting them to zero, which can be exploited for both storage and computational speedup.
Pruning schemes are often motivated by weight saliency metrics which minimize the loss in accuracy due to pruning.
It is common to perform pruning in iterative steps, each of which removes weights from the network until a certain desired sparsity level is reached.
A popular saliency metric for weight pruning is weight magnitude, also known as \emph{magnitude pruning}~\cite{Han2015ADN, Gale2019TheSO}.
While popular for CNNs, this approach is believed to falter for LLMs~\cite{Xu2021RethinkingNP, Sanh2020MovementPA}.
We will show that, with the right parametrization, magnitude pruning can be competitive for LLMs.
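For reference, one magnitude-pruning step amounts to the following (an illustrative NumPy sketch; the function name and interface are ours):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(sparsity * w.size)
    mask = np.ones(w.size, dtype=bool)
    if k > 0:
        # indices of the k smallest-magnitude weights, taken over the flat view
        mask[np.argsort(np.abs(w).ravel())[:k]] = False
    return w * mask.reshape(w.shape)
```

Gradual schemes apply such a step repeatedly with an increasing sparsity target, interleaved with finetuning.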
\noindent\textbf{Second-order unstructured pruning} methods~\cite{LeCun1989OptimalBD, hassibi1993second, Singh2020WoodFisherES, Frantar2021EfficientMA} were developed in the context of vision models, and leverage complex approximations of model curvature, which can lead to improved pruning decisions.
However, {implementations of second-order pruning methods} do not immediately scale to BERT models, as they require an approximation of the inverse Hessian, which is expensive to store and compute with.
The approach we propose here is a variant of the WoodFisher / M-FAC methods~\cite{Singh2020WoodFisherES, Frantar2021EfficientMA}, carefully adapted to BERT models for both scale and accuracy. Specifically, we propose the first such method for BERT dimensionality, and extend existing second-order methods to semi-structured (block) compression.
\noindent \textbf{Movement Pruning} (MvP) \cite{Sanh2020MovementPA, lagunas21block} is specifically designed for pruning pre-trained LLMs during fine-tuning; intuitively, it removes the weights with the lowest rate of change during finetuning, even though they may not be near zero.
The resulting family of unstructured pruned models is called PruneBERT, and was the first to achieve high sparsity (80-90\%) with relatively small accuracy loss.
Our approach outperforms MvP in terms of accuracy at the same sparsity levels.
\noindent{\textbf{Upstream pruning}} approaches compress the pre-trained models themselves, during their training process, reducing the need for task-specific tuning of the pruning process.
\citet{Chen2020TheLT} specifically examined strategies for generating sparse pre-trained ``lottery tickets,''~\cite{Frankle2019TheLT} which would transfer well to a set of downstream datasets. Their key limitation, which we illustrate later, is that their approach incurs major accuracy loss even at moderate sparsities.
Recent work by~\citet{zafrir2021prune} also explored compressing \emph{pre-trained} models, and established that gradual magnitude pruning can be successful in this context, with careful tuning.
They established that the pre-trained sparse ``once-for-all'' (OFA) models can be finetuned to be competitive with downstream methods such as MvP.
Relative to this line of work, in this paper we examine the performance of the above methods, notably MvP, OFA, and Lottery Tickets, relative to a new second-order pruning implementation.
The approach we propose consistently improves upon the compression-accuracy trade-offs given by all these prior methods, both in pre-training and finetuning regimes.
\noindent\textbf{Structured Pruning} focuses on removing logical groupings of parameters without affecting accuracy. For LLMs,
structural pruning focuses on decreasing the number of Transformer layers, decreasing the hidden size, or removing attention heads, and requires structural understanding of the model. Specifically, \citet{Michel2019AreSH} and \citet{Voita2019AnalyzingMS} demonstrated that the attention heads vary in terms of importance, and that for some tasks nearly 40\% of heads can be removed without major impact on accuracy.
Other work has focused on showing that many of the layers can be removed~\cite{Sridhar2020UndividedAA}, on the order in which layers are removed \cite{DBLP:journals/corr/abs-2004-03844}, and on methods for improving model robustness to layer dropping~\cite{fan2019reducing}. Models like BORT \cite{DBLP:journals/corr/abs-2010-10499} combine structured pruning with an optimization approach to produce smaller models.
In some of our experiments, we will apply standard ``direct'' layer dropping in conjunction with unstructured and semi-structured pruning.
\noindent\textbf{Semi-Structured Pruning} is an intermediate approach, by which smaller groups, e.g. rectangular sets of weights~\cite{lagunas21block}, are set to zero. This approach has recently gained in popularity thanks to efficient computational support.
Here, we extend the second-order pruning formulation to such groupings, and show results for a specific grouping supported by a CPU inference engine.
\noindent\textbf{Quantization} represents weights and activations in lower precision~\cite{Courbariaux2016BinarizedNN}, and was used to obtain quantized language models such as Q8BERT \cite{Zafrir2019Q8BERTQ8}.
TernaryBERT~\cite{zhang2020ternarybert} used a complex distillation-based approach to obtain extremely low-bit representations, whereas~\citet{fan2020training} proposes a scheme which randomly quantizes groups of weights during training, leading to more accurate compressed models.
Finally, Q-BERT~\cite{shen2020q} employed approximate information about the Hessian spectrum in order to choose the quantization bit-widths applied to each layer.
Follow-up work~\cite{yu2022hessian} applied a similar approach to structured pruning in the context of convolutional and language models, using an approximation of the Hessian trace to decide which layers should be pruned.
We note that this approach is quite different from the one we employ here, as we use completely different inverse-Hessian approximations to perform pruning decisions.
The focus of this paper is on \emph{weight pruning}, and on computational speedups achievable on commodity CPUs.
As such, the methods we investigate are rather different from quantization approaches. Moreover, it is impossible to directly compare to low-bitwidth quantized models in terms of inference speeds, as CPU inference frameworks do not currently support such custom bitwidth formats.
We will therefore only make use of standard Quantization-Aware Training (QAT) to 8-bit weights, which is well-supported on Intel CPUs, and showcase the resulting computational speedups in conjunction with layer-dropping and weight pruning.
\noindent\textbf{Knowledge Distillation (KD)} ~\cite{Hinton2015DistillingTK} trains a smaller ``student'' model against the outputs of a larger and more accurate ``teacher'' model, specifically adding a loss component which minimizes the KL-divergence between the two output distributions. KD uses a hardness parameter to control the mixture of regular loss and distillation loss, and a temperature parameter to control the softness of the probability distribution. Models trained with KD reformulate their loss as a linear mixture of regular and distillation loss, which is the approach we adopt in our setup too.
Approaches like DistilBERT \cite{Sanh2019DistilBERTAD} have the student approximate the output of the teacher model, while TinyBERT \cite{Jiao2020TinyBERTDB}, MobileBERT~\cite{Sun2020MobileBERTAC}, and MiniLM \cite{Wang2020MiniLMDS} have the student approximate the teacher's intermediate representations, obtaining better results at the cost of higher complexity.
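Schematically, the distillation loss mixture described above can be sketched as follows (an illustrative NumPy version; the hardness/temperature parametrization mirrors the description, and all names are ours):

```python
import numpy as np

def log_softmax(x):
    # numerically stable log-softmax over the last axis
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def kd_loss(student_logits, teacher_logits, labels, hardness=0.5, T=2.0):
    """Linear mixture of regular cross-entropy and distillation KL-divergence."""
    # regular loss: cross-entropy against the hard labels
    log_p = log_softmax(student_logits)
    ce = -log_p[np.arange(len(labels)), labels].mean()
    # distillation loss: KL between temperature-softened output distributions,
    # rescaled by T^2 as is standard to keep gradient magnitudes comparable
    log_p_t = log_softmax(teacher_logits / T)
    log_p_s = log_softmax(student_logits / T)
    kl = (np.exp(log_p_t) * (log_p_t - log_p_s)).sum(axis=-1).mean() * T * T
    return hardness * kl + (1.0 - hardness) * ce
```

When teacher and student agree, the KL term vanishes and only the hard-label cross-entropy remains.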
\section{Optimal BERT Surgeon (O-BERT-S) Pruning}
\label{sec:methods}
\subsection{Generalized Second-Order Block Pruning}
The pruning problem starts from a well-optimized dense model \( \vect{w}^* \in \mathbb{R}^d \), and aims to find a sparse version of $\vect{w}^*$, where many of the weights are set to zero,
and the remaining weights may be updated accordingly in order to preserve the loss.
It is common for this process to occur gradually, i.e. by progressively removing the weights.
A classic approach~\cite{LeCun1989OptimalBD, hassibi1993second} for ``optimal'' pruning of weights from $\vect{w}^*$ at a step is to expand the loss function $\mathcal{L}$ locally around $\vect{w}^*$ with respect to a sparse 0/1 weight mask $\vect{M}$. If we denote by $\mathbf{w}_M = (\vect{M} \odot \vect{w}^*)$, the model resulting from the Hadamard (element-wise) product between \( \vect{M} \in \{0,1\}^d \) and $\vect{w}^*$, we can use the Taylor expansion at $\mathbf{w}_M$ to obtain:
\begin{eqnarray*}
\mathcal{L}(\vect{w}_M) \simeq \mathcal{L}(\vect{w}^*) + (\vect{w}_M - \vect{w}^*)^\top \nabla \mathcal{L}(\vect{w}^*) \\ + \frac{1}{2} (\vect{w}_M - \vect{w}^*)^\top \mathbf{H} (\vect{w}^*) (\vect{w}_M - \vect{w}^*).
\end{eqnarray*}
Given that $\vect{w}^*$ is well-optimized, it is reasonable in practice to assume that \( \nabla \mathcal{L}(\vect{w}^*) \approx \vect{0} \)~\cite{Singh2020WoodFisherES}.
Then, the change in loss incurred by pruning a subset of weights can be expressed as
\begin{equation}
\label{eq:delta-loss}
\delta \mathcal{L} (\delta \w) \simeq \frac{1}{2} \delta \w^\top \mathbf{H} (\vect{w}^*) \delta \w
\end{equation}
\noindent where \( \delta \mathcal{L} (\delta \w) \coloneqq \mathcal{L}(\vect{w}_M) - \mathcal{L}(\vect{w}^*) \) and \( \delta \w \coloneqq \vect{w}_M - \vect{w}^* \).
In this context, a popular way of approximating the model's Hessian at $\vect{w}^*$ is via a dampened empirical Fisher Information Matrix~\cite{hassibi1993second}:
\begin{equation}
\label{eq:emp-fisher}
\mathbf{H}(\mathbf{w}) \simeq \widehat{\vect{F}} (\mathbf{w}) = \lambda \vect{I}_d + \frac{1}{m} \sum_{i=1}^{m} \nabla \mathcal{L}_i(\mathbf{w}) \nabla \mathcal{L}^\top_i(\mathbf{w}),
\end{equation}
\noindent where \( \lambda \geq 0 \) is a small dampening constant, \( \vect{I}_d \in \mathbb{R}^{d \times d} \) is the identity matrix, and \( m \) is the number of gradient outer products used to approximate the Hessian. Given the positive-definiteness of \eqref{eq:emp-fisher}, the quadratic form \eqref{eq:delta-loss} is always nonnegative, which is why we will refer to \( \delta \mathcal{L}(\delta \w) \) as a \textit{loss increase} incurred by pruning.
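In code, the dampened estimator \eqref{eq:emp-fisher} amounts to the following (an illustrative NumPy sketch; names are ours):

```python
import numpy as np

def empirical_fisher(grads, lam):
    """Dampened empirical Fisher: lam * I_d + (1/m) * sum_i g_i g_i^T.
    grads: (m, d) array of per-sample gradients."""
    m, d = grads.shape
    # grads.T @ grads accumulates the m gradient outer products at once
    return lam * np.eye(d) + grads.T @ grads / m
```

By construction the result is symmetric positive-definite, with eigenvalues bounded below by $\lambda$.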
Returning to our pruning problem, assume we wish to identify a block of weights $Q$ of a given shape whose removal by ``zero-masking'' would incur minimum increase in loss. This leads to the following constrained optimization problem:
\begin{equation}
\label{eq:opt}
\begin{split}
&\min_{\delta \w}\quad \frac{1}{2} \delta \w^\top \widehat{\vect{F}} (\vect{w}^*) \delta \w \\&\phantom{x}\mathrm{s.t.}\quad \vect{e}_k^\top \delta \w + w_k = 0, \quad \forall k \in \textrm{Q}
\end{split}
\end{equation}
\noindent where \( \vect{e}_k \in \mathbb{R}^d \) stands for the \( k \)-th canonical basis vector. This derivation is known~\cite{hassibi1993second}; however, the optimal pruning update has only been derived for pruning individual weights.
Here, we will provide a generalized solution, which applies to general $Q$.
First, for convenience, we express the system of \( |\textrm{Q}| \) equality constraints in matrix-equation form as
\( \vect{E}_\textrm{Q} \delta \w + \vect{E}_\textrm{Q}\vect{w}^* = \vect{0}, \)
where \( \vect{E}_\textrm{Q} \in \mathbb{R}^{|\textrm{Q}| \times d} \) is a matrix composed of the corresponding canonical basis vectors \( \vect{e}_k\, (\forall k \in \textrm{Q}) \) arranged in rows. This optimization problem can be solved with the method of Lagrange multipliers. Specifically, we wish to find stationary points of the Lagrangian \( \mathcal{L}(\delta \w, \boldsymbol{\lambda}) \), where \( \boldsymbol{\lambda} \in \mathbb{R}^{|\textrm{Q}|} \) denotes a vector of Lagrange multipliers. Solving the system of equations
\(
\frac{\partial \mathcal{L}(\delta \w, \boldsymbol{\lambda})}{\partial \delta \w} = \vect{0} \) and
\( \frac{\partial \mathcal{L}(\delta \w, \boldsymbol{\lambda})}{\partial \boldsymbol{\lambda}} = \vect{0} \)
yields the following expression for the optimal weight update:
\begin{equation*}
\label{eq:update}
\delta \w^* = -\widehat{\vect{F}}^{-1}(\vect{w}^*) \vect{E}_\textrm{Q}^\top \pr{\vect{E}_\textrm{Q} \widehat{\vect{F}}^{-1}(\vect{w}^*) \vect{E}_\textrm{Q}^\top}^{-1} \vect{E}_\textrm{Q} \vect{w}^*.
\end{equation*}
Now, the corresponding loss increase incurred by the optimal weight update \( \delta \w^* \), which prunes a set of weights $Q$ and updates the remaining weights, can be expressed as the saliency score of the weights $Q$, which we denote by:
\begin{equation*}
\label{eq:saliency}
\rho_\textrm{Q} = \frac{1}{2} \pr{\vect{E}_\textrm{Q} \mathbf{w}^*}^\top \pr{\vect{E}_\textrm{Q} \widehat{\vect{F}}^{-1}(\vect{w}^*)\vect{E}_\textrm{Q}^\top}^{-1} \vect{E}_\textrm{Q} \mathbf{w}^*.
\end{equation*}
We will use this saliency/importance score to rank sets of weights for pruning. As a sanity check, if we prune a single weight $w_j$ at a time, our derivations will yield the standard formulas of~\cite{hassibi1993second, Singh2020WoodFisherES}.
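Concretely, both closed-form expressions reduce to a few dense linear-algebra operations over the rows and columns indexed by $Q$. The following NumPy sketch (with illustrative names) computes them:

```python
import numpy as np

def group_saliency_and_update(w, F_inv, Q):
    """Saliency rho_Q and optimal update dw for pruning the index set Q,
    given the inverse dampened empirical Fisher F_inv of the weights w."""
    Q = np.asarray(Q)
    S = F_inv[np.ix_(Q, Q)]           # E_Q F^{-1} E_Q^T
    y = np.linalg.solve(S, w[Q])      # (E_Q F^{-1} E_Q^T)^{-1} E_Q w
    rho = 0.5 * w[Q] @ y              # loss increase (saliency score)
    dw = -F_inv[:, Q] @ y             # optimal weight update
    return rho, dw
```

As a sanity check, the update restricted to $Q$ equals $-\vect{w}^*_Q$, so the pruned weights land exactly at zero, and for $|Q|=1$ the score reduces to $w_j^2 / (2 [\widehat{\vect{F}}^{-1}]_{jj})$.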
\subsection{An Efficient Implementation}
Directly implementing and running the previously described second-order pruning approach for large language models, where the number of weights to prune, i.e. the dimensionality $d$ of $\mathbf{w} \in \mathbb{R}^d$, is very large, is infeasible. In particular, this is due to the dependence on the inverse of the dampened empirical Fisher information matrix $\widehat{\vect{F}}^{-1}(\mathbf{w}) \in \mathbb{R}^{d \times d}$, which appears in the formulations of both the saliency score and the optimal weight update.
We now describe how to circumvent these issues.
\subsubsection{Pruning the optimal set of weights}
Assume a gradual pruning setup, in which at each pruning step we wish to prune a model to a target sparsity $s \in [0, 1]$, effectively zeroing out $s \times d$ weights, in groups of size $|Q|$. Typically $s \times d \gg |Q|$, meaning that we want to remove multiple groups of weights of size $|Q|$ at the same time. Finding the optimal collection of $\frac{s \times d}{|Q|}$ groups $Q$ is an intractable combinatorial problem, due to all the correlations between groups of weights.
Specifically, the number of possible combinations is given by the binomial coefficient $\binom{n}{k}$, where $n = \frac{d}{|Q|}$ and $k = \frac{s \times d}{|Q|}$.
This problem can be alleviated by ignoring correlations between different groups of weights $Q$, and solving only for correlations between the weights within the same group.
In practice, this boils down to evaluating the saliency score $\rho_\textrm{Q}$ for each group $Q$, and pruning the $\frac{s \times d}{|Q|}$ groups with the lowest score.
As pruning many weights in the same step can make the Taylor approximation of the loss function less accurate, one can consider pruning with multiple smaller sub-steps with recomputations of the Hessian approximation in-between them (without intermediate finetuning).
While this can further improve the quality of the pruning step~\cite{Frantar2021EfficientMA}, we do not implement this additional optimization since the competing methods do not utilize recomputations.
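Ignoring cross-group correlations, one such pruning step then reduces to scoring the groups and zeroing the lowest-scoring fraction. An illustrative sketch (taking the groups to be contiguous for simplicity):

```python
import numpy as np

def prune_step(w, F_inv, group_size, sparsity):
    """One gradual-pruning step: score disjoint groups Q by rho_Q, zero the
    fraction `sparsity` of groups with the smallest scores, and apply the
    per-group optimal updates (cross-group correlations are ignored).
    Assumes w.size is divisible by group_size."""
    groups = np.arange(w.size).reshape(-1, group_size)
    scores = []
    for Q in groups:
        y = np.linalg.solve(F_inv[np.ix_(Q, Q)], w[Q])
        scores.append(0.5 * w[Q] @ y)                 # saliency rho_Q
    order = np.argsort(scores)[: int(sparsity * len(groups))]
    w_new = w.copy()
    for Q in groups[order]:
        y = np.linalg.solve(F_inv[np.ix_(Q, Q)], w[Q])
        w_new -= F_inv[:, Q] @ y                      # optimal update for group Q
    w_new[groups[order].ravel()] = 0.0                # enforce the mask exactly
    return w_new
```

Setting `group_size=1` recovers unstructured pruning, while larger groups yield the semi-structured (block) variant.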
\subsubsection{Inverse empirical Fisher computation}
The key component of the above procedure in terms of space and time complexity is computing products with the inverse empirical Fisher.
A simple direct approach would be to perform a block-wise diagonal approximation of this matrix (which we detail next), and perform direct block inversion.
However, we found experimentally that this approach is too expensive in terms of time, and quite sensitive to the choice of dampening parameter $\lambda$.
As an alternative, we rely on the fact that the matrix we wish to invert is a sum of rank-1 matrices, and employ the Woodbury/Sherman-Morrison (WSM) inversion formula.
Specifically, given a sum $(\vect{A} + \vect{u}\vect{v}^\top)$ between an invertible matrix $\vect{A}$ and an outer product of vectors $\vect{u}$ and $\vect{v}$ with compatible dimensions, the inverse $(\vect{A} + \vect{u}\vect{v}^\top)^{-1}$ can be exactly calculated as $\vect{A}^{-1} - \frac{\vect{A}^{-1}\vect{u}\vect{v}^\top\vect{A}^{-1}}{1+\vect{v}^\top\vect{A}^{-1}\vect{u}}$. Placing the expression of the empirical Fisher in the WSM formula, we obtain the following recursive formulation, where $m$ is the number of gradients employed in the approximation:
\begin{eqnarray*}
& \widehat{\vect{F}}^{-1}(\mathbf{w}) = \widehat{\vect{F}}^{-1}_m(\mathbf{w}) & \\ = & \pr{\widehat{\vect{F}}_{m-1}(\mathbf{w}) + \frac{1}{m} \nabla \mathcal{L}_m(\mathbf{w}) \nabla \mathcal{L}_m^\top(\mathbf{w})}^{-1}. &
\end{eqnarray*}
\noindent Unrolling the recursion with $\widehat{\vect{F}}^{-1}_0(\mathbf{w}) = \frac{1}{\lambda} \vect{I}_d$, we can obtain an iterative formula to exactly calculate the inverse of the empirical Fisher matrix as
\begin{eqnarray*}
\label{eq:iterative}
& \widehat{\vect{F}}^{-1}(\mathbf{w}) = \widehat{\vect{F}}^{-1}_m(\mathbf{w}) = \\ & \frac{1}{\lambda} \vect{I}_d - \sum_{i=1}^{m} \frac{\pr{\widehat{\vect{F}}^{-1}_{i-1}(\mathbf{w}) \nabla \mathcal{L}_i(\mathbf{w})}\pr{\widehat{\vect{F}}^{-1}_{i-1}(\mathbf{w}) \nabla \mathcal{L}_i(\mathbf{w})}^\top}{m + \nabla \mathcal{L}_i^\top(\mathbf{w}) \widehat{\vect{F}}^{-1}_{i-1}(\mathbf{w}) \nabla \mathcal{L}_i(\mathbf{w})}. &
\end{eqnarray*}
The iterative formulation enjoys a number of computational advantages over the direct implementation. The most notable ones are 1) avoiding explicit calls to the expensive and dampening-sensitive matrix inversion operations, and 2) allowing successive updates of the inverse as new gradients are computed, never needing to store all $m$ gradients of size $d$ and thus significantly reducing memory requirements.
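The recursion translates directly into a streaming procedure that processes one gradient at a time (a dense illustrative sketch, before the block-wise approximation of the following subsection; names are ours):

```python
import numpy as np

def fisher_inverse(grads, lam):
    """Stream m gradients through the Woodbury/Sherman-Morrison recursion to
    build (lam * I_d + (1/m) * sum_i g_i g_i^T)^{-1} without any explicit
    matrix inversion, touching one gradient at a time."""
    m, d = grads.shape
    F_inv = np.eye(d) / lam           # F_0^{-1} = (1/lam) I_d
    for g in grads:
        Fg = F_inv @ g
        # rank-1 downdate from the WSM formula
        F_inv -= np.outer(Fg, Fg) / (m + g @ Fg)
    return F_inv
```

Up to floating-point error, the result coincides exactly with directly inverting the dampened empirical Fisher, while never storing all $m$ gradients at once.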
\subsection{Memory and run-time complexity}
Computing and storing the inverse empirical Fisher $\widehat{\vect{F}}^{-1}(\mathbf{w}) \in \mathbb{R}^{d \times d}$ is prohibitively expensive for modern LLMs, which have hundreds of millions of parameters, due to the quadratic complexity on the number of weights $d$. However, \citet{Singh2020WoodFisherES} have shown that a diagonal block-wise approximation of the empirical Fisher matrix can be very accurate for pruning of convolutional neural networks. We adapt the same approach here, in the context of LLMs. Thus, for blocks of width $B$ along the main diagonal, memory requirements for the computation of the inverse Fisher matrix are reduced from the quadratic $\mathcal{O}(d^2)$ to a linear $\mathcal{O}(Bd)$ dependence on the number of weights $d$. At the same time, run-time complexity relaxes from $\mathcal{O}(md^2)$ to $\mathcal{O}(mBd)$.
As we will show, this computation can be efficiently and accurately performed for moderate values of $m$ and $B$.
{Another alternative we investigated was the matrix-free approach of~\citet{Frantar2021EfficientMA}, which does not require a block-wise approximation and has complexity $\Theta(dm)$. However, our investigation showed that this approach required high values of $m$ to be accurate, which leads to high memory cost in the case of BERT models.}
\subsection{Efficient and scalable implementation}
On the practical side, we have identified general hyper-parameters $B = 50$ for the block size, and $m = 1024$ for the number of gradients which produce state-of-the-art pruning results for all analyzed BERT models.
Moreover, for these parameter values, the block-wise approximation of $\widehat{\vect{F}}^{-1}(\mathbf{w})$ can be implemented very efficiently on modern accelerators. Specifically, we take advantage of the fact that such hardware favors batched matrix operations, and that the blocks of size $B \times B$ in $\widehat{\vect{F}}^{-1}(\mathbf{w})$ are independent, making it possible to update the inverse Fisher estimate with new gradients in one shot. We denote by $N_B = \frac{d}{B}$ the total number of blocks, i.e. the batch dimension.
The procedure works as follows. First, we compute batched matrix-vector products $\widehat{\vect{F}}^{-1}_{i-1}(\mathbf{w}) \nabla \mathcal{L}_i(\mathbf{w}) \in \mathbb{R}^{N_B \times B}$ and scalar denominators \[m + \nabla \mathcal{L}^\top_i(\mathbf{w}) \widehat{\vect{F}}^{-1}_{i-1}(\mathbf{w}) \nabla \mathcal{L}_i(\mathbf{w}) \in \mathbb{R}^{N_B}.\]
Then, we update the inverse Fisher for each block by computing the scalar-scaled outer products
\[\pr{\widehat{\vect{F}}^{-1}_{i-1}(\mathbf{w}) \nabla \mathcal{L}_i(\mathbf{w})} \pr{\widehat{\vect{F}}^{-1}_{i-1}(\mathbf{w}) \nabla \mathcal{L}_i(\mathbf{w})}^\top,\]
\noindent which are in $\mathbb{R}^{N_B \times B \times B}.$
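In an illustrative NumPy form (names are ours), one such batched update step reads:

```python
import numpy as np

def batched_wsm_step(F_inv, g, m):
    """Apply one Woodbury/Sherman-Morrison step to all N_B diagonal blocks
    at once. F_inv: (N_B, B, B) block-diagonal inverse-Fisher estimate;
    g: one gradient reshaped to (N_B, B)."""
    Fg = np.einsum('nij,nj->ni', F_inv, g)       # batched F^{-1} g, shape (N_B, B)
    denom = m + np.einsum('ni,ni->n', g, Fg)     # scalar denominator per block
    F_inv -= np.einsum('ni,nj->nij', Fg, Fg) / denom[:, None, None]
    return F_inv
```

Iterating this over the $m$ gradients is equivalent to running the dense recursion independently on each $B \times B$ block.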
In practice, for the 12-layer BERT-base model with $d = 85$M encoder weights to prune and block size $B=50$, the $\mathcal{O}(Bd)$ memory requirement translates to approximately 17GB, which can easily be kept on an RTX 3090 card with 24GB of memory. While this amount of memory is available on high-performance GPUs, it is also straightforward to split the $N_B \times B \times B$ tensor along the batch dimension $N_B$ and utilize additional GPUs, or even memory swapping with CPU memory.
\section{Experimental Validation}
\label{sec:experiments}
\subsection{Compound Compression Setup}
\label{sec:compound}
\begin{figure*}[ht]
\includegraphics[scale=0.7]{media/Fig2_compound-compression.pdf}
\centering
\caption{Overview of the compound compression approach when applied \emph{downstream}.}
\label{fig:compoundingsparse}
\end{figure*}
The general compression setup we consider, illustrated in Figure \ref{fig:compoundingsparse}, compounds multiple compression methods, applied iteratively, in order to progressively remove parameter redundancies, at increasingly low levels of granularity.
In the basic setting, we are given a large ``upstream'' model pretrained on a large initial task, usually some form of language modeling, and wish to obtain a compressed ``downstream'' model, specialized to a finetuning task. Due to space constraints, we mainly focus the presentation on the case where compression is only applied on ``downstream'' applications, but we also present results when compression is applied to the ``upstream'' pretrained model.
\noindent\textbf{Downstream Compound Compression (DCC)} is a combination of existing techniques during finetuning, as we employ structured compression, distillation, unstructured pruning, and quantization.
A critical component, however, is the \emph{order} in which the compression techniques are applied:
given an original large model, we first apply \emph{structured compression} to remove \emph{logical structures}, e.g. layers and their intermediate representations, from the model. These structurally compressed models can then be pretrained on language modeling tasks. In conjunction, we leverage information from the uncompressed model by using a \emph{knowledge distillation} loss to train the smaller model with respect to the teacher's output. Both pretraining and distillation have been shown to impart different linguistic aspects helpful for downstream tasks \cite{Turc2019WellReadSL}.
The second step aims to shed the remaining surplus parameters by performing \emph{unstructured compression}, via iterative second-order or magnitude pruning, followed by an additional finetuning period to recover accuracy.
This stage brings some of the largest compression gains, as well as significant improvements versus prior work. For example, we will be able to compress models by up to 10x with minimal accuracy loss, and more than double the compression ratio for the same accuracy relative to prior work~\cite{Sanh2020MovementPA}.
The third and last step applies quantization on top of the resulting sparse model, via quantization-aware training (QAT).
Clearly, some of the above compression steps are optional, depending on the inference infrastructure: e.g., one can skip quantization if there is no hardware support for lower-precision operations.
We note that, while prior work has also explored \emph{pairs} of compression methods in conjunction, e.g.~\cite{Sanh2020MovementPA}, we believe we are the first to examine compounding \emph{all} methods in a single pipeline.
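The ordering of the stages above can be made concrete with a toy sketch. The stage functions below are illustrative stand-ins (each only records its name), not actual training code:

```python
# Illustrative stand-ins for the DCC stages; real stages would transform weights.
def drop_layers(m, keep):       return m + [f"drop_to_{keep}_layers"]
def distill(m):                 return m + ["distill_from_teacher"]
def gradual_prune(m, sparsity): return m + [f"prune_to_{int(sparsity * 100)}pct"]
def finetune(m):                return m + ["finetune"]
def qat(m):                     return m + ["qat_int8"]

def compound_compress(model, target_layers=6, target_sparsity=0.8, quantize=True):
    model = drop_layers(model, keep=target_layers)  # 1) structured compression first,
    model = distill(model)                          #    trained with distillation
    model = gradual_prune(model, target_sparsity)   # 2) then unstructured pruning
    model = finetune(model)                         #    plus recovery finetuning
    if quantize:                                    # 3) optional quantization last
        model = qat(model)
    return model

print(compound_compress([]))
```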
This framework is naturally extensible: we also present experiments on \textbf{Upstream Compound Compression (UCC)}, which adapts the above scheme to the case where we wish to already obtain a compressed model on the large-scale pretraining setup. UCC is able to remove the need for parameter tuning on the downstream dataset; however, we observe experimentally that the success of UCC relative to DCC varies by task and compression target.
\begin{table*}[htb!]
\centering
\scalebox{0.8}{
\begin{tabular}{ccc|cccc|ccc}
\toprule
Task & \makecell{Dense\\ BERT} & Sparsity & \makecell{GMP\\(MvP)} & Soft MvP & \makecell{GMP\\ (ours)} & \makecell{O-BERT-S\\ (ours)} & \makecell{Lottery\\ Ticket} & \makecell{GMP\\(ours)} & \makecell{O-BERT-S\\(ours)}\\
\midrule
& Epochs && \multicolumn{4}{c|}{\textbf{10 Epochs} Pruning + Finetuning} & \multicolumn{3}{c}{\textbf{30 Epochs} Pruning + Finetuning}\\
\midrule
SQuAD & 88.54 & \makecell{80\% \\ 90\% \\ 97\%} & \makecell{- \\ 80.10 \\ 59.60} & \makecell{- \\ 84.90 \\ 82.30} & \makecell{- \\ 85.74 \\ 79.66} & \makecell{- \\ \textbf{87.98} \\ \textbf{84.65}} & \makecell{86.54\phantom{$^*$} \\ 68.00$^*$ \\ -\phantom{$^*$}} & \makecell{88.64 \\ 87.44 \\ 83.82} & \makecell{\textbf{89.04} \\ \textbf{88.31} \\ \textbf{85.98}}\\
\midrule
MNLI & 84.54 & \makecell{80\% \\ 90\% \\ 97\%} & \makecell{- \\ 78.30 \\ 69.40} & \makecell{- \\ 81.20 \\ 79.50} & \makecell{- \\ 81.83 \\ 79.34} & \makecell{- \\ \textbf{83.20} \\ \textbf{81.00}} & \makecell{82.60\phantom{$^*$} \\ 75.00$^*$ \\ -\phantom{$^*$}} & \makecell{83.64 \\ 82.79 \\ 79.95} & \makecell{\textbf{84.32} \\ \textbf{83.79} \\ \textbf{81.77}}\\
\midrule
QQP & 91.06 & \makecell{80\% \\ 90\% \\ 97\%} & \makecell{- \\ 79.80 \\ 72.40} & \makecell{- \\ 90.20 \\ 89.10} & \makecell{ - \\ 90.77 \\ 89.80} & \makecell{- \\ \textbf{90.89} \\ \textbf{90.23}} & \makecell{90.30\phantom{$^*$} \\ 90.00\phantom{$^*$} \\ -\phantom{$^*$}} & \makecell{91.47 \\ 91.19 \\ 90.48} & \makecell{\textbf{91.57} \\ \textbf{91.35} \\ \textbf{90.87}}\\
\bottomrule
\end{tabular}
}
\caption{Performance metrics of the 12-layer BERT-base-uncased model gradually pruned over 10 and 30 epochs at a corresponding downstream task with distillation loss from the Dense BERT model. We report mean F1 score for SQuAD and mean accuracy for MNLI and QQP over three runs for 30 epochs and one run for 10 epochs for the sole purpose of a fair comparison with \cite{Sanh2020MovementPA}. For standard deviations and additional metrics please see Tables \ref{tab:10_30_gradual_v2} and \ref{tab:10_30_gradual_stdev} in the Appendix. ($^*$ approximate results as the exact numbers are not available.)}
\label{tab:10_30_gradual}
\vspace{-1.2em}
\end{table*}
Our experimental validation first examines the accuracy-compression trade-off for unstructured compression, to establish the viability of second-order pruning for BERT.
Then, we integrate these gains in our compounding compression framework, and finally evaluate the impact of compression both in terms of model size and inference speedups.
\noindent\textbf{Datasets and Baselines.}
We perform evaluation on a set of downstream English tasks, using well-established training hyper-parameters.
All of our experiments are built using a modified version of a popular Transformer library \cite{wolf-etal-2020-transformers} with a focus on \textit{BERT-base-uncased}, one of the most commonly used LLMs, composed of 12 transformer encoder layers with approximately 110M parameters. Following the community standards, we prune weights of all encoder layers ($\sim$85M parameters) and report sparsities w.r.t. this number.
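Since sparsity is reported w.r.t.\ the $\sim$85M encoder weights only, the total parameter count at a given sparsity can be estimated as follows; the 110M/85M split is from the text, while the arithmetic itself is ours:

```python
def remaining_params_m(sparsity, total_m=110, prunable_m=85):
    # Embeddings and other non-encoder parameters stay dense;
    # only the ~85M encoder weights are pruned.
    return total_m - prunable_m * sparsity

for s in (0.8, 0.9, 0.97):
    print(f"{int(s * 100)}% sparsity -> {remaining_params_m(s):.2f}M parameters")
```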
For comparisons, we follow the evaluation setup used by the current state-of-the-art approaches~\cite{Sanh2020MovementPA, zafrir2021prune} for downstream and upstream pruning, respectively.
Specifically, our experiments evaluate performance on a variety of publicly available ``downstream'' English-language tasks commonly used to evaluate the impact of model compression: question answering (SQuAD v1.1 \cite{Rajpurkar2016SQuAD1Q}), sentence classification (Quora Duplicate Query Dataset (QQP) \cite{Guo2017DuplicateQQ}), and natural language inference (MNLI \cite{N18-1101}).
Training hyper-parameters, schedules, and other details can be found in the Appendix. Experimental data will be made public via Weights and Biases~\cite{wandb} along with compression ``recipes'', models, and our full implementation.
\subsection{Improved BERT Unstructured Pruning}
\label{sec:unstructured}
We first revisit the accuracy-compression trade-off induced by unstructured pruning, and show that it can be improved upon significantly.
\noindent\textbf{Goals and Setup.}
We compare existing approaches, notably MvP~\cite{Sanh2020MovementPA} downstream, Prune OFA~\cite{zafrir2021prune} upstream, and Lottery Ticket~\cite{Chen2020TheLT} both downstream and upstream, against gradual pruning, using either magnitude (GMP) or Optimal BERT Surgeon (O-BERT-S).
For a fair comparison with MvP, we consider the 10-epoch pruning and finetuning regime used to obtain the best results in~\citet{Sanh2020MovementPA}.
We optimized the learning rate schedule and the pruning frequency for both magnitude (GMP) and second-order (O-BERT-S) pruning. Specifically, we perform 2 epochs of finetuning, followed by 6 epochs of gradual pruning, and 2 further epochs of finetuning of the compressed model. We prune uniformly per layer with GMP, and globally across all layers with O-BERT-S. (Global magnitude pruning yielded worse results.)
The full hyper-parameters are described in Appendix~\ref{app:hyperparams-DownstreamPruning}.
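A common way to realize such a gradual schedule is the cubic sparsity ramp of Zhu \& Gupta (2017); the sketch below assumes that form (the paper's exact schedule is in its appendix):

```python
def sparsity_at(epoch, start=2, end=8, s_final=0.9):
    # Finetune before `start`, prune gradually until `end`, finetune after.
    # The cubic ramp is the common Zhu & Gupta (2017) form, assumed here.
    if epoch <= start:
        return 0.0
    if epoch >= end:
        return s_final
    frac = (epoch - start) / (end - start)
    return s_final * (1.0 - (1.0 - frac) ** 3)

# 10-epoch run: 2 epochs finetuning + 6 epochs pruning + 2 epochs finetuning
print([round(sparsity_at(e), 3) for e in range(11)])
```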
The results are given in Table \ref{tab:10_30_gradual}, where GMP (MvP) is the baseline used by~\citet{Sanh2020MovementPA}, and GMP (ours) presents our results.
\noindent\textbf{Movement Pruning (MvP) Comparison.}
We observe that our variant of GMP significantly outperforms the GMP baseline of~\citet{Sanh2020MovementPA}, and can in fact achieve competitive performance with MvP itself.
Further, we observe that Optimal BERT Surgeon outperforms other methods by a significant margin ($>2\%$ absolute F1 score at the same sparsity): remarkably, O-BERT-S pruned to 97\% sparsity has similar accuracy to MvP at 90\% sparsity, which has roughly 3x more weights.
This suggests that second-order information can be quite useful for pruning and recovery. Figure~\ref{fig:gradual} presents the accuracy drops following each pruning step for GMP versus O-BERT-S: notice the large gaps in favor of the second-order method, which lead to sustained performance.
\noindent\textbf{Extended Finetuning.}
Next, we examine the accuracy effects of extending the pruning schedule to 30 epochs, matching the setup used for finetuning BERT Lottery Tickets (LTH)~\cite{Chen2020TheLT}.
We use the same schedule structure as before, but scaled to 30 epochs instead of 10. (Please see Appendix~\ref{app:hyperparams-DownstreamPruning} for full hyper-parameters.)
Table~\ref{tab:10_30_gradual} (right) presents results for Lottery Tickets, our variant of GMP, and O-BERT-S pruning.
The results show a clear accuracy difference between O-BERT-S and GMP, on the one side, and Lottery Tickets on the other, especially at higher sparsities.
This difference is expected, since the LTH approach mainly attempts to transfer \emph{network connectivity}, whereas the other approaches can also benefit from the weight values.
Finally, we examined the impact of 30-epoch finetuning with soft MvP on SQuAD, targeting 90\% sparsity (not shown in the Table).
We obtained an (F1, EM) combination of $(87.42, 79.83)$ for MvP.
The F1 gap in favor of O-BERT-S is lower than at 10 epochs, suggesting that additional finetuning helps all methods; yet, it is far from negligible.
\subsection{Downstream Compound Compression}
\begin{table*}[tb!]
\centering
{\small
\begin{tabular}{c|c|cc|cccc}
\toprule
\multicolumn{2}{c}{} & \multicolumn{2}{c|}{Unstructured weight pruning} & \multicolumn{2}{c}{4-block pruning} & \multicolumn{2}{c}{QAT} \\
\midrule
Layers & Sparsity & \makecell{GMP (ours)} & \makecell{O-BERT-S (ours)} & \makecell{GMP (ours)} & \makecell{O-BERT-S (ours)} & \makecell{GMP (ours)} & \makecell{O-BERT-S (ours)} \\
\midrule
12 & \makecell{0\% \\ 80\% \\90\%} & \makecell{89.48 \\ 88.64 \\ 87.44} & \makecell{89.48 \\ \textbf{89.04} \\ \textbf{88.31}} & \makecell{89.48 \\ 87.85 \\ 85.95} & \makecell{89.48 \\ \textbf{88.57} \\ \textbf{87.57}} & \makecell{89.06 \\ 87.09 \\ 85.45} & \makecell{89.06 \\ \textbf{87.89} \\ \textbf{86.68}}\\
\midrule
6 & \makecell{0\% \\ 80\% \\90\%} & \makecell{88.32 \\ 87.30 \\ 85.56} & \makecell{88.32 \\ \textbf{88.20} \\ \textbf{86.78}} & \makecell{88.32 \\ 85.54 \\ 82.77} & \makecell{88.32 \\ \textbf{87.00} \\ \textbf{85.34}} & \makecell{87.94 \\ 84.03 \\ 82.51} & \makecell{87.94 \\ \textbf{86.10} \\ \textbf{84.59}}\\
\midrule
3 & \makecell{0\% \\ 80\% \\90\%} & \makecell{84.66 \\ 83.40 \\ 81.09} & \makecell{84.66 \\ \textbf{84.08} \\ \textbf{82.50}} & \makecell{84.66 \\ 80.61 \\ 77.38} & \makecell{84.66 \\ \textbf{82.79} \\ \textbf{80.69}} & \makecell{84.25 \\ 80.04 \\ 77.65} & \makecell{84.25 \\ \textbf{82.04} \\ \textbf{79.66}} \\
\bottomrule
\end{tabular}
\caption{F1 performance metrics of the 3-, 6-, and 12-layer BERT-base-uncased models compound-compressed over 30 epochs on SQuAD v1.1. For the corresponding exact-match (EM) results, please see Table~\ref{tab:downstream-block-additional} in the Appendix.
\label{tab:downstream-block}
}}
\end{table*}
The above results suggest that our approach can yield significant improvements when applied to unstructured pruning.
However, high-performance inference usually applies other types of compression, such as block (semi-structured) pruning, layer dropping, and quantization.
We now compound these compression approaches, following the pattern from Section~\ref{sec:compound}.
For this step, we apply layer dropping to 6 and 3 layers, followed by either GMP- or O-BERT-S-based pruning, for a total of 30 epochs.
We investigate both \emph{unstructured pruning} and \emph{4-block pruning}, in which contiguous blocks of 4 weights are either set to zero or kept dense, for which we use the implementation described in Section~\ref{sec:methods}. Both pruning types can be leveraged for computational speedups on CPUs~\cite{deepsparse},
but 4-block pruning can be used in addition with INT8 quantization, providing additional performance gains.
For this reason, we also perform quantization-aware training (QAT)~\cite{jacob2018quantization} on top of the 4-block pruned models. (See Appendix \ref{app:hyperparams-DownstreamQuantization} for its description.)
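For intuition, 4-block pruning can be sketched as follows. Scoring blocks by their L2 norm is an illustrative magnitude-style criterion of our choosing; O-BERT-S instead uses second-order block saliencies:

```python
import numpy as np

def four_block_prune(w, sparsity):
    # Score each contiguous block of 4 weights and zero out the
    # lowest-scoring blocks until the target sparsity is reached.
    blocks = w.reshape(-1, 4)                # contiguous 4-blocks (view of w)
    scores = np.linalg.norm(blocks, axis=1)  # illustrative magnitude score
    k = int(round(sparsity * len(blocks)))   # number of blocks to drop
    blocks[np.argsort(scores)[:k]] = 0.0
    return blocks.reshape(w.shape)

w = np.arange(1.0, 17.0).reshape(4, 4)       # toy 4x4 weight matrix
pruned = four_block_prune(w.copy(), sparsity=0.5)
print(pruned)
```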
The results are given in Table~\ref{tab:downstream-block}, where we also report the accuracy of the dense BERT models (0\% sparsity) distilled from the 12-layer teacher in the same setup.
The results indicate that compression methods can be combined without model collapse, although the loss increases do compound.
We observe that Optimal BERT Surgeon consistently outperforms (well-tuned) GMP, and that the accuracy gaps are larger in the case of 4-block pruning, especially for smaller models.
The fact that the layer-dropped models are also highly compressible suggests that structured and fine-grained (unstructured) compression are complementary.
We find it remarkable that our 6-layer unstructured O-BERT-S-pruned model is competitive with the 12-layer MvP-pruned model when both are pruned to 90\%.
\noindent{\textbf{Speedup-vs-Accuracy.}}
We now evaluate the trade-off between speed and accuracy on the well-established SQuAD v1.1 CPU-inference benchmark.
As is common, we use a sequence length of 128 and a batch size of 32. Our evaluation metrics are number of items per second and the size of the model in MB, after standard gzip compression.
Figure~\ref{fig:inference} depicts relative accuracy versus magnitude of improvement in speed/model size.
We emphasize that, as the baseline for 100\% recovery, we follow the community standard, e.g.,~\cite{Sanh2020MovementPA}, and adopt the dense 12-layer BERT-base model with 88.54 F1 score, which is also the reason why some F1-recall values are above 100\%. As can be seen from the improved dense baselines in Table \ref{tab:downstream-block}, this model can also benefit from our 30-epoch pruning and finetuning framework.
For inference speed, the baseline is dense inference on DeepSparse, which matches the industry-standard ONNXRuntime inference engine.
Results suggest a roughly linear trade-off between compression and accuracy loss, with a compression jump around the 1\% accuracy drop mark, due to quantization being applied.
Specifically, we observe 8.4x higher inference throughput at less than 1\% relative accuracy loss to the dense BERT-base model, 10x speedup at $<2\%$, 15x speedup at $<3\%$, and 29x speedup at $<7.5\%$ accuracy loss.
This demonstrates how compounding compression can be used to optimize a single \textit{backbone} LLM to various latency requirements. Detailed results are in Appendix Table \ref{tab:deepsparseresults}.
\begin{table*}[h]
\centering
{\small
\begin{tabular}{ccc|cc|c}
\toprule
Task & \makecell{Dense\\ BERT} & Sparsity & Lottery Ticket & Prune OFA & \makecell{O-BERT-S \\(ours)} \\
\midrule
\makecell{SQuAD \\ F1, EM} & 88.54, 81.42 & \makecell{90\% \\ 97\%} & \makecell{68.00$^*$, -\\-} & \makecell{87.25, 79.83 \\ -} & \makecell{\textbf{88.42}, \textbf{81.31} \\ 84.39, 76.36} \\
\midrule
\makecell{MNLI \\ m, mm} & 84.54, 85.06 & \makecell{90\% \\ 97\%} & \makecell{75.00$^*$, -\\-} & \makecell{81.45, 82.43 \\ -} & \makecell{\textbf{82.29}, \textbf{83.40} \\ 78.85, 79.80} \\
\midrule
\makecell{QQP \\ acc, F1} & 91.06, 88.00 & \makecell{90\% \\ 97\%} & \makecell{90.00, 86.90\\-} & \makecell{\textbf{90.93}, \textbf{87.72} \\ -} & \makecell{90.83, 87.65 \\ 89.76, 86.36} \\
\bottomrule
\end{tabular}
}
\caption{Transfer learning performance of sparse pre-trained models obtained by applying our upstream pruning recipes, described in detail in Appendix \ref{app:hyperparams-UpstreamPruning}. We report mean results from three runs with hyper-parameters described in Table \ref{tab:hyperparams-transfer}. For standard deviations please see Table \ref{tab:sparse-transfer-deviations} in the Appendix. ($^*$ indicates approximate results as the raw numbers are not available.)}
\label{tab:sparse-transfer}
\end{table*}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{media/Fig3_gradual_diff.pdf}
\caption{F1 score on SQuAD v1.1 during unstructured pruning with GMP and Optimal BERT Surgeon for 90\% sparsity. GMP is affected by higher accuracy loss and recovers less.}
\label{fig:gradual}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{media/Fig4_inf_vs_comp.pdf}
\caption{F1 recall of uncompressed community-adopted 12-layer BERT baseline on SQuAD v1.1 relative to improvements in model inference speed and size.}
\label{fig:inference}
\end{minipage}
\vspace{-1.2em}
\end{figure*}
\subsection{Upstream Compound Compression}
Our results so far applied compression on the ``downstream'' finetuning task.
An appealing alternative is to perform compression directly ``upstream'', on the semi-supervised pre-training task~\cite{zafrir2021prune}.
For this, we begin by preparing a dense upstream teacher using the RoBERTa procedure~\cite{Liu2019RoBERTaAR}, detailed in Appendix \ref{app:hyperparams-UpstreamPruning}.
Then, following Prune OFA~\cite{zafrir2021prune} we scale our downstream pruning recipes to the upstream task, to create sparse pre-trained models.
To evaluate the resulting sparse models, we finetune the unpruned weights on three downstream tasks (SQuAD v1.1, QQP, MNLI).
We compare against Prune OFA~\cite{zafrir2021prune}, as well as against the LTH approach~\cite{Chen2020TheLT}.
The results in Table~\ref{tab:sparse-transfer} show that upstream Optimal BERT Surgeon pruning outperforms state-of-the-art methods by significant margins, except for the QQP dataset, where Prune OFA and O-BERT-S results are within standard deviation.
Moreover, in contrast to~\cite{zafrir2021prune}, which performed extensive hyper-parameter tuning, our transfer learning recipe is very basic: 8 epochs on all downstream tasks, coupled with a linearly decaying learning rate of 1.5e-4 (see Table~\ref{tab:hyperparams-transfer}). This suggests that sparse pre-trained models found by O-BERT-S constitute a strong starting point for sparse transfer.
Yet, downstream transfer can yield more accurate models, matching the intuition that additional compression gains can be obtained by specializing to the transfer task.
\section{Discussion}
We presented the first approximate second-order pruning method that scales to BERT models, and investigated its practicality in the context of compounding compression techniques.
Our results significantly improve upon the known compression-accuracy trade-offs for BERT models, and show that we can leverage the resulting models for significant practical speedups on CPU inference. A consequence of our results is that techniques such as layer dropping, KD, second-order pruning, and quantization are complementary, and can therefore be applied in conjunction.
We emphasize that our compression recipes are simple and generic, which should help practical adoption and reproducibility, but also allow for future improvements. For example, the transfer learning results in Table \ref{tab:sparse-transfer} are obtained with the same set of hyper-parameters across all three datasets, and in the 8-epoch setup for a fair comparison against other methods, leaving a lot of room for additional improvements by extending the finetuning phase and by performing a hyper-parameter search for each dataset independently. The unstructured and semi-structured (4-block) pruning results in Tables \ref{tab:10_30_gradual} and \ref{tab:downstream-block} are obtained with the same recipes and minimal hyper-parameter modifications across all models and datasets.
Regarding the computational cost of our Optimal BERT Surgeon second-order pruning method on the 12-layer BERT-base model with 85M prunable encoder weights, our implementation updates the inverse Fisher estimate with a new gradient instantaneously, and can run asynchronously while the next gradient is being fetched. Extracting the diagonal, and computing inverse-Fisher-vector products, saliency scores, and optimal weight updates, takes less than one second during the first pruning step, when the model is still dense, and then drops to almost zero as the model becomes sparser. All these computations consume slightly more than 17GB of GPU memory, making the algorithm runnable on a single commodity RTX 3090 card with 24GB of memory. With this parametrization, our O-BERT-S gradual pruning runs had the same end-to-end running time as our GMP runs.
Finally, it is worth emphasizing that our parametrization of the standard gradual magnitude pruner produced drastically better results (in some cases up to 20\% absolute F1 score improvement) relative to magnitude-based pruning results currently available in the literature. This suggests that this approach can be competitive when tuned carefully.
In terms of future work, we aim to investigate distillation of intermediate model representations~\cite{Wang2020MiniLMDS}, as well as the role of compound compression applied on much larger language models, including generative ones.
\section{Broader Impact}
Our work is part of the general trend of producing inference-efficient models that approximate the performance of their larger counterparts. By and large, this work should help increase model efficiency, thereby reducing the computational and ultimately monetary cost of executing such models. Moreover, it could allow models to be used by those who do not have access to expensive specialized computing clusters: for instance, our main experimental speedup results are aimed at widely-available CPUs.
\bibliographystyle{acl_natbib}
\section{Introduction}
\subsection{Background and motivation}
Many studies about cooperation of autonomous mobile robots have been conducted in the field of distributed computing.
These studies focus on the minimum capabilities of robots required to achieve a given task.
To model operations of robots, the \emph{Look-Compute-Move (LCM) model}~\cite{Suzuki99:Distributed} is commonly used.
In the LCM model, each robot repeats cycles of Look, Compute, and Move phases. In the Look phase, the robot observes positions of other robots. In the Compute phase, the robot executes its algorithm using the observation as its input, and decides whether it moves somewhere or stays idle. In the Move phase, it moves to a new position if the robot decided to move in the Compute phase.
To consider minimum capabilities, most studies assume that robots are \emph{identical} (\emph{i.e.}, robots execute the same algorithm and have no identifier), \emph{oblivious} (\emph{i.e.}, robots have no memory to record their past history), and \emph{silent} (\emph{i.e.}, robots do not have communication capabilities).
Furthermore, they have \emph{no global compass}, \emph{i.e.}, they do not agree on the directions.
Based on the LCM model, previous works clarified solvability of many tasks such as exploration, gathering, and pattern formation in continuous environments (aka two- or three-dimensional Euclidean space) and discrete environments (aka graph networks) (see a survey~\cite{flocchini2019distributed}).
In this paper, we focus on \emph{exploration} in graph networks, which is one of the most central tasks for mobile robots.
Two variants of exploration tasks have been well studied: the \emph{perpetual} exploration requires every robot to visit every node infinitely many times, and the \emph{terminating} exploration requires robots to terminate after every node is visited by a robot at least once.
During the last decade, many works have considered the perpetual and terminating exploration on the assumption that each robot has unlimited visibility, \emph{i.e.}, it observes all other robots in the network.
The perpetual exploration has been studied for rings~\cite{Blin10:Exclusive} and grids~\cite{Bonnet11:Asynchronous}.
The terminating exploration has been studied for lines~\cite{flocchini2011how}, rings~\cite{Devismes13:Optimal,Flocchini13:Computing, lamani2010optimal}, trees~\cite{Flocchini10:Remembering}, finite grids~\cite{Devismes12:Optimal,Devismes21:Terminating}, tori~\cite{Devismes19:Optimal}, and arbitrary networks~\cite{Chalopin10:Network}.
However, the capability of the unlimited visibility seems powerful and somewhat contradicts the principle of weak mobile robots.
For this reason, some studies consider the more realistic case of \emph{myopic} robots~\cite{Datta13:Ring,Datta13:Ring23}.
A myopic robot has limited visibility, \emph{i.e.}, it can see nodes (and robots on them) only within a certain fixed distance $\phi$.
Datta et al.\ studied the terminating exploration of rings for $\phi=1$~\cite{Datta13:Ring} and $\phi=2,3$~\cite{Datta13:Ring23}.
Not surprisingly, since myopic robots are weaker than non-myopic robots, many impossibility results are given for myopic robots.
To improve the task solvability, myopic robots with persistent visible light~\cite{Das16:Autonomous}, called myopic \emph{luminous} robots, have attracted a lot of attention.
Each myopic luminous robot is equipped with a light device that can emit a constant number of colors to other robots, a single color at a time.
The light color is persistent, \emph{i.e.}, it is not automatically reset at the end of each cycle, and hence it can be used as a constant-space memory.
Ooshita and Tixeuil~\cite{Ooshita21:Ring} studied the perpetual and terminating exploration of rings for $\phi=1$ in the synchronous (FSYNC), semi-synchronous (SSYNC), and asynchronous (ASYNC) models.
They showed that the number of robots required to achieve the tasks can be reduced compared to non-luminous robots.
Nagahama et al.~\cite{nagahama2019ring}
studied the same problem in the case of $\phi\geq 2$ and showed that, in the SSYNC and ASYNC models, the number of robots required to achieve the tasks can be reduced compared to the case of $\phi=1$.
Bramas et al.\ studied the exploration of an infinite grid with myopic luminous and non-luminous robots in the FSYNC model~\cite{Bramas20:Infinite,Bramas20:Finding}.
These works propose algorithms that guarantee that every node of an infinite grid is visited by a robot at least once.
In \cite{Bramas20:Infinite} robots agree on a \emph{common chirality}, \emph{i.e.},
robots agree on common clockwise and counterclockwise directions.
Bramas et al.~\cite{bramas2021optimal} also studied the perpetual exploration of a (finite) grid with myopic luminous and non-luminous robots in the FSYNC model on the assumption that robots agree on a common chirality.
Algorithms proposed in \cite{bramas2021optimal} have additional nice properties: they work even if robots are opaque (\emph{i.e.}, a robot is able to see another robot only if no other robot lies in the line segment joining them), and they are exclusive (\emph{i.e.}, no two robots occupy a single node during the execution).
This work also describes how to extend their algorithms to achieve the terminating exploration and/or to work in the SSYNC and ASYNC models.
More concretely, this gives three algorithms to achieve the terminating exploration of a grid assuming a common chirality: algorithms for two robots with $\phi=1$ and six colors in the FSYNC model, two robots with $\phi=2$ and five colors in the FSYNC model, and two robots with $\phi=2$ and six colors in the SSYNC and ASYNC models.
However, algorithms with fewer colors or no common chirality are not known yet.
\subsection{Our contributions}
We focus on the terminating exploration of a (finite) grid with myopic luminous and non-luminous robots, and clarify lower and upper bounds on the required number of robots under various assumptions on synchrony, visibility distance $\phi$, the number of colors, and chirality.
Table \ref{table:summary} summarizes our contributions.
First, we prove that, in the SSYNC and ASYNC models, three myopic robots are necessary to achieve the terminating exploration of a grid if $\phi=1$ holds.
Note that this lower bound also holds for the perpetual exploration, because we prove that, with fewer robots, some nodes of the grid cannot be visited in this case.
Other lower bounds in Table \ref{table:summary} are given by Bramas et al.~\cite{bramas2021optimal}.
They are originally given as impossibility results for the perpetual exploration, however they still hold for the terminating exploration.
This is because Bramas et al.\ prove that, if the number of robots is smaller than the stated bound, robots cannot visit some nodes under each assumption.
Second, we propose algorithms that achieve the terminating exploration of a grid under the various assumptions in Table \ref{table:summary}.
To the best of our knowledge, they are the first algorithms that achieve the terminating exploration of a grid by myopic robots with at most three colors and/or with no common chirality.
In addition, six of the proposed algorithms are optimal in terms of the number of robots.
\begin{table}[tb]
\centering
\caption{Terminating grid exploration with myopic robots. Notation $\phi$ represents the visible distance of a robot, $\ell$ represents the number of colors, and $*$ means the number of robots is minimum.}
\label{table:summary}
\begin{center}
\begin{tabular}{|c|c|c|c|>{\centering}p{1cm}c|>{\centering}p{1cm}c|}
\hline
\multirow{2}{*}{Synchrony} &
\multirow{2}{*}{$\phi$} &
\multirow{2}{*}{$\ell$} &
Common &
\multicolumn{4}{c|}{\#required robots} \\ \cline{5-8}
& & & chirality &
\multicolumn{2}{c|}{Lower bound} & \multicolumn{2}{c|}{Upper bound} \\ \hline
\multirow{8}{*}{FSYNC} & \multirow{4}{*}{2} & \multirow{2}{*}{2} & yes & 2 & \cite{bramas2021optimal}& $\mathbf{2^*}$ & \S\,\ref{secF22T2}\\ \cline{4-8}
& & & no & 2 & \cite{bramas2021optimal}& $\mathbf{3}$ &\S\,\ref{secF22F3}\\ \cline{3-8}
& & \multirow{2}{*}{1} & yes & 3 & \cite{bramas2021optimal} & $\mathbf{3^*}$ & \S\,\ref{secF21T3}\\ \cline{4-8}
& & & no & 3 & \cite{bramas2021optimal} & $\mathbf{4}$ & \S\,\ref{secF21F4}\\ \cline{2-8}
& \multirow{4}{*}{1} & \multirow{2}{*}{3} & yes & 2 & \cite{bramas2021optimal}& $\mathbf{2^*}$ & \S\,\ref{secF13T2}\\ \cline{4-8}
& & & no & 2 & \cite{bramas2021optimal}& $\mathbf{4}$ & \S\,\ref{secF13F4}
\\ \cline{3-8}
& & \multirow{2}{*}{2} & yes & 3 & \cite{bramas2021optimal} & $\mathbf{3^*}$ & \S\,\ref{secF12T3}\\ \cline{4-8}
& & & no & 3 & \cite{bramas2021optimal} & $\mathbf{5}$ & \S\,\ref{secF12F5}
\\ \hline
& \multirow{4}{*}{2} & \multirow{2}{*}{3} & yes & 2 & \cite{bramas2021optimal}& $\mathbf{2^*}$ &\S\,\ref{secA23T2}\\ \cline{4-8}
& & & no & 2 & \cite{bramas2021optimal}& $\mathbf{3}$ & \S\,\ref{secA23F3}\\ \cline{3-8}
SSYNC & & \multirow{2}{*}{2} & yes & 2 & \cite{bramas2021optimal}& $\mathbf{3}$ & \S\,\ref{secA22T3}
\\ \cline{4-8}
ASYNC & & & no & 2 & \cite{bramas2021optimal}& $\mathbf{4}$ & \S\,\ref{secA22F4}
\\ \cline{2-8}
& \multirow{2}{*}{1} & \multirow{2}{*}{3} & yes & $\mathbf{3}$ & \S\,\ref{secImpSSYNC} & $\mathbf{3^*}$ & \S\,\ref{secA13T3}\\ \cline{4-8}
& & & no & $\mathbf{3}$ & \S\,\ref{secImpSSYNC} & $\mathbf{6}$ & \S\,\ref{secA13F6}
\\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Preliminaries}
\subsection{System model}
The system consists of $k$ mobile robots and a simple connected graph $G=(V, E)$, where $V$ is a set of nodes and $E$ is a set of edges.
In this paper, we assume that $G$ is a finite $m\times n$ grid (or a grid, for short) where $m$ and $n$ are two positive integers, \textit{i.e.}, $G$ satisfies the following conditions:
\begin{itemize}
\item $V = \{v_{i,j} \,|\, i\in\{0,1,\ldots,m-1\},\, j\in\{0,1,\ldots,n-1\}\}$
\item $E = \{(v_{i, j}, v_{i', j'}) \,|\, v_{i, j}, v_{i', j'}\in V,\, |i-i'|+|j-j'|=1\}$
\end{itemize}
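As an illustration, the definition above translates directly into code (we generate each unordered edge once):

```python
def grid_graph(m, n):
    # Nodes v_{i,j} and edges between nodes at Manhattan distance 1.
    V = [(i, j) for i in range(m) for j in range(n)]
    E = [(u, v) for u in V for v in V
         if abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1 and u < v]
    return V, E

V, E = grid_graph(3, 4)
print(len(V), len(E))  # m*n = 12 nodes and m*(n-1) + (m-1)*n = 17 edges
```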
The indices of nodes are used for notation purposes only and robots do not know them.
Neither nodes nor edges have identifiers or labels, and consequently robots cannot distinguish nodes and cannot distinguish edges.
Robots do not know $m$ or $n$.
Figure \ref{global-direction} shows global directions labeled by North, East, South, and West on a grid.
Note that these directions are used only for explanations, and robots cannot access the global directions.
Each robot is on a node of $G$ at each instant.
When a robot $r$ is on a node $v$, we say $r$ \textit{occupies} $v$ and $v$ \textit{hosts} $r$.
The distance between two nodes is the number of edges in a shortest path between the nodes. The distance between two robots $r_1$ and $r_2$ is the distance between two nodes occupied by $r_1$ and $r_2$. Two robots $r_1$ and $r_2$ are neighbors if the distance between $r_1$ and $r_2$ is one.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=1.0]{globalDirection.pdf}
\end{center}
\caption{Global directions on a grid}
\label{global-direction}
\end{figure}
Robots we consider have the following characteristics and capabilities. Robots are \textit{identical}, that is, robots execute the same deterministic algorithm and do \textit{not} have unique identifiers. Robots are \textit{luminous}, that is, each robot has a light (or state) that is visible to itself and other robots. A robot can choose the color of its light from a discrete set $Col$. When the set $Col$ is finite, $\ell$ denotes the number of available colors (\textit{i.e.}, $\ell=|Col|$).
Robots have no other persistent memory and cannot remember the history of past actions.
Each robot can communicate by observing positions and colors of other robots (for collecting information), and by changing its color and moving (for sending information).
Robots are \textit{myopic}, that is, each robot $r$ can observe positions and colors of robots within a fixed finite distance $\phi>0$ from its current position. Since robots are identical, they share the same $\phi$.
Each robot distinguishes clockwise and counterclockwise directions according to its own \textit{chirality}.
The robots agree on a common clockwise direction if and only if they agree on a common chirality.
Each robot executes an algorithm by repeating three-phase cycles: Look, Compute, and Move phases. During the \textit{Look} phase, the robot takes a snapshot of positions and colors of robots within distance $\phi$. During the \textit{Compute} phase, the robot computes its next color and movement according to the observation in the Look phase. The robot may change its color at the end of the Compute phase. If the robot decides to move, it moves instantaneously to a neighboring node during the \textit{Move} phase.
To model asynchrony of executions, we introduce the notion of \textit{scheduler} that decides when each robot executes phases. When the scheduler makes robot $r$ execute some phase, we say the scheduler activates the phase of $r$ or simply activates $r$. We consider three types of synchronicity: the FSYNC (fully synchronous) model, the SSYNC (semi-synchronous) model, and the ASYNC (asynchronous) model.
In all models, time is represented by an infinite sequence of instants $0,1,2,...$.
No robot has access to this global time.
In the FSYNC and SSYNC models, all the robots that are activated at an instant $t$ execute a full cycle synchronously and concurrently between $t$ and $t+1$.
In the FSYNC model, at every instant, the scheduler activates all robots.
In the SSYNC model, at every instant, the scheduler selects a non-empty subset of robots and activates the selected robots.
In the ASYNC model, the scheduler activates cycles of robots asynchronously: the time between Look, Compute, and Move phases is finite but unpredictable.
Note that in the ASYNC model, a robot $r$ can move based on the outdated view obtained during the previous Look phase. Throughout the paper we assume that the scheduler is \textit{fair}, that is, each robot is activated infinitely often.
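The synchrony models can be abstracted as activation sequences. The sketch below (our own abstraction, not from the paper) models a scheduler as a generator yielding, at each instant, the set of activated robots; the round-robin SSYNC scheduler is one example of a fair scheduler.

```python
# Sketch: schedulers as generators of activation sets (our abstraction).
import itertools

def fsync_schedule(robots):
    # FSYNC: every robot is activated at every instant.
    while True:
        yield set(robots)

def round_robin_ssync(robots):
    # One fair SSYNC scheduler: a single robot per instant, cycling,
    # so every robot is activated infinitely often.
    for r in itertools.cycle(robots):
        yield {r}
```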
\subsection{Configuration, view, and algorithm}
\paragraph{Configuration.}
A configuration represents positions and colors of all robots.
At instant $t$, let $Q(t)$ be the set of occupied nodes, and let $M_{i, j}(t)$ be the multiset of colors of robots on node $v_{i, j} \in Q(t)$.
A \textit{configuration} $C(t)$ of the system at instant $t$ is defined as $C(t)=\left\{\left(v_{i,j},M_{i,j}(t)\right) \mid v_{i,j}\in Q(t)\right\}$.
If $t$ is clear from the context, we simply write $Q$, $M_{i,j}$ and $C$ instead of $Q(t)$, $M_{i,j}(t)$, and $C(t)$, respectively.
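A configuration can be built directly from this definition. In the sketch below (Python, with `Counter` standing in for a multiset; the encoding is ours), $C(t)$ is assembled from the (node, color) pairs of the robots.

```python
# Sketch: a configuration maps each occupied node to the multiset
# (here a Counter) of colors of the robots on it.
from collections import Counter

def configuration(robots):
    # robots: iterable of (node, color) pairs
    conf = {}
    for node, color in robots:
        conf.setdefault(node, Counter())[color] += 1
    return conf
```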
\paragraph{View.}
When a robot takes a snapshot of its environment, it gets a \textit{view} up to distance $\phi$.
Consider a robot $r$ on node $v_{i, j}$.
Let $c_r$ be the color of $r$.
We write $M_{i', j'}=\bot$ if node $v_{i',j'}$ does not exist, that is, if $i'\notin\{0,1,\ldots,m-1\}$ or $j'\notin\{0,1,\ldots,n-1\}$ holds.
Since $r$ does not know the global direction, it obtains one of the following four views in case of $\phi=1$ and a common chirality:
\begin{itemize}
\item North view: ${\cal V}_{1,\nu} = (c_r,M_{i-1,j},M_{i,j-1},M_{i,j},M_{i,j+1},M_{i+1,j})$
\item East view: ${\cal V}_{1,e} = (c_r,M_{i,j+1},M_{i-1,j},M_{i,j},M_{i+1,j},M_{i,j-1})$
\item South view: ${\cal V}_{1,s} = (c_r,M_{i+1,j},M_{i,j+1},M_{i,j},M_{i,j-1},M_{i-1,j})$
\item West view: ${\cal V}_{1,w} = (c_r,M_{i,j-1},M_{i+1,j},M_{i,j},M_{i-1,j},M_{i,j+1})$
\end{itemize}
In case of $\phi=1$ and no common chirality, $r$ obtains one of eight views, which include the above four views and the mirror images of them:
\begin{itemize}
\item Mirror image of ${\cal V}_{1,\nu}$: \\${\cal V}_{1,\nu,\mu} = (c_r,M_{i-1,j},M_{i,j+1},M_{i,j},M_{i,j-1},M_{i+1,j})$
\item Mirror image of ${\cal V}_{1,e}$: \\${\cal V}_{1,e,\mu} = (c_r,M_{i,j+1},M_{i+1,j},M_{i,j},M_{i-1,j},M_{i,j-1})$
\item Mirror image of ${\cal V}_{1,s}$: \\${\cal V}_{1,s,\mu} = (c_r,M_{i+1,j},M_{i,j-1},M_{i,j},M_{i,j+1},M_{i-1,j})$
\item Mirror image of ${\cal V}_{1,w}$: \\${\cal V}_{1,w,\mu} = (c_r,M_{i,j-1},M_{i-1,j},M_{i,j},M_{i+1,j},M_{i,j+1})$
\end{itemize}
When $r$ obtains one of these views, it cannot recognize which view it has obtained; however, it can compute the other views by rotating and/or flipping the obtained view.
Hence, we assume that, in case of a common chirality, $r$ obtains four views ${\cal V}_{1,\nu}, {\cal V}_{1,e}, {\cal V}_{1,s}, {\cal V}_{1,w}$ when it takes a snapshot.
Note that $r$ does not recognize which view corresponds to each of North, East, South, and West views.
Similarly, we assume that, in case of no common chirality, $r$ obtains eight views ${\cal V}_{1,\nu}, {\cal V}_{1,e}, {\cal V}_{1,s}, {\cal V}_{1,w}, {\cal V}_{1,\nu,\mu}, {\cal V}_{1,e,\mu}, {\cal V}_{1,s,\mu}, {\cal V}_{1,w,\mu}$ when it takes a snapshot.
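The rotations and flips follow a simple pattern. Encoding a $\phi=1$ view as the tuple $(c_r,\mathit{up},\mathit{left},\mathit{self},\mathit{right},\mathit{down})$ relative to the facing direction (an encoding of ours, not the paper's), a robot can generate all four, or without chirality all eight, candidate views from any one of them:

```python
# Sketch: a phi = 1 view is (c, up, left, self, right, down) in the
# facing direction's frame; rotations and mirrors permute the slots.

def rotate(view):
    # Turn the facing direction 90 degrees clockwise:
    # new up = old right, new left = old up,
    # new right = old down, new down = old left.
    c, up, left, self_, right, down = view
    return (c, right, up, self_, down, left)

def mirror(view):
    # Flip handedness: left and right swap, up and down stay.
    c, up, left, self_, right, down = view
    return (c, up, right, self_, left, down)

def all_views(view, chirality=True):
    views = [view]
    for _ in range(3):
        views.append(rotate(views[-1]))
    if not chirality:
        views += [mirror(v) for v in views]
    return views
```

With the North view $(c_r, M_{i-1,j}, M_{i,j-1}, M_{i,j}, M_{i,j+1}, M_{i+1,j})$ as input, one rotation yields the East view and `mirror` yields ${\cal V}_{1,\nu,\mu}$, matching the lists above.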
Similarly, in case of $\phi=2$ and a common chirality, $r$ obtains the following four views:
\begin{itemize}
\item North view: ${\cal V}_{2,\nu}=(c_r,M_{i-2,j},M_{i-1,j-1},M_{i-1,j},M_{i-1,j+1},M_{i,j-2},M_{i,j-1},M_{i,j},M_{i,j+1},\\M_{i,j+2},M_{i+1,j-1},M_{i+1,j},M_{i+1,j+1},M_{i+2,j})$
\item East view: ${\cal V}_{2,e}=(c_r,M_{i,j+2},M_{i-1,j+1},M_{i,j+1},M_{i+1,j+1},M_{i-2,j},M_{i-1,j},M_{i,j},M_{i+1,j},\\M_{i+2,j},M_{i-1,j-1},M_{i,j-1},M_{i+1,j-1},M_{i,j-2})$
\item South view: ${\cal V}_{2,s}=(c_r,M_{i+2,j},M_{i+1,j+1},M_{i+1,j},M_{i+1,j-1},M_{i,j+2},M_{i,j+1},M_{i,j},M_{i,j-1},\\M_{i,j-2},M_{i-1,j+1},M_{i-1,j},M_{i-1,j-1},M_{i-2,j})$
\item West view: ${\cal V}_{2,w}=(c_r,M_{i,j-2},M_{i+1,j-1},M_{i,j-1},M_{i-1,j-1},M_{i+2,j},M_{i+1,j},M_{i,j},M_{i-1,j},\\M_{i-2,j},M_{i+1,j+1},M_{i,j+1},M_{i-1,j+1},M_{i,j+2})$
\end{itemize}
In case of $\phi=2$ and no common chirality, $r$ obtains eight views, which include the above four views and the mirror images of them:
\begin{itemize}
\item Mirror image of ${\cal V}_{2,\nu}$: ${\cal V}_{2,\nu,\mu}=(c_r,M_{i-2,j}, M_{i-1,j+1},M_{i-1,j},M_{i-1,j-1}, M_{i,j+2},M_{i,j+1},M_{i,j},\\M_{i,j-1},M_{i,j-2}, M_{i+1,j+1},M_{i+1,j},M_{i+1,j-1}, M_{i+2,j})$
\item Mirror image of ${\cal V}_{2,e}$: ${\cal V}_{2,e,\mu}=(c_r,M_{i,j+2}, M_{i+1,j+1},M_{i,j+1},M_{i-1,j+1}, M_{i+2,j},M_{i+1,j},M_{i,j},\\M_{i-1,j},M_{i-2,j}, M_{i+1,j-1},M_{i,j-1},M_{i-1,j-1}, M_{i,j-2})$
\item Mirror image of ${\cal V}_{2,s}$: ${\cal V}_{2,s,\mu}=(c_r,M_{i+2,j}, M_{i+1,j-1},M_{i+1,j},M_{i+1,j+1}, M_{i,j-2},M_{i,j-1},M_{i,j},\\M_{i,j+1},M_{i,j+2}, M_{i-1,j-1},M_{i-1,j},M_{i-1,j+1}, M_{i-2,j})$
\item Mirror image of ${\cal V}_{2,w}$: ${\cal V}_{2,w,\mu}=(c_r,M_{i,j-2}, M_{i-1,j-1},M_{i,j-1},M_{i+1,j-1}, M_{i-2,j},M_{i-1,j},M_{i,j},\\M_{i+1,j},M_{i+2,j}, M_{i-1,j+1},M_{i,j+1},M_{i+1,j+1}, M_{i,j+2})$
\end{itemize}
\paragraph{Algorithm.}
An algorithm is described as a set of rules.
Each rule is represented as a combination of a label, a guard, and an action.
The guard represents possible views obtained by a robot.
Recall that robot $r$ obtains several views during the Look phase.
If some view of robot $r$ matches a guard in some rule, we say $r$ is enabled.
We also say the rule with the corresponding label is enabled.
If $r$ is enabled, $r$ can execute the corresponding action (\emph{i.e.}, change its color and/or move to its neighboring node) based on the directions of the matched view during Compute and Move phases.
If several views of $r$ match some guard or some view of $r$ matches several guards, one combination of a view and a rule is selected by the scheduler.
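To make the guard-matching step concrete, the following sketch (data layout and names are ours) checks a view against a guard, with a wildcard standing for the gray cells of the figures, and collects every (rule, view) combination the scheduler may choose from.

```python
# Sketch: guard matching for rules (our encoding). WILD plays the role
# of the gray "either empty or nonexistent" cells of the rule figures.

WILD = object()

def matches(view, guard):
    # A view matches a guard when every non-wildcard slot agrees.
    return all(g is WILD or g == v for v, g in zip(view, guard))

def enabled_rules(views, rules):
    # rules: list of (label, guard, action); returns every (label, view)
    # combination among which the scheduler selects one.
    return [(label, view)
            for label, guard, action in rules
            for view in views
            if matches(view, guard)]
```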
\subsection{Execution and problem}
\paragraph{Execution.}
An execution from initial configuration $C_0$ is a maximal sequence of configurations $E=C_0,C_1,...,C_i,...$ such that, for any $j>0$, we have
\textit{(i)} $C_{j-1}\neq C_j$,
\textit{(ii)} $C_j$ is obtained from $C_{j-1}$ after some robots move or change their colors, and
\textit{(iii)} for every robot $r$ that moves or changes its color between $C_{j-1}$ and $C_j$, there exists $0\leq j^\prime < j$
such that $r$ takes its decision to move or change its color according to its algorithm and its view in $C_{j^\prime}$.
The term ``\textit{maximal}'' means that the execution is either infinite or ends in a \textit{terminal configuration}, \textit{i.e.}, a configuration in which no robot is enabled.
\paragraph{Problem.}
A problem ${\cal P}$ is defined as a set of executions: An execution $E$ solves ${\cal P}$ if $E\in {\cal P}$ holds. An algorithm ${\cal A}$ solves problem ${\cal P}$ from initial configuration $C_0$ if any execution from $C_0$ solves ${\cal P}$. We simply say an algorithm ${\cal A}$ solves problem ${\cal P}$ if there exists an initial configuration $C_0$ such that ${\cal A}$ solves ${\cal P}$ from $C_0$.
In this paper, we consider the terminating exploration problem.
\begin{definition}[\textbf{Terminating exploration problem}]
The terminating exploration is defined as a set of executions $E$ such that 1) every node is visited by at least one robot in $E$, and 2) there exists a suffix of $E$ in which no robot is enabled.
\end{definition}
\subsection{Descriptions}
For simplicity, we describe each rule of an algorithm with a figure, as illustrated in Fig.\,\ref{ruleFig}.
Figure\,\ref{ruleFig}(a) represents a rule of an algorithm in case of $\phi=1$.
Figure\,\ref{ruleFig}(b) represents a rule in case of $\phi=2$.
Each graph in Fig.\,\ref{ruleFig} represents a guard.
The guard in Fig.\,\ref{ruleFig}(a) represents a view ${\cal V}_1=(c_r,M_{i-1,j},M_{i,j-1},M_{i,j},M_{i,j+1},M_{i+1,j})$, and similarly the guard in Fig.\,\ref{ruleFig}(b) represents a view ${\cal V}_2$.
If $M_{i',j'}=\emptyset$ holds, we paint the corresponding node white instead of writing $\emptyset$.
If $M_{i',j'}=\bot$ holds, we paint the corresponding node black instead of writing $\bot$.
If both $\emptyset$ and $\bot$ are acceptable, we paint the corresponding node gray.
If some view of robot $r$ with visible distance $\phi$ matches ${\cal V}_\phi$, $r$ is enabled.
In this case, if the scheduler activates $r$, it executes an action represented by $c_{new},Movement$.
Notation $c_{new}$ represents a new color of the robot.
Notation $Movement$ can be $Idle$, $\leftarrow$, $\rightarrow$, $\uparrow$, or $\downarrow$ and represents the movement: $Idle$ implies a robot does not move, and $\leftarrow$ (resp., $\rightarrow$, $\uparrow$, $\downarrow$) implies a robot moves toward the node corresponding to $M_{i,j-1}$ (resp., $M_{i,j+1}$, $M_{i-1,j}$, $M_{i+1,j}$) of the guard.
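Since the arrows of a rule are interpreted in the matched view's own frame, the global effect of a movement depends on which rotation matched. A small sketch of this bookkeeping (our encoding, not the paper's): represent each movement as a (row, column) offset in the North frame and rotate it clockwise once per quarter turn of the matched view.

```python
# Sketch: movement symbols as (row, col) offsets in the North frame.
OFFSETS = {
    'Idle': (0, 0),
    'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1),
}

def rotate_offset(offset, quarter_turns):
    # Rotate a (di, dj) offset 90 degrees clockwise, quarter_turns times:
    # north (-1,0) -> east (0,1) -> south (1,0) -> west (0,-1).
    di, dj = offset
    for _ in range(quarter_turns % 4):
        di, dj = dj, -di
    return (di, dj)
```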
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=1.0]{descAlg.pdf}
\end{center}
\caption{Description of a rule in an algorithm}
\label{ruleFig}
\end{figure}
\section{An Impossibility Result}
\label{secImpSSYNC}
In this section, we prove that, in the SSYNC model, two robots cannot achieve the terminating exploration if $\phi=1$ holds.
Since any execution in the SSYNC model can also occur in the ASYNC model, this impossibility also holds in the ASYNC model.
This implies that, in case of $\phi=1$, at least three robots are necessary to achieve the terminating exploration of grids in the SSYNC and ASYNC models.
In the following, we use terms of end nodes and inner nodes.
We say node $v$ is an end node if the degree of $v$ is smaller than four.
We say node $v$ is an inner node if the distance from $v$ to every end node is at least three.
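These two notions are easy to make concrete. In the sketch below (helper names are ours), an end node is exactly a border node (degree smaller than four), so the distance from $v_{i,j}$ to the nearest end node is $\min(i,\,m-1-i,\,j,\,n-1-j)$; for a $9\times 9$ grid this yields exactly nine inner nodes, matching the count used in the proof.

```python
# Sketch: classifying end nodes and inner nodes of an m x n grid.

def degree(m, n, i, j):
    # Number of existing neighbors of v_{i,j}.
    return sum(1 for (di, dj) in [(-1, 0), (1, 0), (0, -1), (0, 1)]
               if 0 <= i + di < m and 0 <= j + dj < n)

def is_end_node(m, n, i, j):
    return degree(m, n, i, j) < 4

def is_inner_node(m, n, i, j):
    # End nodes are exactly the border nodes, so the distance to the
    # nearest end node is min(i, m-1-i, j, n-1-j).
    return min(i, m - 1 - i, j, n - 1 - j) >= 3

def count_inner_nodes(m, n):
    return sum(1 for i in range(m) for j in range(n)
               if is_inner_node(m, n, i, j))
```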
\begin{theorem}
\label{grid-imp-ssync}
In case of $\phi=1$ and $k=2$, no algorithm solves the terminating exploration of grids in the SSYNC model. This holds regardless of the number of colors and of whether robots share a common chirality.
\end{theorem}
\begin{proof}
For contradiction, we assume that such an algorithm ${\cal A}$ exists.
Consider an execution $E = C_0,C_1,...$ of ${\cal A}$ in an $m\times n$ grid $G$ that satisfies $m\ge 9$ and $n\ge 9$.
Let $i$ be the minimum index such that some robot occupies an inner node at $C_i$.
Let $r_1$ be a robot that occupies an inner node at $C_i$ and $r_2$ be another robot.
Let $d$ be the distance between $r_1$ and $r_2$ at $C_i$.
We consider two cases: (1) $d \geq 2$ and (2) $d\leq 1$.
Consider Case 1, that is, $d \geq 2$ holds.
Let $v_1$ and $v_2$ be nodes that host $r_1$ and $r_2$, respectively, at $C_i$.
We further consider two sub-cases: (1-1) $v_2$ is not an end node, and (1-2) $v_2$ is an end node.
First assume that $v_2$ is not an end node (Case 1-1).
In this case, we can define nodes $v'_1$ and $v'_2$ such that $v'_1$ is a neighbor of $v_1$, $v'_2$ is a neighbor of $v_2$, $v'_2$ is not an end node, and the distance between nodes $w_1$ and $w_2$ is at least two for any $w_1\in\{v_1,v'_1\}$ and any $w_2\in\{v_2,v'_2\}$.
Then we can prove that the scheduler can make $r_1$ and $r_2$ stay on nodes in $\{v_1,v'_1\}$ and $\{v_2,v'_2\}$, respectively, forever after $C_i$.
Consider configuration $C$ such that $r_1$ and $r_2$ stay on nodes in $\{v_1,v'_1\}$ and $\{v_2,v'_2\}$, respectively.
Since $r_1$ and $r_2$ cannot observe each other and they are not on end nodes, $r_x$ ($x\in\{1,2\}$) cannot distinguish directions, that is, $r_x$ obtains four identical views when it takes a snapshot.
This implies that, when $r_x$ moves, the scheduler can decide which direction $r_x$ moves toward.
Hence, if $r_1$ moves, the scheduler can move $r_1$ to another node in $\{v_1,v'_1\}$.
Similarly, if $r_2$ moves, the scheduler can move $r_2$ to another node in $\{v_2,v'_2\}$.
This implies that, at the configuration after $C$, $r_1$ and $r_2$ stay on nodes in $\{v_1,v'_1\}$ and $\{v_2,v'_2\}$, respectively.
Hence, inductively, after $C_i$, robots $r_1$ and $r_2$ continue to stay on nodes in $\{v_1,v'_1\}$ and $\{v_2,v'_2\}$, respectively.
This means that robots can visit at most two inner nodes until $C_i$ and visit at most two other inner nodes after $C_i$.
Since the number of inner nodes in $G$ is at least nine, robots cannot achieve the terminating exploration.
Next assume that $v_2$ is an end node (Case 1-2).
Let $v'_1$ be an inner node that is a neighbor of $v_1$.
Similarly to Case 1-1, we can prove that, if $r_1$ never observes $r_2$, $r_1$ continues to stay on nodes in $\{v_1,v'_1\}$.
This implies that, to achieve the terminating exploration, $r_2$ moves toward $r_1$ or visits the remaining nodes by itself.
In either case, $r_2$ eventually leaves the end nodes, which reduces the situation to Case 1-1.
Consider Case 2, that is, $d\leq 1$ holds.
Let $v_1$ be a node that hosts $r_1$.
Let $v_2$ be a node that hosts $r_2$ if $d=1$, and a neighbor of $v_1$ if $d=0$.
We can prove that, as long as each robot moves toward the other robot or stays on its current node, the robots continue to stay on nodes in $\{v_1,v_2\}$: if the two robots stay on different nodes, they can only move toward each other's node, and if the two robots stay on a single node $v_1$ or $v_2$, the scheduler can move them to the other node in $\{v_1,v_2\}$.
Hence, eventually a robot moves to another node, say $v_3$, when the distance between two robots is one.
In this moment, the scheduler activates only this robot.
After the movement, the distance between $r_1$ and $r_2$ is two.
Similarly to Case 1, after the configuration, robots can visit only two other inner nodes.
This implies that robots can visit at most two inner nodes ($v_1$ and $v_2$) until $C_i$ and visit at most three other inner nodes ($v_3$ and two other inner nodes) after $C_i$.
Since the number of inner nodes in $G$ is at least nine, they cannot achieve the terminating exploration.
This is a contradiction.
\end{proof}
Note that this impossibility result also holds for the perpetual exploration because the proof of Theorem \ref{grid-imp-ssync} shows that robots cannot visit some nodes in this case.
\section{Terminating Grid Exploration Algorithms}
\subsection{Overview}
In this subsection, we give the overview of our algorithms.
All of our algorithms make robots explore the grid according to the arrow in Fig.\,\ref{routeFig}.
In other words, robots start exploration from the northwest corner and repeat the following behaviors:
\begin{enumerate}
\item Proceed east: Robots go straight to the east end of the grid.
\item Turn west: They go one step south and turn west.
\item Proceed west: Robots go straight to the west end of the grid.
\item Turn east: They go one step south and turn east.
\end{enumerate}
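The route of Fig.\,\ref{routeFig} induces a boustrophedon order on the nodes. The following sketch (not one of the paper's algorithms; it only enumerates the intended visiting order) makes the parity behavior explicit: for odd $m$ the route ends at the southeast corner, for even $m$ at the southwest corner, which is the case distinction used in the termination arguments below.

```python
# Sketch: the exploration order induced by the route of Fig. routeFig.
# Start at the northwest corner; sweep each row alternately east and
# west, stepping one row south between sweeps.

def snake_route(m, n):
    route = []
    for i in range(m):
        cols = range(n) if i % 2 == 0 else range(n - 1, -1, -1)
        route.extend((i, j) for j in cols)
    return route
```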
In each algorithm, we implement the behaviors of proceeding and turning.
While proceeding, robots recognize their forward direction by their form.
In the FSYNC model, since all robots are activated at every instant, they move forward at every instant and keep their initial form.
The robots repeat this behavior until they reach the end of the grid.
On the other hand, in the SSYNC and ASYNC models, not all robots are activated at the same time.
For this reason, we propose a way to make robots move forward by moving a single robot at every instant.
The difficult part is to implement the behaviors of turning.
Since robots do not know global directions, they must understand the south direction from the local information.
We realize this in two different approaches.
The first approach is to keep robots in two rows when proceeding east or west.
By making different forms in north and south rows, robots distinguish the two directions.
Mainly we use this approach in the case of no common chirality.
The second approach is used only in the case of a common chirality.
In this approach, robots change their form of proceeding depending on the directions.
That is, robots distinguish the east and west directions by their form.
In the case of a common chirality, robots can go south by turning right (resp. left) when they proceed east (resp. west).
In the second approach, robots do not have to keep themselves in two rows when proceeding.
This is the main reason why we can reduce the number of robots in the case of a common chirality.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=10cm]{route.pdf}
\end{center}
\caption{Route of grid exploration with our algorithm}
\label{routeFig}
\end{figure}
In the following subsections, we give terminating grid exploration algorithms in various assumptions.
We explain a set of rules and an execution from an initial configuration with figures.
In the explanations, we mention rules that can be applied in each configuration.
We omit explanations why other rules cannot be applied, but readers can easily check it by comparing the configuration and the set of rules.
\subsection{Algorithms for the FSYNC model}
In this subsection, we give terminating grid exploration algorithms for the FSYNC model.
\subsubsection{$\phi=2$, $\ell=2$, a common chirality, and $k=2$}
\label{secF22T2}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in case of $\phi=2$, $\ell=2$, a common chirality, and $k=2$.
A set of colors is $Col=\{{\textsf G}, {\textsf W}\}$.
The algorithm is given in Algorithm \ref{algorithmF22T2}.
\begin{algorithm}[t!]
\caption{Fully Synchronous Terminating Exploration for $\phi=2,\,\ell=2,$ $k=2$ with a Common Chirality}
\label{algorithmF22T2}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{F22T2.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
From the initial configuration, robots with colors ${\textsf G}$ and ${\textsf W}$ can execute rules $R1$ and $R2$, respectively.
Hence, they proceed east while keeping the form.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestF22T2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestF22T2.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmF22T2}}
\label{turnWestF22T2}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestF22T2}(a)).
From this configuration, the robot with color ${\textsf G}$ moves south by rule $R3$, and hence the configuration becomes one in Fig.\,\ref{turnWestF22T2}(b).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R4$.
At the same time, the robot with color ${\textsf G}$ moves west by rule $R5$.
Hence, the configuration becomes one in Fig.\,\ref{turnWestF22T2}(c).
\paragraph{Proceeding west.}
From the configuration in Fig.\,\ref{turnWestF22T2}(c), the robot with color ${\textsf G}$ and the robot with color ${\textsf W}$ can execute rules $R6$ and $R7$, respectively.
Hence, they proceed west while keeping the form.
\paragraph{Turning east.}
The process of turning east is shown in Fig.\,\ref{turnEastF22T2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnEastF22T2.pdf}
\end{center}
\caption{Turning east in an execution of Algorithm\,\ref{algorithmF22T2}}
\label{turnEastF22T2}
\end{figure}
After robots proceed west, they reach the west end of the grid (Fig.\,\ref{turnEastF22T2}(a)).
From this configuration, the robot with color ${\textsf G}$ moves south by rule $R8$.
At the same time, the robot with color ${\textsf W}$ moves by rule $R7$.
Hence, the configuration becomes one in Fig.\,\ref{turnEastF22T2}(b).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R9$, and hence the configuration becomes one in Fig.\,\ref{turnEastF22T2}(c).
From this configuration, two robots can proceed east again.
\paragraph{End of exploration.}
After robots visit all nodes and reach a south corner of the grid, the configuration becomes terminal.
In case that $m$ is odd, two robots visit the south end nodes while proceeding east, and hence they reach the southeast corner.
Immediately after node $v_{m-1, n-1}$ is visited, the configuration is $\{(v_{m-1,n-2},\{{\textsf G}\}),(v_{m-1,n-1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
In case that $m$ is even, two robots visit the south end nodes while proceeding west, and hence they reach the southwest corner.
Immediately after node $v_{m-1, 0}$ is visited, the configuration is $\{(v_{m-1,0},\{{\textsf G}\}),(v_{m-1,2},\{{\textsf W}\})\}$.
From this configuration, robots with colors ${\textsf G}$ and ${\textsf W}$ move by rules $R10$ and $R7$, respectively.
Hence, the configuration becomes $\{(v_{m-1,1},\{{\textsf G},{\textsf W}\})\}$.
At this configuration, no robots are enabled.
\subsubsection{$\phi=2$, $\ell=2$, no common chirality, and $k=3$}
\label{secF22F3}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in case of $\phi=2$, $\ell=2$, no common chirality, and $k=3$.
A set of colors is $Col=\{{\textsf G}, {\textsf W}\}$.
The algorithm is given in Algorithm \ref{algorithmF22F3}.
\begin{algorithm}[tbp]
\caption{Fully Synchronous Terminating Exploration for $\phi=2,\,\ell=2,$ $k=3$ Without a Common Chirality}
\label{algorithmF22F3}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf G}\}),(v_{1,0},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{F22F3.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
At the initial configuration, the robot on $v_{0,1}$ can execute rule $R1$, the robot on $v_{0,0}$ can execute rule $R2$, and the robot on $v_{1,0}$ can execute rule $R3$.
By repeatedly executing those rules, robots proceed east while keeping the form.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestF22F3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestF22F3.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmF22F3}}
\label{turnWestF22F3}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestF22F3}(a)).
From this configuration, two robots on west nodes move south by rules $R4$ and $R5$.
Hence, the configuration becomes one in Fig.\,\ref{turnWestF22F3}(b).
From this configuration, the robot with color ${\textsf G}$ at the east end of the grid moves south by rule $R6$ and the robot with color ${\textsf W}$ moves east by rule $R7$.
Consequently, the configuration becomes one in Fig.\,\ref{turnWestF22F3}(c).
\paragraph{Proceeding west and turning east.}
The form of robots in Fig.\,\ref{turnWestF22F3}(c) is a mirror image of the one that robots make to proceed east.
Hence, robots proceed west and turn east with the same rules as proceeding east and turning west, respectively.
\paragraph{End of exploration.}
In case that $m$ is odd, robots visit the south end nodes while proceeding west.
Eventually, the configuration becomes $\{(v_{m-2,0},\{{\textsf G}\}),(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
Node $v_{m-1,0}$ has not been visited yet.
From this configuration, the robot on $v_{m-2,0}$ moves to $v_{m-1,0}$ by rule $R8$, and hence the configuration becomes $\{(v_{m-1,0},\{{\textsf G}\}),(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
In case that $m$ is even, robots terminate the algorithm similarly to the odd case.
\subsubsection{$\phi=2$, $\ell=1$, a common chirality, and $k=3$}
\label{secF21T3}
In executions of Algorithm \ref{algorithmF22T2}, robots do not change their colors and robots with different colors do not occupy a single node.
Therefore, by representing the robot of color ${\textsf W}$ in Algorithm \ref{algorithmF22T2} with two robots of color ${\textsf G}$, we can construct a terminating exploration algorithm in case of $\phi=2$, $\ell=1$, a common chirality, and $k=3$.
\subsubsection{$\phi=2$, $\ell=1$, no common chirality, and $k=4$}
\label{secF21F4}
In executions of Algorithm \ref{algorithmF22F3}, robots do not change their colors and robots with different colors do not occupy a single node.
Therefore, by representing the robot of color ${\textsf W}$ in Algorithm \ref{algorithmF22F3} with two robots of color ${\textsf G}$, we can construct a terminating exploration algorithm in case of $\phi=2$, $\ell=1$, no common chirality, and $k=4$.
\subsubsection{$\phi=1$, $\ell=3$, a common chirality, and $k=2$}
\label{secF13T2}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in case of $\phi=1$, $\ell=3$, a common chirality, and $k=2$.
A set of colors is $Col=\{{\textsf G}, {\textsf W}, {\textsf B}\}$.
The algorithm is given in Algorithm \ref{algorithmF13T2}.
\begin{algorithm}[tbp]
\caption{Fully Synchronous Terminating Exploration for $\phi=1,\,\ell=3,$ $k=2$ with a Common Chirality}
\label{algorithmF13T2}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{F13T2.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
From the initial configuration, robots with colors ${\textsf W}$ and ${\textsf G}$ can execute rules $R1$ and $R2$, respectively.
Hence, they proceed east while keeping the form.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestF13T2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestF13T2.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmF13T2}}
\label{turnWestF13T2}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestF13T2}(a)).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R3$.
At the same time, the robot with color ${\textsf G}$ moves east by rule $R2$.
Hence, the configuration becomes one in Fig.\,\ref{turnWestF13T2}(b).
From this configuration, the robot on a south node changes its color to ${\textsf B}$ and moves west by rule $R4$.
At the same time, the robot on a north node moves south by rule $R5$.
Consequently, the configuration becomes one in Fig.\,\ref{turnWestF13T2}(c).
\paragraph{Proceeding west.}
From the configuration in Fig.\,\ref{turnWestF13T2}(c), the robot with color ${\textsf B}$ and the robot with color ${\textsf G}$ can execute rules $R6$ and $R7$, respectively.
Hence, they proceed west while keeping the form.
\paragraph{Turning east.}
The process of turning east is shown in Fig.\,\ref{turnEastF13T2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnEastF13T2.pdf}
\end{center}
\caption{Turning east in an execution of Algorithm\,\ref{algorithmF13T2}}
\label{turnEastF13T2}
\end{figure}
After robots proceed west, they reach the west end of the grid (Fig.\,\ref{turnEastF13T2}(a)).
From this configuration, the robot with color ${\textsf B}$ moves south by rule $R8$.
At the same time, the robot with color ${\textsf G}$ moves west by rule $R7$.
Hence, the configuration becomes one in Fig.\,\ref{turnEastF13T2}(b).
From this configuration, the robot with color ${\textsf B}$ changes its color to ${\textsf W}$ and moves east by rule $R9$.
At the same time, the robot with color ${\textsf G}$ moves south by rule $R10$, and hence the configuration becomes one in Fig.\,\ref{turnEastF13T2}(c).
From this configuration, two robots can proceed east again.
\paragraph{End of exploration.}
In case that $m$ is odd, two robots visit the south end nodes while proceeding east, and hence they reach the southeast corner.
Immediately after node $v_{m-1, n-1}$ is visited, the configuration is $\{(v_{m-1,n-2},\{{\textsf G}\}),(v_{m-1,n-1},\{{\textsf W}\})\}$.
From this configuration, the robot with color ${\textsf G}$ moves, and hence the configuration becomes $\{(v_{m-1,n-1},\{{\textsf G},{\textsf W}\})\}$.
At this configuration, no robots are enabled.
In case that $m$ is even, two robots visit the south end nodes while proceeding west, and hence they reach the southwest corner.
Immediately after node $v_{m-1, 0}$ is visited, the configuration is $\{(v_{m-1,0},\{{\textsf B}\}),(v_{m-1,1},\{{\textsf G}\})\}$.
From this configuration, the robot with color ${\textsf G}$ moves by rule $R7$, and hence the configuration becomes $\{(v_{m-1,0},\{{\textsf G},{\textsf B}\})\}$.
At this configuration, no robots are enabled.
\subsubsection{$\phi=1$, $\ell=3$, no common chirality, and $k=4$}
\label{secF13F4}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in case of $\phi=1$, $\ell=3$, no common chirality, and $k=4$.
A set of colors is $Col=\{{\textsf G}, {\textsf W}, {\textsf B}\}$.
The algorithm is given in Algorithm \ref{algorithmF13F4}.
\begin{algorithm}[tbp]
\caption{Fully Synchronous Terminating Exploration for $\phi=1,\,\ell=3,$ $k=4$ Without a Common Chirality}
\label{algorithmF13F4}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\}),(v_{1,0},\{{\textsf B}\}),(v_{1,1},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{F13F4.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
At the initial configuration, the robot on $v_{0,1}$ can execute rule $R1$, the robot on $v_{0,0}$ can execute rule $R2$, the robot on $v_{1,1}$ can execute rule $R3$, and the robot on $v_{1,0}$ can execute rule $R4$.
By repeatedly executing those rules, robots proceed east while keeping the form.
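The effect of one fully synchronous round in which rules $R1$--$R4$ fire simultaneously can be replayed on the configuration notation used in the text. The guards of the rules are given only in the figure, so this sketch hard-codes their combined effect (every robot moves one node east):

```python
from collections import Counter

def step_east(config):
    """One FSYNC round in which every robot moves one node east
    (the combined effect of rules R1-R4 described above)."""
    out = {}
    for (i, j), colors in config.items():
        dest = out.setdefault((i, j + 1), Counter())
        dest.update(colors)
    return out

config = {(0, 0): Counter({"G": 1}), (0, 1): Counter({"W": 1}),
          (1, 0): Counter({"B": 1}), (1, 1): Counter({"W": 1})}
config = step_east(config)
# The 2x2 form is preserved, shifted one column east.
assert config == {(0, 1): Counter({"G": 1}), (0, 2): Counter({"W": 1}),
                  (1, 1): Counter({"B": 1}), (1, 2): Counter({"W": 1})}
```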
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestF13F4}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestF13F4.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmF13F4}}
\label{turnWestF13F4}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestF13F4}(a)).
From this configuration, two robots on east nodes move south by rules $R5$ and $R6$.
At the same time, the other robots move east by rules $R2$ and $R4$.
Hence, the configuration becomes one in Fig.\,\ref{turnWestF13F4}(b).
From this configuration, two robots with color ${\textsf W}$ move west by rules $R7$ and $R8$.
At the same time, robots with color ${\textsf B}$ and ${\textsf G}$ move south by rules $R9$ and $R10$, respectively.
Consequently, the configuration becomes one in Fig.\,\ref{turnWestF13F4}(c).
\paragraph{Proceeding west and turning east.}
The form of robots in Fig.\,\ref{turnWestF13F4}(c) is a mirror image of the one that robots make to proceed east.
Hence, robots proceed west and turn east with the same rules as proceeding east and turning west, respectively.
\paragraph{End of exploration.}
If $m$ is odd, robots visit the south end nodes while proceeding west, and hence they reach the southwest corner.
Immediately after node $v_{m-1, 0}$ is visited, the configuration is $\{(v_{m-2,0},\{{\textsf W}\}),(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,0},\{{\textsf W}\}),(v_{m-1,1},\{{\textsf B}\})\}$.
From this configuration, the robot on $v_{m-2,0}$ moves to $v_{m-1,0}$ by rule $R5$.
At the same time, robots with colors ${\textsf G}$ and ${\textsf B}$ move west by rules $R2$ and $R4$, respectively.
Hence, the configuration becomes $\{(v_{m-2,0},\{{\textsf G}\}),(v_{m-1,0},\{{\textsf W},{\textsf W},{\textsf B}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, robots terminate the algorithm similarly to the odd case.
\subsubsection{$\phi=1$, $\ell=2$, a common chirality, and $k=3$}
\label{secF12T3}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in the case of $\phi=1$, $\ell=2$, a common chirality, and $k=3$.
The set of colors is $Col=\{{\textsf G}, {\textsf W}, {\textsf B}\}$.
The algorithm is given in Algorithm \ref{algorithmF12T3}.
\begin{algorithm}[tbp]
\caption{Fully Synchronous Terminating Exploration for $\phi=1,\,\ell=2,$ $k=3$ with Common Chirality}
\label{algorithmF12T3}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf G}\}),(v_{1,0},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{F12T3.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
At the initial configuration, the robot on $v_{0,1}$ can execute rule $R1$, the robot on $v_{0,0}$ can execute rule $R2$, and the robot on $v_{1,0}$ can execute rule $R3$.
By repeatedly executing those rules, robots proceed east while keeping the form.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestF12T3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestF12T3.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmF12T3}}
\label{turnWestF12T3}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestF12T3}(a)).
From this configuration, the robot at the east end moves south by rule $R4$.
At the same time, the other robots move east by rules $R2$ and $R3$.
Hence, the configuration becomes one in Fig.\,\ref{turnWestF12T3}(b).
From this configuration, the robot with color ${\textsf G}$ on a south node moves south by rule $R5$.
At the same time, the robot with color ${\textsf W}$ moves west by rule $R6$, and the robot on a north node changes its color to ${\textsf W}$ and moves south by rule $R7$.
Consequently, the configuration becomes one in Fig.\,\ref{turnWestF12T3}(c).
\paragraph{Proceeding west.}
At the configuration in Fig.\,\ref{turnWestF12T3}(c), the robot on a west node can execute rule $R8$, the robot with color ${\textsf W}$ on an east node can execute rule $R9$, and the robot with color ${\textsf G}$ can execute rule $R10$.
Hence, they proceed west while keeping the form.
\paragraph{Turning east.}
The process of turning east is shown in Fig.\,\ref{turnEastF12T3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnEastF12T3.pdf}
\end{center}
\caption{Turning east in an execution of Algorithm\,\ref{algorithmF12T3}}
\label{turnEastF12T3}
\end{figure}
After robots proceed west, they reach the west end of the grid (Fig.\,\ref{turnEastF12T3}(a)).
From this configuration, the robot on a west node moves south by rule $R11$.
At the same time, the other robots move west by rules $R9$ and $R10$.
Hence, the configuration becomes one in Fig.\,\ref{turnEastF12T3}(b).
From this configuration, the robot with color ${\textsf W}$ on a south node moves south by rule $R12$.
At the same time, the robot with color ${\textsf G}$ moves east by rule $R13$, and the robot on a north node changes its color to ${\textsf G}$ and moves south by rule $R14$.
Hence, the configuration becomes one in Fig.\,\ref{turnEastF12T3}(c).
From this configuration, three robots can proceed east again.
\paragraph{End of exploration.}
If $m$ is odd, robots visit the south end nodes while proceeding west.
Eventually, the configuration becomes $\{(v_{m-2,0},\{{\textsf W}\}),(v_{m-2,1},\{{\textsf W}\}),(v_{m-1,1},\{{\textsf G}\})\}$.
Node $v_{m-1,0}$ has not been visited yet.
From this configuration, the robot on $v_{m-2,0}$ moves to $v_{m-1,0}$ by rule $R11$.
At the same time, the other robots move west by rules $R9$ and $R10$, and hence the configuration becomes $\{(v_{m-2,0},\{{\textsf W}\}),(v_{m-1,0},\{{\textsf G},{\textsf W}\})\}$.
From this configuration, the robot on $v_{m-2,0}$ moves to $v_{m-1,0}$ by rule $R14$, and hence the configuration becomes $\{(v_{m-1,0},\{{\textsf G},{\textsf G},{\textsf W}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, robots visit the south end nodes while proceeding east.
Eventually, the configuration becomes $\{(v_{m-2,n-2},\{{\textsf G}\}),(v_{m-2,n-1},\{{\textsf G}\}),(v_{m-1,n-2},\{{\textsf W}\})\}$.
Node $v_{m-1,n-1}$ has not been visited yet.
From this configuration, the robot on $v_{m-2,n-1}$ moves to $v_{m-1,n-1}$ by rule $R4$.
At the same time, the other robots move east by rules $R2$ and $R3$, and hence the configuration becomes $\{(v_{m-2,n-1},\{{\textsf G}\}),(v_{m-1,n-1},\{{\textsf G},{\textsf W}\})\}$.
From this configuration, the robot on $v_{m-2,n-1}$ moves to $v_{m-1,n-1}$ by rule $R7$, and hence the configuration becomes $\{(v_{m-1,n-1},\{{\textsf G},{\textsf W},{\textsf W}\})\}$.
At this configuration, no robots are enabled.
\subsubsection{$\phi=1$, $\ell=2$, no common chirality, and $k=5$}
\label{secF12F5}
In executions of Algorithm \ref{algorithmF13F4}, robots never change their colors, and robots with colors ${\textsf G}$ and ${\textsf B}$ never occupy a single node at the same time.
Therefore, by representing the robot of color ${\textsf B}$ in Algorithm \ref{algorithmF13F4} with two robots of color ${\textsf G}$, we can construct a terminating exploration algorithm in the case of $\phi=1$, $\ell=2$, no common chirality, and $k=5$.
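The reduction can be made concrete as a translation of configurations. In the sketch below (our illustration, not part of the algorithm's formal description), a node holding two ${\textsf G}$ robots is decoded as a single ${\textsf B}$ robot, which is unambiguous because ${\textsf G}$ and ${\textsf B}$ never share a node in Algorithm \ref{algorithmF13F4} and colors never change:

```python
from collections import Counter

def decode(colors):
    """Recover the 3-color view of Algorithm F13F4 from its 2-color encoding:
    two co-located G robots stand for one B robot."""
    if colors.get("G", 0) == 2:
        return Counter({"B": 1})
    return Counter(colors)

encoded = {(0, 0): Counter({"G": 1}), (0, 1): Counter({"W": 1}),
           (1, 0): Counter({"G": 2}), (1, 1): Counter({"W": 1})}
decoded = {v: decode(c) for v, c in encoded.items()}
assert decoded[(1, 0)] == Counter({"B": 1})   # the stacked pair reads as B
assert decoded[(0, 0)] == Counter({"G": 1})   # a lone G robot is unchanged
```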
\subsection{Algorithms for the ASYNC model}
In this subsection, we give terminating exploration algorithms for the ASYNC model.
Clearly, robots can also achieve terminating exploration with these algorithms in the SSYNC and FSYNC models.
\subsubsection{$\phi=2$, $\ell=3$, a common chirality, and $k=2$}
\label{secA23T2}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in the case of $\phi=2$, $\ell=3$, a common chirality, and $k=2$.
The set of colors is $Col=\{{\textsf G}, {\textsf W}, {\textsf B}\}$.
The algorithm is given in Algorithm \ref{algorithmA23T2}.
\begin{algorithm}[tbp]
\caption{Asynchronous Terminating Exploration for $\phi=2,\,\ell=3,$ $k=2$ with Common Chirality}
\label{algorithmA23T2}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{A23T2.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
From the initial configuration, the robot with color ${\textsf W}$ moves east by rule $R1$, and hence the configuration becomes $\{(v_{0,0},\{{\textsf G}\}),(v_{0,2},\{{\textsf W}\})\}$.
From this configuration, the robot with color ${\textsf G}$ moves east by rule $R2$, and hence the configuration becomes $\{(v_{0,1},\{{\textsf G}\}),(v_{0,2},\{{\textsf W}\})\}$.
After that, robots proceed east while keeping the form by repeatedly executing those rules.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestA23T2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestA23T2.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmA23T2}}
\label{turnWestA23T2}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestA23T2}(a)).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R3$, and hence the configuration becomes one in Fig.\,\ref{turnWestA23T2}(b).
From this configuration, the robot with color ${\textsf G}$ changes its color to ${\textsf B}$ and moves south by rule $R4$.
In the ASYNC model, after the robot with color ${\textsf G}$ changes its color, the other robot may observe the intermediate configuration (Fig.\,\ref{turnWestA23T2}(c)).
However, there are no rules that the other robot can execute in the intermediate configuration.
Consequently, the configuration becomes one in Fig.\,\ref{turnWestA23T2}(d).
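The safety of this two-stage execution of rule $R4$ can be phrased as a small check: no guard of the ${\textsf W}$ robot fires in the intermediate configuration. The guards below are illustrative assumptions written for this sketch (the actual guards are specified only in the rule figure); the point is only the shape of the argument:

```python
# Illustrative guards for the W robot, written as predicates on its local view.
# These are assumptions for this sketch; the real guards are in the rule figure.
w_guards = {
    "R1": lambda v: v["behind"] == "G" and not v["at_east_wall"],  # proceed east
    "R3": lambda v: v["behind"] == "G" and v["at_east_wall"],      # start turning
}

# Intermediate configuration of Fig. turnWestA23T2(c): the G robot has already
# recolored itself B but has not yet moved south.
intermediate_view = {"behind": "B", "at_east_wall": True}
assert not any(guard(intermediate_view) for guard in w_guards.values())
```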
\paragraph{Proceeding west.}
From the configuration in Fig.\,\ref{turnWestA23T2}(d), the robot with color ${\textsf B}$ moves west by rule $R5$.
Next, the robot with color ${\textsf W}$ moves west by rule $R6$.
After that, robots proceed west while keeping the form by repeatedly executing those rules.
\paragraph{Turning east.}
The process of turning east is shown in Fig.\,\ref{turnEastA23T2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnEastA23T2.pdf}
\end{center}
\caption{Turning east in an execution of Algorithm\,\ref{algorithmA23T2}}
\label{turnEastA23T2}
\end{figure}
After robots proceed west, they reach the west end of the grid (Fig.\,\ref{turnEastA23T2}(a)).
From this configuration, the robot with color ${\textsf B}$ moves south by rule $R7$, and hence the configuration becomes one in Fig.\,\ref{turnEastA23T2}(b).
From this configuration, the robot with color ${\textsf B}$ changes its color to ${\textsf G}$ by rule $R8$, and hence the configuration becomes one in Fig.\,\ref{turnEastA23T2}(c).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R9$, and hence the configuration becomes one in Fig.\,\ref{turnEastA23T2}(d).
From this configuration, two robots can proceed east again.
\paragraph{End of exploration.}
If $m$ is odd, the two robots visit the south end nodes while proceeding east, and hence they reach the southeast corner.
Immediately after node $v_{m-1, n-1}$ is visited, the configuration is $\{(v_{m-1,n-2},\{{\textsf G}\}),(v_{m-1,n-1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, the two robots visit the south end nodes while proceeding west, and hence they reach the southwest corner.
Immediately after node $v_{m-1, 0}$ is visited, the configuration is $\{(v_{m-1,0},\{{\textsf B}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
\subsubsection{$\phi=2$, $\ell=3$, no common chirality, and $k=3$}
\label{secA23F3}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in the case of $\phi=2$, $\ell=3$, no common chirality, and $k=3$.
The set of colors is $Col=\{{\textsf G}, {\textsf W}, {\textsf B}\}$.
The algorithm is given in Algorithm \ref{algorithmA23F3}.
\begin{algorithm}[tbp]
\caption{Asynchronous Terminating Exploration for $\phi=2,\,\ell=3,$ $k=3$ Without Common Chirality}
\label{algorithmA23F3}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\}),(v_{1,0},\{{\textsf B}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{A23F3.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
From the initial configuration, the robot with color ${\textsf B}$ moves east by rule $R1$, and hence the configuration becomes $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\}),(v_{1,1},\{{\textsf B}\})\}$.
From this configuration, the robot with color ${\textsf W}$ moves east by rule $R2$, and hence the configuration becomes $\{(v_{0,0},\{{\textsf G}\}),(v_{0,2},\{{\textsf W}\}),(v_{1,1},\{{\textsf B}\})\}$.
From this configuration, the robot with color ${\textsf G}$ moves east by rule $R3$, and hence the configuration becomes $\{(v_{0,1},\{{\textsf G}\}),(v_{0,2},\{{\textsf W}\}),(v_{1,1},\{{\textsf B}\})\}$.
After that, robots proceed east while keeping the form by repeatedly executing those rules.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestA23F3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestA23F3.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmA23F3}}
\label{turnWestA23F3}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestA23F3}(a)).
From this configuration, the robot with color ${\textsf B}$ moves south by rule $R4$, and hence the configuration becomes one in Fig.\,\ref{turnWestA23F3}(b).
From this configuration, the robot with color ${\textsf G}$ changes its color to ${\textsf W}$ and moves south by rule $R5$.
In the ASYNC model, after the robot with color ${\textsf G}$ changes its color, other robots may observe the intermediate configuration (Fig.\,\ref{turnWestA23F3}(c)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{turnWestA23F3}(d).
From this configuration, the robot with color ${\textsf B}$ moves east by rule $R6$, and hence the configuration becomes one in Fig.\,\ref{turnWestA23F3}(e).
From this configuration, the robot with color ${\textsf W}$ changes its color to ${\textsf G}$ and moves south by rule $R7$.
In the ASYNC model, after the robot with color ${\textsf W}$ changes its color, other robots may observe the intermediate configuration (Fig.\,\ref{turnWestA23F3}(f)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Consequently, the configuration becomes one in Fig.\,\ref{turnWestA23F3}(g).
\paragraph{Proceeding west and turning east.}
The form of robots in Fig.\,\ref{turnWestA23F3}(g) is a mirror image of the one that robots make to proceed east.
Hence, robots proceed west and turn east with the same rules as proceeding east and turning west, respectively.
\paragraph{End of exploration.}
If $m$ is odd, robots visit the south end nodes while proceeding west.
Eventually, the configuration becomes $\{(v_{m-2,0},\{{\textsf W}\}),(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,1},\{{\textsf B}\})\}$.
Node $v_{m-1,0}$ has not been visited yet.
From this configuration, the robot with color ${\textsf W}$ moves to $v_{m-1,0}$ by rule $R8$, and hence the configuration becomes $\{(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,0},\{{\textsf W}\}),(v_{m-1,1},\{{\textsf B}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, robots terminate the algorithm similarly to the odd case.
\subsubsection{$\phi=2$, $\ell=2$, a common chirality, and $k=3$}
\label{secA22T3}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in the case of $\phi=2$, $\ell=2$, a common chirality, and $k=3$.
The set of colors is $Col=\{{\textsf G}, {\textsf W}\}$.
The algorithm is given in Algorithm \ref{algorithmA22T3}.
\begin{algorithm}[tbp]
\caption{Asynchronous Terminating Exploration for $\phi=2,\,\ell=2,$ $k=3$ with Common Chirality}
\label{algorithmA22T3}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\}),(v_{1,0},\{{\textsf G}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{A22T3.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
From the initial configuration, the robot with color ${\textsf W}$ moves east by rule $R1$, and hence the configuration becomes $\{(v_{0,0},\{{\textsf G}\}),(v_{0,2},\{{\textsf W}\}),(v_{1,0},\{{\textsf G}\})\}$.
From this configuration, the robot on $v_{0,0}$ moves east by rule $R2$, and hence the configuration becomes $\{(v_{0,1},\{{\textsf G}\}),(v_{0,2},\{{\textsf W}\}),(v_{1,0},\{{\textsf G}\})\}$.
From this configuration, the robot on $v_{1,0}$ moves east by rule $R3$, and hence the configuration becomes $\{(v_{0,1},\{{\textsf G}\}),(v_{0,2},\{{\textsf W}\}),(v_{1,1},\{{\textsf G}\})\}$.
After that, robots proceed east while keeping the form by repeatedly executing those rules.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestA22T3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestA22T3.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmA22T3}}
\label{turnWestA22T3}
\end{figure}
After robots proceed east, they reach the east end of the grid (Fig.\,\ref{turnWestA22T3}(a)).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R4$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22T3}(b).
From this configuration, the robot with color ${\textsf G}$ on a south node changes its color to ${\textsf W}$ by rule $R5$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22T3}(c).
From this configuration, the robot with color ${\textsf G}$ moves east by rule $R6$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22T3}(d).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R7$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22T3}(e).
From this configuration, the robot with color ${\textsf G}$ moves south by rule $R8$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22T3}(f).
\paragraph{Proceeding west.}
From the configuration in Fig.\,\ref{turnWestA22T3}(f), the robot with color ${\textsf W}$ on a west node moves west by rule $R9$.
Next, the robot with color ${\textsf G}$ moves west by rule $R10$.
Then, the robot with color ${\textsf W}$ on an east node moves west by rule $R11$.
After that, robots proceed west while keeping the form by repeatedly executing those rules.
\paragraph{Turning east.}
The process of turning east in an execution of Algorithm \ref{algorithmA22T3} is shown in Fig.\,\ref{turnEastA22T3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnEastA22T3.pdf}
\end{center}
\caption{Turning east in an execution of Algorithm\,\ref{algorithmA22T3}}
\label{turnEastA22T3}
\end{figure}
After robots proceed west, they reach the west end of the grid (Fig.\,\ref{turnEastA22T3}(a)).
From this configuration, the robot with color ${\textsf W}$ on a west node moves south by rule $R12$, and hence the configuration becomes one in Fig.\,\ref{turnEastA22T3}(b).
From this configuration, the robot with color ${\textsf W}$ on a west node changes its color to ${\textsf G}$ by rule $R13$, and hence the configuration becomes one in Fig.\,\ref{turnEastA22T3}(c).
From this configuration, the robot with color ${\textsf G}$ on a north node moves west by rule $R14$, and hence the configuration becomes one in Fig.\,\ref{turnEastA22T3}(d).
From this configuration, the robot with color ${\textsf G}$ on a south node moves south by rule $R15$, and hence the configuration becomes one in Fig.\,\ref{turnEastA22T3}(e).
From this configuration, the robot with color ${\textsf G}$ on a north node moves south by rule $R16$, and hence the configuration becomes one in Fig.\,\ref{turnEastA22T3}(f).
From this configuration, the three robots can proceed east again.
\paragraph{End of exploration.}
If $m$ is odd, robots visit the south end nodes while proceeding west.
Eventually, the configuration becomes $\{(v_{m-2,0},\{{\textsf W}\}),(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
Node $v_{m-1,0}$ has not been visited yet.
From this configuration, the robot on $v_{m-2,0}$ moves to $v_{m-1,0}$ by rule $R12$, and hence the configuration becomes $\{(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,0},\{{\textsf W}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, robots visit the south end nodes while proceeding east.
Eventually, the configuration becomes $\{(v_{m-2,n-2},\{{\textsf G}\}),(v_{m-2,n-1},\{{\textsf W}\}),(v_{m-1,n-2},\{{\textsf G}\})\}$.
Node $v_{m-1,n-1}$ has not been visited yet.
From this configuration, the robot on $v_{m-2,n-1}$ moves to $v_{m-1,n-1}$ by rule $R4$, and hence the configuration becomes $\{(v_{m-2,n-2},\{{\textsf G}\}),(v_{m-1,n-2},\{{\textsf G}\}),(v_{m-1,n-1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
\subsubsection{$\phi=2$, $\ell=2$, no common chirality, and $k=4$}
\label{secA22F4}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in the case of $\phi=2$, $\ell=2$, no common chirality, and $k=4$.
The set of colors is $Col=\{{\textsf G}, {\textsf W}\}$.
The algorithm is given in Algorithm \ref{algorithmA22F4}.
\begin{algorithm}[tbp]
\caption{Asynchronous Terminating Exploration for $\phi=2,\,\ell=2,$ $k=4$ Without Common Chirality}
\label{algorithmA22F4}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\}),(v_{0,2},\{{\textsf W}\}),(v_{1,0},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{A22F4.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
The process of proceeding east is shown in Fig.\,\ref{ProceedEastA22F4}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{ProceedEastA22F4.pdf}
\end{center}
\caption{Proceeding east in an execution of Algorithm\,\ref{algorithmA22F4}}
\label{ProceedEastA22F4}
\end{figure}
At the initial configuration or at a configuration immediately after turning east, robots make the form in Fig.\,\ref{ProceedEastA22F4}(a).
From this configuration, the robot with color ${\textsf W}$ on a south node moves east by rule $R1$, and hence the configuration becomes one in Fig.\,\ref{ProceedEastA22F4}(b).
From this configuration, the robot with color ${\textsf W}$ on an east node moves east by rule $R2$, and hence the configuration becomes one in Fig.\,\ref{ProceedEastA22F4}(c).
From this configuration, the robot with color ${\textsf W}$ adjacent to the robot with color ${\textsf G}$ moves east by rule $R3$, and hence the configuration becomes one in Fig.\,\ref{ProceedEastA22F4}(d).
From this configuration, the robot with color ${\textsf G}$ moves east by rule $R4$.
After that, robots proceed east while keeping the form by repeatedly executing those rules.
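One full round of this pipelined gait can be replayed on coordinates. The robot labels below (`Ws` for the ${\textsf W}$ robot on the south row, `We` for the easternmost ${\textsf W}$, `Wm` for the ${\textsf W}$ adjacent to ${\textsf G}$) are our own naming for this sketch:

```python
# Initial form of Algorithm A22F4 as (row, column), cf. Fig. ProceedEastA22F4(a).
positions = {"G": (0, 0), "Wm": (0, 1), "We": (0, 2), "Ws": (1, 0)}

def east(pos):
    i, j = pos
    return (i, j + 1)

# Rules R1-R4 fire one robot at a time, in the order described in the text.
for robot in ["Ws", "We", "Wm", "G"]:
    positions[robot] = east(positions[robot])

# After a full round the form is intact, shifted one column east.
assert positions == {"G": (0, 1), "Wm": (0, 2), "We": (0, 3), "Ws": (1, 1)}
```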
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestA22F4}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestA22F4.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmA22F4}}
\label{turnWestA22F4}
\end{figure}
After robots proceed east, they reach the east end of the grid, and the configuration becomes one in Fig.\,\ref{turnWestA22F4}(a).
From this configuration, the robot at the east end moves south by rule $R5$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22F4}(b).
From this configuration, the robot with color ${\textsf W}$ on a north node changes its color to ${\textsf G}$ by rule $R6$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22F4}(c).
From this configuration, the robot with color ${\textsf G}$ on a west node moves south by rule $R7$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22F4}(d).
From this configuration, the robot with color ${\textsf G}$ on a north node moves east by rule $R8$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22F4}(e).
From this configuration, the robot with color ${\textsf G}$ on a west node changes its color to ${\textsf W}$ by rule $R9$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22F4}(f).
From this configuration, the robot with color ${\textsf W}$ on an east node moves south by rule $R10$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22F4}(g).
From this configuration, the robot with color ${\textsf G}$ moves south by rule $R4$, and hence the configuration becomes one in Fig.\,\ref{turnWestA22F4}(h).
\paragraph{Proceeding west and turning east.}
The form of robots in Fig.\,\ref{turnWestA22F4}(h) is a mirror image of the one that robots make to proceed east.
Hence, robots proceed west and turn east with the same rules as proceeding east and turning west, respectively.
\paragraph{End of exploration.}
If $m$ is odd, robots visit the south end nodes while proceeding west.
Eventually, the configuration becomes $\{(v_{m-2,0},\{{\textsf W}\}),(v_{m-2,1},\{{\textsf W}\}),(v_{m-2,2},\{{\textsf G}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
Node $v_{m-1,0}$ has not been visited yet.
From this configuration, the robot on $v_{m-2,0}$ moves to $v_{m-1,0}$ by rule $R5$, and hence the configuration becomes $\{(v_{m-2,1},\{{\textsf W}\}),(v_{m-2,2},\{{\textsf G}\}),(v_{m-1,0},\{{\textsf W}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, robots terminate the algorithm similarly to the odd case.
\subsubsection{$\phi=1$, $\ell=3$, a common chirality, and $k=3$}
\label{secA13T3}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq2, n\geq3)$ in the case of $\phi=1$, $\ell=3$, a common chirality, and $k=3$.
The set of colors is $Col=\{{\textsf G}, {\textsf W}, {\textsf B}\}$.
The algorithm is given in Algorithm \ref{algorithmA13T3}.
\begin{algorithm}[tbp]
\caption{Asynchronous Terminating Exploration for $\phi=1,\,\ell=3,$ $k=3$ with Common Chirality}
\label{algorithmA13T3}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\}),(v_{0,2},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{A13T3.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
The process of proceeding east is shown in Fig.\,\ref{ProceedEastA13T3}.
We use the same procedure as the ring exploration algorithm in \cite{Ooshita21:Ring}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{ProceedEastA13T3.pdf}
\end{center}
\caption{Proceeding east in an execution of Algorithm\,\ref{algorithmA13T3}}
\label{ProceedEastA13T3}
\end{figure}
At the initial configuration or at a configuration immediately after turning east, robots make the form in Fig.\,\ref{ProceedEastA13T3}(a).
From this configuration, the robot with color ${\textsf G}$ moves east by rule $R1$, and hence the configuration becomes one in Fig.\,\ref{ProceedEastA13T3}(b).
From this configuration, the robot with color ${\textsf W}$ on a west node changes its color to ${\textsf G}$ and moves east by rule $R2$.
In the ASYNC model, after it changes its color to ${\textsf G}$, other robots may observe the intermediate configuration (Fig.\,\ref{ProceedEastA13T3}(c)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{ProceedEastA13T3}(d).
From this configuration, the robot with color ${\textsf G}$ on an east node changes its color to ${\textsf W}$ and moves east by rule $R3$.
In the ASYNC model, after it changes its color to ${\textsf W}$, other robots may observe the intermediate configuration (Fig.\,\ref{ProceedEastA13T3}(e)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{ProceedEastA13T3}(f).
After that, robots proceed east while keeping the form by repeatedly executing those rules.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestA13T3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestA13T3.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmA13T3}}
\label{turnWestA13T3}
\end{figure}
After robots proceed east, they reach the east end of the grid, and the configuration becomes one in Fig.\,\ref{turnWestA13T3}(a).
From this configuration, the robot with color ${\textsf G}$ on an east node changes its color to ${\textsf B}$ and moves south by rule $R4$.
In the ASYNC model, after it changes its color to ${\textsf B}$, other robots may observe the intermediate configuration (Fig.\,\ref{turnWestA13T3}(b)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{turnWestA13T3}(c).
From this configuration, the robot with color ${\textsf G}$ moves east by rule $R1$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13T3}(d).
From this configuration, the robot with color ${\textsf G}$ moves south by rule $R5$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13T3}(e).
From this configuration, the robot with color ${\textsf G}$ changes its color to ${\textsf B}$ and moves west by rule $R6$.
In the ASYNC model, after it changes its color to ${\textsf B}$, other robots may observe the intermediate configuration (Fig.\,\ref{turnWestA13T3}(f)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{turnWestA13T3}(g).
From this configuration, the robot with color ${\textsf W}$ moves south by rule $R7$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13T3}(h).
\paragraph{Proceeding west.}
The process of proceeding west is similar to that of proceeding east.
Robots with colors ${\textsf W}$ and ${\textsf B}$ for proceeding west move in the same way as robots with colors ${\textsf G}$ and ${\textsf W}$ for proceeding east, respectively.
The form in Fig.\,\ref{turnWestA13T3}(h) corresponds to one in Fig.\,\ref{ProceedEastA13T3}(b).
Rules $R7$, $R8$, and $R9$ for proceeding west correspond to rules $R1$, $R2$, and $R3$ for proceeding east, respectively.
Hence, robots proceed west keeping the form by repeatedly executing those rules.
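The correspondence between the eastward and westward rule sets amounts to a color substitution together with a direction flip. A schematic check follows, with rules abbreviated as (color before, move direction, color after); this abstraction is ours, and the exact guards live in the rule figure:

```python
subst = {"G": "W", "W": "B"}                 # role of G is played by W, W by B
flip = {"east": "west", "west": "east"}

def mirror(rule):
    color, direction, new_color = rule
    return (subst[color], flip[direction], subst[new_color])

# Eastward rules R1-R3 as (color, move, color after), cf. Fig. ProceedEastA13T3.
east_rules = [("G", "east", "G"), ("W", "east", "G"), ("G", "east", "W")]

# Their mirrors are the westward rules R7-R9 described above.
assert [mirror(r) for r in east_rules] == [
    ("W", "west", "W"), ("B", "west", "W"), ("W", "west", "B")]
```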
\paragraph{Turning east.}
The process of turning east is shown in Fig.\,\ref{turnEastA13T3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnEastA13T3.pdf}
\end{center}
\caption{Turning east in an execution of Algorithm\,\ref{algorithmA13T3}}
\label{turnEastA13T3}
\end{figure}
After robots proceed west, they reach the west end of the grid (Fig.\,\ref{turnEastA13T3}(a)).
From this configuration, the robot with color ${\textsf W}$ on a west node changes its color to ${\textsf G}$ and moves south by rule $R10$.
In the ASYNC model, after it changes its color to ${\textsf G}$, other robots may observe the intermediate configuration (Fig.\,\ref{turnEastA13T3}(b)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{turnEastA13T3}(c).
From this configuration, the robot with color ${\textsf W}$ moves west by rule $R7$, and hence the configuration becomes one in Fig.\,\ref{turnEastA13T3}(d).
From this configuration, the robot with color ${\textsf W}$ changes its color to ${\textsf B}$ and moves south by rule $R11$.
In the ASYNC model, after it changes its color to ${\textsf B}$, other robots may observe the intermediate configuration (Fig.\,\ref{turnEastA13T3}(e)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{turnEastA13T3}(f).
From this configuration, the robot with color ${\textsf B}$ on a south node changes its color to ${\textsf G}$ and moves east by rule $R12$.
In the ASYNC model, after it changes its color to ${\textsf G}$, other robots may observe the intermediate configuration (Fig.\,\ref{turnEastA13T3}(g)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{turnEastA13T3}(h).
From this configuration, the robot with color ${\textsf B}$ moves south by rule $R13$, and hence the configuration becomes one in Fig.\,\ref{turnEastA13T3}(i).
From this configuration, the robot with color ${\textsf B}$ moves east by rule $R14$, and hence the configuration becomes one in Fig.\,\ref{turnEastA13T3}(j).
From this configuration, the robot with color ${\textsf B}$ changes its color to ${\textsf W}$ by rule $R15$, and hence the configuration becomes one in Fig.\,\ref{turnEastA13T3}(k).
From this configuration, robots can proceed east again since their form is the same as one in Fig.\,\ref{ProceedEastA13T3}(d).
\paragraph{End of exploration.}
If $m$ is odd, robots visit the south end nodes while proceeding east.
Eventually, the configuration becomes $\{(v_{m-1,n-2},\{{\textsf G}\}),(v_{m-1,n-1},\{{\textsf G},{\textsf W}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, robots visit the south end nodes while proceeding east.
Eventually, the configuration becomes $\{(v_{m-1,0},\{{\textsf W},{\textsf B}\}),(v_{m-1,1},\{{\textsf W}\})\}$.
At this configuration, no robots are enabled.
\subsubsection{$\phi=1$, $\ell=3$, no common chirality, and $k=6$}
\label{secA13F6}
We give a terminating exploration algorithm for $m\times n$ grids $(m\geq3, n\geq3)$ in case of $\phi=1$, $\ell=3$, no common chirality, and $k=6$.
A set of colors is $Col=\{{\textsf G}, {\textsf W}, {\textsf B}\}$.
The algorithm is given in Algorithm \ref{algorithmA13F6}.
\begin{algorithm}[tbp]
\caption{Asynchronous Terminating Exploration for $\phi=1,\,\ell=3,$ $k=6$ Without Common Chirality}
\label{algorithmA13F6}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Initial configuration}}
\REQUIRE
\STATE $\{(v_{0,0},\{{\textsf G}\}),(v_{0,1},\{{\textsf W}\}),(v_{0,2},\{{\textsf W}\}),(v_{1,0},\{{\textsf W},{\textsf B}\}),(v_{1,1},\{{\textsf W}\})\}$
\renewcommand{\algorithmicrequire}{\textbf{Rules}}
\REQUIRE
\STATE
\centering
\includegraphics[width=0.95\textwidth]{A13F6.pdf}
\end{algorithmic}
\end{algorithm}
\paragraph{Proceeding east.}
The process of proceeding east is shown in Fig.\,\ref{ProceedEastA13F6-1} and Fig.\,\ref{ProceedEastA13F6-2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{ProceedEastA13F6-1.pdf}
\end{center}
\caption{Proceeding east in executions of Algorithm\,\ref{algorithmA13F6} (I)}
\label{ProceedEastA13F6-1}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{ProceedEastA13F6-2.pdf}
\end{center}
\caption{Proceeding east in executions of Algorithm\,\ref{algorithmA13F6} (I\hspace{-1pt}I)}
\label{ProceedEastA13F6-2}
\end{figure}
At the initial configuration or at a configuration immediately after turning east, robots make the form in Fig.\,\ref{ProceedEastA13F6-1}(a).
From this configuration, the robot with color ${\textsf G}$ moves east by rule $R1$, and hence the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-1}(b).
From this configuration, the robot with color ${\textsf W}$ on a west node changes its color to ${\textsf B}$ and moves east by rule $R2$.
In the ASYNC model, after it changes its color to ${\textsf B}$, other robots may observe the intermediate configuration (Fig.\,\ref{ProceedEastA13F6-1}(c)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-1}(d).
From this configuration, the robot with color ${\textsf W}$ occupying the same node as the robot with color ${\textsf G}$ changes its color to ${\textsf G}$ and moves east by rule $R3$.
In the ASYNC model, after it changes its color to ${\textsf G}$, other robots may observe the intermediate configuration (Fig.\,\ref{ProceedEastA13F6-1}(e)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-1}(f).
From this configuration, the robot with color ${\textsf B}$ occupying the same node as the robot with color ${\textsf W}$ changes its color to ${\textsf W}$ and moves east by rule $R4$.
In the ASYNC model, after it changes its color to ${\textsf W}$, other robots may observe the intermediate configuration (Fig.\,\ref{ProceedEastA13F6-1}(g)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-1}(h).
Fig.\,\ref{ProceedEastA13F6-2}(h) denotes the same configuration as one in Fig.\,\ref{ProceedEastA13F6-1}(h).
We show that the configuration eventually becomes one in Fig.\,\ref{ProceedEastA13F6-2}(m) regardless of the scheduler.
At the configuration in Fig.\,\ref{ProceedEastA13F6-2}(h), let $r_1$ be the robot with color ${\textsf W}$ on a northeast node and let $r_2$ be the robot with color ${\textsf B}$.
Then, $r_1$ can execute rule $R5$, and $r_2$ can execute rule $R6$.
If $r_2$ finishes $R6$ before $r_1$ finishes the compute phase of $R5$, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-2}(i).
If $r_1$ finishes the compute phase of $R5$ before $r_2$ finishes $R6$, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-2}(j).
If $r_1$ finishes the compute phase of $R5$ and $r_2$ finishes $R6$ at the same time, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-2}(k).
At the configurations in Fig.\,\ref{ProceedEastA13F6-2}(i) and Fig.\,\ref{ProceedEastA13F6-2}(k), robots cannot execute rules except $R5$, and hence the configuration eventually becomes one in Fig.\,\ref{ProceedEastA13F6-2}(m).
At the configuration in Fig.\,\ref{ProceedEastA13F6-2}(j), robots cannot execute rules except $R5$ and $R6$.
From this configuration, if $r_2$ finishes $R6$ before $r_1$ finishes $R5$, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-2}(k).
If $r_1$ finishes $R5$ before $r_2$ finishes $R6$, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-2}(l).
If $r_1$ finishes $R5$ and $r_2$ finishes $R6$ at the same time, the configuration becomes one in Fig.\,\ref{ProceedEastA13F6-2}(m).
At the configuration in Fig.\,\ref{ProceedEastA13F6-2}(l), robots cannot execute rules except $R6$, and hence the configuration eventually becomes one in Fig.\,\ref{ProceedEastA13F6-2}(m).
From the above discussion, the configuration eventually becomes one in Fig.\,\ref{ProceedEastA13F6-2}(m) in any case.
In this configuration, the form of robots is the same as in Fig.\,\ref{ProceedEastA13F6-1}(a).
Hence, robots proceed east while keeping their form by repeatedly executing those rules.
\paragraph{Turning west.}
The process of turning west is shown in Fig.\,\ref{turnWestA13F6-1} and Fig.\,\ref{turnWestA13F6-2}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestA13F6-1.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmA13F6} (I)}
\label{turnWestA13F6-1}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8]{turnWestA13F6-2.pdf}
\end{center}
\caption{Turning west in an execution of Algorithm\,\ref{algorithmA13F6} (I\hspace{-1pt}I)}
\label{turnWestA13F6-2}
\end{figure}
After robots proceed east, they reach the east end of the grid, and the configuration becomes one in Fig.\,\ref{turnWestA13F6-1}(a).
At this configuration, let $r_1$ be the robot with color ${\textsf B}$, and let $r_2$ be the robot with color ${\textsf G}$ on a northeast node.
Then, $r_1$ can execute rule $R6$, and $r_2$ can execute rule $R7$.
If $r_2$ finishes the compute phase of $R7$ before $r_1$ finishes $R6$, the configuration becomes one in Fig.\,\ref{turnWestA13F6-1}(b).
If $r_1$ finishes $R6$ before $r_2$ finishes the compute phase of $R7$, the configuration becomes one in Fig.\,\ref{turnWestA13F6-1}(d).
If $r_1$ finishes $R6$ and $r_2$ finishes the compute phase of $R7$ at the same time, the configuration becomes one in Fig.\,\ref{turnWestA13F6-1}(e).
At the configurations in Fig.\,\ref{turnWestA13F6-1}(d) and Fig.\,\ref{turnWestA13F6-1}(e), robots cannot execute rules except $R7$, and hence the configuration eventually becomes one in Fig.\,\ref{turnWestA13F6-1}(f).
At the configuration in Fig.\,\ref{turnWestA13F6-1}(b), robots cannot execute rules except $R6$ and $R7$.
From this configuration, if $r_2$ finishes $R7$ before $r_1$ finishes $R6$, the configuration becomes one in Fig.\,\ref{turnWestA13F6-1}(c).
If $r_1$ finishes $R6$ before $r_2$ finishes $R7$, the configuration becomes one in Fig.\,\ref{turnWestA13F6-1}(e).
If $r_1$ finishes $R6$ and $r_2$ finishes $R7$ at the same time, the configuration becomes one in Fig.\,\ref{turnWestA13F6-1}(f).
At the configuration in Fig.\,\ref{turnWestA13F6-1}(c), robots cannot execute rules except $R6$, and hence the configuration eventually becomes one in Fig.\,\ref{turnWestA13F6-1}(f).
Fig.\,\ref{turnWestA13F6-2}(f) denotes the same configuration as one in Fig.\,\ref{turnWestA13F6-1}(f).
From this configuration, the robot with color ${\textsf W}$ on a southwest node moves south by rule $R8$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13F6-2}(g).
From this configuration, the robot with color ${\textsf G}$ on a northwest node moves south by rule $R9$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13F6-2}(h).
From this configuration, the robot with color ${\textsf B}$ on an east node moves south by rule $R10$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13F6-2}(i).
From this configuration, the robot with color ${\textsf G}$ on an east node moves south by rule $R11$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13F6-2}(j).
From this configuration, the robot with color ${\textsf W}$ on an east node moves south by rule $R12$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13F6-2}(k).
From this configuration, the robot with color ${\textsf B}$ on a west node changes its color to ${\textsf W}$ by rule $R13$, and hence the configuration becomes one in Fig.\,\ref{turnWestA13F6-2}(l).
From this configuration, the robot with color ${\textsf G}$ on a northwest node changes its color to ${\textsf W}$ and moves west by rule $R5$.
In the ASYNC model, after it changes its color to ${\textsf W}$, other robots may observe the intermediate configuration (Fig.\,\ref{turnWestA13F6-2}(m)).
However, there are no rules that the other robots can execute in the intermediate configuration.
Hence, the configuration becomes one in Fig.\,\ref{turnWestA13F6-2}(n).
\paragraph{Proceeding west and turning east.}
The form of robots in Fig.\,\ref{turnWestA13F6-2}(n) is a mirror image of the one that robots make to proceed east.
Hence, robots proceed west and turn east with the same rules as proceeding east and turning west, respectively.
\paragraph{End of exploration.}
If $m$ is odd, robots visit the south end nodes while proceeding west.
Eventually, the configuration becomes $\{(v_{m-2,0},\{{\textsf G}\}),(v_{m-2,1},\{{\textsf G}\}),(v_{m-1,0},\{{\textsf W},{\textsf B}\}),(v_{m-1,1},\{{\textsf W},{\textsf B}\})\}$.
At this configuration, no robots are enabled.
If $m$ is even, robots terminate the algorithm similarly to the odd case.
\section{Conclusions}
In this paper, we have investigated terminating exploration algorithms for myopic robots in finite grids.
First, we have proved that, in the SSYNC and ASYNC models, three myopic robots are necessary to achieve the terminating exploration of a grid if $\phi=1$ holds.
Second, we have proposed fourteen algorithms that achieve the terminating exploration of a grid under various assumptions on synchrony, visibility distance, the number of colors, and chirality.
To the best of our knowledge, they are the first algorithms that achieve the terminating exploration of a grid by myopic robots with at most three colors and/or with no common chirality.
In addition, six of the proposed algorithms are optimal in terms of the number of robots.
As future work, it is interesting to close the gap between the lower and upper bounds on the number of required robots.
It is also interesting to consider other tasks and topologies with myopic luminous robots.
\bibliographystyle{plain}
\section{Introduction}
The generation of cryptographically secure elliptic curves over prime fields is one of the most fundamental
and complex problems in elliptic curve cryptography. An elliptic curve (EC) is cryptographically
secure if its use in a cryptosystem guarantees robustness against all (currently) known attacks
(e.g. \cite{FR94,MOV93,PH78,SA98}). All these attacks can be avoided if the order of the EC possesses
certain properties.
An equally important alternative to cryptographic robustness (see e.g.,
\cite{SSK01}) requires that the order of the generated EC is a
prime number. Moreover, in certain applications it is necessary that the order of the EC is
prime \cite{BLS01}.
The most commonly used methods for the generation of ECs over prime fields
are the Complex Multiplication (CM) method
\cite{AM93,LZ94,M92} and the point counting method \cite{S95}.
In this paper we follow the first approach and study the use of
the CM method for generating ECs of prime order in $\mathbb{F}_p$.
Briefly, the CM method takes as input the order $p$ of the prime field and determines
a parameter $D$ called the CM discriminant and the order $m$ of the EC.
If the order $m$ satisfies the desired properties (e.g. is a prime number) then a class
polynomial is computed using the discriminant $D$ and the parameters of the EC are constructed
from a root modulo $p$
of this polynomial.
The most complex and demanding step of the CM method is the computation of the class
polynomial. The original version of the method requires the construction of a Hilbert polynomial
whose roots can be used directly for the construction of the EC parameters. The use of any
other class polynomial necessitates the existence of a transformation that will convert
the roots of this polynomial to the roots of the corresponding Hilbert polynomial.
Class polynomials are constructed with input the discriminant $D$ and by the term ``corresponding
polynomial" we mean the polynomial that is constructed with the same $D$.
The disadvantage of Hilbert polynomials is that their coefficients grow very large as the value of
discriminant increases and thus their construction requires high precision arithmetic and can be
very inefficient even for moderate values of $D$.
To overcome these shortcomings of Hilbert polynomials, two alternatives have been proposed for the case
of prime order ECs:
either to compute them off-line on powerful machines and store
them for subsequent use (see e.g., \cite{SSK01}), or to use alternative class
polynomials for certain values of $D$ (see e.g.,
\cite{KSZ04_icisc}) and produce the required
Hilbert roots from them. The first approach however requires
storing and handling several Hilbert polynomials
with huge coefficients and this can induce problems especially in devices with limited resources.
These problems are addressed by the second approach.
Weber or $M_{D,l}(x)$ polynomials were used in the literature for the generation
of prime order elliptic curves \cite{KSZ04_icisc}. Both types of polynomials have
much smaller coefficients than the coefficients of the corresponding Hilbert polynomials and their use
can considerably improve the efficiency of the whole CM method.
More precisely, the logarithmic height of the coefficients of the Weber and $M_{D,l}(x)$ polynomials
is smaller by a constant factor than the corresponding logarithmic height of the
Hilbert polynomials.
Weber polynomials can be computed faster than $M_{D,l}(x)$ polynomials \cite{EM02}.
However, finding their roots requires computations in the extension field
$\mathbb{F}_{p^3}$ which makes the whole process more complicated.
The reason is that in the case of prime order ECs the discriminant $D$ must be congruent
to $3 \bmod 8$ and these values give rise to Weber polynomials with degree three times larger than the
degree of the corresponding Hilbert polynomials. Thus, one must find a root of the Weber polynomial in
the extension field $\mathbb{F}_{p^3}$ and then transform it to a root of the Hilbert polynomial in
$\mathbb{F}_{p}$. The use of $M_{D,l}(x)$ polynomials tackles this difficulty
as their degree is equal to the degree of the Hilbert polynomials.
Furthermore, the use of Weber polynomials requires the storage of three times more coefficients and the memory
needed for this purpose can be larger than the corresponding memory required for the storage
of the $M_{D,l}(x)$ polynomials.
In \cite{ES04} the construction of another class of polynomials was proposed. We will denote these
polynomials as $M_{D, p_1, p_2}(x)$ because their construction is based on two prime numbers $p_1$ and $p_2$.
The degree of these polynomials is equal to the degree of the Hilbert polynomials and this is
a considerable advantage against Weber polynomials. Compared to the Weber polynomials, $M_{D, p_1, p_2}(x)$
polynomials have larger coefficients for all values of $p_1$ and $p_2$, except for $p_1=3,p_2=13$ and $p_1=5,p_2 = 7$.
Moreover, the modular equations which are used for the transformation of a root
of $M_{D, p_1, p_2}(x)$
polynomials to a root of the corresponding Hilbert polynomials have degree at least 2 in the root of Hilbert
polynomial (which makes the computations heavier) and their coefficients
are quite large (which makes their use less efficient).
In conclusion, the type of polynomial that one should use depends on
the particular application and the value of $D$.
It is clear that finding a class of polynomials which can be constructed more efficiently
than all previously mentioned polynomials, have
degree equal to the degree of the corresponding
Hilbert polynomials
and
have a modular equation with degree 1 in the root of Hilbert
polynomials, will considerably improve
the performance
of the CM method
for the generation of prime order elliptic curves and
will outweigh all previously used polynomials in every
aspect (e.g. precision requirements, storage memory, time efficiency).
Prime order ECs defined over various fields were also treated in \cite{B01,BS06}.
In the first, the authors used the CM method with Hilbert
polynomials~\cite{B01} for the
generation of prime order ECs over extension fields, while in the second
the authors proposed a very efficient variant of the CM method for the construction
of prime order ECs over prime fields~\cite{BS06}.
Furthermore, a number of works appeared that compare variants of the CM method and
also present experimental results concerning the construction efficiency, such
as the work of M\"{u}ller and Paulus~\cite{MP97}, as well as the theses of
Weng~\cite{WE01} and Baier~\cite{B02}.
\paragraph{Our contribution}
Srinivasa Ramanujan (1887--1920) defined in his third notebook,
pages 392 and 393 in the pagination of \cite[vol. 2]{RamNotebooks}, the
values of five class polynomials for five different values of the discriminant $D$.
The simplicity and the small coefficients of these polynomials were remarkable.
In 1999 Bruce C. Berndt and Heng Huat Chan \cite{Berndt-Chan}
proved that if $D$ is squarefree and $D \equiv 11 \bmod {24}$ then the roots of these five polynomials
are real units and can generate the Hilbert class field.
Moreover, they asked for an efficient way of computing these polynomials for every discriminant $D$ (and not only for
the five values computed by Ramanujan).
In the rest of the paper, we will call them {\em Ramanujan polynomials}.
Interpreting the theorem of Berndt and Chan (that the roots of the Ramanujan polynomials can generate
the Hilbert class field for values $D \equiv 11 \bmod {24}$), we see that
Ramanujan polynomials can be used in the CM method as the aforementioned theorem proves that there is a
transformation of their roots
to the roots of the corresponding Hilbert polynomials. In addition, as $D \equiv 11 \bmod {24} \equiv 3 \bmod 8$,
Ramanujan polynomials can be used in the generation of prime order ECs.
The contribution of this paper is threefold. Firstly, we introduce for the first time the
use of Ramanujan polynomials in the CM method by providing an efficient algorithm for their
construction for all values of the discriminant. The theory behind this
construction is based on Shimura Reciprocity Law \cite{GeeBordeaux,GeeStevenhagen}
and all the mathematical proofs behind it are presented in \cite{KonstKonto}.
However, in the context of this paper we present a considerably simplified version of the
method described in \cite{KonstKonto} which can be equally used either by a mathematician or a practitioner
with no background in algebraic number theory and algorithmic class field theory.
Secondly, we observe that Ramanujan polynomials have the same degree
with their corresponding Hilbert polynomials and hence have roots
in $\mathbb{F}_{p}$. In addition, we provide the necessary transformation of a Ramanujan
polynomial's root to a root of the corresponding Hilbert polynomial and thus give all the information
that a practitioner needs in order to use the new class of polynomials in the CM method.
Finally, we perform a comparative theoretical and experimental
study regarding the efficiency of the CM method using the
aforementioned Weber, $M_{D,l}(x)$ and $M_{D,p_1,p_2}(x)$ polynomials against the new class of
polynomials.
We
show that Ramanujan polynomials are by far the best choice when the CM
method is used for the generation of prime order elliptic curves because their degree is equal to
the degree of the corresponding Hilbert polynomials and their construction is more efficient
than the construction of all previously used polynomials.
We show that the logarithmic height of the coefficients of the Ramanujan polynomials is asymptotically 36 times smaller
than the logarithmic height of the Hilbert polynomials and this allows us to
show that the precision requirements for the construction of Ramanujan
polynomials can be from 22\% to 66\% smaller than the precision requirements of all other
class polynomials.
In the literature, the ``efficiency'' of a class invariant (a root of a class polynomial) is measured
by the asymptotic ratio of the logarithmic height of a root of the Hilbert polynomial to
a root of the class polynomial in question. The best known class invariant is the one used
for the construction of
Weber polynomials with $D \not\equiv 0 \pmod 3$ and $D \equiv 3,7 \pmod 8$.
The roots of these Weber polynomials have logarithmic height that is asymptotically 72 times smaller than
the logarithmic height of the roots of the corresponding Hilbert polynomials.
However, in practice we are not interested in the logarithmic height
of the roots but in the logarithmic height of the polynomials, since the latter
measures the precision required for the construction of the polynomials.
In this paper we will show that these two heights coincide only if the class polynomial has
degree equal to the degree of the corresponding Hilbert polynomial.
For the construction of prime order elliptic curves,
Weber class polynomials have degree 3 times larger than the degree of the Hilbert polynomials.
We will show that in this case the logarithmic height
of the Weber polynomials is asymptotically 24=72/3 times less than
the logarithmic height of Hilbert polynomials and not 72. Thus, even though the height of Weber polynomials' roots
is smaller than the height of the roots of Ramanujan's class polynomials, the precision requirements for the
construction of the latter are smaller.
Ramanujan polynomials can also be used in the generation of special curves, such as MNT curves \cite{MNT00,MNT01,SB04} and in the generation of ECs that do not necessarily have prime order \cite{AM93,LZ94}.
It is interesting to note here that in the latter case, as our experiments indicated, Ramanujan polynomials
outweigh Weber polynomials for all values of the discriminant $D \not\equiv 7 \bmod 8$.
Moreover, problems such as primality testing/proving \cite{AM93} and the representability
of primes by quadratic forms \cite{C89} can be considerably improved with the use of Ramanujan polynomials.
This makes our analysis for these polynomials
even more useful.
The rest of the paper is organized as follows. In
Section~\ref{pprel} we review some basic definitions and facts
about ECs and the CM method.
In Section~\ref{class_polynomials} we review properties of Hilbert, Weber, $M_{D,l}(x)$ and $M_{D,p_1,p_2}(x)$ polynomials with
$D \equiv 3 \bmod 8$ and
in Section~\ref{ramanujan} we elaborate on the construction of Ramanujan
polynomials describing in an explicit way how they can be used in the CM method.
In Section~\ref{prec-section} we provide theoretical estimations for the precision requirements of all
previously mentioned polynomials and
in Section~\ref{exper} we present our experimental results.
\section{A Brief Overview of Elliptic Curve Theory and Complex Multiplication}
\label{pprel}
In this section we give a brief introduction to elliptic curve
theory and to the Complex Multiplication method for generating prime
order elliptic curves. Our aim is to facilitate the reading of the sections
that follow.
\subsection{Preliminaries of Elliptic Curve Theory}
\label{prel}
An {\em elliptic curve} over a finite field $\mathbb{F}_{p}$, $p$ a prime
larger than 3, is denoted by $E(\mathbb{F}_{p})$ and it is comprised of all
the points $(x, y) \in \mathbb{F}_{p}$ (in affine coordinates) such that
\begin{equation}
y^2 = x^3 + ax + b, \label{ec}
\end{equation}
with $a, b \in \mathbb{F}_{p}$ satisfying $4a^3 + 27b^2 \neq 0$. These
points, together with a special point denoted by $\cal O$ (the
{\em point at infinity}) and a properly defined addition operation
form an Abelian group. This is the {\em Elliptic Curve group} and
the point $\cal O$ is its zero element (see \cite{ACDFLNV06,BSS99,S86}
for more details on this group).
The {\em order}, denoted by $m$, is the number of points that
belong in $E(\mathbb{F}_{p})$.
The difference between $m$ and $p$ is measured by the so-called
{\em Frobenius trace} $t=p+1-m$ for which Hasse's theorem (see e.g.,
\cite{BSS99}) states that $|t|\leq 2\sqrt{p}$, implying that
$p + 1 - 2\sqrt{p} \leq m \leq p + 1 + 2\sqrt{p}$.
This is an important inequality that provides lower and upper bounds on the
number of points in an EC group.
The {\em order} of an element $P\in E(\mathbb{F}_p)$ is defined as the
smallest positive integer $n$ such that $nP = \cal O$. Lagrange's
theorem implies that the order of a point $P\in E(\mathbb{F}_p)$
divides the order $m$ of the group $E(\mathbb{F}_p)$. Thus, $mP =
\cal O$ for any $P\in E(\mathbb{F}_p)$ and, consequently, the
order of a point is always less than or equal to the order of the
elliptic curve.
Among the most important quantities defined for an elliptic curve
$E(\mathbb{F}_{p})$ are the {\em curve
discriminant} $\Delta$ and the {\em $j$-invariant}. These two
quantities are given by the equations $\Delta = -16(4a^3 + 27b^2)$
and $j = -1728(4a)^3/\Delta$.
Given a $j$-invariant $j_0\in \mathbb{F}_p$
(with $j_0\neq 0,1728$) {\em two} ECs can be constructed. If $k =
j_0/(1728-j_0) \bmod p$, one of these curves is given by
Eq.~(\ref{ec}) by setting $a = 3k \bmod p$ and $b = 2k \bmod p$.
The second curve (the {\em twist} of the first) is given by the
equation $y^2 = x^3 + ac^2x + bc^3$
with $c$ any quadratic non-residue of $\mathbb{F}_{p}$.
If $m_1$ and $m_2$ denote the orders of an elliptic curve and its
twist respectively, then $m_1+m_2=2p+2$ which implies that if one
of the curves has order $p+1-t$, then its twist has order $p+1+t$,
or vice versa (see~\cite[Lemma VIII.3]{BSS99}).
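As a sanity check of the two constructions above, the following toy Python sketch (illustrative field size only) builds a curve with prescribed $j$-invariant via $a = 3k$, $b = 2k$, builds its quadratic twist, and verifies $m_1 + m_2 = 2p + 2$ by brute-force point counting.

```python
# Brute-force check of the j-invariant construction and of the twist
# relation m1 + m2 = 2p + 2 (toy field size; illustration only).

def count_points(a, b, p):
    """#E(F_p) for y^2 = x^3 + a*x + b, including the point at infinity."""
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return 1 + sum(sq.get((x ** 3 + a * x + b) % p, 0) for x in range(p))

p = 1009
j0 = 123                                   # any j-invariant != 0, 1728
k = j0 * pow(1728 - j0, -1, p) % p
a, b = 3 * k % p, 2 * k % p                # first curve E
c = next(t for t in range(2, p)            # a quadratic non-residue mod p
         if pow(t, (p - 1) // 2, p) == p - 1)
m1 = count_points(a, b, p)
m2 = count_points(a * c * c % p, b * pow(c, 3, p) % p, p)   # the twist
print(m1, m2, m1 + m2 == 2 * p + 2)
```

A short calculation confirms the claim behind the construction: with $a = 3k$, $b = 2k$ one gets $j = 1728 \cdot k/(k+1)$, which equals $j_0$ exactly when $k = j_0/(1728 - j_0)$.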
\subsection{The Complex Multiplication Method}
\label{cmmethod}
As stated in the previous section, given a $j$-invariant one
may readily construct an EC. Finding a suitable $j$-invariant for
a curve that has a given order $m$ can be accomplished through the
theory of {\em Complex Multiplication} (CM) of elliptic curves
over the rationals. This method is called the {\em CM method} and
in what follows we will give a brief account of it.
By Hasse's theorem, $Z = 4p - (p+1-m)^2$ must be positive and,
thus, there is a unique factorization $Z = Dv^2$, with $D$ a
square free positive integer. Therefore
\begin{equation}
4p = u^2 + Dv^2
\label{eq:D}
\end{equation}
for some integer $u$ that satisfies the equation
\begin{equation}
m = p + 1 \pm u.
\label{orderm}
\end{equation}
The negative parameter $-D$ is called a {\em CM discriminant for
the prime $p$}. For convenience throughout the paper, we will use
(the positive integer) $D$ to refer to the CM discriminant.
The CM method uses $D$ to determine a $j$-invariant. This
$j$-invariant in turn, will lead to the construction of an EC of
order $p+1-u$ or $p+1+u$.
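The decomposition $4p = u^2 + Dv^2$ can be illustrated by a brute-force scan over $u$ for a toy prime $p$; the sketch below (our own illustration; in practice one fixes $D$ and uses Cornacchia's algorithm instead) lists every square-free $D$ arising this way together with the two candidate orders $p + 1 \pm u$.

```python
# Brute-force illustration of the decomposition 4p = u^2 + D*v^2:
# for a small prime p, scan u and split Z = 4p - u^2 as D*v^2 with D
# square-free (toy sizes only).
from math import isqrt

def squarefree_part(Z):
    """Write Z = D * v^2 with D square-free; return (D, v)."""
    for q in range(isqrt(Z), 1, -1):
        if Z % (q * q) == 0:
            return Z // (q * q), q
    return Z, 1

def cm_candidates(p):
    out = []
    for u in range(2 * isqrt(p) + 2):
        Z = 4 * p - u * u
        if Z <= 0:
            break
        D, v = squarefree_part(Z)
        out.append((D, u, v))              # CM discriminant candidates
    return out

p = 23
for D, u, v in sorted(cm_candidates(p)):
    print(f"D={D}: 4*{p} = {u}^2 + {D}*{v}^2, orders {p + 1 - u}, {p + 1 + u}")
```

For $p = 23$, for instance, $u = 9$ yields $4 \cdot 23 = 9^2 + 11 \cdot 1^2$, i.e. the candidate discriminant $D = 11$.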
The CM method works as follows. Given a prime $p$, the
smallest $D$ is chosen for which there exists some integer $u$ for
which Eq.~(\ref{eq:D}) holds.
If neither of the possible orders $p+1-u$ and $p+1+u$ is
suitable for our purposes, the process is repeated with a new
$D$. If at least one of these orders is suitable, then
the method proceeds with the construction of the {\em Hilbert
polynomial} (uniquely defined by $D$) and the determination of
its roots modulo $p$.
Any root of the Hilbert polynomial can be used as a
$j$-invariant. From this root the corresponding
EC and its twist can be constructed as described in Section~\ref{prel}.
In order to find which one of the curves has the desired suitable
order ($m=p+1-u$ or $m=p+1+u$), Lagrange's
theorem can be used as follows: we repeatedly choose points $P$ at random in
each EC until a point is found in one of the curves for which
$mP\neq{\cal O}$. This implies that the curve we seek is the other
one. Recently, different methods have been proposed for
choosing efficiently the correct elliptic curve in the CM method \cite{NM05,RS07}.
The most demanding step of the CM method is the construction of the Hilbert polynomial, as it
requires high precision floating point and complex arithmetic.
As the value of the discriminant $D$ increases, the coefficients of the polynomials grow extremely
large and their computation becomes more inefficient.
In~\cite{B02,KSZ02}, a variant of the CM method was proposed to avoid this problem.
This variant starts with
a discriminant $D$ and a specific prime $p$ chosen at random, or
from a set of prescribed primes. It then computes $u$ and $v$
using Cornacchia's algorithm~\cite{C08} to solve Eq.~(\ref{eq:D}), and requires that the resulting
EC order $m$ is suitable (cf.~Section \ref{prel}).
Using this variant, the user can choose the desired value of the discriminant (and thus avoid very large values, which was not possible in the original version of the CM method), or construct the Hilbert polynomials in a preprocessing phase and store them for later use.
In this way, the burden of their costly computation can be avoided during
the execution of the CM method. A similar variant was proposed in \cite{SSK01} for the construction of prime
order ECs.
We now turn to the generation of prime order ECs.
If $m$ is to be a prime
number, then clearly $u$ must be odd. It is also easy
to show that $D$ must be congruent to $3 \bmod 8$ and that $v$ must
be odd, too.
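These parity constraints can be checked on a toy example with a naive brute-force search (for illustration only; it is not one of the efficient methods discussed below), which tries odd values of $v$ and tests whether $4p-Dv^2$ is an odd square:

```python
from math import isqrt

def find_odd_uv(p, D):
    # Naive search for odd u, v with 4*p == u*u + D*v*v (illustration only).
    assert D % 8 == 3
    v = 1
    while D * v * v <= 4 * p:
        r = 4 * p - D * v * v
        u = isqrt(r)
        if u * u == r and u % 2 == 1:
            return u, v
        v += 2
    return None

def is_prime(n):
    # Trial division, adequate for this toy example.
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

p, D = 5, 19                     # toy example: 4*5 = 1**2 + 19*1**2
u, v = find_odd_uv(p, D)
orders = [p + 1 - u, p + 1 + u]  # candidate curve orders
print(u, v, orders, [is_prime(m) for m in orders])
```

For this toy choice both candidate orders ($5$ and $7$) happen to be prime; in practice one searches over primes $p$ until a suitable order appears.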
In this paper, we follow the variant of the CM method proposed in~\cite{B02,KSZ02} for the construction
of prime order elliptic curves. Thus,
we start with a CM discriminant $D \equiv 3 \bmod 8$ for the
computation of the Hilbert polynomial,
and then generate at random, or select from a pool
of precomputed {\em good} primes (e.g., Mersenne primes),
a prime $p$ and compute odd integers $u, v$ such that $4p =
u^{2}+Dv^{2}$.
These odd integers $u, v$ can be computed in four different ways,
which are outlined in \cite{KSZ04_icisc}.
Once we have found primes $p$ and $m$ which satisfy Eq.~(\ref{eq:D}) and Eq.~(\ref{orderm}),
we can
proceed with the next steps, which
are similar to those of the original CM method.
If we could compute the roots of the Hilbert polynomials directly, it would clearly
not be necessary to construct the polynomials themselves (since only their roots are needed in the CM method).
Indeed, there are polynomials (known as class polynomials) \cite{EM02,ES03,KVY89,S02}
with much
smaller coefficients, which can be
constructed much more efficiently than Hilbert polynomials and whose roots can be transformed to
the roots of the Hilbert polynomials. Thus, we can replace the Hilbert polynomials in the CM method
with another class of polynomials, provided that their roots can be transformed to the roots of the Hilbert
polynomials.
In the following section we will briefly review the definition of these polynomials along with another class of polynomials defined in \cite{ES04} (denoted as $M_{D,p_1,p_2}(x)$) and show how they can be
used in the CM method, while in Section \ref{ramanujan} we will propose the use of Ramanujan class
polynomials.
\section{Class Polynomials}
\label{class_polynomials}
In this section we define Hilbert, Weber, $M_{D,l}(x)$ and $M_{D,p_1,p_2}(x)$ polynomials for discriminant
values $D \equiv 3 \bmod 8$ and briefly discuss their use in the CM method. The interested
reader is referred to \cite{ES04,KSZ04_icisc} for proofs and details not given here.
\subsection{Hilbert Polynomials}
\label{sec_hpoly}
Every CM discriminant $D$ defines a unique Hilbert polynomial,
denoted by $H_{D}(x)$. Given a positive $D$, the Hilbert
polynomial $H_{D}(x) \in \mathbb{Z}[x]$ is defined as
\begin{equation}
H_{D}(x) = \prod_{\tau} (x-j(\tau))
\label{hx}
\end{equation}
for values of $\tau$ satisfying $\tau = (-\beta +
\sqrt{-D})/2\alpha$, for all integers $\alpha$, $\beta$, and
$\gamma$ such that (i) $\beta^2 - 4\alpha\gamma = -D$, (ii)
$|\beta| \leq \alpha \leq \sqrt{D/3}$, (iii) $\alpha \leq \gamma$,
(iv) $\gcd(\alpha, \beta, \gamma) = 1$, and (v) if $|\beta| =
\alpha$ or $\alpha = \gamma$, then $\beta \geq 0$.
The 3-tuple of integers $\left[ \alpha, \beta, \gamma \right]$
that satisfies these conditions is called a {\em primitive,
reduced quadratic form} of $-D$, with $\tau$ being a root of the
quadratic equation $\alpha z^{2}+\beta z+\gamma=0$. Clearly, the
set of primitive reduced quadratic forms of a given discriminant
is finite. The quantity $j(\tau)$ in Eq.~(\ref{hx}) is called {\em
class invariant} and is defined as follows. Let $z =
e^{2\pi\sqrt{-1}\tau}$ and $h(\tau) = \left(
\frac{\eta(2\tau)}{\eta(\tau)} \right)^{24}$, where $\eta(\tau)
= z^{1/{24}}\left( 1+\sum_{n \geq 1}
{(-1)^n\left(z^{n(3n-1)/2}+z^{n(3n+1)/2}\right)}\right)$ is the Dedekind eta-function.
Then, $j(\tau) = \frac{(256h(\tau)+1)^3}{h(\tau)}$.
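As a quick numerical sanity check of these definitions, the following sketch (truncating the $\eta$-series at an arbitrary 30 terms) evaluates $j(\tau)$ at $\tau=i$, where the classical value $j(i)=1728$ is expected:

```python
import cmath

def dedekind_eta(tau, terms=30):
    # Dedekind eta via the pentagonal-number series given in the text.
    z = cmath.exp(2j * cmath.pi * tau)
    s = 1.0
    for n in range(1, terms + 1):
        s += (-1) ** n * (z ** (n * (3 * n - 1) // 2) + z ** (n * (3 * n + 1) // 2))
    # use exp(2*pi*i*tau/24) rather than z**(1/24) to stay on the correct branch
    return cmath.exp(2j * cmath.pi * tau / 24) * s

def j_invariant(tau):
    h = (dedekind_eta(2 * tau) / dedekind_eta(tau)) ** 24
    return (256 * h + 1) ** 3 / h

print(abs(j_invariant(1j) - 1728))   # classically j(i) = 1728
```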
It can be shown \cite{C89} that Hilbert polynomials with degree
$h$ have $h$ roots modulo $p$ when they are used in the CM method.
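Conditions (i)--(v) translate directly into a short enumeration; e.g., for $D=23$ (class number $h=3$) the sketch below recovers the three reduced forms $[1,1,6]$, $[2,-1,3]$, $[2,1,3]$:

```python
from math import gcd, isqrt

def reduced_forms(D):
    # Enumerate primitive reduced quadratic forms [alpha, beta, gamma] of -D.
    forms = []
    for alpha in range(1, isqrt(D // 3) + 1):
        for beta in range(-alpha, alpha + 1):
            if (beta * beta + D) % (4 * alpha):
                continue                                  # beta^2 - 4*alpha*gamma = -D
            gamma = (beta * beta + D) // (4 * alpha)
            if alpha > gamma:
                continue
            if gcd(gcd(alpha, abs(beta)), gamma) != 1:
                continue                                  # primitivity
            if (abs(beta) == alpha or alpha == gamma) and beta < 0:
                continue                                  # boundary convention
            forms.append((alpha, beta, gamma))
    return forms

print(reduced_forms(23))   # class number h = 3 for D = 23
```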
\subsection{Weber Polynomials}
The Weber polynomial $W_{D}(x)\in\mathbb{Z}[x]$ for $D \equiv 3 \bmod 8$
is defined as
\begin{equation*}
W_{D}(x) = \prod_{\ell} (x-g(\ell))
\end{equation*}
where $\ell = \frac{-b+\sqrt{-D}}{a}$ satisfies the equation $ay^2 + 2by + c=0$
for which $b^2 - ac= -D$ and (i) $\gcd(a,b,c)=1$, (ii) $|2b| \leq a \leq c$, and
(iii) if either $a=|2b|$ or $a=c$, then $b \geq 0$.
Let $\zeta=e^{\pi\sqrt{-1}/24}$.
The class invariant $g(\ell)$ for $W_D(x)$ is defined by
\begin{equation*}
g(\ell)= \left\{
\begin{array}{rl}
\zeta^{b(c - a - a^{2}c)} \cdot f(\ell) & \mbox{ if $2\nmid a$ and $2\nmid c$} \\
-(-1)^{\frac{a^2-1}{8}} \cdot \zeta^{b(ac^2 - a -2c)} \cdot f_1(\ell)
& \mbox{ if $2\nmid a$ and $2\mid c$} \\
-(-1)^{\frac{c^2-1}{8}} \cdot \zeta^{b(c - a - 5ac^2)} \cdot f_2(\ell)
& \mbox{ if $2\mid a$ and $2\nmid c$}
\end{array}
\right.
\end{equation*}
if $D \equiv 3 \bmod 8$ and $D \not\equiv 0 \bmod 3$, and
\begin{equation*}
g(\ell)= \left\{
\begin{array}{rl}
\frac{1}{2}\zeta^{3b(c - a - a^{2}c)} \cdot f^3(\ell) & \mbox{ if $2\nmid a$
and $2\nmid c$} \\
-\frac{1}{2}(-1)^{\frac{3(a^2-1)}{8}} \cdot \zeta^{3b(ac^2 - a -2c)} \cdot f_1^3(\ell)
& \mbox{ if $2\nmid a$ and $2\mid c$} \\
-\frac{1}{2}(-1)^{\frac{3(c^2-1)}{8}} \cdot \zeta^{3b(c - a - 5ac^2)} \cdot f_2^3(\ell)
& \mbox{ if $2\mid a$ and $2\nmid c$}
\end{array}
\right.
\end{equation*}
if $D \equiv 3 \bmod 8$ and $D \equiv 0 \bmod 3$.
The functions $f()$, $f_1()$ and $f_2()$ are called Weber functions and are defined by
(see \cite{AM93,ieee}):
\begin{eqnarray*}
f(y) & = & q^{-1/48}\prod_{r=1}^{\infty}(1+q^{(2r-1)/2}) ~~~~~~~
f_1(y) = q^{-1/48}\prod_{r=1}^{\infty}(1-q^{(2r-1)/2}) \\
f_2(y) & = & \sqrt{2}~~q^{1/24}\prod_{r=1}^{\infty}(1+q^{r}) ~~~~~~~
\mbox{ where } q = e^{2\pi y\sqrt{-1}}.
\end{eqnarray*}
For these values of the discriminant ($D \equiv 3 \bmod 8$), the
Weber polynomial $W_D(x)$ has degree three times that
of its corresponding Hilbert polynomial $H_D(x)$.
In \cite{KSZ04_icisc} it is shown that the Weber polynomial has roots in the
extension field $\mathbb{F}_{p^3}$.
Thus, in order to use Weber polynomials in the CM
method we must find at least one of their roots in the extension field $\mathbb{F}_{p^3}$.
The idea is that we replace Hilbert polynomials with
Weber polynomials and then try to compute a root of the Hilbert polynomial from
a root of its corresponding Weber polynomial.
To compute the desired Hilbert root, we proceed in three stages. First, we construct
the corresponding Weber polynomial. Second, we compute its roots in $\mathbb{F}_{p^3}$.
Finally, we transform the Weber roots to the desired Hilbert roots in $\mathbb{F}_p$ using
a modular equation $\Phi_{W}(x,j) = 0$. In particular, if $x$ is a root of the Weber polynomial
and $j$ is a root of the corresponding Hilbert polynomial, then
\begin{equation}
\label{eq-phi-1}
\Phi_{W}(x,j) = (2^{12}x^{-24}-16)^3-2^{12}x^{-24}j
\end{equation}
if $D \not\equiv 0 \pmod 3$ and
\begin{equation}
\label{eq-phi-2}
\Phi_{W}(x,j) = (2^{4}x^{-8}-16)^3-2^{4}x^{-8}j
\end{equation}
if $D \equiv 0 \pmod 3$.
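A purely algebraic sanity check of Eq.~(\ref{eq-phi-1}): with $j=1728$ (the classical value of $j(i)$), the real number $x=2^{1/4}$ satisfies the modular equation, since $2^{12}x^{-24}=64$ and $(64-16)^3=64\cdot 1728$:

```python
x, j = 2 ** 0.25, 1728
y = 2 ** 12 * x ** (-24)        # equals 64 (up to floating-point rounding)
print((y - 16) ** 3 - y * j)    # Phi_W(x, j) for D not divisible by 3; ~0
```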
To compute a root of $W_D(x)$ in $\mathbb{F}_{p^3}$, we have to find an irreducible factor
(modulo $p$) of degree 3 of the polynomial. This can be achieved using
Algorithm 3.4.6 from \cite{C93}. The irreducible factor has 3 roots in $\mathbb{F}_{p^3}$
from which it suffices to choose one, in order to accomplish the third stage.
Details on the use of Weber polynomials in the construction of prime order elliptic curves can be
found in \cite{KSZ04_icisc}.
\subsection{$M_{D,l}(x)$ Polynomials}
\label{mdl_poly}
Even though Weber polynomials have much smaller coefficients than Hilbert polynomials
and can be computed very efficiently, the fact that their degree for $D \equiv 3 \bmod 8$
is three times that of the corresponding Hilbert polynomials is a potential drawback,
because it entails computations in extension fields.
Moreover, computing a cubic factor modulo $p$ of a polynomial with degree $3h$ is more time consuming
than computing a single root modulo $p$ of a polynomial with degree $h$.
To alleviate these problems, the use of a relatively new class of
polynomials, referred to as the $M_{D,l}(x)$ polynomials, was proposed.
These polynomials have degree $h$ like Hilbert polynomials and thus they have roots modulo $p$.
They are constructed from a family of $\eta$-products:
$m_l(z)=\frac{\eta(z/l)}{\eta(z)}$ \cite{M00} for an integer $l \in \left\{3,5,7,13\right\}$.
The polynomials are obtained from this
family by evaluating the $\eta$-products at a suitably chosen system of quadratic forms. Once a polynomial is
computed, we can use a modular equation $\Phi_{l}(x,j) = 0$ (see Table~\ref{f_l}),
in order to compute a root $j$ modulo $p$ of the Hilbert polynomial
from a root $x$ modulo $p$ of the $M_{D,l}(x)$ polynomial.
\begin{table}
\begin{center}
\begin{tabular}{|p{1.5cm}|p{7.5cm}|} \hline
$l$ & $\Phi_l(x,j)$ \\ \hline \hline
3 & $(x+27)(x+3)^3-jx$ \\ \hline
5 & $(x^2+10x+5)^3-jx$ \\ \hline
7 & $(x^2+13x+49)(x^2+5x+1)^3-jx$ \\ \hline
13 & $(x^2+5x+13)(x^4+7x^3+20x^2+19x+1)^3-jx$ \\ \hline
\end{tabular}
\end{center}
\caption{Modular functions for different values of $l$.}
\label{f_l}
\end{table}
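Since every $\Phi_l$ in Table~\ref{f_l} has degree 1 in $j$, recovering the Hilbert root from an $M_{D,l}$ root costs a single modular division. A toy round-trip check for $l=3$, with an arbitrary placeholder value of $x$ (not an actual $M_{D,3}$ root) and a small illustrative prime:

```python
p = 1000003                     # small illustrative prime, not a cryptographic size
x = 42                          # placeholder root, used only for the round trip
# Phi_3(x, j) = (x+27)(x+3)^3 - j*x is linear in j, so solve by one inversion:
j = (x + 27) * (x + 3) ** 3 * pow(x, -1, p) % p
print(((x + 27) * (x + 3) ** 3 - j * x) % p)    # Phi_3(x, j) = 0 mod p
```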
\subsection{$M_{D,p_1,p_2}(x)$ Polynomials}
\label{mdl_double_poly}
The authors of \cite{ES04} proposed the use of another class of polynomials. Like $M_{D,l}(x)$ polynomials,
these polynomials are constructed
using a family of $\eta$-products:
$m_{p_1,p_2}(z) = \frac{\eta(z/p_1)\eta(z/p_2)}{\eta(z/(p_1p_2))\eta(z)}$.
We will refer to the minimal polynomials of these products
as $M_{D, p_1,p_2}(x)$ where $D$ is the discriminant used for
their construction.
The only restriction posed on the discriminant is that
$
\left(
\frac{D}{p_1}
\right)\neq -1$ and $\left(
\frac{D}{p_2}
\right)\neq -1$ if $p_1\neq p_2$ or
$\left(
\frac{D}{p}
\right)\neq -1$ if $p_1=p_2=p$, where
$\left( \frac{\cdot}{\cdot}
\right)$ is the Kronecker symbol.
The polynomials are obtained from this
family of $\eta$-products by evaluating the products at a suitably chosen system of quadratic forms.
In particular, the polynomial $M_{D,p_1,p_2}(x)\in\mathbb{Z}[x]$
is defined as
\begin{equation*}
M_{D,p_1,p_2}(x) = \prod_{\tau_Q} (x-m_{p_1,p_2}(\tau_Q))
\end{equation*}
where $\tau_Q = \frac{-B_i+\sqrt{-D}}{2A_i}$ for all representatives $S = \left\lbrace (A_i, B_i, C_i) \right\rbrace_{1 \leq i \leq h} $ of the reduced primitive quadratic forms of a discriminant $-D$ derived from a $(p_1p_2)$-system \cite{S02}.
Once a polynomial is
computed, we can use the modular equations $\Phi_{p_1,p_2}(x,j)=0$,
in order to compute a root $j$ modulo $p$ of the Hilbert polynomial
from a root $x$ modulo $p$ of the $M_{D, p_1,p_2}(x)$ polynomial.
However, a disadvantage of $M_{D, p_1,p_2}(x)$ polynomials is that
the corresponding modular polynomials $\Phi_{p_1,p_2}(x,j)$ have
degree at least 2 in
$j$ (which makes the computations heavier) and their coefficients
are quite large (which makes their
use less efficient)\footnote{For example,
notice in \cite{ES05} the size of the smallest modular polynomial $\Phi_{5,7}(x,j)$.}.
The only modular polynomials that have degree 2 in $j$ are
$\Phi_{3,13}(x,j)$ and $\Phi_{5,7}(x,j)$.
In addition, $M_{D, 3, 13}(x)$ and $M_{D, 5, 7}(x)$ polynomials
are constructed more efficiently than other polynomials of the double eta family \cite{EM02}.
Thus, we used only these polynomials in our experiments.
\section{Ramanujan Polynomials}
\label{ramanujan}
In this section, we define a new class of polynomials which can be used in the CM method for the
generation of prime order ECs. We elaborate on their construction and provide the necessary
transformations of their roots to the roots of the corresponding Hilbert polynomials.
\subsection{Construction of Polynomials}
Srinivasa Ramanujan (1887--1920) defined
in his third notebook, on pages 392 and 393 in the pagination of \cite[vol. 2]{RamNotebooks},
the values
\begin{equation} \label{tndef}
t_D=\sqrt{3} q_D^{1/18} \frac{f(q_D^{1/3}) f(q_D^3)}{f^2(q_D)} \in \mathbb{R}
\end{equation}
where $f(-q)=\prod_{d=1}^\infty (1-q^d) = q^{-1/24}\eta(\tau)$,
$q= \exp(2\pi i \tau)$, $q_D=\exp(-\pi \sqrt{D})$, $\tau \in \mathbb{H}$ ($\mathbb{H}$ is the upper half plane) and $\eta(\tau)$ denotes the Dedekind eta-function.
Without any further explanation on how he found them,
Ramanujan gave the following table of polynomials $T_D(x)$ based on $t_D$ for five values of $D$:
\[
\begin{array}{|c|c|}
\hline
D & T_D(x)\\
\hline
11 & x-1 \\
35 & x^2+x-1\\
59 & x^3+2x-1 \\
83 & x^3+2x^2+2x-1\\
107 & x^3-2x^2+4x-1\\
\hline
\end{array}
\]
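The first entry of the table can be checked numerically from Eq.~(\ref{tndef}): for $D=11$ one expects $t_{11}=1$, the root of $T_{11}(x)=x-1$ (the truncation length below is an arbitrary choice):

```python
import math

def f_euler(x, terms=50):
    # f(-q) = prod (1 - q^d), hence f(x) = prod (1 - (-x)^d)
    return math.prod(1 - (-x) ** d for d in range(1, terms + 1))

D = 11
qD = math.exp(-math.pi * math.sqrt(D))
tD = (math.sqrt(3) * qD ** (1 / 18)
      * f_euler(qD ** (1 / 3)) * f_euler(qD ** 3) / f_euler(qD) ** 2)
print(tD)   # expected: 1.0, the root of T_11(x) = x - 1
```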
In \cite{Berndt-Chan}, Bruce C. Berndt and Heng Huat Chan proved that the Ramanujan values $t_D$ are
indeed roots of these polynomials. The method they used could not be applied to
higher values of $D$, and they asked for an efficient way of computing the polynomials $T_D$ for every $D$.
They also proved that if $D \in \mathbb{N}$ is squarefree with $D\equiv 11 \bmod {24}$, then
$t_D$ is a real unit generating the Hilbert class field.
This actually means that the polynomials $T_D$ can be used in the CM method because their roots can be transformed
to the roots of the corresponding Hilbert polynomials. In addition, the remarkably small coefficients
of these polynomials are a clear indication that their use in the CM method can be especially advantageous.
In this paper we will elaborate on the construction of these polynomials, which we will call {\em Ramanujan polynomials}
and we will provide an efficient algorithm for their computation
for every discriminant $D \equiv 11 \bmod {24}$.
The theory behind this
construction is based on the Shimura reciprocity law \cite{GeeBordeaux,GeeStevenhagen}.
For the interested reader, all mathematical proofs can be found in \cite{KonstKonto}.
In the rest of the section we
present a considerably simplified version of the method
of \cite{KonstKonto}.
The Ramanujan polynomial $T_{D}(x)\in\mathbb{Z}[x]$ for $D \equiv 11 \bmod {24}$
is defined as
\begin{equation*}
T_{D}(x) = \prod_{\tau} (x-t(\tau))
\label{tx}
\end{equation*}
for values of $\tau$ satisfying $\tau=\frac{-\beta+\sqrt{-D}}{2\alpha}$ for all primitive, reduced
quadratic forms $[\alpha, \beta, \gamma]$ of $-D$.
Every value $t(\tau)$ that corresponds to a specific form $[\alpha, \beta, \gamma]$ is defined by
\begin{equation*}
t(\tau)= (\zeta_{72}^{6k}-\zeta_{72}^{30k})\sum_{i=0}^{5} a_{2i}R_i(\tau)
\end{equation*}
where $\zeta_{72}=e^{2\pi i /72}$ and
the functions $R_i$ with $i \in \{0,1,2,3,4,5\}$ are modular functions of level $72$ and are
defined by:
$R_0(\tau)= \frac{\eta(3\tau)\eta(\tau/3)}{ \eta^2(\tau)}$,
$R_1(\tau)= \frac{\eta(3\tau)\eta(\tau/3+1/3)}{ \eta^2(\tau)}$,
$R_2(\tau)= \frac{\eta(3\tau)\eta(\tau/3+2/3)}{ \eta^2(\tau)}$,
$R_3(\tau)= \frac{\eta(\tau/3)\eta(\tau/3+2/3)}{ \eta^2(\tau)}$,
$R_4(\tau)= \frac{\eta(\tau/3)\eta(\tau/3+1/3)}{ \eta^2(\tau)}$
and $R_5(\tau)=\frac{\eta(\tau/3+2/3) \eta(\tau/3+1/3) }{ \eta^2(\tau)}$.
The value $k$ is equal to $9\det(L_2)-8\det(L_3)$ where $\det(L_2)$ and $\det(L_3)$ are the determinants of the following
matrices $L_n$ for $n=2$ or $3$ respectively:
\begin{equation*}
L_n= \left\{
\begin{array}{rl}
\begin{pmatrix} \alpha & \frac{(\beta-1)}{2} \\ 0 & 1 \end{pmatrix} & \mbox{ if $n\nmid \alpha$} \\
\begin{pmatrix} \frac{(-\beta-1)}{2} & -\gamma \\ 1 & 0 \end{pmatrix}
& \mbox{ if $n\mid \alpha$ and $n\nmid \gamma$} \\
\begin{pmatrix} \frac{(-\beta-1)}{2}-\alpha & \frac{(1-\beta)}{2}-\gamma \\ 1 & -1 \end{pmatrix}
& \mbox{ if $n\mid \alpha$ and $n\mid \gamma$}
\end{array}
\right.
\label{l2}
\end{equation*}
The values $a_{2i}$ with $i \in \{0,1,2,3,4,5\}$ are the elements
of the third row of a $6\times6$ matrix $A$. Before describing the construction of $A$ we
need to define the following two matrices:
\begin{equation*} \label{ATAS}
S_0={\begin{pmatrix}
{0}&{\zeta_{72}^{3k}}&{0}&{0}&{0}&{0}\cr
{0}&{0}&{\zeta_{72}^{3k}}&{0}&{0}&{0}\cr
{\zeta_{72}^{6k}}&{0}&{0}&{0}&{0}&{0}\cr
{0}&{0}&{0}&{0}&{\zeta_{72}^{-3k}}&{0}\cr
{0}&{0}&{0}&{0}&{0}&{\zeta_{72}^{-6k}}\cr
{0}&{0}&{0}&{\zeta_{72}^{-3k}}&{0}&{0}\cr
\end{pmatrix} },
\end{equation*}
\begin{equation*}
S_1=
{
\begin{pmatrix}
{1}&{0}&{0}&{0}&{0}&{0}\cr
{0}&{0}&{0}&{\frac{1}{\zeta_{72}^{3k}(-\zeta_{72}^{30k} + \zeta_{72}^{6k})}}&{0}&{0}\cr
{0}&{0}&{0}&{0}&{\frac{\zeta_{72}^{3k}}{-\zeta_{72}^{30k} + \zeta_{72}^{6k}}}&{0}\cr
{0}&{\zeta_{72}^{3k}(-\zeta_{72}^{30k} + \zeta_{72}^{6k})}&{0}&{0}&{0}&{0}\cr
{0}&{0}&{\frac{-\zeta_{72}^{30k} + \zeta_{72}^{6k}}{\zeta_{72}^{3k}}}&{0}&{0}&{0}\cr
{0}&{0}&{0}&{0}&{0}&{1}\cr
\end{pmatrix} }.
\end{equation*}
Using $S_0$ and $S_1$ we can compute four new matrices $T_2 = S_0^{9}$, $T_3 = S_0^{-8}$, $S_2 = S_0^{-1} S_1 S_0^{-10} S_1 S_0^{-1} S_1 S_0^{-18}$ and
$S_3 = S_0^{-1} S_1 S_0^{7} S_1 S_0^{-1} S_1 S_0^{16}$.
Now the matrix $A$ is equal to $A_{2} A_3 B$ where $B$ is equal to
\begin{equation*}
B=\left\{
\begin{array}{rl}
\begin{pmatrix}
1 & 0 & 0 & 0& 0 & 0 \\
0 & \zeta_{72}^{k-1} & 0 & 0 & 0& 0 \\
0 & 0 & \zeta_{72}^{2k -2} & 0 & 0 & 0 \\
0 & 0 & 0 & \zeta_{72}^{2k-2} & 0 & 0 \\
0 & 0 & 0 & 0 & \zeta_{72}^{k-1} &0 \\
0 & 0 & 0 & 0 & 0 & \zeta_{72}^{3k-3}
\end{pmatrix} \mbox{ if } k\equiv 1 \bmod 3 \\ \\ \\
\begin{pmatrix}
1 & 0 & 0 & 0& 0 & 0 \\
0 & 0 & \zeta_{72}^{k-2} & 0 & 0& 0 \\
0 & \zeta_{72}^{2k -1} &0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \zeta_{72}^{2k-1} & 0 \\
0 & 0 & 0 & \zeta_{72}^{k-2} &0 &0 \\
0 & 0 & 0 & 0 & 0 & \zeta_{72}^{3k-3}
\end{pmatrix} \mbox{ if } k\equiv 2 \bmod 3
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
A_n= \left\{
\begin{array}{ll}
S_nT_n^{\frac{1}{\alpha} \bmod {N(n)}}S_nT_n^{-\alpha}S_nT_n^{(\frac{1}{\alpha}(\frac{\beta-1}{2}) -1) \bmod {N(n)}} & \mbox{ if $n\nmid \alpha$} \\
T_n^{(1-\frac{\beta+1}{2}) \bmod {N(n)}}S_nT_nS_nT_n^\gamma & \mbox{ if $n\mid \alpha$ and $n\nmid \gamma$} \\
T_n^{(1-\frac{\beta+1}{2}-\alpha) \bmod {N(n)}}S_nT_nS_nT_n^{(-1+\alpha+\beta+\gamma) \bmod N(n)} & \mbox{ if $n\mid \alpha$ and $n\mid \gamma$}
\end{array}
\right.
\label{Gmatrix}
\end{equation*}
for $n=2,3$, where $N(2)=8$ and $N(3)=9$.
It is easy to see that every row of the matrix $A$ has only one nonzero element. Thus, only one value $a_{2i}$ is nonzero, and the computation of every value $t(\tau)$ requires the evaluation of only one value $R_i(\tau)$.
\subsection{Transformation of the Roots}
In order to use Ramanujan polynomials in the CM method, we must prove that they have roots modulo $p$ and then
find a transformation of their
roots modulo $p$ to the roots modulo $p$ of the corresponding Hilbert polynomials. The following theorem
proves that a Ramanujan polynomial with degree $h$ has exactly $h$ roots modulo $p$
under certain conditions (which are satisfied in the CM method):
\begin{theorem}
A Ramanujan polynomial $T_D(x)$ with degree $h$ has exactly $h$ roots modulo $p$ if and only if
the equation $4p = u^2 + Dv^2$ has integer solutions and $p$ does not divide the
discriminant $\Delta(T_{D})$ of the polynomial.
\label{ramanujanroots}
\end{theorem}
\begin{proof}
Let
$H_K$ be the Hilbert class field of the imaginary quadratic field
$K=\mathbb{Q}(\sqrt{-D})$, and
let $\mathcal{O}_{H_K}$ and $\mathcal{O}_K$ be the rings of algebraic integers
of $H_K$ and $K$ respectively.
Let $p$ be a prime such that $4p=u^2+Dv^2$ has integer solutions. Then,
according to \cite[Th. 5.26]{C89} $p$ splits completely in $H_K$.
Proposition 5.29 in
\cite{C89} implies that (since $t_D$ generates $H_K$) $T_D(x)$ has a root modulo $p$ if and only
if $p$ splits in $H_K$ and does not divide its
discriminant
$\Delta(T_{D})$. But since
$\frac{\mathcal{O}_{H_K}}{p \mathcal{O}_{H_K}}/ \mathbb{F}_p$ is
Galois, $T_D(x)$ has not only one root modulo $p$, but $h$
distinct roots modulo $p$.
\end{proof}
We now present a method to
retrieve a root modulo $p$ of the Hilbert polynomial $H_{D}(x)$
from a root
modulo $p$ of the corresponding Ramanujan polynomial $T_{D}(x)$.
Our aim is to find a transformation that maps a real root of the Ramanujan polynomial to a real root of the
corresponding Hilbert polynomial. Then, we can reduce this transformation
modulo a prime ideal of the ring of integers of the Hilbert class field.
In this way we see that the same transformation will transfer a root of the Ramanujan polynomial modulo $p$
to a root of the Hilbert polynomial modulo $p$.
We know that if $\ell_0=(1,1,\frac{1+D}{4})$ is a quadratic form (known as the principal form) that corresponds
to the root $\tau_{\ell_0}=\frac{-1+\sqrt{-D}}{2}$, then $j(\tau_{\ell_0})$ is a real root of the
Hilbert polynomial $H_D(x)$. The following lemma shows that $t_D$ is a real root of the Ramanujan polynomial
$T_D(x)$.
\begin{lemma} \label{modtn}
The value $t_D $ is a real root of the Ramanujan polynomial $T_D(x)$ and is equal to:
\[
t_D=\sqrt{3} R_2(\tau_{\ell_0}).
\]
\end{lemma}
\begin{proof}
Set
\begin{equation*}
q_D=\exp(-\pi \sqrt{D})=-\exp(2\pi i \tau_{\ell_0}),
\end{equation*}
where $\tau_{\ell_0}=\frac{-1+\sqrt{-D}}{2}$.
Then
\begin{equation*}
f(q_D)= f( -\exp(2\pi i \tau_{\ell_0}))= \exp(2\pi i \tau_{\ell_0})^{-1/24} \eta(\tau_{\ell_0}),
\end{equation*}
\begin{equation*}
f(q_D^3)= \exp(2\pi i \tau_{\ell_0})^{-3/24} \eta(3\tau_{\ell_0}),
\end{equation*}
\begin{equation*}
f(q_D^{1/3})=\exp(2\pi i \tau_{\ell_0})^{-\frac{1}{3\cdot 24}}\eta(\frac{\tau_{\ell_0}}{3}).
\end{equation*}
Taking Eq.~(\ref{tndef}) and all the above equations into consideration
we can easily derive that $t_D=\sqrt{3} R_2(\tau_{\ell_0})$.
If we can prove that $t(\tau_{\ell_0}) = \sqrt{3} R_2(\tau_{\ell_0})$, then it will immediately follow
that $t_D = t(\tau_{\ell_0})$, and thus that $t_D$ is a root of the Ramanujan polynomial.
We have that
\[
t(\tau_{\ell_0})=(\zeta_{72}^6 -\zeta_{72}^{30}) R_2(\tau_{\ell_0}),
\]
since $k=1$ and the matrix $A=A_2 A_3 B$ is by computation equal to the identity matrix for every discriminant $D$. Notice that
the principal form equals $[\alpha,\beta,\gamma]=[1,1,\frac{1+D}{4}]$, therefore $2,3 \nmid \alpha=1$ and
$L_2=L_3=\mathrm{Id}_2$, $B=\mathrm{Id}_6$ and $A_n=S_nT_n^{\frac{1}{\alpha} \bmod {N(n)}}S_nT_n^{-\alpha}S_nT_n^{(\frac{1}{\alpha}(\frac{\beta-1}{2}) -1) \bmod {N(n)}}$ for $n=2,3$.
Finally, observe that $\sqrt{3}=\zeta_{72}^{6} -\zeta_{72}^{30}$. Indeed, $i\sqrt{3}$
can be expressed as the difference $\zeta_3-\zeta_3^2$ of the two primitive cube roots of unity, and since
$i=\zeta_{72}^{18}$ and $\zeta_3=\zeta_{72}^{24}$, dividing by $i$ yields the claim. Thus $t(\tau_{\ell_0}) = \sqrt{3} R_2(\tau_{\ell_0}) = t_D$.
\end{proof}
\begin{lemma}
\label{lemma_trans}
Suppose $R_T$ is a real root of a Ramanujan polynomial $T_D(x)$.
Then, the real number $R_H$ obtained from the equation
\begin{equation}
\label{eq_trans}
R_H=(R_T^6-27R_T^{-6}-6)^3
\end{equation}
is a real root of the corresponding Hilbert polynomial $H_{D}(x)$.
\end{lemma}
\begin{proof}
Set $R_T = t_D$ and $R_H = j(\tau_{\ell_0})$.
Using Equations (4.4) and (4.5) from \cite{Berndt-Chan} it can be easily derived that
$h(e^{2\pi i\tau_{\ell_0}/3})-27h(e^{2\pi i\tau_{\ell_0}/3})^{-1}=\gamma_2(\tau_{\ell_0})+6$
where $\gamma_2^3(\tau_{\ell_0})=j(\tau_{\ell_0})$ and
\begin{equation}
\label{h_eq}
h(q)= \frac{f^{12}(-q^3)}{qf^6(-q)f^6(-q^9)}.
\end{equation}
Thus, $j(\tau_{\ell_0})=(h(e^{2\pi i\tau_{\ell_0}/3})-27h(e^{2\pi i\tau_{\ell_0}/3})^{-1}-6)^3$ which means that
we now have to find the relation between $t_D$ and $h(e^{2\pi i\tau_{\ell_0}/3})$.
Substituting $q$ with $e^{2\pi i\tau_{\ell_0}/3}$ in Eq.~(\ref{h_eq}) we have that
$h(e^{2\pi i\tau_{\ell_0}/3})=\frac{f^{12}(-e^{2\pi i \tau_{\ell_0}})}{e^{2\pi i\tau_{\ell_0}/3}f^6(-e^{2\pi i\tau_{\ell_0}/3})f^6(-e^{3(2\pi i\tau_{\ell_0})})}$.
Noticing that $q_D=\exp(-\pi \sqrt{D})=-\exp(2\pi i \tau_{\ell_0})$ and from Eq.~(\ref{tndef})
we derive that
$h(e^{2\pi i\tau_{\ell_0}/3})=-27t_D^{-6}$ which completes the proof of the lemma.
\end{proof}
The final step is to reduce Eq.~(\ref{eq_trans}) modulo $p$. The elements $R_H,R_T$
are not in $\mathbb{Z}$ but are elements of the ring of algebraic integers $\mathcal{O}_{H_K}$ of the Hilbert class field and
can be reduced modulo an ideal $P$ extending the ideal $p\mathbb{Z}$ of $\mathbb{Z}$.
But the ideal $p\mathbb{Z}$ splits completely, therefore the Galois extension $\frac{\mathcal{O}_{H_K}/P}{\mathbb{Z}/p\mathbb{Z} }$ is
the trivial one, and $\mathcal{O}_{H_K}/P$ is the field $\mathbb{F}_p$.
The argument above proves that Eq.~(\ref{eq_trans}) holds not only for the real roots of the polynomials
but also for their roots modulo $p$.
The interested reader is referred to \cite{C89,S04,ST87} for definitions on
algebraic number theory
not given here.
Using Eq.~(\ref{eq_trans}), we can easily derive the modular polynomial $\Phi_T(x,j)$ for Ramanujan polynomials.
The polynomial will be equal to:
\begin{equation}
\label{ramanujan-phi-eq}
\Phi_T(x,j) = (x^{12}-6x^6-27)^3 -jx^{18}.
\end{equation}
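As a concrete check, for $D=11$ the Ramanujan polynomial is $T_{11}(x)=x-1$, so $R_T=1$ and Eq.~(\ref{eq_trans}) gives $R_H=(1-27-6)^3=-32768$, which is exactly the classical $j$-invariant for discriminant $-11$; moreover $\Phi_T(1,-32768)=0$:

```python
R_T = 1                                  # root of T_11(x) = x - 1
R_H = (R_T ** 6 - 27 * R_T ** -6 - 6) ** 3
print(R_H)                               # -32768, the j-invariant for D = 11
print((R_T ** 12 - 6 * R_T ** 6 - 27) ** 3 - R_H * R_T ** 18)   # Phi_T(R_T, R_H) = 0
```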
\section{Precision Requirements for the Construction of the Polynomials}
\label{prec-section}
In this section we focus on the precision required for the construction
of all previously mentioned polynomials.
In order to compare them, we introduce the notion of {\em logarithmic height} for
estimating the size of a polynomial. For a polynomial $g(x)=\sum_{i=0}^n a_i x^i \in \mathbb{Z}[x]$ its
logarithmic height is defined as
\[
H(g)=\max_{i=0,\ldots,n} \log_2 |a_i|.
\]
The value $H(g)$ is actually the bit-precision needed for performing all floating point computations
in order to obtain the coefficients
of the polynomial $g(x)$.
Starting with Hilbert polynomials, an estimate of their precision requirements in bits (and hence also of their
logarithmic height) was
given in \cite{LZ94}:
\begin{equation*}
\mbox{H-Prec}(D) \approx \frac{\ln 10}{\ln 2} (h/4 + 5) +
\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}
\label{hplz}
\end{equation*}
with the sum running over the same values of $\tau$ as the product
in Eq.~(\ref{hx}).
A slightly different bound, which is remarkably accurate, was given in \cite{M90}:
\begin{equation*}
\mbox{H-Prec1}(D) \approx 33 +
\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}.
\label{hplz1}
\end{equation*}
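For illustration, the sketch below evaluates $\mbox{H-Prec1}(D)$ by enumerating the reduced forms (using the conditions of Section~\ref{sec_hpoly}); for $D=23$ it gives $h=3$, $\sum_\tau 1/\alpha=2$, and an estimate of roughly $76$ bits:

```python
from math import gcd, isqrt, pi, log, sqrt

def reduced_forms(D):
    # Primitive reduced quadratic forms (a, b, c) of discriminant -D.
    forms = []
    for a in range(1, isqrt(D // 3) + 1):
        for b in range(-a, a + 1):
            if (b * b + D) % (4 * a):
                continue
            c = (b * b + D) // (4 * a)
            if a > c or gcd(gcd(a, abs(b)), c) != 1:
                continue
            if (abs(b) == a or a == c) and b < 0:
                continue
            forms.append((a, b, c))
    return forms

def hprec1(D):
    # H-Prec1(D) = 33 + (pi*sqrt(D)/ln 2) * sum over forms of 1/a
    return 33 + pi * sqrt(D) / log(2) * sum(1 / a for a, _, _ in reduced_forms(D))

print(len(reduced_forms(23)), hprec1(23))   # h = 3, roughly 76.5 bits
```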
In the rest of the section we show that, based on this estimate, we can derive estimates of the
precision requirements of every class polynomial.
Let $f$ be a modular function, such that $f(\tau)$ for some $\tau \in \mathbb{Q}(\sqrt{-D})$ generates the Hilbert class field of
$\mathbb{Q}(\sqrt{-D})$.
The element $f(\tau)$ is an algebraic integer, and let us denote by $P_f$ its minimal polynomial.
For every modular function there is a polynomial $\Phi$ (called modular polynomial) such that
$\Phi(f,j) = 0$ where $j$ is the modular function used in the construction of Hilbert polynomials.
This polynomial equation is used (as we show in the previous section) in order to transform the roots of
the minimal polynomial
of a class invariant to the roots of the Hilbert polynomial. We have seen that in the cases
of Weber, $M_{D,l}(x)$ and Ramanujan polynomials the degree in $j$ of the modular polynomial is equal to 1,
while for $M_{D,p_1, p_2}(x)$ polynomials it is at least 2.
Asymptotically, one can estimate the ratio of the logarithmic height $h(j(\tau))$ of the algebraic integer $j(\tau)$
to the logarithmic height $h(f(\tau))$ of the algebraic integer
$f(\tau)$.\footnote{Let $K$ be a number field, $\alpha\in K$ be an algebraic
number and $M_K$ be the set of absolute values on $K$. Following
the notation of \cite[VIII]{S86}, the absolute logarithmic height
of an element $\alpha \in K$ is defined as $h(\alpha) =
\frac{1}{[K:\mathbb{Q}]} \log_2 \left( \prod_{v\in M_K} \max \{
|\alpha|_v, 1 \} \right)$.}
Namely,
\begin{equation}
\label{ratio11}
\lim_{h(j(\tau))\rightarrow \infty} \frac{h(j(\tau))}{h(f(\tau))}=\frac{\deg_f \Phi(f,j)}{\deg_j \Phi(f,j)}=r(f),
\end{equation}
where the limit is taken over all CM-points $\mathrm{SL}_2(\mathbb{Z})\tau \in \mathbb{H}$ \cite{HinSil}.
Concerning Weber polynomials, we can easily compute the values of $r(f)$ from
Eq.~(\ref{eq-phi-1}) and Eq.~(\ref{eq-phi-2}).
Thus, when $D \not\equiv 0 \pmod 3$, $r(f)$ is equal to 24, and when $D \equiv 0 \pmod 3$, $r(f)$ is equal to 8.
A question that immediately arises is how Eq.~(\ref{ratio11}) can be used to estimate the logarithmic
height of the minimal polynomial $P_f$. The following lemma gives an answer to this question.
\begin{lemma}
Suppose that $H(P_f)$ is the logarithmic height of the minimal polynomial of the algebraic number $f(\tau)$ and
$H(P_j)$ is the logarithmic height of the corresponding Hilbert polynomial. If $f(\tau)$ generates the Hilbert
class field then
\begin{equation}
\label{ratio12}
\lim_{h(j(\tau))\rightarrow \infty} \frac{H(P_j)}{H(P_f)}=\frac{\deg_f \Phi(f,j)}{\deg_j \Phi(f,j)}=r(f).
\end{equation}
If $f(\tau)$ does not generate the Hilbert
class field but an algebraic extension of it with extension degree $m$ then
\begin{equation*}
\lim_{h(j(\tau))\rightarrow \infty} \frac{H(P_j)}{H(P_f)}=\frac{\deg_f \Phi(f,j)}{\deg_j \Phi(f,j)}=\frac{r(f)}{m}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is based on the following bounds \cite[Th. 5.9]{S86}:
\begin{equation*}
-k + k h(a) \leq H(P_a) \leq k-1+ k h(a)
\end{equation*}
where $h(a)$ is the logarithmic height of the algebraic integer $a$ and $k$ is the degree of its minimal polynomial $P_a$.
If $f(\tau)$ generates the Hilbert class field then the degree of its minimal polynomial is equal to the degree of the
corresponding Hilbert polynomial. Suppose that their degree is equal to $k$. Then, we have that
\begin{equation}
\label{eq-f}
-k + k h(f(\tau)) \leq H(P_f) \leq k-1+ k h(f(\tau))
\end{equation}
and
\begin{equation*}
-k + k h(j(\tau)) \leq H(P_j) \leq k-1+ k h(j(\tau)).
\end{equation*}
Thus,
\begin{equation*}
\frac{-k + k h(j(\tau))}{k-1+ k h(f(\tau))} \leq \frac{H(P_j)}{H(P_f)} \leq \frac{k-1+ k h(j(\tau))}{-k + k h(f(\tau))}.
\end{equation*}
Taking the limit $h(j(\tau))\rightarrow \infty$ we obtain that
\begin{equation}
\label{eq-degree}
\frac{H(P_j)}{H(P_f)}\rightarrow r(f).
\end{equation}
In the case that $f(\tau)$ generates an algebraic extension of the Hilbert class field, we similarly have that
\begin{equation}
\label{eq-ext-degree}
\frac{H(P_j)}{H(P_f)}\rightarrow \frac{r(f)}{m}
\end{equation}
where $m$ is the degree of the extension. This is easily derived from the fact that the degree of the minimal polynomial
$P_f$ is $m$ times larger than the degree of the corresponding Hilbert polynomial and Eq.~(\ref{eq-f}) becomes
\begin{equation*}
-mk + mk h(f(\tau)) \leq H(P_f) \leq mk-1+ mk h(f(\tau)).
\end{equation*}
Thus,
\begin{equation*}
\frac{-k + k h(j(\tau))}{mk-1+ mk h(f(\tau))} \leq \frac{H(P_j)}{H(P_f)} \leq \frac{k-1+ k h(j(\tau))}{-mk + mk h(f(\tau))}.
\end{equation*}
\end{proof}
Eq.~(\ref{eq-degree}) and Eq.~(\ref{eq-ext-degree})
relate the precision required for the construction of Hilbert polynomials with the precision
needed for other classes of polynomials.
Estimating the height $H(P_j)$ of Hilbert polynomials with the quantity $\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$,
we can derive the precision requirements for the construction of every class polynomial by the equation:
\begin{equation*}
\frac{m}{r(f)} \frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha},
\end{equation*}
where $m \geq 1$ is the extension degree ($m = 1$ when $f(\tau)$ generates the Hilbert class field itself).
Obviously,
we want to find class invariants $f(\tau)$ so that the ratio $r(f)$ is as big as possible.
However, there is a limit on the ratio $r(f)$.
It is known \cite{BS06} that $r(f) \leq 800/7$ and if the Selberg eigenvalue conjecture in \cite{Sarnak} holds
then $r(f) \leq 96$.
Concerning Weber polynomials,
when $D \equiv 3 \pmod 8$ their degree
is three times larger than the degree of the corresponding Hilbert polynomials. Therefore, for this case of
$D$, the estimation of the precision requirements will be approximately
$\frac{3}{r(f)}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$.
In conclusion, an estimate of the precision requirements of Weber polynomials is
$\frac{1}{24}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ for $D \not\equiv 0 \pmod 3$
and $\frac{1}{8}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ for $D \equiv 0 \pmod 3$.
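These two estimates can be evaluated directly once the sum $S = \sum_{\tau} 1/\alpha$ over the reduced forms is known. A minimal sketch (the function name is illustrative, and $S$ is assumed to be precomputed):

```python
from math import pi, sqrt, log

def weber_precision(D, S):
    """Approximate bit precision for Weber polynomials:
    (1/24 or 1/8) * pi*sqrt(D)/ln2 * S, depending on D mod 3."""
    factor = 1 / 8 if D % 3 == 0 else 1 / 24
    return factor * pi * sqrt(D) / log(2) * S

# D = 299 (not divisible by 3), with S = 776/315 precomputed for -299:
print(weber_precision(299, 776 / 315))
```

As expected, for a fixed discriminant the $D \equiv 0 \pmod 3$ case requires three times the precision of the $D \not\equiv 0 \pmod 3$ case.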
Based again on Eq.~(\ref{ratio12}), it can be concluded that
the precision required for the construction of the
$M_{D, l}(x)$ polynomials is approximately $\frac{1}{(l+1)}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$
and for
$M_{D, p_1, p_2}(x)$ polynomials it is approximately $\frac{(p_1-1)(p_2-1)}{12(p_1+1)(p_2+1)}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$,
where the sum runs over the same values of $\tau$ as the product
in Eq.~(\ref{hx}) \cite{EM02}.
Thus, it is equal to $\frac{1}{28}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ for $M_{D, 3, 13}(x)$ polynomials and to $\frac{1}{24}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ for $M_{D, 5, 7}(x)$ polynomials. The above precision estimations are summarized in Table~\ref{prec-ml}.
\begin{table}
\centering
\begin{tabular}{||l|l||} \hline
class polynomial & precision estimation \\ \hline \hline
$M_{D,3}(x)$ & $\frac{1}{4}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ \\ \hline
$M_{D,5}(x)$ & $\frac{1}{6}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ \\ \hline
$M_{D,7}(x)$ & $\frac{1}{8}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ \\ \hline
$M_{D,13}(x)$ & $\frac{1}{14}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ \\ \hline
$M_{D,5,7}(x)$ & $\frac{1}{24}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ \\ \hline
$M_{D,3,13}(x)$ & $\frac{1}{28}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$ \\ \hline
\end{tabular}
\caption{Precision estimations for $M_{D,l}(x)$ and $M_{D,p_1,p_2}(x)$ polynomials.}
\label{prec-ml}
\end{table}
Finally, in order to find an estimation for the precision requirements of Ramanujan polynomials, we use
Eq.~(\ref{ratio12}) and Eq.~(\ref{ramanujan-phi-eq}).
We easily conclude
that the precision required for the construction of the Ramanujan polynomials is approximately
$\frac{1}{36}\frac{\pi\sqrt{D}}{\ln 2} \sum_{\tau} \frac{1}{\alpha}$.
\section{Implementation and Experimental Results}
\label{exper}
In this section, we discuss some issues regarding the
construction of the Weber, $M_{D,l}(x)$, $M_{D,p_1,p_2}(x)$ and Ramanujan polynomials. All
implementations and experiments were made in Pari 2.3.1 \cite{PARI2} compiled with GMP-4.2.1 kernel \cite{gnu:gnu} and have been carried out on a
double 2GHz Xeon machine running Linux 2.6.9-22 and equipped with 2Gb of main memory.
\begin{figure}
%
\begin{minipage}{7cm}
\centering\epsfig{file=all_polynomials_big_values.ps, width=4.7cm, angle = 270}
\end{minipage} \hfill
\begin{minipage}{7cm}
\centering\epsfig{file=all_polynomials_big_values_aristeidis.ps, width=4.7cm, angle = 270}
\end{minipage}
\caption{Bit precision for the construction of
all polynomials.}
\label{hwprecision}
\end{figure}
%
In Figure~\ref{hwprecision} we report on the precision needed for the
construction of all polynomials for various values of $D$.
In the left figure, we examine the precision requirements of Ramanujan, Weber ($D \not\equiv 0 \pmod 3$) and
$M_{D,l}(x)$ polynomials for all values of $l$.
The values of $D$ range from 30083 to 64163 while the degree $h$ ranges from 32 to 48.
We observed, as the
theory dictates, that the precision required for the construction of Ramanujan polynomials is much smaller than that required for Weber and
$M_{D,l}(x)$ polynomials for all
values of $D$ that we examined. Weber polynomials require less precision than $M_{D,l}(x)$ polynomials, while
among the latter, $M_{D,13}(x)$ polynomials require the least precision.
Examining larger values of the discriminant $D$ and adding $M_{D,3,13}(x)$ and
$M_{D,5,7}(x)$ polynomials in our comparison, we show (Figure~\ref{hwprecision} (right)) that
Ramanujan polynomials are constructed more efficiently than all other polynomials. $M_{D,3,13}(x)$ polynomials require less precision
than $M_{D,5,7}(x)$ polynomials which are constructed more efficiently than Weber polynomials.
In this figure, we examined all values of $D$ from 21840299 to 873600299 using a step of 21840000.
The degree $h$ of
the constructed polynomials (for these values of $D$) ranges from 2880 to 17472.
Summarizing the results of our experiments, we see that Ramanujan polynomials outperform
$M_{D,13}(x)$, Weber, $M_{D,5,7}(x)$ and $M_{D,3,13}(x)$ polynomials, as they require on average
66\%, 42\%, 32\%
and 22\% less precision, respectively.
Table~\ref{prectable} shows this difference by presenting the exact bit precision needed for
the construction of the polynomials for several values of $D$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|p{1.8cm}|p{1.0cm}|p{1.8cm}|p{1.8cm}|p{1.8cm}|p{1.8cm}|p{1.8cm}|} \hline
$D$ & $h$ & $M_{D,13}(x)$ & Weber & $M_{D,5,7}(x)$ & $M_{D,3,13}(x)$ & Ramanujan \\ \hline \hline
109200299 & 5016 & 31270 & 18657 & 15546 & 13534 & 10624 \\ \hline
240240299 & 6944 & 45402 & 26837 & 22757 & 19834 & 15442 \\ \hline
349440299 & 9772 & 61933 & 37004 & 30768 & 26804 & 20998 \\ \hline
458640299 & 12660 & 77894 & 46387 & 38447 & 33633 & 26245 \\ \hline
698880299 & 13950 & 90734 & 54030 & 45311 & 39508 & 30813 \\ \hline
851760299 & 15904 & 101214 & 60333 & 50322 & 43984 & 34243 \\ \hline
\end{tabular}
\end{center}
\caption{Precision requirements (in bits) for the computation of $M_{D,13}(x)$, Weber, $M_{D,5,7}(x)$, $M_{D,3, 13}(x)$ and Ramanujan polynomials.}
\label{prectable}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|p{1.8cm}|p{1.0cm}|p{1.8cm}|p{1.8cm}|p{1.8cm}|p{1.8cm}|p{1.8cm}|} \hline
$D$ & $h$ & $M_{D,13}(x)$ & Weber & $M_{D,5,7}(x)$ & $M_{D,3,13}(x)$ & Ramanujan \\ \hline \hline
109200299 & 5016 & 134 & 245 & 68 & 59 & 47 \\ \hline
240240299 & 6944 & 271 & 492 & 138 & 119 & 94 \\ \hline
349440299 & 9772 & 518 & 950 & 262 & 227 & 179 \\ \hline
458640299 & 12660 & 842 & 1539 & 423 & 366 & 289 \\ \hline
698880299 & 13950 & 1087 & 1986 & 551 & 478 & 377 \\ \hline
851760299 & 15904 & 1379 & 2524 & 697 & 604 & 475 \\ \hline
\end{tabular}
\end{center}
\caption{Memory requirements (in MB) for the storage of $M_{D,13}(x)$, Weber, $M_{D,5,7}(x)$, $M_{D,3, 13}(x)$ and Ramanujan polynomials.}
\label{memorytable}
\end{table}
Comparing the number of bits needed for the storage of all classes of polynomials, it is clear that the memory required
for the storage of Ramanujan polynomials is smaller than that needed for the other classes of polynomials.
The percentages are the same as for the precision requirements, with one exception: Weber polynomials.
Notice that the degree of Weber polynomials
is $3h$ and thus the memory used for the storage of Ramanujan polynomials is not only 42\% (like the precision
requirements) less than the
corresponding memory needed for the Weber polynomials but approximately 81\% less! This means that regarding the storage
requirements of all polynomials, Weber polynomials are by far the worst choice.
In Table~\ref{memorytable}
we present the memory in MB needed for the storage of all classes of polynomials for few values of $D$.
The difference in the efficiency of the construction of all classes of polynomials can be easily seen
by examining the polynomials for $D=299$ and $h=8$. Even though this is a small value of the discriminant,
the difference in the size of the coefficients of the polynomials is remarkable.
In particular, 25 bits are required for the storage of the coefficients of the $T_{299}(x)$ polynomial,
188 bits for
the storage of $W_{299}(x)$ polynomial, 112 bits for $M_{299, 13}(x)$ polynomial,
31 bits for $M_{299, 3, 13}(x)$ and 32 bits for $M_{299,5,7}(x)$.
\begin{multline*}
W_{299}(x) = x^{24} - 8x^{23}-12x^{22}-28x^{21}-56x^{20} -40x^{19} + 144x^{18} +144x^{17} +16x^{16} -112x^{15} -224x^{14} -416x^{13}\\
-32x^{12} +256x^{11} +704x^{10} + 832x^{9} +640x^{8} -384x^{7} -1792x^{6} -1280x^{5} -256x^{4} +1280x^{3} +1536x^{2} +512x +256
\end{multline*}
\begin{equation*}
M_{299,13}(x) = x^{8} + 78x^{7}+793x^{6}+5070x^{5}+20956x^{4} +65910x^{3} + 134017x^{2} +171366x +28561
\end{equation*}
\begin{equation*}
M_{299,5,7}(x) = x^{8} - 8x^{7}+31x^{6}-22x^{5}+28x^{4} -2x^{3} - 19x^{2} +8x -1
\end{equation*}
\begin{equation*}
M_{299,3,13}(x) = x^{8} - 6x^{7}+16x^{6}+12x^{5}-23x^{4} +12x^{3} + 16x^{2} -6x +1
\end{equation*}
\begin{equation*}
T_{299}(x) = x^{8} + x^{7}-x^{6}-12x^{5}+16x^{4} -12x^{3} + 15x^{2} -13x +1
\end{equation*}
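One plausible accounting for the storage figures quoted above is to sum the bit lengths of the coefficients. This simple measure (sketched below; sign bits are ignored, and the accounting is our illustrative assumption rather than the paper's exact metric) reproduces the quoted 25, 32 and 31 bits for $T_{299}(x)$, $M_{299,5,7}(x)$ and $M_{299,3,13}(x)$:

```python
def coeff_bits(coeffs):
    """Total storage as the sum of coefficient bit lengths (signs ignored)."""
    return sum(abs(a).bit_length() for a in coeffs)

# Coefficients in descending degree order, copied from the displayed polynomials.
T_299      = [1, 1, -1, -12, 16, -12, 15, -13, 1]
M_299_5_7  = [1, -8, 31, -22, 28, -2, -19, 8, -1]
M_299_3_13 = [1, -6, 16, 12, -23, 12, 16, -6, 1]

print(coeff_bits(T_299), coeff_bits(M_299_5_7), coeff_bits(M_299_3_13))
# 25 32 31
```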
The time efficiency of the construction of the polynomials is clearly proportional to the corresponding precision
requirements. However, notice that computing the Weber and $M_{D,l}(x)$ polynomials amounts to $2h$ evaluations of
the eta function $\eta$, while for Ramanujan and $M_{D, p_1, p_2}(x)$ polynomials we need to evaluate the function
$3h$ and $4h$
times, respectively. This could be a disadvantage for Ramanujan and $M_{D, p_1, p_2}(x)$ polynomials, but this is not the case.
In particular, it was shown in \cite{EM02} that for any polynomial it is sufficient to precompute the values of $\eta$ only at the $h$ reduced quadratic forms.
Finally, we note that the time required for the transformation of a root of a Weber, Ramanujan or $M_{D,l}(x)$
polynomial to a root of
the corresponding Hilbert polynomial is approximately the same. The situation gets worse when $M_{D, p_1, p_2}(x)$
polynomials are used, because the time for the transformation and the storage of the modular polynomials
are larger.
In conclusion, we showed that Ramanujan polynomials clearly outperform in every aspect all previously used class polynomials
for all values of the
discriminant $D \equiv 3 \bmod 8$, and therefore their use is particularly favored in the CM method for the generation of
prime order ECs.
In recent years, a new field has emerged from the understanding, control and
manipulation of objects at nanoscale level (nano-objects). It is commonly known as
nanoscience. This field involves physics, chemistry and even engineering, and
addresses a huge number of important issues starting from basic science
and ending in a large variety of technological applications \cite{baletto}.
Among the nano-objects, the small clusters or nanoclusters play a very important
role, since they are the bricks of nanoscience. Therefore, the study of small
clusters deserves a special attention. In this respect,
the steps to follow for a complete description of a cluster can
be summarized in the following three questions:
what is the lowest energy structure?, what is the effect of increasing or decreasing
the temperature on the structural properties of a cluster? and the last step
deals with the kinetic effects in the formation of the nanocluster. Here,
we are only concerned about the first question for the silver octamer,
leaving the other two questions open for future investigations.
From the theoretical point of view, first-principles methods give an enormous advantage
for understanding, projecting and inventing new materials that is reflected in the huge number
of articles published in the field of materials science. Likewise, density functional theory (DFT)
has emerged as a new
and promising tool for {\it ab initio} electronic structure calculations and gives valuable information
about the geometry of nanoscale systems \cite{jena}, but unfortunately it does
not always predict the correct structure of the
cluster under consideration. In this regard, the silver octamer belongs to the group of controversial
systems for which the lowest energy structure is unresolved by DFT.
Recently, P. Radcliffe {\it et al.} \cite{radcliffe} have proposed Ag$_8$ clusters
embedded in helium droplets as a suitable system for light amplification based on an optically
accessible long-living excited state (E*) and thereby, from the theoretical point of view, the
determination of the structure becomes a key point as the first step for identifying
and controlling the levels that populate E*. Up to now, it has not been
possible to make a reliable theoretical prediction of the most stable structure of Ag$_8$.
A review of the literature reveals that there are two competing geometries in eight-atom
clusters of s-electron elements, having
T$_d$ and D$_{2d}$ symmetry. In fact,
different levels of theories favor different geometries: DFT in its local
density approximation (LDA) \cite{fournier}, multireference configuration interaction
method \cite{bonacic1}, a tight-binding approach \cite{zhao}, and the many-body perturbation theory-based
calculations \cite{huda} give the
D$_{2d}$ geometrical shape as the lowest energy structure of Ag$_8$ whereas
the equation-of-motion coupled cluster method
\cite{bonacic}, time-dependent DFT only at LDA level \cite{yabana} and
molecular-dynamics simulations \cite{erkoc} predict a T$_d$ structure as the structural minimum.
It was reported in Refs.~\cite{bonacic,bonacic1} that
the D$_{2d}$ geometry is favored energetically over T$_d$ symmetry when explicit correlation treatments for 5s
electrons are included, but
since the calculated energy difference between T$_d$ and D$_{2d}$ isomers is very small, the
predicted theoretical ordering is uncertain.
One way of solving this vexing problem comes from the hand of the time-dependent density functional
theory (TDDFT) \cite{casida} that is a generalization of traditional ground stationary state DFT to treat
the dynamic response of the charge density to a time-dependent perturbation. TDDFT is
a powerful methodology towards the calculation of the
optical spectra, and thereby gives access to excited-state information.
In this brief report,
we have calculated the optical response of the T$_d$ and D$_{2d}$ geometries and compared them both
with each other and with the experimental findings. The atomic positions were fully optimized with
an all-electron DFT
implementation at the generalized gradient approximation (GGA) level, representing an improvement over
other TDDFT studies in small silver clusters \cite{yabana}. This work relies on the combination of the
traditional DFT and its generalization to excited states, as a promising tool for elucidating
structures that DFT by its own is unable to predict. Here, we
demonstrate that the D$_{2d}$ structure is the structural minimum of Ag$_8$ and the calculated
spectra allow us
to estimate the interaction of Ag$_8$ with the surrounding helium or argon matrix presented in experimental
observations.
With the aim of elucidating the lowest energy structure of Ag$_8$ cluster, we have
performed density functional theory-based calculations consisting of
a linear combination of Gaussian-type orbitals-Kohn-Sham-density-functional
methodology (LCGTO-KSDFM) to obtain the structural and
electronic ground-state properties \cite{salahub}, and
a TDDFT implementation to compute the electronic excitations \cite{marques}.
For the former, all-electron calculations were carried out
with DEMON-KS3P5 \cite{salahub} at GGA level to
take the exchange-correlation (XC) effects into account \cite{perdew}. An orbital
basis set of contraction pattern (633321/53211*/531+) was used in conjunction
with the corresponding (5,5;5,5) auxiliary basis set for
describing the s-, p- and d-orbitals \cite{huzinaga}. The grid for numerical
evaluation of the XC terms had 128 radial shells of points and each shell had
26 angular points. Spurious one-center contributions to the XC forces, typically
found in systems with metal-metal bonds when
using a nonlocal functional, are
eliminated in a similar way as has been done in Ref.~\cite{versluis}.
Trial geometries were fully optimized without symmetry and
geometry constraints for different multiplicities using the Broyden-Fletcher-
Goldfarb-Shanno algorithm \cite{broyden}. The multiplicities were ranged from 1 to 11 and in all
reported structures the singlet state was favored energetically. During the optimization, the convergence criterion
for the norm of the energy gradient was fixed to $10^{-4}$ a.u. while it was $10^{-7}$ a.u. for
the energy and $10^{-6}$ a.u. for the charge density.
For the latter, after inserting the atomic coordinates of the converged structures provided by DEMON-KS3P5
all the dynamical quantities are computed by evolving the electronic wave functions in real time
and real space \cite{marques}. The electron-ion interaction is described through the
Hartwigsen-Goedecker-Hutter
relativistic separable dual-space gaussian pseudopotentials \cite{hartwigsen} and the XC effect
were treated in the GGA, implemented via the Perdew-Burke-Ernzerhof functional \cite{perdew1}. The
grid in real space to solve the Kohn-Sham equations consists of a sum of spheres around each
atom of radius 5.5 \AA~and a mesh spacing of 0.23 \AA. The time step for the propagation
of the electronic orbitals was fixed to 0.0013 fs, which ensures the stability of time-dependent
propagation. An artificial electronic temperature of 10 K was included according to the
Fermi-Dirac function used to distribute the electrons among the accessible states.
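The Fermi-Dirac occupation used for this smearing is standard; an illustrative sketch (with $k_B$ in eV/K, not part of our computational code) shows that at 10 K the distribution is essentially a step function on the meV scale:

```python
from math import exp

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def fermi_dirac(E, mu, T):
    """Fermi-Dirac occupation of a level at energy E (eV) for chemical
    potential mu (eV) and electronic temperature T (K)."""
    return 1.0 / (exp((E - mu) / (K_B * T)) + 1.0)

# At T = 10 K the smearing width k_B*T is below 1 meV, so states even a few
# meV away from the chemical potential are fully occupied or fully empty.
print(fermi_dirac(-0.01, 0.0, 10.0))  # ~1.0
print(fermi_dirac(0.0, 0.0, 10.0))    # 0.5
```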
After a review of the literature on silver clusters \cite{fournier,zhao,bonacic,huda,bonacic1},
we have decided to optimize, as a good candidate to the structural minimum of the octamer,
the following isomers of Ag$_8$: a D$_{2d}$ dodecahedron (D$_{2d}$-DD), which can also be
viewed as a distorted bicapped octahedron, a T$_d$ tetracapped tetrahedron
(T$_d$-TT) and a C$_s$ 1-pentagonal bipyramid (C$_s$-PBP) in Fournier's notation \cite{fournier}.
The main results (density of states (DOS), optimized structures,
polarizabilities, ground state energies, \ldots) of
the electronic structure calculations are
collected in Table~\ref{table1} and Fig.~\ref{fig1}. The LCGTO-KSDFM calculations clearly show that the
C$_s$-PBP geometry is energetically far from the lowest energy structure by an amount of
181.25 meV. Therefore, we will concentrate our attention in D$_{2d}$ and T$_d$ structures.
\begin{figure*}
\includegraphics[width=17.8cm,angle=0]{fig1.eps}
\caption{\label{fig1}(color online). Energy levels, partial and total density of states and
shapes of the delocalized molecular orbitals for the higher-lying occupied and
lower-lying unoccupied levels of Ag$_8$ isomers: a) T$_d$-TT
and b) D$_{2d}$-DD. The orbitals below the Fermi level
are double occupied; for
these levels only one molecular orbital is presented. The solid vertical line represents
the Fermi level.}
\end{figure*}
\begin{table}
\caption{\label{table1}Ground-state energies relative to the most stable isomer (D$_{2d}$-DD)
and electronic structure properties of the
DFT-optimized Ag$_8$ cluster isomers.
The Fermi level is denoted by E$_f$ and $\Delta\xi$ stands for the HOMO-LUMO gap.
The mean static polarizability $\bar{\alpha}$ and
the polarizability anisotropy $\Delta\alpha$ were calculated under
the influence of an external electric field of strength 0.0005 a.u..}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
Symmetry & $\Delta$E$_{DFT}$ & E$_f$ & $\Delta\xi$ & $\bar{\alpha}$ & $\Delta\alpha$\\
&(meV) & (eV) & (eV) & (\AA$^3/$atom) & (\AA$^3/$atom)\\
\hline
D$_{2d}$& 0.00 & -4.302 & 1.719 & 6.32 & 1.22\\
T$_d$ & 6.14 & -4.574 & 2.335 & 6.46 & 0.01\\
C$_s$ & 181.25 & -4.048 & 1.327 & 6.45 & 1.71\\
\end{tabular}
\end{ruledtabular}
\end{table}
Despite the fact that the lowest energy isomer
corresponds to a D$_{2d}$ symmetry, it should be noted that the ground-state energy difference
between the D$_{2d}$-DD and T$_d$-TT isomers is very small ($\Delta E_{D_{2d}\rightarrow T_d}=6.14$ meV, cf.\ Table~\ref{table1})
compared to 0.19 eV, which is the average energy difference between the ground-state structure
and the second most stable structure of Ag$_n$ (2$\le$n$\le$12) \cite{fournier}.
Furthermore, the polarizabilities and the HOMO-LUMO gaps (HLg) do not offer a clear
picture for elucidating the structural minimum of the octamer. That is, it is well known that
the Ag$_8$ cluster is a
closed-shell system, and it was demonstrated experimentally
and theoretically that the closure of an electronic shell manifests itself
in a particularly large HLg (see \cite{zhao} and references therein);
consequently, the HLg values reported in Table~\ref{table1} favor
the stabilization of the T$_d$-TT isomer.
As far as the reported polarizabilities are concerned, in molecular electronic distribution studies under the influence of an external electric
field, the relevant quantities are the mean static polarizability $\bar{\alpha}=
(\sum{_{i=1}^3}\alpha_{ii})/3$ and
the polarizability anisotropy $\Delta\alpha$, defined as:
\begin{equation}
\label{eq1}
\Delta\alpha=\sqrt{\frac{\sum\limits _{i,j=1,2\atop i<j}^{2,3}(
\alpha_{ii}-\alpha_{jj})^2+6\sum\limits _{i,j=1,2\atop i<j}^{2,3}\alpha_{ij}^2}{2}}
\end{equation}
where $\alpha_{ij}=\partial(\mu_e)_i/
\partial E_j$
is the ij-component of the polarizability tensor under the action of an external electric
field $E_j$. It is not common practice to express the polarizability anisotropy
as defined in Eq.~(\ref{eq1}). The commonly used definition omits the
second term ($6\sum_{i,j=1,2;i<j}^{2,3}\alpha_{ij}^2$) and thus
neglects the important influence that the off-diagonal elements of the second-rank polarizability tensor
have on the
symmetry of the electric charge distribution \cite{alvarado}.
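Both quantities are straightforward to evaluate from the $3\times 3$ polarizability tensor. The following sketch implements $\bar{\alpha}$ and Eq.~(\ref{eq1}) directly (the function names and test tensor are illustrative, not data from this work):

```python
from math import sqrt

def mean_polarizability(a):
    """Mean static polarizability: trace(alpha)/3 for a 3x3 tensor a."""
    return (a[0][0] + a[1][1] + a[2][2]) / 3.0

def anisotropy(a):
    """Polarizability anisotropy of Eq. (1), including the off-diagonal term."""
    diag = ((a[0][0] - a[1][1]) ** 2 + (a[0][0] - a[2][2]) ** 2
            + (a[1][1] - a[2][2]) ** 2)
    off = a[0][1] ** 2 + a[0][2] ** 2 + a[1][2] ** 2
    return sqrt((diag + 6.0 * off) / 2.0)

# Example: a diagonal tensor diag(1, 1, 4) in arbitrary units.
t = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 4.0]]
print(mean_polarizability(t), anisotropy(t))  # 2.0 3.0
```

A perfectly isotropic tensor gives zero anisotropy, consistent with the spherical-symmetry argument in the text.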
The mean static polarizabilities, reported in Table~\ref{table1},
are quite similar to each other showing in average that the electron
charge is nearly equally distributed among the three isomers. However, only
the polarizability anisotropy of the T$_d$-TT isomer is clearly reduced. This result
tends to stabilize the T$_d$ symmetry over D$_{2d}$ because the smaller the polarizability anisotropy,
the more spherically symmetric the charge distribution, and the latter condition is favored by a closed-shell
system like the silver octamer. Indeed, the delocalized molecular orbitals for the higher-lying
occupied levels of the D$_{2d}$ and T$_d$ geometries, presented in Fig.~\ref{fig1}, exhibit a
hybridization of the atomic 5s levels with the 5p levels leading to a nearly spherical shape
whereas for the lower-lying unoccupied levels, the spherical symmetry becomes less important. The
proximity in energy and the corresponding superposition of the spatial distribution of the higher-lying
occupied molecular orbitals of the T$_d$-TT isomer
with respect to the ones of the D$_{2d}$-DD isomer, contribute to a spherical symmetrization of the
T$_d$-TT charge distribution.
The balance of the aforementioned contrary tendencies does not allow a reliable
prediction of the structure, such as it was discussed in literature \cite{fournier,bonacic,erkoc,zhao,huda}.
The TDDFT calculations using the structural parameters provided by the LCGTO-KSDFM as starting point
can shed some light on the better understanding of this vexing controversy on the structure of Ag$_8$.
In this respect, as it is shown in Fig.~\ref{fig2}, the calculated
spectrum for D$_{2d}$ symmetry is in excellent agreement with the resonant two-photon ionization
spectrum reported by F. Federmann {\it et al.} \cite{federmann} and the excitation spectrum reported
by C. F\'elix {\it et al.} \cite{felix}, whereas the T$_d$-TT calculated spectrum only shows one
resonant peak and it is about 0.31 eV
blue-shifted with respect to the experimental measurements. Thus, the LCGTO-KSDFM predicted structure
is confirmed by the TDDFT calculations of the optical response when it is compared to the experimental
evidences.
\begin{figure}
\includegraphics[width=8.5cm,angle=0]{fig2.eps}
\caption{\label{fig2}Comparison between two experimental recorded spectra
and the calculated
spectra for both (solid line) D$_{2d}$-DD and (dashed line) T$_d$-TT isomers
at a temperature of 10 K.
Open circles
correspond to the resonant two-photon-ionization (R2PI) spectroscopy on Ag$_8$ clusters in
He droplets \cite{federmann} while the solid triangles are for the excitation spectrum of
Ag$_8$ excited with monochromatic Xe light in an Ar matrix \cite{felix}.
The D$_{2d}$-DD isomer spectrum is in excellent agreement with
the experiment whereas the T$_d$-TT isomer spectrum is around 0.31 eV blue-shifted compared to
the R2PI spectrum.}
\end{figure}
Some of the relevant states involved in the transitions that populate the peaks of Fig.~\ref{fig2}
are provided through the calculated DOS depicted in Fig.~\ref{fig1}. For the D$_{2d}$ isomer,
the eigenvalues of the
HOMO and HOMO-1 states are close together in energy and the HOMO-2 state lies 0.7 eV
further down, while for the T$_d$-TT isomer these three states are grouped together in an energy window of only 3 meV. It is worthwhile to mention here that
the T$_d$-TT isomer has high symmetry, so many states will be degenerate.
However, because of small numerical errors, it is quite possible that states
that should strictly be degenerate will instead appear within very small
energy windows.
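As an illustrative aside (not part of our computational procedure), such near-degeneracies can be detected by grouping sorted eigenvalues whose successive gaps fall below a small tolerance; the default 3 meV window and the sample levels below are hypothetical:

```python
def group_degenerate(levels, tol=3e-3):
    """Group sorted energy levels (eV) whose successive gaps are below tol,
    treating the members of each group as numerically degenerate."""
    groups = []
    for e in sorted(levels):
        if groups and e - groups[-1][-1] < tol:
            groups[-1].append(e)
        else:
            groups.append([e])
    return groups

# Three states packed into a 3 meV window, plus one well-separated state:
print(group_degenerate([-4.574, -4.5725, -4.571, -3.9]))
# [[-4.574, -4.5725, -4.571], [-3.9]]
```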
In the energy range displayed in Fig.~\ref{fig2} for D$_{2d}$ symmetry, the TDDFT calculations predict four
electronic transitions starting from the twofold degenerate HOMO and HOMO-1 states
to two excited states separated by 0.07 eV in energy; this gives rise to two peaks because of the nearness
in energy of the HOMO-1 and HOMO states. In the case of the T$_d$ symmetry, six transitions are predicted,
giving rise to only one peak because the two excited states that the electronic transitions populate are separated by only
1 meV, as noted above.
Further consequences can be extracted from the analysis of the calculated spectrum
for the D$_{2d}$-DD isomer when it is compared with the experimental references reported in Fig.~\ref{fig2}.
On the one hand, for the R2PI experiment we attribute the slight difference
in peak position ($\sim$ 3 meV) to the helium environment through the formation of electron bubble states
that significantly blue-shift the transition \cite{bartelt}. As already mentioned above, our TDDFT
calculations also confirm the
suggestion of the authors of Ref.~\cite{federmann} that the asymmetry of the peak involves more than one
transition. On the other hand, the comparison between the excitation spectrum in the Ar matrix \cite{felix} and
the calculated D$_{2d}$ isomer spectrum allows us to estimate the interaction of the Ar matrix with
the Ag$_8$ cluster. The energy shift attributable to Ar matrix effects is estimated to be about 25 meV, which is large
compared to the slight shift ($\sim$ 3 meV) attributed to the helium
surrounding the Ag$_8$ clusters in the R2PI experiment. Consequently, the use of liquid helium droplets
as a spectroscopic matrix has the advantage over an argon matrix of providing an environment more suitable
for studying the electronic excitations of small, free silver clusters.
In conclusion, we have shown that a combination of the LCGTO-KSDFM and TDDFT approaches is able to reproduce the
measured optical response of the silver octamer and allows us to elucidate its lowest energy structure
below 10 K.
Our calculation thus confirms that the structural minimum of Ag$_8$ is the D$_{2d}$-DD isomer, whose
geometrical structure is depicted in Fig.~\ref{fig1}. The TDDFT calculations have provided a number
of electronic transitions involved in the resonant peaks shown in Fig.~\ref{fig2} and demonstrate that
the R2PI experiment is a good technique to measure the electronic excitations
of bare silver clusters experimentally, because it is less perturbative than, for example, experiments that use argon
as the spectroscopic matrix.
The authors acknowledge the CESGA for the computing facilities.
The work was supported by the Xunta de Galicia under
the Project No. PGIDIT02TMT20601PR.
\section{Introduction}
Genes collectively form the organism's genomes and
can be viewed as ``atomic'' units whose evolutionary history forms
a tree. The history of species, which is also a tree, and the history
of their genes is intimately linked, since
the gene trees evolve along the species tree.
A detailed evolutionary scenario, therefore, consists of a gene tree, a species tree and
a reconciliation map $\mu$ that describes how the gene tree is embedded into the species tree.
A reconciliation map assigns vertices of the gene tree to the vertices
or edges of the species tree in such a way that the (partial) ancestor relations given
by the genes are preserved by the map $\mu$. This gives rise to
three important events that may act on the genes through evolution:
\emph{speciation}, \emph{duplication}, and \emph{horizontal gene transfer (HGT)} \cite{Gray:83,Fitch2000}.
Inner vertices of the species tree represent speciation events.
Hence, vertices of the gene tree that are mapped to inner vertices of the species tree
correspond to speciation events, and the respective genes are transmitted from the parent species into the daughter species.
If two copies of a single ancestral gene are formed and reside in the same species,
then a duplication event happened. By contrast, if one of the copies of a gene
``jumps'' into a different branch of the species tree, then an HGT event happened.
Since both duplication and HGT events occur in between speciation events,
such vertices of the gene tree are usually mapped to the edges of the species tree.
The events speciation, duplication, and HGT classify pairs of genes as orthologs,
paralogs and xenologs, respectively \cite{Fitch2000}.
Intriguingly, these relations can
be estimated directly from sequence data using a variety of algorithmic
approaches that are based on the pairwise best match criterion \cite{GSH:19,GCG+18} and hence do
not require any \emph{a priori} knowledge of the topology of either the
gene tree or the species tree, see e.g.\
\cite{Roth:08,Altenhoff:09,Lechner:14,Altenhoff:16,Altenhoff2012, Lechner:11a,CBRC:02,RSLD15,Dessimoz2008,LH:92,tao2018novel,VV:19}.
Moreover, empirical estimated event-relations
can then be used to infer the history of event-labeled gene trees
\cite{lafond2015orthology,dondi2017approximating,LDEM:16,DEML:16,DONDI17,HHH+13,HSW:16,GAS+:17,GHLS:17} and, in some cases, also the species trees
\cite{HW:16book,hellmuth2017biologically,HHH+12}.
This line of research, in particular, has been very successful for the reconstruction
of event-labeled gene trees and species trees based solely on the information of
orthologous and paralogous gene pairs \cite{HLS+15}.
In this paper, we assume that the gene tree $T$ and the types of
evolutionary events on $T$ are known.
For an event-labeled gene tree to be biologically feasible
there must be a putative ``true'' history that can explain the
inferred gene tree. However, in practice it is not possible to
observe the entire evolutionary history as e.g.\ gene losses
eradicate the entire information on parts of the history.
Therefore, the problem of determining whether an event-labeled gene tree is biologically feasible is reduced to the
problem of finding a valid reconciliation map, also known
as DTL-scenario \cite{THL:11,BAK:12,Doyon2010}.
The aim is then to find the unknown species tree $S$ and
reconciliation map between $T$ and $S$, if one exists.
Not all event-labeled gene trees $T$, however, are
biologically feasible in the sense that there exists a species tree $S$
such that $T$ can be reconciled with $S$. In the absence of HGT, biological
feasibility can be characterized in terms of ``informative'' triplets (rooted binary
trees on three leaves) that are displayed by the gene trees \cite{HHH+12}. In the
presence of HGT such triplets provide at least necessary conditions for a gene tree
to be biologically feasible \cite{hellmuth2017biologically}.
A particular difficulty that occurs in the presence of HGT is that gene trees with
HGT must be mapped to species trees only in such a way that genes do not travel back in time.
To be more precise, the ancestor ordering of the vertices in a species tree
gives rise to relative timing information for the species within the species tree. Within this context,
speciation and duplication events can be considered as vertical evolution, that is,
the genetic material is transferred ``forward in time''. In contrast, HGT literally yields
horizontal evolution, that is, genetic material is transferred such that a gene and its
transferred copy coexist.
N{\o}jgaard et al.\ \cite{nojgaard2018time} introduced an axiomatic framework
for time-consistent reconciliation maps and characterized, for a given event-labeled gene tree $T$
and a \emph{given} species tree $S$, whether there exists a time-consistent reconciliation map or not.
This characterization resulted in an $O(|V|\log|V|)$-time algorithm to construct
a time-consistent reconciliation map if one exists.
However, one of the crucial questions that were left open within this
context is as follows: \emph{
For a given event-labeled gene tree whose
internal vertices are labeled by speciation, duplication and HGT,
does there exist a polynomial-time algorithm
to reconstruct the \emph{unknown} species tree together with a
time-consistent reconciliation map, if one
exists?}
In this contribution, we show that the answer to this problem is affirmative
and provide an $O(n^3)$-time algorithm, with $n$ being
the number of leaves of $T$, that allows us to verify whether there is a time-consistent species tree $S$ for the event-labeled gene tree and,
in the affirmative case, to construct $S$.
This paper is organized as follows. We provide in Section \ref{sec:prelim} all necessary definitions.
Moreover, we review some of the important results on gene and species trees, reconciliation maps and
time-consistency that we need here. In Section \ref{sec:gtc}, we formally introduce the gene tree consistency (GTC) problem, that
is, to find a time-consistent species tree for a given event-labeled gene tree. As a main result, we will
see that it suffices to start with a fully unresolved species tree that can then be extended step by step
to a binary species tree to obtain a solution to the GTC problem, provided a solution exists.
In Section \ref{sec:algoGTC}, we provide an algorithm to solve the GTC problem.
For the design of this algorithm, we will utilize an auxiliary directed graph $\ats{T}{S}$
based on a given event-labeled gene tree $T$ and a given species tree $S$.
This type of graph was established in \cite{nojgaard2018time}.
N{\o}jgaard et al.\ \cite{nojgaard2018time} showed that there is time-consistent map between $T$ and $S$ if and only if $\ats{T}{S}$
is acyclic. Our algorithm either reconstructs a species tree $S$, based on
the informative triplets that are displayed by the gene tree, that eventually makes the graph $\ats{T}{S}$ acyclic,
or returns that no solution exists.
The strategy of our algorithm is to construct $\ats{T}{S}$ starting with a fully unresolved species tree $S$
and to resolve this tree step by step in such a way that it ``agrees'' with the informative triplets and
reduces the cycles in $\ats{T}{S}$.
\section{Preliminaries}
\label{sec:prelim}
\subsubsection*{Trees, extensions and triplets}
Unless stated otherwise, all graphs in this work are assumed to be directed without explicit mention. For a graph $G$, the subgraph induced by $X \subseteq V(G)$ is denoted $G[X]$.
For a subset $Q \subseteq V(G)$, we write $G - Q = G[V(G) \setminus Q]$. We will write $(a,b)$ and $ab$ for the edges that link $a,b\in V(G)$ in directed and undirected graphs, respectively.
All trees in this work are rooted and edges are directed away from the root.
Given a tree $T$, a vertex $v \in V(T)$ is a \emph{leaf} if $v$ has out-degree
$0$, and an \emph{internal vertex} otherwise. We write $\L(T)$ to denote the set
of leaves of $T$. A \emph{star tree} is a tree with only one internal vertex that is
adjacent to the leaves.
We write $x \preceq_{T} y$ if $y$ lies on the unique path from
the root to $x$, in which case $x$ is called a descendant of $y$ and $y$ is called an ancestor of $x$. We may also write $y \succeq_{T} x$ instead of $x \preceq_{T} y$.
We use $x \prec_T y$ for $x \preceq_{T} y$ and $x \neq y$. In the latter case,
$y$ is a \emph{strict ancestor} of $x$.
If $x \preceq_{T} y$ or $y \preceq_{T} x$ the vertices
$x$ and $y$ are \emph{comparable} and, otherwise, \emph{incomparable}.
If $(x,y)$ is an edge in $T$, and thus, $y \prec_{T} x$, then $x$ is the \emph{parent}
of $y$ and $y$ the \emph{child} of $x$. We denote with $\mathrm{ch}(x)$ the set of all children of $x$.
For a subset $X \subseteq V(T)$, the \emph{lowest common ancestor} $\ensuremath{\operatorname{lca}}_{T}(X)$ is the unique $\preceq_T$-minimal vertex that is an
ancestor of all vertices in $X$ in $T$. For simplicity, we often write $\ensuremath{\operatorname{lca}}_T(x,y)$ instead of $\ensuremath{\operatorname{lca}}_T(\{x,y\})$.
A vertex is \emph{binary} if it has $2$ children, and $T$ is
\emph{binary} if all its internal vertices are binary. A \emph{cherry} is an
internal vertex whose children are all leaves (note that a cherry may have more
than two children). A tree $T$ is \emph{almost binary} if its only non-binary
vertices are cherries. For $v \in V(T)$, we write $T(v)$ to denote the subtree
of $T$ rooted at $v$ (i.e. the tree induced by $v$ and its descendants).
\begin{definition}[Extension]
Let $x$ be a vertex of a tree $T$ with $\mathrm{ch}(x) = \{x_1, \ldots, x_k\}$, $k\geq 3$ and suppose that $X' \subset \mathrm{ch}(x)$ is a strict subset of $\mathrm{ch}(x)$.
Then, the \emph{$(x, X')$ extension} modifies $T$ to the tree $T_{x,X'}$ as follows:
If $|X'| \leq 1$, then put $T_{x,X'}=T$.
Otherwise, remove the edges $(x, x')$ for each $x' \in X'$ from $T$
and add a new vertex $y$ together with the edges $(x,y)$ and $(y,x')$ for all $x'\in X'$ to obtain the tree $T_{x,X'}$.
\label{def:ext}
\end{definition}
Conversely, one can obtain $T$ from $T_{x,X'}$ by contracting
the edge $(x,y)$ with $y=\ensuremath{\operatorname{lca}}_{T_{x,X'}}(X')$. Given two trees $T$ and $T'$, we say that $T'$ is a \emph{refinement} of $T$ if
there exists a sequence of extensions that transforms $T$ into $T'$.
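As a concrete illustration of Definition \ref{def:ext} and of refinements, the following Python sketch performs an $(x,X')$ extension on a tree stored as child lists; the representation and all node names are our own illustrative choices and not part of the formal definition.

```python
def extend(children, x, Xp, new_node):
    """(x, X') extension: insert new_node between x and the children X'.

    `children` maps every vertex to the list of its children (leaves map
    to []).  Returns a new child-list dict; for |X'| <= 1 the tree is
    returned unchanged, exactly as in the definition.
    """
    assert set(Xp) < set(children[x]), "X' must be a strict subset of ch(x)"
    T = {v: list(cs) for v, cs in children.items()}
    if len(Xp) <= 1:
        return T
    # remove the edges (x, x') for x' in X' ...
    T[x] = [c for c in T[x] if c not in Xp] + [new_node]
    # ... and reattach X' below the new vertex y = new_node
    T[new_node] = list(Xp)
    return T

# resolving a non-binary vertex of a star tree yields a refinement
star = {'r': ['a', 'b', 'c'], 'a': [], 'b': [], 'c': []}
refined = extend(star, 'r', ['a', 'b'], 'y')
```

Contracting the inserted edge $(x,y)$ of `refined` recovers the star tree, mirroring the remark above.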
The \emph{restriction $T|_X$} of a tree $T$ to some subset $X \subseteq \L(T)$
is the minimal subtree of $T$ that connects the leaves of $X$,
from which all vertices with only one child have been suppressed, cf.\ \cite[Section 6.1]{semple2003phylogenetics}.
A \emph{rooted triplet}, or \emph{triplet} for short, is a binary tree with three leaves. We write $ab|c$ to denote the unique triplet on leaf set $\{a,b,c\}$ in which the root is $\ensuremath{\operatorname{lca}}(a,c) = \ensuremath{\operatorname{lca}}(b,c)$. We say that a tree $T$ \emph{displays} a triplet $ab|c$ if $a,b,c \in \L(T)$ and $\ensuremath{\operatorname{lca}}_T(a, b) \prec \ensuremath{\operatorname{lca}}_T(a,c) = \ensuremath{\operatorname{lca}}_T(b,c)$.
We write $rt(T)$ to denote the set of rooted triplets that $T$ displays.
Given a set of triplets $R$, we say that $T$ \emph{displays} $R$ if $R
\subseteq rt(T)$. A set of triplets $R$ is \emph{compatible}, if there is a tree that displays $R$.
We also say that $T$ \emph{agrees} with $R$ if, for every
$ab|c \in R$, $ac|b \notin rt(T)$ and $bc|a \notin rt(T)$.
Note, the term ``agree'' is more general than the terms ``display'' and ``compatible'', i.e.,
if $T$ displays $R$ (and thus, $R$ is compatible), then $T$ must agree with $R$. The converse, however, is not always true.
To see this, consider the star tree $T$, i.e., $rt(T) = \emptyset$, and let $R=\{ab|c,bc|a\}$.
It is easy to verify that $R$ is incompatible since
there cannot be any tree that displays both triplets in $R$. However, the set $R$ agrees with $T$.
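The difference between ``display'' and ``agree'' can be checked mechanically; the following Python sketch (our own parent-map encoding of rooted trees) reproduces the example above.

```python
def ancestors(parent, v):
    """Path from v up to the root (v itself first)."""
    path = [v]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def lca(parent, u, v):
    on_u = set(ancestors(parent, u))
    return next(a for a in ancestors(parent, v) if a in on_u)

def displays(parent, a, b, c):
    """True iff the tree displays ab|c, i.e. lca(a,b) lies strictly
    below lca(a,c) = lca(b,c)."""
    return lca(parent, a, c) == lca(parent, b, c) != lca(parent, a, b)

def agrees(parent, R):
    """True iff for no ab|c in R the tree displays ac|b or bc|a."""
    return all(not displays(parent, a, c, b) and not displays(parent, b, c, a)
               for (a, b, c) in R)

# caterpillar ((a,b),c) versus the star tree on {a, b, c}
caterpillar = {'a': 'u', 'b': 'u', 'u': 'r', 'c': 'r'}
star = {'a': 'r', 'b': 'r', 'c': 'r'}
```

In particular, the star tree displays no triplet at all, yet agrees with the incompatible set $\{ab|c,\, bc|a\}$ from the text.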
We will consider rooted trees
$T=(V,E)$ from which particular edges are removed. Let $\mathcal{E}_T\subseteq E$ and
consider the forest $\ensuremath{T_{\mathcal{\overline{E}}}}\coloneqq (V,E\setminus \mathcal{E}_T)$. We can preserve the
order $\preceq_T$ for all vertices within one connected component of $\ensuremath{T_{\mathcal{\overline{E}}}}$ and
define $\preceq_{\ensuremath{T_{\mathcal{\overline{E}}}}}$ as follows: $x\preceq_{\ensuremath{T_{\mathcal{\overline{E}}}}}y$ iff $x\preceq_{T}y$ and
$x,y$ are in the same connected component of $\ensuremath{T_{\mathcal{\overline{E}}}}$. Since each connected component
$T'$ of $\ensuremath{T_{\mathcal{\overline{E}}}}$ is a tree, the ordering $\preceq_{\ensuremath{T_{\mathcal{\overline{E}}}}}$ also implies a root
$\rho_{T'}$ for each $T'$, that is, $x\preceq_{\ensuremath{T_{\mathcal{\overline{E}}}}} \rho_{T'}$ for all $x\in
V(T')$. If $L(\ensuremath{T_{\mathcal{\overline{E}}}})$ is the leaf set of $\ensuremath{T_{\mathcal{\overline{E}}}}$, we define $L_{\ensuremath{T_{\mathcal{\overline{E}}}}}(x) = \{y\in
L(\ensuremath{T_{\mathcal{\overline{E}}}}) \mid y\preceq_{\ensuremath{T_{\mathcal{\overline{E}}}}} x\}$ as the set of leaves in $\ensuremath{T_{\mathcal{\overline{E}}}}$ that are reachable
from $x$. Hence, all $y\in L_{\ensuremath{T_{\mathcal{\overline{E}}}}}(x)$ must be contained in the same connected
component of $\ensuremath{T_{\mathcal{\overline{E}}}}$. We say that the forest $\ensuremath{T_{\mathcal{\overline{E}}}}$ displays a triplet $r$, if $r$
is displayed by one of its connected components. Moreover, $rt(\ensuremath{T_{\mathcal{\overline{E}}}})$ denotes
the set of all triplets that are displayed by the forest $\ensuremath{T_{\mathcal{\overline{E}}}}$.
We simplify the notation a bit and write $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u):=\sigma(L_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u))$.
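The leaf sets $L_{\ensuremath{T_{\mathcal{\overline{E}}}}}(x)$ of the forest obtained by deleting the transfer edges can be computed by a simple traversal; the following Python sketch uses our own toy encoding of $T$ and $\mathcal{E}_T$ (child lists and a set of edge pairs).

```python
def forest_leaf_sets(children, transfer_edges):
    """L(x) in the forest obtained from T by removing the transfer
    edges E_T: the leaves reachable from x without crossing a cut edge."""
    L = {}
    def collect(v):
        if v not in L:
            if not children[v]:                       # v is a leaf
                L[v] = {v}
            else:
                L[v] = set()
                for c in children[v]:
                    if (v, c) not in transfer_edges:  # edge kept in forest
                        L[v] |= collect(c)
        return L[v]
    for v in children:
        collect(v)
    return L

# toy gene tree: transfer vertex t1 sends a copy to leaf b
children = {'r': ['t1', 'c'], 't1': ['a', 'b'], 'a': [], 'b': [], 'c': []}
L = forest_leaf_sets(children, {('t1', 'b')})
```

Here the transferred copy `b` is cut off from `r` and `t1`, so it forms its own connected component of the forest.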
\subsubsection*{Gene and species trees}
Let $\Gamma$ and $\Sigma$ be a set of genes and a set of species, respectively. Moreover, we assume knowledge of the gene-species association, i.e., a surjective map $\sigma: \Gamma \rightarrow \Sigma$.
A \emph{species tree} is a tree $S$ such that $\L(S) \subseteq \Sigma$.
A \emph{gene tree} is a tree $T$ such that $\L(T) \subseteq \Gamma$.
Note that $\sigma(l)$ is defined for every leaf $l \in \L(T)$.
We extend $\sigma$ to internal vertices of $T$, and put
$\sigma_T(v) = \{\sigma(l) : l \in \L(T(v))\}$ for an internal vertex $v$ of $T$. We may drop the $T$ subscript whenever there is no risk of confusion.
We emphasize that species and gene trees need not be binary.
Given a gene tree $T$, we assume knowledge of a labeling function $t : V(T) \cup
E(T) \rightarrow \{\odot, \mathfrak{s}, \mathfrak{d}, \mathfrak{t}\} \cup \{0, 1\}$. We require
that $t(v) \in \{\odot, \mathfrak{s}, \mathfrak{d}, \mathfrak{t}\}$ for all $v \in V(T)$ and $t(e)
\in \{0,1\}$ for all $e \in E(T)$. Each symbol represents a different vertex
type: $\odot$ are leaves, $\mathfrak{s}$ are speciations, $\mathfrak{d}$ are duplications and
$\mathfrak{t}$ indicates vertices from which a horizontal gene transfer started. Edges labeled by $1$ represent horizontal transfers
and edges labeled by $0$ represent vertical descent.
Here, we always assume that only edges $(x, y)$ for which $t(x) = \mathfrak{t}$ may be
labeled as transfer edges, i.e., $t(x,y)=1$. We let $\mathcal{E}_T
= \{e \in E(T) \colon t(e) = 1\}$ be the set of transfer edges.
For technical reasons, we
also require that $t(u) = \odot$ if and only if $u \in \L(T)$.
We write $(T; t, \sigma)$ to denote a gene tree $T$ labeled by $t$ having gene-species mapping $\sigma$.
In what follows we will only consider labeled gene trees $(T;t,\sigma)$ that satisfy the following three axioms:
\begin{description}
\item[(O1)] Every internal vertex $v$ has outdegree at least $2$.
\item[(O2)] Every transfer vertex $x$ has at least one transfer edge $e=(x,v)$ labeled $t(e)=1$, and at least one non-transfer edge $f=(x,w)$ labeled $t(f)=0$;
\item[(O3)] \emph{\textbf{(a)}}
If $x\in V$ is a speciation vertex with children $v_1,\dots,v_k$, $k\geq 2$,
then $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v_i) \cap \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v_j) =\emptyset$, $1\leq i<j\leq k$;
\emph{\textbf{(b)}}
If $(x,y) \in \mathcal{E}_T$, then
$\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(x)\cap \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(y) = \emptyset$.
\end{description}
These conditions are also called ``observability-axioms'' and are
fully discussed in \cite{hellmuth2017biologically,nojgaard2018time}.
We briefly repeat the arguments that justify Conditions (O1)-(O3).
Usually the considered labeled gene trees are obtained from genomic sequence data.
Condition (O1) ensures that every inner vertex leaves a historical
trace in the sense that at least two of its children
have survived. If this were not the case, we would have no evidence that this vertex ever existed.
Condition (O2) ensures that for an HGT event a historical trace remains of both the transferred and the non-transferred copy.
Furthermore, there is no clear evidence for a speciation vertex $v$
if it does not ``separate'' lineages, which is ensured by Condition (O3.a).
Finally (O3.b) is a simple consequence of the fact that if a transfer edge $(x,y)$
in the gene tree occurred, then the species $X$ and $Y$ that contain $x$ and $y$, respectively,
cannot be ancestors of each other, as otherwise, the species $X$ and $Y$ would not coexist
(cf.\ \cite[Prop.\ 1]{nojgaard2018time}).
We emphasize that Lemma 1 in \cite{nojgaard2018time} states that the leaf sets
$L_1,\dots,L_k$ of the connected components $T_1,\dots,T_k$ of $\ensuremath{T_{\mathcal{\overline{E}}}}$
form a partition of $L(T)$, which directly implies that
$\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(x) \neq \emptyset$ for all $x\in V(T)$.
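As a sanity check, the axioms (O1)--(O3) can be verified mechanically on a given labeled gene tree. The sketch below is one possible validator under our own encoding (child lists, string event labels, and a set of transfer edges); it is an illustration, not part of the formal framework.

```python
def check_observability(children, t, sigma, transfer_edges):
    """Verify (O1)-(O3) for a labeled gene tree (T; t, sigma).
    children: child lists; t: vertex labels; sigma: species of each leaf;
    transfer_edges: the set E_T of edges e with t(e) = 1."""
    memo = {}
    def species(v):                      # sigma_{T_E}(v) in the forest
        if v not in memo:
            memo[v] = ({sigma[v]} if not children[v] else
                       set().union(*(species(c) for c in children[v]
                                     if (v, c) not in transfer_edges)))
        return memo[v]
    for v, cs in children.items():
        if cs and len(cs) < 2:
            return False                                   # (O1)
        if t[v] == 'transfer':
            kinds = {(v, c) in transfer_edges for c in cs}
            if kinds != {True, False}:
                return False                               # (O2)
        if t[v] == 'speciation':
            sets = [species(c) for c in cs]
            if any(sets[i] & sets[j] for i in range(len(sets))
                   for j in range(i + 1, len(sets))):
                return False                               # (O3.a)
    return all(not (species(x) & species(y))
               for (x, y) in transfer_edges)               # (O3.b)

# the toy tree from above: transfer vertex t1 with transfer edge (t1, b)
children = {'r': ['t1', 'c'], 't1': ['a', 'b'], 'a': [], 'b': [], 'c': []}
t = {'r': 'speciation', 't1': 'transfer', 'a': 'leaf', 'b': 'leaf', 'c': 'leaf'}
ok = check_observability(children, t, {'a': 'A', 'b': 'B', 'c': 'C'}, {('t1', 'b')})
# violating (O3.b): donor and recipient species coincide
bad = check_observability(children, t, {'a': 'A', 'b': 'A', 'c': 'C'}, {('t1', 'b')})
```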
\subsubsection*{Reconciliation maps and speciation triplets}
A \emph{reconciliation map from $(T;t,\sigma)$ to $S$}
is a map $\mu : V(T) \rightarrow V(S)\cup E(S)$ that
satisfies the following constraints for all $x\in V(T)$:
\begin{description}
\item[(M1)] \emph{Leaf Constraint.} If $x\in \Gamma$, then $\mu(x)=\sigma(x)$. \vspace{0.03in}
\item[(M2)] \emph{Event Constraint.}
\begin{itemize}
\item[(i)] If $t(x)=\mathfrak{s}$, then
$\mu(x) = \ensuremath{\operatorname{lca}}_S(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(x))$.
\item[(ii)] If $t(x) \in \{\mathfrak{d}, \mathfrak{t}\}$, then $\mu(x)\in E(S)$.
\item[(iii)] If $t(x)=\mathfrak{t}$ and $(x,y)\in \mathcal{E}_T$,
then $\mu(x)$ and $\mu(y)$ are incomparable in $S$.
\item[(iv)] If $t(x)=\mathfrak{s}$, then $\mu(v)$ and $\mu(u)$ are incomparable in $S$ for all distinct $u,v\in\mathrm{ch}(x)$.
\end{itemize} \vspace{0.03in}
\item[(M3)] \emph{Ancestor Constraint.} \\
Let $x,y\in V$ with $x\prec_{\ensuremath{T_{\mathcal{\overline{E}}}}} y$.
Note, the latter implies that the path connecting $x$ and $y$ in $T$
does not contain transfer edges.
We distinguish two cases:
\begin{itemize}
\item[(i)] If $t(x),t(y)\in \{\mathfrak{d}, \mathfrak{t}\}$, then $\mu(x)\preceq_S \mu(y)$,
\item[(ii)] otherwise, i.e., at least one of $t(x)$ and $t(y)$ is a speciation $\mathfrak{s}$, $\mu(x)\prec_S\mu(y)$.
\end{itemize}
\end{description}
We call $\mu$ the \emph{reconciliation map} from $(T;t,\sigma)$ to $S$.
The provided definition of a reconciliation map coincides with the one given
in \cite{hellmuth2017biologically,nojgaard2018time,NGD+17} and is a
natural generalization of the maps in \cite{HHH+12,Doyon:09}
for the case that no HGT took place. In case that the event-labeling of $T$ is unknown, but a
species tree $S$ is given, the authors in \cite{THL:11,BAK:12} gave an axiom set,
called DTL-scenario, to reconcile $T$ with $S$. This
reconciliation is then used to infer the event-labeling $t$ of $T$.
Our axiom set for the reconciliation map is more general but, nevertheless,
equivalent to DTL-scenarios in case the considered gene trees are binary \cite{nojgaard2018time,NGD+17}.
The question arises when for a given gene tree $(T;t,\sigma)$ a species tree $S$ together with a reconciliation map $\mu$ from $(T;t,\sigma)$
to $S$ exists. An answer to this question is provided by
\begin{definition}
Let $(T;t,\sigma)$ be a labeled gene tree.
The set $\ensuremath{\mathcal{R}} (T;t,\sigma)$ is the set
of triplets $\sigma(a)\sigma(b)|\sigma(c)$ where $\sigma(a),\sigma(b),\sigma(c)$ are pairwise distinct
and either
\begin{enumerate}
\item $ab|c$ is a triplet displayed by $\ensuremath{T_{\mathcal{\overline{E}}}}$ and
$t(\ensuremath{\operatorname{lca}}_{\ensuremath{T_{\mathcal{\overline{E}}}}} (a, b, c)) = \mathfrak{s}$ or
\item $a, b \in L(\ensuremath{T_{\mathcal{\overline{E}}}} (x))$ and $c \in L(\ensuremath{T_{\mathcal{\overline{E}}}} (y))$ for some transfer edge $(x,y)$ or $(y,x)$ in $\mathcal{E}_T$
\end{enumerate}
\label{def:informativeTriplets}
\end{definition}
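Definition \ref{def:informativeTriplets} translates directly into code. The sketch below (our own encoding; a species triplet $AB|C$ is stored as `(frozenset({A, B}), C)`) enumerates $\ensuremath{\mathcal{R}}(T;t,\sigma)$ for a small hypothetical gene tree.

```python
from itertools import combinations

def informative_triplets(children, t, sigma, transfer_edges):
    """Enumerate R(T; t, sigma): case 1 collects triplets whose forest-lca
    is a speciation vertex, case 2 those induced by a transfer edge."""
    L = {}
    def leaves(v):                       # L_{T_E}(v): forest leaf sets
        if v not in L:
            L[v] = ({v} if not children[v] else
                    set().union(*(leaves(c) for c in children[v]
                                  if (v, c) not in transfer_edges)))
        return L[v]
    for v in children:
        leaves(v)
    R = set()
    def add(a, b, c):
        A, B, C = sigma[a], sigma[b], sigma[c]
        if len({A, B, C}) == 3:          # species must be pairwise distinct
            R.add((frozenset({A, B}), C))
    for v in children:                   # case 1: speciation lca
        if t[v] == 'speciation':
            for ci in children[v]:
                for a, b in combinations(L[ci], 2):
                    for cj in children[v]:
                        if cj != ci:
                            for c in L[cj]:
                                add(a, b, c)
    for x, y in transfer_edges:          # case 2: both directions
        for u, w in ((x, y), (y, x)):
            for a, b in combinations(L[u], 2):
                for c in L[w]:
                    add(a, b, c)
    return R

# gene tree: speciation root r, transfer vertex tv with transfer edge (tv, d)
children = {'r': ['tv', 'c'], 'tv': ['s1', 'd'], 's1': ['a', 'b'],
            'a': [], 'b': [], 'c': [], 'd': []}
t = {'r': 'speciation', 'tv': 'transfer', 's1': 'speciation',
     'a': 'leaf', 'b': 'leaf', 'c': 'leaf', 'd': 'leaf'}
sigma = {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D'}
R = informative_triplets(children, t, sigma, {('tv', 'd')})
```

For this toy input, case 1 contributes $AB|C$ (the forest-lca `r` is a speciation) and case 2 contributes $AB|D$ via the transfer edge.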
\begin{theorem}[\cite{hellmuth2017biologically}]
Let $(T;t,\sigma)$ be a labeled gene tree.
Then, there is a species tree $S$ together with a reconciliation map $\mu$ from $(T;t,\sigma)$ to $S$ if and only if $\ensuremath{\mathcal{R}} (T;t,\sigma)$
is compatible. In this case, every species tree $S$ that displays
$\ensuremath{\mathcal{R}} (T;t,\sigma)$ can be reconciled with $(T;t,\sigma)$.
Moreover, there is an algorithm that returns a species tree $S$ for
$(T;t, \sigma)$ together with a reconciliation map $\mu$ in
polynomial time, if one exists, and otherwise returns that
there is no species tree for $(T;t, \sigma)$.
\label{thm:SpeciesTriplets}
\end{theorem}
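Checking compatibility of a triplet set, as required by Theorem \ref{thm:SpeciesTriplets}, can be done with the classical \textsc{Build} algorithm of Aho et al.\ (1981). The following is a compact Python sketch under our own triplet encoding, $(a,b,c)$ for $ab|c$; the input sets below use the species triplets $\{AB|D, AC|D\}$ of Figure \ref{fig:least} and the incompatible pair from the discussion of ``agreement''.

```python
def build(leaves, R):
    """Aho et al.'s BUILD: return a tree over `leaves` (as nested tuples)
    displaying every triplet in R, or None if R is incompatible.
    Triplets are encoded as (a, b, c), meaning ab|c."""
    if len(leaves) == 1:
        return next(iter(leaves))
    # Aho graph: join a and b for every ab|c with all three leaves present
    comp = {l: l for l in leaves}
    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x
    for a, b, c in R:
        if {a, b, c} <= leaves:
            comp[find(a)] = find(b)
    parts = {}
    for l in leaves:
        parts.setdefault(find(l), set()).add(l)
    if len(parts) == 1:
        return None          # a single component: R is incompatible
    subtrees = []
    for part in parts.values():
        sub = build(part, R)
        if sub is None:
            return None
        subtrees.append(sub)
    return tuple(subtrees)

# {AB|D, AC|D} is compatible, {ab|c, bc|a} is not
tree = build({'A', 'B', 'C', 'D'}, {('A', 'B', 'D'), ('A', 'C', 'D')})
clash = build({'a', 'b', 'c'}, {('a', 'b', 'c'), ('b', 'c', 'a')})
```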
It has been shown in \cite{hellmuth2017biologically}, that if there is any reconciliation map
from $(T;t,\sigma)$ to $S$, then there is always
a reconciliation map $\mu$ that additionally satisfies for all $u\in V(T)$ with $t(u)\in \{\mathfrak{d},\mathfrak{t}\}$:
\[\mu(u) = (v,\ensuremath{\operatorname{lca}}_S(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)))\in E(S)\] where $v$ denotes the unique
parent of $\ensuremath{\operatorname{lca}}_S(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u))$ in $S$.
\begin{definition}
We define a simplified map $\hat{\mu}_{T,S}\colon V(T) \to V(S)$ that associates a vertex $v \in V(T)$ to the lowest common ancestor of $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$, i.e.,
\[ \hat{\mu}_{T, S}(v) \coloneqq \ensuremath{\operatorname{lca}}_{S}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v))
\]
We call $\hat{\mu}_{T, S}$ an \emph{LCA-map}.
\label{def:hmu}
\end{definition}
\begin{remark}
Note that if $v$ is a leaf of $T$, we have $\hat{\mu}_{T, S}(v) = \sigma(v)$.
Moreover, the LCA-map $\hat{\mu}_{T, S}$ always exists and is uniquely defined, although there might be no
reconciliation map from $(T;t,\sigma)$ to $S$.
\label{rem:hmu}
\end{remark}
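Since $\hat{\mu}_{T,S}$ depends only on $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$ and the species tree, it can be computed by intersecting root paths in $S$; a small sketch with our own parent-map encoding of $S$:

```python
def lca_of_set(parent_S, species):
    """lca_S of a non-empty set of species; parent_S maps each non-root
    vertex of S to its parent."""
    def root_path(x):
        path = [x]
        while path[-1] in parent_S:
            path.append(parent_S[path[-1]])
        return path
    paths = [root_path(s) for s in species]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # the lca is the first common vertex on any root path
    return next(a for a in paths[0] if a in common)

# species tree ((A,B),C) with root rho and inner vertex u
parent_S = {'A': 'u', 'B': 'u', 'u': 'rho', 'C': 'rho'}
```

For instance, the $\sigma$-set $\{A,B\}$ is sent to the cherry `u`, while $\{A,C\}$ is sent to the root `rho`, and a singleton set is sent to the leaf itself, as noted in Remark \ref{rem:hmu}.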
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=.8\textwidth]{./timeCons-LeastRes.pdf}
\end{center}
\caption{Taken from \cite[Fig.\ 4]{hellmuth2017biologically}.
From the binary gene tree $(T;t,\sigma)$ (right)
we obtain the species triplets $\ensuremath{\mathcal{R}}(T;t,\sigma) = \{AB|D,AC|D\}$.
Note, vertices $v$ of $T$ with $t(v)=\mathfrak{s}$ and $t(v)=\mathfrak{t}$
are highlighted by ``$\bullet$'' and ``$\triangle$'', respectively.
Transfer edges are marked with an ``arrow''.
Shown are two (tube-like) species trees (left and middle) that
display $\ensuremath{\mathcal{R}}(T;t,\sigma)$. Thus, Theorem \ref{thm:SpeciesTriplets} implies that
for both trees a reconciliation map from $(T;t,\sigma)$ exists.
The respective reconciliation maps
for $(T;t,\sigma)$ and the species tree are given implicitly by
drawing $(T;t,\sigma)$ within the species tree.
The left species tree $S$ is least resolved for $\ensuremath{\mathcal{R}}(T;t,\sigma)$.
The reconciliation map from $(T;t,\sigma)$ to $S$ is unique
but not time-consistent. Thus, no time-consistent reconciliation map
between $T$ and $S$ exists at all.
On the other hand, for $T$ and the middle species tree (that is a refinement of $S$)
there is a time-consistent reconciliation map.
}
\label{fig:least}
\end{figure}
We may write $\hat{\mu}, \hat{\mu}_{T}$ or $\hat{\mu}_{S}$ if $T$ and/or $S$ are clear from the context.
Note, however, that compatibility of $\ensuremath{\mathcal{R}} (T; t,\sigma )$ only provides a necessary condition for
the existence of a \emph{biologically feasible} reconciliation, i.e., a map that is additionally time-consistent.
To be more precise:
\begin{definition}[Time Map]\label{def:time-map}
The map $\tau_T\colon V(T) \to \mathbb{R}$ is a time map for the
rooted tree $T$ if $x\prec_T y$ implies $\tau_T(x)>\tau_T(y)$ for all
$x,y\in V(T)$.
\end{definition}
\begin{definition}[Time-Consistent] \label{def:tc-mu} A reconciliation map $\mu$ from
$(T;t,\sigma)$ to $S$ is \emph{time-consistent} if there are time maps
$\tau_T$ for $T$ and $\tau_S$ for $S$ satisfying the
following conditions for all $u\in V(T)$:
\begin{description}
\item[(B1)] If $t(u) \in \{\mathfrak{s}, \odot \}$, then
$\tau_T(u) = \tau_S(\mu(u))$. \label{bio1}
\item[(B2)] If $t(u)\in \{\mathfrak{d},\mathfrak{t} \}$ and, thus
$\mu(u)=(x,y)\in E(S)$, \label{bio2} then
$\tau_S(y)>\tau_T(u)>\tau_S(x)$.
\end{description}
If a time-consistent reconciliation map from $(T;t,\sigma)$ to $S$ exists,
we also say that $S$ is a \emph{time-consistent species tree for} $(T;t,\sigma)$.
\end{definition}
Figure \ref{fig:least} gives an example of two different species trees that
both display $\ensuremath{\mathcal{R}}(T;t,\sigma)$, of which only one admits a time-consistent
reconciliation map. Further examples can be found in
\cite{hellmuth2017biologically,nojgaard2018time}.
To determine whether a time-consistent map for a given gene and species tree
exists we will use an auxiliary graph as defined in \cite{nojgaard2018time}.
We will investigate the structure of this graph in the remaining part of this section.
\subsubsection*{Auxiliary graph construction}
Let $(T;t,\sigma)$ be a labeled gene tree and $S$ be a species tree.
Let $\ats{T}{S}$ be the graph with vertex set $V(\ats{T}{S}) = V(T) \cup V(S)$,
and edge set $E(\ats{T}{S})$ constructed from four sets as follows:
\begin{description}
\item[(A1)]
for each $(u, v) \in E(T)$, we have $(u', v') \in E(\ats{T}{S})$, where
\[
u' = \begin{cases}
\hat{\mu}(u) &\mbox{if $t(u) \in \{\odot, \mathfrak{s}\}$} \\
u &\mbox{otherwise}
\end{cases}
\quad \quad \quad
v' = \begin{cases}
\hat{\mu}(v) &\mbox{if $t(v) \in \{\odot, \mathfrak{s}\}$} \\
v &\mbox{otherwise}
\end{cases}
\]
\item[(A2)]
for each $(x, y) \in E(S)$, we have $(x, y) \in E(\ats{T}{S})$
\item[(A3)]
for each $u$ with $t(u) \in \{\mathfrak{d}, \mathfrak{t}\}$,
we have $(u, \hat{\mu}(u)) \in E(\ats{T}{S})$
\item[(A4)]
for each $(u, v) \in \mathcal{E}_{T}$, we have
$(lca_S(\hat{\mu}(u), \hat{\mu}(v)), u) \in E(\ats{T}{S})$
\end{description}
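Under our own toy encoding (child lists for $T$, a parent map for $S$, string event labels), the construction (A1)--(A4) and the acyclicity test of \cite{nojgaard2018time} can be sketched as follows; this is an illustration, not an efficient implementation.

```python
def aux_graph_acyclic(children_T, t, sigma, transfer, parent_S):
    """Build A(T, S) via (A1)-(A4) and test it for acyclicity; by the
    result of Nojgaard et al., a time-consistent reconciliation map
    exists iff this graph is acyclic."""
    memo = {}
    def species(v):                          # sigma_{T_E}(v)
        if v not in memo:
            memo[v] = ({sigma[v]} if not children_T[v] else
                       set().union(*(species(c) for c in children_T[v]
                                     if (v, c) not in transfer)))
        return memo[v]
    def anc(x):                              # path from x up to the root of S
        path = [x]
        while path[-1] in parent_S:
            path.append(parent_S[path[-1]])
        return path
    def lca(xs):
        common = set.intersection(*(set(anc(x)) for x in xs))
        return next(a for a in anc(next(iter(xs))) if a in common)
    mu = {v: lca(species(v)) for v in children_T}        # the LCA-map
    img = lambda v: mu[v] if t[v] in ('leaf', 'speciation') else v
    edges = set()
    for u in children_T:
        for v in children_T[u]:
            edges.add((img(u), img(v)))                  # (A1)
    for x, p in parent_S.items():
        edges.add((p, x))                                # (A2)
    for u in children_T:
        if t[u] in ('duplication', 'transfer'):
            edges.add((u, mu[u]))                        # (A3)
    for u, v in transfer:
        edges.add((lca({mu[u], mu[v]}), u))              # (A4)
    # Kahn's peeling: the graph is acyclic iff every vertex gets removed
    # (a self-loop keeps its vertex's in-degree positive forever)
    nodes = {n for e in edges for n in e}
    indeg = {n: sum(w == n for _, w in edges) for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop(); seen += 1
        for a, b in edges:
            if a == n:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

# a single speciation reconciled with the matching species tree is fine
children_T = {'g': ['x', 'y'], 'x': [], 'y': []}
t = {'g': 'speciation', 'x': 'leaf', 'y': 'leaf'}
ok = aux_graph_acyclic(children_T, t, {'x': 'A', 'y': 'B'}, set(),
                       {'A': 'rho', 'B': 'rho'})
```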
We are aware of the fact that the graph $\ats{T}{S}$ heavily depends on
the event-labeling $t$ and the species assignment $\sigma$ of the gene tree $(T;t,\sigma)$.
However, to keep the notation simple we will write, by slight abuse of notation, $\ats{T}{S}$
instead of the more correct notation $\ats{(T;t,\sigma)}{S}$.
The $\ats{T}{S}$ graph has four types of edges, and we shall refer to them as the A1-, A2-, A3- and A4-edges, respectively.
We note for later reference that if $(x, y)$ is an A1-edge such that $x, y \in V(S)$, then we must have $y \preceq_S x$ which follows from the definition of $\hat{\mu}_{T,S}$
and the fact that $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(y) \subseteq \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(x)$.
We emphasize, that the definition of $\ats{T}{S}$ slightly differs from the
one provided in \cite{nojgaard2018time}. While Properties (A2), (A3) and (A4)
are identical, (A1) was defined in terms of a reconciliation map $\mu$
from $(T;t,\sigma)$ to $S$ in \cite{nojgaard2018time}.
To be more precise, in \cite{nojgaard2018time} it is stated that $u' = \mu(u)$
and $v' = \mu(v)$ for speciation vertices or leaves $u$ and $v$
instead of $u' = \hat{\mu}(u)$ and $v' = \hat{\mu}(v)$, respectively.
However, Condition (M1) and (M2.i) imply that $\mu(u)=\hat{\mu}(u)$
and $\mu(v)=\hat{\mu}(v)$ provided $\mu$ exists.
In other words, the definition of $\ats{T}{S}$ here and in \cite{nojgaard2018time}
are identical, in case a reconciliation map $\mu$ exists.
Since we do not want to restrict ourselves to the existence of a reconciliation
map (a necessary condition is provided by Theorem \ref{thm:SpeciesTriplets})
we generalized the definition of $\ats{T}{S}$ in terms of $\hat{\mu}$ instead.
For later reference, we summarize the latter observations in the following remark.
\begin{remark}
The graph $\ats{T}{S}$ does not explicitly depend on a reconciliation map.
That is, even if there is no reconciliation map at all, $\ats{T}{S}$ is always
well-defined.
\label{rem:ats-well-defined}
\end{remark}
The next lemma deals with possible self-loops in $\ats{T}{S}$.
\begin{lemma}
Let $(T;t,\sigma)$ be an event-labeled gene tree, $S$ be a species tree and $S^*$ be a refinement of $S$.
Moreover, let $l$ be a leaf of $S$ (and thus, of $S^*$).
Then $(l, l)$ is an edge of $\ats{T}{S}$ if and only if $(l, l)$ is an edge of $\ats{T}{S^*}$.
Furthermore, if there is a reconciliation map from $(T;t,\sigma)$ to $S$
then, the graph $\ats{T}{S}$ will never contain self-loops and
every edge $(u',v')$ in $\ats{T}{S}$ with $u',v'\in V(S)$
is either an A1- or A2-edge and satisfies $v'\prec_S u'$.
\label{lem:self-loops}
\end{lemma}
\begin{proof}
Let $(T;t,\sigma)$ be an event-labeled gene tree, $S$ be a species tree and $S^*$ be a refinement of $S$.
Note that if $(l, l)$ is a self-loop of $\ats{T}{S}$ (respectively $\ats{T}{S^*}$), then $(l,l)$ must be an A1-edge, and so there is $(u, v) \in E(T)$ such that $\hat{\mu}_S(u) = \hat{\mu}_S(v) = l$ (respectively $\hat{\mu}_{S^*}(u) = \hat{\mu}_{S^*}(v) = l$). Since $l$ is a leaf, $\hat{\mu}_S(u) = \hat{\mu}_S(v) = l$ if and only if $\hat{\mu}_{S^*}(u) = \hat{\mu}_{S^*}(v) = l$, and the statement follows.
For the second statement, assume that there is a reconciliation map from $(T;t,\sigma)$ to $S$.
To see that $\ats{T}{S}$ does not contain self-loops, observe once again that self-loops can only be provided by A1-edges. So
assume, for contradiction, that there is an edge $(u, v) \in E(T)$ such that
$t(u),t(v)\in \{\odot, \mathfrak{s}\}$ and $\hat{\mu}(u)=\hat{\mu}(v)$. Since $t(u),t(v)\in
\{\odot, \mathfrak{s}\}$, Properties (M1) and (M2.i) imply that $\hat{\mu}(u)=\mu(u)$ and
$\hat{\mu}(v)=\mu(v)$ for every reconciliation map $\mu$ from $(T;t,\sigma)$ to $S$.
Since $v\prec_T u$, Condition (M3.ii) implies that $\hat{\mu}(v) =
\mu(v)\prec_S\mu(u)=\hat{\mu}(u)$; a contradiction.
Now let $(u',v')$ be an edge in $\ats{T}{S}$ with $u',v'\in V(S)$.
Since all A3- or A4-edges involve vertices of $T$, we can conclude that
$(u',v')$ must either be an A1-edge or an A2-edge.
Clearly, if $(u',v')$ is an A2-edge, we trivially have $v'\prec_S u'$.
Assume that $(u',v')$ is an A1-edge.
Hence there is an edge $(u,v)\in E(T)$ such that
$u' = \hat{\mu}(u)$ and $v' = \hat{\mu}(v)$. This implies that
$t(u),t(v)\in \{\mathfrak{s},\odot\}$. Conditions (M1) and (M2.i) imply
$\hat{\mu}(u) = \mu(u)$ and $\hat{\mu}(v) = \mu(v)$. Moreover,
since $(u,v)\in E(T)$, we have $t(u)=\mathfrak{s}$.
Now, we can apply Condition (M3.ii) to conclude that
$v' = \hat{\mu}(v)=\mu(v) \prec_S \mu(u) = \hat{\mu}(u) = u'$.
\end{proof}
The graph $\ats{T}{S}$ will be utilized to characterize gene-species tree pairs that admit a time-consistent reconciliation map.
For a given gene tree $(T;t,\sigma)$ and a given species tree $S$, the existence of a time-consistent reconciliation map
can easily be verified using the following result.
\begin{theorem}[\cite{hellmuth2017biologically,nojgaard2018time}]
Let $(T;t,\sigma)$ be a labeled gene tree and $S$ be a species tree.
Then $T$ admits a time-consistent reconciliation map with $S$
if and only if $S$ displays every triplet of $\ensuremath{\mathcal{R}} (T; t,\sigma )$ and $\ats{T}{S}$ is acyclic.
The time-consistent reconciliation map can then be constructed in $O(|V(T)|\log(|V(S)|))$ time.
\label{thm:timeCons-S}
\end{theorem}
\section{Gene Tree Consistency}
\label{sec:gtc}
The main question of interest in this work is to determine whether a species tree $S$ exists at all for a given labeled gene tree $(T;t,\sigma)$.
Here, we solve a slightly more general problem: the one of refining a given almost binary species tree $S$ so that $T$ can be reconciled with it.
\vspace{3mm}
\noindent
The \textsc{Gene Tree Consistency} (GTC) problem:
\noindent
\textbf{Given:} A labeled gene tree $(T;t,\sigma)$ and an almost binary species tree $S$.
\noindent
\textbf{Question:} Does there exist a refinement $S^*$ of $S$ that displays $\ensuremath{\mathcal{R}} (T; t,\sigma)$
and such that $\ats{T}{S^*}$ is acyclic?
\vspace{3mm}
It is easy to see that the problem of determining the existence of a species
tree $S$ that displays $\ensuremath{\mathcal{R}} (T; t,\sigma)$
and such that $\ats{T}{S}$ is acyclic is a special case of this problem. Indeed, it suffices
to provide a star tree $S$ as input to the GTC problem, since every species tree
is a refinement of $S$.
\begin{definition}
A species tree $S^*$ \emph{is a solution} to a given GTC instance $((T;t,\sigma), S)$ if $S^*$ displays $\ensuremath{\mathcal{R}} (T; t,\sigma )$
and $\ats{T}{S^*}$ is acyclic.
\end{definition}
We first show that, as a particular case of the following lemma, one can restrict the search to binary species trees (even if $T$ is non-binary).
\begin{lemma}\label{lem:binarize}
Let $((T;t,\sigma), S)$ be a GTC instance and assume that a species tree $\hat{S}$ is a
solution to this instance. Then any refinement $S^*$ of $\hat{S}$ is also a solution
to $((T;t,\sigma), S)$.
\end{lemma}
\begin{proof}
We may assume that $\hat{S}$ is non-binary as otherwise we are done. Let $S^*$ be
any refinement of $\hat{S}$. First observe that we have $\ensuremath{\mathcal{R}} (T;t,\sigma)\subseteq rt(\hat{S}) \subseteq rt(S^*)$,
and thus $S^*$ displays $\ensuremath{\mathcal{R}} (T;t,\sigma)$.
It remains to show that
$\ats{T}{S^*}$ is
acyclic. We first prove that any single $(x,X')$ extension applied to $\hat{S}$ preserves
acyclicity. Let $S'\coloneqq \hat{S}_{x,X'}$ be any tree obtained from $\hat{S}$ after applying some $(x,X')$ extension.
As specified in Definition \ref{def:ext}, if $|X'| \leq 1$, then $S'=\hat{S}$. In this case, $\ats{T}{S'} = \ats{T}{\hat{S}}$ is acyclic and we are done.
Hence, suppose that $|X'|>1$. Thus, a new vertex $y$ was created,
added as a child of $x$ and became the new parent of $X' \subset X$.
We claim that $\ats{T}{S'}$ is acyclic. For
the remainder, we will write $\hat{\mu}_{\hat{S}}$ and $\hat{\mu}_{S'}$ instead of
$\hat{\mu}_{T,\hat{S}}$ and $\hat{\mu}_{T, S'}$ since $T$ will remain fixed. We will make use
of the following properties.
\begin{enumerate}
\item[(P1)] For every subset $Z \subseteq \L(S)$, it holds that
$\ensuremath{\operatorname{lca}}_{\hat{S}}(Z) = \begin{cases}
\ensuremath{\operatorname{lca}}_{S'}(Z) &\mbox{if $\ensuremath{\operatorname{lca}}_{S'}(Z)\neq y$} \\
x &\mbox{otherwise}
\end{cases}$
\item[(P2)] For every $u\in V(T)$, it holds that
$\hat{\mu}_{\hat{S}}(u) = \begin{cases}
\hat{\mu}_{S'}(u) &\mbox{if $\hat{\mu}_{S'}(u) \neq y$ \textit{(Case P2.a)}} \\
x &\mbox{otherwise \textit{(Case P2.b)}}
\end{cases}$
\end{enumerate}
Property (P1) follows from the fact that $\L(S'(v)) = \L(\hat{S}(v))$ for any $v \in
V(S') \setminus \{y\}$ and $\L(S'(y)) \subset \L(\hat{S}(x))$. Therefore if
$\ensuremath{\operatorname{lca}}_{S'}(Z) = z\neq y $, then $z$ is also a common ancestor of $Z$ in
$\hat{S}$ and there cannot be a lower common ancestor below $z$. If $z=y$,
then $x$ is a common ancestor of $Z$ in $\hat{S}$ and there cannot be a lower common
ancestor below $x$. Property (P2) is a direct consequence of (P1) and the
definition of $\hat{\mu}_{S'}$ and $\hat{\mu}_{\hat{S}}$.
Now, suppose for contradiction that $\ats{T}{S'}$ contains a cycle $C = (w_1,
\ldots, w_k, w_1)$.
Note that $\ensuremath{\mathcal{R}} (T;t,\sigma)\subseteq rt(\hat{S}) \subseteq rt(S')$.
Thus, Theorem \ref{thm:SpeciesTriplets}
implies that there is a reconciliation map from
$(T;t,\sigma)$ to $S'$.
By Lemma \ref{lem:self-loops}, $\ats{T}{S'}$ does not
contain self-loops and thus $k>1$ for $C = (w_1,\ldots, w_k, w_1)$.
Consider the sequence $\tilde{C} = (\tilde{w}_1, \ldots,
\tilde{w}_k, \tilde{w}_1)$ of vertices of $\ats{T}{\hat{S}}$ where we take $C$, but
replace $y$ by $x$ if it occurs. That is, we define, for each $1 \leq i \leq k$:
$$
\tilde{w}_i = \begin{cases}
w_i &\mbox{ if $w_i \neq y$ }\\
x &\mbox{ if $w_i = y$ }
\end{cases}
$$
We claim that every element in $\{(\tilde{w}_1, \tilde{w}_2), \ldots, (\tilde{w}_{k-1},
\tilde{w}_k),(\tilde{w}_k, \tilde{w}_1)\} \setminus \{(x, x)\}$ is an edge in
$\ats{T}{\hat{S}}$ (the pair $(x, x)$ can occur in $\tilde{C}$ if $(x, y)$ is in
$C$, but we may ignore it).
This will imply the existence of a cycle in
$\ats{T}{\hat{S}}$, yielding a contradiction.
We show that $(\tilde{w}_1, \tilde{w}_2) \in E(\ats{T}{\hat{S}})$, assuming that
$(\tilde{w}_1, \tilde{w}_2) \neq (x, x)$. This is sufficient to prove our claim,
since we can choose $w_1$ as any vertex of $C$ and relabel the other vertices
accordingly.
\begin{description}
\item[\textnormal{\em Case: $(w_1, w_2)$ is an A1-edge.} ] \ \\
Since $(w_1, w_2)$ is an A1-edge, it is defined by some edge $(u, v) \in E(T)$ and
must coincide with one of the edges in $\mathcal A = \{(u, v), (u, \hat{\mu}_{S'}(v)), (\hat{\mu}_{S'}(u), v),
(\hat{\mu}_{S'}(u), \hat{\mu}_{S'}(v))\}$.
Suppose that $w_1,w_2\neq y$.
Then, by construction of $\tilde w_1$ and $\tilde w_2$, we have
$\tilde w_1=w_1$ and $\tilde w_2=w_2$.
Hence, $(\tilde{w}_1, \tilde{w}_2) = (w_1, w_2)$ is an edge in $\mathcal A$.
By (P2), $\hat{\mu}_{\hat S}(u)=\hat{\mu}_{S'}(u)$ and $\hat{\mu}_{\hat S}(v)=\hat{\mu}_{S'}(v)$.
Hence, $(\tilde{w}_1, \tilde{w}_2)$ is of one of the forms
$(u, v), (u, \hat{\mu}_{\hat S}(v)), (\hat{\mu}_{\hat S}(u), v), (\hat{\mu}_{\hat S}(u), \hat{\mu}_{\hat S}(v))$.
This implies that $(\tilde{w}_1, \tilde{w}_2)$ is an A1-edge that is contained in
$\ats{T}{\hat{S}}$.
If $w_1 =y$, then $y\in V(S')$ implies that $y=\hat{\mu}_{S'}(u)$.
By construction and (P2.b), $\tilde{w}_1 = x = \hat{\mu}_{\hat S}(u)$.
This, in particular, implies that $w_2 \notin \{x, y\}$ as otherwise, $\tilde{w}_2 = x$; contradicting
$(\tilde{w}_1, \tilde{w}_2) \neq (x, x)$.
By construction of $\tilde w_2$, we have $\tilde{w}_2=w_2$.
Thus, $(\tilde{w}_1,\tilde{w}_2)$ is either of the form
$(\hat{\mu}_{\hat S}(u), v)$ or $(\hat{\mu}_{\hat S}(u), \hat{\mu}_{\hat S}(v))$ depending on the label $t(v)$.
In either case, $(\tilde{w}_1,\tilde{w}_2)$ is an A1-edge that is contained in
$\ats{T}{\hat{S}}$.
If $w_2 = y$ then, by analogous arguments as in the case $w_1=y$, we have
$\tilde{w}_2 = x = \hat{\mu}_{\hat S}(v)$ and $\tilde{w}_1=w_1$. Again,
$(\tilde{w}_1,\tilde{w}_2)$ is an A1-edge that is contained in
$\ats{T}{\hat{S}}$.
In summary, $(\tilde{w}_1,\tilde{w}_2)$ is an A1-edge in
$\ats{T}{\hat{S}}$ whenever $(w_1, w_2)$ is an A1-edge in $\ats{T}{S'}$.
\item[\textnormal{\em Case: $(w_1, w_2)$ is an A3-edge.} ] \ \\
Since $(w_1, w_2)$ is an A3-edge, we have $(w_1, w_2) = (u, \hat{\mu}_{S'}(u))$.
Since $u \in V(T)$, it holds that $w_1=u \neq y$ and thus, $\tilde{w}_1=w_1=u$.
Now we can apply similar arguments as in the first case:
either $\hat{\mu}_{S'}(u) \neq y$ and thus, $\tilde{w}_2 = w_2 = \hat{\mu}_{S'}(u) = \hat{\mu}_{\hat{S}}(u)$
or $\hat{\mu}_{S'}(u) =y$ and thus, $\tilde{w}_2 = x = \hat{\mu}_{\hat{S}}(u)$.
In both cases, $(\tilde{w}_1, \tilde{w}_2) = (u,\hat{\mu}_{\hat{S}}(u))$ which implies that
$(\tilde{w}_1, \tilde{w}_2)$ is an A3-edge in $\ats{T}{\hat{S}}$.
\item[\textnormal{\em Case: $(w_1, w_2)$ is an A2-edge.} ] \ \\
Since $(w_1, w_2)$ is an A2-edge, we have $(w_1, w_2)\in E(S')$ and hence,
$w_1$ is the parent of $w_2$ in $S'$. This implies that $w_2 \neq y$ as, otherwise,
$w_1=x$ and thus, $(\tilde{w}_1, \tilde{w}_2) = (x, x)$; a contradiction.
Thus, by construction, $\tilde{w}_2 = w_2$.
If $w_1 = y$, then $\tilde w_1 = x$ and, by construction of $S'$,
we have $(x, w_2) = (\tilde{w}_1, \tilde{w}_2) \in E(\hat{S})$.
In this case, $(\tilde{w}_1, \tilde{w}_2)$ is an A2-edge in $E(\ats{T}{\hat{S}})$.
Otherwise, if $w_1\neq y$, then $\tilde w_1=w_1$.
Hence, $(w_1, w_2) = (\tilde{w}_1, \tilde{w}_2) \in E(\hat{S})$
which implies that $(\tilde{w}_1, \tilde{w}_2)$ is an A2-edge in $E(\ats{T}{\hat{S}})$.
\item[\textnormal{\em Case: $(w_1, w_2)$ is an A4-edge.} ] \ \\
Since $(w_1, w_2)$ is an A4-edge, there is an edge $(u,v)\in \mathcal{E}_T$
such that $w_1 = \ensuremath{\operatorname{lca}}_{S'}(\hat{\mu}_{S'}(u),\hat{\mu}_{S'}(v))$ and $w_2=u$. Clearly,
$w_2=y$ is not possible, since $w_2$ corresponds to a vertex in $T$.
By construction, $\tilde w_2=w_2=u$.
Note that in $\ats{T}{\hat{S}}$, the edge $(u, v)$ defines the A4-edge
$(\ensuremath{\operatorname{lca}}_{\hat{S}}(\hat{\mu}_{\hat{S}}(u),\hat{\mu}_{\hat{S}}(v)), u)$.
Therefore, it remains to show that $\tilde w_1 = \ensuremath{\operatorname{lca}}_{\hat{S}}(\hat{\mu}_{\hat{S}}(u),\hat{\mu}_{\hat{S}}(v))$.
Notice that
\begin{align*}
w_1 &= \ensuremath{\operatorname{lca}}_{S'}(\hat{\mu}_{S'}(u),\hat{\mu}_{S'}(v)) \\
    &= \ensuremath{\operatorname{lca}}_{S'}(\ensuremath{\operatorname{lca}}_{S'}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)), \ensuremath{\operatorname{lca}}_{S'}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v))) \\
    &= \ensuremath{\operatorname{lca}}_{S'}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \cup \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v))
\end{align*}
In a similar manner, we obtain
\begin{align*}
\ensuremath{\operatorname{lca}}_{\hat{S}}(\hat{\mu}_{\hat{S}}(u), \hat{\mu}_{\hat{S}}(v)) = \ensuremath{\operatorname{lca}}_{\hat{S}}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \cup \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v))
\end{align*}
Let $Z = \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \cup \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$. Property (P1) implies that if $w_1 \neq y$, then
$\ensuremath{\operatorname{lca}}_{\hat{S}}(\hat{\mu}_{\hat{S}}(u),\hat{\mu}_{\hat{S}}(v)) = \ensuremath{\operatorname{lca}}_{\hat{S}}(Z) = \ensuremath{\operatorname{lca}}_{S'}(Z) = w_1 = \tilde{w}_1$, as desired.
If $w_1 = y$, then $\ensuremath{\operatorname{lca}}_{\hat{S}}(\hat{\mu}_{\hat{S}}(u),\hat{\mu}_{\hat{S}}(v)) = \ensuremath{\operatorname{lca}}_{\hat{S}}(Z) = x$ and $\tilde{w}_1 = x$, as desired.
\end{description}
We have therefore shown that a cycle in $\ats{T}{S'}$ implies a cycle in
$\ats{T}{\hat{S}}$. Since $\hat{S}$ is a solution, we deduce that $\ats{T}{S'}$ cannot
have a cycle, and $S'$ is therefore also a solution to $((T;t,\sigma), S)$.
To finish the
proof, we need to show that $\ats{T}{S^*}$ is acyclic. This is now easy to
see since $\hat{S}$ can be transformed into $S^*$ by a sequence of extensions. As we
showed, each extension maintains the acyclicity property, and we deduce that
$\ats{T}{S^*}$ is acyclic.
\end{proof}
This shows that we can restrict our search to binary species trees, and that we may only require such a tree to \emph{agree} with $\ensuremath{\mathcal{R}} (T;t,\sigma)$.
\begin{proposition}
An instance $((T;t,\sigma), S)$ of the GTC problem admits a solution if and only if there exists a binary refinement $S^*$ of $S$ that
agrees with (and therefore displays) $\ensuremath{\mathcal{R}} (T;t,\sigma)$ and such that $\ats{T}{S^*}$ is acyclic.
\label{prop:IFFbinRef}
\end{proposition}
\begin{proof}
Assume that $((T;t,\sigma), S)$ admits a solution $\hat{S}$ and let $R=\ensuremath{\mathcal{R}} (T;t,\sigma)$.
By Lemma~\ref{lem:binarize}, any binary refinement $S^*$ of $\hat{S}$ displays $R$ (and hence agrees with it) and $\ats{T}{S^*}$ is acyclic.
Conversely, suppose that there is a binary species tree $S^*$ that is a refinement of $S$ and agrees with $R$ such that $\ats{T}{S^*}$ is acyclic.
Since $\ats{T}{S^*}$ is acyclic, we only need to show that $S^*$ displays $R$. Let $ab|c \in R$. Because $S^*$ is binary, we must have one of $ab|c$, $ac|b$ or $bc|a$ in $rt(S^*)$. Since $S^*$ agrees with $R$, $ab|c \in rt(S^*)$, and it follows that $R\subseteq rt(S^*)$. Hence, $S^*$ displays $R$. Taking the latter arguments together, $S^*$ is a solution to the instance $((T;t,\sigma), S)$ of the GTC problem, which completes the proof.
\end{proof}
\section{An Algorithm for the GTC Problem}
\label{sec:algoGTC}
We need to introduce a few more concepts before describing our algorithm.
For a sequence $Q=(v_1,\ldots,v_k)$, we denote by $\mathcal{M}(Q)$ the set $\{v_1,\ldots,v_k\}$.
Given a graph $G$, a
\emph{partial topological sort} of $G$ is a sequence of distinct vertices $Q = (v_1, v_2, \ldots, v_k)$ such that for each $i \in [k]$,
vertex $v_i$ has in-degree $0$ in $G - \{v_1, \ldots, v_{i-1}\}$.
If, for any $v \in V(G)$, there is no partial topological sort $Q'$ satisfying $\mathcal{M}(Q') = \mathcal{M}(Q) \cup \{v\}$
then $Q$ is called a \emph{maximal topological sort}.
We note that in fact, the set of vertices in a maximal topological sort of $G$ is unique, in the sense
that for two distinct maximal topological sorts $Q,Q'$ of $G$ we always have
$\mathcal{M}(Q) = \mathcal{M}(Q')$.
\begin{lemma}[Properties of maximal topological sort]
Let $G = (V,E)$ be a graph and $Q$ and $Q'$ be maximal topological sorts of $G$.
Then, $\mathcal{M}(Q) = \mathcal{M}(Q')$.
In particular, $\mathcal{M}(Q) = V(G)$ if and only if $G$ is a directed acyclic graph.
If $x\in V\setminus \mathcal{M}(Q)$, then none of the vertices $y$ in $V$ for
which there is a directed path from $x$ to $y$ are contained in $\mathcal{M}(Q)$.
If $x\in \mathcal{M}(Q)$, then $x$ is not contained in any cycle of $G$.
\label{lem:Qproperty}
\end{lemma}
\begin{proof}
Let $Q, Q'$ be maximal topological sorts of $G$, with $Q = (v_1, \ldots, v_k)$
and assume, for contradiction that $\mathcal{M}(Q) \neq \mathcal{M}(Q')$.
Let $v_i$ be the first vertex in the sequence $Q$ such that $v_i \notin \mathcal{M}(Q')$.
Then all the in-neighbors of $v_i$ are in the set $\{v_1, \ldots, v_{i-1}\}$.
Moreover, by assumption $\{v_1, \ldots, v_{i-1}\} \subseteq \mathcal{M}(Q')$, implying that $v_i$ has in-degree $0$ in $G - \mathcal{M}(Q')$. Hence, we could append $v_i$ to $Q'$, contradicting its maximality. The fact that $\mathcal{M}(Q) = V(G)$ if and only if $G$ is a directed acyclic graph is well-known and follows from the results of Kahn~\cite{Kahn:62}.
Let $x\in V\setminus \mathcal{M}(Q)$. Moreover, let
$P=(x=v_1,\dots, v_k=y)$, $k\geq 2$, be a directed path from $x$ to $y$.
Since $x\notin \mathcal{M}(Q)$, $v_2$ has in-degree greater than $0$ in $G-\mathcal{M}(Q)$.
Therefore, $v_2\notin \mathcal{M}(Q)$ and, by induction,
$v_k=y\notin \mathcal{M}(Q)$.
We now show that no vertex $x\in \mathcal{M}(Q)$ can be contained in a cycle of $G$.
Assume, for contradiction, that there is a cycle $C$ such that some of its vertices are part of
a maximal topological sort $Q = (v_1,\dots,v_k)$ of $G$.
Let $v_i$ be the first vertex of $C$ that appears in $Q$.
Hence, $v_i$ must have in-degree $0$ in $G - \{v_1, \ldots, v_{i-1}\}$.
But this implies that the in-neighbor of $v_i$ in $C$
must already be contained in $Q$; a contradiction.
\end{proof}
A maximal topological sort of $G$ can be found by
applying the following procedure: start with $Q$ empty and, while there is a
vertex $v$ of in-degree $0$ in $G - \mathcal{M}(Q)$, append $v$ to $Q$ and repeat. Then, $G$ is acyclic if and only if any maximal
topological sort $Q$ of $G$ satisfies $\mathcal{M}(Q) = V(G)$. The latter argument is correct as it directly
mirrors the well-known algorithm by Kahn for finding a topological sort of a graph \cite{Kahn:62}.
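For concreteness, the greedy procedure above can be sketched as follows. This is a minimal Python sketch; the representation of $G$ as a vertex list and an edge list, and the function name, are our own illustrative choices and not part of the formal development.

```python
from collections import defaultdict

def maximal_topological_sort(vertices, edges):
    """Greedily build a maximal topological sort Q of a directed graph:
    repeatedly pick a vertex of in-degree 0 in G - M(Q) and append it
    to Q, as in Kahn's algorithm, stopping when no such vertex remains.
    Q covers all vertices iff the graph is acyclic."""
    indeg = {v: 0 for v in vertices}
    out = defaultdict(list)
    for (u, v) in edges:
        out[u].append(v)
        indeg[v] += 1          # a self-loop keeps its vertex at indeg >= 1
    Q = []
    ready = [v for v in vertices if indeg[v] == 0]
    while ready:
        v = ready.pop()
        Q.append(v)
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return Q
```

By Lemma \ref{lem:Qproperty}, the set of vertices returned does not depend on which in-degree-$0$ vertex is picked first, so any tie-breaking rule in the `while` loop yields the same $\mathcal{M}(Q)$.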
Our algorithm will make use of what we call a \emph{good split refinement}.
To this end, we first provide the following definition.
\begin{definition}[Split refinement]
\label{def:split-ref}
Let $S$ be an almost binary tree and let $x$ be a cherry of $S$.
We say that a refinement $S'$ of $S$ is a \emph{split refinement (of $S$ at $x$)}
if $S'$ can be obtained from $S$ by
partitioning the set $\mathrm{ch}(x)$ of children of $x$
into two non-empty subsets $X_1, X_2=\mathrm{ch}(x)\setminus X_1$,
and applying the extensions $(x, X_1)$ and then $(x, X_2)$.
\end{definition}
In other words, we split the children set of $x$ into two
non-empty subsets $X_1$ and $X_2$, add a new parent vertex above each
subset of size 2 or more, and connect $x$ with the newly created parent(s),
or directly with $x'$ whenever $X_i=\{x'\}$.
We note that the two $(x, X_1)$ and $(x, X_2)$ extensions yield a valid refinement of $S$
since the set $X_2$ is a strict subset of the children of $x$ in $S_{x, X_1}$.
Also observe that a split refinement transforms
an almost binary tree into another almost binary tree that has one additional
binary internal vertex.
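A split refinement at a cherry $x$ can be sketched as follows. We represent a tree by a map from each vertex to its list of children (leaves map to the empty list); this dictionary representation and the naming scheme for fresh vertices are our own illustrative assumptions.

```python
def split_refinement(children, x, X1):
    """Apply a split refinement at cherry x: partition ch(x) into X1 and
    X2 = ch(x) \\ X1; each part of size >= 2 gets a fresh parent vertex
    inserted below x, while singleton parts stay attached to x directly.
    Returns a new children map; the input map is left unchanged."""
    X1 = set(X1)
    X2 = set(children[x]) - X1
    assert X1 and X2, "both parts of the partition must be non-empty"
    new = {v: list(c) for v, c in children.items()}
    kids, fresh = [], 0
    for part in (X1, X2):
        if len(part) == 1:
            kids.extend(part)           # singleton: keep child below x
        else:
            y = f"new{fresh}"           # fresh internal vertex (sketch only;
            fresh += 1                  # a real implementation needs globally
            new[y] = sorted(part)       # unique names across refinements)
            kids.append(y)
    new[x] = kids
    return new
```

Note that the result is again almost binary with one additional binary internal vertex whenever $x$ was a cherry, matching the observation above.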
\begin{definition}[Good split refinement]
Let $((T;t,\sigma), S)$ be a GTC instance. Let $Q$ be a maximal topological sort of $\ats{T}{S}$, and let $S'$ be a split refinement of $S$ at some cherry vertex $x$.
Then $S'$ is a \emph{good split refinement} if the two following conditions are satisfied:
\begin{itemize}
\item
$S'$ agrees with $\ensuremath{\mathcal{R}} (T;t,\sigma)$;
\item
all the in-neighbors of $x$ in $\ats{T}{S'}$ belong to $\mathcal{M}(Q)$.
\end{itemize}
\end{definition}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=.9\textwidth]{./working-exmpl.pdf}
\end{center}
\caption{Top left: the gene tree $(T;t,\sigma)$ from Fig.\ \ref{fig:least}
from which we obtain the species triples $\S(T;t,\sigma) = \{AB|D,AC|D\}$.
Moreover, the sequence of species trees $S_1, S_2$ and $S_3$ is
obtained by stepwise application of good split refinements. The species tree $S_4$ is an example of a split refinement of $S_2$ that is not good.
The corresponding graphs $\ats{T}{S}$ are drawn right to the respective species tree $S$.
For clarity, we have omitted to draw all vertices of $\ats{T}{S}$ that have degree $0$.
See text for further discussion.
}
\label{fig:working-exmpl}
\end{figure}
The intuition behind a good split refinement is that it refines $S$ by creating an additional binary vertex. Moreover, this refinement maintains agreement with $\ensuremath{\mathcal{R}} (T;t,\sigma)$ and, more importantly, creates a new vertex of in-degree $0$ in the auxiliary graph that can be used to extend the current maximal topological sort. Ultimately, our goal is to repeat this procedure until $Q$ contains every vertex, at which point we will have attained an acyclic graph.
As an example consider Fig.\ \ref{fig:working-exmpl}. The species tree $S_1$ corresponds to the star tree.
Clearly, $S_1$ agrees with $\S(T;t,\sigma)$ since $rt(S_1)=\emptyset$. However, $\ats{T}{S_1}$ contains cycles.
For the maximal topological sort $Q_1$ of $\ats{T}{S_1}$ we have
$\mathcal{M}(Q_1) = L(T)\cup \{1,2,5\}$.
Now, $S_2$ is a good split refinement of $S_1$, since $S_2$ agrees with $\S(T;t,\sigma)$ (in fact, $S_2$ displays $\S(T;t,\sigma)$)
and since $x=1'$ has no in-neighbors in $\ats{T}{S_2}$, which trivially implies that
all in-neighbors of $x=1'$ in $\ats{T}{S_2}$ are already contained in $\mathcal{M}(Q_1)$.
For the maximal topological sort $Q_2$ of $\ats{T}{S_2}$ we have
$\mathcal{M}(Q_2) = \mathcal{M}(Q_1)\cup \{1'\}$.
Still, $\ats{T}{S_2}$ is not acyclic.
The tree $S_3$ is a good split refinement of $S_2$, since $S_3$ agrees with $\S(T;t,\sigma)$
and the unique in-neighbor $1'$ of $x=2'$ in $\ats{T}{S_3}$ is already contained in $\mathcal{M}(Q_2)$.
Since $\ats{T}{S_3}$ is acyclic, there is a time-consistent reconciliation map from
$(T;t,\sigma)$ to $S_3$, which is shown in Fig.\ \ref{fig:least}.
Furthermore, $S_4$ is not a good split refinement of $S_2$.
Although $S_4$ is a split refinement of $S_2$ and agrees with $\S(T;t,\sigma)$,
the in-neighbor $4$ of $x=2'$ is not contained in $\mathcal{M}(Q_2)$.
We will discuss later the question of finding a good split refinement efficiently, if one exists. For now, assume that this can be done in polynomial time.
The pseudocode for a high-level algorithm for solving the GTC problem is provided in Alg.\ \ref{alg:gtcRefinement}. We note in passing that this
algorithm serves mainly as a scaffold to provide the correctness proofs that are needed for the main Alg.\ \ref{alg:GoodSplit}.
\vspace{3mm}
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State Function $\textit{gtcRefinement}((T;t,\sigma), S)$
\If{$S$ is binary \label{line:binary}}
\If{$\ats{T}{S}$ is acyclic and $S$ agrees with $\ensuremath{\mathcal{R}} (T;t,\sigma)$ \label{line:GTC-properties}} %
\State return $S$\;
\Else \ return ``there is no solution''\;
\EndIf
\Else
\If{$S$ admits a good split refinement $S'$ at a vertex $x$ \label{line:good-split}}
\State return $\textit{gtcRefinement}((T;t,\sigma), S')$\;
\Else \ return ``there is no solution''\; \label{line:no-good-split}
\EndIf
\EndIf
\end{algorithmic}
\caption{GTC algorithm}\label{alg:gstr}
\label{alg:gtcRefinement}
\end{algorithm}
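The recursion of Alg.\ \ref{alg:gtcRefinement} can be sketched as follows. Since the concrete subroutines (in particular the search for a good split refinement) are only developed later, the three predicates are passed in as callables; their names and signatures are our own assumptions, not part of the algorithm as stated.

```python
def gtc_refinement(S, is_binary, is_solution, good_split_refinement):
    """Recursive skeleton of the GTC algorithm.  The callables stand in
    for the paper's subroutines:
      is_binary(S)             -- is the species tree fully resolved?
      is_solution(S)           -- is ats(T, S) acyclic and does S agree
                                  with the species triples R(T; t, sigma)?
      good_split_refinement(S) -- a good split refinement of S, or None
                                  if none exists.
    Returns a solution tree, or None ("there is no solution")."""
    if is_binary(S):
        return S if is_solution(S) else None
    S_next = good_split_refinement(S)
    if S_next is None:
        return None
    return gtc_refinement(S_next, is_binary, is_solution,
                          good_split_refinement)
```

The recursion terminates because each good split refinement creates one additional binary internal vertex, so the depth is bounded by the number of vertices of a binary refinement of $S$.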
We prove some general-purpose statements first.
Let $I_G(v)$ denote the set of in-neighbors of vertex $v$ in a graph $G$.
\begin{lemma}\label{lem:ancestors-dont-change}
Let $((T;t,\sigma), S)$ be a GTC instance. Moreover,
let $S'$ be a split refinement of $S$ at a cherry $x$.
Then, for every vertex $y$ of $S$ such that $y \not\preceq_S x$, it holds that
$I_{\ats{T}{S'}}(y) = I_{\ats{T}{S}}(y)$.
\end{lemma}
\begin{proof}
Let $y\in V(S)$ be a vertex of $S$ satisfying $y \not\preceq_S x$.
Since $y\notin V(T)$, for every $z\in I_{\ats{T}{S'}}(y)$ or $z\in I_{\ats{T}{S}}(y)$ the edge $(z,y)$
cannot be an A4-edge.
If $(z,y)$ is an A2-edge in $\ats{T}{S}$ then, $(z,y)\in E(S)$, which is if and only if
$(z,y)\in E(S')$, since $y\not\preceq_S x$. In this case,
$z\in I_{\ats{T}{S}}(y)$ if and only if $z\in I_{\ats{T}{S'}}(y)$.
It remains to consider A1- and A3-edges. We translate here Property (P2.a) as in the proof of Lemma \ref{lem:binarize}.
It states that, for every $v\in V(T)$, we have $\hat{\mu}_{S}(v)=y$ if and only if $\hat{\mu}_{S'}(v)=y$, since $y\not\preceq_{S'} x$ and thus, $y$ cannot be the newly created vertex in $S'$.
Since this holds for every $y \not\preceq_S x$, this immediately implies that
$z\in I_{\ats{T}{S}}(y)$ and the edge $(z,y)$ is an A1-edge, resp., A3-edge in $\ats{T}{S}$
if and only if $z\in I_{\ats{T}{S'}}(y)$ and $(z,y)$ is an A1-edge, resp., A3-edge in $\ats{T}{S'}$.
\end{proof}
\begin{remark}
Lemma \ref{lem:self-loops} implies that if $S$ has a leaf that is in a self-loop in $\ats{T}{S}$, then we may immediately discard $S$, as the instance cannot admit a solution (since any refinement will retain this self-loop). For the rest of the section, we will therefore assume that no leaf of $S$ belongs to a self-loop.
\label{rem:self-loop}
\end{remark}
We now show that if we reach a situation where there is no good split refinement, then there is no point in continuing, i.e., it is correct to deduce that no solution refining the current $S$ exists.
\begin{proposition}
Let $((T;t,\sigma), S)$ be a GTC instance such that $S$ is not binary and
does not admit a good split refinement.
Then, $((T;t,\sigma), S)$ does not admit a solution.
\label{prop:no-solution}
\end{proposition}
\begin{proof}
We show that if $S$ does not admit a good split refinement, then
none of the binary refinements $S^*$ of $S$ is a solution to the
GTC instance $((T;t,\sigma), S)$.
Contraposition of Lemma \ref{lem:binarize} together with Prop.\ \ref{prop:IFFbinRef} then implies
that there is no solution at all for $((T;t,\sigma), S)$.
Thus, assume that $S$ is not binary (but almost binary, due to the definition of GTC instances)
such that $S$ does not admit a good split refinement.
Let $S^*$ be any binary refinement of $S$. We may assume that $S^*$ agrees with, and thus displays, $\ensuremath{\mathcal{R}} (T;t,\sigma)$, as otherwise it is not a solution.
We show that $\ats{T}{S^*}$ contains a cycle.
Let $Q$ be a maximal topological sort of $\ats{T}{S}$. By Lemma \ref{lem:Qproperty}, $\mathcal{M}(Q)$ is independent of the choice of
the particular sequence $Q$.
Note that $V(S) \subseteq V(S^*)$ and therefore that $V(\ats{T}{S}) \subseteq V(\ats{T}{S^*})$.
In particular, $\mathcal{M}(Q) \subseteq V(\ats{T}{S^*})$.
Also notice that, because of the A2-edges in $\ats{T}{S}$, if a vertex $x \in V(S)$ is not in $\mathcal{M}(Q)$, then no descendant of $x$ in $S$ is in $\mathcal{M}(Q)$.
We separate the proof into three claims.
\begin{owndesc}
\item[Claim 1:]
\emph{Let $x$ be a non-binary cherry of $S$. Then $x \notin \mathcal{M}(Q)$.}
Note that since
$x$ is non-binary and $S^*$ is
a binary refinement of $S$, there is a split refinement
$S'$ of $S$ at $x$ such
that $S^*$ refines $S'$.
Since $S^*$ agrees with $\ensuremath{\mathcal{R}} (T;t,\sigma)$, also $S'$ agrees with $\ensuremath{\mathcal{R}} (T;t,\sigma)$.
If all in-neighbors of $x$ in $\ats{T}{S'}$ are in $\mathcal{M}(Q)$, then $S'$ is a good split refinement; a contradiction.
So we may assume that $x$ has an in-neighbor $y$ in $\ats{T}{S'}$ such that $y \notin \mathcal{M}(Q)$.
Since $x\in V(S)$, the edge $(y, x)$ cannot be an A4-edge in $\ats{T}{S'}$.
If $(y, x)$ is an A1-edge in $\ats{T}{S'}$, then $x = \hat{\mu}_{S'}(v)$ for some $v \in V(T)$.
By construction, $L(S(x)) = L(S'(x))$ and thus, $\hat{\mu}_{S}(v) = \hat{\mu}_{S'}(v) = x$. Therefore,
$(y, \hat{\mu}_S(v)) = (y, x) \in E(\ats{T}{S})$.
Similarly, if $(y, x)$ is an A3-edge in $\ats{T}{S'}$, then $x = \hat{\mu}_{S'}(y)$ and again,
$(y, \hat{\mu}_S(y)) = (y, x) \in E(\ats{T}{S})$.
If $(y, x)$ is an A2-edge in $\ats{T}{S'}$, then $(y, x) \in E(\ats{T}{S})$ since the parent
of $x$ is the same in $S$ and $S'$. In all cases, $y$ is an in-neighbor of
$x$ in $\ats{T}{S}$. However, since $y \notin \mathcal{M}(Q)$, vertex $y$ remains an in-neighbor of $x$ in
the graph $\ats{T}{S} - \mathcal{M}(Q)$. It follows that $x \notin \mathcal{M}(Q)$, which proves Claim 1.
\end{owndesc}
\begin{owndesc}
\item[Claim 2:]
\emph{Let $v \in V(T) \setminus \mathcal{M}(Q)$. Then $v$ has in-degree at least $1$ in $\ats{T}{S^*} - \mathcal{M}(Q)$.}
Let $v \in V(T) \setminus \mathcal{M}(Q)$. Since $v \notin \mathcal{M}(Q)$, $v$ has in-degree at least $1$ in $\ats{T}{S} - \mathcal{M}(Q)$, or else it could be added to the maximal topological sort.
Let $(x, v)$ be an incoming edge of $v$ in $\ats{T}{S} - \mathcal{M}(Q)$, which is either an A1- or an A4- edge.
If $(x, v)$ is an A1- edge, we either have $x \in V(T)$ or $x \in V(S)$.
Suppose first that $x \in V(T)$. In this
case, the $(x, v)$ edge exists because $x$ is the parent of $v$ in $T$
with $t(x), t(v)$ both in $\{\mathfrak{d}, \mathfrak{t}\}$. This is independent of
$S$, and so $(x, v)$ is also an A1-edge of $\ats{T}{S^*} - \mathcal{M}(Q)$.
Suppose now that $x \in V(S)$. In this case, observe that
$x\notin \mathcal{M}(Q)$, since $(x, v)$ is an edge in $\ats{T}{S} - \mathcal{M}(Q)$.
Therefore, $x \in V(S) \setminus \mathcal{M}(Q)$. This, in particular, implies
that the parent $v_p$ of $v$ in $T$ satisfies $t(v_p) = \mathfrak{s}$ and
$\hat{\mu}_{S}(v_p) = x$. Since $S^*$ refines $S$, we must have $\hat{\mu}_{S^*}(v_p)
\preceq_{S^*} x$.
There are two cases, either
$\hat{\mu}_{S^*}(v_p)\notin V(S)$, in which case trivially $\hat{\mu}_{S^*}(v_p) \notin
\mathcal{M}(Q)$, or $\hat{\mu}_{S^*}(v_p)\in V(S)$. In the latter case, there is a
directed (possibly edge-less) path from $x$ to $\hat{\mu}_{S^*}(v_p)$ in $\ats{T}{S}$ due to the A2-edges.
Thus, we can apply Lemma
\ref{lem:Qproperty} to conclude that $\hat{\mu}_{S^*}(v_p) \notin \mathcal{M}(Q)$.
In either
case, $(\hat{\mu}_{S^*}(v_p), v)$ is an A1-edge of $\ats{T}{S^*} - \mathcal{M}(Q)$.
Therefore, $v$ has an in-neighbor in $\ats{T}{S^*}$ that does not belong to
$\mathcal{M}(Q)$.
Assume now that $(x, v)$ is an A4-edge.
Thus, there is an edge $(v,v') \in \mathcal{E}_T$ such that
$x= \ensuremath{\operatorname{lca}}_{S}(\hat{\mu}_S(v), \hat{\mu}_S(v'))$.
Again, since $S^*$ refines $S$, it is not hard to see that
$\ensuremath{\operatorname{lca}}_{S^*}(\hat{\mu}_{S^*}(v), \hat{\mu}_{S^*}(v')) \preceq_{S^*} x$.
By similar arguments as before, $\ensuremath{\operatorname{lca}}_{S^*}(\hat{\mu}_{S^*}(v), \hat{\mu}_{S^*}(v')) \notin \mathcal{M}(Q)$.
Thus, $(\ensuremath{\operatorname{lca}}_{S^*}(\hat{\mu}_{S^*}(v), \hat{\mu}_{S^*}(v')),v)$ is an A4-edge of
$\ats{T}{S^*}$. Hence,
$v$ also has in-degree at least $1$ in $\ats{T}{S^*} - \mathcal{M}(Q)$, which proves Claim 2.
\end{owndesc}
We prove the analogous statement for the species tree vertices.
\begin{owndesc}
\item[Claim 3:]
\emph{Let $x \in V(S) \setminus \mathcal{M}(Q)$. Then $x$ has in-degree at least $1$ in $\ats{T}{S^*} - \mathcal{M}(Q)$.}
Let $x \in V(S) \setminus \mathcal{M}(Q)$.
We may assume that $x$ has in-degree at least $1$ in $\ats{T}{S} - \mathcal{M}(Q)$, by the maximality of $Q$.
Notice that since $S^*$ is a binary refinement of $S$, there exists a sequence of split refinements that transforms $S$ into $S^*$. That is,
there is a sequence of trees $S = S_1, S_2, \ldots, S_k = S^*$ such that for $2 \leq i \leq k$, $S_i$ is a split refinement of $S_{i-1}$.
Let $(w, x)$ be an incoming edge of $x$ in $\ats{T}{S} - \mathcal{M}(Q)$. We consider the following three exclusive cases:
either $x$ is a binary or a non-binary interior vertex, or a leaf.
Suppose first that $x$ is a binary vertex of $S$. Because $S$ is almost binary, $x$ is not a descendant of any non-binary vertex of $S$.
By applying Lemma~\ref{lem:ancestors-dont-change} successively on each split refinement of the sequence transforming $S$ into $S^*$,
we obtain $I_{\ats{T}{S_1}}(x) = I_{\ats{T}{S_2}}(x) = \ldots = I_{\ats{T}{S_k}}(x) = I_{\ats{T}{S^*}}(x)$. In particular, $w \in I_{\ats{T}{S^*}}(x)$, which proves the claim for this case since $w \notin \mathcal{M}(Q)$.
Suppose now that $x$ is a leaf of $S$. If the parent $x_p$ of $x$ is binary,
then again, successive application of Lemma~\ref{lem:ancestors-dont-change} on
$S_1, \ldots, S_k$ implies that $I_{\ats{T}{S}}(x) = I_{\ats{T}{S^*}}(x)$, and
therefore that $w \in I_{\ats{T}{S^*}}(x)$. If $x_p$ is a non-binary cherry,
then $x_p \notin \mathcal{M}(Q)$ by Claim 1. There are two cases, either the parent $p(x)$
of $x$ in $S^*$ is identical to $x_p$ or not.
In the first case, $p(x)=x_p$ is not contained in $\mathcal{M}(Q)$.
In the latter case, $p(x)$ refers to some vertex that was newly added during
the construction of $S^*$. Such a vertex is not contained in $S$ and hence not in $\mathcal{M}(Q)$.
In summary,
the parent of $x$ in $S^*$
is not in $\mathcal{M}(Q)$. Due to the A2-edges, $x$ has in-degree at least $1$ in
$\ats{T}{S^*} - \mathcal{M}(Q)$.
Finally, suppose that $x$ is a non-binary interior vertex of $S$, i.e. $x$ is a cherry.
Let $S'$ be a split refinement of $S$ at $x$ such that $S^*$ refines $S'$. Recall that as in Claim 1, $S'$ agrees with $\ensuremath{\mathcal{R}} (T;t,\sigma)$.
This and the fact that $S$ does not admit a good split refinement implies
that $x$ has in-degree at least $1$ in $\ats{T}{S'} - \mathcal{M}(Q)$.
Now, $x$ is binary in $S'$. As before, there is a sequence of split refinements transforming $S'$ into $S^*$. Since $x$ is not a descendant of any non-binary vertex in $S'$, by applying Lemma~\ref{lem:ancestors-dont-change} on each successive refinement,
$I_{\ats{T}{S'}}(x) = I_{\ats{T}{S^*}}(x)$.
It follows that $x$ has in-degree at least $1$ in $\ats{T}{S^*} - \mathcal{M}(Q)$ as well.
This proves Claim 3.
\end{owndesc}
Now, let $y \in V(S^*) \setminus V(S)$. Thus, $y$ must have been created by one of the extensions that transforms $S$ into $S^*$, and so in $S^*$, $y$ must be a descendant of a vertex $x$ such that $x$ is a cherry in $S$. Since $x \notin \mathcal{M}(Q)$ by Claim 1, and because of the A2-edges, $y$ must have in-degree at least $1$ in $\ats{T}{S^*} - \mathcal{M}(Q)$.
To finish the argument, note that $V(\ats{T}{S^*} - \mathcal{M}(Q)) = (V(T) \setminus \mathcal{M}(Q)) \cup (V(S) \setminus \mathcal{M}(Q)) \cup (V(S^*) \setminus V(S))$.
We just argued that each vertex in $V(S^*) \setminus V(S)$ has in-degree at least $1$ in $\ats{T}{S^*} - \mathcal{M}(Q)$, and by Claim 2 and Claim 3, it follows that every vertex of $\ats{T}{S^*} - \mathcal{M}(Q)$ has in-degree at least $1$. This implies that $\ats{T}{S^*} - \mathcal{M}(Q)$ contains a cycle, and hence that $\ats{T}{S^*}$ also contains a cycle. We have reached a contradiction, proving the lemma.
\end{proof}
We next show that if we are able to find a good split refinement $S'$ of $S$, the $((T; t, \sigma), S')$ instance is equivalent in the sense that
$((T; t, \sigma), S)$ admits a solution if and only if $((T; t, \sigma), S')$ also admits a solution.
First, we provide the following lemma for later reference.
\begin{lemma}\label{lem:q-stays-topo}
Let $((T;t,\sigma), S)$ be a GTC instance and let $Q$ be a maximal topological sort of $\ats{T}{S}$. Moreover,
let $S'$ be a split refinement of $S$ at a cherry $x$.
Then, for any maximal topological sort $Q'$ of $\ats{T}{S'}$, it holds that
$\mathcal{M}(Q) \subseteq \mathcal{M}(Q')$.
\end{lemma}
\begin{proof}
Assume without loss of generality that the cherry $x$ is non-binary in $S$, as otherwise $S=S'$ and we are done.
Let $x_1, x_2$ be the children of $x$ in $S'$, and assume furthermore w.l.o.g.
that $|\L(S'(x_1))| \geq |\L(S'(x_2))|$.
Note that $x_2$ could be a leaf, but that $x_1$ must be an internal vertex since $x$ is a non-binary cherry.
Now, if $\mathcal{M}(Q)=\emptyset$, then the lemma statement is trivially satisfied. Hence, assume that
$Q = (w_1, \ldots, w_l)$, $l\geq 1$.
We construct partial topological sorts $Q_0, Q_1, \ldots, Q_l$ of $\ats{T}{S'}$ as follows.
Define $Q_0 = ()$ as an empty sequence and, for each $1 \leq i \leq l$,
$Q_i$ is obtained from $Q_{i-1}$ by appending $w_i$ to $Q_{i-1}$ if $w_i \neq x$,
and, if $w_i = x$, by appending $x$ and $x_1$ (in this order) to $Q_{i-1}$,
followed by $x_2$ in case $x_2$ is not a leaf in $S'$.
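For illustration only (the function name and the encoding of sorts as Python lists are ours, not part of the formal development), the construction of the sequences $Q_0, \ldots, Q_l$ can be sketched as follows: every vertex of $Q$ is copied, and directly after the refined cherry $x$ its new children are inserted.

```python
def lift_topological_sort(Q, x, x1, x2, x2_is_leaf):
    """Turn a maximal topological sort Q of the auxiliary graph for S into
    the candidate sequence for the split refinement S' at the cherry x:
    every vertex of Q is copied and, right after x, its new children x1
    (and x2, unless x2 is a leaf of S') are appended."""
    Q_new = []
    for w in Q:
        Q_new.append(w)
        if w == x:
            Q_new.append(x1)
            if not x2_is_leaf:
                Q_new.append(x2)
    return Q_new
```

The induction in the proof shows precisely that the resulting sequence is again a partial topological sort of $\ats{T}{S'}$.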
We show, by induction, that each $Q_i$ is a partial topological sort of $\ats{T}{S'}$. The base case $i = 0$ is clearly satisfied.
So let us assume that for $i > 0$ the sequence $Q_{i - 1}$
is a partial topological sort of $\ats{T}{S'}$. Consider now the vertex $w_i$.
\begin{owndesc}
\item[\textnormal{\em Case: $w_i \in V(S)$ and $w_i \not\preceq_S x$} ] \ \\
By Lemma~\ref{lem:ancestors-dont-change}, $I_{\ats{T}{S}}(w_i) = I_{\ats{T}{S'}}(w_i)$. Since each member of $I_{\ats{T}{S}}(w_i)$ precedes $w_i$ in $Q$,
$\mathcal{M}(Q_{i-1})$ contains $I_{\ats{T}{S}}(w_i)$. It follows that appending $w_i$
to $Q_{i-1}$ yields a partial topological sort $Q_i$ of $\ats{T}{S'}$.
\end{owndesc}
In the remaining cases, we will make frequent use of the fact that if
$Q'$ is a partial topological sort of $\ats{T}{S'}$ and $v$ is a vertex with
$I_{\ats{T}{S'}}(v)\setminus M = \emptyset$ for some (possibly empty) subset $M\subseteq \mathcal{M}(Q')$, then appending $v$ to $Q'$
yields a partial topological sort of $\ats{T}{S'}$.
In other words, we can w.l.o.g.\ assume $I_{\ats{T}{S'}}(v)\setminus M\neq \emptyset$
for all such considered sets.
\begin{owndesc}
\item[\textnormal{\em Case: $w_i \in V(S)$ and $w_i = x$} ] \ \\
We start by showing that the sequence $Q^x_{i-1}$ obtained by appending $x$ to $Q_{i-1}$
is a partial topological sort of $\ats{T}{S'}$.
Let $z \in I_{\ats{T}{S'}}(x)$.
Suppose first that $z \in V(S')$.
Then $(z, x)$ is either an A1- or A2-edge of $\ats{T}{S'}$. If $(z, x)$ is an A2-edge, then $z$ is the parent of $x$ in both $S$ and $S'$. Thus $(z, x) \in E(\ats{T}{S})$ and since $x \in \mathcal{M}(Q)$, we must have $z \in \mathcal{M}(Q)$. Moreover, $z$ must precede $x$ in $Q$, and it follows that $z \in \mathcal{M}(Q_{i-1})$. If $(z, x)$ is an A1-edge, then $x \preceq_{S'} z$.
If $x \prec_{S'} z$, then $x \prec_{S} z$ as well. Thus in $\ats{T}{S}$, there is a path of A2-edges from $z$ to $x$, implying that $z$ precedes $x$ in $Q$.
Finally if $x = z$, then $\ats{T}{S'}$ contains the self-loop $(x, x)$.
In this case, there is an edge $(u, v) \in E(T)$ such that $(x, x) = (\hat{\mu}_{S'}(u), \hat{\mu}_{S'}(v))$. By construction, $\L(S'(x)) = \L(S(x))$ and therefore, $(\hat{\mu}_S(u), \hat{\mu}_S(v)) = (x, x)$ is an edge of $\ats{T}{S}$. This case cannot occur, since it is impossible that $x \in \mathcal{M}(Q)$ if $x$ is part of a self-loop.
Therefore, $z$ precedes $x$ in $Q$ whenever $z \in V(S')$.
If instead $z \in V(T)$, then $(z, x)$ is either an A1- or A3-edge of $\ats{T}{S'}$,
in which case there is $z'\in V(T)$ such that $(z, x) = (z, \hat{\mu}_{S'}(z'))$.
By construction $\L(S'(x)) =\L(S(x))$ and therefore, $(z, \hat{\mu}_{S}(z')) = (z, x)$ is an edge of $\ats{T}{S}$.
Again, $z$ must precede $x$ in $Q$. We have thus shown that every $z \in I_{\ats{T}{S'}}(x)$ precedes $x$ in $Q$, and hence $I_{\ats{T}{S'}}(x) \subseteq \mathcal{M}(Q_{i-1})$.
Hence, appending $x$ to $Q_{i-1}$ yields a partial topological sort $Q^x_{i-1}$ of $\ats{T}{S'}$.
We continue with showing that $Q_{i}$ is a partial topological sort of $\ats{T}{S'}$.
Note, $Q_{i}$ is obtained by appending $x_1$ and, in case $x_2$ is not a leaf in $S'$, also $x_2$ to the partial topological sort $Q^x_{i-1}$ of $\ats{T}{S'}$.
Let $z \in I_{\ats{T}{S'}}(x_j)\setminus \{x\}$, where $x_j \in \{x_1, x_2\}$ is chosen to be an interior vertex of $S'$.
Note, $x_j=x_1$ is always possible as argued at the beginning of this proof.
Suppose that $z \in V(S')$. In this case, $(z, x_j)$ cannot be an A2-edge since it would imply $x=z$; a contradiction.
Hence, $(z, x_j)$ is an A1-edge of $\ats{T}{S'}$ and $x_j \preceq_{S'} z$.
Similarly as before, if $x_j \prec_{S'} z$, then $x \prec_{S} z$ since $z
\neq x$. Thus, $z$ precedes $x$ in $Q$, since $\ats{T}{S}$ contains a path of A2-edges from $z$ to
$x$. If $x_j = z$, then there is an edge $(u, v) \in E(T)$
such that $(\hat{\mu}_{S'}(u), \hat{\mu}_{S'}(v)) = (x_j, x_j)$. Since $x_j$ is, by assumption, not a
leaf in $S'$, the construction of $S'$ from $S$ implies that $(\hat{\mu}_S(u), \hat{\mu}_S(v)) = (x, x)$, contradicting $x
\in \mathcal{M}(Q)$. Now, assume that $z \in V(T)$, in which case $(z, x_j)$ is
either an A1- or A3-edge in $\ats{T}{S'}$. Again, there must be a vertex
$z'\in V(T)$ such that $(z, x_j) = (z, \hat{\mu}_{S'}(z'))$. By construction,
$\L(S'(x_j)) \subset \L(S(x))$. This and $x_j = \hat{\mu}_{S'}(z')$ immediately
imply that $x = \hat{\mu}_{S}(z')$.
Thus, $(z, \hat{\mu}_{S}(z')) = (z, x)$ is an edge of $\ats{T}{S}$ and $z$ must precede $x$ in $Q$.
Again,
this holds for every such $z$, which implies that $I_{\ats{T}{S'}}(x_j) \setminus \{x\} \subseteq \mathcal{M}(Q^x_{i-1})$.
Thus, appending $x_1$ and $x_2$ to $Q^x_{i-1}$ after $x$ yields the partial topological sort $Q_i$ of $\ats{T}{S'}$.
\item[\textnormal{\em Case: $w_i \in V(S)$ and $w_i \prec_S x$} ] \ \\
Since $x$ is a cherry, $w_i$ must be a leaf in $S$.
Thus $x$ precedes $w_i$ in $Q$ and therefore, we may assume that $x$, $x_1$
and, in case $x_2$ is not a leaf in $S'$, also $x_2$ are contained in the partial topological sort $Q_{i-1}$
of $\ats{T}{S'}$. Note that $x_2$ could be absent from $Q_{i-1}$ if it is a leaf and $w_i = x_2$.
That is, we may assume that $w_i$ is a child of either $x, x_1$ or $x_2$ in $S'$,
and that the parent of $w_i$ in $S'$ is in $Q_{i-1}$.
Consider $z \in I_{\ats{T}{S'}}(w_i) \setminus \{x,
x_1, x_2\}$. If $z \in V(S')$, then $w_i \preceq_{S'} z$. As before, if $w_i \prec_{S'} z$, then $w_i \prec_{S} z$ and because of the A2-edges of $\ats{T}{S}$,
$z$ precedes $w_i$ in $Q$. If $z = w_i$, then $(w_i, w_i)$ is a self-loop in $\ats{T}{S'}$.
By Lemma~\ref{lem:self-loops},
$(w_i, w_i)$ is also a self-loop of $\ats{T}{S}$; a contradiction since, by Remark \ref{rem:self-loop},
$\ats{T}{S}$ has no self-loops on its leaves.
So assume that $z \in V(T)$. Then $(z, w_i)$ is an
A1- or A3-edge. Since $w_i$ is a leaf, we have that for any $v \in V(T)$,
$\hat{\mu}_{S'}(v) = w_i$ if and only if $\hat{\mu}_S(v) = w_i$. It follows that $(z,
w_i) \in E(\ats{T}{S})$. Therefore, $z$ precedes $w_i$ in $Q$ and $z$
belongs to $Q_{i-1}$. Thus we may append $w_i$ to $Q_{i-1}$ to obtain a
partial topological sort $Q_i$ of $\ats{T}{S'}$.
\item[\textnormal{\em Case: $w_i \in V(T)$} ] \ \\
Let $z \in I_{\ats{T}{S'}}(w_i)$. Thus, $(z,w_i)$ is either an A1- or A4-edge in $\ats{T}{S'}$.
If $z \in V(T)$, then $(z,w_i)$ is an A1-edge in $\ats{T}{S'}$. Since the event-labels in $T$ are fixed,
$(z,w_i)$ is an A1-edge in $\ats{T}{S}$ and thus,
$z \in I_{\ats{T}{S}}(w_i)$. Therefore, $z$ precedes $w_i$ in $Q$.
Now, suppose $z \in V(S')$.
If $(z, w_i)$ is an A1-edge, then the parent $u$ of $w_i$ in $T$ satisfies
$\hat{\mu}_{S'}(u) = z$. If $z \notin \{x, x_1, x_2\}$, then we immediately obtain $\hat{\mu}_{S}(u) = z$.
Hence, $(z, w_i) \in E(\ats{T}{S})$ and thus, $z$ precedes
$w_i$ in $Q$. If $z \in \{x, x_1, x_2\}$, then it is easy to verify that $\hat{\mu}_S(u) =
x$. Thus $(x, w_i) \in E(\ats{T}{S})$ and $x$ precedes $w_i$ in $Q$.
By construction, $x$, $x_1$ and $x_2$ have been added in one of the previous steps
when obtaining $Q_j$ for some $1\leq j\leq i-1$. Hence, $z \in \{x, x_1, x_2\}$ is already contained in $Q_{i-1}$.
If instead $(z, w_i)$ is an A4-edge, then $w_i$ has a child $v$ such that
$\ensuremath{\operatorname{lca}}_{S'}(\hat{\mu}_{S'}(w_i), \hat{\mu}_{S'}(v)) = z$.
Clearly, it holds that $z = \ensuremath{\operatorname{lca}}_{S'}(Z)$ for
$Z = \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(w_i) \cup \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$.
If $z \in \{x, x_1, x_2\}$, then $(\ensuremath{\operatorname{lca}}_S(Z), w_i) = (x, w_i) \in E(\ats{T}{S})$,
and if $z \notin \{x, x_1, x_2\}$, then $(\ensuremath{\operatorname{lca}}_S(Z), w_i) = (\ensuremath{\operatorname{lca}}_{S'}(Z), w_i) = (z, w_i) \in E(\ats{T}{S})$.
In both cases, $z$ precedes $w_i$ in $Q$.
In every case, each $z$ is already contained in $Q_{i-1}$, and we may append $w_i$ to $Q_{i-1}$
to obtain a partial topological sort $Q_i$ of $\ats{T}{S'}$.
\end{owndesc}
We have shown that $Q_l$ is a partial topological sort of $\ats{T}{S'}$
satisfying $\mathcal{M}(Q) \subseteq \mathcal{M}(Q_l)$. If we extend $Q_l$ by repeatedly appending
vertices whose in-neighbors are all already contained in it, until we obtain a maximal topological sort $Q'$ of $\ats{T}{S'}$, then we
have $\mathcal{M}(Q) \subseteq \mathcal{M}(Q_l) \subseteq \mathcal{M}(Q')$, as desired.
\end{proof}
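The final extension step of this proof, completing a partial topological sort greedily, is the usual Kahn-style procedure. The following sketch over a generic digraph (function name and encoding are ours, purely illustrative) appends a vertex as soon as all of its in-neighbors have been appended; vertices with self-loops, or lying on or behind a cycle, are never added.

```python
def maximal_topological_sort(vertices, edges):
    """Greedy maximal topological sort: repeatedly append any vertex whose
    in-neighbors have all been appended already.  The returned sequence
    contains exactly the vertices that are not blocked by a cycle."""
    in_nbrs = {v: set() for v in vertices}
    for u, v in edges:
        in_nbrs[v].add(u)  # a self-loop (v, v) blocks v forever
    order, done = [], set()
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v not in done and in_nbrs[v] <= done:
                order.append(v)
                done.add(v)
                changed = True
    return order
```

A maximal topological sort obtained this way contains every vertex precisely when the digraph is acyclic, which is how maximality certifies acyclicity in the lemmas above.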
Our last step is to show that, whenever a solution exists, any good split refinement leads to one.
\begin{theorem}\label{thm:equiv-refinement}
Let $((T; t, \sigma), S)$ be a GTC instance, and suppose that $S$ admits a
good split refinement $S'$. Then $((T;t, \sigma), S)$ admits a solution if and
only if $((T; t, \sigma), S')$ admits a solution.
Moreover, any solution for $((T;t, \sigma), S')$, if any, is also a solution for $((T;t,\sigma), S)$.
\end{theorem}
\begin{proof}
It is easy to see that any solution $S^*$
of $((T; t, \sigma), S')$ would be a solution for $((T; t, \sigma), S)$.
Hence, if $((T; t, \sigma), S)$ does not admit a solution,
then $((T; t, \sigma), S')$ cannot admit a solution.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\textwidth]{./all-the-s-guys.pdf}
\end{center}
\caption{A representation of the trees $S, S', S^*$ and $\hat{S}$. The $X_1, X_2$ and $X_3$ triangles represent subtrees containing only leaves from $X$ (same with $Y_1, Y_2, Y_3$ and $Y$). }
\label{fig:all-the-s-guys}
\end{figure}
For the converse, suppose now that $((T; t, \sigma), S)$ admits a solution. Thus,
there is a binary refinement $S^*$ of $S$
that displays $\ensuremath{\mathcal{R}} (T;t,\sigma)$ and such that $\ats{T}{S^*}$ is acyclic.
Let $S'$ be any good split refinement of $S$ at some cherry $x$ of $S$.
Furthermore,
let $x_1, x_2$ be the children of $x$ in $S'$, and let $X = \L(S'(x_1))$
and $Y = \L(S'(x_2))$. Note that $\{X,Y\}$ is a partition of the children
of $x$ in $S$.
Consider the trees $S^*|_X$ and $S^*|_Y$. We define another tree $\hat{S}$ obtained by replacing the children of $x$ in $S^*$ by $S^*|_X$ and $S^*|_Y$. More precisely, first observe that, by construction
$\L(S^*|_X)= X$ and $\L(S^*|_Y)= Y$. Moreover, for any binary refinement $S^*$
of $S$ it must hold that $\L(S^*(x))$ is the set $\mathrm{ch}(x)$ of children of $x$ in $S$. In particular, $x$ is an
ancestor in $S^*$ of every vertex in $S^*|_X$ as well as in $S^*|_Y$.
Hence, we can safely replace the two subtrees $S^*(v_1)$ and $S^*(v_2)$ rooted at the two children $v_1,v_2$ of $x$ in $S^*$ by
$S^*|_X$ and $S^*|_Y$ (by defining the root of $S^*|_X$ and the root of $S^*|_Y$ as the two
new children of $x$) to obtain another tree $\hat{S}$ with $\L(\hat{S}) = \L(S^*)$.
By construction, $\hat{S}$ is identical to $S^*$, except that the two subtrees below $x$ are replaced by $S^*|_X$ and $S^*|_Y$.
An example of the trees $S, S', S^*$ and $\hat{S}$ is shown in Figure~\ref{fig:all-the-s-guys}.
Clearly, $\hat{S}(x_1)= S^*|_X$, resp., $\hat{S}(x_2) =S^*|_Y$ is a binary refinement of
$S'(x_1) $, resp., $S'(x_2)$. Moreover, $S^*|_{\L(S^*)\setminus (X\cup Y)}$ is a binary refinement
of $S'|_{\L(S')\setminus (X\cup Y)}$. Taking the latter two arguments together,
$\hat{S}$ is a binary refinement of $S'$.
We proceed with showing that $\hat{S}$ is a solution to $((T; t, \sigma), S')$.
To this end, we apply Prop.\ \ref{prop:IFFbinRef} and show that
$\hat{S}$ agrees with $\ensuremath{\mathcal{R}} (T; t,\sigma)$ and that $\ats{T}{\hat{S}}$ is acyclic.
Let us first argue that $\hat{S}$ agrees with $\ensuremath{\mathcal{R}} (T; t,\sigma)$.
Observe first that since $\hat{S}$ contains $S^*|_X$ and $S^*|_Y$ as subtrees, $\hat{S}$ displays all triples $ab|c \in rt(S^*)$
with $a,b,c\in X$, or with $a,b,c\in Y$. Moreover, $\hat{S}$ displays all triples $ab|c \in rt(S^*)$
for which at least one of $a,b$ and $c$ is not contained in $X\cup Y$.
The latter two arguments and $\ensuremath{\operatorname{lca}}_{\hat{S}}(X\cup Y)=x$ imply that
$\hat{S}$ displays all triples $ab|c \in rt(S^*)$ except possibly those
for which $\ensuremath{\operatorname{lca}}_{\hat{S}}(a,b,c) = x$.
Let $R_x = \{ab|c \in rt(\hat{S}) : \ensuremath{\operatorname{lca}}_{\hat{S}}(a,b,c) = x\}$.
By the latter arguments, the only triples in $rt(\hat{S})$ that are not in $rt(S^*)$
are in $R_x$, i.e.\ $rt(\hat{S}) \subseteq rt(S^*) \cup R_x$.
By the definition of a good split refinement, $S'$ agrees with $\ensuremath{\mathcal{R}} (T; t,\sigma)$.
Note that $R_x$ contains precisely those triples $ab|c$ for which either
$a,b\in X$ and $c\in Y$ or $c\in X$ and $a,b\in Y$. This observation immediately implies that
$R_x \subseteq rt(S')$.
We thus have $rt(\hat{S}) \subseteq rt(S^*) \cup rt(S')$ and since both $S^*$ and $S'$ agree with $\ensuremath{\mathcal{R}} (T; t,\sigma)$, it follows that $\hat{S}$ agrees with $\ensuremath{\mathcal{R}} (T; t,\sigma)$.
We must now argue that $\ats{T}{\hat{S}}$ is acyclic.
Assume for contradiction that $\ats{T}{\hat{S}}$ contains a cycle $C = (w_1, w_2, \ldots, w_k, w_1)$.
Since $\hat{S}$ is binary and agrees with $\ensuremath{\mathcal{R}} (T; t, \sigma)$, $\hat{S}$ displays $\ensuremath{\mathcal{R}} (T; t, \sigma)$ and Theorem \ref{thm:SpeciesTriplets}
implies that there is a reconciliation map
from $(T;t,\sigma)$ to $\hat{S}$.
By Lemma \ref{lem:self-loops}, $\ats{T}{\hat{S}}$ does not
contain self-loops and thus $k>1$ for $C = (w_1,\ldots, w_k, w_1)$.
We will derive a contradiction by showing that $\ats{T}{S^*}$ contains a cycle.
The proof is divided in a series of claims.
\begin{owndesc}
\item[Claim 1:]
\emph{If $(u, v) \in E(\ats{T}{\hat{S}})$ and $u, v \notin V(\hat{S}(x))$, then $(u, v) \in E(\ats{T}{S^*})$.}
Note that $\hat{S}$ and $S^*$ are identical except for the subtree rooted at $x$.
Thus, $(u, v)$ is an A2-edge of $\ats{T}{\hat{S}}$ if and only if it is an A2-edge of $\ats{T}{S^*}$.
Moreover, for all other edge types, we have $\hat{\mu}_{\hat{S}}(u) = \hat{\mu}_{S^*}(u)$, $\hat{\mu}_{\hat{S}}(v) = \hat{\mu}_{S^*}(v)$,
as well as $\ensuremath{\operatorname{lca}}_{\hat{S}}(\hat{\mu}(u), \hat{\mu}(v)) = \ensuremath{\operatorname{lca}}_{S^*}(\hat{\mu}(u), \hat{\mu}(v))$.
This directly implies that every edge $(u, v) \in E(\ats{T}{\hat{S}})$ that does not involve a vertex of $\hat{S}(x)$ is also an edge of $\ats{T}{S^*}$.
This proves Claim 1.
\end{owndesc}
To stress once again,
since $\hat{S}$ is binary and agrees with $\ensuremath{\mathcal{R}} (T; t,\sigma)$, it must display
$\ensuremath{\mathcal{R}} (T; t,\sigma)$. Thus, we can apply Theorem \ref{thm:SpeciesTriplets} to
conclude that there is a reconciliation map from $(T; t,\sigma)$ to $\hat{S}$.
Now, let $Z = (V(T) \cup V(\hat{S})) \setminus V(\hat{S}(x))$.
Observe that $Z = (V(T) \cup V(S^*)) \setminus V(S^*(x))$.
If $C$ does not contain a vertex of $V(\hat{S}(x))$, then by Claim 1,
every edge of $C$ is also in $\ats{T}{S^*}$. Thus $C$ is also a cycle in $\ats{T}{S^*}$, contradicting that it is acyclic.
Therefore, we may assume that $C$ contains at least one vertex from $V(\hat{S}(x))$.
On the other hand, assume that $C$ does not contain a vertex of $Z$. Then all the vertices of $C$ belong to $V(\hat{S}(x))$.
Since, as we argued before, $\ats{T}{\hat{S}}$ does not contain self-loops, we conclude that every edge $(u,v)$ of $C$ is either an A1- or an A2-edge of $\ats{T}{\hat{S}}$
that satisfies $v \prec_{\hat{S}} u$. However, this implies that the edges of $C$ cannot form a cycle; a contradiction.
Therefore, $C$ must contain vertices from both $V(\hat{S}(x))$ and $Z$.
Assume, without loss of generality, that $w_1 \in V(\hat{S}(x))$ and $w_k \in Z$.
Now, $C$ can be decomposed into a set of subpaths that alternate between
vertices of $V(\hat{S}(x))$ and of $Z$. More precisely, we say that a subpath $P =
(w_i, w_{i+1}, \ldots, w_l)$ of $C$, where $1 \leq i \leq l \leq k$, is a
$V(\hat{S}(x))$-subpath if $w_i, \ldots, w_l \in V(\hat{S}(x))$. Similarly, we say that
$P$ is a $Z$-subpath if $w_i, \ldots, w_l \in Z$. Now, $C = (w_1, \ldots, w_k)$ is a
concatenation of subpaths $P_1, P'_1, P_2, P'_2, \ldots, P_h, P'_h$ such that
for $1 \leq i \leq h$, $P_i$ is a non-empty $V(\hat{S}(x))$-subpath and $P'_i$ is a
non-empty $Z$-subpath.
We want to show that $\ats{T}{S^*}$ contains a cycle. To this end, we will construct
a cycle $C^*$ in $\ats{T}{S^*}$ such that $C^*$ is the concatenation of subpaths $P_1^*, P_1', \ldots, P_h^*, P_h'$, where each $P^*_i$
is a subpath of $\ats{T}{S^*}$ that replaces $P_i$.
First notice that for each $1 \leq i \leq h$, all the edges of $P'_i$ are in $\ats{T}{S^*}$ by Claim 1. Therefore, every $P'_i$ is a path in $\ats{T}{S^*}$.
In what follows, we consider the $V(\hat{S}(x))$-subpath $P_i = (w_p, w_{p+1}, \ldots, w_q)$, where $1 \leq i \leq h$
($w_p = w_q$ may be possible if $P_i$ consists of a single vertex only).
Notice that $w_{p - 1}$ and $w_{q+1}$ are in $Z$ (where we define $w_{p-1} = w_k$ if $p = 1$ and $w_{q+1} = w_1$ if $q = k$).
We construct a path $P^*_i = (w^*_1, \ldots, w^*_r)$ of $\ats{T}{S^*}$ such that
$(w_{p-1}, w^*_1) \in E(\ats{T}{S^*})$ and $(w^*_r, w_{q+1}) \in E(\ats{T}{S^*})$.
To this end, we first establish the following claim.
\begin{owndesc}
\item[Claim 2:]
\emph{The vertex $x$ does not belong to $C$.}
Let $Q$ be a maximal topological sort of $\ats{T}{S}$
and let $Q'$ be a maximal topological sort of $\ats{T}{S'}$.
By Lemma~\ref{lem:q-stays-topo}, $\mathcal{M}(Q) \subseteq \mathcal{M}(Q')$.
Moreover, since $S'$ is a good split refinement, all the in-neighbors
of $x$ in $\ats{T}{S'}$ belong to $\mathcal{M}(Q)$. Since $\mathcal{M}(Q) \subseteq \mathcal{M}(Q')$, all the in-neighbors of
$x$ in $\ats{T}{S'}$ are also contained in $\mathcal{M}(Q')$.
This and maximality of $Q'$ implies that
$x$ is itself also in $Q'$.
Let $\hat{Q}$ be a maximal topological sort of $\ats{T}{\hat{S}}$.
Since $\hat{S}$ can be obtained from a sequence of split refinements starting from $S'$,
Lemma~\ref{lem:q-stays-topo} implies that $\mathcal{M}(Q') \subseteq \mathcal{M}(\hat{Q})$.
In particular, $x \in \mathcal{M}(\hat{Q})$.
Lemma \ref{lem:Qproperty} implies that $x$ cannot be contained in any cycle of $\ats{T}{\hat{S}}$,
which proves Claim 2.
\end{owndesc}
Recalling that $\ats{T}{\hat{S}}$ does not contain self-loops,
every edge $(u, v)$ of $P_i$ is an A1- or A2-edge of $\ats{T}{\hat{S}}$
and satisfies $v \prec_{\hat{S}} u$.
This implies that either $w_q \prec_{\hat{S}} w_{q-1} \prec_{\hat{S}} \ldots \prec_{\hat{S}} w_p$,
or that $w_p=w_q$.
In either case, we have $w_q \preceq_{\hat{S}} w_p$.
By Claim 2, $w_p \neq x$. This and $w_p\in V(\hat{S}(x))$ implies that $w_p \prec_{\hat{S}} x$.
By construction of $\hat{S}$ we therefore have
$\L(\hat{S}(w_p)) \subseteq X$ or $\L(\hat{S}(w_p)) \subseteq Y$.
We will assume, without loss of generality, that $\L(\hat{S}(w_p)) \subseteq X$.
Since $w_q \preceq_{\hat{S}} w_p$, we have $\L(\hat{S}(w_q)) \subseteq \L(\hat{S}(w_p)) \subseteq X$.
We now construct two important sets $X_p \subseteq X$ and $X_q \subseteq X$ that are quite
helpful for our construction of a cycle $C^*$ in $\ats{T}{S^*}$.
\begin{owndesc}
\item[Claim 3:]
\emph{There exists a subset $X_p \subseteq X$ such that $w_p =\ensuremath{\operatorname{lca}}_{\hat{S}}(X_p)$ and
$(w_{p-1}, \ensuremath{\operatorname{lca}}_{S^*}(X_p)) \in E(\ats{T}{S^*})$.}
Since $w_p \in V(\hat{S})$, the edge $(w_{p-1}, w_p)$ is either an A1-, A2- or
A3-edge in $\ats{T}{\hat{S}}$. Suppose first that $(w_{p-1}, w_p)$ is an A2-edge. Then $w_{p-1}$ is
the parent of $w_p$ in $\hat{S}$. Since $w_p \prec_{\hat{S}} x$, this implies that
$w_{p-1} \in V(\hat{S}(x))$, contradicting $w_{p-1} \in Z$. Therefore, this case is
not possible.
Suppose that $(w_{p-1}, w_p)$ is an A1-edge defined by some $(u, v) \in E(T)$.
Then $w_p\in V(\hat{S})$ implies $w_p = \hat{\mu}_{\hat{S}}(v) = \ensuremath{\operatorname{lca}}_{\hat{S}}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v))$
and we define $X_p = \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$. We must prove that $(w_{p-1}, \ensuremath{\operatorname{lca}}_{S^*}(X_p)) \in E(\ats{T}{S^*})$.
Since $(u, v) \in E(T)$ yields the A1-edge $(w_{p-1}, w_p)$ in $\ats{T}{\hat{S}}$,
we have $t(v)\in \{\odot, \mathfrak{s}\}$.
Hence, $(u, v)$ yields some A1-edge $(z, \ensuremath{\operatorname{lca}}_{S^*}(X_p))$ in $\ats{T}{S^*}$
for some vertex $z$. In what follows, we show that $z=w_{p-1}$.
If $w_{p-1} \in V(T)$, then $w_{p-1} = u$
and $(u, v)$ defines the A1-edge $(u, \hat{\mu}_{S^*}(v)) = (w_{p-1},
\ensuremath{\operatorname{lca}}_{S^*}(X_p))$ in $\ats{T}{S^*}$. If $w_{p-1} \in V(\hat{S})$, then $w_{p-1} =
\hat{\mu}_{S^*}(u)$. Since $w_{p-1} \in Z$, vertex $w_{p-1}$ must be a strict ancestor of $x$ in $\hat{S}$.
This and the fact that $S^*$ and $\hat{S}$ coincide except possibly in
$S^*(x)$ and $\hat{S}(x)$ implies that
$\hat{\mu}_{\hat{S}}(u) = \hat{\mu}_{S^*}(u) = w_{p-1}$.
Hence, $(w_{p-1}, \ensuremath{\operatorname{lca}}_{S^*}(X_p)) \in E(\ats{T}{S^*})$.
Finally, suppose that $(w_{p-1}, w_p)$ is an A3-edge defined by some $u \in
V(T)$. Then $w_{p-1} = u$ and $w_p = \hat{\mu}_{\hat{S}}(u) = \ensuremath{\operatorname{lca}}_{\hat{S}}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u))$, where
$\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \subseteq X$. Define $X_p = \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)$. Then $(w_{p-1}, w_p) = (u,
\ensuremath{\operatorname{lca}}_{\hat{S}}(X_p))$ and $(u, \hat{\mu}_{S^*}(u)) = (w_{p-1}, \ensuremath{\operatorname{lca}}_{S^*}(X_p)) \in
E(\ats{T}{S^*})$. This proves Claim 3.
\end{owndesc}
\begin{owndesc}
\item[Claim 4:]
\emph{There exists a subset $X_q \subseteq X$ such that $w_{q} = \ensuremath{\operatorname{lca}}_{\hat{S}}(X_q)$ and
$(\ensuremath{\operatorname{lca}}_{S^*}(X_q), w_{q+1}) \in E(\ats{T}{S^*})$.}
We show first that $w_{q+1} \in V(T)$. Assume, for contradiction, that $w_{q+1} \in V(\hat{S})$.
Since $(w_q,w_{q+1})$ is an edge of $\ats{T}{\hat{S}}$ and
since $w_{q} \in V(\hat{S})$, the edge $(w_q,w_{q+1})$ is an A2-edge in $\ats{T}{\hat{S}}$.
However, this implies that $w_{q+1} \prec_{\hat{S}} w_q$ and thus, $w_{q+1}\in V(\hat{S}(x))$;
a contradiction to $w_{q+1} \in Z$. Hence, $w_{q+1} \in V(T)$.
Therefore, $(w_q, w_{q+1})$ is either an A1- or A4-edge in $\ats{T}{\hat{S}}$.
Suppose first that $(w_q, w_{q+1})$ is an A1-edge of $\ats{T}{\hat{S}}$ defined by some $(u, v) \in E(T)$.
Then $(w_q, w_{q+1}) = (\hat{\mu}_{\hat{S}}(u), v)$, where $\hat{\mu}_{\hat{S}}(u) = \ensuremath{\operatorname{lca}}_{\hat{S}}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u))$ and where $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \subseteq X$.
Define $X_q = \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)$. Then $(w_q, w_{q+1}) = (\ensuremath{\operatorname{lca}}_{\hat{S}}(X_q), w_{q+1})$, and
$(\hat{\mu}_{S^*}(u), v) = (\ensuremath{\operatorname{lca}}_{S^*}(X_q), w_{q+1})$ is an A1-edge of $\ats{T}{S^*}$.
Suppose instead that $(w_q, w_{q+1})$ is an A4-edge of $\ats{T}{\hat{S}}$ defined by some $(u, v) \in \mathcal{E}_T$ with $u=w_{q+1}$.
Then $(w_q, w_{q+1}) = (\ensuremath{\operatorname{lca}}_{\hat{S}}(\hat{\mu}_{\hat{S}}(u), \hat{\mu}_{\hat{S}}(v)), u) = (\ensuremath{\operatorname{lca}}_{\hat{S}}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \cup \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)), u)$.
Define $X_q = \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \cup \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$.
Hence, $w_q = \ensuremath{\operatorname{lca}}_{\hat{S}}(X_q)$,
and since $w_q \in V(\hat{S}(x))$, we must have $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) \cup \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v) \subseteq X$.
Moreover, $(\ensuremath{\operatorname{lca}}_{S^*}(\hat{\mu}_{S^*}(u), \hat{\mu}_{S^*}(v)), u) = (\ensuremath{\operatorname{lca}}_{S^*}(X_q), w_{q+1})$ is an A4-edge of $\ats{T}{S^*}$.
This completes the proof of Claim 4.
\end{owndesc}
\begin{owndesc}
\item[Claim 5:]
\emph{Let $X_p$ and $X_q$ be subsets of $X$ as defined in Claim 3 and 4.
Then in $\ats{T}{S^*}$, there exists a path from $\ensuremath{\operatorname{lca}}_{S^*}(X_p)$ to $\ensuremath{\operatorname{lca}}_{S^*}(X_q)$.}
By Claim 3 and 4 we have $w_p = \ensuremath{\operatorname{lca}}_{\hat{S}}(X_p)$ and $\ensuremath{\operatorname{lca}}_{\hat{S}}(X_q) = w_q$, respectively.
As argued after the proof of Claim 2,
we have $\ensuremath{\operatorname{lca}}_{\hat{S}}(X_q) = w_q \preceq_{\hat{S}} w_p = \ensuremath{\operatorname{lca}}_{\hat{S}}(X_p)$.
Because $\hat{S}$ contains $S^*|_X$ as a rooted subtree, it follows that
$\ensuremath{\operatorname{lca}}_{S^*}(X_q) \preceq_{S^*} \ensuremath{\operatorname{lca}}_{S^*}(X_p)$.
Because of the A2-edges, there must be a path from $\ensuremath{\operatorname{lca}}_{S^*}(X_p)$ to
$\ensuremath{\operatorname{lca}}_{S^*}(X_q)$ in $\ats{T}{S^*}$. This completes the proof of Claim 5.
\end{owndesc}
We may now finish the argument.
For each $1 \leq i \leq h$, we let $P^*_i$ be the path obtained from Claim 5.
We claim that by concatenating the paths $P^*_1, P'_1, P^*_2, P'_2, \ldots, P^*_h, P'_h$ in $\ats{T}{S^*}$,
we obtain a cycle.
We have already argued that each $P^*_i$ and each $P'_i$ is a path in $\ats{T}{S^*}$.
The rest follows from Claims 3 and 4, since they imply that for each $1 \leq i \leq h$, the last vertex of $P^*_i$ has the first vertex of $P'_i$
as an out-neighbor, and the last vertex of $P'_i$ has the first vertex of $P^*_{i+1}$ as an out-neighbor
(where $P^*_{h+1}$ is defined to be $P^*_1$).
We have thus found a cycle in $\ats{T}{S^*}$, a contradiction to the acyclicity of $\ats{T}{S^*}$.
Hence, $\ats{T}{\hat{S}}$ is acyclic. This and the fact that $\hat{S}$ displays $\ensuremath{\mathcal{R}} (T;t,\sigma)$
implies that $\hat{S}$ is a solution to $((T; t, \sigma), S')$. Therefore, $((T; t, \sigma), S')$ admits a solution.
\end{proof}
\begin{theorem}
Algorithm \ref{alg:gtcRefinement} determines whether a given GTC instance
$((T; t, \sigma), S)$ admits a solution or not and, in the affirmative
case, constructs a solution $S^*$ of $((T; t, \sigma), S)$.
\label{thm:algo1}
\end{theorem}
\begin{proof}
Let $((T; t, \sigma), S)$ be a GTC instance.
First, Line \ref{line:binary} tests whether $S$ is binary or not.
If $S$ is binary, then $S$ is already its binary refinement and
Prop.\ \ref{prop:IFFbinRef} implies that $S$ is a solution to
$((T; t, \sigma), S)$ if and only if $S$
agrees with $\ensuremath{\mathcal{R}} (T;t,\sigma)$ and $\ats{T}{S}$ is acyclic.
The latter is tested in Line \ref{line:GTC-properties}.
In accordance with Prop.\ \ref{prop:IFFbinRef}, the tree $S$ is returned whenever the latter conditions are satisfied and,
otherwise, ``there is no solution'' is returned.
Assume that $S$ is not binary. If $S$ admits no good split refinement, then
Alg.\ \ref{alg:gtcRefinement} (Line \ref{line:no-good-split}) returns
``there is no solution'', which is in accordance with Prop.\ \ref{prop:no-solution}.
Otherwise, if $S$ admits a good split refinement $S'$, then we can apply
Theorem \ref{thm:equiv-refinement} to conclude that
$((T; t, \sigma), S)$ admits a solution
if and only if $((T; t, \sigma), S')$ admits a solution.
Now, we recurse on $((T; t, \sigma), S')$ as new input of
Alg.\ \ref{alg:gtcRefinement} in Line \ref{line:good-split}.
The correctness of Alg.\ \ref{alg:gtcRefinement} is finally ensured by
Theorem \ref{thm:equiv-refinement}, which states that any solution for
$((T; t, \sigma), S')$, and thus, by Prop.\ \ref{prop:IFFbinRef},
any binary refinement $S^*$ obtained by a series of good split refinements
starting with $S$, is also a solution for $((T; t, \sigma), S)$.
\end{proof}
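The control flow verified in this proof can be summarized by the following recursive skeleton (illustrative only; \texttt{is\_binary}, \texttt{agrees\_and\_acyclic} and \texttt{good\_split\_refinement} are hypothetical callables standing in for the tests of Lines \ref{line:binary}, \ref{line:GTC-properties} and \ref{line:no-good-split}, and the tree representation is left abstract):

```python
def gtc_refine(S, is_binary, agrees_and_acyclic, good_split_refinement):
    """Recursive skeleton of the refinement algorithm.  The three callables
    are stand-ins for the tests used in the algorithm; they are not part of
    the formal development."""
    if is_binary(S):
        # S is its own binary refinement: it is a solution iff it agrees
        # with the triple set and the auxiliary graph is acyclic.
        return S if agrees_and_acyclic(S) else None
    S_prime = good_split_refinement(S)  # None if no good split refinement
    if S_prime is None:
        return None                     # no solution exists
    return gtc_refine(S_prime, is_binary, agrees_and_acyclic,
                      good_split_refinement)
```

Each recursive call works on a strictly more refined tree, so the recursion terminates once $S$ has become binary or no good split refinement exists.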
\subsection{Finding a Good Split Refinement}
To find a good split refinement, if any, we can loop through each cherry $x$ and ask
whether there is a good split refinement at $x$. Clearly, every partition $X_1,X_2$ of $\mathrm{ch}(x)$
may provide a good split refinement, and thus there might be $O(2^{|\mathrm{ch}(x)|})$ cases to be tested for each cherry $x$.
To circumvent this issue, we define a second auxiliary graph that is an extension of the well-known Aho-graph
to determine whether a set of triplets is compatible or not \cite{semple2003phylogenetics,Steel:book,Aho:81}.
For a given set $R$ of triplets, the Aho-graph has vertex set $V$ and (undirected) edges $\{a,b\}$
for all triplets $ab|c\in R$ with $a,b,c\in V$.
Essentially we will use this Aho-graph and add additionally edges to it.
The connected components of this extended graph eventually guides us to
the process of finding good split refinements. Before we make this definition
more precise we give the following.
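For concreteness, the construction of the classical Aho graph can be sketched as follows (a minimal illustration; the function name and the encoding of a triplet $ab|c$ as a tuple $(a,b,c)$ are our own, not part of the formal development):

```python
def aho_graph(leaves, triplets):
    """Classical Aho graph: vertex set `leaves`, and an undirected edge
    {a, b} for every triplet ab|c whose three leaves all lie in `leaves`.
    Triplets are encoded as tuples (a, b, c) meaning ab|c."""
    adjacency = {v: set() for v in leaves}
    for a, b, c in triplets:
        if a in adjacency and b in adjacency and c in adjacency:
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

# Example: the triplets AB|D and AC|D yield the edges A-B and A-C.
g = aho_graph({"A", "B", "C", "D"}, {("A", "B", "D"), ("A", "C", "D")})
```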
\begin{lemma}\label{lem:good-ancestors}
Let $Q$ be a maximal topological sort of $\ats{T}{S}$. If there exists a
good split refinement $S'$ of $S$ at a cherry $x$, then every strict ancestor of
$x$ in $S$ and $S'$ is in $\mathcal{M}(Q)$.
\end{lemma}
\begin{proof}
Let $S'$ be a good split refinement of $S$ at $x$.
By construction, the sets of ancestors of $x$ in $S$ and $S'$ are equal.
Assume that there is a strict ancestor $y$ of $x$ that is not in $Q$.
Due to the A2-edges in $\ats{T}{S}$ there is a directed path $P$ from
$y$ to $x$ in $\ats{T}{S}$.
Lemma \ref{lem:Qproperty} implies that none of the vertices along this path
$P$ are contained in $\mathcal{M}(Q)$. Since $y$ is a strict ancestor of $x$ in $S$, we
can conclude that the parent $p(x)$ of $x$ in $S$ is not contained in $\mathcal{M}(Q)$.
Again, due to the A2-edges of $\ats{T}{S'}$, the pair $(p(x), x)$ is an edge
in $\ats{T}{S'}$ and hence, $p(x)$ is an in-neighbor of $x$ in $\ats{T}{S'}$.
However, since $S'$ is a good split refinement of $S$,
all the in-neighbors of $x$ in $\ats{T}{S'}$ must, by definition,
belong to $\mathcal{M}(Q)$; a contradiction. Thus, every strict ancestor $y$ of $x$ in $S$ and $S'$
is in $\mathcal{M}(Q)$.
\end{proof}
In what follows, when we ask whether a fixed $x$ admits a good split
refinement, we first check whether all of its strict ancestors are in $Q$, where $Q$ is a
maximal topological sort of $\ats{T}{S}$. If this is not the case, then, by contraposition of
Lemma~\ref{lem:good-ancestors}, we may immediately conclude that there is no good split refinement at $x$.
Otherwise, we investigate $x$ further. We now define the new auxiliary graph used to determine whether the cherry $x$ of $S$ admits a good split refinement.
\begin{definition}[Good-Split-Graph]
Let $(T;t,\sigma)$ be a gene tree and $S$ be a species tree.
Moreover, let $Q$ be a maximal topological sort of $\ats{T}{S}$.
We define $G((T; t, \sigma), S, x) = (V, E)$ as the undirected graph with vertex set
$V = \L(S(x))$. Moreover, an (undirected) edge $ab$ is contained in $E$
if and only if $a,b \in \L(S(x))$ and $a,b$ are distinct and satisfy at least one of the following conditions:
\begin{description}
\item[(C1)]
there exists $c \in \L(S(x))$ such that $ab|c \in \ensuremath{\mathcal{R}} (T; t,\sigma)$;
\item[(C2)]
there exists an edge $(u, v) \in E(T)$ such that $t(u) \in \{\mathfrak{d}, \mathfrak{t}\}$, $u \notin \mathcal{M}(Q)$, $t(v) = \mathfrak{s}$, and $\{a, b\} \subseteq \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$;
\item[(C3)]
there exists an edge $(u, v) \in E(T)$ such that $t(u) = t(v) = \mathfrak{s}$, $\hat{\mu}_{S}(u) = x$ and $\{a, b\} \subseteq \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$;
\item[(C4)]
there exists a vertex $u \in V(T) \setminus \mathcal{M}(Q)$ such that $t(u) \in \{\mathfrak{d}, \mathfrak{t}\}$ and $\{a, b\} \subseteq \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)$.
\end{description}
\label{def:good-split-graph}
\end{definition}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=.6\textwidth]{./working-exmpl-2.pdf}
\end{center}
\caption{Top right: the gene tree $(T;t,\sigma)$ from Fig.\ \ref{fig:least}
from which we obtain the species triples $\S(T;t,\sigma) = \{AB|D,AC|D\}$.
We start with the star tree $S_1$ (top left) and obtain $G((T; t, \sigma), S_1, 1')$,
shown to the right of $S_1$. The graph $G((T; t, \sigma), S_1, 1')$ has four vertices $A,B,C,D$ and two edges.
The edge labels indicate which of the conditions in Def.\ \ref{def:good-split-graph}
yield the respective edge. In $G((T; t, \sigma), S_1, 1')$, there is
only one non-trivial connected component, which yields the good split that
results in the tree $S_2$ (lower left). There is only one cherry $2'$ in $S_2$, and
the corresponding graph $G((T; t, \sigma), S_2, 2')$ is drawn to the right of $S_2$.
Again, the connected components yield a good split that results in the binary
tree $S_3$. The tree $S_3$ is precisely the species tree shown in the
middle of Fig.\ \ref{fig:least}.
}
\label{fig:working-exmpl-2}
\end{figure}
Intuitively, edges represent pairs of species that must belong to the same
part of a split refinement at $x$. That is, (C1) links species that would
contradict a triplet of $\ensuremath{\mathcal{R}} (T; t, \sigma)$ if they were separated (as in
the classical BUILD algorithm \cite{semple2003phylogenetics,Steel:book,Aho:81});
(C2) links species that would yield an A1-edge from a
vertex not in $Q$ into $x$ if they were separated; (C3) links species that
would create a self-loop on $x$ if they were separated; and (C4) links
species that would create an A3-edge from a vertex not in $Q$ into $x$ if separated. We want the
graph to be disconnected, since this allows us to split the children of $x$ without
separating any two children whose separation could prevent a good split refinement at $x$.
Considering only such pairs of children turns out to be necessary and sufficient, and Theorem~\ref{thm:gsp-disco} below
formalizes this idea.
\begin{definition}
Given a graph $H$, we say that $(A, B)$ is a \emph{disconnected bipartition}
of $H$ if $A \cup B = V(H)$, $A \cap B = \emptyset$ and for each $a \in A, b \in B$, $ab \notin E(H)$.
\end{definition}
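If a disconnected bipartition exists, one can be obtained from the connected components: take the component of an arbitrary vertex as $A$ and the remaining vertices as $B$. A minimal sketch (the helper name and the adjacency-dict encoding are ours):

```python
def disconnected_bipartition(adjacency):
    """Return a disconnected bipartition (A, B) of the graph given as an
    adjacency dict, or None if the graph is connected (or too small)."""
    vertices = list(adjacency)
    if len(vertices) < 2:
        return None
    # Collect the connected component of an arbitrary start vertex
    # via depth-first search.
    start = vertices[0]
    component, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adjacency[v]:
            if w not in component:
                component.add(w)
                stack.append(w)
    if len(component) == len(vertices):
        return None  # graph is connected: no disconnected bipartition
    return component, set(vertices) - component

# Example: edges A-B and A-C, isolated vertex D.
g = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}, "D": set()}
A, B = disconnected_bipartition(g)
```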
We are now in the position to
state how good split refinements can be identified.
Note that we may assume w.l.o.g.\ that $S$ agrees with $\ensuremath{\mathcal{R}}(T;t,\sigma)$, as otherwise there can be no good split refinement at all.
\begin{theorem}\label{thm:gsp-disco}
Let $((T; t, \sigma), S)$ be a GTC instance, and assume that $S$ agrees
with $\ensuremath{\mathcal{R}} (T; t, \sigma)$. Let $Q$ be a maximal topological sort of
$\ats{T}{S}$. Then there exists a good split refinement of $S$ if and only
if there exists a cherry $x$ of $S$ such that every strict ancestor of $x$
in $S$ is in $Q$, and such that $G((T; t, \sigma), S, x)$ is disconnected.
In particular, for any disconnected bipartition $(A, B)$ of $G$, the split refinement that partitions the children of $x$ into $A$ and $B$ is a good split refinement.
\end{theorem}
\begin{proof}
In what follows, put $G \coloneqq G((T; t, \sigma), S, x)$.
Suppose that there exists a good split refinement $S'$ of $S$.
Let $x$ be the cherry of $S$ at which $S$ was refined to obtain $S'$, and let $x_1, x_2$ be the children of $x$ in $S'$.
Let $Q$ be a maximal topological sort of $\ats{T}{S}$. By Lemma~\ref{lem:good-ancestors}, every strict ancestor of $x$ in $S$ is in $Q$.
Let $A = \L(S(x_1))$ and $B = \L(S(x_2))$.
We claim that for any pair $a \in A, b \in B$, $ab \notin E(G)$.
Assume for contradiction that there is an edge $ab$ with $a \in A, b \in B$. We treat each possible edge type separately.
\smallskip
\noindent
{\bf (C1)}: Suppose that $ab \in E(G)$ because there exists $c \in \L(S(x))$ such
that $ab|c \in \ensuremath{\mathcal{R}} (T; t,\sigma)$. Because $a \in A$ and $b \in B$ and
by construction of $S'$, we either have $ac|b \in rt(S')$ if $c \in A$, or
$bc|a \in rt(S')$ if $c \in B$. In either case, $S'$ does not agree with
$\ensuremath{\mathcal{R}} (T; t,\sigma)$, contradicting that $S'$ is a good split refinement.
\smallskip
\noindent
{\bf (C2)}: Suppose instead that $ab \in E(G)$ because there exists an edge $(u,
v) \in E(T)$ with $t(u) \in \{\mathfrak{d}, \mathfrak{t}\}$, $u \notin \mathcal{M}(Q)$, $t(v) =
\mathfrak{s}$ and $a,b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$. By construction of $S'$ and due to the choice
of $A = \L(S(x_1))$ and $B = \L(S(x_2))$, we have
$\hat{\mu}_{S'}(v) = lca_{S'}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v))
\succeq_{S'} lca_{S'}(a,b) = x$. If $\hat{\mu}_{S'}(v) = x$, then $(u,
\hat{\mu}_{S'}(v)) = (u, x)$ is an A1-edge of $\ats{T}{S'}$. Thus, $x$ has in-neighbor
$u$ in $\ats{T}{S'}$ such that $u \notin
\mathcal{M}(Q)$, which contradicts that $S'$ is a good split refinement. So assume
that $\hat{\mu}_{S'}(v) \succ_{S'} x$. In this case, $(u, \hat{\mu}_{S'}(v))$ is an
A1-edge of $S'$, and by Lemma~\ref{lem:ancestors-dont-change}, $(u,
\hat{\mu}_{S'}(v)) \in E(\ats{T}{S})$. Since $u \notin \mathcal{M}(Q)$, we must have
$\hat{\mu}_{S'}(v) \notin \mathcal{M}(Q)$. Since $\hat{\mu}_{S'}(v) \succ_S x$, we obtain a
contradiction to Lemma~\ref{lem:good-ancestors}.
\smallskip
\noindent
{\bf (C3)}: Suppose that $ab \in E(G)$ because there exists an edge $(u, v) \in
E(T)$ with $t(u) = t(v) = \mathfrak{s}$, $\hat{\mu}_{S}(u) = x$ and $a,b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$.
Note, since $\ensuremath{\operatorname{lca}}_S(a,b)=x$ and $a,b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$,
it must hold that $x\preceq_S\hat{\mu}_S(v)$.
Moreover, $t(u) = t(v) = \mathfrak{s}$ implies that $u$ and $v$
are contained in the same connected component of $\ensuremath{T_{\mathcal{\overline{E}}}}$. This and
$v\prec_{\ensuremath{T_{\mathcal{\overline{E}}}}} u$ implies $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)\subseteq \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)$. Hence,
$\hat{\mu}_S(v) \preceq_S \hat{\mu}_S(u)$.
Now, $x\preceq_S\hat{\mu}_S(v) \preceq_S \hat{\mu}_S(u) = x$ implies
$\hat{\mu}_S(v) =\hat{\mu}_S(u) = x$.
Therefore, $(\hat{\mu}_S(u), \hat{\mu}_S(v)) = (x, x)$ is an A1-edge
of $\ats{T}{S}$, and it follows that $x \notin \mathcal{M}(Q)$ (a vertex with a
self-loop can never be added to a maximal topological sort). Moreover,
because $a, b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$ and $a \in A = \L(S(x_1))$ and $b\in B = \L(S(x_2))$, it holds that
$\hat{\mu}_{S'}(u) = \hat{\mu}_{S'}(v) = x$. Hence $(x, x)$ is an A1-edge of
$\ats{T}{S'}$ as well, and $x$ has an in-neighbor not in $Q$ (namely $x$
itself). This contradicts the assumption that $S'$ is a good split
refinement.
\smallskip
\noindent
{\bf (C4)}:
Suppose that $ab \in E(G)$ because there is a vertex $u \in V(T) \setminus
\mathcal{M}(Q)$ such that $t(u) \in \{\mathfrak{d}, \mathfrak{t}\}$ and $a,b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)$. The
reasoning is similar to Case (C2). That is, we must have $p :=
\hat{\mu}_{S'}(u) = lca_{S'}(\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)) \succeq_{S'} lca_{S'}(a, b) =
x$. Now, $\ats{T}{S'}$ contains the A3-edge $(u, p)$. We cannot have $p =
x$ because $u \notin \mathcal{M}(Q)$ and $S'$ is a good split refinement of $S$.
Thus $p \succ_{S'} x$. In this case, $(u, p)
\in E(\ats{T}{S})$ by Lemma~\ref{lem:ancestors-dont-change}. Thus $p$
cannot be in $\mathcal{M}(Q)$, which contradicts Lemma~\ref{lem:good-ancestors}.
\smallskip
We have thus shown that the edge $ab$ cannot exist for any pair $a\in A$ and $b\in B$.
Since $A$ and $B$ form a partition of $V(G)$, the graph $G$ must be disconnected.
Conversely, suppose that there exists a cherry $x$ of $S$ such that $G$ is
disconnected and such that every strict ancestor of $x$ in $S$ is in $Q$.
Let $(A, B)$ be any disconnected bipartition of $G$.
Furthermore, let $S'$ be the split refinement of $S$ obtained by splitting
the children of $x$ into $A$ and $B$ and let $x_1, x_2$ be the two
children of $x$ in $S'$. W.l.o.g.\ assume that $x_1$ and $x_2$ are the ancestors of the
leaves in $A$ and $B$, respectively. We claim that $S'$ is a good split
refinement.
Let us first argue that $S'$ agrees with $\ensuremath{\mathcal{R}} (T; t, \sigma)$.
Assume for contradiction that $S'$ displays a triplet $ac|b$, but that $ab|c \in \ensuremath{\mathcal{R}} (T; t, \sigma)$.
By assumption, $S$ agrees with $\ensuremath{\mathcal{R}} (T; t, \sigma)$, so $ac|b \in rt(S') \setminus rt(S)$.
This implies that $\ensuremath{\operatorname{lca}}_{S'}(a, b) = \ensuremath{\operatorname{lca}}_{S'}(c, b) = x$.
W.l.o.g.\ we may assume that $a, c \in A$ and $b \in B$.
However, Condition (C1) implies that we have the edge
$ab \in E(G)$, contradicting that $(A, B)$ forms a disconnected bipartition.
Therefore, $S'$ agrees with $\ensuremath{\mathcal{R}} (T; t, \sigma)$.
It remains to show that all in-neighbors of $x$ in $\ats{T}{S'}$ are contained in $\mathcal{M}(Q)$.
Assume, for contradiction, that there is an edge $(p, x) \in E(\ats{T}{S'})$
such that $p \notin \mathcal{M}(Q)$.
Since $x\in V(S')$, the edge $(p, x)$ is either an A1-, A2- or A3-edge in $\ats{T}{S'}$.
As before, we check the possible cases separately.
\begin{owndesc}
\item[\textnormal{\em Case: $(p, x)$ is an A1-edge and $p \neq x$.}] \ \\
In this case $(p, x)$ is defined by some edge $(u, v) \in E(T)$. Suppose
that $(p, x) = (\hat{\mu}_{S'}(u), \hat{\mu}_{S'}(v))$. Since $p \neq x$, $p$ is a
strict ancestor of $x$ in $S'$, and hence also in $S$. This is not
possible, since we assume that every strict ancestor of $x$ in $S$ belongs
to $Q$ (whereas here we suppose $p \notin \mathcal{M}(Q)$). We deduce that $(p, x) =
(u, \hat{\mu}_{S'}(v))$. Therefore, $u \notin \mathcal{M}(Q)$, $t(u) \in \{\mathfrak{d},
\mathfrak{t}\}$ and $t(v) = \mathfrak{s}$. Moreover, since $\hat{\mu}_{S'}(v) = x$
and $x$ has only the two children $x_1$ and $x_2$ in $S'$,
we can conclude that there are $a,b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$ such that $a \preceq_{S'}
x_1$ and $b \preceq_{S'} x_2$, i.e.\ $a \in A, b \in B$.
The latter two arguments imply that Condition (C2) is satisfied for
$a$ and $b$ and, therefore, $ab\in E(G)$; a contradiction to the assumption that $(A, B)$ forms a
disconnected bipartition.
\item[\textnormal{\em Case: $(p, x)$ is an A1-edge and $p = x$.} ] \ \\
In this case, $(p, x) = (x, x) = (\hat{\mu}_{S'}(u), \hat{\mu}_{S'}(v))$ is defined by some edge $(u, v)$ of $T$.
Since $x$ is an internal vertex of $S'$, we must have $t(u) = t(v) = \mathfrak{s}$.
Since $L(S(x)) = L(S'(x))$ and $x$ is a cherry in $S$, we also have $(\hat{\mu}_S(u), \hat{\mu}_S(v)) = (x, x)$.
Moreover because $\hat{\mu}_{S'}(v) = x = \ensuremath{\operatorname{lca}}_{S'}(x_1,x_2)$, there must exist distinct $a,b$ with
$a\prec_{S'}x_1$ and $b\prec_{S'}x_2$ such that $a,b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)$.
Thus, $a \in A, b \in B$. Moreover, $ab$ satisfies Condition (C3) and
hence $ab\in E(G)$; a contradiction to our assumption that $(A, B)$ forms a
disconnected bipartition.
\item[\textnormal{\em Case: $(p, x)$ is an A2-edge.} ] \ \\
This case is not possible, since the parent of $x$ is the same in $S$ and $S'$, and we assume that all strict ancestors of $x$ in $S$ are in $Q$.
\item[\textnormal{\em Case: $(p, x)$ is an A3-edge.} ] \ \\
In this case, $(p, x) = (u, \hat{\mu}_{S'}(u))$ is defined by a vertex $u
\in V(T)$ such that $t(u) \in \{\mathfrak{d}, \mathfrak{t}\}$ and $\hat{\mu}_{S'}(u) = x$.
Since $u = p$ and, by assumption $p \notin \mathcal{M}(Q)$, we have $u \in V(T) \setminus \mathcal{M}(Q)$.
As in the A1-case, there must be $a, b \in \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)$ such that $a \in A, b \in B$. Then $ab$ should be an edge of $G$ because of Condition (C4), a contradiction.
\end{owndesc}
We have shown that the edge $(p, x)$ cannot exist. Therefore, in
$\ats{T}{S'}$, all the in-neighbors of $x$ are in $\mathcal{M}(Q)$. Since $S'$ also
agrees with $\ensuremath{\mathcal{R}}(T;t,\sigma)$, it follows that splitting the children of $x$ into $(A, B)$
yields a good split refinement at $x$.
\end{proof}
\begin{algorithm}[tbp]
\caption{\texttt{TimeConsistent Species Tree}}
\begin{algorithmic}[1]
\Require Event-labeled gene tree $(T;t,\sigma)$
\Ensure Time-consistent species tree $S$ for $(T;t,\sigma)$, if one exists
\State Compute $\ensuremath{\mathcal{R}}(T;t,\sigma)$
\State $S\gets $ star tree on $\sigma(L(T))$
\State Compute $\hat{\mu}_{T,S}(u)$ for all $u\in V(T)$
\State Compute $\ats{T}{S}$
\State $Q \gets $maximal topological sort of $\ats{T}{S}$
\State Compute $G((T; t, \sigma), S, r)$, where $r$ is the root of $S$
\State \texttt{Has\_GoodSplit} $\gets$ TRUE
\While{$S$ contains a non-binary cherry and \texttt{Has\_GoodSplit} = TRUE}
\State \texttt{Has\_GoodSplit} $\gets$ FALSE
\ForAll{non-binary cherries $x$ of $S$ such that $y \in \mathcal{M}(Q)$ for all $y \succ_{S} x$} \label{for-loop}
\If{\texttt{Has\_GoodSplit} = FALSE and $G((T; t, \sigma), S, x)$ is disconnected} \label{first-if}
\State Compute disconnected bipartition $(A, B)$ of $G((T; t, \sigma), S, x)$.
\State $S \gets$ split refinement of $S$ at cherry $x$ based on $(A, B)$
\State Compute $\hat{\mu}_{T,S}(u)$ for all $u\in V(T)$
\State Compute $\ats{T}{S}$
\State $Q \gets $maximal topological sort of $\ats{T}{S}$
\State Let $x_1, x_2$ be the children of $x$
\State Compute $G((T; t, \sigma), S, x_1)$ and $G((T; t, \sigma), S, x_2)$
\State \texttt{Has\_GoodSplit} $\gets$ TRUE
\EndIf
\EndFor
\EndWhile
\If{$S$ is binary}
\ \Return $S$;
\Else \ \Return ``No time-consistent species tree exists'';
\EndIf
\end{algorithmic}
\label{alg:GoodSplit}
\end{algorithm}
Pseudocode to compute a time-consistent species tree for a given event-labeled gene tree $(T;t,\sigma)$, if one exists,
is provided in Alg.\ \ref{alg:GoodSplit}.
The general idea of Alg.\ \ref{alg:GoodSplit} is as follows.
With $(T;t,\sigma)$ as input, we start with a star tree $S$ and refine $S$ step by step
by searching for good split refinements. If a good split refinement exists in each step
and $S$ ends up binary (in which case we cannot refine $S$ further), then we have found a time-consistent species tree $S$ for $(T;t,\sigma)$.
In every other case, the algorithm returns ``No time-consistent species tree exists''.
The correctness proof as well as further explanations are provided in the proof of Theorem \ref{thm:algo}.
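If one drops the time-consistency conditions (C2)--(C4), so that only the triplet edges (C1) remain, the refinement loop essentially specializes to the classical BUILD recursion \cite{Aho:81}. The following simplified sketch (our own illustration of that special case, not Alg.\ \ref{alg:GoodSplit} itself) may help to convey the overall structure:

```python
def build(leaves, triplets):
    """Classical BUILD/Aho recursion: return a tree over `leaves`
    (encoded as nested tuples) displaying every triplet ab|c of
    `triplets`, or None if the triplets are incompatible."""
    if len(leaves) == 1:
        return next(iter(leaves))
    # Aho graph restricted to the current leaf set.
    adjacency = {v: set() for v in leaves}
    for a, b, c in triplets:
        if {a, b, c} <= leaves:
            adjacency[a].add(b)
            adjacency[b].add(a)
    # Extract its connected components.
    components, seen = [], set()
    for v in leaves:
        if v in seen:
            continue
        comp, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in adjacency[u]:
                if w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        components.append(comp)
    if len(components) == 1:
        return None  # connected on >= 2 leaves: triplets incompatible
    subtrees = [build(comp, triplets) for comp in components]
    return None if None in subtrees else tuple(subtrees)

# The triplets AB|D and AC|D admit, e.g., the tree ((A,B,C),D).
tree = build({"A", "B", "C", "D"}, {("A", "B", "D"), ("A", "C", "D")})
```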
To show that this algorithm runs in $O(n^3)$ time, we need first the following.
\begin{lemma}
$\ensuremath{\mathcal{R}} (T; t, \sigma)$ can be computed in $O(n^3)$ time, where $n = |\L(T)|$.
This bound is tight.
\label{lem:comp-RT}
\end{lemma}
\begin{proof}
To compute $\ensuremath{\mathcal{R}} (T; t, \sigma)$ as in Def.\ \ref{def:informativeTriplets} we can proceed as follows:
We first compute the $\ensuremath{\operatorname{lca}}_{\ensuremath{T_{\mathcal{\overline{E}}}}}$'s for every pair of vertices within the connected components of $\ensuremath{T_{\mathcal{\overline{E}}}}$.
This task can be done in constant time for each pair of vertices after
linear preprocessing of the trees in $\ensuremath{T_{\mathcal{\overline{E}}}}$ \cite{HT:84,BF:00}.
Thus, we obtain an overall time complexity of $O(n^2)$ to compute all $\ensuremath{\operatorname{lca}}_{\ensuremath{T_{\mathcal{\overline{E}}}}}$'s between the leaves of $T$.
We now compute the distance from the root $\rho_{\tilde T}$ to all other vertices in $V(\tilde T)$
for every connected component $\tilde T$ of $\ensuremath{T_{\mathcal{\overline{E}}}}$. The latter can be done for each individual connected
component $\tilde T$ via Dijkstra's algorithm in $O(|V(\tilde T)|^2)$ time.
As this must be done for all connected components of $\ensuremath{T_{\mathcal{\overline{E}}}}$ and since
$\sum_{\tilde T} |V(\tilde T)|^2 \leq (\sum_{\tilde T} |V(\tilde T)|)^2 = |V(T)|^2$,
computing all these distances takes $O(|V(T)|^2) = O(n^2)$ time.
Now, for all three distinct leaves $a,b,c$ within the connected components of $\ensuremath{T_{\mathcal{\overline{E}}}}$,
we compare the relative order of $x=\ensuremath{\operatorname{lca}}_{\ensuremath{T_{\mathcal{\overline{E}}}}}(a,b)$, $y=\ensuremath{\operatorname{lca}}_{\ensuremath{T_{\mathcal{\overline{E}}}}}(a,c)$, and $z=\ensuremath{\operatorname{lca}}_{\ensuremath{T_{\mathcal{\overline{E}}}}}(b,c)$
which can be done directly by comparing the distances $d_{\tilde T}(\rho_{\tilde T},x)$, $d_{\tilde T}(\rho_{\tilde T},y)$ and $d_{\tilde T}(\rho_{\tilde T},z)$.
It is easy to see that at least two of the latter three distances must be equal.
Hence, as soon as we have found that two distances are equal but distinct from the third,
say $d_{\tilde T}(\rho_{\tilde T},x)\neq d_{\tilde T}(\rho_{\tilde T},y)=d_{\tilde T}(\rho_{\tilde T},z)$, we have found the triplet $ab|c$ that is displayed by $\tilde T$.
If, in addition, $t(z)=\mathfrak{s}$ and $\sigma(a), \sigma(b), \sigma(c)$ are pairwise distinct,
then we add $\sigma(a)\sigma(b)|\sigma(c)$ to $\ensuremath{\mathcal{R}} (T; t, \sigma)$.
The latter tasks can be done in constant time for every triple $a,b,c$.
Since there are at most $\binom{n}{3}=O(n^3)$ triplets in $T$,
this yields an overall time complexity of $O(n^3)$ to compute all triplets displayed by $T$
that satisfy Def.\ \ref{def:informativeTriplets}(1).
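The depth comparison used above can be sketched as follows, where the constant-time lca queries are abstracted into a callback (the function names and encodings are ours):

```python
def displayed_triplet(depth_of_lca, a, b, c):
    """Given depth_of_lca(u, v), the distance from the root to lca(u, v),
    return the triplet displayed by the tree for {a, b, c}, encoded as
    (p, q, r) meaning pq|r, or None if the three leaves are unresolved."""
    dab = depth_of_lca(a, b)
    dac = depth_of_lca(a, c)
    dbc = depth_of_lca(b, c)
    # At least two of the three depths coincide; a strictly deeper one
    # identifies the cherry pair of the displayed triplet.
    if dab > dac == dbc:
        return (a, b, c)  # ab|c
    if dac > dbc == dab:
        return (a, c, b)  # ac|b
    if dbc > dab == dac:
        return (b, c, a)  # bc|a
    return None  # all three depths equal: no triplet displayed

# Example: in the tree ((A,B),C), the lca of A and B has depth 1,
# every other pairwise lca is the root (depth 0).
depth = lambda u, v: 1 if {u, v} == {"A", "B"} else 0
result = displayed_triplet(depth, "A", "B", "C")
```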
Now we proceed to construct for all transfer edges $(u,v)\in \mathcal{E}_T$ the triplets $\sigma(a)\sigma(b)|\sigma(c)$
for all $a,b\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(u))$ and $c\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(v))$ as well as for all $c\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(u))$ and $a,b\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(v))$
with $\sigma(a), \sigma(b), \sigma(c)$ being pairwise distinct.
To this end, we need to compute $L(\ensuremath{T_{\mathcal{\overline{E}}}}(w))$ for all $w\in V(T)$. We may traverse
every connected component $\tilde T$ of $\ensuremath{T_{\mathcal{\overline{E}}}}$ from the root $\rho_{\tilde T}$ to each individual leaf
and, for each vertex $w$ along the path from $\rho_{\tilde T}$ to a leaf $l$, add the leaf $l$ to $L(\ensuremath{T_{\mathcal{\overline{E}}}}(w))$.
As there are precisely $|L(\tilde T)|$ such paths, each having at most $|V(\tilde T)|\in O(|L(\tilde T)|)$ vertices,
computing $L(\ensuremath{T_{\mathcal{\overline{E}}}}(w))$ for all $w\in V(\tilde T)$ takes $O(|L(\tilde T)|^2)$ time.
As this step must be repeated for all connected components $\tilde T$ of $\ensuremath{T_{\mathcal{\overline{E}}}}$,
we obtain, by arguments analogous to those in the preceding paragraph, a total of $\sum_{\tilde T} O(|L(\tilde T)|^2) = O(n^2)$ time
to compute $L(\ensuremath{T_{\mathcal{\overline{E}}}}(w))$ for all $w\in V(T)$.
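The leaf sets $L(\ensuremath{T_{\mathcal{\overline{E}}}}(w))$ can equivalently be computed bottom-up; a sketch with our own encoding of a rooted tree as a child-list dictionary:

```python
def leaf_sets(children, root):
    """Compute L(T(w)) for every vertex w of a rooted tree, given as a
    dict mapping each inner vertex to its list of children; vertices
    without an entry are leaves."""
    out = {}

    def visit(v):
        kids = children.get(v, [])
        if not kids:
            out[v] = {v}  # a leaf is its own leaf set
        else:
            out[v] = set().union(*(visit(k) for k in kids))
        return out[v]

    visit(root)
    return out

# Example tree: r has children u and c, and u has children a and b.
ls = leaf_sets({"r": ["u", "c"], "u": ["a", "b"]}, "r")
```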
Now, for every transfer edge $(u,v)\in \mathcal{E}_T$ the triplets $\sigma(a)\sigma(b)|\sigma(c)$
(with $\sigma(a), \sigma(b), \sigma(c)$ being pairwise distinct) are added to $\ensuremath{\mathcal{R}} (T; t, \sigma)$
for all $a,b\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(u))$ and $c\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(v))$ as well as for all $c\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(u))$ and $a,b\in L(\ensuremath{T_{\mathcal{\overline{E}}}}(v))$.
Note that none of the trees in $\ensuremath{T_{\mathcal{\overline{E}}}}$ contains transfer edges. Moreover,
for each transfer edge $(u,v)$ we have, by Axiom (O3), $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)\cap \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u) = \emptyset$.
The latter two arguments
imply that, for each transfer edge $(u,v)$, precisely $\binom{|\sigma(L(\ensuremath{T_{\mathcal{\overline{E}}}}(v)))|}{2} |\sigma(L(\ensuremath{T_{\mathcal{\overline{E}}}}(u)))|+ \binom{|\sigma(L(\ensuremath{T_{\mathcal{\overline{E}}}}(u)))|}{2} |\sigma(L(\ensuremath{T_{\mathcal{\overline{E}}}}(v)))|$
triplets are added.
Now, let $\mathcal{T} = \{T_1, T_2, \ldots, T_k\}$ be the set of trees in the forest
$\ensuremath{T_{\mathcal{\overline{E}}}}$. For each $i \in \{1, \ldots, k\}$, define $n_i = |\L(T_i)|$. Let us write $T_i \rightarrow T_j$ if there exists a transfer edge $(u, v) \in \mathcal{E}_T$ satisfying $u \in V(T_i), v \in V(T_j)$. It is easy to verify that there is at most one transfer edge connecting
two distinct connected components of $\ensuremath{T_{\mathcal{\overline{E}}}}$, as otherwise
some vertex of some $T_j$ would have in-degree $2$ or more in $T$.
For each transfer edge $(u, v) \in \mathcal{E}_T$, where $u \in V(T_i)$ and $v
\in V(T_j)$, we can bound the number of added triplets by
$\binom{|\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)|}{2} |\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)|+ \binom{|\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(v)|}{2}|\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)| \leq n_i^2 n_j +
n_i n_j^2$. The total number of triplets considered is then
at most
\begin{align}
\sum_{T_i \in \mathcal{T}} \sum_{T_j : T_i \rightarrow T_j} \left(n_i^2n_j + n_j^2n_i \right)
= & \sum_{T_i \in \mathcal{T}} n_i^2 \sum_{T_j : T_i \rightarrow T_j} n_j +
\sum_{T_i \in \mathcal{T}} n_i \sum_{T_j : T_i \rightarrow T_j} n_j^2 \label{eq:App}\\
\leq & \sum_{T_i \in \mathcal{T}} n_i^2 \cdot n + \sum_{T_i \in \mathcal{T}} n_i \cdot n^2 \nonumber\\
= & n \sum_{T_i \in \mathcal{T}} n_i^2 + n^2 \sum_{T_i \in \mathcal{T}} n_i \nonumber\\
\leq & 2n^3 \in O(n^3) \nonumber.
\end{align}
In the latter estimate, we have used the fact that
distinct trees $T_i$ and $T_j$ have disjoint leaf sets (cf.\ \cite[Lemma 1]{nojgaard2018time}). Thus,
$\sum_{T_i \in \mathcal{T}} n_i \leq n$ and $\sum_{T_j : T_i \rightarrow T_j} n_j \leq n$.
In summary, $\ensuremath{\mathcal{R}} (T; t, \sigma)$ can be computed in $O(n^3)$ time.
Finally, note that if $(T; t, \sigma)$ is binary, all inner vertices are labeled
as speciation $\mathfrak{s}$, and $\sigma(x)\neq \sigma(y)$ for any two distinct leaves $x,y\in L(T)$,
then $|\ensuremath{\mathcal{R}} (T; t, \sigma)|=\binom{n}{3}\in O(n^3)$. Hence, the bound
$O(n^3)$ can indeed be achieved.
\end{proof}
We note that it is not too difficult to show that Algorithm~\ref{alg:GoodSplit} can be implemented to take time $O(n^4)$.
Indeed, each line of the algorithm can be verified to take time $O(n^3)$, including the construction of $\ats{T}{S}$
(which takes time $O(n \log n)$, as shown in \cite[Thm.\ 6]{nojgaard2018time})
and the construction of the $G((T;t,\sigma), S, x)$ graphs (by checking every triplet of $\ensuremath{\mathcal{R}} (T; t, \sigma)$ for (C1) edges, and
for every pair $a,b$ of vertices, checking every member of $V(T) \cup E(T)$ for (C2), (C3) or (C4) edges).
Since the main \textit{while} loop is executed $O(n)$ times, this yields complexity $O(n^4)$.
However, with a little more work, this can be improved to a cubic-time algorithm.
As stated in Lemma~\ref{lem:comp-RT}, we may have $|\ensuremath{\mathcal{R}} (T;t,\sigma)|
\in \Theta(n^3)$. Thus, any hope of achieving a better running time would
require a strategy to reconstruct a species tree $S$ without reconstructing
the full triplet set $\ensuremath{\mathcal{R}} (T;t,\sigma)$ that $S$ needs to display.
It may be possible that
such an algorithm exists; however, this would be a rather
surprising result and would likely require a completely different approach.
\begin{theorem}\label{thm:algo}
Algorithm~\ref{alg:GoodSplit} correctly computes a time-consistent binary species tree for $(T;t,\sigma)$, if one exists, and
can be implemented to run in $O(n^3)$ time, where $n = |\L(T)|$. In particular,
this bound is tight for every algorithm that needs to compute $\S(T;t,\sigma)$.
\end{theorem}
\begin{proof}
We first prove the correctness of the algorithm. Algorithm~\ref{alg:GoodSplit} takes as input a labeled gene tree $(T;t,\sigma)$. First, $\S(T;t,\sigma)$ is computed
and the star tree $S$ (which clearly agrees with $\S(T;t,\sigma)$) is initialized; it will subsequently be refined.
At this point of the computation, $S$ contains only one cherry, namely the root $r$ of $S$, and $G((T;t,\sigma),S,r)$
is computed. In each iteration of the \emph{while}-loop it is first checked whether $S$ is non-binary and whether a good split refinement was found in the previous iteration.
In this case, it is checked (\emph{for}-loop)
whether there is a non-binary cherry of the current tree $S$ for which all strict ancestors are contained in $\mathcal{M}(Q)$,
where $Q$ is a maximal topological sort of $\ats{T}{S}$. If this is not the case for any non-binary cherry,
the \emph{while}-loop terminates according to Lemma \ref{lem:good-ancestors} and the algorithm correctly outputs
\emph{``No time-consistent species tree exists''}. Otherwise, if there is a non-binary cherry $x$ for which all strict ancestors are contained in $\mathcal{M}(Q)$,
then it is checked whether a good split for $S$ has not already been found in this iteration and whether $G((T;t,\sigma),S,x)$ is disconnected.
In this case, we can apply Theorem \ref{thm:gsp-disco} to conclude that there is a good split refinement for $S$ at $x$,
which is computed in the subsequent step. If, however, $G((T;t,\sigma),S,x)$ is connected for all non-binary cherries,
the algorithm correctly outputs \emph{``No time-consistent species tree exists''} according to Theorem \ref{thm:gsp-disco}.
Finally, if in each step of the \emph{while}-loop we have found a good split refinement and $S$ does not contain
a non-binary cherry, then $S$ must, by construction, be binary. In this case, repeated application of Theorem \ref{thm:equiv-refinement}
shows that the final binary tree $S$ is a solution to the underlying GTC instance and
the algorithm correctly returns $S$. Thus, Algorithm~\ref{alg:GoodSplit} correctly computes a time-consistent binary species tree for $(T;t,\sigma)$, if one exists.
We next analyze the running time of the algorithm.
Let $\Sigma = \sigma(\L(T))$ be the set of species. We will frequently use the fact that $|\Sigma| \leq n$.
The main challenge in optimizing this algorithm is to be able to efficiently construct and update the $G((T;t,\sigma), S, x)$ graphs. We will save this analysis for the end of the proof, and will ignore the time spent on graph updates for now.
We will assume that $\sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}(u)$ is computed and stored for each $u \in V(T)$. As argued in the proof of Lemma~\ref{lem:comp-RT},
this can be done in time $O(n^2)$.
Also by Lemma \ref{lem:comp-RT}, the triplet set $\ensuremath{\mathcal{R}} (T; t, \sigma)$ can be computed in $O(n^3)$ time.
Since every iteration of the main \textit{while} loop adds a new binary vertex
in $S$, the loop will be executed $O(n)$ times (since a binary tree on $|\Sigma| \leq n$
leaves has $O(n)$ internal vertices).
By \cite[Lemma 3]{nojgaard2018time}, computing $\hat{\mu}_{T,S}$ can be done in $O(n\log(|\Sigma|))=O(n\log(n))$ time. By \cite[Thm.\ 6]{nojgaard2018time}, the auxiliary graph $\ats{T}{S}$ can be computed in $O(|V(T)|\log(|V(S)|))$ time.
Since $O(|V(T)|) = O(n)$ and $O(|V(S)|) = O(n)$, the latter task can be done in $O(n\log(n))$ time.
Construction of $Q$ can be done in time
$O(|V(\ats{T}{S})| + |E(\ats{T}{S})|) = O(n)$ using the techniques of
Kahn~\cite{Kahn:62} and by observing that the number of edges in $E(\ats{T}{S})$
cannot exceed $|E(T)|+|E(S)|=O(n)$.
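Kahn's peeling procedure directly yields a maximal topological sort: vertices that never reach in-degree zero, e.g.\ those with self-loops, are exactly those outside $\mathcal{M}(Q)$. A minimal sketch (encodings ours):

```python
from collections import deque

def maximal_topological_sort(vertices, edges):
    """Kahn-style peeling: repeatedly dequeue vertices of in-degree 0.
    Returns the ordered list Q; vertices never dequeued (those on or
    reachable from a cycle, incl. self-loops) lie outside M(Q)."""
    indeg = {v: 0 for v in vertices}
    succ = {v: [] for v in vertices}
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)
    queue = deque(v for v in vertices if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

# Example: c has a self-loop, so neither c nor its successor d is sorted.
Q = maximal_topological_sort(["a", "b", "c", "d"],
                             [("a", "b"), ("c", "c"), ("c", "d")])
```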
In each pass of the main \textit{while} loop, we iterate through $O(n)$
non-binary cherries. Let $c_1, \ldots, c_k$ be the non-binary cherries of
$S$, assuming that each auxiliary $G((T;t,\sigma), S, c_i)$ graph is
already pre-computed. Since $c_1, \ldots,
c_k$ are cherries of $S$, the sets in $\{\L(S(c_1)), \L(S(c_2)), \ldots
\L(S(c_k))\}$ must be pairwise disjoint. Denoting $n_i = |\L(S(c_i))|$, $1
\leq i \leq k$, we thus observe that $\sum_{i = 1}^{k}n_i \leq n$. In the worst
case, we go through every cherry and check connectedness in time $O(n_i^2)$
on each graph $G((T;t,\sigma), S, c_i)$ via ``classical'' breadth-first search. Thus in one iteration of the
main \textit{while} loop, the total time spent on connectedness
verification is $O(\sum_{i = 1}^{k}n_i^2) = O(n^2)$. When we apply a split
refinement, we compute $\hat{\mu}_{T,S}(u)$ for all $u\in V(T)$, $\ats{T}{S}$ and $Q$ at most once per
\textit{while} iteration, each operation being feasible in time $O(n \log n)$.
To be more precise, as soon as we have found a good split refinement we put
\texttt{Has\_GoodSplit}=TRUE. Hence, the \textit{if}-condition (Line \ref{first-if}) will then not be satisfied,
and we will not recompute the values $\hat{\mu}_{T,S}(u)$, $\ats{T}{S}$ and $Q$
again for the remaining non-binary cherries $x$ of $S$ within the \textit{for}-loop (Line \ref{for-loop}).
As there are $O(n)$ iterations, the time spent on operations other than
graph construction and updates is $O(n^3)$.
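The ``classical'' breadth-first search used for the connectedness checks above can be sketched as follows (an illustrative Python sketch; names are our own, and each call costs $O(n_i^2)$ on a graph with $n_i$ vertices):

```python
from collections import deque

def is_connected(vertices, edges):
    """Breadth-first search connectedness test in O(|V| + |E|) time,
    i.e. O(n_i^2) on a graph G((T;t,sigma), S, c_i) with n_i vertices."""
    if not vertices:
        return True
    adj = {v: [] for v in vertices}
    for u, v in edges:           # undirected edges
        adj[u].append(v)
        adj[v].append(u)
    seen = {vertices[0]}
    queue = deque([vertices[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(vertices)
```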
Let us now argue that the updates of the auxiliary graphs $G((T;t,\sigma), S, x)$ can be implemented to take total time $O(n^3)$.
To this end, we maintain a special data structure that,
for each 2-element subset $\{a,b\} \subseteq \Sigma$, remembers the members of $\Sigma \cup V(T)$ that may cause $ab$ to be an edge in the auxiliary graphs.
We describe this in more detail. For a given species tree $S$, we say that $a,b \in \L(S)$ are \emph{siblings} if $a$ and $b$ have the same parent in $S$.
For any two siblings $a,b$ of $S$, define
\begin{align*}
l_1(a,b) &= \{c \in \Sigma : c \mbox{ is a sibling of $a$ and $b$, and } ab|c \in \ensuremath{\mathcal{R}} (T;t,\sigma)\} \\
l_2(a,b) &= \{u \in V(T) : \exists v \in V(T) \mbox{ such that $(u,v)$ satisfies (C2) }\} \\
l_3(a,b) &= \{u \in V(T) : \exists v \in V(T) \mbox{ such that $(u,v)$ satisfies (C3), with $x$ the parent of $a,b$ }\} \\
l_4(a,b) &= \{u \in V(T) : \mbox{$u$ satisfies (C4) }\}
\end{align*}
The $l_i$ sets are first initialized for the star tree $S$ and are subsequently updated
based on the current refined tree $S$.
We will show below that the initial step to construct
the $l_i$ sets for the star tree $S$ takes $O(n^3)$ time.
As we shall see below, when refining $S$ to $S'$ at some cherry $x$, the $l_i$ sets are updated so that only those elements remain for which the respective defining condition still holds; in particular, (C1), (C2), (C3), resp.\ (C4) is satisfied for $a,b\in \L(S(x))$ if and only if
$l_1(a,b)$, $l_2(a,b)$, $l_3(a,b)$, resp.\ $l_4(a,b)$ is non-empty.
Therefore, to decide whether there is an edge $ab$ in $G((T;t,\sigma), S, x)$,
it suffices to check whether one of the (updated) sets $l_i(a,b)$, $1\leq i\leq 4$,
is non-empty, a task that can be done in constant time for each pair of vertices $a,b\in \Sigma$.
Hence, in each step we can construct $G((T;t,\sigma), S, x)$ in $O(n^2)$ time.
Thus, to show that the entire procedure runs in $O(n^3)$ time, we have to prove that we can update the $l_i$ sets in $O(n^3)$ time.
We show how to maintain these four sets for each pair of siblings as $S$ undergoes split refinements, starting with the initial star tree $S$.
The set $l_1(a,b)$ can be constructed in time $O(n^3)$ for all $a,b$ by iterating through $\ensuremath{\mathcal{R}} (T;t,\sigma)$ once, and each $l_i(a,b)$, $i \in \{2,3,4\}$ can be constructed in time $O(n^3)$ by first constructing $\ats{T}{S}$ with its maximal topological sort $Q$ and, for each $a,b$ pair, checking every vertex and edge of $T$ for conditions (C2), (C3) and (C4). It is easy to see that each condition can be checked in constant time per edge or vertex.
Now assume inductively that $l_1,l_2,l_3$ and $l_4$ are known for each pair
of siblings of $S$. Instead of reasoning about the time to update these sets
during a particular iteration of the \textit{while} loop, we will argue about
the total number of times we must update each $l_i$ set during the whole
execution of the algorithm. Our aim is to show that each $l_i$ set requires
$O(n^3)$ updates in total, i.e. summing over all iterations of the loop.
When we apply a split refinement to $S$ at some cherry $x$, we create the
tree $S'$ on which we add the two children $x_1$ and $x_2$ under $x$. We
must update the corresponding sets. Let $X_1 = \L(S'(x_1))$ and $X_2 =
\L(S'(x_2))$. For $a,b \in X_1$, we may need to remove $c$ from $l_1(a,b)$
if $c \in X_2$ since it is not a sibling of $a$ and $b$ anymore. Thus after
a split refinement, for each $a,b \in X_1$ and each $c \in X_2$, we remove
$c$ from $l_1(a,b)$ if present (and we do the same for each $a',b' \in X_2$,
$c' \in X_1$). Therefore, each time that a pair $a,b \in \Sigma$ gets
separated from some $c \in \Sigma$ during the species tree construction, we
need $O(1)$ time to remove $c$ from $l_1(a, b)$. Importantly, this occurs
at most once during the whole algorithm execution. Therefore, in total we
spend $O(1)$ time to update $l_1$ for each distinct triple $a,b,c \in \Sigma$,
and so the total time spent on updating $l_1$ is $O(n^3)$.
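The $l_1$ bookkeeping described above can be sketched as follows (an illustrative Python sketch; the data layout, mapping an ordered pair $(a,b)$ to the set of species $c$ with $ab|c$, is our own assumption):

```python
def separate_l1(l1, X1, X2):
    """Update the l_1 sets after a split refinement that partitions the
    leaves of a cherry into X1 and X2: every c on the other side of the
    split is no longer a sibling of the pair {a, b} and is removed.
    Each triple a, b, c is separated at most once over the whole run,
    so the total cost of all such updates is O(n^3)."""
    for part, other in ((X1, X2), (X2, X1)):
        for a in part:
            for b in part:
                if a < b and (a, b) in l1:
                    l1[(a, b)] -= set(other)
```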
To update $l_2$ and $l_4$, we note that these two sets only depend on $t, \sigma_{\ensuremath{T_{\mathcal{\overline{E}}}}}$
and $Q$, and only $Q$ is not fixed by the input. After a split refinement and
computing the new maximal topological sort $Q'$ of $\ats{T}{S'}$, by
Lemma~\ref{lem:q-stays-topo}, we only add new elements to $Q$ (i.e., $\mathcal{M}(Q)
\subseteq \mathcal{M}(Q')$). Thus for each $q \in \mathcal{M}(Q') \setminus \mathcal{M}(Q)$, we must simply
remove $q$ from $l_2(a, b)$ and $l_4(a,b)$, if present, for each $a,b \in \Sigma$.
This takes time $O(n^2)$ each time that a new vertex is added to a maximal topological sort after a split refinement.
Since, during the execution of the algorithm,
each vertex of $V(T) \cup V(S)$ is added to $Q$ at most once, this occurs
$O(n)$ times. It follows that the total time spent on updating $l_2$ and $l_4$ is $O(n^3)$.
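The analogous bookkeeping for $l_2$ and $l_4$ can be sketched as follows (again an illustrative sketch with our own data layout):

```python
def drop_sorted_vertices(l2, l4, newly_in_Q):
    """After a split refinement, remove every vertex that newly entered the
    maximal topological sort Q from l_2(a,b) and l_4(a,b) for all pairs.
    Each vertex enters Q at most once, and each removal touches O(n^2)
    pairs, so the total cost over the whole run is O(n^3)."""
    for table in (l2, l4):
        for pair in table:
            table[pair] -= newly_in_Q
```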
Finally, $l_3$ depends on $\hat{\mu}_{T,S}$ but not on $Q$. After we apply a split refinement at $x$, transforming $S$ to $S'$,
$\hat{\mu}_S(u) \neq \hat{\mu}_{S'}(u)$ is only possible if $\hat{\mu}_{S}(u) = x$, in which case $\hat{\mu}_{S'}(u) \in \{x, x_1, x_2\}$. For each $u$ such that $\hat{\mu}_{S}(u) = x$, we remove $u$ from $l_3(a,b)$ where necessary. More precisely, if $\hat{\mu}_{S'}(u) = x$, we remove $u$ from $l_3(a,b)$ for each $a,b \in X_1 \cup X_2$ if present, since $x$ is not the parent of any two leaves $a,b$ now.
If $\hat{\mu}_{S'}(u) = x_1$, we remove $u$ from $l_3(a,b)$ for each $a,b \in X_2$, and if $\hat{\mu}_{S'}(u) = x_2$, we remove $u$ from $l_3(a,b)$ for each $a,b \in X_1$. One can see that for each $u \in V(T)$, we remove $u$ from $l_3(a,b)$ at most once for each distinct $a,b \in \Sigma$, and thus a total of $O(n^3)$ is spent on updating $l_3$ as well.
To summarize, the $l_i$ sets can be kept up-to-date after each split refinement in total time $O(n^3)$.
Since the other operations also take time $O(n^3)$, the complete algorithm runs in $O(n^3)$ time.
Finally, among all algorithms that compute $\ensuremath{\mathcal{R}} (T; t, \sigma)$, Lemma \ref{lem:comp-RT} implies that the bound $O(n^3)$ is tight.
\end{proof}
\section{Summary and Outlook}
Here, we considered event-labeled gene trees $(T;t,\sigma)$ that contain speciation, duplication and HGT.
We solved the Gene Tree Consistency (GTC) problem, that is,
we have shown how to decide whether a time-consistent species tree $S$ for a given gene tree $(T;t,\sigma)$ exists
and, in the affirmative case, how to construct such a binary species tree in cubic time.
Since our algorithm is based on the informative
species triples $\S(T;t,\sigma)$, for which $\ensuremath{\mathcal{R}} (T;t,\sigma)\in \Theta(n^3)$ is possible, there is no non-trivial way to
improve the runtime of our algorithm. Our algorithm heavily relies on the structure of an auxiliary graph $\ats{T}{S}$
to ensure time-consistency and good split refinements to additionally ensure that the final tree $S$ displays $\S(T;t,\sigma)$.
This approach may have further consequences in phylogenomics. Since event-labeled gene trees $(T;t,\sigma)$
can, to some extent, be inferred directly from genomic sequence data, our method allows one to test whether $(T;t,\sigma)$
is ``biologically feasible'', that is, whether there exists a time-consistent species tree for $(T;t,\sigma)$.
Moreover, our method also shows that all information about the
putative history of the species is entirely contained within the gene trees $(T;t,\sigma)$ and
thus, in the underlying sequence data from which $(T;t,\sigma)$ is obtained.
We note that the constructed binary species tree is one of possibly many other time-consistent species trees for $(T;t,\sigma)$.
In particular, there are many different ways to choose a good split refinement, and each choice may lead to a different species tree.
Moreover, the reconstructed species trees here are binary. This condition might be relaxed, and one may obtain further species trees by
``contracting'' edges so that the resulting non-binary tree is still a time-consistent species tree for $(T;t,\sigma)$. This eventually
may yield so-called ``least-resolved'' time-consistent species trees and thus trees that make no further assumptions on
the evolutionary history than are actually supported by the data.
As part of further work, it may be of interest to understand in more detail whether our approach can be used to
efficiently list all possible solutions, that is, all time-consistent species trees for $(T;t,\sigma)$.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
A characteristic feature of the binary fission process is the presence of two excited nuclear fragments moving antiparallel to one another at moderate velocities, $\approx 0.05 \ c$. The two fragments quickly de-excite by emission of neutrons, followed by gamma rays in the continuum, and finally gamma rays in the discrete-level region until the ground state, or a long lived isomeric state, is reached. Over the past few years, a renewed interest in the fission reaction was sparked by new measurements and refined theoretical models~\cite{Travar2021, Wilson2021, Schmidt2011, Talou2018}. Of particular interest are the event-by-event correlations between the fission fragment properties.
Neutron emission is relatively easy to separate between the two fragments. The evaporated neutrons have velocities comparable in magnitude to the speed of the fragments, resulting in strong kinematic focusing of the emission along the fission axis. By analysing both energy and direction of the neutrons and the fragments, it is possible to determine with decent accuracy which fragment emitted each neutron on an event-by-event basis~\cite{Signarbieux1972, nifeneckergroups, Gavron1971}. Similarly, the discrimination of discrete gamma rays can be performed by gating on specific known transitions of nuclei~\cite{Wilson2021}. The same is not possible in general for statistical gamma rays in the continuum. However, as shown by Maier-Leibniz~\cite{MaierLeibniz1965} and Pleasonton \textit{et al.}~\cite{Pleasonton1972}, the gamma-ray emission can be analysed using statistical methods such as the Maier-Leibniz Doppler-Shift (ML-DS) technique.
The ML-DS technique has been applied successfully in several experimental investigations of fission, including Refs.~\cite{Pleasonton1972, SchmidFabian1988, Wang2016, Travar2021}. However, given the recent interest in fragment correlations (see Refs.~\cite{Wilson2021, Vogt2021, Bulgac2020, Schmidt2011}), we think it is important to present techniques capable of determining correlations between gamma-ray emission from the two fragments. In this paper, we will show that a simple generalization of the ML-DS technique allows the experimenter to separate the second-order moments of the gamma-ray distribution without the need of changing the experimental apparatus.
\section{Overview of Technique}
\label{sec:overview}
The ML-DS technique has been applied in several experiments and several variants of the techniques exist. In its original formulation~\cite{MaierLeibniz1965, Pleasonton1972}, the method only employs a single gamma ray detector. One of the main recent variants of this technique, from Travar \textit{et al.}~\cite{Travar2021}, used two detectors in order to eliminate the effect of a fragment detector asymmetry. This version is briefly discussed here.
Let $N^L$ and $N^H$ be the random variables describing the gamma-ray multiplicities from the light and heavy fragments, respectively. Let us then take two detectors placed along the line of motion of the fragments, such that the light fragment flies in the direction of detector $I$ and the heavy fragment flies in the direction of detector $II$. Let $D^I$ and $D^{II}$ be the random variables describing the measured gamma-ray multiplicity distributions in detectors $I$ and $II$, respectively. Assuming that each gamma ray is independently detected, and thus that the system response can be modeled as a binomial response, we have
\begin{subequations}
\begin{align}
D^I &= \hat{B}(\epsilon_{L+})N^L + \hat{B}(\epsilon_{H-})N^H \\
D^{II} &= \hat{B}(\epsilon_{L-})N^L + \hat{B}(\epsilon_{H+})N^H
\end{align}
\label{eq:randD}
\end{subequations}
where $\hat{B}(\epsilon)$ is the binomial response operator that models the effect of a system efficiency in measuring multiplicities, and the Doppler-corrected detection efficiency
\begin{equation*}
\epsilon_{F\pm} = \epsilon (1 \pm 2 \beta_F) \ ,
\end{equation*}
is given in terms of the detection efficiency $\epsilon$ of each detector to an isotropic source and the speed $\beta_F$, as a ratio of the speed of light, of the fragment $F = L,H$ emitting the radiation. Taking the mean of Eq.~\eqref{eq:randD}, we find the well-known formula for inferring the mean gamma-ray emission~\cite{Travar2021}
\begin{subequations}
\begin{align}
\langle N^L \rangle &= \frac{ (1+ 2 \beta_H)\langle D^{I} \rangle - (1 - 2 \beta_H) \langle D^{II} \rangle}{4 \epsilon (\beta_L + \beta_H)} \\
\langle N^H \rangle &= \frac{ (1 + 2 \beta_L)\langle D^{II} \rangle - (1 - 2 \beta_L) \langle D^{I} \rangle}{4 \epsilon (\beta_L + \beta_H)} \ .
\end{align}
\label{eq:meanEmit}
\end{subequations}
Eq.~\eqref{eq:meanEmit} is the conventional ML-DS technique. We now extend its use by obtaining the covariance between the emitting sources. Analysing the second-order moments of the distributions in Eq.~\eqref{eq:randD}, we find that they are related to the moments of the emitted multiplicity distribution as follows:
\begin{equation}
\begin{bmatrix}
\Sigma^2(D^I) \\
\Sigma^2(D^{II}) \\
\text{cov}(D^I, D^{II})
\end{bmatrix}
=
\begin{bmatrix}
\epsilon_{L+}^2 & \epsilon_{H-}^2 & 2 \epsilon_{L+} \epsilon_{H-} \\
\epsilon_{L-}^2 & \epsilon_{H+}^2 & 2 \epsilon_{L-} \epsilon_{H+} \\
\epsilon_{L+} \epsilon_{L-} & \epsilon_{H-} \epsilon_{H+} & \epsilon_{L+}\epsilon_{H+} + \epsilon_{L-}\epsilon_{H-}
\end{bmatrix}
\begin{bmatrix}
\sigma^2(N^L) \\
\sigma^2(N^H) \\
\text{cov}(N^L,N^H)
\end{bmatrix} \ ,
\label{eq:matLin}
\end{equation}
where we have introduced the reduced variances,
\begin{subequations}
\begin{align}
\Sigma^2(D^I) &= \sigma^2(D^I) - \langle N^L \rangle \epsilon_{L+} (1 -\epsilon_{L+}) -\langle N^H \rangle \epsilon_{H-} (1 -\epsilon_{H-}) \\
\Sigma^2(D^{II}) &= \sigma^2(D^{II}) - \langle N^L \rangle \epsilon_{L-} (1 -\epsilon_{L-}) - \langle N^H \rangle \epsilon_{H+} (1 -\epsilon_{H+}) \ .
\end{align}
\end{subequations}
The reduced variances have a physical meaning: we subtract the expected variance introduced by the binomial operator from the variance observed in the detectors. This subtraction can be understood as removing the noise associated with an information channel, i.e., the binomial response. The mean emitted multiplicities appearing in the reduced variances can be determined using Eq.~\eqref{eq:meanEmit}.
After inverting the matrix equation in Eq.~\eqref{eq:matLin}, we obtain an expression for the covariance in the emission from the two fragments
\begin{equation}
\text{cov}(N^L, N^H) = \frac{2 \text{cov} \left(D^I, D^{II} \right) (1 + 4 \beta_H \beta_L ) - \left[ (1 + 2 \beta_H ) (1 - 2 \beta_L) \Sigma^2(D^I) +(1- 2 \beta_H ) (1 + 2 \beta_L) \Sigma^2(D^{II}) \right]}{16 \epsilon ^2 (\beta_L + \beta_H)^2} \ .
\label{eq:varD}
\end{equation}
We note that Eq.~\eqref{eq:varD} represents the difference of two terms of similar size, which is then amplified by division by the very small factor in the denominator. For this reason, the expression converges very slowly and requires high statistics. Numerical analyses using reasonable values for the emission, efficiencies, and speed of the fragments only converge after approximately $1\times 10^8$ events, thus requiring extremely long measurement times.
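The estimators above are easy to verify numerically. The following Python sketch (our own illustration, using NumPy) draws correlated multiplicities, applies the independent binomial detector response of Eq.~\eqref{eq:randD}, and recovers the means via Eq.~\eqref{eq:meanEmit} and the covariance via Eq.~\eqref{eq:varD}. The speeds and efficiency are deliberately exaggerated relative to realistic fission values so that the estimator converges with $\sim 10^6$ rather than $\sim 10^8$ events:

```python
import numpy as np

# Hypothetical, exaggerated parameters (fission fragments have beta ~ 0.05
# and typical setups have lower efficiency); chosen for fast convergence.
rng = np.random.default_rng(12345)
eps, beta_L, beta_H = 0.5, 0.10, 0.10
n_events = 1_000_000

# Correlated emission: a shared Poisson component gives cov(N^L, N^H) = 2.
common = rng.poisson(2.0, n_events)
N_L = common + rng.poisson(2.0, n_events)   # <N^L> = 4, var = 4
N_H = common + rng.poisson(2.0, n_events)   # <N^H> = 4, var = 4

# Independent binomial detector response with Doppler-corrected efficiencies.
e_Lp, e_Lm = eps * (1 + 2 * beta_L), eps * (1 - 2 * beta_L)
e_Hp, e_Hm = eps * (1 + 2 * beta_H), eps * (1 - 2 * beta_H)
D_I = rng.binomial(N_L, e_Lp) + rng.binomial(N_H, e_Hm)
D_II = rng.binomial(N_L, e_Lm) + rng.binomial(N_H, e_Hp)

# Mean emitted multiplicities (Eq. for <N^L>, <N^H>).
denom = 4 * eps * (beta_L + beta_H)
mean_L = ((1 + 2 * beta_H) * D_I.mean() - (1 - 2 * beta_H) * D_II.mean()) / denom
mean_H = ((1 + 2 * beta_L) * D_II.mean() - (1 - 2 * beta_L) * D_I.mean()) / denom

# Reduced variances, then the covariance of the emitted multiplicities.
S2_I = D_I.var() - mean_L * e_Lp * (1 - e_Lp) - mean_H * e_Hm * (1 - e_Hm)
S2_II = D_II.var() - mean_L * e_Lm * (1 - e_Lm) - mean_H * e_Hp * (1 - e_Hp)
cov_D = ((D_I - D_I.mean()) * (D_II - D_II.mean())).mean()
cov_LH = (2 * cov_D * (1 + 4 * beta_H * beta_L)
          - ((1 + 2 * beta_H) * (1 - 2 * beta_L) * S2_I
             + (1 - 2 * beta_H) * (1 + 2 * beta_L) * S2_II)
          ) / (16 * eps**2 * (beta_L + beta_H)**2)
```

With these settings the recovered means land close to the true value of $4$, while the covariance estimate scatters around the true value of $2$ with a visibly larger statistical error, illustrating the slow convergence discussed above.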
A factor that affects the determination of the covariance, and to a lesser extent the mean, is the capability of the detector to measure the multiplicity. If only detectors capable of measuring a single interaction per event are used, both the mean and the covariance expressions require that the probability of two incident particles on the detector be much smaller than 1. This restriction further increases the measurement time. To increase the efficiency of the system, it is possible to place detectors off of the fragment-motion axis. However, the efficiency of each detector would then have to be modeled independently because the aberration of gamma rays, except for points along the fission axis, is in general dependent on the angular distribution of gamma rays in the inertial frame of the fragment.
Notwithstanding these limitations, as far as we know, Eq.~\eqref{eq:varD} represents the only technique capable of measuring the covariance between the emission of gamma rays by two sources moving at relatively small speeds. This technique represents the missing piece in the experimental analysis of fission emission correlations. Together with the techniques discussed for separating the emission of neutrons and discrete gamma rays, it will make it possible to determine the correlations in the initial conditions of the fission fragments and to gain a deeper understanding of the fission process.
\acknowledgments
This work was funded in-part by the Consortium for Monitoring, Technology, and Verification under Department of Energy National Nuclear Security Administration award number DE-NA0003920.
\section{Introduction}
\label{sec:intro}
Creative editing for photos has become a ubiquitous need due to the advances in a plethora of social media platforms. AI-based techniques~\cite{liu2021generative} significantly lower the barrier of fancy image editing that traditionally requires specialized software and labor-intensive manual operations. Deep neural networks can now produce compelling results for various low-level image editing tasks, such as image inpainting~\cite{suvorov2022resolution,yu2018generative}, composition~\cite{zhang2021deep,xue2022dccf,niu2021making}, colorization~\cite{zhang2016colorful,zhang2019deep,saharia2022palette} and aesthetic enhancement~\cite{deng2018aesthetic,chen2018deep}, by learning from richly available paired data. A more challenging scenario, on the other hand, is semantic image editing, which intends to manipulate the high-level semantics of image content while preserving image realism. Tremendous efforts~\cite{ling2021editgan,shen2020interpreting,alaluf2022hyperstyle,roich2022pivotal,shen2021closed,bau2020semantic} have been made along this way, mostly relying on the semantic latent space of generative models, \eg, GANs~\cite{goodfellow2020generative,karras2019style,zhang2022styleswin}, yet the majority of existing works are limited to specific image genres.
Recent large-scale language-image (LLI) models, based on either auto-regressive models~\cite{yu2022scaling,gafni2022make} or diffusion models~\cite{ramesh2022hierarchical,saharia2022photorealistic,rombach2021highresolution,gu2022vector}, have shown unprecedented generative power in modeling complex images. These models enable various image manipulation tasks~\cite{hertz2022prompt,ruiz2022dreambooth,kawar2022imagic,avrahami2022blended} previously unassailable, allowing image editing for general images with the guidance of text prompt.
However, even the detailed textual description inevitably introduces ambiguity and may not accurately reflect the user-desired effects; indeed, many fine-grained object appearances can hardly be specified by the plain language. Hence, it is crucial to develop a more intuitive approach to ease fine-grained image editing for novices and non-native speakers.
In this work, we propose an \emph{exemplar-based image editing} approach that allows accurate semantic manipulation on the image content according to an exemplar image
provided by users or retrieved from the database. As the saying goes, ``a picture is worth a thousand words''. We believe images better convey the user's desired image customization in a more granular manner than words. This task is completely different from image harmonization~\cite{tsai2017deep,guo2021image} that mainly focuses on color and lighting correction when compositing the foreground objects, whereas we aim for a much more complex job: semantically transforming the exemplar, \eg, producing a varied pose, deformation or viewpoint, such that the edited content can be seamlessly implanted according to the image context.
In fact, ours automates the traditional image editing workflow where artists perform tedious transformations upon image assets for coherent image blending.
To achieve our goal, we train a diffusion model~\cite{ho2020denoising,song2020score} conditioned on the exemplar image. Different from text-guided models, the core challenge is that it is infeasible to collect enough triplet training pairs comprising the source image, the exemplar and the corresponding editing ground truth. One workaround is to randomly crop an object from the input image, which then serves as the reference when training the inpainting model. The model trained in such a \emph{self-reference} setting, however, cannot generalize to real exemplars, since the model simply learns to copy and paste the reference object into the final output. We identify several key factors that circumvent this issue. The first is to utilize a generative prior. Specifically, a pretrained text-to-image model has the ability to generate high-quality desired results; we leverage it as initialization to avoid falling into the copy-and-paste trivial solution.
However, prolonged finetuning may still cause the model to deviate from the prior knowledge and ultimately degenerate again. Hence, we introduce an information bottleneck for self-reference conditioning in which we drop the spatial tokens and only regard the global image embedding as the condition. In this way, we force the network to understand the high-level semantics of the exemplar image and the context of the source image, thus preventing trivial results during self-supervised training.
Moreover, we apply aggressive augmentation on the self-reference image which can effectively reduce the training-test gap.
We further improve the editability of our approach in two aspects. First, our training uses irregular random masks so as to mimic the casual user brush used in practical editing. Second, we show that classifier-free guidance~\cite{ho2022classifier} is beneficial to boost both the image quality and the style resemblance to the reference.
To the best of our knowledge, we are the first to address this \emph{semantic image composition} problem where the reference is semantically transformed and harmonized before blending into another image, as shown in Figure~\ref{fig:teaser} and Figure~\ref{fig:supp_3}. Our method shows a significant quality advantage over prior works in a similar setting. Notably, our editing just involves a single forward of the diffusion model without any image-specific optimization, which is a necessity for many real applications. To summarize, our contributions are as follows:
\begin{itemize}[leftmargin=*]
\itemsep=-0.9mm
\item We propose a new image editing scenario, which semantically alters the image content based on an exemplar image. This approach offers fine-grained control while being convenient to use.
\item We solve the problem with an image-conditioned diffusion model trained in a self-supervised manner. We propose a group of techniques to tackle the degenerate challenge.
\item Our approach performs favorably over prior arts for in-the-wild image editing, as measured by both quantitative metrics and subjective evaluation.
\end{itemize}
\begin{figure*}[h]
\centering
\vspace{-0.8cm}
\includegraphics[width=1\textwidth]{figure/supp-3.pdf}
\caption{More visual results. Our approach can handle a wide variety of reference images and source images. }
\vspace{-0.8cm}
\label{fig:supp_3}
\end{figure*}
\section{Related Work}
\label{sec:related_work}
\noindent
\textbf{Image composition.}
Cutting the foreground from one image and pasting it onto another image to form a realistic composite is a common and widely used operation in photo editing.
Many methods~\cite{cohen2006color, jia2006drag, tao2010error,sunkavalli2010multi,tsai2017deep,cong2020dovenet,cun2020improving,reinhard2001color,perez2003poisson} have been proposed focusing on image harmonization to make the composite look more realistic.
Traditional methods~\cite{cohen2006color, jia2006drag, tao2010error} tend to extract handcrafted features to match the color distribution. Recent works~\cite{chen2019toward,guo2021intrinsic} leverage deep semantic features to improve the robustness.
A more recent work DCCF~\cite{xue2022dccf} proposes four human comprehensible neural filters in a pyramid manner and achieves a state-of-the-art color harmonization result. However, they all assume that the foreground and the background are semantically harmonious and only adjust the composite in the low-level color space while keeping the structure unchanged.
In this paper, we target semantic image composition, taking the challenging semantic inharmony into consideration.
\noindent
\textbf{Semantic image editing.}
Semantic image editing, to edit the high-level semantics of an image, is of great interest in the vision and graphics community due to its potential applications. A steadily-developed line of work~\cite{shen2020interpreting, shen2021closed, richardson2021encoding, alaluf2022hyperstyle,bau2020semantic} carefully dissects the GAN's latent space, aiming to find semantic disentangled latent factors.
Other research efforts leverage discriminative models such as attribute classifiers~\cite{gao2021high, hou2022guidedstyle} or face recognition models~\cite{li2019faceshifter, shen2018faceid} to help disentangle and manipulate images.
Another popular line of
work~\cite{gu2019mask, bau2020semantic, ling2021editgan,zhang2020cross,zhou2021cocosnet,wang2022pretraining} utilizes semantic masks to control the editing.
Yet most existing methods are limited to specific image genres, such as faces, cars, birds, or cats.
In this work, we focus on introducing a model that works for general and complex images in a high-precision manner.
\noindent
\textbf{Text-driven image editing.}
Among the various kinds of semantic image editing, text-guided image editing has been attracting increasing attention recently. Early works~\cite{patashnik2021styleclip,abdal2022clip2stylegan,xia2021tedigan,gal2022stylegan,bau2021paint}
leverage pretrained GAN generators~\cite{karras2020analyzing} and text encoders~\cite{radford2021learning} to progressively optimize the image according to the text prompt.
However, these GAN-based manipulation approaches struggle to edit images of complex scenes or diverse objects due to the limited modeling capability of GANs.
The rapid rise and development of diffusion models~\cite{ramesh2021zero, ramesh2022hierarchical, saharia2022photorealistic} have demonstrated a powerful capability of synthesizing high-quality and diverse images. Many works~\cite{liu2021more,kim2022diffusionclip,ruiz2022dreambooth,kawar2022imagic,nichol2021glide, avrahami2022blended,hertz2022prompt} exploit diffusion models for text-driven image editing.
For example,
DiffusionCLIP~\cite{kim2022diffusionclip}, DreamBooth~\cite{ruiz2022dreambooth}, and Imagic~\cite{kawar2022imagic} finetune the diffusion models case-specifically for different text prompts.
Blended Diffusion~\cite{avrahami2022blended} proposes a multi-step blended process to perform local manipulation using a user-provided mask.
While these methods achieve remarkably impressive results, we argue that the language guidance still lacks precise control,
whereas images can better express one's concrete ideas.
As such in this work we are interested in exemplar-based image editing.
\section{Method}
\label{sec:method}
We target exemplar-based image editing, which automatically merges the reference image (either retrieved from a database or provided by users) into a source image such that the merged image looks plausible and photo-realistic.
Despite the recent remarkable success of text-based image editing, it is still difficult to use mere verbal descriptions to express complex and multiple ideas.
Images, on the other hand, can be a better alternative for conveying people's intentions, as the proverb says: ``a picture is worth a thousand words".
Formally, denote the source image as $\mathbf{x}_s\in \mathbb{R}^{H\times W\times 3}$, with $H$ and $W$ being the height and width, respectively.
The edit region could be a rectangular or an irregular shape (at least connected) and is represented as
a binary mask $\mathbf{m}\in \{0, 1\}^{H\times W}$ where value $1$ specifies the editable positions in $\mathbf{x}_s$.
Given a reference image $\mathbf{x}_r \in \mathbb{R}^{H'\times W'\times 3}$ containing the desired object,
our goal is to synthesize an image $\mathbf{y}$ from $\{\mathbf{x}_s,\mathbf{x}_r, \mathbf{m}\}$, so that the region where $\mathbf{m}=0$ remains as close as possible to the source image $\mathbf{x}_s$, while the region where $\mathbf{m}=1$ depicts the object as similarly as possible to the reference image $\mathbf{x}_r$ and fits in harmoniously.
This task is very challenging and complex because it implicitly involves several non-trivial procedures.
Firstly, the model requires understanding the object in the reference image, capturing both its shape and texture while ignoring the noise from the background.
Secondly, it is critical to enable synthesizing a transformed view of the object (different pose, different size, different illumination \etc) that fits in the source image nicely.
Thirdly, the model needs to inpaint the area around the object to generate a realistic photo, showing a smooth transition across the merging boundary.
Last, the resolution of the reference image may be lower than that of the edit region. The model should involve super-resolution in the process.
\subsection{Preliminaries}
\label{method:naive}
\noindent
\textbf{Self-supervised training.}
It is impossible to collect and annotate paired data, \ie $\{(\mathbf{x}_s,\mathbf{x}_r, \mathbf{m}), \mathbf{y}\}$, for the training of exemplar-based image editing: it would take great expense and huge labor to manually paint a reasonable output.
Thus, we propose to perform self-supervised training. Specifically, given an image and the bounding box of an object in the image, to simulate the training data, we use the bounding box of the object as the binary mask $\mathbf{m}$. We directly regard the image patch in the bounding box of the source image as the reference image $\mathbf{x}_r = \mathbf{m}\odot \mathbf{x}_s$. Naturally, the image editing result should be the original source image $\mathbf{x}_s$.
As such, our training data is composed of $\{(\bar{\mathbf{m}}\odot \mathbf{x}_s, \mathbf{x}_r, \mathbf{m}), \mathbf{x}_s\}$, where $\bar{\mathbf{m}} = \mathds{1} - \mathbf{m}$ stands for the complementary of the mask $\mathbf{m}$, and $\mathds{1}$ represents the all-ones matrix.
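The construction of one self-supervised training sample can be sketched as follows (an illustrative Python sketch; the array layout and the (top, left, height, width) bounding-box convention are our own assumptions):

```python
import numpy as np

def make_self_reference_sample(x_s, bbox):
    """Build one self-supervised training sample
    {(m_bar * x_s, x_r, m), x_s} from a source image x_s of shape
    (H, W, 3) and an object bounding box (top, left, height, width)."""
    H, W, _ = x_s.shape
    top, left, h, w = bbox
    m = np.zeros((H, W), dtype=x_s.dtype)
    m[top:top + h, left:left + w] = 1          # binary edit mask m
    x_masked = (1 - m)[..., None] * x_s        # m_bar (elementwise) x_s
    x_r = x_s[top:top + h, left:left + w]      # self-reference = the crop
    return x_masked, x_r, m, x_s               # target is x_s itself
```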
\noindent
\textbf{A naive solution.}
Diffusion models have achieved notable progress in synthesizing unprecedented image quality and have been successfully applied to
many text-based image editing works~\cite{GLIDE,kim2022diffusionclip,ruiz2022dreambooth,kawar2022imagic}.
For our exemplar-based image editing task, a naive solution is to directly replace the text condition with the reference image condition.
Specifically, the diffusion model generates the image $\mathbf{y}$ by gradually reversing a Markov forward process. Starting from $\mathbf{y}_0 = \mathbf{x}_s$, the forward process yields a sequence of increasingly noisy images $\{\mathbf{y}_t \mid t \in [1, T]\}$, where $\mathbf{y}_t = \alpha_t \mathbf{y}_0 + (1-\alpha_t)\mathbf{\epsilon}$, $\epsilon$ is Gaussian noise, and $\alpha_t$ decreases with the timestep $t$. For the generative process, the diffusion model progressively denoises a noisy image from the last step given the condition $\mathbf{c}$ by minimizing the following loss function:
\begin{equation}
\mathcal{L} = \mathbb{E}_{t, \mathbf{y}_0,\mathbf{\epsilon}}\left\|\mathbf{\epsilon}_\theta(\mathbf{y}_t, \bar{\mathbf{m}}\odot \mathbf{x}_s, \mathbf{c}, t) - \mathbf{\epsilon}\right\|^2_2.
\end{equation}
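Following the noising convention written above, a toy sketch of this training objective could look like the following (NumPy; `eps_theta` stands in for the actual denoising network and is an assumption of ours):

```python
import numpy as np

def forward_noise(y0, t, alphas, rng):
    """y_t = alpha_t * y_0 + (1 - alpha_t) * eps, with alpha_t decreasing in t."""
    eps = rng.standard_normal(y0.shape)
    return alphas[t] * y0 + (1.0 - alphas[t]) * eps, eps

def diffusion_loss(eps_theta, y0, masked_source, c, t, alphas, rng):
    """Denoising objective: eps_theta predicts the injected noise from the
    noisy image, the masked source image and the condition c."""
    y_t, eps = forward_noise(y0, t, alphas, rng)
    return np.mean((eps_theta(y_t, masked_source, c, t) - eps) ** 2)
```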
For text-guided inpainting models, the condition $\mathbf{c}$ is the given text and is usually processed by a pretrained CLIP~\cite{radford2021learning} text encoder, outputting $77$ tokens.
Likewise, a naive solution is to directly replace it with CLIP image embeddings. We leverage the pretrained CLIP image encoder, which outputs $257$ tokens, including 1 class token and 256 patch tokens, denoted as $\mathbf{c} = \text{CLIP}_{\text{all}}(\mathbf{x}_r)$.
This naive solution converges well on the training set. However, when we apply it to test images, the generated results are far from satisfactory.
There exist obvious copy-and-paste artifacts in the edit region, making the generated image extremely unnatural, as illustrated in Figure~\ref{fig:issue}.
We argue that this is because, under the naive training scheme, the model learns a trivial mapping function: $ \bar{\mathbf{m}} \odot \mathbf{x}_s + \mathbf{x}_r = \mathbf{x}_s$.
This impedes the network from understanding the content of the reference image and its connection to the source image, leading to poor generalization to test scenarios where the reference image is given arbitrarily rather than being a patch from the original image.
\begin{figure}[t]
\centering
\vspace{-0.1cm}
\includegraphics[width=1.0\columnwidth]{figure/issue.pdf}
\vspace{-0.6cm}
\caption{Illustration of the copy-and-paste artifacts of the naive solution. The generated image is extremely unnatural.}
\vspace{-0.3cm}
\label{fig:issue}
\end{figure}
\noindent
\textbf{Our motivation.}
How to prevent the model from learning such a trivial mapping function and facilitate model understanding in a self-supervised training manner is a challenging problem.
In this paper, we propose three principles.
1) We introduce an information bottleneck to force the network to understand and regenerate the content of the reference image instead of just copying it.
2) We adopt strong augmentation to mitigate the train-test mismatch issue. This helps the network not only learn the transformation from the exemplar object, but also from the background.
3) Another critical feature for exemplar-based image editing is controllability. We enable control over the shape of the edit region and the similarity degree between the edit region and the reference image.
\subsection{Model Designs}
\label{method:ours}
\subsubsection{Information Bottleneck}
\noindent
\textbf{Compressed representation.}
We revisit the difference between text and image conditions.
For a text condition, the model is naturally compelled to learn semantics, as text is an intrinsically semantic signal.
For an image condition, in contrast, it is easy for the model to memorize and copy the content rather than understand it, arriving at the trivial solution.
To avoid this, we intend to increase the difficulty of reconstructing the mask region by compressing the information of the reference image.
Specifically, we only leverage the class token of a pretrained CLIP image encoder applied to the exemplar image as the condition. This compresses the reference image from spatial size $224 \times 224 \times 3$ to a one-dimensional vector of dimension $1024$.
We find that this highly compressed representation tends to ignore high-frequency details while maintaining semantic information. It forces the network to understand the reference content and prevents the generator from reaching the training optimum by direct copy-and-paste. For expressiveness, we add several additional fully-connected (FC) layers to decode the feature and inject it into the diffusion process through cross attention.
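To make the bottleneck concrete, the conditioning path can be sketched as follows (NumPy; the decoder depth and dimensions here are illustrative placeholders, not the exact architecture):

```python
import numpy as np

def bottleneck_condition(clip_tokens, weights):
    """Keep only the CLIP class token (dropping the 256 patch tokens) and
    decode it with a small stack of FC+ReLU layers; the result would be
    injected into the diffusion process through cross attention.

    clip_tokens : (257, 1024) CLIP image-encoder tokens
    weights     : list of (W, b) pairs for the FC decoder
    """
    h = clip_tokens[0]                  # 1024-d class token only
    for W, b in weights:
        h = np.maximum(h @ W + b, 0.0)  # FC + ReLU
    return h

# toy decoder: 1024 -> 512 -> 768 (placeholder dimensions)
rng = np.random.default_rng(0)
weights = [(0.01 * rng.standard_normal((1024, 512)), np.zeros(512)),
           (0.01 * rng.standard_normal((512, 768)), np.zeros(768))]
c = bottleneck_condition(rng.standard_normal((257, 1024)), weights)
```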
\noindent
\textbf{Image prior.}
To further avoid the trivial solution of directly remembering the reference image, we leverage a well-trained diffusion model for initialization as a strong image prior. Specifically, we adopt a text-to-image generation model, Stable Diffusion~\cite{rombach2022high}, for two main reasons. First, it has a strong capability to generate high-quality in-the-wild images, thanks to the property that any vector lying in its latent space leads to a plausible image. Second, a pretrained CLIP~\cite{radford2021learning} model is utilized to extract the language information, which shares a similar representation with our adopted CLIP image embedding, making it a good initialization.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figure/pipeline3.pdf}
\vspace{-0.5cm}
\caption{Our training pipeline.}
\vspace{-0.3cm}
\label{fig:pipeline}
\end{figure}
\subsubsection{Strong Augmentation}
Another potential issue of self-supervised training is the domain gap between training and testing. This train-test mismatch stems from two aspects.
\noindent
\textbf{Reference image augmentation.}
The first mismatch is that the reference image $\mathbf{x}_r$ is derived from the source image $\mathbf{x}_s$ during training, which is rarely the case at test time.
To reduce the gap, we apply several data augmentations (flip, rotation, blur and elastic transform) to the reference image to break its connection with the source image. We denote these augmentations as $\mathcal{A}$.
Formally, the condition fed to the diffusion model is denoted as:
\begin{equation}
\mathbf{c} = \text{MLP}(\text{CLIP}(\mathcal{A}(\mathbf{x}_r))).
\end{equation}
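A rough sketch of the reference augmentation $\mathcal{A}$ might look like this (NumPy; only flip, rotation and a crude blur are shown, and the elastic transform is omitted for brevity):

```python
import numpy as np

def augment_reference(x_r, rng):
    """Randomly flip, rotate and blur a square reference patch to break its
    pixel-level connection with the source image."""
    if rng.random() < 0.5:
        x_r = x_r[:, ::-1]                           # horizontal flip
    x_r = np.rot90(x_r, k=int(rng.integers(0, 4)))   # random 90-degree rotation
    if rng.random() < 0.5:                           # crude 3x3 box blur
        pad = np.pad(x_r, ((1, 1), (1, 1), (0, 0)), mode="edge")
        x_r = sum(pad[i:i + x_r.shape[0], j:j + x_r.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return x_r
```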
\noindent
\noindent
\textbf{Mask shape augmentation.}
On the other hand, the mask region $\mathbf{m}$ from the bounding box ensures that the reference image contains a whole object.
As a result, the generator learns to fill an object as completely as possible.
However, this may not hold in practical scenarios.
To address this, we generate an arbitrarily shaped mask based on the bounding box and use it in training.
Specifically, for each edge of the bounding box, we first fit a B\'ezier curve to it, then uniformly sample $20$ points on this curve and randomly add $1$--$5$ pixel offsets to their coordinates. Finally, we connect these points sequentially with straight lines to form an arbitrarily shaped mask.
The random distortion $\mathcal{D}$ of the mask $\mathbf{m}$ breaks this inductive bias, reducing the gap between training and testing, \ie,
\begin{equation}
\bar{\mathbf{m}} = \mathds{1} - \mathcal{D}(\mathbf{m}).
\end{equation}
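The mask distortion $\mathcal{D}$ described above can be sketched as follows (NumPy; straight edges with jittered points and a scanline polygon fill stand in for the curve-fitted edges, so this is an approximation of the recipe, not the exact implementation):

```python
import numpy as np

def distort_bbox_mask(bbox, shape, rng, n_pts=20, max_off=5):
    """Build an arbitrarily shaped mask from a (top, left, height, width)
    bounding box: sample points along each edge, jitter them by a few
    pixels, and fill the resulting polygon with an even-odd scanline test."""
    t, l, h, w = bbox
    corners = [(t, l), (t, l + w), (t + h, l + w), (t + h, l)]
    pts = []
    for (r0, c0), (r1, c1) in zip(corners, corners[1:] + corners[:1]):
        for a in np.linspace(0.0, 1.0, n_pts, endpoint=False):
            r = r0 + a * (r1 - r0) + rng.integers(-max_off, max_off + 1)
            c = c0 + a * (c1 - c0) + rng.integers(-max_off, max_off + 1)
            pts.append((r, c))
    pts = np.array(pts, dtype=float)
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    inside = np.zeros(shape, dtype=bool)
    for i in range(len(pts)):
        r0, c0 = pts[i]
        r1, c1 = pts[(i + 1) % len(pts)]
        spans = (rr >= min(r0, r1)) & (rr < max(r0, r1))
        with np.errstate(divide="ignore", invalid="ignore"):
            cross = c0 + (rr - r0) * (c1 - c0) / (r1 - r0)
        inside ^= spans & (cc < cross)   # parity flip per edge crossing
    return inside.astype(np.float32)
```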
We find these two augmentations can greatly enhance the robustness when facing different reference guidance.
\subsubsection{Control the mask shape}
Another benefit of mask shape augmentation is that it increases control over the mask shape at inference time.
In practical application scenarios, a rectangular mask usually cannot represent the edit area precisely, \eg the sun umbrella in Figure~\ref{fig:teaser}.
In some cases users would like to edit a specific region while preserving the rest of the image as much as possible, which creates a demand for handling irregular mask shapes. By involving these irregular masks in training, our model is able to generate photo-realistic results given masks of various shapes.
\subsubsection{Control the similarity degree}
To control the similarity degree between the edited area and the reference image, we find the classifier-free sampling strategy~\cite{ho2022classifier} to be a powerful tool. Previous work~\cite{tang2022improved} found that classifier-free guidance is actually a combination of both prior and posterior constraints:
\begin{equation}
\begin{split}
&\log p(\mathbf{y}_t|\mathbf{c}) + (s-1) \log p(\mathbf{c}|\mathbf{y}_t) \\
\propto & \log p(\mathbf{y}_t) + s(\log p(\mathbf{y}_t|\mathbf{c}) - \log p(\mathbf{y}_t)), \\
\end{split}
\end{equation}
where $s$ denotes the classifier-free guidance scale. It can also be regarded as a scale controlling the similarity of the generated image to the reference image: a larger scale factor $s$ means the fused result relies more on the conditional reference input. In our experiments, we follow the settings of~\cite{tang2022improved} and replace $20\%$ of the reference conditions with a learnable vector $\mathbf{v}$ during training. This term aims to model $p(\mathbf{y}_t)$ with the help of a fixed condition input $p(\mathbf{y}_t|\mathbf{v})$. In the inference stage, each denoising step uses the modified prediction:
\begin{equation}
\tilde{\mathbf{\epsilon}}_\theta(\mathbf{y}_t, \mathbf{c}) =
\mathbf{\epsilon}_\theta(\mathbf{y}_t, \mathbf{v}) + s(\mathbf{\epsilon}_\theta(\mathbf{y}_t, \mathbf{c}) - \mathbf{\epsilon}_\theta(\mathbf{y}_t, \mathbf{v})).
\end{equation}
The parameters $t$ and $\bar{\mathbf{m}} \odot \mathbf{x}_s$ are omitted here for brevity, without ambiguity. The overall framework of our method is illustrated in Figure~\ref{fig:pipeline}.
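The classifier-free sampling step above can be sketched directly, together with the random condition replacement used during training (NumPy; `eps_theta` is any noise predictor, with the timestep and masked source folded in for brevity):

```python
import numpy as np

def maybe_drop_condition(c, v, rng, p=0.2):
    """During training, replace the reference condition c by the learnable
    vector v with probability p (20% in our experiments)."""
    return v if rng.random() < p else c

def classifier_free_step(eps_theta, y_t, c, v, s):
    """Guided prediction: eps(v) + s * (eps(c) - eps(v))."""
    eps_uncond = eps_theta(y_t, v)
    eps_cond = eps_theta(y_t, c)
    return eps_uncond + s * (eps_cond - eps_uncond)
```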
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figure/comparison.pdf}
\vspace{-0.6cm}
\caption{Qualitative comparison with other approaches. Our method can generate results that are semantically consistent with the input reference images in high perceptual quality.}
\vspace{-0.4cm}
\label{fig:other_method}
\end{figure*}
\section{Experiments}
\subsection{Implementation Details and Evaluation}
\noindent
\textbf{Implementation details.}
To manipulate real-world images, we first utilize a powerful text-to-image generation model, Stable Diffusion~\cite{rombach2022high}, as initialization to provide a strong image prior. We then select OpenImages~\cite{kuznetsova2020open} as our training dataset, which contains a total of $16$ million bounding boxes for $600$ object classes on $1.9$ million images. During training, we resize the images to $512 \times 512$ and train our model for $40$ epochs, which takes about 7 days on 64 NVIDIA V100 GPUs.
\noindent
\textbf{Test benchmark.}
To the best of our knowledge, no previous work targets exemplar-based semantic image editing (or semantic image composition), so we build a test benchmark for qualitative and quantitative analysis. Specifically, we manually select $3,500$ source images ($\mathbf{x}_s$) from the MSCOCO~\cite{lin2014microsoft} validation set, where each image contains only one bounding box ($\mathbf{m}$) and the mask region covers no more than half of the whole image. We then manually retrieve a reference image patch ($\mathbf{x}_{r}$) from the MSCOCO training set. The reference image usually shares similar semantics with the mask region to ensure that the combination is reasonable. We name it the COCO Exemplar-based image Editing benchmark, abbreviated as COCOEE. We will publish this benchmark, hoping to attract more follow-up works in this area.
\noindent
\textbf{Evaluation metrics.}
Our goal is to merge a reference image into a source image such that the edited region is similar to the reference and the fused result is photo-realistic. To measure these two aspects independently, we use the following three metrics to evaluate the generated images. 1) FID~\cite{heusel2017gans} score, which is widely used to evaluate generated results. Following~\cite{kynkaanniemi2022role}, we use a CLIP model to extract features and calculate the FID score between the $3,500$ generated images and all images from the COCO test set. 2) Quality Score (QS)~\cite{gu2020giqa}, which evaluates the authenticity of each single image; we average it to measure the overall quality of the generated images. 3) CLIP score~\cite{radford2021learning},
which evaluates the similarity between the edited region and the reference image. Specifically, we resize these two images to $224 \times 224$, extract features via the CLIP image encoder and calculate their cosine similarity. A higher CLIP score indicates that the edited region is more similar to the reference image.
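The CLIP-score computation reduces to a cosine similarity between two feature vectors; a minimal sketch (NumPy; the CLIP encoder itself is assumed external):

```python
import numpy as np

def clip_score(feat_edit, feat_ref):
    """Cosine similarity between CLIP features of the resized edited region
    and the reference image; higher means more similar."""
    a = feat_edit / np.linalg.norm(feat_edit)
    b = feat_ref / np.linalg.norm(feat_ref)
    return float(a @ b)
```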
\subsection{Comparisons}
Considering that no previous work aims at editing images semantically and locally based on an exemplar image, we select four related approaches as baselines: 1) Blended Diffusion~\cite{avrahami2022blended}, which leverages the CLIP model to provide gradients guiding the diffusion sampling process.
We use a text prompt ``a photo of $C$'' to compute the CLIP loss, where $C$ denotes the object class of the exemplar image. 2) We slightly modify Blended Diffusion by using the reference image to calculate the CLIP loss, denoted as Blended Diffusion (image). 3) Stable Diffusion~\cite{rombach2022high}, where we use the text prompt as the condition representing the reference image and inpaint the mask region. 4) We also choose the state-of-the-art image harmonization method DCCF~\cite{xue2022dccf} as a baseline. Since it can only fuse a foreground image into a background, we first use an unconditional image inpainting model, LAMA~\cite{suvorov2022resolution}, to inpaint the whole mask region, then extract the foreground of the reference image through an additional semantic mask, and finally harmonize it into the source image.
\noindent\textbf{Qualitative analysis.}
We provide a qualitative comparison of these methods in Figure~\ref{fig:other_method}. Text-guided Blended Diffusion is able to generate objects in the desired area, but they are unrealistic and incompatible with the source image. Stable Diffusion, another text-based method, generates much more realistic results, but still fails to retain the characteristics of the reference image due to the limited representation power of text. Meanwhile, the image-guided Blended Diffusion also produces results dissimilar to the reference image; we argue this may be caused by its gradient guidance strategy, which cannot preserve enough content information. Finally, the generated result of image harmonization is almost identical to the exemplar image and therefore very incongruous with the background. The intrinsic reason is that the appearance of the exemplar image cannot match the source image directly in most cases: the generative model should transform the shape, size or pose automatically to fit the source image. As shown in the last column of Figure~\ref{fig:other_method}, our method achieves photo-realistic results while remaining similar to the reference.
\noindent\textbf{Quantitative analysis.}
Table~\ref{tab:quantitative_other_methods} presents the quantitative comparison results. The image-based editing methods (Blended Diffusion (image) and DCCF) reach high CLIP scores, demonstrating that they preserve the information of the condition image, but the resulting images are of poor quality. The generated results of Stable Diffusion are much more plausible according to FID and QS, but it can hardly incorporate the conditional information of the image. Our approach achieves the best performance on all three metrics, verifying that it can not only generate high-quality images but also maintain the conditional information.
\begin{table}[t]
\small
\centering
\setlength\tabcolsep{1.8pt}
\caption{Quantitative comparison of different methods. We evaluate the generated image quality through FID and QS, and the semantic consistency to the reference image through the CLIP score.}
\vspace{-0.2cm}
\begin{tabular}{@{}lccc@{}}
\toprule
Method & FID ($\downarrow$) & QS ($\uparrow$) & CLIP Score ($\uparrow$) \\ \midrule
Blended Diffusion-Image~\cite{avrahami2022blended} & 4.60 & 67.14 & 80.65 \\
Blended Diffusion-Text~\cite{avrahami2022blended} & 7.52 & 55.89 & 72.62 \\
DCCF~\cite{xue2022dccf} & 3.78 & 71.49 & 82.18 \\
Stable Diffusion~\cite{rombach2022high} & 3.66 & 73.20 & 75.33 \\
Ours & \textbf{3.18} & \textbf{77.80} & \textbf{84.97} \\
\bottomrule
\end{tabular}
\label{tab:quantitative_other_methods}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figure/ablation.pdf}
\vspace{-0.6cm}
\caption{Visual ablation studies of individual components in our approach. We gradually eliminate the boundary artifacts through these techniques and finally achieve plausible generated results.}
\vspace{-0.3cm}
\label{fig:ablation}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure/cf.pdf}
\caption{Effect of classifier-free guidance scale $\lambda$. A larger $\lambda$ makes the generated region more similar to the reference.}
\label{fig:ablation_cf}
\vspace{-0.6cm}
\end{figure}
\noindent\textbf{User study.}
To obtain users' subjective evaluation of the generated images, we conduct a user study with $50$ participants. In the study, we use $30$ groups of images, each containing two inputs and five outputs. The results in each group are presented side-by-side in random order to the participants.
Participants are given unlimited time to rank the results from $1$ to $5$ ($1$ is the best, $5$ is the worst) on two aspects independently: image quality and similarity to the reference image.
We report the average ranking scores in Table~\ref{tab:user_study}. Overall, the image harmonization method DCCF is most similar to the reference image, since its result is directly copied from it. Nonetheless, users prefer our results over the others given their realistic quality.
\begin{table}[t]
\small
\caption{Average ranking score of image quality and semantic consistency. $1$ is the best, $5$ is the worst. Users rated ours as the best quality, and semantic consistency is second only to the image harmonization method which copies from the exemplar image.}
\vspace{-0.2cm}
\centering
\setlength\tabcolsep{2.8pt}
\begin{tabular}{lcc}
\toprule
Method & Quality ($\downarrow$) & Consistency ($\downarrow$) \\
\midrule
Blended Diffusion-Image~\cite{avrahami2022blended} & 3.83 & 3.84 \\
Blended Diffusion-Text~\cite{avrahami2022blended} & 3.93 & 3.95 \\
DCCF~\cite{xue2022dccf} & 3.09 & \textbf{1.66} \\
Stable Diffusion~\cite{rombach2022high} & 2.36 & 3.48 \\
Ours & \textbf{1.79} & 2.07 \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\label{tab:user_study}
\end{table}
\begin{table}[t]
\caption{Quantitative comparison of different variants of our method. We achieve the best performance by leveraging all these techniques.}
\vspace{-0.2cm}
\small
\centering
\begin{tabular}{@{}lccc@{}}
\toprule
Method & FID ($\downarrow$) & QS ($\uparrow$) & CLIP Score ($\uparrow$) \\ \midrule
Baseline & 3.61 & 76.71 & 85.90 \\
+ Prior & 3.40 & 77.63 & \textbf{88.79} \\
+ Augmentation & 3.44 & 76.86 & 81.68 \\
+ Bottleneck & 3.26 & 76.62 & 81.41 \\
+ Classifier-Free & \textbf{3.18} & \textbf{77.80} & 84.97 \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\label{tab:quantitative_ablation}
\end{table}
\subsection{Ablation Study}
To achieve high-quality exemplar-based image editing, we introduce four key techniques, namely leveraging an image prior, strong augmentation, an information bottleneck, and classifier-free guidance. In this section, we evaluate five incrementally changed settings to validate them:
1) We denote the naive solution in Section~\ref{method:naive} as the baseline. It is directly modified from text-guided inpainting models by replacing the text condition with an image condition.
2) We leverage the pretrained text-to-image generation model for initialization as an image prior.
3) To reduce the train-test gap, we adopt strong augmentation on the reference image.
4) To further avoid falling into the trivial solution, we highly compress the image information to increase the difficulty of reconstructing the input image during training; we denote this as the information bottleneck.
5) Finally, we use classifier-free guidance to further improve the performance.
We show the results in Table~\ref{tab:quantitative_ablation} and Figure~\ref{fig:ablation}. The baseline solution contains obvious boundary artifacts, making the generated image extremely unnatural.
By leveraging the image prior, the image quality improves, as indicated by the lower FID score, but the result still suffers from the copy-and-paste issue.
Adding augmentations partially alleviates it.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figure/text_image.pdf}
\vspace{-0.7cm}
\caption{Comparison between progressively precise textual description and image as guidance. Using image as condition can maintain more fine-grained details.}
\vspace{-0.4cm}
\label{fig:txt_against_image}
\end{figure*}
When we further leverage the information bottleneck to compress the information, these boundary artifacts are completely eliminated. Meanwhile, as the mask region has to be generated instead of directly copied, the quality of this region decreases because the generation task becomes significantly harder.
Finally, we add classifier-free guidance to make the generated region more similar to the reference, which greatly boosts the overall image quality and achieves the best performance.
We also investigate how the classifier-free scale affects our results. As shown in Figure~\ref{fig:ablation_cf}, as the scale $\lambda$ grows, the generated region becomes more and more similar to the reference input. In our experiments, we set $\lambda = 5$ by default.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure/more_results.pdf}
\vspace{-0.4cm}
\caption{In-the-wild exemplar-based image editing results.}
\vspace{-0.3cm}
\label{fig:more_results}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure/diverse_result.pdf}
\vspace{-0.4cm}
\caption{Our framework can synthesize realistic and diverse results from the same source image and exemplar image.}
\vspace{-0.3cm}
\label{fig:diverse_result}
\end{figure}
\subsection{From Language to Image Condition}
In Figure~\ref{fig:txt_against_image}, we compare the controllability of language and image conditions. From left to right, we inpaint the mask region with increasingly detailed language descriptions. As the language becomes more precise, the generated results indeed become more and more similar to the reference image, but a large gap to the image-guided result remains. Ours maintains the fur, the expression, and even the collar on the neck.
\subsection{In-the-wild Image Editing}
Benefiting from the stochasticity of the diffusion process, our method can generate multiple outputs from the same input. We show diverse generated results in Figure~\ref{fig:diverse_result}. Although the synthesized images vary, all of them keep the key identity of the reference image, \eg, all the dogs have yellow fur, white chests and drooping ears. More exemplar-based image editing results are shown in Figure~\ref{fig:more_results} and the appendix.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we introduce a novel image editing scenario: exemplar-based image editing, which aims to semantically alter the image content based on an exemplar image. We achieve this goal through self-supervised training of a diffusion model. The naive approach causes boundary artifacts; we carefully analyze this issue and solve it with a group of techniques. Our algorithm enables the user to precisely control the editing and achieves impressive performance on in-the-wild images. We hope this work will serve as a solid baseline and support future research in the area of exemplar-based image editing.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\subsection{Alzheimer's disease and mild cognitive impairment}
Alzheimer's disease (AD) is the most common cause of dementia, characterized by a continuous decline in cognition, memory and brain function \cite{mattson2004pathways}. Mild cognitive impairment (MCI) is regarded as the intermediate stage of cognitive decline between healthy ageing and AD, and a proportion of MCI cases could be reversible. Furthermore, although curative treatment is lacking, earlier identification and intervention could modify the disease trajectory of AD progression and improve patient outcomes. It is therefore of crucial significance to accurately predict the conversion from MCI to AD, which remains a significant challenge due to the heterogeneous nature of AD \cite{au2015back}.
\subsection{Neuroimaging and brain networks}
Magnetic resonance imaging (MRI) is a commonly used noninvasive technique for managing neuropsychiatric conditions. Previous studies suggest that MRI can detect structural changes of the brain in dementia patients \cite{elliott2020mri}. MRI-derived biomarkers are reported to be significantly associated with cognitive decline, indicating the clinical value of MRI in dementia prediction \cite{cox2019structural}.
The structural brain network, constructed from diffusion MRI (dMRI) or anatomical MRI, is a promising way to characterize the connectivity between cortical/subcortical regions defined according to prior neuroanatomical knowledge. Specifically, dMRI-derived brain networks use tractography to quantify the connectivity strength of the tracts that link brain regions, while anatomical MRI examines possible associations among brain regions by calculating the covariance of anatomical features (e.g., grey matter volume) among brain regions. Both types of structural brain networks are reported to effectively generate useful graph-theoretical biomarkers associated with various neuropsychiatric conditions \cite{griffa2013structural,wei2021quantifying,wei2021structural}, including dementia \cite{ajilore2014association}. Although dMRI provides a more direct connectivity estimate than anatomical MRI, it is more difficult to acquire. In parallel to the efforts in developing data augmentation approaches \cite{li2021brainnetgan,barile2021data}, there is a pressing need to generate robust brain networks from the more commonly available anatomical MRI.
\subsection{Deep learning for neuroimaging}
Deep learning models demonstrate reasonable performance in classifying AD and predicting MCI-to-AD conversion based on neuroimaging~\cite{abrol2020deep}. However, the majority of models are developed for computer vision tasks and may not capture the connectomic properties of the brain. Therefore, model performance could be limited for complex tasks involving domain knowledge, e.g., predicting the disease trajectory of AD. Moreover, the interpretability of these models could be further improved to facilitate clinical translation. In parallel, the emerging graph neural network (GNN), designed for learning from non-Euclidean data, is widely used in characterizing neuropsychiatric diseases \cite{schirmer2021neuropsychiatric} and has shown encouraging performance in AD research, e.g., modelling tau spread networks and brain structure geometry based on MRI-derived graph-structured data~\cite{sarasua2021transformesh,song2020physics}.
\subsection{Related work}
In general, approaches for predicting MCI-to-AD conversion are categorized as static models (based on data from a single time point) or dynamic models (based on longitudinal data). Static models predict whether the conversion will happen within 36 months solely from the baseline images~\cite{abrol2020deep,gao2020ad}, thereby ignoring the longitudinal changes along the disease trajectory. Therefore, although such approaches require fewer data, they could be sub-optimal due to the static nature of the predicting models.
In contrast, dynamic models are emerging approaches for predicting MCI-to-AD conversion, particularly given the availability of longitudinal datasets, e.g., the Alzheimer's Disease Neuroimaging Initiative (ADNI). A recent study applied a GNN and recurrent neural networks (RNN) to predict patient outcomes at 18-month follow-up by modelling the longitudinal brain networks from baseline to 12 months as dynamic graphs \cite{kim2021interpretable}. However, this method showed limited performance, which might be due to its attempt to directly model the heterogeneous dementia population with an end-to-end training scheme. Moreover, its input brain networks are constructed solely from grey matter features, ignoring the common white matter abnormalities in AD.
\subsection{Proposed framework}
Previous studies show that anatomical MRI and dMRI share common features in reflecting brain structure \cite{alexander2013convergence,gu2019generating}. Hence, it could be feasible to generate brain networks from anatomical MRI with the guidance of dMRI that provides more specific information regarding white matter tracts. In this way, we could fully characterize both grey matter and white matter of the brain.
In addition, accumulating research shows that AD patients demonstrate accelerated brain ageing compared to healthy controls (CN) and MCI patients \cite{cole2018brain,gao2020ad}. Therefore, we hypothesize that the MCI-to-AD conversion could be predicted by modelling the deviation of AD patients from the healthy ageing trajectory, which could mitigate the challenge of modelling brain networks in the heterogeneous AD population.
Here we propose a learning framework that models the healthy ageing trajectory of brain networks with a graph encoder and a variational autoencoder (VAE). In addition, we design an RNN-based algorithm that predicts MCI-to-AD conversion based on patients' past longitudinal deviations/residuals from the predicted ageing trajectory. To generate brain networks from commonly available anatomical MRI, we propose a self-supervised approach that uses an autoencoder to extract node features from T1 images and a cross-modal contrastive representation learning approach, guided by dMRI, to extract edge features. Our contributions include:
\begin{itemize}
\item A cross-modal learning approach to generate brain networks by extracting features from anatomical MRI under the guidance of dMRI.
\item A generative approach to predict the healthy ageing trajectories of brain network features using graph neural networks and variational autoencoders.
\item A recurrent learning algorithm that models longitudinal residuals between patient's actual features and patient's predicted features from the healthy ageing trajectory for future disease status prediction.
\item An interpretation approach that identifies the abnormality introduced from MCI-to-AD conversion by comparing the actual diseased brain networks and the predicted healthily aged brain networks from a customized graph decoder.
\end{itemize}
\section{Methods}
\subsection{Data preparation}
A longitudinal MRI dataset of AD, MCI and CN subjects is downloaded from the ADNI website. Each subject has one T1 image at baseline, 6 months, 12 months and 18 months, respectively. In total, the longitudinal dataset includes 191 stable CN, 126 stable MCI, and 91 converted MCI subjects.
Another independent baseline cohort, including 113 CN and 96 AD subjects with both dMRI and T1 available, is also downloaded from the ADNI website.
T1 images of all subjects are transformed to the standard MNI-152 space by coregistering with the MNI-152 standard T1 of the FMRIB Software Library (FSL) using Advanced Normalization Tools \cite{avants2009advanced,jenkinson2012fsl}. For the dMRI of the independent baseline cohort, fractional anisotropy (FA) maps are derived using FMRIB's Diffusion Toolbox and transformed to the standard space by coregistering with the standard FA map.
\subsection{Brain network construction}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.8\textwidth]{FIG1.png}
\caption{Workflow of generating brain networks. \textbf{A.} T1 image and Dekikan grey matter node atlas are combined to obtain voxel vectors of node regions. \textbf{B.} Node voxel vectors are fed into an autoencoder to reduce dimensionality and produce node features for brain networks. \textbf{C.} Voxels of FA and T1 enclosed by the IIT white matter atlas are extracted as the edge voxel vectors. \textbf{D.} A cross-modal contrastive learning model is used to extract FA-related T1 features corresponding to edges. \textbf{E.} Node features and edge features are arranged into graph format for the downstream training.}
\label{fig:brainnetworks}
\end{figure}
Node and edge features of brain networks are both learnt from the independent baseline data (Fig~\ref{fig:brainnetworks}). The Desikan grey matter atlas is used as the node atlas, dividing cortical and sub-cortical regions into 68 separate areas with the cerebellum excluded (Fig~\ref{fig:brainnetworks}A). For each subject, the Desikan node atlas is transformed back to the subject's native space using the inverse coregistration transform from data preparation. The voxels enclosed by the node atlas are extracted and fed into an autoencoder for dimension reduction (Fig~\ref{fig:brainnetworks}B). All node voxel vectors are sampled and zero-padded to length 3000 as the input of the autoencoder, whose encoder consists of 4 layers with dimensions 1024, 512, 128 and 32, yielding node features of dimension 32.
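The sampling and zero-padding step can be sketched in plain Python. The fixed length 3000 follows the text; the uniform random subsampling rule and the helper name are our assumptions, as the paper does not specify how voxels are sampled:

```python
import random

def pad_or_sample(voxels, target_len=3000, seed=0):
    """Return a fixed-length voxel vector: randomly subsample when a
    region has more than `target_len` voxels, zero-pad when it has fewer."""
    rng = random.Random(seed)
    if len(voxels) >= target_len:
        return rng.sample(list(voxels), target_len)
    return list(voxels) + [0.0] * (target_len - len(voxels))
```

The fixed seed merely makes the sketch reproducible; in training, a fresh sample per epoch would act as a mild augmentation.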
The IIT white matter atlas \cite{qi2021regionconnect} is a tractography atlas that indicates the paths of the 2227 white matter tracts/edges connecting the 68 regions of the Desikan atlas (Fig~\ref{fig:brainnetworks}C). Similar to the node voxel extraction, the tract pathways of the IIT atlas are transformed back to the native space. Voxels of T1 and corresponding FA enclosed by the IIT atlas are extracted and sampled as vectors (dimension 3000). Two multilayer perceptrons (MLP, dimensions: 3000, 1024, 512, 128, 32) respectively encode the voxel vectors of T1 and FA into features of dimension 32, and projection heads project the features to a common latent space (dimension = 128) where cross-modal contrastive representation learning is performed to align the features of T1 and FA (Fig~\ref{fig:brainnetworks}D). As such, the most tract-related features of T1 can be extracted under the guidance of the FA map. The loss is defined as:
\begin{equation}
L = -\log \frac{\exp\left(\cos(Z_{T1}(i),Z_{FA}(i))/\tau\right)}{\sum_{j=1}^{N} \exp\left(\cos(Z_{T1}(i),Z_{FA}(j))/\tau\right)}
\end{equation}
where $Z_{T1}(i)$ and $Z_{FA}(i)$ are the latent T1 and FA features of tract $i$ after the projection head, respectively; $\cos(\cdot,\cdot)$ is the cosine similarity; $\tau$ is the temperature parameter (set to 0.01); $N$ is the size of the minibatch; and $j$ indexes the other tracts in the minibatch, $j\neq i$.
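Equation (1) is an InfoNCE-style contrastive loss. A minimal pure-Python sketch is given below; the function and variable names are ours, and the batching follows the formula literally:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(z_t1, z_fa, i, tau=0.01):
    """Loss of Eq. (1) for tract i: align the T1 and FA projections of
    the same tract, contrast against the other tracts in the minibatch."""
    logits = [cosine(z_t1[i], z_fa[j]) / tau for j in range(len(z_fa))]
    log_denom = math.log(sum(math.exp(l) for l in logits))
    return -(logits[i] - log_denom)
```

Computing `logits[i] - log_denom` rather than exponentiating the ratio directly keeps the evaluation numerically stable for small temperatures.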
Finally, the node and edge features extracted from the trained models are reformatted into a graph for the downstream tasks (Fig~\ref{fig:brainnetworks}E).
\subsection{Learning framework}
\begin{figure}[hbt!]
\centering
\includegraphics[width=\textwidth]{FIG2.png}
\caption{Learning framework. \textbf{A}. A GNN is pretrained to extract brain network features $F$ by performing the AD/CN classification task. \textbf{B}. A VAE is pretrained with a longitudinal stable CN cohort to predict the changed brain network features due to the ageing effect by incorporating the age gap into the bottleneck of the VAE. A graph decoder decodes the predicted brain network for interpretation. \textbf{C}. An RNN is trained with the MCI cohort to perform the prediction of the MCI-to-AD conversion at $t+2$ based on the brain networks at $t$ and $t+1$.}
\label{fig:training}
\end{figure}
The learning framework for predicting MCI-to-AD conversion consists of three models that are trained separately.
A GNN consisting of three GATConv~\cite{velivckovic2017graph} layers and one global pooling layer is pretrained as a graph encoder to extract dementia-related features $F_1$ from the brain networks $G_1$ by performing AD/CN classification on the independent baseline cohort with the binary cross-entropy loss (BCELoss) (Fig~\ref{fig:training}A). The dimension of the output graph feature is 256.
A VAE with a 3-layer (dimensions: 256, 64, 16) MLP encoder and a 3-layer (dimensions: 256, 64, 32) MLP decoder is trained to model the healthy ageing trajectories of the longitudinal stable CN cohort (Fig~\ref{fig:training}B). The task of the VAE is to predict the future brain network features $F_{t+n}$ given a starting feature $F_{t}$ and an age gap $n$. Specifically, the encoder projects $F_{t}$ to a latent space into which the age gap $n$ is fed as a one-hot vector (dimension: 16). The decoder then predicts $F_{t+n}$ after the age gap $n$, trained with the mean squared error loss (MSELoss) between the predicted features $F'_{t+n}$ and the actual features $F_{t+n}$.
\begin{algorithm}
\caption{Training a RNN to predict conversion from MCI to AD}
\label{alg1}
\KwInput{Brain networks at $t$ and $t+1$: $\mathbf{G_t}$, $\mathbf{G_{t+1}}$}
\For{$t = 1,2, \cdots$ }
{
Apply pretrained GNN: $F_t=\mathbf{GNN}(G_t), F_{t+1}=\mathbf{GNN}(G_{t+1})$ \\
Predict features at $t+1$ from $t$: $F'_{t+1}=\mathbf{VAE}(F_t)$ \\
Compute residuals: $R_{t+1} = F_{t+1} - F'_{t+1}$ \\
Predict $t+2$ residual with RNN: $R'_{t+2} = \mathbf{RNN}(R_{t+1})$ \\
Predict MCI-to-AD conversion at $t+2$: $P_{conv} = \mathbf{MLP}(R'_{t+2})$
}
\end{algorithm}
An RNN with a long short-term memory (LSTM) kernel is trained to predict the MCI-to-AD conversion on the longitudinal cohort containing both the stable and converted MCI subjects (Fig~\ref{fig:training}C). Briefly, the RNN predicts whether the conversion will happen at $t+2$ based on the brain networks $G_t$ and $G_{t+1}$. The recurrent training details are explained in Algorithm~\ref{alg1}.
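The residual bookkeeping of Algorithm~\ref{alg1} can be sketched in a few lines; the trained GNN, VAE, RNN and MLP are passed in here as hypothetical callables, since only the data flow is being illustrated:

```python
def residual(actual, predicted):
    """Longitudinal residual between the actual features and the features
    predicted from the healthy-ageing trajectory."""
    return [a - p for a, p in zip(actual, predicted)]

def conversion_pipeline(f_t, f_t1, vae_predict, rnn_step, classify):
    """One pass of Algorithm 1 for a single patient:
    residual at t+1 -> predicted residual at t+2 -> conversion score."""
    f_t1_pred = vae_predict(f_t)        # healthy-ageing forecast of F_{t+1}
    r_t1 = residual(f_t1, f_t1_pred)    # deviation from healthy ageing
    r_t2_pred = rnn_step(r_t1)          # forecast the next residual
    return classify(r_t2_pred)          # probability of conversion at t+2
```

With identity stand-ins for the networks, the score reduces to a function of the raw deviation, which makes the role of the residual explicit.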
For interpretation purposes, a graph decoder with two MLPs is trained together with the VAE: the first MLP (dimensions: 256, 512, 1024, 68$\times$32) decodes node embeddings from the brain network feature $F'_{t+n}$, and the second MLP (dimensions: 32$\times$2, 64, 128, 64, 32) decodes the features of the edge connecting two nodes from their concatenated node embeddings.
The residual between the predicted and actual graph features of brain networks represents the patient's deviation from the healthy ageing trajectories during the MCI-to-AD conversion. By comparing the predicted and actual brain networks that converted to AD, we could identify the abnormal changes that cannot be explained by the healthy ageing effect.
\subsection{Benchmarks}
To evaluate the performance of the brain networks generated using our approach, we constructed traditional diffusion-MRI-based and T1-based structural brain networks for comparison.
For dMRI-based brain networks (white matter connectivity), we performed whole-brain tractography on the independent baseline CN cohort using the Anatomically-Constrained Tractography of MRtrix \cite{tournier2019mrtrix3}. The mean FA value along the tract fibers and the fiber counts are calculated as edge weights between the nodes of the Desikan node atlas.
For T1-based edge matrices (grey matter association), we measured the grey matter volumes constrained by the node atlas using FreeSurfer \cite{fischl2012freesurfer} and calculated a covariance matrix to characterize the connectivity between brain regions.
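The covariance step can be sketched as follows; treating each entry $(i,j)$ as the sample covariance of regional volumes across subjects is our reading, as the text does not spell out the estimator:

```python
def covariance_network(volumes):
    """volumes[s][r] = grey matter volume of region r in subject s.
    Returns the regions-by-regions sample covariance matrix."""
    n_sub = len(volumes)
    n_reg = len(volumes[0])
    means = [sum(v[r] for v in volumes) / n_sub for r in range(n_reg)]
    cov = [[0.0] * n_reg for _ in range(n_reg)]
    for i in range(n_reg):
        for j in range(n_reg):
            cov[i][j] = sum(
                (v[i] - means[i]) * (v[j] - means[j]) for v in volumes
            ) / (n_sub - 1)
    return cov
```

Note that, unlike the subject-wise dMRI networks, this construction yields a single group-level connectivity matrix.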
All benchmark networks and our proposed networks were utilized in the classification of AD/CN of independent baseline cohort for evaluation using the graph encoder of the learning framework.
For the task of predicting MCI-to-AD conversion, we included two benchmarks representing a static model and a dynamic model, respectively.
The static prediction benchmark is a residual neural network (ResNet) proposed in \cite{abrol2020deep}. Note that the original study predicted the conversion in 36 months, while we are predicting the conversion in 18 months.
The interpretable temporal graph neural network (referred to as ITGNN) proposed in \cite{kim2021interpretable} was selected as the dynamic model benchmark; it consists of a GNN and an RNN. Briefly, the GNN encodes graph features of brain networks, and an LSTM learns from the graph features to predict patients' outcomes (AD/CN/MCI). For benchmarking purposes, we only included the longitudinal stable and converting MCI patients for training/testing and predicted the patient status in 18 months. In addition, we input both the grey matter covariance networks and our proposed brain networks to produce two benchmark results.
All the above models are implemented using PyTorch 1.10.0~\cite{paszke2019pytorch}. Five-fold cross-validation, the Adam optimiser, a learning rate of 0.0005, 1000 training epochs and early stopping upon convergence are applied to all models.
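The five-fold protocol amounts to partitioning the subject indices into five disjoint test folds, each used once for testing; a minimal index-splitting sketch (the actual experiments of course run the full PyTorch training loop):

```python
def k_fold_splits(n_subjects, k=5):
    """Yield (train_idx, test_idx) pairs partitioning range(n_subjects)
    into k disjoint test folds of near-equal size."""
    indices = list(range(n_subjects))
    fold_size, rem = divmod(n_subjects, k)
    start = 0
    for f in range(k):
        size = fold_size + (1 if f < rem else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        start += size
        yield train, test
```

In practice the fold assignment should be done per subject (not per scan), so that the longitudinal visits of one patient never straddle the train/test boundary.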
\section{Results}
Results in Table~\ref{tab:networks} show that our proposed method of generating brain networks achieves the highest performance for classifying AD/CN. Results in Table~\ref{tab:conversion} show that our proposed method achieves the highest prediction accuracy for the MCI-to-AD conversion in 18 months.
\begin{table}[]
\centering
\caption{Performance of brain networks for AD/CN classification}
\label{tab:networks}
\begin{tabular}{cccc}
\hline
Models & Accuracy & Sensitivity & Specificity \\ \hline
Fibre Counts networks & 0.818 & 0.841 & 0.792 \\
FA networks & 0.828 & 0.850 & 0.802 \\
Cortical volume networks & 0.761 & 0.770 & 0.750 \\
Proposed networks (edge) & 0.842 & 0.858 & 0.822 \\
Proposed networks (edge + node) & \textbf{0.861} & \textbf{0.885} & \textbf{0.833} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Performance comparison with benchmarks}
\label{tab:conversion}
\begin{tabular}{cccc}
\hline
Models & Accuracy & Sensitivity & Specificity \\ \hline
ResNet + baseline MRI & 0.802 & 0.813 & 0.794 \\
ITGNN+ cortical networks & 0.641 & 0.670 & 0.619 \\
ITGNN + proposed networks & 0.659 & 0.681 & 0.643 \\
Proposed + cortical networks & 0.779 & 0.791 & 0.770 \\
Proposed + proposed networks & \textbf{0.839} & \textbf{0.868} & \textbf{0.818} \\ \hline
\end{tabular}
\end{table}
The interpretation approach produces a residual brain network with high-dimensional features. For visualization purposes, we average the residuals of the edge features, retain the edges with the top 5$\%$ highest residuals, and present one case example in Fig~\ref{fig:interpretation}. The interpretation results suggest that the proposed methodology is capable of capturing the abnormalities in brain networks from the MCI-to-AD conversion, particularly the white matter hyperintensities related to cognitive decline.
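The visualisation step, averaging edge residuals and retaining the top 5$\%$ of edges, can be sketched as below; scoring an edge by the mean absolute value of its residual feature vector is our assumption:

```python
def top_residual_edges(edge_residuals, frac=0.05):
    """edge_residuals maps an edge id to its residual feature vector.
    Returns the edge ids whose mean absolute residual is in the top frac."""
    scores = {
        e: sum(abs(x) for x in r) / len(r) for e, r in edge_residuals.items()
    }
    n_keep = max(1, int(len(scores) * frac))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:n_keep])
```

The retained edge ids can then be mapped back onto the IIT tract pathways to produce overlays such as Fig~\ref{fig:interpretation}B.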
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.45\textwidth]{FIG3.png}
\caption{Example of interpretation. \textbf{A.} A case example of an MCI patient who converted to AD. WMH (white matter hyperintensity) is marked with a red arrow. \textbf{B.} Distribution of the tracts corresponding to the top $5\%$ residuals. The colorbar indicates the number of edges crossing each voxel.}
\label{fig:interpretation}
\end{figure}
\section{Discussion and conclusion}
This study proposes an approach to construct structural brain networks with imaging representations as node and edge features, together with an approach to predict MCI-to-AD conversion based on the neuroscience knowledge that AD patients tend to deviate from healthy ageing trajectories. The proposed method outperforms the benchmark methods. In addition, the interpretation suggests that the proposed method is sensitive to abnormal structural changes in the brain. Possible future improvements include integrating the separate training stages and introducing quantitative model interpretation. Overall, the proposed method shows promise to aid prognosis and risk assessment.
\bibliographystyle{splncs04}
We present a discontinuous Galerkin (DG) approximation of the linearized vibrations of an elastic structure. In many applications, the displacement field is not necessarily the variable of primary interest. We consider here the dual-mixed formulation of the elasticity eigenproblem because it delivers a direct finite element approximation of the Cauchy stress tensor and permits one to deal safely with nearly incompressible materials.
A mixed finite element approximation of the elasticity eigenvalue problem with reduced symmetry has been analyzed in \cite{MMR}. It consists of a formulation that only maintains the stress tensor as a primary unknown, besides the rotation, whose role is the weak imposition of the symmetry restriction. It is shown that a discretization based on the lowest-order Arnold-Falk-Winther element provides a correct spectral approximation and quasi-optimal asymptotic error estimates for the eigenvalues and eigenfunctions.
The ability of DG methods to handle $hp$-adaptive strategies efficiently makes them suitable for the numerical simulation of physical systems related to elastodynamics. Our aim here is to introduce an interior penalty discontinuous Galerkin version of the H(div)-conforming finite element space employed in \cite{MMR}. The $k^{th}$-order method amounts to approximating the Cauchy stress tensor and the rotation by discontinuous finite element spaces of degree $k$ and $k-1$, respectively. We point out that an H(curl)-based interior penalty discontinuous Galerkin method has also been introduced in \cite{BuffaPerugia} for the Maxwell eigensystem. The DG approximation we are considering here may be regarded as its counterpart in the H(div) setting. As in \cite{BuffaPerugia}, our analysis requires conforming meshes, but the DG method still permits one to employ different polynomial orders in the same triangulation. A further advantage of this DG scheme is that it allows one to implement high-order elements in a mixed formulation by using standard shape functions. Let us remark that this DG method has also been analyzed in \cite{Antonietti} for the Laplace operator.
It is well known that the underlying source operator corresponding to mixed formulations is generally not compact. In our case, this operator admits a non-physical zero eigenvalue whose eigenspace is infinite dimensional. It is then essential to use a scheme that is safe from the pollution that may appear in the form of spurious eigenvalues interspersed among the physically relevant ones. It turns out (cf. \cite{ActaBoffi, BoffiBrezziGastaldi(a)}) that, for mixed eigenvalue problems, the conditions guaranteeing the convergence of the source problem do not necessarily ensure a correct spectral approximation (as happens for compact operators \cite{BO}).
It has been shown in \cite{BuffaPerugia} that DG methods can also benefit from the general theory developed in \cite{DNR1,DNR2} to deal with the spectral numerical analysis of non-compact operators. We follow here the same strategy, combined with techniques from \cite{MMR, MMT}, to prove that our numerical scheme is spurious free. We also establish asymptotic error estimates for the eigenvalues and eigenfunctions. We treat with special care the analysis of the limit problem obtained when the Lam\'e coefficient tends to infinity.
We end this section with some of the notations that we will use below. Given
any Hilbert space $V$, let $V^n$ and $V^{\nxn}$ denote, respectively,
the space of vectors and tensors of order $n$ $(n= 2, 3)$ with
entries in $V$. In particular, $\bI$ is the identity matrix of
$\R^{\nxn}$ and $\mathbf{0}$ denotes a generic null vector or tensor.
Given $\btau:=(\tau_{ij})$ and $\bsig:=(\sigma_{ij})\in\R^{\nxn}$,
we define as usual the transpose tensor $\btau^{\t}:=(\tau_{ji})$,
the trace $\tr\btau:=\sum_{i=1}^n\tau_{ii}$, the deviatoric tensor
$\btau^{\tD}:=\btau-\frac{1}{n}\left(\tr\btau\right)\bI$, and the
tensor inner product $\btau:\bsig:=\sum_{i,j=1}^n\tau_{ij}\sigma_{ij}$.
Let $\O$ be a polyhedral Lipschitz bounded domain of $\R^n$ with
boundary $\DO$. For $s\geq 0$, $\norm{\cdot}_{s,\O}$ stands indistinctly
for the norm of the Hilbertian Sobolev spaces $\HsO$, $\HsO^n$ or
$\HsO^{\nxn}$, with the convention $\H^0(\O):=\LO$. We also define for
$s\geq 0$ the Hilbert space
$\HsdivO:=\set{\btau\in\HsO^{\nxn}:\ \bdiv\btau\in\HsO^n}$, whose norm
is given by $\norm{\btau}^2_{\HsdivO}
:=\norm{\btau}_{s,\O}^2+\norm{\bdiv\btau}^2_{s,\O}$ and denote
$\HdivO:={\H^0(\mathbf{div},\O)}$.
Henceforth, we denote by $C$ generic constants independent of the discretization
parameter, which may take different values at different places.
\section{The model problem}\label{section:2}
Let $\O\subset \R^n$ ($n=2,3$) be an open bounded Lipschitz
polygon/polyhedron representing an elastic body. We denote by $\bn$ the outward unit normal
vector to $\DO$ and assume that $\DO=\Gamma_D\cup\Gamma_N$, with $\textrm{int}(\Gamma_D)\cap \mathrm{int}(\Gamma_N) = \emptyset$.
The solid is supposed to be isotropic
and linearly elastic with mass density $\rho$ and Lam\'e constants $\mu$
and $\lambda$. We assume that the structure is fixed at $\Gamma_D\neq \emptyset$ and free
of stress on $\Gamma_N$.
We can combine the constitutive law
\begin{equation*}\label{constitutive}
\cC^{-1}\bsig=\beps(\bu) \qin\O,
\end{equation*}
and the equilibrium equation
\begin{equation}\label{motion}
\omega^2 \bu = \rho^{-1}\bdiv \bsig \qin \O,
\end{equation}
to eliminate either the displacement field $\bu$ or the Cauchy stress tensor $\bsig$ from the global spectral
formulation of the elasticity problem. Here,
$\beps(\bu):=\frac{1}{2}[\nabla\bu+(\nabla\bu)^{\t}]$ is the linearized strain tensor, and
$\cC:\ \R^{\nxn}\to\R^{\nxn}$ is the Hooke operator, which is given in terms of the Lam\'e coefficients
$\lambda$ and $\mu$ by
\[
\cC\btau
:=\lambda\left(\tr\btau\right)\bI + 2\mu\btau \qquad\forall\,\btau \in \R^{\nxn}.
\]
Opting for the elimination of the displacement $\bu$ and maintaining the stress tensor
$\bsig$ as a main variable leads to the following dual mixed formulation
of the elasticity eigenproblem:
Find $\bsig:\O\to \R^{\nxn}$ symmetric, $\br:\O\to \R^{\nxn}$ skew symmetric
and $\omega\in \mathbb{R}$ such that,
\begin{equation}\label{modelPb}
\begin{array}{rcll}
-\nabla\left(\rho^{-1} \bdiv\bsig \right) &=& \omega^2 \left( \cC^{-1}\bsig + \br \right) & \qin\O,
\\
\bdiv\bsig &=&\0 & \qon\Gamma_D,
\\
\bsig\bn&=&\0 & \qon\Gamma_N.
\end{array}
\end{equation}
We notice that the additional variable $\br:=\frac{1}{2}\left[\nabla\bu-(\nabla\bu)^{\t}\right]$ is the rotation.
It acts as a Lagrange multiplier for the symmetry restriction. We also point out that
the displacement can be recovered and also post-processed at the discrete level by using identity \eqref{motion}.
\medskip
Taking into account that the Neumann boundary condition becomes essential in the mixed formulation, we consider the closed subspace $\bcW$ of $\HdivO$ given by
\[
\bcW:=\left\{\btau\in \HdivO:\quad \btau\bn=\0\text{ on }\Gamma_N\right\}.
\]
The rotation $\br$ will be sought in the space
\[
\bcQ:=\set{\bs\in\LO^{\nxn}:\ \bs^{\t}=-\bs}.
\]
We introduce the symmetric bilinear forms
\[
\B{\sigmar, \taus}:= \int_{\O} \cC^{-1}\bsig:\btau + \int_{\O}\br:\btau + \int_{\O}\bs:\bsig
\]
and
\[
\A{\sigmar, \taus} := \int_{\O}\rho^{-1} \bdiv \bsig \cdot \bdiv \btau + \B{\sigmar, \taus}
\]
and denote the Hilbertian product norm on $\HdivO\times \L^2(\O)^{\nxn}$ by
\[
\norm{\taus}^2 := \norm{\btau}^2_{\HdivO} + \norm{\bs}^2_{0,\O}.
\]
The variational formulation of the eigenvalue problem \eqref{modelPb} is given as follows
in terms of $\kappa:=1 + \omega^2$ (see \cite{MMR} for more details):
Find $\kappa \in\R$ and $\0\neq (\bsig,\br)\in\bcW\times \bcQ$ such that
\begin{equation}\label{varForm}
\A{(\bsig,\br),(\btau,\bs)} = \kappa \, \B{(\bsig,\br),(\btau,\bs)}\quad \forall (\btau,\bs)\in \bcW\times \bcQ.
\end{equation}
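For the reader's convenience, the shift $\kappa = 1+\omega^2$ can be traced directly: testing the first equation of \eqref{modelPb} against $\btau\in\bcW$, integrating by parts and using the boundary conditions $\bdiv\bsig=\0$ on $\Gamma_D$ and $\btau\bn=\0$ on $\Gamma_N$ yields
\[
\int_{\O}\rho^{-1}\bdiv\bsig\cdot\bdiv\btau
 = \omega^2\left(\int_{\O}\cC^{-1}\bsig:\btau + \int_{\O}\br:\btau\right)
 \qquad \forall\,\btau\in\bcW.
\]
Since $\int_{\O}\bs:\bsig=0$ for every skew-symmetric $\bs\in\bcQ$ (the stress tensor being symmetric), the right-hand side equals $\omega^2\,\B{(\bsig,\br),(\btau,\bs)}$, and adding $\B{(\bsig,\br),(\btau,\bs)}$ to both sides of this identity gives \eqref{varForm} with $\kappa=1+\omega^2$.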
We notice that the bilinear form
\[
(\bsig, \btau)_{\cC, \bdiv} := \int_{\O} \rho^{-1}\bdiv \bsig \cdot \bdiv \btau + \int_{\O} \cC^{-1} \bsig:\btau
\]
also defines an inner product on $\bcW$. Moreover, the following well-known result establishes that the norm
induced by $(\cdot, \cdot)_{\cC,\bdiv}$ is equivalent to $\norm{\cdot}_{\HdivO}$ uniformly in the Lam\'e coefficient $\lambda$.
\begin{prop}\label{normEquiv}
There exist constants $c_2\geq c_1>0$ independent of $\lambda$ such that
\[
c_1 \norm{\btau}_{\HdivO} \leq \norm{\btau}_{\cC,\bdiv} \leq c_2 \norm{\btau}_{\HdivO}\quad \forall \btau \in \bcW,
\]
where $\norm{\btau}_{\cC,\bdiv}:= \sqrt{(\btau,\btau)_{\cC,\bdiv}}$.
\end{prop}
\begin{proof}
The bound from above follows immediately from the fact that
\begin{equation}
\label{invcCop}
\int_{\O}\cC^{-1}\bsig:\btau = \frac{1}{2\mu}\int_{\O}\bsig^{\tD}:\btau^{\tD} +\frac{1}{n(n\lambda + 2\mu)} \int_{\O}(\tr\bsig)(\tr\btau)
\end{equation}
is bounded by a constant independent of $\lambda$. The left inequality may be found, for example, in \cite[Lemma 2.1]{MMR}.
\end{proof}
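Identity \eqref{invcCop} follows from inverting the Hooke operator pointwise. Taking the trace and the deviatoric part of $\cC\btau = \lambda(\tr\btau)\bI + 2\mu\btau$ gives
\[
\tr(\cC\btau) = (n\lambda+2\mu)\,\tr\btau
\qquad\text{and}\qquad
(\cC\btau)^{\tD} = 2\mu\,\btau^{\tD},
\]
so that
\[
\cC^{-1}\bsig = \frac{1}{2\mu}\,\bsig^{\tD} + \frac{1}{n(n\lambda+2\mu)}\,(\tr\bsig)\,\bI
\qquad\forall\,\bsig\in\R^{\nxn};
\]
identity \eqref{invcCop} is the corresponding expression for $\int_{\O}\cC^{-1}\bsig:\btau$, since $\bsig^{\tD}:\btau = \bsig^{\tD}:\btau^{\tD}$ and $(\tr\bsig)\,\bI:\btau = (\tr\bsig)(\tr\btau)$.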
As a consequence of Proposition \ref{normEquiv},
there exists a constant $M>0$ independent of $\lambda$ such that
\begin{equation}\label{boundA}
\left|\A{\sigmar, \taus}\right| \leq M \, \norm{\sigmar} \norm{\taus} \quad \forall \sigmar, \taus \in \bcW\times \bcQ.
\end{equation}
\begin{prop}\label{infsupA-cont}
There exists a constant $\alpha>0$, depending on $\rho$, $\mu$ and $\O$ (but not on $\lambda$), such that
\begin{equation}\label{infsupa}
\sup_{\taus\in \bcW\times \bcQ} \frac{\A{\sigmar, \taus}}{\norm{\taus}} \geq \alpha \norm{\sigmar}\quad \forall \sigmar \in
\bcW\times \bcQ.
\end{equation}
\end{prop}
\begin{proof}
It follows from Proposition \ref{normEquiv} that
\begin{equation*}\label{elip0}
\A{(\btau, \mathbf{0}), (\btau, \mathbf{0})} = (\btau,\btau)_{\cC,\bdiv}
\geq C_1^2 \norm{\btau}^2_{\HdivO}
\qquad\forall\btau\in\bcW,
\end{equation*}
with $C_1>0$ independent of $\lambda$.
On the other hand, there exists a constant $\beta>0$ depending only on $\O$
(see, for instance, \cite{BoffiBrezziFortin}) such that
\begin{equation}\label{inSupbeta}
\sup_{\btau\in \bcW} \frac{\int_{\O}\bs:\btau}{\norm{\btau}_{\HdivO}} \geq \beta \norm{\bs}_{0,\O} \qquad
\forall \bs\in \bcQ.
\end{equation}
Consequently, the Babu\v{s}ka-Brezzi theory
shows that, for any bounded linear form $L\in \mathcal{L}(\bcW\times \bcQ)$, the
problem: find $\sigmar\in \bcW\times \bcQ$ such that
\[
\A{\sigmar, \taus} = L\big(\btau,\bs\big)\qquad \forall \taus \in \bcW\times \bcQ
\]
is well-posed, which proves \eqref{infsupa}.
\end{proof}
We deduce from Proposition \ref{infsupA-cont} and from the symmetry of
$A(\cdot, \cdot)$ that the operator $\bT: [\L^2(\O)^{\nxn}]^2 \to \bcW\times \bcQ$
defined for any $(\bF, \bg) \in [\L^2(\O)^{\nxn}]^2$, by
\begin{equation}\label{charcT}
\A{\bT(\bF, \bg), \taus} = \B{(\bF, \bg), \taus} \quad \forall \taus \in \bcW\times \bcQ
\end{equation}
is well-defined and symmetric with respect to $A(\cdot, \cdot)$. Moreover, there exists a constant $C>0$ independent of
$\lambda$ such that
\begin{equation}\label{bT}
\norm{\bT (\bF, \bg)} \leq C \norm{(\bF, \bg)}_{0,\O}\quad \forall (\bF, \bg) \in [\L^2(\O)^{\nxn}]^2.
\end{equation}
It is clear that $(\kappa,\sigmar)$
is a solution of \eqref{varForm} if and only if
$\left( \eta=\frac{1}{\kappa}, \sigmar\right)$ is an eigenpair for $\bT$. Let
\begin{equation}\label{K}
\bcK:=\set{\btau\in\bcW:\ \bdiv\btau=\0\ \ \mbox{in }\O}.
\end{equation}
{}From the definition of $\bT$, it is clear that
$\bT|_{\bcK\times\bcQ}:\bcK\times\bcQ\longrightarrow\bcK\times\bcQ$
reduces to the identity. Thus, $\eta=1$ is an eigenvalue of $\bT$ with eigenspace $\bcK\times \bcQ$. We introduce the orthogonal subspace to $\bcK\times\bcQ$ in
$\bcW\times\bcQ$ with respect to the bilinear form $B$,
\begin{equation*}
[\bcK\times\bcQ]^{\bot_{B}}
:=\left\{ (\bsig,\br)\in\bcW\times\bcQ:
\ \B{ (\bsig,\br), (\btau,\bs)}=0 \quad \forall (\btau,\bs)\in\bcK\times\bcQ\right\}.
\end{equation*}
\begin{lemma}
\label{L1}
The subspace $[\bcK\times\bcQ]^{\bot_{B}}$ is invariant for
$\bT$, i.e.,
\begin{equation*}\label{Tinvariant}
\bT([\bcK\times\bcQ]^{\bot_{B}})
\subset[\bcK\times\bcQ]^{\bot_{B}}.
\end{equation*}
Moreover, we have the direct and stable decomposition
\begin{equation}\label{split0}
\bcW\times\bcQ = [\bcK\times\bcQ]\oplus[\bcK\times\bcQ]^{\bot_{B}}.
\end{equation}
\end{lemma}
\begin{proof}
See Lemma 3.3 and Lemma 3.4 of \cite{MMR}.
\end{proof}
We deduce from Lemma \ref{L1} that there exists a unique projection $\bP:\, \bcW\times\bcQ\to \bcW\times\bcQ$
with range $[\bcK\times\bcQ]^{\bot_{B}}$ and kernel $\bcK\times\bcQ$
associated to the splitting \eqref{split0}.
Let us consider the elasticity problem posed in
$\O$ with a volume load in $\L^2(\O)^n$ and with homogeneous Dirichlet and Neumann boundary conditions on $\Gamma_D$ and $\Gamma_N$, respectively. According to \cite{D,grisvard}, there exists $\ws\in(0,1)$ that depends on $\O$, $\lambda$ and $\mu$ such that the displacement field that solves this problem belongs to $\H^{1+s}(\O)^n$ for all $s\in (0,\ws)$.
The following result shows that $\bP$ and $\bT\circ\bP$ are regularizing operators.
\begin{lemma}\label{reg}
For all $s\in (0, \ws)$,
$
\bP(\bcW \times \bcQ) \subset \HsO^{\nxn}\times \HsO^{\nxn}$ and
$\bT(\bP(\bcW \times \bcQ)) \subset \{(\btau,\bs)\in \HsO^{\nxn}\times \HsO^{\nxn}:\, \bdiv \btau \in \H^1(\O)^n\}$.
Moreover, there exists a constant $C>0$ such that
\begin{equation}\label{reg1}
\norm{\bP \taus}_{\HsO^{\nxn}\times \HsO^{\nxn}} \leq C \norm{\bdiv \btau}_{0,\O}\quad \forall \taus \in \bcW\times \bcQ
\end{equation}
and
\begin{equation}\label{reg2}
\norm{\bT\circ \bP \taus}_{\HsdivO\times \HsO^{\nxn}} \leq C \norm{\bdiv \btau}_{0,\O}\quad \forall \taus \in \bcW\times \bcQ.
\end{equation}
\end{lemma}
\begin{proof}
Estimate \eqref{reg1} is proved in \cite[Lemma 3.2]{MMR} and \eqref{reg2} follows as a consequence of \eqref{reg1}, see \cite[Proposition~3.5]{MMR}.
\end{proof}
We point out that, in principle, the exponent $\ws$ and the constant $C$ in \eqref{reg1} depend on the Lam\'e coefficient $\lambda$. However, we know that \eqref{reg1} also holds true when $\lambda=+\infty$ (see the Appendix), so it is natural to expect \eqref{reg1} to be satisfied uniformly in $\lambda$. To the best of the authors' knowledge, however, such a result is not available in the literature. For this reason, from now on we make the following assumption.
\begin{assumption}\label{assumpt1}
There exist $\ws\in (0,1)$ and $\widehat{C}_0>0$ independent of $\lambda$ such that
\begin{equation*}\label{as1}
\norm{\bP \taus}_{\H^{s}(\O)^{\nxn}\times \H^{s}(\O)^{\nxn}} \leq \widehat{C}_0 \norm{\bdiv \btau}_{0,\O}\quad \forall \taus \in \bcW\times \bcQ,\quad \forall s\in (0, \ws).
\end{equation*}
\end{assumption}
This would immediately imply the existence of $\widehat{C}_1>0$ independent of $\lambda$ such that
\begin{equation*}\label{assumptreg2}
\norm{\bT\circ \bP \taus}_{\HsdivO\times \HsO^{\nxn}} \leq \widehat{C}_1 \norm{\bdiv \btau}_{0,\O}\quad \forall \taus \in \bcW\times \bcQ,\quad \forall s\in (0, \ws).
\end{equation*}
The next result gives the spectral characterization for the solution operator $\bT$.
\begin{prop}\label{specT}
The spectrum $\sp(\bT)$ of $\bT$ decomposes as follows
\[
\sp(\bT) = \set{0, 1} \cup \set{\eta_k}_{k\in \mathbb{N}}
\]
where $\set{\eta_k}_k\subset (0,1)$ is a real sequence of
finite-multiplicity eigenvalues of $\bT$ which converges to 0. The ascent of each of
these eigenvalues is $1$ and the corresponding eigenfunctions lie in $\bP(\bcW \times \bcQ)$.
Moreover, $\eta=1$ is an infinite-multiplicity eigenvalue of $\bT$ with associated eigenspace
$\bcK\times\bcQ$ and $\eta=0$ is not
an eigenvalue.
\end{prop}
\begin{proof}
See \cite[Theorem 3.7]{MMR}.
\end{proof}
We end this section by providing a bound of the resolvent $\big( z\bI - \bT \big)^{-1}$.
\begin{prop}\label{specT1}
If $z \notin \sp(\bT)$,
there exists a constant $C>0$ independent of $\lambda$ and $z$ such that
\begin{equation*}\label{resolvent}
\norm{\big(z\bI-\bT\big) ( \bsig, \br)} \ge\, C \,
\dist\big(z, \sp(\bT) \big)\, \norm{( \bsig, \br)} \quad \forall ( \bsig, \br)\in \bcW\times \bcQ,
\end{equation*}
where $\dist\big(z, \sp(\bT) \big)$ represents the distance between $z$ and
the spectrum of $\bT$ in the complex plane, which in principle depends on $\lambda$.
\end{prop}
\begin{proof}
See Proposition 2.4 in \cite{MMT}.
\end{proof}
\section{A discontinuous Galerkin discretization}\label{section:3}
We consider shape regular affine meshes $\mathcal{T}_h$ that subdivide the domain $\bar \Omega$ into
triangles/tetrahedra $K$ of diameter $h_K$. The parameter $h:= \max_{K\in \cT_h} \{h_K\}$
represents the mesh size of $\cT_h$. Hereafter, given an integer $m\geq 0$ and a domain
$D\subset \mathbb{R}^n$, $\cP_m(D)$ denotes the space of polynomials of degree at most $m$ on $D$.
We say that a closed subset $F\subset \overline{\Omega}$ is an interior edge/face if $F$ has a positive $(n-1)$-dimensional
measure and if there are distinct elements $K$ and $K'$ such that $F =\bar K\cap \bar K'$. A closed
subset $F\subset \overline{\Omega}$ is a boundary edge/face if
there exists $K\in \cT_h$ such that $F$ is an edge/face of $K$ and $F = \bar K\cap \partial \Omega$.
We consider the set $\mathcal{F}_h^0$ of interior edges/faces and the set $\mathcal{F}_h^\partial$ of boundary edges/faces.
We assume that the boundary mesh $\mathcal{F}_h^\partial$ is compatible with the partition $\DO = \G_{D} \cup \G_{N}$, i.e.,
\[
\bigcup_{F\in \mathcal{F}_h^D} F = \G_{D} \qquad \text{and} \qquad \bigcup_{F\in \mathcal{F}_h^N} F = \G_N,
\]
where $\mathcal{F}_h^D:= \set{F\in \mathcal{F}_h^\partial; \quad F\subset \G_D}$ and
$\mathcal{F}_h^N:= \set{F\in \mathcal{F}_h^\partial; \quad F\subset \G_N}$.
We denote
\[
\mathcal{F}_h := \mathcal{F}_h^0\cup \mathcal{F}_h^\partial\qquad \text{and} \qquad \mathcal{F}^*_h:= \mathcal{F}_h^{0} \cup \mathcal{F}_h^{N},
\]
and for any element $K\in \cT_h$, we introduce the set
\[
\mathcal{F}(K):= \set{F\in \mathcal{F}_h;\quad F\subset \partial K}
\]
of edges/faces composing the boundary of $K$.
The space of piecewise polynomial functions of degree at most $m$ relatively to $\cT_h$ is denoted by
\[
\cP_m(\cT_h) :=\set{ v\in L^2(\O); \quad v|_K \in \cP_m(K),\quad \forall K\in \cT_h }.
\]
For any $k\geq 1$, we consider the finite element spaces
\[
\bcW_h := \cP_k(\cT_h)^{\nxn}\qquad
\bcW_h^c := \bcW_h \cap \bcW
\qquad \text{and} \qquad \bcQ_h := \cP_{k-1}(\cT_h)^{\nxn} \cap \bcQ.
\]
Let us now recall some well-known properties of the Brezzi-Douglas-Marini (BDM)
mixed finite element \cite{BDM}. For $t>1/2$, the tensorial version of the BDM-interpolation operator
$\Pi_h: \H^t(\O)^{\nxn} \to \bcW_h^c$
satisfies the following classical error estimate, see \cite[Proposition 2.5.4]{BoffiBrezziFortinBook},
\begin{equation}\label{asymp0}
\norm{\btau - \Pi_h \btau}_{0,\O} \leq C h^{\min(t, k+1)} \norm{\btau}_{t,\O} \qquad \forall \btau \in \H^t(\O)^{\nxn}, \quad t>1/2.
\end{equation}
For less regular tensorial fields we also have the following error estimate
\begin{equation}\label{asymp00}
\norm{\btau - \Pi_h \btau}_{0,\O} \leq C h^t (\norm{\btau}_{t,\O} + \norm{\btau}_{\HdivO}) \quad \forall \btau \in \H^t(\O)^{\nxn}\cap \HdivO, \quad t\in (0, 1/2].
\end{equation}
Moreover, thanks to the commutativity property, if $\bdiv \btau \in \H^t(\O)^{n}$, then
\begin{equation}\label{asympDiv}
\norm{\bdiv (\btau - \Pi_h \btau) }_{0,\O} = \norm{\bdiv \btau - \mathcal R_h \bdiv \btau }_{0,\O}
\leq C h^{\min(t, k)} \norm{\bdiv\btau}_{t,\O},
\end{equation}
where $\mathcal R_h$ is the $\LO^n$-orthogonal projection onto $\cP_{k-1}(\cT_h)^n$. Finally,
we denote by $\mathcal S_h:\ \bcQ\to\bcQ_h$ the orthogonal
projector with respect to the $\LO^{\nxn}$-norm.
It is well-known that, for any $t>0$, we have
\begin{equation}\label{asymQ}
\norm{\br-\mathcal S_h\br}_{0,\O}
\leq C h^{\min(t, k)} \norm{\br}_{t,\O}
\qquad\forall\br\in\H^t(\O)^{\nxn}\cap\bcQ.
\end{equation}
For the analysis we need to decompose adequately the space $\bcW^c_h\times\bcQ_h$.
We consider,
\[
\bcK_h = \left\{\btau \in \bcW^c_h;\quad \bdiv \btau = 0 \right\} \subset \bcK.
\]
\begin{lemma}\label{Ph}
There exists a projection $\bP_h:\, \bcW_h^c \times \bcQ_h \to \bcW_h^c \times \bcQ_h$ with kernel
$\bcK_h \times \bcQ_h$ such that, for each $s\in (0,\ws)$, there is a constant $C$ independent of $h$ and $\lambda$ satisfying
$$
\norm{(\bP - \bP_h)\sigmarh} \leq C\, h^s \norm{\bdiv \bsig_h}_{0,\O} \quad \forall \sigmarh\in
\bcW_h^c \times \bcQ_h.
$$
\end{lemma}
\begin{proof}
See the proof of estimate (ii) of Lemma 4.2 in \cite{MMR}.
\end{proof}
For any $t\geq 0$, we consider the broken Sobolev space
\[
\H^t(\cT_h):=
\set{v \in \L^2(\O); \quad v|_K\in \H^t(K)\quad \forall K\in \cT_h}.
\]
For each $\bv:=\set{\bv_K}\in \H^t(\cT_h)^n$ and
$\btau:= \set{\btau_K}\in \H^t(\cT_h)^{\nxn}$
the components $\bv_K$ and $\btau_K$ represent the restrictions $\bv|_K$ and $\btau|_K$.
When no confusion arises, the restrictions of these functions will be written
without any subscript. We will also need the following space, defined on the skeleton $\mathcal{F}_h$ of the triangulation $\cT_h$:
\[
\L^2(\mathcal{F}_h):= \prod_{F\in \mathcal{F}_h} \L^2(F).
\]
Similarly, the components $\chi_F$
of $\chi := \set{\chi_F}\in \L^2(\mathcal{F}_h)$
coincide with the restrictions $\chi|_F$ and we denote
\[
\int_{\mathcal{F}_h} \chi := \sum_{F\in \mathcal{F}_h} \int_F \chi_F\quad \text{and}\quad
\norm{\chi}^2_{0,\mathcal{F}_h}:= \int_{\mathcal{F}_h} \chi^2,
\qquad
\forall \chi\in \L^2(\mathcal{F}_h).
\]
Similarly, $\norm{\chi}^2_{0,\mathcal{F}^*_h}:= \sum_{F\in \mathcal{F}^*_h}\int_{F} \chi_F^2$ for all $\chi\in \L^2(\mathcal{F}^*_h):= \prod_{F\in \mathcal{F}^*_h} \L^2(F)$.
From now on, $h_\mathcal{F}\in \L^2(\mathcal{F}_h)$ is the piecewise constant function
defined by $h_\mathcal{F}|_F := h_F$ for all $F \in \mathcal{F}_h$ with $h_F$ denoting the
diameter of edge/face $F$.
Given a vector valued function $\bv\in \H^t(\cT_h)^n$, with $t>1/2$,
we define averages $\mean{\bv}\in \L^2(\mathcal{F}_h)^n$ and jumps $\jump{\bv}\in \L^2(\mathcal{F}_h)$
by
\[
\mean{\bv}_F := (\bv_K + \bv_{K'})/2 \quad \text{and} \quad \jump{\bv}_F := \bv_K \cdot\boldsymbol{n}_K + \bv_{K'}\cdot\boldsymbol{n}_{K'}
\quad \forall F \in \mathcal{F}(K)\cap \mathcal{F}(K'),
\]
where $\boldsymbol{n}_K$ is the outward unit normal vector to $\partial K$. On the boundary of $\O$ we use the following
conventions for averages and jumps:
\[
\mean{\bv}_F := \bv_K \quad \text{and} \quad \jump{\bv}_F := \bv_K \cdot\boldsymbol{n}
\quad \forall F \in \mathcal{F}(K)\cap \DO.
\]
Similarly, for matrix valued functions $\btau\in \H^t(\cT_h)^{\nxn}$, we define $\mean{\btau}\in \L^2(\mathcal{F}_h)^{\nxn}$ and
$\jump{\btau}\in \L^2(\mathcal{F}_h)^n$ by
\[
\mean{\btau}_F := (\btau_K + \btau_{K'})/2 \quad \text{and} \quad \jump{\btau}_F :=
\btau_K \boldsymbol{n}_K + \btau_{K'}\boldsymbol{n}_{K'}
\quad \forall F \in \mathcal{F}(K)\cap \mathcal{F}(K')
\]
and on the boundary of $\Omega$ we set
\[
\mean{\btau}_F := \btau_K \quad \text{and} \quad \jump{\btau}_F :=
\btau_K \boldsymbol{n}
\quad \forall F \in \mathcal{F}(K)\cap \DO.
\]
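As a quick illustration (with hypothetical trace values, not tied to any particular mesh), the interior-face averages and normal jumps defined above can be computed as follows:

```python
import numpy as np

# Hypothetical traces of a vector field v on an interior face F shared by
# elements K and K'; n_K and n_Kp are the opposite outward unit normals.
v_K = np.array([1.0, 2.0])
v_Kp = np.array([3.0, -1.0])
n_K = np.array([1.0, 0.0])
n_Kp = -n_K

mean_v = (v_K + v_Kp) / 2.0          # the average {v}_F, vector valued
jump_v = v_K @ n_K + v_Kp @ n_Kp     # the jump [v]_F, a scalar normal jump

assert np.allclose(mean_v, [2.0, 0.5])
assert abs(jump_v - (-2.0)) < 1e-12
```

Note that a matrix-valued field would follow the same pattern, with the jump $\btau_K \boldsymbol{n}_K + \btau_{K'}\boldsymbol{n}_{K'}$ being vector valued.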
Given $\btau \in \bcW_h$ we define $\bdiv_h \btau \in \L^2(\O)^n$ by
$
\bdiv_h \btau|_{K} = \bdiv (\btau|_K)$ for all $K\in \cT_h
$ and
endow $\bcW(h) := \bcW + \bcW_h$ with the seminorm
\[
|\btau|^2_{\bcW(h)} := \norm{\bdiv_h \btau}^2_{0,\O} + \norm{h_{\mathcal{F}}^{-1/2} \jump{\btau}}^2_{0,\mathcal{F}^*_h}
\]
and the norm
\[
\norm{\btau}^2_{\bcW(h)} := |\btau|^2_{\bcW(h)} + \norm{\btau}^2_{0,\O}.
\]
For the sake of simplicity, we will also use the notation
\[
\norm{(\btau, \bs)}^2_{DG} : = \norm{\btau}^2_{\bcW(h)} + \norm{\bs}^2_{0,\O}.
\]
The following result will be used in the sequel to ultimately derive a method free of spurious modes. Since according to Proposition \ref{specT} the spectrum of $\bT$ lies in the unit disk $\mathbb{D}:=\{ z\in\mathbb{C}:\, |z|\leq 1\}$, we restrict our attention to this subset of the complex plane.
\begin{lemma}\label{TDG}
There exists a constant $C>0$, independent of $h$ and $\lambda$, such that, for all $z \in\mathbb{D}\setminus \sp(\bT)$, there holds
\[
\norm{(z \bI - \bT) \taus }_{DG} \geq C\dist\big(z,\sp(\bT)\big)|z|\, \norm{\taus}_{DG} \quad \forall \taus \in \bcW(h)\times \bcQ.
\]
\end{lemma}
\begin{proof}
We introduce
\[
(\bsig^*, \br^*):= \bT \taus \in \bcW\times \bcQ
\]
and notice that
\[
(z \bI - \bT)(\bsig^*, \br^*) = \bT (z \bI - \bT) \taus.
\]
By virtue of Proposition \ref{specT} and the boundedness of $\bT:\, [\L^2(\O)^{\nxn}]^2 \to \bcW\times \bcQ$ we have that
\begin{multline*}
C\dist\big(z,\sp(\bT)\big) \norm{(\bsig^*, \br^*)} \leq \norm{(z \bI - \bT)(\bsig^*, \br^*)} \leq
\norm{\bT (z \bI - \bT) \taus}\\ \leq \norm{\bT} \norm{(z \bI - \bT) \taus }_0
\leq \norm{\bT} \norm{(z \bI - \bT) \taus }_{DG} .
\end{multline*}
Finally, by the triangle inequality,
\begin{align*}
\norm{\taus}_{DG} \leq |z|^{-1} \norm{(\bsig^*, \br^*)} +& |z|^{-1} \norm{(z \bI - \bT) \taus }_{DG}\\
\leq &|z|^{-1}\left( 1+\dfrac{\norm{\bT}}{C\dist\big(z,\sp(\bT)\big)} \right) \norm{(z \bI - \bT) \taus }_{DG}\\
\leq & |z|^{-1}\left( \dfrac{C\dist\big(z,\sp(\bT)\big)+\norm{\bT}}{C\dist\big(z,\sp(\bT)\big)} \right)\norm{(z \bI - \bT) \taus }_{DG}.
\end{align*}
Hence, assuming without loss of generality that $C\leq 1$,
\begin{equation*}
C|z|\left(\frac{\dist\big(z,\sp(\bT)\big)}{\norm{\bT}+\dist\big(z,\sp(\bT)\big)}\right)\norm{\taus}_{DG}\leq \norm{(z \bI - \bT) \taus }_{DG}.
\end{equation*}
Since $\dist\big(z,\sp(\bT)\big)\leq |z|\leq 1$ and $\|\bT\|\leq C'$ (with $C'$ independent of $\lambda$), we derive from the above estimate that
\begin{equation*}
\frac{C|z|}{1+C'} \dist\big(z,\sp(\bT)\big)\norm{\taus}_{DG} \leq \norm{(z \bI - \bT) \taus }_{DG},
\end{equation*}
and the result follows.
\end{proof}
\begin{remark}\label{roof}
If $E$ is a compact subset of $\mathbb{D} \setminus\sp(\bT)$, we deduce from Lemma \ref{TDG} that there exists a
constant $C>0$ independent of $h$ and $\lambda$ such that, for all $z\in E$,
\begin{equation*}\label{resid}
\norm{\big(z \bI - \bT \big)^{-1}}_{\mathcal{L}(\bcW(h)\times \bcQ, \bcW(h)\times \bcQ)} \leq \frac{C}{\dist\big(E,\sp(\bT)\big)|z|}.
\end{equation*}
\end{remark}
Let us now introduce the discrete counterpart of \eqref{varForm}. Given a parameter $\texttt{a}_S>0$, we introduce the symmetric bilinear form
\begin{multline*}
\Ah{\sigmar,\taus}:=
\int_{\O} \rho^{-1}\bdiv_h \bsig \cdot \bdiv_h \btau + \B{\sigmar, \taus} \\
+ \int_{\mathcal{F}^*_h} \texttt{a}_S h_{\mathcal{F}}^{-1}\, \jump{\bsig}\cdot \jump{\btau}-\int_{\mathcal{F}^*_h}
\left( \mean{\rho^{-1}\bdiv_h\bsig} \cdot \jump{\btau}
+ \mean{\rho^{-1}\bdiv_h\btau} \cdot \jump{\bsig} \right)
\end{multline*}
and consider the DG method: Find $\kappa_h\in \R$ and $0\neq\sigmarh\in \bcW_h\times \bcQ_h$ such that
\begin{equation}\label{DGshort}
\Ah{\sigmarh, (\btau_h,\bs_h)} = \kappa_h \B{\sigmarh, (\btau_h,\bs_h)}\qquad \forall (\btau_h,\bs_h)\in \bcW_h\times \bcQ_h.
\end{equation}
We notice that, as is usually the case for DG methods, the essential boundary condition is directly incorporated
within the scheme.
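In matrix form, \eqref{DGshort} is a generalized eigenvalue problem for the pencil assembled from $A_h(\cdot,\cdot)$ and $B(\cdot,\cdot)$. The following minimal sketch illustrates how such a pencil could be solved; the small random symmetric matrices (with the mass-like matrix taken positive definite) are stand-ins for the actual assembled DG matrices, which is an assumption made purely for illustration:

```python
import numpy as np
from scipy.linalg import eigh

# Stand-ins for the assembled matrices A[i, j] = A_h(phi_j, phi_i) and
# B[i, j] = B(phi_j, phi_i) over a basis {phi_i} of W_h x Q_h.
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M + M.T + 2.0 * n * np.eye(n)   # symmetric, shifted to be well conditioned
N = rng.standard_normal((n, n))
B = N + N.T + 2.0 * n * np.eye(n)   # symmetric positive definite stand-in

# Generalized eigenproblem A x = kappa_h B x, mirroring the discrete scheme.
kappas, X = eigh(A, B)

# Each computed pair satisfies the residual equation up to round-off.
for k, x in zip(kappas, X.T):
    assert np.linalg.norm(A @ x - k * (B @ x)) < 1e-8 * np.linalg.norm(B @ x)
```

The choice of `scipy.linalg.eigh` exploits the symmetry of both forms; it also returns $B$-orthonormal eigenvectors, which is convenient for post-processing eigenspaces.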
A straightforward application of the Cauchy-Schwarz inequality shows that there exists a constant $M^*>0$, independent of $h$ and $\lambda$, such that, for all $\sigmar, \taus \in \H^t(\bdiv, \cT_h)\times \bcQ$ ($t>1/2$),
\begin{equation}\label{boundA1}
\left|\Ah{\sigmar,\taus}\right| \leq M^* \norm{\sigmar}^*_{DG}\, \norm{\taus}^*_{DG},
\end{equation}
where
\[
\norm{\sigmar}^*_{DG}:= \Big( \norm{\sigmar}^2_{DG} +
\norm{h_{\mathcal{F}}^{1/2} \mean{\bdiv \bsig}}^2_{0,\mathcal{F}^*_h}
\Big)^{1/2}.
\]
Moreover, we deduce from the discrete trace inequality (see \cite{DiPietroErn})
\begin{equation}\label{discTrace}
\norm{h^{1/2}_{\mathcal{F}}\mean{v}}_{0,\mathcal{F}_h}\leq C \norm{v}_{0,\O}\quad \forall v\in \cP_k(\cT_h),
\end{equation}
that for all $\sigmar\in \H^t(\bdiv, \cT_h)\times \bcQ$ ($t>1/2$), and $(\btau_h,\bs_h) \in \bcW_h\times \bcQ_h$,
\begin{equation}\label{boundA2}
\left|\Ah{\sigmar,(\btau_h,\bs_h)}\right| \leq M_{DG} \norm{\sigmar}^*_{DG}\, \norm{(\btau_h,\bs_h)}_{DG},
\end{equation}
with $M_{DG}>0$ independent of $h$ and $\lambda$.
\section{The DG-discrete source operator}\label{section:4}
The following discrete projection operator from the DG-space $\bcW_h$ onto the $\H(\bdiv,\O)$-conforming mixed finite element space $\bcW_h^c$ is essential in the forthcoming analysis.
\begin{prop}\label{propC}
There exists a projection $\mathcal{I}_h:\, \bcW_h \to \bcW_h^c$ such that
the norm equivalence
\begin{equation}\label{equivN}
\underbar{C}\, \norm{\btau}_{\bcW(h)} \leq \Big( \norm{\mathcal{I}_h \btau}^2_{\HdivO} +
\norm{h_{\mathcal{F}}^{-1/2} \jump{\btau}}^2_{0,\mathcal{F}^*_h}
\Big)^{1/2} \leq \bar{C} \norm{\btau}_{\bcW(h)}
\end{equation}
holds true on $\bcW_h$ with constants $\underbar{C}>0$ and $\bar{C}>0$ independent of $h$. Moreover, we have that
\begin{equation}\label{L2Ph}
\norm{\bdiv_h (\btau- \mathcal{I}_h \btau)}^2_{0,\O} + \sum_{K\in \cT_h} h_K^{-2} \norm{\btau- \mathcal{I}_h \btau}^2_{0,K}
\leq C_0
\norm{h_{\mathcal{F}}^{-1/2} \jump{\btau}}^2_{0,\mathcal{F}^*_h},
\end{equation}
with $C_0>0$ independent of $h$.
\end{prop}
\begin{proof}
See \cite[Proposition 5.2]{MMT}.
\end{proof}
With the aid of this result, we can prove that the bilinear form $A_h$ satisfies the following inf-sup condition, which ensures the stability of our DG method.
\begin{prop}\label{infsupDh}
There exists a positive parameter $\textup{\texttt{a}}_S^*$ such that, for all $\textup{\texttt{a}}_S\geq \textup{\texttt{a}}_S^*$,
\begin{equation}\label{infsupABh}
\sup_{\taush\in \bcW_h\times \bcQ_h} \frac{\Ah{\sigmarh, \taush}}{\norm{\taush}_{DG}}
\geq \alpha_{DG} \norm{\sigmarh}_{DG} \quad \forall \sigmarh \in \bcW_h\times \bcQ_h
\end{equation}
with $\alpha_{DG}>0$ independent of $h$ and $\lambda$.
\end{prop}
\begin{proof}
It is shown in \cite[Proposition 3.1]{MMT} that
there exists a constant $\alpha_A^{c}>0$ independent of $h$ and $\lambda$ such that
\begin{equation*}\label{infsupA-disceq}
\sup_{\taush\in \bcW_h^c\times \bcQ_h} \frac{\A{\sigmarh, \taush}}{\norm{\taush}}
\geq \alpha_A^{c} \norm{\sigmarh}\quad \forall \sigmarh \in
\bcW_h^c\times \bcQ_h.
\end{equation*}
It follows that there exists an
operator $\Theta_h:\, \bcW^c_h\times \bcQ_h \to \bcW^c_h\times \bcQ_h$ satisfying
\begin{equation}\label{cota1}
\A{\sigmarh, \Theta_h \sigmarh} = \alpha_A^c\norm{\sigmarh}^2 \quad \text{and}\quad
\norm{\Theta_h \sigmarh} \leq \norm{\sigmarh}
\end{equation}
for all $\sigmarh\in \bcW^c_h\times \bcQ_h$.
Given $\taush\in \bcW_h\times \bcQ_h$, the decomposition $\btau_h = \btau_h^c + \tilde\btau_h$, with
$\btau_h^c := \mathcal{I}_h \btau_h$ and $\tilde\btau_h := \btau_h - \mathcal{I}_h \btau_h$, and \eqref{cota1} yield
\begin{multline}\label{split}
\Ah{\taush, \Theta_h \taushc+ \tausht}
= \alpha_A^c \norm{\taushc}^2 +\\
\Ah{\taushc, \tausht} +
\Ah{ \tausht, \Theta_h \taushc} +
\Ah{ \tausht, \tausht}.
\end{multline}
By the Cauchy-Schwarz inequality, \begin{multline*}
\Ah{ \tausht, \tausht}= \rho^{-1}\norm{\bdiv_h \tilde \btau_h}_{0,\O}^2 +
\texttt{a}_S \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h}^2 +
\int_{\O} \cC^{-1}\tilde \btau_h:\tilde \btau_h \\
- 2 \int_{\mathcal{F}^*_h} \mean{\rho^{-1}\bdiv_h \tilde \btau_h}\cdot \jump{\tilde \btau_h}
\,\, \ge\,\, \texttt{a}_S \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h}^2 \\
-2 \rho^{-1}\norm{h_\mathcal{F}^{1/2} \mean{\bdiv_h \tilde \btau_h}}_{0,\mathcal{F}^*_h} \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h}
\end{multline*}
and we deduce from \eqref{discTrace} and \eqref{L2Ph} that
\[
\Ah{ \tausht, \tausht} \geq (\texttt{a}_S -C_1 ) \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h}^2 ,
\]
with a constant $C_1$ independent of $h$ and $\lambda$.
We proceed similarly with the remaining terms on the right-hand side of \eqref{split}.
Indeed, it is straightforward to check that
\begin{multline*}
\Ah{\taushc, \tausht} \geq -\rho^{-1}\norm{\bdiv \btau_h^c}_{0,\Omega}\norm{\bdiv_h \tilde \btau_h}_{0,\Omega} - C_2
\norm{\tilde \btau_h}_{0,\Omega}(\norm{\btau_h^c}_{0,\Omega}+ \norm{\bs_h}_{0,\Omega})- \\
\rho^{-1}\norm{h_\mathcal{F}^{1/2} \mean{\bdiv \btau^c_h}}_{0,\mathcal{F}^*_h}\norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h},
\end{multline*}
and using again \eqref{discTrace} and \eqref{L2Ph} we obtain
\begin{multline*}
\Ah{\taushc, \tausht} \geq
-C_3 \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h} \norm{\taushc} \geq \\
- \frac{\alpha_A^c}{4} \norm{\taushc}^2 - C_4\norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h}^2
\end{multline*}
with $C_4>0$ independent of $h$ and $\lambda$. Similar estimates lead to
\begin{multline*}
\Ah{ \tausht, \Theta_h \taushc} \geq
-C_5 \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h} \norm{\Theta_h \taushc}
\geq \\
-C_5 \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h} \norm{ \taushc},
\end{multline*}
where the last inequality follows from \eqref{cota1}. We conclude that there exists
$C_6>0$ independent of $h$ and $\lambda$ such that
\begin{equation*}
\Ah{ \tausht, \Theta_h \taushc} \geq
- \frac{\alpha_A^c}{4} \norm{\taushc}^2 - C_6 \norm{h_\mathcal{F}^{-1/2} \jump{ \btau_h}}_{0,\mathcal{F}^*_h}^2.
\end{equation*}
We have thus shown that
\begin{equation*}
\Ah{\taush, \Theta_h \taushc+ \tausht}\ge
\frac{\alpha_A^c }{2} \norm{ \taushc}^2 + \big(\texttt{a}_S -
C_7 \big)
\norm{h_\mathcal{F}^{-1/2}\jump{ \btau_h}}_{0,\mathcal{F}^*_h}^2,
\end{equation*}
with $C_7 := C_1+ C_4+C_6$. Consequently,
if $\texttt{a}_S \geq \texttt{a}_S^*:=C_7 + \frac{\alpha_A^c}{2}$,
\begin{equation*}
\Ah{\taush, \Theta_h \taushc+ \tausht}\ge \frac{\alpha_A^c }{2} \Big( \norm{\taushc}^2 +
\norm{h_{\mathcal{F}}^{-1/2} \jump{\btau_h}}^2_{0,\mathcal{F}^*_h} \Big),
\end{equation*}
and thanks to \eqref{equivN}, we conclude that there exists $\alpha_{DG}>0$, independent of $h$ and $\lambda$, such that
\begin{equation*}
\Ah{\taush, \Theta_h \taushc+ \tausht}
\geq \alpha_{DG}
\norm{\taush }_{DG}
\Big(\norm{\Theta_h \taushc + \tausht}_{DG}\Big),
\end{equation*}
which gives \eqref{infsupABh}.
\end{proof}
In the sequel, we assume that the stabilization parameter is large enough, namely $\texttt{a}_S \geq \texttt{a}_S^*$, so that the inf-sup condition \eqref{infsupABh} is guaranteed. The first consequence of this inf-sup condition is that the operator $\bT_h: \L^2(\O)^{\nxn}\times \L^2(\O)^{\nxn} \to \bcW_h\times \bcQ_h$
characterized, for any $(\bF, \bg) \in [\L^2(\O)^{\nxn}]^2$, by
\begin{equation}\label{charcTDG}
\Ah{\bT_h(\bF, \bg), (\btau_h,\bs_h)} = \B{(\bF, \bg), (\btau_h,\bs_h)} \quad \forall (\btau_h,\bs_h)\in \bcW_h\times \bcQ_h
\end{equation}
is well-defined and symmetric with respect to $A_h(\cdot, \cdot)$, and that there exists a constant $C>0$ independent of
$\lambda$ and $h$ such that
\begin{equation}\label{bTh}
\norm{\bT_h (\bF, \bg)}_{DG} \leq C \norm{(\bF, \bg)}_{0,\O}\quad \forall (\bF, \bg) \in [\L^2(\O)^{\nxn}]^2.
\end{equation}
We observe that $(\kappa_h,\sigmarh)\in\mathbb{R}\times\bcW_h\times\bcQ_h$ is a solution of problem \eqref{DGshort} if and only if $(\mu_h,\sigmarh)$, with $\mu_h=1/(1+\kappa_h)$, is an eigenpair of $\bT_h$, i.e.
$$
\bT_h\sigmarh=\frac{1}{1+\kappa_h}\sigmarh.
$$
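The correspondence above is the Möbius transformation $\mu_h = 1/(1+\kappa_h)$. The following trivial sanity check (plain arithmetic, assuming nothing beyond the formula itself) confirms that it is inverted by $\kappa_h = 1/\mu_h - 1$ and maps $\kappa_h \geq 0$ into $(0,1]$, consistent with the spectrum of $\bT$ lying in the unit disk:

```python
def to_mu(kappa):
    # mu_h = 1 / (1 + kappa_h): the T_h-eigenvalue associated with kappa_h.
    return 1.0 / (1.0 + kappa)

def to_kappa(mu):
    # Inverse mapping: kappa_h = 1 / mu_h - 1.
    return 1.0 / mu - 1.0

for kappa in (0.5, 2.0, 10.0):
    assert abs(to_kappa(to_mu(kappa)) - kappa) < 1e-12

# kappa in [0, +inf) is sent into (0, 1].
assert 0.0 < to_mu(10.0) <= 1.0
```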
Analogously to the continuous case, we will prove that the resolvent of the discrete operator $\bT_h$ is bounded.
\begin{theorem}\label{cea}
Assume that $(\tilde \bsig, \tilde \br):=\bT(\bF, \bg)\in \H^t(\bdiv, \O)\times \H^t(\O)^{\nxn}$ for some $t>1/2$.
Then,
\begin{equation}\label{Cea}
\norm{(\bT - \bT_h)(\bF, \bg)}_{DG} \leq \left(1 + \dfrac{M_{DG}}{\alpha_{DG}}\right)
\inf_{\taush\in \bcW_h\times \bcQ_h} \norm{\bT(\bF, \bg) - \taush}^*_{DG}.
\end{equation}
Moreover, the error estimate
\begin{equation}\label{asymp}
\norm{(\bT - \bT_h)(\bF, \bg)}_{DG} \leq \, C\, h^{\min(t, k)}\,
\Big( \norm{\tilde \bsig}_{\H^t(\bdiv, \O)} + \norm{\tilde \br}_{\H^t(\O)^{\nxn}} \Big),
\end{equation}
holds true with a constant $C>0$ independent of $h$ and $\lambda$.
\end{theorem}
\begin{proof}
We first notice that the DG approximation \eqref{charcTDG} is consistent with its continuous counterpart \eqref{charcT} in the sense that
\begin{equation}\label{consistency}
\Ah{(\bT-\bT_h)(\bF, \bg), \taush} = 0 \quad \forall \taush\in \bcW_h\times \bcQ_h.
\end{equation}
Indeed, by definition,
\begin{multline}
\label{id1}
\Ah{(\tilde \bsig, \tilde \br), \taush} = \int_{\O} \rho^{-1}\bdiv\tilde\bsig \cdot \bdiv_h \btau_h + \B{(\tilde \bsig, \tilde \br), \taush}\\ -
\int_{\mathcal{F}^*_h} \mean{\rho^{-1}\bdiv\tilde\bsig} \cdot \jump{\btau_h}.
\end{multline}
It is straightforward to deduce from \eqref{charcT} that
\begin{equation}\label{you}
\nabla \left(\rho^{-1} \bdiv \tilde \bsig\right) = \cC^{-1}(\tilde \bsig - \bF) + \tilde \br - \bg\quad
\text{and} \quad
(\tilde \bsig - \tilde \bsig^\t)/2 = (\bF - \bF^\t)/2.
\end{equation}
Moreover, an integration by parts yields
\begin{multline*}
\int_{\O} \rho^{-1}\bdiv\tilde \bsig \cdot \bdiv_h \btau_h =
-\sum_{K\in \cT_h} \int_K \nabla(\rho^{-1}\bdiv \tilde \bsig): \btau_h + \sum_{K\in \cT_h} \int_{\partial K} \rho^{-1}\bdiv
\tilde \bsig \cdot \btau_h\bn_K\\
= -\sum_{K\in \cT_h} \int_K \nabla(\rho^{-1}\bdiv \tilde \bsig): \btau_h + \int_{\mathcal{F}^*_h} \mean{\rho^{-1}\bdiv
\tilde \bsig}\cdot \jump{\btau_h}.
\end{multline*}
Substituting the last identity and \eqref{you} back into \eqref{id1}, we obtain
\begin{equation*}
\Ah{(\tilde \bsig, \tilde \br), \taush} = \B{(\bF, \bg), \taush} \quad \forall \taush \in \bcW_h\times \bcQ_h
\end{equation*}
and \eqref{consistency} follows.
The C\'ea estimate \eqref{Cea} follows now in the usual way by taking advantage of \eqref{consistency}, the inf-sup condition \eqref{infsupABh}, estimate \eqref{boundA2}, and the triangle inequality.
It follows from \eqref{Cea} that
\begin{equation}\label{yaesta}
\norm{(\bT - \bT_h)(\bF, \bg)}_{DG} \leq \left(1 + \dfrac{M_{DG}}{\alpha_{DG}}\right)
\norm{(\tilde \bsig, \tilde \br) - (\Pi_h\tilde \bsig, \mathcal S_h\tilde \br)}^*_{DG}.
\end{equation}
Using the interpolation error estimates \eqref{asymp0}, \eqref{asympDiv} and \eqref{asymQ} we immediately obtain
\begin{equation}\label{cotaA}
\norm{(\tilde \bsig, \tilde \br) - (\Pi_h\tilde \bsig, \mathcal S_h\tilde \br)}_{DG} = \norm{(\tilde \bsig, \tilde \br) - (\Pi_h\tilde \bsig, \mathcal S_h\tilde \br)}\leq
C_0\, h^{\min(t, k)}\,
\Big( \norm{\tilde \bsig}_{\H^t(\bdiv, \O)} + \norm{\tilde \br}_{\H^t(\O)^{\nxn}} \Big).
\end{equation}
Moreover, we notice that
\begin{equation*}
\norm{h_{\mathcal{F}}^{1/2} \mean{\bdiv (\tilde\bsig-\Pi_h\tilde\bsig)}}^2_{0,\mathcal{F}^*_h}
\leq
\sum_{K\in \cT_h} \sum_{F\in \mathcal{F}(K)} h_F\norm{ \bdiv (\tilde\bsig-\Pi_h\tilde\bsig) }^2_{0,F}.
\end{equation*}
Under the regularity hypotheses on $\tilde\bsig$, the commuting diagram property satisfied by $\Pi_h$, the trace theorem and standard scaling arguments give
\[
h_F^{1/2}\norm{ \bdiv (\tilde\bsig-\Pi_h\tilde\bsig) }_{0,F} = h_F^{1/2}\norm{ \bdiv \tilde\bsig- \mathcal R_K \bdiv\tilde\bsig }_{0,F}
\leq C_2 h_K^{\min(t,k)} \norm{\bdiv \tilde\bsig}_{t,K}
\]
for all $F\in \mathcal{F}(K)$, where the $\L^2(K)$-orthogonal projection $\mathcal R_K:= \mathcal R_h|_K$
onto $\cP_{k-1}(K)$ is applied componentwise. It follows that
\begin{equation}\label{newE}
\norm{h_{\mathcal{F}}^{1/2} \mean{\bdiv (\tilde\bsig-\Pi_h\tilde\bsig)}}_{0,\mathcal{F}^*_h}
\leq C_3 h^{\min(t,k)} \left( \sum_{K\in \cT_h} \norm{\bdiv \tilde\bsig}_{t,K}^2 \right)^{1/2} \leq
C_3 h^{\min(t,k)} \norm{\bdiv \tilde\bsig}_{t,\O}.
\end{equation}
Combining \eqref{newE} and \eqref{cotaA} with \eqref{yaesta} proves the asymptotic error estimate \eqref{asymp}.
\end{proof}
\begin{lemma}\label{final}
For all $s\in (0,\ws)$, there exists a constant $C>0$ independent of $h$ and $\lambda$, such that for all $\sigmar\in \bcW\times \bcQ$
$$
\norm{(\bT - \bT_h)\bP\sigmar}_{DG} \leq \, C\, h^s\,
\norm{\bdiv \bsig}_{0,\Omega}.
$$
\end{lemma}
\begin{proof}
The result is a consequence of Theorem \ref{cea} and the fact that, by virtue of Lemma \ref{reg} and Assumption~\ref{assumpt1}, the range of $\bT\circ\bP$ is contained in $\{(\btau,\br)\in[\H^s(\O)^{n\times n}]^2:\,\bdiv\btau\in\H^1(\O)^n\}$ for all $s\in(0,\ws)$.
\end{proof}
\begin{lemma}\label{TmenosTh}
For all $s\in (0,\ws)$, there exists a constant $C>0$ independent of $h$ and $\lambda$ such that
\[
\norm{(\bT - \bT_h)\taush}_{DG} \leq C \, h^s \, \norm{\taush}_{DG}\quad \forall \taush \in \bcW_h\times \bcQ_h.
\]
\end{lemma}
\begin{proof}
For any $\btau_h\in \bcW_h$ we consider the splitting $\btau_h = \btau_h^c + \tilde \btau_h$ with
$\btau_h^c:=\mathcal{I}_h \btau_h\in \bcW_h^c$. We have that
\begin{multline*}
(\bT - \bT_h)\taush = (\bT - \bT_h)(\tilde \btau_h,\0) + (\bT - \bT_h)(\btau_h^c, \bs_h)\\ = (\bT - \bT_h)(\tilde \btau_h,\0) +
(\bT - \bT_h)\bP_h(\btau_h^c, \bs_h),
\end{multline*}
where the last identity is due to the fact that $(\bI - \bP_h)(\btau_h^c, \bs_h)\in \bcK_h\times \bcQ_h$
and $\bT - \bT_h$ vanishes identically on this subspace. It follows that
\begin{multline*}
(\bT - \bT_h)\taush = (\bT - \bT_h)(\tilde \btau_h,\0) +
(\bT - \bT_h)(\bP_h - \bP)(\btau_h^c, \bs_h) + (\bT - \bT_h)\bP(\btau_h^c, \bs_h),
\end{multline*}
and the triangle inequality together with \eqref{bT} and \eqref{bTh} yield
\begin{multline*}
\norm{(\bT - \bT_h)\taush}_{DG} \leq \norm{(\bT - \bT_h)(\tilde \btau_h,\0)}_{DG} +
\norm{(\bT - \bT_h)(\bP_h - \bP)(\btau_h^c, \bs_h)}_{DG}\\ + \norm{(\bT - \bT_h)\bP(\btau_h^c, \bs_h)}_{DG}
\leq
\Big(\norm{\bT}_{\mathcal{L}([\L^2(\O)^{\nxn}]^2, \bcW\times \bcQ)} +
\norm{\bT_h}_{\mathcal{L}([\L^2(\O)^{\nxn}]^2, \bcW_h\times \bcQ_h)} \Big)\\
\Big( \norm{\tilde \btau_h}_{0,\O} + \norm{(\bP_h - \bP)(\btau_h^c, \bs_h)}\Big) +
\norm{(\bT - \bT_h)\bP(\btau_h^c, \bs_h)}_{DG}.
\end{multline*}
Using \eqref{L2Ph}, Lemma \ref{Ph}, Assumption \ref{assumpt1} and Lemma \ref{final} we have that
\[
\norm{\tilde \btau_h}_{0,\O} \leq C h \norm{\btau_h}_{\bcW(h)},
\]
\[
\norm{(\bP_h - \bP)(\btau_h^c, \bs_h)} \leq C h^s \norm{\bdiv \btau_h^c}_{0,\O} \leq C h^s \norm{\btau_h}_{\bcW(h)}
\]
and
\[
\norm{(\bT - \bT_h)\bP(\btau_h^c, \bs_h)}_{DG} \leq C h^s \norm{\bdiv \btau_h^c}_{0,\O} \leq C h^s \norm{\btau_h}_{\bcW(h)}
\]
respectively, which gives the result.
\end{proof}
\section{Spectral correctness of the DG method}
\label{APPROX}
The convergence analysis follows the same steps as in \cite{DNR1,DNR2}; we only need to adapt
them to the DG context (cf.\ also \cite{BuffaPerugia}).
For the sake of brevity, we
will denote in this section $\bcX:=\bcW\times\bcQ$, $\bcX_h:=\bcW_h\times\bcQ_h$
and $\bcX(h) := \bcW(h)\times \bcQ$. Moreover, when no confusion can arise, we
will use $\bx$, $\by$, etc.\ to denote elements of $\bcX$
and, analogously, $\bx_h$, $\by_h$, etc.\ for those of $\bcX_h$.
Finally, we will use $\norm{\cdot}_{\mathcal{L}(\bcX_h, \bcX(h))}$
to denote the norm of an operator restricted to the discrete
subspace $\bcX_h$; namely, if $\bS:\bcX(h)\to \bcX(h)$, then
\begin{equation}\label{norm}
\norm{\bS}_{\mathcal{L}(\bcX_h, \bcX(h))}:=\sup_{\0\neq\bx_h\in\bcX_h}
\frac{\norm{\bS\bx_h}_{DG}}{\norm{\bx_h}_{DG}}.
\end{equation}
\begin{lemma}\label{ThDG0}
If $z \in\mathbb{D}\setminus \sp(\bT)$, there exists $h_0>0$ such that if $h\leq h_0$,
\[
\norm{(z \bI - \bT_h) \bx_h }_{DG} \geq C\dist\big(z,\sp(\bT)\big)|z|\, \norm{\bx_h}_{DG} \quad \forall \bx_h \in \bcX_h,
\]
with $C>0$ independent of $h$ and $\lambda$.
\end{lemma}
\begin{proof}
It follows from
\[
(z \bI - \bT_h) \bx_h = (z \bI - \bT) \bx_h + (\bT - \bT_h ) \bx_h
\]
and Lemma \ref{TDG} that
\[
\norm{(z \bI - \bT_h) \bx_h}_{DG} \geq \Big(C\dist\big(z,\sp(\bT)\big)|z| - \norm{\bT - \bT_h}_{\mathcal{L}(\bcX_h, \bcX(h))}\Big) \norm{\bx_h}_{DG}
\]
and the result follows from Lemma \ref{TmenosTh}.
\end{proof}
\begin{lemma} \label{ThDG}
If $z \in\mathbb{D}\setminus \sp(\bT)$, there exists $h_0>0$ such that if $h\leq h_0$,
\[
\norm{(z \bI - \bT_h) \bx }_{DG} \geq C \dist\big(z,\sp(\bT)\big)|z|^2\, \norm{\bx}_{DG} \quad \forall \bx \in \bcX(h),
\]
with $C>0$ independent of $h$ and $\lambda$.
\end{lemma}
\begin{proof}
Given $\bx\in \bcX(h)$ we let
\[
\bx_h^* = \bT_h \bx \in \bcX_h.
\]
We deduce from the identity
\[
(z \bI - \bT_h) \bx_h^* = \bT_h (z\bI - \bT_h)\bx
\]
and from Lemma \ref{ThDG0} that
\[
C\dist\big(z,\sp(\bT)\big)|z| \norm{ \bx_h^* }_{DG} \leq \norm{(z \bI - \bT_h) \bx_h^*}_{DG} \leq
\norm{\bT_h}_{ \mathcal{L}(\bcX(h), \bcX_h)} \norm{(z \bI - \bT_h)\bx}_{DG}.
\]
This and the triangle inequality lead to
\begin{multline*}
\norm{\bx}_{DG} \leq |z|^{-1} \norm{\bx^*_h}_{DG} + |z|^{-1} \norm{ (z \bI - \bT_h)\bx }_{DG}
\\
\leq|z|^{-1} \left( 1 + \frac{\norm{\bT_h}_{\mathcal{L}(\bcX(h), \bcX_h)}}{C\dist\big(z,\sp(\bT)\big)|z|} \right)\norm{ (z \bI - \bT_h)\bx }_{DG}
\\
\leq|z|^{-1} \left( \frac{C\dist\big(z,\sp(\bT)\big)|z|+\norm{\bT_h}_{\mathcal{L}(\bcX(h), \bcX_h)}}{C\dist\big(z,\sp(\bT)\big)|z|} \right)\norm{ (z \bI - \bT_h)\bx }_{DG}.
\end{multline*}
Hence,
\begin{equation*}
|z|\left(\frac{C\dist\big(z,\sp(\bT)\big)|z|}{\norm{\bT_h}_{\mathcal{L}(\bcX(h), \bcX_h)}+C\dist\big(z,\sp(\bT)\big)|z|}\right)\norm{\bx}_{DG}\leq \norm{(z \bI - \bT_h) \bx }_{DG}.
\end{equation*}
Now, using that $\dist\big(z,\sp(\bT)\big)\leq |z|\leq 1$ and $\|\bT_h\|_{\mathcal{L}(\bcX(h), \bcX_h)}\leq C'$ (with $C'$ independent of $\lambda$), from the estimate above we derive
\begin{equation*}
C|z|^2 \dist\big(z,\sp(\bT)\big)\norm{\bx}_{DG} \leq \norm{(z \bI - \bT_h) \bx }_{DG},
\end{equation*}
and the result follows.
\end{proof}
\begin{remark}\label{rem2}
If $E$ is a compact subset of $\mathbb{D} \setminus\sp(\bT)$ and $h$ is small enough, we deduce from Lemma \ref{ThDG} that $(z\bI-\bT_h):\bcX(h)\rightarrow\bcX(h)$ is invertible for all $z\in E$. Hence, $E\subset\mathbb{D}\backslash\sp(\bT_h)$. Consequently, for $h$ small enough, the numerical method does not introduce spurious eigenvalues. Moreover, we have that there exists a constant $C>0$ independent of $h$ and $\lambda$ such that, for all $z\in E$,
\begin{equation*}
\norm{\big(z \bI - \bT_h \big)^{-1}}_{\mathcal{L}(\bcX(h),\bcX(h))} \leq \frac{C}{\dist(E,\sp(\bT))|z|^2}.
\end{equation*}
\end{remark}
For
$\bx\in\bcX(h)$ and $\mathbb E$ and $\mathbb F$ closed subspaces of $\bcX(h)$, we set
$\delta(\bx,\mathbb E):=\inf_{\by\in\mathbb E}\norm{\bx-\by}_{DG}$,
$\delta(\mathbb E,\mathbb F):=\sup_{\by\in\mathbb E:\,\norm{\by}_{DG}=1}\delta(\by,\mathbb F)$,
and $\gap(\mathbb E,\mathbb F):=\max\set{\delta(\mathbb E,\mathbb F),\delta(\mathbb F,\mathbb E)}$,
the latter being the so-called \textit{gap} between the subspaces $\mathbb E$ and
$\mathbb F$.
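In a finite-dimensional Euclidean setting (a stand-in for the DG norm; the matrices below are illustrative assumptions, not the operators of the paper), the gap between two subspaces of equal dimension can be computed from their orthogonal projectors:

```python
import numpy as np

def proj(V):
    # Orthogonal projector onto the span of the columns of V
    # (Euclidean inner product).
    Q, _ = np.linalg.qr(V)
    return Q @ Q.T

def gap(V, W):
    # For equi-dimensional subspaces, the gap coincides with the spectral
    # norm of the difference of the orthogonal projectors.
    return np.linalg.norm(proj(V) - proj(W), 2)

e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
theta = np.pi / 6.0
v = np.array([[np.cos(theta)], [np.sin(theta)]])  # e1 rotated by theta

assert abs(gap(e1, e1)) < 1e-12                  # identical subspaces: gap 0
assert abs(gap(e1, e2) - 1.0) < 1e-12            # orthogonal lines: gap 1
assert abs(gap(e1, v) - np.sin(theta)) < 1e-12   # rotated line: gap sin(theta)
```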
Given an isolated eigenvalue $\kappa\neq 1$ of $\bT$, we define
$$\texttt{d}_{\kappa}:=\frac{1}{2}\dist\big(\kappa,\sp(\bT)\setminus\{\kappa\}\big).$$
It follows that the closed disk $D_\kappa:=\{z\in\mathbb{C}:\, |z-\kappa|\leq \texttt{d}_\kappa\}$, with center $\kappa$ and boundary $\gamma$, satisfies
$D_\kappa \cap \sp(\bT) = \set{\kappa}$. We deduce from Remark \ref{roof} that the operator
$\bcE:=\frac{1}{2\pi i}
\int_{\gamma}\left(z\bI-\bT\right)^{-1}\, dz:\bcX(h)\longrightarrow \bcX(h)$ is well-defined and bounded uniformly in
$h$. Moreover, $\bcE|_{\bcX}$ is a spectral projection in $\bcX$ onto the (finite dimensional) eigenspace $\bcE(\bcX)$ corresponding to the eigenvalue $\kappa$ of $\bT$. In fact,
\begin{equation}\label{equa}
\bcE(\bcX(h)) = \bcE(\bcX).
\end{equation}
To prove this, let $\kappa^*\in D_\kappa$ be an eigenvalue of $\bT:\, \bcX(h)\to \bcX(h)$ and
$\bx^*\in \bcX(h)$ be the corresponding eigenfunction. Since $\kappa^*\neq 0$ and $\bT(\bcX(h))\subset \bcX$, we actually have that $\bx^*\in \bcX$. Then, necessarily, $\kappa^*=\kappa$ and taking into account that $\bcE(\bcX)$ is the eigenspace associated with $\kappa$ we deduce \eqref{equa}.
Similarly, we deduce from Remark \ref{rem2} that, for $h$ small enough, the operator
$\bcE_h:=\frac{1}{2\pi i}
\int_{\gamma}\left(z\bI-\bT_h\right)^{-1}\, dz:\bcX(h)\longrightarrow \bcX(h)$ is also well-defined and bounded uniformly in $h$. Moreover, $\bcE_h|_{\bcX_h}$ is a projector in $\bcX_h$ onto the eigenspace $\bcE_h(\bcX_h)$ corresponding to the eigenvalues of $\bT_h:\, \bcX_h \to \bcX_h$ contained in $\gamma$. The same arguments as above show that we also have,
\begin{equation*}\label{equah}
\bcE_h(\bcX(h)) = \bcE_h(\bcX_h).
\end{equation*}
Our aim now is to compare $\bcE_h(\bcX_h)$ to $\bcE(\bcX)$ in terms of the gap. To this end, we make the following regularity assumption: $\bcE(\bcX)\subset\H^t(\bdiv,\O)\times\H^t(\O)^{n\times n}$ with $t>s$.
\begin{lemma}\label{lot}
There exists $C>0$, independent of $h$ and $\lambda$, such that
\begin{equation}\label{E-Eh}
\displaystyle \norm{\bcE - \bcE_h}_{\mathcal{L}(\bcX_h, \bcX(h))} \leq \frac{C}{\texttt{d}_{\kappa}} \norm{\bT - \bT_h}_{\mathcal{L}(\bcX_h, \bcX(h))}.
\end{equation}
\end{lemma}
\begin{proof}
We deduce from the identity
\begin{equation*}\label{identRes}
\left(z\bI-\bT\right)^{-1}-\left(z\bI-\bT_h\right)^{-1} = \left(z\bI-\bT\right)^{-1}(\bT - \bT_h)\left(z\bI-\bT_h\right)^{-1}
\end{equation*}
that, for any $\bx_h\in \bcX_h$,
\begin{multline*}
\norm{(\bcE - \bcE_h)\bx_h}_{DG}\leq \frac{1}{2\pi}\int_{\gamma}\norm{[\left(z\bI-\bT\right)^{-1}-\left(z\bI-\bT_h\right)^{-1}]\bx_h}_{DG}|dz|\\
=\frac{1}{2\pi}\int_{\gamma}\norm{[\left(z\bI-\bT\right)^{-1}(\bT - \bT_h)\left(z\bI-\bT_h\right)^{-1}]\bx_h}_{DG}|dz|\\
\leq \frac{1}{2\pi}\int_{\gamma}\norm{\left(z\bI-\bT\right)^{-1}}_{\mathcal L(\bcX(h), \bcX(h))}\norm{\bT - \bT_h}_{\mathcal L(\bcX_h, \bcX(h))}\norm{\left(z\bI-\bT_h\right)^{-1}}_{\mathcal L(\bcX_h, \bcX_h)}\norm{\bx_h}_{DG}|dz|
\end{multline*}
and the result follows from Lemmas \ref{TDG} and \ref{ThDG}, the definition \eqref{norm} and the fact that, for all $z\in\gamma$, $|z|\geq|\kappa|-\texttt{d}_{\kappa}\geq\frac{1}{2}|\kappa|$.
\end{proof}
\begin{theorem}
\label{conv}
There exists a constant $C>0$ independent of $h$ and $\lambda$ such that
\[
\gap(\bcE(\bcX), \bcE_h(\bcX_h)) \leq C \Big( \frac{\norm{\bT - \bT_h}_{\mathcal{L}(\bcX_h, \bcX(h))}}{{\texttt{d}_{\kappa}}} +
\delta(\bcE(\bcX), \bcX_h)\Big).
\]
\end{theorem}
\begin{proof}
As $\bcE_h$ is a projector, for $h$ sufficiently small, we have that $\bcE_h\bx_h=\bx_h$ for all $\bx_h\in\bcE_h(\bcX_h)$. It follows from \eqref{equa} that $\bcE\bx_h \in \bcE(\bcX)$, which leads to
\[
\delta(\bx_h,\bcE(\bcX))
\leq\norm{\bcE_h\bx_h-\bcE\bx_h}_{DG}
\leq\norm{\bcE_h-\bcE}_{\mathcal{L}(\bcX_h, \bcX(h))}\norm{\bx_h}_{DG}
\]
for all $\bx_h\in\bcE_h(\bcX_h)$. We deduce from \eqref{E-Eh} that
\begin{equation}\label{cE-cEh}
\delta( \bcE_h(\bcX_h), \bcE(\bcX))\leq \frac{C}{\texttt{d}_{\kappa}} \norm{\bT - \bT_h}_{\mathcal{L}(\bcX_h, \bcX(h))}.
\end{equation}
On the other hand, as $\bcE\bx=\bx$
for all $\bx\in\bcE(\bcX)$, for $h$ small enough and $\by_h\in \bcX_h$,
\begin{multline*}
\norm{\bx-\bcE_h\by_h}_{DG}
\leq\norm{\bcE(\bx-\by_h)}_{DG}
+\norm{(\bcE-\bcE_h)\by_h}_{DG}\leq \\
\norm{\bcE}_{\mathcal{L}( \bcX(h), \bcX(h))} \norm{(\bx-\by_h)}_{DG}
+\norm{(\bcE-\bcE_h)}_{\mathcal{L}(\bcX_h, \bcX(h))} \norm{\by_h}_{DG}
\\
\leq \big(\norm{\bcE_h}_{\mathcal{L}(\bcX(h), \bcX(h))}+2\norm{\bcE}_{\mathcal{L}(\bcX(h), \bcX(h))}\big)\norm{\bx - \by_h}_{DG}
+\norm{\bcE-\bcE_h}_{\mathcal{L}(\bcX_h, \bcX(h))}\norm{\bx}_{DG}.
\end{multline*}
Consequently,
\[
\delta(\bx, \bcE_h(\bcX_h)) \leq C (\delta(\bx, \bcX_h) + \norm{\bcE-\bcE_h}_{\mathcal{L}(\bcX_h, \bcX(h))} )
\]
for all $\bx \in \bcE(\bcX)$ with $\norm{\bx}_{DG} = 1$. Using that the eigenspace $\bcE(\bcX)$ is finite dimensional, we deduce that
\[
\delta(\bcE(\bcX), \bcE_h(\bcX_h)) \leq C (\delta(\bcE(\bcX), \bcX_h) + \norm{\bcE-\bcE_h}_{\mathcal{L}(\bcX_h, \bcX(h))} )
\]
and the result follows from the last estimate and \eqref{cE-cEh}.
\end{proof}
\begin{theorem}
Let $\kappa\neq 1$ be an eigenvalue of $\bT$ of algebraic multiplicity $m$ and
let $D_\kappa$ be a closed disk in the complex plane centered at $\kappa$ with boundary $\gamma$ such that
$D_\kappa \cap \sp(\bT) = \set{\kappa}$. Let $\kappa_{1, h}, \ldots, \kappa_{m(h), h}$ be the eigenvalues of
$\bT_h:\, \bcX_h \to \bcX_h$ lying in $D_\kappa$ and repeated according to their algebraic multiplicity.
Then, we have that $m(h) = m$ for $h$ sufficiently small and
\[
\lim_{h\to 0} \max_{1\leq i \leq m} |\kappa - \kappa_{i, h}| =0.
\]
Moreover, if $\bcE(\bcX)$ is the eigenspace corresponding to $\kappa$ and $\bcE_h(\bcX_h)$ is the
$\bT_h$-invariant subspace of $\bcX_h$ spanned by the eigenspaces corresponding to
$\set{\kappa_{i, h},\hspace{0.1cm} i = 1,\ldots, m}$ then
\[
\lim_{h \to 0} \gap(\bcE(\bcX), \bcE_h(\bcX_h)) = 0.
\]
\end{theorem}
\begin{proof}
We deduce from Lemma \ref{TmenosTh} that
\[
\lim_{h\to 0}\norm{\bT - \bT_h}_{\mathcal{L}(\bcX_h, \bcX(h))} = 0.
\]
Moreover, as $\bcE(\bcX) \subset \H^t(\bdiv, \O)\times \H^t(\O)^{\nxn}$, it follows from \eqref{asymp} that
\[
\lim_{h\to 0}\delta(\bcE(\bcX), \bcX_h) = 0.
\]
Hence, by virtue of Theorem \ref{conv}, we have that
\[
\lim_{h\to 0} \gap(\bcE(\bcX), \bcE_h(\bcX_h)) = 0,
\]
and, as a consequence, $\bcE(\bcX)$ and $\bcE_h(\bcX_h)$ have the same dimension provided $h$ is sufficiently small. Finally, since $\kappa$ is an isolated eigenvalue and the radius of the circle $\gamma$ is arbitrary, we deduce that
\[
\lim_{h\to 0} \max_{1\leq i \leq m} |\kappa - \kappa_{i, h}| =0.
\]
\end{proof}
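For finite-dimensional subspaces, the quantities $\delta$ and $\gap$ appearing in the theorem above can be evaluated numerically from orthonormal bases. The following NumPy sketch is purely illustrative (it is not part of the method; the bases `U`, `V` and the Euclidean norm are our simplifying assumptions): it computes $\delta(U,V)=\sup_{u\in U,\,\|u\|=1}\operatorname{dist}(u,V)$ as the spectral norm of $(I-P_V)$ applied to an orthonormal basis of $U$.

```python
import numpy as np

def delta(U, V):
    """delta(U, V): largest distance from a unit vector of span(U) to span(V).

    U, V: matrices whose columns are orthonormal bases of the subspaces.
    Equals the spectral norm of (I - V V^T) restricted to span(U)."""
    return np.linalg.norm(U - V @ (V.T @ U), 2)

def gap(U, V):
    """Symmetric gap between the two subspaces."""
    return max(delta(U, V), delta(V, U))

# Two lines in the plane forming an angle theta: the gap equals |sin(theta)|.
theta = 0.3
U = np.array([[1.0], [0.0]])
V = np.array([[np.cos(theta)], [np.sin(theta)]])
```

For subspaces of the same dimension, as in the theorem, the two one-sided distances coincide, which the example above also exhibits.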
\section{Asymptotic error estimates}
\label{ASYMP}
Throughout this section we fix a particular eigenvalue $\kappa\neq 1$ of $\bT$. We wish to obtain error
estimates for the eigenfunctions and the eigenvalues in terms of the quantity
\[
\delta^* (\bcE(\bcX) , \bcX_h):= \sup_{\bx\in \bcE(\bcX), \norm{\bx}=1}\inf_{\bx_h\in \bcX_h} \norm{\bx - \bx_h}^*_{DG}.
\]
\begin{theorem}\label{hatgap}
For $h$ small enough, there exists a constant $C$ independent of $h$ such that
\begin{equation}\label{aste}
\gap\big(\bcE(\bcX), \bcE_h(\bcX_h) \big)\leq \frac{C}{\texttt{d}_{\kappa}} \delta^* (\bcE(\bcX) , \bcX_h).
\end{equation}
\end{theorem}
\begin{proof}
As $\bcE(\bcX(h))=\bcE(\bcX)$ and $\bcE_h(\bcX(h))=\bcE_h(\bcX_h)$, it is equivalent to show that
\begin{equation*}\label{gogo}
\gap\Big(\bcE(\bcX(h)), \bcE_h(\bcX(h))\Big)\leq \frac{C}{\texttt{d}_{\kappa}} \delta^* (\bcE(\bcX) , \bcX_h).
\end{equation*}
We consider here again the disk $D_{\kappa}$ centered at $\kappa$ with radius $\texttt{d}_{\kappa}$ and boundary $\gamma$. We first notice that for all $z\in\gamma$
\[
\left(z\bI-\bT\right)^{-1}-\left(z\bI-\bT_h\right)^{-1} = \left(z\bI-\bT_h\right)^{-1}(\bT - \bT_h)\left(z\bI-\bT\right)^{-1},
\]
which implies
\begin{multline}\label{cuqui}
\norm{ (\bcE - \bcE_h)|_{\bcE(\bcX)} } \leq \frac{1}{2\pi }\int_{\gamma}\norm{\left(z\bI-\bT\right)^{-1}-\left(z\bI-\bT_h\right)^{-1}|_{\bcE(\bcX)}}|dz|\\
=\frac{1}{2\pi}\int_{\gamma}\norm{\left(z\bI-\bT_h\right)^{-1}(\bT - \bT_h)\left(z\bI-\bT\right)^{-1}|_{\bcE(\bcX)}}|dz|\\
\leq \frac{1}{2\pi}\int_{\gamma} \norm{\left(z\bI-\bT_h\right)^{-1}}_{\mathcal{L}(\bcX(h), \bcX(h))}\norm{(\bT - \bT_h)|_{\bcE(\bcX)}}_{\mathcal{L}(\bcX, \bcX(h))}\norm{\left(z\bI-\bT\right)^{-1}}_{\mathcal{L}(\bcX, \bcX(h))}|dz|\\
\leq\frac{C}{\texttt{d}_{\kappa}}\norm{(\bT - \bT_h)|_{\bcE(\bcX)}}_{\mathcal{L}(\bcX, \bcX(h))}.
\end{multline}
Now, on the one hand, it is clear that
\[
\delta\Big(\bcE(\bcX(h)), \bcE_h(\bcX(h))\Big) \leq \norm{ (\bcE - \bcE_h)|_{\bcE(\bcX)} }_{\mathcal{L}(\bcX, \bcX(h))}.
\]
On the other hand, \eqref{cuqui}, the C\'ea estimate given by Theorem \ref{cea} and the fact that
$\bcE(\bcX)$ is finite dimensional yield
\begin{equation}\label{yum}
\norm{ (\bcE - \bcE_h)|_{\bcE(\bcX)} }_{\mathcal{L}(\bcX, \bcX(h))} \leq \frac{C}{\texttt{d}_{\kappa}} \delta^* (\bcE(\bcX) , \bcX_h),
\end{equation}
which proves that
\begin{equation}\label{gogo1}
\delta\Big(\bcE(\bcX(h)), \bcE_h(\bcX(h))\Big) \leq \frac{C}{\texttt{d}_{\kappa}} \delta^* (\bcE(\bcX) , \bcX_h).
\end{equation}
Consequently, as $\bcE(\bcX)\subset \H^t(\bdiv, \O)\times \H^t(\O)^{\nxn}$, we have that
\begin{equation}\label{Gamma}
\lim_{h\to 0} \delta\Big(\bcE(\bcX(h)) , \bcE_h(\bcX(h))\Big) = 0.
\end{equation}
It is shown in \cite{DNR2} that \eqref{Gamma}
implies that, for $h$ small enough, $\Lambda_h:=\bcE_h|_{\bcE(\bcX)}: \bcE(\bcX) \to \bcE_h(\bcX(h))$
is invertible and $\Lambda_h^{-1}$ is bounded uniformly with respect to $h$. Furthermore, it holds that
\[
\sup_{\bx_h \in \bcE_h(\bcX(h)), \norm{\bx_h}_{DG} = 1} \norm{\Lambda_h^{-1} \bx_h - \bx_h}_{DG} \leq
2 \sup_{\by \in \bcE(\bcX(h)), \norm{\by}_{DG} = 1} \norm{\Lambda_h \by - \by}_{DG}.
\]
Hence,
\[
\delta\Big(\bcE_h(\bcX(h)), \bcE(\bcX(h))\Big) \leq \sup_{\bx_h \in \bcE_h(\bcX(h)), \norm{\bx_h}_{DG} = 1} \norm{\bx_h - \Lambda_h^{-1} \bx_h}_{DG} \leq 2 \sup_{\by \in \bcE(\bcX), \norm{\by}_{DG} = 1} \norm{\bcE \by - \bcE_h \by}_{DG},
\]
and \eqref{yum} shows that we also have $\displaystyle\delta( \bcE_h(\bcX(h)), \bcE(\bcX(h)))\leq \frac{C}{\texttt{d}_{\kappa}} \delta^* (\bcE(\bcX) , \bcX_h)$,
and the result follows from this last estimate and \eqref{gogo1}.
\end{proof}
\begin{theorem}\label{errorE}
Assume that $\bcE(\bcX) \subset \H^t(\bdiv,\O)\times \H^t(\O)^{\nxn}$. Then, there exists $C>0$, independent of $h$ and $\lambda$, such that
\begin{equation}\label{eigenspace}
\displaystyle\gap(\bcE_h(\bcX_h), \bcE(\bcX))\leq \frac{C}{\texttt{d}_{\kappa}} h^{\min\{t, k\}}.
\end{equation}
Moreover, there exists $C'>0$ independent of $h$ such that
\begin{equation}\label{eigenvalue}
\disp\max_{1\leq i\leq m} |\kappa - \kappa_{i, h}|\leq \frac{C'}{{\texttt{d}}_{\kappa}} \, h^{2\min\{t, k\}}.
\end{equation}
\end{theorem}
\begin{proof}
Using the estimate \eqref{aste} from the last theorem and proceeding as in the proof of \eqref{asymp} we immediately obtain \eqref{eigenspace}.
Let $\kappa_{1, h}, \ldots, \kappa_{m, h}$ be the eigenvalues of
$\bT_h:\, \bcX_h \to \bcX_h$ lying in $D_\kappa$ and repeated according to their algebraic multiplicity.
We denote by
$\bx_{i,h}$ the eigenfunction corresponding to $\kappa_{i,h}$ and satisfying
$\norm{\bx_{i,h}}_{DG}=1$. We know from Theorem \ref{hatgap} that, if $h$ is sufficiently small,
\begin{equation*}
\delta(\bx_{i,h},\bcE(\bcX))\leq \frac{C}{\texttt{d}_{\kappa}} \delta^*(\bcE(\bcX), \bcX_h).
\end{equation*}
Then, there exists an eigenfunction $\bx:=(\bsig, \br) \in \bcE(\bcX)$ satisfying
\begin{equation*}
\norm{\bx_{i,h}-\bx}_{DG} = \delta(\bx_{i,h},\bcE(\bcX)) \leq \gap(\bcE_h(\bcX_h), \bcE(\bcX)) \leq \frac{C}{\texttt{d}_\kappa} \delta^*(\bcE(\bcX), \bcX_h)\to 0 \quad \text{as $h\to 0$},
\end{equation*}
which proves that $\norm{\bx}_{DG}$ is bounded from below and above by constants independent of $h$.
Proceeding as in the proof of the consistency property in Theorem \ref{cea} we readily obtain that
\begin{equation}\label{consist}
A_h(\bx, \by_h) = \kappa B(\bx, \by_h)
\end{equation}
for all $\by_h:=(\btau_h, \bs_h)\in \bcX_h$.
With the aid of \eqref{consist}, it is easy to show that the identity
\begin{equation*}
A_h(\bx-\bx_{i,h},\bx-\bx_{i,h})
- \kappa B(\bx-\bx_{i,h},\bx-\bx_{i,h})
=\left(\kappa_{i,h}-\kappa\right) B(\bx_{i,h},\bx_{i,h})
\end{equation*}
holds true. Now, according to Lemma~3.6 of \cite{MMR}, for any $\bx\in\bcE(\bcX)$, $\bx\neq 0$, it holds that $B(\bx,\bx)>0$. Thus, since $\bcE(\bcX)$ is finite-dimensional,
there exists $c>0$, independent of $h$, such that $B(\bx,\bx)\geq c\norm{\bx}^2_{DG}$.
This proves that $B(\bx_{i,h},\bx_{i,h})\geq\frac{c}{2}$ for $h$ sufficiently
small. We obtain from \eqref{boundA1} that
\begin{equation}\label{eq11}
\frac{c}2 \abs{\kappa_{i,h}-\kappa} \leq \abs{A_h(\bx-\bx_{i,h},\bx-\bx_{i,h})} + |\kappa| \abs{B(\bx-\bx_{i,h},\bx-\bx_{i,h})} \leq C (\norm{\bx-\bx_{i,h}}_{DG}^*)^2.
\end{equation}
Since $\bx:=(\bsig,\br)$ and $\bx_{i,h}:=(\bsig_h,\br_h)$, and by definition of $\|\cdot\|_{DG}^{*}$ we have
\begin{equation*}
\norm{\bx-\bx_{i,h}}_{DG}^*:=\|(\bsig,\br)-(\bsig_h,\br_h)\|_{DG}^{*}=\|(\bsig,\br)-(\bsig_h,\br_h)\|_{DG}+\|h_{\mathcal{F}}^{1/2}\mean{\bdiv(\bsig-\bsig_h)}\|_{\mathcal{F}_h^{*}}.
\end{equation*}
It follows from Theorem \ref{conv}, Lemma \ref{TmenosTh} and the interpolation error estimates \eqref{asymp0}-\eqref{asymQ} that
\begin{equation}\label{eq12}
\|(\bsig,\br)-(\bsig_h,\br_h)\|_{DG}\leq C_0 \gap(\bcE(\bcX), \bcE_h(\bcX_h)) \leq C_1 h^{\min\{t, k\}} \Big( 1 + \norm{\bsig}_{\H^t(\bdiv, \O)} + \norm{\br}_{\H^t(\O)^{\nxn}} \Big).
\end{equation}
On the other hand,
\begin{equation}\label{fin1}
\|h_{\mathcal{F}}^{1/2}\mean{\bdiv(\bsig-\bsig_h)}\|_{\mathcal{F}_h^{*}}\leq\|h_{\mathcal{F}}^{1/2}\mean{\bdiv(\bsig-\Pi_h\bsig)}\|_{\mathcal{F}_h^{*}}+\|h_{\mathcal{F}}^{1/2}\mean{\bdiv(\Pi_h\bsig-\bsig_h)}\|_{\mathcal{F}_h^{*}}
\end{equation}
and it follows from \eqref{newE} that
\begin{equation}\label{fin2}
\|h_{\mathcal{F}}^{1/2}\mean{\bdiv(\bsig-\Pi_h\bsig)}\|_{\mathcal{F}_h^{*}} \leq C_2 h^{\min\{t,k\}} \norm{\bdiv \bsig}_{t,\O}.
\end{equation}
Finally, \eqref{discTrace}, \eqref{asympDiv} and \eqref{eq12} yield
\begin{align}\label{fin3}
\begin{split}
\|h_{\mathcal{F}}^{1/2}\mean{\bdiv(\Pi_h\bsig-\bsig_h)}\|_{\mathcal{F}_h^{*}} &\leq C_3\|\bdiv(\Pi_h\bsig-\bsig_h)\|_{0,\O}
\\ &\leq C_3
\big(\|\bdiv(\Pi_h\bsig-\bsig)\|_{0,\O}+\|\bdiv_h(\bsig-\bsig_h)\|_{0,\O}\big)\\
& \leq C_3 \big(\|\bdiv(\Pi_h\bsig-\bsig)\|_{0,\O}+\|(\bsig,\br)-(\bsig_h,\br_h)\|_{DG}\big) \\&\leq C_4h^{\min\{t,k\}}\Big( 1 + \norm{\bsig}_{\H^t(\bdiv, \O)} + \norm{\br}_{\H^t(\O)^{\nxn}} \Big).
\end{split}
\end{align}
Combining \eqref{eq11}, \eqref{fin1}-\eqref{fin3} and \eqref{eq12}, we obtain \eqref{eigenvalue}.
\end{proof}
\begin{remark}
In the proof given above of the error estimate \eqref{eigenvalue}, the constant $C'$ is not independent of $\lambda$. Indeed, according to the proof of Lemma~3.6 from \cite{MMR}, we have that
\begin{equation*}
B((\bsig,\br),(\bsig,\br))=\int_{\O}\mathcal{C}^{-1}\bsig:\bsig\geq\min\left\{\frac{n}{n\lambda+2\mu},\frac{1}{2\mu}\right\}\norm{\bsig}^2_{0,\O}\geq 0.
\end{equation*}
Therefore, the constant $c$ in the proof above tends to zero when $\lambda$ goes to infinity. However, the numerical experiments presented below suggest that \eqref{eigenvalue} holds true uniformly in $\lambda$.
\end{remark}
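The degeneration described in the remark is elementary to quantify: the lower bound for the quadratic form gives the coercivity constant $c(\lambda)=\min\{n/(n\lambda+2\mu),\,1/(2\mu)\}$, which decays like $1/\lambda$. A minimal Python check (the function name is ours, introduced only for illustration):

```python
def coercivity_constant(lam, mu, n=2):
    """Lower bound for B((sig,r),(sig,r)) / ||sig||_{0,Omega}^2
    obtained in the proof of Lemma 3.6 of [MMR]."""
    return min(n / (n * lam + 2.0 * mu), 1.0 / (2.0 * mu))
```

For $\lambda$ large the first branch of the minimum is active and the constant behaves like $1/\lambda$, consistent with the loss of uniformity noted above.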
\begin{remark}
We notice that \eqref{eigenspace} and \eqref{eigenvalue} depend implicitly on $\lambda$ through the constant $\texttt{d}_{\kappa}:=\frac{1}{2}\dist\big(\kappa,\sp(\bT)\setminus\{\kappa\}\big)$, because $\sp(\bT)$ depends on $\lambda$. The constant $\texttt{d}_{\kappa}$ quantifies the deterioration of the error estimates given in Theorem \ref{errorE} when the eigenvalue $\kappa$ is too close to the accumulation point 0.
\end{remark}
\begin{remark}\label{regEigenfun}
We point out that, thanks to Lemma \ref{reg}, we always have that $\bcE(\bcX) \subset \set{(\btau,\br) \in [\HsO^{\nxn}]^2:\, \bdiv \btau \in \H^1(\O)^n}$ for all $s\in (0,\ws)$. Consequently, the error estimates given in Theorem \ref{errorE} will always hold true for any $t\in(0,\ws)$ even if $\ws\leq 1/2$. However, it may happen that some eigenspaces satisfy the regularity assumption of the theorem with $t \geq \ws$.
\end{remark}
\section{Numerical results}\label{NUMERICOS}
We present a series of numerical experiments in which the elasticity eigenproblem in mixed form is solved with the discontinuous Galerkin scheme \eqref{DGshort}. All the numerical results have been obtained by using the FEniCS Problem Solving Environment \cite{fenics}. For simplicity, we consider a two-dimensional model problem: we choose $\Omega = (0,1)\times(0,1)$, $\rho = 1$, and Young's modulus $E=1$, and we let the Poisson ratio $\nu$ take different values in $(0,1/2]$. We recall that the Lam\'e coefficients are related to $E$ and $\nu$ by
\begin{equation*}
\label{LAME}
\lambda:=\frac{E\nu}{(1+\nu)(1-2\nu)}
\qquad\text{and}\qquad
\mu:=\frac{E}{2(1+\nu)}.
\end{equation*}
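The conversion above can be written as the following short Python function (the function name is ours); note that $\lambda\to+\infty$ as $\nu\to 1/2$, while $\mu$ stays bounded, which is the nearly incompressible regime probed below.

```python
def lame_coefficients(E, nu):
    """Lame coefficients (lambda, mu) from Young's modulus E
    and Poisson ratio nu in (0, 1/2)."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu
```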
The limit problem corresponding to $\lambda=\infty$ is obtained by taking $\nu = 1/2$. In all our experiments we used uniform meshes with the symmetry pattern shown in Figure~\ref{FIG:MESH}. The refinement parameter $N$ represents the number of elements on each edge.
\begin{figure}[H]
\begin{center}
\begin{minipage}{6cm}
\centerline{\includegraphics[height=6cm, angle=-90]{MU4.eps}}
\centerline{$N=4$}
\end{minipage}
\begin{minipage}{6cm}
\centerline{\includegraphics[height=6cm, angle=-90]{MU6.eps}}
\centerline{$N=6$}
\end{minipage}
\caption{Uniform meshes}
\label{FIG:MESH}
\end{center}
\end{figure}
In the first tests we are concerned with the determination of a reliable stabilization parameter $\texttt{a}_S$. We know that the spectral correctness of the method can only be guaranteed if $\texttt{a}_S$ is sufficiently large (Proposition \ref{infsupDh}) and if the mesh size $h$ is sufficiently small (cf. Remark \ref{rem2}). In a first stage, we fix the refinement level to $N=8$ and report in Tables \ref{TABLA1}, \ref{TABLA2} and \ref{TABLA3} the 10 smallest vibration frequencies computed for different values of $\texttt{a}_S$ and for the polynomial degrees $k=3, 4, 5$, respectively. The boxed numbers are spurious eigenvalues. We observe that they emerge at seemingly random positions when $\texttt{a}_S$ and $k$ vary, and that they disappear completely when $\texttt{a}_S$ is sufficiently large.
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{ c c c c c }
\toprule
$\texttt{a}_S=5$ &$\texttt{a}_S=10$ &$\texttt{a}_S=20$ & $\texttt{a}_S=40$ & $\texttt{a}_S=80$ \\
\midrule
0.6804474&0.6804497&0.6804460&0.6804472& 0.6804472\\
1.6988814&1.6988904&1.6988615&1.6988797&1.6988800\\
1.8222056&1.8222073&1.8221859&1.8222050& 1.8222052 \\
2.9476938&2.9476927&\ffb{2.3856290} &2.9476928& 2.9476933\\
3.0174161&3.0174530&\ffb{2.3862301} &3.0174095& 3.0174114 \\
3.4432120&3.4432156&\ffb{2.5833172} &3.4432158&3.4432168\\
4.1417685&4.1417626& \ffb{2.5839852} &4.1417697&4.1417750 \\
4.6308354&4.6308072&2.9477062&4.6308465&4.6308549\\
4.7616007&4.7615186&3.0174627&4.7616237&4.7616317 \\
4.7879824&4.7879191&3.4432320&4.7880173&4.7880298\\
\bottomrule
\end{tabular}
\end{center}
\caption{Vibration frequencies for $k=3$, $\nu=0.35$ and $N=8$}
\label{TABLA1}
\end{table}
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{ c c c c c c }
\toprule
$\texttt{a}_S=5$ &$\texttt{a}_S=10$ &$\texttt{a}_S=20$ & $\texttt{a}_S=40$ & $\texttt{a}_S=80$ \\
\midrule
0.6805737&0.6805737&0.6805737&0.6805736&0.6805737\\
1.6990333&1.6990333&1.6990332&1.6990329&1.6990330\\
1.8222095&1.8222094&1.8222095&1.8222095&1.8222096\\
2.9476921&\ffb{2.2970057} &2.9476922&2.9476922&2.9476922\\
3.0176437&\ffb{2.3909952} &3.0176400&3.0176421&3.0176428\\
3.4432473&2.9476924&\ffb{3.1845593} &3.4432470&3.4432472\\
4.1417687&3.0176452&\ffb{3.4392819} &4.1417705&4.1417709\\
\ffb{4.5534365} &3.4432480&3.4432839&4.6309421&4.6309433\\
4.6309432&4.1417718&4.1417737&4.7615808&4.7615812\\
\ffb{4.7195356} &4.6309455&4.6309470&4.7882380&4.7882400\\
\bottomrule
\end{tabular}
\end{center}
\caption{Vibration frequencies for $k=4$, $\nu=0.35$ and $N=8$}
\label{TABLA2}
\end{table}
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{ c c c c c c }
\toprule
$\texttt{a}_S=5$ &$\texttt{a}_S=10$ &$\texttt{a}_S=20$ & $\texttt{a}_S=40$& $\texttt{a}_S=80$ \\
\midrule
0.6806522&0.6806522&0.6806522&0.6806522&0.6806522\\
1.6991254&1.6991254&1.6991255&1.6991250&1.6991253\\
1.8222137&1.8222137&1.8222138&1.8222137&1.8222137\\
2.9476935&2.9476935&\ffb{2.4714299} &2.9476935&2.9476935\\
3.0177848&3.0177848&\ffb{2.4822317} &3.0177827&3.0177844\\
3.4432656&3.4432656&2.9476935&3.4432652&3.4432656\\
4.1417853&4.1417852&3.0177862&4.1417845&4.1417852\\
4.6310201&4.6310201&3.4432657&4.6310172&4.6310196\\
4.7615803&4.7615803&4.1417853&4.7615800&4.7615802\\
4.7883889&4.7883889&4.6310208&4.7883835&4.7883878\\
\bottomrule
\end{tabular}
\end{center}
\caption{Vibration frequencies for $k=5$, $\nu=0.35$ and $N=8$}
\label{TABLA3}
\end{table}
Next, we present in Table \ref{TABLA4} approximations of the first 10 vibration frequencies corresponding to $N = 8, 16, 32, 64$, obtained with $\texttt{a}_S =20$ and polynomial degree $k = 3$. We notice that, as the refinement level increases, the lower frequencies are progressively freed from spurious modes. We conclude that our method provides a correct approximation of the spectrum as long as $N$ and $\texttt{a}_S$ are large enough. In the forthcoming tests we take $\texttt{a}_S=1000$. We point out that the previous tests have been carried out with a Poisson ratio $\nu=0.35$, but similar results were obtained for values ranging from 0.35 to 0.5.
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{c c c c }
\toprule
$N=8$ & $N=16$ & $N=32$ & $N=64$ \\
\midrule
0.6804460& 0.6806838 & 0.6807775& 0.6808142 \\
1.6988615& 1.6991595 & 1.6992689& 1.6993109 \\
1.8221859& 1.8222154 & 1.8222207& 1.8222228 \\
\ffb{2.3856290 } & 2.9476935 & 2.9476956& 2.9476963 \\
\ffb{2.3862301} & 3.0178279 & 3.0180082& 3.0180748\\
\ffb{2.5833172} &\ffb{3.2760743} & 3.4432923& 3.4433002 \\
\ffb{2.5839852 } &\ffb{3.2777582} & 4.1418082& 4.1418158\\
2.9477062&3.4432656 & \ffb{4.4519274} & 4.6311877 \\
3.0174627&\ffb{3.5133204} &\ffb{4.4548953} & 4.7615817\\
3.4432320&\ffb{3.5153213} & 4.6311437& 4.7886836\\
\bottomrule
\end{tabular}
\end{center}
\caption{Vibration frequencies for $k=3$, $\texttt{a}_S=20$ , $\nu=0.35$ and different refinement levels}
\label{TABLA4}
\end{table}
The subsequent numerical tests are aimed at determining the convergence rate of the scheme. With the boundary conditions considered in our model problem, it turns out (cf. \cite{MMR} and the references therein) that the regularity exponent $\ws$ defined in Lemma \ref{reg} takes, for different values of the Poisson ratio $\nu$, the values given in Table \ref{TABLA5}.
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{c c}
\toprule
$\nu$ & $\ws$ \\
\midrule
0.35 & 0.6797\\
0.49&0.5999\\
0.5&0.5946\\
\bottomrule
\end{tabular}
\end{center}
\caption{Sobolev regularity exponents}
\label{TABLA5}
\end{table}
We present in Tables \ref{TABLA22.0}, \ref{TABLA22.1} and \ref{TABLA22.2} (corresponding to the polynomial degrees $k=2,3,4$, respectively) the first two vibration frequencies computed on a series of successively refined meshes for the Poisson ratios $\nu=0.35, 0.49, 0.5$. We also report in these tables an estimate of the order of convergence $\alpha$, as well as more accurate approximations of the vibration frequencies obtained by means of the least-squares fitting technique explained in \cite[Section 6]{MMR}. Comparing with the exponents given in Table \ref{TABLA5}, we observe that our method provides a double order of convergence for the vibration frequencies. Namely, in all cases we have $\alpha\simeq 2\ws$, which corresponds to the worst possible order of convergence. The eigenfunctions corresponding to higher natural frequencies are oscillatory but may be more regular (see Remark \ref{regEigenfun}), which justifies the use of high polynomial orders of approximation. Finally, we point out that the method is clearly locking-free.
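The order $\alpha$ reported in the tables can be estimated by a least-squares fit of the model $|\kappa-\kappa_h|\approx C h^{\alpha}$, i.e. a linear fit in log-log variables with $h\sim 1/N$. The sketch below shows only this generic fitting step (it is not the exact procedure of \cite[Section 6]{MMR}, which also extrapolates the reference value $\lambda_{ex}$):

```python
import numpy as np

def fit_order(N_values, errors):
    """Least-squares fit of log(err) = log(C) - alpha * log(N), i.e. h ~ 1/N.

    Returns the estimated convergence order alpha."""
    logh = -np.log(np.asarray(N_values, dtype=float))
    A = np.column_stack([logh, np.ones_like(logh)])
    alpha, _ = np.linalg.lstsq(A, np.log(np.asarray(errors, dtype=float)),
                               rcond=None)[0]
    return alpha
```

For instance, errors behaving exactly like $C h^{1.2}$ on the meshes $N=16,32,48,64$ are recovered with $\alpha=1.2$ up to round-off.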
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{c |c c c c |c| c }
\toprule
$\nu $ & $N=16$ & $N=32$ & $N=48$ & $N=64$ & $\alpha$ &$\lambda_{ex}$ \\
\midrule
\multirow{2}{1cm}{0.35} &0.6806068 & 0.6807467& 0.6807850 & 0.6808020 & 1.34 & 0.6808381 \\
&1.6990672 & 1.6992327 & 1.6992773& 1.6992969 & 1.37 & 1.6993373 \\
\hline
\multirow{2}{1cm}{0.49} &0.6987402& 0.6991833& 0.6993160& 0.6993779& 1.19& 0.6995295 \\
&1.8359946& 1.8366760 & 1.8368781& 1.8369722 & 1.20 & 1.8372009 \\
\hline
\multirow{2}{1cm}{0.5} &0.7007298 & 0.7012091& 0.7013534& 0.7014210 & 1.18 & 0.7015881 \\
&1.8472390 & 1.8479824 & 1.8482043& 1.8483081 & 1.19 & 1.8485623 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Lowest vibration frequencies for $k=2$, $\texttt{a}_S=1000$ and convergence order}
\label{TABLA22.0}
\end{table}
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{c |c c c c |c| c }
\toprule
$\nu $ & $N=16$ & $N=32$ & $N=48$ & $N=64$ & $\alpha$& $\lambda_{ex}$ \\
\midrule
\multirow{2}{1cm}{0.35} &0.6806839& 0.6807775 & 0.6808029& 0.6808142& 1.35 & 0.6808379\\
&1.6991607& 1.6992690 & 1.6992981& 1.6993109& 1.37& 1.6993373 \\
\hline
\multirow{2}{1cm}{0.49} &0.6989872 & 0.6992929 & 0.6993836 & 0.6994258 & 1.20& 0.6995284 \\
&1.8363810 & 1.8368436 & 1.8369810 & 1.8370450 & 1.20& 1.8372002 \\
\hline
\multirow{2}{1cm}{0.5} &0.7009977 & 0.7013286 & 0.7014275 & 0.7014736 & 1.19 & 0.7015868 \\
&1.8476611 & 1.8481669 & 1.8483181 & 1.8483888 & 1.19 & 1.8485618 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Lowest vibration frequencies for $k=3$, $\texttt{a}_S=1000$ and convergence order}
\label{TABLA22.1}
\end{table}
\begin{table}[H]
\footnotesize
\singlespacing
\begin{center}
\begin{tabular}{c |c c c c |c| c }
\toprule
$\nu $ & $N=16$ & $N=32$ & $N=48$ & $N=64$ & $\alpha$& $\lambda_{ex}$ \\
\midrule
\multirow{2}{1cm}{0.35} &0.6807342& 0.6807973& 0.6808144& 0.6808219& 1.36 & 0.6808376 \\
&1.6992195& 1.6992917& 1.6993112 & 1.6993198 & 1.36 & 1.6993377 \\
\hline
\multirow{2}{1cm}{0.49} & 0.6991499& 0.6993638 & 0.6994272 & 0.6994567 & 1.20& 0.6995284 \\
& 1.8366280& 1.8369510 & 1.8370470 & 1.8370917 & 1.20& 1.8372000 \\
\hline
\multirow{2}{1cm}{0.5} &0.7011738& 0.7014060 & 0.7014751 & 0.7015075 & 1.19& 0.7015869 \\
&1.8479310& 1.8482851 & 1.8483911 & 1.8484407 & 1.19& 1.8485618 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Computed lowest vibration frequencies for $k=4$, $\texttt{a}_S=1000$ and convergence order}
\label{TABLA22.2}
\end{table}
\section{Appendix. The limit problem}
\label{APPendix}
As shown in the previous section, the proposed method performs well also in the limit case ($\lambda=+\infty$), namely, for perfectly incompressible elasticity. In this appendix we establish a spectral characterization for this case and prove that the eigenvalues of the nearly incompressible elasticity problem converge to those of the incompressible one as $\lambda\rightarrow \infty$.
In the limit case $\lambda=+\infty$, the definitions of the bilinear forms $A$ and $B$ change, since the term involving $\lambda$ in \eqref{invcCop} vanishes. Therefore, the limit eigenvalue problem reads as follows: find $\kappa\in\mathbb{R}$ and $(\bsig,\br)\in\bcW\times\bcQ$ such that
\begin{equation}\label{limit0}
A_{\infty}((\bsig,\br),(\btau,\bs))=\kappa B_{\infty}((\bsig,\br),(\btau,\bs))\qquad\forall(\btau,\bs)\in\bcW\times\bcQ
\end{equation}
with
\[
B_{\infty}((\bsig,\br),(\btau,\bs)):=\frac{1}{2\mu}\int_{\O}\bsig^{\tD}:\btau^{\tD}+\int_{\O}\br:\btau+\int_{\O}\bs:\bsig
\]
and
\[
A_{\infty}((\bsig,\br),(\btau,\bs)):=\int_{\O}\rho^{-1} \bdiv \bsig \cdot \bdiv \btau +B_{\infty}((\bsig,\br),(\btau,\bs))
\]
for all $(\bsig,\br),(\btau,\bs)\in\bcW\times\bcQ.$
It is easy to check that $A_{\infty}$ is a bounded bilinear form. Moreover, the arguments used in the proofs of Propositions~\ref{normEquiv} and \ref{infsupA-cont} hold true for $\lambda=+\infty$, so that $A_{\infty}$ satisfies the following inf-sup condition:
\begin{equation*}
\sup_{\taus\in \bcW\times \bcQ} \frac{A_{\infty}((\bsig,\br),(\btau,\bs))}{\norm{\taus}} \geq \alpha \norm{(\bsig,\br)}\qquad \forall(\bsig,\br) \in
\bcW\times \bcQ.
\end{equation*}
In consequence, we are in a position to introduce a solution operator for the limit eigenvalue problem. Let $\bT_{\infty}: [\L^2(\O)^{\nxn}]^2 \to \bcW\times \bcQ$ be
defined for any $(\bF, \bg) \in [\L^2(\O)^{\nxn}]^2$ by
\begin{equation*}\label{PLIM}
A_\infty(\bT_{\infty}(\bF,\bg),(\btau,\bs))= B_\infty((\bF,\bg),(\btau,\bs))\qquad\forall(\btau,\bs)\in\bcW\times\bcQ.
\end{equation*}
It is easy to check that $\mu$ is a non-zero eigenvalue of $\bT_{\infty}$ with eigenfunction $(\bsig_{\infty},\br_{\infty})$ if and only if $\kappa = 1/\mu$ is a non-vanishing eigenvalue of problem \eqref{limit0} with the same eigenfunction.
Our first goal is to prove that the operator $\bT$ defined by \eqref{charcT} converges to $\bT_\infty$ as $\lambda$ goes to infinity. To emphasize that $\bT$ actually depends on $\lambda$, in what follows we denote it by $\bT_\lambda$.
Before proving the convergence of $\bT_{\lambda}$ to $\bT_{\infty}$, we will characterize the spectrum of $\bT_{\infty}$.
Let $\bcK$ be defined as in \eqref{K} and
\begin{equation*}
[\bcK\times\bcQ]^{\bot_{B_{\infty}}}
:=\left\{ (\bsig,\br)\in\bcW\times\bcQ:
\ B_{\infty}( (\bsig,\br), (\btau,\bs))=0 \quad \forall (\btau,\bs)\in\bcK\times\bcQ\right\}.
\end{equation*}
We observe that $\bT_{\infty}|_{\bcK\times\bcQ}:\bcK\times\bcQ\rightarrow\bcK\times\bcQ$ reduces to the identity, so that $\mu=1$ is an eigenvalue of $\bT_{\infty}$. Moreover, its associated eigenspace is precisely $\bcK\times\bcQ$.
Let us introduce the following operator which will play a role similar to that of $\bP$ in the limit problem:
\begin{align*}
\bP_{\infty}:\bcW\times\bcQ&\rightarrow\bcW\times\bcQ,\\
(\bsig,\br)&\mapsto \bP_{\infty}(\bsig,\br):=(\widetilde{\bsig}, \widetilde{\br}),
\end{align*}
where $(\wbsig,(\wbu,\wbr))\in\bcW\times[\L^2(\O)^n\times\bcQ]$ is the solution of the following problem:
\begin{align}
\frac{1}{2\mu}&\int_{\O}\wbsig^{\tD}:\btau^{\tD}+\int_{\O}\wbu\cdot\bdiv\btau+\int_{\O}\btau:\wbr=0\qquad\forall\btau\in\bcW,\label{eq1}\\
&\int_{\O}\bv\cdot\bdiv\wbsig+\int_{\O}\wbsig:\bs=\int_{\O}\bv\cdot\bdiv\bsig\qquad\forall(\bv,\bs)\in\L^2(\O)^n\times\bcQ.\label{eq2}
\end{align}
The previous problem is well posed, since the ellipticity of $\int_{\O}\bsig^{\tD}:\btau^{\tD}$ in the corresponding kernel is established in Lemma 2.3 of \cite{MMR-stokes} and the following inf-sup condition holds true (see \cite{BoffiBrezziFortin}):
\[
\displaystyle\sup_{\btau\in\bcW}\frac{\int_{\O}\bv\cdot\bdiv\btau+\int_{\O}\bs:\btau}{\norm{\btau}_{\H(\bdiv,\O)}}\geq\beta(\norm{\bv}_{0,\O}+\norm{\bs}_{0,\O})\qquad\forall(\bv,\bs)\in\L^2(\O)^n\times\bcQ.
\]
We observe that problem \eqref{eq1}--\eqref{eq2} is a dual mixed formulation with weakly imposed symmetry
of the following incompressible elasticity problem with volumetric force density $-\bdiv\bsig$
\begin{align}
-\bdiv\wbsig&=-\bdiv\bsig\hspace{0.85cm}\text{in}\, \O,\label{strong1}\\
\frac{1}{2\mu}\wbsig^{\tD}&=\boldsymbol{\varepsilon}(\wbu)\hspace{1.49cm}\text{in}\, \O,\label{stron2}\\
\wbsig\bn&=\0\hspace{2.10cm}\text{on}\, \Gamma_N,\label{strong3}\\
\wbu&=\0\hspace{2.10cm}\text{on}\, \Gamma_D.\label{strong4}
\end{align}
It is easy to check that $(\wbsig,\wbu)\in\H(\bdiv,\O)\times \H^1(\O)^n$ satisfies \eqref{strong1}--\eqref{strong4} if and only if $(\wbsig,(\wbu,\wbr))\in\bcW\times[\L^2(\O)^n\times\bcQ]$ is the solution of \eqref{eq1}--\eqref{eq2} with $\wbr=\frac{1}{2}[\nabla\wbu-(\nabla\wbu)^{\t}].$
Now, by resorting to the relation between the incompressible elasticity and the Stokes problems, we conclude that there exists $\ws_{\infty}\in (0,1)$, depending only on $\O$ and $\mu$ (see for instance \cite{girault-raviart}), such that, for all $s\in (0,\ws_{\infty})$, the solution $\wbu$ of \eqref{strong1}--\eqref{strong4} belongs to $\H^{1+s}(\O)^n$ and the following estimate holds true
\begin{equation*}\label{regutilde}
\norm{\wbu}_{1+s,\O}\leq C\norm{\bdiv\bsig}_{0,\O},
\end{equation*}
with a constant $C$ independent of $\bsig$.
The following lemma is a consequence of this regularity result.
\begin{lemma}\label{reginfty}
For all $s\in (0,\ws)$ and $(\bsig,\br)\in\bcW\times\bcQ$, if $(\wbsig,(\wbu,\wbr))$ is the solution of \eqref{eq1}--\eqref{eq2}, then $\wbsig\in \H^s(\O)^{\nxn}$, $\wbu\in\H^{1+s}(\O)^{n}$, $\wbr\in\H^s(\O)^{\nxn}$ and
\begin{equation*}
\norm{\wbsig}_{s,\O}+\norm{\wbu}_{1+s,\O}+\norm{\wbr}_{s,\O}\leq C\norm{\bdiv\bsig}_{0,\O},
\end{equation*}
with a constant $C$ independent of $\bsig$. Consequently, $\bP_{\infty}(\bcW\times\bcQ)\subset\H^{s}(\O)^{n\times n}\times\H^{s}(\O)^{\nxn}.$
\end{lemma}
We observe that $\bP_{\infty}$ is idempotent and that $\ker(\bP_{\infty})=\bcK\times\bcQ$. Moreover, since $\bP_{\infty}$ is a projector, the decomposition $\bcW\times\bcQ=(\bcK\times\bcQ)\oplus\bP_{\infty}(\bcW\times\bcQ)$ holds true. On the other hand, $\bP_{\infty}(\bcW\times\bcQ)$ is an invariant subspace of $\bT_{\infty}$ (see Proposition A.1 in \cite{MMR}).
\begin{prop}\label{regP}
For all $s\in (0,\ws)$
\begin{equation}\label{invsubset}
\bT_{\infty}(\bP_{\infty}(\bcW\times\bcQ))\subset\set{(\bsig^{*},\br^{*})\in\H^{s}(\O)^{\nxn}\times\H^{s}(\O)^{\nxn}:\, \bdiv\bsig^{*}\in\H^1(\O)^n},
\end{equation}
and there exists $C>0$ such that for all $(\bF,\bg)\in\bP_{\infty}(\bcW\times\bcQ)$, if $(\bsig^{*},\br^{*})=\bT_{\infty}(\bF,\bg)$, then
\begin{equation}\label{regstar}
\norm{\bsig^{*}}_{s,\O}+\norm{\bdiv\bsig^{*}}_{1,\O}+\norm{\br^{*}}_{s,\O}\leq C\norm{(\bF,\bg)}.
\end{equation}
Moreover, $\bT_{\infty}|_{\bP_{\infty}(\bcW\times\bcQ)}:\bP_{\infty}(\bcW\times\bcQ)\rightarrow \bP_{\infty}(\bcW\times\bcQ)$ is a compact operator.
\end{prop}
\begin{proof}
Let $(\bF,\bg)\in\bP_{\infty}(\bcW\times\bcQ)$ and $(\bsig^{*},\br^{*})=\bT_{\infty}(\bF,\bg).$ Hence, we have
\begin{align*}
&\int_{\O}\rho^{-1}\bdiv\bsig^*\cdot\bdiv\btau+\frac{1}{2\mu}\int_{\O}\bsig^{*\tD}:\btau^{\tD}+\int_{\O}\br^*:\btau=\frac{1}{2\mu}\nonumber\int_{\O}\bF^{\tD}:\btau^{\tD}+\int_{\O}\bg:\btau\quad\forall\btau\in\bcW,\\
&\int_{\O}\bsig^*:\bs=\int_{\O}\bF:\bs\qquad\forall\bs\in\bcQ.
\end{align*}
Then, testing the first equation of the system above with $\btau\in\mathcal{D}(\O)^{\nxn}\subset\bcW$, we have that
\begin{equation*}
-\rho^{-1}\nabla(\bdiv\bsig^{*})+\frac{1}{2\mu}\bsig^{*\tD}+\br^{*}=\frac{1}{2\mu}\bF^{\tD}+\bg.
\end{equation*}
Hence, since $\rho$ and $\mu$ are constants, we conclude that $\bdiv\bsig^{*}\in\H^1(\O)^n.$
Since $\bP_{\infty}(\bcW\times\bcQ)$ is invariant under $\bT_{\infty}$, applying Lemma \ref{reginfty} we directly obtain \eqref{invsubset}; estimate \eqref{regstar} is also a consequence of Lemma \ref{reginfty}. Finally, the compactness of $\bT_{\infty}|_{\bP_{\infty}(\bcW\times\bcQ)}$ follows from the compact embedding
$$\set{(\bsig^{*},\br^{*})\in\H^{s}(\O)^{\nxn}\times\H^{s}(\O)^{\nxn}:\bdiv\bsig^{*}\in\H^1(\O)^n}\hookrightarrow\bcW\times\bcQ,$$
which allows us to conclude the proof.
\end{proof}
Now we are in a position to establish a spectral characterization of $\bT_{\infty}$.
\begin{theorem}
The spectrum of $\bT_{\infty}$ decomposes as follows: $\sp(\bT_{\infty})=\set{0,1}\cup\set{\mu_k}_{k\in\mathbb{N}}$, where:
\begin{itemize}
\item[(i)] $\mu=1$ is an infinite-multiplicity eigenvalue of $\bT_{\infty}$ and its associated eigenspace is $\bcK\times \bcQ.$
\item[(ii)] $\mu=0$ is an eigenvalue of $\bT_{\infty}$ and its associated eigenspace is $\mathcal{Z}\times\bcQ$, where
$$\mathcal{Z}:=\set{\btau\in\bcW:\hspace{0.2cm}\btau^{\tD}=0}=\set{q\bI:\hspace{0.2cm}q\in\H^1(\O)\hspace{0.2cm}\text{and}\hspace{0.2cm}q=0\quad\text{on}\hspace{0.2cm}\Gamma_N}.$$
\item[(iii)] $\set{\mu_k}_{k\in\mathbb{N}}\subset(0,1)$ is a sequence of nondefective finite-multiplicity eigenvalues of $\bT_{\infty}$ which converge to zero and the corresponding eigenspaces lie in $\bP_\infty(\bcW\times\bcQ)$.
\end{itemize}
\end{theorem}
\begin{proof}
It is enough to follow the steps of the proof of Theorem 3.5 in \cite{MMR-stokes}.
\end{proof}
We are now in a position to establish the following convergence result.
\begin{lemma}\label{LIMITE}
There exists a constant $C>0$ such that
$$\|(\bT_{\lambda}-\bT_{\infty})((\bF,\bg))\|\leq \frac{C}{\lambda}\|(\bF,\bg)\|_{0,\O}\qquad\forall(\bF,\bg)\in[\L^2(\O)^{\nxn}]^2.$$
\end{lemma}
\begin{proof}
Let $(\bF,\bg)\in[\L^2(\O)^{\nxn}]^2$ and let $(\bsig_{\lambda},\br_{\lambda}):=\bT_{\lambda}(\bF,\bg)$ and $(\bsig_{\infty},\br_{\infty}):=\bT_{\infty}(\bF,\bg)$. Then, from \eqref{charcT} and the definition of $\cC$ we have
\begin{align*}
\int_{\O}\rho^{-1}\bdiv\bsig_{\lambda}\cdot\bdiv\btau+\frac{1}{2\mu}\int_{\O}\bsig_{\lambda}^{\tD}:\btau^{\tD}+&\frac{1}{n(n\lambda+2\mu)}\int_{\O}\tr(\bsig_{\lambda})\tr(\btau)+\int_{\O}\br_{\lambda}:\btau\\
&=\frac{1}{2\mu}\int_{\O}\bF^{\tD}:\btau^{\tD}+\frac{1}{n(n\lambda+2\mu)}\int_{\O}\tr(\bF)\tr(\btau)+\int_{\O}\bg:\btau,\\
\int_{\O}\bsig_{\lambda}:\bs=\int_{\O}\bF:\bs.&
\end{align*}
On the other hand,
\begin{align*}
&\int_{\O}\rho^{-1}\bdiv\bsig_{\infty}\cdot\bdiv\btau+\frac{1}{2\mu}\int_{\O}\bsig_{\infty}^{\tD}:\btau^{\tD}+\int_{\O}\br_{\infty}:\btau=\frac{1}{2\mu}\nonumber\int_{\O}\bF^{\tD}:\btau^{\tD}+\int_{\O}\bg:\btau\quad\forall\btau\in\bcW,\\
\nonumber&\int_{\O}\bsig_{\infty}:\bs=\int_{\O}\bF:\bs\qquad\forall\bs\in\bcQ.
\end{align*}
Subtracting the above equations we have
\begin{align}
\nonumber\int_{\O}\rho^{-1}&\bdiv(\bsig_{\lambda}-\bsig_{\infty})\cdot\bdiv\btau+\frac{1}{2\mu}\int_{\O}(\bsig_{\lambda}^{\tD}-\bsig_{\infty}^{\tD}):\btau^{\tD}\\
&+\int_{\O}(\br_{\lambda}-\br_{\infty}):\btau
=\frac{1}{n(n\lambda+2\mu)}\int_{\O}\tr(\bF-\bsig_{\lambda})\tr(\btau)\qquad\forall\btau\in\bcW,\label{mix1}\\
\int_{\O}(\bsig_{\lambda}-&\bsig_{\infty}):\bs=0\qquad\forall\bs\in\bcQ.\label{simetrico}
\end{align}
Testing these equations with $\btau:=\bsig_{\lambda}-\bsig_{\infty}$ and $\bs:=\br_{\lambda}-\br_{\infty}$, and noting that the term involving $\br_{\lambda}-\br_{\infty}$ vanishes by \eqref{simetrico}, we have
\begin{align*}
\rho^{-1}\|\bdiv(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}^2+\frac{1}{2\mu}\|\bsig_{\lambda}^{\tD}-\bsig_{\infty}^{\tD}\|_{0,\O}^2 =&\frac{1}{n(n\lambda+2\mu)}\int_{\O}(\tr(\bF)-\tr(\bsig_{\lambda}))\tr(\bsig_{\lambda}-\bsig_{\infty})\\
\leq&\frac{1}{n(n\lambda+2\mu)}\|\tr(\bF)-\tr(\bsig_{\lambda})\|_{0,\O}\,\|\tr(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}\\
\leq&\frac{1}{n\lambda+2\mu}(\|\bF\|_{0,\O}+\|\bsig_{\lambda}\|_{0,\O})\|\bsig_{\lambda}-\bsig_{\infty}\|_{0,\O}\\
\leq&\frac{C}{n\lambda}\|(\bF,\bg)\|_{0,\O}\|\bsig_{\lambda}-\bsig_{\infty}\|_{0,\O},
\end{align*}
where we have used \eqref{bT} to bound $\norm{\bsig_{\lambda}}_{0,\O}$. It follows that
\begin{equation*}
\underbrace{\min\left\{\rho^{-1},\frac{1}{2\mu}\right\}}_{C_{\rho,\mu}}\left(\|\bdiv(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}^2+\|\bsig_{\lambda}^{\tD}-\bsig_{\infty}^{\tD}\|_{0,\O}^2\right) \leq\frac{C}{n\lambda}\|(\bF,\bg)\|_{0,\O}\|\bsig_{\lambda}-\bsig_{\infty}\|_{0,\O}.
\end{equation*}
We observe that $(\bsig_{\lambda}-\bsig_{\infty})\in\bcW$ is symmetric due to equation \eqref{simetrico}. Then, we resort to the following estimate (see \cite{BoffiBrezziFortinBook} for instance)
\begin{equation*}
C\|\bsig_{\lambda}-\bsig_{\infty}\|_{0,\O}^2\leq \|\bsig_{\lambda}^{\tD}-\bsig_{\infty}^{\tD}\|_{0,\O}^2+\|\bdiv(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}^2
\end{equation*}
with $C>0$ to deduce that
\begin{equation*}
C\|\bsig_{\lambda}-\bsig_{\infty}\|_{\H(\bdiv,\O)}\leq( \|\bsig_{\lambda}^{\tD}-\bsig_{\infty}^{\tD}\|_{0,\O}^2+\|\bdiv(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}^2)^{1/2}.
\end{equation*}
Hence
\begin{align}
\nonumber\|\bdiv(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}^2 +\|&\bsig_{\lambda}^{\tD}-\bsig_{\infty}^{\tD}\|_{0,\O}^2 \\
&\leq\frac{C}{n\lambda}\|(\bF,\bg)\|_{0,\O}( \|\bsig_{\lambda}^{\tD}-\bsig_{\infty}^{\tD}\|_{0,\O}^2+\|\bdiv(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}^2)^{1/2}\label{root}
\end{align}
and, finally,
\begin{equation}\label{cotaf}
\|\bsig_{\lambda}-\bsig_{\infty}\|_{\H(\bdiv,\O)}\leq\frac{C}{\lambda}\|(\bF,\bg)\|_{0,\O},
\end{equation}
with $C$ a positive constant depending on $\rho$, $\mu$ and $n$.
On the other hand, taking into account the inf-sup condition \eqref{inSupbeta}, \eqref{mix1}, the Cauchy--Schwarz inequality, \eqref{root} and \eqref{cotaf}, we have
\begin{align}
\nonumber\displaystyle\beta &\norm{\br_{\lambda}-\br_{\infty}}_{0,\O}\\&\leq\sup_{\btau\in \bcW} \frac{\frac{1}{n(n\lambda+2\mu)}\int_{\O}\tr(\bF-\bsig_{\lambda})\tr(\btau)-\int_{\O}\rho^{-1}\bdiv(\bsig_{\lambda}-\bsig_{\infty})\cdot\bdiv\btau-\frac{1}{2\mu}\int_{\O}(\bsig_{\lambda}^{\tD}-\bsig^{\tD}_{\infty}):\btau^{\tD}}{\norm{\btau}_{\HdivO}}\\
\nonumber&\leq\sup_{\btau\in \bcW}\frac{\frac{C}{n\lambda+2\mu}\|(\bF,\bg)\|_{0,\O}\|\btau\|_{0,\O}+\rho^{-1}\|\bdiv(\bsig_{\lambda}-\bsig_{\infty})\|_{0,\O}\|\bdiv\btau\|_{0,\O}+\frac{1}{2\mu}\|\bsig_{\lambda}^{\tD}-\bsig^{\tD}_{\infty}\|_{0,\O}\|\btau^{\tD}\|_{0,\O}}{\norm{\btau}_{\HdivO}}\\
&\leq\frac{C}{\lambda}\|(\bF,\bg)\|_{0,\O}\label{cotafg}.
\end{align}
The proof concludes by combining \eqref{cotaf} and \eqref{cotafg}.
\end{proof}
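The $O(1/\lambda)$ rate of Lemma \ref{LIMITE} can be illustrated on a toy finite-dimensional analogue. In the following Python sketch, the matrices \texttt{T\_inf} and \texttt{P} are hypothetical stand-ins (not the actual solution operators of this paper); the model only mimics the structural fact that the volumetric term of the mixed formulation carries the factor $(n\lambda+2\mu)^{-1}$:

```python
import numpy as np

# Hypothetical finite-dimensional analogue (illustration only, not the
# actual operators): the volumetric term carries the factor
# 1/(n*lam + 2*mu), so we model T_lam = T_inf + P/(n*lam + 2*mu).
rng = np.random.default_rng(0)
n, mu, dim = 3, 1.0, 8
T_inf = rng.standard_normal((dim, dim))
P = rng.standard_normal((dim, dim))

def T(lam):
    """Model operator: limit part plus an O(1/lam) volumetric correction."""
    return T_inf + P / (n * lam + 2 * mu)

for lam in (1e2, 1e3, 1e4):
    err = np.linalg.norm(T(lam) - T_inf, 2)
    # lam * err stays bounded, matching ||T_lam - T_inf|| <= C/lam
    print(f"lam = {lam:8.0e}   error = {err:.3e}   lam*error = {lam * err:.3f}")
```

In this model $\lambda\,\|T_\lambda-T_\infty\|_2$ stabilizes near $\|P\|_2/n$, the finite-dimensional counterpart of the constant $C$ in the lemma.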
We are now ready to establish the following spectral convergence result.
\begin{theorem}
Let $\mu_{\infty}>0$ be an eigenvalue of $\bT_{\infty}$ of multiplicity $m$, and let $D$ be any disc in the complex plane centered at $\mu_{\infty}$ and containing no other element of the spectrum of $\bT_{\infty}$. Then, for $\lambda$ large enough, $D$ contains exactly $m$ eigenvalues of $\bT_{\lambda}$ (repeated according to their respective multiplicities). Consequently, each eigenvalue $\mu_{\infty}>0$ of $\bT_{\infty}$ is the limit of eigenvalues $\mu_{\lambda}$ of $\bT_{\lambda}$ as $\lambda\to\infty$.
\end{theorem}
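The counting mechanism in this theorem can be visualized with a small symmetric model. The Python sketch below is again a hypothetical finite-dimensional stand-in (the matrices $A$, $E$ and the disc radius $1/2$ are chosen for illustration only): for a perturbation of size $O(1/\lambda)$, Weyl's inequality confines each eigenvalue of $A+E/\lambda$ within $\|E\|_2/\lambda$ of an eigenvalue of $A$, so a disc isolating an eigenvalue of the limit matrix eventually contains exactly as many perturbed eigenvalues as its multiplicity:

```python
import numpy as np

# Hypothetical sketch (not the actual operators): A stands in for T_inf,
# with well-separated spectrum 1, 2, ..., 6, and A + E/lam for T_lam.
# Weyl's inequality: |mu_k(A + E/lam) - mu_k(A)| <= ||E||_2 / lam.
rng = np.random.default_rng(1)
dim = 6
A = np.diag(np.arange(1.0, dim + 1))
E = rng.standard_normal((dim, dim))
E = (E + E.T) / 2                                   # symmetric perturbation

mu_inf = np.linalg.eigvalsh(A)                      # 1, 2, ..., 6 (ascending)
for lam in (1e1, 1e3, 1e5):
    mu_lam = np.linalg.eigvalsh(A + E / lam)
    gap = np.max(np.abs(mu_lam - mu_inf))
    print(f"lam = {lam:8.0e}   max eigenvalue displacement = {gap:.3e}")

# Counting: a disc of radius 1/2 around each mu_inf isolates it; once
# ||E||_2/lam < 1/2, each disc contains exactly one perturbed eigenvalue.
mu_lam = np.linalg.eigvalsh(A + E / 1e3)
counts = [int(np.sum(np.abs(mu_lam - m) < 0.5)) for m in mu_inf]
print(counts)
```

Of course, the theorem itself concerns the compact operators $\bT_{\lambda}$ and $\bT_{\infty}$ and rests on the norm convergence of Lemma \ref{LIMITE} rather than on matrix perturbation bounds; the sketch only mirrors the conclusion.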